Magic Tower Starry Sky

Chapter 1684: The Road Not Chosen

Lin and Fen cooperated well, and with a clear goal and a targeted approach, the magic research progressed quickly. After all, displaying colors is not a particularly difficult ability; it certainly couldn't stump the two of them.

It's just that what Lin wanted was not to display colors like a slideshow for the strange thing in the tower's consciousness to absorb one-sidedly. Rather, the display had to form an interaction like a 'dialogue', where his own changes could adapt according to the other party's responses.

This is different from a machine designed specifically to solve Rubik's Cubes. No matter how many layers or sides the cube has, the problem itself is fixed from the start. Once the program has worked out a solution, the machine completes the cube by following the steps.

Simply put, a Rubik's Cube-solving machine works in two very clear steps: first, scan the cube's state and compute a solution; second, run the robotic arm and execute the moves to complete the cube.
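The two-phase structure described above can be sketched in miniature. The "cube" below is a toy stand-in (a tuple with three made-up moves, not a real Rubik's Cube model), but the shape of the program is the same: a planning phase that searches for a move sequence, then an execution phase that blindly applies it.

```python
from collections import deque

# Toy stand-in for a cube: a tuple of numbers; "moves" permute it.
# A real solver works the same two-phase way, just over a vastly
# larger state space. The move set here is invented for illustration.

MOVES = {
    "rot_left":  lambda s: s[1:] + s[:1],
    "rot_right": lambda s: s[-1:] + s[:-1],
    "swap01":    lambda s: (s[1], s[0]) + s[2:],
}

def plan(start, goal):
    """Phase 1: scan the state and search for a move sequence (BFS)."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for name, move in MOVES.items():
            nxt = move(state)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None

def execute(start, path):
    """Phase 2: the 'robot arm' step, applying the planned moves."""
    state = start
    for name in path:
        state = MOVES[name](state)
    return state

scrambled, solved = (3, 1, 2), (1, 2, 3)
path = plan(scrambled, solved)
print(path, execute(scrambled, path) == solved)  # → ['rot_left'] True
```

The key property the text goes on to note is visible here: `execute` can run open-loop, because every move maps one state to exactly one next state.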

A Rubik's Cube can be solved this easily because each rotation corresponds to exactly one result. A cube in the same configuration, with the same part rotated, will never produce different outcomes.

But the strange thing in the tower's consciousness was not so accommodating. Even after a few simple tests, Lin couldn't be sure whether it followed any pattern at all. The only certainty was that 'color' did indeed attract it.

This is where machine learning methods come in. After each color change, they not only had to record the strange thing's subsequent reaction, but also re-evaluate how the colors should be changed the next time.
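The record-and-adjust loop described here is essentially online learning against an unknown responder. A minimal sketch: show a color, record whether it drew a reaction, and bias later choices toward what worked. The "entity" below is a stand-in with an assumed hidden preference; in the story, that hidden logic is exactly the unknown.

```python
import random

random.seed(0)
COLORS = ["red", "green", "blue"]
HIDDEN_PREFERENCE = {"red": 0.2, "green": 0.8, "blue": 0.4}  # assumed, for the demo

def entity_responds(color):
    """Stand-in for the strange thing: reacts with a hidden probability."""
    return random.random() < HIDDEN_PREFERENCE[color]

def run(trials=500, epsilon=0.1):
    counts = {c: 0 for c in COLORS}  # how often each color was shown
    hits = {c: 0 for c in COLORS}    # how often it drew a reaction
    for _ in range(trials):
        if random.random() < epsilon:  # occasionally try something new
            color = random.choice(COLORS)
        else:                          # otherwise exploit the best estimate so far
            color = max(COLORS, key=lambda c: hits[c] / counts[c] if counts[c] else 1.0)
        counts[color] += 1
        hits[color] += entity_responds(color)
    # Return the learned estimate of each color's reaction rate.
    return {c: hits[c] / counts[c] if counts[c] else 0.0 for c in COLORS}

estimates = run()
print(max(estimates, key=estimates.get))
```

The point of the sketch matches the text: nothing except the goal is known in advance, and every observation feeds back into the next choice.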

It could be said that apart from the final pattern he wanted to complete, everything else would be learned on the fly. The work might sound difficult, but in fact it was not.

Both of them had experience with this kind of deep learning method, so that part was not hard. The real difficulty lay in the operating environment.

There was no external power inside the tower's consciousness; all consumption had to be borne by the magician who entered. This meant the time spent solving the problem couldn't drag on too long, the consumption couldn't be too great, and computing power couldn't be fully tied up...

Well, the last point was no problem for a certain someone. In fact, with the subdimensional tower's ability to supply its own needs, Lin wasn't afraid of the consumption inside the tower's consciousness. But with that strange thing watching from the side, he wouldn't dare drain himself to the point of losing the ability to protect himself.

Another concern was: how many chances for trial and error would there be?

In the name of testing, he had already tried the strange thing's reaction to color several times. But those were all simple triggers, using a few color blocks to confirm a 'yes' or 'no'; they were not enough to observe the logic behind its reactions.

Now, if they really fed the solving method into a machine learning program, it was foreseeable that once they started, they could not stop, because an interruption would mean all the previous effort was wasted.

This could be expected because the strange thing was changing all the time. If the problem-solving program were interrupted, would it pause its own changes and wait? Or would it keep changing, rendering all the previous effort futile?

What was also uncertain: once the induced changes began, could the process be restarted after an interruption? And if so, was there a limit on the number of attempts?

Such concerns existed because the target was not simply a dead object. After all, even a Rubik's Cube has a rotation limit imposed by its material structure; it cannot be played with indefinitely.

And if the target also had 'emotions', then anyone provoked over and over again would eventually explode. When that moment came, would the result be acceptable? After such an outburst, would the target still be willing to cooperate?

Comparing the situation at hand, the example a certain someone could think of was plants like the mimosa and the flytrap.

Although they have a trigger mechanism that produces a 'closing' movement, they are plants after all, not as flexible as animals. Tease them deliberately, and they may go on strike after just two or three rounds.

The reason for the strike might be that the trigger mechanism has not yet reset, or that it has been triggered so frequently that permanent damage makes the action impossible to repeat.

Perhaps this is merely a structural limit, but why not understand it as the plant's 'personality'? And if even plants have such individual quirks, what guaranteed that the monster in the tower's consciousness would be well-behaved?

Therefore, Lin could only do his best to settle the strange thing in one go, instead of challenging its patience again and again.

During this whole process, if there had been an artificial intelligence already trained and familiar with programming languages, the job would have been much easier. But in all this time, a certain someone had never considered building an AI butler along the lines of Jarvis.

In terms of hardware, the Magic Tower's characteristics made it no worse than the supercomputers back home. One could even say there was no upper limit on performance, only performance wasted by junk code.

Although devices based on purple-grade magic stones performed somewhat worse, they were still far better than server-grade computers back home.

But even with such hardware advantages, he had never considered developing a full artificial intelligence. At most there were weak AIs designed for specific topics to perform limited work.

Take Jarvis from the Iron Man movies: he is at best a very powerful weak AI. He can do a great many things, but only within boundaries that have already been drawn.

Types like Ultron and Vision, who can make their own judgments and reach their own conclusions, barely border on strong AI. Skynet in The Terminator goes a step further.

Strong AI in the true sense is like the Matrix in The Matrix, which folds every action of both sides into its own plan, a type that human intelligence cannot defeat.

In other words, only by reaching a level that even the human brain cannot comprehend can it be considered strong AI that truly transcends its class.

It is like the research of biologists back home: chimpanzees are the creatures closest to humans, yet they have only the intelligence of a young child. As for knowledge in advanced mathematics, even laid out in detail in front of a chimpanzee, it could never be understood.

That kind of gap can be regarded as a class-crossing divide in intelligence. And before a certain time traveler crossed over, strong AI was a subject that existed only in theory and had never truly been realized.

In many science fiction works, the outlook on strong AI is pessimistic. In short, a strong AI will not be content to remain a servant; it will become a rational ruler.

At best, it leads a group toward development, like the Supreme Intelligence of the Kree Empire in the Marvel series.

At worst, it enslaves a group for a purpose that even humans cannot understand, like the Matrix in The Matrix.

The most ruthless is Skynet in The Terminator, whose goal is to destroy mankind, whether to eliminate, for the good of the planet, the parasites endlessly consuming the earth's resources, or simply out of dissatisfaction with human behavior.

In fact, anyone who works with artificial intelligence knows that an AI's performance depends on what happens during the training phase. If it goes down a wrong path, there is no remedy other than scrapping it and training again.

The most famous example is Microsoft's conversational AI, Tay. Open to the public for only two days, it was turned into a foul-mouthed racist by swarms of malicious netizens, forcing Microsoft into an emergency shutdown; the company never dared open it to the public again.

In fact, everyone familiar with the field knows what happens to an AI that has been ruined like that: there is no way to save it other than rolling back to an old save. And unlike a game save, this is not just overwriting a file; the operation is far more complicated.
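The "roll back to an old save" remedy is, in ML practice, checkpoint restoration. A minimal sketch, where `params` is a trivial stand-in for model weights and the "poisoned" batch represents training gone wrong (real frameworks have their own checkpoint APIs; none are assumed here):

```python
import copy

def train_step(params, data):
    """Stand-in training update: folds the batch mean into the params."""
    new = dict(params)
    new["bias"] = new.get("bias", 0.0) + sum(data) / len(data)
    return new

params = {"bias": 0.0}
checkpoints = []

for batch in ([1.0, 2.0], [2.0, 4.0]):
    checkpoints.append(copy.deepcopy(params))  # save before each update
    params = train_step(params, batch)

# A "poisoned" batch ruins the model...
params = train_step(params, [-1000.0, -1000.0])

# ...and the only fix is rolling back to the last good checkpoint,
# losing all the work done since that save.
params = copy.deepcopy(checkpoints[-1])
print(params["bias"])  # → 1.5
```

The rollback itself is easy; the cost, as the text notes, is everything learned between the checkpoint and the disaster.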

It meant the highly paid AI team had spent that stretch of time in vain. To borrow the image of a eunuch's self-castration: even if not everything was cut off, a large part was.

However good someone's ability to raise children, there is no guarantee they will turn out well, let alone something like artificial intelligence. Lin Ke never considered himself a technical expert; he merely relied on knowledge others didn't have to bully people.

Once others became familiar with that knowledge, the advantage would be gone. Just like Fen, who, armed with a considerable stock of knowledge about Earth, had a certain time traveler completely at her mercy.

This was also why Lin had kept a certain distance from the children of the elementary school, ever since being stabbed in the back by the students of the academy back in the Southwest Peninsula days.

That was how outsiders could be treated, but it was also like a certain someone's anxiety about having children: a biological child cannot be abandoned on a whim, nor kept far away just because one wants distance.

If one must be responsible to the very end after giving birth, it would be better not to give birth in the first place. By the same token, if building an artificial intelligence would bring endless trouble, it would be better not to build it at all.

Especially when a certain someone lacked confidence in himself. Even a technological genius like Tony Stark created an out-of-control Ultron; Lin Ke didn't think he would fare any better doing it himself. Better not to trick yourself to death before you manage to trick others.

So in all this time, a certain someone never wanted to push artificial intelligence very far. At most, when necessary, some targeted weak AI could be built to provide support.

Besides, this was a magical world, a world with souls, elemental spirits, and many things science cannot explain.

When a magician needs a trustworthy servant who can handle simple matters, there are many ways besides taking on an apprentice to meet that need.

For example, placing a soul into a demon doll. The approach is evil, but combined with a device that tortures the soul, it ensures the golem will not betray, possesses human-level intelligence, and knows how to do things to its master's satisfaction.

Summoning elemental spirits or demons can meet the same requirement. Magicians long familiar with these methods naturally have mature procedures to avoid most of the pitfalls.

With such convenient techniques at hand, there was no need for a certain someone to delve into artificial intelligence. What he had always done was let magic and science complement each other: whichever was more convenient for the task, use that one.

As for the obsession that Earth's technology could crush everything: once a nuclear bomb comes out, whoever wants to compete with it had better be able to knead a nuclear bomb with his bare hands. If you can't grow to the level of hand-making a nuke and still insist on technological supremacy, then whether you survive to the late game is an open question.

In short, when it comes to survival, practicality matters more than any conviction. That is how small fry like him survive, not by being superstitious about technology.
