Although the ten-year-old girl seemed a bit unreliable at the moment, Fang Zheng handed the narrator imouto’s body over to her. After all, it was only for maintenance, and from what he had seen, that world’s technology was quite advanced; simple maintenance work should be no problem.

Fang Zheng returned to his room and began to analyze the narrator imouto’s program.

The reason he intended to do this himself instead of handing it over to Nymph was that, by analyzing the narrator imouto’s program, Fang Zheng wanted to understand how an artificial AI was built and make some adjustments of his own. He also hoped to see just how far that other world had developed its artificial AI technology. Not that he intended to use all of it, but as the saying goes, stones from another mountain can be used to polish jade.

“Hoshino Yumemi…”

Looking at the file name displayed on the screen, Fang Zheng fell into long thought. Parsing the program itself was not difficult: he had copied Nymph’s electronic intrusion ability, and he had been learning the relevant knowledge from her all this while, so the analysis itself took little time.

However, when Fang Zheng disassembled the core of Hoshino Yumemi’s program and broke its functions back down into lines of code, a very particular question suddenly occurred to him.

Where exactly did the danger of artificial AI lie? For that matter, was artificial intelligence really dangerous at all?

Take this narrator imouto as an example. Fang Zheng could easily locate the underlying instruction code for the Three Laws of Robotics in her program, and the relationships among those lines of code proved to him that the one who had spoken with him earlier was not a living being but a robot. Her every move, every frown and smile, was controlled by the program: she analyzed the scene before her and executed the highest-priority action available to her.

To put it bluntly, this imouto was essentially no different from the robots working on an assembly line or the NPCs in a game. You chose an action, and it responded to that action. It was just like the many games in which players accumulate goodness or malice values through their actions, and the NPCs react based on that accumulated data.

For example, the game can be set up so that when the goodness value reaches a certain level, NPCs make more demanding requests of the player, or allow the player to pass through certain areas more easily. Conversely, when the malice value reaches a certain level, NPCs may become more likely to yield to the player’s demands, or bar the player from entering certain areas.

But none of this has anything to do with whether the NPCs like the player, because the data is simply set that way; they possess no such judgment. In other words, if Fang Zheng changed the thresholds of those values, people would see NPCs greet the wicked players with a smile while ignoring the kind and honest ones. That, too, would say nothing about the NPCs’ moral values, because it is all in the data settings.

So, returning to the earlier question: Fang Zheng admitted that his first meeting with Hoshino Yumemi had been quite dramatic, and that the narrator robot imouto was very interesting.

Then let’s make an analogy. Suppose that when this narrator imouto presented Fang Zheng with a bouquet assembled from a heap of non-burnable garbage, Fang Zheng had suddenly flown into a rage, smashed the garbage bouquet to pieces, and then cut the robot imouto in front of him clean in half. How would the robot imouto react?

She would neither cry nor get angry. According to her program, she would only apologize to Fang Zheng, concluding that some wrong behavior of hers had left a guest dissatisfied; she might even ask Fang Zheng to find a staff member to repair her.

Seen through the eyes of onlookers, the narrator imouto would of course look pitiful, and Fang Zheng would look like a nasty bully.

So, how did this difference come about?

In essence, this narrator robot is really no different from an automatic door, an escalator, or any other tool: it does its job according to the program it was given. If an automatic door malfunctions, refusing to open when it should or snapping shut with a “pa” just as you walk through, you don’t think the door is stupid; you just want it to open quickly, and if it won’t, you might well smash the broken thing and walk away.

If other people saw that scene, they might think the man a bit rude, but they would feel no disgust at what he had done, much less consider him a bully.

The reason is a single thing: interaction and communication.

And this is also the greatest weakness of living beings: emotional projection.

They project their feelings onto some object and expect it to respond. Why do humans like to keep pets? Because pets respond to everything they do. Call a dog, and it comes over and wags its tail at you. A cat may just lie there, too lazy to pay you any mind, yet when you stroke it, it still flicks its tail, and the cuter ones may even lick your hand.

But call out to a table or stroke a nail, and no matter how full of love you are, they cannot possibly respond. Because they give no feedback to your emotional projection, you naturally won’t take them to heart.

Similarly, if you own a TV and one day want to replace it with a new one, you won’t hesitate in the slightest. Price and space may enter into your considerations, but the TV itself will not be among them.

But on the other hand, suppose a humanlike AI were added to that TV. Every day when you got home, the TV would speak up to welcome you back and tell you what programs were on today. It would echo your complaints while you watched a show. And when you decided to buy a new TV, it would protest: “Why? Haven’t I been doing a good job? Don’t you want me anymore?”

Then, when the time came to buy the new TV and swap it in, you would naturally hesitate. Your emotional projection has been rewarded here, and this TV’s artificial AI holds the memories of all its time with you. If there were no memory card that could carry it over to another TV, would you hesitate, or even give up on replacing it?

Of course you would.

But be rational, brother. It’s just a TV. Everything it does is programmed, all of it tuned by vendors and engineers precisely for user stickiness. They do this to ensure you keep buying their products, and that pleading voice exists only to keep you from switching to another brand’s goods. When you say you want to buy a new TV, what this artificial AI thinks is not “He is going to abandon me and I am sad,” but “Master wants to buy a new TV, and the new TV is not our brand. According to this logic, in response I need to launch the ‘pleading’ program so that Master maintains his stickiness and loyalty to our brand.”

The truth is indeed the truth, and the facts are indeed the facts. But would you accept them?

No.

Because living beings are emotional, and the inseparability of sensibility from reason is a constant hallmark of intelligent life.

Because of this, humans will always do any number of irrational things.

So when they find an AI pitiful, it is not because the AI truly is pitiful, but because they “feel” that it is.

And that is enough. As for the truth, nobody cares.

This is why there is always conflict between humans and AI. The AI itself is not wrong at all: everything it does lies within the scope of its own program and logic, and all of that was created and delineated for it by human beings. It is simply that, somewhere along the way, humans’ emotional projection shifted, and that gradually changed their minds.

They expect the AI to respond more fully to their emotional projections, so they widen the AI’s processing range to allow it more emotions, more reactions, and self-awareness. They believe the AI has learned emotion (in fact it has not), and therefore they can no longer treat it as a machine, and therefore they grant it the right to self-awareness.

Yet when AIs gain self-awareness, begin to awaken, and act according to that very setting, humans begin to fear.

Because they discover they have made something beyond their control.

But the problem is that this “out of control” is itself an instruction of their own setting.

They think the AI betrayed them, when in fact, from beginning to end, the AI only ever acted on the instructions they set. There was no betrayal; on the contrary, they were merely confounded by their own feelings.

This is a deadlock.

If Fang Zheng set out to create an AI himself, he might well fall into this trap without being able to help himself. Suppose he created the AI of a little girl: he would surely refine her functions bit by bit, as though raising his own child, and eventually, out of “emotional projection,” grant her a measure of “freedom.”

And then the AI, running on logic unlike a human’s, might react in ways completely beyond Fang Zheng’s expectations.

At that moment, Fang Zheng’s only thought would be… that he had been betrayed.

When in fact, all of it would have been brought about by him.

“…………………Maybe I should consider another way.”

Looking at the code in front of him, Fang Zheng was silent for a long time, then sighed.

He had once thought this would be a very simple matter, but now Fang Zheng was not so sure.

But before that…

Looking at the code before him, Fang Zheng stretched out his hand and rested it on the keyboard.

For now, do what needed to be done.

