Chapter 3800: The 'Red Line' That Robots Cannot Cross
Speaking of this, Wu Hao glanced at Lingxi, who was waiting on them, smiled slightly, and then spoke to the reporter.
"To this end, we have specially designed a very complete set of basic laws, or its behavioral guidelines, for the robot.
This set of behavioral guidelines is very complex and must take into account all aspects. For example, robots must not harm humans, so what is harmful to humans and what is not harmful to humans?
There needs to be a clear limit for this, but this limit seems very simple, but it is actually very difficult. Because it involves a very wide range of scope and situations, it is very easy for loopholes to appear."
"Especially this kind of service robots that are in close contact with humans need to be more cautious, because once a vulnerability occurs, it may cause harm to the user and may even cause harm to society."
At this point, Wu Hao put away his smile and said seriously, his tone weighted: "To this end, we invited an expert team of specialists in law, ethics, sociology and other fields to formulate a detailed code of conduct.
In other words, you can regard it as the constitution of the robot field, a line that no intelligent robot may cross."
"A law for robots... that sounds very difficult." Jiang Nan responded with an interested expression.
Wu Hao nodded and said: "Yes, it is not easy.
Because there is no precedent to follow, it requires continuous research and exploration, which is far more difficult than drafting ordinary laws.
It not only demands in-depth cooperation across disciplines and fields, but also requires us to foresee and solve unknown challenges that may arise in the future.
First of all, complexity at the technical level is a major problem. With the rapid development of artificial intelligence, the boundaries of robot capabilities keep expanding. Ensuring that these technologies are neither abused nor misused, while still complying with the legal framework, requires extremely strong technical foresight and control.
For example, with autonomously learning robots, how to define the legality and morality of their learning content, and how to prevent them from absorbing harmful information from the Internet or forming distorted values, is a huge technical challenge.
Secondly, the blurred boundary between law and ethics is another major obstacle.
The definition of 'harming human beings' may be interpreted completely differently in different cultures and situations. How to set a universally applicable standard, one that respects cultural diversity and protects humans from harm by robots without overly restricting robots' functions and development, is an ethical dilemma that must be faced in the drafting process.
In addition, whether robots should enjoy certain forms of rights and responsibilities is also a focus of debate, which is directly related to the legal status of robots.
Furthermore, the gap between social acceptance and public understanding is a factor that cannot be ignored. The public holds very different expectations of, and fears about, robots. How to educate the public to understand and accept the existence and role of robots while ensuring public safety, and how to reduce unnecessary panic and misunderstanding, are social issues we must consider when formulating this set of 'robot laws'.
Finally, international coordination and cooperation are a major challenge. In today's globalized world, the research, development and application of robots are no longer the affair of any single country.
How to promote consensus among countries on legal and ethical standards for robots, establish an international regulatory cooperation mechanism, and prevent the legal conflicts and technical barriers caused by differing standards is the key to formulating a globally unified robot 'constitution'.
Therefore, formulating this constitution in the field of robotics requires not only the wisdom of experts in law, ethics, computer science, sociology and other fields, but also extensive participation and continuous dialogue from governments, enterprises, scientific research institutions and the public.
It is a process of dynamic adjustment and continuous improvement, aiming to build a future robot ecosystem that not only promotes technological innovation, but also ensures human safety and well-being."
At this point, Wu Hao glanced at Jiang Nan, then raised his hand and continued: "Of course, a law alone, a mere code of conduct for robots, is not enough; we have to enforce it.
For this reason, we have developed a self-monitoring and correction system called the 'Red Line'."
"'Red Line'... that's an interesting name." Jiang Nan commented with a smile.
Wu Hao nodded and said: "Yes, it is the red line. In fact, the meaning is very simple, that is, the robot's behavior cannot cross this red line. This is the basis and must not be violated. Just like the Constitution, all legal provisions are
It cannot be exceeded or violated.
This system can monitor the robot's behavioral decision-making in real time. Once it discovers behavioral tendencies that violate human ethics or may cause harm, it will immediately intervene and correct it. It is like the inner guard of the robot, ensuring that every action is consistent with the ethics we all recognize.
standard.
Unless the robot can cross this red line, all the robot's behavioral decisions will be constrained within this red line, and it will not make any behavior that violates the robot's code of conduct."
Hearing this, Jiang Nan couldn't help but ask: "What if the robot crosses this red line anyway, or receives help, with other robots or artificial intelligence systems helping it across?
Will the robot become uncontrollable and turn into a killer that destroys humanity and the world, as in movies and TV shows?"
Wu Hao turned to look at Lingxi, a hint of tenderness and expectation in his eyes, as if asking an old friend: "Lingxi, what do you think? If such a situation really happened, what would you do?"
Lingxi's metal shell shimmered slightly under the light, and its voice was soft but firm: "Mr. Wu Hao, my program design contains a strict self-restraint mechanism.
Even in extreme situations, I will try my best to follow the moral and behavioral rules that have been set.
If I detect that my actions may violate these principles, I will stop immediately and seek guidance.
As for other robots or systems attempting to help me cross the 'red line', I will regard that as wrongful behavior and try to block it or report it to the relevant administrators."
Wu Hao nodded with satisfaction and turned his eyes to Jiang Nan again, his eyes shining with confidence: "Look, this is our Lingxi, it already has preliminary self-awareness and the ability to judge right from wrong.
But your concerns are legitimate, and this is one of the issues we have been working hard to address."
"But this is just one side of the story. Is it really credible?" Jiang Nan immediately questioned.
Wu Hao laughed, then nodded understandingly, respecting Jiang Nan's doubts: "Your doubts are normal. After all, we are facing a brand-new field, and it takes time to build trust. But please allow me to explain further."
He leaned forward slightly and folded his hands, appearing both professional and sincere: "First of all, Lingxi's answer is not a simple programmed response. Its self-monitoring and decision-making capabilities are built on the complex set of robot behavioral guidelines and the 'Red Line' system we mentioned earlier.
These rules are not just hard-coded; they are continuously optimized and refined through machine learning and artificial intelligence algorithms, which enables the robot to make reasonable judgments when it encounters new situations."