Four-arm surgical robot
The da Vinci surgical robot
Ken Goldberg, a robotics expert at the University of California, Berkeley, has a da Vinci robot stitch up surgical incisions; the robot can learn on its own through machine-learning software.
Offensive drones and shooting robots
The American TALON SWORDS robot
By our reporter Chang Lijun
With the development of artificial intelligence and machine learning, robots are taking on ever more important tasks: performing surgery, driving cars, fighting on battlefields, and even deciding matters of human life and death. So: can we trust robots?
In the popular imagination, robots are either near-perfect loyal stewards or cunning, psychopathic killers: C-3PO in Star Wars, Ava in Ex Machina, Robby in Forbidden Planet, or HAL in 2001: A Space Odyssey. These depictions merely reflect humanity’s own hopes and fears, yet some of those fears are beginning to materialize.
In recent years, many prominent figures in the technology world have voiced concern about runaway artificial intelligence: superintelligent robots building a new world in which humans are no longer the protagonists but are enslaved, killed, or even exterminated. These horrific scenarios differ little from what science-fiction writers imagined decades ago, yet they have drawn serious attention from Stephen Hawking, Bill Gates, Elon Musk, and other technology luminaries.
Humans and machines are forming a new relationship. In the near future, we will entrust automated robotic systems with tasks such as driving cars, performing surgery, and even choosing when to use lethal weapons in war. This is the first time humans will make life-or-death decisions by programming machines rather than by controlling them directly in complex, changing, and disorderly environments.
Robots are certainly not perfect: they make mistakes and can even cause human casualties. But it is equally certain that this emerging technology offers humanity unprecedented prospects. Along the way we will face technical, regulatory, and even philosophical challenges, and beyond questions of code and policy, the new robots will force us to confront deeper ethical dilemmas and may even change how we see ourselves. In the end, though, the age of robots should leave the world a better place.
Precise and efficient surgical robots
Today, surgeons can use robotic arms to perform complex operations. Michael Stifelman, director of the Center for Robotic Surgery at New York University, has performed thousands of robot-assisted surgeries. He manipulates the robotic arms from a console; each arm enters the patient’s body through a tiny incision about 5 mm wide. When he rotates his wrists and squeezes his fingers, the arms inside the patient execute precisely the same motions.
He guides two arms to tie off a suture and steers a third to pass a needle through the patient’s kidney, sewing up the hole left after a tumor was removed, while a fourth arm holds an endoscope that shows the interior of the patient’s abdominal cavity on a display.
Stifelman is a highly trained specialist with great skill and judgment, yet here he is spending his precious time stitching. Suturing is merely the follow-up to the main operation; if a robot could take over this monotonous mechanical task, the surgeon’s hands would be freed for more important things.
Today’s surgical robots are more capable still: they filter out hand tremor and can carry out a wide variety of maneuvers. But at bottom they remain advanced tools under direct human control. Dennis Fowler, executive vice president of the surgical-robotics company Titan Medical and a surgeon for 32 years, believes that if robots could make some decisions in place of humans and independently carry out assigned tasks, medicine would serve patients better: “This technological intervention increases reliability and reduces human error.”
Such a “promotion” for robots is not out of reach; most of the necessary technology is already being developed in academic and industrial experiments. Experimental robots practice suturing, cleaning wounds, removing tumors, and more on rubber models of human tissue. In some experiments the robots match humans, and some are even more precise and efficient. Just last month, a Washington hospital demonstrated a robotic system suturing pig small-intestine tissue; when a human performed the same procedure for comparison, the robot’s stitches proved more even and fine.
While these systems are not yet ready for use on patients, they represent the future of surgery. The same logic applies in operating rooms as on assembly lines: if greater automation improves performance, nothing will stop it.
Hutan Ashrafian, a bariatric surgeon and lecturer at Imperial College London, has studied the outcomes of robotic surgery. He believes that in the foreseeable future surgical robots will handle simple tasks on a doctor’s orders. “Our goal is to improve postoperative outcomes. If using a robot can save lives and reduce risk, then using the device is a must.”
Going forward, the medical community will eventually adopt the next generation of robots: artificial intelligence with decision-making power. Such a robot could not only handle routine tasks but take over entire operations. That may seem unlikely now, but technological innovation will lead there naturally. As Ashrafian put it: “It will be achieved step by step, even if no single step is particularly big. Just as doctors 50 years ago could not imagine today’s operating room, 50 years from now it will be a different scene again.”
In fact, surgical robots can already make some decisions on their own, and they are more independent than people realize. In vision-correction surgery, robotic systems cut a small flap in the patient’s cornea and reshape its inner layer with a series of laser pulses; in knee-replacement surgery, autonomous robots cut bone more precisely than doctors; in hair-transplant surgery, intelligent robots identify and harvest healthy follicles on the patient’s head and then punch precise small holes in the scalp of the bald spot, sparing doctors a great deal of monotonous, time-consuming, repetitive labor.
Procedures in the thoracic, abdominal, and pelvic regions pose more complex challenges. Every patient’s anatomy is different, so an autonomous robot must reliably identify a variety of wet, soft internal organs and blood vessels. Moreover, the patient’s organs can shift during the operation, so the robot must continuously adjust its surgical plan.
Robots must also handle crises reliably, for example responding promptly and correctly to sudden massive bleeding during a tumor resection. Surgery presents all manner of unpredictable, complex situations. First, the robot’s imaging system and computer vision must recognize the spurting red liquid and gauge the severity of the situation; next, a decision process must select the best response and instruct the system to put it quickly into action; finally, an evaluation process must assess the result and determine whether further action is required. Getting a surgical robot to master every step of this loop of perception, decision, action, and evaluation is a huge challenge for engineers.
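To make the loop concrete, here is a minimal sketch of that perceive-decide-act-evaluate cycle for the bleeding scenario. Everything in it, the thresholds, the crude red-pixel test, and the response names, is a hypothetical illustration, not the behavior of any real surgical system.

```python
# A toy perceive-decide-act-evaluate loop for the bleeding example above.
# All names and thresholds are invented for illustration.

def detect_red_fraction(frame):
    """Perceive: fraction of pixels that look like blood (crude stub)."""
    red_pixels = sum(1 for (r, g, b) in frame if r > 150 and g < 80 and b < 80)
    return red_pixels / max(len(frame), 1)

def choose_response(severity):
    """Decide: map the estimated severity to a response plan."""
    if severity > 0.30:
        return "pause_and_alert_surgeon"    # beyond autonomous handling
    if severity > 0.10:
        return "apply_pressure_and_suction"
    return "continue_procedure"

def execute(plan):
    """Act: dispatch the chosen plan to the robot (stub)."""
    print(f"executing: {plan}")

def evaluate(frame_after):
    """Evaluate: has the bleeding subsided enough to continue?"""
    return detect_red_fraction(frame_after) < 0.10

# One pass of the loop on a toy "image" (a list of RGB tuples).
frame = [(200, 40, 30)] * 20 + [(90, 90, 90)] * 80
severity = detect_red_fraction(frame)         # perceive
plan = choose_response(severity)              # decide
execute(plan)                                 # act
frame_after = [(90, 90, 90)] * 100
print("resolved" if evaluate(frame_after) else "escalate")  # evaluate
```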
Enter the practical “Da Vinci” system
In 2013, the California-based company Intuitive Surgical began donating its da Vinci surgical robotic system, which costs as much as $2.5 million per unit, to robotics researchers at universities around the world. The da Vinci is the only robotic soft-tissue surgery system approved by U.S. regulators, and more than 3,600 hospitals worldwide have installed it. Its commercial road has not been smooth, however, and it has faced lawsuits over surgical mishaps. Despite the controversy, many hospitals and patients have embraced the technology.
The da Vinci remains under the complete control of a human doctor; its arms are nothing but lifeless plastic and metal until the doctor grasps the controllers at the console. For now, the company intends to keep it that way, says Simon DiMaio, its manager of advanced systems research and development. But roboticists are looking to a future in which doctors operate with ever more assistance and computer guidance.
DiMaio pointed out that research in this area resembles the early days of self-driving cars: “The first step is to recognize road signs, obstacles, cars, and pedestrians,” and the next step is for the car to help the driver. Smart cars, for example, sense the positions of surrounding vehicles and alert a driver who drifts out of a lane by mistake. Likewise, a surgical robot could warn the doctor when a surgical instrument strays from its usual path.
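The deviation warning DiMaio describes can be pictured with a toy sketch: compare the instrument tip against a nominal path and alert when it strays beyond a tolerance. The waypoints, tolerance, and coordinates below are invented for illustration.

```python
# Toy instrument-deviation warning: distance from the tip to the nearest
# waypoint on a nominal path, with an alert past a fixed tolerance.
import math

NOMINAL_PATH = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]  # expected tip waypoints (cm)
TOLERANCE_CM = 0.3

def distance_to_path(tip, path):
    """Distance from the tip to the closest waypoint (a crude path model)."""
    return min(math.dist(tip, p) for p in path)

def check_tip(tip):
    d = distance_to_path(tip, NOMINAL_PATH)
    if d > TOLERANCE_CM:
        print(f"WARNING: instrument {d:.2f} cm off the usual path")
    else:
        print(f"ok ({d:.2f} cm)")

check_tip((1.05, 0.45))  # within tolerance
check_tip((1.0, 1.4))    # triggers a warning
```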
Ken Goldberg, director of the Laboratory for Automation Science and Engineering at the University of California, Berkeley, is also training his da Vinci to perform surgical tasks independently. Its suturing is already quite dexterous: one arm pulls the thread through both sides of a model wound while the other pulls the needle to tighten the stitch, then begins the next stitch without human guidance. The system uses position sensing and cameras to compute the best entry and exit point for each pass of the needle and to plan and track the needle’s trajectory. It remains a daunting task: the robot reportedly completes only about 50 percent of four-stitch runs, and it cannot yet tie off the thread.
Now, Goldberg says, his team uses machine-learning algorithms on visual and kinematic data, dividing each stitch into multiple steps, such as positioning the needle and pushing it through, which the da Vinci then executes in sequence. In principle, the same approach could let it learn any surgical task.
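In outline, that subtask decomposition might look like the following sketch, in which each stitch is a fixed sequence of steps and a stand-in classifier decides whether each step succeeded. The step names and the success check are assumptions, not Goldberg’s actual pipeline.

```python
# Skeletal subtask sequencing for one stitch. Step names and the
# success check are illustrative placeholders.

STITCH_STEPS = ["position_needle", "push_needle", "pull_thread", "tighten"]

def step_done(step, state):
    """Stand-in for a learned success classifier on visual/kinematic data."""
    return state.get(step, False)

def run_stitch(robot_state):
    for step in STITCH_STEPS:
        print(f"executing subtask: {step}")
        robot_state[step] = True          # pretend the motion succeeded
        if not step_done(step, robot_state):
            print(f"subtask {step} failed; handing back to the surgeon")
            return False
    return True

# Attempt a four-stitch run, stopping any stitch whose subtask fails.
state = {}
completed = sum(run_stitch(state) for _ in range(4))
print(f"completed {completed} of 4 stitches")
```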
In theory, the same procedures could guide real surgery. Goldberg believes simple surgical tasks could be automated within the next 10 years. But even if robots come to perform routine surgery better than people, he wants their autonomy to remain under the supervision of a human doctor. Robots doing precise, consistent work over long stretches are to surgeons what sewing machines are to hand sewing, he says, and only machine and human together can become a super doctor.
Military robots redefined
In 1920, the Czech writer Karel Čapek published the science-fiction play R.U.R. (Rossum’s Universal Robots) and coined the term “robot.” His robots were synthetic human beings, manufactured as low-cost commodities, who worked long, hard hours in factories. In the end, however, they killed the humans. Science fiction has returned to that theme ever since: robots slipping out of control and turning into unstoppable killing machines.
Now, with advances in artificial intelligence and robotics, and the widespread use of drones and ground robots in the wars in Iraq and Afghanistan, people have begun to worry that science fiction’s fears will become reality.
The world’s most powerful militaries are developing ever-smarter weapons with varying degrees of autonomy and lethality; the vast majority are still remotely controlled by humans. But some observers believe future AI weapons will eventually operate fully autonomously, with onboard chips and software deciding matters of human life and death, a watershed in the history of warfare.
As a result, the prospect of “killer robots” has sparked heated debate. One side believes such robots could start a world war and destroy human civilization; the other sees them as a new class of precision-guided weapons that would reduce rather than increase casualties. Some leading artificial-intelligence researchers have called for a ban on “offensive autonomous weapons beyond meaningful human control.”
Last year, three academic luminaries, Stuart Russell of the University of California, Berkeley, Max Tegmark of the Massachusetts Institute of Technology, and Toby Walsh of the University of New South Wales in Australia, organized a joint petition at an artificial-intelligence (AI) conference. Their open letter warned that such weapons would set off a “global AI arms race” and would be suited to “assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.” The letter has now gathered more than 20,000 signatures, including those of the physicist Stephen Hawking and Tesla CEO Elon Musk. Musk has also donated $10 million to a Boston-based institute whose mission is to “safeguard life” against potentially malicious AI. The letter made news in major media around the world, and at the United Nations disarmament conference in Geneva in December, about 100 countries took part in discussing the issue.
The debate has also spread online, with all sorts of predictions about the future. Some foresee “a low-cost mass black market in lethal micro-robots, which a buyer could set loose with chosen criteria to indiscriminately kill thousands of people who match them.”
The trio also noted: “Autonomous weapons are potentially weapons of mass destruction. While some states may choose not to use them for this purpose, they will be extremely attractive to other states and to terrorists.”
Whether or not building killing machines of ever-greater intelligence and autonomy can serve humanity’s interests, and however fierce the current controversy, a new AI arms race has in fact already begun.
Quietly emerging intelligent weapons and equipment
Autonomous weapons have existed for decades, but in small numbers and mostly in defensive roles. More recently, military suppliers have developed autonomous weapons that can be considered offensive. Israel Aerospace Industries’ Harpy and Harop drones home in on the radar emissions of enemy air-defense systems and crash into them; the company says the drones have been widely sold around the world. The South Korean defense contractor DoDAAM Systems has developed a sentry robot, the Super aEgis II, equipped with a machine gun that uses computer vision to automatically detect and fire on targets up to 3 kilometers away. The South Korean military has reportedly tested these armed robots in the demilitarized zone along the North Korean border, and DoDAAM says it has sold more than 30 of the systems, including to buyers in the Middle East. For now, such autonomous systems are still vastly outnumbered by remotely controlled robotic weapons.
Some analysts believe weapons will become increasingly autonomous in the coming years. “War is going to be very different, automation is going to play a big role, and speed is key,” said Peter Singer, an expert on robotic warfare at New America, a nonpartisan research organization in Washington, D.C. In the future conflicts he envisions, such as dogfights between drones or an encounter between a robotic warship and an enemy submarine, weapons that offer a split-second advantage will decide the outcome. “It may be a high-intensity direct confrontation in which there is no time for humans to intervene, because everything happens in a matter of seconds.”
The U.S. military has detailed plans for this new type of warfare in its roadmap for unmanned systems, though its intentions regarding their weaponization remain vague. At a forum last March, Deputy Defense Secretary Robert Work emphasized investment in AI and robotics, saying that ever more automated systems on the battlefield were inevitable.
Asked about autonomous weapons, Work insisted that the U.S. military “will not delegate lethal authority to a machine.” But he added that if a “competitor” proves willing to hand such power to machines, the United States will have to decide how best to compete.
In developing unmanned combat systems for land, sea, and air, Russia is following the same strategy, relying for now on human operators. Its Platform-M is a small remote-controlled robot armed with a Kalashnikov rifle and grenade launchers, similar to the American SWORDS system, a ground robot that can carry an M16 and other weapons. Russia has also built a larger unmanned combat vehicle, the Uran-9, armed with a 30 mm cannon and anti-tank missiles, and last year it demonstrated a humanoid soldier robot.
The United Nations has been discussing lethal autonomous robots for nearly five years, but its member states have yet to draft an agreement. In 2013, Christof Heyns, a UN special rapporteur on human rights, wrote an influential report noting that the world’s nations had a rare opportunity to discuss the risks of autonomous weapons before such weapons were fully developed.
Lethal autonomous robots will be on the agenda at the five-year review conference of the UN Convention on Certain Conventional Weapons in December, but a ban is unlikely to pass there: that decision would require the unanimous consent of all participating countries, and fundamental differences remain over how to handle the broad spectrum of autonomous weapons likely to emerge in the future.
Ultimately, the “killer robot” debate seems to be more about humans than about robots. Autonomous weapons, like any technology, must be used with care, especially at first, or the results could be chaotic and disastrous. A question like “Are autonomous combat robots a good idea?” may therefore be the wrong one. A better question would be: “Are we sure we can trust robots enough to live with them?”
Self-driving cars that balance logic and ethics
Imagine a night in the future when a drunk pedestrian suddenly stumbles in front of a driverless car and is killed on the spot. Had a person been driving, it would be ruled an accident: the pedestrian was clearly at fault, and even a reasonable driver could hardly have avoided him in time. But by the 2020s, as driverless cars spread and cut the probability of a crash by 90 percent, the legal “reasonable person” standard for driver fault will give way to a “reasonable robot” standard.
The family of the deceased will take the automaker to court, arguing that although the car could not brake in time, it should have swerved around the pedestrian, crossing the double yellow line and striking the empty driverless car in the next lane; a reconstruction of the crash from the car’s sensor data would show exactly that. The plaintiff’s lawyer will ask the lead designer of the car’s software: “Why didn’t the car swerve?”
A court would never ask a human driver why he failed to take some specific emergency action just before a crash; the question would be pointless, since a panicked driver acts on instinct, not deliberation. But when the driver is a robot, asking “why” makes sense.
Human moral standards, imperfectly codified in law, rest on all sorts of assumptions that engineers can scarcely capture, the most important being that a person of good judgment knows when to set aside the letter of the law in order to uphold its spirit. What engineers must do now is teach self-driving cars and other robots the rudiments of that judgment.
Today, laws in parts of the UK, Germany, and Japan, in four U.S. states, and in the District of Columbia explicitly allow the testing of fully autonomous vehicles, provided a test driver is in the car. Google, Nissan, Ford, and others have said they expect truly driverless operation within the next five to ten years.
Automated vehicles gather information about their environment through an array of sensors: video cameras, ultrasonic sensors, radar, and lidar (laser ranging). In California, applicants for an autonomous-vehicle test permit must agree to provide the DMV with all sensor data from the 30 seconds preceding any crash, which engineers can use to reconstruct the scene precisely. From the sensor recordings, the logic behind the car’s decisions can be inferred. After a crash, regulators and lawyers will be able to rely on these records to hold autonomous vehicles to superhuman safety standards and subject them to rigorous scrutiny, and manufacturers and software developers will have to justify a driverless car’s behavior in ways no human driver could today.
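The 30-second requirement amounts to keeping a rolling window of recent sensor frames, conceptually like the sketch below. The frame rate and the fields recorded are illustrative assumptions.

```python
# Rolling buffer of the last 30 seconds of sensor frames, dumped when a
# collision is detected. Frame rate and fields are invented for illustration.
from collections import deque

FRAME_HZ = 10
BUFFER_LEN = 30 * FRAME_HZ              # 30 seconds of frames

buffer = deque(maxlen=BUFFER_LEN)       # old frames fall off automatically

def record_frame(t, speed_mps, steering_deg, obstacles):
    buffer.append({"t": t, "speed": speed_mps,
                   "steering": steering_deg, "obstacles": obstacles})

def dump_on_crash():
    """Hand the retained window to investigators (here: just summarize)."""
    frames = list(buffer)
    print(f"saved {len(frames)} frames, "
          f"t={frames[0]['t']:.1f}s .. t={frames[-1]['t']:.1f}s")

# Simulate 60 s of driving; only the final 30 s survive in the buffer.
for i in range(60 * FRAME_HZ):
    record_frame(i / FRAME_HZ, 12.0, 0.0, [])
dump_on_crash()   # -> 300 frames, t=30.0s .. t=59.9s
```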
All driving involves risk, and how that risk is distributed among drivers, pedestrians, and cyclists, and indeed the very nature of the risk, has an ethical dimension. What matters most, to engineers and the public alike, is the self-driving car’s decision-making system, for it determines the moral character of the car’s behavior.
One strategy for morally ambiguous situations is to minimize harm while obeying the law. The approach is appealing because it lets developers justify a car’s harmful choices as mere law-following and shifts to lawmakers the responsibility of defining ethical behavior. Unfortunately, the law in this area is far from complete.
In most countries the law relies on human common sense, while a self-driving car is programmed to obey the rules literally: never cross a double yellow line, even at the risk of hitting a drunk pedestrian, even when the only thing in the other lane is an empty driverless car. The law makes few explicit exceptions for emergencies, so a car’s developers have no way of knowing when it is acceptable to cross the double yellow line.
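A toy example shows how “minimize harm while obeying the law” plays out when a rule is rigid: candidate maneuvers are scored by expected harm, but any maneuver that breaks a hard-coded rule is excluded, even the one that would harm least. The maneuvers, numbers, and rule below are invented for illustration.

```python
# Rule-constrained maneuver selection for the double-yellow-line scenario.
# Candidate names, harm estimates, and the legality rule are illustrative.

CANDIDATES = [
    # (name, expected_harm, crosses_double_yellow)
    ("brake_straight", 0.9, False),   # likely hits the pedestrian
    ("swerve_left",    0.1, True),    # hits only an empty driverless car
]

def legal(maneuver):
    _, _, crosses = maneuver
    return not crosses          # rigid rule: never cross the double yellow

def choose(candidates):
    allowed = [m for m in candidates if legal(m)]
    return min(allowed, key=lambda m: m[1])   # least expected harm

name, harm, _ = choose(CANDIDATES)
print(f"chosen: {name} (expected harm {harm})")
# Prints "brake_straight": the rigid rule, not the harm estimate, decided.
```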
This is not an unsolvable problem for the ethics of road-vehicle automation; other fields already weigh comparable risks and benefits safely and sensibly. Donor organs, for example, are allocated partly according to the quality of life a transplant is expected to restore, and some schemes have even weighed a recipient’s usefulness to society. The greater challenge for self-driving vehicles is that they must make split-second decisions with imperfect information, in situations their programmers often never anticipated, using ethics rigidly encoded in software.
Fortunately, the public does not unreasonably expect superhuman wisdom. Given the complexities of ethics and morality, a reasonable account of an autonomous vehicle’s behavior is enough: a solution need not be flawless, but it should be considered and defensible.