Elon Musk And Over 2,400 AI Scientists Sign Pledge Against Killer Robots
More than 2,400 AI scientists and researchers have signed a pledge which intends to deter military firms and nations from building lethal autonomous weapon systems (LAWS).
Are we, as individuals working together on shared concerns for everyone on the planet, starting to see the power we have to effect positive change?

It is a familiar story in the history of science and technology: the most brilliant and innovative minds of their time discover, create, and invent technologies that could bring enormous benefits to humanity as a whole. Inevitably, the largest and wealthiest ‘consumer’ of such technologies is the military-industrial complex, and too often these technologies end up deployed as tools of control, warfare, and human suffering. In earlier times, scientists and inventors had little say in how their work was used, and could often be persuaded that its military application was actually for the benefit of humankind.

Today, those naive days are gone, and the landscape is different. Some of the most prominent minds creating advanced technologies are speaking out more and more about how their work is being used in the world. Elon Musk of SpaceX and Demis Hassabis of Google DeepMind are among more than 2,400 signatories to a pledge that intends to deter military firms and nations from building lethal autonomous weapon systems, also known as LAWS.
The signatories are scientists who specialize in artificial intelligence (AI), and they have declared that they will not participate in the development or manufacture of robots that can identify and attack people without human oversight.
The pledge was created by the Future of Life Institute:

LETHAL AUTONOMOUS WEAPONS PLEDGE

Artificial intelligence (AI) is poised to play an increasing role in military systems.
There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI. In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine.
There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable.
There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual. Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems.

Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security.

We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons.
These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.
The pledge hopes to amount to more than just words. By calling on countries to pass laws, on technology companies to refuse contracts, and on individuals to voice their opposition to lethal autonomous weapons, the signatories hope to sway public opinion overwhelmingly against LAWS, and in doing so shame any person or group who would go ahead with their development.
There is some precedent for this approach working, according to Yoshua Bengio, an AI pioneer at the Montreal Institute for Learning Algorithms: “This approach actually worked for land mines, thanks to international treaties and public shaming, even though major countries like the US did not sign the treaty banning landmines. American companies have stopped building landmines.”
The timing of this pledge is crucial.
The military is one of the largest funders and adopters of AI technology. With advanced computer systems, robots can fly missions over hostile terrain, navigate on the ground, and patrol under the seas, and more sophisticated weapon systems are in the pipeline.

Toby Walsh, a professor of AI at the University of New South Wales in Sydney who signed the pledge, had this to say about it: “We need to make it the international norm that autonomous weapons are not acceptable. A human must always be in the loop. We cannot stop a determined person from building autonomous weapons, just as we cannot stop a determined person from building a chemical weapon. But if we don’t want rogue states or terrorists to have easy access to autonomous weapons, we must ensure they are not sold openly by arms companies.”

This pledge is but one example of how people are involving themselves in the future of the planet. No longer are we waiting on the sidelines, leaving decisions up to corporations, the military, or our political leaders. When we identify ourselves not as a race, culture, or nation but as a planet, where all of humankind is considered part of the family, we gain greater access to the power of our collective consciousness. Once we harness that power, no initiative that benefits humanity is beyond our abilities.