Robots and Scientific Innovation
Posted in 2014 The Gnovis Blog | Tagged Innovation, Technology & Information Policy
Robots and automated systems have become an important yet controversial and misunderstood phenomenon. Because robots and robolaw are poorly understood, both innovation and consumer acceptance are being hampered. Robots have become more functional and multi-purpose, which has raised new problems of consumer protection and liability. As it stands, robots are multi-faceted and are interacting with humans in ways never seen before (Richards & Smart 2012). As a result, the standards to which robots are held greatly shape how they evolve. I feel as though the public expects innovators and manufacturers to maintain an extremely high standard of safety for their robots and automated systems. Even though safety is an essential element of automated systems, this expectation hinders their natural development and experimentation. Experts are forced to pull automated systems off the market to address various standards concerns instead of creating new, interesting ones. These products can be extremely beneficial and are already transforming surveillance, warfare, and information gathering in unprecedented ways.
There is a great deal of variation among robots and automated systems, so their evolution will not be homogeneous (Parasuraman & Sheridan 2000). The implementation and development of standards can vary depending on a product's intent (Garcia). Automated systems and robots can fall under both control standards and product standards. Safety, legality, and morality across the variations of automation greatly shape how these systems exist within their markets (Garcia). This variance actually benefits their growth: automated systems range from drones to housecleaning robots, and if they all fell under the same standards and regulations, these robotic systems could face greater setbacks in their development.
In the case of robots and automated systems, standards determine their safety, effectiveness, and quality, which play an important role in their functionality and evolution (Garcia). These technologies need standards in order for us to understand their role and their interactions with individuals and their surroundings (Garcia). Brian Arthur, an economist known for his work on the theory of increasing returns, illustrates in his book The Nature of Technology that, in technology's evolution, such standards come about through trial and error; it is a long, iterative process (2009). As safety and liability issues come to the forefront, the diffusion of new technology will take longer. That said, the acceptance of robots and automated systems through established standards has a great deal to do with how robots evolve and improve.
Even though many inventors and manufacturers are disheartened by the ambiguity of liability, these standards could actually benefit the evolution of robots and automated systems. Arthur defines innovation broadly as occurring "whenever some improvement is put into practice or some new idea is tried, no matter how trivial" (Arthur 2009). By this definition, the small improvements that inventors and manufacturers are making toward safeguards and ethical problems are in fact improving the products, just not in the way they originally envisioned. For example, the Defense Department has issued a 10-year moratorium on autonomous weapons to investigate many of these problems; with so much controversy surrounding them, safety improvements could be beneficial both economically and socially (Keller 2013). By investigating these failures and legal problems in the open, against standards that are known to the public, fear and uncertainty toward robots and automated systems could give way to further innovation (Arthur 2009).
As the case of autonomous weapons shows, stigmas and standards mediate our interactions with these technologies. Human rights groups and arms control organizations are working to ban killer robots. Despite this pushback, "for military commanders the appeal of autonomous weapons is almost irresistible and not quite like any previous technological advance" (Keller 2013). This conflict leads to less autonomy within the field, which could ultimately affect its evolution (Garcia), and it makes me question who will eventually win this battle. Will a 10-year moratorium to address safety and ethical problems really alleviate the moral, technical, and visceral objections to these weapons (Keller 2013)? Can innovation over time completely change attitudes toward a technology? Although strong public opinion about safety standards and liability issues is hampering their evolution, I believe addressing these concerns will, over time, lessen the stigma associated with them. People will begin to have more faith in their accuracy and effectiveness, which could eventually lead to these technologies playing a larger and more important role in society.
Neil M. Richards and William D. Smart, How Should the Law Think About Robots?, We Robot (2012), http://robots.law.miami.edu/wp-content/uploads/2012/03/RichardsSmart_HowShouldTheLawThink.pdf
Raja Parasuraman and Thomas B. Sheridan, A Model for Types and Levels of Human Interaction with Automation, IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans 30:3, 286 (2000), https://wiki.engr.illinois.edu/download/attachments/133300226/ParasuramanSheridanWickens2000.pdf?version=1&modificationDate=1282760938000
Bill Keller, "Smart Drones," N.Y. Times (Mar. 16, 2013), http://www.nytimes.com/2013/03/17/opinion/sunday/keller-smart-drones.html?pagewanted=all&_r=0