Artificial intelligence has the potential to disrupt so many dimensions of our society that the White House Office of Science & Technology Policy recently announced a series of four public workshops to examine some of its possible impacts. The first of these workshops happened at the University of Washington on Tuesday, May 24th, and I was there to cover how some of these discussions may impact the virtual reality community.

The first AI public workshop was focused on law and policy, and I had a chance to talk to three different people about their perspectives on AI. I interviewed White House Deputy U.S. Chief Technology Officer Edward Felten about how these workshops came about and about the government's plan for addressing the issue.

[gallery type="square" ids="48147,48148,48146"]

I also talked with workshop attendee Sheila Dean, a privacy advocate, about the implications of AI algorithms making judgments about identified individuals, as well as with Ned Finkle, Vice President of External Affairs at NVIDIA, about the role of high-end GPUs in the AI revolution.

LISTEN TO THE VOICES OF VR PODCAST

[audio mp3="http://voicesofvr.com/wp-content/uploads/2016/05/Voices-of-VR-369-AI-Law-Policy.mp3"][/audio]

There are a number of takeaways from this event that are relevant to the VR community. First of all, there are going to be a number of privacy issues around the biometric data that could be collected by virtual reality technologies, including eye tracking, attention, heart rate, emotional states, body language, and even EMG muscle data or EEG brainwaves. A number of companies are now using machine learning techniques to analyze and make sense of these raw data streams, and storing this type of biometric data and what it means could have serious privacy implications. For example, Conor Russomanno warned me that EEG data could have a unique fingerprint, so even storing anonymized brainwave data carries risk, because it could still be traced back to you.
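To make that re-identification risk concrete, here is a minimal sketch of the kind of nearest-neighbor matching attack Russomanno's warning implies. Everything in it is hypothetical: the user names, the four-element "spectral" feature vectors, and the enrollment database are all stand-ins for what a real EEG biometric pipeline would extract. But the principle holds: if a person's brainwave signature is stable, removing their name from the data doesn't anonymize it.

[code language="python"]
import numpy as np

# Hypothetical enrollment database: one averaged EEG feature vector per user,
# e.g. spectral power in a few standard frequency bands. Real systems would
# use far richer features; the names and numbers here are illustrative only.
enrolled = {
    "alice": np.array([0.62, 0.18, 0.11, 0.09]),
    "bob":   np.array([0.35, 0.40, 0.15, 0.10]),
    "carol": np.array([0.20, 0.22, 0.38, 0.20]),
}

def reidentify(anonymous_sample):
    """Match a name-stripped EEG feature vector to the closest enrolled
    user by Euclidean distance -- a simple nearest-neighbor attack."""
    return min(enrolled, key=lambda user: np.linalg.norm(enrolled[user] - anonymous_sample))

# An "anonymized" recording that still carries Bob's spectral signature:
sample = np.array([0.33, 0.41, 0.16, 0.10])
print(reidentify(sample))  # -> bob
[/code]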
I also discussed tracking user behavior and data with Linden Lab's Ebbe Altberg, where we talked about the potential dangers of companies banning users based upon observed behavior. Will there be AI approaches that either grant or deny access to virtual spaces based upon an aggregation of behavioral data or community feedback?

[caption id="attachment_36771" align="alignright" width="325"] See Also: Facebook-like 'Code of Conduct' Governs Voice Communication in Oculus Social Alpha[/caption]

Sheila Dean was concerned that she didn't hear many voices advocating for the privacy rights of users in the context of some of these AI-driven tracking solutions. She sees us in the middle of a battle where our privacy awareness and rights are eroding, and she says users need to be aware of what's at stake when AI neural nets start to flag us as targets within these databases. Consumers need to advocate for data access, privacy notice and consent, and privacy controls, and to become more aware of their privacy rights in general. We have the right to ask companies and the government to send us a copy of the data that they have about us, because we still own all of our data.

Sheila also had a strong reaction to Oren Etzioni's presentation. Etzioni is the CEO of the Allen Institute for Artificial Intelligence, and he had a rather optimistic take on AI and its risks. He had a slide that labeled SkyNet a 'Hollywood Myth', though Sheila notes that SKYNET is a very real NSA program: she cites an article by The Intercept reporting that an actual NSA program called SKYNET uses AI technologies to identify terrorist targets. At the same time, SkyNet has become something like the 'Hitler' of AI discussions, and we could probably adapt Godwin's Law to say, "As an online discussion [about AI] grows longer, the probability of a comparison involving [SkyNet] approaches 1."

https://twitter.com/adurdin/status/735227827759505408

There have been a lot of overblown fears about AI seeded by dystopian sci-fi dramas coming out of Hollywood, and these fears have the potential to prevent AI from contributing to the public good in many ways, from saving lives to making us smarter. Microsoft Research's Kate Crawford sees that discussions that jump straight to SkyNet can make practical, nuanced conversations difficult. She advocated for stronger ethics within the computer science community, as well as a more interdisciplinary approach that encompasses as many different perspectives on AI as possible.

In Alex McDowell's presentation at Unity's VR/AR Vision Summit, he argued that VR represents a return to valuing multiple perspectives. Stories used to be transmitted across many generations through oral traditions, where tribes would add to and adapt a story based upon their own recent personal experiences. Alex says the advent of print, film, and TV marked a shift toward canonical versions of stories told primarily from a singular perspective. But VR has the potential to show us the vulnerability of the first-person perspective, and as a result to put more emphasis on ensuring that our machine learning approaches include a diversity of perspectives across many different domains.

Right now AI is very narrow and focused on specific applications, but moving towards artificial general intelligence means that we're going to have to discover underlying principles that transfer into a common-sense framework for intelligence. Artificial general intelligence is one of the hard, unsolved problems in AI, and so far no one knows how to crack it; it's likely to require cross-disciplinary collaboration, holistic thinking, and other ingredients that have yet to be discovered.

Another takeaway from this AI workshop for me is that VR enthusiasts are going to have the hardware required to train AI networks. Anyone who has a PC capable of running an Oculus Rift or HTC Vive has a high-end graphics card, and the GTX 970, 980, and 1080 share the same architectures as the even higher-end NVIDIA GPUs used to train neural networks. When VR gamers are not playing a VR experience, they could be using their computer's massively parallel processing capability to train neural networks. Gaming and virtual reality have been key drivers of GPU technology, so AI and VR have a symbiotic relationship within the technology stack that is enabling both revolutions.

Self-driving cars are also going to have very powerful GPUs as the parallel-processing brains behind their computer vision sensors and the continuous training of the neural nets that drive them. There will likely be a lot of unintended consequences of these new driving platforms that we haven't even thought of yet. Will we be playing VR experiences driven by the GPU in our car? Will we be using our cars to train AI neural networks? Or will we even own cars in the future, instead switching over to autonomous transportation services as our primary mode of travel?
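As a rough sketch of what this GPU moonlighting could look like, here's a minimal PyTorch training loop that runs on whatever CUDA-capable graphics card a VR rig (or, speculatively, a car) exposes, falling back to the CPU otherwise. PyTorch and the toy network are my own choices for illustration; nothing here comes from the workshop itself.

[code language="python"]
import torch
import torch.nn as nn

# Use the same GPU that drives the headset while it's idle, if one is present.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A toy classifier and synthetic data, just to exercise the card's
# massively parallel cores; a real workload would swap in a real dataset.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(512, 64, device=device)            # fake feature batch
targets = torch.randint(0, 10, (512,), device=device)   # fake labels

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()   # gradients are computed on the GPU
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
[/code]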
Our society is also in the midst of moving from the Information Age to the Experiential Age. In the Information Age, computer algorithms were written in logical, rational code that could be debugged and well understood by humans. In the Experiential Age, machine learning neural networks are instead guided through a training "experience" by humans, who curate, craft, and collaborate with these networks throughout the entire training process. But once these neural networks start making decisions, humans can have a hard time describing why a neural net made a given decision, especially in the cases where machine learning processes start to exceed human intelligence. We are going to need to create AI that is able to understand and explain to us what other AI algorithms are doing.

Because machine learning programs need to be trained by humans, AI carries the risk that some of our own biases and prejudices could be transferred into computer programs. ProPublica published a year-long investigation into machine bias, and found evidence that software used to predict future criminals was "biased against blacks."

AI presents a lot of interesting legal, economic, and safety issues, which has Deputy U.S. CTO Ed Felten saying, "Like any transformative technology, however, artificial intelligence carries some risk and presents complex policy challenges along several dimensions, from jobs and the economy to safety and regulatory questions."

There is a whole class of jobs that will be replaced by AI, and truck drivers are probably the most at risk. Pedro Domingos said that AI is pretty narrow right now, so the more diverse the set of skills and common sense required to do a job, the safer that job is for now. With a lot of jobs potentially being displaced by AI, VR may have a huge role to play in helping to train displaced workers with new job skills.

AI will have vast implications for our society, and the government is starting to take notice and is taking a proactive approach by soliciting feedback and holding these public workshops on AI. This first workshop, on the Legal and Governance Implications of Artificial Intelligence, happened this past Tuesday in Seattle, WA. Here are the three other AI workshops that are coming up:

June 7, 2016: Artificial Intelligence for Social Good in Washington, DC
June 28, 2016: Safety and Control for Artificial Intelligence in Pittsburgh, PA
July 7, 2016: The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term in New York City

Here's the livestream archive of the first Public Workshop on Artificial Intelligence: Law and Policy.

Here are a couple of write-ups of the event:

First White House AI workshop focuses on how machines (plus humans) will change government
What to Do When a Robot Is the Guilty Party

Darius Kazemi is an artist who creates AI bots, and he did some live-tweeted coverage with a lot of commentary starting here (click through to see the full discussion):

https://twitter.com/tinysubversions/status/735198254648819712

If you have thoughts about the future of AI, you should be able to find the Request for Information (RFI) on the White House Office of Science & Technology Policy blog very shortly.