Another takeaway from this AI workshop for me is that VR enthusiasts are going to have the hardware required to train AI networks. Anyone with a PC capable of running the Oculus Rift or HTC Vive has a high-end graphics card, and cards like the GTX 970, 980, or 1080 are built on the same architectures as the even higher-end NVIDIA GPUs used to train neural networks.

When VR gamers are not in a VR experience, they could be using their computer's massively parallel processing capability to train neural networks. Gaming and virtual reality have been two of the key drivers of GPU technology, and so AI and VR have a very symbiotic relationship in the technology stack that's enabling both revolutions.
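To make that concrete, here's a minimal sketch (assuming PyTorch, with a toy model and random data standing in for a real workload) of what training a neural network on a consumer gaming GPU looks like:

```python
import torch
import torch.nn as nn

# Use the gaming GPU if one is present; fall back to CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A toy model and random data stand in for a real training workload.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(256, 128, device=device)          # fake input batch
y = torch.randint(0, 10, (256,), device=device)   # fake class labels

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()    # gradients computed on the GPU's parallel cores
    optimizer.step()

print(f"final loss: {loss.item():.4f} on {device}")
```

The same loop runs unchanged whether the device is a GTX-class gaming card or a datacenter GPU; only the hardware it lands on differs.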

Self-driving cars are also going to have very powerful GPUs as part of their parallel-processing brains, powering both the computer vision that processes sensor data and the continuous training of the neural networks that do the driving. There will likely be a lot of unintended consequences of these new driving platforms that we haven't even thought of yet.

Will we be playing VR driven by the GPU in our car? Will we be using our cars to train AI neural networks? Or will we even own cars in the future, and instead switch over to autonomous transportation services as our primary mode of travel?

Our society is also in the midst of moving from the Information Age to the Experiential Age. In the Information Age, computer algorithms were written in logical, rational code that could be debugged and well understood by humans. In the Experiential Age, machine learning neural networks are guided through a training "experience" by humans. Humans are curating, crafting, and collaborating with these neural networks throughout the entire training process. But once these neural networks start making decisions, humans can have a hard time describing why the neural net made a given decision, especially in the cases where machine learning processes start to exceed human intelligence. We are going to need to start creating AI that is able to understand and explain to us what other AI algorithms are doing.
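To make the "explaining a neural net's decision" problem concrete, here's a minimal sketch (assuming PyTorch, with a hypothetical untrained toy model) of input-gradient saliency, one of the simplest interpretability techniques: it scores how much each input feature influenced the model's decision.

```python
import torch
import torch.nn as nn

# A hypothetical toy classifier (untrained here, just for illustration).
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

x = torch.randn(1, 8, requires_grad=True)   # one example input
scores = model(x)
predicted_class = scores.argmax(dim=1).item()

# Backpropagate the winning class's score down to the inputs.
scores[0, predicted_class].backward()

# Larger absolute gradients = input features with more influence
# on this particular decision.
saliency = x.grad.abs().squeeze()
for i, influence in enumerate(saliency.tolist()):
    print(f"feature {i}: influence {influence:.3f}")
```

Techniques like this give only a partial answer, which is part of why explaining decisions gets harder as the models get more capable.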


Because machine learning programs need to be trained by humans, AI carries the risk that some of our own biases and prejudices could be transferred into computer programs. ProPublica conducted a year-long investigation into machine bias, and found evidence that software used to predict future criminals was "biased against blacks."
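One concrete way to see that kind of disparity (a sketch with made-up toy numbers, not ProPublica's data) is to compare false positive rates across groups: the share of people who did not reoffend but were still flagged as high risk.

```python
# Synthetic toy data, NOT ProPublica's dataset: in predictions, 1 means
# flagged high-risk; in outcomes, 1 means the person actually reoffended.
def false_positive_rate(predictions, outcomes):
    """Share of people who did NOT reoffend but were flagged high-risk."""
    flagged_innocent = sum(p == 1 and o == 0 for p, o in zip(predictions, outcomes))
    total_innocent = sum(o == 0 for o in outcomes)
    return flagged_innocent / total_innocent

group_a_preds, group_a_outcomes = [1, 1, 0, 1, 0, 0], [1, 0, 0, 0, 0, 1]
group_b_preds, group_b_outcomes = [0, 1, 0, 0, 1, 0], [0, 1, 0, 0, 0, 1]

print(f"Group A false positive rate: {false_positive_rate(group_a_preds, group_a_outcomes):.2f}")
print(f"Group B false positive rate: {false_positive_rate(group_b_preds, group_b_outcomes):.2f}")
```

A model can look reasonable on overall accuracy while still producing very different error rates like these across groups, which is the kind of disparity ProPublica reported.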

AI presents a lot of interesting legal, economic, and safety issues, which has Deputy U.S. CTO Ed Felten saying, "Like any transformative technology, however, artificial intelligence carries some risk and presents complex policy challenges along several dimensions, from jobs and the economy to safety and regulatory questions."

There is going to be a whole class of jobs replaced by AI, and one of the most at risk is truck driving. Pedro Domingos said that AI is pretty narrow right now, so the more diverse the set of skills and common sense required to do a job, the safer that job is for now. With a lot of jobs potentially being displaced by AI, VR may have a huge role to play in helping to train displaced workers in new job skills.

AI will have vast implications for our society, and the government is starting to take notice, taking a proactive approach by soliciting feedback and holding public workshops about AI. This first AI workshop, on the legal and governance implications of artificial intelligence, happened this past Tuesday in Seattle, WA.

Here are the three other AI workshops that are coming up:


Here’s the livestream archive of the first Public Workshop on Artificial Intelligence: Law and Policy

Here are a couple of write-ups of the event:

Darius Kazemi is an artist who creates AI bots, and he did some live-tweet coverage with a lot of commentary, starting here (click through to see the full discussion):

If you have thoughts about the future of AI, then you should be able to find the Request for Information (RFI) on the White House Office of Science & Technology Policy blog here very shortly.




  • Lots of interesting points there, but as a Glass explorer, I'm now hyper-sensitive to how people throw around the "privacy" topic, and I think it's a bit abused here. Privacy: "the state or condition of being free from being observed or disturbed by other people" and "the state of being free from public attention." Since one cannot be unobserved in public, "privacy" cannot be an issue in public. That means that every time someone raises privacy concerns that don't relate to a "private" (lacking public visibility) context, it makes no sense, and it seems that MUCH of the "privacy" talk these days is guilty of just that. The same applies to AI and biometrics. In your home, this kind of observation would violate your privacy IF it was uninvited. Outside your home (be it permanent or temporary), and perhaps a bathroom, there is little to no expectation of privacy, though.

    • Bruno

      Privacy is a right of everyone. You want it even in public. A micro-example: you wear clothes for privacy (besides other reasons, of course).

  • DonGateley

    For novelist (and all-around smart guy) Neal Stephenson's take on what a more benign Skynet derivative could be, see the Reticulum in his novel Anathem.

  • Michael Cordes

    Imo the real danger of Skynet is not domination by the machines themselves – that is certainly a contradictory and crazy idea, and furthermore it hides the real danger: domination by man, e.g. a single man or a certain group of men, with the machines as their tool.