Autonomous vehicle technology has been progressing steadily over the years. So, too, has the Artificial Intelligence (AI) that gives a vehicle's computers the ability to understand their surroundings and make decisions based on relevant information.
Today, most advanced driver-assistance systems based on radar and cameras cannot accurately detect and classify objects such as other vehicles, bicycles, and pedestrians. That capability will be necessary for autonomous driving.
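To make that concrete, here is a minimal sketch of what detecting and classifying objects looks like in code, using a pretrained detection model from the torchvision library. The image file name and the score threshold are placeholders for illustration, not part of any particular vehicle system.

```python
# Minimal object-detection sketch using a pretrained torchvision model.
# "street_scene.jpg" and the 0.5 score threshold are illustrative placeholders.
import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

# Model pretrained on the COCO categories (person, bicycle, car, ...).
# On older torchvision versions, use pretrained=True instead of weights="DEFAULT".
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("street_scene.jpg").convert("RGB")
with torch.no_grad():
    predictions = model([to_tensor(image)])[0]

# Each detection comes back as a bounding box, a COCO category id, and a confidence score.
for box, label, score in zip(predictions["boxes"],
                             predictions["labels"],
                             predictions["scores"]):
    if score >= 0.5:
        print(f"category {label.item():>2}  score {score:.2f}  box {box.tolist()}")
```

Detecting an object is only the first step; the harder problems are classifying it reliably in bad weather or poor lighting and deciding what to do about it.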
AI offers many possibilities, especially when combined with computers that “self learn.” But with it comes the potential for an AI system to cause great harm, whether intentionally or unintentionally. Think of the novels and movies where robots take over.
What If?
Looking ahead, I have several questions about autonomous vehicles and Artificial Intelligence.
- How can you test and validate AI systems that change their behavior? (A rough sketch of one approach follows this list.)
- Should vehicles be allowed to change their behavior without some form of authorization?
- What happens if an autonomous vehicle is programmed to do something and it develops a harmful method for achieving it?
- Can security protocols be developed to make autonomous vehicles and AI systems “hack proof?”
- Will AI systems be able to detect malicious hacks and learn how to block them?
- Could AI systems become more intelligent than any human? If so, would we be able to predict how the systems would behave?
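On the testing question in particular, one commonly discussed approach is behavioral regression testing: freeze a suite of scenarios and check whether an updated system still makes acceptable decisions on all of them before the update is allowed on the road. The sketch below is purely illustrative; the two "policy" functions are stand-ins for an old and an updated driving model, not anyone's actual software.

```python
# Illustrative behavioral regression test for a system whose behavior can change.
# The scenarios and both policy functions are hypothetical stand-ins.

SCENARIOS = [
    {"name": "pedestrian_in_crosswalk", "obstacle_distance_m": 8,   "speed_kph": 30},
    {"name": "clear_highway",           "obstacle_distance_m": 200, "speed_kph": 100},
    {"name": "cyclist_ahead",           "obstacle_distance_m": 15,  "speed_kph": 50},
]

def policy_v1(scenario):
    # Old behavior: brake whenever an obstacle is within 20 m.
    return "brake" if scenario["obstacle_distance_m"] < 20 else "maintain_speed"

def policy_v2(scenario):
    # Updated ("self-learned") behavior: brakes later, only within 10 m.
    return "brake" if scenario["obstacle_distance_m"] < 10 else "maintain_speed"

def regression_report(old_policy, new_policy, scenarios):
    """List every scenario where the updated system's decision differs."""
    return [
        (s["name"], old_policy(s), new_policy(s))
        for s in scenarios
        if old_policy(s) != new_policy(s)
    ]

for name, old, new in regression_report(policy_v1, policy_v2, SCENARIOS):
    print(f"{name}: was '{old}', now '{new}' -- needs review before deployment")
```

A check like this only catches changes on scenarios someone thought to write down, which is exactly why the question of validating self-learning systems is so hard.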
Only time and research will provide the answers to these questions.
I welcome your thoughts and opinions.