Dangers of Artificial Intelligence in the Future


Artificial intelligence is a fantastic development tool right now. It has transformed technology across industries and helped solve many practical problems. However, AI is still in its infancy, and if not properly managed, it can cause significant harm. Artificial Intelligence can pose a threat to humans in a variety of ways, and it is best if these threats are discussed now so that they can be anticipated and managed in the future.

Table of Contents

  • Dangers of Artificial Intelligence in the Future
  • Privacy Violations
  • Autonomous Weapons
  • Loss of Human Employment
  • Terrorism by Artificial Intelligence
  • Bias in Artificial Intelligence

Stephen Hawking, the legendary physicist, warned at a tech conference that artificial intelligence could be either the best or the worst thing ever to happen to humanity. He had a point. With this warning in mind, consider the following five threats that Artificial Intelligence may pose in the future.

Privacy Violations

Everyone is entitled to privacy as a fundamental human right. Artificial Intelligence, however, may erode that privacy in the future. Even today, it is simple to track you as you go about your daily routine. Face recognition technology, for example, can pick you out of a crowd, and many security cameras are now equipped with it. Thanks to AI’s data-gathering capabilities, a timeline of your daily activities can be assembled by combining data from various social networking sites. The sketch below shows how little code such tracking now requires.
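To make this concrete, here is a minimal sketch, built on the open-source Python library face_recognition, of how a known individual can be matched against every face detected in a crowd photo. This is an illustration only, not any real surveillance system’s code, and the image filenames are hypothetical placeholders.

```python
# Minimal face-matching sketch using the open-source `face_recognition`
# library. The image filenames below are hypothetical placeholders.
import face_recognition

# Encode one reference photo of the person we are looking for.
known_image = face_recognition.load_image_file("known_person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Detect and encode every face that appears in a crowd photo.
crowd_image = face_recognition.load_image_file("crowd.jpg")
locations = face_recognition.face_locations(crowd_image)
encodings = face_recognition.face_encodings(crowd_image, locations)

# Compare each detected face against the reference encoding.
for location, encoding in zip(locations, encodings):
    if face_recognition.compare_faces([known_encoding], encoding)[0]:
        print(f"Match found at (top, right, bottom, left) = {location}")
```

Swap the still image for a webcam feed and these few lines become a rudimentary real-time tracker, which is exactly why this capability raises privacy concerns.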

In fact, China is currently developing an Artificial Intelligence-driven Social Credit System. This system will assign a score to every Chinese citizen based on their behaviour. Defaulting on loans, playing loud music on trains, smoking in non-smoking areas, playing too many video games, and so on are examples of the behaviour it tracks. A low score could mean being barred from travelling, a lower social status, and so on. This is a prime example of how Artificial Intelligence could lead to a near-total loss of privacy and restrict access to everyday life.

Autonomous Weapons 

Autonomous weapons, also known as “killer robots,” are military robots that can search for and aim at targets on their own, following pre-programmed instructions. Almost every technologically advanced country on the planet is working on developing these robots. Indeed, a senior executive from a Chinese defence firm has stated that future wars will not be fought by humans, and that the use of lethal autonomous weapons will be unavoidable.

However, these weapons come with a slew of risks. What if they go rogue and kill people who were never their targets? What if they cannot tell the difference between their targets and innocent bystanders, and kill the wrong people by accident? Who would then be held accountable? Worse still, what if these “killer robots” are developed by governments that place little value on human life? Shutting such robots down would be a serious challenge. With these issues in mind, it was decided in 2018 that autonomous weapons would still require a final human command before attacking. However, as the technology advances, this problem could become exponentially worse.

Loss of Human Employment

As Artificial Intelligence improves, it will inevitably take over jobs that were previously performed by humans. According to a report published by the McKinsey Global Institute, automation could displace up to 800 million jobs globally by 2030. That raises the question: what happens to the people who lose their jobs as a result? Some believe that Artificial Intelligence will also create a large number of jobs, which will help to balance the scales. People may be able to transition from physically demanding jobs to jobs that require creative and strategic thinking, and those in less demanding roles may gain more time to spend with their friends and family.

However, this upside is most likely to reach people who are already educated and wealthy, which could widen the gap between rich and poor even further. Robots in the workforce do not need to be compensated the way human employees do, so the owners of AI-driven businesses would capture all of the gains and grow even wealthier, while the humans they replaced become poorer. A new societal arrangement would therefore be needed so that everyone can still earn a living in this scenario.

Terrorism by Artificial Intelligence

While Artificial Intelligence can make a significant contribution to the world, it can also help terrorists carry out attacks. Many terrorist organizations already use drones to strike targets in other countries. In fact, ISIS carried out its first successful drone attack in Iraq in 2016, killing two people. Now imagine thousands of drones launched from a single truck or car, programmed to kill only a specific group of people: that would be a terrifying, technology-enabled form of terrorist attack.

Terrorists could also use self-driving cars to deliver and detonate bombs, or build guns that track movement and fire without any human intervention. Automated sentry guns of this kind are already deployed on the border between North and South Korea. Another concern is that terrorists could gain access to the aforementioned “killer robots.” While governments may at least try to prevent the deaths of innocent people, terrorists would have no such scruples and would use these robots to carry out terror attacks.

Bias in Artificial Intelligence

Humans, unfortunately, are sometimes prejudiced against other religions, genders, nationalities, and other groups, and this bias can find its way, even unconsciously, into the Artificial Intelligence systems that humans create. Bias can also enter a system through skewed data generated by humans. Amazon, for example, discovered that its Machine Learning-based recruiting algorithm was biased against women. The algorithm had been trained on the résumés submitted and candidates hired over the previous ten years; because the majority of those candidates were men, it learned to favour men over women. The sketch below shows how easily this happens.
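Here is a minimal, synthetic sketch of that mechanism in Python with scikit-learn. It is not Amazon’s actual model; the data, feature names, and weights are invented purely for illustration. A classifier trained on historically skewed hiring labels learns a positive weight on the gender feature, so two equally skilled candidates receive different scores.

```python
# Synthetic sketch: how historically biased labels teach a model to
# discriminate. All data and numbers here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(0, 1, n)      # a genuine job-relevant score
gender = rng.integers(0, 2, n)   # 1 = male, 0 = female (toy encoding)

# Historical hiring decisions: driven by skill, but tilted towards men --
# this is the "skewed data generated by humans".
hired = (skill + 0.8 * gender + rng.normal(0, 0.5, n)) > 0.8

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)
print("learned weights [skill, gender]:", model.coef_[0])

# Two candidates with identical skill now get different hiring scores.
print("male candidate  :", model.predict_proba([[0.5, 1]])[0, 1])
print("female candidate:", model.predict_proba([[0.5, 0]])[0, 1])

# Crude mitigation: drop the sensitive feature before training.
fair_model = LogisticRegression().fit(skill.reshape(-1, 1), hired)
print("skill-only weight:", fair_model.coef_[0])
```

Note that simply dropping the sensitive column, as in the last two lines, is only a crude first step: a model can still infer gender from correlated proxy features (Amazon’s system reportedly penalized résumés containing the word “women’s”), which is why bias also has to be tackled in the data itself.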

In a separate incident, Google Photos’ image-labelling algorithm tagged two African-Americans as “gorillas”, a clear case of racial bias in which the algorithm mislabelled humans. The question then becomes: how do we deal with this bias? How can we ensure that Artificial Intelligence does not become racist or sexist the way some humans are? The main way to deal with it is for AI researchers to actively test for and remove bias while developing and training AI systems, and to take care when selecting the training data. And that wraps up today’s topic, “What are the dangers of AI”. Stay connected with IBlogger for more interesting Technology Guest blog posts.
