Wonderful Engineering

Cyberattacks Could Become Deadly In Just 5 Years’ Time, New Report Says

Cyberattacks are becoming more common and have been identified as one of the most strategic risks confronting the world today. In recent years, we have seen digital attacks on governments and owners of critical infrastructure, large and small private corporations, educational institutions, and non-profit organizations. Not only is no sector immune from cyberattacks, but the level of sophistication of the threats they face is continually increasing.

A new class of subtle and stealthy attackers has recently emerged to drive the future of cybersecurity. Their goal is to manipulate or change data rather than steal it. There is little doubt that attackers will use artificial intelligence (AI) to drive the next major upgrade in cyber weaponry and will eventually pioneer the malicious use of AI.

AI’s fundamental ability to learn and adapt will usher in a new era of scalable, highly customized, and human-like attacks. “Offensive AI,” or highly sophisticated and malicious attack code, will be able to mutate as it learns about its environment and will be able to expertly compromise systems with little chance of detection.

AI-powered cyberattacks are not a hypothetical future concept. All the required building blocks for offensive AI already exist: highly sophisticated malware; financially motivated, ruthless criminals willing to use any means possible to increase their return on investment; and open-source AI research projects that put highly valuable information into the public domain.

According to a recent cyber-analytical report, AI-enabled cyberattacks, which have been relatively limited so far, may become more aggressive in the coming years. The consequences of these developing attack methods could be highly destructive and even life-threatening. By undermining data integrity, these stealthy attacks erode trust in organizations and can even trigger systemic failures.

Imagine an oil rig using faulty geo-prospecting data to drill for oil in the wrong place, or a physician making a diagnosis from compromised medical records. As the AI arms race continues, we can only expect this cycle of innovation to escalate.
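One common defense against this kind of silent data manipulation is cryptographic integrity checking: each record is signed when it is written, and the signature is verified before the record is trusted. The sketch below is purely illustrative (the key, record format, and function names are hypothetical, not drawn from any report mentioned above); it uses an HMAC so that even a single altered byte in a record, such as a changed diagnosis, is detected.

```python
import hashlib
import hmac

# Hypothetical shared key for illustration; real systems would manage
# keys in a secrets store, not a source file.
SECRET_KEY = b"example-shared-key"

def sign_record(record: bytes) -> str:
    """Compute an HMAC-SHA256 tag over a data record at write time."""
    return hmac.new(SECRET_KEY, record, hashlib.sha256).hexdigest()

def verify_record(record: bytes, tag: str) -> bool:
    """Return True only if the record is unmodified since signing.

    compare_digest performs a constant-time comparison, which avoids
    leaking information through timing differences.
    """
    return hmac.compare_digest(sign_record(record), tag)

# A record is signed when stored...
original = b"patient_id=42;diagnosis=benign"
tag = sign_record(original)

# ...and verified when read back. A tampered copy fails the check.
tampered = b"patient_id=42;diagnosis=malignant"
print(verify_record(original, tag))   # True
print(verify_record(tampered, tag))   # False
```

Such checks do not prevent tampering, but they restore the property the article says these attacks erode: the ability to notice that data can no longer be trusted.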
