How AI will impact the future of security

The speed of innovation has rapidly accelerated since we became a digitized society, and some innovations have fundamentally changed the way we live — the internet, the smartphone, social media, cloud computing.

As we’ve seen over the past few months, we are on the precipice of another tidal shift in the tech landscape that stands to change everything – AI. As Brad Smith pointed out recently, artificial intelligence and machine learning are arriving in technology’s mainstream as much as a decade early, bringing a revolutionary capability to peer deeply into massive data sets and find answers where we’ve formerly only had questions. We saw this play out a few weeks ago with the remarkable AI integration coming to Bing and Edge. That innovation demonstrates not only the ability to quickly reason over immense data sets but also to empower people to make decisions in new and different ways that could have a dramatic effect on their lives. Imagine the impact that kind of scale and power could have in protecting customers against cyber threats.

As we watch the progress enabled by AI accelerate quickly, Microsoft is committed to investing in tools, research, and industry cooperation as we work to build safe, sustainable, responsible AI for all. Our approach prioritizes listening, learning, and improving.

And to paraphrase Spider-Man creator Stan Lee, with this massive computing potential comes an equally weighty responsibility on the part of those developing and securing new AI and machine learning solutions. Security is a space that will feel the impacts of AI profoundly.   

AI will change the equation for defenders.

There has long been a perception that attackers have an insurmountable agility advantage. Adversaries with novel attack techniques typically enjoy a comfortable head-start before they are conclusively detected. Even those using age-old attacks, like weaponizing credentials or third-party services, have enjoyed an agility advantage in a world where new platforms are always emerging.

But the asymmetric tables can be turned: AI has the potential to swing the agility pendulum back in favor of defenders. AI empowers defenders to see, classify and contextualize much more information, much faster than even large teams of security professionals can collectively triage. AI's radical capabilities and speed give defenders the ability to deny attackers their agility advantage.

If we inform our AI properly, software running at cloud scale will help us find our true device fleets, spot the uncanny impersonations, and instantly discover which security incidents are noise and which are intricate steps along a more malevolent path — and it will do so faster than human responders can traditionally swivel their chairs between screens.

AI will lower the barrier to entry for careers in cybersecurity.

According to a workforce study conducted by (ISC)², the world's largest nonprofit association of certified cybersecurity professionals, the global cybersecurity workforce is at an all-time high, with an estimated 4.7 million professionals, including 464,000 added in 2022. Yet the same study reports that 3.4 million more cybersecurity workers are needed to secure assets effectively.

Security will always need the power of humans and machines, and more powerful AI automation will help us optimize where we use human ingenuity. The more we can tap AI to render actionable, interoperable views of cyber risks and threats, the more space we create for less experienced defenders who may just be starting their careers. In this way, AI opens the door for entry-level talent while also freeing highly skilled defenders to focus on bigger challenges.

The more AI serves on the front lines, the more impact experienced security practitioners and their priceless institutional knowledge can have. And this also creates a mammoth opportunity and call to action to finally enlist data scientists, coders, and a host of people from other professions and backgrounds deeper into the fight against cyber risk.

Responsible AI must be led by humans first.

There are many dystopian visions warning us of what misused or uncontrolled AI could become. How do we as a global community ensure that the power of AI is used for good and not evil, and that people can trust that AI is doing what it's supposed to be doing?

Some of that responsibility falls to policymakers, governments and global powers. Some of it falls to the security industry to help build protections that stop bad actors from harnessing AI as a tool for attack.

No AI system can be effective unless it is grounded in the right data sets, continually tuned and subjected to feedback and improvements from human operators. As much as AI can lend to the fight, humans must be accountable for its performance, ethics and growth. The disciplines of data science and cybersecurity will have much more to learn from each other — and indeed from every field of human endeavor and experience — as we explore responsible AI.

Microsoft is building a secure foundation for working with AI.

Early in the software industry, security was not a foundational part of the development lifecycle, and we saw the rise of worms and viruses that disrupted the growing software ecosystem. Learning from those issues, today we build security into everything we do.

In AI’s early days, we’re seeing a similar situation. We know the time to secure these systems is now, as they are being created. To that end, Microsoft has been investing in securing this next frontier. We have a dedicated group of multi-disciplinary experts actively looking into how AI systems can be attacked, as well as how attackers can leverage AI systems to carry out attacks.

Today the Microsoft Security Threat Intelligence Team is making some exciting announcements that mark new milestones in this work, including the evolution of innovative tools like Microsoft Counterfit that have been built to help our security teams think through such attacks.

AI won't be "the tool" that solves security in 2023, but it will become increasingly important that customers choose security providers who can offer both hyperscale threat intelligence and hyperscale AI. Combined, those are what will give customers an edge over attackers when it comes to defending their environments.

We must work together to beat the bad guys.

Making the world a safer place is not something any one group or company can do alone. It is a goal we must come together to achieve across industries and governments.

Each time we share our experiences, knowledge and innovations, we make the bad actors weaker. That's why it's so important that we work toward a more transparent future in cybersecurity. It’s critical to build a security community that believes in openness, transparency and learning from each other.

Largely, I believe the technology is on our side. While there will always be bad actors pursuing malicious intentions, the bulk of data and activity that train AI models is positive, and therefore the AI will be trained as such.

Microsoft believes in a proactive approach to security — including investments, innovation and partnerships. Working together, we can help build a safer digital world and unlock the potential of AI.
