News & Analysis

Can AI Define Right and Wrong?

For now, the answer appears to be a big and resounding NO, though there may be some light at the end of the tunnel.


AI started making its way into society years ago. According to a 2022 McKinsey Global Survey on artificial intelligence, 20% of respondents in 2017 reported having adopted AI in at least one area of business; a year ago, that figure stood at 50%.

This growth is expected to continue as organizations perceive the value of AI for their businesses. According to a Forbes Advisor survey, over 60% of business owners believe AI will increase productivity: 64% stated that AI would improve business productivity, and 42% believe it will streamline job processes, signaling broad acceptance of AI within the business space.

ChatGPT, an AI platform by OpenAI, has become the public gateway to AI, presenting users all over the world with a new landscape of opportunities – from writing papers and notes for students, to business emails, to scientific and medical applications, and, unfortunately, to new activities for cybercriminals. For the latter, these tools offer significant advantages:

  • No skill or experience needed: one of the biggest advantages AI has brought to cybercriminals is its ease of use, which enables more users to carry out malicious activities, including would-be cybercriminals who previously lacked the skills to do so. Over the past few months, new small groups of cybercriminals have emerged that are capable of more sophisticated cyberattacks thanks to this new technology.
  • Improving cyberattacks: because these tools are available under an unlimited free-access model, attackers have taken advantage of their speed and accuracy to create malicious code and cyberattacks, such as phishing campaigns, generating content that is virtually indistinguishable from legitimate material and very difficult to detect. Moreover, the autonomous learning models behind these tools allow them not only to answer questions but also to create all kinds of content (images, videos, or even audio). This has unfortunately led to the misuse of so-called deepfakes, hyper-realistic imitations used to spread disinformation, with well-known cases of spoofed country leaders such as Barack Obama, Joe Biden, Volodymyr Zelensky, and even Vladimir Putin.
  • The introduction of automated cyberattacks: this technology has driven a significant increase in the use of bots and automated systems to carry out online attacks, allowing cybercriminals to be more successful – as demonstrated by the rise in cyberattacks globally, which increased 38% over last year. Cybercriminals can also use AI-powered botnets to launch massive DDoS attacks that overwhelm their targets’ servers and disrupt their services.
  • Cloned artificial intelligence tools: despite the fast response of developers, who continue to patch flaws in these models, cybercriminals have quickly found ways around their restrictions. Earlier this year, the Check Point Software research team reported that cybercriminals were already distributing and selling their own modified ChatGPT APIs on the Dark Web. The recent release of ChatGPT-4 was accompanied by a new campaign to steal and sell premium accounts, granting full and unlimited access to the tool’s new features.

The need to regulate artificial intelligence learning

While AI is undoubtedly the future, experts are demanding wider regulation. Among them is Elon Musk, one of the co-founders of OpenAI itself, who joined a public petition seeking to temporarily pause the development of these tools until ethical training can be ensured. Meanwhile, international bodies such as the European Union are already developing their own AI laws, with proposals that emphasize cybersecurity and data security needs.

The challenge lies in the fact that, once learned, knowledge is virtually impossible to “remove” from these models. This means that security mechanisms focus on preventing the models from collecting or revealing certain types of information, rather than eradicating the knowledge altogether.

But not all news surrounding AI is negative. Today, artificial intelligence and machine learning are two of the main pillars helping to improve cybersecurity capabilities. The complexity and dispersion of current corporate systems make traditional, manual monitoring, supervision, and risk control insufficient.

These technologies allow for much more accurate and comprehensive threat analysis and maintain round-the-clock security, bypassing the barriers imposed by human limits.

One example of this technology applied to cybersecurity is Check Point’s ThreatCloud AI, the brains behind all Check Point Software products: an AI-based threat prevention solution capable of making 2 billion security decisions daily, scanning websites, emails, IoT devices, mobile apps, and more.

“The race against cybercriminals continues to be one of our main priorities; we must maintain an updated environment prepared to deal with all current and future threats,” shares Rebecca Law, Country Manager, Singapore, Check Point Software Technologies.

“Today we have several tools that exemplify the possibilities of artificial intelligence in the field of cybersecurity. However, to mitigate the risks associated with advanced AI, it is important that researchers and policymakers work together to ensure these technologies are developed in a safe and beneficial way.”
