Disclosure: PasswordManager.com earns a commission from referring visitors to some products and services using affiliate partnerships.

Since the launch of sophisticated AI-driven tools such as ChatGPT and Google’s Bard, reports have emerged indicating that these tools could help hackers steal passwords and phish for sensitive information even more effectively than before.

To learn how much of a threat this poses to the average American, PasswordManager.com surveyed 1,000 cybersecurity professionals in April 2023.

Key findings:

  • 56% are concerned about hackers using AI-powered tools to steal passwords
  • 52% say AI has made it easier for scammers to steal sensitive information
  • 18% say AI phishing scams pose a ‘high-level’ threat to both the average American individual and the average American company

More Than Half of Cybersecurity Professionals Concerned About AI’s Ability to Steal Passwords and Sensitive Information

When survey respondents were asked to rate their level of concern about people using AI tools to hack passwords, 56% say they are ‘somewhat’ (26%) or ‘very’ (30%) concerned about this possibility.

Similarly, 58% of respondents say they are ‘somewhat’ (26%) or ‘very’ (32%) concerned about people using AI-powered tools to create phishing attacks.

“ChatGPT is a tool with many excellent capabilities, and there is no discussion about that. But many people don’t know it is also a powerful tool that hackers or scammers can use,” comments Marcin Gwizdala, Chief Technology Officer at Tidio. “One of the threats that appeared by using AI, in general, is phishing scams. ChatGPT can be easily mistaken for an actual human being because it can converse seamlessly with users without spelling, grammatical, and verb tense mistakes. That’s precisely what makes it an excellent tool for phishing scams,” he explains.

“Of course, as we know, those attacks require immediate attention and actionable solutions,” Gwizdala continues. “The best way to do that is to equip your IT team with tools that can determine what’s ChatGPT-generated vs. what’s human-generated, explicitly geared toward incoming ‘cold’ emails.”
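As a rough illustration of the kind of first-pass screening Gwizdala describes, the Python sketch below flags unsolicited (“cold”) emails that look machine-written or phishy. Everything in it is a simplified assumption: the scoring rules and thresholds are arbitrary placeholders, and a real deployment would rely on a dedicated AI-text-detection model or service rather than hand-written heuristics.

```python
# Illustrative sketch only (not a real detector): flag incoming "cold"
# emails that may be AI-generated phishing. The heuristics and thresholds
# below are hypothetical placeholders; production systems would call a
# dedicated AI-text-detection model or service instead.
import re
from dataclasses import dataclass, field


@dataclass
class ScreenResult:
    suspicious: bool
    score: float
    reasons: list = field(default_factory=list)


def screen_cold_email(body: str, known_sender: bool) -> ScreenResult:
    """Score an email; higher scores mean more likely machine-written phishing."""
    score, reasons = 0.0, []

    if not known_sender:
        score += 0.3
        reasons.append("unsolicited sender")

    sentences = [s for s in re.split(r"[.!?]+\s*", body) if s]
    if sentences:
        avg_words = sum(len(s.split()) for s in sentences) / len(sentences)
        # LLM prose tends toward uniform, mid-length sentences with
        # flawless grammar -- one weak signal among many.
        if 12 <= avg_words <= 25:
            score += 0.2
            reasons.append("uniform, polished sentence structure")

    # Urgency and credential requests are classic phishing markers,
    # regardless of whether a human or a model wrote the text.
    if re.search(r"verify your (password|account)|urgent|act now", body, re.I):
        score += 0.4
        reasons.append("urgency/credential-request language")

    return ScreenResult(suspicious=score >= 0.5, score=score, reasons=reasons)


# Example: an unsolicited message with urgent credential language is flagged.
print(screen_cold_email(
    "Urgent: please verify your password to keep your account active.",
    known_sender=False,
))
```

The value of even a crude filter like this is triage: messages that score high get routed for human review instead of being silently trusted.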

1 in 4 Say AI Tools Have Made It “Much Easier” to Steal Sensitive Information

Survey respondents were then asked whether ChatGPT and similar tools have made it easier overall for hackers to steal passwords and other sensitive information.

Fifty-two percent of cybersecurity professionals surveyed say AI tools have made it ‘somewhat’ (27%) or ‘much easier’ (25%) for people to steal sensitive information, while 51% say it has made it ‘somewhat’ (28%) or ‘much easier’ (23%) for people to hack passwords.

“The threat of AI as a tool for cybercriminals is dire,” says Steven J.J. Weisman, Esq., a lawyer, author, professor specializing in white-collar crime, and one of the country’s leading authorities on scams, identity theft, and cybersecurity.

“Phishing and spear phishing emails are a large part of how cybercrimes, data breaches and scams begin and now these phishing and spear phishing emails and text messages will be able to be made more believable,” Weisman explains. “In particular, many scams originate in foreign countries where English is not the primary language and this is often reflected in the poor grammar and spelling found in many phishing and spear phishing emails and text messages coming from those countries.”

“Now, however,” he continues, “through the use of AI, those phishing and spear phishing emails and text messages will appear more legitimate. In addition, the ability to use AI to clone voices with only a small sample of the voice of the person to be impersonated is another danger. Calls may appear to come from trusted sources within a company, but are made by a scam artist.”

Over One-Third Say AI Tools Pose a “Medium-” or “High-Level” Threat to Both Individuals and Businesses

When asked how much of a threat hackers using these tools to steal passwords pose, 36% say they pose a ‘medium-level’ (22%) or ‘high-level’ (14%) threat to the average American individual, and 36% say they pose a ‘medium-’ (20%) or ‘high-level’ (16%) threat to the average American company.

Similarly, 39% of respondents say AI tools used to create phishing scams pose a ‘medium-’ (21%) or ‘high-level’ (18%) threat to individuals and 36% say this poses a ‘medium-’ (19%) or ‘high-level’ (18%) threat to companies.

“Most hacks are caused by human error. People need to educate themselves on best practices for keeping their information safe online,” explains Zo DiGiovanni, president of Remi IT Solutions. “They should also employ security tools like password managers and next-generation antivirus software. Everyone can be proactive and invest in an identity theft solution that will help safeguard their identity and warn them should a breach occur,” he says.

“Businesses need to create a security-minded culture where every employee plays a role in keeping the business safe. Businesses should develop a cybersecurity plan and conduct regular training and awareness programs,” DiGiovanni continues. “It’s up to businesses to protect themselves by conducting regular vulnerability assessments and employing the latest defense technologies available to their industry.”

“It will not be uncommon to see businesses using AI against these dark agents,” he adds. “Investing in AI-powered defense solutions will help ensure compliance and can detect and respond to threats in real-time. AI can be used to automate routine cybersecurity tasks, such as network monitoring and vulnerability assessments, freeing up employees to focus on more complex tasks. All businesses need to get serious about cybersecurity thanks to AI.”

Examples of AI-Generated Scams and How to Protect Yourself

When we asked survey respondents to give examples of AI-generated scams they had seen circulating, responses included:

  • “Your voice is being processed out of sight by AI, making it a useful tool for scammers to trick people around you into sending money to ‘you’ online.”
  • “Scammers could use AI language models to generate convincing phishing emails that are tailored to the recipient’s personal information and interests.”
  • “I have seen fake currency trading platforms that claim to have developed a trading system with artificial intelligence predictive capabilities to attract investors, but no such system actually exists.”
  • “I have seen them use artificial intelligence to steal other people’s information quickly, which is very convenient.”

PasswordManager.com’s Subject Matter Expert, Daniel Farber Huang, offers the following tips for individuals and businesses to keep themselves safe from AI-generated scams.

“With the speed at which AI applications are being developed and released, an overabundance of caution is prudent for those concerned with digital security and potential exploitation by bad actors. Here are 5 healthy habits to keep in mind,” he writes.

  1. Assume any unsolicited communication – email, text, DM or other – is a potential scam and exercise basic precautions when reviewing messages.
  2. If there is a compelling reason to respond to an incoming communication, it is safest to contact the sender or organization directly rather than hitting “Reply.” Find the official phone number or email address on the company’s website to ensure you are communicating with an authorized representative.
  3. Understand that basic bots are used for all types of solicitation and are trained to appear human and personable, including on sites like LinkedIn.
  4. If and where possible, consider adding an icon or emoji to your listed name on social media sites. LinkedIn, for example, allows you to add emojis in your profile name. Real human beings will not manually insert a graphic into their individual message to you, but a bot will automatically do so, which can serve as a red flag that you are being mass solicited.
  5. Recognize that voicemail messages, text exchanges, and even chat room conversations can be AI-generated to fool you into thinking you are communicating with a real person, with the goal of manipulating you into revealing personal information or sensitive data.

Methodology

This survey was commissioned by PasswordManager.com and conducted online by the survey platform Pollfish on April 27, 2023. In total, 1,000 participants in the U.S. completed the full survey. All participants had to meet demographic criteria ensuring they were age 25 or older, were currently self-employed or employed for wages, had a household income of $50,000 per year or more, and had a career in security, software, information, or scientific/technical services.

Additionally, respondents were screened to include only those who specifically identified their job role as cybersecurity, worked full-time in this job role, and were somewhat or very familiar with AI tools such as ChatGPT and Bard.

The survey used a convenience sampling method; to mitigate the bias inherent in this approach, Pollfish employs Random Device Engagement (RDE) to ensure both random and organic surveying. Learn more about Pollfish’s survey methodology or contact [email protected] for more information.