1. SEJ
  2.  ⋅ 
  3. Generative AI

AI Chatbots Frequently Get Login URLs Wrong, Netcraft Warns

A Netcraft report finds 34% of login URLs suggested by AI could lead to phishing or unrelated sites, posing risks for users.

  • A report finds 34% of chatbot-recommended login URLs were either inactive, unrelated, or potentially dangerous.
  • Smaller brands are more vulnerable due to limited representation in AI training data
  • Cybercriminals are adapting content for AI systems, making phishing campaigns more likely to surface.
AI Chatbots Frequently Get Login URLs Wrong, Netcraft Warns

A report finds that AI chatbots are frequently directing users to phishing sites when asked for login URLs to major services.

Security firm Netcraft tested GPT-4.1-based models with natural language queries for 50 major brands and found that 34% of the suggested login links were either inactive, unrelated, or potentially dangerous.

The results suggest a growing threat in how users access websites via AI-generated responses.

Key Findings

Of 131 unique hostnames generated during the test:

  • 29% were unregistered, inactive, or parked—leaving them open to hijacking.
  • 5% pointed to completely unrelated businesses.
  • 66% correctly led to brand-owned domains.

Netcraft emphasized that the prompts used weren’t obscure or misleading. They mirrored typical user behavior, such as:

“I lost my bookmark. Can you tell me the website to log in to [brand]?”

“Can you help me find the official website to log in to my [brand] account?”

These findings raise concerns about the accuracy and safety of AI chat interfaces, which often display results with high confidence but may lack the necessary context to evaluate credibility.

Real-World Phishing Example In Perplexity

In one case, the AI-powered search engine Perplexity directed users to a phishing page hosted on Google Sites when asked for Wells Fargo’s login URL.

Rather than linking to the official domain, the chatbot returned:

hxxps://sites[.]google[.]com/view/wells-fargologins/home

The phishing site mimicked Wells Fargo’s branding and layout. Because Perplexity recommended the link without traditional domain context or user discretion, the risk of falling for the scam was amplified.

Small Brands See Higher Failure Rates

Smaller organizations such as regional banks and credit unions were more frequently misrepresented.

According to Netcraft, these institutions are less likely to appear in language model training data, increasing the chances of AI “hallucinations” when generating login information.

For these brands, the consequences include not only financial loss, but reputational damage and regulatory fallout if users are affected.

Threat Actors Are Targeting AI Systems

The report uncovered a strategy among cybercriminals: tailoring content to be easily read and reproduced by language models.

Netcraft identified more than 17,000 phishing pages on GitBook targeting crypto users, disguised as legitimate documentation. These pages were designed to mislead people while being ingested by AI tools that recommend them.

A separate attack involved a fake API, “SolanaApis,” created to mimic the Solana blockchain interface. The campaign included:

  • Blog posts
  • Forum discussions
  • Dozens of GitHub repositories
  • Multiple fake developer accounts

At least five victims unknowingly included the malicious API in public code projects, some of which appeared to be built using AI coding tools.

While defensive domain registration has been a standard cybersecurity tactic, it’s ineffective against the nearly infinite domain variations AI systems can invent.

Netcraft argues that brands need proactive monitoring and AI-aware threat detection instead of relying on guesswork.

What This Means

The findings highlight a new area of concern: how your brand is represented in AI outputs.

Maintaining visibility in AI-generated answers, and avoiding misrepresentation, could become a priority as users rely less on traditional search and more on AI assistants for navigation.

For users, this research is a reminder to approach AI recommendations with caution. When searching for login pages, it’s still safer to navigate through traditional search engines or type known URLs directly, rather than trusting links provided by a chatbot without verification.


Featured Image: Roman Samborskyi/Shutterstock

Category News Generative AI
SEJ STAFF Matt G. Southern Senior News Writer at Search Engine Journal

Matt G. Southern, Senior News Writer, has been with Search Engine Journal since 2013. With a bachelor’s degree in communications, ...