The artificial intelligence boom has spawned more than just new technology; it has quietly birthed a sophisticated black market where digital identities and powerful model access are bought and sold. This “shadow market” is not a single entity, but a dual threat that undermines the massive effort to train and deploy advanced AI systems, costing companies millions and compromising the integrity of the technology itself.
One part of this illicit trade targets the foundational work of AI: data annotation and training. A vast network of human contractors is employed globally to label data and refine AI outputs, yet this supply chain has been compromised by fraudsters who traffic in verified worker accounts. These accounts, often linked to European or U.S. residents, command a premium because they allow workers in lower-income regions, such as India or the Philippines, to collect the higher “western salary” for their tasks.
The scam works like this: Fraudsters acquire the personal data, including ID and tax information, of residents of high-income countries and use it to set up “ready to work” accounts on popular data annotation platforms such as Outlier, CrowdGen, and Prolific. They then sell access to these accounts on social media, creating a layer of “ghost workers” whose identities are completely detached from the people performing the actual annotation. This practice, sometimes called “chain lengthening,” means that major AI data annotation companies are losing control of their supply chains, with criminal networks taking over the management of work accounts and the underlying AI projects.
The second, and arguably more costly, wing of this shadow economy focuses on unauthorized access to the large language models (LLMs) themselves. This attack vector, known in cybersecurity circles as “LLMjacking,” involves hijacking access to cloud-based AI models using stolen credentials, API keys, and other machine identities, collectively referred to as “Non-Human Identities” (NHIs).
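A common ingredient in these campaigns is credentials that were never meant to be public: API keys committed to code repositories or left in configuration files, where they can be harvested and resold. As a rough illustration of what both attackers and defenders scan for, the sketch below searches a directory for strings shaped like cloud and AI-service keys. The patterns are illustrative assumptions based on commonly reported formats (such as AWS access key IDs beginning with “AKIA”), not an authoritative or exhaustive detection rule set.

```python
# Illustrative sketch only: scan a directory tree for strings that look like
# the kinds of credentials LLMjacking campaigns harvest. The key patterns are
# assumptions based on commonly reported formats, not official specifications.
import re
import sys
from pathlib import Path

KEY_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "openai_style_key": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
    "anthropic_style_key": re.compile(r"\bsk-ant-[A-Za-z0-9_-]{20,}\b"),
    "generic_api_key_assignment": re.compile(
        r"(?i)\bapi[_-]?key\s*[:=]\s*['\"][A-Za-z0-9/+_-]{20,}['\"]"
    ),
}

def scan_file(path: Path):
    """Yield (pattern_name, line_number) for every suspected credential in a file."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in KEY_PATTERNS.items():
            if pattern.search(line):
                yield name, lineno

if __name__ == "__main__":
    # Usage: python scan_keys.py /path/to/repo
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for file in root.rglob("*"):
        if file.is_file():
            for name, lineno in scan_file(file):
                print(f"{file}:{lineno}: possible {name}")
```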
For cybercriminals, the motivation is purely financial. They exploit the compromised accounts to generate vast amounts of content at the victim’s expense, or simply resell the expensive, powerful model access to other threat actors. Because a single query to an advanced model can consume hundreds of thousands of billable tokens, an undiscovered LLMjacking attacker can drain a cloud budget at an astonishing rate; researchers have estimated the cost to victims at more than $46,000 per day in AI service charges. The attackers are sophisticated, targeting credentials for a wide range of services, including Anthropic, AWS Bedrock, Azure, and OpenAI.
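The arithmetic behind figures of that magnitude is easy to sketch. The snippet below is a back-of-envelope estimate only: the token volume, query rate, and per-million-token price are hypothetical assumptions chosen to show how quickly continuous abuse of a stolen key compounds, not the researchers’ methodology or any vendor’s actual pricing.

```python
# Back-of-envelope estimate of how fast hijacked model access burns money.
# All numbers below are illustrative assumptions, not real vendor pricing;
# the article cites researcher estimates of $46,000+ per day for victims.

def daily_cost(tokens_per_query: int,
               queries_per_minute: float,
               price_per_million_tokens: float) -> float:
    """Estimated daily spend for round-the-clock abuse of a stolen API key."""
    tokens_per_day = tokens_per_query * queries_per_minute * 60 * 24
    return tokens_per_day / 1_000_000 * price_per_million_tokens

# Hypothetical scenario: long-context queries of 200,000 tokens, fired twice
# a minute, against a premium model assumed to cost $75 per million tokens.
print(f"${daily_cost(200_000, 2, 75.0):,.0f} per day")  # -> $43,200 per day
```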
While a separate issue, the internal problem of “Shadow AI” compounds the risk. This occurs when employees use unauthorized, external AI applications for work tasks without IT or compliance oversight, risking data leaks and the exposure of proprietary company information. Whether the vulnerability is exploited by external criminals or created by internal teams, the reality is the same: the race to build and use AI has created unforeseen security gaps. Companies are now fighting a two-front war to secure both the human workers who train the models and the digital credentials that power them.