Shadow AI vs. Shadow IT

Within the corridors of modern enterprises, a lesser-known pair of practices lurks in the shadows: Shadow Artificial Intelligence (AI) and Shadow Information Technology (IT). These clandestine counterparts represent both the promise and peril of technological autonomy, each with the potential to revolutionize workflows or disrupt established protocols. Follow along as we illuminate the path through the nebulous realm where AI meets IT and discover the pivotal role both play in shaping the future of work.

What is Shadow AI? 
Shadow AI works much like Shadow IT, the practice in which employees use unauthorized tools and procedures to complete tasks without the approval of the company’s IT team. By sidestepping the requirements set by company policy, tools such as ChatGPT are being welcomed into the workplace behind the scenes, creating a nightmare for IT governance. While Shadow IT raises concerns about security breaches, such as employees working on personal computers without proper security protections, Shadow AI is arguably a threat on a much larger scale, opening the possibility of sensitive information being leaked through publicly accessible AI platforms.
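
One way IT teams can surface this kind of unsanctioned usage is by reviewing outbound web traffic for requests to known generative-AI services. The sketch below is a minimal, hypothetical example: the domain watchlist and the CSV log format (with `user` and `host` columns) are assumptions about your environment, and a production deployment would rely on a maintained category feed from a proxy or CASB vendor.

```python
import csv
from collections import Counter

# Hypothetical watchlist of generative-AI domains; a real deployment
# would pull a maintained category feed from its proxy or CASB vendor.
GENAI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def flag_shadow_ai(log_path: str) -> Counter:
    """Count requests per user to known generative-AI services.

    Assumes a CSV proxy log with 'user' and 'host' columns; that log
    format is an assumption about your environment, not a standard.
    """
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["host"].strip().lower() in GENAI_DOMAINS:
                hits[row["user"]] += 1
    return hits

if __name__ == "__main__":
    for user, count in flag_shadow_ai("proxy_log.csv").most_common():
        print(f"{user}: {count} requests to generative-AI services")
```

Even a crude report like this gives the IT team a starting point for conversations about which tools employees actually need, rather than a blanket ban.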

What does this mean?  
As a Natural Language Processing (NLP) tool, ChatGPT produces its responses as a machine learning model, drawing on both its guidelines and the input it is fed by users around the world. Because large language models (LLMs) generate output by comparing new input against patterns in previous data, they are subject to error just as much as humans are, leaving room for inaccurate results known as hallucinations; when flawed output like this circulates back into training data, it can also contribute to data poisoning. Given how quickly these tools produce confident, concise answers, hallucinations appear more often than you would think. According to Bar Lanyado of Voyager18, you must be wary that the information produced by LLMs could be “plausible but fictional information, extrapolating beyond their training and potentially producing responses that seem plausible but are not necessarily accurate”.

When IT professionals use these tools in their day-to-day work, supply chain malware attacks, among other threats, become more probable. Lanyado also explains how, in the developer world, “AIs will also generate questionable fixes to CVEs and offer links to coding libraries that don’t exist — and the latter presents an opportunity for exploitation”, as attackers have been found using AI to help generate code. In this scenario, an attacker who notices a nonexistent library name that an AI repeatedly recommends can publish a malicious package under that exact name and wait for developers who trust the suggestion to install it. Even though this particular attack relies on both sides utilizing AI platforms, the possibilities are endless as generative AI continues to grow in the cyber realm and IT professionals and threat actors alike take advantage of its unique properties.
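
One defensive habit follows directly from Lanyado’s observation: before installing a library an AI assistant recommends, confirm it actually exists on the public registry. The sketch below queries PyPI’s JSON API (the https://pypi.org/pypi/<name>/json endpoint); treating a 404 as a possible hallucination is a heuristic, not a complete vetting process.

```python
import json
import sys
from urllib.error import HTTPError
from urllib.request import urlopen

def vet_package(name: str) -> bool:
    """Check whether a package exists on PyPI before trusting an
    AI-suggested 'pip install'. A 404 means the name is unclaimed,
    which is exactly the gap a squatting attacker could fill later.
    """
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urlopen(url) as resp:
            info = json.load(resp)["info"]
    except HTTPError as err:
        if err.code == 404:
            print(f"'{name}' is not on PyPI -- possible hallucination.")
            return False
        raise
    # Existence alone is weak evidence; real vetting would also review
    # maintainers, release history, and the linked source repository.
    print(f"'{name}' exists on PyPI: {info.get('summary') or 'no summary'}")
    return True

if __name__ == "__main__":
    vet_package(sys.argv[1] if len(sys.argv) > 1 else "requests")
```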

Statistics  
It goes without saying that AI has skyrocketed over the past two decades, with more people turning to it to generate responses for everything from schoolwork to the code cybersecurity professionals write. Forbes released an article in May of 2024 with research stating that “about 49% of people have used generative AI, with over one-third using it daily”, underscoring the importance of IT governance training. In the same article, Forbes noted that more than half of those users have been using AI more and more since they first discovered it. With this in mind, it is easy to see that in another two decades, without any governance training, the possibility of new attack vectors and data breaches will only grow due to simple negligence and data poisoning.

Mitigation Techniques 
While banning generative AI platforms outright would seem to stop these attacks in their tracks, Shadow IT and Shadow AI remain at large, creating the need for further mitigation techniques. Putting centralized management and approved usage of AI tools in place helps companies reduce the possibility of the unknown. Companies can weigh the pros and cons of specific AI platforms and decide what works best for them, instead of simply denying all use, in the hope that employees will stay on the sanctioned path rather than fall into Shadow IT practices. Additionally, by conducting consistent sensitivity training and reminding employees what classification means for their products and day-to-day workload, employees will gain a better sense of what information should never be fed into any publicly hosted AI service. It is important to remember the positives that AI has brought to the world and to specific sectors of business while considering all the ways to help keep companies and employees safe from the future of AI-assisted attacks.
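
That training can be reinforced with lightweight tooling. As a minimal sketch, with the patterns below being illustrative assumptions rather than a complete data-classification policy, a pre-flight check can flag obvious secrets before text leaves the company for an external AI service:

```python
import re

# Illustrative patterns only; a real deployment would encode the
# organization's own data-classification rules or use a DLP product.
SENSITIVE_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "internal marking": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
}

def preflight(text: str) -> list[str]:
    """Return the labels of sensitive patterns found in text that is
    about to be sent to an external AI service."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

if __name__ == "__main__":
    prompt = "Summarize this CONFIDENTIAL roadmap. Key: AKIAABCDEFGHIJKLMNOP"
    findings = preflight(prompt)
    if findings:
        print("Blocked: prompt contains", ", ".join(findings))
    else:
        print("No obvious sensitive markers found.")
```

A check like this will never catch everything, but it turns an abstract classification policy into a concrete guardrail at the exact moment data is about to leave the company.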