Mitigating Shadow AI Risks by Implementing the Right Policies
Posted: Sun Feb 09, 2025 5:23 am
With the rise of artificial intelligence, security is becoming increasingly important in every organization. It also brings a new problem: "shadow AI" applications that must be checked for compliance with security policies, writes Kamal Srinivasan, senior vice president of product and program management at Parallels (part of Alludo), on the Network Computing portal.
The shadow IT problem of recent years has evolved into a shadow AI problem. The growing popularity of large language models (LLMs) and multimodal models has led product teams within organizations to use them to build productivity-boosting use cases. Numerous tools and cloud services have emerged that make it easier for marketing, sales, engineering, legal, HR, and other teams to create and deploy generative AI (GenAI) applications. Yet despite the rapid adoption of GenAI, security teams have not yet worked out its implications or set corresponding policies. Meanwhile, the product teams building these applications are not waiting for security to catch up, which creates potential security issues.
IT and security teams are grappling with unauthorized AI applications that can lead to network intrusions, data leaks, and disruptions. At the same time, organizations must avoid an overly rigid approach that could stifle innovation and prevent breakthrough product development. Enforcing policies that bar users from experimenting with GenAI applications will hurt productivity and lead to further silos.
The Shadow AI Visibility Problem
Shadow IT has created a community of workers who use unauthorized devices to support their workloads. It has also given rise to “citizen developers” who can use no-code or low-code tools to build applications without going through official channels to obtain new software. Today, we have citizen developers using AI to build AI applications or other types of software.