NHI Forum


Securing Generative AI: Real-World Practices to Mitigate Emerging Threats


(@astrix)
Trusted Member
Joined: 8 months ago
Posts: 22
Topic starter  

Read full article here: https://astrix.security/learn/blog/tips-for-genai-security/?utm_source=nhimg  

 

As the cyber landscape rapidly evolves, one truth remains: not everything that glitters in technology is gold. Generative AI has transformed how organizations operate, offering automation, efficiency, and creativity at scale. Tools like ChatGPT, Jasper, and Otter.ai are now mainstream, enabling teams to generate code, write content, and summarize meetings in seconds.

Yet, beneath this convenience lies a growing security blind spot. Enterprises are integrating AI without fully understanding how these tools manage data, connect to internal systems, or impact the broader attack surface. To protect the enterprise, leaders must look beyond innovation hype and establish clear strategies for AI risk management.

 

Supply Chain Risks in Generative AI Adoption

Generative AI introduces two major categories of supply chain risk:

  1. Data Sharing and Retention Exposure
    Most AI tools rely on sending data to external servers for processing. This creates multiple security questions:
    • Where is enterprise data stored and processed?
    • How long is it retained?
    • Who can access it?

A notable example is the Samsung incident, where engineers unintentionally leaked proprietary code through ChatGPT prompts. Such exposures occur when employees use AI tools without clear governance or visibility, turning convenience into a security liability.

  2. Unverified and Shadow AI Tools
    Beyond major platforms like ChatGPT, there’s an explosion of unverified AI apps integrated into workspaces—Slack bots, Chrome extensions, or productivity plugins. A single unvetted integration can become an entry point for attackers or create uncontrolled access to sensitive systems. Once embedded, tracking or isolating these tools becomes nearly impossible, underscoring the need for strict connection and permission policies.
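One practical guardrail against the data-exposure risk described above is to scan outbound prompts for sensitive material before they ever reach an external AI service. The sketch below is illustrative only: the pattern names and regexes are hypothetical examples, and a real deployment would sit behind a proper DLP engine with a far broader ruleset.

```python
import re

# Illustrative patterns only -- a production DLP ruleset would be much broader.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    # Hypothetical internal domain for demonstration purposes.
    "internal_hostname": re.compile(r"\b[\w-]+\.corp\.example\.com\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in an outbound prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

def safe_to_send(prompt: str) -> bool:
    """True only if no sensitive pattern matched -- block the prompt otherwise."""
    return not scan_prompt(prompt)
```

A check like this would have flagged the proprietary code pasted into ChatGPT in the Samsung incident before it left the network, rather than after the fact.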

 

How to Stay Ahead of Attackers While Embracing AI Innovation

Security leaders can’t block AI adoption—but they can guide it responsibly. Here are five practical actions to build AI resilience within the enterprise:

  1. Establish a Centralized AI Inventory
    Maintain visibility across all AI tools in use. Build onboarding and offboarding workflows that define:
    • Who owns each tool
    • What data it accesses
    • Whether it integrates with core systems

Upon offboarding, ensure the tool is fully disconnected and that any enterprise data stored externally is deleted.

  2. Internalize When Possible
    If resources allow, develop internal AI models or host them locally to maintain data control. When using third-party AI, classify data sensitivity first and match it with the appropriate deployment option—public cloud, private cloud, or on-premises.
  3. Reduce the AI Attack Surface
    Apply least privilege principles to all AI connections. Regularly monitor usage logs, revoke unnecessary access, and remove idle integrations.
  4. Continuously Monitor Data Movement
    Deploy monitoring to understand how AI tools use, retain, and transfer your data. Detect anomalies and track any deviations in data flow over time.
  5. Test Before Scaling
    Pilot new AI tools internally. Evaluate their performance, security posture, and compliance before applying them in customer-facing environments.
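The inventory and least-privilege steps above can be sketched as a simple record-and-review loop. The record shape and field names below are hypothetical, chosen to mirror the ownership, data-access, and integration questions an onboarding workflow should answer; the 90-day idle threshold is an assumed policy value, not a standard.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AIToolRecord:
    """One entry in a centralized AI inventory (field names are illustrative)."""
    name: str
    owner: str                    # who owns the tool
    data_accessed: list[str]      # what data it accesses
    core_integrations: list[str]  # whether/where it touches core systems
    last_used: date               # taken from usage logs

def idle_integrations(inventory: list[AIToolRecord],
                      today: date,
                      max_idle_days: int = 90) -> list[AIToolRecord]:
    """Flag tools idle past the threshold so their access can be
    reviewed and revoked, per the least-privilege step above."""
    cutoff = today - timedelta(days=max_idle_days)
    return [tool for tool in inventory if tool.last_used < cutoff]
```

Running `idle_integrations` on a regular schedule turns the "remove idle integrations" guidance into a routine review queue, and the same records drive offboarding: each flagged tool lists exactly which data and core systems must be disconnected.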

 

Secure the Future — Start Now

According to Gartner, by 2025, nearly 30% of enterprises will integrate AI-driven development and testing strategies. That means AI security will no longer be optional—it will be foundational.

Security leaders must act now:

  • Build policies before adoption scales.
  • Audit existing integrations.
  • Educate teams on responsible AI use.

Generative AI’s promise is undeniable—but its risks are equally real. With proactive governance, structured monitoring, and a focus on data protection, organizations can harness AI’s power safely while keeping their security posture intact.



   