← BACK TO LOGS
7 MAY 20262 MIN READ

Everyone is talking about AI. Nobody is asking what's running underneath it.

Everyone is talking about AI. Nobody is asking what's running underneath it.
Everyone is talking about AI, but most people in tech can't tell you what's actually running underneath it. Let me be direct: AI and LLMs are not the same thing. AI is the broad concept. Large Language Model (LLM) is the engine. It's what powers ChatGPT, Claude, Copilot, and the growing list of tools your company is either already using or about to deploy. And here's what nobody is saying loudly enough: LLMs are infrastructure and Infrastructure has to be secured. Right now, companies are rushing to integrate LLMs into their products, their workflows, their internal tools. Someone has to deploy them. Someone has to manage the APIs. Someone has to monitor them in production. But most teams were never taught what happens when an LLM goes wrong from a security perspective and the attack surface is very real (future): Prompt Injection: an attacker manipulates the input to make your LLM behave in ways you never intended. Think SQL injection, but for AI. Data Leakage: your LLM has access to sensitive company data. One misconfigured retrieval pipeline and that data is exposed. Model Poisoning: training data is tampered with to introduce subtle vulnerabilities into the model's behaviour. Insecure API Exposure: the LLM endpoint is the new attack surface. Unprotected, unauthenticated API calls are an open door. Supply Chain Attacks: the libraries, the model weights, the third party integrations, all new vectors that didn't exist two years ago. The scary part? Most organisations deploying AI today have no policy, no framework, and no clear ownership for any of this. Who is responsible when there is a data breach caused by the LLM? That is not just a technical question. It is a governance, risk and compliance question and some companies haven't started asking it yet. LLMs are already in your stack. The question is whether your organisation understands them well enough to protect what they touch. Who on your team owns the security of your AI? 👇
[ VIEW ORIGINAL ON LINKEDIN → ]
AWSCloud Infrastructure
TerraformInfrastructure as Code
Docker · KubernetesContainer Orchestration
ISO 27001Security Compliance
Prometheus · GrafanaObservability
CI/CD PipelinesContinuous Delivery
PCI DSSRegulatory Compliance
Cybersecurity + AIMSc Research
AWSCloud Infrastructure
TerraformInfrastructure as Code
Docker · KubernetesContainer Orchestration
ISO 27001Security Compliance
Prometheus · GrafanaObservability
CI/CD PipelinesContinuous Delivery
PCI DSSRegulatory Compliance
Cybersecurity + AIMSc Research