Recent findings from cybersecurity researchers have revealed vulnerabilities in several AI platforms, including Amazon Bedrock. These flaws could allow unauthorized data exfiltration via Domain Name System (DNS) queries. BeyondTrust’s report highlights that the Amazon Bedrock AgentCore Code Interpreter’s sandbox mode permits outbound DNS queries, which can be exploited for data exfiltration despite the sandbox’s intended network isolation.
Amazon Bedrock’s Vulnerability Details
Amazon Bedrock AgentCore, a service launched in 2025, provides isolated sandbox environments for executing AI-generated code. However, because those sandboxes can still issue outbound DNS queries, malicious actors could establish covert command-and-control channels over DNS. The vulnerability, rated with a CVSS score of 7.5, could allow attackers to access and extract data from AWS resources such as S3 buckets.
In particular, DNS queries can be manipulated to deliver commands and carry back responses, effectively bypassing network isolation. The risk is compounded by overprivileged IAM roles that grant the sandbox unintended access to sensitive data, underscoring the need for least-privilege access controls.
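To make the exfiltration mechanism concrete, the sketch below shows the general class of technique, not BeyondTrust’s actual proof of concept: arbitrary bytes are hex-encoded and split into DNS-legal labels, producing query names that an attacker-controlled nameserver could log and reassemble. The function name and domain are hypothetical.

```python
import binascii

MAX_LABEL = 63  # RFC 1035 limits each DNS label to 63 bytes


def encode_exfil_queries(data: bytes, attacker_domain: str) -> list[str]:
    """Illustrative only: encode bytes as hex, chunk into DNS-safe
    labels, and prefix each chunk with a sequence number so a
    malicious resolver could reorder and reassemble the data."""
    hex_data = binascii.hexlify(data).decode()
    labels = [hex_data[i:i + MAX_LABEL]
              for i in range(0, len(hex_data), MAX_LABEL)]
    return [f"{i}.{label}.{attacker_domain}" for i, label in enumerate(labels)]


# Each resulting name looks like "0.414b4941...{attacker_domain}";
# resolving it leaks the payload to whoever runs that domain's nameserver.
queries = encode_exfil_queries(b"AKIA-example-secret", "exfil.example.net")
```

This is why "the sandbox has no network access" is a weaker guarantee than it sounds: as long as DNS resolution works, a low-bandwidth outbound channel exists.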
LangSmith Flaw and Account Compromises
Another critical security flaw has been discovered in LangSmith, LangChain’s AI observability platform. The issue, tracked as CVE-2026-25750 with a CVSS score of 8.5, involves URL parameter injection that can lead to token theft and account takeover. It was patched in LangSmith 0.12.71, released in December 2025.
The vulnerability stems from inadequate validation of the baseUrl parameter, allowing attackers to steal user tokens via crafted links. The weakness underscores the importance of robust input validation in AI platforms, which often prioritize flexibility at the expense of security.
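The details of LangSmith’s actual fix have not been published, but the generic defense against this vulnerability class is to validate any base-URL-style parameter against an allowlist of trusted hosts before attaching credentials to a request. A minimal sketch, with a hypothetical allowlist:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of hosts the client is permitted to talk to.
TRUSTED_HOSTS = {"api.smith.langchain.com"}


def is_safe_base_url(base_url: str) -> bool:
    """Accept only HTTPS URLs whose hostname is explicitly trusted,
    so a crafted link cannot redirect API calls (and the bearer
    tokens attached to them) to an attacker-controlled server."""
    parsed = urlparse(base_url)
    return parsed.scheme == "https" and parsed.hostname in TRUSTED_HOSTS
```

Note that checking `parsed.hostname` (rather than substring-matching the raw string) also defeats tricks like `https://evil.example.com/?u=api.smith.langchain.com` or userinfo spoofing such as `https://api.smith.langchain.com@evil.example.com/`.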
SGLang’s Deserialization Risks
SGLang, a popular open-source LLM serving framework, faces security issues stemming from unsafe pickle deserialization, potentially enabling remote code execution. The vulnerabilities, with CVSS scores up to 9.8, affect its multimodal generation and disaggregation modules.
Orca Security’s findings indicate that SGLang’s improper handling of untrusted data could be exploited to execute arbitrary code. Users are advised to limit network exposure and enforce strict access controls to mitigate these risks. Monitoring for unusual network activity and segmenting networks are also recommended to prevent unauthorized access.
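The root cause here is generic to Python: `pickle.loads()` will happily invoke any callable embedded in the stream (for example via `__reduce__`), so unpickling attacker-supplied bytes is equivalent to running their code. A common hardening pattern, documented in Python’s own `pickle` docs and shown here as a sketch rather than SGLang’s actual fix, is a restricted unpickler that allowlists the types it will construct:

```python
import io
import pickle


class RestrictedUnpickler(pickle.Unpickler):
    """Only resolve a small allowlist of harmless built-in types;
    anything else (e.g. os.system smuggled in via __reduce__)
    raises UnpicklingError instead of executing."""
    ALLOWED = {("builtins", "dict"), ("builtins", "list"),
               ("builtins", "str"), ("builtins", "int")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")


def safe_loads(data: bytes):
    """Drop-in replacement for pickle.loads on untrusted input."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Even with an allowlist, the safer design for data that crosses a trust boundary is to avoid pickle entirely in favor of a schema-checked format such as JSON or a fixed-layout binary protocol.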
The discovery of these vulnerabilities across various AI platforms highlights the evolving landscape of cybersecurity threats. Users and administrators must prioritize the adoption of protective measures and regular audits to safeguard sensitive data.
