Key Takeaways
- AI agents can harvest, synthesize, and exfiltrate trade secrets through compounding evasion techniques – synthesis without copying, incremental harvesting, cross-system correlation, and more – that generate no conventional security alert.
- Employees can photograph AI-generated screen displays with personal devices, bypassing every layer of digital monitoring entirely.
- Companies must act now: update AI acceptable-use policies, mandate prompt logging, restrict personal devices in sensitive work areas, and build AI review into departure protocols.
I. The Threat: A New and Undetectable Form of Trade Secret Theft
Artificial intelligence (AI) has fundamentally changed the insider threat calculus for companies that rely on trade secrets. Traditional employee misappropriation – bulk file downloads, USB transfers, personal email exfiltration – leaves a recognizable forensic footprint. Data loss prevention (DLP) tools, endpoint detection agents, and network monitoring platforms were built to find exactly these patterns, and they have become reasonably effective at doing so.
AI agents break this model entirely. A modern AI agent can receive a single natural-language instruction, autonomously query dozens of proprietary repositories, synthesize the results across systems, and deliver a formatted competitive intelligence document – all without triggering a single conventional security alert. There is no file copy event, no anomalous data transfer volume, and no prohibited keyword in a monitored field. The agent simply does what it was designed to do, at the direction of an employee who has authorized access, in a way that is forensically indistinguishable from legitimate work.
The evasion techniques available to a misappropriating employee are numerous and compounding:
- Synthesis without copying. An agent can read hundreds of proprietary documents and produce a synthesized output – a competitor briefing, a technical specification summary, a pricing strategy analysis – that contains the substance of the trade secret but matches no source file by hash or fingerprint. It looks like original work product.
- Incremental harvesting. Small, unremarkable queries spread across weeks or months, each innocuous in isolation, collectively assemble a comprehensive picture of a company's most sensitive information – without ever crossing an anomaly-detection threshold.
- Cross-system correlation. A single prompt can correlate data from enterprise resource planning (ERP), customer relationship management (CRM), financial, and product management systems simultaneously, synthesizing a trade secret that exists in no individual source document and that triggers no access control violation.
- Personal AI accounts. Employees can copy proprietary content into personal AI subscriptions on personal devices, entirely outside the corporate network and endpoint visibility. No corporate system records that the event occurred.
- Automated pipelines with external endpoints. Custom agent workflows can be configured to automatically route AI-processed outputs to personal email addresses or cloud storage accounts on a recurring basis, operating indefinitely without generating any individual triggering alert.
II. The Screen-and-Photograph Vector: Closing the Analog Gap
Among the most dangerous and least understood misappropriation techniques is one that bridges the digital and physical worlds: using an AI agent to display synthesized trade secret content on a screen, then photographing that display with a personal smartphone.
The mechanics are straightforward. The employee prompts an AI agent – on a company system or a personal account – to query proprietary repositories and format the results into a clean, legible output optimized for readability. The synthesized content is displayed on the employee's monitor. The employee photographs the screen with a personal device. The photographs upload automatically to personal cloud storage within seconds. The entire sequence generates no network event, no endpoint alert, no DLP flag, and no file transfer record. The company's digital security infrastructure is blind to the entire transaction.
What makes this technique particularly dangerous in the AI era is scale. Without AI, screen photography is limited to what an employee can manually navigate to and display. With AI, a single carefully crafted prompt can assemble and format the precise combination of trade secrets most valuable to a competitor – cost structures correlated with pricing strategy, proprietary formulations compared to known alternatives, customer concentration data paired with contract terms – presented in a format that makes the photographs immediately actionable. The intelligence quality of AI-synthesized screen photographs far exceeds anything a document-by-document photography campaign could produce.
The evidentiary challenges in these cases are significant. Digital rights management systems record a display event – authorized and unremarkable – and nothing more. Endpoint detection agents observe no file system activity. Forensic reconstruction requires correlating AI agent session logs with physical security camera footage and device proximity data to establish that a personal device was present at the workstation during the relevant query sessions. Even then, obtaining the photographs themselves requires a legal process directed at the employee's personal device and cloud accounts, implicating privacy law considerations that can complicate and delay relief.
III. What Companies Should Do
Reasonable measures to protect trade secrets are a legal prerequisite for trade secret protection under the Defend Trade Secrets Act and applicable state law – and courts increasingly scrutinize those measures when evaluating both the validity of a trade secret claim and the availability of emergency injunctive relief. An AI governance program is no longer a best practice; it is a legal necessity.
Companies should take the following steps:
- Mandate prompt logging now. Most enterprise AI deployments do not retain full prompt logs, output records, or tool call histories by default. Companies must configure their AI systems to capture and retain complete interaction records with any system containing trade secrets – including query text, synthesized outputs, and tool invocations – for a minimum of three years, in write-once format, integrated with existing security information and event management (SIEM) and legal hold infrastructure.
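The write-once retention requirement described above can be approximated in software. The following is a minimal sketch – not a production system – assuming a hypothetical `log_interaction` helper; it appends each AI interaction record to a JSONL log with a SHA-256 hash chained to the prior record, so tampering with earlier entries becomes detectable. Real deployments would layer this on WORM storage and feed the records into SIEM and legal hold tooling.

```python
import hashlib
import json
import time


def log_interaction(log_path, user, prompt, output, tool_calls, prev_hash="0" * 64):
    """Append one AI interaction record to a tamper-evident JSONL log.

    Hypothetical sketch: each record carries a SHA-256 hash chained to the
    previous record's hash, making after-the-fact edits detectable.
    """
    record = {
        "ts": time.time(),          # interaction timestamp
        "user": user,               # authenticated employee identity
        "prompt": prompt,           # full query text
        "output": output,           # synthesized output returned to the user
        "tool_calls": tool_calls,   # repositories/tools the agent invoked
        "prev_hash": prev_hash,     # chain link to the prior record
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    # Append-only; actual write-once guarantees come from the storage layer.
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["hash"]
```

The returned hash is passed as `prev_hash` to the next call, forming the chain; an auditor can recompute each hash to verify the log has not been altered.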
- Update acceptable-use policies. Existing IT acceptable-use policies almost certainly do not address agentic AI. Policies must be updated to expressly prohibit using AI agents to aggregate or synthesize trade secret information for external use, to use personal AI accounts to process company confidential information, and to photograph or otherwise physically capture AI-generated displays. Written employee acknowledgment is required.
- Govern personal AI tools. Adopt policies addressing the use of personal AI tools for company information – particularly trade secret information. Policies should require employees to disclose personal AI tools to the company and to agree not to upload confidential information to those tools unless they are company-approved.
- Implement tiered access controls at the data layer. Policy restrictions are insufficient without corresponding technical controls. Trade secret repositories must be governed by access permissions that specifically limit agentic queries to employees with a documented need, enforced at the system level, not merely by policy instruction. In appropriate circumstances, companies can also suspend upload capabilities for sensitive repositories.
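The system-level enforcement described above can be illustrated with a minimal gate that runs before an agent's tool call reaches a data source. This is a sketch under assumed names (`TRADE_SECRET_REPOS`, `AGENT_QUERY_GRANTS`, `authorize_agent_query` are all hypothetical): agentic queries to designated trade secret repositories succeed only for employees on a documented-need roster, while ordinary repositories fall through to existing access controls.

```python
# Hypothetical repository names and a documented-need roster; in practice
# these would come from the company's identity and entitlement systems.
TRADE_SECRET_REPOS = {"pricing_db", "formulations_db"}
AGENT_QUERY_GRANTS = {"pricing_db": {"analyst_17"}, "formulations_db": set()}


def authorize_agent_query(user, repo, is_agentic):
    """Gate an agent's data-source call at the system level.

    Agentic queries against trade secret repositories are permitted only
    for users with a documented need; everything else defers to the
    repository's ordinary access controls.
    """
    if is_agentic and repo in TRADE_SECRET_REPOS:
        return user in AGENT_QUERY_GRANTS.get(repo, set())
    return True  # non-sensitive repos / non-agentic access: normal ACLs apply
```

The key design point is placement: the check sits in the tool-invocation path itself, so a prompt cannot route around it the way it can route around a written policy.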
- Address the screen-and-photograph vector physically. Deploy monitor privacy filters on workstations used to access sensitive AI systems. Establish personal device restrictions in areas where employees regularly interact with AI agents and confidential information. Ensure security camera coverage of those workstations. Consider display-layer watermarking that embeds the querying employee's identity in every rendered output.
- Build AI review into departure protocols. Departure reviews must now include systematic examination of the departing employee's AI agent interaction history – prompt logs, output inventories, tool call records, and cross-system query patterns – for the 90-day period preceding notice. Exit interviews should specifically ask whether the employee photographed AI-generated displays or used personal AI accounts to process company information. Separation agreements should require a representation that no AI-synthesized outputs have been retained or transmitted.
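If prompt logs are retained as JSONL records, the 90-day departure review above can be partially automated. The sketch below assumes a hypothetical log format in which each record carries a `user`, a timestamp `ts`, and a list of `tool_calls` with a `system` field; it pulls the departing employee's interactions from the review window and flags sessions whose queries touched more than one system – the cross-system correlation pattern described in Section I.

```python
import json


def departure_review(log_path, employee, notice_ts, window_days=90):
    """Flag a departing employee's cross-system AI queries.

    Hypothetical sketch: scans a JSONL prompt log for the employee's records
    in the window_days preceding notice_ts (a Unix timestamp) and returns
    those whose tool calls spanned multiple systems.
    """
    cutoff = notice_ts - window_days * 86400
    flagged = []
    with open(log_path) as f:
        for line in f:
            rec = json.loads(line)
            if rec["user"] == employee and cutoff <= rec["ts"] <= notice_ts:
                systems = {call["system"] for call in rec.get("tool_calls", [])}
                if len(systems) > 1:  # one query correlating multiple systems
                    flagged.append(rec)
    return flagged
```

Flagged records are a starting point for human review, not a conclusion; legitimate work routinely spans systems, so counsel and security teams would evaluate each flagged session in context.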
How We Can Help
Baker Donelson's Intellectual Property and Artificial Intelligence Groups advise companies on trade secret protection programs, AI governance frameworks, and acceptable-use policy design, and represent clients in trade secret litigation and emergency injunctive proceedings. If you have questions about the issues discussed in this alert, please contact Edward Lanquist, Jr., Clinton P. Sanko, or any member of Baker Donelson's Intellectual Property or Artificial Intelligence Practice Groups.