At the 39th Chaos Communication Congress, security researcher Johann Rehberger demonstrated how vulnerable AI coding assistants such as GitHub Copilot, Claude Code and Amazon Q are to prompt injection attacks. Using hidden Unicode commands and a modified ClickFix method, he was able to get the tools to perform malicious actions. The possibilities range from data theft and remote code execution to complete system takeover.
Rehberger also presented a proof-of-concept for a self-propagating AI virus in repositories, i.e. central digital storage facilities. Many of the vulnerabilities identified have since been closed, but prompt injection remains a fundamental risk. Companies therefore need to implement significantly stricter security mechanisms.