Ever tried turning an open-source LLM into a personal paparazzi? I did, purely as a proof of concept, and it's both fascinating and a little alarming. I wrote a simple script that waits for Outlook to open, takes a screenshot, and instantly sends it off via Telegram. Classic programming could do the same thing, sure, but here's the twist: with AI agents, you can tune it to trigger on specific words in an email, or on messages from a certain contact. Suddenly, we're stepping into a new era of potential security hazards.
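For the curious, the trigger-and-send part needs surprisingly little code. Here is a minimal sketch of that logic, not my exact script: the window check and the sending callback are injected stand-ins, and the bot token in the Telegram URL helper is a placeholder (the real Bot API `sendPhoto` call would be a multipart POST with credentials):

```python
import time


def outlook_is_open(window_titles):
    # Stand-in for a real OS query (e.g. enumerating window titles):
    # just scan a list of titles for "outlook".
    return any("outlook" in title.lower() for title in window_titles)


def send_photo_url(bot_token):
    # Telegram Bot API endpoint for sending an image (multipart POST).
    return f"https://api.telegram.org/bot{bot_token}/sendPhoto"


def watch_loop(get_open_window_titles, capture_screenshot, send_photo, interval=5):
    # Poll every few seconds; on the first Outlook sighting,
    # take a screenshot and hand it to the sender callback.
    while True:
        if outlook_is_open(get_open_window_titles()):
            send_photo(capture_screenshot())
            return
        time.sleep(interval)
```

In a real run, `get_open_window_titles`, `capture_screenshot`, and `send_photo` would be backed by an OS window library, a screenshot library, and an HTTP client; keeping them as parameters makes the trigger logic easy to test in isolation.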

In my little test, a Python script runs in the background, screenshotting the screen every few seconds, while a multimodal LLM analyzes each capture to recognize particular situations and respond intelligently to them (for example, Outlook being opened, or the user sending an email about a specific topic to a specific person). This highlights just how accessible (and powerful) these open LLMs can be. It's a demonstration of the creative possibilities: from harmless automations to worrisome "spyware" scenarios. As these AI tools become more sophisticated, we have to keep in mind that the line between "helpful agent" and "unwanted eavesdropper" can blur quickly.
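To make the "recognize the situation" step concrete, here is a rough sketch of how a screenshot could be handed to a locally hosted vision model. Everything specific here is an assumption for illustration: the Ollama-style `/api/generate` payload shape, the `llava` model name, and the "project X" / `alice@example.com` trigger in the prompt are placeholders, not my actual setup.

```python
import base64

# Hypothetical trigger prompt: ask the vision model for a strict
# yes/no verdict about what the screenshot shows. The topic and
# recipient are made-up examples.
TRIGGER_PROMPT = (
    "Look at this screenshot. Is the user composing an email about "
    "'project X' addressed to 'alice@example.com'? Answer only YES or NO."
)


def build_vision_request(screenshot_bytes, model="llava"):
    # Ollama-style request payload (assumed): screenshots are passed
    # as base64-encoded strings in an "images" list.
    return {
        "model": model,
        "prompt": TRIGGER_PROMPT,
        "images": [base64.b64encode(screenshot_bytes).decode("ascii")],
        "stream": False,
    }


def is_trigger(model_reply):
    # Treat any reply that starts with "yes" as a match.
    return model_reply.strip().lower().startswith("yes")
```

The point is how little glue is needed: encode the capture, POST it with a natural-language condition, and branch on a one-word answer. The "programming" of the trigger is now a sentence, not a parser.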
Yes, if you already have code running on a target PC, there is a lot you can do even without AI. But AI unlocks, with minimal effort, possibilities that would have required considerable coding complexity if done the old-fashioned way.
With the rise of AI agents, sooner or later people are going to have different forms of live assistants: tech helpers, creative idea counselors, semi-automated agents accompanying them while browsing, and so on. That will open up a whole world of possibilities and situations we haven't faced before.
So why talk about it? Because knowledge is the best form of defense. If we understand how these technologies work, and how they might be misused, we're better equipped to build safeguards. The future of AI is thrilling, but it's also an invitation to stay vigilant about privacy, ethics, and security. Let's explore the possibilities responsibly, so we can create innovation that helps us, not haunts us.
