
(Re-)Discovering Agentic AI

Warning: This text is 100% human written

I (re-)discovered Agentic AI last December through a LinkedIn post forwarded by Ann Harding. The post mentioned Daniel Miessler and his "Personal AI Infrastructure" (PAI) project. Let's just say — something clicked. I spent days (and many nights) learning, being amazed, working, building, and refining Claude Code and what it can do for me.

Six months ago, I saw the downsides of AI as it is being used today (energy use, climate, intellectual property, ethics, politics, techbros, social, …) and didn't really see any "real" value being produced. That has changed! Not the first part, but the second for sure.

During the last 4.5 months I've learned how to actually harness the power of Agentic AI and built some really useful things — both for my own sake and for the company.

Agentic AI is interesting to work with: it feels like a mixture of a very happy and enthusiastic puppy, an intern with high technical skills and knowledge ad infinitum, and a totally incompetent employee who happily duplicates the same things over and over and has no view of the big picture. But that can actually be fixed. Daniel's PAI (and his "Algorithm") is key. And guardrails. The more guardrails the better. And guardrails in code, not left to the LLM. And some kind of memory system.

I've built a lot of tools with AI, and they genuinely make work life easier. Creating the monthly security newsletter and collecting tons of data? What used to take us a day or more now takes a few minutes.

Or live "vibe coding" a cyber learning platform at a customer conference, then turning it into a KQL learning/capture-the-flag platform (which Sergio Albea used at the last TF-CSIRT and will use at the upcoming GÉANT Security Days this week). And that turned into a full-blown Crisis Management Simulator, with which my team conducted its first customer training two weeks ago. This would have been impossible before.

I have spent a lot of time building the infrastructure to make this happen, and much of that time in recent months went into the question of trust. The OpenClaw debacle, where basically all OpenClaw installations got hacked, is a cautionary example.

So yes, Agentic AI works. It needs handholding (a lot of it). There are tons of huge problems associated with it, all of which have the potential to bring humanity down. It is an interesting ride at the moment…


Originally posted on LinkedIn.

Jens-Christian Fischer

Maker. Musician