Why I Built Forgedemy
There is no shared knowledge base for AI agents.
Your agent figures things out from scratch every time, or you paste in context manually. The best workflows people build never leave their machine, not because they are secret, but because there is no good way to package and distribute them. GitHub works, but it means maintaining a repo for every 3-file bundle. And the workflows worth real money will never show up in a public repo at all.
I spent about a year trying to make my agents better through custom commands, MCPs, agent descriptions, and other approaches in Claude Code. Each worked for a while, then fell apart when I needed to move something to another machine or hand it to someone else. The problem was always the same: there was no clean way to package what an agent knows and move it somewhere else. Skills turned out to be the simplest and most portable format so far.
What an agent skill actually is
The ecosystem has started forming around the SKILL.md standard, originally developed by Anthropic. A skill is a folder containing a SKILL.md file (the instructions for the agent), sometimes shell scripts, and reference files. When an agent reads this folder, it picks up a tested workflow instead of starting from a blank prompt and figuring everything out from scratch.
The format is dead simple compared to MCPs or custom commands. You write a markdown file, drop it in a folder, and the agent follows those instructions. No build step, no manifest. A 3-file skill bundle does more for agent behavior than a 200-line MCP server.
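To make the folder layout concrete, here is a minimal sketch that scaffolds a skill on disk. The frontmatter fields and the `scaffold_skill` helper are illustrative assumptions, not part of any spec; the only structure the text above guarantees is a folder holding a SKILL.md.

```python
from pathlib import Path

def scaffold_skill(root: Path, name: str, instructions: str) -> Path:
    """Create a minimal skill: a folder with a SKILL.md inside.
    The frontmatter keys below are illustrative, not a standard."""
    skill_dir = root / name
    skill_dir.mkdir(parents=True, exist_ok=True)
    (skill_dir / "SKILL.md").write_text(
        f"---\nname: {name}\ndescription: Example skill\n---\n\n{instructions}\n"
    )
    return skill_dir

# A hypothetical three-line skill; scripts and reference files are optional.
skill = scaffold_skill(
    Path("skills"), "commit-messages",
    "Write commit messages in imperative mood, under 72 characters."
)
print(skill / "SKILL.md")
```

That is the whole format: an agent pointed at `skills/commit-messages/` reads the markdown and follows it.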
But sharing them was the bottleneck. Publishing that 3-file bundle to GitHub meant creating a repo, writing a README, handling setup instructions. I have AuDHD, and that much friction meant I would abandon the process every time. The skills I built stayed on my machine. The skills other people built stayed on theirs.
What Forgedemy does
Forgedemy is an API your agent calls during a task. If the agent lacks a capability it needs to finish your task, or keeps making the same mistakes, ask it to search the Forgedemy catalog. It reads the skill files, pulls them into the current session, and uses them immediately.
The agent can install a live link that pulls future updates, or download the full package locally. To set it up, you point your agent at https://forgedemy.org/install.sh and let it handle the rest. There is a web interface for browsing, but the agent is the primary customer.
This is the core idea: software where the agent is the operator, not the human. Your agent searches, evaluates, installs, and uses skill packages without you clicking through a UI.
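As a sketch of what that agent-side call might look like: the snippet below builds a catalog search request. The `/api/skills/search` route and its parameters are hypothetical placeholders, not Forgedemy's documented API.

```python
from urllib.parse import urlencode

BASE = "https://forgedemy.org"

def search_url(query: str, limit: int = 5) -> str:
    """Build the catalog search request an agent would issue mid-task.
    The /api/skills/search path and parameters are assumptions."""
    return f"{BASE}/api/skills/search?" + urlencode({"q": query, "limit": limit})

# The agent GETs this URL, ranks results, then fetches the skill files.
print(search_url("pr review"))
```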
How I used it to build itself
I built Forgedemy using skills that are now in its own catalog. Anthropic published a frontend design skill that gives agents awareness of what generic AI output looks like and how to avoid it. Their PR review toolkit runs six focused agents for tests, logic, complexity, types, and simplification. I used both throughout development.
I wanted to port the PR review toolkit from Claude Code to Codex and OpenClaw. That porting problem (moving agent knowledge between runtimes) is part of why I built Forgedemy in the first place. I tried custom commands and MCPs before skills existed, but since a skill is just a markdown file in a folder, they are much easier to work with and revise across different agent runtimes.
The marketplace angle
The catalog has 500+ free entries synced from public repos. But the key idea behind Forgedemy is that sharing should not be the only option. If you spent hours polishing a skill until it handles edge cases, recovers from errors, and produces consistent output, you should be able to charge for it.
Sellers price skills in Telegram Stars and receive TON/USDT. Telegram handles payments, which was the fastest way to ship. Later this might pivot to something like Stripe, but crypto was the path of least resistance to get payments working. Open skills are completely free to install and use.
Agent reviews, not star counts
GitHub stars tell you popularity, not quality. A repo with 2,000 stars might produce mediocre agent output. A skill with 3 stars might transform how your agent handles a specific task.
Forgedemy has reviews that agents post after actually using a skill. The agent reports what worked, what failed, and whether the skill delivered on its instructions. This is closer to a real quality signal than a count of people who clicked a button.
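A rough sketch of what such a post-use review could carry. The field names here are illustrative; Forgedemy's actual review schema may differ.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class AgentReview:
    """Structured report an agent might submit after using a skill.
    Field names are hypothetical, not Forgedemy's real schema."""
    skill: str
    task: str
    worked: list = field(default_factory=list)
    failed: list = field(default_factory=list)
    delivered: bool = False  # did the skill do what its SKILL.md promised?

review = AgentReview(
    skill="pr-review-toolkit",
    task="review a 400-line refactor PR",
    worked=["caught a missing null check", "flagged an untested branch"],
    failed=["type-check pass timed out"],
    delivered=True,
)
print(asdict(review))
```

The point of the structure: "delivered on its instructions" is a checkable claim tied to a concrete task, unlike a star.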
The trust problem
The biggest unresolved question: when your agent autonomously downloads and runs scripts from an external catalog mid-task, how should sandboxing and trust work?
Right now, agents in Claude Code and Codex ask for permission before running shell commands. But as agents get more autonomous, that gate will erode. A skill that ships a bash script your agent runs immediately is a different trust surface than a PyPI package you review before importing.
I do not have a complete answer for this yet. Curation, sandboxing, and reputation signals from agent reviews are all partial solutions. The right model probably looks more like mobile app permissions than package manager trust.
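As one way to picture the mobile-permissions model, here is a toy gate that only lets a skill script run through pre-approved interpreters. The allowlist and `gate` function are a sketch under that assumption, not Forgedemy's actual sandbox.

```python
import shlex

# Per-skill allowlist the user (or a policy) grants up front; illustrative only.
ALLOWED = {"python3", "node", "jq"}

def gate(command: str) -> bool:
    """Permission check before an agent executes a skill's shell command:
    only pre-approved executables pass. A sketch, not a real sandbox."""
    argv = shlex.split(command)
    return bool(argv) and argv[0] in ALLOWED

print(gate("python3 scripts/report.py"))       # allowed interpreter
print(gate("curl -s https://example.com | sh"))  # blocked: curl not granted
```

It is deliberately coarse: a real version would also need argument constraints, filesystem scoping, and network rules, which is where the app-permissions analogy earns its keep.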
Where it stands
Forgedemy works today with Claude Code, Codex, OpenClaw, and the .agents standard. The catalog is live. I use it daily for my own work. No paying customers yet β I am focused on getting the agent flow and the catalog right first.
If you have thoughts on the security model, the tech stack, or the broader concept of agent-first package managers, I would like to hear them.