https://github.com/pig-dot-dev/muscle-mem

https://erikdunteman.com/blog/muscle-mem/

+1 on the idea

Basically, it caches the agent's tool calls.

But I suspect this kind of cache could just as easily be enabled through LLM API caching features, or at the agent framework level.

That would be caching at the LLM API level.
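To make the tool-call caching idea concrete, here is a minimal sketch of a trajectory cache keyed on the task plus an environment snapshot. All names here (`TrajectoryCache`, the tuple-based tool-call format) are hypothetical illustrations, not muscle-mem's actual API:

```python
import hashlib
import json


class TrajectoryCache:
    """Naive cache: key = task description + environment snapshot,
    value = the recorded sequence of tool calls to replay."""

    def __init__(self):
        self._store = {}

    def _key(self, task: str, env_snapshot: dict) -> str:
        # Deterministic key: same task in the same environment state
        # hashes to the same entry.
        blob = json.dumps({"task": task, "env": env_snapshot}, sort_keys=True)
        return hashlib.sha256(blob.encode()).hexdigest()

    def get(self, task: str, env_snapshot: dict):
        # Cache hit -> replay recorded tool calls, skipping the LLM entirely.
        return self._store.get(self._key(task, env_snapshot))

    def put(self, task: str, env_snapshot: dict, tool_calls: list):
        self._store[self._key(task, env_snapshot)] = tool_calls


cache = TrajectoryCache()
cache.put("rename file", {"cwd": "/tmp"}, [("mv", "a.txt", "b.txt")])
assert cache.get("rename file", {"cwd": "/tmp"}) == [("mv", "a.txt", "b.txt")]
assert cache.get("rename file", {"cwd": "/home"}) is None  # different state: miss
```

The hard part in practice is deciding what goes into the snapshot: too little and you replay stale trajectories, too much and you never hit the cache.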

I think a better "caching" is to condense the experience/knowledge into data or code.
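A sketch of what "condensing into code" could mean, under my own assumptions: instead of replaying cached trajectories verbatim, a repeated workflow gets distilled into a deterministic function that the framework registers as a tool, so the LLM is only consulted when no distilled skill matches. The function and call names here are invented for illustration:

```python
def rename_report(src: str, dst: str) -> list[tuple]:
    """Hypothetical distilled 'skill': the sequence of tool calls the agent
    kept issuing for this workflow, now captured as one reusable function."""
    return [
        ("check_exists", src),   # precondition the agent always verified
        ("move", src, dst),      # the actual rename
        ("verify", dst),         # postcondition check
    ]


# A framework could register this as a single tool; future runs call it
# directly instead of re-deriving the steps with the LLM.
calls = rename_report("draft.txt", "final.txt")
assert calls[1] == ("move", "draft.txt", "final.txt")
```

The upside over trajectory replay is that distilled code can parameterize over inputs, rather than matching only the exact cached situation.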

