@daveinchy
Published Tools: 1
Total Stars: 0
Use `npm i --save llama.native.js` to run llama.cpp models on your local machine. The package features a Socket.IO server and client that can run inference against the host of the model.
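As a rough illustration of the Socket.IO client/host split described above, a client could connect to the local model host and exchange inference messages. This is a hypothetical sketch only: the port, the `"prompt"` and `"token"` event names, and the message shapes are all assumptions, not the documented API of `llama.native.js`.

```javascript
// Hypothetical sketch -- llama.native.js's actual event names and port are not
// documented here, so everything below is an assumption for illustration.
const { io } = require("socket.io-client");

// Connect to the locally running model host (port 3000 is assumed).
const socket = io("http://localhost:3000");

socket.on("connect", () => {
  // Send a prompt to the host for inference (event name is hypothetical).
  socket.emit("prompt", "Explain llama.cpp in one sentence.");
});

// Receive generated tokens streamed back from the model host
// (event name and payload shape are hypothetical).
socket.on("token", (token) => process.stdout.write(token));

socket.on("disconnect", () => process.exit(0));
```

Check the package's README for the real event names and server setup before relying on this shape.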