IBM Granite

Granite Code on Replicate

This guide demonstrates how to make inference calls against a model hosted on Replicate, first with the replicate Python package and then via LangChain. In both cases, you will need to provide a Replicate API token.
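A minimal sketch of the replicate-package route, assuming the model slug `ibm-granite/granite-8b-code-instruct-128k` and the `REPLICATE_API_TOKEN` environment variable (adjust both for your account; the prompt and `max_tokens` value are illustrative):

```python
# Sketch of a basic inference call with the replicate package.
# Assumptions: model slug and REPLICATE_API_TOKEN env var; pip install replicate.
import os

MODEL_ID = "ibm-granite/granite-8b-code-instruct-128k"  # assumed model slug


def run_inference(prompt: str) -> str:
    import replicate  # imported lazily so the sketch loads without the package

    # replicate.run returns an iterable of text chunks for language models;
    # join them into a single string.
    output = replicate.run(
        MODEL_ID,
        input={"prompt": prompt, "max_tokens": 512},
    )
    return "".join(output)


if __name__ == "__main__":
    if os.environ.get("REPLICATE_API_TOKEN"):
        print(run_inference("Write a Python function that reverses a string."))
    else:
        print("Set REPLICATE_API_TOKEN to run this example.")
```

The client reads `REPLICATE_API_TOKEN` from the environment, so no token needs to appear in the code itself.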

To host models locally with Ollama instead, see the guide on building a VS Code assistant with Granite.
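The LangChain route mentioned earlier can be sketched similarly, using the `Replicate` wrapper from langchain-community (the model slug is an assumption, and `REPLICATE_API_TOKEN` must be set in the environment; pip install langchain-community replicate):

```python
# Sketch of the same inference call via LangChain's Replicate integration.
# Assumptions: model slug and REPLICATE_API_TOKEN env var.
import os

MODEL_ID = "ibm-granite/granite-8b-code-instruct-128k"  # assumed model slug


def run_inference(prompt: str) -> str:
    from langchain_community.llms import Replicate  # imported lazily

    # The wrapper picks up REPLICATE_API_TOKEN from the environment;
    # model_kwargs passes inference parameters through to Replicate.
    llm = Replicate(model=MODEL_ID, model_kwargs={"max_tokens": 512})
    return llm.invoke(prompt)


if __name__ == "__main__":
    if os.environ.get("REPLICATE_API_TOKEN"):
        print(run_inference("Write a Python function that reverses a string."))
    else:
        print("Set REPLICATE_API_TOKEN to run this example.")
```

Because the LangChain wrapper exposes the standard `invoke` interface, the same call site works unchanged if you later swap in a different LangChain-supported model host.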