Boost Your Workflow: Setting Up OpenCode with LM Studio
If you are looking to reduce cloud costs and keep your data private while coding, running Large Language Models (LLMs) locally is the way to go. Two tools that shine in this ecosystem are LM Studio for hosting models and OpenCode for integrating AI into your editor.
In this guide, we’ll walk through how to connect OpenCode to LM Studio so you can leverage powerful local models without relying on a cloud provider.
Prerequisites
Before we begin, ensure you have the following installed:
- LM Studio (Downloaded and installed)
- OpenCode Extension/Tool (Installed in your IDE or CLI)
- A downloaded LLM model (e.g., `Llama-3` or `Mistral`)
Step 1: Prepare LM Studio
First, we need to get a model running on LM Studio.
- Open LM Studio.
- Navigate to the Search tab and download a model you like. For coding tasks, models with “Code” in the name (like `Llama-3-Coder`) often perform better.
- Once downloaded, switch to the Chat or Play tab.
- Select your model from the dropdown menu at the top.
Step 2: Start the Local Server
This is the most critical step. LM Studio needs to act as a server for OpenCode to talk to it.
- In LM Studio, click on the Server icon (usually looks like a plug or network symbol) in the left sidebar.
- Click the green Start Server button.
- Note the port number displayed. By default, this is usually `1234`.
Note: Keep LM Studio running in the background while you use OpenCode. If you close it, your AI assistant will stop working!
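Before wiring up OpenCode, it’s worth confirming the server is actually reachable. LM Studio exposes an OpenAI-compatible API, so listing the available models makes a quick smoke test; the sketch below assumes the default port of 1234:

```bash
# Ask LM Studio's OpenAI-compatible server which models it has loaded.
# Assumes the server is running on the default port (1234).
curl http://localhost:1234/v1/models
```

If the server is up, you should get back a JSON object describing the loaded model(s). If this request fails, fix that before touching OpenCode.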
Step 3: Configure OpenCode
Now we need to tell OpenCode where to find our local model.
For VS Code / IDE Users
If you are using the OpenCode extension within an editor:
- Open the Settings for the OpenCode extension.
- Look for the API Provider or Connection setting.
- Select Custom API or Localhost.
- Enter the URL provided by LM Studio. It typically looks like this: `http://localhost:1234/v1`
- Save your settings and restart the extension if prompted.
For CLI Users
If you are using OpenCode via command line, ensure you pass the API endpoint flag:
opencode --api-url "http://localhost:1234/v1"
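If you run the CLI often, a shell alias saves retyping the flag. This is purely a convenience sketch; the alias name opencode-local is arbitrary:

```bash
# Optional convenience alias -- add to ~/.bashrc or ~/.zshrc.
# "opencode-local" is an arbitrary name; the flag is the same one shown above.
alias opencode-local='opencode --api-url "http://localhost:1234/v1"'
```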
Step 4: Test the Connection
Ask OpenCode a simple question like “What is the current date?” or “Write a Python function to sort a list.”
If LM Studio is running and the port matches, you should see a response start streaming within a few seconds (exact speed depends on your hardware and the model’s size).
If it hangs or fails, check that your firewall isn’t blocking `localhost:1234`.
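To isolate whether a failure lies in OpenCode or in LM Studio, you can replay the kind of request OpenCode makes by hand. Here is a minimal sketch against the default port; depending on your LM Studio version, the model field may be ignored or used to select a loaded model, so treat "local-model" as a placeholder:

```bash
# Send a minimal chat request to LM Studio's OpenAI-compatible endpoint.
# "local-model" is a placeholder -- use the identifier shown in LM Studio
# if your version requires an exact model name.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "local-model",
    "messages": [{"role": "user", "content": "Write a Python function to sort a list."}]
  }'
```

If this returns a completion but OpenCode still fails, the problem is in the OpenCode configuration rather than the server.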
Troubleshooting Common Issues
“Connection Refused” Error
This usually means LM Studio’s server has stopped. Go back to the Server tab in LM Studio and ensure the green button says “Stop Server” (indicating it is currently running).
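You can also check from a terminal whether anything is listening on the port at all. A quick check for macOS/Linux, assuming the default port:

```bash
# List processes listening on TCP port 1234 (macOS/Linux).
lsof -nP -iTCP:1234 -sTCP:LISTEN
```

No output means nothing is bound to the port: either the server is stopped, or it’s running on a different port than the one OpenCode is pointed at.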
Slow Response Times
If the model feels sluggish, try switching to a more heavily quantized version of your model in LM Studio (e.g., `Q4_K_M` instead of `Q8_0`). This reduces memory usage and speeds up inference, at a small cost in output quality.
Conclusion
By pairing OpenCode with LM Studio, you gain full control over your AI coding assistant. You can swap models on the fly in LM Studio without touching your OpenCode configuration, making it easy to match model size and capability to the task at hand.
Happy Coding!