It’s been a while since my last post, and I finally want to talk about LLMs!
I have been using AI/LLMs extensively for research purposes in my work, such as looking up Excel formulas. Until recently, though, my experience was entirely online through a browser; I had never run a query against a model offline.
I recently discovered an application called LM Studio, and it has been a delight to use!
As the advertised description puts it, with LM Studio you can:
🤖 • Run LLMs on your laptop, entirely offline
📚 • Chat with your local documents
👾 • Use models through the in-app Chat UI or an OpenAI compatible local server (see the sketch after this list)
📂 • Download any compatible model files from Hugging Face 🤗 repositories
🔭 • Discover new & noteworthy LLMs right inside the app's Discover page
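The OpenAI-compatible local server is the part I find most interesting. As a minimal sketch (my own, not taken from LM Studio's docs), here is roughly how you might query it with the official `openai` Python client, assuming the server is running on its default port 1234 and that the model identifier below matches one you have loaded:

```python
# Minimal sketch: query LM Studio's OpenAI-compatible local server.
# Assumes the server is running at the default http://localhost:1234/v1.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",  # a local server accepts any placeholder key
)

response = client.chat.completions.create(
    model="llama-3.2-3b-instruct",  # hypothetical identifier; use the name your server lists
    messages=[{"role": "user", "content": "Explain what VLOOKUP does in one sentence."}],
)
print(response.choices[0].message.content)
```

Because the API shape matches OpenAI's, code written against their client can be pointed at your own laptop just by changing the base URL.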
I use a MacBook, and installing LM Studio was a straightforward process. Since DeepSeek R1 took the world by storm just a few days ago, it was the first model I downloaded after setting up LM Studio, followed by the usual suspect, Llama 3.2.
I ran queries against DeepSeek R1; my second query took approximately 1 minute and 4 seconds to answer, and my laptop slowed noticeably in the meantime, since local inference is compute-intensive. Still, the results were decent, good enough to make me want to continue using it.
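If you want to quantify the response time rather than eyeball it, wrapping the same call in a timer works; a small sketch under the same assumptions as above:

```python
# Rough end-to-end timing of a local query; same assumptions as the earlier sketch.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

start = time.perf_counter()
response = client.chat.completions.create(
    model="deepseek-r1-distill-qwen-7b",  # hypothetical identifier for a local DeepSeek R1 distill
    messages=[{"role": "user", "content": "What are Newton's three laws of motion?"}],
)
elapsed = time.perf_counter() - start
print(f"Answered in {elapsed:.1f}s: {response.choices[0].message.content[:100]}")
```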
I also ran a query against a PDF document, this time using the Llama model, and the results were quite good as well.
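LM Studio handles document chat inside the app, but you can approximate the idea against the local server by extracting the PDF text yourself and putting it into the prompt. A crude sketch, assuming the `pypdf` package and the same hypothetical setup as above (this is not how LM Studio implements it internally):

```python
# Crude stand-in for document chat: extract PDF text and stuff it into the prompt.
# Assumes `pip install openai pypdf` and a running local server as in the earlier sketch.
from openai import OpenAI
from pypdf import PdfReader

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

reader = PdfReader("document.pdf")  # hypothetical file name
text = "\n".join(page.extract_text() or "" for page in reader.pages)

response = client.chat.completions.create(
    model="llama-3.2-3b-instruct",  # hypothetical identifier for a local Llama 3.2 model
    messages=[
        {"role": "system", "content": "Answer questions using only this document:\n\n" + text},
        {"role": "user", "content": "What are the key takeaways?"},
    ],
)
print(response.choices[0].message.content)
```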
That’s it for now! While I still prefer online models and chatbots for their speed, functionality, and answer quality, offline LLMs offer valuable benefits such as enhanced privacy and the ability to work without an internet connection. It will be interesting to see how AI continues to evolve. Let’s keep an eye on this space as it develops!