IMatch 2025.2: Why LM Studio and Gemma?

Started by Mario, Today at 07:34:53 PM


Mario

LM Studio

I've added support for LM Studio (https://lmstudio.ai/) in this release for IMatch AutoTagger.

LM Studio is like Ollama, but with a nice user interface, which allows you to actually chat with locally installed AI models, as you do with ChatGPT, Mistral, Gemini or Copilot.

During my tests, I've also found that LM Studio runs more stably than Ollama (at least on my computer) and makes better use of the available hardware.

Gemma 3

Last week Google released their latest model, Gemma 3, for public non-commercial use. The awesome open source AI community immediately picked up the model and made it available in different tools and versions. It became available for Ollama and LM Studio just a few days ago :)

And it beats the previous models available for AutoTagger (LLaVA and Llama Vision) in all disciplines: I get much better descriptions, keywords and traits from it.

Even the 4B version of Gemma 3, which needs about 5 GB of RAM on the graphics card, is very good. The 12B version is better, but needs 12 to 16 GB of graphics card RAM. This makes Gemma 3 4B a good replacement for LLaVA 7B, with lower hardware requirements.

You can use Gemma 3 with both Ollama and LM Studio.
I have updated the AI Service Providers help topic; it explains how to install the Gemma 3 model in Ollama, and how to install LM Studio and download Gemma 3.
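For the Ollama route, installing the model boils down to pulling its tag from the command line. A minimal sketch, assuming the standard Ollama model tags for Gemma 3 (check the help topic for the exact names used by AutoTagger):

```shell
# Pull the 4B version of Gemma 3 (needs roughly 5 GB of VRAM)
ollama pull gemma3:4b

# Or the larger, better 12B version (needs 12 to 16 GB of VRAM)
ollama pull gemma3:12b

# Sanity check: list the locally installed models
ollama list
```

In LM Studio there is no command line needed; you search for the model in the built-in model browser and download it from there, as described in the help topic.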

Give it a try. I've made Gemma 3 the new default model for IMatch AutoTagger.
Of course it cannot always beat the 500B models of OpenAI and Mistral (bigger is better, in the current AI sense). But it might be good enough for what you need.

axel.hennig

Wanted to read about LM Studio and how to install it, but the linked AI Service Providers page doesn't seem to contain anything about LM Studio or Gemma 3.

Mario

It does: LM Studio (Local AI)

Image1.jpg


Press Ctrl+F5 a couple of times to clear your browser's cache?

axel.hennig