Privacy Concerns and Local LLMs: My Journey to Jan

I’ve been increasingly concerned about privacy issues with publicly available LLMs, and recent news has only heightened these worries. When Sam Altman from OpenAI stated that anything you type into ChatGPT could potentially be discoverable, and considering that people are using these platforms as therapists whilst sharing deeply private information, I felt compelled to share my experiences and findings.

The Wake-Up Call

My suspicions about LLM privacy first emerged when I started using Llama shortly after its initial release. I began noticing an unsettling pattern: conversations I’d had with Llama about products I was considering seemed to translate into targeted advertising on other platforms. This experience reminded me of the WhatsApp phenomenon, where topics discussed over lunch would mysteriously appear as targeted ads later. It was this very issue that drove me away from Facebook’s platforms some time ago, and I immediately stopped using Llama after recognising the same concerning behaviour.

For anyone wondering why Mark Zuckerberg has been investing so heavily to secure Facebook’s position in the LLM space, this privacy-to-advertising pipeline provides a telling explanation.

The Familiar Pattern

Looking across other platforms, a familiar pattern emerges: if the product is free, then you are the product.

The connections are becoming increasingly clear:

  • Microsoft maintains strong ties with OpenAI, the maker of ChatGPT
  • Google holds a significant stake in Anthropic, the maker of Claude

Whilst I haven’t observed these two pairings manifesting privacy concerns in the same overt way as Llama and Facebook, the underlying pattern remains unmistakable. This realisation prompted me to investigate whether it would be possible to run smaller models locally on my somewhat dated Mac M1.

Discovering Jan

With my primary focus being security and privacy, and after researching various options (including discussions with Perplexity), I settled on Jan.ai, a platform that allows you to download various smaller models and run them entirely locally on your computer.
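To give a flavour of what “entirely local” looks like in practice: Jan can expose an OpenAI-compatible API server on your own machine, so scripts talk to localhost rather than the cloud. The sketch below assumes that local server is enabled on Jan’s default port (1337 — check your settings, as this may differ between versions), and the model name is a hypothetical placeholder for whichever model you’ve downloaded.

```python
import json
import urllib.request

# Assumption: Jan's local API server is enabled; port 1337 is the
# default in the versions I've seen, but verify in Jan's settings.
JAN_URL = "http://localhost:1337/v1/chat/completions"

def build_request(prompt, model="mistral-7b-instruct"):
    """Build an OpenAI-style chat payload. The model id here is a
    placeholder -- use whatever model you've downloaded in Jan."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def ask_local(prompt, model="mistral-7b-instruct"):
    """Send the prompt to the local server; nothing leaves the machine."""
    payload = json.dumps(build_request(prompt, model)).encode()
    req = urllib.request.Request(
        JAN_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the endpoint follows the OpenAI API shape, existing tooling that speaks that protocol can usually be pointed at localhost with a one-line change.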

Whilst these models are nowhere near as advanced or capable as the latest large-scale models, they’re improving continuously. Interestingly, their limitations can actually improve overall results, as they demand more engagement and creativity from the user, whereas the more capable bleeding-edge models can encourage passive consumption.

Performance Considerations

Hardware performance does play a significant role in how well these models function. Whilst I was reasonably satisfied running Jan on my five-year-old MacBook Air M1, I’m now considering upgrading to handle larger and more capable models.
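A rough rule of thumb helps explain why hardware matters: a quantised model needs roughly its parameter count times the bits per weight, plus some runtime overhead, in unified memory. This is a back-of-envelope sketch, not a precise figure — the 20% overhead factor is my own assumption and real usage varies with context length and runtime.

```python
def model_ram_gb(params_billion, bits=4, overhead=1.2):
    """Rough RAM needed to load a quantised model:
    parameters x bits-per-weight, plus ~20% (assumed) for
    KV cache and runtime overhead."""
    bytes_needed = params_billion * 1e9 * bits / 8
    return bytes_needed / 1e9 * overhead

# By this estimate, a 7B model at 4-bit quantisation wants
# roughly 4.2 GB -- workable on an 8 GB M1, with little headroom.
```

This is also why “larger and more capable models” translates directly into a memory upgrade: at 4-bit, a 70B model by the same estimate wants around 42 GB, well beyond a base-spec M1.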

This trend might represent a significant opportunity for hardware manufacturers, as privacy-conscious users seek the computing power necessary to run sophisticated models locally. It provides a compelling reason to upgrade hardware beyond the typical replacement cycle.

The Path Forward

The shift towards local LLM deployment represents more than just a technical preference; it’s a fundamental choice about data ownership and privacy. As these models continue to improve and become more accessible for local deployment, we may see a significant portion of users migrating away from cloud-based solutions.

For now, Jan.ai provides an excellent entry point for anyone looking to maintain control over their data whilst still benefiting from LLM capabilities. The trade-offs in model sophistication seem increasingly worthwhile when weighed against the privacy benefits of keeping your conversations entirely local.


I’m Paul

Hi, I’m Paul Velonis, a Melbourne-based executive and entrepreneur. Welcome to Real Velona—my digital space for exploring business strategy, innovation, leadership, and technology. It’s a kaleidoscope of my passions, blending my curiosity and insight.
