💸 Saving on monitoring costs - CNDO #64


For as long as I can remember, we've had two main models for deploying centralized monitoring, logging, and tracing solutions: host it all yourself, or pay a SaaS to host it all. This week's live show will discuss Groundcover's new hybrid model of storing the data on your own systems, with a SaaS frontend.


🗓️ What's new this week

🔴 Live show: Kubernetes observability startup builds a cheaper architecture for deploying observability

What if you could drastically reduce your monitoring, logging, and tracing costs and complexity by using a SaaS product that stores all its data in your clusters and only needs one agent per host for full observability and APM? Groundcover is a new, cloud-native, eBPF-based platform that rethinks how observability solutions are architected and priced.

Cloud native observability tool for monitoring Kubernetes: Groundcover (Ep 272)

Groundcover CEO and Co-Founder Shahar Azulay joins me to discuss their new approach to fully observing K8s and its workloads with a “hybrid observability architecture” that can put most, if not all, of the solution in your cloud and clusters while still remaining an easy-to-manage SaaS at its heart. We’ll dig into the deployment, the architecture, and how it all works under the hood.

Click the dinner bell 🔔 to get your reminder. You can also add it to your calendar here.

🎧 Podcast

Ep 163: Local GenAI LLMs with Ollama and Docker

In this latest podcast, Nirmal and I are joined by friend of the show, Matt Williams, to learn how to run your own local ChatGPT clone and GitHub Copilot clone with Ollama.

We've designed this conversation for tech people like me, who are no strangers to using LLMs in web products like ChatGPT, but are curious about running open source generative AI models locally and how they might set up their Docker environment to develop on top of these open source LLMs.

Matt walks us through all the parts of this solution and, with detailed explanations, shows us how Ollama makes it easier to set up LLM stacks on Mac, Windows, and Linux.
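
If you want to poke at this before (or after) listening, here's a minimal sketch of talking to a local Ollama instance from Python. It assumes Ollama's official Docker image with its default API on localhost:11434 and a model such as llama3 already pulled; adjust the image, port, and model to your own setup.

```python
# Minimal sketch: query a locally running Ollama container from Python.
# Assumes Ollama was started with something like:
#   docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
#   docker exec -it ollama ollama pull llama3
# (image name, port, and model shown are Ollama's defaults; swap in your own)
import requests


def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """Send a single prompt to the local Ollama generate API and return the reply text."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]


if __name__ == "__main__":
    print(ask_local_llm("Explain what a Kubernetes pod is in one sentence."))
```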

Be sure to check out the video version of this episode for any demos.

This episode is from our YouTube Live show on April 18, 2024 (Stream 262).

👀 In case you missed the last newsletter

Read it here.