High-Fivers hangout & MLOps - CNDO #63

Wednesday the High-Fivers crew is meeting in Discord for our monthly DevOps water-cooler chat. Thursday's live show is all about MLOps vs. DevOps.


🗓️ What's new this week

🔴 Live show: MLOps Engineering for DevOps people

What does it take to operate Machine Learning workloads as a DevOps Engineer? Maria Vechtomova, an MLOps Tech Lead and co-founder of Marvelous MLOps, joins us to discuss the obvious and not-so-obvious differences between MLOps engineering and traditional DevOps roles.

MLOps Engineering for DevOps people (Ep 271)

Click the notification bell 🔔 on YouTube to get your reminder. You can also add it to your calendar here.

👋 Monthly High Fivers Chat (membership benefit)

Our High Fivers chat is tomorrow (the 19th) at 12:00 PM US EDT (UTC-4).
We'll use the High Fivers Discord voice channel. This monthly group call is a chance to talk with me about whatever's on your technical mind, get feedback on your tech stack, and learn what others are working on. You can join High Fivers on YouTube or Patreon.

🎧 Podcast

Ep 163: Local GenAI LLMs with Ollama and Docker

We released another great podcast last Friday (6/14): Nirmal and I talk with friend of the show Matt Williams about how to run your own local ChatGPT clone and GitHub Copilot clone with Ollama and Docker's "GenAI Stack," and how to build apps on top of open source LLMs.

We designed this conversation for tech people like me, who are no strangers to using LLMs in web products like ChatGPT, but are curious about running open source generative AI models locally and setting up a Docker environment to develop on top of those open source LLMs.

Matt walks us through every part of the solution and, with detailed explanations, shows how Ollama makes it easier to set up LLM stacks on Mac, Windows, and Linux.
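If you want a taste before listening, here's a minimal sketch of getting a local model running with the Ollama CLI. This assumes you've already installed Ollama from ollama.com, and the model name here (llama3) is just an example; swap in whatever open source model you want to try:

```shell
# Download an open source model to your machine (llama3 is an example name)
ollama pull llama3

# Chat with it interactively in your terminal
ollama run llama3

# Ollama also serves a local REST API (default port 11434),
# which is what you'd build apps against:
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'
```

These are local-only commands, so nothing leaves your machine; that's the main draw of the setup Matt describes.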

You can also check out the live recording of the complete show from April 18, 2024 on YouTube (Ep. 262). 

🐦 Tweet of the week

I missed the Docker Captain Summit due to a bad cold, but they made sure I wasn't forgotten. Watch this short!

👀 In case you missed the last newsletter

Read it here.