Building AI Agents Using Docker cagent and GitHub Models
(Thu, 18 Dec 2025)
The landscape of AI development is evolving rapidly, and one of the most exciting releases of 2025 is Docker cagent. cagent is Docker's open-source multi-agent runtime that orchestrates AI agents through declarative YAML configuration. Rather than managing Python environments, SDK versions, and orchestration logic, developers define agent behavior in a single configuration file and execute it with cagent run.
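As a rough sketch, a minimal cagent configuration might look like the following; the field names mirror published cagent examples but should be treated as illustrative, and the model reference is an assumption:

    # agent.yaml - minimal single-agent sketch (field names are illustrative;
    # verify against the current cagent schema)
    agents:
      root:
        model: openai/gpt-4o   # assumed model reference; a GitHub Models
                               # endpoint can be wired in instead
        description: A concise research assistant
        instruction: |
          Answer questions briefly and cite sources when you can.

Running cagent run agent.yaml (the filename argument is assumed here) would then start the root agent with that behavior, with no Python environment to set up.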
In this article, we’ll explore how cagent’s integration with GitHub Models delivers true vendor independence, demonstrate building a real-world podcast generation agent that leverages multiple
specialized sub-agents, and show you how to package and distribute your AI agents through Docker Hub. By the end,
you’ll understand how to break free from vendor lock-in and build AI agent systems that remain flexible, cost-effective, and production-ready throughout their entire lifecycle.
>> Read More
When DNS Breaks The Internet: Lessons From The Amazon Outage
(Thu, 18 Dec 2025)
Have you ever had an “Oh boy” moment when your favorite application fails to load and you assume your Internet connection is at fault? In October 2025, this happened on a global scale, but it was not your connection that failed; it was Amazon’s.
A small DNS misconfiguration at Amazon Web Services (AWS) triggered a worldwide outage, taking down corporate behemoths such as Fortnite, Alexa, and even McDonald’s mobile ordering.
>> Read More
Vision Language Action (VLA) Models Powering Robotics of Tomorrow
(Thu, 18 Dec 2025)
The robotics industry is undergoing a fundamental transformation. For decades, robots have been confined to narrow,
pre-programmed tasks in controlled environments — assembly lines, warehouses, and labs where predictability reigns.
Vision-language-action (VLA) models represent a critical breakthrough in this evolution, combining visual perception, language understanding, and action generation with the potential to generalize across tasks. VLA models are poised to redefine what machines can do in the physical world. We will survey the VLA models in the industry today that you can leverage in your own work.
What Are Vision-Language-Action (VLA) Models?
Vision-language-action (VLA) models combine visual perception and natural language understanding to generate contextually appropriate actions. Traditional computer vision models are designed to
recognize objects, whereas VLA models interpret scenes, reason about them, and guide physical actions in real-world environments.
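To make that interface concrete, here is a toy Python sketch of the perception-plus-language-to-action loop; VLAPolicy and its seven-dimensional action are hypothetical stand-ins, not any particular model's API:

    # Toy stand-in for a VLA model: camera frame + instruction in, action out.
    import numpy as np

    class VLAPolicy:
        def predict_action(self, image: np.ndarray, instruction: str) -> np.ndarray:
            # A real VLA model would jointly encode the frame and the
            # instruction, then decode a control command; we return zeros.
            assert image.ndim == 3, "expected an HxWxC camera frame"
            return np.zeros(7)  # e.g., 6-DoF end-effector delta + gripper

    policy = VLAPolicy()
    frame = np.zeros((224, 224, 3), dtype=np.uint8)
    action = policy.predict_action(frame, "pick up the red block")
    print(action.shape)  # (7,)

The key point the sketch captures is that a single model consumes both modalities and emits a physical action, rather than handing recognition results to a separate planner.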
>> Read More
We Taught AI to Talk — Now It's Learning to Talk to Itself: A Deep Dive
(Thu, 18 Dec 2025)
A Master Blueprint for the Next Era of Human-AI Interaction
In the rapidly evolving world of artificial intelligence, prompt engineering has become a crucial component
of effective human-AI interaction. However, as large language models (LLMs) become increasingly complex, the traditional human-focused approach to prompting is reaching a critical point. What was
once a delicate skill of crafting precise instructions is now becoming a bottleneck, causing inefficiencies and subpar results. This article explores the concept of AI-generated intent, arguing
that the future of human-AI collaboration hinges not on humans becoming more proficient at crafting prompts, but on AI systems learning to generate and refine their own prompts and those of their peers.
I. The Breaking Point: Why Human Prompting Is Failing
The inherent limitations of human language and cognitive biases often restrict the full potential of advanced AI models. While early LLMs responded well to carefully crafted human prompts, the
growing sophistication of these models, particularly in multi-step reasoning tasks, has exposed the limitations of this approach. The issue isn’t a lack of human ingenuity, but rather the
fundamental mismatch between human communication styles and the optimal operational logic of AI.
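As a minimal sketch of the idea, the loop below has a model rewrite its own prompt before answering; complete() is a hypothetical stand-in for any LLM completion API:

    # Hypothetical self-refining prompt loop; complete() stubs an LLM call.
    def complete(prompt: str) -> str:
        return f"[model output for: {prompt[:40]}...]"  # stub for illustration

    def self_refine(task: str, rounds: int = 2) -> str:
        prompt = task
        for _ in range(rounds):
            # Ask the model to rewrite the prompt it is about to receive.
            prompt = complete(
                "Rewrite this prompt so a language model can answer it more "
                "precisely. Return only the rewritten prompt:\n" + prompt
            )
        return complete(prompt)  # answer using the refined prompt

    print(self_refine("Explain the limits of hand-written prompts."))

The human supplies intent once; the refinement rounds translate it into the model's own preferred phrasing, which is the shift the article argues for.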
>> Read More
DevOps Cafe Ep 79 - Guests: Joseph Jacks and Ben Kehoe
(Mon, 13 Aug 2018)
Triggered by Google Next 2018, John and Damon chat with Joseph Jacks (stealth startup) and Ben Kehoe (iRobot) about their public disagreements — and agreements — about Kubernetes and
Serverless.
>> Read More
DevOps Cafe Ep 78 - Guest: J. Paul Reed
(Mon, 23 Jul 2018)
John and Damon chat with J. Paul Reed (Release Engineering Approaches) about the field of Systems Safety and Human Factors, which studies why accidents happen and how to minimize their occurrence and impact.
Show notes at http://devopscafe.org
>> Read More
DevOps Cafe Ep. 77 - Damon interviews John
(Wed, 20 Jun 2018)
A new season of DevOps Cafe is here. The topic of this episode is "DevSecOps." Damon interviews John about what this term means, why it matters now, and the overall state of security.
Show notes at http://devopscafe.org
>> Read More