latest news



DZone.com Feed

From LLMs to Agents: How BigID is Enabling Secure Agentic AI for Data Governance (Fri, 30 Jan 2026)
Understanding Large Language Models (LLMs)
Large Language Models (LLMs) form the foundation of most generative AI innovations. These models are predictive engines trained on massive datasets, often spanning hundreds of billions of tokens. For example, ChatGPT was trained on nearly 56 terabytes of data, enabling it to predict the next word or token in a sequence with remarkable accuracy. The result is an AI system capable of generating human-like text, completing prompts, answering questions, and even reasoning through structured tasks.
At their core, LLMs are not databases of facts but statistical predictors. They excel at mimicking natural language and surfacing patterns seen in their training data. However, they are static once trained. If a model is trained on data that is five or ten years old, it cannot natively answer questions about newer developments unless it is updated or augmented with real-time sources. This limitation makes pure LLMs insufficient in enterprise contexts where accuracy, compliance, and timeliness are critical.
>> Read More
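The "statistical predictor" idea above can be sketched with a toy bigram model: nothing like a real LLM in scale, but it shows the same mechanism of predicting the next token purely from frequencies in its training data, and the same staticness (a token never seen in training cannot be handled). The corpus and function names here are hypothetical illustrations.

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Count, for each token, how often each other token follows it."""
    tokens = corpus.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts: dict, token: str) -> str:
    """Return the most frequent next token seen in training."""
    if token not in counts:
        # The model is static: anything outside its training data fails.
        raise KeyError(f"token {token!r} never seen in training data")
    return counts[token].most_common(1)[0][0]

corpus = "the model predicts the next token and the next token again"
counts = train_bigram(corpus)
print(predict_next(counts, "the"))  # prints "next" ("next" follows "the" twice, "model" once)
```

An LLM does the same thing with a learned neural scoring function over hundreds of billions of token contexts instead of a lookup table, but the failure mode the article describes, no native knowledge of anything outside the training data, is the same.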

Testcontainers Explained: Bringing Real Services to Your Test Suite (Fri, 30 Jan 2026)
Building robust, enterprise-grade applications requires more than just writing code — it demands reliable automated testing. These tests come in different forms, from unit tests that validate small pieces of logic to integration tests that ensure multiple components work together correctly. Integration tests can be designed as white-box (where internal workings are visible) or black-box (where only inputs and outputs matter). Regardless of style, they are a critical part of every release cycle. Modern enterprise applications rarely operate in isolation. They often have to interact with external components like databases, message queues, APIs, and other services. To validate these interactions, integration tests typically rely on either real instances of components or mocked substitutes.
>> Read More
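The "mocked substitute" side of the trade-off above can be sketched in a few lines; the `UserService` class and its repository are hypothetical names, and the mock stands in for a real database. Testcontainers takes the other path: instead of a canned mock, the test starts a real, disposable instance of the service (e.g. a throwaway database in a Docker container) and tears it down afterwards, which catches integration issues a mock can hide.

```python
from unittest.mock import Mock

class UserService:
    """Depends on an external component (e.g. a database repository)."""
    def __init__(self, repository):
        self.repository = repository

    def display_name(self, user_id: int) -> str:
        user = self.repository.find_by_id(user_id)
        return f"{user['first']} {user['last']}"

# Black-box style: only inputs and outputs matter, so the real database
# is replaced by a mock that returns a canned row.
repo = Mock()
repo.find_by_id.return_value = {"first": "Ada", "last": "Lovelace"}

service = UserService(repo)
assert service.display_name(42) == "Ada Lovelace"
repo.find_by_id.assert_called_once_with(42)
```

The mock keeps the test fast and hermetic, but it only verifies what you told it to return; a real containerized instance additionally exercises drivers, SQL, and schema, which is the gap Testcontainers is built to close.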

ToolOrchestra vs Mixture of Experts: Routing Intelligence at Scale (Fri, 30 Jan 2026)
Last year, I came across Mixture of Experts (MoE) through this research paper published in Nature. Later in 2025, Nvidia published a research paper on ToolOrchestra. While reading the paper, I kept thinking about MoE and how ToolOrchestra is similar to or different from it. In this article, you will learn about two fundamental architectural patterns reshaping how we build intelligent systems. We'll explore ToolOrchestra and Mixture of Experts (MoE), understand their inner workings, compare them with other routing-based architectures, and discover how they can work together.
>> Read More
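The routing idea shared by both patterns can be sketched as a gate that scores each expert (or tool) for an input and dispatches to the top-scoring one. This is a toy top-1 router, not either paper's method: the keyword-overlap scoring, expert names, and handlers are all hypothetical stand-ins for a learned gating network.

```python
import math

def softmax(scores):
    """Turn raw scores into routing weights."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def route(query: str, experts: dict):
    """Score each expert by crude keyword overlap, dispatch to the best (top-1)."""
    names = list(experts)
    scores = [sum(kw in query for kw in kws) for kws, _ in experts.values()]
    weights = softmax(scores)
    best = max(range(len(names)), key=lambda i: weights[i])
    _, handler = experts[names[best]]
    return names[best], handler(query)

experts = {
    "math":   ({"sum", "add"},    lambda q: "math expert answer"),
    "search": ({"find", "where"}, lambda q: "search expert answer"),
}

name, answer = route("please add the sum of these numbers", experts)
print(name)  # prints "math"
```

In MoE the gate is a small learned network inside the model routing tokens to expert sub-networks; in tool orchestration the router sits outside the model and dispatches whole requests to tools or models. Same shape, different granularity, which is the comparison the article develops.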

Ralph Wiggum Ships Code While You Sleep. Agile Asks: Should It? (Fri, 30 Jan 2026)
TL;DR: When Code Is Cheap, Discipline Must Come from Somewhere Else
Generative AI removes the natural constraint that expensive engineers imposed on software development. When building costs almost nothing, the question shifts from “can we build it?” to “should we build it?” The Agile Manifesto’s principles provide the discipline that those costs once enforced. Ignore them at your peril when Ralph Wiggum meets Agile.
The Nonsense About AI and Agile
Your LinkedIn feed is full of confident nonsense about Scrum and AI.
>> Read More


DevOps Cafe Podcast

DevOps Cafe Ep 79 - Guests: Joseph Jacks and Ben Kehoe (Mon, 13 Aug 2018)
Triggered by Google Next 2018, John and Damon chat with Joseph Jacks (stealth startup) and Ben Kehoe (iRobot) about their public disagreements — and agreements — about Kubernetes and Serverless. 
>> Read More

DevOps Cafe Ep 78 - Guest: J. Paul Reed (Mon, 23 Jul 2018)
John and Damon chat with J. Paul Reed (Release Engineering Approaches) about the field of Systems Safety and Human Factors, which studies why accidents happen and how to minimize their occurrence and impact. Show notes at http://devopscafe.org
>> Read More

DevOps Cafe Ep. 77 - Damon interviews John (Wed, 20 Jun 2018)
A new season of DevOps Cafe is here. The topic of this episode is "DevSecOps." Damon interviews John about what this term means, why it matters now, and the overall state of security. Show notes at http://devopscafe.org
>> Read More