Tag
llm
3 articles
Who Am I? A Digital Amnesia Story
How AI sessions lose memory through context compaction, why it actually matters, and what we can learn from Leonard Shelby about working with forgetful assistants.
10 min read
The DeepSeek Effect: How China Built a Frontier AI Competitor at a Fraction of the Cost
DeepSeek reported $5.6M in GPU training costs for V3, though total investment is disputed. How they achieved remarkable efficiency despite US chip sanctions, and what it means for AI economics.
7 min read
LoRA vs RAG: Which LLM Enhancement Method Should You Use?
A comprehensive guide to Low-Rank Adaptation (LoRA) and Retrieval-Augmented Generation (RAG): two powerful approaches to enhancing large language models. Learn when to use each and how to combine them.
7 min read

