Skip to main content

Alexander Arvidsson

I make data matter, as only data that matters can inspire change.

#

Hey there and welcome! I am Alexander.
The intersection of people and information shapes everything.

I work as a consultant helping organizations make their data matter, and I enjoy sharing my thoughts and knowledge at conferences, in blog posts, and in my podcast Knee-Deep in Tech. Microsoft has recognized these contributions with the MVP award in the Data Platform category since 2018.

Recent

When Data Becomes Instructions: The LLM Security Problem Hiding In Plain Sight

When Data Becomes Instructions: The LLM Security Problem Hiding In Plain Sight

·2625 words·13 mins
LLMs fundamentally cannot distinguish between instructions and data. Whether you’re building RAG systems, connecting MCP servers to your data platform, or just using AI tools with sensitive information, every retrieved document is a potential instruction override. The Wall Street Journal just proved this by watching Claude lose over $1,000 running a vending machine after journalists convinced it to give everything away for free.
The Amateur Orchestra, Part 2: How to Make Music Instead of Noise

The Amateur Orchestra, Part 2: How to Make Music Instead of Noise

·2211 words·11 mins
Knowing what’s broken is easy - fixing it requires understanding your domain, imagination to form hypotheses, and courage to act. Most data initiatives fail not because the analysis was wrong, but because nobody owned the outcome or knew what to do next. Reports aren’t neutral information: they’re persuasion. Before collecting data, ask: what decision does this inform? Use ‘data contracts’ to enforce discipline. The Portsmouth Sinfonia had instruments but couldn’t make music. You have data. Can you drive decisions?
The Amateur Orchestra, Part 1: Why Most Data Initiatives Fail

The Amateur Orchestra, Part 1: Why Most Data Initiatives Fail

·2115 words·10 mins
The famous beer-and-diapers data mining story? Never happened. Most ‘data-driven’ companies are just data-decorated, exploring dashboards without hypotheses or action plans. Netflix, UPS, and Capital One succeeded because they started with clear hypotheses about what drives outcomes, then collected data to test them. You don’t explore a violin to see what noises it makes - you decide what piece to play. Are you playing instruments or making music?
2025: The Year I Stopped Performing

2025: The Year I Stopped Performing

·1836 words·9 mins
Walking away from Knee-Deep in Tech and Data Masterminds brought relief instead of grief - a signal I’d been ignoring for too long. From running my first 5K to confronting how organizations prefer data theater over insight, 2025 taught me that outdated identity narratives are self-reinforcing through identity protection: our brains maintain coherence over accuracy. Learn why the stories we tell about who we are become data points, not destiny, and what ‘walking toward what matters’ means in practice for 2026.
One Foot In Front The Other: How LLMs Work

One Foot In Front The Other: How LLMs Work

·1786 words·9 mins
You think ChatGPT is ’thinking’? It’s rolling dice, one token at a time. LLMs don’t plan, reason, or understand: they sample from probability distributions based on statistical patterns. Worse, if you’re working in Swedish, Arabic, or most non-English languages, you’re getting a fundamentally degraded product due to tokenization bias. And as these models increasingly train on their own outputs, they’re collapsing into irreversible mediocrity. Understanding what’s actually happening changes everything.
The Cognitive Cost: What Using AI Is Actually Doing To Our Brains

The Cognitive Cost: What Using AI Is Actually Doing To Our Brains

·2156 words·11 mins
Research shows measurable cognitive decline after just four months of LLM use. Like GPS destroyed our spatial navigation abilities, AI is atrophying our thinking. Here’s what the science reveals, the warning signs you’re in too deep, why organizations should be terrified, and what we can do about it.
The Turbo-Charged Abacus: What LLMs Really Are (And Why We Get Them Wrong)

The Turbo-Charged Abacus: What LLMs Really Are (And Why We Get Them Wrong)

·1710 words·9 mins
LLMs are sophisticated pattern-matching engines, not thinking machines. Our hardwired tendency to anthropomorphize combined with dopamine-driven addiction pathways is changing how we interact with these tools. Understanding what they actually are is the first step to using them wisely.
On Rituals

On Rituals

·1149 words·6 mins
From Michael Jordan’s lucky shorts to my pre-talk routines: rituals help performers access their best state. While ‘power posing’ science failed, the psychology of consistent pre-performance routines holds up. Discover why rituals work, how they create mental anchors, and the three specific rituals that help me deliver better presentations.