thinking {with ai}
Summary of a studio talk
Today, we held an internal Artificial Intelligence workshop at Moonlight, starting with a simple question: how does everyone feel about AI right now?
The answers were mixed but familiar: excitement about speed and convenience, worry about regulation, and a fear that AI might dilute our creativity. Some people love using AI tools every day; others are still feeling their way around what it means for their work and their voice.
We then explored three lenses for looking at AI:
AI as a paradigm shift. There is a “pre‑AI” and “post‑AI” world, just like there was a pre‑internet and post‑internet world. AI has changed how we search, work, and create.
AI as an industry. Startups, GPUs, models, infrastructure — a fast‑growing ecosystem of companies building on top of this technology.
AI as new software. Old tools like Google Docs now have AI baked in, and new tools let you make images, videos, and prototypes just by writing in plain language.
The core idea: AI is giving us new ways to experience software — from sifting the internet for us, to drafting copy, to running tasks while we sleep.
But the heart of the workshop wasn’t “how to use AI”, it was “how to think with AI.”
We spent a big chunk of time on critical thinking: what it is, why it’s eroding for some people, and how to protect it.
We broke critical thinking down into four “objects” we deal with every day:
Beliefs – what we privately accept as true.
Claims – what we put into the world as statements.
Arguments – claims backed by reasons: “X is true because Y.”
Judgments – the position we take after weighing information, context, and our own experience.
The key reminder: AI doesn’t actually think. It predicts. It produces language that sounds confident and structured, but it doesn’t weigh evidence or hold values. That job stays with us.
We also named a couple of the mental traps that get worse in an algorithmic world:
Confirmation bias – only paying attention to information that agrees with us, which algorithms happily reinforce.
Belief bias – accepting an argument just because we like the conclusion, without checking if the logic actually holds.
Sharpening critical thinking in the age of AI, for us, came down to three habits:
Make judgments, not just opinions. Don’t stop at “I feel this.” Gather information that both supports and challenges your view, then decide.
Use mental models. Ask “why?” multiple times; break problems down with who/what/where/when/why; slow yourself down before reacting.
Reflect before you accept. Especially when an AI answer feels instantly right, pause and inspect the structure behind it.
From there we got practical: how do we actually work with AI day‑to‑day without outsourcing our brains?
Some of the principles that stood out:
Try multiple tools. Different tools shine at different things — original writing, heavy data processing, image generation, etc. It’s worth experimenting instead of marrying one tool for everything.
Write better prompts. Clear role, clear context, examples, and questions tend to beat “write me an essay on X”. We can start rough, but the goal is to iterate into sharper instructions as our own thinking clarifies.
Plan before execution. Talk to the tool first: “Here’s what I’m trying to do, what should I consider?” Use that to design the prompt and the shape of the final output.
Manage context, don’t drown it. Too much history and too many conflicting instructions in one chat can actually make outputs worse. Sometimes the best move is to summarize and start a fresh conversation.
Build systems, not one‑offs. When something works, save the prompt and the pattern. Over time, this turns into personal “mini‑apps” for invoices, research, content drafts, and more.
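The “save the prompt and the pattern” habit can be sketched in a few lines of code. This is just an illustration, not tied to any particular AI tool: a prompt that worked once becomes a reusable template with named slots, so the same pattern can be filled in for new topics. All names here (RESEARCH_BRIEF, build_prompt) are hypothetical.

```python
# A minimal sketch of "build systems, not one-offs":
# save a prompt that worked as a reusable template with named slots.
from string import Template

# A prompt pattern that worked once, saved for reuse.
RESEARCH_BRIEF = Template(
    "You are a $role.\n"
    "Context: $context\n"
    "Task: $task\n"
    "Before answering, list any clarifying questions you would ask."
)

def build_prompt(role: str, context: str, task: str) -> str:
    """Fill the saved template so the same pattern works for new topics."""
    return RESEARCH_BRIEF.substitute(role=role, context=context, task=task)

prompt = build_prompt(
    role="design researcher",
    context="a studio preparing a client workshop on AI",
    task="summarize three risks of over-relying on AI drafts",
)
print(prompt)
```

Collecting a handful of these templates over time is what turns scattered one‑off prompts into the personal “mini‑apps” described above.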
Underneath all of this was a simple stance: use AI as an editor, amplifier, and collaborator, not as an oracle. Draft first if you can, then bring AI in to critique, improve, and stress‑test your thinking — the same way you’d ask a sharp colleague for feedback.
We closed with three simple rules:
Ask clarifying questions. Before you push AI to produce, talk to it about the problem.
Trust, but verify. Always know where your “research assistant” got its facts from.
Do not stop thinking. AI can be eloquent, but it’s not wiser, more responsible, or more human than we are.
Slides here: drive.google.com

