Just Talk To It: What the Creator of OpenClaw Can Teach Wealth Managers About AI

Peter Steinberger was traveling when he connected WhatsApp to an AI agent. It took about an hour. No enterprise procurement process, no vendor evaluation, no committee approval. He just built it.

That side project became OpenClaw — the fastest-growing open-source project in GitHub history, with over 180,000 stars. Steinberger was recruited by OpenAI and recently appeared on episode #491 of the Lex Fridman Podcast, where he laid out a philosophy that challenges the assumption that AI requires elaborate infrastructure and months of planning.

His core message is disarmingly simple: just talk to it.

That phrase might sound like developer shorthand, but it carries a direct lesson for wealth managers, family offices, and financial services firms trying to figure out where AI fits into their operations. The firms that will win with AI are the ones that stop over-engineering the process and start with simple, focused conversations about real problems.

The Anti-Complexity Manifesto

Steinberger has written extensively about his approach to building with AI, and three of his observations translate directly from the developer world into the enterprise world. If you're evaluating AI solutions for your firm, these are worth understanding.

Most AI platforms are thin wrappers

In a post titled "Just Talk To It," Steinberger makes a pointed observation: most of the AI tools and platforms flooding the market are thin wrappers around the same underlying models. They add a user interface, some integrations, and a price tag — but the intelligence underneath is identical.

This translates directly to enterprise AI. Many of the platforms being pitched to financial services firms are selling you access to the same foundation models (GPT-4, Claude, Gemini) with a layer of configuration on top. Some of that configuration is genuinely valuable — compliance guardrails, data residency controls, audit logging. But much of it is cosmetic complexity that obscures a simpler truth: the underlying model is doing the heavy lifting, and you're paying a premium for packaging.

Before signing a six-figure platform contract, ask a straightforward question: what does this platform do that we couldn't accomplish with the model directly, a lightweight workflow engine, and our existing data infrastructure?

MCP servers are just a checkbox

Steinberger is especially critical of MCP (Model Context Protocol) servers — a technology that lets AI models connect to external data sources and tools through a standardized interface. His take: they're "something for marketing to make a checkbox."

For a non-technical audience, MCP is essentially a way to give an AI model access to your files, databases, or applications so it can pull in context while working. It sounds impressive in a vendor demo. But Steinberger's argument is that in practice, most of these integrations add complexity without proportional value. The model can often accomplish the same outcome through simpler means — direct file access, straightforward API calls, or just pasting the relevant information into the conversation.

The enterprise translation: be skeptical of long feature lists. When a vendor shows you a slide with twenty integrations and connectors, ask which ones map to specific problems you actually have. If the answer is vague, you're paying for checkboxes, not capabilities.

A quality model and simple tools are enough to start

Steinberger's own setup is almost aggressively simple: a terminal, version control, and a quality AI model. That's it. No elaborate toolchains, no multi-platform orchestration. He argues that a good model with minimal tooling outperforms a mediocre model buried under layers of infrastructure.

For your firm, the equivalent is: a quality model (Azure OpenAI, deployed in your own environment), a workflow engine (n8n, Power Automate, or similar), and your existing data. You don't need a dedicated "AI platform" to get started. You need a clear problem, a capable model, and someone who knows how to connect the two.

Start Conversations, Not Specifications

How Steinberger builds

Steinberger's approach to building software with AI is the opposite of traditional enterprise methodology. Instead of writing detailed specifications before starting, he begins conversations with the AI, iterates live, and lets solutions emerge through interaction. He intentionally under-specifies, trusting that the back-and-forth will surface the right requirements faster than a document ever could.

In "Shipping at Inference-Speed," he describes a workflow where screenshots replace lengthy prompts. Rather than writing paragraphs describing what he wants, he shows the AI what he's looking at and says, in effect, "fix this" or "make this better." The visual context communicates more than words alone.

What this means for your firm

The traditional approach to AI adoption in financial services looks something like this: form a committee, write an RFP, evaluate six vendors, negotiate contracts, run a 12-month implementation, and hope the requirements you defined at the start still match your needs at the end.

Steinberger's approach suggests something radically different: spend four weeks on a proof of concept with real data. The conversation with the technology — watching it handle your actual transactions, documents, and workflows — teaches you more than any specification document ever will.

We've seen this firsthand. Our fraud detection case study went from concept to working system in four weeks for $6,000. The client learned more about what AI could do for their operations in that first month than they would have in six months of vendor evaluations.

The screenshot principle

Steinberger's preference for screenshots over written descriptions has a business parallel. When scoping AI projects, we've found that showing the messy spreadsheet, the manual workflow, or the 47-tab Excel workbook communicates the problem better than any written brief. If you're thinking about where AI fits into your operations, start by showing someone the ugliest, most time-consuming process your team deals with every day. That's your starting point.

The Identity Shift: From Implementer to Architect

What developers are experiencing

Steinberger describes a transformation in how he works that goes beyond productivity. "I don't read much code anymore," he writes. "I watch the stream." He's shifted from writing code line by line to directing AI systems — reviewing output, course-correcting, and exercising judgment about what's good enough and what needs rethinking.

He compares this to the industrial revolution: the nature of the work is changing, not disappearing. Developers aren't being replaced; they're becoming architects and creative directors who guide AI systems rather than manually producing every artifact.

The financial services parallel

Every role that involves routine information processing faces the same transformation. Compliance analysts reviewing transaction reports. Associates generating client performance summaries. Operations staff reconciling data across systems. The manual production of these deliverables is exactly the kind of work that AI handles well.

The analysts who thrive in this environment won't be the ones who can review the most documents per hour. They'll be the ones who can direct AI systems effectively — who know the right questions to ask, who can evaluate AI output against their professional judgment, and who can catch the edge cases that models miss.

This is an investment in people, not just technology. The firms that pair AI deployment with genuine training — teaching their teams to work alongside these tools, not just use them — will outperform firms that treat AI as a simple cost-cutting exercise.

A word of caution: "Just one more prompt"

Steinberger is refreshingly honest about the risks of his own approach. In a post titled "Just One More Prompt," he describes how AI-assisted productivity can become addictive. The ability to accomplish so much so quickly creates a compulsive cycle: just one more feature, just one more improvement, just one more iteration. The result, if unchecked, is burnout — not efficiency.

For firms adopting AI, this is worth taking seriously. The goal isn't to maximize the volume of AI-processed work. It's to free your team's time for the judgment-intensive work that actually creates value. AI adoption without intentional boundaries leads to scope creep and exhaustion, not transformation.

Practical Takeaways for Wealth Managers and Family Offices

If Steinberger's philosophy resonates, here's how to put it into practice:

  1. Start with a conversation, not a contract. Pick one real problem — the most tedious, manual, error-prone process your team deals with — and explore it with AI for a few hours. You'll learn more from that conversation than from any vendor pitch deck.

  2. Resist the urge to over-engineer. A quality model deployed in your Azure environment plus a lightweight workflow engine will outperform a bloated platform that takes six months to configure. Start simple. Add complexity only when you've earned the right to — meaning you've proven the basic approach works first.

  3. Evaluate vendors on outcomes, not feature lists. When a vendor shows you a wall of integrations and capabilities, ask one question: "Which of these solves a specific problem we have today?" If the features can't map to your actual pain points, you're paying for marketing checkboxes.

  4. Invest in your people's judgment, not just the technology. Budget for training your analysts and associates to evaluate and steer AI output. The model produces drafts; your people provide the expertise, context, and professional judgment that make those drafts valuable. Skipping this step is how firms end up with expensive AI tools that nobody trusts.

  5. Set boundaries before you scale. Define what success looks like for a pilot before you expand scope. What specific metrics will you measure? What's the threshold for moving to Phase II? Without these boundaries, AI adoption becomes an open-ended experiment that's hard to evaluate and easy to abandon.

The Conversation Starts Somewhere

Peter Steinberger didn't build the fastest-growing open-source project in GitHub history by assembling the perfect toolchain. He built it by staying close to the problem, using simple tools, and having a conversation with the technology.

The firms that succeed with AI in financial services will follow the same pattern. Not the ones with the biggest budgets or the most elaborate platforms — the ones willing to start with a real problem, a quality model, and an honest conversation about what's possible.

At WestStack, we help firms find that starting point. We run focused proofs of concept with real data, in your environment, on your timeline. No six-month implementations. No bloated platform commitments. Just a clear-eyed assessment of where AI creates value for your specific operations, and a practical path to get there.

Ready to start the conversation? Book a consultation to discuss your firm's specific challenges.


About the Author: Adam Daum is the founder of West Stack, specializing in AI implementation for wealth managers and family offices. He helps financial services firms adopt AI solutions that respect data privacy, integrate with existing workflows, and deliver measurable ROI.
