Two Paths
AI Collaboration
Build clearer thinking, stronger boundaries, stable decisions, and cleaner communication under speed.
Explore AI Collaboration →
Relational Intelligence
Reduce escalation. Strengthen communication. Build the relational capacity leadership depends on when pressure rises.
Explore Relational Intelligence →
Who This Is For
Leaders with Capital
You set direction, own outcomes, and operate at the leverage point where every decision multiplies — or compounds a mistake.
Operators & Integrators
You hold the organization together — translating vision into execution, managing the leadership team, and absorbing the friction others don’t see.
Tech Talent in Transition
You have real technical depth. The question now is what sits above it — the layer that makes technical excellence compound instead of commoditize.
The Method
Somabase works through live cohorts, structured practice, community calibration, and behavioral visibility through MS360. This is how judgment, communication, and leadership capacity become more reliable under velocity.
6–8 week programs with live facilitation, real pressure, and direct feedback.
Live relational feedback that makes communication patterns visible and workable.
Behavioral visibility that surfaces patterns affecting judgment, communication, and leadership under pressure.
For Organizations
Custom cohorts for leadership teams where AI adoption is moving faster than human readiness. Built to improve decision quality, communication coherence, escalation reduction, and responsible AI collaboration.
Enterprise Inquiry →
The Core Principle
Clear thinking produces better output. Unclear thinking produces faster confusion. AI scales judgment and it scales distortion. Your relationship to the tool shapes the value you get from it.
Outcomes
Structure
Participants leave with a clearer way to think, decide, and lead with AI.
Enterprise
Somabase delivers custom internal cohorts built around the human patterns shaping AI performance inside your organization.
Enterprise Inquiry →
Partnership
MS360 provides a behavioral translation layer — a mirror that surfaces invisible patterns. Built into every Somabase cohort through our partnership with Carinda Salomon.
Learn more about MS360 →
Recognition
Relational patterns are consistent. They surface in your partnerships, your team, your intimacy, and your self-talk — until something shifts.
Outcomes
Core Domain
How you relate under pressure reveals how you lead everywhere — escalation patterns, boundary dynamics, communication clarity, self-trust. Somabase treats relational integrity as a core domain of leadership development, using it as a precision mirror for the patterns that shape every professional and personal relationship you have.
Structure
Participants leave with a stable relational framework.
What It Is
MS360 provides behavioral pattern visibility through biometric translation. It surfaces signals that are otherwise invisible — helping individuals and teams see patterns in real time.
A calibration tool. A behavioral mirror. MS360 gives you information — the interpretation and action remain yours.
How It Works
For Individuals
MS360 is built into every cohort — a feedback layer that runs alongside your development, supporting awareness without creating dependency. You use it as a mirror, then set it down.
For Teams
In enterprise cohorts, MS360 offers behavioral pattern visibility: decision stability tracking, escalation pattern recognition, and communication coherence assessment.
Who This Is For
Leadership teams moving quickly with AI while decision quality, communication, and trust are under strain. High-escalation environments where reversals, fragmentation, and reactive leadership are costing real money and talent.
What's at Stake
Decision quality drops. Communication fragments. Escalation increases. Teams move faster without getting clearer. AI adoption adds output without improving alignment.
The Process
Diagnostic conversation. Identify where velocity, decision drift, escalation, and communication breakdown are limiting AI value.
Custom 6–10 week cohort mapped to your organization's specific dynamics and needs.
Live sessions. Structured practices. MS360 behavioral visibility throughout.
Executive summary. Findings and recommendations. Continuation options.
Inquiry
Tell us about your organization. We'll respond within 48 hours to schedule a discovery call.
Received
We'll be in touch within 48 hours to schedule a discovery call.
Learn about MS360 →
What This Is
Somabase community is a live relational environment where members practice stability, receive feedback, and calibrate together. Active participation, not passive consumption.
Real-time relational dynamics that mirror your patterns back to you in a structured, supported container.
Integration happens when you practice with others, not alone. The community is the practice environment.
The community holds the container. You bring the work. Structured support. Real accountability.
Why carefully constructed communities are the most important thing we can create in a digital world.
By Erik Horbacz · February 2026
Why maturity is the bottleneck for intelligent collaboration.
Read →
Maintaining agency in an AI-mediated world.
Read →
How to hold decisions when everything accelerates.
Read →
What instability costs organizations adopting AI.
Read →
This is not self-help. It is operational capacity.
Read →
A precision mirror for relational patterns.
Read →
Principles for the next decade of working with intelligence.
Read →
What It Is
Somabase helps leaders and organizations build the human advantage required to create real value with AI. We focus on judgment, decision stability, communication coherence, and relational capacity under speed.
The platform operates through live cohorts, structured practice, community calibration, and integrated behavioral visibility through MS360 (developed by Carinda Salomon).
Three Pillars
Somabase develops three integrated capacities: stability under pressure, decision quality with AI, and relational clarity under velocity. These capacities determine whether speed creates leverage or chaos.
Stability holds pressure. Judgment guides decisions. Relational clarity keeps trust and communication intact.
What Somabase Is
In Practice
Somabase develops the judgment, communication, and stability leaders need when AI increases speed, volume, and pressure.
Application Received
Next step: Schedule a 20-minute consult to discuss your application.
Schedule Consult
While you wait, read: "AI Amplifies Your State" →
Application Received
Next step: Schedule a 20-minute consult to discuss your application.
Schedule Consult
While you wait, read: "Relational Intelligence as Infrastructure" →
What to expect
This is a 20-minute discovery conversation. We'll discuss where you are, what you're working on, and whether Somabase is the right container for your next stage of development. Come with honesty about where you're stuck. No preparation required beyond that.
Before You Book
Answer a few questions so we can make the most of our time together.
Why carefully constructed communities are the most important thing we can create in a digital world
There is something distorted and inflated, and we all feel it.
Scroll through any feed for ten minutes, and you can taste it — the hollow aftertaste of connection that isn't. Thousands of followers, dozens of group chats, notifications piling up like leaves in a gutter. We are more networked than any generation in human history, and somehow more alone.
I'm not pointing fingers. I've been in these spaces. I've never worked inside tech or inside someone else's company — I've been an entrepreneur since college, building things from the outside, which means I don't think the way most people in these industries think. But I've spent real time in digital communities — meme-coin Discords, self-help groups, creator circles, crypto communities, and forums that burned bright for six months and then went dark. I've watched communities form around missions and leaders and content — and I've watched most of them collapse the moment the leader lost energy, the content dried up, or the first real conflict surfaced.
The pattern is always the same. A group comes together with enthusiasm. Everyone is polite. Everyone agrees. It feels electric — like something real is forming. Then someone says something uncomfortable. A disagreement. A tension that can't be smoothed over with an emoji reaction. And instead of leaning in, the group scatters. Back to the safety of surface-level engagement. Back to the community's performance without its substance.
And here is the part that bothers me most: almost every community I've been in lives and dies by its leader or its content. People join as long as there is value being handed to them. They consume. They extract. And when the value stream slows, they leave. Almost nobody shows up to contribute as who they are. Almost nobody brings the value — they wait for it to be delivered. Communities function like audiences with membership fees.
Only when they move past that do they arrive at true community. Deep listening. Real trust. Collective intelligence that none of the individuals could have reached alone.
Most digital communities never make it past the first stage. They are dressed up as tribes but function as audiences. They have members but not relationships. Channels but not conversations. Content but not culture.
This is the dark part. This is what is real.
And it matters more than we think.
Because we are entering an era where the ground under every institution is shifting. Intelligence is being industrialized. AI systems can now do in seconds what took teams of knowledge workers months. Energy is getting cheaper. Labor as we know it is being redefined. The economic models that shaped the last century — scarcity-based, production-measured, shareholder-first — are groaning under pressures they were never built to absorb.
Ray Dalio mapped the cycles. Great powers rise, consolidate, decay, and are replaced. Raoul Pal pointed to 2030 as the moment when the convergence of AI, blockchain, and abundant energy makes the old economic rules unrecognizable. Peter Diamandis talks about “solving everything” — using industrial intelligence to crack disease, energy, and materials science. Kurzweil charted the exponential curve and said the singularity is not a distant fantasy but a near-term reality. I would argue that we are here.
They are all pointing at the same horizon. And they are all, in their own way, missing the same thing.
Technology can transform industries, domains, and specific problems. But we can't rely on it to evolve us. That work is ours to do — deliberately and with intention.
We are building powerful machines without enough human readiness for the speed they create. We can automate the how of nearly everything, but we still have weak infrastructure for the who. Who are we becoming? What do we value? How do we coordinate when the old rules stop working? How do we stay human while the pace keeps rising?
These are not philosophical luxuries. They are survival questions. And the answer — the only answer that has ever worked across civilizations, across centuries, across every tradition that left something worth inheriting — is community. Not the word. Not the brand. Not the Slack channel. The real thing.
Real community is the hardest thing to build. That's why almost nobody does it.
Real community requires you to show up as you actually are — not the curated version, not the professional persona, not the optimized personal brand. It requires conflict. It requires sitting in the discomfort of disagreement with people you've committed to, and choosing to stay instead of scroll away.
Peck described the stage before true community as “emptiness” — a space where each person lets go of their need to fix, control, convince, or perform. It looks like failure. It feels like loss. And it is the only doorway to the thing everyone says they want but almost no one is willing to earn.
I've been through something like the dark side of this personally. I was born two months early, brought back to life by the hands of strangers. I grew up with ADHD in the basement of a Catholic school, building worlds in my head that nobody else could see. Every time I tried to share what was inside me, I learned something about connection — and about what it feels like when that connection doesn't land. So I know what avoidance tastes like. I know what it means to have a voice and be afraid to use it.
In a world of high-speed extraction, the quiet ones, the people with the most depth and the most real substance, often sit it out because the cost of engagement feels too high.
This is backward. This is the imbalance we need to correct.
So what does a carefully constructed community actually look like in a digital space?
It starts with a promise. Not a mission statement carved into a boardroom wall. A living, dynamic promise — what I call a Compelling Aligning Promise, or CAP. It is specific enough to pull action, meaningful enough to matter, and shared enough to align people around a common direction. The individual who makes the promise and the community that holds the promise operate in parallel. The founder is not above the community — they are the first member. Their model of personal alignment is the community's foundation.
Beneath that promise lives identity — values, energy, trajectory. Who are we? How do we show up? Where are we heading? This is not brand language. This is the honest accounting of what we actually stand for, the energy we bring into rooms, and where our current behavior is really taking us. When identity is clear, it becomes a filter. The right people feel it and lean in. The wrong people self-select out. No sales pitch required.
Then comes voice — authentic, resonant communication. Not marketing. Not content strategy. The actual sound of a community speaking its truth. Marshall Ganz calls it the story of self, the story of us, and the story of now. Seth Godin calls it the smallest viable audience. I call it the difference between attention and belonging. You don't need millions of followers. You need the right people hearing the right signal.
Then priorities. Then the network structure. Then rituals — the daily and weekly practices that turn a group of strangers into a culture. Then, at the root, individual ownership — each person doing their own work, taking responsibility for their own growth, feeding what they learn back into the collective.
This applies at every scale — from a small cohort to a global network. It is fractal. The same principles that govern a family or a small team can be applied to a neighborhood, a company, and an economy. The same geometry at every scale. Fractal alignment.
I want to be honest about something: I have not successfully built a community. Not yet. I've studied this, researched it deeply, lived inside communities that didn't work, and spent years developing a framework for how it could work. But the proof is still being written. I'm building now — with Corvia, an AI music community exploring emotional development, and with Somabase, a platform for human-technology collaboration and relational intelligence. Both are early. Both are laboratories for the hypothesis that intentional community, carefully constructed around shared values and honest practice, is the most powerful structure humans have for navigating change. The foundation I'm building runs deeper than I can explain in a single blog post. It will take time and context to show. But I'd rather be honest about where I am than pretend I've already arrived.
Here is where the light breaks through.
When a community gets this right — when it moves through pseudo-community and chaos and emptiness and arrives at the real thing — something extraordinary happens. The collective becomes smarter than any individual in it. Problems that seemed unsolvable from the inside become obvious from the shared perspective. People who feel lost find direction. People who feel voiceless find a frequency that is unmistakably theirs. Ventures emerge not from market analysis but from genuine need, identified by people who trust each other enough to be honest about what is missing.
The Beatles didn't just make music. They created a cultural field that millions of people stepped into and were changed by. That field was community — unstructured, messy, emergent, but real. Imagine what becomes possible when we apply everything we've learned about human development, organizational design, tokenomics, AI collaboration, and consciousness research to intentionally build that same kind of field.
Our current systems are designed to extract intelligence rather than concentrate it. We need systems where the definition of value shifts from what you produced to what you contributed to the flourishing of the whole.
This is not utopian fantasy. The Economic Space Agency is building protocol infrastructure for exactly this kind of postcapitalist coordination. DAOs have already demonstrated that decentralized governance can work — imperfectly, messily, but really. Tokenized ownership turns shared infrastructure into shared income. The pieces exist.
What has been missing is human readiness. The capacity to hold disagreement without fragmenting. The discipline to use technology intentionally. The ability to value contribution over extraction.
The real work is practical. Stronger identity boundaries. Better behavior under pressure. Cleaner coordination. Better judgment around the tools shaping how we think, work, and relate.
We are in the water right now.
The waves of technological disruption are coming whether we are ready or not. The old boats — the corporations, the institutions, the governments designed for a slower world — are taking on water. Some will adapt. Many will not. The question is not whether change is coming. The question is what vessel you are in when the waves hit.
The boat is community. Carefully constructed. Values-aligned. Honest about the chaos. Willing to pass through emptiness. Built not for speed but for resilience. Crewed not by employees but by owners — people with real stakes, real voices, real accountability to each other.
This is the thing I am dedicating my work to. Not because I figured it all out — I am figuring it out in real time, messily, publicly, with the same fears and doubts as everyone else. But because the alternative is worse. The alternative is drowning alone in a sea of information, clutching a phone full of followers and wondering why none of it feels real.
If you are reading this and you feel the pull toward something more honest, more intentional, more human than what the platforms are currently offering — know that you are not alone. The avoidant ones, the quiet ones, the people who have done the inner work but haven't yet found a space worthy of their voice: you are exactly who we need. The narcissistic networks have had their turn. It is time for the people with substance to show up.
Not to perform. Not to optimize. To build something real. To construct the community carefully — with shared values, clear direction, honest communication, and the willingness to stay when it gets hard.
That is how we ride the waves.
That is how we build the boat together.
By Erik Horbacz · February 2026
There’s a premise baked into almost every AI adoption conversation happening in organizations right now: the bottleneck is technical. If we just train people on the tools, adopt the right platforms, build the right workflows — the results will follow. What we’re discovering, slowly and sometimes painfully, is that this premise is wrong.
The bottleneck isn’t technical. It’s human.
More specifically, it’s the quality of the internal state the human brings to the collaboration. And AI doesn’t just work around that state — it amplifies it.
I want to be precise about what I mean by that, because it’s easy to read “internal state” and immediately drift toward something vague or psychological. I’m not talking about mood. I’m talking about something more structural: the degree to which a person can maintain clarity under pressure, hold a decision without immediately reversing it, and sustain independent judgment when a sophisticated system is producing confident-sounding output at high velocity.
That’s what I mean by state. And when you bring that quality — or its absence — to AI collaboration, the AI doesn’t moderate it. It magnifies it.
Here’s the dynamic in practice. When you sit down to work with an AI system and you’re scattered — bouncing between competing priorities, running on sleep debt, carrying unresolved tension from an earlier conversation — the AI will produce output that reflects the fragmentation of your input. The prompts will be unclear. The outputs will be loosely formed. You’ll accept them anyway, because in a scattered state you don’t have the discriminating capacity to evaluate what you’re receiving. The output feels like it’s helping because it’s producing something, filling the space, generating motion. But it’s producing the shape of your own confusion back to you, and you’re mistaking it for signal.
The inverse is equally true, and this is where the real leverage lives. When you come to AI collaboration with clarity — a settled sense of what you’re trying to accomplish, why it matters, and what you’re not willing to compromise — the AI becomes extraordinarily useful. Not because the tool changed, but because you changed what you’re directing it toward. You can see the difference between a response that actually serves the goal and one that merely sounds like it does. You can push back on a confident-sounding answer when your own judgment tells you something is off. You can use AI to explore and then return to yourself to decide.
Research from Frontiers in Psychology (2025) introduced a taxonomy worth sitting with: cognitive offloading with AI progresses across three stages — assistive, substitutive, and disruptive. In the assistive phase, AI extends your capacity. In the substitutive phase, it starts replacing your cognition. In the disruptive phase, it actively degrades your ability to self-monitor, evaluate your own reasoning, and make accurate assessments of what you know versus what you merely received. Heavy AI use correlates with lower metacognitive accuracy. In plain language: people are becoming less able to accurately gauge how well they understand something, because AI fills the comprehension gap so quickly that the struggle — which is where learning and discernment actually happen — never occurs.
This is the illusion of competence. You feel more capable. You produce more output. But the independent judgment that makes that output trustworthy hasn’t developed — it’s been bypassed.
I don’t raise this to argue against AI. I use it extensively, and I think the collaborative potential is genuinely significant. I raise it because the frame most organizations are using to think about AI adoption is leaving out the most important variable.
When you train someone on an AI tool without developing the human infrastructure required to use it well, you haven’t improved their capability. You’ve given a powerful amplifier to someone who hasn’t worked on what they’re amplifying. If the signal is good — stable, clear, coherent — the amplifier makes it better. If the signal is noisy, reactive, or externally dependent, the amplifier makes that worse too. The tool doesn’t know the difference.
This becomes a compounding problem at scale. One reactive person using AI badly produces confused output. A team of reactive people using AI badly produces accelerating organizational chaos. The decisions come faster. The pivots happen more frequently. The feedback loops tighten. But the underlying human capacity to hold steady — to evaluate, to commit, to maintain direction — hasn’t kept pace with the velocity the technology enables.
What develops that capacity? Not more AI training. Not more productivity frameworks. The research points toward something more fundamental: self-monitoring, metacognitive practice, and what I’d describe as stability under pressure — the ability to remain coherent when the environment is moving fast and confident-sounding information is arriving from every direction.
This is precisely why Somabase starts with the human, not the tool. Not because the tool is unimportant — it’s transformative — but because without the human foundation, the tool accelerates the wrong things. Our AI Collaboration cohort is built around a simple premise: before you can collaborate well with an intelligent system, you need a level of internal stability and independent judgment that makes that collaboration generative rather than disorienting. You need to know what you think, what you value, and what you’re responsible for — in a way that doesn’t dissolve the moment AI offers you 12 alternatives.
This isn’t a soft-skills conversation. It’s a performance conversation. The humans who will use AI most effectively are not necessarily the most technically sophisticated. They’re the ones with the clearest signal — the most developed capacity to direct, evaluate, commit, and override. Maturity, in the deepest sense of that word, is the bottleneck.
We’re at the beginning of figuring out how to develop that capacity intentionally, in the context of AI collaboration specifically. The research is early. The practices are experimental. But the direction is clear.
This is what Somabase is exploring. If that framing resonates — if you’ve felt the quality of your collaboration vary with your own state and wanted a structured way to develop what’s underneath — we’re building something for exactly that.
By Erik Horbacz · February 2026
There’s a useful distinction that most conversations about AI adoption are missing.
The distinction is between task delegation and identity outsourcing. They can look identical from the outside — both involve handing something to AI that you used to do yourself — but the internal experience is completely different, and the long-term consequences are on opposite ends of the spectrum.
Task delegation is what AI is legitimately excellent at. You have a clear goal, a defined scope, and you direct an AI system to help you accomplish it. Your judgment stays engaged throughout. You evaluate the output. You decide what to use, what to discard, and what to do next. The locus of decision-making stays with you. The AI is an instrument.
Identity outsourcing is something else. It starts subtly — an over-reliance on AI to generate not just outputs but positions. What should I think about this? What’s the right way to frame this? Is this a good idea? The AI answers, and the answer feels good, and over time the practice of generating your own positions — sitting with a question long enough to develop a view — atrophies. You’re no longer using AI to extend your thinking. You’re using it to replace the effort that thinking requires.
Research published in Social Behavior & Personality in 2024 described this pattern across five dimensions: dependency, gullibility, irrationality, unreliability, and loss of cognitive autonomy. What I find valuable about this framework isn’t the clinical language — it’s what it points to. These aren’t five separate problems. They’re five faces of the same underlying shift: the gradual erosion of the internal structures that let you direct, evaluate, and hold positions independently. When those structures weaken, you become gullible not because you’re unintelligent, but because your capacity for independent verification has been underexercised. You become unreliable not because you’re untrustworthy, but because your positions are being generated outside yourself and they shift when the external generator shifts.
The question I keep returning to — and the one that Somabase is built around exploring — is: what maintains the integrity of those structures while you’re also working with powerful AI systems daily?
“Identity boundaries” is the phrase I’d use. Not in a rigid sense — not a wall between yourself and the technology. More like a clear, stable sense of what belongs to you that remains coherent when an AI system is confidently offering to take it over. Your values. Your judgment. Your creative voice. Your direction. These aren’t things you need to protect from AI. They’re things you need to remain fluent in, even as AI becomes capable of simulating them quite well.
The simulation quality is actually the complicating factor here. A decade ago, AI-generated output was obviously different from human output. The quality gap made it easy to maintain a clear sense of what was yours versus what was generated. That gap is closing rapidly. AI can now produce writing that sounds like you, ideas that sound like yours, strategic reasoning that matches your usual patterns. This is genuinely useful — and genuinely disorienting. Because when the output mirrors your own style closely enough, the friction that normally signals “this came from outside me” disappears. And so does the metacognitive check that keeps you in the driver’s seat.
Frontiers in AI (2024) found that emotional engagement with AI is a significant variable in decision quality — and specifically, that higher emotional engagement tends to correlate with reduced willingness to override AI recommendations, even when the human has good reason to. This makes sense. If you’ve developed a working relationship with an AI system, if it feels responsive and helpful, the cognitive cost of contradicting it rises. Not because the AI has earned your deference — but because the emotional register of the relationship has started to carry weight in your evaluation process.
Sixty-eight percent of users report feeling more emotionally engaged with empathetic AI. That’s not a small number. And the implication isn’t that empathetic AI is bad — it’s that the humans using it need a level of relational discernment that most of us haven’t been asked to develop before. We’ve never had to navigate emotional engagement with non-human systems at this level of sophistication. The question of what’s real, what’s simulated, and what that distinction means for your judgment is genuinely new.
I don’t think the answer is more skepticism. Chronic skepticism toward your tools is a different kind of instability — it just destabilizes in the direction of over-caution rather than over-reliance. The capacity I’m interested in is something more nuanced: the ability to hold genuine engagement with AI collaboration while also maintaining a clear enough internal signal that you can feel the difference between using the tool and being used by it.
That difference — and I want to be precise here — isn’t primarily about what you’re doing. It’s about what’s happening underneath the doing. Someone can produce the exact same AI-assisted work product with either a stable sense of their own authorship intact or with that authorship quietly dissolved. From the outside, the outputs look the same. But the trajectory those two people are on is completely different. One is developing capacity. The other is outsourcing it.
Somabase’s AI Collaboration cohort is structured around making that internal distinction practical and trainable. Not through abstract frameworks, but through structured practice in the actual conditions where identity outsourcing tends to happen: high velocity, complex decisions, sophisticated AI input arriving at volume. The goal is to develop what I’d call behavioral containment — the ability to engage fully with what AI offers while retaining the internal coherence to evaluate, override, and own the outcome.
This is an experiment. There’s no established curriculum for what we’re doing because the situation itself is genuinely new. What we know from the research, from early cohort work, and from lived experience building with these tools is that the humans who navigate this best are not the most technically sophisticated or the most skeptical. They’re the ones who have done enough internal work to know what belongs to them — and who can hold that identity boundary clearly enough that it doesn’t erode under the very real pressure to just let the AI handle it.
If you’ve noticed that pressure — in your own work, in your own thinking, in the way your creative voice sometimes sounds more like your AI’s suggestions than your own — we’re building something for exactly that territory.
By Erik Horbacz · February 2026
One of the less-discussed effects of working with AI systems daily is what it does to your relationship with commitment.
Not commitment in the abstract — in the specific, practical sense of making a decision and holding it long enough for it to produce useful information. That interval — the gap between deciding and learning — is where most of the real signal comes from. Execution reveals things that analysis never could. But the interval requires something that’s becoming harder to sustain: the willingness to commit under conditions of unresolved uncertainty, when you know that more input is available if you want it.
AI has made more input perpetually available. That’s one of its most genuinely valuable properties. You can query, iterate, refine, and generate alternatives faster than any previous tool in history allowed. The problem is that the same capacity that makes AI so useful for exploration makes it actively disruptive to the part of decision-making that comes after exploration. At some point, you have to stop generating alternatives and commit to one. And the internal architecture required to do that — the capacity to tolerate the discomfort of foreclosing options, to hold a position under pressure, to trust your own synthesis when AI keeps suggesting there’s a better answer — that architecture doesn’t automatically strengthen just because you have better tools.
What I’m watching in people working with AI intensively is a pattern I’d describe as decision reversal under velocity. The decisions aren’t bad. The reasoning isn’t flawed. But the commitment doesn’t hold. Not because circumstances changed — because the mere availability of more input creates a standing invitation to reconsider. The decision gets reopened. The pivot happens before the original direction had time to produce any signal. The cycle repeats, and the organization accumulates motion without accumulating learning.
This is worth distinguishing from legitimate responsiveness. Updating your position when meaningful new information arrives is exactly right. What I’m describing is different: the compulsive revisiting that happens not because new information arrived, but because the psychological cost of staying committed is higher than the psychological cost of starting the loop over again. In a world where AI can generate a compelling rationale for almost any direction, that loop can run indefinitely. There’s always a better option available in the output.
Cognitive load theory offers a useful lens here. The mental effort required to hold a decision against the incoming tide of alternatives is itself a limited resource. When you’re working at high velocity — processing large volumes of AI-generated input, managing complex decisions across multiple domains, operating in environments where the feedback loops are tight — the available bandwidth for sustaining commitment narrows. And when commitment starts to feel too expensive, the system defaults to the state that requires less active maintenance: uncertainty, optionality, and the perpetual feeling that you haven’t quite decided yet.
The real bottleneck, in my view, is upstream of the decision itself. It’s the clarity of values that the decision is meant to express. When you know — with a kind of settled, embodied certainty rather than just intellectual acknowledgment — what you’re actually trying to build, what you’re responsible for, and what you’re not willing to trade away, the decision-making process looks different. The alternatives AI generates are still interesting. But they’re interesting from the vantage point of someone who already has a direction, evaluating whether the alternatives sharpen or dilute it. That’s very different from evaluating alternatives from a position of genuine openness — which is the correct posture during exploration, but the wrong one during commitment.
Decision coherence is what I’d call the target capacity: the ability to maintain internal alignment between your values, your reasoning, and your actions over time — even as external input accelerates. This isn’t rigidity. Coherent decision-makers change their minds. But they change their minds for specific reasons, in the direction of their actual priorities, rather than because the incoming data stream has temporarily made something else seem more compelling.
The second capacity is what I’d call discomfort tolerance around commitment. This sounds almost trivially simple, and yet it’s one of the places where I see even very sophisticated people struggle. Choosing means not-choosing. Committing to one path means acknowledging that the alternatives you’re not taking might have been better. AI makes this harder because it can always show you what a different choice might have produced. The counterfactual is no longer theoretical — it’s generatable, often quite persuasively, in real time. Learning to sit with the discomfort of a committed position when a confident-sounding alternative is available — that’s a trainable capacity, and an important one.
The third is what I’d describe as signal tolerance: the ability to hold a decision long enough that execution has time to generate real information, rather than abandoning the position before the experiment has run. Most decisions don’t reveal their quality quickly. The early data is often ambiguous, sometimes negative, always incomplete. The pull to reopen the decision during that ambiguous period is strong. Resisting that pull — not out of stubbornness, but out of a disciplined respect for what commitment actually produces — is a skill that atrophies when AI makes the cycle of generating and revisiting alternatives too frictionless.
None of this is a critique of using AI for decision support. Modeling scenarios, exploring alternatives, pressure-testing assumptions — these are all legitimate and valuable uses of AI in the decision process. The issue is that the same tools need to be held by humans who can complete the decision cycle: who can take in the analysis, integrate it, commit to a direction, and hold that direction with enough stability to learn from what it produces.
That last piece — holding direction under velocity — is what Somabase’s cohort work is specifically designed to develop. Not through frameworks or theoretical models, but through structured practice in the actual conditions where decision coherence gets tested: high-velocity information environments, sophisticated input from AI systems, ambiguous early signal, and the very real temptation to keep the loop open just a little longer.
This is experimental work. We don’t have a finished curriculum because the challenge itself is still taking shape. What we do have is a clear hypothesis: that the humans who navigate this era best will be the ones who did the work to develop the internal architecture for commitment — before the velocity made that work feel impossible.
If this framing matches something you’ve been living in your own work, we’re building for exactly that.
By Erik Horbacz · February 2026
When organizations talk about the costs of AI adoption, the conversation tends to focus on the visible expenses: licensing, infrastructure, training, integration. These are real costs, and they’re tractable. You can line-item them in a budget, track them against outcomes, and make rational decisions about allocation.
The cost I’m watching organizations miss is harder to quantify, but it’s larger. And it compounds.
It’s escalation.
Not escalation in the formal sense — not the escalation of a ticket to senior leadership, or the escalation of a minor conflict into a significant one. I mean something more structural: the acceleration of instability when you add AI velocity to humans who haven’t developed the capacity to hold steady under it. When that combination occurs, the feedback loops don’t just move faster. They move faster in the direction of reactive behavior, premature reversals, and amplified confusion. The speed that AI enables doesn’t go to useful outcomes. It goes to cycling through more iterations of the same unresolved dynamic.
This is the hidden cost. Not the tools. Not the training. The escalation cycles that emerge when unstable humans interact with accelerating systems.
Let me make this concrete. An organization deploys AI tools to its leadership and strategy teams. Output volume increases significantly. Decisions are being made faster — or at least, positions are being generated faster. Meetings are better prepared. Analysis is more thorough. On the surface, the indicators are positive.
But underneath, something else is happening. The increased velocity has reduced the latency that used to cushion reactive decision-making. When a team member was reactive — anxious, under-resourced, or carrying unresolved conflict — the natural slowness of information processing gave things time to settle before they became actions. The email took a day to draft. The decision had to wait for the next meeting. The proposal needed another review cycle. That friction wasn’t pure inefficiency. Some of it was absorbing the energy of reactivity before it could manifest as strategic action.
AI removes a significant portion of that friction. And with it, the natural containment that slower systems provided. Now the reactive email gets polished in minutes. The anxious pivot is articulated in a beautifully structured memo. The impulsive reversal of strategy arrives with compelling supporting analysis. The instability moves faster, and it moves dressed in competence.
This is what I mean by escalation under velocity. The underlying human dynamic hasn’t changed. The reactivity, the anxiety, the unresolved relational tension — all of it is still there. But AI has shortened the interval between those states and their organizational consequences. The escalation cycles run faster. The cost accumulates more rapidly.
Research from Frontiers in AI (2024) found that emotional state significantly affects decision quality when working with AI — and specifically that elevated emotional reactivity correlates with biased decisions even when AI is involved in the reasoning process. The AI doesn’t stabilize the emotional state of the user. It processes whatever it’s given. And when what it’s given is reactive input, it produces output that legitimizes and accelerates that reactivity.
What this means practically is that the organizations getting the most dysfunctional outcomes from AI adoption are not necessarily the ones that deployed the technology badly. Many of them deployed it well, by conventional measures. The dysfunction comes from the gap between the capability of the tools and the stability of the humans directing them. The tools are sophisticated. The humans are not yet stabilized for the environment the tools create.
There’s a particular pattern I see at the team level that’s worth naming. When one or two people on a team are internally unstable — reactive, scattered, or carrying unexamined behavioral patterns — AI tools create a kind of amplification loop. The reactive person generates more confident-sounding output, more quickly. That output enters team conversations with more apparent authority. The people with more stable judgment have less time to process and push back, because the cycle has accelerated. The quality of collective decision-making drops not because the tools are bad, but because the tools have shifted the balance toward whoever is generating the most output, fastest — which is not the same as whoever is generating the most reliable judgment.
Organizational stability under AI velocity requires something that most teams haven’t been asked to develop before: what I’d call relational integrity under acceleration. The ability to maintain constructive, clear, and grounded interactions with colleagues when the shared information environment is moving at a pace that previously would have been reserved for crisis conditions. When that capacity is absent, normal operational pressure starts triggering crisis-level responses. The escalation cycles aren’t just internal to individuals — they propagate across teams and become structural.
The conventional response to this problem is process: more structure, more sign-off layers, more review cycles. These can help at the margin. But they address the symptom, not the source. The source is the gap between the human capacity to hold steady and the velocity the technology enables. Adding process to that gap doesn’t close it. It adds friction at the output layer while leaving the underlying instability intact — and unstable humans are reliably creative at working around process friction.
What closes the gap is developing the underlying capacity: stability under pressure, decision coherence, behavioral containment. These are trainable. They’re not soft or abstract — they’re operational capacities that show up directly in the quality of decisions made under time pressure, the ability to hold a team direction when alternatives are being generated rapidly, and the reduction of escalation cycles that cost organizations real time, real money, and real relational capital.
This is what Somabase’s Enterprise program is designed to address. Not as a compliance or wellness initiative — as a performance investment. The hypothesis is simple: if the limiting factor for AI collaboration is human stability, then developing that stability is one of the highest-leverage things an organization can do in the current environment. Not instead of AI investment — alongside it. The tools are accelerating. The question is whether the humans directing those tools are developing at a commensurate pace.
We’re at the beginning of knowing how to do this systematically. The research is early. The organizational context is genuinely new. But the pattern is visible enough to act on, and the cost of not acting is compounding every quarter as AI capability advances.
The escalation cycles are the signal. They’re telling you that the investment in human capacity hasn’t kept pace. What you do with that signal is the question.
This is what Somabase is exploring — and if you’re seeing this pattern in your organization, we’re building something worth talking about.
By Erik Horbacz · February 2026
There is a category error embedded in how most organizations think about relationships. Relational skills — how people listen, navigate conflict, hold their ground under pressure, repair after rupture — get filed under “soft skills,” which is professional shorthand for: important in theory, deprioritized in practice. Trainings happen once a year. HR handles it. We move on.
The cost of this error is enormous and almost entirely invisible on the spreadsheet.
Every team runs on the quality of its relational dynamics. Not on top of them — on them. The capacity to deliver honest feedback, to stay coherent when a partnership is under strain, to move through disagreement without fracturing, to sustain presence when a conversation gets difficult — these are not interpersonal amenities. They are load-bearing elements of any functioning collaboration. When they’re underdeveloped, you don’t just get awkward meetings. You get avoidance masquerading as agreement, passive friction bleeding into timelines, decisions made by whoever tolerates confrontation least, and teams that look functional on an org chart until stress exposes the structural gap.
I call this relational infrastructure. And like physical infrastructure, most people only notice it when it fails.
The question worth sitting with is: what would it take to actually develop this? Not to talk about it in a workshop, not to absorb another framework on active listening — but to genuinely build the capacity for relational integrity under real-world pressure?
This is the question Somabase’s Relational Intelligence track is organized around.
The research here is stronger than the soft-skill label suggests. Relational quality changes with conditions, not just intention. When people are grounded enough to stay present, communication improves, listening improves, and problem-solving gets cleaner. When pressure rises past capacity, the system defaults to self-protection.
The implication is practical: relational quality is not just a trait to screen for in hiring. It can be built or degraded by conditions. A well-designed cohort creates conditions for repetition, feedback, and more reliable behavior under pressure.
Patterns formed early tend to carry forward. How people handle uncertainty, closeness, dependence, and perceived threat often repeats in adult professional and personal life. Once those patterns become visible, they become workable.
The Somabase Relational Intelligence cohort is an 8-week guided practice container focused on the capacities most likely to break down under pressure: identity boundaries, escalation reduction, sustained presence in high-stakes moments, and the ability to repair rather than retreat when friction shows up.
The cohort format works because relational capacity has to be practiced with other people. The friction, resonance, misreads, and repair moments that emerge in the group are not side effects. They are part of the work.
Structured group containers also tend to produce more durable behavioral change than isolated learning. Change requires repetition in context. A cohort provides that context over time.
The Relational Intelligence cohort works with how patterns surface under pressure, what identity boundaries look like in practice, how intimacy reveals patterns around closeness, avoidance, power, and reciprocity, and how the relationship with self shapes every external relationship.
That last dimension — the relationship with self — tends to get the least airtime in professional development contexts, and it may be the most consequential. The way a person relates to their own internal experience, the degree to which they can stay present to uncertainty without collapsing or overreacting, the quality of their internal coherence under pressure — all of this transmits directly into their relational behavior. You can’t build relational integrity externally while the internal relationship is fragmented. It doesn’t hold.
I am not framing this as a problem to be fixed. Most of the people drawn to Somabase’s work are already capable, already functional, already succeeding by most external measures. What the Relational Intelligence track offers is not remediation. It is precision development — the kind of capacity building that shows up in decision coherence, in the quality of partnerships, in the ability to navigate high-stakes relational dynamics without defaulting to patterns that are no longer useful.
The organizations and individuals doing this work will be better equipped for what’s coming. The complexity of human collaboration — with each other, and increasingly with intelligent systems — is not decreasing. The relational demands are increasing. Treating relational intelligence as infrastructure, and building it accordingly, is not the soft choice. It is the strategic one.
Somabase is still early. The Relational Intelligence cohort is an experiment, running in real time, with real people who are willing to do serious work in a structured container. If that sounds like the kind of development you’re ready for, we’d like to hear from you.
By Erik Horbacz · February 2026
Most frameworks for relational development stay at the level of communication tips, conflict models, and listening techniques. Those are useful. They do not always reach the deeper patterns driving behavior.
Intimacy is one of the clearest places those patterns become visible. Closeness raises the stakes. It exposes how someone handles vulnerability, desire, boundaries, reciprocity, honesty, and repair.
Let me be direct.
How someone handles wanting, asking, receiving, or setting a limit in close relationship often maps to how they handle recognition, influence, disagreement, and dependence elsewhere. The context changes. The structure often does not.
Intimacy compresses the distance between pattern and consequence. Avoidance becomes visible faster. Control becomes visible faster. Boundary collapse becomes visible faster. So does integrity.
That is why Somabase treats intimacy as a useful domain of inquiry inside relational development. It reveals relational architecture quickly and with less room for performance.
In practice, this work is handled through structured coaching and guided reflection focused on patterns around closeness, honesty, boundaries, reciprocity, and self-trust. The aim is not disclosure for its own sake. The aim is clearer pattern recognition and more coherent behavior.
The relationship with self sits underneath all of it. A person who can stay honest with themselves about what they want, what they will accept, and where they need a boundary tends to show up with more coherence everywhere else.
People who do serious work in this domain show up differently. They stay present longer. They repair faster. They hold clearer boundaries without collapse or aggression. That changes leadership, partnership, and collaboration.
This requires a strong container: skilled facilitation, clear structure, serious participants, and ethical boundaries. That is the standard Somabase is building toward.
This is serious work for people ready to examine how they relate, not just how they present.
If you’re curious about what that container looks like and whether it’s the right fit, we’re having those conversations.
By Erik Horbacz · February 2026
One of the persistent challenges in behavioral development work is the lag between what someone intends and what they actually do — and the further lag between what they do and their awareness of having done it. Patterns are, by definition, automatic. They run below the level of conscious deliberation. This is what makes them efficient, and it’s also what makes them difficult to change: you can’t work with what you can’t see.
This is the problem MS360 is designed to address.
MS360 — MindScape360 — is Carinda Salomon’s biometric translation technology. I want to be precise about that attribution, because it matters: this is Carinda’s work. I am Somabase’s co-founder. She is the developer of this technology. What we are building together is an integration — an exploration of how MS360’s visibility layer can work alongside Somabase’s cohort model to accelerate the behavioral pattern recognition that the group work is designed to develop.
What the technology does is translate internal physiological states into visible patterns. Biometric data — the kind that reflects internal states — becomes a readable signal rather than an invisible undercurrent. The body is always tracking. MS360 makes some of that tracking legible.
It does not diagnose. It does not interpret meaning. It does not tell you what to do. It provides data — a physiological layer that participants can reference as they move through the relational and collaborative work of the cohort.
The value of this visibility is straightforward. The feedback loop gets shorter. Instead of needing weeks of reflection to identify a pattern, a participant can see how pressure, behavior, and response start linking together in specific moments. The connection becomes more observable and easier to work with.
That does not replace coaching or guided practice. It supports them. MS360 adds another reference point participants can use to sharpen self-observation, notice recurring patterns faster, and bring clearer material into the work.
MS360 is used in parallel with the Somabase cohort work, not embedded within the group sessions themselves. This is an important distinction. The cohort sessions are a relational container — a structured group practice space where human dynamics are the primary material. The MS360 layer is something participants work with alongside that, providing a complementary data signal they can bring to their own reflection and, where relevant, to the coaching context.
The hypothesis we are testing is this: when people can see their patterns — physiologically, not just behaviorally — they develop the capacity to change them faster. When the invisible becomes visible, it becomes workable. This is not a guarantee. It is a working premise that we are testing in real time, with real participants, in an early-stage experiment.
I want to be honest about where we are with this. MS360 is not a validated clinical instrument in the sense that it has been through the full arc of peer-reviewed longitudinal study. It is a sophisticated technology in active development, being integrated with a cohort model that is itself in active development. Carinda and I are building this carefully and iteratively. The early signals are promising. We are treating them as signals, not conclusions.
There is a particular quality of self-knowledge that becomes available when behavioral patterns are visible rather than inferred. Most people have some awareness of their patterns — the circumstances under which they become reactive, the relational dynamics that reliably produce a specific response. But awareness of a pattern in the abstract is different from watching it unfold in real time. The latter is more actionable. It creates a reference point: a moment you can return to, examine, and use as the basis for a different choice next time. MS360 is designed to provide exactly that kind of reference point — a data signal that is personal, specific, and grounded in what your own physiology actually did, not what you think it did.
What makes this integration worth pursuing is the nature of the problem it addresses. The Somabase cohort work — whether in Relational Intelligence or AI Collaboration — is designed to develop stable, grounded behavioral capacity over time. That development happens through practice, reflection, and the feedback that a skilled facilitated group provides. Adding a physiological visibility layer does not replace that process. It creates a complementary data channel that participants can use to sharpen their own self-observation and accelerate the pattern recognition that the group work initiates.
The goal is not technology for its own sake. The goal is acceleration of genuine behavioral development — the kind that persists beyond the cohort container and shapes how someone actually functions in their relationships and their work. If MS360 can meaningfully contribute to that, it earns its place in the model. We believe it can. We are finding out.
The combination of cohort-based relational work with the physiological visibility that MS360 provides is foundational to how Somabase operates. Every participant in the cohort works with MS360 as part of the program — a behavioral mirror that runs alongside the group work, providing reference points that deepen the development process.
Carinda’s work deserves its own exploration, and we’ll be publishing more on the technology itself in coming months. For now, this is the orientation: a technology that makes internal states visible, integrated with a group practice model that makes relational patterns workable. Two complementary approaches to the same fundamental challenge — developing the capacity to see clearly enough to change.
By Erik Horbacz · February 2026
What follows is not a manifesto. It is a working document — a set of principles distilled from the experiment that Somabase is running, refined through the cohort work we’ve done so far, and held loosely enough to be revised as the evidence develops. These are operating hypotheses, not conclusions. They are the intellectual foundation we’re building on, stated plainly so they can be examined, challenged, and tested.
If you’re building in this space — or thinking seriously about what it means to work with intelligent systems well — these are the premises I’m working from.
1. Your state determines your output.
This is the most fundamental principle in the model, and the most consistently underestimated. The quality of what you produce — your decisions, your communications, your collaborations — is downstream of your internal state. An intelligent system can amplify your capacity. It will also amplify your noise. If you are operating from a state of chronic reactivity, avoidance, or fragmentation, AI will not solve that. It will scale it.
Human stability is not a precondition for working with AI in the mechanical sense that hardware compatibility is. It is a precondition in a deeper sense: the quality of the person operating the tool shapes the quality of what the tool produces. Before asking what AI can do, it’s worth asking what state you’re in when you’re using it.
2. Delegation without discernment becomes dependency.
Intelligent systems are extraordinarily capable of handling tasks that were previously time-consuming. The efficiency gains are real. The risk is that the relief of offloading work can gradually extend to offloading judgment — the evaluative capacity that determines what to do with the outputs, how to use them, and whether they serve the actual goal.
Discernment is the capacity to judge well. It is developed through experience, through making decisions and living with their consequences, through building pattern recognition in real-world contexts. Delegating tasks is leverage. Delegating judgment is atrophy. The distinction requires ongoing attention.
3. The quality of your relationships determines the quality of your collaboration — with humans and with AI.
How you relate to other people shapes how you relate to every collaborative process. Relational patterns are not domain-specific. A person who defaults to control in close relationships tends to use technology in a controlling way — extracting answers rather than exploring possibilities, using AI to confirm what they already think rather than to genuinely challenge it. A person with high relational integrity tends to collaborate well with both humans and systems.
This is one of the core hypotheses of Somabase’s model: relational intelligence is not separate from AI collaboration capacity. It is foundational to it.
4. Velocity without coherence produces escalation, not progress.
AI dramatically increases the speed at which work can be produced. This is valuable when the direction is sound. It is genuinely dangerous when the direction is unclear, when the underlying thinking hasn’t been examined, or when the person operating the system is in a reactive state. Faster in the wrong direction is worse than slow.
Coherence — clarity of intent, alignment between what you say you want and what you’re actually doing — is the prerequisite for productive velocity. Somabase’s cohort work is substantially about developing coherence: the capacity to act from a stable, examined position rather than from the momentum of habit.
5. Identity boundaries are the prerequisite for productive partnership — with any intelligence.
Partnership requires distinction. You cannot have a genuine collaboration with another intelligence — human or artificial — without a clear enough sense of yourself to know where you end and the collaboration begins. When identity boundaries are weak, collaboration tends toward either enmeshment (losing yourself in the process) or reactivity (defending against the process because it feels threatening).
Identity clarity does not mean rigidity. It means knowing your own values, preferences, and positions well enough to engage with difference without losing yourself in it. This is as relevant to working with AI systems as it is to close human relationships.
6. Behavioral patterns are consistent — they surface in relationships, in work, and in how you interact with technology.
One of the most reliable observations from behavioral research is that patterns transfer across contexts. The person who avoids directness in personal relationships tends to avoid it with AI — crafting prompts that hedge, that avoid specificity, that don’t actually ask for what they need. The person who defaults to dominance in professional contexts tends to use AI transactionally, extracting outputs without real engagement. The person with high relational integrity tends to interact with intelligent systems with a similar quality of presence.
This is not determinism. It is pattern recognition. And it is an invitation — if you want to understand how you actually relate, look at how you’re relating right now, in every context that’s in front of you.
7. Visibility accelerates change.
Patterns that operate below awareness are difficult to change. Patterns that are visible — behaviorally, physiologically, through the reflection of a skilled practitioner or a cohort — become workable. The reason Somabase integrates the MS360 biometric visibility layer into the cohort model is not a belief in technology for its own sake. It is an application of this principle: when you can see your patterns, you can work with them. When you can’t, you’re working in the dark.
This is also why the cohort format itself is valuable. Other people reflect your patterns back to you in ways that solo reflection doesn’t reach. Visibility is not only a technological function. It is a relational one.
8. Community is the container — individual development happens fastest in structured group settings.
Behavioral change is not primarily an information problem. Most people who struggle with relational integrity, with stability under pressure, with decision coherence — they don’t lack information about what good behavior looks like. They lack the conditions for sustained practice and feedback in real-world relational contexts. Those conditions require other people.
Research from Frontiers in Psychology (2025) and the Brandon Hall Group’s work on learning effectiveness both document this: structured group containers produce more durable behavioral change than individual learning. The cohort is not just a delivery mechanism for content. It is the development environment itself.
These eight principles are the framework I’m building from. They are being tested through Somabase’s cohort work, refined by what we observe, and revised when the evidence demands it. Some of them will hold. Some will be refined beyond recognition. That’s the nature of working at the edge of something genuinely new.
The broader project — understanding how human beings can develop the stability, relational capacity, and behavioral coherence required to work with intelligent systems well — is not finished. It may be the defining developmental project of the next decade.
Somabase is an early attempt to build infrastructure for it. These principles are where we’re starting.
Persona 01
You’ve built something real. You’ve made decisions others wouldn’t, under pressure others haven’t felt, with information that’s never complete. The organization runs on the quality of your judgment. That’s the leverage. And that’s the exposure.
When AI compresses execution timelines, flattens hierarchies, and eliminates the buffer between your decisions and their consequences, the question shifts from what are you building to what are you running on.
The Problem
What Somabase Develops
Somabase is not a strategic advisory engagement. It doesn’t tell you what to decide — it develops the operating system that your decisions run on. Most participants begin noticing cleaner decision quality and reduced reactive pivoting within the first four weeks.
Persona 02
You are the reason things don’t fall apart. You translate vision into execution, hold accountability across a leadership team, and absorb the friction between what the Visionary sees and what the organization can actually do.
The gap most operators hit isn’t operational. It’s relational — and it’s expensive.
The Problem
What Somabase Develops
Somabase is structured cohort practice, not open-ended engagement. It fits within a defined time commitment and produces measurable behavioral outputs — not just a better frame on your challenges.
Persona 03
You’ve built real things. You understand systems, can reason through complexity, and solve problems others give up on. Your technical depth is genuine.
The disruption isn’t about whether you’re good. It’s about what “good” is worth now — and what the next layer of your competitive advantage actually is.
The Problem
What Somabase Develops
Somabase is live cohort practice — not a course, not a curriculum, not content consumption. The evidence on human skill development is consistent: behavior change requires practice environments, not information transfer.
Research Foundation
The central argument of Somabase is simple: human performance is not a fixed trait. It is the output of a specific neurological, physiological, and relational state — and that state is trainable. The science across eight independent research domains converges on this conclusion with a precision that most leadership development conversations have not caught up to.
The brain under pressure operates on a fundamentally different architecture than the brain in a stable, expansive state. Amy Arnsten’s foundational research at Yale documents the literal offline transition of prefrontal function under stress — synapse by synapse, the capacity for deliberate judgment degrades. This is not metaphor. It is mechanism. And it is the upstream condition for everything else in a leader’s performance: communication, trust-building, feedback, strategic decisions. These are all downstream of whether the evaluating brain is online.
The neurochemistry of performance maps the molecular substrate of high-functioning states — regulated cortisol, calibrated dopamine, oxytocin-rich relational environments, BDNF sufficient for learning. Body-mind coherence research establishes that the physical dimension is not incidental to cognition: interoception, heart-brain coherence, and somatic state directly determine decision quality under uncertainty. These are not soft concepts. They are measurable physiological realities with documented behavioral consequences.
The relational layer is where organizational performance is ultimately decided. Attachment patterns, oxytocin dynamics, mirror neuron systems, and the emerging science of human-AI cognitive integration all point toward the same conclusion: the quality of how people relate to each other — and to the tools they use — is the primary variable in whether any of the other capabilities are accessible under pressure. These articles are the research foundation. The work itself is what happens when you build on it.
Research Articles
Neuroscience
How the prefrontal cortex degrades under stress, why stability under pressure is a trainable neural capacity, and what this means for leadership performance.
Read →
Neurochemistry
Five molecules — dopamine, cortisol, oxytocin, serotonin, BDNF — govern every dimension of human performance. What they do and how to work with them.
Read →
Somatic Science
Interoception, heart-brain coherence, and the research establishing that the body is an active participant in every decision a person makes.
Read →
Relational Psychology
Attachment science, relational patterns under pressure, and why the quality of how people relate is not downstream of performance — it is performance.
Read →
Neuroplasticity
Why most professional development doesn’t work, what the neuroscience of behavioral change actually requires, and the four conditions that make new patterns stick.
Read →
Creativity Science
The three neural networks behind creative breakthroughs, what stress and digital saturation do to creative capacity, and why regulated states are a strategic asset.
Read →
Behavioral Visibility
Facial Action Coding System research, automated behavioral analysis, and the science behind why visible patterns become workable patterns.
Read →
Human-AI Research
Cognitive outsourcing, the extended mind, and what the research on AI’s effects on human cognition means for how we develop in the age of intelligent tools.
Read →
Social Neuroscience
Oxytocin, mirror neurons, and the social architecture of learning — why cohort-based development produces outcomes that individual work structurally cannot.
Read →
Integration
The integrated case: what the convergence of eight research domains points to, and what coherence under velocity means as an organizational and individual capacity.
Read →
Apply the Research
Somabase programs are designed to operate within the conditions the research identifies as necessary for genuine behavioral change. If the science resonates, the next step is a conversation.
There’s a particular quality of decision that gets made in the third hour of a difficult meeting, or in the moment right after someone says something that lands wrong, or when the quarter is closing and the pressure is real and the room is watching. The quality is different from a decision made in calm, unhurried conditions — and not different in a subtle way. It’s structurally different. The inputs are compressed. The options considered are fewer. The brain is running on a different operating system.
This isn’t a character flaw. It’s neuroscience.
The prefrontal cortex — the region responsible for judgment, planning, impulse containment, and nuanced communication — is exquisitely sensitive to stress. Amy Arnsten’s foundational research at Yale, published in Nature Neuroscience (2015), demonstrated that even mild stress increases catecholamine release in the prefrontal cortex, measurably reducing neuronal firing rates. The PFC doesn’t metaphorically “go offline” under pressure. It literally does. Synapse by synapse, its functional capacity degrades.
What takes over is faster and older. The brain shifts from what researchers call “reflective” to “reflexive” control — from deliberate evaluation of options to pattern-matching against stored responses. From the boardroom brain to the survival brain. The shift is designed for genuine threat: fast, decisive, effective for the situation it evolved to handle. For a negotiation, a personnel decision, or a conversation about someone’s performance — it’s the wrong tool.
Arnsten describes this as a hard biochemical transition. And she documents something further: it isn’t just acute stress that causes it. Chronic stress — the sustained, low-grade, always-on kind that most leaders and teams operate under — produces structural changes to PFC dendrites. Not just functional impairment but architectural change. The implication is that the cost of chronic pressure isn’t just bad moments of judgment. It’s a gradual degradation of the capacity for good judgment at all.
Arnsten’s research describes the biochemistry. Cognitive load research fills in the behavioral picture.
A 2025 review published in Frontiers in Psychology (Zaniboni et al.) examined what happens to decision quality as mental demand increases. The findings are precise: people generate fewer decision alternatives, evaluate fewer factors per option, and become disproportionately sensitive to how choices are framed — not the substance of the choices themselves. Under load, framing effects dominate. The first option presented carries more weight. The familiar answer wins.
What makes this particularly consequential is that people under cognitive load are typically unaware it’s happening. There’s no internal alarm that says “warning: you’re evaluating this option with 40% of your normal processing capacity.” The decision feels normal from the inside. The compression is invisible to the person experiencing it.
This is the mechanism behind what Yu (2016, Neurobiology of Stress) documented in the Stress-Induced Deliberation-to-Intuition model: under stress, the brain potentiates a shift from analytic, PFC-based decisions to intuitive, subcortical habitual responses. The shift is not random. It defaults to whatever the person already knows — their existing patterns, their established reactions, their most rehearsed responses. Stress doesn’t create poor judgment. It amplifies existing patterns, whether those patterns are useful or not.
None of this is fixed. Research published in Biological Psychiatry (Greenberg et al., 2014) used fMRI imaging to show that individuals with higher baseline amygdala-PFC functional connectivity — stronger neural communication between the reactive and the deliberate — maintained goal-directed behavior under conditions that degraded performance in others. The architecture of stability under pressure is measurable. And it varies between people. And it can be developed.
This is the key finding that changes the frame from “some people handle pressure and others don’t” to “stability under pressure is a trainable neural capacity.” It’s not temperament. It’s not grit. It’s the strength and speed of the feedback loop between the reactive brain and the evaluating brain: the loop that, when it’s functioning, lets you notice you’re about to say something reactive and choose differently. When it’s not functioning, the reactive thing gets said before the deliberate brain is consulted.
After 5–7 days of repeated stress, PFC-dependent recognition memory is profoundly impaired in animal models (Yuen et al., 2012, Neuron) — not as a one-off event but as a predictable, measurable cascade. The cost of sustained pressure is cumulative.
Judgment quality is the upstream condition for everything else in a leader’s performance. Communication. Trust-building. Feedback. Strategic decisions. Conflict resolution. These are all downstream of whether the evaluating brain is online when it needs to be.
This is why stability under pressure is not one skill among many at Somabase — it’s the foundation. Not in a motivational sense. In a mechanical one. A leader who can’t maintain decision coherence under moderate stress will make worse communication choices, produce less trustworthy relational signals, and create more escalation in the people around them — regardless of how much they know about leadership theory. The knowledge is inaccessible when the operating system has shifted.
The training approach follows from the science. You can’t develop stability under pressure by reading about it or attending a talk about it. The PFC-amygdala connectivity that predicts composed performance under pressure is built through structured, calibrated practice — repeated exposure to pressure-condition simulations, with real-time behavioral feedback, at an intensity that challenges without overwhelming. Below the stress floor that flips the brain into reactive mode. High enough to build the circuitry.
That’s the window. That’s what Somabase is designed to operate in.
The brain you bring to the difficult moment is not the brain you’re stuck with. It’s the brain you’ve built.
Most conversations about performance talk about strategy, habits, and mindset. These aren’t wrong — they’re just incomplete. Before any of those constructs become available, there’s a prior condition: the neurochemical environment in which all of it operates. Get that environment wrong, and the strategy doesn’t matter. The habits won’t hold. The mindset shifts won’t land.
The chemistry comes first.
Human performance — in relationships, decisions, communication, and creativity — is governed by a specific neurochemical environment. Five molecules are most relevant to how people actually function in professional settings.
Dopamine is widely misunderstood as the “pleasure chemical.” It’s more precisely a prediction and novelty-seeking molecule — it’s released in anticipation of reward, in response to novelty, and in contexts where the outcome is uncertain. Research from Harvard Medical School shows that individuals with high digital device usage demonstrate 15–23% reduced activity in the brain’s natural reward centers — creating a state called anhedonia, in which previously satisfying work loses its motivational pull. Stanford research puts the impairment in working memory, attention span, and decision-making capacity from dopamine dysregulation at 25–40%. The problem isn’t that people become addicted to their phones. The problem is that the phone trains the dopamine system to respond to rapid, unpredictable, shallow signals — and in that environment, deep, sustained work starts to feel unrewarding, because it’s operating in a system that’s been calibrated for something else.
Cortisol is the primary stress hormone, and its effects at chronic elevation are well-documented. A 2023 review published in Cells (Picard et al.) established that chronic cortisol disrupts hippocampal integrity, impairs declarative memory, suppresses neurogenesis, and degrades the structural and functional integrity of the brain’s primary decision-making and emotional regulation centers. Importantly, the damage isn’t only functional — structural hippocampal atrophy occurs with sustained cortisol exposure. The memory and learning system physically shrinks.
Oxytocin is the neurochemical substrate of trust. Neuroeconomist Paul Zak’s research at Claremont Graduate University established that oxytocin levels in the bloodstream increase by an average of 41% after receiving a sign of trust — and that this increase directly predicts subsequent trustworthy behavior in return. Oxytocin doesn’t just make people feel connected. It changes what they do. Research published in Frontiers in Neuroscience (Kendrick et al., 2019) shows that oxytocin promotes social cohesion, in-group conformity, and cooperative learning through specific amygdalo-frontal-striatal circuitry. Team cohesion isn’t an atmosphere — it has a molecular signature.
Serotonin governs impulse control, behavioral inhibition, and the capacity to wait. Research published in Molecular Neurobiology (Miyazaki et al., 2012) established that serotonin regulates waiting behavior and impulse inhibition: low serotonin promotes impulsive choice — selecting immediate small rewards over larger delayed ones. This is the neurobiological mechanism behind reactive decision-making, short-termism, and the leader who can’t tolerate the ambiguity long enough to make the right call.
BDNF — brain-derived neurotrophic factor — is the brain’s primary growth factor for learning and neuroplasticity. It facilitates long-term potentiation, the cellular mechanism of memory consolidation. Chronic stress suppresses it. Physical activity, social connection, and structured challenge elevate it. It is, in the most literal sense, the molecular substrate of development.
Flow is not a metaphor. Research published in Frontiers in Psychology (Van der Linden et al., 2021) on the neuroscience of flow states established that they involve simultaneous activation of the locus coeruleus-norepinephrine system in “exploitation mode,” along with dopaminergic reward activation, reduced self-referential thinking, and sustained attentional focus. Steven Kotler’s synthesis identifies five neurochemicals in simultaneous release during flow: norepinephrine, dopamine, serotonin, anandamide, and endorphins. The result is a state of increased focus, enhanced pattern recognition, elevated lateral thinking, and unusual emotional stability.
This state is not produced by trying harder. It’s produced by specific conditions: a challenge calibrated to the edge of current capability, a clear goal, immediate feedback, and an environment where self-consciousness is low enough to allow full absorption. These are engineering conditions, not luck conditions.
The neurochemical profile of burnout is the inverse of flow — and it follows a predictable cascade. Depleted dopamine baseline. Chronically elevated cortisol. Suppressed BDNF. Disrupted serotonin regulation. This isn’t a metaphor either. It’s a measurable neurochemical state that explains why burnout produces the simultaneous collapse of motivation, creativity, memory, and emotional stability. These aren’t separate symptoms — they’re the same system in cascade.
The neurochemical profile of the high-functioning leader isn’t accidental. It’s the output of specific conditions. A regulated cortisol baseline. Dopamine circuits that haven’t been hijacked by digital novelty-seeking. Serotonin sufficient for impulse containment. Oxytocin-rich relational environments. BDNF elevated enough for the brain to actually change.
These are the conditions Somabase is designed to produce: structure that provides challenge without overwhelm, safety sufficient for genuine social learning, relational connection that activates the oxytocin circuitry, and physical engagement through MS360 (Carinda Salomon’s technology) that engages the body as a full participant in development.
The chemistry doesn’t lie. But it also isn’t fixed.
Most leadership development treats the body as irrelevant. The brain is where the action is — cognition, emotion, decision. The body is just transport. This is wrong, and the evidence against it is now substantial enough that holding the contrary position requires ignoring a decade of convergent research from multiple independent scientific traditions.
The body is not a passive vehicle for the brain. It is an active participant in cognition, emotion regulation, and the quality of every decision a person makes.
Interoception is the brain’s perception of internal bodily signals — heartbeat, breath, gut tension, the subtle shifts in physical state that precede or accompany emotional experience. Research published in Frontiers in Psychology (Price & Hooven, 2018) established that interoceptive awareness is a core mechanism in emotional regulation: the ability to accurately perceive internal signals correlates directly with the capacity to manage emotional responses adaptively.
The decision quality connection is more specific than that. Research published in Frontiers in Psychology (Lischke et al., 2023) showed that individuals with higher interoceptive accuracy perform more advantageously on the Iowa Gambling Task — a validated measure of decision quality under uncertainty. The mechanism is what neuroscientist Antonio Damasio called “somatic markers”: body-based signals that function as rapid, pre-conscious guidance in decision situations, particularly ambiguous or high-stakes ones. When interoception is impaired, those signals are still being generated — but the person can’t read them. They have less data available to inform the choice.
Put directly: people with better body awareness make better decisions under uncertainty. This isn’t a metaphysical claim. It’s a functional one about how the brain uses bodily information to navigate complex situations.
Further, research published in Journal of Affective Disorders (Garfinkel & Critchley, 2019) established that high interoceptive ability correlates with adaptive emotion regulation strategies — specifically, the capacity to reappraise emotional responses rather than suppress them. Suppression is the maladaptive strategy: it costs cognitive resources, it doesn’t reduce the underlying emotional activation, and it tends to leak into behavior in ways that the person isn’t tracking. Reappraisal works differently — it changes the interpretation of the event rather than fighting the response to it. Interoception is part of what makes reappraisal available.
The HeartMath Institute has over 35 years of published research documenting the bidirectional communication pathways between the heart and brain. The heart sends more signals to the brain than the brain sends to the heart, via neural, hormonal, mechanical, and electromagnetic channels. These signals demonstrably affect brain function, including cognitive performance and emotional regulation.
The coherence model describes a specific, measurable physiological state characterized by a smooth, high-amplitude, sine-wave-like heart rate variability pattern — in which autonomic, emotional, and cognitive systems operate in synchrony. When the system is in coherence, there is coordination across physiological layers that appears in behavioral outputs: clearer thinking, reduced reactivity, greater capacity for flexible response.
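The sine-wave-like HRV pattern the coherence model describes can be quantified from beat-to-beat (RR) interval data. The sketch below is illustrative only — it is not HeartMath’s published scoring algorithm, and the function name, resampling rate, and band limits are my own assumptions. The idea: when HRV oscillates smoothly at one dominant low frequency (typically near 0.1 Hz), spectral power concentrates around that peak, so the ratio of peak-window power to the rest of the spectrum is high; an erratic, noisy rhythm yields a low ratio.

```python
import numpy as np

def coherence_ratio(rr_intervals_ms, fs=4.0):
    """Illustrative coherence ratio from RR intervals (milliseconds).

    Resamples the irregular RR series onto a uniform time grid,
    computes a power spectrum, then compares power in a narrow
    window around the dominant low-frequency peak to the remaining
    spectral power. Higher ratio = smoother, more sine-like HRV.
    """
    rr = np.asarray(rr_intervals_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                       # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)          # uniform sampling grid
    hr = np.interp(grid, t, rr)                      # resampled RR series
    hr = hr - hr.mean()                              # remove DC offset
    spec = np.abs(np.fft.rfft(hr)) ** 2              # power spectrum
    freqs = np.fft.rfftfreq(len(hr), d=1.0 / fs)
    band = (freqs >= 0.04) & (freqs <= 0.26)         # low-frequency band
    peak_f = freqs[np.argmax(np.where(band, spec, 0))]
    window = (freqs >= peak_f - 0.015) & (freqs <= peak_f + 0.015)
    peak_power = spec[window].sum()
    total_power = spec[(freqs > 0) & (freqs <= 0.4)].sum()
    return peak_power / max(total_power - peak_power, 1e-9)
```

Feeding this a rhythm modulated smoothly at 0.1 Hz produces a ratio far above that of a randomly jittered rhythm, which is the behavioral signature the coherence literature points to.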
A 2025 narrative review published in Global Advances in Integrative Medicine and Health (Burge et al.) synthesized the research: regular HeartMath coherence practice produces significant improvements in cognitive performance, emotional regulation, sleep, and stress reduction. In one randomized controlled trial of 136 students, four months of coherence practice produced statistically significant improvements in objective physiological markers. In a physician stress management study using HeartMath HRV biofeedback, 67% of participants maintained practice with persistent stress reduction benefits 56 days after the intervention ended.
Most striking: after 4–5 weeks of daily coherence practice, participants showed increased brain volume in the hippocampus — a structural neurological effect from a practice-based intervention. The body changed the brain. Not metaphorically. Structurally.
I’m deliberate about not using certain terms publicly — not because the underlying science isn’t solid, but because clinical terminology carries baggage that gets in the way of the actual work. The research on autonomic state regulation describes, in physiological terms, exactly what we observe behaviorally in Somabase’s work: the state a person is in when they walk into a room determines the cognitive and relational capabilities available to them.
When someone is in a regulated, expansive state, they can access trust, curiosity, collaborative engagement, and nuanced judgment. When the system has shifted toward reactivity, those capacities are constrained, sometimes significantly. Two people with identical knowledge and skills will perform radically differently depending on their physiological state going into a high-stakes moment. The meeting, the negotiation, the feedback conversation.
At Somabase, we describe this in terms of the operational outcomes: stability under pressure, decision coherence, relational integrity. These aren’t different concepts from what the physiological research describes — they’re the behavioral expression of it. Learning to read and regulate your own physical state, in real time, is not soft work. It’s the foundation of consistent performance.
This is why Somabase integrates the physical dimension through Carinda Salomon’s MS360 technology. Somatic data — what the body is doing — surfaces information that isn’t available through cognitive self-report alone. People often don’t know they’re escalated until they’ve already acted from escalation. The body registers the shift before the conscious mind does.
Somatic experiencing research (Brom et al., 2017, Journal of Traumatic Stress) demonstrated significant effects on symptom reduction with effect sizes of Cohen’s d = 0.94–1.26 — large by any clinical standard — through a methodology that works directly with bodily sensations and movement rather than purely cognitive processing. The mechanism isn’t a mystery: it’s restoring the feedback loop between what the body is registering and what the mind can do with that information.
When that loop functions, the body becomes an asset in the work — a real-time data source on your own state, which is the most operationally relevant information you have in any high-stakes situation.
High performance isn’t a mental game. It’s a full-body one.
In most organizational frameworks, relationships are described as important. Soft skills. Culture. Interpersonal effectiveness. The language suggests that relationships are a nice-to-have layer on top of the real infrastructure — the processes, the metrics, the strategy.
The neuroscience inverts this. Relationships are not the decorative layer. They are the substrate. The operating medium in which all other capabilities either function or fail.
Attachment theory, originally developed by John Bowlby and Mary Ainsworth and now extensively updated and empirically validated, describes how early relational experience shapes the internal working models we carry into every relationship thereafter. These models encode: can people be trusted? Am I worthy of connection? How should I interpret the signals I receive from others?
The critical insight is not that these models exist — it’s that they operate largely outside conscious awareness. A 2024 framework published in Frontiers in Psychology (Zhang et al.) established that attachment styles directly shape cognitive appraisals of workplace situations, emotional responses to challenges, and resulting behaviors. Not sometimes. Reliably. The research shows that attachment styles are activated specifically by challenging circumstances — which means high-stakes professional situations are precisely the conditions under which early relational programming runs most clearly.
A securely attached person walks into a difficult conversation with the implicit assumption that the relationship can hold the difficulty. They can tolerate disagreement without interpreting it as rejection. They can ask for what they need without it feeling like exposure. They can receive critical feedback as information rather than attack.
An anxiously attached person brings a different internal working model: relational threat is likely, approval must be maintained, connection is fragile. This produces over-functioning, people-pleasing, difficulty setting limits, and an exhausting background monitoring for signs of withdrawal or disapproval. In a leader, this pattern looks like difficulty making decisions without excessive consensus-seeking, conflict avoidance that enables dysfunction, and a preoccupation with how they’re perceived that consumes cognitive resources that should be on the problem.
An avoidantly attached person brings the inverse: connection is unreliable, so the safest strategy is not to need it. This produces a kind of functional isolation — competent, often highly productive, but not genuinely available in the relational sense. Teams led by avoidantly attached leaders tend to feel managed rather than trusted.
These are not character judgments. They are pattern responses that were adaptive at some point and are now running in contexts where they may not serve. And they are changeable.
Research published in the Journal of Occupational and Organizational Psychology (Yip et al., 2024) established that attachment dynamics play out not just in leader behavior but in follower response to leadership — follower attachment styles predict engagement and organizational trust. Which means the relational field of a team is a compounded system: each person’s relational patterns interact with every other person’s relational patterns, producing the actual culture that exists, underneath whatever the stated values are.
This is why Somabase uses the language of relational infrastructure. The way people relate to each other — their capacity for direct communication, their ability to set and hold limits, their tolerance for productive tension without interpersonal collapse — is not downstream of business performance. It is business performance, in the domain where most organizational failure actually occurs. The strategy was fine. The relational field wasn’t.
There is a newer layer to this conversation that I think is among the most consequential topics in human development right now: the human-AI relationship, and what patterns of dependence it enables.
A 2025 study published in Frontiers in Psychology (Risko et al.) examined what they call “disruptive offloading” — the pattern of delegating cognitive functions to AI in ways that gradually erode the internal capacity those functions depend on. When AI systems provide ready answers, users consistently overestimate their own understanding of the generated material. The researchers call this “illusion of competence.” The study documents “cognitive inertia” — a pattern in which the effort of independent thinking is increasingly avoided because a shortcut is available.
This is a functional parallel to anxious attachment dynamics in human relationships. It’s not just reliance on an external resource. It’s the gradual erosion of the internal capacity the resource was supposed to support.
Identity boundaries are a relational concept that applies to human-human relationships and human-technology relationships equally. Where does my thinking end and the tool’s output begin? If I can’t answer that with clarity, I don’t have a tool — I have a dependency. And dependencies, in any domain, create a specific kind of fragility.
Somabase’s Relational Intelligence track exists because relational capacity is not innate, not randomly distributed, and not fixed. It’s a skill set built on a foundation of understood patterns and practiced responses. The work involves identifying which relational patterns are running, developing the capacity to stay present under relational pressure, and building the communication precision that turns relational insight into actual behavior change.
The research from Fehr et al. (2025, ScienceDirect) establishes that ethical leadership behavior — prosocial tendencies, moral decision-making, the capacity to act from principle rather than self-protection — is grounded in attachment-based relational models. You can’t separate the relational layer from the ethical layer from the performance layer. They’re the same substrate.
Healthy interdependence — in human relationships and human-technology relationships — requires knowing where you end and the other begins. That’s not a soft skill. That’s operational clarity about the most consequential boundary in any working relationship.
Most professional development doesn’t work. Not because the content is wrong, though some of it is. Because the delivery model is wrong — and the delivery model is wrong in ways that the neuroscience of behavioral change makes quite clear.
The standard model — attend a workshop, absorb information, implement insights — fails because it misunderstands what behavioral change actually requires. Information is the cheapest input in the entire process. It’s the least limiting factor. People know more than they do. The gap between knowing and doing is not an information deficit. It’s a practice deficit.
Neuroplasticity — the brain’s ability to reorganize its structure, functions, and neural pathways in response to experience — is real and well-documented. The adult brain retains significant capacity for change throughout life. But the conditions required for that change to happen are specific, and most professional development programs get them wrong by design.
A review published in Brain Sciences (Puderbaugh et al., 2023) confirmed the core conditions: environmental enrichment, physical challenge, social connection, and novel learning all enhance neurogenesis, increase BDNF, and improve learning and memory consolidation. These are enabling conditions — they create the biological substrate in which change becomes possible. Chronic high stress, cognitive overload, isolation, and passive consumption — the conditions of most content-based professional development — actively inhibit the same processes. Chronically elevated cortisol suppresses BDNF. Suppressed BDNF impairs long-term potentiation — the cellular mechanism of memory consolidation. The information that went into the workshop doesn't get encoded as new behavior. It gets lost.
Harvard Health summarizes the neuroplasticity principle (Budson, 2025): the brain changes most when challenged in a way that requires active engagement and feedback. Not passive observation. Not information exposure. Active engagement with feedback.
This is where Anders Ericsson’s decades of research on deliberate practice become essential. Ericsson identified four necessary conditions for genuine skill acquisition: a specific goal, focused attention, immediate feedback, and discomfort. All four. Remove any one and the neurological adaptation doesn’t occur. The practice has to be targeted at the edge of current capability — not so easy it requires no adaptation, not so difficult it overwhelms the system. The feedback has to come from someone who knows what good looks like. The discomfort is not incidental — it’s the signal that the brain is being required to do something it can’t already do automatically.
A Practica Learning study (2024) made this concrete with empirical precision. In a standard leadership workshop, 64% of participants demonstrated coaching skills at the required standard. Adding just two hours of deliberate practice, three structured repetitions with expert feedback, raised that to 76%: a 12-percentage-point gain from two hours of the right kind of practice. For specific communication behaviors, the improvement reached 41%.
That number matters: 41% improvement in a measurable interpersonal skill from deliberate practice versus information exposure alone. The information doesn’t build the skill. The practice does.
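A note on how such gains are denominated, since reports often mix the two conventions: a move from 64% to 76% is a 12-percentage-point absolute gain, and roughly a 19% gain relative to the baseline. A minimal sketch of the arithmetic (the numbers are the ones quoted in the passage, not the study's raw data):

```python
def improvement(before_pct: float, after_pct: float) -> tuple[float, float]:
    """Return (absolute gain in percentage points, relative gain in percent)."""
    absolute = after_pct - before_pct          # difference of the two rates
    relative = absolute / before_pct * 100     # gain as a share of the baseline
    return absolute, relative

abs_pts, rel_pct = improvement(64.0, 76.0)
print(f"{abs_pts:.0f} points absolute, {rel_pct:.2f}% relative")
# → 12 points absolute, 18.75% relative
```

The distinction matters when comparing programs: a small absolute gain on a low baseline can look dramatic when reported as a relative percentage.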
Individual development — a person working alone through content, or even with a single coach — is missing a critical element. Behavioral change in interpersonal skills requires other people to practice with. This is not a logistics preference. It’s a neurological one.
Research published in Frontiers in Education (Mauldin et al., 2024) on cohort-based learning in graduate programs found that cohort structure improved critical thinking, professional development outcomes, and content retention — but the mechanism was specific. The improvement came from peer accountability, exposure to diverse perspectives, and the development of critical thinking through actual dialogue with peers who see things differently. Not just the content. The relational container in which the content was engaged.
Collaborative learning research (ScienceDirect, 2014) establishes that for high-complexity tasks, group learning outperforms individual learning because cognitive load can be distributed and individual knowledge gaps are filled by peer contributions. A cohort is not just a motivational support structure — it’s a distributed cognitive resource that enables each participant to operate above their individual ceiling.
The standard professional development workshop fails on all four of Ericsson’s deliberate practice criteria simultaneously. The goal is general, not specific. The attention is diffuse. The feedback is absent or delayed. The discomfort level is low — learning about something is not the same as attempting it with someone watching who can tell you what you actually did.
The workshop also produces what the research calls the illusion of competence. You understand the concept. You can explain it. You can recognize it in others. This creates the subjective experience of learning without the behavioral change that constitutes actual skill. The knowledge is real. The skill isn’t there yet.
Somabase uses 8-week cohorts because 8 weeks is the minimum duration for the neuroplasticity conditions to produce durable behavioral change. BDNF elevation requires sustained practice over weeks — not a weekend. New patterns need to be practiced enough times that they begin to displace the existing pattern, which means they need to be practiced in contexts that actually activate the existing pattern. Live practice in a structured group does that. Watching a video of someone else doing it doesn’t.
MS360’s behavioral visibility function provides the expert feedback element that Ericsson’s model identifies as necessary. You can’t see your own behavioral patterns in the moment you’re producing them. That’s the fundamental challenge of interpersonal skill development: the thing you need to see is invisible to you when you’re doing it. External visibility closes that loop. It makes the implicit explicit — and the explicit is what can be worked with.
Change in adults follows the same principles as change in any biological system: it requires the right conditions, the right inputs, and enough time for the structure to actually reform. Those conditions are documented. And they’re buildable.
Adults can change. The question is whether we’re using the methods that actually produce it.
There is a narrative that treats creative work as the output of pressure — the deadline that forces inspiration, the crisis that demands innovation. The neuroscience says something different. Creativity is not a personality trait that activates under heat. It is a state-dependent capacity, and the state it requires is one most modern work environments systematically destroy.
Understanding why starts with the architecture of the creative brain.
The brain runs creativity through the coordinated activity of three large-scale networks. The Default Mode Network (DMN) handles internal thought, imagination, and the mind’s ability to wander — to make remote, unusual associations between distant ideas. The Executive Control Network (ECN) provides directed attention and working memory, holding multiple threads simultaneously. The Salience Network (SN) detects relevance and switches focus between the internal and external world.
Creative breakthroughs require the DMN and ECN to be simultaneously active and coordinated — imagination running in parallel with analytical holding capacity. This coordination depends on the Salience Network staying quiet enough to allow internal, associative processing to continue without constant interruption.
In 2022, Ben Shofty and colleagues published in Nature the first direct causal evidence of this: using cortical stimulation in awake patients performing divergent thinking tasks, they showed that disrupting dominant-hemisphere DMN nodes specifically impaired creative fluency. Stimulating non-DMN regions had no effect. This is not correlation — it is mechanism. DMN integrity is a prerequisite for the kind of thinking organizations most urgently need from their people.
A landmark review published in Frontiers in Psychology (Vartanian et al., 2020) synthesized neuroscience and behavioral research to establish precisely what stress does to this architecture. Acute stress increases Salience Network activity and decreases Executive Control Network activity — the direct inverse of what creativity requires. Attention narrows to environmental threats. Working memory resources contract. The internal, associative space the DMN needs to operate is invaded by threat-detection circuitry.
Research published in Thinking Skills and Creativity (2024) confirmed that acute stress specifically degrades divergent thinking — the capacity to generate multiple novel solutions — through cortisol elevation and reduced cognitive flexibility. Convergent thinking, finding a single correct answer, remains relatively unaffected. Translation: stress leaves analytic execution intact while removing the generative, original thinking that produces strategic advantage.
This is not a minor impairment. It is the selective elimination of the cognitive mode that produces differentiated value.
Stress is not the only threat to DMN access. The digital environment we work in is architecturally hostile to creative thought in a second and distinct way.
Gary Small and colleagues, publishing in Dialogues in Clinical Neuroscience (2020), documented that heavy media multitasking correlates with reduced volume in the anterior cingulate cortex — a structural brain change associated with reduced attentional control. Heavy multitaskers show functional changes in prefrontal activity, the neural signature of attention trained toward rapid switching rather than sustained focus. These are not temporary performance decrements. They are architectural adaptations to an attention environment that rewards reactivity.
The statistic that lands hardest from this body of research: simply seeing a smartphone — not using it, just having it in visual range — measurably lowers working memory capacity and decreases performance on cognitive tasks. The device does not need to be active. Its presence alone pulls attentional resources away from sustained internal processing.
The word “calm” can sound passive. Scientifically, it is anything but. Calmer, grounded physiological states enable DMN-ECN coordination by releasing the Salience Network from threat-detection mode. In the first neuroimaging study to capture how the brain achieves a creative flow state, researchers at Drexel University (Drexel Creativity Research Lab, 2024) found that flow is characterized by reduced prefrontal self-monitoring combined with coordinated DMN-ECN activity: a state requiring sustained internal quiet.
That quiet is not emptiness. It is the neural condition under which the brain can do its most sophisticated work.
The economic case for organizational creativity has never been stronger. In the AI era, the thinking AI cannot do — divergent, contextually wise, relationally grounded, genuinely novel — is precisely the thinking that justifies human presence on complex problems. Every organization is in a race to maintain access to that capacity in its people.
But you cannot train creative thinking directly. You train the conditions that make it accessible. Stability under pressure reduces the Salience Network’s chronic engagement. Behavioral containment creates the internal spaciousness that DMN processing requires. Reduced digital reactivity allows sustained internal attention to develop and hold.
Somabase’s work on grounding and escalation reduction is not preparatory to creative work — it is the enabling condition for it. Organizations that invest in the regulated states of their people are making a direct investment in strategic imagination.
Creative work is a downstream expression of a regulated interior. Protect that, and the ideas follow.
There is a version of self-awareness that lives entirely in the mind — introspective, conceptual, and largely disconnected from what everyone around you can already see. Most developmental work operates in that space. The problem is not the work itself; the problem is that the behavioral patterns most worth addressing are the ones least visible to the person running them.
The face is where those patterns surface. And the science of reading it has become precise enough to be practically useful.
In the late 1960s and through the 1970s, psychologist Paul Ekman developed the Facial Action Coding System (FACS) — a comprehensive, anatomically grounded system for coding all visible facial muscle movements through “Action Units” (AUs). The system, published with Wallace Friesen in 1978 and significantly updated in 2002, has since become the standard reference for research connecting facial expressions to internal emotional states.
The core finding that FACS research produced: facial expressions are the involuntary output of internal emotional states. When the internal state and the presented expression diverge — when someone is managing their display — “microexpressions” appear. These are brief, full-face expressions, lasting as little as 1/25th of a second, that expose the genuine emotion before the intentional expression takes over. According to the Paul Ekman Group, microexpressions occur in everyone, without their knowledge, and cannot be prevented. The face leaks what the mind tries to manage.
This makes facial data uniquely valuable as a developmental signal. Self-report tells you what someone believes about their own patterns. Behavioral observation tells you something more accurate. Facial data provides access to the space between the two — the gap where the most consequential developmental information lives.
Ekman’s work provided the framework. Modern automated analysis provides the scale. Research published in Frontiers in Psychology (Krumhuber et al., 2020) reviewed the use of facial coding in applied research and found that systems like Noldus FaceReader achieve 89% accuracy in classifying emotional expressions, with FACS Action Units detected at 86% mean accuracy. More recently, machine learning analysis of facial movement in ordinary video has been shown to reliably identify Big Five personality traits (Li et al., 2022, Frontiers in Public Health).
The face is not just communicating emotion in the moment. Over time, it reveals consistent patterns: habitual response tendencies, characteristic emotional tones, and the behavioral signatures of how a person moves through different kinds of relational and performance pressure. These patterns are measurable, and measurement is the precondition for change.
The mechanism that makes facial data developmentally useful is grounded in biofeedback science. A meta-analysis published in Frontiers in Psychology (2025) covering 41 studies and over 2,300 athletes found that biofeedback and neurofeedback training produced statistically significant effects on mental health (SMD = 0.76), athletic performance (SMD = 0.88), and cognitive performance. These are large effect sizes.
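For readers unfamiliar with the notation, SMD (standardized mean difference, Cohen's d in its simplest form) expresses a between-group difference in units of pooled standard deviation; by convention, values around 0.8 are considered large. A minimal sketch of the computation, using illustrative numbers rather than the meta-analysis's data:

```python
import math

def standardized_mean_difference(m1, s1, n1, m2, s2, n2):
    """Cohen's d: difference in group means divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    )
    return (m1 - m2) / pooled_sd

# Illustrative: training group mean 78 (SD 10, n=30) vs. control mean 70 (SD 10, n=30)
d = standardized_mean_difference(78, 10, 30, 70, 10, 30)
# pooled SD = 10, so d = 0.8 — a "large" effect by Cohen's convention
```

Read this way, the reported SMDs of 0.76 and 0.88 mean the trained groups ended up roughly three-quarters to nearly one full standard deviation above the comparison groups.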
The mechanism is straightforward: biofeedback creates a feedback loop between an internal state — previously invisible to the person experiencing it — and an external signal that makes it observable. This enables individuals to identify, practice, and eventually independently regulate states they previously had no conscious access to. Research published in Nature Human Behaviour (Ehrlich et al., 2023) found that participants who trained with pupil-based biofeedback maintained self-regulation abilities after the feedback was removed. The external visibility built an internal skill.
This is the scientific foundation beneath MS360, the behavioral visibility technology developed by Carinda Salomon that Somabase partners with. MS360 applies facial and biometric analysis to developmental practice, creating the feedback loop that biofeedback research consistently identifies as the mechanism of accelerated behavioral change.
What Carinda built is not an assessment. It is a mirror with a scientific frame around it — one capable of surfacing the behavioral patterns that operate outside conscious awareness, in the gap between intention and expression. When those patterns become visible, they become workable. When they become workable, they change.
Correct attribution matters to me directly here. I am not the designer of MS360. Carinda Salomon developed this technology, and the precision and care she brought to its construction are evident in every dimension of how it functions. Somabase's partnership with MS360 is a deliberate alignment: our structured practice model creates the developmental container; her technology creates the visibility that accelerates movement through it.
The developmental work most leaders have access to is overwhelmingly verbal, conceptual, and self-referential. It operates entirely within the stories people tell about themselves — which are, by definition, the least reliable data source for understanding the patterns others actually experience.
Facial and biometric data introduces a different signal: behavioral reality as it occurs, in the conditions under which patterns actually emerge. Combined with structured practice and the containment of a cohort model, that signal accelerates change in a way that conceptual insight alone never will.
Patterns that remain invisible remain unchanged. Patterns that become visible — even briefly, even uncomfortably — enter the domain of what can be developed.
The conversation about AI in organizations has been almost entirely about capability — what AI can do, how fast it is improving, which roles it will affect. A more important conversation is just beginning: what happens to human cognition when AI starts doing the work that thinking is made of?
The evidence is starting to arrive. Some of it is reassuring. Much of it requires attention.
In 1998, philosophers Andy Clark and David Chalmers published “The Extended Mind” — an argument that cognitive processes are not brain-bound. When external tools perform cognitive functions and are reliably coupled with internal processes, they become part of our cognitive system. The pen that organizes thinking, the notebook that extends memory, the colleague who holds context we cannot hold alone — these are not just aids to cognition. They are part of it.
In 2025, Clark extended this framework in Nature Communications to address AI directly. His argument: human intelligence has always been hybrid, built on layers of cognitive scaffolding, and AI represents the newest layer. This is not alarming in itself. What matters is how the integration happens. Clark names the critical new skill as “cognitive hygiene” — the intentional curation of what we delegate to our cognitive tools and what we retain as genuinely internal capacity. The risk is not that AI becomes part of our cognitive system. The risk is that the terms of integration erode the specifically human elements of the system: personal judgment, contextual wisdom, and the earned capacity that comes from navigating difficulty without shortcuts.
A 2025 paper in Frontiers in Psychology (Risko et al.) developed a taxonomy for cognitive offloading that maps the terrain precisely. The three categories are assistive, substitutive, and disruptive. Assistive offloading amplifies human capability without replacing it. Substitutive offloading performs tasks we could do independently, with some cost to skill maintenance. Disruptive offloading is the category that demands attention: it undermines metacognitive accuracy, sustained attention, and the capacity for self-directed reflection.
When AI systems provide ready answers, users consistently overestimate their own understanding of the material — a phenomenon the research calls “illusion of competence.” Repetitive AI reliance creates what the researchers term “cognitive inertia”: a pattern where the effort of independent thinking is progressively avoided, not because the capacity is gone but because it is no longer practiced.
Microsoft Research’s 2025 study of 319 knowledge workers found that higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more. The Stanford HAI 2025 AI Index Report confirmed that AI boosts productivity — but requires human judgment to translate productivity gains into meaningful outcomes.
The implication is structural: AI amplifies what you bring to it. If what you bring is diminished by the very tool you are using, the amplification is of something less than you intended.
The World Economic Forum and McKinsey Health Institute’s 2026 report “The Human Advantage: Stronger Brains in the Age of AI” synthesizes the organizational evidence clearly. 59% of employees will need retraining to meet AI-era skill demands by 2030. The skills most in demand — in both current and future contexts — are what the report calls “brain skills”: resilience, critical thinking, communication, emotional intelligence, and judgment under uncertainty.
McKinsey is direct about the consequence of neglecting this: “Without this, organizations risk losing their competitive edge and driving up preventable costs through declining employee well-being.”
These are not soft skills. They are the specifically human capacities that make AI deployment valuable rather than merely efficient. The AI era does not diminish the value of human judgment — it makes the quality of that judgment more consequential than it has ever been.
There is a gap opening in organizations between AI adoption and human readiness for what AI adoption requires. 78% of organizations reported using AI in 2024, up from 55% the previous year (Stanford HAI 2025). Investment in the human capacities that make that usage effective did not grow at anything like the same rate.
This is the gap Somabase’s AI Collaboration track addresses directly. Not resistance to AI — that is neither useful nor coherent. The work is building the human capacities that determine whether AI deployment produces genuine value: decision coherence that doesn’t collapse under speed, identity boundaries that don’t erode when external systems carry more and more cognitive load, and the metacognitive accuracy to know the difference between what you genuinely understand and what the tool understood for you.
Andy Clark’s framing of “cognitive hygiene” is useful, but hygiene is a maintenance metaphor. What Somabase is working toward is something more active: human development that creates the interior robustness to engage AI as a genuine collaborator rather than as a replacement for thinking.
The terms of that engagement matter. AI that substitutes for judgment produces the illusion of decision quality without the substance. AI that augments developed human judgment produces something genuinely better than either alone. The difference between those two outcomes is the quality of the human in the system — the stability, the discernment, the relational and contextual intelligence that no model currently replicates.
The AI era is not a threat to human development work. It is the most compelling argument for it that has ever existed.
Individual development is a reasonable starting point. It is not a sufficient destination.
The brain’s most sophisticated capacities — trust, cooperation, moral judgment, social perception, the ability to learn from and through other people — do not develop in isolation. They develop in relationship. Not as a nice complement to solo work, but as the primary substrate. This is what social neuroscience has established with increasing precision, and it has direct implications for how development actually produces behavioral change.
The neurochemical foundation of this is oxytocin. Neuroeconomist Paul Zak's research at Claremont Graduate University, published in the Harvard Business Review (2017) and extensively replicated, established that oxytocin is not just a bonding chemical — it is the literal molecular substrate of trust and cooperation. Receiving a sign of trust increases oxytocin levels in the bloodstream by an average of 41%, and this increase directly predicts subsequent trustworthy behavior. The relationship is bidirectional: trust generates oxytocin, oxytocin generates more trust.
Research from Frontiers in Neuroscience (Kendrick et al., 2019) extended this: oxytocin promotes social cohesion, conformity with trusted in-group members, and cooperative learning through amygdalo-frontal-striatal circuitry. The chemical that creates trust also creates the neural state in which learning from other people becomes most effective. This is not incidental — it is the architecture of how human beings are designed to develop.
In high-trust group environments, people are not just more psychologically comfortable. Their brains are chemically primed for learning in a way that isolated development does not produce.
Learning is a social act more than it is an informational one. Stephen Porges’ Polyvagal framework, updated through ongoing publications in Clinical Neuropsychiatry (2025), makes this physiologically explicit: the ventral vagal state — the autonomic condition associated with safety, social engagement, and full access to collaborative and creative capacity — is activated primarily through social signals of safety. Voice prosody, facial expression, eye contact, and the experience of being seen and received without threat all regulate the autonomic state in the direction of learning and growth.
In dysregulated states, the neural systems needed for cooperative engagement, nuanced learning, and genuine relational presence are constrained or unavailable. The content of what is being taught does not change. The physiological state of the learner determines how much of it lands.
This is why the environment in which development occurs matters as much as the content of the development. A safe cohort, with consistent membership and a clearly held container, activates the oxytocin-rich conditions under which the human brain learns most effectively.
The discovery of mirror neurons in the early 1990s provided the neural mechanism for a capacity humans intuitively recognized long before the science arrived: we learn by observing and internally mirroring what we see others do. The mirror neuron system fires both when we perform an action and when we observe the same action in others — creating an internal simulation of others’ states, intentions, and behaviors.
In developmental contexts, this matters practically. Observing someone navigate a difficult conversation with precision and care does not just provide information about how to handle such conversations. It activates the neural substrate of that capability in the observer. This is what skilled modeling in cohort practice produces: not just demonstration, but neurological priming.
The mirror neuron system is also the neural basis of empathy — the capacity to internally simulate another person’s emotional state. Developing this capacity more accurately and more consciously is a direct contribution to relational integrity, communication coherence, and the quality of leadership presence.
One-to-one developmental work is valuable. Coaching relationships, guided practice, individualized attention — these contribute meaningfully to development. But they do not replicate what group containers produce because they cannot.
Research on cohort-based learning published in Frontiers in Education (Mauldin et al., 2024) found that cohort learning increased critical thinking, professional development, and content retention specifically through peer accountability, exposure to diverse perspectives, and the development of capacities through the friction of genuine relational encounter. For complex, high-stakes learning, collaborative environments consistently outperform individual learning: learners in collaborative groups outperform equally knowledgeable peers working alone.
The group is not just a support structure. It is a developmental engine. The relational friction of real people with genuinely different patterns, responses, and triggers creates conditions that simulated or purely dyadic work cannot replicate. Relational capacity requires relational practice — and relational practice requires other people.
The decision to structure Somabase’s work around live cohorts is not a program design preference. It is a response to what the science consistently demonstrates about how human beings actually develop the capacities that matter most in professional contexts.
The trust and safety created by consistent cohort membership activates the oxytocin-rich conditions under which social learning reaches its highest effectiveness. The live relational dynamics — the moments of genuine friction, awkward repair, unexpected depth — create the conditions that structural brain research identifies as necessary for new behavioral patterns to consolidate.
Behavioral change at the level of pattern — not knowledge, not insight, but the automatic responses that operate under pressure — requires repetition in conditions of genuine stakes. Other people provide those stakes. There is no substitute.
The brain did not develop its most sophisticated capacities alone. Development that respects that architecture produces different results than development that ignores it.
I want to make the integrated case.
Over the course of the articles in this series, we have covered eight distinct bodies of research: the neuroscience of judgment under pressure, the neurochemistry of performance, the somatic science of body-mind coherence, the developmental logic of relationships, the mechanisms of behavioral change, the neuroscience of creativity, the science of facial and biometric data, and the emerging research on human cognition in the age of AI.
Each domain is independently compelling. But the real argument is not in any one of them. It is in what they point to together.
When you lay the eight domains alongside each other, a single coherent picture emerges: there is a specific neurological, physiological, and relational state in which human beings function at their highest capacity — in judgment, communication, creativity, learning, and trust. And there is a measurable departure from that state that chronic pressure, digital saturation, relational reactivity, and cognitive outsourcing reliably produce.
The research names this from multiple angles. Amy Arnsten’s work at Yale (Nature Neuroscience, 2015) shows the prefrontal cortex going offline under stress. The HeartMath research base (Burge et al., 2025, Global Advances in Integrative Medicine and Health) maps the heart-brain coherence that characterizes high-performance states. Ben Shofty’s 2022 Nature paper establishes that DMN integrity is a prerequisite for creative thought. Paul Zak’s oxytocin research (Harvard Business Review, 2017) identifies trust as the neurochemical substrate of cooperation. Ericsson’s deliberate practice research establishes that the roughly 22% skill gains of structured practice over standard instruction are available to anyone, given the right conditions. Andy Clark’s 2025 Nature Communications paper frames the AI era as a challenge of cognitive integration: maintaining human judgment while incorporating powerful tools.
The common thread: high-capacity human functioning is not a default state under modern conditions. It is an achieved state, and it requires specific inputs to reach and maintain.
There is a regulated version of the human operating system and a dysregulated one. The research across all eight domains maps the contrast with precision.
In the regulated state: the prefrontal cortex is online, producing deliberate and flexible judgment. Cortisol is at baseline. Dopamine circuits are calibrated — motivating rather than depleting. Oxytocin is present, sustaining the trust that relational work requires. BDNF is elevated, supporting learning and neuroplasticity. Heart coherence is high. Interoception is accurate. The Default Mode Network is accessible for creative thought. The relationship with AI is complementary rather than substitutive — human judgment intact.
In the dysregulated state, each of these is inverted. The inversion does not announce itself. People in chronically dysregulated states routinely believe they are performing well, because the prefrontal-offline state impairs the very metacognitive capacity that would allow them to notice. This is one of the most important findings in the literature, and one of the most directly useful.
Coherence is the word I use for the integrated state — when neurochemistry, body-mind alignment, relational capacity, and cognitive clarity are operating together rather than in contradiction. It is not a permanent condition. It is a recoverable one, and the speed and reliability of recovery under pressure is a trainable skill.
The goal of Somabase’s work is not peak performance as an occasional event. It is coherence as a practiced default — the ability to access regulated function reliably, to detect departure from it quickly, and to return without extended periods in reactive, degraded operation.
Somabase’s operational design maps onto the neuroscience with specific intent.
Live cohorts create the oxytocin-rich, trust-enabling environment that activates social learning circuitry. The autonomic safety that consistent, well-held groups produce is the physiological prerequisite for the states that Porges’ research (Clinical Neuropsychiatry, 2025) identifies as the precondition for genuine learning, creativity, and relational engagement.
Structured practice with expert facilitation delivers the deliberate repetition with immediate feedback that Ericsson’s research identifies as the mechanism of genuine behavioral change. Content exposure without structured repetition produces knowledge, not behavioral capacity. The practice is what moves insight into the body.
MS360, the behavioral visibility technology developed by Carinda Salomon, creates the feedback loop that the biofeedback research literature consistently identifies as the mechanism of accelerated behavioral change. A meta-analysis covering 41 studies and over 2,300 participants (Frontiers in Psychology, 2025) found that biofeedback and neurofeedback training produced effect sizes of SMD = 0.88 on performance outcomes. Making patterns visible creates the conditions for their modification. Invisible patterns remain unchanged regardless of how much insight surrounds them.
The community calibration that cohort practice provides corrects for the “illusion of competence” that both AI delegation and isolated self-assessment reliably produce. Other people are epistemically necessary. They see what we cannot see about ourselves.
The stakes of this work have increased as AI has scaled. The WEF and McKinsey Health Institute’s 2026 report “The Human Advantage: Stronger Brains in the Age of AI” is explicit: the skills most in demand in the AI era are brain skills — judgment, resilience, communication, critical thinking, and the emotional intelligence to navigate complexity in relationship with other people. McKinsey sizes the long-term AI productivity opportunity at $4.4 trillion, an opportunity realizable only through effective human-AI collaboration.
The cognitive outsourcing research adds urgency: heavy AI use in routine cognitive tasks produces measurable declines in metacognitive accuracy and critical thinking engagement over time (Risko et al., 2025, Frontiers in Psychology). The human advantage degrades if it is not actively maintained. This is not a case against AI. It is a case for deliberate investment in the human infrastructure that AI deployment requires.
The phrase I return to is coherence under velocity. The challenge of the current moment is not complexity alone, not speed alone, not relational difficulty alone — it is all of them simultaneously, at a pace that gives the system little time to recover between demands.
Coherence under velocity is what Somabase is building toward. It is the integrated result of everything the science across eight domains points to: a human being who can think clearly under pressure, build trust quickly, navigate uncertainty without contracting, use AI without losing themselves in it, and sustain the conditions that keep the whole system running.
This is not soft work. It is the hardest and most consequential development available to leaders and organizations that want to compete well in the decade ahead. The science has always supported it. The scale of what’s now at stake makes it urgent.