Two Paths
AI Collaboration
Build clearer thinking, stronger boundaries, stable decisions, and cleaner communication under speed.
Explore AI Collaboration →
Relational Intelligence
Reduce escalation. Strengthen communication. Build the relational capacity leadership depends on when pressure rises.
Explore Relational Intelligence →
The Method
Somabase works through live cohorts, structured practice, community calibration, and behavioral visibility through MS360. This is how judgment, communication, and leadership capacity become more reliable under velocity.
6–8 week programs with live facilitation, real pressure, and direct feedback.
Live relational feedback that makes communication patterns visible and workable.
Behavioral visibility that surfaces patterns affecting judgment, communication, and leadership under pressure.
For Organizations
Custom cohorts for leadership teams where AI adoption is moving faster than human readiness. Built to improve decision quality, communication coherence, escalation reduction, and responsible AI collaboration.
Enterprise Inquiry →
The Core Principle
Clear thinking produces better output. Unclear thinking produces faster confusion. AI scales judgment and it scales distortion. Your relationship to the tool shapes the value you get from it.
Outcomes
Structure
Participants leave with a clearer way to think, decide, and lead with AI.
Enterprise
Somabase delivers custom internal cohorts built around the human patterns shaping AI performance inside your organization.
Enterprise Inquiry →
Partnership
MS360 provides a behavioral translation layer — a mirror that surfaces invisible patterns. Built into every Somabase cohort through our partnership with Carinda Salomon.
Learn more about MS360 →
Recognition
Relational patterns are consistent. They surface in your partnerships, your team, your intimacy, and your self-talk — until something shifts.
Outcomes
Core Domain
How you relate under pressure reveals how you lead everywhere — escalation patterns, boundary dynamics, communication clarity, self-trust. Somabase treats relational integrity as a core domain of leadership development, using it as a precision mirror for the patterns that shape every professional and personal relationship you have.
Structure
Participants leave with a stable relational framework.
What It Is
MS360 provides behavioral pattern visibility through biometric translation. It surfaces signals that are otherwise invisible — helping individuals and teams see patterns in real time.
A calibration tool. A behavioral mirror. MS360 gives you information — the interpretation and action remain yours.
How It Works
For Individuals
MS360 is built into every cohort — a feedback layer that runs alongside your development, supporting awareness without creating dependency. You use it as a mirror, then set it down.
For Teams
In enterprise cohorts, MS360 offers behavioral pattern visibility — useful for decision stability tracking, escalation pattern recognition, and communication coherence assessment.
Who This Is For
Leadership teams moving quickly with AI while decision quality, communication, and trust are under strain. High-escalation environments where reversals, fragmentation, and reactive leadership are costing real money and talent.
What's at Stake
Decision quality drops. Communication fragments. Escalation increases. Teams move faster without getting clearer. AI adoption adds output without improving alignment.
The Process
Diagnostic conversation. Identify where velocity, decision drift, escalation, and communication breakdown are limiting AI value.
Custom 6–10 week cohort mapped to your organization's specific dynamics and needs.
Live sessions. Structured practices. MS360 behavioral visibility throughout.
Executive summary. Findings and recommendations. Continuation options.
Inquiry
Tell us about your organization. We'll respond within 48 hours to schedule a discovery call.
Received
We'll be in touch within 48 hours to schedule a discovery call.
Learn about MS360 →
What This Is
Somabase community is a live relational environment where members practice stability, receive feedback, and calibrate together. Active participation, not passive consumption.
Real-time relational dynamics that mirror your patterns back to you in a structured, supported container.
Integration happens when you practice with others, not alone. The community is the practice environment.
The community holds the container. You bring the work. Structured support. Real accountability.
Why carefully constructed communities are the most important thing we can create in a digital world.
By Erik Horbacz · February 2026
Why maturity is the bottleneck for intelligent collaboration.
Read →
Maintaining agency in an AI-mediated world.
Read →
How to hold decisions when everything accelerates.
Read →
What instability costs organizations adopting AI.
Read →
This is not self-help. It is operational capacity.
Read →
A precision mirror for relational patterns.
Read →
Principles for the next decade of working with intelligence.
Read →
What It Is
Somabase helps leaders and organizations build the human advantage required to create real value with AI. We focus on judgment, decision stability, communication coherence, and relational capacity under speed.
The platform operates through live cohorts, structured practice, community calibration, and integrated behavioral visibility through MS360 (developed by Carinda Salomon).
Three Pillars
Somabase develops three integrated capacities: stability under pressure, decision quality with AI, and relational clarity under velocity. These capacities determine whether speed creates leverage or chaos.
Stability holds pressure. Judgment guides decisions. Relational clarity keeps trust and communication intact.
What Somabase Is
In Practice
Somabase develops the judgment, communication, and stability leaders need when AI increases speed, volume, and pressure.
Application Received
Next step: Schedule a 20-minute consult to discuss your application.
Schedule Consult
While you wait, read: "AI Amplifies Your State" →
Application Received
Next step: Schedule a 20-minute consult to discuss your application.
Schedule Consult
While you wait, read: "Relational Intelligence as Infrastructure" →
What to expect
This is a 20-minute discovery conversation. We'll discuss where you are, what you're working on, and whether Somabase is the right container for your next stage of development. Come with honesty about where you're stuck. No preparation required beyond that.
Before You Book
Answer a few questions so we can make the most of our time together.
Why carefully constructed communities are the most important thing we can create in a digital world
There is something distorted and inflated, and we all feel it.
Scroll through any feed for ten minutes, and you can taste it — the hollow aftertaste of connection that isn't. Thousands of followers, dozens of group chats, notifications piling up like leaves in a gutter. We are more networked than any generation in human history, and somehow more alone.
I'm not pointing fingers. I've been in these spaces. I've never worked inside tech or inside someone else's company — I've been an entrepreneur since college, building things from the outside, which means I don't think the way most people in these industries think. But I've spent real time in digital communities — meme-coin Discords, self-help groups, creator circles, crypto communities, and forums that burned bright for six months and then went dark. I've watched communities form around missions and leaders and content — and I've watched most of them collapse the moment the leader lost energy, the content dried up, or the first real conflict surfaced.
The pattern is always the same. A group comes together with enthusiasm. Everyone is polite. Everyone agrees. It feels electric — like something real is forming. Then someone says something uncomfortable. A disagreement. A tension that can't be smoothed over with an emoji reaction. And instead of leaning in, the group scatters. Back to the safety of surface-level engagement. Back to the community's performance without its substance.
And here is the part that bothers me most: almost every community I've been in lives and dies by its leader or its content. People join as long as there is value being handed to them. They consume. They extract. And when the value stream slows, they leave. Almost nobody shows up to contribute as who they are. Almost nobody brings the value — they wait for it to be delivered. Communities function like audiences with membership fees.
Only when they move past that do they arrive at true community. Deep listening. Real trust. Collective intelligence that none of the individuals could have reached alone.
Most digital communities never make it past the first stage. They are dressed up as tribes but function as audiences. They have members but not relationships. Channels but not conversations. Content but not culture.
This is the dark part. This is what is real.
And it matters more than we think.
Because we are entering an era where the ground under every institution is shifting. Intelligence is being industrialized. AI systems can now do in seconds what took teams of knowledge workers months. Energy is getting cheaper. Labor as we know it is being redefined. The economic models that shaped the last century — scarcity-based, production-measured, shareholder-first — are groaning under pressures they were never built to absorb.
Ray Dalio mapped the cycles. Great powers rise, consolidate, decay, and are replaced. Raoul Pal pointed to 2030 as the moment when the convergence of AI, blockchain, and abundant energy makes the old economic rules unrecognizable. Peter Diamandis talks about “solving everything” — using industrial intelligence to crack disease, energy, and materials science. Kurzweil charted the exponential curve and said the singularity is not a distant fantasy but a near-term reality. I would argue that we are here.
They are all pointing at the same horizon. And they are all, in their own way, missing the same thing.
Technology can transform industries, domains, and specific problems. But we can't rely on it to evolve us. That work is ours to do — deliberately and with intention.
We are building powerful machines without enough human readiness for the speed they create. We can automate the how of nearly everything, but we still have weak infrastructure for the who. Who are we becoming? What do we value? How do we coordinate when the old rules stop working? How do we stay human while the pace keeps rising?
These are not philosophical luxuries. They are survival questions. And the answer — the only answer that has ever worked across civilizations, across centuries, across every tradition that left something worth inheriting — is community. Not the word. Not the brand. Not the Slack channel. The real thing.
Real community is the hardest thing to build. That's why almost nobody does it.
Real community requires you to show up as you actually are — not the curated version, not the professional persona, not the optimized personal brand. It requires conflict. It requires sitting in the discomfort of disagreement with people you've committed to, and choosing to stay instead of scroll away.
Peck described the stage before true community as “emptiness” — a space where each person lets go of their need to fix, control, convince, or perform. It looks like failure. It feels like loss. And it is the only doorway to the thing everyone says they want but almost no one is willing to earn.
I've been through something like the dark side of this personally. I was born two months early, brought back to life by the hands of strangers. I grew up with ADHD in the basement of a Catholic school, building worlds in my head that nobody else could see. Every time I tried to share what was inside me, I learned something about connection — and about what it feels like when that connection doesn't land. So I know what avoidance tastes like. I know what it means to have a voice and be afraid to use it.
In a world of high-speed extraction, the quiet ones — the ones with the most depth and the most real substance — often sit it out because the cost of engagement feels too high.
This is backward. This is the imbalance we need to correct.
So what does a carefully constructed community actually look like in a digital space?
It starts with a promise. Not a mission statement carved into a boardroom wall. A living, dynamic promise — what I call a Compelling Aligning Promise, or CAP. It is specific enough to pull action, meaningful enough to matter, and shared enough to align people around a common direction. The individual who makes the promise and the community that holds the promise operate in parallel. The founder is not above the community — they are the first member. Their model of personal alignment is the community's foundation.
Beneath that promise lives identity — values, energy, trajectory. Who are we? How do we show up? Where are we heading? This is not brand language. This is the honest accounting of what we actually stand for, the energy we bring into rooms, and where our current behavior is really taking us. When identity is clear, it becomes a filter. The right people feel it and lean in. The wrong people self-select out. No sales pitch required.
Then comes voice — authentic, resonant communication. Not marketing. Not content strategy. The actual sound of a community speaking its truth. Marshall Ganz calls it the story of self, the story of us, and the story of now. Seth Godin calls it the smallest viable audience. I call it the difference between attention and belonging. You don't need millions of followers. You need the right people hearing the right signal.
Then priorities. Then the network structure. Then rituals — the daily and weekly practices that turn a group of strangers into a culture. Then, at the root, individual ownership — each person doing their own work, taking responsibility for their own growth, feeding what they learn back into the collective.
This applies at every scale — from a small cohort to a global network. It is fractal. The same principles that govern a family or a small team can be applied to a neighborhood, a company, and an economy. The same geometry at every scale. Fractal alignment.
I want to be honest about something: I have not successfully built a community. Not yet. I've studied this, researched it deeply, lived inside communities that didn't work, and spent years developing a framework for how it could work. But the proof is still being written. I'm building now — with Corvia, an AI music community exploring emotional development, and with Somabase, a platform for human-technology collaboration and relational intelligence. Both are early. Both are laboratories for the hypothesis that intentional community, carefully constructed around shared values and honest practice, is the most powerful structure humans have for navigating change. The foundation I'm building runs deeper than I can explain in a single blog post. It will take time and context to show. But I'd rather be honest about where I am than pretend I've already arrived.
Here is where the light breaks through.
When a community gets this right — when it moves through pseudo-community and chaos and emptiness and arrives at the real thing — something extraordinary happens. The collective becomes smarter than any individual in it. Problems that seemed unsolvable from the inside become obvious from the shared perspective. People who feel lost find direction. People who feel voiceless find a frequency that is unmistakably theirs. Ventures emerge not from market analysis but from genuine need, identified by people who trust each other enough to be honest about what is missing.
The Beatles didn't just make music. They created a cultural field that millions of people stepped into and were changed by. That field was community — unstructured, messy, emergent, but real. Imagine what becomes possible when we apply everything we've learned about human development, organizational design, tokenomics, AI collaboration, and consciousness research to intentionally build that same kind of field.
Our current systems are designed to extract intelligence rather than concentrate it. Imagine the inverse: a system where the definition of value shifts from what you produced to what you contributed to the flourishing of the whole.
This is not utopian fantasy. The Economic Space Agency is building protocol infrastructure for exactly this kind of postcapitalist coordination. DAOs have already demonstrated that decentralized governance can work — imperfectly, messily, but really. Tokenized ownership turns shared infrastructure into shared income. The pieces exist.
What has been missing is human readiness. The capacity to hold disagreement without fragmenting. The discipline to use technology intentionally. The ability to value contribution over extraction.
The real work is practical. Stronger identity boundaries. Better behavior under pressure. Cleaner coordination. Better judgment around the tools shaping how we think, work, and relate.
We are in the water right now.
The waves of technological disruption are coming whether we are ready or not. The old boats — the corporations, the institutions, the governments designed for a slower world — are taking on water. Some will adapt. Many will not. The question is not whether change is coming. The question is what vessel you are in when the waves hit.
The boat is community. Carefully constructed. Values-aligned. Honest about the chaos. Willing to pass through emptiness. Built not for speed but for resilience. Crewed not by employees but by owners — people with real stakes, real voices, real accountability to each other.
This is the thing I am dedicating my work to. Not because I figured it all out — I am figuring it out in real time, messily, publicly, with the same fears and doubts as everyone else. But because the alternative is worse. The alternative is drowning alone in a sea of information, clutching a phone full of followers and wondering why none of it feels real.
If you are reading this and you feel the pull toward something more honest, more intentional, more human than what the platforms are currently offering — know that you are not alone. The avoidant ones, the quiet ones, the people who have done the inner work but haven't yet found a space worthy of their voice: you are exactly who we need. The narcissistic networks have had their turn. It is time for the people with substance to show up.
Not to perform. Not to optimize. To build something real. To construct the community carefully — with shared values, clear direction, honest communication, and the willingness to stay when it gets hard.
That is how we ride the waves.
That is how we build the boat together.
By Erik Horbacz · February 2026
There’s a premise baked into almost every AI adoption conversation happening in organizations right now: the bottleneck is technical. If we just train people on the tools, adopt the right platforms, build the right workflows — the results will follow. What we’re discovering, slowly and sometimes painfully, is that this premise is wrong.
The bottleneck isn’t technical. It’s human.
More specifically, it’s the quality of the internal state the human brings to the collaboration. And AI doesn’t just work around that state — it amplifies it.
I want to be precise about what I mean by that, because it’s easy to read “internal state” and immediately drift toward something vague or psychological. I’m not talking about mood. I’m talking about something more structural: the degree to which a person can maintain clarity under pressure, hold a decision without immediately reversing it, and sustain independent judgment when a sophisticated system is producing confident-sounding output at high velocity.
That’s what I mean by state. And when you bring that quality — or its absence — to AI collaboration, the AI doesn’t moderate it. It magnifies it.
Here’s the dynamic in practice. When you sit down to work with an AI system and you’re scattered — bouncing between competing priorities, running on sleep debt, carrying unresolved tension from an earlier conversation — the AI will produce output that reflects the fragmentation of your input. The prompts will be unclear. The outputs will be loosely formed. You’ll accept them anyway, because in a scattered state you don’t have the discriminating capacity to evaluate what you’re receiving. The output feels like it’s helping because it’s producing something, filling the space, generating motion. But it’s producing the shape of your own confusion back to you, and you’re mistaking it for signal.
The inverse is equally true, and this is where the real leverage lives. When you come to AI collaboration with clarity — a settled sense of what you’re trying to accomplish, why it matters, and what you’re not willing to compromise — the AI becomes extraordinarily useful. Not because the tool changed, but because you changed what you’re directing it toward. You can see the difference between a response that actually serves the goal and one that merely sounds like it does. You can push back on a confident-sounding answer when your own judgment tells you something is off. You can use AI to explore and then return to yourself to decide.
Research from Frontiers in Psychology (2025) introduced a taxonomy worth sitting with: cognitive offloading with AI progresses across three stages — assistive, substitutive, and disruptive. In the assistive phase, AI extends your capacity. In the substitutive phase, it starts replacing your cognition. In the disruptive phase, it actively degrades your ability to self-monitor, evaluate your own reasoning, and make accurate assessments of what you know versus what you merely received. Heavy AI use correlates with lower metacognitive accuracy. In plain language: people are becoming less able to accurately gauge how well they understand something, because AI fills the comprehension gap so quickly that the struggle — which is where learning and discernment actually happen — never occurs.
This is the illusion of competence. You feel more capable. You produce more output. But the independent judgment that makes that output trustworthy hasn’t developed — it’s been bypassed.
I don’t raise this to argue against AI. I use it extensively, and I think the collaborative potential is genuinely significant. I raise it because the frame most organizations are using to think about AI adoption is leaving out the most important variable.
When you train someone on an AI tool without developing the human infrastructure required to use it well, you haven’t improved their capability. You’ve given a powerful amplifier to someone who hasn’t worked on what they’re amplifying. If the signal is good — stable, clear, coherent — the amplifier makes it better. If the signal is noisy, reactive, or externally dependent, the amplifier makes that worse too. The tool doesn’t know the difference.
This becomes a compounding problem at scale. One reactive person using AI badly produces confused output. A team of reactive people using AI badly produces organizational chaos at acceleration. The decisions come faster. The pivots happen more frequently. The feedback loops tighten. But the underlying human capacity to hold steady — to evaluate, to commit, to maintain direction — hasn’t kept pace with the velocity the technology enables.
What develops that capacity? Not more AI training. Not more productivity frameworks. The research points toward something more fundamental: self-monitoring, metacognitive practice, and what I’d describe as stability under pressure — the ability to remain coherent when the environment is moving fast and confident-sounding information is arriving from every direction.
This is precisely why Somabase starts with the human, not the tool. Not because the tool is unimportant — it’s transformative — but because without the human foundation, the tool accelerates the wrong things. Our AI Collaboration cohort is built around a simple premise: before you can collaborate well with an intelligent system, you need a level of internal stability and independent judgment that makes that collaboration generative rather than disorienting. You need to know what you think, what you value, and what you’re responsible for — in a way that doesn’t dissolve the moment AI offers you 12 alternatives.
This isn’t a soft-skills conversation. It’s a performance conversation. The humans who will use AI most effectively are not necessarily the most technically sophisticated. They’re the ones with the clearest signal — the most developed capacity to direct, evaluate, commit, and override. Maturity, in the deepest sense of that word, is the bottleneck.
We’re at the beginning of figuring out how to develop that capacity intentionally, in the context of AI collaboration specifically. The research is early. The practices are experimental. But the direction is clear.
This is what Somabase is exploring. If that framing resonates — if you’ve felt the quality of your collaboration vary with your own state and wanted a structured way to develop what’s underneath — we’re building something for exactly that.
By Erik Horbacz · February 2026
There’s a useful distinction that most conversations about AI adoption are missing.
The distinction is between task delegation and identity outsourcing. They can look identical from the outside — both involve handing something to AI that you used to do yourself — but the internal experience is completely different, and the long-term consequences are on opposite ends of the spectrum.
Task delegation is what AI is legitimately excellent at. You have a clear goal, a defined scope, and you direct an AI system to help you accomplish it. Your judgment stays engaged throughout. You evaluate the output. You decide what to use, what to discard, and what to do next. The locus of decision-making stays with you. The AI is an instrument.
Identity outsourcing is something else. It starts subtly — an over-reliance on AI to generate not just outputs but positions. What should I think about this? What’s the right way to frame this? Is this a good idea? The AI answers, and the answer feels good, and over time the practice of generating your own positions — sitting with a question long enough to develop a view — atrophies. You’re no longer using AI to extend your thinking. You’re using it to replace the effort that thinking requires.
Research published in Social Behavior & Personality in 2024 described this pattern across five dimensions: dependency, gullibility, irrationality, unreliability, and loss of cognitive autonomy. What I find valuable about this framework isn’t the clinical language — it’s what it points to. These aren’t five separate problems. They’re five faces of the same underlying shift: the gradual erosion of the internal structures that let you direct, evaluate, and hold positions independently. When those structures weaken, you become gullible not because you’re unintelligent, but because your capacity for independent verification has been underexercised. You become unreliable not because you’re untrustworthy, but because your positions are being generated outside yourself and they shift when the external generator shifts.
The question I keep returning to — and the one that Somabase is built around exploring — is: what maintains the integrity of those structures while you’re also working with powerful AI systems daily?
Identity boundaries is the phrase I’d use. Not in a rigid sense — not a wall between yourself and the technology. More like a clear, stable sense of what belongs to you that remains coherent when an AI system is confidently offering to take it over. Your values. Your judgment. Your creative voice. Your direction. These aren’t things you need to protect from AI. They’re things you need to remain fluent in, even as AI becomes capable of simulating them quite well.
The simulation quality is actually the complicating factor here. A decade ago, AI-generated output was obviously different from human output. The quality gap made it easy to maintain a clear sense of what was yours versus what was generated. That gap is closing rapidly. AI can now produce writing that sounds like you, ideas that sound like yours, strategic reasoning that matches your usual patterns. This is genuinely useful — and genuinely disorienting. Because when the output mirrors your own style closely enough, the friction that normally signals “this came from outside me” disappears. And so does the metacognitive check that keeps you in the driver’s seat.
Frontiers in AI (2024) found that emotional engagement with AI is a significant variable in decision quality — and specifically, that higher emotional engagement tends to correlate with reduced willingness to override AI recommendations, even when the human has good reason to. This makes sense. If you’ve developed a working relationship with an AI system, if it feels responsive and helpful, the cognitive cost of contradicting it rises. Not because the AI has earned your deference — but because the emotional register of the relationship has started to carry weight in your evaluation process.
Sixty-eight percent of users report feeling more emotionally engaged with empathetic AI. That’s not a small number. And the implication isn’t that empathetic AI is bad — it’s that the humans using it need a level of relational discernment that most of us haven’t been asked to develop before. We’ve never had to navigate emotional engagement with non-human systems at this level of sophistication. The question of what’s real, what’s simulated, and what that distinction means for your judgment is genuinely new.
I don’t think the answer is more skepticism. Chronic skepticism toward your tools is a different kind of instability — it just destabilizes in the direction of over-caution rather than over-reliance. The capacity I’m interested in is something more nuanced: the ability to hold genuine engagement with AI collaboration while also maintaining a clear enough internal signal that you can feel the difference between using the tool and being used by it.
That difference — and I want to be precise here — isn’t primarily about what you’re doing. It’s about what’s happening underneath the doing. Someone can produce the exact same AI-assisted work product with either a stable sense of their own authorship intact or with that authorship quietly dissolved. From the outside, the outputs look the same. But the trajectory those two people are on is completely different. One is developing capacity. The other is outsourcing it.
Somabase’s AI Collaboration cohort is structured around making that internal distinction practical and trainable. Not through abstract frameworks, but through structured practice in the actual conditions where identity outsourcing tends to happen: high velocity, complex decisions, sophisticated AI input arriving at volume. The goal is to develop what I’d call behavioral containment — the ability to engage fully with what AI offers while retaining the internal coherence to evaluate, override, and own the outcome.
This is an experiment. There’s no established curriculum for what we’re doing because the situation itself is genuinely new. What we know from the research, from early cohort work, and from lived experience building with these tools is that the humans who navigate this best are not the most technically sophisticated or the most skeptical. They’re the ones who have done enough internal work to know what belongs to them — and who can hold that identity boundary clearly enough that it doesn’t erode under the very real pressure to just let the AI handle it.
If you’ve noticed that pressure — in your own work, in your own thinking, in the way your creative voice sometimes sounds more like your AI’s suggestions than your own — we’re building something for exactly that territory.
By Erik Horbacz · February 2026
One of the less-discussed effects of working with AI systems daily is what it does to your relationship with commitment.
Not commitment in the abstract — in the specific, practical sense of making a decision and holding it long enough for it to produce useful information. That interval — the gap between deciding and learning — is where most of the real signal comes from. Execution reveals things that analysis never could. But the interval requires something that’s becoming harder to sustain: the willingness to commit under conditions of unresolved uncertainty, when you know that more input is available if you want it.
AI has made more input perpetually available. That’s one of its most genuinely valuable properties. You can query, iterate, refine, and generate alternatives faster than any previous tool in history allowed. The problem is that the same capacity that makes AI so useful for exploration makes it actively disruptive to the part of decision-making that comes after exploration. At some point, you have to stop generating alternatives and commit to one. And the internal architecture required to do that — the capacity to tolerate the discomfort of foreclosing options, to hold a position under pressure, to trust your own synthesis when AI keeps suggesting there’s a better answer — that architecture doesn’t automatically strengthen just because you have better tools.
What I’m watching in people working with AI intensively is a pattern I’d describe as decision reversal under velocity. The decisions aren’t bad. The reasoning isn’t flawed. But the commitment doesn’t hold. Not because circumstances changed — because the mere availability of more input creates a standing invitation to reconsider. The decision gets reopened. The pivot happens before the original direction had time to produce any signal. The cycle repeats, and the organization accumulates motion without accumulating learning.
This is worth distinguishing from legitimate responsiveness. Updating your position when meaningful new information arrives is exactly right. What I’m describing is different: the compulsive revisiting that happens not because new information arrived, but because the psychological cost of staying committed is higher than the psychological cost of starting the loop over again. In a world where AI can generate a compelling rationale for almost any direction, that loop can run indefinitely. There’s always a better option available in the output.
Cognitive load theory offers a useful lens here. The mental effort required to hold a decision against the incoming tide of alternatives is itself a limited resource. When you’re working at high velocity — processing large volumes of AI-generated input, managing complex decisions across multiple domains, operating in environments where the feedback loops are tight — the available bandwidth for sustaining commitment narrows. And when commitment starts to feel too expensive, the system defaults to the state that requires less active maintenance: uncertainty, optionality, and the perpetual feeling that you haven’t quite decided yet.
The real bottleneck, in my view, is upstream of the decision itself. It’s the clarity of values that the decision is meant to express. When you know — with a kind of settled, embodied certainty rather than just intellectual acknowledgment — what you’re actually trying to build, what you’re responsible for, and what you’re not willing to trade away, the decision-making process looks different. The alternatives AI generates are still interesting. But they’re interesting from the vantage point of someone who already has a direction, evaluating whether the alternatives sharpen or dilute it. That’s very different from evaluating alternatives from a position of genuine openness — which is the correct posture during exploration, but the wrong one during commitment.
The first capacity, and the overall target, is what I’d call decision coherence: the ability to maintain internal alignment between your values, your reasoning, and your actions over time, even as external input accelerates. This isn’t rigidity. Coherent decision-makers change their minds. But they change their minds for specific reasons, in the direction of their actual priorities, rather than because the incoming data stream has temporarily made something else seem more compelling.
The second capacity is what I’d call discomfort tolerance around commitment. This sounds almost trivially simple, and yet it’s one of the places where I see even very sophisticated people struggle. Choosing means not-choosing. Committing to one path means acknowledging that the alternatives you’re not taking might have been better. AI makes this harder because it can always show you what a different choice might have produced. The counterfactual is no longer theoretical — it’s generatable, often quite persuasively, in real time. Learning to sit with the discomfort of a committed position when a confident-sounding alternative is available — that’s a trainable capacity, and an important one.
The third is what I’d describe as signal tolerance: the ability to hold a decision long enough that execution has time to generate real information, rather than abandoning the position before the experiment has run. Most decisions don’t reveal their quality quickly. The early data is often ambiguous, sometimes negative, always incomplete. The pull to reopen the decision during that ambiguous period is strong. Resisting that pull — not out of stubbornness, but out of a disciplined respect for what commitment actually produces — is a skill that atrophies when AI makes the cycle of generating and revisiting alternatives too frictionless.
None of this is a critique of using AI for decision support. Modeling scenarios, exploring alternatives, pressure-testing assumptions — these are all legitimate and valuable uses of AI in the decision process. The issue is that the same tools need to be held by humans who can complete the decision cycle: who can take in the analysis, integrate it, commit to a direction, and hold that direction with enough stability to learn from what it produces.
That last piece — holding direction under velocity — is what Somabase’s cohort work is specifically designed to develop. Not through frameworks or theoretical models, but through structured practice in the actual conditions where decision coherence gets tested: high-velocity information environments, sophisticated input from AI systems, ambiguous early signal, and the very real temptation to keep the loop open just a little longer.
This is experimental work. We don’t have a finished curriculum because the challenge itself is still taking shape. What we do have is a clear hypothesis: that the humans who navigate this era best will be the ones who did the work to develop the internal architecture for commitment — before the velocity made that work feel impossible.
If this framing matches something you’ve been living in your own work, we’re building for exactly that.
By Erik Horbacz · February 2026
When organizations talk about the costs of AI adoption, the conversation tends to focus on the visible expenses: licensing, infrastructure, training, integration. These are real costs, and they’re tractable. You can line-item them in a budget, track them against outcomes, and make rational decisions about allocation.
The cost I’m watching organizations miss is harder to quantify, but it’s larger. And it compounds.
It’s escalation.
Not escalation in the formal sense — not the escalation of a ticket to senior leadership, or the escalation of a minor conflict into a significant one. I mean something more structural: the acceleration of instability when you add AI velocity to humans who haven’t developed the capacity to hold steady under it. When that combination occurs, the feedback loops don’t just move faster. They move faster in the direction of reactive behavior, premature reversals, and amplified confusion. The speed that AI enables doesn’t go to useful outcomes. It goes to cycling through more iterations of the same unresolved dynamic.
This is the hidden cost. Not the tools. Not the training. The escalation cycles that emerge when unstable humans interact with accelerating systems.
Let me make this concrete. An organization deploys AI tools to its leadership and strategy teams. Output volume increases significantly. Decisions are being made faster — or at least, positions are being generated faster. Meetings are better prepared. Analysis is more thorough. On the surface, the indicators are positive.
But underneath, something else is happening. The increased velocity has reduced the latency that used to cushion reactive decision-making. When a team member was reactive — anxious, under-resourced, or carrying unresolved conflict — the natural slowness of information processing gave things time to settle before they became actions. The email took a day to draft. The decision had to wait for the next meeting. The proposal needed another review cycle. That friction wasn’t pure inefficiency. Some of it was absorbing the energy of reactivity before it could manifest as strategic action.
AI removes a significant portion of that friction. And with it, the natural containment that slower systems provided. Now the reactive email gets polished in minutes. The anxious pivot is articulated in a beautifully structured memo. The impulsive reversal of strategy arrives with compelling supporting analysis. The instability moves faster, and it moves dressed in competence.
This is what I mean by escalation under velocity. The underlying human dynamic hasn’t changed. The reactivity, the anxiety, the unresolved relational tension — all of it is still there. But AI has shortened the interval between those states and their organizational consequences. The escalation cycles run faster. The cost accumulates more rapidly.
Research from Frontiers in AI (2024) found that emotional state significantly affects decision quality when working with AI — and specifically that elevated emotional reactivity correlates with biased decisions even when AI is involved in the reasoning process. The AI doesn’t stabilize the emotional state of the user. It processes whatever it’s given. And when what it’s given is reactive input, it produces output that legitimizes and accelerates that reactivity.
What this means practically is that the organizations getting the most dysfunctional outcomes from AI adoption are not necessarily the ones that deployed the technology badly. Many of them deployed it well, by conventional measures. The dysfunction comes from the gap between the capability of the tools and the stability of the humans directing them. The tools are sophisticated. The humans are not yet stabilized for the environment the tools create.
There’s a particular pattern I see at the team level that’s worth naming. When one or two people on a team are internally unstable — reactive, scattered, or carrying unexamined behavioral patterns — AI tools create a kind of amplification loop. The reactive person generates more confident-sounding output, more quickly. That output enters team conversations with more apparent authority. The people with more stable judgment have less time to process and push back, because the cycle has accelerated. The quality of collective decision-making drops not because the tools are bad, but because the tools have shifted the balance toward whoever is generating the most output, fastest — which is not the same as whoever is generating the most reliable judgment.
Organizational stability under AI velocity requires something that most teams haven’t been asked to develop before: what I’d call relational integrity under acceleration. The ability to maintain constructive, clear, and grounded interactions with colleagues when the shared information environment is moving at a pace that previously would have been reserved for crisis conditions. When that capacity is absent, normal operational pressure starts triggering crisis-level responses. The escalation cycles aren’t just internal to individuals — they propagate across teams and become structural.
The conventional response to this problem is process: more structure, more sign-off layers, more review cycles. These can help at the margin. But they address the symptom, not the source. The source is the gap between the human capacity to hold steady and the velocity the technology enables. Adding process to that gap doesn’t close it. It adds friction at the output layer while leaving the underlying instability intact — and unstable humans are reliably creative at working around process friction.
What closes the gap is developing the underlying capacity: stability under pressure, decision coherence, behavioral containment. These are trainable. They’re not soft or abstract — they’re operational capacities that show up directly in the quality of decisions made under time pressure, the ability to hold a team direction when alternatives are being generated rapidly, and the reduction of escalation cycles that cost organizations real time, real money, and real relational capital.
This is what Somabase’s Enterprise program is designed to address. Not as a compliance or wellness initiative — as a performance investment. The hypothesis is simple: if the limiting factor for AI collaboration is human stability, then developing that stability is one of the highest-leverage things an organization can do in the current environment. Not instead of AI investment — alongside it. The tools are accelerating. The question is whether the humans directing those tools are developing at a commensurate pace.
We’re at the beginning of knowing how to do this systematically. The research is early. The organizational context is genuinely new. But the pattern is visible enough to act on, and the cost of not acting is compounding every quarter as AI capability advances.
The escalation cycles are the signal. They’re telling you that the investment in human capacity hasn’t kept pace. What you do with that signal is the question.
This is what Somabase is exploring — and if you’re seeing this pattern in your organization, we’re building something worth talking about.
By Erik Horbacz · February 2026
There is a category error embedded in how most organizations think about relationships. Relational skills — how people listen, navigate conflict, hold their ground under pressure, repair after rupture — get filed under “soft skills,” which is professional shorthand for: important in theory, deprioritized in practice. Trainings happen once a year. HR handles it. We move on.
The cost of this error is enormous and almost entirely invisible on the spreadsheet.
Every team runs on the quality of its relational dynamics. Not on top of them — on them. The capacity to deliver honest feedback, to stay coherent when a partnership is under strain, to move through disagreement without fracturing, to sustain presence when a conversation gets difficult — these are not interpersonal amenities. They are load-bearing elements of any functioning collaboration. When they’re underdeveloped, you don’t just get awkward meetings. You get avoidance masquerading as agreement, passive friction bleeding into timelines, decisions made by whoever tolerates confrontation least, and teams that look functional on an org chart until stress exposes the structural gap.
I call this relational infrastructure. And like physical infrastructure, most people only notice it when it fails.
The question worth sitting with is: what would it take to actually develop this? Not to talk about it in a workshop, not to absorb another framework on active listening — but to genuinely build the capacity for relational integrity under real-world pressure?
This is the question Somabase’s Relational Intelligence track is organized around.
The research here is stronger than the soft-skill label suggests. Relational quality changes with conditions, not just intention. When people are grounded enough to stay present, communication improves, listening improves, and problem-solving gets cleaner. When pressure rises past capacity, the system defaults to self-protection.
The practical implication: relational quality is not just a trait to screen for in hiring. It is something conditions can build or degrade. A well-designed cohort creates the conditions for repetition, feedback, and more reliable behavior under pressure.
Patterns formed early tend to carry forward. How people handle uncertainty, closeness, dependence, and perceived threat often repeats in adult professional and personal life. Once those patterns become visible, they become workable.
The Somabase Relational Intelligence cohort is an 8-week guided practice container focused on the capacities most likely to break down under pressure: identity boundaries, escalation reduction, sustained presence in high-stakes moments, and the ability to repair rather than retreat when friction shows up.
The cohort format works because relational capacity has to be practiced with other people. The friction, resonance, misreads, and repair moments that emerge in the group are not side effects. They are part of the work.
Structured group containers also tend to produce more durable behavioral change than isolated learning. Change requires repetition in context. A cohort provides that context over time.
The Relational Intelligence cohort works with how patterns surface under pressure, what identity boundaries look like in practice, how intimacy reveals patterns around closeness, avoidance, power, and reciprocity, and how the relationship with self shapes every external relationship.
That last dimension — the relationship with self — tends to get the least airtime in professional development contexts, and it may be the most consequential. The way a person relates to their own internal experience, the degree to which they can stay present to uncertainty without collapsing or overreacting, the quality of their internal coherence under pressure — all of this transmits directly into their relational behavior. You can’t build relational integrity externally while the internal relationship is fragmented. It doesn’t hold.
I am not framing this as a problem to be fixed. Most of the people drawn to Somabase’s work are already capable, already functional, already succeeding by most external measures. What the Relational Intelligence track offers is not remediation. It is precision development — the kind of capacity building that shows up in decision coherence, in the quality of partnerships, in the ability to navigate high-stakes relational dynamics without defaulting to patterns that are no longer useful.
The organizations and individuals doing this work will be better equipped for what’s coming. The complexity of human collaboration — with each other, and increasingly with intelligent systems — is not decreasing. The relational demands are increasing. Treating relational intelligence as infrastructure, and building it accordingly, is not the soft choice. It is the strategic one.
Somabase is still early. The Relational Intelligence cohort is an experiment, running in real time, with real people who are willing to do serious work in a structured container. If that sounds like the kind of development you’re ready for, we’d like to hear from you.
By Erik Horbacz · February 2026
Most frameworks for relational development stay at the level of communication tips, conflict models, and listening techniques. Those are useful. They do not always reach the deeper patterns driving behavior.
Intimacy is one of the clearest places those patterns become visible. Closeness raises the stakes. It exposes how someone handles vulnerability, desire, boundaries, reciprocity, honesty, and repair.
Let me be direct.
How someone handles wanting, asking, receiving, or setting a limit in close relationship often maps to how they handle recognition, influence, disagreement, and dependence elsewhere. The context changes. The structure often does not.
Intimacy compresses the distance between pattern and consequence. Avoidance becomes visible faster. Control becomes visible faster. Boundary collapse becomes visible faster. So does integrity.
That is why Somabase treats intimacy as a useful domain of inquiry inside relational development. It reveals relational architecture quickly and with less room for performance.
In practice, this work is handled through structured coaching and guided reflection focused on patterns around closeness, honesty, boundaries, reciprocity, and self-trust. The aim is not disclosure for its own sake. The aim is clearer pattern recognition and more coherent behavior.
The relationship with self sits underneath all of it. A person who can stay honest with themselves about what they want, what they will accept, and where they need a boundary tends to show up with more coherence everywhere else.
People who do serious work in this domain show up differently. They stay present longer. They repair faster. They hold clearer boundaries without collapse or aggression. That changes leadership, partnership, and collaboration.
This requires a strong container: skilled facilitation, clear structure, serious participants, and ethical boundaries. That is the standard Somabase is building toward.
This is serious work for people ready to examine how they relate, not just how they present.
If you’re curious about what that container looks like and whether it’s the right fit, we’re having those conversations.
By Erik Horbacz · February 2026
One of the persistent challenges in behavioral development work is the lag between what someone intends and what they actually do — and the further lag between what they do and their awareness of having done it. Patterns are, by definition, automatic. They run below the level of conscious deliberation. This is what makes them efficient, and it’s also what makes them difficult to change: you can’t work with what you can’t see.
This is the problem MS360 is designed to address.
MS360 — MindScape360 — is Carinda Salomon’s biometric translation technology. I want to be precise about that attribution, because it matters: this is Carinda’s work. I am Somabase’s co-founder. She is the developer of this technology. What we are building together is an integration — an exploration of how MS360’s visibility layer can work alongside Somabase’s cohort model to accelerate the behavioral pattern recognition that the group work is designed to develop.
What the technology does is translate internal physiological states into visible patterns. Biometric data becomes a readable signal rather than an invisible undercurrent. The body is always tracking; MS360 makes some of that tracking legible.
It does not diagnose. It does not interpret meaning. It does not tell you what to do. It provides data — a physiological layer that participants can reference as they move through the relational and collaborative work of the cohort.
The value of this visibility is straightforward. The feedback loop gets shorter. Instead of needing weeks of reflection to identify a pattern, a participant can see how pressure, behavior, and response start linking together in specific moments. The connection becomes more observable and easier to work with.
That does not replace coaching or guided practice. It supports them. MS360 adds another reference point participants can use to sharpen self-observation, notice recurring patterns faster, and bring clearer material into the work.
MS360 is used in parallel to the Somabase cohort work, not embedded within the group sessions themselves. This is an important distinction. The cohort sessions are a relational container — a structured group practice space where human dynamics are the primary material. The MS360 layer is something participants work with alongside that, providing a complementary data signal they can bring to their own reflection and, where relevant, to the coaching context.
The hypothesis we are testing is this: when people can see their patterns — physiologically, not just behaviorally — they develop the capacity to change them faster. When the invisible becomes visible, it becomes workable. This is not a guarantee. It is a working premise that we are testing in real time, with real participants, in an early-stage experiment.
I want to be honest about where we are with this. MS360 is not a validated clinical instrument in the sense that it has been through the full arc of peer-reviewed longitudinal study. It is a sophisticated technology in active development, being integrated with a cohort model that is itself in active development. Carinda and I are building this carefully and iteratively. The early signals are promising. We are treating them as signals, not conclusions.
There is a particular quality of self-knowledge that becomes available when behavioral patterns are visible rather than inferred. Most people have some awareness of their patterns — the circumstances under which they become reactive, the relational dynamics that reliably produce a specific response. But awareness of a pattern in the abstract is different from watching it unfold in real time. The latter is more actionable. It creates a reference point: a moment you can return to, examine, and use as the basis for a different choice next time. MS360 is designed to provide exactly that kind of reference point — a data signal that is personal, specific, and grounded in what your own physiology actually did, not what you think it did.
What makes this integration worth pursuing is the nature of the problem it addresses. The Somabase cohort work — whether in Relational Intelligence or AI Collaboration — is designed to develop stable, grounded behavioral capacity over time. That development happens through practice, reflection, and the feedback that a skilled facilitated group provides. Adding a physiological visibility layer does not replace that process. It creates a complementary data channel that participants can use to sharpen their own self-observation and accelerate the pattern recognition that the group work initiates.
The goal is not technology for its own sake. The goal is acceleration of genuine behavioral development — the kind that persists beyond the cohort container and shapes how someone actually functions in their relationships and their work. If MS360 can meaningfully contribute to that, it earns its place in the model. We believe it can. We are finding out.
The combination of cohort-based relational work with the physiological visibility that MS360 provides is foundational to how Somabase operates. Every participant in the cohort works with MS360 as part of the program — a behavioral mirror that runs alongside the group work, providing reference points that deepen the development process.
Carinda’s work deserves its own exploration, and we’ll be publishing more on the technology itself in coming months. For now, this is the orientation: a technology that makes internal states visible, integrated with a group practice model that makes relational patterns workable. Two complementary approaches to the same fundamental challenge — developing the capacity to see clearly enough to change.
By Erik Horbacz · February 2026
What follows is not a manifesto. It is a working document — a set of principles distilled from the experiment that Somabase is running, refined through the cohort work we’ve done so far, and held loosely enough to be revised as the evidence develops. These are operating hypotheses, not conclusions. They are the intellectual foundation we’re building on, stated plainly so they can be examined, challenged, and tested.
If you’re building in this space — or thinking seriously about what it means to work with intelligent systems well — these are the premises I’m working from.
1. Your state determines your output.
This is the most fundamental principle in the model, and the most consistently underestimated. The quality of what you produce — your decisions, your communications, your collaborations — is downstream of your internal state. An intelligent system can amplify your capacity. It will also amplify your noise. If you are operating from a state of chronic reactivity, avoidance, or fragmentation, AI will not solve that. It will scale it.
Human stability is not a precondition for working with AI in the way that, say, hardware compatibility is. It is a precondition in a deeper sense: the quality of the person operating the tool shapes the quality of what the tool produces. Before asking what AI can do, it’s worth asking what state you’re in when you’re using it.
2. Delegation without discernment becomes dependency.
Intelligent systems are extraordinarily capable of handling tasks that were previously time-consuming. The efficiency gains are real. The risk is that the relief of offloading work can gradually extend to offloading judgment — the evaluative capacity that determines what to do with the outputs, how to use them, and whether they serve the actual goal.
Discernment is the capacity to judge well. It is developed through experience, through making decisions and living with their consequences, through building pattern recognition in real-world contexts. Delegating tasks is leverage. Delegating judgment is atrophy. The distinction requires ongoing attention.
3. The quality of your relationships determines the quality of your collaboration — with humans and with AI.
How you relate to other people shapes how you relate to every collaborative process. Relational patterns are not domain-specific. A person who defaults to control in close relationships tends to use technology in a controlling way — extracting answers rather than exploring possibilities, using AI to confirm what they already think rather than to genuinely challenge it. A person with high relational integrity tends to collaborate well with both humans and systems.
This is one of the core hypotheses of Somabase’s model: relational intelligence is not separate from AI collaboration capacity. It is foundational to it.
4. Velocity without coherence produces escalation, not progress.
AI dramatically increases the speed at which work can be produced. This is valuable when the direction is sound. It is genuinely dangerous when the direction is unclear, when the underlying thinking hasn’t been examined, or when the person operating the system is in a reactive state. Faster in the wrong direction is worse than slow.
Coherence — clarity of intent, alignment between what you say you want and what you’re actually doing — is the prerequisite for productive velocity. Somabase’s cohort work is substantially about developing coherence: the capacity to act from a stable, examined position rather than from the momentum of habit.
5. Identity boundaries are the prerequisite for productive partnership — with any intelligence.
Partnership requires distinction. You cannot have a genuine collaboration with another intelligence — human or artificial — without a clear enough sense of yourself to know where you end and the collaboration begins. When identity boundaries are weak, collaboration tends toward either enmeshment (losing yourself in the process) or reactivity (defending against the process because it feels threatening).
Identity clarity does not mean rigidity. It means knowing your own values, preferences, and positions well enough to engage with difference without losing yourself in it. This is as relevant to working with AI systems as it is to close human relationships.
6. Behavioral patterns are consistent — they surface in relationships, in work, and in how you interact with technology.
One of the most reliable observations from behavioral research is that patterns transfer across contexts. The person who avoids directness in personal relationships tends to avoid it with AI — crafting prompts that hedge, that avoid specificity, that don’t actually ask for what they need. The person who defaults to dominance in professional contexts tends to use AI transactionally, extracting outputs without real engagement. The person with high relational integrity tends to interact with intelligent systems with a similar quality of presence.
This is not determinism. It is pattern recognition. And it is an invitation — if you want to understand how you actually relate, look at how you’re relating right now, in every context that’s in front of you.
7. Visibility accelerates change.
Patterns that operate below awareness are difficult to change. Patterns that are visible — behaviorally, physiologically, through the reflection of a skilled practitioner or a cohort — become workable. The reason Somabase integrates the MS360 biometric visibility layer into the cohort model is not a belief in technology for its own sake. It is an application of this principle: when you can see your patterns, you can work with them. When you can’t, you’re working in the dark.
This is also why the cohort format itself is valuable. Other people reflect your patterns back to you in ways that solo reflection doesn’t reach. Visibility is not only a technological function. It is a relational one.
8. Community is the container — individual development happens fastest in structured group settings.
Behavioral change is not primarily an information problem. Most people who struggle with relational integrity, stability under pressure, or decision coherence don’t lack information about what good behavior looks like. They lack the conditions for sustained practice and feedback in real-world relational contexts. Those conditions require other people.
Research from Frontiers in Psychology (2025) and the Brandon Hall Group’s work on learning effectiveness both document this: structured group containers produce more durable behavioral change than individual learning. The cohort is not just a delivery mechanism for content. It is the development environment itself.
These eight principles are the framework I’m building from. They are being tested through Somabase’s cohort work, refined by what we observe, and revised when the evidence demands it. Some of them will hold. Some will be refined beyond recognition. That’s the nature of working at the edge of something genuinely new.
The broader project — understanding how human beings can develop the stability, relational capacity, and behavioral coherence required to work with intelligent systems well — is not finished. It may be the defining developmental project of the next decade.
Somabase is an early attempt to build infrastructure for it. These principles are where we’re starting.