Maturity Infrastructure for the AI Era

Somabase bridges relational intelligence and AI collaboration.

Develop the human stability required to collaborate with intelligent systems — maintaining agency, relational integrity, and decision coherence at velocity.

The Problem

AI is scaling.
Humans are not.

Artificial intelligence increases velocity in communication, decision-making, influence, and creative production. But velocity exposes instability. Reactivity in conflict. Boundary confusion. Decision reversal loops. Escalation cycles. Identity outsourcing to tools.

When humans are unstable, acceleration becomes escalation.

Two Domains, One Platform

Two integrated development domains.

AI Collaboration

Build the capacity to collaborate without losing yourself.

Build identity boundaries with AI tools. Reduce cognitive outsourcing. Maintain decision stability under velocity. Operate with responsible influence.

Explore AI Collaboration →

Relational Intelligence

How you relate shapes how you lead, love, and build.

Relationships. Sexual wellness. Your relationship with yourself. The relational foundation for everything else.

Explore Relational Intelligence →

The Method

Structured containers for real development.

Somabase develops maturity through live cohorts, community calibration, and optional biometric integration with MS360. Development happens through structured practice.

Live Cohorts

6–8 week structured programs with real-time facilitation and peer calibration.

Community

Ongoing calibration and feedback within a live relational environment.

MS360

Optional translation layer — a behavioral mirror that surfaces invisible patterns.

For Organizations

For organizations adopting AI.

Custom internal cohorts for leadership teams. Decision stability. Escalation reduction. Communication coherence. Responsible AI collaboration. Built to the specific dynamics of your organization.

Enterprise Inquiry →

AI Collaboration

AI collaboration requires
relational intelligence.

Your state determines your output. Human stability is the bottleneck.

The Core Principle

AI amplifies what you bring into it.

When you bring clarity, AI sharpens output. When you bring confusion, AI scales confusion. When you bring insecurity, AI generates content that reflects insecurity. Your state determines your output.

Clarity → precision output
Contained integrity → authority and trust
Decision stability → coherent direction
Regulated state → escalation-free communication
Grounded identity → human-led collaboration

Outcomes

What the AI Collaboration cohort develops.

Identity boundaries with AI tools
Reduced cognitive outsourcing
Stable communication under velocity
Responsible influence and decision coherence
Amplification awareness
Escalation containment in digital systems

Structure

8 weeks. Live sessions. Structured practices.

Understanding how AI tools amplify your current state — not your ideal state. Mapping your amplification patterns.
Where do you end and the tool begins? Building clear boundaries between your judgment and AI-generated output.
Maintaining coherent decisions under velocity. Recognizing and interrupting reversal loops.
Calibrating tone and precision when AI accelerates output speed. Staying intentional in a fast environment.
Identifying how instability escalates through AI-mediated systems. Interruption strategies.
Operating with integrity when AI amplifies your reach and impact. Distinguishing authority from performance.
Leading teams through AI adoption with stability. Modeling the human–technology collaboration you want your organization to embody.
Consolidating your personal AI collaboration code. Integration practices and ongoing calibration framework.

Participants leave with a personal AI collaboration code.

Enterprise

For teams adopting AI.

Somabase delivers custom internal cohorts. Discovery → Design → Delivery → Debrief. Built around the specific dynamics and instability patterns of your organization.

Enterprise Inquiry →

Optional Add-On

MS360 provides a translation layer between human behavior and intelligent systems — a behavioral mirror that surfaces invisible patterns. Available as an optional add-on to the AI Collaboration cohort.

Learn more about MS360 →

Relational Intelligence

How you relate determines how you lead, love, and build.

Relationships. Sexual wellness. The relationship with yourself.

Recognition

You already know where this shows up.

Relational patterns are consistent. They surface in your partnerships, your team, your intimacy, and your self-talk — until something shifts.

Escalation when connection was possible.
Withdrawal when presence was needed.
Boundaries that collapse under intimacy.
Self-abandonment disguised as compromise.
Shame shaping how you show up sexually.
Patterns that repeat across every relationship.

Outcomes

What the Relational Intelligence cohort develops.

Stability in intimate relationships
Boundary clarity without disconnection
Sexual self-awareness and groundedness
Honest communication under vulnerability
A coherent relationship with yourself
Relational presence that holds

Core Domain

Sexuality as a precision mirror.

How you relate sexually reveals how you relate everywhere — shame patterns, attachment dynamics, boundary clarity, self-worth. Somabase treats sexual wellness as a core domain of relational intelligence, using it as a precision mirror for the patterns that shape every relationship you have.

Structure

8 weeks. Relationships, sexuality, self.

Mapping how you show up in relationships. Identifying the patterns you bring to intimacy, conflict, and connection.
Distinguishing boundaries from walls. Building clarity in relationships without rigidity or collapse.
Your internal relationship sets the template for every external one. Self-abandonment, self-worth, and the patterns between them.
How shame drives hiding, overcorrection, and performance in relationships. Working with vulnerability without collapse.
Using sexuality as a precision mirror for relational patterns — attachment, desire, communication, and self-worth as they surface in intimate space.
Speaking honestly when it matters most. Distinguishing productive honesty from destructive escalation in relationships.
Holding space for others without losing yourself. Staying grounded in relationships where the stakes are highest.
Building your relational framework. Practices for ongoing calibration in partnerships, intimacy, and your relationship with yourself.

Participants leave with a stable relational framework.

MS360

A translation layer between
human behavior and intelligent systems.

Behavioral pattern visibility in real time.

What It Is

Behavioral pattern visibility.

MS360 provides behavioral pattern visibility through biometric translation. It surfaces signals that are otherwise invisible — helping individuals and teams see patterns in real time.

A calibration tool. A behavioral mirror. MS360 gives you information — the interpretation and action remain yours.

How It Works

How MS360 Works

Translates biometric signals into behavioral pattern data
Surfaces patterns that are invisible in real-time operation
Provides a calibration layer during active development
Supports awareness without creating dependency
Returns interpretive authority to the individual

For Individuals

Optional cohort add-on.

Available as an optional cohort add-on. MS360 provides a feedback layer during your development — supporting awareness without creating dependency. You use it as a mirror, then set it down.

For Teams

Behavioral visibility across team dynamics.

In enterprise cohorts, MS360 offers behavioral pattern visibility. It is useful for decision stability tracking, escalation pattern recognition, and communication coherence assessment.

In a 2022 institutional pilot, MS360 use was associated with reduced decision reversals and improved trust indicators. The pilot was non-controlled and not designed to assert causality.

Enterprise

Build stable leadership teams in
AI-accelerated environments.

Custom internal cohorts bridging relational intelligence and AI collaboration.

Who This Is For

Organizations with real velocity.

Organizations adopting AI without adequate human maturity infrastructure. Leadership teams navigating velocity. High-escalation environments. Cross-functional teams with alignment friction.

What's at Stake

The Cost of Instability

Decision reversals increase. Escalation cycles compound. Communication fragments. AI adoption creates confusion, dependency, or internal conflict.

The Process

Discovery. Design. Delivery. Debrief.

01 Discovery

Diagnostic conversation. Identify team-specific instability patterns and organizational context.

02 Design

Custom 6–10 week cohort mapped to your organization's specific dynamics and needs.

03 Delivery

Live sessions. Structured practices. Optional MS360 integration throughout.

04 Debrief

Executive summary. Findings and recommendations. Continuation options.

Inquiry

Enterprise Inquiry

Tell us about your organization. We'll respond within 48 hours to schedule a discovery call.

Received

Your inquiry has been received.

We'll be in touch within 48 hours to schedule a discovery call.

Learn about MS360 →

Community

Community as calibration.

Maturity develops in practice, not isolation.

What This Is

A live relational environment.

Somabase community is a live relational environment where members practice stability, receive feedback, and calibrate together. Active participation, not passive consumption.

Feedback Loops

Real-time relational dynamics that mirror your patterns back to you in a structured, supported container.

Practice in Public

Integration happens when you practice with others, not alone. The community is the practice environment.

Accountability

The community holds the container. You bring the work. Structured support. Real accountability.

Library

Research, reflection,
and field notes.

Thinking that feeds practice.

Community

"The Boat We Build Together"

Why carefully constructed communities are the most important thing we can create in a digital world.

By Erik Horbacz · February 2026
AI Collaboration

"AI Amplifies Your State"

Why maturity is the bottleneck for intelligent collaboration.

Read →
AI Collaboration

"AI Collaboration Without Identity Outsourcing"

Maintaining agency in an AI-mediated world.

Read →
AI Collaboration

"Decision Stability Under Velocity"

How to hold decisions when everything accelerates.

Read →
AI Collaboration

"Escalation Cycles: The Hidden Cost"

What instability costs organizations adopting AI.

Read →
Relational Intelligence

"Relational Intelligence as Infrastructure"

This is not self-help. It is operational capacity.

Read →
Relational Intelligence

"Sexuality as a Diagnostic Domain"

A precision mirror for relational integrity.

Read →
MS360

"MS360: A Mirror for Behavior Patterns"

What it is. What it is not.

Read →
Field Notes

"Human–Technology Collaboration Principles"

Principles for the next decade of working with intelligence.

Read →

About

About Somabase

What It Is

Somabase is a human–technology collaboration platform. It exists to develop the human capacities required to work with intelligent systems without losing stability, agency, or relational integrity.

The platform operates through live cohorts, structured practices, community calibration, and optional integration with MS360.

The Bridge

Two domains that must evolve together.

Somabase bridges two domains that must evolve together: relational intelligence and AI collaboration. Relational intelligence provides the foundation — stability in relationships, boundary clarity, sexual wellness, a coherent relationship with yourself. AI collaboration builds on that foundation — identity boundaries with tools, decision coherence under velocity, responsible influence.

Together, they form a complete development architecture.

What Somabase Is

Maturity infrastructure for the AI era.

A structured development platform
Live cohorts with real facilitation
Operational capacity through practice
A framework for human–AI collaboration
Behavioral visibility through MS360

In Practice

Human stability at the speed of intelligence.

A platform that develops the human stability required to work alongside intelligent systems with integrity, coherence, and agency intact.

Application

Apply for the
AI Collaboration Cohort

8 weeks. Live sessions. Structured practices.

Application Received

Thank you. Your application has been received.

Next step: Schedule a 20-minute consult to discuss your application.

Schedule Consult

While you wait, read: "AI Amplifies Your State" →

Application

Apply for the
Relational Intelligence Cohort

8 weeks. Live sessions. Structured practices.

Application Received

Thank you. Your application has been received.

Next step: Schedule a 20-minute consult to discuss your application.

Schedule Consult

While you wait, read: "Relational Intelligence as Infrastructure" →

Schedule

Schedule a Consult

A 20-minute discovery conversation.

What to expect

This is a 20-minute discovery conversation. We'll discuss where you are, what you're working on, and whether Somabase is the right container for your next stage of development. Come with honesty about where you're stuck. No preparation required beyond that.

Before You Book

Answer a few questions so we can make the most of our time together.

Community

The Boat We Build Together

Why carefully constructed communities are the most important thing we can create in a digital world.

A wooden boat on calm water at dawn

There is something distorted and inflated, and we all feel it.

Scroll through any feed for ten minutes, and you can taste it — the hollow aftertaste of connection that isn't. Thousands of followers, dozens of group chats, notifications piling up like leaves in a gutter. We are more networked than any generation in human history, and somehow more alone.

I'm not pointing fingers. I've been in these spaces. I've never worked inside tech or inside someone else's company — I've been an entrepreneur since college, building things from the outside, which means I don't think the way most people in these industries think. But I've spent real time in digital communities — meme-coin Discords, self-help groups, creator circles, crypto communities, and forums that burned bright for six months and then went dark. I've watched communities form around missions and leaders and content — and I've watched most of them collapse the moment the leader lost energy, the content dried up, or the first real conflict surfaced.

The pattern is always the same. A group comes together with enthusiasm. Everyone is polite. Everyone agrees. It feels electric — like something real is forming. Then someone says something uncomfortable. A disagreement. A tension that can't be smoothed over with an emoji reaction. And instead of leaning in, the group scatters. Back to the safety of surface-level engagement. Back to the community's performance without its substance.

And here is the part that bothers me most: almost every community I've been in lives and dies by its leader or its content. People join as long as there is value being handed to them. They consume. They extract. And when the value stream slows, they leave. Almost nobody shows up to contribute as who they are. Almost nobody brings the value — they wait for it to be delivered. Communities function like audiences with membership fees.

Only when they move past that do they arrive at true community. Deep listening. Real trust. Collective intelligence that none of the individuals could have reached alone.

Most digital communities never make it past the first stage. They are dressed up as tribes but function as audiences. They have members but not relationships. Channels but not conversations. Content but not culture.

This is the dark part. This is what is real.

And it matters more than we think.

Because we are entering an era where the ground under every institution is shifting. Intelligence is being industrialized. AI systems can now do in seconds what took teams of knowledge workers months. Energy is getting cheaper. Labor as we know it is being redefined. The economic models that shaped the last century — scarcity-based, production-measured, shareholder-first — are groaning under pressures they were never built to absorb.

Ray Dalio mapped the cycles. Great powers rise, consolidate, decay, and are replaced. Raoul Pal pointed to 2030 as the moment when the convergence of AI, blockchain, and abundant energy makes the old economic rules unrecognizable. Peter Diamandis talks about “solving everything” — using industrial intelligence to crack disease, energy, and materials science. Kurzweil charted the exponential curve and said the singularity is not a distant fantasy but a near-term reality. I would argue that we are here.

They are all pointing at the same horizon. And they are all, in their own way, missing the same thing.

Technology can transform industries, domains, and specific problems. But we can't rely on it to evolve us. That work is ours to do — deliberately and with intention.

We are building godlike machines with stone-age nervous systems. We have the capacity to automate the “how” of nearly everything — and almost no shared infrastructure for the “who.” Who are we becoming? What do we value? How do we coordinate when the old rules stop working? How do we stay human in a world that is accelerating past our ability to predict what happens next?

These are not philosophical luxuries. They are survival questions. And the answer — the only answer that has ever worked across civilizations, across centuries, across every tradition that left something worth inheriting — is community. Not the word. Not the brand. Not the Slack channel. The real thing.

Real community is the hardest thing to build. That's why almost nobody does it.

Real community requires you to show up as you actually are — not the curated version, not the professional persona, not the optimized personal brand. It requires conflict. It requires sitting in the discomfort of disagreement with people you've committed to, and choosing to stay instead of scroll away.

Peck described the stage before true community as “emptiness” — a space where each person lets go of their need to fix, control, convince, or perform. It looks like failure. It feels like loss. And it is the only doorway to the thing everyone says they want but almost no one is willing to earn.

I've been through something like the dark side of this personally. I was born two months early, brought back to life by the hands of strangers. I grew up with ADHD in the basement of a Catholic school, building worlds in my head that nobody else could see. Every time I tried to share what was inside me, I learned something about connection — and about what it feels like when that connection doesn't land. So I know what avoidance tastes like. I know what it means to have a voice and be afraid to use it.

In a world of high-speed extraction, the quiet ones, those with the most depth and the most real substance, often sit it out because the cost of engagement feels too high.

This is backward. This is the imbalance we need to correct.

So what does a carefully constructed community actually look like in a digital space?

It starts with a promise. Not a mission statement carved into a boardroom wall. A living, dynamic promise — what I call a Compelling Aligning Promise, or CAP. It is specific enough to pull action, meaningful enough to matter, and shared enough to align people around a common direction. The individual who makes the promise and the community that holds the promise operate in parallel. The founder is not above the community — they are the first member. Their model of personal alignment is the community's foundation.

Beneath that promise lives identity — values, energy, trajectory. Who are we? How do we show up? Where are we heading? This is not brand language. This is the honest accounting of what we actually stand for, the energy we bring into rooms, and where our current behavior is really taking us. When identity is clear, it becomes a filter. The right people feel it and lean in. The wrong people self-select out. No sales pitch required.

Then comes voice — authentic, resonant communication. Not marketing. Not content strategy. The actual sound of a community speaking its truth. Marshall Ganz calls it the story of self, the story of us, and the story of now. Seth Godin calls it the smallest viable audience. I call it the difference between attention and belonging. You don't need millions of followers. You need the right people hearing the right signal.

Then priorities. Then the network structure. Then rituals — the daily and weekly practices that turn a group of strangers into a culture. Then, at the root, individual ownership — each person doing their own work, taking responsibility for their own growth, feeding what they learn back into the collective.

This applies at every scale — from a small cohort to a global network. It is fractal. The same principles that govern a family or a small team can be applied to a neighborhood, a company, and an economy. The same geometry at every scale. Fractal alignment.

I want to be honest about something: I have not successfully built a community. Not yet. I've studied this, researched it deeply, lived inside communities that didn't work, and spent years developing a framework for how it could work. But the proof is still being written. I'm building now — with Corvia, an AI music community exploring emotional development, and with Somabase, a platform for human-technology collaboration and relational intelligence. Both are early. Both are laboratories for the hypothesis that intentional community, carefully constructed around shared values and honest practice, is the most powerful structure humans have for navigating change. The foundation I'm building runs deeper than I can explain in a single blog post. It will take time and context to show. But I'd rather be honest about where I am than pretend I've already arrived.

Here is where the light breaks through.

When a community gets this right — when it moves through pseudo-community and chaos and emptiness and arrives at the real thing — something extraordinary happens. The collective becomes smarter than any individual in it. Problems that seemed unsolvable from the inside become obvious from the shared perspective. People who feel lost find direction. People who feel voiceless find a frequency that is unmistakably theirs. Ventures emerge not from market analysis but from genuine need, identified by people who trust each other enough to be honest about what is missing.

The Beatles didn't just make music. They created a cultural field that millions of people stepped into and were changed by. That field was community — unstructured, messy, emergent, but real. Imagine what becomes possible when we apply everything we've learned about human development, organizational design, tokenomics, AI collaboration, and consciousness research to intentionally build that same kind of field.

Our current systems are designed to extract intelligence rather than concentrate it. Imagine systems where the definition of value shifts from what you produced to what you contributed to the flourishing of the whole.

This is not utopian fantasy. The Economic Space Agency is building protocol infrastructure for exactly this kind of postcapitalist coordination. DAOs have already demonstrated that decentralized governance can work — imperfectly, messily, but really. Tokenized ownership turns shared infrastructure into shared income. The pieces exist.

What has been missing is the human readiness. The emotional maturity to hold disagreement without fragmenting. The technological maturity to use these tools intentionally rather than compulsively. The economic maturity to value contribution over extraction.

Three maturities. Three directions the entire global economy needs to grow.

I've spent a lot of time mapping this. Nearly everything we spend money on traces back to three primal forces: how we present ourselves and develop as people — our identity, our bodies, our growth. How we alter our experience — entertainment, substances, social media, anything that shifts our state of consciousness. And how we capture, store, and exchange value — the entire financial system, from banks to bitcoin. These three forces — identity, experience, and value — account for almost everything. When you look at where the money actually flows in the world, you see a $115 trillion economy built overwhelmingly around these drives. The question is whether we keep feeding them unconsciously or start maturing them deliberately.

Emotional maturity ends wars. Technological maturity ends addiction and distraction. Economic maturity ends poverty and exploitation. The sequence is not optional — you cannot build a mature economy on an emotionally immature foundation, and you cannot use technology wisely until you have examined your relationship to the tools that shape your consciousness.

We are in the water right now.

The waves of technological disruption are coming whether we are ready or not. The old boats — the corporations, the institutions, the governments designed for a slower world — are taking on water. Some will adapt. Many will not. The question is not whether change is coming. The question is what vessel you are in when the waves hit.

The boat is community. Carefully constructed. Values-aligned. Honest about the chaos. Willing to pass through emptiness. Built not for speed but for resilience. Crewed not by employees but by owners — people with real stakes, real voices, real accountability to each other.

This is the thing I am dedicating my work to. Not because I figured it all out — I am figuring it out in real time, messily, publicly, with the same fears and doubts as everyone else. But because the alternative is worse. The alternative is drowning alone in a sea of information, clutching a phone full of followers and wondering why none of it feels real.

If you are reading this and you feel the pull toward something more honest, more intentional, more human than what the platforms are currently offering — know that you are not alone. The avoidant ones, the quiet ones, the people who have done the inner work but haven't yet found a space worthy of their voice: you are exactly who we need. The narcissistic networks have had their turn. It is time for the people with substance to show up.

Not to perform. Not to optimize. To build something real. To construct the community carefully — with shared values, clear direction, honest communication, and the willingness to stay when it gets hard.

That is how we ride the waves.

That is how we build the boat together.

Erik Horbacz is the founder of The New VC, a venture community lab exploring the intersection of human maturity, technology, and community-first business. He is building Corvia (an AI music community for emotional development) and Somabase (a human-technology collaboration platform), and writing about what it takes to stay human in an accelerating world.

← Back to Library

AI Collaboration

AI Amplifies Your State

Why maturity is the bottleneck for intelligent collaboration.

By Erik Horbacz · February 2026

There’s a premise baked into almost every AI adoption conversation happening in organizations right now: the bottleneck is technical. If we just train people on the tools, adopt the right platforms, build the right workflows — the results will follow. What we’re discovering, slowly and sometimes painfully, is that this premise is wrong.

The bottleneck isn’t technical. It’s human.

More specifically, it’s the quality of the internal state the human brings to the collaboration. And AI doesn’t just work around that state — it amplifies it.

I want to be precise about what I mean by that, because it’s easy to read “internal state” and immediately drift toward something vague or psychological. I’m not talking about mood. I’m talking about something more structural: the degree to which a person can maintain clarity under pressure, hold a decision without immediately reversing it, and sustain independent judgment when a sophisticated system is producing confident-sounding output at high velocity.

That’s what I mean by state. And when you bring that quality — or its absence — to AI collaboration, the AI doesn’t moderate it. It magnifies it.

Here’s the dynamic in practice. When you sit down to work with an AI system and you’re scattered — bouncing between competing priorities, running on sleep debt, carrying unresolved tension from an earlier conversation — the AI will produce output that reflects the fragmentation of your input. The prompts will be unclear. The outputs will be loosely formed. And here’s the part that matters most: you’ll accept them anyway, because in a scattered state you don’t have the discriminating capacity to evaluate what you’re receiving. The output feels like it’s helping because it’s producing something, filling the space, generating motion. But it’s producing the shape of your own confusion back to you, and you’re mistaking it for signal.

The inverse is equally true, and this is where the real leverage lives. When you come to AI collaboration with clarity — a settled sense of what you’re trying to accomplish, why it matters, and what you’re not willing to compromise — the AI becomes extraordinarily useful. Not because the tool changed, but because you changed what you’re directing it toward. You can see the difference between a response that actually serves the goal and one that merely sounds like it does. You can push back on a confident-sounding answer when your own judgment tells you something is off. You can use AI to explore and then return to yourself to decide.

Research from Frontiers in Psychology (2025) introduced a taxonomy worth sitting with: cognitive offloading with AI progresses across three stages — assistive, substitutive, and disruptive. In the assistive phase, AI extends your capacity. In the substitutive phase, it starts replacing your cognition. In the disruptive phase, it actively degrades your ability to self-monitor, evaluate your own reasoning, and make accurate assessments of what you know versus what you merely received. Heavy AI use correlates with lower metacognitive accuracy. In plain language: people are becoming less able to accurately gauge how well they understand something, because AI fills the comprehension gap so quickly that the struggle — which is where learning and discernment actually happen — never occurs.

This is the illusion of competence. You feel more capable. You produce more output. But the independent judgment that makes that output trustworthy hasn’t developed — it’s been bypassed.

I don’t raise this to argue against AI. I use it extensively, and I think the collaborative potential is genuinely significant. I raise it because the frame most organizations are using to think about AI adoption is leaving out the most important variable.

When you train someone on an AI tool without developing the human infrastructure required to use it well, you haven’t improved their capability. You’ve given a powerful amplifier to someone who hasn’t worked on what they’re amplifying. If the signal is good — stable, clear, coherent — the amplifier makes it better. If the signal is noisy, reactive, or externally dependent, the amplifier makes that worse too. The tool doesn’t know the difference.

This becomes a compounding problem at scale. One reactive person using AI badly produces confused output. A team of reactive people using AI badly produces organizational chaos at acceleration. The decisions come faster. The pivots happen more frequently. The feedback loops tighten. But the underlying human capacity to hold steady — to evaluate, to commit, to maintain direction — hasn’t kept pace with the velocity the technology enables.

What develops that capacity? Not more AI training. Not more productivity frameworks. The research points toward something more fundamental: self-monitoring, metacognitive practice, and what I’d describe as stability under pressure — the ability to remain coherent when the environment is moving fast and confident-sounding information is arriving from every direction.

This is precisely why Somabase starts with the human, not the tool. Not because the tool is unimportant — it’s transformative — but because without the human foundation, the tool accelerates the wrong things. Our AI Collaboration cohort is built around a simple premise: before you can collaborate well with an intelligent system, you need a level of internal stability and independent judgment that makes that collaboration generative rather than disorienting. You need to know what you think, what you value, and what you’re responsible for — in a way that doesn’t dissolve the moment AI offers you 12 alternatives.

This isn’t a soft-skills conversation. It’s a performance conversation. The humans who will use AI most effectively are not necessarily the most technically sophisticated. They’re the ones with the clearest signal — the most developed capacity to direct, evaluate, commit, and override. Maturity, in the deepest sense of that word, is the bottleneck.

We’re at the beginning of figuring out how to develop that capacity intentionally, in the context of AI collaboration specifically. The research is early. The practices are experimental. But the direction is clear.

This is what Somabase is exploring. If that framing resonates — if you’ve felt the quality of your collaboration vary with your own state and wanted a structured way to develop what’s underneath — we’re building something for exactly that.

Erik Horbacz is the co-founder of Somabase, a human-technology collaboration platform exploring the intersection of relational intelligence and AI. He is building alongside Corvia (an AI music community) and The New VC.


AI Collaboration

AI Collaboration Without Identity Outsourcing

Maintaining agency in an AI-mediated world.

By Erik Horbacz · February 2026

There’s a useful distinction that most conversations about AI adoption are missing, and I think it’s the one that matters most right now.

The distinction is between task delegation and identity outsourcing. They can look identical from the outside — both involve handing something to AI that you used to do yourself — but the internal experience is completely different, and the long-term consequences run in opposite directions.

Task delegation is what AI is legitimately excellent at. You have a clear goal, a defined scope, and you direct an AI system to help you accomplish it. Your judgment stays engaged throughout. You evaluate the output. You decide what to use, what to discard, and what to do next. The locus of decision-making stays with you. The AI is an instrument.

Identity outsourcing is something else. It starts subtly — an over-reliance on AI to generate not just outputs but positions. What should I think about this? What’s the right way to frame this? Is this a good idea? The AI answers, and the answer feels good, and over time the practice of generating your own positions — sitting with a question long enough to develop a view — atrophies. You’re no longer using AI to extend your thinking. You’re using it to replace the effort that thinking requires.

Research published in Social Behavior & Personality in 2024 described this pattern across five dimensions: dependency, gullibility, irrationality, unreliability, and loss of cognitive autonomy. What I find valuable about this framework isn’t the clinical language — it’s what it points to. These aren’t five separate problems. They’re five faces of the same underlying shift: the gradual erosion of the internal structures that let you direct, evaluate, and hold positions independently. When those structures weaken, you become gullible not because you’re unintelligent, but because your capacity for independent verification has been underexercised. You become unreliable not because you’re untrustworthy, but because your positions are being generated outside yourself and they shift when the external generator shifts.

The question I keep returning to — and the one that Somabase is built around exploring — is: what maintains the integrity of those structures while you’re also working with powerful AI systems daily?

Identity boundaries is the phrase I’d use. Not in a rigid sense — not a wall between yourself and the technology. More like a clear, stable sense of what belongs to you that remains coherent when an AI system is confidently offering to take it over. Your values. Your judgment. Your creative voice. Your direction. These aren’t things you need to protect from AI. They’re things you need to remain fluent in, even as AI becomes capable of simulating them quite well.

The simulation quality is actually the complicating factor here. A decade ago, AI-generated output was obviously different from human output. The quality gap made it easy to maintain a clear sense of what was yours versus what was generated. That gap is closing rapidly. AI can now produce writing that sounds like you, ideas that sound like yours, strategic reasoning that matches your usual patterns. This is genuinely useful — and genuinely disorienting. Because when the output mirrors your own style closely enough, the friction that normally signals “this came from outside me” disappears. And so does the metacognitive check that keeps you in the driver’s seat.

Frontiers in AI (2024) found that emotional engagement with AI is a significant variable in decision quality — and specifically, that higher emotional engagement tends to correlate with reduced willingness to override AI recommendations, even when the human has good reason to. This makes sense. If you’ve developed a working relationship with an AI system, if it feels responsive and helpful, the cognitive cost of contradicting it rises. Not because the AI has earned your deference — but because the emotional register of the relationship has started to carry weight in your evaluation process.

Sixty-eight percent of users report feeling more emotionally engaged with empathetic AI. That's not a small number. And the implication isn't that empathetic AI is bad — it's that the humans using it need a level of relational discernment that most of us haven't been asked to develop before. We've never had to navigate emotional engagement with non-human systems at this level of sophistication. The question of what's real, what's simulated, and what that distinction means for your judgment is genuinely new.

I don’t think the answer is more skepticism. Chronic skepticism toward your tools is a different kind of instability — it just destabilizes in the direction of over-caution rather than over-reliance. The capacity I’m interested in is something more nuanced: the ability to hold genuine engagement with AI collaboration while also maintaining a clear enough internal signal that you can feel the difference between using the tool and being used by it.

That difference — and I want to be precise here — isn’t primarily about what you’re doing. It’s about what’s happening underneath the doing. Someone can produce the exact same AI-assisted work product with either a stable sense of their own authorship intact or with that authorship quietly dissolved. From the outside, the outputs look the same. But the trajectory those two people are on is completely different. One is developing capacity. The other is outsourcing it.

Somabase’s AI Collaboration cohort is structured around making that internal distinction practical and trainable. Not through abstract frameworks, but through structured practice in the actual conditions where identity outsourcing tends to happen: high velocity, complex decisions, sophisticated AI input arriving at volume. The goal is to develop what I’d call behavioral containment — the ability to engage fully with what AI offers while retaining the internal coherence to evaluate, override, and own the outcome.

This is an experiment. There’s no established curriculum for what we’re doing because the situation itself is genuinely new. What we know from the research, from early cohort work, and from lived experience building with these tools is that the humans who navigate this best are not the most technically sophisticated or the most skeptical. They’re the ones who have done enough internal work to know what belongs to them — and who can hold that identity boundary clearly enough that it doesn’t erode under the very real pressure to just let the AI handle it.

If you’ve noticed that pressure — in your own work, in your own thinking, in the way your creative voice sometimes sounds more like your AI’s suggestions than your own — we’re building something for exactly that territory.


AI Collaboration

Decision Stability Under Velocity

How to hold decisions when everything accelerates.

By Erik Horbacz · February 2026

One of the less-discussed effects of working with AI systems daily is what it does to your relationship with commitment.

Not commitment in the abstract — in the specific, practical sense of making a decision and holding it long enough for it to produce useful information. That interval — the gap between deciding and learning — is where most of the real signal comes from. Execution reveals things that analysis never could. But the interval requires something that’s becoming harder to sustain: the willingness to commit under conditions of unresolved uncertainty, when you know that more input is available if you want it.

AI has made more input perpetually available. That’s one of its most genuinely valuable properties. You can query, iterate, refine, and generate alternatives faster than any previous tool in history allowed. The problem is that the same capacity that makes AI so useful for exploration makes it actively disruptive to the part of decision-making that comes after exploration. At some point, you have to stop generating alternatives and commit to one. And the internal architecture required to do that — the capacity to tolerate the discomfort of foreclosing options, to hold a position under pressure, to trust your own synthesis when AI keeps suggesting there’s a better answer — that architecture doesn’t automatically strengthen just because you have better tools.

What I’m watching in people working with AI intensively is a pattern I’d describe as decision reversal under velocity. The decisions aren’t bad. The reasoning isn’t flawed. But the commitment doesn’t hold. Not because circumstances changed — because the mere availability of more input creates a standing invitation to reconsider. The decision gets reopened. The pivot happens before the original direction had time to produce any signal. The cycle repeats, and the organization accumulates motion without accumulating learning.

This is worth distinguishing from legitimate responsiveness. Updating your position when meaningful new information arrives is exactly right. What I’m describing is different: the compulsive revisiting that happens not because new information arrived, but because the psychological cost of staying committed is higher than the psychological cost of starting the loop over again. In a world where AI can generate a compelling rationale for almost any direction, that loop can run indefinitely. There’s always a better option available in the output.

Cognitive load theory offers a useful lens here. The mental effort required to hold a decision against the incoming tide of alternatives is itself a limited resource. When you’re working at high velocity — processing large volumes of AI-generated input, managing complex decisions across multiple domains, operating in environments where the feedback loops are tight — the available bandwidth for sustaining commitment narrows. And when commitment starts to feel too expensive, the system defaults to the state that requires less active maintenance: uncertainty, optionality, and the perpetual feeling that you haven’t quite decided yet.

The real bottleneck, in my view, is upstream of the decision itself. It’s the clarity of values that the decision is meant to express. When you know — with a kind of settled, embodied certainty rather than just intellectual acknowledgment — what you’re actually trying to build, what you’re responsible for, and what you’re not willing to trade away, the decision-making process looks different. The alternatives AI generates are still interesting. But they’re interesting from the vantage point of someone who already has a direction, evaluating whether the alternatives sharpen or dilute it. That’s very different from evaluating alternatives from a position of genuine openness — which is the correct posture during exploration, but the wrong one during commitment.

Decision coherence is what I’d call the target capacity: the ability to maintain internal alignment between your values, your reasoning, and your actions over time — even as external input accelerates. This isn’t rigidity. Coherent decision-makers change their minds. But they change their minds for specific reasons, in the direction of their actual priorities, rather than because the incoming data stream has temporarily made something else seem more compelling.

The second capacity is what I’d call discomfort tolerance around commitment. This sounds almost trivially simple, and yet it’s one of the places where I see even very sophisticated people struggle. Choosing means not-choosing. Committing to one path means acknowledging that the alternatives you’re not taking might have been better. AI makes this harder because it can always show you what a different choice might have produced. The counterfactual is no longer theoretical — it’s generatable, often quite persuasively, in real time. Learning to sit with the discomfort of a committed position when a confident-sounding alternative is available — that’s a trainable capacity, and an important one.

The third is what I’d describe as signal tolerance: the ability to hold a decision long enough that execution has time to generate real information, rather than abandoning the position before the experiment has run. Most decisions don’t reveal their quality quickly. The early data is often ambiguous, sometimes negative, always incomplete. The pull to reopen the decision during that ambiguous period is strong. Resisting that pull — not out of stubbornness, but out of a disciplined respect for what commitment actually produces — is a skill that atrophies when AI makes the cycle of generating and revisiting alternatives too frictionless.

None of this is a critique of using AI for decision support. Modeling scenarios, exploring alternatives, pressure-testing assumptions — these are all legitimate and valuable uses of AI in the decision process. The issue is that the same tools need to be held by humans who can complete the decision cycle: who can take in the analysis, integrate it, commit to a direction, and hold that direction with enough stability to learn from what it produces.

That last piece — holding direction under velocity — is what Somabase’s cohort work is specifically designed to develop. Not through frameworks or theoretical models, but through structured practice in the actual conditions where decision coherence gets tested: high-velocity information environments, sophisticated input from AI systems, ambiguous early signal, and the very real temptation to keep the loop open just a little longer.

This is experimental work. We don’t have a finished curriculum because the challenge itself is still taking shape. What we do have is a clear hypothesis: that the humans who navigate this era best will be the ones who did the work to develop the internal architecture for commitment — before the velocity made that work feel impossible.

If this framing matches something you’ve been living in your own work, we’re building for exactly that.


AI Collaboration

Escalation Cycles: The Hidden Cost

What instability costs organizations adopting AI.

By Erik Horbacz · February 2026

When organizations talk about the costs of AI adoption, the conversation tends to focus on the visible expenses: licensing, infrastructure, training, integration. These are real costs, and they’re tractable. You can line-item them in a budget, track them against outcomes, and make rational decisions about allocation.

The cost I’m watching organizations miss is harder to quantify, but it’s larger. And it compounds.

It’s escalation.

Not escalation in the formal sense — not the escalation of a ticket to senior leadership, or the escalation of a minor conflict into a significant one. I mean something more structural: the acceleration of instability when you add AI velocity to humans who haven’t developed the capacity to hold steady under it. When that combination occurs, the feedback loops don’t just move faster. They move faster in the direction of reactive behavior, premature reversals, and amplified confusion. The speed that AI enables doesn’t go to useful outcomes. It goes to cycling through more iterations of the same unresolved dynamic.

This is the hidden cost. Not the tools. Not the training. The escalation cycles that emerge when unstable humans interact with accelerating systems.

Let me make this concrete. An organization deploys AI tools to its leadership and strategy teams. Output volume increases significantly. Decisions are being made faster — or at least, positions are being generated faster. Meetings are better prepared. Analysis is more thorough. On the surface, the indicators are positive.

But underneath, something else is happening. The increased velocity has reduced the latency that used to cushion reactive decision-making. When a team member was reactive — anxious, under-resourced, or carrying unresolved conflict — the natural slowness of information processing gave things time to settle before they became actions. The email took a day to draft. The decision had to wait for the next meeting. The proposal needed another review cycle. That friction wasn’t pure inefficiency. Some of it was absorbing the energy of reactivity before it could manifest as strategic action.

AI removes a significant portion of that friction. And with it, the natural containment that slower systems provided. Now the reactive email gets polished in minutes. The anxious pivot is articulated in a beautifully structured memo. The impulsive reversal of strategy arrives with compelling supporting analysis. The instability moves faster, and it moves dressed in competence.

This is what I mean by escalation under velocity. The underlying human dynamic hasn’t changed. The reactivity, the anxiety, the unresolved relational tension — all of it is still there. But AI has shortened the interval between those states and their organizational consequences. The escalation cycles run faster. The cost accumulates more rapidly.

Research published in Frontiers in AI (2024) found that emotional state significantly affects decision quality when working with AI — and specifically that elevated emotional reactivity correlates with biased decisions even when AI is involved in the reasoning process. The AI doesn't stabilize the emotional state of the user. It processes whatever it's given. And when what it's given is reactive input, it produces output that legitimizes and accelerates that reactivity.

What this means practically is that the organizations getting the most dysfunctional outcomes from AI adoption are not necessarily the ones that deployed the technology badly. Many of them deployed it well, by conventional measures. The dysfunction comes from the gap between the capability of the tools and the stability of the humans directing them. The tools are sophisticated. The humans are not yet stabilized for the environment the tools create.

There’s a particular pattern I see at the team level that’s worth naming. When one or two people on a team are internally unstable — reactive, scattered, or carrying unexamined behavioral patterns — AI tools create a kind of amplification loop. The reactive person generates more confident-sounding output, more quickly. That output enters team conversations with more apparent authority. The people with more stable judgment have less time to process and push back, because the cycle has accelerated. The quality of collective decision-making drops not because the tools are bad, but because the tools have shifted the balance toward whoever is generating the most output, fastest — which is not the same as whoever is generating the most reliable judgment.

Organizational stability under AI velocity requires something that most teams haven’t been asked to develop before: what I’d call relational integrity under acceleration. The ability to maintain constructive, clear, and grounded interactions with colleagues when the shared information environment is moving at a pace that previously would have been reserved for crisis conditions. When that capacity is absent, normal operational pressure starts triggering crisis-level responses. The escalation cycles aren’t just internal to individuals — they propagate across teams and become structural.

The conventional response to this problem is process: more structure, more sign-off layers, more review cycles. These can help at the margin. But they address the symptom, not the source. The source is the gap between the human capacity to hold steady and the velocity the technology enables. Adding process to that gap doesn’t close it. It adds friction at the output layer while leaving the underlying instability intact — and unstable humans are reliably creative at working around process friction.

What closes the gap is developing the underlying capacity: stability under pressure, decision coherence, behavioral containment. These are trainable. They’re not soft or abstract — they’re operational capacities that show up directly in the quality of decisions made under time pressure, the ability to hold a team direction when alternatives are being generated rapidly, and the reduction of escalation cycles that cost organizations real time, real money, and real relational capital.

This is what Somabase’s Enterprise program is designed to address. Not as a compliance or wellness initiative — as a performance investment. The hypothesis is simple: if the limiting factor for AI collaboration is human stability, then developing that stability is one of the highest-leverage things an organization can do in the current environment. Not instead of AI investment — alongside it. The tools are accelerating. The question is whether the humans directing those tools are developing at a commensurate pace.

We’re at the beginning of knowing how to do this systematically. The research is early. The organizational context is genuinely new. But the pattern is visible enough to act on, and the cost of not acting is compounding every quarter as AI capability advances.

The escalation cycles are the signal. They’re telling you that the investment in human capacity hasn’t kept pace. What you do with that signal is the question.

This is what Somabase is exploring — and if you’re seeing this pattern in your organization, we’re building something worth talking about.


Relational Intelligence

Relational Intelligence as Infrastructure

This is not self-help. It is operational capacity.

By Erik Horbacz · February 2026

There is a category error embedded in how most organizations think about relationships. Relational skills — how people listen, navigate conflict, hold their ground under pressure, repair after rupture — get filed under “soft skills,” which is professional shorthand for: important in theory, deprioritized in practice. Trainings happen once a year. HR handles it. We move on.

The cost of this error is enormous and almost entirely invisible on the spreadsheet.

Every team runs on the quality of its relational dynamics. Not on top of them — on them. The capacity to deliver honest feedback, to stay coherent when a partnership is under strain, to move through disagreement without fracturing, to sustain presence when a conversation gets difficult — these are not interpersonal amenities. They are load-bearing elements of any functioning collaboration. When they’re underdeveloped, you don’t just get awkward meetings. You get avoidance masquerading as agreement, passive friction bleeding into timelines, decisions made by whoever tolerates confrontation least, and teams that look functional on an org chart until stress exposes the structural gap.

I call this relational infrastructure. And like physical infrastructure, most people only notice it when it fails.

The question worth sitting with is: what would it take to actually develop this? Not to talk about it in a workshop, not to absorb another framework on active listening — but to genuinely build the capacity for relational integrity under real-world pressure?

This is the question Somabase’s Relational Intelligence track is organized around.

The research here is more rigorous than the soft-skill framing suggests. Stephen Porges’ work on polyvagal theory has spent decades documenting something that practitioners have known intuitively: the body’s capacity for social engagement — open communication, genuine listening, collaborative problem-solving — is not primarily a matter of intention. It is a matter of physiological state. When a person (or a team) is in a state of felt safety, social engagement comes naturally. When that felt safety is absent, the system defaults to self-protection. Not because the person is difficult. Because that’s how biology works.

This matters practically because it means relational quality is not just a character trait to be selected for in hiring. It’s a capacity that can be built — or degraded — by conditions. The conditions of a cohort designed specifically for this development are different from the conditions of a standard team meeting. That difference is not cosmetic.

Attachment research adds another layer. The patterns people develop early in life — how they relate to uncertainty, how they handle the experience of needing something from another person, how they respond when a relationship feels threatened — these patterns carry forward with remarkable consistency into adult professional and personal life. This is not determinism. It is pattern recognition. And patterns, once visible, become workable.

The Somabase Relational Intelligence cohort is an 8-week structured practice container built around this premise. It is not therapy. It is not a support group. It is guided practice — in a cohort format — focused on the specific relational capacities that are most likely to break down under pressure: identity boundaries, escalation reduction, the ability to sustain presence in high-stakes relational moments, and the capacity to repair rather than retreat when rupture occurs.

What makes the cohort format effective here is not simply that it offers community. It’s that it creates the conditions for real relational practice, which requires — necessarily — other people. You cannot develop relational capacity in isolation any more than you can develop cardiovascular fitness by reading about running. The group container is the training environment. The relational dynamics that emerge in the cohort itself — the friction, the moments of resonance, the experiences of feeling seen or misunderstood — those are the material.

Research published in Frontiers in Psychology (2025) and the Brandon Hall Group's work on learning effectiveness both point to the same finding: structured group containers produce more durable behavioral change than individual learning. The reason is straightforward. Behavioral change requires repetition in context. A cohort provides that context, repeatedly, over eight weeks.

The topics the Relational Intelligence cohort works with include: how relational patterns surface under pressure; what identity boundaries actually look like in practice (and what their absence costs); how sexual dynamics serve as a precision lens for understanding how someone relates; and how the relationship with self — the quality of self-regard, the capacity for honest self-observation — shapes every external relationship as an upstream variable.

That last dimension — the relationship with self — tends to get the least airtime in professional development contexts, and it may be the most consequential. The way a person relates to their own internal experience, the degree to which they can stay present to uncertainty without collapsing or overreacting, the quality of their internal coherence under pressure — all of this transmits directly into their relational behavior. You can’t build relational integrity externally while the internal relationship is fragmented. It doesn’t hold.

I am not framing this as a problem to be fixed. Most of the people drawn to Somabase’s work are already capable, already functional, already succeeding by most external measures. What the Relational Intelligence track offers is not remediation. It is precision development — the kind of capacity building that shows up in decision coherence, in the quality of partnerships, in the ability to navigate high-stakes relational dynamics without defaulting to patterns that are no longer useful.

The organizations and individuals doing this work will be better equipped for what’s coming. The complexity of human collaboration — with each other, and increasingly with intelligent systems — is not decreasing. The relational demands are increasing. Treating relational intelligence as infrastructure, and building it accordingly, is not the soft choice. It is the strategic one.

Somabase is still early. The Relational Intelligence cohort is an experiment, running in real time, with real people who are willing to do serious work in a structured container. If that sounds like the kind of development you’re ready for, we’d like to hear from you.


Relational Intelligence

Sexuality as a Diagnostic Domain

A precision lens for relational patterns.

By Erik Horbacz · February 2026

Most frameworks for relational development politely sidestep one of the most precise signals available. They cover communication styles, conflict resolution, feedback loops, listening skills. These are useful. They are also, in a specific way, sanitized — designed to be palatable in corporate contexts, safe to put in a handbook, unlikely to surface the deeper patterns that actually drive relational behavior.

Sexuality is different. It is not a topic people comfortably include in leadership development curricula, and that discomfort is itself data. The reason sexuality is avoided in serious relational development work is not that it’s irrelevant. It’s that it’s too relevant — close enough to the actual material that it makes people uncomfortable, which is exactly why it functions as a precision lens.

Let me be direct about what I mean by that.

How someone navigates desire — whether they can acknowledge it, communicate it, tolerate the vulnerability of wanting something — maps closely to how they navigate desire in non-sexual contexts: wanting recognition, wanting influence, wanting a partnership to work. The relational dynamics are structurally similar. The sexual context just makes the pattern more visible because there’s less room to intellectualize it.

How someone navigates vulnerability in intimate contexts — whether they can stay present when they feel exposed, or whether they default to control, withdrawal, performance, or deflection — is frequently the same pattern they carry into high-stakes professional relationships. The strategy changes. The underlying structure does not.

The same is true for boundary dynamics. Someone who consistently dissolves their own preferences in intimate contexts to avoid disrupting a partner’s comfort tends to do the same in professional settings. Someone who uses intensity to override another person’s stated limits in intimate contexts tends to do that in business negotiations, too. These are not separate personality domains. They are expressions of the same relational architecture, surfacing in different contexts.

This is not a new observation. It’s one that practitioners across disciplines have made for decades. What’s underused is the willingness to work with it directly — to treat sexual dynamics not as a separate personal category, but as a diagnostic domain: a place where relational patterns become visible with unusual speed and clarity.

The research supports this framing more than most people realize. A 2019 study published in the Journal of Sex & Marital Therapy found that mindful awareness during sexual experience correlates with higher relationship satisfaction and significantly reduced relational anxiety. The operative mechanism appears to be the capacity for presence under vulnerability — which is exactly the capacity that determines relational quality across all domains. Developing it in an intimate context has upstream effects.

Attachment research adds a relevant dimension here. The patterns established early in life around proximity, safety, and self-disclosure — what researchers call attachment styles — are activated most strongly in intimate contexts. This is not coincidence. Intimacy is the original relational laboratory. The patterns formed there are foundational. Working with them in an informed, structured setting is not indulgent — it is strategically efficient.

Somabase’s Relational Intelligence cohort includes sexuality as a domain of inquiry because leaving it out would mean leaving out the most precise signal available for understanding how someone actually relates. That’s an analytical choice, not a provocative one.

To be clear about what this means in practice: this is not sex education. It is not couples counseling. It is structured group coaching with clear ethical parameters, facilitated by practitioners who know how to hold this material with care. The cohort explores the relational patterns that surface in sexual contexts — desire, vulnerability, power, boundaries, intimacy, reciprocity — as a window into the broader relational architecture that shapes every significant relationship in a person’s life.

The topics are handled with precision and without gratuitousness. The aim is not exposure for its own sake. It is pattern recognition — developing the observational capacity to see your own relational tendencies clearly enough to work with them consciously.

There is also a dimension here that is often missed in the professional development framing: the relationship with self. How someone relates to their own desire — whether they can tolerate it, acknowledge it, act on it with integrity — is a direct expression of their relationship with self. And the relationship with self is the upstream variable for every external relationship. The degree to which someone can stay honest with themselves about what they want, what they’ll accept, and what they need in close relationships determines the quality of their relational output in every direction.

A person who has done serious work in this domain shows up differently. Not because they’ve acquired new techniques for managing conflict. Because the underlying relational capacity is more developed — the ability to stay present under vulnerability, to maintain identity clarity in the context of closeness, to navigate power and desire without defaulting to avoidance or aggression. That capacity transmits into every relationship: romantic, professional, collaborative.

The reason this hasn’t been integrated more widely into relational development work is not because it doesn’t belong. It’s because it requires a container that can hold it — skilled facilitation, clear structure, a cohort of people who have opted in with genuine intention, and ethical boundaries that make the exploration safe. Somabase is building that container.

I want to be honest about the experimental nature of this. Including sexuality as a dimension of relational intelligence development in a cohort format is not a common approach. We are building this carefully, adjusting as we go, and doing it with people who are ready to engage seriously with what it surfaces. It is not for everyone. For the people it is for — the ones who sense that the relational dimension is where the real leverage is — it may be the most useful development work available.

If you’re curious about what that container looks like and whether it’s the right fit, we’re having those conversations.


MS360

MS360: A Mirror for Behavior Patterns

What it is. What it is not.

By Erik Horbacz · February 2026

One of the persistent challenges in behavioral development work is the lag between what someone intends and what they actually do — and the further lag between what they do and their awareness of having done it. Patterns are, by definition, automatic. They run below the level of conscious deliberation. This is what makes them efficient, and it’s also what makes them difficult to change: you can’t work with what you can’t see.

This is the problem MS360 is designed to address.

MS360 — MindScape360 — is Carinda Salomon’s biometric translation technology. I want to be precise about that attribution, because it matters: this is Carinda’s work. I am Somabase’s co-founder; she is the developer of the technology. What we are building together is an integration — an exploration of how MS360’s visibility layer can work alongside Somabase’s cohort model to accelerate the behavioral pattern recognition that the group work is designed to develop.

What the technology does is translate internal physiological states into visible patterns. Biometric signals that would otherwise run as an invisible undercurrent become readable data. The body is always tracking. MS360 makes some of that tracking legible.

It does not diagnose. It does not interpret meaning. It does not tell you what to do. It provides data — a physiological layer that participants can reference as they move through the relational and collaborative work of the cohort.

The research basis for this kind of external visibility is substantial. Biofeedback research has consistently shown that when people gain access to information about their internal states, the rate of behavioral pattern recognition accelerates. The mechanism is straightforward: the feedback loop shortens. Instead of requiring weeks of behavioral observation to identify a pattern, a participant can see what their internal state does in specific kinds of moments. The connection between trigger, state, and behavior becomes observable rather than reconstructed from memory.

Somatic research adds a complementary finding. Work by Payne, Levine, and colleagues (2015), and by Brom and collaborators (2017), demonstrates that body-based approaches produce measurable outcomes in behavioral regulation that cognitive approaches alone often don’t reach. This makes intuitive sense: the patterns encoded in habitual behavioral responses are not purely cognitive. They involve the whole physiology. Accessing them through a physiological signal — rather than only through verbal reflection — engages a different and often more direct route to the underlying structure.

MS360 is used in parallel with the Somabase cohort work, not embedded within the group sessions themselves. This is an important distinction. The cohort sessions are a relational container — a structured group practice space where human dynamics are the primary material. The MS360 layer is something participants work with alongside that, providing a complementary data signal they can bring to their own reflection and, where relevant, to the coaching context.

The hypothesis we are testing is this: when people can see their patterns — physiologically, not just behaviorally — they develop the capacity to change them faster. When the invisible becomes visible, it becomes workable. This is not a guarantee. It is a working premise that we are testing in real time, with real participants, in an early-stage experiment.

I want to be honest about where we are with this. MS360 is not a validated clinical instrument in the sense that it has been through the full arc of peer-reviewed longitudinal study. It is a sophisticated technology in active development, being integrated with a cohort model that is itself in active development. Carinda and I are building this carefully and iteratively. The early signals are promising. We are treating them as signals, not conclusions.

There is a particular quality of self-knowledge that becomes available when behavioral patterns are visible rather than inferred. Most people have some awareness of their patterns — the circumstances under which they become reactive, the relational dynamics that reliably produce a specific response. But awareness of a pattern in the abstract is different from watching it unfold in real time. The latter is more actionable. It creates a reference point: a moment you can return to, examine, and use as the basis for a different choice next time. MS360 is designed to provide exactly that kind of reference point — a data signal that is personal, specific, and grounded in what your own physiology actually did, not what you think it did.

What makes this integration worth pursuing is the nature of the problem it addresses. The Somabase cohort work — whether in Relational Intelligence or AI Collaboration — is designed to develop stable, grounded behavioral capacity over time. That development happens through practice, reflection, and the feedback that a skilled facilitated group provides. Adding a physiological visibility layer does not replace that process. It creates a complementary data channel that participants can use to sharpen their own self-observation and accelerate the pattern recognition that the group work initiates.

The goal is not technology for its own sake. The goal is acceleration of genuine behavioral development — the kind that persists beyond the cohort container and shapes how someone actually functions in their relationships and their work. If MS360 can meaningfully contribute to that, it earns its place in the model. We believe it can. We are finding out.

For people interested in the fuller picture — the combination of cohort-based relational work with the physiological visibility that MS360 provides — that integration is something Somabase offers as part of its experimental suite. It is not the right fit for every participant. For the ones who want access to every available signal in their own development, it is a meaningful addition.

Carinda’s work deserves its own exploration, and we’ll be publishing more on the technology itself in coming months. For now, this is the orientation: a technology that makes internal states visible, integrated with a group practice model that makes relational patterns workable. Two complementary approaches to the same fundamental challenge — developing the capacity to see clearly enough to change.


Field Notes

Human–Technology Collaboration Principles

Principles for the next decade of working with intelligence.

By Erik Horbacz · February 2026

What follows is not a manifesto. It is a working document — a set of principles distilled from the experiment that Somabase is running, refined through the cohort work we’ve done so far, and held loosely enough to be revised as the evidence develops. These are operating hypotheses, not conclusions. They are the intellectual foundation we’re building on, stated plainly so they can be examined, challenged, and tested.

If you’re building in this space — or thinking seriously about what it means to work with intelligent systems well — these are the premises I’m working from.

1. Your state determines your output.

This is the most fundamental principle in the model, and the most consistently underestimated. The quality of what you produce — your decisions, your communications, your collaborations — is downstream of your internal state. An intelligent system can amplify your capacity. It will also amplify your noise. If you are operating from a state of chronic reactivity, avoidance, or fragmentation, AI will not solve that. It will scale it.

Human stability is not a precondition for working with AI in the way that, say, hardware compatibility is a precondition. It is a precondition in the deeper sense: the quality of the person operating the tool shapes the quality of what the tool produces. Before asking what AI can do, it’s worth asking what state you’re in when you’re using it.

2. Delegation without discernment becomes dependency.

Intelligent systems are extraordinarily capable of handling tasks that were previously time-consuming. The efficiency gains are real. The risk is that the relief of offloading work can gradually extend to offloading judgment — the evaluative capacity that determines what to do with the outputs, how to use them, and whether they serve the actual goal.

Discernment is the capacity to judge well. It is developed through experience, through making decisions and living with their consequences, through building pattern recognition in real-world contexts. Delegating tasks is leverage. Delegating judgment is atrophy. The distinction requires ongoing attention.

3. The quality of your relationships determines the quality of your collaboration — with humans and with AI.

How you relate to other people shapes how you relate to every collaborative process. Relational patterns are not domain-specific. A person who defaults to control in close relationships tends to use technology in a controlling way — extracting answers rather than exploring possibilities, using AI to confirm what they already think rather than to genuinely challenge it. A person with high relational integrity tends to collaborate well with both humans and systems.

This is one of the core hypotheses of Somabase’s model: relational intelligence is not separate from AI collaboration capacity. It is foundational to it.

4. Velocity without coherence produces escalation, not progress.

AI dramatically increases the speed at which work can be produced. This is valuable when the direction is sound. It is genuinely dangerous when the direction is unclear, when the underlying thinking hasn’t been examined, or when the person operating the system is in a reactive state. Moving faster in the wrong direction is worse than moving slowly in the right one.

Coherence — clarity of intent, alignment between what you say you want and what you’re actually doing — is the prerequisite for productive velocity. Somabase’s cohort work is substantially about developing coherence: the capacity to act from a stable, examined position rather than from the momentum of habit.

5. Identity boundaries are the prerequisite for productive partnership — with any intelligence.

Partnership requires distinction. You cannot have a genuine collaboration with another intelligence — human or artificial — without a clear enough sense of yourself to know where you end and the collaboration begins. When identity boundaries are weak, collaboration tends toward either enmeshment (losing yourself in the process) or reactivity (defending against the process because it feels threatening).

Identity clarity does not mean rigidity. It means knowing your own values, preferences, and positions well enough to engage with difference without losing yourself in it. This is as relevant to working with AI systems as it is to close human relationships.

6. Behavioral patterns are consistent — they surface in relationships, in work, and in how you interact with technology.

One of the most reliable observations from behavioral research is that patterns transfer across contexts. The person who avoids directness in personal relationships tends to avoid it with AI — crafting prompts that hedge, that avoid specificity, that don’t actually ask for what they need. The person who defaults to dominance in professional contexts tends to use AI transactionally, extracting outputs without real engagement. The person with high relational integrity tends to interact with intelligent systems with a similar quality of presence.

This is not determinism. It is pattern recognition. And it is an invitation — if you want to understand how you actually relate, look at how you’re relating right now, in every context that’s in front of you.

7. Visibility accelerates change.

Patterns that operate below awareness are difficult to change. Patterns that are visible — behaviorally, physiologically, through the reflection of a skilled practitioner or a cohort — become workable. The reason Somabase integrates the MS360 biometric visibility layer into the cohort model is not a belief in technology for its own sake. It is an application of this principle: when you can see your patterns, you can work with them. When you can’t, you’re working in the dark.

This is also why the cohort format itself is valuable. Other people reflect your patterns back to you in ways that solo reflection doesn’t reach. Visibility is not only a technological function. It is a relational one.

8. Community is the container — individual development happens fastest in structured group settings.

Behavioral change is not primarily an information problem. Most people who struggle with relational integrity, stability under pressure, or decision coherence don’t lack information about what good behavior looks like. They lack the conditions for sustained practice and feedback in real-world relational contexts. Those conditions require other people.

Research from Frontiers in Psychology (2025) and the Brandon Hall Group’s work on learning effectiveness both document this: structured group containers produce more durable behavioral change than individual learning. The cohort is not just a delivery mechanism for content. It is the development environment itself.

These eight principles are the framework I’m building from. They are being tested through Somabase’s cohort work, refined by what we observe, and revised when the evidence demands it. Some of them will hold. Some will be refined beyond recognition. That’s the nature of working at the edge of something genuinely new.

The broader project — understanding how human beings can develop the stability, relational capacity, and behavioral coherence required to work with intelligent systems well — is not finished. It may be the defining developmental project of the next decade.

Somabase is an early attempt to build infrastructure for it. These principles are where we’re starting.
