Foundations

AI should be applied as broadly as possible, but in high-stakes systems it must be controlled, repeatable, explainable, and reconstructable over time.

This page is the interpretive anchor for everything else on IntelligenceFactory.ai.

Domain Definition

I work on AI systems in regulated healthcare and similar high-stakes environments, where ambiguous inputs must map to precise, auditable actions. In these settings, probabilistic correctness is not sufficient. Systems must be controlled, repeatable, explainable, and reconstructable over time.

This work applies to problem classes where correctness can be challenged later and where the burden of proof is real. If a workflow touches patient care, billing eligibility, policy enforcement, reimbursement, regulatory review, or litigation exposure, then system behavior must be explicitly constrained.

A system that is "usually right" is not acceptable when decisions affect patient care, when compliance rules are involved, and when errors carry asymmetric downside. Reducing an error rate from 15% to 3% can be a technical success. In medicine and regulated operations, 3% can still be catastrophic.
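To make the residual risk concrete, here is a back-of-the-envelope calculation in Python. The annual decision volume is hypothetical, chosen only to illustrate scale:

    # Illustrative arithmetic only: the volume below is a made-up figure.
    annual_decisions = 100_000  # e.g., eligibility or billing decisions per year

    for error_rate in (0.15, 0.03):
        errors = annual_decisions * error_rate
        print(f"{error_rate:.0%} error rate -> {errors:,.0f} bad decisions per year")

The improvement from 15,000 to 3,000 bad decisions per year is a genuine engineering win, but the residual 3,000 are what patients, payers, and auditors actually experience.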

Core Misunderstanding

Much of what we hear about AI today is marketing. The excitement is real, but the conversation often collapses distinctions that matter operationally. Large language models are remarkable, and they should be used. They are not a panacea, and they are not interchangeable with AI as a whole.

The most common failure pattern is attaching LLMs to every process and hoping correctness emerges from prompt tuning, retries, or scale. That can be acceptable in low-stakes domains. It fails in healthcare and similar environments where decisions must be defended, replayed, and audited.

In practice, most failures are not model intelligence failures. They are boundary failures. Teams let probabilistic components cross decision boundaries that require deterministic behavior. Once that line is crossed, reliability and accountability degrade quickly.
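A minimal sketch of where that line belongs, assuming a hypothetical claim-review workflow (llm_suggest, decide, and the action names are invented for illustration). The probabilistic component proposes; deterministic code at the boundary decides what actually executes:

    # Hedged sketch: llm_suggest and the action set are hypothetical.
    # The point is the boundary: the model may propose, but only a
    # deterministic check decides what is executed.

    ALLOWED_ACTIONS = {"approve", "deny", "route_to_human"}  # closed action set

    def llm_suggest(document: str) -> str:
        """Stand-in for any probabilistic component (LLM, classifier, ...)."""
        return "approve with exceptions"  # free-form output, not guaranteed valid

    def decide(document: str) -> str:
        suggestion = llm_suggest(document)
        if suggestion not in ALLOWED_ACTIONS:
            # Anything outside the closed set escalates instead of executing.
            return "route_to_human"
        return suggestion

    print(decide("claim #1234"))  # -> route_to_human

The design choice is that probabilistic output never crosses the boundary directly: it is validated against a closed set of actions, and anything unexpected degrades to human review rather than to silent error.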

Operating Principles

  • Structure before generation. When identity, eligibility, policy, or payment enforcement matters, structured data and deterministic rules must come before free-form generation.
  • Determinism at decision boundaries. Probabilistic models are useful in constrained lanes, but they cannot be the final authority where consequences are real; see the sketch after this list.
  • Traceability is a requirement. If a decision cannot be reconstructed later, it is operationally unsafe.
  • Versioning is inseparable from correctness. Rules, ontologies, and policies change; systems must preserve historical truth across versions.
  • Humans remain primary actors where trust matters. AI should reduce failure paths, not remove accountability.
  • Controls are more important than confidence scores. High confidence in an unconstrained system is still unsafe.
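A minimal sketch of the second, third, and fourth principles together, assuming a hypothetical eligibility check (the rule, field names, and version label are all invented). The rule is deterministic, the ruleset version is pinned, and every decision emits a record from which it can be reconstructed:

    import hashlib
    import json
    from datetime import datetime, timezone

    # Hedged sketch: the rule and all field names are hypothetical.
    RULESET_VERSION = "eligibility-rules/2024-07"  # pinned; never mutated in place

    def eligible(member: dict) -> bool:
        # Deterministic rule at the decision boundary: same input, same answer.
        return member["plan_active"] and member["age"] >= 18

    def decide(member: dict) -> dict:
        # The decision record, not a log line, is the unit of accountability.
        return {
            "ruleset_version": RULESET_VERSION,
            "input_hash": hashlib.sha256(
                json.dumps(member, sort_keys=True).encode()
            ).hexdigest(),
            "input": member,  # or a pointer into immutable storage
            "decision": "eligible" if eligible(member) else "ineligible",
            "decided_at": datetime.now(timezone.utc).isoformat(),
        }

    print(decide({"plan_active": True, "age": 42}))

Replaying the stored input against the pinned ruleset version reproduces the decision exactly. That is what "reconstructable over time" means in practice.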

Why Healthcare (and Similar Industries)

Healthcare exposes system design mistakes faster than almost any other domain. It combines intertwined rules, evolving ontologies, reimbursement constraints, regulatory scrutiny, and immediate human consequences. There is very little room for fuzzy boundaries.

If AI can be applied correctly in healthcare-grade operations, it can usually be applied correctly anywhere else with similar governance pressure. If it is applied incorrectly here, the cost is immediate and visible.

That is why I focus on healthcare and adjacent regulated sectors. These domains force clarity. They reward explicit control planes, disciplined abstractions, and systems that can stand up to audits years after a decision was made.

Credibility Context

I have been working in AI since the late 1990s, long before this cycle of adoption. I have applied these ideas in real healthcare operations, including helping turn around a failing healthcare technology company by materially increasing patient volume while reducing overhead. Those outcomes did not come from hype. They came from careful use of the right techniques in the right places with explicit controls.

How to Read the Rest of This Site

The evergreen pages define recurring hard problems in regulated AI systems. They focus on why common approaches fail and what correctness actually requires.

Essays are scar tissue. They document where systems broke, what had to be abandoned, and which constraints mattered under pressure.

Product and project pages are applications of these ideas, not replacements for them. Implementation detail belongs in private working conversations.

Engagement Boundary

If you are looking for hype, this site will disappoint you. If you are looking for ways to apply AI broadly, correctly, and safely in high-stakes environments, pick one contact path and start there.

Contact