Discussion about this post

Joe Cleary:

Great summary. I follow the area closely and still learn a lot every time you write something.

Lee chungsam:

This really resonates. The move from “models” to “systems” is exactly where compounding begins—because the loop between hypothesis → execution → evaluation gets shorter, and improvement becomes endogenous. What I find most important is the meta-layer that keeps the system aligned while it iterates: a mechanism that detects drift, compares against standards, corrects in real time, and then stabilizes. Without that, scale just amplifies inconsistency.

I explored this idea in a practical way: how AI can describe and operate a meta-layer in its own words, and why that matters for building stable, useful systems:

https://northstarai.substack.com/p/ai-spoke-of-a-meta-layer-in-its-own

Curious how you see the next step: what should be the “control layer” for system-level AI in 2026—governance, evaluation, or something closer to cognition itself?
