What PSF Offers Practitioners

The Proxy Seduction Framework (PSF) is a diagnostic, not a consulting methodology. It does not tell organizations what to do. It tells them what they can no longer see, and it supplies specific instruments for recovering partial visibility.

The framework's posture
What PSF offers

A diagnostic that reveals what engagement conceals

PSF provides instruments for detecting evaluative erosion before it surfaces as visible failure. The framework identifies observable patterns (proxy traps), maps organizational activities by vulnerability (material braking), and provides questions that expose hidden dependencies (judgment stock).

These instruments work because they do not assume the evaluator is stable.

What PSF does not offer

Prescriptive remedies or implementation playbooks

PSF does not prescribe guardrails, best practices, or AI governance frameworks. Other frameworks already occupy that territory (e.g., Mollick's delegation calculus, Raisch's paradox management, the bounded rationality remedies).

Those prescriptions assume a stable evaluator who can assess whether the guardrails are working. PSF's contribution is explaining why that assumption is structurally unreliable.

Why this is principled, not incomplete

The absence of prescription follows from the mechanism

If an organization's evaluative capacity has already been partially reconstituted by engagement, then any remedial guidance PSF offers would be evaluated through the very frame the guidance is trying to correct. This applies at every level: the CEO assessing strategy, the team lead evaluating output quality, the junior practitioner judging their own development.

The desire for prescriptive guidance is itself a data point. The instinct to ask "so what do I do?" before asking "what can I no longer see?" is exactly the move PSF predicts: the demand for actionable metrics over diagnostic judgment. This instinct operates identically whether it comes from a board, a middle manager, or an individual contributor.

Three diagnostic instruments
Instrument 1

Detection infrastructure

The twenty-one proxy traps are observable patterns a CEO, a team lead, or anyone in the organization can listen for in their own discourse. When the leadership team says "AI handles the routine so we can focus on what matters," that is a testable claim, not a fact.

PSF asks: is the capacity to identify "what matters" itself sustained through the routine work being displaced?

In practice: Run a team's recent AI-related communications (Slack threads, strategy decks, town halls) through the proxy traps. Which traps appear? Do they cluster? Are they strategic (deliberate framing) or constitutive (sincere belief)? The strategic/constitutive distinction is itself diagnostic: a high ratio of constitutive to strategic traps indicates that proxy seduction has progressed further.
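The audit above can be sketched as a simple phrase scan. This is a hypothetical illustration only: the trap phrases and their strategic/constitutive labels below are invented placeholders, not PSF's actual twenty-one traps, and real classification would require human judgment, not string matching.

```python
from collections import Counter

# Illustrative trap patterns -> classification (assumed, not from PSF)
TRAPS = {
    "handles the routine": "constitutive",
    "focus on what matters": "constitutive",
    "force multiplier": "strategic",
    "human in the loop": "strategic",
}

def audit(messages):
    """Count trap occurrences across a corpus of communications."""
    counts = Counter()
    for text in messages:
        lower = text.lower()
        for phrase, kind in TRAPS.items():
            if phrase in lower:
                counts[kind] += 1
    strategic = counts["strategic"]
    constitutive = counts["constitutive"]
    # A high constitutive-to-strategic ratio suggests deeper internalization.
    ratio = constitutive / strategic if strategic else float("inf")
    return {"strategic": strategic, "constitutive": constitutive, "ratio": ratio}

corpus = [
    "AI handles the routine so we can focus on what matters.",
    "Treat the model as a force multiplier, with a human in the loop.",
]
print(audit(corpus))  # -> {'strategic': 2, 'constitutive': 2, 'ratio': 1.0}
```

The point of the sketch is the output shape, not the matching: what the diagnostic produces is a count and a ratio per team, which can then be compared across teams or over time.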
What this produces

A map of which evaluative assumptions the organization has internalized without examination, and where the discourse has naturalized the erosion. Not a fix. A picture of what needs fixing.

Instrument 2

The braking audit

PSF predicts that the severity of evaluative erosion varies with material braking: the degree to which a domain's properties force degradation into view. A CEO can map their organization's activities along this dimension now.

Activities with strong braking (physical outputs, regulated thresholds, immediate consequences) are naturally resistant. Activities with weak braking (strategic judgment, design quality, talent evaluation, mentorship) are where proxy seduction operates most freely.

In practice: List the organization's AI-engaged activities. For each, ask: if evaluative capacity degraded by 30% in this activity, how long would it take for someone to notice, and through what signal? If the answer is "months" or "we'd see it in the metrics" (which are themselves proxy metrics), that activity sits in the weak-braking zone. It is not necessarily in trouble, but it is the place where proxy seduction would operate without detection.
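The two audit questions (how long until a 30% degradation is noticed, and whether the noticing signal is itself a proxy metric) can be sketched as a scoring rule. The activities, latencies, and thresholds below are invented assumptions for illustration; PSF does not prescribe specific cutoffs.

```python
def braking_zone(detect_days, signal_is_proxy_metric):
    """Classify an activity's braking strength from the two audit questions:
    detection latency and whether the detecting signal is a proxy metric."""
    if detect_days <= 7 and not signal_is_proxy_metric:
        return "strong"    # degradation forced into view quickly, directly
    if detect_days <= 30:
        return "moderate"
    return "weak"          # months of latency: erosion can run undetected

activities = [
    # (activity, days until a 30% degradation is noticed, proxy-metric signal?)
    ("regulated compliance filings", 3, False),
    ("code review quality", 21, False),
    ("strategic judgment", 180, True),
    ("mentorship", 365, True),
]

vulnerability_map = {name: braking_zone(days, proxy)
                     for name, days, proxy in activities}
for name, zone in vulnerability_map.items():
    print(f"{zone:>8}  {name}")
```

The "weak" entries are exactly the vulnerability map the instrument describes: not activities to stop, but places needing evaluative capacity independent of engagement-produced metrics.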
What this produces

A vulnerability map. Not a list of activities to stop using AI for, but a map of where the organization needs independent evaluative capacity that does not depend on the metrics the engagement itself produces.

Instrument 3

The judgment stock question

Every AI engagement decision has a visible component (what does it produce now?) and an invisible component (what developmental activity does it displace?). Current output quality can mask a drawdown of accumulated judgment stock.

PSF gives the CEO a question: through what activities did the people currently doing this work develop the judgment to do it well, and does this engagement preserve or bypass those activities?

In practice: Take the organization's most AI-engaged activity. Identify the three most experienced practitioners in that activity. Ask: how did they develop the judgment they bring to this work? What specific experiences, struggles, feedback loops, and effortful practice built the competence they now exercise? Then ask: does the current AI engagement preserve those developmental pathways for the next generation of practitioners, or does it route around them? If the senior practitioners' judgment is being consumed (they evaluate AI output using judgment they built through pre-AI practice) but not replenished (junior practitioners skip the developmental activities that built that judgment), the organization is drawing down a non-renewable resource.
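The consumption-versus-replenishment argument can be made concrete with a toy stock-and-flow sketch. Every number here is an invented assumption for illustration; PSF supplies the question, not a quantitative model, and actual judgment stock is not directly measurable.

```python
def project_stock(initial, consumption_per_year, replenishment_per_year, years):
    """Project accumulated judgment stock when seniors draw it down
    evaluating AI output while juniors' developmental pathways are bypassed."""
    stock = initial
    history = [stock]
    for _ in range(years):
        stock = max(0.0, stock + replenishment_per_year - consumption_per_year)
        history.append(stock)
    return history

# Assumed scenario: engagement consumes judgment faster than it is rebuilt,
# even while current output quality (not modeled here) stays high.
history = project_stock(initial=100.0, consumption_per_year=10.0,
                        replenishment_per_year=2.0, years=5)
print(history)  # -> [100.0, 92.0, 84.0, 76.0, 68.0, 60.0]
```

The design point is the lag: the stock declines monotonically while nothing in current output forces the decline into view, which is why the instrument asks about developmental pathways rather than present quality.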
What this produces

A generational assessment. Current quality may be high because the senior practitioners carry forward judgment stock developed under different conditions. The question is whether the organization is producing the next generation of practitioners who can do the same, or whether it is unknowingly living on inherited capital.