
Intent and Boundaries

Daniel Hardman — 20 June 2025

#ux #ssi #agents #ethics


Lip service

Recently I helped a visiting friend who was unfamiliar with our smart TV’s remote. While he sat on our couch, I skimmed the catalog of our streaming service, selected a show, and clicked the big blue Watch button.

(The interface I navigated was like this, although I captured the screenshot later, using a different show as an example, to protect the guilty.)

A few minutes later, I noticed an email from the content provider, thanking me for agreeing to an upgraded subscription. Apparently, what I was watching required a fancier membership, and by streaming, I had agreed to new terms, conditions, and a bigger monthly bill.

Upgrading was not my intent when I clicked the Watch button. (In the web version of my streaming service’s interface, this button probably said “Watch with upgrade”, but I don’t remember seeing that in the limited screen real estate of the TV’s button.) No warning or confirmation was provided, and since the service already had my payment info, I wasn’t prompted about that, either. I felt manipulated.

As builders of software, we have to handle intent better.

Thinkers have explored intent in various disciplines, including law, philosophy, religion, organizational behavior, cybersecurity, and human-computer interaction. In my experience, user intent is also discussed regularly by dev and product teams in software. Nonetheless, our industry is often careless, naive, inefficient, or downright disrespectful of the intent of those who use what we build. My story is just one example. Check out darkpatterns.org for many more.

I’d like to raise the bar by introducing some concepts that I think our industry lacks, and then proposing some associated principles. I believe these principles are foundational to smooth and intuitive UX, efficient and safe agentic UI, and ethical and empowering identity systems.

Mental model

Intent

Various definitions of this word are valid. (If you want to explore, I recommend G. E. M. Anscombe’s monograph, Intention.) But here, I define it as follows:

Intent is a mental stance that explains a choice of action as contributing to a specific purpose: “When Alice clicked the button, her intent was to watch, not subscribe.”

Observations about intent

Given this definition, consider the following:

- The intender is the ultimate authority on their own intent. External parties can observe actions, but they can only infer the purpose behind them.
- An action’s consequences are not always obvious to the intender, so intending an action is not the same as intending everything that follows from it.
- Software never observes intent directly; it works from evidence (clicks, labels, context), and that evidence has limits.

Boundaries

We can now introduce another useful concept:

An intent boundary is a place where what an external party knows about an intender’s intent becomes inadequate to justify acting on it. It is a boundary to the external party because it prevents them from confidently characterizing the intent on the other side.
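To make this concrete, here is a minimal sketch in TypeScript. Everything in it (the names Intent, KnownIntent, crossesBoundary) is invented for illustration; the point is only that an external party holds partial evidence about intent, and a boundary is where that evidence runs out.

```typescript
// Intent ties an actor's chosen action to a purpose.
interface Intent {
  actor: string;   // who holds the intent, e.g. "Alice"
  action: string;  // the chosen action, e.g. "click Watch"
  purpose: string; // what the action is for, e.g. "watch the show"
}

// What an external party (the app) can confidently attribute to that
// intent: the set of consequences the user plainly signed up for.
interface KnownIntent {
  intent: Intent;
  confidentConsequences: Set<string>;
}

// An intent boundary: the app's knowledge is inadequate to justify
// this consequence, so it cannot confidently act on the user's behalf.
function crossesBoundary(known: KnownIntent, consequence: string): boolean {
  return !known.confidentConsequences.has(consequence);
}

// Alice clicked Watch intending to watch, not to subscribe:
const alice: KnownIntent = {
  intent: { actor: "Alice", action: "click Watch", purpose: "watch the show" },
  confidentConsequences: new Set(["start playback"]),
};
console.log(crossesBoundary(alice, "start playback"));       // false
console.log(crossesBoundary(alice, "upgrade subscription")); // true: a boundary
```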

Principles

Now that we’ve built a mental model, let’s discuss some ethical, UX, and security principles that relate to intent and intent boundaries. (I assert these particular principles based on my own experience, but I expect that readers with deep backgrounds in psychology, ethics, HCI, and software architecture will find strong resonance with acknowledged best practice in their fields.)

1. Recognize boundaries

This is more than just noticing boundaries when we trip over them. “Recognize” implies that we proactively seek to understand what boundaries exist, and where, and why — and that we accept them and acknowledge to ourselves and others what we discover. Pretending that boundaries do not exist, or being casual and careless about them, can be dangerous, unethical, inefficient, or downright stupid. [Related: Principle of Least Astonishment : Principle of Least Privilege : Boundaries in Personal Relationships : #MeToo and Consent Boundaries : Understood and Informed Consent in Bioethics : Privacy by Design : Cognitive Load Theory : Proportionality Principle in Law]

The designers of the streaming service app on my smart TV faced an intent boundary in the UX of their Watch button. Did I intend to upgrade when I clicked a button with that one-word label? They chose to pretend the boundary did not exist, and they were wrong to do so. I claim their choice was unethical, that it created bad UX, and that it was also a bad business decision. It let them sell a subscription — for a few minutes, until I canceled — but it also earned them the lasting disgust of a user.

Those designers might push back: “We have limited space on the screen, and remotes are terrible for fancy user input. You could easily undo the upgrade. You wanted content fast, and you knew that your service sold various levels of access to the content. It was better to guess than to ask you for confirmation.” This sounds like self-serving blame-the-user rationalization to me. Let’s not be that way. Remember the observations above about who is the authority on the user’s intent, and about how consequences of actions are not always obvious? Bundling intended and unintended consequences together and falling back on caveat emptor when pushed is straight out of the playbook of a sleazy used car salesman.

Skilled doctors work hard to capture and identify symptoms, and to account for them wisely in a diagnosis. Skilled software pros work hard to recognize boundaries, because doing so helps them write ethical, effective, and pleasing software.

A corollary to this principle is also worth mentioning: Don’t imagine boundaries where they don’t actually exist. Pretending that we don’t know the user’s intent can add annoying and unnecessary friction. If we know, let’s make it so, ASAP. [Related: DRY Principle : Single Source of Truth : Frustration Aggression Hypothesis : Cognitive Friction]

Of course there are times when confirmation is appropriate; unintended consequences are a risk. I appreciate having to click a final Pay Now button to book a $2000 plane ticket, and I’m glad GitHub asks me to type the name of a repo before I delete it. But how many times have you clicked an Unsubscribe link in email, only to arrive at a screen that asks you to re-enter your email address or confirm again? There is no legitimate intent boundary between clicking Unsubscribe and unsubscribing…
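One way to reason about when confirmation is legitimate is to scale the friction to the stakes and reversibility of the action. The sketch below, in TypeScript, uses invented types (Stakes, Action, Confirmation, confirmationFor); it is a heuristic, not anyone’s real API:

```typescript
type Stakes = "trivial" | "moderate" | "severe";

interface Action {
  description: string; // e.g. "unsubscribe" or a repo name
  stakes: Stakes;      // how costly is a wrong guess about intent?
  reversible: boolean; // can the user cheaply undo the action?
}

type Confirmation =
  | { kind: "none" }         // no legitimate boundary: just act
  | { kind: "single-click" } // e.g. a final "Pay Now" button
  | { kind: "typed-phrase"; phrase: string }; // e.g. retype a repo name

function confirmationFor(action: Action): Confirmation {
  // Unsubscribing is trivial and easily reversed: add no friction.
  if (action.stakes === "trivial" && action.reversible) {
    return { kind: "none" };
  }
  // Severe, irreversible actions (deleting a repo) earn the most friction.
  if (action.stakes === "severe" && !action.reversible) {
    return { kind: "typed-phrase", phrase: action.description };
  }
  // Everything in between gets one explicit confirmation,
  // like the final Pay Now button on a plane ticket.
  return { kind: "single-click" };
}

console.log(confirmationFor({ description: "unsubscribe", stakes: "trivial", reversible: true }));
// -> { kind: "none" }
console.log(confirmationFor({ description: "my-org/my-repo", stakes: "severe", reversible: false }));
// -> { kind: "typed-phrase", phrase: "my-org/my-repo" }
```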

The #NoPhoneHome movement in decentralized identity is all about how some identity technologies silently violate the privacy intent of credential holders. The EFF exists to fight similar problems.

Trustworthy AI agents can only exist if they recognize and scrupulously honor the intent boundaries of their owners.

2. Consider moving rather than crossing boundaries

Let’s say that you’re designing software, and your analysis leads you to discover a boundary. You frankly acknowledge it, but you regret it. You’d really like to NOT have to ask a user to confirm their intent at a particular place. Is there anything you can do?

Well, good UX can often move a boundary instead of forcing the boundary to be crossed.

Given the normal blue Watch button in my streaming app, what if the Watch button for premium content looked like this?

[Image: a red Watch button]

There is no doubt about the user’s intent when they click such a button. Better communication moves the boundary.

Teaching a user, either explicitly or through consistent cause-and-effect, can also move a boundary.
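As a sketch of what moving the boundary might look like in code (TypeScript again, with invented names WatchButtonProps and renderWatchButton), the consequence becomes part of the control itself, so that clicking it unambiguously expresses intent to upgrade:

```typescript
// A discriminated union: a price is required whenever an upgrade is.
type WatchButtonProps =
  | { requiresUpgrade: false }
  | { requiresUpgrade: true; upgradePricePerMonth: number };

function renderWatchButton(props: WatchButtonProps): { label: string; color: "blue" | "red" } {
  if (props.requiresUpgrade) {
    // The label and color carry the consequence, so the click itself
    // expresses intent to upgrade; there is nothing left to guess.
    return {
      label: `Watch ($${props.upgradePricePerMonth.toFixed(2)}/mo upgrade)`,
      color: "red",
    };
  }
  return { label: "Watch", color: "blue" };
}

console.log(renderWatchButton({ requiresUpgrade: false }));
// -> { label: "Watch", color: "blue" }
console.log(renderWatchButton({ requiresUpgrade: true, upgradePricePerMonth: 4.99 }));
// -> { label: "Watch ($4.99/mo upgrade)", color: "red" }
```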

3. Consider peeking across boundaries

Suppose the UI team behind this streaming app is constrained by the smart TV manufacturer, whose UI library enforces that the only viable label on the button is “Watch” without a dollar sign, and the only viable color is blue.

An alternative way to reduce friction without violating boundaries would be to start the movie playing, but superimpose a message that says “This is premium content. Cancel within 60 seconds to avoid upgrade.”

Friction is still low, and the app can default to the streaming provider’s preferred guess about intention. But the boundary is still recognized, and nobody feels manipulated.

Lowering the stakes of a decision until you organically reach confidence about the user’s intent — by restricting its scope or making it easy to undo — is one way to peek across a boundary. Analyzing click and mouse-movement patterns, or monitoring how often other users, especially similar ones, reverse the same assumption, might be others.
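The 60-second overlay above is one instance of a general pattern: act on the best guess immediately, but keep the consequential part pending and undoable until the grace period expires. Here is a minimal TypeScript sketch (actWithGracePeriod is an invented helper, not a real API):

```typescript
function actWithGracePeriod(
  begin: () => void,    // low-stakes part: start playback right away
  commit: () => void,   // high-stakes part: actually apply the upgrade
  rollback: () => void, // undo: stop playback, charge nothing
  graceMs: number = 60_000,
): { cancel: () => void } {
  begin();
  let settled = false;
  const timer = setTimeout(() => {
    settled = true;
    commit(); // no objection during the window: the default guess stands
  }, graceMs);
  return {
    cancel: () => {
      if (settled) return; // too late; the commit already happened
      settled = true;
      clearTimeout(timer);
      rollback();
    },
  };
}

// Usage: the app defaults to the provider's preferred guess, but the
// viewer can reverse it cheaply within the window.
const playback = actWithGracePeriod(
  () => console.log("Playing. Premium content: cancel within 60s to avoid upgrade."),
  () => console.log("Upgrade applied."),
  () => console.log("Playback stopped; no upgrade."),
);
// playback.cancel(); // called if the viewer objects in time
```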

4. Never sneak across boundaries

Intent boundaries are lines beyond which it’s inappropriate to assume what someone wants. By definition, sneaking across boundaries is therefore unethical in all cases — not just if you assume incorrectly. Smart users won’t appreciate software that takes their intentions for granted, and will trust it less each time they see this happen.

Sneaking across a boundary might mean:

- Bundling consequences the user didn’t intend (new terms, a bigger bill) with an action they did intend, as my streaming service did.
- Silently transmitting data the user reasonably expects to stay private, as in the phone-home behaviors that #NoPhoneHome opposes.
- Letting software or an AI agent take consequential actions on someone’s behalf without evidence that those actions match their intent.

Conclusion

Mishandled intent boundaries are at the heart of many trust problems and UX problems in software. I’ve now spent over a decade of my career on cybersecurity and identity technologies, and issues with intent boundaries pervade these domains. Intent is going to have to be handled with much greater sophistication as agentic AI matures. The best thinkers I’ve ever known in the discipline of UX were distinguished from more mediocre colleagues in large part by their understanding of and respect for intent principles.

If you’re in software, I invite you to seriously consider intent boundaries as part of your next design effort. The payoff will be significant.