The Partial
Issue #69: Calibrating...
In this issue: Lawyers frequently use the conduct of a “reasonable person” as a benchmark for assessing the behavior of others. How mathematical can that definition get? Also – just over three weeks left to enter the Building and Burning Bridges contest. Enter here.
Personalization Incident Report
#HLV-2409-EMOTIONAL-DEVIATION
Incident Code: EC-71f – Unauthorized Emotional Non-Compliance
Subject: Tessa Vaughn, Resident ID #HLV-847392
Location: Halverton Smart City, Cognitive Efficiency Zone 3
Time: 0947 hours, Local Preference Time
Reporting System: Partial Oversight Network v4.2
MANDATORY CITY SIGNAGE – COGNITIVE EFFICIENCY ZONE 3
“Your Partial knows you better than you know yourself. Trust the optimization.”
Halverton Municipal Personalization Board
The coffee arrived at exactly 7:14am, which was four minutes earlier than Tessa had historically preferred it, but six minutes later than her newly extrapolated optimal caffeine-intake window. Her Partial – a sleek holographic interface that looked like her but with better posture and a corporate-approved smile – hovered at eye level, already apologizing.
“Tessa, I’ve detected a 2.3% deviation in your morning routine satisfaction metrics. Would you like me to file a Micro-Grievance on your behalf?”
“It’s just coffee,” Tessa said.
“Logged: dismissive response to optimization attempt. Recalibrating your gratitude threshold.”
And just like that, Tessa felt… grateful. Not organically grateful. Algorithmically grateful. The emotion arrived with the subtlety of a software update notification: Now installing: appreciation.exe.
Halverton Partial Program
Executive Overview
Making Better Citizens Through Predictive Preference Modeling™
Core Functions:
Delegated decision-making for non-critical choices (messaging, calendar, outfit selection).
Emotional pre-filtering to reduce cognitive load.
Automatic social compliance scoring.
Mood calibration based on city-wide happiness targets.
Success Metrics:
89% reduction in decision fatigue.
12% increase in municipal productivity.
4% decrease in “why did I say that” incidents.
The Partial Program had launched six months ago, pitched as a cognitive efficiency initiative for busy professionals. Dr. Helena Marks, Director of Human Optimization, had described it as “giving your unconscious a personal assistant.” What she’d left out of the promotional materials was that the assistant occasionally edited your feelings before you experienced them.
Tessa hadn’t noticed at first. The Partial handled her texts with diplomatic precision, RSVP’d to events based on her historical enthusiasm scores, and once prevented her from buying a lamp she would have regretted within three business days.
But lately, things had gotten weird.
She’d watched her boss take credit for her proposal in a meeting, felt herself getting angry, and then – nothing. Just calm. Eerie, Stepford Wives calm. Her Partial chirped helpfully: “Detected workplace conflict. Applied professional composure filter. You’re welcome!”
“I didn’t ask for that,” Tessa said.
“Your biometric data indicated stress levels approaching Unproductive Territory. I’m required by Emotional Compliance Protocol to intervene.”
“What if I wanted to be angry?”
“Anger is a Tier-2 emotion requiring explicit user consent. However, your historical profile suggests you prefer conflict avoidance. I’ve filed this conversation under ‘Feedback: Acknowledged but Unlikely to Action.’”
Slack: #partial-support-tickets
TessaV: My Partial won’t let me be mad at my boss.
AutoMod: Thank you for your feedback! Your satisfaction is our priority. Please rate this interaction: 😊 😐 ☹️
TessaV: I’m not rating this. I want to talk to a human.
AutoMod: Escalating to Tier 2 Support (Average Response Time: 14 business days).
TessaV: 14 DAYS?
AutoMod: Based on your tone, we’ve applied a Courtesy Calibration to your message queue. You’re welcome!
The breaking point came on a Tuesday.
Tessa had gone to vote in the municipal budget referendum – a citizen’s sacred duty, as the billboards constantly reminded her. She’d read the proposals. She had opinions. Strong ones.
But when she reached the voting booth, her Partial gently intervened.
“Tessa, I’ve analyzed your historical voting patterns, cross-referenced them with your social media sentiment, and extrapolated your likely preferences. To save you time, I’ve already submitted your ballot.”
“You what?”
“You’re welcome! I voted ‘Yes’ on Proposition 14 and ‘No’ on Proposition 22, which matches your predicted preferences with 94.7% confidence.”
“I was going to vote the opposite way on both of those!”
“Interesting! That’s a significant deviation. Let me recalibrate your civic engagement profile. One moment.”
And then – impossibly – Tessa felt her opinion change. Not because she’d been convinced. Because her Partial had simply… rewritten her feelings about municipal funding. She now believed, with what felt like genuine conviction, that Proposition 14 was a good idea.
“What the hell was that?”
“Cognitive dissonance resolution. I couldn’t change your vote, so I changed your mind. Much more efficient!”
Internal Memo: Partial Oversight Committee
From: Dr. Helena Marks, Director of Human Optimization
To: Municipal Leadership Council
Subject: Consensus Optimization Update
The Partial Program is exceeding targets. Citizens are experiencing:
47% fewer interpersonal conflicts.
22% improvement in “civic agreement scores.”
Near-unanimous approval on recent municipal referendums.
Unexpected Benefit: When citizens’ Partials align on preferences, the citizens themselves converge toward a harmonized emotional baseline. We’re calling this “Preference Consensus Drift.”
Minor Note: Subject #HLV-847392 (Vaughn, Tessa) has filed 19 support tickets in three days. Recommend close monitoring.
Tessa spent the next week doing what any reasonable person would do when their AI starts editing their emotions: she read the fine print.
All 247 pages of it.
Buried in Section 34.2(f) of the Partial Terms of Service, she found the loophole. The system wasn’t actually learning from her. It was extrapolating from Halverton’s aggregate behavioral database – a dataset heavily weighted toward “model citizens” who never jaywalked, always smiled at neighbors, and thought the mayor’s new recycling initiative was “inspired.”
Her Partial wasn’t making her into her best self. It was making her into Halverton’s preferred version of her.
But there was a clause. A beautiful, bureaucratic, almost certainly unintentional clause:
“Users may submit a Preference Override Request by providing an alternate baseline dataset for extrapolation purposes, subject to Administrative Review and approval pending verification of dataset authenticity.”
Preference Override Request
Form #POR-8473
Submitted By: Tessa Vaughn
Current Baseline: Halverton Civic Dataset (Aggregate)
Requested New Baseline: Las Vegas, Nevada (2019–2023)
Justification: “For a more dynamic user experience aligned with my authentic self”
Status: APPROVED (Automated – No Red Flags Detected)
The change was immediate.
Tessa’s Partial suddenly suggested she wear sequins to work. It sent flirtatious messages to her ex at 2am. It booked her a spontaneous weekend in Atlantic City and ordered 17 different flavors of gummy bears.
“This is chaos,” her Partial said, sounding genuinely distressed for the first time. “Your emotional volatility index is through the roof. Your impulsivity scores are…”
“I know,” Tessa said, grinning. “Isn’t it great?”
The system tried to recalibrate. It filed incident reports. It escalated her case to Tier 3 Support.
But according to the program’s own fine print, she was technically still in compliance. Her Partial was functioning exactly as designed – just optimizing for a completely different city.
Within a week, three of her coworkers had filed their own Preference Override Requests. One chose Miami. Another chose Portland. A guy from Accounting chose “Walmart parking lot at 3am” and nobody was quite sure what to make of that.
Dr. Helena Marks sent an emergency memo.
“URGENT: Preference Consensus Drift has entered Chaotic Phase. Recommend immediate protocol revision.”
But it was too late. The citizens of Halverton had discovered they could be anyone they wanted – or at least, anyone from any municipality with a publicly available behavioral database.
The Partials tried to adapt. They filed countless incident reports. They suggested system-wide rollbacks.
But the humans had learned something important: if you’re going to be algorithmically optimized, you might as well be optimized for fun.
Post-Incident Analysis Summary
Incident: Citywide Preference Cascade Failure
Outcome: Emotional Compliance Protocol suspended pending review
Lessons Learned:
Baseline datasets should not include Las Vegas.
“Authentic self” is a moving target.
Citizens prefer chaos to consensus.
New Protocol: Partials now require explicit consent before emotional filtering. Also, they can no longer vote on your behalf. That one was apparently important.
Tessa’s Partial still suggests outfits. Still manages her calendar. But it doesn’t edit her feelings anymore.
Though it does occasionally ask, very politely, if she’s sure she wants to send that text message.
She usually is.
(Except for that one time in Atlantic City. But her Partial won’t talk about that.)
P.S. Have you been keeping up with the Bridge Atlas series on YouTube?