Not subscribed? Sign up to get it in your inbox every week.

⚙ Hi {{first_name_tally|Operator}},
I was spending 90 minutes a day triaging email. That's a part-time job I didn't apply for.
So I built Dossier. It auto-categorizes everything and sends me two daily briefs with just the stuff that matters.
We're onboarding now. Schedule here.
While you schedule that timeslot…
Goals are voltage. Praise is amperage. Most managers blow the circuit by Tuesday.
Every training program that evaporates, every task that boomerangs back, every best employee who quits—that's a circuit that couldn't carry the load. You kept cranking voltage without adding capacity.
Here's how to fix it.
- Rameel

PRESENTED BY BELAY
What Would You Choose if Your Time Were Actually Yours?
You know the moment: the inbox wins. The day slips. You feel smaller than the leader you know you are.
Great leaders hand that weight to U.S.-based executive assistants who think at their level so they can move freely while the business scales.
Here's our guide to help you feel like yourself again.

How to Actually Rebuild Confidence Infrastructure
I've been thinking about why confidence infrastructure is so hard to fix. Most companies treat this like sentiment—"let's do a pulse check on morale"—when really it's electrical engineering.
You're managing voltage (goals, pressure, demands) and amperage (capacity to handle that pressure without blowing the circuit). Every time someone routes a decision back to you "just to check," every time a training program evaporates 90 days later, every time your best PM quits, that's a circuit that couldn't carry the load.

My PM homies out there getting burned every day
Here's what's worked for me (and what definitely hasn't).
1. Install Circuit Breakers People Actually Believe In
The Andon cord wasn't genius because Toyota installed a rope. It was genius because they made the policy so clear and so consistently enforced that workers believed pulling it wouldn't wreck their career.
The first person who uses your "circuit breaker" is basically running a test. Everyone else is watching to see what happens.
The one metric I've found that actually matters here is time-to-surface. The gap between when someone first knew about a problem and when they told someone who could fix it. If problems only surface in postmortems, your infrastructure is already failing.
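If you want to actually put a number on it, here's a minimal sketch in Python, assuming you keep a simple issue log with discovered_at and raised_at timestamps (both field names are mine, not from any real tool):

```python
from datetime import datetime
from statistics import median

# Hypothetical issue log: when someone first knew about a problem
# vs. when they told someone who could fix it.
issues = [
    {"discovered_at": datetime(2024, 3, 1), "raised_at": datetime(2024, 3, 2)},
    {"discovered_at": datetime(2024, 3, 4), "raised_at": datetime(2024, 3, 18)},
    {"discovered_at": datetime(2024, 4, 2), "raised_at": datetime(2024, 4, 30)},
]

# Time-to-surface in days for each issue
gaps_in_days = [(i["raised_at"] - i["discovered_at"]).days for i in issues]

print(f"Median time-to-surface: {median(gaps_in_days)} days")
# Watch the trend, not the absolute number. And if the issue count itself
# drops, treat that as the emergency, not the win.
```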
I learned this through an expensive mistake. At a startup, I tried to install a "no stupid questions" policy in our weekly ops review. The first person who used it, a junior PM, asked why we were prioritizing Feature X over Feature Y and got a five-minute explanation from the VP about "strategic alignment" that was basically a polite way of saying "because I said so."
That PM never asked another question in that meeting. Neither did anyone else. Policy died in 48 hours.
When I tried this again at my next company, the same question came up. This time I said "You know what, that's a great question and I don't actually have a good answer. Let's park this for 10 minutes after the meeting and figure it out." We did. Turned out she was right: we were building the wrong thing.
I made a point of saying this out loud in the next meeting: "Jenny caught a $40K mistake by asking why. That's exactly the kind of question I want more of."
The thing I keep coming back to: when signals decrease, treat it as an emergency.
Fewer problems being raised doesn't mean things are going better. It usually means people stopped believing it's safe to raise them.
2. Separate Feedback from Consequences (This Is Harder Than It Sounds)
Managers can't separate "what we learned about our systems" from "what I think about this person's competence." I do this. You probably do this. It's instinct.
The discipline is catching yourself and asking: "Am I learning about a broken process or am I deciding this person isn't good at their job?"
Make it explicit policy that postmortem discussions can't be referenced in performance reviews. Ever. Write it down. Enforce it.
The first time a manager says in a performance review "Well, there were those three incidents..." you've killed the program. Word spreads fast.
If people can't admit confusion, they'll pretend to understand and execute badly based on their misunderstanding.
I watched a team spend six weeks building the wrong feature because nobody wanted to admit they didn't understand what "seamless integration" meant in the spec. Cost the company $120K in wasted engineering time. Could have been prevented by one person saying "Wait, what does this actually mean?" in the kickoff meeting.
They'd learned that admitting confusion made you look stupid, so they guessed. And guessed wrong.
3. Treat Praise Like Amperage, Not Sentiment
This is the metaphor that finally made sense to me: Goals are voltage. They create pressure. Praise is amperage; it provides capacity to handle that pressure.
Keep cranking voltage without increasing amperage and you blow the circuit.
Most managers do this without realizing it. I definitely did. Stack three new projects on someone, give zero recognition for the two they just shipped, then act surprised when they burn out or start routing every decision back for approval.
A previous manager told me about this 2:1 rule: two praises for every goal assigned. Not because 2:1 is some magic ratio (it's not), but because goals create cognitive load and praise builds capacity to handle it. The ratio just ensures you're not stacking demands on infrastructure that's already overloaded.
But here's the thing that took me forever to figure out: Generic praise reads as manipulation.
Bad: "Great job on that project!"
Good: "The way you restructured that stakeholder update saved us three hours of rework by catching the missing dependency before it became a blocker. That kind of proactive communication is exactly what we need more of."
Specific praise reinforces what to do more of. Generic praise just makes people wonder what you're trying to get them to do.
I track this now like infrastructure: If I'm assigning goals weekly but only praising monthly, my amperage is too low for the voltage I'm running. The circuit's going to blow—either through burnout, boomeranged decisions, or that quiet quitting thing where people show up but stop trying.
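For what it's worth, here's a rough sketch of how that ledger could look, assuming you log goals assigned and specific praise given per person each week (the names and numbers are made up):

```python
# Hypothetical weekly ledger: goals assigned vs. specific praise given, per person.
ledger = {
    "alex":  {"goals_assigned": 3, "praise_given": 1},
    "jenny": {"goals_assigned": 2, "praise_given": 5},
}

TARGET_RATIO = 2.0  # the 2:1 heuristic: roughly two praises per goal assigned

for person, week in ledger.items():
    goals, praise = week["goals_assigned"], week["praise_given"]
    ratio = praise / goals if goals else float("inf")
    status = "ok" if ratio >= TARGET_RATIO else "amperage too low for the voltage"
    print(f"{person}: {praise} praise / {goals} goals -> {status}")
```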
I learned this from a mistake I made at Uber Freight. Had a program manager who was absolutely crushing it—shipping features ahead of schedule, catching integration issues early, unblocking other teams. I gave her three new projects in one month. Zero recognition for the previous work. Just "thanks, here's more."
She quit six weeks later for a lateral move at another company. Exit interview: "I felt like a machine. Every time I finished something, you just loaded more in without acknowledging what I'd done."
Cost us three months to replace her and another two months to get the replacement up to speed. All because I didn't understand I was running 120 volts through a 90-volt circuit.
4. Make Delegation Explicitly Iterative (Not Binary)
Most delegation gets framed as all-or-nothing: Either you own this completely or you don't own it at all. Which creates a terrifying amount of pressure if you've never done the thing before.
What's worked better for me: graduated autonomy with explicit checkpoints. Basically acknowledging that confidence builds through reps, not through a single declaration of "I trust you."
Level 1: I'm checking for landmines
"You own this. Run your plan by me before executing so I can flag anything you might not see yet."
You're doing the work. I'm just trying to prevent catastrophic mistakes. This is the training wheels phase.
Level 2: I'm standing by
"You've run this process three times now. This time, execute without checking in first. If something breaks, we'll fix it together."
You're doing the work. I'm watching from a distance. Training wheels are off but I'm still running alongside the bike.
Level 3: You're autonomous
"Make the call. I'll only step in if you ask."
You're doing the work. I'm not even watching anymore.
I used to skip straight to Level 3 because I'd read something about "radical delegation" and wanted to be seen as trusting. Then I'd be shocked when tasks boomeranged back.
What I realize now: Going straight to Level 3 isn't trusting more. It's asking someone to climb two ladders at once—learning the skill while also carrying the full weight of autonomous decision-making—and then wondering why they freeze.
The one metric I watch here: delegation success rate. What percentage of tasks I delegate actually complete without boomeranging back or escalating unnecessarily? If it's below 60%, I'm probably skipping levels. People are routing everything back to me because I haven't built the confidence infrastructure to support autonomous decision-making.
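A quick sketch of how you might compute it, assuming you tag each delegated task with whether it completed cleanly (the data here is invented):

```python
# Invented log of delegated tasks: True means it completed without
# boomeranging back or escalating unnecessarily.
delegations = [True, True, False, True, False, True, True, False, True, True]

success_rate = sum(delegations) / len(delegations)
print(f"Delegation success rate: {success_rate:.0%}")

if success_rate < 0.60:
    # Below roughly 60%, the fix is usually backing up a level,
    # not pushing people harder.
    print("Probably skipping levels: add checkpoints before granting full autonomy.")
```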
5. Measure Load Capacity, Not Sentiment
Engagement surveys tell you how people feel. Infrastructure metrics tell you whether the system can actually carry the weight you're asking it to carry.
These are the five load tests I've found actually matter:
Training transfer rate: What percentage of skills taught in training programs show up in actual work 90 days later? Below 30% usually means your infrastructure can't support skill acquisition. People are learning things in classroom settings that evaporate the moment they return to an environment where mistakes get punished.
Delegation success rate: What percentage of delegated tasks complete without boomeranging back? Below 60% means your infrastructure can't support autonomous decision-making.
Time-to-surface: How long between when someone discovers a problem and when they raise it? If problems only surface in postmortems, your infrastructure is failing.
Rework rate: What percentage of completed work requires significant revision? Above 15% is usually a signal of fear-based mistakes that could have been caught earlier if people felt safe asking for clarity.
Error-reporting frequency: Are people surfacing problems when they're small and fixable, or only after they've metastasized into crises? I track the ratio of "I think this might be a problem" versus "this is definitely a disaster." The ratio tells you whether your infrastructure supports early detection.
These aren't sentiment metrics. These are actual infrastructure load tests. And when the metrics are bad, the interventions tend to become pretty obvious.
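If it helps to see them side by side, here's a toy dashboard wiring in the thresholds named above (30%, 60%, 15%); the metric values are invented, and the cutoffs for time-to-surface and the error-reporting ratio are placeholders since those aren't pinned down here:

```python
# Toy load-test dashboard. Metric values are invented; the 30%, 60%, and 15%
# thresholds come from the list above, the other two are placeholders.
# Each entry: (measured value, threshold, True if higher is better)
load_tests = {
    "training_transfer_rate":   (0.25, 0.30, True),   # skills still in use 90 days later
    "delegation_success_rate":  (0.55, 0.60, True),   # tasks that don't boomerang back
    "time_to_surface_days":     (12,   7,    False),  # days from discovery to raising it
    "rework_rate":              (0.22, 0.15, False),  # completed work needing major revision
    "early_vs_crisis_reports":  (0.8,  1.0,  True),   # "might be a problem" per "definite disaster"
}

for name, (value, threshold, higher_is_better) in load_tests.items():
    passing = value >= threshold if higher_is_better else value <= threshold
    flag = "ok" if passing else "OVERLOADED"
    print(f"{name:25} {value:>5} (threshold {threshold}) -> {flag}")
```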
What This Actually Costs
The invisible costs are huge. Projects that shipped three months late because people were too scared to make decisions. Rework from fear-based mistakes someone spotted early but was afraid to surface. The best PM who quit because she was tired of working in survival mode. The new hire who spent six months asking permission for everything.
Add it all up and Sarah's training budget was the smallest number on the page. The confidence infrastructure failure probably cost the company $3-4M in missed opportunities, wasted effort, and talent exodus.
Toyota proved the fix works in 1984. Same workers, same union, same equipment. They rebuilt the wiring and got 10x results.
You can't scale on infrastructure that's already overloaded. The training programs will keep evaporating. The delegation will keep boomeranging. Your best people will keep leaving.
Or you rebuild the wiring.
You're already paying for this. In training that doesn't transfer, rework that shouldn't exist, replacing talent you shouldn't have lost. The question is whether you're going to keep bleeding or fix the infrastructure.



