A clinician we work with told us, almost in passing, that she had matched a shift she would have taken in a heartbeat---and didn’t see the listing for two days. By then it was filled. The job had been right for her on rate, on radius, on specialty. She had texted us a week earlier saying she was ready to work. The listing went live at 11pm on a Sunday. She opened the app on Tuesday afternoon, on a break, and scrolled past the row where it used to be.
That is not an unusual story. It is the median story.
Clinical job search is a perpetual, low-information task layered on top of a job. Most clinicians don’t refresh listings daily. They check the app the way the rest of us check the weather: intermittently, on the way to something else. The shifts that fit them open at inconvenient hours. Late on a Sunday night when a scheduler closes the week. A Wednesday morning when a unit posts last-minute coverage. The intersection of *clinician is looking* and *right shift is available* is small, and most of the time it doesn’t happen at the moment of the clinician’s attention.
The category response has been a decade of better recommendations. Smarter feeds. Better push notifications. Morning digests. They help. They don’t solve the problem. The architecture assumes the clinician is the actor. The system curates; the clinician applies. When the clinician’s attention is the bottleneck, no amount of curation moves the needle past it.
The inversion
Auto-Apply is the opposite of a recommendation feed. Instead of asking the clinician to browse and decide, the system applies on her behalf, based on a preference profile: specialty, role, radius, rate floor, concurrent application cap. The clinician opts in once. The system runs from there.
The default cap is five simultaneous live applications; the system refills the slot when one closes. The clinician sets the floor on rate, the radius around their preferred location, the role types they’ll take. The system fills the rest.
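To make the mechanics concrete, here is a minimal sketch of the profile and the refill loop. Every name in it (`PreferenceProfile`, `Job`, `refill`) is illustrative, not our production schema:

```python
from dataclasses import dataclass

# Every name below is an illustrative sketch, not the production schema.

@dataclass
class Job:
    specialty: str
    role: str
    rate: float            # offered hourly rate
    distance_miles: float  # precomputed distance from the clinician's anchor location

@dataclass
class PreferenceProfile:
    specialty: str
    role_types: tuple[str, ...]
    radius_miles: float
    rate_floor: float               # minimum acceptable hourly rate
    max_live_applications: int = 5  # default cap; v4 made it configurable (5 to 25)

def matches(profile: PreferenceProfile, job: Job) -> bool:
    """Hard filters only: specialty, role, rate floor, radius."""
    return (
        job.specialty == profile.specialty
        and job.role in profile.role_types
        and job.rate >= profile.rate_floor
        and job.distance_miles <= profile.radius_miles
    )

def refill(profile: PreferenceProfile, live_count: int, open_jobs: list[Job]) -> list[Job]:
    """Top the queue back up to the cap whenever an application closes.

    The clinician never touches the app for this; a freed slot is
    refilled from whatever matching jobs are currently open.
    """
    picked: list[Job] = []
    for job in open_jobs:
        if live_count + len(picked) >= profile.max_live_applications:
            break
        if matches(profile, job):
            picked.append(job)
    return picked
```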
The shift is small to describe and large to live with. It moves the clinician’s attention out of the critical path of every application and into the critical path of one decision: do I trust this profile to apply on my behalf? Once that’s yes, the rest is the system’s job. The listing that goes live at 11pm on a Sunday gets applied to before the clinician wakes up on Monday.
This is the same posture we wrote about in *Built for the frontier*: when the system is competent enough to act, the right move is to let it act. The form gets filled out by an agent, the application gets submitted by an agent, the clinician confirms the parts that need human judgment.
That’s the bet. Here’s the honest retrospective on how we made it safe to leave running.
V0: the pilot was a spreadsheet
The first version of Auto-Apply wasn’t a feature. It was a Google Sheet, a 30-minute polling habit, and a canned message.
In Q1 2023, a small CX pilot ran the whole thing by hand. Clinicians opted in over the phone with their advocate. Preferences (location, pay range, shift, specialty) went into a shared sheet. Every half hour, a teammate refreshed it, watched for newly ingested jobs that matched, and manually created the application in our system. The application got tagged for follow-up. A canned message went out to the clinician: “we submitted you to this job; here’s the window to withdraw.”
It worked. Submission speed went up. Clinicians liked not having to initiate every application themselves. The manual version surfaced the demand before anyone wrote a line of code.
It also surfaced the problems. Candidates sometimes didn’t want the jobs they’d been submitted to. References weren’t ready. Skills checklists had expired. Advocates spent time fielding withdrawals. Submission-to-offer rate on auto-submissions trailed the rate on clinician-initiated applications. Setting a preference profile three weeks ago is not the same signal as tapping Apply on the listing in front of you.
We knew that going in, in the abstract. Running the thing for a few months taught us how much it mattered.
V1 through V4, in order
V1 shipped into the product in late 2023 with safeguards kept minimal on purpose: get the loop in production, watch what happens, add guardrails to fit the actual failure surface rather than the imagined one. Each subsequent version added guardrails in response to what broke.
| Version | Guardrail added | Failure that prompted it |
|---|---|---|
| v1 | Opt-in plus a five-application cap, nothing else | Baseline: applied to any matching job for opted-in clinicians, with no placement, offer, or qualification checks |
| v2 | Auto-deactivation on entering a placement phase, active-offer protection, seven-day stale-preferences timer | Applications going out for clinicians already on placement or with offers in hand |
| v3 | Pre-qualification check against the role experience rule before applying | Advocates spending afternoons clearing applications that would never have passed qualification |
| v4 | Facility exclusions, MSP exclusions, configurable per-clinician cap (5 to 25), radius filtering, working-clinician guard (placement end after job start) | Clinicians submitted to hospitals they’d explicitly excluded; queue patterns that didn’t match reality |
Every guardrail in the current feature exists because a specific problem in production made it necessary. That isn’t a recommendation for how to build software. It’s a description of what happened. Inventing the V4 list in a design session in 2023 and shipping it once would have been preferable. The list isn’t the kind of thing you can invent that way. Read end to end, the deactivation list is a log of what the production system does to clinicians, advocates, and facilities when each of those guardrails is missing.
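For the shape of what that log adds up to, here is an illustrative composite of the v1-through-v4 guardrails as a single eligibility pass. Field names, the order of checks, and the experience rule are assumptions for this sketch; the production checks run through a rules engine, and radius filtering appears in the matching sketch above:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=7)  # the v2 stale-preferences timer

@dataclass
class Job:
    facility_id: str
    msp_id: str
    starts_at: datetime
    min_years: float

@dataclass
class Clinician:
    on_placement: bool
    has_open_offer: bool
    preferences_updated_at: datetime
    years_experience: float
    excluded_facilities: frozenset[str] = frozenset()
    excluded_msps: frozenset[str] = frozenset()
    placement_ends_at: datetime | None = None

def eligibility_block(c: Clinician, job: Job, now: datetime) -> str | None:
    """Return the first guardrail that blocks an auto-application, or None."""
    if c.on_placement:                                # v2
        return "active placement"
    if c.has_open_offer:                              # v2
        return "offer in flight"
    if now - c.preferences_updated_at > STALE_AFTER:  # v2
        return "stale preferences"
    if c.years_experience < job.min_years:            # v3 (stand-in for the real experience rule)
        return "fails pre-qualification"
    if job.facility_id in c.excluded_facilities:      # v4
        return "facility excluded"
    if job.msp_id in c.excluded_msps:                 # v4
        return "MSP excluded"
    if c.placement_ends_at and c.placement_ends_at > job.starts_at:  # v4 working-clinician guard
        return "placement ends after job start"
    return None
```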
A historical window we learned from
A deeper issue surfaced in mid-2025. The clinician base on Auto-Apply had been growing, the feature had been generating applications at scale, and the validation workflow that turns applications into submission-ready packets had just been overhauled.
In the 30 days after the new validation workflow shipped, the submission rate on auto-applied applications fell from around 42% to around 30% (historical figures from that specific window, not current benchmarks). Operational load roughly doubled on the validation side. The applications themselves weren’t worse; quality analysis on a sample looked the same as the previous month’s. The bottleneck was the throughput to move them through to submission.
The mistake was assuming validation could scale proportionally with application volume. It couldn’t. Each application still required a human touch. Generating twice as many applications without twice as much qualification throughput just built a bigger queue. The applications existed; the path to submission didn’t.
The fix had two parts. We pushed harder on auto-qualification so more applications cleared without human review, the same direction the rules-engine work has been pulling for a while. And we calibrated Auto-Apply volume to actual throughput, not just opt-in count. The per-clinician cap is part of this. It exists partly to protect the clinician from too many simultaneous commitments, and partly to keep the operations side from being flooded.
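The calibration itself is simple arithmetic once it’s framed as a budget. A hypothetical sketch, with made-up parameter names; the real scheduler accounts for more than this:

```python
def daily_auto_apply_budget(reviewer_hours: float,
                            apps_per_reviewer_hour: float,
                            auto_qualified_share: float,
                            queue_backlog: int) -> int:
    """Cap today's auto-applications at what downstream review can absorb.

    Illustrative arithmetic, not the production scheduler. Only the share
    of applications that still needs a human touch consumes reviewer
    capacity, and an existing backlog eats into that capacity first.
    """
    human_capacity = reviewer_hours * apps_per_reviewer_hour
    usable = max(human_capacity - queue_backlog, 0.0)
    needs_human = 1.0 - auto_qualified_share
    if needs_human <= 0.0:
        return 10**9  # effectively uncapped: everything auto-qualifies
    return int(usable / needs_human)

# 40 reviewer-hours at 6 apps/hour, 30% auto-qualified, 50 apps queued:
# (40 * 6 - 50) / 0.7 ≈ 271 auto-applications today, not "one per opt-in".
```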
About those numbers: the 42-to-30 drop was a real event in a specific 30-day window after a specific workflow change. It’s a useful historical anchor, not a current benchmark. The lesson the window taught us was less about the validation workflow itself and more about the relationship: a feature that generates work for a downstream queue is, in effect, a request that the queue scale to match. If it doesn’t, the work shows up in the queue length instead of in the submission count, and the feature looks like it’s underperforming when what it’s actually doing is overproducing.
The intent signal
The subtler lesson sits underneath the guardrails: opt-in preference is a weaker signal than per-job intent. A preference profile says *here is the shape of jobs I would consider*, three weeks ago. A tap on Apply says *I want this one, now*. Those are not the same. Treating them as if they were is the original sin of silent auto-apply, and most of the guardrails above are different angles on the same correction.
A clinician who taps Apply on a specific job has looked at the listing. She’s seen the pay, the facility, the unit, the start date. She’s made a per-job decision. A clinician whose Auto-Apply fired because the system found a match may not remember setting the preferences, may not be paying attention this week, may have taken another job in the meantime. Every guardrail we’ve added is a different way of asking *is the preference profile still valid right now*, because the system can’t observe per-job intent directly.
The stale-preferences timer is the clearest acknowledgment of this in the feature itself. Seven days without preference updates, and the system stops acting. It’s a crude proxy for *are you still the clinician who set these preferences three weeks ago*. The true answer is usually yes, sometimes no, and the system can’t always tell which.
The real fix, to the degree there is one, has been on the notification side. The application going out is one event. The notification reaching the clinician, and the clinician actually seeing it, is the other. We’ve put more weight on the notification flow (timing, content, channel) than on the eligibility logic at this point. A clinician who sees the notification and doesn’t withdraw is a different signal than one who never saw it.
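Concretely, the signal we read off that flow can be sketched as a small state function. The event names and the withdraw-window length here are assumptions for illustration:

```python
from datetime import datetime, timedelta
from enum import Enum

WITHDRAW_WINDOW = timedelta(hours=24)  # illustrative, not the production value

class IntentSignal(Enum):
    CONFIRMED = "saw the notification, let the window lapse"
    WITHDRAWN = "saw the notification, withdrew"
    UNKNOWN = "never saw the notification"

def read_intent(applied_at: datetime,
                seen_at: datetime | None,
                withdrawn_at: datetime | None,
                now: datetime) -> IntentSignal:
    """Seen-and-silent is a weak yes; unseen is no signal at all."""
    if withdrawn_at is not None:
        return IntentSignal.WITHDRAWN
    if seen_at is not None and now - applied_at > WITHDRAW_WINDOW:
        return IntentSignal.CONFIRMED
    return IntentSignal.UNKNOWN
```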
Which is where Job Alerts comes in. Job Alerts is the closer-to-real-time sibling to Auto-Apply: the system surfaces a matching shift, the clinician taps once, and the application goes in. The honest read on the silent version is that Job Alerts plus one-tap confirmation has cleaner economics, because the intent signal is cleaner. A tap is the strongest possible per-job signal short of the clinician writing the application herself.
That doesn’t make silent Auto-Apply wrong. It makes it a different tool for a different part of the intent spectrum. A clinician who genuinely wants the system to act on her behalf, with a recent preference profile and an active engagement pattern, is well-served by silent Auto-Apply. A clinician in active search mode, refreshing the app and looking at listings, is better-served by Job Alerts plus a tap. Both have a place. Neither replaces the other.
Where the human still lands in the loop
The hardest part of building this wasn’t the apply step. It was deciding when a human still needs to be in the loop.
Most auto-applied shifts close themselves. The clinician sees the application in her queue, confirms interest, and the downstream flow takes over. But a meaningful fraction of auto-applied clinicians don’t engage. They’ve opted in, the system has applied on their behalf, and then---nothing. The clinician hasn’t opened the app or responded. The shift sits open. The facility is waiting. The system has technically done its job and the actual outcome has stalled.
For that case we ship a signal, not an action. A real-time notification lands in the clinician’s advocate thread in Front as soon as the system detects that a clinician on Auto-Apply hasn’t engaged with a shift it applied to. The advocate sees it inside the tool they already live in. They can pick up the phone, send a text, escalate inside their own workflow.
This is where the architecture pays off. If the system applied and then nothing else happened, applying on the clinician’s behalf would be a worse outcome than the clinician applying herself. It’s a better outcome when an advocate gets paged the moment the application stalls, with context on which shift, which clinician, and how long it’s been quiet. The human lands where their judgment is highest-value: the conversation that gets a wavering clinician across the line.
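A sketch of what that paging looks like, with stand-in names for our internal models and the Front integration:

```python
from datetime import datetime, timedelta

STALL_AFTER = timedelta(hours=48)  # illustrative threshold, not production config

def check_for_stall(application, now: datetime, post_to_front) -> bool:
    """Page the advocate when an auto-applied shift goes quiet.

    `application` and `post_to_front` are stand-ins for our internal
    models and the Front integration. The real signal carries the same
    three facts: which shift, which clinician, how long it's been quiet.
    """
    quiet_for = now - application.applied_at
    if application.clinician_engaged or quiet_for < STALL_AFTER:
        return False
    hours_quiet = int(quiet_for.total_seconds() // 3600)
    post_to_front(
        thread_id=application.advocate_thread_id,
        body=(f"Auto-Apply stall: {application.clinician_name} hasn't engaged "
              f"with {application.shift_label} in {hours_quiet}h."),
    )
    return True
```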
Where it is now
V4 Auto-Apply is the version we run today. Eligibility rules are sophisticated enough that false positives are rare. The per-clinician cap prevents queue flooding. Advocates have controls to configure the feature on behalf of clinicians where that’s appropriate. Deactivation triggers (placement, offer in flight, stale preferences) turn the system off in the situations where it should be off.
What we won’t claim is that the problems are solved. Throughput-to-volume is still something we tune. The intent-signal asymmetry between Auto-Apply and Job Alerts is real, and the feature works inside that asymmetry rather than past it. The list of deactivation triggers is the visible record of every place the system has needed a brake; the next one gets added when the next failure shows up.
A few things we’d do differently with what we know now:
Start with opt-in per-job, not opt-in per preference set. The preference-based model creates ambiguity about whether the clinician is genuinely interested in any given match. Confirming per-match before submission preserves most of the speed benefit while keeping the intent clean. Job Alerts is essentially that bet.
Build throughput capacity before turning on volume. Auto-Apply should scale with the qualification capacity available to process its output, not with the number of clinicians who opt in. The mid-2025 validation window taught us that the hard way. We’d like to learn it once.
Treat deactivation triggers as surfaced features, not silent failsafes. The stale timer, the placement deactivation, and the offer guard were added reactively to stop specific problems. Designed proactively, they’d show up to the clinician as clear state: *Auto-Apply is paused because you started a placement. It’ll reactivate when you’re back in search.* That transparency would make the feature feel trustworthy instead of opaque. A sketch of what that surfaced state could look like, with illustrative copy, follows.
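```python
from enum import Enum

class AutoApplyState(Enum):
    """Deactivation triggers surfaced as clinician-facing state.

    Illustrative copy only; the trigger list mirrors the v2 guardrails.
    """
    ACTIVE = "Auto-Apply is on and holding your queue at the cap."
    PAUSED_PLACEMENT = ("Auto-Apply is paused because you started a placement. "
                        "It'll reactivate when you're back in search.")
    PAUSED_OFFER = "Auto-Apply is paused while you have an offer in flight."
    PAUSED_STALE = ("Auto-Apply is paused because your preferences haven't "
                    "been confirmed in 7 days. Confirm them to turn it back on.")
```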
--- Engineering