Job postings consistently appear on LinkedIn, Indeed, Glassdoor and Google Jobs days after they go live on the employer's own applicant tracking system. The typical lag runs 1-3 days, with a long tail that can stretch to a working week. Here is how the pipeline produces that delay, and what it costs you in applicant position.
"Aggregators show old jobs" is one of those claims you'll see everywhere in job-search advice and almost nowhere with concrete numbers attached. The exact lag varies by aggregator, by employer, by whether the company pays for premium syndication, and by how aggressively the aggregator crawls a given careers page. What follows is a synthesis of publicly reported figures (LinkedIn's own engineering blog, Indeed crawl documentation, third-party studies that have tracked aggregator latency) combined with what we see ourselves when we cross-reference a role on the canonical source against the aggregator copy a few days later. The directional story is consistent across sources, even if the exact medians shift around a bit.
Indicative latency by channel, with direct ATS monitoring (the FirstPost approach) shown at the top as the comparison baseline. Read these as ballpark figures, not benchmarked measurements:
| Channel | Median delay | 75th percentile | 95th percentile |
|---|---|---|---|
| Direct ATS monitoring (FirstPost) | Under an hour | A few hours | Same day |
| Google Jobs (JSON-LD ingestion) | 8 hours | 1.5 days | 4 days |
| LinkedIn (paid ingestion feeds) | 22 hours | 2.5 days | 7 days |
| Indeed | 1.4 days | 3 days | 6 days |
| Glassdoor (Indeed-owned, mirrors Indeed) | 1.5 days | 3 days | 7 days |
| LinkedIn (organic crawl, no feed) | 2.8 days | 5 days | 10 days |
Three things jump out from this. First, direct ATS monitoring is the only channel that consistently delivers same-day, because it reads the canonical source the moment new roles appear, with no ingestion or indexing layer in between. Second, the headline "1 to 5 days late" framing for aggregators is roughly correct on average, but it dramatically understates the 95th-percentile tail. One role in twenty takes a full working week to appear on LinkedIn after going live on the company's ATS. Third, Google Jobs is materially faster than the other aggregators because it ingests JSON-LD structured data directly from the company page rather than crawling and parsing raw HTML. Our piece on JSON-LD JobPosting covers why this matters for aggregator latency.
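The JSON-LD mechanism is worth seeing concretely. A careers page that supports Google Jobs embeds a schema.org `JobPosting` object in a `<script type="application/ld+json">` tag, which an ingester can parse without scraping the page layout. Below is a minimal sketch of that extraction; the sample HTML, company name, and field values are invented for illustration:

```python
import json
import re

# Invented sample of what a careers page with JSON-LD markup looks like.
SAMPLE_HTML = """
<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "JobPosting",
 "title": "Backend Engineer", "datePosted": "2024-05-01",
 "hiringOrganization": {"@type": "Organization", "name": "ExampleCo"}}
</script>
</head><body>...</body></html>
"""

def extract_job_postings(html: str) -> list[dict]:
    """Pull schema.org JobPosting objects out of JSON-LD script tags."""
    postings = []
    for match in re.finditer(
        r'<script type="application/ld\+json">(.*?)</script>', html, re.S
    ):
        try:
            data = json.loads(match.group(1))
        except json.JSONDecodeError:
            continue  # malformed JSON-LD blocks are common; skip them
        items = data if isinstance(data, list) else [data]
        postings += [d for d in items if d.get("@type") == "JobPosting"]
    return postings

for job in extract_job_postings(SAMPLE_HTML):
    print(job["title"], job["datePosted"])
```

Because the structured block carries `datePosted` directly, an ingester reading it skips the parse-and-guess step that slows down HTML crawlers, which is the latency advantage the table shows for Google Jobs.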
LinkedIn ingests jobs through two pipelines:

- **Paid ingestion feeds.** The employer (or its ATS vendor) pushes structured job data to LinkedIn directly. Per the table above, the median delay is under a day.
- **Organic crawl.** With no feed in place, LinkedIn has to discover and parse the careers page itself, and the median delay stretches to roughly three days.
From a candidate's perspective, you usually can't tell which pipeline is in use for a given role until you've already seen the delay. The practical implication is the same either way: LinkedIn is downstream of the canonical source.
If a role typically gets 200 applicants in its first week and applications arrive roughly linearly, the delay translates as follows. Day 0 application via direct ATS: applicant #1-5. Day 1 via Google Jobs: applicant #20-30. Day 2 via Indeed: #50-70. Day 3 via LinkedIn organic: #90-120. Day 5 via the weekly LinkedIn digest: #140-180.
Recruiter response rates collapse across these bands. Published recruiter data consistently shows the first 25 applicants getting several times the screen rate of applicants 100-200, because recruiters typically build a viable shortlist from the first batch and the bar to displace someone on it is high.
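Under the linear-arrival assumption above (200 applicants spread over 7 days, so roughly 29 per day), the queue positions can be computed directly. The function and the per-channel delays are illustrative, taken from the medians discussed earlier:

```python
def applicant_position(delay_days: float, total_first_week: int = 200) -> int:
    """Estimate queue position assuming applications arrive linearly
    over the first 7 days (~29/day when 200 arrive in the week)."""
    per_day = total_first_week / 7
    return round(delay_days * per_day) + 1

channels = [
    ("Direct ATS (day 0)", 0),
    ("Google Jobs (day 1)", 1),
    ("Indeed (day 2)", 2),
    ("LinkedIn organic (day 3)", 3),
    ("Weekly digest (day 5)", 5),
]
for name, delay in channels:
    print(f"{name}: ~applicant #{applicant_position(delay)}")
```

The exact numbers wobble with the arrival rate, but the shape is stable: each extra day of delay pushes you back roughly 30 places in a typical 200-applicant week.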
The aggregator pipeline has four lossy steps:

1. **Discovery.** The crawler has to notice that the careers page changed at all, which depends entirely on crawl scheduling.
2. **Fetch and parse.** The page is downloaded and parsed into structured fields; parse failures get queued for retry.
3. **Deduplication and normalization.** The posting is matched against copies from other sources, and titles, locations, and company names are canonicalized.
4. **Indexing.** The cleaned record waits its turn in the search index and ranking pipeline before it becomes visible to you.
None of this is malicious; it's all necessary engineering. But it's also the structural reason aggregators are the wrong tool for catching newly opened roles at targeted employers. The cost analysis walks through the practical tradeoffs.
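One way to see why a multi-step pipeline produces a long 95th-percentile tail is a small Monte Carlo sketch. Each stage's delay is drawn from a lognormal distribution; the per-step medians and spreads below are illustrative guesses, not measurements:

```python
import math
import random

random.seed(0)

# Hypothetical per-step delay model: (assumed median hours, spread).
STEPS = [
    (6, 0.8),   # discovery: noticing the careers page changed
    (12, 0.9),  # fetch and parse
    (4, 0.7),   # deduplication / normalization
    (8, 0.8),   # indexing and ranking
]

def pipeline_latency_hours() -> float:
    """Total latency is the sum of four independent stage delays."""
    return sum(random.lognormvariate(math.log(m), s) for m, s in STEPS)

samples = sorted(pipeline_latency_hours() for _ in range(10_000))
for pct in (50, 75, 95):
    hours = samples[int(len(samples) * pct / 100) - 1]
    print(f"p{pct}: {hours / 24:.1f} days")
```

The point of the sketch is the shape, not the specific numbers: summing several right-skewed stage delays gives a median near the sum of the stage medians but a p95 several times larger, which is exactly the "one role in twenty takes a week" pattern in the table.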
Saved-search email digests from LinkedIn and Indeed introduce a further delay of 12 to 24 hours, because they batch overnight. If you only see new roles through your weekly LinkedIn digest, your effective lag from the canonical posting is closer to 7 days.
This is why our comparison of LinkedIn alerts, Indeed alerts, and direct ATS monitoring consistently shows direct monitoring outperforming aggregator alerts on the dimension that matters: time-to-application.
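The digest arithmetic is simple addition: effective lag is the aggregator's own delay plus the average wait until the next batch email. The aggregator medians below come from the table above; the batching waits are assumed averages (half the digest cadence), not measured values:

```python
# Effective lag = aggregator delay + alert batching wait (days).
aggregator_median_days = {
    "google_jobs": 8 / 24,      # 8-hour median from the table
    "indeed": 1.4,
    "linkedin_organic": 2.8,
}
digest_wait_days = {"instant": 0.0, "daily": 0.5, "weekly": 3.5}

for agg, lag in aggregator_median_days.items():
    for cadence, wait in digest_wait_days.items():
        print(f"{agg} + {cadence} digest: "
              f"~{lag + wait:.1f} days behind the ATS")
```

On these assumptions a weekly LinkedIn digest sits roughly six days behind the canonical posting on average, and up to nine or ten in the worst case, consistent with the "closer to 7 days" figure above.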
The delay numbers above are aggregates, and the variance by industry is meaningful: a sector-specific median can sit well above or below the figures in the table, so treat any single number with caution when targeting one employer.
Reasonably sure about the direction and magnitude; less sure about the exact medians. As noted above, the figures are a synthesis of publicly reported latency benchmarks and our own spot checks when a role we've cached from a company ATS shows up on LinkedIn a couple of days later. We don't currently capture aggregator-side timestamps systematically, so what we're presenting is a stitched-together picture rather than a single measurement.
If you need a defensible exact median for a specific employer or sector, treat these as starting points and verify with your own search. If you're trying to make the broader decision about which channels to trust for fresh jobs, the directional story (aggregators are 1-3 days behind, with a long tail) is robust enough to act on.
The headline takeaway is uncomfortable: if you're doing targeted job search at named employers, every aggregator you rely on is showing you yesterday's vacancies. Sometimes last week's. The 95th-percentile tail is where the real damage happens - one role in twenty takes a working week to reach LinkedIn, and by then the shortlist has already been built without you.
Aggregators are great at one job (discovery: "which companies in this space exist?") and structurally bad at another (freshness: "which of them just opened a role?"). Treat them as a search engine, not as an alert system. For the alert system, you want to read the canonical source - the company's own ATS - and read it the day it changes. Our complete guide to applying early is the place to start on the broader routine.