Lead scoring is one of those features that every CRM has, every RevOps team configures, and almost nobody gets actual value from. The reason is simple: most scoring models measure the wrong things.
Traditional lead scoring assigns points based on who someone is (job title, company size, industry) and what they have done in the most superficial sense (opened an email, downloaded a whitepaper, attended a webinar). The result is a number that tells you almost nothing about whether this person is actually going to buy.
A VP of Engineering at a 500-person SaaS company who downloaded your whitepaper six months ago and has not been back since? High score. A Senior Director of Revenue Operations who visited your pricing page three times this week, read two case studies, and whose company just posted a RevOps job? Lower score, because they have not filled in a form yet.
This is the fundamental problem. Traditional scoring rewards demographic fit and basic activity. Signal-based scoring rewards buying behaviour.
The typical scoring model looks something like this: VP or C-level title gets +20 points, company size 200-1000 gets +15 points, opened email gets +5 points, clicked link gets +10 points, downloaded asset gets +15 points, attended webinar gets +20 points, industry match gets +10 points. Hit 80 points and you are an MQL. Sales gets notified.
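Written out as code, the emptiness of the model is easier to see. A minimal sketch of the rule table above (the attribute names are illustrative, not HubSpot property names):

```python
# A sketch of the traditional point-allocation model described above.
# Attribute names are illustrative.
TRADITIONAL_RULES = {
    "vp_or_c_level_title": 20,
    "company_size_200_to_1000": 15,
    "industry_match": 10,
    "opened_email": 5,
    "clicked_link": 10,
    "downloaded_asset": 15,
    "attended_webinar": 20,
}

MQL_THRESHOLD = 80

def traditional_score(lead_attributes: set[str]) -> int:
    """Sum points for every rule the lead matches. No recency, no intent
    weighting: a webinar from last year counts the same as one from today."""
    return sum(points for rule, points in TRADITIONAL_RULES.items()
               if rule in lead_attributes)

lead = {"vp_or_c_level_title", "company_size_200_to_1000", "opened_email",
        "clicked_link", "downloaded_asset", "attended_webinar"}
score = traditional_score(lead)
print(score, "MQL" if score >= MQL_THRESHOLD else "not yet")  # 85 MQL
```

Notice that nothing in the rule table references time, sequence, or what the lead actually looked at.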
The problem is obvious once you think about it: none of these signals indicate buying intent. They indicate that someone matches your ICP profile and has interacted with your marketing. But interaction is not intent. A competitor researching your positioning will score high. A student writing a dissertation will score high. A consultant benchmarking tools for a client will score high. None of them are going to buy.
Meanwhile, the actual buyer who is quietly evaluating you against two competitors, visiting your site from multiple devices, and building internal consensus with their team scores low because they have not engaged with your email nurture sequence.
The model rewards the wrong behaviour because it was built around the wrong assumption: that marketing engagement equals buying intent.
Signal-based scoring starts from a different premise. Instead of asking "does this person match our ICP and have they interacted with us?", it asks "is this person showing behaviour patterns that historically correlate with becoming a customer?"
The signals that actually predict buying:
Eight pages visited in the last 3 days is a stronger signal than 20 pages spread over 6 months. Acceleration matters more than accumulation. A sudden spike in activity almost always means something has changed internally: budget approved, problem became urgent, competitor failed.
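One way to operationalise acceleration: compare the lead's recent daily page-view rate against their longer-run baseline. A sketch, assuming you have raw view timestamps (the 3-day and 90-day windows are assumptions; tune them to your sales cycle):

```python
from datetime import datetime, timedelta

def velocity_ratio(page_view_timestamps: list[datetime],
                   now: datetime,
                   recent_days: int = 3,
                   baseline_days: int = 90) -> float:
    """Recent daily activity divided by the lead's longer-run baseline.
    A ratio well above 1.0 means activity is accelerating."""
    recent_cutoff = now - timedelta(days=recent_days)
    baseline_cutoff = now - timedelta(days=baseline_days)

    recent = sum(1 for t in page_view_timestamps if t >= recent_cutoff)
    baseline = sum(1 for t in page_view_timestamps
                   if baseline_cutoff <= t < recent_cutoff)

    recent_rate = recent / recent_days
    # Floor the baseline so brand-new leads do not divide by zero.
    baseline_rate = max(baseline / (baseline_days - recent_days), 0.1)
    return recent_rate / baseline_rate
```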
Not all page views are equal. A pricing page visit is worth 10x a blog post visit. A case study in their industry is worth 5x a generic feature page. A comparison page is the strongest signal of active evaluation. Weight pages by proximity to purchase decision, not just by the fact that they were visited.
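A sketch of what proximity-weighted page scoring could look like, with weights anchored to the ratios above (the page-type taxonomy is an assumption about how you classify your URLs):

```python
# Illustrative weights: pricing ~10x a blog post, an industry-matched
# case study ~5x a generic feature page, comparison pages strongest of all.
PAGE_WEIGHTS = {
    "comparison": 15,
    "pricing": 10,
    "case_study_industry_match": 5,
    "feature": 1,
    "blog": 1,
}

def weighted_page_score(visited_page_types: list[str]) -> int:
    """Score visits by proximity to the purchase decision, not by count."""
    return sum(PAGE_WEIGHTS.get(p, 0) for p in visited_page_types)
```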
When multiple people from the same company start engaging independently, that is a far stronger signal than one person engaging heavily. Three people from the same account visiting your site in the same week usually means they are building a business case internally. Most scoring models evaluate contacts individually and completely miss this.
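Detecting this is a simple aggregation once engagement events carry a company ID. A sketch (the three-contacts-in-seven-days threshold is an assumption):

```python
from collections import defaultdict
from datetime import datetime, timedelta

def accounts_building_consensus(events: list[tuple[str, str, datetime]],
                                now: datetime,
                                window_days: int = 7,
                                min_contacts: int = 3) -> set[str]:
    """events: (company_id, contact_id, timestamp) tuples.
    Flags accounts where several distinct people engaged in the same
    window, which usually means an internal business case is forming."""
    cutoff = now - timedelta(days=window_days)
    contacts_by_company: dict[str, set[str]] = defaultdict(set)
    for company_id, contact_id, ts in events:
        if ts >= cutoff:
            contacts_by_company[company_id].add(contact_id)
    return {company for company, contacts in contacts_by_company.items()
            if len(contacts) >= min_contacts}
```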
Third-party intent signals: G2 category research, Bombora intent topics, LinkedIn ad engagement at the account level. These signals exist outside your own properties but indicate that the company (not just one contact) is actively researching solutions in your category.
Job postings for roles related to your product (a company hiring a RevOps Manager is more likely to buy RevOps consulting). Funding rounds (money to spend). Technology changes visible in their stack (just added HubSpot? They might need implementation help). These come from Clay enrichment running on a schedule, not from the lead doing anything on your site.
A signal from yesterday is worth more than a signal from last month. Traditional scoring accumulates forever (or uses crude 90-day resets). Signal-based scoring applies exponential decay so that recent behaviour always dominates.
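The cleanest way to implement decay is a half-life: decide how long a signal takes to lose half its value and discount every event by its age. A sketch (the 14-day half-life is an assumption, not a recommendation):

```python
import math
from datetime import datetime

HALF_LIFE_DAYS = 14  # a signal loses half its value every two weeks

def decayed_value(points: float, event_time: datetime, now: datetime) -> float:
    """Exponential decay: value = points * 2^(-age / half_life)."""
    age_days = (now - event_time).total_seconds() / 86400
    return points * math.exp(-math.log(2) * age_days / HALF_LIFE_DAYS)

# A 10-point pricing visit from yesterday still scores ~9.5;
# the same visit from 60 days ago scores ~0.5.
```

With this in place, recent behaviour always dominates the total without any crude 90-day reset.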
Signal-based scoring is not just a different point allocation in HubSpot. It requires a slightly different architecture:
Most teams only enrich a lead when it first enters the system. Signal-based scoring requires ongoing enrichment: weekly checks for hiring signals, funding events, technology changes, and news. Clay can run these on a schedule and write back to HubSpot properties that feed the scoring model.
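The write-back step might look like this, using HubSpot's CRM objects API (the custom property names are hypothetical and would need to exist in your portal; the token comes from a private app):

```python
# A sketch of pushing a weekly enrichment result into HubSpot properties
# that the scoring model reads. hiring_signal / funding_signal are
# hypothetical custom properties.
import os
import requests

def write_enrichment(contact_id: str, hiring: bool, funding: bool) -> None:
    resp = requests.patch(
        f"https://api.hubapi.com/crm/v3/objects/contacts/{contact_id}",
        headers={"Authorization": f"Bearer {os.environ['HUBSPOT_TOKEN']}"},
        json={"properties": {
            "hiring_signal": "true" if hiring else "false",
            "funding_signal": "true" if funding else "false",
        }},
        timeout=10,
    )
    resp.raise_for_status()
```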
HubSpot's native lead scoring is limited to simple point allocation. For signal-based scoring, use calculated properties or Operations Hub custom code actions that can evaluate velocity (change over time), recency decay, and multi-contact account-level signals. The output is a single "Signal Score" property (0-100) that combines all inputs.
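A sketch of such a custom code action in Python, following HubSpot's documented main(event) pattern for custom code; the input field names are hypothetical (each would be mapped in the workflow UI) and the weighting is illustrative:

```python
import math

def main(event):
    inputs = event.get("inputFields", {})
    page_score = float(inputs.get("weighted_page_score") or 0)
    velocity = float(inputs.get("velocity_ratio") or 0)
    account_contacts = int(inputs.get("active_contacts_at_account") or 0)
    intent = float(inputs.get("third_party_intent") or 0)

    raw = (page_score
           + 10 * min(velocity, 3)          # cap the acceleration bonus
           + 15 * min(account_contacts, 3)  # multi-threading bonus
           + intent)
    # Squash the open-ended raw total into a stable 0-100 range.
    signal_score = round(100 * (1 - math.exp(-raw / 50)))

    return {"outputFields": {"signal_score": signal_score}}
```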
A contact score tells you about individual engagement. An account score aggregates signals across all contacts at that company plus third-party intent data. Both inform routing decisions. A contact with a mediocre individual score but a high account score (because their colleagues are all engaging) should still get prioritised.
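One simple way to blend the two so a hot account lifts a quiet contact (the weights are assumptions):

```python
def effective_priority(contact_score: float, account_score: float) -> float:
    """Never let a hot account's quiet contact fall below the blended
    score, and never penalise a hot contact at a quiet account."""
    blended = 0.6 * contact_score + 0.4 * account_score
    return max(contact_score, blended)
```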
When a signal score crosses a threshold, something should happen immediately: lead gets re-routed, rep gets alerted with context, SLA clock starts, personalised outreach sequence triggers. The score is not just a number for reporting. It is a trigger for the next action in the revenue system.
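The trigger itself can be as simple as a threshold-crossing check that fires an alert with context. A sketch using a Slack incoming webhook (the URL is a placeholder; the contact fields are hypothetical):

```python
import requests

SIGNAL_THRESHOLD = 70

def on_score_change(contact: dict, old_score: float, new_score: float) -> None:
    """Fire once, at the moment the score crosses the threshold upward."""
    if old_score < SIGNAL_THRESHOLD <= new_score:
        requests.post(
            "https://hooks.slack.com/services/XXX/YYY/ZZZ",  # placeholder
            json={"text": (
                f"{contact['email']} crossed {SIGNAL_THRESHOLD} "
                f"({old_score:.0f} -> {new_score:.0f}). "
                f"Top signals: {contact.get('top_signals', 'n/a')}. "
                "SLA clock started."
            )},
            timeout=10,
        )
```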
Scoring makes sense when you have enough volume that you cannot manually evaluate every lead. If you are getting 10 inbound leads a month, just talk to all of them. The overhead of building and maintaining a scoring model is not worth it until you have a volume problem.
The threshold varies, but roughly: if your team can respond to every lead within 5 minutes without a scoring model telling them which ones to prioritise, you do not need scoring yet. Invest that time in generating more demand instead.
Scoring also fails when applied too early in a company's lifecycle. You need enough historical data to know which signals actually correlate with closed-won deals. If you have fewer than 50 closed deals to analyse, your scoring model is going to be based on assumptions rather than data. Start with the assumptions, but validate them quarterly as you accumulate real conversion data.
MQL volume drops but quality increases. You will generate fewer MQLs because the bar is higher (actual buying behaviour, not just profile match). But the MQLs that do pass through convert at 2-3x the rate. Sales stops complaining about lead quality because the leads they get are genuinely showing intent.
Speed-to-lead improves for the right leads. When scoring accurately identifies high-intent buyers, reps can prioritise appropriately. The VP who visited pricing three times this week gets a call within minutes. The marketing manager who downloaded a whitepaper gets a nurture sequence. Everyone is treated proportionally to their actual intent.
Sales and marketing alignment improves. The eternal argument about MQL quality goes away when both teams can point to specific signals that define a qualified lead. "They visited pricing, read the enterprise case study, and their account is showing G2 research activity" is a definition everyone can agree on.
Look at your last 30-50 closed-won deals and work backwards. What did those contacts do on your site in the 30 days before they became an opportunity? Which pages did they visit? How many times did they come back? Were there multiple contacts from the same account? This historical analysis gives you the signal patterns to encode into your scoring model.
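A sketch of that back-analysis in pandas, assuming you can export closed-won deals and page views with the column names used below:

```python
# Column names (contact_id, company_id, page_type, etc.) are assumptions
# about your export format.
import pandas as pd

deals = pd.read_csv("closed_won.csv", parse_dates=["opportunity_created_at"])
views = pd.read_csv("page_views.csv", parse_dates=["viewed_at"])

merged = views.merge(deals, on="contact_id")
window = merged[
    (merged.viewed_at < merged.opportunity_created_at)
    & (merged.viewed_at >= merged.opportunity_created_at - pd.Timedelta(days=30))
]

# Which page types show up most often in the pre-opportunity window?
print(window.groupby("page_type").size().sort_values(ascending=False))
# How many distinct contacts per account were active in that window?
print(window.groupby("company_id")["contact_id"].nunique().describe())
```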
You do not need third-party intent data to start. First-party signals (your own website analytics, email engagement, form submissions, content consumption) are enough to build a strong signal-based model. Third-party intent adds another dimension but is not required. Start with what you have and layer in additional data sources as the model matures.
Revalidate the model quarterly at minimum. Pull your recent closed-won deals and check whether the signals that predicted buying 6 months ago still hold true. Markets shift, buyer behaviour evolves, and your content mix changes. A scoring model that is never updated slowly drifts from reality.
The minimum stack: a CRM with workflow automation (HubSpot, Salesforce), an enrichment tool (Clay, Apollo, Clearbit), and website analytics with page-level tracking. For account-level signals, add a tool that aggregates activity across contacts at the same company. For third-party intent: G2 Buyer Intent, Bombora, or LinkedIn Revenue Attribution.
Signal scores feed directly into routing decisions. High signal score + high fit = immediate rep assignment. Low signal + high fit = nurture with monitoring. The score does not just label the lead; it determines what happens next. See our guide to signal-based lead routing for the full architecture.
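As a sketch, the routing matrix reduces to a few lines (the thresholds and route names are illustrative):

```python
def route(signal_score: float, fit_score: float) -> str:
    """Signal (behaviour) and fit (ICP match) drive different next actions."""
    high_signal = signal_score >= 70
    high_fit = fit_score >= 70
    if high_signal and high_fit:
        return "assign_to_rep_immediately"
    if high_signal:
        return "quick_human_review"      # intent without obvious fit
    if high_fit:
        return "nurture_with_monitoring"
    return "standard_nurture"
```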