Risk ratings

When Shift built its Risk Rating model for DeFi investments, there wasn’t much historical data to rely on. Unlike traditional finance – where credit ratings are based on decades of defaults – DeFi has only a few years of history and a limited number of known exploits.

To overcome that, the team drew on hands-on experience and data from about 150 DeFi protocols – both successful and failed. They asked: If our model had existed before Protocol X failed, would it have flagged the risks? Through these back-tests, they refined their red flags and scoring logic to make the model more predictive.

Still, Shift is realistic: in DeFi’s early days, any rating system is more art than science. The first version of the model was based on expert judgment, not deep statistics. But it’s been improved over time and continues to evolve with every new exploit or market event.

In short, Shift’s ratings start from reasonable assumptions, are tested against real-world events, and get smarter as more data rolls in. It’s a pragmatic approach in a fast-moving space where risk can’t always be reduced to a formula.

How the system works

Shift’s Risk Rating framework assigns a rating to every DeFi strategy, based on the underlying components (called “Building Blocks”) and the structure of the strategy itself. These ratings directly shape how capital is allocated in a vault — safer strategies get higher weights, riskier ones get capped or excluded.

How a rating is calculated:

  1. Analyze each building block

    Each protocol and asset is assessed across three dimensions: technical, economic, and soft factors. Each gets a preliminary rating (e.g., Protocol Y = BBB, Asset X = AA).

  2. Apply the lowest rating

    The base rating for the whole strategy starts with the lowest of any component. If one part is rated BBB, the whole thing can’t be higher than BBB. This avoids false confidence from averaging.

  3. Consider strategy-level risks

    Analysts then ask: does the strategy itself introduce new risk? For example:

    • Is it using leverage?

    • Could it get liquidated?

    • Is there exposure to impermanent loss?

    • Are the assets locked or illiquid?

    • Is the strategy overly complex?

    If these apply, the analyst may downgrade the rating further. If not, the base rating stands.

  4. Final rating and usage

    The final rating guides whether Shift invests, and if so, how much and under what controls. For instance, a B-rated strategy might only be allowed in high-risk portfolios, with tighter monitoring and smaller allocation.
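The four steps above can be sketched in a few lines of Python. This is a minimal illustration, not Shift’s actual implementation: the rating scale ordering and the “downgrade one notch per strategy-level red flag” rule are assumptions made here for clarity.

```python
# Ordered from safest to riskiest (illustrative scale).
SCALE = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC"]

def base_rating(component_ratings):
    """Step 2: the strategy's base rating is the lowest of any component."""
    return max(component_ratings, key=SCALE.index)

def final_rating(component_ratings, strategy_flags):
    """Steps 2-3: start from the lowest component rating, then downgrade
    one notch per strategy-level red flag (hypothetical rule)."""
    idx = SCALE.index(base_rating(component_ratings))
    idx = min(idx + len(strategy_flags), len(SCALE) - 1)
    return SCALE[idx]

# Components rated AA and BBB; the strategy itself uses leverage.
print(final_rating(["AA", "BBB"], ["leverage"]))  # BB
```

Note how averaging never enters the picture: a strong component can never pull the overall rating above the weakest link.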

Real-world example

Let’s say Shift is reviewing a strategy that deposits Token Beta into Lending Protocol Alpha:

  • Token Beta gets a BB rating: volatile price, some centralization, limited history.

  • Protocol Alpha gets an A rating: good audits, active usage, but thin liquidity.

The base rating is BB (the lower of the two).

Now, is the strategy risky on its own? If it just lends Beta without leverage, the risk profile doesn’t change much — so the final rating stays BB.

But if the strategy borrows against Beta (introducing liquidation risk and leverage), Shift might downgrade to B. These choices are always documented, and the rationale is reviewed by the Risk Manager.

Why Shift avoids point-scoring

Shift doesn’t use a numeric scoring system (e.g., 70 points = BBB) because that creates false precision. Instead, the model uses categories and expert-driven reasoning, based on “what can go wrong” thinking. The lowest-rated element sets the ceiling, and red flags are treated seriously. This cautious approach is deliberate – it prioritizes capital protection.
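The false precision of averaging is easy to demonstrate. In this sketch, the numeric scores are hypothetical (Shift deliberately avoids them); they exist only to show how a mean hides a weak component that the lowest-rating rule surfaces.

```python
# Hypothetical numeric scores, used only to contrast averaging with a floor.
scores = {"AAA": 7, "AA": 6, "A": 5, "BBB": 4, "BB": 3, "B": 2, "CCC": 1}
names = {v: k for k, v in scores.items()}

components = ["AA", "AA", "B"]  # two strong parts, one weak one

avg = names[round(sum(scores[c] for c in components) / len(components))]
floor = names[min(scores[c] for c in components)]

print(avg)    # A  - averaging hides the weak link
print(floor)  # B  - the weakest component sets the ceiling
```

A single B-rated component drags the average only slightly, yet in a “what can go wrong” view it is exactly where the strategy breaks.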

Calibrating the model through post-mortems

Shift continuously updates its Risk Rating model using post-mortems of real-world DeFi exploits. Every time something breaks — whether in a protocol Shift uses or not — the team asks:

  • Would our red flags have caught this?

  • Would our model have rated the protocol low enough to avoid it?

  • If we were exposed, did our triggers work?

  • If not, what needs to change?

The review process

  1. Did we catch the risk?

    The team checks if the exploited risk was already on their checklist. If not, they add it.

  2. Was the protocol approved?

    If it had been approved under current rules, those rules get reviewed and tightened if needed.

  3. Update the model

    New red flags or stricter criteria get added. For instance, after the Terra/UST collapse, Shift added strict rules for algorithmic stablecoins.

  4. Internal reviews

    If the exploit affected Shift’s portfolio, a deeper internal post-mortem is done. They review whether triggers worked, whether the rating was too generous, and whether the risk was missed entirely.

  5. Monthly updates

    Even without a major event, the model is reviewed monthly. If trends emerge (like a wave of governance attacks), thresholds are adjusted.

  6. Governance oversight

    The Strategy Council oversees any model changes. All adjustments are reviewed and approved to ensure accountability.
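One way to picture the review process is as a structured record that each incident produces. The field names and structure below are illustrative assumptions, not Shift’s actual schema; they simply map the steps above onto a data shape.

```python
from dataclasses import dataclass, field

@dataclass
class PostMortem:
    """Hypothetical record of a single exploit review (illustrative only)."""
    incident: str
    caught_by_red_flags: bool       # 1. was the risk already on the checklist?
    was_approved: bool              # 2. had the protocol passed current rules?
    model_changes: list = field(default_factory=list)  # 3. new flags/criteria
    council_approved: bool = False  # 6. Strategy Council sign-off

# Example drawn from the text: the Terra/UST collapse drove a model update.
pm = PostMortem(
    incident="Terra/UST collapse",
    caught_by_red_flags=False,
    was_approved=False,
)
pm.model_changes.append("strict rules for algorithmic stablecoins")
pm.council_approved = True  # every change is reviewed before it takes effect
```

The point of such a record is auditability: every model change traces back to a concrete incident and a documented approval.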

Examples of calibration

  • After several A-rated protocols suffered mild stress events, Shift raised the bar for A ratings. Some strategies were downgraded to BBB.

  • Following the Beanstalk governance attack, a new red flag for “governance attack vulnerability” was added to the checklist.

  • If triggers fail to act fast enough during an exploit, Shift improves the automation logic and tightens portfolio limits for smaller protocols.

Why this matters

Shift’s Risk Ratings aren’t static grades; they’re living assessments designed to reflect real risk in real time. Every exploit is a lesson that makes the model stronger. Over time, an AAA rating should mean true resilience, while a B rating signals high caution. If reality doesn’t match the ratings, Shift recalibrates.

This adaptive approach ensures Shift’s risk framework evolves with the market — protecting capital in a volatile and fast-changing DeFi world.
