
[DORA 2.0] Feature Importance

Learn how to read and interpret Feature Importance graphs from your DORA 2.0 Prediction Map.


Overview

Feature Importance graphs (based on SHAP values) help you understand how different input features influenced your Prediction Map.

They show which features the model relied on most and how each feature increases or decreases the likelihood of mineralization.

This helps you validate whether the model aligns with your geological understanding and identify areas for improvement.




What is Feature Importance?

Feature Importance shows how each input feature contributes to the model’s predictions.

In DORA 2.0, features may include geological structures, geochemical concentrations, and geophysical anomalies.

Each feature is assigned a relative importance based on how consistently it influences predictions across the AOI.

Feature Importance helps you identify the most influential inputs, understand how feature values impact predictions, and validate whether results align with your exploration model.
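
In practice, a feature's relative importance is often summarized as the mean absolute SHAP value across all pixels. Below is a minimal Python sketch of that calculation; the array values and feature names are invented for illustration and are not DORA 2.0 output.

    import numpy as np

    # Hypothetical SHAP values: one row per pixel, one column per input feature
    shap_values = np.array([
        [ 0.80, -0.10,  0.05],
        [ 0.60,  0.20, -0.02],
        [-0.70,  0.10,  0.03],
    ])
    feature_names = ["magnetic_anomaly", "au_ppm", "fault_distance"]

    # A common global importance measure: mean |SHAP| per feature
    importance = np.abs(shap_values).mean(axis=0)

    # Rank features from most to least influential
    for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
        print(f"{name}: {score:.2f}")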


How to Interpret Feature Importance

The Feature Importance graph shows both which features matter and how they influence predictions.

How to read the graph:

  • Feature Ranking (Y-axis). Features are ordered from most important (top) to least important (bottom)

  • SHAP Value (X-axis). Shows how much a feature pushes a prediction away from a neutral baseline

    • Right → pushes toward mineralization

    • Left → pushes away from mineralization

  • Point Color. Each data point represents a pixel in your AOI, colored by its value for that feature:

    • Red/pink → high feature values

    • Blue → low feature values

  • Density (shape of the plot). Wider sections indicate more data points.
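
For readers curious how a graph of this kind is produced, the open-source shap library generates an equivalent "beeswarm" summary plot. Here is a minimal sketch on synthetic data with a scikit-learn model; it illustrates the plot type only, is not DORA 2.0's actual pipeline, and uses invented feature names.

    import numpy as np
    import shap
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)

    # Synthetic stand-in data: 500 "pixels" x 3 invented feature layers
    feature_names = ["magnetic_anomaly", "au_ppm", "fault_distance"]
    X = rng.normal(size=(500, 3))
    # Toy labels: mineralized where au_ppm is high and fault_distance is low
    y = (X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    model = GradientBoostingClassifier().fit(X, y)

    # One SHAP value per pixel per feature (log-odds units for this model)
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # Each dot is a pixel: x-position = SHAP value, color = feature value
    shap.summary_plot(shap_values, X, feature_names=feature_names)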

What this means in practice:

  • If high values (red) cluster on the right → high values increase prospectivity

  • If high values (red) cluster on the left → high values decrease prospectivity

  • If low values (blue) cluster on the right → low values increase prospectivity (inverse relationship)

  • If points cluster near zero → the feature has little influence

DORA 2.0's baseline prediction for a pixel is 50%, the equivalent of a coin toss or a neutral guess. Feature Importance shows how features push that prediction higher or lower than that baseline:

  • Positive SHAP → increases likelihood of mineralization

  • Negative SHAP → decreases likelihood

The more consistently a feature shifts predictions away from neutral, the more important it is.
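
As a worked example of this arithmetic: SHAP contributions for a classifier are typically additive in log-odds space, where the 50% baseline corresponds to log-odds of 0. That additivity is an assumption here, and the per-feature values below are invented.

    import math

    def sigmoid(z: float) -> float:
        """Convert log-odds back to a probability."""
        return 1.0 / (1.0 + math.exp(-z))

    base_log_odds = 0.0            # the 50% neutral baseline
    shap_contributions = {         # invented SHAP values for a single pixel
        "magnetic_anomaly": +0.9,  # positive -> pushes toward mineralization
        "au_ppm": +0.4,
        "fault_distance": -0.3,    # negative -> pushes away
    }

    prediction = sigmoid(base_log_odds + sum(shap_contributions.values()))
    print(f"Predicted likelihood: {prediction:.0%}")  # ~73%, up from 50%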


What To Do If Results Aren’t Optimal

In most cases, features ranked highly in Step 4: Select Input Features will also appear among the top drivers in the Feature Importance results.

This is expected, as DORA 2.0's scoring identifies features likely to be most relevant. However, Feature Importance reflects how the model actually uses the data during prediction, not just how features were initially scored. Because of this, differences can occur.

⚠️ Avoid Confirmation Bias: Unexpected results may signal the need to adjust model parameters, or they may reveal surprising insights. Reach out to the VRIFY team for help validating them.

If Feature Importance differs from your expectations (e.g., if a key feature shows low importance or an unexpected feature ranks highly), review the following:

  • Data quality and coverage to identify gaps, noise, or limited variation

  • Learning Data representation to confirm your Learning Points reflect the geological context

  • Feature redundancy to determine whether similar information is already captured by another layer
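
As a quick screen for that last point, pairwise correlations between input layers can flag near-duplicate information. Here is a minimal pandas sketch on synthetic data; the layer names are illustrative, not actual DORA 2.0 inputs.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    mag = rng.normal(size=200)

    # Synthetic layers: mag_1vd is deliberately a near-copy of mag_rtp
    layers = pd.DataFrame({
        "mag_rtp": mag,
        "mag_1vd": mag + rng.normal(scale=0.1, size=200),
        "au_ppm": rng.normal(size=200),
    })

    corr = layers.corr().abs()
    # Flag layer pairs above a 0.9 threshold as redundancy candidates
    cols = list(corr.columns)
    for i, a in enumerate(cols):
        for b in cols[i + 1:]:
            if corr.loc[a, b] > 0.9:
                print(f"{a} vs {b}: |r| = {corr.loc[a, b]:.2f} -> possibly redundant")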

These differences are not always a problem. They can reveal data gaps, overlapping inputs, or more complex geological relationships than expected.




Still Have Questions?

Reach out to your dedicated DORA contact or email support@VRIFY.com for more information.
