Overview
This article explains how to interpret the Prediction Accuracy gauge, F1 Score, and Performance Breakdown matrix on your Prediction Map. These outputs help you evaluate how well the model distinguishes between mineralized and barren areas, and what to do if the results are not reliable.
Prediction Accuracy Gauge
When you open your results, you’ll see the Prediction Accuracy gauge. This circular display shows:
Accuracy Score – The percentage of predictions the model got right. It adds the correct predictions (True Positives and True Negatives) and divides that sum by the total number of points in the Performance Breakdown matrix below; a short calculation sketch follows at the end of this section.
Label – Qualitative status based on accuracy range
Labels
0–75% = Underfitted (red)
76–90% = Optimal (green)
91–100% = Overfitted (red)
An Optimal score means the model is generalizing well based on your entire data set and input settings.
To explore the results further, click to view detailed model outputs.
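As a point of reference, the sketch below (illustrative Python, not part of DORA; all counts are hypothetical) shows how an accuracy score like this is derived from the four outcome counts in the Performance Breakdown matrix.

```python
# Hypothetical outcome counts from a Performance Breakdown matrix.
tp, fp, fn, tn = 40, 10, 5, 45

# Accuracy = correct predictions (TP + TN) divided by all points in the matrix.
accuracy = (tp + tn) / (tp + fp + fn + tn)
print(f"Accuracy: {accuracy:.0%}")  # 85% -> within the 76-90% "Optimal" band
```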
F1, Precision, and Recall
F1 Score
The F1 Score combines Precision and Recall into a single value. It is high only when both Precision and Recall are strong, making it a good indicator of overall model reliability.
Formula: 2 × (Precision × Recall) / (Precision + Recall)
Labels
0–70% = Poor (red)
71–80% = Fair (yellow)
81–100% = Optimal (green)
A higher score means the model is performing well at identifying true mineralized areas, while avoiding unnecessary false positives.
Precision
Precision reflects how often the model is correct when it predicts a location as mineralized.
Formula: True Positive / (True Positive + False Positive)
A high Precision value means the model produces fewer false positives.
Recall
Recall reflects how many of the actual mineralized locations the model successfully identifies.
Formula: True Positive / (True Positive + False Negative)
A high Recall value means the model finds most of the true mineralized areas.
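The snippet below is a minimal sketch (illustrative Python with hypothetical counts, not DORA's internals) showing how the formulas above fit together: Precision and Recall come straight from the outcome counts, and F1 is their harmonic mean.

```python
# Hypothetical outcome counts, for illustration only.
tp, fp, fn = 40, 10, 5

precision = tp / (tp + fp)  # correct mineralized calls / all mineralized calls
recall    = tp / (tp + fn)  # mineralized areas found / all actual mineralized areas
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(f"Precision: {precision:.0%}, Recall: {recall:.0%}, F1: {f1:.0%}")
# Precision: 80%, Recall: 89%, F1: 84% -> "Optimal" band (81-100%)
```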
Performance Breakdown Matrix
Below the gauge, you’ll find a Performance Breakdown, which shows how well the model's predictions match actual outcomes from your entire dataset.
Your Learning Data, which was configured in Step 3: Set Up Learning Data, is the foundation for this output. It helps the model learn to distinguish between mineralized and unmineralized zones, which is reflected in the results.
This breakdown contains four key outcomes:
True Positive (TP): The model correctly predicts a mineralized location.
False Positive (FP): The model incorrectly predicts a mineralized location when it’s actually barren.
False Negative (FN): The model incorrectly predicts a barren location when it’s actually mineralized.
True Negative (TN): The model correctly predicts a barren location.
These outcomes are arranged in a matrix format, making it easy to see how many of each type the model is producing.
The shading helps you interpret the results at a glance. Darker cells along the diagonal represent stronger performance.
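If it helps to see the tally in code form, the sketch below (illustrative Python with made-up labels, not DORA's implementation) counts the four outcomes by comparing actual labels against predictions, assuming 1 = mineralized and 0 = barren.

```python
# Hypothetical labels: 1 = mineralized, 0 = barren.
actual    = [1, 1, 0, 0, 1, 0, 0, 1]   # known outcomes at learning points
predicted = [1, 0, 0, 1, 1, 0, 0, 1]   # the model's predictions at those points

tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))  # correctly called mineralized
fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))  # barren called mineralized
fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))  # mineralized called barren
tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))  # correctly called barren

print("              Predicted mineralized | Predicted barren")
print(f"Mineralized   TP = {tp}                | FN = {fn}")
print(f"Barren        FP = {fp}                | TN = {tn}")
```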
How to Interpret the Performance Breakdown
Ideally, True Positives and True Negatives should be high, while False Positives and False Negatives should be as low as possible.
High True Positives confirm that the model identifies mineralized areas correctly.
High False Positives mean barren ground is flagged incorrectly as prospective, leading to wasted effort.
High False Negatives mean potential mineralized ground is missed.
High True Negatives confirm that barren areas are ruled out reliably.
It’s also important to check for overfitting (when a model is too complex and performs poorly on new data) and underfitting (when a model is too simple and misses key patterns). A balanced model should generalize well across unseen data.
We prioritize maximizing True Negatives in our modeling (over True Positives) so that barren areas are ruled out accurately and mineral systems are not overlooked. Following up on targets that ultimately yield poor results is a more pragmatic exploration approach than missing a potential discovery, as the comparison below illustrates.
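As a hypothetical illustration (Python with made-up counts, not DORA output), both models below score the same accuracy, but the one with fewer False Negatives misses fewer potential discoveries, at the cost of more follow-up on barren ground.

```python
# Two hypothetical models with identical accuracy (85%) on 100 points.
model_a = dict(tp=35, fp=5, fn=10, tn=50)   # misses 10 mineralized areas
model_b = dict(tp=43, fp=13, fn=2, tn=42)   # misses only 2, but flags 13 barren sites

for name, m in [("Model A", model_a), ("Model B", model_b)]:
    accuracy = (m["tp"] + m["tn"]) / sum(m.values())
    print(f'{name}: accuracy {accuracy:.0%}, '
          f'missed discoveries (FN) = {m["fn"]}, wasted follow-ups (FP) = {m["fp"]}')
```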
What to Do if Results Are Not Optimal
If you see a high rate of False Positives and/or False Negatives, it means the model is not providing reliable predictions. This is typically reflected in a low F1 Score, which combines both Precision and Recall.
In this case, you may need to adjust your inputs, parameters, or model settings to improve performance.
Try the following adjustments, in this order:
Revise model parameters
In Step 5: Build Predictive Model, adjust the cluster size and minimum points per cluster.
Explore Advanced Settings for more control over algorithm performance.
Review your features
In Step 2: Select Input Features, confirm that only relevant geoscience layers are included. Remove noisy or unrelated inputs.
Revise your target thresholds
In Step 3: Set Up Learning Data, adjust the thresholds used to define your learning targets.
Adjust the AOI resolution
In Step 1: Select AOI, modify the height and width of your Area of Interest to better match geological context and data coverage.
Review Learning Data files
Check your Learning Points shapefile (Step 3: Set Up Learning Data) to ensure that data is accurate and covers the AOI.
Learn More
Interpret other DORA Result Graphs
Create a DORA Prediction Map
Still Have Questions?
Reach out to your dedicated DORA contact or email support@VRIFY.com for more information.