Prediction Accuracy and Performance Breakdown

Learn how to read and interpret the Prediction Accuracy, F1, and the Performance Breakdown matrix.


Overview

This article explains how to interpret the Prediction Accuracy gauge, F1 Score, and Performance Breakdown matrix on your Prediction Map. These outputs help you evaluate how well the model distinguishes between mineralized and barren areas; this article also covers what to do if the results are not reliable.


Prediction Accuracy Gauge

When you open your results, you’ll see the Prediction Accuracy gauge. This circular display shows:

  • Accuracy Score – The percentage of predictions the model got right: correct predictions (True Positives plus True Negatives) divided by the total number of points in the Performance Breakdown matrix below. A worked sketch follows this list.

  • Label – A qualitative status based on the accuracy range
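
For clarity, here is a minimal sketch of how an accuracy score is computed from the four matrix counts, with a label band applied. The counts are hypothetical, and the label thresholds are an assumption borrowed from the F1 labels later in this article; DORA's exact gauge thresholds may differ.

    # Minimal sketch of an accuracy calculation (illustrative, not DORA's code).
    tp, fp, fn, tn = 42, 8, 5, 45   # hypothetical counts from a breakdown matrix

    accuracy = (tp + tn) / (tp + fp + fn + tn)   # correct predictions / all points

    # Assumed label bands, borrowed from the F1 labels below; the actual
    # gauge thresholds may differ.
    if accuracy > 0.80:
        label = "Optimal"
    elif accuracy > 0.70:
        label = "Fair"
    else:
        label = "Poor"

    print(f"Accuracy: {accuracy:.0%} ({label})")   # prints "Accuracy: 87% (Optimal)"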

[Image: Prediction Accuracy gauge]

Labels

An Optimal score means the model is generalizing well based on your entire data set and input settings.

To explore the results further, click to view detailed model outputs.


F1, Precision, and Recall

F1 Score

The F1 Score combines Precision and Recall into one value. It increases only when both Precision and Recall are strong, making it a good indicator of overall model reliability.

Labels

  • 0-70% - Poor (red)

  • 71-80% - Fair (yellow)

  • 81-100% - Optimal (green)

A higher score means the model reliably identifies true mineralized areas while avoiding false positives. A worked sketch covering all three metrics follows the Recall section below.

Precision

Precision reflects how often the model is correct when it predicts a location as mineralized.

  • Formula: True Positive / (True Positive + False Positive)

  • A high Precision value means the model produces fewer false positives.

Recall

Recall reflects how many of the actual mineralized locations the model successfully identifies.

  • Formula: True Positive / (True Positive + False Negative)

  • A high Recall value means the model finds most of the true mineralized areas.
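
To make these formulas concrete, here is a minimal sketch that computes Precision, Recall, and F1 from the same hypothetical counts used above. It assumes the standard harmonic-mean definition of F1, which is the conventional way to combine Precision and Recall; treat that as an assumption rather than a confirmed detail of DORA's implementation.

    # Illustrative counts; in practice these come from the Performance Breakdown.
    tp, fp, fn = 42, 8, 5

    precision = tp / (tp + fp)   # of points flagged mineralized, how many were right
    recall    = tp / (tp + fn)   # of truly mineralized points, how many were found

    # Standard F1: the harmonic mean of Precision and Recall. It is high
    # only when both components are high.
    f1 = 2 * precision * recall / (precision + recall)

    print(f"Precision: {precision:.0%}")  # 84%
    print(f"Recall:    {recall:.0%}")     # 89%
    print(f"F1 Score:  {f1:.0%}")         # 87%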


Performance Breakdown Matrix

Below the gauge, you’ll find a Performance Breakdown, which shows how well the model's predictions match actual outcomes from your entire dataset.

Your Learning Data, which was configured in Step 3: Set Up Learning Data, is the foundation for this output. It helps the model learn to distinguish between mineralized and unmineralized zones, which is reflected in the results.

This breakdown contains four key outcomes:

  • True Positive (TP): The model correctly predicts a mineralized location.

  • False Positive (FP): The model incorrectly predicts a mineralized location when it’s actually barren.

  • False Negative (FN): The model incorrectly predicts a barren location when it’s actually mineralized.

  • True Negative (TN): The model correctly predicts a barren location.

These outcomes are arranged in a matrix format, making it easy to see how many of each type the model is producing.

The shading helps you interpret the results at a glance. Darker cells along the diagonal represent stronger performance.
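
If you want to reproduce a breakdown like this from your own labeled points, the four outcomes can be tallied in a few lines. The sketch below is generic, not DORA's internal code; actual and predicted are hypothetical lists in which 1 marks a mineralized point and 0 a barren one.

    # Generic confusion-matrix tally (1 = mineralized, 0 = barren).
    actual    = [1, 1, 0, 0, 1, 0, 1, 0]   # hypothetical ground truth
    predicted = [1, 0, 0, 1, 1, 0, 1, 0]   # hypothetical model output

    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)

    # Rows: actual class; columns: predicted class.
    print("                   Pred. mineralized   Pred. barren")
    print(f"Act. mineralized   TP = {tp}             FN = {fn}")
    print(f"Act. barren        FP = {fp}             TN = {tn}")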

[Image: Performance breakdown validation samples]


How to Interpret the Performance Breakdown

Ideally, True Positives and True Negatives should be high, while False Positives and False Negatives should be as low as possible.

  • High True Positives confirm that the model identifies mineralized areas correctly.

  • High False Positives mean barren ground is flagged incorrectly as prospective, leading to wasted effort.

  • High False Negatives mean potential mineralized ground is missed.

  • High True Negatives confirm that barren areas are ruled out reliably.

[Image: Performance Breakdown matrix]

It’s also important to check for overfitting (when a model is too complex and performs poorly on new data) and underfitting (when a model is too simple and misses key patterns). A balanced model should generalize well across unseen data.
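
A simple way to screen for both failure modes is to compare accuracy on the data the model learned from against accuracy on held-out points. The sketch below uses illustrative rules of thumb; the scores and thresholds are hypothetical, not DORA settings.

    # Illustrative diagnostic: compare training vs. held-out accuracy.
    train_accuracy      = 0.97   # hypothetical score on the learning data
    validation_accuracy = 0.72   # hypothetical score on unseen points

    gap = train_accuracy - validation_accuracy

    if gap > 0.10:
        print("Large train/validation gap: likely overfitting.")
    elif train_accuracy < 0.70:
        print("Low accuracy even on training data: likely underfitting.")
    else:
        print("Scores are close and reasonably high: the model generalizes well.")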

We prioritize maximizing True Negatives over True Positives in our modeling: ruling out barren areas more accurately ensures that mineral systems are not overlooked. Following up on targets that turn out to be barren is a more pragmatic exploration approach than missing a potential discovery.


What to Do if Results Are Not Optimal

If you see a high rate of False Positives and/or False Negatives, it means the model is not providing reliable predictions. This is typically reflected in a low F1 Score, which combines both Precision and Recall.

In this case, you may need to adjust your inputs, parameters, or model settings to improve performance.

Try the following adjustments, in this order:

  1. Revise model parameters

  2. Review your features

  3. Revise your target thresholds

  4. Adjust the AOI resolution

    • In Step 1: Select AOI, modify the height and width of your Area of Interest to better match geological context and data coverage.

  5. Review Learning Data files


Still Have Questions?

Reach out to your dedicated DORA contact or email support@VRIFY.com for more information.
