Interpreting VRIFY AI’s Result Graphs

Learn how to interpret VRIFY AI’s result outputs in Target Generation, with tips for optimizing your outcomes based on the results.


Overview

Closely analyzing VRIFY AI’s outputs through our visual graphs is essential for ensuring the accuracy and reliability of the prospectivity predictions.

These graphs allow you to clearly assess how well the model is performing, explore where it is making errors, and see which features are influencing its decisions. This process helps identify areas for improvement and ensures the model aligns with the data and exploration objectives, ultimately enhancing the quality of predictions.


Visual Analysis of the Results

Users should examine the VPS score and its spatial distribution to ensure it aligns with their understanding of the area. This review helps validate that the data highlights regions already identified as mineralized or considered potential targets.


Confusion Matrix

What this graph tells you

The confusion matrix visualizes the performance of the predictive model by showing how often the model makes correct predictions compared to misclassifications. It compares the model’s predictions with known results within your dataset.

The confusion matrix evaluates the performance of VRIFY AI’s predictions by comparing the model’s predictions with the actual outcomes in the validation (or holdout) set, which is 20% of your data. It shows four key values (see the sketch after this list for how they are derived):

  • True Positive: The model correctly predicts a positive outcome (above threshold) for the validation point.

  • True Negative: The model correctly predicts a negative outcome (below threshold) for the validation point.

  • False Positive: The model incorrectly predicts a positive result when it was actually negative (Type I error).

  • False Negative: The model incorrectly predicts a negative result when it was actually positive (Type II error).
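If you want to reproduce this kind of check outside the platform, here is a minimal sketch using scikit-learn on synthetic data. The dataset, classifier, and split below are illustrative placeholders, not VRIFY AI’s actual pipeline or exported results.

```python
# Minimal sketch: building a confusion matrix from a 20% validation (holdout) split.
# Synthetic data and a generic classifier stand in for your real features and labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)  # placeholder data
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
y_pred = model.predict(X_val)  # predictions on the 20% holdout set

# scikit-learn lays the matrix out as [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_val, y_pred).ravel()
print(f"TP={tp}  TN={tn}  FP={fp}  FN={fn}")
```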

How to interpret this graph

There are four quadrants on this graph that collectively represent how frequently the AI model made accurate predictions compared to inaccurate predictions.

  • The bottom-right quadrant shows the True Positives (correct positive predictions).

  • The top-left quadrant shows the True Negatives (correct negative predictions).

  • The top-right quadrant shows False Positives (wrongly predicted as positive).

  • The bottom-left quadrant shows False Negatives (wrongly predicted as negative).

Ideally, the True Positive and True Negative boxes should have the highest values, while the False Positive and False Negative boxes should be as low as possible, indicating the model is making accurate predictions.

When configuring your model and evaluating the results, it's important to strike the right balance between overfitting and underfitting:

  • Overfitting: Happens when a model is too complex, capturing noise along with patterns, resulting in high accuracy on training data but poor performance on new data.

  • Underfitting: Occurs when a model is too simple, missing key patterns, leading to poor performance on both training and test data. A quick diagnostic for both conditions is sketched below.
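As a rough check, you can compare training and validation accuracy: a large gap suggests overfitting, while low accuracy on both suggests underfitting. This minimal sketch reuses the fitted model and 80/20 split from the confusion matrix example above; the accuracy thresholds are illustrative, not VRIFY settings.

```python
# Minimal sketch: comparing training vs. validation accuracy to flag over- or underfitting.
# Reuses `model`, `X_train`, `y_train`, `X_val`, `y_val` from the confusion matrix sketch.
train_acc = model.score(X_train, y_train)
val_acc = model.score(X_val, y_val)

if train_acc - val_acc > 0.15:               # illustrative gap threshold
    print(f"Possible overfitting: train={train_acc:.2f}, validation={val_acc:.2f}")
elif train_acc < 0.70 and val_acc < 0.70:    # illustrative accuracy floor
    print(f"Possible underfitting: train={train_acc:.2f}, validation={val_acc:.2f}")
else:
    print(f"Reasonable balance: train={train_acc:.2f}, validation={val_acc:.2f}")
```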

If the model accurately identifies a target (True Positive) or correctly rules out barren areas (True Negative), this indicates higher reliability of the predictions. While the other result graphs may indicate adjustments are needed, a high rate of true positives and true negatives is a strong indicator of a well-trained model.

Conversely, a high rate of False Positives means the model incorrectly identified a barren area as valuable exploration grounds, while high False Negatives would indicate the model missed an area with actual mineralization. In this case, you should make adjustments to the parameters set in earlier steps.

We prioritize maximizing True Negatives in our modelling (over True Positives) to rule out barren areas more accurately, ensuring no mineral system is overlooked. Following up on targets that yield poor results is a more pragmatic exploration approach than missing out on potential discoveries.

What to do if the results are not optimal

If you have a high rate of false negatives and/or false positives, this is an indication that the model is not going to provide reliable target predictions.

There are advanced settings in the Predictive Modelling step that can be adjusted to help optimize your results. In this case, please reach out to your dedicated VRIFY AI contact to help make adjustments.


SHAP Values and Feature Importance

What are SHAP values

SHAP (SHapley Additive exPlanations) values are a way to explain the output of any AI model. They use a game-theoretic approach that measures each feature's contribution to the final outcome (the predictive model value). In AI, each feature is assigned an importance value representing its contribution to the model's output; this is shown as the % value beside the feature name. SHAP values show how each feature affects each final prediction, the significance of each feature compared to others, and the model's reliance on the interaction between features.

What this graph tells you

The SHAP value graph shows how much each feature (such as geological data, geochemical elements, geophysical anomalies, etc.) contributed to the AI model's predictions. It helps you understand the importance and impact of different features on the model’s outcomes, giving you deeper insight into the reasoning behind the model’s recommendations.

The features are ranked by importance, with the highest feature importance at the top of the graph and the lowest at the bottom. The higher the SHAP value for a feature (see the % beside the feature name), the more weight the feature has in making a prediction. Ideally, we want reliable and unbiased exploration data to have higher SHAP values.

For each feature, the SHAP values are reported along the x-axis of the graph, together with the feature’s input value as the colour scale. If you look at the feature “bedding_strike_field”, you will see that it is mostly high input values (red) with positive SHAP values (positive values on the x-axis).
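If you want to experiment with SHAP outside the platform, the open-source shap package produces the same style of summary (beeswarm) plot and per-feature importance percentages. Below is a minimal sketch on synthetic data; the feature names, model, and values are illustrative assumptions, not VRIFY’s internal pipeline.

```python
# Minimal sketch: SHAP values, per-feature importance %, and a beeswarm summary plot.
# Synthetic data and a generic gradient-boosted model stand in for the real pipeline.
import numpy as np
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

feature_names = ["mag_anomaly", "as_ppm", "dist_to_fault", "bedding_strike_field", "cu_ppm"]  # illustrative
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
X = pd.DataFrame(X, columns=feature_names)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one SHAP value per sample per feature

# Mean absolute SHAP value per feature, as a percentage of the total;
# this is analogous to the % shown beside each feature name in the results graph.
importance = np.abs(shap_values).mean(axis=0)
for name, pct in sorted(zip(feature_names, 100 * importance / importance.sum()),
                        key=lambda pair: -pair[1]):
    print(f"{name}: {pct:.1f}%")

# Beeswarm-style summary plot: x-axis = SHAP value, colour = input feature value.
shap.summary_plot(shap_values, X)
```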

How to interpret this graph

SHAP values for each feature are indicated by the distribution along the x-axis, the colour scale represents the input feature values, and a feature's importance is indicated by its rank on the graph (the higher the feature sits, the higher its importance) as well as by a percentage.

A feature with a high SHAP value indicates that it played a significant role in the model's decision-making process for a particular prediction. For instance, if the SHAP values show that certain geophysical anomalies strongly contributed to the model's identification of a promising drill target, it suggests these anomalies are critical for mineral discovery in your dataset. Conversely, features with low SHAP values had less impact on the prediction.

Understanding SHAP values allows you to prioritize the most important features in your exploration strategy and evaluate whether the model’s focus aligns with your geological understanding. If unexpected features are driving predictions, this may indicate areas where the model needs refinement.

What to do if the results are not optimal

Results are considered not optimal if certain features you expected to be influential (e.g., geophysical anomalies) show low feature importance, or if features with high feature importance seem irrelevant to your geological model. This may indicate that the AI model is not aligned with real-world mineral exploration principles.

In this case, you may need to revisit the model’s training data or adjust the selected features to improve the model's accuracy. Some approaches are:

  • Revisit Select Features and adjust the features included in the experiment (omit features that you feel may not be relevant to your model).


Z Coordinate Prediction Accuracy

What this graph tells you

The linear regression plot compares VRIFY’s predicted z coordinates for each cell in the predictive model with the actual (true) z coordinates from existing drilling data. Each point on the graph represents a learning point that is part of the validation set, where the x-axis and y-axis correspond to the predicted and actual z locations. Ideally, the points should be aligned closely along the diagonal 1:1 line, indicating that the predictions are very close to the true z-value depths.

This graph of predicted versus true coordinates of learning points helps you visualize how accurately the model is predicting the depth of potential mineral-rich targets.

The closer the points are to the diagonal, the better the model is performing.
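Here is a minimal sketch of this kind of plot using made-up predicted and true z values; the numbers, error sizes, and matplotlib usage are illustrative, not real drilling data or VRIFY output.

```python
# Minimal sketch: predicted vs. true z coordinates with the 1:1 diagonal.
# The depths and prediction errors below are synthetic placeholders.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
z_true = rng.uniform(-400, 0, size=80)        # true depths (m) of validation points
z_pred = z_true + rng.normal(0, 15, size=80)  # predictions with some error

plt.scatter(z_pred, z_true, s=12)
lims = [min(z_pred.min(), z_true.min()), max(z_pred.max(), z_true.max())]
plt.plot(lims, lims, "k--", label="1:1 line (perfect prediction)")
plt.xlabel("Predicted z (m)")
plt.ylabel("True z (m)")
plt.legend()
plt.show()

# A single summary number for the same idea: mean absolute error in metres.
print(f"Mean absolute z error: {np.abs(z_pred - z_true).mean():.1f} m")
```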

How to interpret this graph

In a scatter plot comparing predicted versus true coordinates, the diagonal line represents perfect predictions where the AI’s estimated coordinates match exactly with the true coordinates. Points close to this line show accurate predictions, while points further away represent greater errors in prediction.

For example, if the AI predicts a drill target’s location as 10 meters off from the actual known location, this will appear as a point deviating from the diagonal by a certain distance. The greater the distance from the line, the larger the error in the coordinate prediction.

What to do if the results are not optimal

If you see that many points are far from the diagonal line, it suggests that the model is inaccurately predicting the locations of drill targets.

You may need to revisit the data inputs selected in Input Features or adjust the model’s parameters in Embed Visual Features and within the Build Predictive Models section to improve accuracy.


ROC Curve

What this graph tells you

The ROC (Receiver Operating Characteristic) curve visualizes the performance of the AI model by plotting the True Positive Rate (Sensitivity) against the False Positive Rate (1 - Specificity) at different threshold levels. This graph helps you understand how well the model distinguishes between positive and negative examples (e.g., mineral-rich versus barren learning examples). A model with strong performance will have a curve that hugs the top-left corner, indicating a high True Positive Rate with a low False Positive Rate.

In mining AI, the ROC curve shows how effectively the model can predict promising drill targets versus barren areas, allowing you to assess the model’s ability to reduce false discoveries and accurately identify valuable exploration sites.

What the results mean

The ROC curve allows you to visualize the trade-offs between sensitivity (how well the model identifies positive learning points) and specificity (how well it avoids false positives). A curve that stays near the top-left corner of the plot indicates strong model performance, with a high rate of correctly identified targets and minimal false alarms. The area under the ROC curve (AUC) provides a single score for the model’s performance, with an AUC of 1.0 being perfect and 0.5 indicating no better than random guessing.

If the ROC curve is closer to the diagonal line (AUC near 0.5), this suggests that the model is performing poorly and struggles to distinguish between mineral-rich and barren areas.

Curves that hug the top-left corner too tightly can also indicate an over-fitted model; see the section on overfitting and underfitting above.
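For reference, the ROC curve and AUC can be reproduced outside the platform with scikit-learn. The sketch below uses synthetic validation labels and scores as stand-ins for the model’s actual predicted probabilities.

```python
# Minimal sketch: ROC curve and AUC from validation labels and predicted probabilities.
# Synthetic labels and scores stand in for real validation data.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y_val = rng.integers(0, 2, size=200)                   # 1 = mineralized, 0 = barren
scores = y_val * 0.4 + rng.uniform(0, 0.6, size=200)   # imperfect predicted probabilities

fpr, tpr, thresholds = roc_curve(y_val, scores)
auc = roc_auc_score(y_val, scores)

plt.plot(fpr, tpr, label=f"Model (AUC = {auc:.2f})")
plt.plot([0, 1], [0, 1], "k--", label="Random guessing (AUC = 0.5)")
plt.xlabel("False Positive Rate (1 - Specificity)")
plt.ylabel("True Positive Rate (Sensitivity)")
plt.legend()
plt.show()
```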

What to do if the results are not optimal

If your ROC curve is close to the diagonal or shows poor performance, it suggests that the AI model is not effectively distinguishing between valuable and barren drill targets. You may need to retrain the model with more relevant data, adjust its threshold settings to improve classification, or change the predictive model's advanced parameters to allow for closer-fitting predictions. Conversely, to limit overfitting, you might have to revisit the features selected for your predictions as well as the advanced parameters of the predictive modelling. The goal is to increase the True Positive Rate while minimizing the False Positive Rate without overfitting.


Still have questions?

Reach out to your dedicated VRIFY AI Contact or email Support@VRIFY.com for more information.
