Overview
In this section you configure which Vision Transformer (ViT) model will be used for your predictions, and set how many times your dataset passes through the algorithm during local fine-tuning, which adapts the general model to your local domain. The number of passes is referred to as the number of epochs.
A vision embedding is a multi-dimensional latent-space representation of the input feature data that makes patterns and relationships in the data easier to detect. This step is an exercise in dimensionality reduction, broadly comparable to a Principal Component Analysis (PCA). In this process, your input features are translated into 24 dimensions that help the algorithm make better predictions.
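For intuition, the sketch below uses PCA to reduce a hypothetical feature table to 24 dimensions. It is an analogy only: VRIFY AI learns its embedding with a Vision Transformer rather than PCA, and the cell and feature counts here are made up.

```python
# Analogy only: VRIFY AI uses a learned ViT embedding, not PCA.
# The idea is the same: compress many input features into a smaller
# latent representation (24 dimensions, matching the text above).
import numpy as np
from sklearn.decomposition import PCA

n_cells, n_features = 10_000, 120     # hypothetical grid cells x input layers
features = np.random.rand(n_cells, n_features)

pca = PCA(n_components=24)            # reduce to 24 latent dimensions
latent = pca.fit_transform(features)
print(latent.shape)                   # (10000, 24)
```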
Read on for more context and explanations about what this step entails.
Key Concepts by Parameter
Parameter: Vision Transformer Model
The Vision Transformer processes, embeds, encodes, and classifies the input features, allowing a prediction to be made for each patch (a grid of pixels).
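To illustrate what per-patch prediction means, here is a minimal sketch of the patch-embed-predict flow. The patch size, channel count, embedding width, and layer choices are assumptions for illustration, not VRIFY AI internals.

```python
# Hypothetical sketch: split an input tile into patches, embed each patch,
# and produce one prediction per patch. All sizes are illustrative.
import torch
import torch.nn as nn

patch, channels, dim = 16, 8, 24          # patch size, input layers, embed dim
x = torch.rand(1, channels, 256, 256)     # one tile of stacked feature rasters

to_patches = nn.Conv2d(channels, dim, kernel_size=patch, stride=patch)
tokens = to_patches(x).flatten(2).transpose(1, 2)   # (1, 256 patches, 24)

head = nn.Linear(dim, 1)                  # per-patch prediction head
scores = torch.sigmoid(head(tokens))      # one score per patch
print(scores.shape)                       # torch.Size([1, 256, 1])
```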
The available Vision Transformer Models are trained for specific mineral system types, plus a General Model that can be used if there is no model available for your target mineral system.
Choosing the correct Vision Transformer Model, whether one specialized for a specific mineral system type or the General Model, aligns the model with your data characteristics, increases prediction accuracy, and reduces the risk of overfitting.
If you are not sure which Vision Transformer Model to use, start with the Master_Model.pt.
To start from an untrained model instead, use the None option. This is useful when your deposit is highly specific to your location.
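The sketch below shows the general pattern behind this choice: load a pretrained checkpoint to fine-tune, or keep random initialization (the None option) and train entirely on local data. The TinyViT class and the file-handling details are hypothetical stand-ins, not VRIFY AI's actual code.

```python
# Hypothetical sketch of pretrained-vs-untrained initialization.
import torch
import torch.nn as nn
from pathlib import Path

class TinyViT(nn.Module):                 # stand-in architecture, not VRIFY's
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(24, 24)
        self.head = nn.Linear(24, 1)
    def forward(self, x):
        return self.head(self.encoder(x))

model = TinyViT()
checkpoint = Path("Master_Model.pt")      # checkpoint name from the text

use_pretrained = True                     # choosing "None" would make this False
if use_pretrained and checkpoint.exists():
    model.load_state_dict(torch.load(checkpoint, map_location="cpu"))
# With "None", the randomly initialized model trains only on your local data.
```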
Parameter: No. of Epochs
Setting the number of epochs controls the number of full passes through the training data.
Low Epochs: Fewer epochs result in shorter training time. This can be sufficient if the model quickly learns from the data, but it risks underfitting, where the model fails to capture all the patterns in the dataset.
High Epochs: More epochs provide more training cycles, potentially improving the model’s performance. However, after a certain point, additional epochs may no longer improve accuracy.
It may be beneficial to increase the number of epochs if your model has not fully converged (i.e., the loss is still decreasing and the accuracy is still increasing).
An early-stopping mechanism is built into VRIFY AI to monitor model performance and stop training when performance no longer improves. This avoids unnecessary epochs and prevents overfitting, improving the reliability of the model and its predictions while reducing calculation run time.
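For readers curious how early stopping works in general, the sketch below shows a common patience-based pattern. The patience value, metric, and simulated losses are assumptions; VRIFY AI's built-in criterion is internal and may differ.

```python
# Generic patience-based early stopping, shown with a simulated loss curve.
max_epochs, patience = 50, 3              # assumed values for illustration
best_loss, bad_epochs = float("inf"), 0

# Stand-in for real training: a validation loss that stops improving.
val_losses = [1.0, 0.6, 0.4, 0.35, 0.36, 0.36, 0.37, 0.38]

for epoch in range(max_epochs):
    val_loss = val_losses[min(epoch, len(val_losses) - 1)]
    if val_loss < best_loss:              # still improving: continue training
        best_loss, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1                   # one more epoch without improvement
        if bad_epochs >= patience:
            print(f"Early stop at epoch {epoch}")
            break
```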
Still have questions?
Reach out to your dedicated VRIFY AI Contact or email Support@VRIFY.com for more information.