🔮 Make Predictions
Once you have an algorithm, you can start making predictions in seconds. In this tutorial you will learn how to use it.
Once your project is ready, you will receive a confirmation email. Log into your profile and you will find your project in the Dashboard with the status Done.
You can click on your project and Arkangel AI will show you all the algorithms it found, ranked and classified by our Performance Score (0-100); the higher, the better:
The Performance Score gives you a balanced view of all the performance metrics on the Test set. You can switch to rank your models based on a specific metric such as Specificity, AUC, or Sensitivity by clicking the dropdown next to the Performance column (a small sketch of this re-ranking follows the list below).
The Leaderboard represents the ranking of the algorithms based on your prioritized metric.
The Speed will give you a sense of the number of inferences your algorithm can do per minute.
The Status will give you a classification of the overall quality of your algorithm.
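Conceptually, switching the prioritized metric just re-sorts the same leaderboard. The sketch below illustrates this with a made-up table in Python; the algorithm names, column names, and scores are illustrative only, not output from Arkangel AI.

```python
# Minimal sketch (not Arkangel AI's implementation): re-ranking a leaderboard
# by a chosen metric, the way the Performance column dropdown does in the UI.
import pandas as pd

# Hypothetical leaderboard; all names and numbers below are illustrative.
leaderboard = pd.DataFrame({
    "algorithm": ["Gradient Boosting", "Random Forest", "Logistic Regression"],
    "performance_score": [87, 84, 79],   # 0-100, higher is better
    "auc": [0.91, 0.93, 0.86],
    "sensitivity": [0.88, 0.81, 0.84],
    "specificity": [0.79, 0.90, 0.77],
})

# Switching the prioritized metric is just re-sorting the same table.
prioritized_metric = "auc"
ranked = leaderboard.sort_values(prioritized_metric, ascending=False).reset_index(drop=True)
print(ranked)
```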
You can click any algorithm to see its details:
On this tab, you can see all the performance metrics and a new section with the dataset partition:
Training records are the segment of your dataset used to train the model.
Test records are the records used to provide an unbiased estimate of the final model's performance.
Validation records are the records used to optimize the model parameters.
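If you want to reproduce a similar partition on your own data, the minimal sketch below shows one common way to carve out training, validation, and test records; the 70/15/15 proportions and the synthetic dataset are illustrative assumptions, not Arkangel AI's exact split.

```python
# Minimal sketch of a train/validation/test partition. The 70/15/15 split
# and the synthetic data are illustrative, not Arkangel AI's exact settings.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=42)

# Test records: held out for an unbiased estimate of the final model.
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.15, stratify=y, random_state=42
)

# Training records: used to fit the model itself.
# Validation records: used to optimize the model parameters.
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.15 / 0.85, stratify=y_rest, random_state=42
)

print(len(X_train), len(X_val), len(X_test))  # roughly 700 / 150 / 150
```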
Developing AI is an iterative process, and sometimes you won't find the best algorithm on the first try. If you encounter this challenge, retrain with different (1) Data improvement configurations, follow the (2) Data Best Practices, or collect more information in Arkangel AI by deploying your existing algorithm.
Finally, at the bottom of the Model details section you can find the Confusion Matrix generated by the model.
A confusion matrix consists of four key components:
True Positive (TP): The number of instances correctly predicted as positive (correctly classified as the target class).
True Negative (TN): The number of instances correctly predicted as negative (correctly classified as not the target class).
False Positive (FP): The number of instances incorrectly predicted as positive (incorrectly classified as the target class when it's not).
False Negative (FN): The number of instances incorrectly predicted as negative (incorrectly classified as not the target class when it is).
The confusion matrix helps assess the performance of a model by providing evaluation metrics. It allows you to analyze the types of errors the model makes and gain insights into its strengths and weaknesses.
In Arkangel AI we use this kind of confusion matrix, where the x axis (columns) represents the predictions made by the trained algorithm and the y axis (rows) represents the true label of each record. In this case, we take 1 as the positive class and 0 as the negative class.
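As a reference, the minimal sketch below computes the usual metrics from a binary confusion matrix laid out this way (rows = true labels, columns = predictions); the counts are made up for illustration.

```python
# Minimal sketch: metrics from a binary confusion matrix with rows = true
# labels and columns = predictions, 1 = positive class, 0 = negative class.
# The counts below are made up for illustration.
import numpy as np

#                predicted 0  predicted 1
cm = np.array([[ 50,           5],    # true 0 -> TN, FP
               [  8,          37]])   # true 1 -> FN, TP

tn, fp = cm[0]
fn, tp = cm[1]

sensitivity = tp / (tp + fn)   # a.k.a. recall / true positive rate
specificity = tn / (tn + fp)   # true negative rate
accuracy = (tp + tn) / cm.sum()

print(f"Sensitivity: {sensitivity:.2f}, Specificity: {specificity:.2f}, Accuracy: {accuracy:.2f}")
```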
Now that you have found your preferred algorithm, you can use it through a custom web app by clicking the Use web app button on your algorithm detail page, or integrate it into your workflow through our API by clicking the Use API button.
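If you go the API route, the sketch below shows roughly what such a request could look like; the endpoint URL, header, and field names are hypothetical placeholders, so copy the exact request shown by the Use API button for your algorithm instead. The web app flow is described next.

```python
# Hypothetical sketch of calling a deployed algorithm over HTTP. The URL,
# credential, and payload fields are placeholders; use the exact request
# shown by the Use API button for your algorithm.
import requests

API_URL = "https://api.example.com/v1/algorithms/<algorithm_id>/inference"  # placeholder
API_KEY = "YOUR_API_KEY"  # placeholder credential

record = {  # one new, unseen record with the same fields used in training
    "age": 54,
    "bilirubin": 1.8,
    "albumin": 3.1,
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=record,
    timeout=30,
)
response.raise_for_status()
print(response.json())  # e.g. a predicted label and a confidence score
```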
Arkangel AI will generate a form with the required inputs, creating fields that match each input type so you can submit a new, unseen record for prediction.
Simply input the new information in each field and click the Send inference button.
Arkangel AI will give you a sense of the usual range for each field so that you know how it is represented in the training data.
You can easily click the magic wand to generate a new record to test the algorithm.
Once your model finishes, it will load the prediction details page:
On the left you will have the record with the following values:
Feature: This is the name of the field.
Value: The value you gave to that field.
Impact: A score we calculate that represents the impact each field has on that specific prediction. The closer to +100, the more positive the impact on that prediction; the closer to -100, the more negative the impact. To learn more about this calculation, read about SHAP (SHapley Additive exPlanations); a short sketch follows the prediction details below.
Prediction: You will find the prediction at the top. In this case "3=CIRRHOSIS"
Confidence score: Below the prediction, you will find the confidence score for that prediction; in this case, a 90.4% confidence score.
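For readers curious about how SHAP-style impact values are produced, the minimal sketch below computes raw SHAP contributions for a single record with the open-source shap library on a synthetic model; Arkangel AI's Impact score rescales such contributions to the -100 to +100 range, so the numbers here only illustrate the idea.

```python
# Minimal sketch of per-feature SHAP contributions on a synthetic model.
# This is not Arkangel AI's implementation; the Impact score rescales
# contributions like these to a -100..+100 range.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # contributions for a single record

# One value per feature (in log-odds space): positive values push the
# prediction toward the positive class, negative values push it away.
print(shap_values)
```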
To make multiple predictions at the same time, you can select the Multi Inferences tool.
Once selected, a new page opens where you must click the highlighted "download example file" text to download a CSV file, which you then fill with the data needed to generate the multiple inferences. As of 5/29/2023, the CSV file can be as large as 930 KB; any file larger than that cannot be used, since the results would take too long to process.
Once filled in, drop or select the CSV file and a new CSV will be generated with two additional columns, predictionLabel and predictionConfidence, which contain the label predicted by the algorithm and the algorithm's confidence in that prediction.
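The sketch below shows one way to inspect that result file with pandas; the file name is a placeholder, while predictionLabel and predictionConfidence are the columns the tool appends.

```python
# Minimal sketch: inspect the CSV returned by the Multi Inferences tool.
# "multi_inference_results.csv" is a placeholder file name.
import pandas as pd

results = pd.read_csv("multi_inference_results.csv")

# Each row keeps your original fields plus the predicted label and the
# algorithm's confidence in that prediction.
print(results[["predictionLabel", "predictionConfidence"]].head())

# Example: keep only high-confidence predictions for review. Adjust the
# threshold if confidence is reported as a percentage (e.g. 90.4) rather
# than a fraction between 0 and 1.
high_confidence = results[results["predictionConfidence"] >= 0.9]
print(len(high_confidence), "records with >= 90% confidence")
```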