If a model correctly predicts 90 out of 100 total cases, which metric describes this overall correctness?
In spam detection, if a model labels 30 emails as spam and 24 of them are actually spam, what metric measures the proportion of spam predictions that are correct?
A disease detection test correctly identifies 80 out of 100 actual positive cases; which metric quantifies this ability to find all positives?
Which metric combines both precision and recall into a single value using their harmonic mean?
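The four questions above can be checked numerically. Below is a minimal sketch, using the counts stated in the questions themselves (90/100 correct, 24/30 correct spam predictions, 80/100 positives found); the function names are illustrative, not from any particular library.

```python
# Worked check of the arithmetic in the questions above.
# All numbers are taken directly from the question text.

def accuracy(correct, total):
    # Fraction of all predictions that were correct.
    return correct / total

def precision(true_pos, predicted_pos):
    # Fraction of positive predictions that were actually positive.
    return true_pos / predicted_pos

def recall(true_pos, actual_pos):
    # Fraction of actual positives that were found.
    return true_pos / actual_pos

def f1(p, r):
    # Harmonic mean of precision and recall.
    return 2 * p * r / (p + r)

print(accuracy(90, 100))                       # 0.9
print(precision(24, 30))                       # 0.8
print(recall(80, 100))                         # 0.8
print(f1(precision(24, 30), recall(80, 100)))  # 0.8
```

Note that when precision and recall are equal, the F1-Score equals them both, since the harmonic mean of two identical values is that value.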
If a model predicts 50 items as positive but only 20 are actually positive, which metric would decrease because of the many false positives?
In face recognition, if there are 40 real faces and the system finds 35 of them, what does the ratio 35/40 represent?
An increase in false positives in a binary classification problem will primarily cause which metric to decrease?
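The effect asked about in the last question can be demonstrated directly. The sketch below uses made-up label lists (not from the questions) to show that adding false positives lowers precision while leaving recall unchanged.

```python
# Illustrative sketch with made-up labels: extra false positives
# reduce precision but do not affect recall.

def prf(y_true, y_pred):
    # Count confusion-matrix cells for the positive class (label 1).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = [1, 1, 1, 0, 0, 0, 0, 0]
base   = [1, 1, 1, 0, 0, 0, 0, 0]   # perfect predictions
noisy  = [1, 1, 1, 1, 1, 0, 0, 0]   # two extra false positives

print(prf(y_true, base))   # (1.0, 1.0)
print(prf(y_true, noisy))  # (0.6, 1.0)
```

Recall stays at 1.0 because all three actual positives are still found; only the denominator of precision grows.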
Out of 150 predictions, a classifier got 120 correct and 30 wrong; what is the accuracy?
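The arithmetic for this question is a one-liner; a quick sketch using the counts from the question:

```python
# Accuracy for 120 correct predictions out of 150 total.
correct, total = 120, 150
accuracy = correct / total
print(accuracy)  # 0.8
```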
Why would you use F1-Score instead of just accuracy for a very imbalanced dataset?
In a system where missing a positive instance is very costly (e.g., medical diagnosis), which metric should be prioritized?