Yeah, but in ML you train a random model that gives a decent percentage of correct results when you test it, and it's either always around the same percentage or completely fucked. These losers made a model that's untestable without time travel and produces wildly different results.

One of the more alarming things about the recent rise of machine learning is that the models it produces are generally stochastic. For example, I evaluated an AI module (using TensorFlow) that would automate spine segmentation, i.e. automatically identifying where the vertebrae are in spine MRIs. You would train the model by sending in DICOM MRI images of spines paired with copies where the vertebrae were annotated. After training, you would send in spine images without annotations and see whether the model could annotate them automatically (e.g. by drawing red over the vertebrae in the output images).
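The "stochastic" point can be shown without TensorFlow or MRI data at all: any model trained with random initialization and random shuffling depends on the seed. Below is a toy sketch (pure Python, a single-neuron perceptron on made-up 2D points, nothing to do with the actual spine-segmentation module) where each seed yields its own trained model; the same seed reproduces the same result, different seeds need not.

```python
import random

def train_toy_classifier(data, labels, seed, epochs=50, lr=0.1):
    """Train a single-neuron perceptron with SGD.

    Random weight init and per-epoch shuffling make the trained model
    depend on the seed -- the stochasticity discussed above, in miniature.
    """
    rng = random.Random(seed)
    n = len(data[0])
    w = [rng.uniform(-1, 1) for _ in range(n)]  # random init: seed-dependent
    b = rng.uniform(-1, 1)
    idx = list(range(len(data)))
    for _ in range(epochs):
        rng.shuffle(idx)  # stochastic: visiting order differs per seed
        for i in idx:
            x, y = data[i], labels[i]
            pred = 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wj + lr * err * xj for wj, xj in zip(w, x)]
            b += lr * err
    # Fraction of training points classified correctly by the final model.
    correct = sum(
        (1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0) == y
        for x, y in zip(data, labels)
    )
    return correct / len(data)

# Tiny made-up dataset: label is 1 when the first coordinate is larger.
data = [(0.9, 0.1), (0.8, 0.3), (0.2, 0.7), (0.1, 0.9), (0.6, 0.2), (0.3, 0.8)]
labels = [1, 1, 0, 0, 1, 0]

# One trained model (and accuracy) per seed.
accs = [train_toy_classifier(data, labels, seed=s) for s in range(5)]
```

Rerunning with the same seed gives the same accuracy, which is why serious evaluations report results over many seeds rather than a single training run.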
Every academic makes a model of something these days, so it's always easy to find one that fits (so far) by coincidence. We can throw out the models that are already far from observable reality, then watch the rest drift away from it too.