Feedback: automation is a nice thing, but we need proper monitoring and reporting in place to check for bias in the model's predictions.
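To make "check for bias" concrete, below is a minimal sketch of the kind of check such a monitoring job could run over prediction logs; the `group`/`prediction` field names and the 0.2 alert threshold are invented for illustration, not something from our pipeline.

```python
# Sketch of a periodic bias check over a window of prediction logs:
# compare positive-prediction rates across groups and alert on a large gap.
from collections import defaultdict

def positive_rate_by_group(logs):
    """logs: iterable of dicts like {"group": "A", "prediction": 1}."""
    counts, positives = defaultdict(int), defaultdict(int)
    for row in logs:
        counts[row["group"]] += 1
        positives[row["group"]] += row["prediction"]
    return {g: positives[g] / counts[g] for g in counts}

def parity_gap(logs):
    """Largest gap in positive-prediction rate between any two groups."""
    rates = positive_rate_by_group(logs)
    return max(rates.values()) - min(rates.values())

prediction_logs = [
    {"group": "A", "prediction": 1}, {"group": "A", "prediction": 0},
    {"group": "B", "prediction": 0}, {"group": "B", "prediction": 0},
]
if parity_gap(prediction_logs) > 0.2:   # alert threshold is an invented example
    print("bias alert: positive-prediction rates diverge across groups")
```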
Especially when the model decides its next set of inputs, we need to be careful to verify that it is learning from generally representative data.
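One rough way to do that, sketched below under assumed names (a `region` feature, a trusted reference sample, a 0.15 tolerance), is to compare the category mix of the batch the model selected for retraining against a representative reference sample.

```python
# Sketch: flag when the model-selected training batch drifts from a
# representative reference sample on some categorical feature.
from collections import Counter

def category_proportions(rows, key="region"):
    counts = Counter(row[key] for row in rows)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def max_drift(selected_batch, reference_sample, key="region"):
    """Largest absolute gap in category share between selected data and reference."""
    sel = category_proportions(selected_batch, key)
    ref = category_proportions(reference_sample, key)
    return max(abs(sel.get(k, 0.0) - ref.get(k, 0.0)) for k in set(sel) | set(ref))

reference = [{"region": "north"}] * 50 + [{"region": "south"}] * 50
selected  = [{"region": "north"}] * 90 + [{"region": "south"}] * 10
if max_drift(selected, reference) > 0.15:   # tolerance is an illustrative value
    print("drift alert: model-selected training data is not representative")
```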
All data is biased, but transparency about how and why it was collected can help us understand the inherent bias in our data set.
A model learning in production from feedback can create a positive feedback loop and amplify the inherent biases in the data. It's a domino effect: a small brick starting a huge chain of reactions.
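A toy simulation of that loop, with all numbers invented, shows how a small initial skew compounds when the model re-trains only on data it selected itself and over-selects the group it already sees most of.

```python
# Toy, deterministic simulation of a positive feedback loop (invented numbers).
# Each round the model retrains only on data it chose, and its selection is
# sharper than the current mix, so the initial skew toward group "A" grows.
share_a = 0.55  # slight initial over-representation of group A

for round_ in range(6):
    # Selection favours the group the model is already most exposed to
    # (a stand-in for ranking/thresholding on its own predictions).
    weight_a, weight_b = share_a ** 2, (1 - share_a) ** 2
    share_a = weight_a / (weight_a + weight_b)   # next round's training mix
    print(f"round {round_}: share of group A in training data = {share_a:.2f}")
```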