Ultimately I think humans are also saying "I dunno, but here's what everyone else seems to think is a chair" — just doing so more accurately, because they're more correctly modeling what people don't think are chairs.
no subject
Date: 2024-06-17 05:38 pm (UTC)

I suspect that there's a fundamental difference in how humans learn from positive vs negative feedback, and in particular that there's more likely to be a symbolic step in negative feedback. When we see an exception to a rule, we look at the rest of the data set to find the basis vector on which the exception is operating, in a way that I suspect we don't when a data point expands but doesn't conflict with the current model.
AI could be doing this, but I don't think it is.
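One way to picture that "find the basis vector the exception is operating on" step: given a set of examples that fit a rule and one exception, look for the feature axis along which the exception deviates most from the rest of the data. This is only a toy sketch with made-up features and numbers, not a claim about how humans or current models actually do it:

```python
import numpy as np

# Hypothetical feature order: [has_legs, has_back, is_sittable, is_soft]
# A few examples that fit the "chair" rule:
chairs = np.array([
    [1.0, 1.0, 1.0, 0.2],
    [1.0, 1.0, 1.0, 0.1],
    [1.0, 0.9, 1.0, 0.3],
])

# An exception: a beanbag -- sittable, but legless, backless, and soft.
exception = np.array([0.0, 0.0, 1.0, 1.0])

# The "basis vector the exception operates on": the feature axis where
# the exception deviates most from the rest of the data set, measured
# in standard deviations (with a floor to avoid division by zero).
mean = chairs.mean(axis=0)
std = np.maximum(chairs.std(axis=0), 1e-6)
deviation = np.abs(exception - mean) / std

axis = int(np.argmax(deviation))
print(axis)  # index of the feature axis the exception singles out
```

Here the beanbag deviates hardest on the `has_legs` axis, which the chair examples never vary on, so that axis gets flagged as the dimension the exception is "about".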