phi ([personal profile] totient) wrote 2024-06-17 05:38 pm (UTC)

Ultimately I think humans are also saying "I dunno, but here's what everyone else seems to think is a chair". But doing so more accurately, because they're more correctly modeling what people don't think are chairs.

I suspect that there's a fundamental difference in how humans learn from positive vs negative feedback, and in particular that negative feedback is more likely to involve a symbolic step. When we see an exception to a rule, we look at the rest of the data set to find the basis vector along which the exception operates, in a way that I suspect we don't when a new data point expands but doesn't conflict with the current model.
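(A toy sketch of what I mean, with made-up chair features; "deviation in standard deviations per axis" is just one crude stand-in for finding the axis the exception operates on:)

```python
import statistics

def exception_axis(positives, exception):
    """Given positive examples (tuples of feature values) and one
    counterexample, return the index of the feature axis along which
    the counterexample deviates most from the positives, measured in
    standard deviations. A crude stand-in for 'the basis vector on
    which the exception is operating'."""
    best_axis, best_z = None, -1.0
    for i in range(len(exception)):
        column = [p[i] for p in positives]
        mu = statistics.mean(column)
        sigma = statistics.stdev(column) or 1e-9  # avoid divide-by-zero
        z = abs(exception[i] - mu) / sigma
        if z > best_z:
            best_axis, best_z = i, z
    return best_axis

# Chairs as (height_m, n_legs, has_back); a backless stool stands out
# almost entirely along the has_back axis (index 2).
chairs = [(0.9, 4, 1), (1.0, 4, 1), (0.95, 3, 1)]
stool = (0.95, 4, 0)
print(exception_axis(chairs, stool))  # -> 2
```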

AI could be doing this, but I don't think it is.
