If all logicians know that semantics comes from models, why don’t more AI people talk about how meaning should come from mental models?
I’m filling out an application to the Center for Applied Rationality. It’s an organization in the San Francisco Bay Area that teaches classes, does research, and otherwise promotes and improves what they call “the art of rationality”. Specifically, the application says it’s for “Teachers and Curriculum Developers”, both roles I think I would be happy in.
I’m excited about the possibility of getting hired there and moving to SF, but also terrified at the prospect of such a big change.
Postscriptum: Do any of you know about CFAR or know any people there? Any information would be appreciated.
I am not going to tell you that quantum mechanics is weird, bizarre, confusing, or alien. QM is counterintuitive, but that is a problem with your intuitions, not a problem with quantum mechanics. Quantum mechanics was around for billions of years before the Sun coalesced from interstellar hydrogen. Quantum mechanics was here before you were, and if you have a problem with that, you are the one who needs to change. QM sure won’t. There are no surprising facts, only models that are surprised by facts; and if a model is surprised by the facts, it is no credit to that model.