I came across this post at LanguageLog through this tweet by Ms Moximer (@Melody).
It is all good, but for me the best parts were these remarks by Martin Kay (emphasis added)
Now I come to the fourth point, which is ambiguity. This, I take it, is where statistics really come into their own. Symbolic language processing is highly nondeterministic and often delivers large numbers of alternative results because it has no means of resolving the ambiguities that characterize ordinary language. This is for the clear and obvious reason that the resolution of ambiguities is not a linguistic matter. After a responsible job has been done of linguistic analysis, what remain are questions about the world. They are questions of what would be a reasonable thing to say under the given circumstances, what it would be reasonable to believe, suspect, fear or desire in the given situation. If these questions are in the purview of any academic discipline, it is presumably artificial intelligence.

This is precisely the point.
Chomsky and Norvig are talking about different problems.
Chomsky looks at the work which has been done, for example in machine translation, and finds it inadequate because machines make some very obvious blunders. These blunders evidently occur because the machine has no "understanding" of that which it is translating. But how could a machine ever understand language? It would have to understand our way of life, to understand what we mean by "love", and "fear" and "hope" and "home" and "laughter", and indeed "understanding". It would have to have beliefs comparable to the beliefs we humans hold. How many other animals would understand what we mean by "laughter"? Would an intelligent alien species understand?
This is not to say that Chomsky is wrong. Maybe this is what Wittgenstein would say too: that people use language, and that to understand language, you need to be a person, and you need to understand how we live. I just liked the distinction between questions about language and questions about the world.