This is copied from my esteemed colleague Craig Trim and from "Natural Language Processing for Online Applications".
One approach to NLP is rooted in the linguistic analysis of semantics, syntax, pragmatics, and context. It is sometimes characterized as "symbolic" because it consists largely of rules for the manipulation of symbols (e.g., grammatical rules that determine whether or not a sentence is well formed). Given the heavy reliance of traditional artificial intelligence on symbolic computation, it has also been characterized informally as "good old-fashioned AI".
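To make the symbolic style concrete, here is a minimal sketch using NLTK's context-free grammar and chart parser. The toy grammar and the `is_well_formed` helper are my own illustration, not taken from the book:

```python
import nltk

# A toy context-free grammar: the symbolic approach encodes
# well-formedness as explicit, hand-written rules.
grammar = nltk.CFG.fromstring("""
    S   -> NP VP
    NP  -> Det N
    VP  -> V NP
    Det -> 'the'
    N   -> 'dog' | 'cat'
    V   -> 'chased'
""")

parser = nltk.ChartParser(grammar)

def is_well_formed(sentence):
    """A sentence is well formed iff the grammar yields at least one parse."""
    return any(parser.parse(sentence.split()))

print(is_well_formed("the dog chased the cat"))  # True
print(is_well_formed("the cat the dog chased"))  # False
```

Note that the rules decide well-formedness categorically: a sentence either parses or it does not, regardless of how often such word sequences occur in real text.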
A second approach, which gained wider currency in the 1990s, is rooted in the statistical analysis of language. It is sometimes characterized as "empirical" because it involves deriving statistical models of language from relatively large corpora of text, such as news feeds and web pages.
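For contrast, here is a correspondingly minimal sketch of the empirical style, using only the Python standard library. The three-sentence "corpus" and the `p_next` helper are invented for illustration; a real system would train on millions of sentences:

```python
from collections import Counter, defaultdict

# A tiny stand-in corpus; in practice this would be news feeds, web pages, etc.
corpus = [
    "the dog chased the cat",
    "the cat chased the mouse",
    "the dog ate the bone",
]

# Count bigrams: here the "grammar" is simply whatever word
# sequences the data makes probable.
bigram_counts = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for prev, word in zip(tokens, tokens[1:]):
        bigram_counts[prev][word] += 1

def p_next(prev, word):
    """Estimate P(word | prev) by relative frequency."""
    total = sum(bigram_counts[prev].values())
    return bigram_counts[prev][word] / total if total else 0.0

print(p_next("the", "dog"))     # 2/6 ~= 0.33
print(p_next("dog", "chased"))  # 1/2 = 0.5
```

There are no hand-written rules here; sentences are not "grammatical" or "ungrammatical" so much as more or less probable under the counts.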
Symbolic NLP vs. Empirical NLP
A shorter way to compare the two approaches:
In the symbolic case, the grammar is formally defined up front; in the empirical case, the grammar is whatever the data makes most statistically probable.
Having said that, solutions to real-world problems tend to combine these two approaches. Watson took a more corpus-based (empirical) approach, while Wolfram|Alpha seems to take a more symbolic one, but in practice it is often hard to delineate the two camps.