Tokenization

The process of segmenting running text into words and sentences. Electronic text is a linear sequence of symbols (characters, words, or phrases). Before any real text processing can be done, the text must be segmented into linguistic units such as words, punctuation, numbers, and alpha-numerics. This process is called tokenization. In English, words are often separated from each other by blanks (white space), but not all white space is equal: both “Los Angeles” and “rock 'n' roll” are single conceptual units despite the white space they contain.
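As a minimal sketch of the difference between naive whitespace splitting and real tokenization, here is a small regex-based tokenizer in Java. The TokenizerSketch class and its TOKEN pattern are illustrative assumptions, not the API of OpenNLP, Stanford CoreNLP, or any other toolkit mentioned in the tags below.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class TokenizerSketch {

        // A token is either a run of word characters, allowing internal
        // apostrophes so that contractions like "don't" stay whole, or a
        // single non-space character such as a punctuation mark.
        private static final Pattern TOKEN =
                Pattern.compile("\\w+(?:'\\w+)*|\\S");

        static List<String> tokenize(String text) {
            List<String> tokens = new ArrayList<>();
            Matcher m = TOKEN.matcher(text);
            while (m.find()) {
                tokens.add(m.group());
            }
            return tokens;
        }

        public static void main(String[] args) {
            String s = "Mr. Smith lives in Los Angeles.";
            // Naive whitespace splitting leaves punctuation glued to words:
            // [Mr., Smith, lives, in, Los, Angeles.]
            System.out.println(Arrays.asList(s.split("\\s+")));
            // The regex tokenizer separates punctuation into its own tokens:
            // [Mr, ., Smith, lives, in, Los, Angeles, .]
            System.out.println(tokenize(s));
        }
    }

Note that even this sketch splits the abbreviation “Mr.” into “Mr” and “.”, and it has no notion of multiword units such as “Los Angeles”; handling abbreviations and multiword expressions is exactly where full toolkits like OpenNLP and Stanford CoreNLP earn their keep.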
Tags: natural_language_processing, opennlp, tokens, lexical_analysis, segmentation, lexer, syntax, nlp, parse, rdf, parsing, open_nlp, parser, watson, tokenization, stanford, semantics, apache