Humans and computers have serious communication problems: machines understand only specific programming languages, or at best structured data, whereas human beings speak and understand “natural” language, with all the imprecision and ambiguity that this entails.
One of the fundamental aims of natural language processing (NLP) is therefore to simplify communication between humans on the one hand and machines on the other. Natural language processing, part of the wider field of artificial intelligence, provides technologies which enable computers to understand, interpret and generate unstructured human language.
The origins of natural language processing date back to the 1940s.
After many decades of slow progress, today natural language processing is a highly dynamic field, thanks in large part to more powerful hardware and innovations such as machine learning.
Though advances in the field are certainly far from over, a whole range of applications based on natural language processing, from the specialised to the everyday, are already in use.
No matter what sort of linguistic content it processes, a computer must be able to distinguish the individual components of that content and recognise their meaning before it can understand the whole. Therefore, the theoretical tools for natural language processing are drawn from the field of linguistics, particularly computational linguistics. The best way to understand how an NLP system works is to take a closer look at the individual phases of language and language processing. Depending on whether the system works with written or spoken language, one of the following aspects will be central.
An NLP system that works with spoken inputs records and analyses sound waves, encodes them into a digital signal and then interprets the data using various rules or by comparing it with an underlying language model. The theoretical foundations of speech recognition come from the linguistic disciplines of phonology and phonetics.
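To make this concrete, here is a minimal speech-to-text sketch using the open-source SpeechRecognition library for Python. The audio file name is hypothetical, and the heavy lifting of matching the signal against a language model is delegated to an external recognition engine.

```python
# Minimal speech-to-text sketch (pip install SpeechRecognition).
# "utterance.wav" is a hypothetical recording.
import speech_recognition as sr

recognizer = sr.Recognizer()

# Load the recorded sound waves and digitise them.
with sr.AudioFile("utterance.wav") as source:
    audio = recognizer.record(source)

# Interpret the digital signal against an underlying language model
# (here via Google's free web recognition API, so internet access is needed).
try:
    print(recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("The recogniser could not match the audio to its model.")
```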
Regardless of whether an input is received as an audio file or as written text, a natural language processing system must parse the input into its individual components before it can discern the meaning of an utterance.
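As a simple illustration, the sketch below uses the NLTK library to break an utterance into tokens, the individual components on which all further analysis operates; the example sentence is invented.

```python
# Breaking an input into its individual components (tokens) with NLTK.
import nltk

# Tokeniser models; newer NLTK releases may name this resource "punkt_tab".
nltk.download("punkt", quiet=True)

sentence = "Natural language processing bridges humans and machines."
print(nltk.word_tokenize(sentence))
# ['Natural', 'language', 'processing', 'bridges', 'humans', 'and', 'machines', '.']
```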
At the sentence and phrase level – the syntactical level – natural language processing determines the grammatical structure of an utterance. Below the syntactical level, morphological processes identify individual words and their constituent parts. The goal here is to understand the meaning of each individual term at the lexical level and so create the conditions for understanding the utterance as a whole.
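A sketch of these levels in practice, using the spaCy library and assuming its small English model (en_core_web_sm) is installed: the lemma reflects the morphological and lexical analysis of each word, while the dependency label records its role in the syntactic structure.

```python
# Syntactic and morphological analysis with spaCy
# (pip install spacy; python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The computers understood the ambiguous sentences.")

for token in doc:
    # word form, base form (lemma), part of speech, syntactic role
    print(f"{token.text:12} {token.lemma_:12} {token.pos_:6} {token.dep_}")
```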
The combination of information about the structure of a sentence and the meaning of its individual elements provides clues about the sentence’s meaning. Finally, placing the individual elements into context, and thereby ideally interpreting a coherent utterance correctly as a whole, is the task of semantics.
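One concrete form of this contextual interpretation is word-sense disambiguation, sketched below with NLTK’s implementation of the classic Lesk algorithm; Lesk is a simple dictionary-overlap heuristic, so the sense it selects can be surprising.

```python
# Word-sense disambiguation: the surrounding context decides which
# dictionary sense of the ambiguous word "bank" is meant.
import nltk
from nltk.wsd import lesk

nltk.download("wordnet", quiet=True)

context = "I deposited my salary at the bank yesterday.".split()
sense = lesk(context, "bank")
if sense is not None:
    print(sense.name(), "-", sense.definition())
```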
A natural language processing system may use various procedures falling within the domain of semantics. These include entity extraction (also called named entity recognition), sentiment analysis and disambiguation.
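As a rough illustration of the first two procedures (disambiguation was sketched above), spaCy’s pre-trained pipeline performs entity extraction out of the box, and NLTK ships the VADER sentiment analyser; the example sentences are invented.

```python
# Entity extraction and sentiment analysis sketches.
import spacy
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

# Named entity recognition with spaCy's pre-trained English model.
nlp = spacy.load("en_core_web_sm")
doc = nlp("Angela Merkel visited Paris in July.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. 'Angela Merkel' PERSON, 'Paris' GPE

# Sentiment analysis with NLTK's VADER lexicon.
nltk.download("vader_lexicon", quiet=True)
print(SentimentIntensityAnalyzer().polarity_scores("The service was excellent!"))
# compound > 0 indicates positive sentiment
```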
Because natural language processing is so multi-faceted, it has become common practice to categorise narrowly focussed applications into one of two recognised fields. Natural language understanding (NLU) and natural language generation (NLG) are both regarded as subdisciplines of natural language processing.
Natural language understanding focusses primarily on enabling machines to understand written texts or the spoken word. An application that analyses a news item on a website and uses entity extraction to identify elements such as people, places and events “only” uses natural language understanding. But if it also responds to the content it has identified, as a chatbot does for instance, it must generate language as well and is therefore classed as an NLP application.
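To make the distinction concrete, here is a toy sketch that goes one step beyond pure understanding: it extracts a place entity from the input and then produces a response. The rule and the messages are invented, and a real chatbot would of course be far more elaborate.

```python
# A toy exchange: understanding (entity extraction) plus a response.
import spacy

nlp = spacy.load("en_core_web_sm")

def reply(message: str) -> str:
    doc = nlp(message)
    # GPE is spaCy's label for countries, cities and states.
    places = [ent.text for ent in doc.ents if ent.label_ == "GPE"]
    if places:
        return f"Tell me more about your trip to {places[0]}!"
    return "I see. Go on."

print(reply("I just got back from Lisbon."))
```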
Natural language generation, by contrast, refers to the production of text using an algorithm. To do this, an application needs structured data, as can be found in stock market information, sports results and weather data. Automatic text generation is then used to create any amount of content in real time. Because natural language generation turns data into language, it too is considered to be a sub-field of natural language processing.
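A toy sketch of the idea: structured weather records are turned into sentences by filling a fixed template. The data and wording are invented, and production NLG systems use far richer grammars or learned models.

```python
# Template-based natural language generation from structured data.
weather = [
    {"city": "Berlin", "temp_c": 21, "condition": "sunny"},
    {"city": "London", "temp_c": 14, "condition": "rainy"},
]

for record in weather:
    # A fixed sentence template filled with the structured values.
    print(f"In {record['city']} it is currently {record['condition']} "
          f"at {record['temp_c']} degrees Celsius.")
```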
Natural language processing can be colloquially defined as “computers doing things with language”. Information scientist Elizabeth D. Liddy provides a scientific definition:
“Natural language processing is a theoretically motivated range of computational techniques for analyzing and representing naturally occurring texts at one or more levels of linguistic analysis for the purpose of achieving human-like language processing for a range of tasks or applications.”
Source: https://www.dataversity.net/fundamentals-natural-language-processing-natural-language-generation/