A Tour of Python NLP Libraries

NLP, or Natural Language Processing, is a field within Artificial Intelligence that focuses on the interaction between computers and human language. Its goal is to process and analyze text data so that computers can understand it meaningfully.

As NLP research has progressed, the way we process text data with computers has evolved as well. These days, Python makes it much easier to explore and process text data.

With Python becoming the go-to language for working with text data, many libraries have been developed specifically for the NLP field. In this article, we will explore several incredibly useful NLP libraries.

So, let’s get into it.

NLTK

NLTK, or the Natural Language Toolkit, is an NLP Python library that provides many text-processing APIs and industrial-grade wrappers. It’s one of the largest NLP libraries in Python, used by researchers, data scientists, engineers, and many others, and it has become a standard choice for NLP tasks.

Let’s explore what NLTK can do. First, we need to install the library with the following command.

pip install -U nltk

 

With the library installed, we can start with tokenization, using the following code:

import nltk
from nltk.tokenize import word_tokenize

# Download the necessary resources
nltk.download('punkt')

text = "The fruit in the table is a banana"
tokens = word_tokenize(text)

print(tokens)

 

Output>> 
['The', 'fruit', 'in', 'the', 'table', 'is', 'a', 'banana']

 

Tokenization splits a sentence into individual tokens, typically words and punctuation marks.
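NLTK can also split text at the sentence level. Here is a minimal sketch using sent_tokenize, which relies on the same punkt resource downloaded above:

from nltk.tokenize import sent_tokenize

# Split a short paragraph into sentences instead of words
paragraph = "The fruit in the table is a banana. I will eat it later."
print(sent_tokenize(paragraph))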

With NLTK, we can also perform Part-of-Speech (POS) tagging on the text sample.

from nltk.tag import pos_tag

nltk.download('averaged_perceptron_tagger')

text = "The fruit in the table is a banana"
tokens = word_tokenize(text)
pos_tags = pos_tag(tokens)

print(pos_tags)

 

Output>>
[('The', 'DT'), ('fruit', 'NN'), ('in', 'IN'), ('the', 'DT'), ('table', 'NN'), ('is', 'VBZ'), ('a', 'DT'), ('banana', 'NN')]

 

The POS tagger in NLTK returns each token together with its predicted POS tag. For example, the word ‘fruit’ is a noun (NN), and the word ‘a’ is a determiner (DT).
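If you are unsure what a tag such as NN or DT stands for, NLTK can print the tag definitions for you. A small optional sketch (it assumes the tagsets resource is available for download):

import nltk

nltk.download('tagsets')

# Look up the meaning of a Penn Treebank tag
nltk.help.upenn_tagset('NN')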

It’s also possible to perform stemming and lemmatization with NLTK. Stemming reduces a word to its base form by cutting off prefixes and suffixes, while lemmatization also transforms a word to its base form but takes the word’s POS and morphology into account.

from nltk.stem import PorterStemmer, WordNetLemmatizer
nltk.download('wordnet')
nltk.download('punkt')

text = "The striped bats are hanging on their feet for best"
tokens = word_tokenize(text)

# Stemming
stemmer = PorterStemmer()
stems = [stemmer.stem(token) for token in tokens]
print("Stems:", stems)

# Lemmatization
lemmatizer = WordNetLemmatizer()
lemmas = [lemmatizer.lemmatize(token) for token in tokens]
print("Lemmas:", lemmas)

 

Output>> 
Stems: ['the', 'stripe', 'bat', 'are', 'hang', 'on', 'their', 'feet', 'for', 'best']
Lemmas: ['The', 'striped', 'bat', 'are', 'hanging', 'on', 'their', 'foot', 'for', 'best']

 

You can see that stemming and lemmatization produce slightly different results for the same words.
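One reason for the difference is that WordNetLemmatizer treats every token as a noun by default, which is why ‘hanging’ and ‘are’ were left unchanged above. As a quick sketch, passing the part of speech explicitly changes the result:

# Lemmatize with an explicit POS tag ('v' for verb)
print(lemmatizer.lemmatize("hanging", pos="v"))  # hang
print(lemmatizer.lemmatize("are", pos="v"))      # be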

That’s the simple usage of NLTK. You can still do many things with it, but the APIs above are the most commonly used.

SpaCy

SpaCy is an NLP Python library designed specifically for production use. It’s an advanced library known for its performance and its ability to handle large amounts of text data, which makes it a preferred choice for industrial use in many NLP cases.

To install SpaCy, check its usage page; there are many installation combinations to choose from depending on your requirements.
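For example, a typical CPU-only setup for the examples below looks like this (adjust the commands to whatever combination the usage page recommends for your environment):

pip install -U spacy
python -m spacy download en_core_web_sm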

Let’s try using SpaCy for some NLP tasks. First, we will perform Named Entity Recognition (NER) with the library. NER is the process of identifying and classifying named entities in text into predefined categories, such as person, organization, location, and more.

import spacy

nlp = spacy.load("en_core_web_sm")

text = "Brad is working in the U.K. Startup called AIForLife for 7 Months."
doc = nlp(text)
#Perform the NER
for ent in doc.ents:
    print(ent.text, ent.label_)

 

Output>>
Brad PERSON
the U.K. Startup ORG
7 Months DATE

 

As you can see, the SpaCy pre-trained model recognizes which spans in the document are named entities and assigns each one a label.
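SpaCy can also highlight these entities inline with displaCy. A small optional sketch for a Jupyter Notebook:

from spacy import displacy

# Render the entities found in the document above
displacy.render(doc, style="ent", jupyter=True)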

Next, we can use SpaCy to perform dependency parsing and visualize the result. Dependency parsing is the process of working out how each word relates to the others, forming a tree structure.

import spacy
from spacy import displacy

nlp = spacy.load("en_core_web_sm")

text = "Brad is working in the U.K. Startup called AIForLife for 7 Months."
doc = nlp(text)
for token in doc:
    print(f"{token.text}: {token.dep_}, {token.head.text}")

displacy.render(doc, jupyter=True)

 

Output>> 
Brad: nsubj, working
is: aux, working
working: ROOT, working
in: prep, working
the: det, Startup
U.K.: compound, Startup
Startup: pobj, in
called: advcl, working
AIForLife: oprd, called
for: prep, called
7: nummod, Months
Months: pobj, for
.: punct, working

 

The output lists each word with its dependency label and the head word it relates to. The code above also renders the dependency tree visualization in your Jupyter Notebook.
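Because the parse forms a tree, you can also navigate it programmatically. A minimal sketch that prints each token’s direct syntactic children:

# Walk the dependency tree: list each token's direct children
for token in doc:
    children = [child.text for child in token.children]
    print(f"{token.text} -> {children}")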

Lastly, let’s try performing text similarity with SpaCy. Text similarity measures how similar or related two pieces of text are. There are many techniques and measures, but we will try the simplest one.

import spacy

nlp = spacy.load("en_core_web_sm")

doc1 = nlp("I like pizza")
doc2 = nlp("I love hamburger")

# Calculate similarity
similarity = doc1.similarity(doc2)
print("Similarity:", similarity)

 

Output>>
Similarity: 0.6159097609586724

 

The similarity method compares the two texts and returns a score, usually between 0 and 1. The closer the score is to 1, the more similar the texts are.
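Note that en_core_web_sm ships without static word vectors, so its similarity scores are only a rough approximation. A sketch of the same comparison with a medium-sized model, assuming you have downloaded en_core_web_md first:

import spacy

# Assumes: python -m spacy download en_core_web_md
nlp_md = spacy.load("en_core_web_md")

doc1 = nlp_md("I like pizza")
doc2 = nlp_md("I love hamburger")

print("Similarity:", doc1.similarity(doc2))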

There are still many things you can do with SpaCy. Explore the documentation to find something useful for your work.

TextBlob

TextBlob is an NLP Python library for processing textual data that is built on top of NLTK. It simplifies much of NLTK’s API and streamlines common text processing tasks.

You can install TextBlob using the following commands:

pip install -U textblob
python -m textblob.download_corpora

 

Now, let’s use TextBlob for some NLP tasks. The first one we will try is sentiment analysis, which we can do with the code below.

from textblob import TextBlob

text = "I am in the top of the world"
blob = TextBlob(text)
sentiment = blob.sentiment

print(sentiment)

 

Output>>
Sentiment(polarity=0.5, subjectivity=0.5)

 

The output is a polarity and a subjectivity score. Polarity is the sentiment of the text, with a score ranging from -1 (negative) to 1 (positive), while the subjectivity score ranges from 0 (objective) to 1 (subjective).
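To get a feel for the scale, you can push a few more sentences through the same API. A quick sketch (the exact scores may vary between TextBlob versions):

from textblob import TextBlob

for sample in ["This movie was terrible.", "The report is due on Monday."]:
    print(sample, TextBlob(sample).sentiment)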

We can also use TextBlob for spelling correction. You can do that with the following code.

from textblob import TextBlob

text = "I havv goood speling."
blob = TextBlob(text)

# Spelling Correction
corrected_blob = blob.correct()
print("Corrected Text:", corrected_blob)

 

Output>>
Corrected Text: I have good spelling.

 

Explore the TextBlob package further to find APIs that fit your text tasks.
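For instance, word and sentence tokenization and noun phrase extraction are exposed as simple properties. A minimal sketch:

from textblob import TextBlob

blob = TextBlob("TextBlob makes text processing simple. It is built on top of NLTK.")

print(blob.words)         # word tokens
print(blob.sentences)     # sentence objects
print(blob.noun_phrases)  # extracted noun phrases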

Gensim

Gensim is an open-source Python NLP library specializing in topic modeling and document similarity analysis, especially for large and streaming datasets. It is geared toward industrial, real-time applications.

Let’s try the library. First, install it using the following command:

pip install gensim

 

After the installation is finished, we can try out Gensim’s capabilities. Let’s do topic modeling with LDA using Gensim.

import gensim
from gensim import corpora
from gensim.models import LdaModel

# Sample documents
documents = [
    "Tennis is my favorite sport to play.",
    "Football is a popular competition in certain country.",
    "There are many athletes currently training for the olympic."
]

# Preprocess documents
texts = [[word for word in document.lower().split()] for document in documents]

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]


#The LDA model
lda_model = LdaModel(corpus, num_topics=2, id2word=dictionary, passes=15)

topics = lda_model.print_topics()
for topic in topics:
    print(topic)

 

Output>>
(0, '0.073*"there" + 0.073*"currently" + 0.073*"olympic." + 0.073*"the" + 0.073*"athletes" + 0.073*"for" + 0.073*"training" + 0.073*"many" + 0.073*"are" + 0.025*"is"')
(1, '0.094*"is" + 0.057*"football" + 0.057*"certain" + 0.057*"popular" + 0.057*"a" + 0.057*"competition" + 0.057*"country." + 0.057*"in" + 0.057*"favorite" + 0.057*"tennis"')

 

Each topic in the output is a weighted combination of words from the sample documents. You can evaluate whether the resulting topics make sense.
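You can also ask the trained model which topics a particular document is assigned to. A short sketch using the bag-of-words of the first sample document:

# Topic distribution for the first sample document
doc_topics = lda_model.get_document_topics(corpus[0])
print(doc_topics)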

Gensim also provides a way to embed content. For example, we can use Word2Vec to create word embeddings.

import gensim
from gensim.models import Word2Vec

# Sample sentences
sentences = [
    ['machine', 'learning'],
    ['deep', 'learning', 'models'],
    ['natural', 'language', 'processing']
]

# Train Word2Vec model
model = Word2Vec(sentences, vector_size=20, window=5, min_count=1, workers=4)

vector = model.wv['machine']
print(vector)

 


Output>>
[ 0.01174188 -0.02259516  0.04194366 -0.04929082  0.0338232   0.01457208
 -0.02466416  0.02199094 -0.00869787  0.03355692  0.04982425 -0.02181222
 -0.00299669 -0.02847819  0.01925411  0.01393313  0.03445538  0.03050548
  0.04769249  0.04636709]
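Once the model is trained, you can also query it for the words most similar to a given token. A minimal sketch (with such a tiny corpus, the neighbors are essentially random):

# Find the nearest neighbors of a word in the embedding space
print(model.wv.most_similar('learning', topn=3))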

 

There are still many things you can do with Gensim. Check the documentation and evaluate what fits your needs.

Conclusion

 

In this article, we explored several Python NLP libraries that are essential for many text tasks, from text tokenization to word embeddings. All of these libraries can be useful in your work. The libraries we discussed are:

  1. NLTK
  2. SpaCy
  3. TextBlob
  4. Gensim

I hope it helps!

Collect and visualize MySQL server logs with the updated MySQL integration for Grafana Cloud

Today, we are excited to announce that the MySQL integration has received an important update, which includes a new pre-built MySQL logs dashboard and the Grafana Agent configuration to view and collect MySQL server logs.

The integration is already available in Grafana Cloud, our platform that brings together all your metrics, logs, and traces with Grafana for full-stack observability.

Why you need logs

Of the three pillars of observability, metrics are the most widely used: they are easier to gather and store than logs or traces, and they are great for detecting problems and understanding system performance at a glance. Still, metrics are often not enough to understand what caused an issue.

On the other hand, logs can tell you many more details about the root cause, once you narrow down the time and location of the problem using metrics.

Getting started with the MySQL integration

Grafana Agent is the universal collector and is all you need to send different telemetry data to the Grafana Cloud stack, including metrics, logs, and traces.

If you already use the embedded Agent integration to collect Prometheus metrics, your Agent configuration could look like this:


integrations:
  prometheus_remote_write:
    - url: https://<cloud-endpoint>/api/prom/push
  mysqld_exporter:
    enabled: true
    instance: mysql-01
    data_source_name: "root:put-password-here@(localhost:3306)/"

Adding MySQL logs is just a matter of adding a few extra lines to the Grafana Agent config.yml:


metrics:
  wal_directory: /tmp/wal
logs:
  configs:
  - name: agent
    clients:
    - url: https://<cloud-logs-endpoint>/loki/api/v1/push
    positions:
      filename: /tmp/positions.yaml
    target_config:
      sync_period: 10s
    scrape_configs:
    - job_name: integrations/mysql 
      static_configs:
        - labels:
            instance: mysql-01
            job: integrations/mysql
            __path__: /var/log/mysql/*.log
      pipeline_stages:
        - regex:
            expression: '(?P<timestamp>.+) (?P<thread>[\d]+) \[(?P<label>.+?)\]( \[(?P<err_code>.+?)\] \[(?P<subsystem>.+?)\])? (?P<msg>.+)'
        - labels:
            label:
            err_code:
            subsystem:
        - drop:
            expression: "^ *$"
            drop_counter_reason: "drop empty lines"

integrations:
  prometheus_remote_write:
    - url: https://<cloud-endpoint>/api/prom/push
  mysqld_exporter:
    enabled: true
    instance: mysql-01
    data_source_name: "root:put-password-here@(localhost:3306)/"
    relabel_configs:
      - source_labels: [__address__]
        target_label: job
        replacement: 'integrations/mysql'

The additional configuration above locates and parses MySQL server logs by using an embedded Promtail Agent.

The most crucial part of the configuration is to make sure that the job and instance labels match between logs and metrics. This ensures that we can quickly dive from graphs to the corresponding logs for more details on what actually happened.

You can find more information on configuring the MySQL integration in our MySQL integration documentation.

To get a better understanding of how to correlate metrics, logs, and traces in Grafana, I also recommend checking out the detailed talk by Andrej Ocenas on the topic.

Start monitoring with the MySQL logs dashboard

New logs dashboard in the MySQL integration for Grafana Cloud

Along with the pre-built dashboards, metrics, and alerts it already packages, the MySQL integration for Grafana Cloud now bundles a new MySQL logs dashboard that can be quickly accessed from the MySQL overview dashboard when you need a deeper understanding of what’s going on with your MySQL server.

The important thing to note is that if you jump from one dashboard to another, the context of the MySQL instance and time interval will remain the same.

Try out the MySQL integration

The enhanced MySQL integration with log capabilities is available now for Grafana Cloud users. If you’re not already using Grafana Cloud, we have a generous free forever tier and plans for every use case. Sign up for free now!

It’s the easiest way to get started observing metrics, logs, traces, and dashboards.

For more information on monitoring and alerting on Grafana Cloud and MySQL, check out our MySQL integration documentation, the MySQL solutions page, or join the #integrations channel in the Grafana Labs Community Slack.