Cosine Similarity in Question-Answering Apps


Cosine similarity is a measure that calculates the cosine of the angle between two n-dimensional vectors. Mathematically, it is the dot product of two non-zero vectors divided by the product of their magnitudes. Cosine similarity is widely used for computing the similarity between two things. It can be used to build a movie recommendation application that suggests movies to a user based on preferences and previous viewing history. It can also be used by a company to build a chatbot that responds to the most frequently asked questions about the company. In this article, we will discuss the dot product (the backbone of cosine similarity) and how to use cosine similarity to answer questions.

The dot product

The dot product of two vectors a = (a1, a2, ..., an) and b = (b1, b2, ..., bn) is the sum of the products of their corresponding components:

a · b = a1b1 + a2b2 + ... + anbn

One thing you will notice is that the dot product of two vectors is a real number, not a vector. For example, (1, 2) · (3, 4) = 1×3 + 2×4 = 11, while (1, 0) · (0, 1) = 1×0 + 0×1 = 0. What does it mean to have a zero dot product? To answer this question, it is reasonable to define the dot product geometrically:

a · b = |a||b| cos θ

where θ is the angle between the two vectors.

The question above is now answered: the dot product is 0 when cos θ = 0, that is, when θ = 90° and the first vector is orthogonal to the second.



The cosine similarity, as explained already, is the dot product of the two non-zero vectors divided by the product of their magnitudes. We can find the cosine similarity equation by solving the geometric dot product equation for cos θ:

cos θ = (a · b) / (|a||b|)

If two documents are entirely similar, they will have a cosine similarity of 1. On the other hand, when the cosine similarity is -1, the documents are perfectly dissimilar. With that said, let us now dive into practice.


NB: I’m using Python 3.7 and scikit-learn 0.19.2.

We need to define our training questions and answers documents where each question has its corresponding answer in the answer document. Note that the index of a question and its answer is the same. For instance, if the question is at index 1 in the questions document, its answer is at index 1 in the answers document.

    questions = [
        'How many regions are in Ghana?',
        'What is the favorite food for people in the Ashanti region of Ghana?',
        'What is the name of the king of the Asantes?',
        'What cash crop does Ghana export?',
        'What is the primary occupation in Ghana?',
        'Which country is the leading producer of cocoa in Africa?',
        'Who is the minister of Food and Agriculture in Ghana?',
        'What is crop rotation?',
        'What is a cash crop?',
        'What is arable farming?',
        'What is the dominant native language in Ghana?',
        'What is the current population of Ghana?',
        'What is the capital city of Ghana?'
    ]

    answers = [
        'Ten',
        'Fufu',
        'Otumfuo Osei Tutu I',
        'Cocoa',
        'Farming',
        "Cote D'Ivoire",
        'Dr. Owusu Afriyie Akoto',
        'The practice of growing a series of different types of crops in the same area in sequenced seasons',
        'An agricultural crop grown for sale to return profit.',
        'A kind of farming in which the land is ploughed and used to grow crops.',
        'Twi',
        '28.8 million',
        'Accra'
    ]
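The index pairing between the two lists can be checked directly. A small sketch (using a two-item excerpt of the lists above) shows the lookup pattern:

```python
questions = ['How many regions are in Ghana?',
             'What is the capital city of Ghana?']
answers = ['Ten', 'Accra']

# the answer to questions[i] lives at answers[i]
i = questions.index('What is the capital city of Ghana?')
print(answers[i])  # -> Accra
```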

The next thing is to use scikit-learn's TfidfVectorizer to transform all the questions into vectors. So, let's import and instantiate the vectorizer.

    from sklearn.feature_extraction.text import TfidfVectorizer

    vectorizer = TfidfVectorizer()
    X =       # learn the vocabulary from the training questions
    array = X.transform(questions).toarray()
    print(array[0])  # a row of TF-IDF weights; nonzero entries correspond to the words in the first question

Since we have our documents modeled as vectors (with TF-IDF weights), we can now write a function to compute the cosine similarity between any two given vectors.

    import numpy as np

    def cosine_similarity(a, b):
        """Takes 2 vectors a, b and returns the cosine similarity
        according to the definition of the dot product."""
        dot_product =, b)
        norm_a = np.linalg.norm(a)
        norm_b = np.linalg.norm(b)
        return dot_product / (norm_a * norm_b)

When a user asks a question, we will transform it into a vector of the same length as the training question vectors.

    test_question = [
        'Briefly explain crop rotation'
    ]
    test_vector = X.transform(test_question).toarray()

Now, we will find the cosine similarity between the test question (the test vector) and each training question (the training vector). We’ll then print the answer to the training question that is most similar to the test question as the answer to the question asked.

    response = ''
    most_sim = 0
    for i in range(len(questions)):
        sim = cosine_similarity(array[i], test_vector[0])
        if most_sim < sim:
            most_sim = sim
            answer_index = i                  # index of the current most similar question
            response = answers[answer_index]  # answer to the most similar question
    print(response)


The outcome:

The practice of growing a series of different types of crops in the same area in sequenced seasons


Yussif Mustapha Tidoo is from Tamale, Ghana. He is creative, hardworking, and very active in sports. He loves learning new things and blending them with old ones. Mustapha is currently at Ashesi University pursuing a B.Sc. in Computer Science.

