Vector embeddings part 1: Word2vec with Gensim

Since the advent of neural networks, vector embeddings for text processing have gained traction in both scientific and applied text classification problems, for example text sentiment analysis. Using (pre-trained) embeddings has become a de facto standard for attaining a high ranking in scientific sentiment analysis contests such as SemEval. However, vector embeddings are finding their way into many other fascinating applications as well.

In the current post, I want to provide an intuitive, easy-to-understand introduction to the subject. I will touch upon vector embeddings in general, and their application to text analysis in particular. Furthermore, I'll provide some fairly simple Python code using Gensim for generating vector embeddings from text. In future posts, I will apply vector embeddings to transforming categorical variables into continuous ones, and to generating vector definitions in N-dimensional space for entities.

I’ll cycle through the following points:

  • What are vector embeddings?
  • How do you use vector embeddings?
  • Why do you need vector embeddings?
  • How do you generate vector embeddings?
  • How does the neural network optimize the vector matrix?
  • A practical example (with code): word vectorization for tweets.

What are vector embeddings?

In a nutshell, a vector embedding for a certain item (a word, a person, or any other object you can think of) is a mathematical definition of that specific item. The definition consists of a list of float numbers, typically between -1 and 1, which is called a vector.

A vector is a specific location in N-dimensional space, relative to the zero point of that space. So a vector embedding for an item means that the item is represented by a vector, embedded in a space with as many dimensions as there are numbers in the vector. Each position in the vector indicates the location of the item on a certain axis: the first number represents the X-axis, the second the Y-axis, and so forth.

To keep it simple, let's say the vector embedding for the word ‘bird’ equals [0.6, -0.2, 0.6]. ‘Bird’ is then represented by a specific spot in 3-dimensional space. Simple, right?

Like the following image:

However, it gets more complicated when you start adding dimensions. As a rule of thumb, 50 dimensions is a typical vector length for representing things like grocery items, people, or countries. For language-related tasks, somewhere in the range of 200 to 300 dimensions is more common. Our human brains are simply not equipped to imagine anything beyond 3 or 4 dimensions, so we'll have to trust the math.

Lastly, a constraint that is often put on vector embeddings is that each vector is normalized to unit length (an L2 norm of 1), which also keeps the individual numbers within the -1 to 1 range.

How do you use vector embeddings?

Of course, a list of float numbers on its own does not mean anything. It only becomes interesting once you realize that you can encode multiple items in the same high-dimensional space. Items that are close to each other can be thought of as similar or compatible, and items that are far away from each other as dissimilar. Items that are neither far away nor close by (think of them as being at a 90-degree angle, just in high-dimensional space) do not really have any discernible relationship to each other; the mathematical term for this is ‘orthogonal’.

The measure that is used for checking whether two vectors are similar / close is called cosine similarity, and it is closely related to the dot product between the two vectors. It goes like this: given two vectors [A, B, C] and [D, E, F], the numbers in each position are multiplied with each other and the results are summed, so the dot product equals A times D plus B times E plus C times F. The outcome is a single number, which is called a scalar in linear algebra. For vectors of unit length the dot product equals the cosine similarity; otherwise you divide the dot product by the product of the two vector lengths.

The trick here is that whenever two vectors have similar values at the same position (both positive or both negative), the product at that position is positive and adds to the total, whereas opposing values produce a negative contribution. Therefore, the dot product between two vectors increases as they resemble each other more.

To test this out, let’s add two more words:

‘bird’        = [0.6, -0.2, 0.6]
‘wing’        = [0.7, -0.3, 0.6]
‘lawnmower’   = [-0.2, 0.8, 0.4]

What we have here is actually our first vector matrix, which is just a fancy name for a set of vectors stacked together.

Intuitively, ‘bird’ and ‘wing’ should be quite similar, and therefore have a high dot product. Let's try it out:

Dot([0.7, -0.3, 0.6], [0.6, -0.2, 0.6]) = 0.42 + 0.06 + 0.36 = 0.84

On the other hand, ‘bird’ and ‘lawnmower’ shouldn't really be related in any way, so the dot product should be close to 0:

Dot([0.6, -0.2, 0.6], [-0.2, 0.8, 0.4]) = -0.12 + -0.16 + 0.24 = -0.04
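
As a quick sanity check, here is a minimal NumPy sketch that reproduces these dot products for the toy vectors above, plus a small cosine similarity helper; the cosine function and variable names are just for illustration:

import numpy as np

##the toy vectors from above (not trained embeddings)
bird = np.array([0.6, -0.2, 0.6])
wing = np.array([0.7, -0.3, 0.6])
lawnmower = np.array([-0.2, 0.8, 0.4])

print(np.dot(bird, wing))        ##0.84 -> similar
print(np.dot(bird, lawnmower))   ##-0.04 -> roughly orthogonal

##cosine similarity additionally normalizes by the vector lengths
def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(bird, wing))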

When you start adding lots of words to your vector matrix, it becomes a word cloud in N-dimensional space in which clusters of words can be recognized. Furthermore, vector embeddings do not only represent pairwise relationships between items, but also capture more complicated relationships. In our language use, we have concepts of how words relate to each other: we understand that a leg is to a human what a paw is to a cat. If our model is trained correctly on sufficient data, these types of relationships should be captured as well.

Why do you need vector embeddings?

In any language-related machine learning task, your results will be best when you use an algorithm that actually speaks the language you are analysing. In other words, you want a technique that somehow captures the meaning of the words and sentences you're processing.

Now, all ML algorithms require numerical input. So a requirement for an algorithm to capture language meaning is that the input should be numerical: every word should be represented by either a number or a series of numbers.

Generally, to transform categorical entities such as words into numerical representations, one would apply one-hot encoding: for each unique entity (the number of unique entities is called the cardinality of the variable), a separate column is generated, filled with 1s and 0s, like this:

Country   Country_Greece   Country_Chile   Country_Italy
Greece    1                0               0
Chile     0                1               0
Italy     0                0               1
Chile     0                1               0
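
For reference, here is a minimal sketch of how this transformation might look in pandas; pd.get_dummies is one common way to one-hot encode, and the cast to int is only there to force 0/1 output:

import pandas as pd

countries = pd.DataFrame({'Country': ['Greece', 'Chile', 'Italy', 'Chile']})
##one indicator column per unique country value
one_hot = pd.get_dummies(countries, columns=['Country']).astype(int)
print(one_hot)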

Essentially, one-hot encoding already transforms entities into vectors; each entity is represented by a list of numbers. However, this transformation has a number of drawbacks:

  • If you apply one-hot encoding to a text corpus, you'll need to create a massive number of extra columns, which is both inefficient and hard to analyse and interpret. Text datasets often contain thousands of unique words. In other words: the created vectors are too large to be useful.
  • Due to the Boolean nature of this transformation, each entity is equally (dis)similar to every other entity. There is no way to put more similar items closer to each other. Yet we want to capture the meanings of words, so we need some way of representing whether two words are closely related or not.

Embedding entities as vectors solves both of these problems. In summary, the method lets you encode every word in your dataset as a short list of float numbers between -1 and 1, and the more similar two words are, the more their respective lists resemble each other.

How do you generate vector embeddings?

As I described before, similarity is essential to understanding vector embeddings. When actually generating embeddings from scratch, we need a measure of similarity. In text analysis, we usually derive it from word co-occurrence in a text corpus: words that often appear close together within sentences are assumed to be quite similar, while dissimilar words rarely appear close together.

This is where neural networks come in: the network makes two vectors more similar to each other each time the underlying items occur together, and more dissimilar when they don't occur together.

Firstly, we need to transform our dataset into a usable format. Since the neural network requires some number to predict, we need a binary output variable: every pair of words that occurs together is labelled with a 1, and every pair that doesn't occur together (in practice, such negative pairs are sampled at random) is labelled with a 0, like this:

First word     Second word      Binary output
‘bird’         ‘wing’           1
‘bird’         ‘lawnmower’      0
‘wing’         ‘lawnmower’      0

These word combinations, together with the output variable, are fed to the neural network. Essentially, the network predicts whether a certain combination of words should be labelled with a 1 or a 0, and whenever it is wrong compared to the real value, it adjusts the weights in its internal vector matrix so that it makes a better prediction the next time that specific combination comes around.
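
To make the data preparation concrete, here is a rough, simplified sketch of how such labelled pairs could be generated from tokenized sentences. The window size, the random negative sampling, and the make_training_pairs helper are all illustrative simplifications; this is not how Gensim does it internally:

import random

def make_training_pairs(sentences, window=2, negatives_per_pair=1):
    """Hypothetical sketch: label co-occurring words 1, randomly drawn pairs 0."""
    vocab = sorted({w for s in sentences for w in s})
    pairs = []
    for sentence in sentences:
        for i, word in enumerate(sentence):
            ##positive examples: words within the context window
            for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
                if i != j:
                    pairs.append((word, sentence[j], 1))
                    ##negative examples: random words (may occasionally hit a true neighbour)
                    for _ in range(negatives_per_pair):
                        pairs.append((word, random.choice(vocab), 0))
    return pairs

print(make_training_pairs([['bird', 'spreads', 'its', 'wing']])[:5])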

How does the neural network optimize the vector matrix?

I'll try to keep it simple. The neural network is trained by feeding it lots of word pairs, each labelled with a 0 or a 1. Once a word pair enters the model, the two corresponding word vectors are retrieved from the vector matrix. At the start of training, the numbers in the vectors are randomly initialized, close to 0.

Subsequently, the dot product of those two vectors is calculated, and a sigmoid transformation squashes it into a value between 0 and 1. This value is the model's predicted probability that the pair is a true combination: close to 0 for a predicted non-combination, close to 1 for a predicted true combination.

The learning of the model, as with all neural networks, happens through backpropagation: after a word pair is fed to the network, the model's prediction is compared with the actual label. The error is propagated back to the embedding layer, where the embeddings are adjusted accordingly. If two words frequently occur together as a true combination, their vector embeddings are adjusted to resemble each other more; the opposite happens for false combinations, whose embeddings are adjusted to resemble each other less.

After training, we end up with an embedding matrix containing optimized embeddings for all words in the vocabulary. We discard the sigmoid output layer, so we can examine the dot products between words directly.
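
As an illustration of that training step (and explicitly not Gensim's actual implementation), a bare-bones version of the update for a single labelled pair could look like this, assuming small random starting vectors and a fixed learning rate:

import numpy as np

rng = np.random.default_rng(0)
dim, lr = 50, 0.05
##the embedding matrix: one small random vector per word
embeddings = {w: rng.normal(0, 0.1, dim) for w in ['bird', 'wing', 'lawnmower']}

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def train_pair(w1, w2, label):
    v1, v2 = embeddings[w1], embeddings[w2]
    pred = sigmoid(np.dot(v1, v2))          ##predicted probability of a true combination
    error = pred - label                    ##positive if we over-predicted
    embeddings[w1] = v1 - lr * error * v2   ##nudge vectors together (label 1) or apart (label 0)
    embeddings[w2] = v2 - lr * error * v1

for _ in range(100):
    train_pair('bird', 'wing', 1)
    train_pair('bird', 'lawnmower', 0)

print(np.dot(embeddings['bird'], embeddings['wing']))       ##grows towards a positive value
print(np.dot(embeddings['bird'], embeddings['lawnmower']))  ##pushed towards a negative value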

A practical example (with code): word vectorization for tweets.

So, neural network vector embeddings can be challenging to understand. However, there are some libraries available that do all the heavy lifting in terms of programming; I’ll be using Gensim.

Some library imports:

import pandas as pd
import numpy as np
import csv
from gensim.models import word2vec
import nltk
import re
import collections

Firstly, we need some textual data. I chose a set of 1.6 million tweets that was provided with a sentiment analysis competition on Kaggle (the Sentiment140 dataset). The data come in .csv format, so let's load them. I'll keep only a subset, for quick training.

##open the Kaggle file; we only need the tweet text, which is at index 5 of every line
##(the Sentiment140 csv is not utf-8 encoded; reading it as latin-1 avoids decoding errors)
with open('sentiment140/kaggle_set.csv', 'r', encoding='latin-1') as f:
    r = csv.reader(f)
    tweets = [line[5] for line in r]
tweets = tweets[:400000]

Here’s what they look like.

print(tweets[:10])

#["@switchfoot http://twitpic.com/2y1zl - Awww, that's a bummer.  You shoulda got David Carr of Third Day to do it. ;D", "is upset that he can't update his Facebook by texting it... and might cry as a result  School today also. Blah!", '@Kenichan I dived many times for the ball. Managed to save 50%  The rest go out of bounds', 'my whole body feels itchy and like its on fire ', "@nationwideclass no, it's not behaving at all. i'm mad. why am i here? because I can't see you all over there. ", '@Kwesidei not the whole crew ', 'Need a hug ', "@LOLTrish hey  long time no see! Yes.. Rains a bit ,only a bit  LOL , I'm fine thanks , how's you ?", "@Tatiana_K nope they didn't have it ", '@twittera que me muera ? ']

It appears that some data cleaning is in order.

##clean the data. We'll use nltk tokenization, which is not perfect,
##so I first remove some of the things nltk does not recognize with regex
##(nltk.word_tokenize needs the 'punkt' tokenizer data: nltk.download('punkt'))
emoticon_str = r"[:=;X][oO\-]?[D\)\]pP\(/\\O]"
tweets_clean = []

for t in tweets:
    t = t.lower()
    ##remove emoticons
    t = re.sub(emoticon_str, '', t)
    ##remove username mentions, they usually don't mean anything
    t = re.sub(r'(?:@[\w_]+)', '', t)
    ##remove urls
    t = re.sub(r"http\S+", '', t)
    ##remove all remaining punctuation
    t = re.sub(r'[^\w\s]', '', t)
    ##tokenize the remainder
    words = nltk.word_tokenize(t)
    tweets_clean.append(words)

That’s better:

print(tweets_clean[:10])
##[['awww', 'thats', 'a', 'bummer', 'you', 'shoulda', 'got', 'david', 'carr', 'of', 'third', 'day', 'to', 'do', 'it', 'd'], ['is', 'upset', 'that', 'he', 'cant', 'update', 'his', 'facebook', 'by', 'texting', 'it', 'and', 'might', 'cry', 'as', 'a', 'result', 'school', 'today', 'also', 'blah'], ['i', 'dived', 'many', 'times', 'for', 'the', 'ball', 'managed', 'to', 'save', '50', 'the', 'rest', 'go', 'out', 'of', 'bounds'], ['my', 'whole', 'body', 'feels', 'itchy', 'and', 'like', 'its', 'on', 'fire'], ['no', 'its', 'not', 'behaving', 'at', 'all', 'im', 'mad', 'why', 'am', 'i', 'here', 'because', 'i', 'cant', 'see', 'you', 'all', 'over', 'there'], ['not', 'the', 'whole', 'crew'], ['need', 'a', 'hug'], ['hey', 'long', 'time', 'no', 'see', 'yes', 'rains', 'a', 'bit', 'only', 'a', 'bit', 'lol', 'im', 'fine', 'thanks', 'hows', 'you'], ['nope', 'they', 'didnt', 'have', 'it'], ['que', 'me', 'muera']]

Now building the actual model is super simple:

##now for the word2vec model
##a list of lists works fine as input; you could also train in batches if your dataset is too large for memory
##(note: in gensim 4.0+ these parameters are called epochs and vector_size instead of iter and size)
model = word2vec.Word2Vec(tweets_clean, iter=5, min_count=30, size=300, workers=1)
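
If you want to reuse the trained embeddings later without retraining, Gensim lets you save and reload the model, or just its word vectors; a minimal sketch (the file names are arbitrary):

##save the full model so training could be continued later
model.save('tweets_word2vec.model')
model = word2vec.Word2Vec.load('tweets_word2vec.model')
##or store only the word vectors, which takes less space
model.wv.save('tweets_vectors.kv')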

And check out the results. The embeddings produce some cool output: you can clearly see that the words most similar to a given word are indeed semantically closely related:

print(model.wv.similarity('yellow','blue'))
print(model.wv.similarity('yellow','car'))
print(model.wv.similarity('bird','wing'))

#0.725584
#0.25973
#0.484771

print(model.wv.most_similar('car'))
print(model.wv.most_similar('bird'))
print(model.wv.most_similar('face'))

#[('truck', 0.7357473373413086), ('bike', 0.6849836707115173), ('apt', 0.6751911640167236), ('flat', 0.666846752166748), ('room', 0.6251213550567627), ('garage', 0.6184542775154114), ('van', 0.6047226190567017), ('house', 0.5993092060089111), ('license', 0.5938838720321655), ('passport', 0.5903675556182861)]
#[('squirrel', 0.7508804202079773), ('spider', 0.7459014654159546), ('cat', 0.7174966931343079), ('kitten', 0.7117133140563965), ('nest', 0.7008006572723389), ('giant', 0.6956182718276978), ('frog', 0.6906625032424927), ('rabbit', 0.685787558555603), ('mouse', 0.6779413223266602), ('hamster', 0.6754754781723022)]
#[('mouth', 0.6230158805847168), ('lip', 0.5884073972702026), ('butt', 0.5822890996932983), ('finger', 0.579236626625061), ('smile', 0.5790724754333496), ('eye', 0.5769941806793213), ('cheek', 0.5746228098869324), ('skin', 0.5726771354675293), ('arms', 0.5703110694885254), ('neck', 0.5571259260177612)]
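
The analogy-style relationships mentioned earlier can be queried with most_similar as well, by adding and subtracting word vectors. With a small, noisy tweet corpus the results can be hit or miss, and all three query words need to have survived the min_count filter, but the call looks like this:

##'leg' is to 'human' what ... is to 'cat'
print(model.wv.most_similar(positive=['leg', 'cat'], negative=['human']))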

That's it for today! In a future post, I'll elaborate on how to generate vector embeddings by defining neural networks with TensorFlow and Keras.

Averaging stocks: Time series analysis with Python

Regarding investments in the stock market, one of the most prevalent tips you'll hear is that of averaging (often called dollar-cost averaging). What you do is this: you decide on an amount of money that you periodically have available for investing, e.g. weekly or monthly. Then you choose a certain stock or basket of stocks (e.g. an ETF) you believe will grow over the long term. Subsequently, you consistently invest that money in the stock, regardless of market conditions.

If you keep the invested amount consistent, this should result in a portfolio containing lots of cheap stock and little expensive stock. Let me explain. Say you have €100 to spend monthly on investments. This month, a certain stock is valued at €5, so you buy 20 shares. Next month, the same stock is valued at €10, so you buy 10. You have now bought 30 shares at prices of €5 and €10, but your average buying price is (20 x 5 + 10 x 10) / 30 = €6.67, below the simple average price of €7.50. Over the long term, your average buying price should be quite low compared to the general market: the cheaper a certain fund becomes, the more of it you acquire, and vice versa.
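
In code, that arithmetic looks like this (a tiny sketch with the example numbers above; monthly_budget and the other names are just illustrative):

monthly_budget = 100
prices = [5, 10]                                      ##price per share in month 1 and month 2
shares_bought = [monthly_budget / p for p in prices]  ##20 and 10 shares
avg_buying_price = monthly_budget * len(prices) / sum(shares_bought)
print(avg_buying_price)                               ##about 6.67, versus an average market price of 7.50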

As a result, you don't have to time the right moment for buying and selling, and your average buying price should be well below the average market price. Thus, your portfolio should outperform both the market and stock picking in the long term. Of course, you won't make any money in a market with a long-term downtrend, but you will outperform a market that moves sideways for a long period of time; in that situation you actually capitalize on volatility.

The concept of averaging stocks is theoretically sound. However, I’d like to know:

  1. How does an averaged portfolio behave over the long term?
  2. How much of the time will my portfolio be worth more than what I put in? In other words; if I cash out at a random point, what are my odds of returning a profit?

I’ll use Amsterdam Exchange (AEX) data for the past 25 years to answer these questions. First, we’ll need some data. This is how you acquire stock data from Yahoo Finance:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import random
import fix_yahoo_finance as yf  ##this package has since been renamed to yfinance
import datetime

start = datetime.datetime(1993,1,1)
end = datetime.datetime(2018,1,1)
data = yf.download('^AEX',start,end)
##keep the daily closing prices as a plain array; this is the AEX series used in the rest of the post
AEX = data['Close'].values

Now, we'll need to write a function that builds up a 'portfolio' based on a stock price series, a periodic budget, and a buying interval (in trading days).

def avg_buy(series, budget, interval): 
    
    ##compute daily amounts bought
    amount_acquired = [int(budget / price) for price in series]

    ##compute avg price series
    avg_price_market, avg_price_bought, portfolio, portfolio_worth = [],[],[],[]

    for day, amount in enumerate(amount_acquired): 
        if day % interval == 0: 
            ##list of individual stocks acquired on that day with price 
            acquired = [series[day] for stock in range(amount)]
            ##compute portfolio
            portfolio = portfolio + acquired
            ##compute the actual value of the portfolio
            portfolio_worth.append(int(series[day] * len(portfolio)))
            ##compute mean price of portfolio
            for i in range(interval): 
                ##append the mean price of the portfolio
                avg_price_bought.append(np.mean(portfolio))
                ##append the mean price of the market
                avg_price_market.append(np.mean(series[:day+1]))
    
    return avg_price_bought, avg_price_market, portfolio_worth

The function returns a list of average buying prices, a list of average market prices, and a list of portfolio values over the course of the dataset.

Let's apply the function to the AEX data, with a 20-day interval and €10,000 per purchase (roughly monthly). That amount is of course large, but the budget needs to be large relative to the stock price so that rounding the number of shares down to a whole number doesn't distort the results.

budget = 10000
timespan = 20

avg_price_bought, avg_price_market, portfolio_worth = avg_buy(AEX,budget, timespan)
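
A minimal sketch of how you might plot the average buying price against the average market price, using the lists returned above (the labels and figure size are arbitrary):

##compare the average buying price with the average market price over time
plt.figure(figsize=(20, 10))
plt.plot(avg_price_bought[:len(AEX)], label='average buying price')
plt.plot(avg_price_market[:len(AEX)], label='average market price')
plt.legend()
plt.show()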

Now, that already answers some questions. It is clearly visible that averaging outperforms the average market price in this situation. The portfolio also becomes more stable over time: once you have acquired a larger portfolio, price movements no longer make big dents in it. Only three periods are visible during which you would have lost money by selling the portfolio.

This is what it looks like when you compare the amount invested to portfolio worth:

invested = [10000*i/20 for i in range(1,len(AEX),20)]
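
And to compare the amount invested with the portfolio worth, something along these lines; note that portfolio_worth contains one entry per buying day and invested is built on the same 20-day grid, so the two series roughly line up:

plt.figure(figsize=(20, 10))
plt.plot(invested, label='amount invested')
plt.plot(portfolio_worth, label='portfolio worth')
plt.legend()
plt.show()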

Now, to determine how often an averaged portfolio is actually in profit, I'll first take some random subsets of the AEX series. Then, for each subset, I'll compute the percentage of time the market price was above the average buying price, i.e. the share of days on which selling the portfolio would have returned a profit. I keep the budget at €10,000 per buy, set the interval to weekly (5 trading days), and take 50 samples:

ratios = []
for i in range(50):
    ##random start date
    start = random.randint(0, len(AEX) - 1)
    ##random end date, after the start date
    end = random.randint(start, len(AEX))
    ##take a subset, and if it's not too short, apply our function
    subset = AEX[start:end]
    if len(subset) > 1000:
        avg_price_bought, avg_price_market, portfolio_worth = avg_buy(subset, 10000, 5)
        ##compute the proportion of time the market price exceeds the average buying price
        x = 0
        for idx, j in enumerate(subset):
            if j > avg_price_bought[idx]:
                x += 1
        ratio = x / len(subset)
        ratios.append(ratio)

print(np.mean(ratios))

This returns an average ratio of 0.61. So if you sell your averaged portfolio at a random point in time, you'll turn a profit about 61% of the time.

So there you have it! Blindly averaging your stocks while ignoring timing and market conditions beats the market most of the time. In a next post, I'll show what another common investment concept, compound interest, looks like.

Multivariate Outlier Detection with Isolation Forests

Recently, I was struggling with a high-dimensional dataset with the following structure: it contained a very small number of outliers, all easily identifiable in scatterplots. However, one group of cases happened to be quite isolated on a few variables, at a large distance from the more common cases. Therefore, when I tried to remove outliers at three, four, or even five standard deviations from the mean, I would also delete this group.

Fortunately, I ran across a multivariate outlier detection method called isolation forest, presented in this paper by Liu et al. (2012).

This unsupervised machine learning algorithm almost perfectly left the patterns intact while picking off the outliers, which in this case were all just faulty data points. I've used isolation forests on every outlier detection problem since. In this blog post, I'll explain in layman's terms what an isolation forest does, and I'll include some Python / scikit-learn code for you to apply to your own analyses.

Outliers

First, some outlier theory. Univariate outliers are cases that are far from the mean, in terms of standard deviations, on a single variable. Don't confuse them with influential cases: influential cases may be outliers, and vice versa, but they're not the same thing. Multivariate outliers, which we are discussing in this post, are essentially cases that display a unique or divergent pattern across variables.

Outliers may have two causes:

  • There may be mistakes present in your data. Maybe someone filled in a faulty number or just typed 999; maybe the application generating your data contains some weird logic. You definitely want to remove these cases before analysis.
  • Some cases are quite abnormal in your data, but valid. Here it's up to the data scientist to decide whether to remove them. If you're doing a regression, for example, such outliers may strongly influence your results (think of Cook's distance), and you'll want to remove them from the data. A clustering algorithm, on the other hand, will simply put the abnormal group of cases in a separate cluster.

What I find isolation forests do well is that they first pick off the false or bad cases, and only once those are all identified do they start on the valid but abnormal cases. This is because valid cases, however abnormal, are often still grouped together, whereas bad cases are truly unique. This is not true for all analyses; if a default value is 999, for example, there may be many cases sharing that value on some variable. It's up to the data scientist to stay vigilant in those cases.

Isolation Forests

As the name suggests, isolation forests are based on random forests. I'll dig into decision trees and random forests some other time, but here's what you need to know: decision trees split data into classes in order to minimize prediction error. At each step, the tree gets to make one split on one of the included variables, choosing the split that removes the most entropy, or degree of uncertainty. For example, a decision tree predicting socio-economic status could first split cases into younger and older people. Subsequently, it could split the younger group into people with and without college degrees to remove further entropy, and so on.

Enter random forests: large, powerful ensembles of trees, each trained on random subsets of the cases and variables. Any individual tree is a rather weak predictor because of this randomness, but the forest achieves a low prediction error by combining the predictions of many diverse trees.

An isolation forest is based on the following principles (according to Liu et al.): outliers are the minority and behave abnormally on the variables compared to normal cases. Therefore, in a decision tree whose sole purpose is to isolate a certain data point, fewer splits should be required to isolate an outlier than to isolate a common data point. This is illustrated in the following plot:

Based on this, what an isolation forest essentially does is construct a large number of random trees. In each tree, every split is made on a randomly selected variable with a randomly selected split value, until the data points are isolated. Data points are then ranked by how few splits, on average, it took to isolate them. Given that the model's instructions were to identify X% of the data as outliers, the top X% of cases on that rank score are returned.

Isolation forests perform well because they deliberately target outliers, instead of defining abnormal cases based on normal case behaviour in the data. They are also quite efficient; I’ve easily applied them on datasets containing millions of cases.

Now for the practical bit. Let's generate some data: a large sample of definite inliers, then a group of valid (clustered) outliers, and finally some scattered bad cases.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import IsolationForest

##first, build a simulated 2-dimensional dataset
inliers = [[np.random.normal(50, 5), np.random.normal(50, 5)] for i in range(900)]
outliers1 = [[np.random.normal(70, 1), np.random.normal(70, 1)] for i in range(50)]
outliers2 = [[np.random.normal(50, 20), np.random.normal(50, 20)] for i in range(50)]

#merge and shuffle the data points
df = inliers + outliers1 + outliers2
v1, v2 = [i[0] for i in df], [i[1] for i in df]

df = pd.DataFrame({'v1': v1, 'v2': v2})
df = df.sample(frac=1).reset_index(drop=True)

##and look at the data
plt.figure(figsize = (20,10))
plt.scatter(df['v1'], df['v2'])
plt.show()

Subsequently, we'll train an isolation forest to identify outliers and examine the results. 5% of the set was generated as widely scattered bad cases. However, some of these are likely to have ended up among the inliers, so I set the outlier identification threshold (contamination) to 4%.

##apply an isolation forest
outlier_detect = IsolationForest(n_estimators=100, max_samples=1000, contamination=.04, max_features=df.shape[1])
outlier_detect.fit(df)
##predict() returns 1 for inliers and -1 for outliers
outliers_predicted = outlier_detect.predict(df)

#check the results
df['outlier'] = outliers_predicted
plt.figure(figsize = (20,10))
plt.scatter(df['v1'], df['v2'], c=df['outlier'])
plt.show()

As you can see, the isolation forest nicely separates the bad cases from the actual data patterns, with barely any errors. If I now also wanted to remove the rightmost cluster, I could simply increase the removal threshold. The next plot shows the result with the threshold set to 15%: the model now starts picking off cases in the smaller cluster and in the periphery of the larger cluster. Keep in mind that this algorithm will always incur a slight error, due to the random generation of splits.
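
Re-running the model with that higher threshold is a small change; a sketch, restricted to the two original variables so the added outlier column doesn't end up as a feature:

##re-fit with a higher removal threshold (15%), using only the original two variables
outlier_detect = IsolationForest(n_estimators=100, max_samples=1000, contamination=.15, max_features=2)
df['outlier'] = outlier_detect.fit(df[['v1', 'v2']]).predict(df[['v1', 'v2']])

plt.figure(figsize=(20, 10))
plt.scatter(df['v1'], df['v2'], c=df['outlier'])
plt.show()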

Hyperparameter tuning

This is how to tune the following parameters for optimal performance:

  • n_estimators: The number of decision trees in the forest. According to Liu et al., 100 should be sufficient in most cases.
  • max_samples: Given large datasets, you might want to train on random subsets of cases to decrease training time. This parameter lets you determine subset size.
  • contamination: The proportion of the data you want to identify as outliers. As demonstrated above, without prior knowledge this parameter takes some trial and error, combined with scatterplot visualisation.
  • max_features: The number of variables used for the splits. In almost any situation this should simply be set to the number of variables in your data. Alternatively, you can loop over the variables one at a time to do univariate outlier detection on each of them, without having to specify a standard-deviation threshold.

That's all you need to know to apply isolation forests to multivariate outlier detection! Please refer to this blog and the scikit-learn documentation if you use any of the information written here in your own work.