MCB112: Biological Data Analysis (Fall 2017)

homework 06: a mixture of five

 Lestrade’s single-cell RNA-seq dataset
 1. reproduce Lestrade’s K-means result
 2. mixture negative binomial fitting
 3. find a simple fix for K-means
 turning in your work
 hints
 take-home lessons
A former postdoc in the group, Gene Lestrade, left the lab suddenly, and left you a mess of his work in progress. He’s moved to a new job and isn’t responding to your emails, so you’re doing some detective work of your own.
Some of his last experiments were single-cell RNA-seq experiments on differentiated sand mouse embryonic stem cells. You’ve found records from an experiment in which he was looking at two key early transcription factor genes called Caraway and Kiwi. Previous work had shown that Caraway and Kiwi are expressed at intermediate levels in ES cells, but upon differentiation, their mRNA expression patterns break into four different cell types with all four possible combinations of low vs. high expression of these two TFs.
Lestrade’s single-cell RNA-seq dataset
You’ve found a data file where Lestrade had collected mapped read counts for Caraway and Kiwi in 1000 single differentiated ES cells. Because he expected five “cell types” in the data, he used K-means clustering, with K=5, to try to assign each cell to one of the five cell types, and thus estimate the mean expression level (in mapped counts) of the two genes in each cell type, and the relative proportions of the cell types in the population.
However, it’s obvious from a figure you found taped into Lestrade’s notebook, and various profanities written therein, that the K-means clustering did not go as hoped. His visualization of his data does show five clear clusters, but his K-means clustering failed to identify them (Figure 1, right).
Figure 1: Lestrade's figure from his notebook, visualizing his K-means clustering of the 1000 cells in his single-cell RNA-seq experiment, for K=5. Black stars indicate the five fitted centroids from his best K-means clustering, and colors indicate the assignments of the 1000 cells to the 5 clusters.
You can see in his notebook that he understood that K-means is prone to local minima, so it’s not as if this is a one-off bad solution. His notes indicate that he selected the best of 20 solutions, starting from different random initial conditions. You find the following data table, and a note that this solution had a best “final totdist = 465604.0”, of 20 solutions with “totdist” from 465604.0 to 476547.5.
cluster   fraction   mean counts (Caraway)   mean counts (Kiwi)
0         0.1060      505.7830               1769.7547
1         0.6510      230.7450                226.6528
2         0.0600      803.4000               3516.9833
3         0.0770     3234.7273               1943.0390
4         0.1060     2090.6698                304.6981
Interestingly, it looks like all Lestrade was trying to do was to get K-means clustering to work. He must have also had the ES cells marked with reporter constructs that unambiguously labeled each of their cell types, because his data file includes a column for the true cell type (0–4), so the true clustering is known in these data (Figure 2, right).
Figure 2: Another figure from Lestrade's notebook, showing the true clustering of the 1000 cells.
1. reproduce Lestrade’s K-means result
Write a standard K-means clustering procedure. Use it to cluster Lestrade’s data into K=5 clusters. Plot the results, similar to his figure. You should be able to reproduce a similarly bad result.
You’ll want to run the K-means algorithm multiple times and choose the best solution. What is a good statistic for choosing the “best” solution for K-means? You should be able to reproduce Lestrade’s “totdist” measure.
Why is K-means clustering producing this result, when there are clearly five distinct clusters in the data?
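A minimal sketch of what one run of such a procedure could look like (numpy only; the function and variable names here are illustrative, not a required interface):

```python
import numpy as np

def kmeans(X, K, maxiter=100, rng=None):
    """One run of standard K-means on X (an N x D array of points).
    Returns (centroids, assignments, totdist), where totdist is the
    total squared distance of points to their assigned centroids."""
    rng = np.random.default_rng(rng)
    # initialize centroids at K distinct randomly chosen data points
    centroids = X[rng.choice(len(X), size=K, replace=False)].astype(float)
    for _ in range(maxiter):
        # assignment step: each point goes to its nearest centroid
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        assign = d2.argmin(axis=1)
        # update step: each centroid moves to the mean of its points
        newc = np.array([X[assign == k].mean(axis=0) if np.any(assign == k)
                         else centroids[k] for k in range(K)])
        if np.allclose(newc, centroids):   # converged
            break
        centroids = newc
    # recompute assignments and totdist for the final centroids
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    assign = d2.argmin(axis=1)
    totdist = d2[np.arange(len(X)), assign].sum()
    return centroids, assign, totdist
```

Picking the best of 20 runs by lowest totdist (Lestrade's criterion) is then a one-liner, e.g. `min((kmeans(X, 5, rng=s) for s in range(20)), key=lambda r: r[2])`.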
2. mixture negative binomial fitting
Now you’re going to use what you’ve learned about mixture models, and about the negative binomial distribution for RNA-seq data.
Write an expectation-maximization (EM) algorithm to fit a mixture negative binomial distribution to Lestrade’s data, for Q=5 components in the mixture.
Assume there is a common dispersion, shared by all mixture components. This means that all you need to re-estimate in the EM algorithm are the means of each mixture component, along with the mixture coefficients.
Like K-means, EM is a local optimizer, so you will want to run your EM algorithm from multiple initial conditions, and take the best one. What is an appropriate statistic for choosing the “best” fit?
What are the estimated mean expression levels of Caraway and Kiwi in the five cell types, and the relative proportions of each cell type in the 1000 cells?
Visualize your result in a plot similar to Lestrade’s.
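One possible shape for the EM fit, as a sketch: it assumes counts in an (N, 2) array and a negative binomial parameterized by mean mu and dispersion phi, mapped onto scipy.stats.nbinom as n = 1/phi, p = 1/(1 + phi*mu) so that the variance is mu + phi*mu**2. All names here are illustrative:

```python
import numpy as np
from scipy.stats import nbinom
from scipy.special import logsumexp

def nb_logpmf(x, mu, phi):
    """NB log PMF parameterized by mean mu and dispersion phi
    (variance = mu + phi * mu**2)."""
    return nbinom.logpmf(x, 1.0 / phi, 1.0 / (1.0 + phi * mu))

def em_nbmix(X, Q, phi, niter=200, rng=None):
    """EM fit of a Q-component mixture of NBs (independent genes,
    fixed common dispersion phi) to counts X (N x G). Returns the
    mixture coefficients, the (Q x G) means, and the log-likelihood."""
    rng = np.random.default_rng(rng)
    N, G = X.shape
    mu = X[rng.choice(N, Q, replace=False)].astype(float) + 1.0  # init at data points
    pi = np.full(Q, 1.0 / Q)
    for _ in range(niter):
        # E step: posterior P(q | x_i), computed stably in log space
        ll = np.log(pi)[None, :] + np.stack(
            [nb_logpmf(X, mu[q], phi).sum(axis=1) for q in range(Q)], axis=1)
        logZ = logsumexp(ll, axis=1)
        post = np.exp(ll - logZ[:, None])          # responsibilities, (N, Q)
        # M step: posterior-weighted mixture coefficients and means
        pi = post.mean(axis=0)
        mu = (post.T @ X) / post.sum(axis=0)[:, None]
    return pi, mu, logZ.sum()
```

As with K-means, run it from several random seeds and keep the fit with the highest total log-likelihood.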
3. find a simple fix for K-means
Suggest a simple fix for the problem in applying a standard K-means clustering algorithm to Lestrade’s single-cell RNA-seq data. Implement the fix, re-run the K-means clustering, pick a “best” solution; report and visualize it.
turning in your work
Email a Jupyter notebook page (a .ipynb file) to mcb112.teachingfellow@gmail.com. Please name your file <LastName><FirstName>_MCB112_<psetnumber>.ipynb; for example, mine would be EddySean_MCB112_02.ipynb.
hints

The true cell type is noted in the data file, so you can check whether your clustering algorithms are working well. (It’s sort of obvious where the five clusters are anyway, visually, so it’s not like I’m giving much away.)

K-means (and fitting mixture models by EM) is prone to spurious local optima – as you’ll surely see. A lot of the art is in choosing initial conditions wisely. You may want to try some different strategies.

The mixture-modeling EM algorithm will be eerily parallel to your K-means algorithm (cough that’s the point of the problem set cough cough). The expectation step, in which you calculate the posterior probability that each data point belongs to each mixture component, is analogous to the K-means step of assigning each data point to its current closest centroid. The maximization step, in which you re-estimate the parameters given the posterior-weighted points, corresponds to the K-means step of estimating new centroids from the means of their assigned points.
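One practical note on the expectation step: the per-component log likelihoods of a count vector can be hundreds of log units below zero, so normalize the responsibilities in log space rather than exponentiating first. A toy illustration using scipy.special.logsumexp (the numbers are made up):

```python
import numpy as np
from scipy.special import logsumexp

# log(pi_q) + log P(x_i | component q), shape (N, Q); the first row's
# probabilities would underflow to zero if exponentiated directly
logjoint = np.log(np.array([[1e-300, 2e-300],
                            [0.5,    0.5]]))
# responsibilities: normalize each row in log space, then exponentiate
resp = np.exp(logjoint - logsumexp(logjoint, axis=1, keepdims=True))
```

Each row of `resp` now sums to 1, even though the raw probabilities in the first row are far below the smallest representable float.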

Because the dispersion is given to you, you only need to estimate the mean of each NB component.
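If you parameterize the NB by its mean mu and dispersion phi (so that the variance is mu + phi*mu**2), one way to map that onto scipy.stats.nbinom is n = 1/phi, p = 1/(1 + phi*mu). A quick sanity check of this mapping, with made-up values:

```python
import numpy as np
from scipy.stats import nbinom

phi, mu = 0.1, 100.0                   # made-up dispersion and mean
n, p = 1.0 / phi, 1.0 / (1.0 + phi * mu)   # scipy's (n, p) parameterization
print(nbinom.mean(n, p))               # should equal mu = 100
print(nbinom.var(n, p))                # should equal mu + phi*mu**2 = 1100
```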

A big hint on part (3): consider how K-means assigns its points to centroids, versus how we’re plotting the axes to visualize these data.

K-means is also prone to artifacts when the variances of the clusters are different, or when the variances aren’t uniform in the different directions (dimensions), because it implicitly assumes spherical Gaussian distributions of equal variance. I decided against fitting the NB dispersion parameters for this pset – which would be easy enough to do by “method of moments” fitting, though a moderate pain by maximum likelihood, as it turns out.

You locate the script Lestrade used to read his data file and produce Figure 2. This might help you avoid some of the routine hassles of parsing input and producing output, and let you focus on the good bit (K-means and EM fitting of a mixture model).
take-home lessons
 On the one hand, I’m using the very intuitive K-means algorithm to illustrate how an EM algorithm works. On the other hand, I’m using EM fitting of a statistical mixture model to illustrate how K-means is a simplified case of mixture modeling that makes strong implicit assumptions.