Paper Review: FuncISH: learning a functional representation of neural ISH images
Noa Liscovitch, Uri Shalit, & Gal Chechik (2013). FuncISH: learning a functional representation of neural ISH images. Bioinformatics. DOI: 10.1093/bioinformatics/btt207
This is part of the ISMB 2013 Proceedings series, which I am interested in as I'll be going to Berlin, and it is a Bioimage Informatics paper, which I'm keen to cover; so it was only natural that I'd review it here.
The authors are analysing in-situ hybridization (ISH) images from the Allen Brain Atlas. Figure 1 in the paper shows an example.
Results
The authors use the images as input to a functional classifier. The input to this classifier is an image and the output is a set of functional GO terms; more precisely, a confidence level for each GO term in the vocabulary.
You can read the details in Section 3.1, but the system succeeds in predicting functional GO terms, especially, as one would expect, neuronal categories. This is very interesting and I hope that the authors (or others) will pick up on the specific biology that is being predicted here and see if it can be used further. [1]
Alternatively, you can see this model as a dimensionality reduction approach, whereby images are projected into the space of GO terms. For this, one considers the continuous confidence levels rather than the binary classifications.
In this space, it is possible to compute similarity scores between images, which operate at a functional rather than simply appearance level. The results are much better than simply comparing the image features directly (see Figure 4 for details). There is a lot of added value in considering the functional annotations rather than simple appearance.
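The paper does not spell out the similarity measure in a way I can quote here, but the idea is easy to sketch: each image becomes a vector of GO-term confidences, and similarity is computed between those vectors rather than between raw image features. A minimal sketch, assuming cosine similarity and made-up confidence values:

```python
import numpy as np

def functional_similarity(u, v):
    """Cosine similarity between two GO-term confidence vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical confidence vectors for three images over four GO terms
img_a = np.array([0.9, 0.1, 0.8, 0.2])
img_b = np.array([0.8, 0.2, 0.7, 0.1])  # functionally similar to img_a
img_c = np.array([0.1, 0.9, 0.1, 0.8])  # functionally different

sim_ab = functional_similarity(img_a, img_b)
sim_ac = functional_similarity(img_a, img_c)
assert sim_ab > sim_ac
```

The point of working in this space is exactly what the paragraph above says: two images of genes with related functions can score as similar even if their raw appearance features do not match closely.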
Methodology
I was very interested in the methods and the details, as the authors used SIFT and a bag-of-words approach. I have a paper coming out showing that SURF+bag-of-words works very well for subcellular determination. This paper provides additional evidence that this family of techniques works well in bioimage analysis, even if the problem areas are different.
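For readers unfamiliar with the bag-of-words representation used in both papers: local descriptors (SIFT here, SURF in mine) are quantized against a codebook of "visual words" (typically learned by k-means), and the image is summarized as a histogram of word counts. A toy sketch with random data standing in for real descriptors and a real codebook:

```python
import numpy as np

def bag_of_words(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and
    return the normalized histogram of word counts."""
    # Pairwise distances: (n_descriptors, n_words)
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
codebook = rng.normal(size=(50, 128))      # toy codebook: 50 words, 128-D (SIFT dimension)
descriptors = rng.normal(size=(200, 128))  # toy "SIFT" descriptors from one image
h = bag_of_words(descriptors, codebook)
assert h.shape == (50,)
```

The resulting histogram is what gets fed to the classifier; the spatial layout of the descriptors is discarded, which is part of why the local-versus-global discussion below matters.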
They do make a few interesting remarks which I'll highlight here:
Although their name suggests otherwise, SIFT (Scale-Invariant Feature Transform) descriptors at several scales capture different types of patterns.
The original SIFT descriptors were developed for natural image matching, where the scale is unknown and may even vary within the same image (if one person is standing close by and another far away, they will appear at different scales). However, this is not the case in bioimage analysis.
Interestingly, the four visual words with the highest contribution to classification were the words counting the zero descriptors in each scale. This means that the highest information content lies in ‘least informative’ descriptors, and that overall expression levels (‘sparseness’ of expression) are important factors in functional prediction of genes based on their spatial expression.
This is interesting, although an alternative hypothesis is that the null descriptors capture a very different type of information, and since there are only four of them, they concentrate all of that content. The other 2,000 words are often highly correlated with one another, so while they carry high information content as a group, the L2-penalized regression spreads the weight across the correlated values, leaving each individual word with a small coefficient.
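This weight-spreading effect of the L2 penalty is easy to demonstrate with synthetic data (not the paper's data): one lone feature and a group of ten near-identical features, each group contributing equally to the target. Ridge regression keeps a large coefficient on the lone feature but splits the group's weight roughly ten ways:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
s1 = rng.normal(size=n)   # signal carried by a single feature (like a null-descriptor word)
s2 = rng.normal(size=n)   # signal carried by ten highly correlated features
y = s1 + s2               # both signals matter equally

lone = s1[:, None]
group = s2[:, None] + 0.01 * rng.normal(size=(n, 10))  # ten near-copies of s2
X = np.hstack([lone, group])

# Closed-form ridge (L2-penalized) regression: w = (X'X + lam*I)^-1 X'y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(11), X.T @ y)

# The lone feature keeps a large coefficient; each correlated copy gets ~1/10 of the weight
assert w[0] > 0.8
assert all(abs(wi) < 0.3 for wi in w[1:])
```

So a small per-word weight among the 2,000 correlated words need not mean low information content; the group as a whole can matter a great deal while no single member stands out in the ranking of weights.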
Finally, I agree with this statement:
Combining local and global patterns of expression is, therefore, an important topic for further research.
[1] Unfortunately, my understanding of neuroscience does not go much beyond "if I drink too much coffee, I get a headache". So, I cannot comment on whether these predictions make much sense.