Status Report: May 2016 (Metahaplomes: The graph that isn't)
Sun, 12 Jun 2016

It would seem that a sufficient amount of time has passed since my previous report to discuss how everything has broken in the meantime. You would have left off with a version of me who had not long solidified the concept of the metahaplome: a graph-inspired representation of the variation observed across aligned reads from a sequenced metagenome. Where am I now?

Metahaplomes

The graph that isn’t

At the end of my first year, I returned from my reconnaissance mission to extract data mining knowledge from a leading Belgian university with a prototype for a data structure fit to house a metahaplome: a probabilistically weighted graph that can be traversed to extract likely sequences of variants on some gene of interest. I say graph, because the structure and API do not look or particularly act like a traditional graph at all. Indeed, the current representation is a four-dimensional matrix that stores the number of observations of a symbol (SNP) A at position i, co-occurring with a symbol B at position j.
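To make that concrete, here is a minimal sketch of such a pairwise co-occurrence store: a dictionary-backed stand-in for the four-dimensional matrix. Everything here (the class, its methods, the example reads) is my own illustration of the idea, not the actual implementation.

```python
from collections import defaultdict

class ObservationMatrix:
    """Minimal sketch of a pairwise co-occurrence store.

    Conceptually a 4D matrix indexed by (site_i, symbol_a, site_j, symbol_b),
    backed here by a dictionary for sparseness. Illustrative only.
    """

    def __init__(self):
        self.counts = defaultdict(int)

    def observe_read(self, read_variants):
        """read_variants: dict mapping variant site index -> observed symbol."""
        sites = sorted(read_variants)
        for i in sites:
            for j in sites:
                if i == j:
                    continue
                # Count symbol A at site i co-occurring with symbol B at site j,
                # regardless of whether the two sites are adjacent.
                self.counts[(i, read_variants[i], j, read_variants[j])] += 1

    def get(self, i, a, j, b):
        return self.counts[(i, a, j, b)]

# Example: two simulated reads covering variant sites 0, 1 and 3.
matrix = ObservationMatrix()
matrix.observe_read({0: "A", 1: "T", 3: "G"})
matrix.observe_read({0: "A", 1: "C", 3: "G"})
print(matrix.get(0, "A", 3, "G"))  # 2: non-adjacent evidence is retained
```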

This has proved problematic as I’ve had difficulty in explaining the significance of this to people who dare to ask what my project is about. “What do you mean it’s not a graph? There’s a picture of a graph on your poster right there!?”. Yes, the matrix can be exploited to build a simple graph representation, but not without some information loss. As a valid gene must select a variant at each site, one cannot draw a graph that contains edges from sites of polymorphisms that are not adjacent (as a path that traverses such an edge would skip a variant site [1]). We therefore lose the ability to encode any information regarding co-occurrence of non-adjacent variants (abs(i - j) != 1) if we depict the problem with a simple graph alone.

To circumvent this, edges are not weighted upfront. Instead, to take advantage of the evidence available, the graph is dynamically weighted during traversal (the movement to the next node is variable, and depends on the nodes that have been visited already) using the elements stored in the matrix.
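To illustrate what dynamic weighting during traversal might look like, here is another hedged sketch reusing the hypothetical ObservationMatrix above (and greedy where the real traversal is probabilistic): the score for moving to a candidate symbol at the next site is accumulated from its co-occurrence counts with every symbol already selected on the path.

```python
def next_symbol(matrix, path, next_site, alphabet=("A", "C", "G", "T")):
    """Choose a symbol for next_site given the symbols selected so far.

    path: dict of already-visited site -> chosen symbol.
    Edge weights are not precomputed; they are derived on the fly from the
    co-occurrence counts between each candidate and the whole path so far.
    """
    scores = {}
    for candidate in alphabet:
        evidence = 0
        for visited_site, chosen in path.items():
            evidence += matrix.get(visited_site, chosen, next_site, candidate)
        scores[candidate] = evidence
    # Greedy choice for illustration; the real traversal is probabilistic.
    return max(scores, key=scores.get)
```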

Thus we have a data structure capable of being utilised like a graph, with some caveats: it is not possible to enumerate all possibilities or assign weights to all edges upfront before traversal (or, for that matter, to a random edge), and a fog of war exists during any traversal (i.e. it is not possible to predict where a path may end without exploring). Essentially, we have no idea what the graph looks like until we explore it. Despite this, my solution fuses the advantage of a graph's simple representation with that of an adjacency matrix, which permits storage of all pertinent information. Finally, I've been able to describe the structure and algorithm verbally and mathematically.

Reweighting

Of course, having this traversable structure that takes all the evidence seen across the reads into account is great, but we need a reliable method for rescuing more than just one possible gene variant from the target metahaplome. My initial attempts at this involved invoking stochastic jitter during traversal to quite poor effect. It was not until some time after I’d got back from putting mayonnaise on everything that I considered altering the observation matrix that backs the graph itself to achieve this.

My previous report described the current methodology: given a complete path, check the marginal probability for each variant at each position of the path (i.e. the probability one would select the same nucleotide if you were to look at that variant site in isolation) and determine the smallest marginal. Then iterate over the path, down-weighting the element of the observation matrix that stores the number of co-occurrences of the i-th selected nucleotide and the (i+1)-th selected nucleotide: multiply the existing value by the lowest marginal (which will be greater than 0, but smaller than 1) and subtract that value from the current count.
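A rough sketch of that reweighting step, in terms of the hypothetical structure sketched earlier (the path representation and helper names are my own assumptions):

```python
def reweight_path(matrix, path, marginals):
    """Down-weight the evidence that supported a completed path.

    path: ordered list of (site, symbol) selections along the path.
    marginals: per-site marginal probability of each selected symbol.
    The smallest marginal (0 < m < 1) scales the reduction applied to the
    observation counts for each adjacent pair of selections on the path.
    """
    m = min(marginals)
    for (site_i, sym_i), (site_j, sym_j) in zip(path, path[1:]):
        current = matrix.get(site_i, sym_i, site_j, sym_j)
        # Subtract (current * m) from the count, i.e. new = current * (1 - m).
        matrix.counts[(site_i, sym_i, site_j, sym_j)] = current - (current * m)
```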

Initial testing yielded more accurate results with this method than anything I had tried previously, where accuracy is quantified by this not happening:

The algorithm is evaluated with a data set of several known genes from which a metagenome is simulated. The coloured lines on the chart above refer to each known input gene. The y-axis represents the percentage of variants that are “recovered” from the metagenome; the x-axis is the iteration (or path) number. In this example, a questionable strategy caused poor performance (other than the 100% recovery of the blue gene), and a bug in handling elements that are reweighted below 1 allowed the algorithm to enter a periodic state.

After implementing the latest strategy, performance compared to the above increased significantly (at least on the limited data sets I have spent the time curating), but I was still not entirely satisfied. Recognising this was going to take much more time and thought, I procrastinated by writing up the technical aspects of my work in excruciating mathematical detail in preparation for my next paper. To wrap my head around my own equations, I commandeered the large whiteboards in the undergraduate computing room and primed myself with coffee and Galantis. Unfortunately, after an hour or two of excited scribbling, this happened:

I encountered an oversight. Bluntly:

Despite waxing lyrical about the importance of evidence arising from non-adjacent variant sites, I’d overlooked them in the reweighting process. Although I was frustrated with my own incompetence, the issue was uncovered at a somewhat opportune time, as I was looking for a likely explanation for what felt like an upper bound on the performance of the algorithm. As evidence (observation counts) for adjacent pairwise variants was decreased through reweighting, non-adjacent evidence was becoming an increasingly important factor in the decision making process for path traversal, simply by virtue of those counts being larger (as they were left untouched). Thus paths were still being heavily coerced along particular routes and were not afforded the opportunity to explore more of the graph, yielding less accurate results (fewer recovered variants) for the more divergent input genes.

As usual with these critical oversights, the fix was trivial (just apply the same reweighting rules to the non-adjacent pairs of variants as to the adjacent ones), and indeed, performance was bumped by around 5 percentage points. Hooray.
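In terms of the earlier sketch, the fix amounts to iterating over every pair of selections on the path, not just consecutive ones (again, an illustration rather than the real code):

```python
from itertools import combinations

def reweight_path_all_pairs(matrix, path, marginals):
    """As before, but down-weight the non-adjacent pairs of selections too."""
    m = min(marginals)
    for (site_i, sym_i), (site_j, sym_j) in combinations(path, 2):
        current = matrix.get(site_i, sym_i, site_j, sym_j)
        matrix.counts[(site_i, sym_i, site_j, sym_j)] = current - (current * m)
```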

Evaluation

Generating test data (is still a pain in the arse)

So here we are: I’m still somewhat stuck in a data rut. Generating data sets (that can be verified) is a somewhat convoluted procedure. Whilst all one needs to run the algorithm is a BAM of aligned reads and an associated VCF of called SNP sites, to empirically test the output we also need to know what the output genes should look like. Currently this requires a “master” FASTA (the origin gene), a FASTA of similar genes (the ones we actually want to recover) and a blast hit table that documents how those similar genes align to the master. The workflow for generating and testing a data set looks like this (a rough sketch of the final evaluation step follows the list):

  • Select an interesting, arbitrary master gene from a database (master.fa)
  • blast for similar genes and select several hits with decreasing identity
  • Download FASTA (genes.fa) and associated blast hit table (hits.txt) for selected genes
  • Simulate reads by shredding genes.fa (reads.fq)
  • Align reads (reads.fq) with bowtie to pseudo-reference (master.fa) to create (hoot.bam)
  • Call SNPs on (hoot.bam) to create a VCF (hoot.vcf)
  • Construct metahaplome and traverse paths with reads (hoot.bam) and SNPs (hoot.vcf)
  • Output potential genes (out.fa)
  • Evaluate each result in out.fa against each hit in hits.txt
    • Extract DNA between subject start and end for record from genes.fa
    • Determine segment of output (from out.fa) overlapping current hit (from genes.fa)
    • Convert co-ordinates of the SNP to the current hit (genes.fa)
    • Confirm consistency between the SNP on the output and the corresponding base on the current hit
    • Return matrix of consistency for each output gene, to each input gene
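
To illustrate the co-ordinate conversion and consistency check at the heart of that evaluation step, here is a hedged sketch: it assumes an ungapped, forward-strand blast hit described by its query and subject start positions, and every function name is hypothetical rather than taken from the real evaluator.

```python
def master_to_hit_position(snp_pos, qstart, sstart):
    """Translate a 1-based SNP position on the master (blast query) to the
    corresponding 1-based position on a hit gene (blast subject).

    Assumes an ungapped, forward-strand alignment described only by the
    query and subject start positions from the hit table; alignments with
    indels need more care, as the next section found out.
    """
    return sstart + (snp_pos - qstart)


def snp_consistent(output_seq, hit_seq, snp_pos, qstart, sstart):
    """True if the base recovered at snp_pos (in master co-ordinates)
    matches the base the hit gene carries at the corresponding site."""
    hit_pos = master_to_hit_position(snp_pos, qstart, sstart)
    # Both positions are 1-based; Python strings are 0-indexed.
    return output_seq[snp_pos - 1] == hit_seq[hit_pos - 1]
```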

Discordant alignments between discordant aligners

The issue at hand primarily arises from discordant alignment decisions between the two alignment processes that make up components of the pipeline: blast and bowtie. blast is used to select the initial genes (given some “master”) and its resulting hit table is also used to evaluate the approach at the end of the algorithm, whereas bowtie is used to align the reads (reads.fq) to that same master. Occasional disagreements between the two algorithms are inevitable on real data, but I assumed that, given the simplicity of the data (simulated reads of uniform length, no errors, reasonable identity), they would behave the same. It may sound like an obvious source of problems, but when several genes are reported as correctly extracted with high accuracy (identity) and one or two are not, you might forgive me for thinking that the algorithm just needed tweaking rather than that an underlying problem stemmed from the alignment steps! This led to much more tail-chasing than I would care to admit to.

For one example: I investigated a poorly reconstructed gene by visualising the input BAM produced by bowtie with Tablet (a rather nifty BAM+VCF+FA viewer). It turned out that for reads belonging to one of the input genes, bowtie had called an indel [1], causing a disagreement as to what the empirical base at each SNP following that indel should have been. That is, although all of the reads from that particular input gene were aligned by bowtie as having an indel (thus shifting bases in those reads), and were processed by my algorithm with that indel taken into account, at the point of evaluation the blast hit table is the gold standard; what may have been the correct variant (indel notwithstanding) would be determined as incorrect by the alignment of the hit table.

I suppose the solution might be to switch to one aligner, but I’m aware that even the same aligner can make different decisions under differing conditions (e.g. read length).
It’s important to note that currently the hit table is also used to define where to begin shredding reads for the simulated metagenome, which in turn causes trouble if bowtie disagrees with where an alignment begins and ends. I’ve had cases where blast aligns the first base of the subject (input gene) to the first base of the query (master), but on inspection with Tablet it becomes evident that bowtie clips the first few bases when aligning the opening reads to the master. This problem is a little more subtle, and in current practice causes little trouble: although the effect would be a reduction in observed evidence for variants at SNP sites that happen to occur within the first few bases of the gene, my test sets so far do not have a SNP site that close to the start of the master. This is obviously something to watch out for, though.

At this time I’ve just been manually altering the hit table to reconcile differences between the two aligners, which is gross.

Bugs in the machine

Of course, a status report from me is not complete without some paragraph where I hold my hands up in the air and say everything was broken due to my own incompetence. Indeed, a pair of off-by-one bugs in my evaluation algorithm also warped reported results. The first, a regression introduced after altering the parameters under which the evaluation function determines an overlap between the current output gene and current hit, led to a miscalculation when translating the base position on the master to the base position on the expected input under infrequent circumstances, causing the incorrect base to be compared to the expected output. This was also accidentally fixed when I refactored the code and saw a very small increase in performance.

The second, an off-by-one error in the reporting of a new metric, “deviations from reference”, caused the results to suddenly appear rather unimpressive. The metric measures the number of bases that differ from the pseudo-reference (master.fa) and were correctly recovered by my algorithm to match an original gene from genes.fa. Running my algorithm now yielded a results table describing impressive gene recovery scores (>89%), but those genes appeared to differ from the reference by merely a few SNPs (<10). How could we suck at recovering sequences that barely deviate from the master? Why does it take so many iterations? After getting off the floor and picking up the shattered pieces of my ego and PhD, I checked the VCF and confirmed there were over a hundred SNPs across all the genes. Curious, I inspected the genes manually with Tablet to see how they compared to the reference. Indeed, there were definitely more than the four reported for one particular case, so what was going on?

To finish quickly: path iteration numbers start from 0, but are reported to the user as iter + 1, because “the 0th iteration” is not catchy. My mistake was using iter + 1 to also access the number of deviations from the reference detected in the current iteration – in a zero-indexed structure. I was fetching the number of deviations successfully extracted by the path after this one, which we would expect to be poor, as the structure would have been reweighted to prevent that path from appearing again. Nice work, me. This fix made things a little more interesting:
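In illustrative pseudo-Python (hypothetical names, not the actual reporting code), the bug and its fix look like this:

```python
def report(paths, deviations_per_path):
    """Illustrative reporting loop over completed paths."""
    for iter_n, path in enumerate(paths):
        label = iter_n + 1  # iterations are reported 1-based ("iteration 0" is not catchy)

        # Bug: indexing the 0-based deviations list with the 1-based label
        # fetched the count for the path *after* this one.
        # deviations = deviations_per_path[iter_n + 1]

        # Fix: use the 0-based index for the 0-based structure.
        deviations = deviations_per_path[iter_n]
        print(f"Path {label}: {deviations} deviations from the reference recovered")
```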

More testing of the testing is evidently necessary.

Conclusion

So where does that leave us? Performance is up, primarily because the code that I wrote to evaluate performance (and reweight) is now less broken. Generating data sets is still a pain in the arse, but I have got the hang of the manual process involved, so I can at least stop hiding from the work to be done. It might be worth consolidating all of my alignment activities into one aligner to improve my credit score. Results are looking promising: this algorithm is now capable of extracting genes (almost or entirely whole) from simulated (albeit quite simple) metagenomes.

Next steps will be more testing, writing the method itself as a paper, and getting some proper biological evidence from the lab that this work can do what I tell people it can do.


tl;dr

  • I continue to be alive, and bad at both blogging and implementing experimental data structures
  • I fixed my program not working by fixing the thing that told me it wasn’t working
  • If your evaluator has holes in it, you’ll spend weeks chasing problems that don’t exist
  • Never assume how someone else’s software will work, especially if you are assuming it will work like a different piece of software that you are already making assumptions about
  • Always be testing (especially testing the testing)
  • This thing actually fucking works
  • An unpleasant side effect of doing a PhD is the rate of observed existential crises increases
  • Life continues to be a series of off-by-one errors, punctuated with occasional trips to the seaside

  [1] Let’s not even talk about indels for now.