I’m working on a visualization/map of Wikipedia pages. The map will lay out pages close to one another if they are similar. So, in order to create such a map, I need to compute the similarity of any two Wikipedia pages.

For my first attempt at this, I decided to go with a cocitation measure of similarity. So, two Wikipedia pages will be said to be similar if other Wikipedia pages that link to one usually link to the other.
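Concretely, the score computed below is the Dice coefficient over sets of citing pages: writing in(A) for the set of pages that link to A, similarity(A, B) = 2 × |in(A) ∩ in(B)| / (|in(A)| + |in(B)|). A pair of pages cited by exactly the same set of pages scores 1, and a pair with no citers in common scores 0.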

However, the naive way to compute this, looking at every pair of pages, is far too inefficient given that there are 650,000 pages in the English Wikipedia and 14.5 million pagelinks. So I’ve worked up a much more efficient algorithm: instead of considering all possible pairs, it only ever examines pairs of pages that actually share a citing page. Here’s the pseudocode… I hope someone, somewhere out in cyberspace will find this useful. (It can, in fact, be used to compute cocitation similarities for any data represented as nodes and links.)

use strict;
use warnings;

my (%nodes, %links);   # the wikipedia pages and pagelinks, filled in elsewhere;
                       # $links{$source} is a hash of the pages $source links to

# flip the pagelinks around: $reverse{$target} holds the pages linking to $target
my %reverse;
for my $source (keys %links) {
    $reverse{$_}{$source} = 1 for keys %{ $links{$source} };
}

my %scores;   # 2d hash storing the final similarity scores

for my $node (keys %nodes) {

    # count cocitations for $node: every page that a citer of $node also links to
    my %cocitations;
    for my $sourceNode (keys %{ $reverse{$node} }) {
        $cocitations{$_}++ for keys %{ $links{$sourceNode} };
    }

    # turn the counts into similarity scores for $node
    my $citationCount = keys %{ $reverse{$node} };
    for my $node2 (keys %cocitations) {
        my $citationCount2 = keys %{ $reverse{$node2} };
        $scores{$node}{$node2} =
            2 * $cocitations{$node2} / ($citationCount + $citationCount2);
    }
}
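For instance, here’s a toy run on a made-up link structure (the page names are purely hypothetical):

my %links = (
    'Physics'   => { 'Newton' => 1, 'Einstein' => 1 },
    'Biography' => { 'Newton' => 1, 'Einstein' => 1, 'Darwin' => 1 },
);
# with all five pages in %nodes, the loops above give:
#   $scores{'Newton'}{'Einstein'} = 2 * 2 / (2 + 2) = 1      (same two citers)
#   $scores{'Newton'}{'Darwin'}   = 2 * 1 / (2 + 1) = 0.67   (one shared citer)

One thing to note: every page ends up cocited with itself, with a score of exactly 1, so depending on what the map needs you may want to skip those self-pairs.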