If you do not want to use PhantomJS, you can also use the network sniffer in the Firefox or Chrome developer tools. There you will see that the HTML table data is returned by a POST request that the page's JavaScript makes to the server.
Then, instead of opening the page's original URL with Nokogiri, you replay that POST request from your Ruby script and interpret the response. It appears to be JSON data with HTML embedded in it; you can extract the HTML and feed it to Nokogiri.
This requires a little extra detective work, but I have used this method repeatedly when scraping JavaScript-heavy pages. It works fine for most simple tasks, though it takes some digging into the page's inner workings and its network traffic.
Here are the request URLs that the page's JavaScript uses to fetch the JSON data:
Bonds:
https://web.apps.markit.com/AppsApi/GetIndexData?indexOrBond=bond&ClientCode=WSJ
CDS:
https://web.apps.markit.com/AppsApi/GetIndexData?indexOrBond=cds&ClientCode=WSJ
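The response from these endpoints is ordinary JSON with a chunk of HTML markup embedded as a string value. As a hedged illustration (the field names here are hypothetical, not taken from the real Markit response; inspect the actual payload to find the right key):

```ruby
require 'json'

# Hypothetical example of the kind of payload such an endpoint returns:
# JSON with a fragment of HTML table markup embedded as a string.
payload = '{"status":"ok","html":"<table><tr><td>iBoxx</td><td>99.5</td></tr></table>"}'

data = JSON.parse(payload)
puts data['html']  # the HTML fragment, ready to hand to Nokogiri
```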
Here is a quick-and-dirty outline: grab the cookie from the start page, send it along with the request for the JSON data, then parse the JSON and pass the extracted HTML to Nokogiri:
require 'rubygems'
require 'nokogiri'
require 'open-uri'
require 'json'
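The steps above can be sketched roughly as follows. This is a sketch, not a tested scraper: the start-page URL (derived from the endpoint's host) and the `'html'` JSON key are assumptions you will need to verify against the real site.

```ruby
require 'net/http'
require 'uri'
require 'json'

# Build a GET request for the data endpoint that carries the
# session cookie obtained from the start page.
def build_data_request(url, cookie)
  req = Net::HTTP::Get.new(URI(url))
  req['Cookie'] = cookie
  req
end

# Pull the embedded HTML out of the JSON response.
# The 'html' key is an assumption -- inspect the real payload.
def extract_html(json_string)
  JSON.parse(json_string)['html']
end

# Putting it together (performs real network requests when called):
def scrape_markit(url)
  uri = URI(url)
  # 1. Fetch the start page (assumed to be the host root) to get a cookie.
  start = Net::HTTP.get_response(URI("https://#{uri.host}/"))
  cookie = start['Set-Cookie'].to_s
  # 2. Fetch the JSON data with the cookie attached.
  res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
    http.request(build_data_request(url, cookie))
  end
  # 3. Hand the embedded HTML to Nokogiri for parsing.
  require 'nokogiri'
  Nokogiri::HTML(extract_html(res.body))
end
```

You could then call, say, `scrape_markit('https://web.apps.markit.com/AppsApi/GetIndexData?indexOrBond=bond&ClientCode=WSJ')` and run Nokogiri CSS selectors over the returned document.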