I am trying to scrape all the dossiers from two pages on the website of the French lower house of parliament. The pages cover the legislatures from 2002-2012 and list fewer than 1,000 dossiers each.
To do this, I call getURL() in this loop:
b <- "http://www.assemblee-nationale.fr" # base l <- c("12","13") # legislature id lapply(l, FUN = function(x) { print(data <- paste(b, x, "documents/index-dossier.asp", sep = "/")) # scrape data <- getURL(data); data <- readLines(tc <- textConnection(data)); close(tc) data <- unlist(str_extract_all(data, "dossiers/[[:alnum:]_-]+.asp")) data <- paste(b, x, data, sep = "/") data <- getURL(data) write.table(data,file=n <- paste("raw_an",x,".txt",sep="")); str(n) })
Is there a way to optimize the getURL() calls here? It seems I cannot use simultaneous downloading by passing the async = TRUE parameter: it fails with the same error every time:
    Error in function (type, msg, asError = TRUE) : Failed to connect to 0.0.0.12: No route to host
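For reference, the concurrent call I am attempting looks roughly like this (a minimal sketch; the two dossier paths below are placeholders standing in for the vector of full dossier URLs built inside the loop above):

    library(RCurl)

    # placeholder URLs; in the real loop, 'data' is the character vector of
    # full dossier URLs extracted from the index page
    data <- paste("http://www.assemblee-nationale.fr/12",
                  c("dossiers/placeholder_1.asp", "dossiers/placeholder_2.asp"),
                  sep = "/")

    # passing a character vector of URLs with async = TRUE is supposed to
    # download them concurrently; this is the call that produces the error above
    data <- getURL(data, async = TRUE)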
Any ideas? Thanks!