Honestly, the best way to deal with this situation is to edit the source file before reading it into R. I cannot imagine any reason to avoid that, or to justify writing fancy R code to remove parentheses after the data have already been read in.
Open your text editor of choice and have it remove all parentheses. Save the file (to a new file if necessary), then open the new file with read.csv .
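If you don't have a text editor handy, the same one-time cleanup can be scripted from R before reading. A minimal sketch, assuming the raw file is called data.csv (a hypothetical name):

    # read the raw lines, strip every "(" and ")", write a cleaned copy
    raw <- readLines("data.csv")
    writeLines(gsub("[()]", "", raw), "data_clean.csv")
    foo <- read.csv("data_clean.csv")

This leaves the original file intact, in line with the save-to-a-new-file advice above.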
But if you need to do the cleanup on the data frame after reading it in:

    foo <- read.csv(your_file)
    # strip both parentheses in one pass, then convert the column to numeric
    foo[, 2] <- gsub('[()]', '', foo[, 2])
    foo[, 2] <- as.numeric(foo[, 2])
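For illustration, here is a self-contained run with made-up data (the file name and contents are hypothetical, just to show the conversion):

    # hypothetical sample: the second column arrives wrapped in parentheses
    writeLines(c('a,(3.2)', 'b,(4.7)'), 'example.csv')
    foo <- read.csv('example.csv', header = FALSE)
    foo[, 2] <- as.numeric(gsub('[()]', '', foo[, 2]))
    str(foo)  # V2 should now be numeric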
EDIT: I put both approaches through a speed test:
    paren1 <- function(file) {
        foo <- read.csv(file)
        # one gsub with a character class replaces the two separate calls
        foo[, 2] <- as.numeric(gsub('[()]', '', foo[, 2]))
        return(invisible(foo))
    }

    setClass("strippedL")
    setClass("strippedR")
    setAs("character", "strippedL",
          function(from) as.character(gsub("(", "", from, fixed = TRUE)))
    setAs("character", "strippedR",
          function(from) as.numeric(gsub(")", "", from, fixed = TRUE)))

    paren2 <- function(file) {
        foo <- read.table(file, sep = ",", header = FALSE,
                          colClasses = c("strippedL", "strippedR"))
        return(invisible(foo))
    }

    library(microbenchmark)
    # my "paren.txt" has 860 lines in it
    microbenchmark(paren1('paren.txt'), paren2('paren.txt'))

    Unit: milliseconds
                    expr      min       lq   median       uq      max neval
     paren1("paren.txt") 3.341024 3.461614 3.486416 3.514639 4.060715   100
     paren2("paren.txt") 2.164631 2.251439 2.285007 2.322211 5.681836   100
So, Ananda's solution is noticeably faster: converting the columns during the read via colClasses avoids a second pass over the data frame. Okay :-)