the big vowel bang

Sometimes a conversation about (Greek) vowels can be artistically inspiring!

Here is the R code for that:

# using matplot
# Note: x was not defined in the original post; any fine grid on the
# unit interval works well for these curves.
x <- seq(0, 1, length.out = 200)

par(bg = "black", col.axis = "white", col.main = "white")

matplot(x, outer(x, 1:5, function(x, k) sin(k * pi * x^2)),
        pch = c("α", "ε", "η", "ι", "ο"),
        type = c("b", "p", "o"),
        main = "the big vowel bang", bg = "red")



Manipulating Files in R: Renaming

R is not just for statistics: it provides many other functions that can make life much easier. In this post I present a solution for renaming a batch of files.


I wanted to remove the substrings 'TextGrid' and 'Sound' from a list of Praat-generated filenames. The filenames had the following form:



# Using R to rename files

path_origin <- "C:\\Sounds\\"

files <- list.files(path_origin) # get the bare filenames

xfiles <- paste(path_origin, files, sep = "") # prepend the full path (sep = "" avoids the space that paste() would otherwise insert between the two strings)

# Rename the files with sub(): the first argument is the pattern to match,
# the second is its replacement, the third is the vector of names.
# This simply removes "Sound " from each name. Both file.rename() and sub()
# are vectorised, so no sapply() loop is needed.
file.rename(xfiles, sub("Sound ", "", xfiles))



You may use this solution to change any part of a filename, or its extension, by choosing the appropriate sub() pattern and replacement.
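For instance, here is a minimal sketch of changing the extension rather than a prefix. The filenames below are hypothetical, purely for illustration:

```r
# Hypothetical filenames, just for illustration
old_names <- c("Sound one.wav", "Sound two.wav")

# remove the "Sound " prefix, as above
# (fixed = TRUE treats the pattern as a literal string, not a regex)
stripped <- sub("Sound ", "", old_names, fixed = TRUE)

# change the extension: "\\.wav$" anchors the match at the end of the name
new_names <- sub("\\.wav$", ".mp3", stripped)
new_names # "one.mp3" "two.mp3"
```

The `$` anchor matters: without it, a name such as `my.wav.backup.wav` would have its first `.wav` replaced instead of its last.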

! Be careful with the choice of the path and the sub() parameters; the wrong pattern can match more than you intend and rename files unexpectedly.

More Information

R Library: Advanced functions

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Playing with words (just for fun)!


A wordcloud (or tag cloud) is a visual representation of text data: frequent words are drawn bigger and infrequent ones smaller. This post's graph was made in R using the wordcloud package. An article from the New York Times is the input for the graph, but you may use your own corpus. Annotated code follows so you can see its logic:

# using the tm and wordcloud packages
library(tm)
library(wordcloud)

data(crude) # crude is a corpus of 20 text documents shipped with tm

crude <- tm_map(crude, removePunctuation) # normalisation

crude <- tm_map(crude, function(x) removeWords(x, stopwords()))

tdm <- TermDocumentMatrix(crude) # create a term-document matrix (924 terms and 20 documents)

m <- as.matrix(tdm) # convert tdm to an ordinary matrix

v <- sort(rowSums(m), decreasing = TRUE) # total frequency of each term, sorted

d <- data.frame(word = names(v), freq = v) # create a data frame

wordcloud(d$word, d$freq) # takes the 'word' column along with the 'freq' column from the data frame and draws the wordcloud
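To use your own corpus instead of crude, you can build one from a character vector. A minimal sketch, mirroring the steps above (the sentences here are placeholders, to be replaced by your own documents):

```r
library(tm)
library(wordcloud)

# placeholder text; substitute your own documents here
my_text <- c("R makes word clouds easy",
             "word clouds draw frequent words in bigger type",
             "frequent words dominate the cloud")

my_corpus <- Corpus(VectorSource(my_text)) # one document per vector element
my_corpus <- tm_map(my_corpus, removePunctuation)
my_corpus <- tm_map(my_corpus, function(x) removeWords(x, stopwords()))

tdm <- TermDocumentMatrix(my_corpus)
v <- sort(rowSums(as.matrix(tdm)), decreasing = TRUE)
wordcloud(names(v), v, min.freq = 1) # min.freq = 1 keeps even words that occur once
```

With only a few short documents most frequencies are 1, so lowering `min.freq` (the default is 3) is needed for anything to be drawn.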