Consider this example:
w <- 12
f <- function(y) {
   d <- 8
   h <- function() {
      return(d*(w+y))
   }
   return(h())
}
environment(f)
ls()
## <environment: R_GlobalEnv>
## [1] "f" "w"
You get a bit more information from ls.str():
ls.str()
## f : function (y)
## w : num 12
Next, we’ll look at how w and other variables come into play within f().
(If f() is called multiple times, h() will come into existence multiple times, going out of existence each time f() returns.)
What, then, will be in h()’s environment? Well, at the time h() is created, there are the objects d and y created within f(), plus f()’s environment (w). In other words, if one function is defined within another, then that inner function’s environment consists of the environment of the outer one, plus whatever locals have been created so far within the outer one. With multiple nesting of functions, you have a nested sequence of larger and larger environments, with the “root” consisting of the top-level objects.
Let’s try out the code:
f(2)
## [1] 112
Keep in mind that h() is local to f() and invisible at the top level:
h
## Error: object 'h' not found
It’s possible (though not desirable) to deliberately allow name conflicts in this hierarchy. In our example, for instance, we could have a local variable d within h(), conflicting with the one in f(). In such a situation, the innermost environment is used first. In this case, a reference to d within h() would refer to h()’s d, not f()’s.
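To see this concretely, here is a small variant of the example, constructed just for illustration (the value 5 for h()'s own d is arbitrary):
w <- 12
f <- function(y) {
   d <- 8
   h <- function() {
      d <- 5  # this local d shadows the d in f()
      return(d*(w+y))
   }
   return(h())
}
f(2)
## [1] 70
Here the innermost d, with value 5, is the one used, so the result is 5*(12+2) rather than 8*(12+2).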
Environments created by inheritance in this manner are generally referred to by their memory locations. Here is what happened after adding a print statement to f() (using edit(), not shown here) and then running the code:
## function(y) {
## d<-8
## h<-function() {
## return(d*(w+y))
## }
## return(h())
## }
## function(y) {
## d <- 8
## h <- function() {
## return(d*(w+y))
## }
## print(environment(h))
## return(h())
## }
## [1] 112
Compare all this to the situation in which the functions are not nested:
## function(y) {
## d <- 8
## return(h())
## }
## function() {
## return(d*(w+y))
## }
Since h() is now defined at the top level, its environment is the global environment: the d and y in its body refer to top-level variables of those names (if any exist), not to the locals within f(). The result is as follows:
## [1] 136
The fix is to pass d and y as arguments:
## function(y) {
## d <- 8
## return(h(d,y))
## }
## function(dee,yyy) {
## return(dee*(w+yyy))
## }
## [1] 112
Okay, let’s look at one last variation:
f<-function(y,ftn) {
d <- 8
print(environment(ftn))
return(ftn(d,y))
}
h<-function(dee,yyy) {
return(dee*(w+yyy))
}
w <- 12
f(3,h)
## <environment: R_GlobalEnv>
## [1] 120
The environment reported for ftn is the global environment, because the function passed in, h(), was defined at the top level; that is the environment it carries with it, regardless of where it is called.
Within a function, ls() lists that function’s own local variables, and ls() with the envir argument can list the locals of the function’s caller, via parent.frame(). Here’s an example:
f<- function(y) {
d <- 8
return(h(d,y))
}
h<-function(dee,yyy) {
print(ls())
print(ls(envir=parent.frame(n=1)))
return(dee*(w+yyy))
}
f(2)
## [1] "dee" "yyy"
## [1] "d" "y"
## [1] 112
In the following example, f() writes to w and to its argument y, but those writes affect only local copies; the top-level w and the argument t passed in remain unchanged:
w <- 12
f<- function(y) {
d <- 8
w <- w + 1
y <- y - 2
print(w)
h <- function() {
return(d*(w+y))
}
return(h())
}
t <- 4
f(t)
## [1] 13
## [1] 120
w
## [1] 12
t
## [1] 4
More generally, R functions do not change their arguments, or other variables at the caller’s level, as a side effect. For instance, sort() returns a sorted copy of its argument; the argument itself is untouched unless you reassign the result:
x <- c(13,5,12)
sort(x)
## [1] 5 12 13
x
## [1] 13 5 12
x <- sort(x)
x
## [1] 5 12 13
If a function needs to produce more than one piece of output, the standard R idiom is to return the pieces together in a list. For example, here is a function that returns the indices of the odd-valued and even-valued elements of a vector:
oddsevens <- function(v) {
odds <- which(v %% 2 == 1)
evens <- which(v %% 2 == 0)
list(o=odds,e=evens)
}
Now suppose we want a function to change two variables, say x and y. One way to do this without global variables is to bundle them into a list, pass the list to the function, and have it return the modified list:
f <- function(lxxyy) {
...
lxxyy$x <- ...
lxxyy$y <- ...
return(lxxyy)
}
# set x and y
lxy$x <- ...
lxy$y <- ...
lxy <- f(lxy)
# use new x and y
... <- lxy$x
... <- lxy$y
So far, writes within a function have affected only local copies. What if you do want a function to write to variables higher up in the environment hierarchy, say at the top level? You can use the superassignment operator, <<-. Consider the following code:
two <- function(u) {
u <<- 2*u
z <- 2*z
}
x <- 1
z <- 3
u
Error: object "u" not found
two(x)
x
[1] 1
z
[1] 3
u
[1] 2
Let’s look at the impact (or not) on the three top-level variables x, z, and u:
- x: Although two()’s formal argument u was written to, the superassignment wrote to a top-level u, not to the argument, and arguments are local copies in any case; x is still 1.
- z: The assignment z <- 2*z within two() read the top-level z but created a local z; the top-level z is still 3.
- u: No variable u existed anywhere in the environment hierarchy when two() was called, so the superassignment u <<- 2*u created one at the top level, now with the value 2.
Though <<- is typically used to write to top-level variables, as in our example, technically, it does something a bit different. Use of this operator to write to a variable w will result in a search up the environment hierarchy, stopping at the first level at which a variable of that name is encountered. If none is found, then the level selected will be global. Look what happens in this little example:
f<-function() {
inc <- function() {x <<- x + 1}
x <- 3
inc()
return(x)
}
f()
[1] 4
x
Error: object 'x' not found
Here, the superassignment within inc() wrote to the first x found up the environment hierarchy, which was the x local to f(); no top-level x was ever created, hence the error.
You can also use the assign() function to write to upper-level variables. Here’s an altered version of the previous example:
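For instance, the earlier two() function might be rewritten along these lines, with assign() in place of the superassignment (a sketch):
two <- function(u) {
   # write the doubled value directly to a variable u at the top level
   assign("u", 2*u, envir = .GlobalEnv)
   z <- 2*z
}
two(x)
x
## [1] 1
u
## [1] 2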
Discrete-event simulation (DES) is widely used in business, industry, and government. The term discrete event refers to the fact that the state of the system changes only in discrete quantities, rather than changing continuously.
A typical example would involve a queuing system, say people lining up to use an ATM. Let’s define the state of our system at time t to be the number of people in the queue at that time. The state changes only by +1, when someone arrives, or by −1, when a person finishes an ATM transaction. This is in contrast to, for instance, a simulation of weather, in which temperature, barometric pressure, and so on change continuously.
This will be one of the longer, more involved examples in this book. But it exemplifies a number of important issues in R, especially concerning global variables, and will serve as an example when we discuss the appropriate use of global variables in the next section. Your patience will turn out to be a good investment of time. (It is not assumed here that the reader has any prior background in DES.)
Central to DES operation is maintenance of the event list, which is simply a list of scheduled events. This is a general DES term, so the word list here does not refer to the R data type. In fact, we’ll represent the event list by a data frame.
In the ATM example, for instance, the event list might at some point in the simulation look like this:
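For instance, partway through a run of the ATM simulation, the event list might hold an upcoming arrival and a pending service completion. Using the column names employed by the library code below (the times shown are purely illustrative), a snapshot might look like this:
##   evnttime evnttype
## 1      2.9     arrv
## 2      3.5  srvdone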
Since the earliest event must always be handled next, the simplest form of coding the event list is to store it in time order, as in the example. (Readers with computer science background might notice that a more efficient approach might be to use some kind of binary tree for storage.) Here, we will implement it as a data frame, with the first row containing the earliest scheduled event, the second row containing the second earliest, and so on.
The main loop of the simulation repeatedly iterates. Each iteration pulls the earliest event off of the event list, updates the simulated time to reflect the occurrence of that event, and reacts to this event. The latter action will typically result in the creation of new events. For example, if a customer arrival occurs when the queue is empty, that customer’s service will begin—one event triggers setting up another. Our code must determine the customer’s service time, and then it will know the time at which service will be finished, which is another event that must be added to the event list.
One of the oldest approaches to writing DES code is the event-oriented paradigm. Here, the code to handle the occurrence of one event directly sets up another event, reflecting our preceding discussion.
As an example to guide your thinking, consider the ATM situation. At time 0, the queue is empty. The simulation code randomly generates the time of the first arrival, say 2.3. At this point, the event list is simply (2.3,“arrival”). This event is pulled off the list, simulated time is updated to 2.3, and we react to the arrival event as follows:
- The queue for the ATM is empty, so we start the service by randomly generating the service time—say it is 1.2 time units. Then the completion of service will occur at simulated time 2.3 + 1.2 = 3.5.
- We add the completion-of-service event to the event list, which will now consist of (3.5,“service done”).
- We also generate the time to the next arrival, say 0.6, which means the arrival will occur at time 2.9. Now the event list consists of (2.9,“arrival”) and (3.5,“service done”).
The code consists of a generally applicable library, plus an example application that simulates an M/M/1 queue: a single-server queue in which both interarrival times and service times are exponentially distributed.
Here is a summary of the library functions:
- schedevnt(): Inserts a newly created event into the event list.
- getnextevnt(): Removes and returns the earliest event in the event list, so that it can be processed.
- dosim(): Contains the main simulation loop; it repeatedly calls getnextevnt(), updates the current simulated time, and calls the application-supplied reaction function, for the duration of the simulation.
The code uses the following application-specific functions:
- initglbls(): Initializes the application-specific global variables.
- reactevnt(): Takes the proper actions when an event occurs, typically generating new events as a result.
- prntrslts(): Prints the application-specific results of the simulation.
# DES.R: R routines for discrete-event simulation (DES)
# each event will be represented by a data frame row consisting of the
# following components: evnttime, the time the event is to occur;
# evnttype, a character string for the programmer-defined event type;
# optional application-specific components, e.g.
# the job's arrival time in a queuing app
# a global list named "sim" holds the events data frame, evnts, and
# current simulated time, currtime; there is also a component dbg, which
# indicates debugging mode
# forms a row for an event of type evntty that will occur at time
# evnttm; see comments in schedevnt() regarding appin
evntrow <- function(evnttm,evntty,appin=NULL) {
rw <- c(list(evnttime=evnttm,evnttype=evntty),appin)
return(as.data.frame(rw))
}
# insert event with time evnttm and type evntty into event list;
# appin is an optional set of application-specific traits of this event,
# specified in the form of a list with named components
schedevnt <- function(evnttm,evntty,appin=NULL) {
newevnt <- evntrow(evnttm,evntty,appin)
# if the event list is empty, set it to consist of evnt and return
if (is.null(sim$evnts)) {
sim$evnts <<- newevnt
return()
}
# otherwise, find insertion point
inspt <- binsearch((sim$evnts)$evnttime,evnttm)
# now "insert," by reconstructing the data frame; we find what
# portion of the current matrix should come before the new event and
# what portion should come after it, then string everything together
before <- if (inspt == 1) NULL else sim$evnts[1:(inspt-1),]
nr <- nrow(sim$evnts)
after <- if (inspt <= nr) sim$evnts[inspt:nr,] else NULL
sim$evnts <<- rbind(before,newevnt,after)
}
# binary search of insertion point of y in the sorted vector x; returns
# the position in x before which y should be inserted, with the value
# length(x)+1 if y is larger than x[length(x)]; could be changed to C
# code for efficiency
binsearch <- function(x,y) {
n <- length(x)
lo <- 1
hi <- n
while(lo+1 < hi) {
mid <- floor((lo+hi)/2)
if (y == x[mid]) return(mid)
if (y < x[mid]) hi <- mid else lo <- mid
}
if (y <= x[lo]) return(lo)
if (y < x[hi]) return(hi)
return(hi+1)
}
# start to process next event (second half done by application
# programmer via call to reactevnt())
getnextevnt <- function() {
head <- sim$evnts[1,]
# delete head
if (nrow(sim$evnts) == 1) {
sim$evnts <<- NULL
} else sim$evnts <<- sim$evnts[-1,]
return(head)
}
# simulation body
# arguments:
# initglbls: application-specific initialization function; inits
# globals to statistical totals for the app, etc.; records apppars
# in globals; schedules the first event
# reactevnt: application-specific event handling function, coding the
# proper action for each type of event
# prntrslts: prints application-specific results, e.g. mean queue
# wait
# apppars: list of application-specific parameters, e.g.
# number of servers in a queuing app
# maxsimtime: simulation will be run until this simulated time
# dbg: debug flag; if TRUE, sim will be printed after each event
dosim <- function(initglbls,reactevnt,prntrslts,maxsimtime,apppars=NULL, dbg=FALSE) {
sim <<- list()
sim$currtime <<- 0.0 # current simulated time
sim$evnts <<- NULL # events data frame
sim$dbg <<- dbg
initglbls(apppars)
while(sim$currtime < maxsimtime) {
head <- getnextevnt()
sim$currtime <<- head$evnttime # update current simulated time
reactevnt(head) # process this event
if (dbg) print(sim)
}
prntrslts()
}
The following is an example application of the code. Again, the simulation models an M/M/1 queue, which is a single-server queuing system in which service times and times between job arrivals are exponentially distributed.
# DES application: M/M/1 queue, arrival rate 0.5, service rate 1.0
# the call
# dosim(mm1initglbls,mm1reactevnt,mm1prntrslts,10000.0,
# list(arrvrate=0.5,srvrate=1.0))
# should return a value of about 2 (may take a while)
# initializes global variables specific to this app
mm1initglbls <- function(apppars) {
mm1glbls <<- list()
# simulation parameters
mm1glbls$arrvrate <<- apppars$arrvrate
mm1glbls$srvrate <<- apppars$srvrate
# server queue, consisting of arrival times of queued jobs
mm1glbls$srvq <<- vector(length=0)
# statistics
mm1glbls$njobsdone <<- 0 # jobs done so far
mm1glbls$totwait <<- 0.0 # total wait time so far
# set up first event, an arrival; the application-specific data for
# each event will consist of its arrival time, which we need to
# record in order to later calculate the job's residence time in the
# system
arrvtime <- rexp(1,mm1glbls$arrvrate)
schedevnt(arrvtime,"arrv",list(arrvtime=arrvtime))
}
# application-specific event processing function called by dosim()
# in the general DES library
mm1reactevnt <- function(head) {
if (head$evnttype == "arrv") { # arrival
# if server free, start service, else add to queue (added to queue
# even if empty, for convenience)
if (length(mm1glbls$srvq) == 0) {
mm1glbls$srvq <<- head$arrvtime
srvdonetime <- sim$currtime + rexp(1,mm1glbls$srvrate)
schedevnt(srvdonetime,"srvdone",list(arrvtime=head$arrvtime))
} else mm1glbls$srvq <<- c(mm1glbls$srvq,head$arrvtime)
# generate next arrival
arrvtime <- sim$currtime + rexp(1,mm1glbls$arrvrate)
schedevnt(arrvtime,"arrv",list(arrvtime=arrvtime))
} else { # service done
# process job that just finished
# do accounting
mm1glbls$njobsdone <<- mm1glbls$njobsdone + 1
mm1glbls$totwait <<-
mm1glbls$totwait + sim$currtime - head$arrvtime
# remove from queue
mm1glbls$srvq <<- mm1glbls$srvq[-1]
# more still in the queue?
if (length(mm1glbls$srvq) > 0) {
# schedule new service
srvdonetime <- sim$currtime + rexp(1,mm1glbls$srvrate)
schedevnt(srvdonetime,"srvdone",list(arrvtime=mm1glbls$srvq[1]))
}
}
}
mm1prntrslts <- function() {
print("mean wait:")
print(mm1glbls$totwait/mm1glbls$njobsdone)
}
To see how all this works, take a look at the M/M/1 application code. There, we have set up a global variable, mm1glbls, which contains variables relevant to the M/M/1 code, such as mm1glbls$totwait, the running total of the wait time of all jobs simulated so far. As you can see, the superassignment operator is used to write to such variables, as in this statement:
mm1glbls$totwait <<- mm1glbls$totwait + sim$currtime - head$arrvtime
Let’s look at mm1reactevnt() to see how the simulation works, focusing on the code portion in which a “service done” event is handled.
} else { # service done
# process job that just finished
# do accounting
mm1glbls$njobsdone <<- mm1glbls$njobsdone + 1
mm1glbls$totwait <<-
mm1glbls$totwait + sim$currtime - head$arrvtime
# remove this job from queue
mm1glbls$srvq <<- mm1glbls$srvq[-1]
# more still in the queue?
if (length(mm1glbls$srvq) > 0) {
# schedule new service
srvdonetime <- sim$currtime + rexp(1,mm1glbls$srvrate)
schedevnt(srvdonetime,"srvdone",list(arrvtime=mm1glbls$srvq[1]))
}
}
First, this code does some bookkeeping, updating the totals of number of jobs completed and wait time. It then removes this newly completed job from the server queue. Finally, it checks if there are still jobs in the queue and, if so, calls schedevnt() to arrange for the service of the one at the head.
What about the DES library code itself? First note that the simulation state, consisting of the current simulated time and the event list, has been placed in an R list structure, sim. This was done in order to encapsulate all the main information into one package, which in R, typically means using a list. The sim list has been made a global variable.
As mentioned, a key issue in writing a DES library is the event list. This code implements it as a data frame, sim$evnts. Each row of the data frame corresponds to one scheduled event, with information about the event time, a character string representing the event type (say arrival or service completion), and any application-specific data the programmer wishes to add. Since each row consists of both numeric and character data, it was natural to choose a data frame for representing this event list. The rows of the data frame are in ascending order of event time, which is contained in the first column.
The main loop of the simulation is in dosim() in the DES library code:
while(sim$currtime < maxsimtime) {
head <- getnextevnt()
sim$currtime <<- head$evnttime # update current simulated time
reactevnt(head) # process this event
if (dbg) print(sim)
}
Within this loop, getnextevnt() removes the earliest pending event from the event list, and the application-supplied reactevnt() typically schedules new events via schedevnt(). Consider this line in schedevnt():
inspt <- binsearch((sim$evnts)$evnttime,evnttm)
Here, we wish to insert a newly created event into the event list, and the fact that we are working with a sorted vector of event times enables the use of a fast binary search. (As noted in the comments in the code, though, this really should be implemented in C for good performance.) A later line in schedevnt() is a good example of the use of rbind():
sim$evnts <<- rbind(before,newevnt,after)
Now, we have extracted the events in the event list whose times are earlier than that of newevnt and stored them in before. We also constructed a similar set in after for the events whose times are later than that of newevnt. We then use rbind() to put all these together in the proper order.
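As for the binary search itself, a quick check with made-up times shows how the insertion point is computed (binsearch() here is the function from the library code above); an event at time 2.9 belongs just before the third element of the sorted times 1.2, 2.3, 3.5:
binsearch(c(1.2,2.3,3.5), 2.9)
## [1] 3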
Now, should code like the DES library use global variables at all? To weigh the question, recall the pattern shown earlier for having a function change several variables without globals:
f <- function(lxxyy) { # lxxyy is a list containing x and y
...
lxxyy$x <- ...
lxxyy$y <- ...
return(lxxyy)
}
# set x and y
lxy$x <- ...
lxy$y <- ...
lxy <- f(lxy)
# use new x and y
... <- lxy$x
... <- lxy$y
As noted earlier, this code might be a bit unwieldy, especially if x and y are themselves lists. By contrast, here is an alternate pattern that uses globals:
f <- function() {
...
x <<- ...
y <<- ...
}
# set x and y
x <- ...
y <- ...
f() # x and y are changed in here
# use new x and y
... <- x
... <- y
How might this apply to the DES example? There, we could avoid making sim a global by creating it as a local within dosim() and having the functions that modify it take it as an argument and return the updated copy. For instance, the line in dosim() that creates the list,
sim <<- list()
would change to this:
sim <- list()
We would then need to add a line like the following, returning the locally updated copy of sim, to our application-specific function mm1reactevnt():
return(sim)
We could do something similar with mm1glbls, placing a variable called, say, appvars as a local within dosim(). However, if we did this with sim as well, we would need to place them together in a list so that both would be returned, as in our earlier example function f(). We would then have the lists-within-lists clutter described earlier, or rather lists within lists within lists in this case.
On the other hand, critics of the use of global variables counter that the simplicity of the code comes at a price. They worry that it may be difficult during debugging to track down the locations at which a global variable changes value, since such a change could occur anywhere in the program. This seems to be less of a concern in view of modern text editors and integrated development tools (the original article calling for avoiding the use of globals was published in 1970!), which can be used to find all instances of a variable. However, it should be taken into consideration.
Another concern raised by critics involves situations in which a function is called in several unrelated parts of the overall program, using different values. For example, consider using our example f() function in different parts of our program, each call with its own values of x and y, rather than just a single value of each, as assumed earlier. This could be solved by setting up vectors of x and y values, with one element for each instance of f() in your program. You would lose some of the simplicity of using globals, though.
The above issues apply generally, not just to R. However, for R there is an additional concern for globals at the top level, as the user will typically have lots of variables there. The danger is that code that uses globals may accidentally overwrite an unrelated variable with the same name. This can be avoided easily, of course, by simply choosing long, very application-specific names for globals in your code. But a compromise is also available in the form of environments, such as the following for the DES example above. Within dosim(), the line
sim <<- list()
would be replaced by something like this:
assign("simenv", new.env(), envir=.GlobalEnv)
This would create a new environment, pointed to by simenv at the top level. It would serve as a package in which to encapsulate our globals. We would access them via get() and assign(). For instance, the lines
if (is.null(sim$evnts)) {
   sim$evnts <<- newevnt
in schedevnt() would become something like this:
if (is.null(get("evnts",envir=simenv))) {
   assign("evnts", newevnt, envir=simenv)
Yes, this is cluttered too, but at least it is not complex like lists of lists of lists. And it does protect against unwittingly writing to an unrelated variable the user has at the top level. Using the superassignment operator still yields the least cluttered code, but this compromise is worth considering. As usual, there is no single style of programming that produces the best solution in all applications. The globals approach is another option to consider for your programming tool kit.
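Before moving on, here is a tiny self-contained sketch of this environment-based approach (the variable names here are illustrative, not taken from the DES code):
simenv <- new.env()
assign("currtime", 0.0, envir = simenv)
# read the value, update it, and write it back
assign("currtime", get("currtime", envir = simenv) + 2.5, envir = simenv)
get("currtime", envir = simenv)
## [1] 2.5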
Recall that an R closure consists of a function’s arguments and body together with its environment at the time of the call. The fact that the environment is included is exploited in a type of programming that uses a feature also known (in a slight overloading of terminology) as a closure. A closure consists of a function that sets up a local variable and then creates another function that accesses that variable. This is a very abstract description, so let’s go right to an example.
counter<-function () {
ctr <- 0
f <- function() {
ctr <<- ctr + 1
cat("this count currently has value",ctr,"\n")
}
return(f)
}
Let’s try this out before going into the internal details:
c1 <- counter()
c2 <- counter()
c1
## function() {
##    ctr <<- ctr + 1
##    cat("this count currently has value",ctr,"\n")
## }
## <environment: 0x8d445c0>
c2
## function() {
##    ctr <<- ctr + 1
##    cat("this count currently has value",ctr,"\n")
## }
## <environment: 0x8d447d4>
c1()
## this count currently has value 1
c1()
## this count currently has value 2
c2()
## this count currently has value 1
c2()
## this count currently has value 2
c2()
## this count currently has value 3
c1()
## this count currently has value 3
Here, we called counter() twice, assigning the results to c1 and c2. As expected, those two variables will consist of functions, specifically copies of f(). However, f() accesses a variable ctr through the superassignment operator, and that variable will be the one of that name that is local to counter(), as it is the first one up the environment hierarchy. It is part of the environment of f() and, as such, is packaged in what is returned to the caller of counter().
The key point is that each time counter() is called, the variable ctr will be in a different environment (in the example, the environments were at memory addresses 0x8d445c0 and 0x8d447d4). In other words, different calls to counter() will produce physically different ctrs. The result, then, is that our functions c1() and c2() serve as completely independent counters, as seen in the example, where we invoke each of them a few times.
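As a quick extra check of that independence (not part of the session above), we can compare the two closures' environments directly:
identical(environment(c1), environment(c2))
## [1] FALSE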
Once a mathematics PhD student whom I knew to be quite bright, but who had little programming background, sought my advice on how to write a certain function. I quickly said, “You don’t even need to tell me what the function is supposed to do. The answer is to use recursion.” Startled, he asked what recursion is. I advised him to read about the famous Towers of Hanoi problem. Sure enough, he returned the next day, reporting that he was able to solve his problem in just a few lines of code, using recursion. Obviously, recursion can be a powerful tool. Well then, what is it?
A recursive function calls itself. If you have not encountered this concept before, it may sound odd, but the idea is actually simple. In rough terms, the idea is this: To solve a problem of type X by writing a recursive function f():
1. Break the original problem of type X into one or more smaller problems of type X.
2. Within f(), call f() on each of the smaller problems.
3. Within f(), piece together the results of step 2 to solve the original problem.
A classic example is Quicksort, an algorithm used to sort a vector of numbers from smallest to largest. For instance, suppose we wish to sort the vector (5,4,12,13,3,8,88). We first compare everything to the first element, 5, to form two subvectors: one consisting of the elements less than 5 and the other consisting of the elements greater than or equal to 5. That gives us subvectors (4,3) and (12,13,8,88). We then call the function on the subvectors, returning (3,4) and (8,12,13,88). We string those together with the 5, yielding (3,4,5,8,12,13,88), as desired. R’s vector-filtering capability and its c() function make implementation of Quicksort quite easy.
qs <- function(x) {
if (length(x) <= 1) return(x)
pivot <- x[1]
therest <- x[-1]
sv1 <- therest[therest < pivot]
sv2 <- therest[therest >= pivot]
sv1 <- qs(sv1)
sv2 <- qs(sv2)
return(c(sv1,pivot,sv2))
}
Note carefully the termination condition:
if (length(x) <= 1) return(x)
Without this, the function would keep calling itself repeatedly on empty vectors, executing forever. (Actually, the R interpreter would eventually refuse to go any further, but you get the idea.)
Sounds like magic? Recursion certainly is an elegant way to solve many problems. But recursion has two potential drawbacks:
- It’s fairly abstract. I knew that the graduate student, as a fine mathematician, would take to recursion like a fish to water, because recursion is really just the inverse of proof by mathematical induction. But many programmers find it tough.
- Recursion is very lavish in its use of memory, which may be an issue in R if applied to large problems.
Recall the following example:
x <- c(1,2,4)
names(x)
## NULL
names(x) <- c("a","b","ab")
names(x)
## [1] "a" "b" "ab"
x
## a b ab
## 1 2 4
Consider one line in particular:
names(x) <- c("a","b","ab")
Looks totally innocuous, eh? Well, no. In fact, it’s outrageous! How on Earth can we possibly assign a value to the result of a function call? The resolution to this odd state of affairs lies in the R notion of replacement functions. The preceding line of R code actually is the result of executing the following:
x <- "names<-"(x,value=c("a","b","ab"))
No, this isn’t a typo. The call here is indeed to a function named names<-(). (We need to insert the quotation marks due to the special characters involved.)
Any assignment statement in which the left side is not just an identifier (meaning a variable name) is considered a replacement function call. When encountering this:
g(u) <- v
R will try to execute this:
u <- "g<-"(u,value=v)
Note the “try” in the preceding sentence. The statement will fail if you have not previously defined g<-(). Note that the replacement function has one more argument than the original function g(), a named argument value, for reasons explained in this section. In earlier chapters, you’ve seen this innocent-looking statement:
x[3] <- 8
The left side is not a variable name, so it must be a replacement function, and indeed it is. Subscripting operations are functions: "["() is used for reading vector elements, and "[<-"() is used to write. Here’s an example:
x <- c(8,88,5,12,13)
x
## [1] 8 88 5 12 13
x[3]
## [1] 5
"["(x,3)
## [1] 5
x <- "[<-"(x,2:3,value=c(99,100))
x
## [1] 8 99 100 12 13
Again, that complicated call in this line:
x <- "[<-"(x,2:3,value=c(99,100))
is simply performing what happens behind the scenes when we execute this:
x[2:3] <- c(99,100)
We can easily verify what’s occurring like so:
x <- c(8,88,5,12,13)
x[2:3] <- c(99,100)
x
## [1] 8 99 100 12 13
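You can define replacement functions of your own. As a small illustration (the function name and behavior here are invented for this sketch), the following lets first(x) <- value replace the first element of a vector:
"first<-" <- function(u,value) {
   u[1] <- value   # overwrite the first element with the supplied value
   return(u)
}
x <- c(8,88,5)
first(x) <- 3
x
## [1] 3 88 5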
If you are writing a short function that’s needed only temporarily, a quick-and-dirty way to do this is to write it on the spot, right there in your interactive terminal session. Here’s an example:
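Say we need a throwaway function that simply adds 1 to its argument (the particular function here is just an illustration):
g <- function(x) {
   return(x+1)
}
g(2)
## [1] 3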
This approach obviously is infeasible for longer, more complex functions. Now, let’s look at some better ways to compose R code.
You can use a text editor such as Vim, Emacs, or even Notepad, or an editor within an integrated development environment (IDE) to write your code in a file and then read it into R from the file. To do the latter, you can use R’s source() function. For instance, suppose we have functions f() and g() in a file xyz.R. In R, we give this command:
source("xyz.R")
This reads f() and g() into R as if we had typed them using the quick-and-dirty way shown at the beginning of this section. If you don’t have much code, you can cut and paste from your editor window to your R window. Some general-purpose editors have special plug-ins available for R, such as ESS for Emacs and Vim-R for Vim. There are also IDEs for R, such as the commercial one by Revolution Analytics, and open source products such as StatET, JGR, Rcmdr, and RStudio.
A nice implication of the fact that functions are objects is that you can edit functions from within R’s interactive mode. Most R programmers do their code editing with a text editor in a separate window, but for a small, quick change, the edit() function can be handy.
For instance, we could edit the function f1() by typing this:
f1 <- edit(f1)
This opens the default editor on the code for f1(), which we could then edit and assign back to f1(). Or, we might be interested in having a function f2() very similar to f1() and thus could execute the following:
f2 <- edit(f1)
This gives us a copy of f1() to start from. We would do a little editing and then save to f2(), as seen in the preceding command.
You can invent your own operations! Just write a function whose name begins and ends with %, with two arguments of a certain type, and a return value of that type. For example, here’s a binary operation that adds double the second operand to the first (the operator name here, %a2b%, is arbitrary):
"%a2b%" <- function(a,b) return(a+2*b)
3 %a2b% 5
## [1] 13