The following dataset was scraped from VentureStores.com via the Internet Archive.

For this business intelligence project I wanted to scrape the addresses of a defunct retail chain known as “Venture” between 1996 and 1998. Venture was a popular discount retailer, comparable in many ways to TJ Maxx or Ross, but with more variety and multiple departments. It had stores across the Midwest and Mid-South, and was known for its iconic black and white branding. Unfortunately, Venture declared bankruptcy in 1998 and was liquidated by the end of the year. Its history has been all but lost. This project is an attempt to map the final years of Venture and draw business insights about its real estate holdings and customer base.

For more info on Venture, check out this Wikipedia article.


Step 1: Explore the Internet Archive

In order to recreate the store list for this long-defunct retailer, I needed to find a data source where I could begin my extraction. I had two options: (1) I could find an old retail catalog or stock report and try to extract information from it using the pdftools package, or (2) I could find an archived internet page from around the time period I was interested in and scrape it using a package like rvest.

Fortunately, I was able to locate an archived website for the chain via the Internet Archive. The oldest page that they had archived was from January 1998, just 6 months before the company’s liquidation.

Within the site was a store locator page that subdivided the stores by state. This would be where I began my scraping procedure.

Step 2: Scraping State-Level Store Directories with rvest

library(rvest) #for web scraping
library(stringr) #for string manipulation

For this project I decided to use two key packages: stringr for text manipulation and rvest for web scraping.

venture_link <- "https://web.archive.org/web/19980111103922/http://venturestores.com/locator.htm"
locator_page <- read_html(venture_link)

The first part of scraping this page involved saving its URL and then using the read_html() function to store the parsed page in a variable called locator_page.

The locator page of this particular website was a parent directory of state-level child directories. It featured a crude image map of the United States, with embedded “area” elements leading to subpages for the individual state directories. In order to scrape all the store locations, I first had to scrape the individual state directories from this main locator page.

States <- locator_page %>% html_nodes("area") %>% html_attr("href") %>% paste("https://web.archive.org/web/19980111103922/http:/venturestores.com/", ., sep = "")

In order to scrape the state directories, I employed the above pipe. Using locator_page as my input, I used html_nodes() to select the elements embedded in area tags, then used html_attr() to extract only the links (the “href” attributes). Finally, I rebuilt the full file path for each link using paste().
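
To make that concrete, here is a toy illustration of how an image map’s area tags expose their links as href attributes. The HTML snippet and the toy_map name are made up for demonstration and are not taken from the archived page:

toy_map <- read_html('<map><area href="states/kansas.htm"><area href="states/iowa.htm"></map>') #hypothetical snippet
toy_map %>% html_nodes("area") %>% html_attr("href")
# yields "states/kansas.htm" "states/iowa.htm"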

Here is a look at what that left me with:

States
##  [1] "https://web.archive.org/web/19980111103922/http:/venturestores.com/locator.htm"        
##  [2] "https://web.archive.org/web/19980111103922/http:/venturestores.com/career.htm"         
##  [3] "https://web.archive.org/web/19980111103922/http:/venturestores.com/talk.htm"           
##  [4] "https://web.archive.org/web/19980111103922/http:/venturestores.com/locator.htm"        
##  [5] "https://web.archive.org/web/19980111103922/http:/venturestores.com/profile.htm"        
##  [6] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/kansas.htm"  
##  [7] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/oklahoma.htm"
##  [8] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/texas.htm"   
##  [9] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/arkansas.htm"
## [10] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/kentucky.htm"
## [11] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/indiana.htm" 
## [12] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/illinois.htm"
## [13] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/iowa.htm"    
## [14] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/missouri.htm"
## [15] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/texas.htm"   
## [16] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/oklahoma.htm"
## [17] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/kentucky.htm"
## [18] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/indiana.htm" 
## [19] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/iowa.htm"    
## [20] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/missouri.htm"
## [21] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/arkansas.htm"
## [22] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/kansas.htm"  
## [23] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/illinois.htm"

Clearly, there were still links contained within my States variable that did not lead to state directories. I also noticed that each state directory appeared twice, due to the way the locator page was structured.

States <- States[6:14]

Fortunately, it was easy to subset out the data that I wanted, leaving me with the following links to state directories.

States
## [1] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/kansas.htm"  
## [2] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/oklahoma.htm"
## [3] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/texas.htm"   
## [4] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/arkansas.htm"
## [5] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/kentucky.htm"
## [6] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/indiana.htm" 
## [7] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/illinois.htm"
## [8] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/iowa.htm"    
## [9] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/missouri.htm"

The inclusion of a state directory for Texas was a great hidden find. Even though the web developers had removed Texas from their map, since the company’s stores in that state had been shut down in 1996, they forgot to remove the embedded link to the Texas state directory from their HTML document.
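
As an aside, subsetting by position works, but it is brittle if the page layout ever changes. A pattern-based alternative is sketched below (States_alt is just an illustrative name); applied to the original 23-link vector before subsetting, it would yield the same nine state links, since only the state directories contain “states/” in their paths:

States_alt <- unique(grep("states/", States, value = TRUE)) #keep the state links, drop the duplicates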

With the Venture state directories now extracted from the html, I was able to move on to scraping city level directories.

Step 3: Scraping City-Level Directories

Just like the original locator page I started with, each state-level directory also held a number of child directories featuring cities within each state. Unlike my original web scrape, I now had to scrape multiple web pages to extract the next level of directories.

Though I could have done this one page at a time, I decided to save myself time and write a function to do the work for me.

# Given a state directory link, scrape the image-map "area" links and rebuild
# each relative href into a full Internet Archive URL.
State_Scraper = function(state_link) {
  state_directory = read_html(state_link) #download and parse the state page
  state_sub = state_directory %>% html_nodes("area") #select the <area> elements
  state_sub2 = state_sub %>% html_attr("href") %>% paste("https://web.archive.org/web/19980111103922/http:/venturestores.com/states/", ., sep = "") #keep the links and prepend the archive path
  return(state_sub2)
}

My function was designed to take a state link as an input, follow the same procedure as my initial scrape, and return the child directories for each city in that state.

Using sapply(), I tested my function on the state directories contained in the variable States.

Sub_States <- sapply(States, FUN = State_Scraper)
head(Sub_States)
## $`https://web.archive.org/web/19980111103922/http:/venturestores.com/states/kansas.htm`
## [1] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states//web/19980111104704/http://venturestores.com/talk.htm"   
## [2] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states//web/19980111104704/http://venturestores.com/career.htm" 
## [3] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states//web/19980111104704/http://venturestores.com/locator.htm"
## [4] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states//web/19980111104704/http://venturestores.com/profile.htm"
## [5] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/cities/wichita.htm"                                      
## [6] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/cities/kansascityks.htm"                                 
## [7] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/cities/topeka.htm"                                       
## 
## $`https://web.archive.org/web/19980111103922/http:/venturestores.com/states/oklahoma.htm`
## [1] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states//web/19980111104741/http://venturestores.com/career.htm" 
## [2] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states//web/19980111104741/http://venturestores.com/talk.htm"   
## [3] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states//web/19980111104741/http://venturestores.com/locator.htm"
## [4] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states//web/19980111104741/http://venturestores.com/profile.htm"
## [5] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/cities/okcity.htm"                                       
## [6] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/cities/tulsa.htm"                                        
## 
## $`https://web.archive.org/web/19980111103922/http:/venturestores.com/states/texas.htm`
## [1] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states//web/19980111104818/http://venturestores.com/career.htm" 
## [2] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states//web/19980111104818/http://venturestores.com/talk.htm"   
## [3] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states//web/19980111104818/http://venturestores.com/locator.htm"
## [4] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states//web/19980111104818/http://venturestores.com/profile.htm"
## [5] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/cities/houston.htm"                                      
## [6] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/cities/corpuschristi.htm"                                
## [7] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/cities/dallas.htm"                                       
## [8] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/cities/amarillo.htm"                                     
## 
## $`https://web.archive.org/web/19980111103922/http:/venturestores.com/states/arkansas.htm`
## [1] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states//web/19980111104908/http://venturestores.com/career.htm" 
## [2] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states//web/19980111104908/http://venturestores.com/talk.htm"   
## [3] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states//web/19980111104908/http://venturestores.com/locator.htm"
## [4] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states//web/19980111104908/http://venturestores.com/profile.htm"
## [5] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/cities/ftsmith.htm"                                      
## 
## $`https://web.archive.org/web/19980111103922/http:/venturestores.com/states/kentucky.htm`
## [1] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states//web/19980111104946/http://venturestores.com/career.htm" 
## [2] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states//web/19980111104946/http://venturestores.com/talk.htm"   
## [3] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states//web/19980111104946/http://venturestores.com/locator.htm"
## [4] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states//web/19980111104946/http://venturestores.com/profile.htm"
## [5] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/cities/paducah.htm"                                      
## 
## $`https://web.archive.org/web/19980111103922/http:/venturestores.com/states/indiana.htm`
## [1] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states//web/19980111105025/http://venturestores.com/career.htm" 
## [2] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states//web/19980111105025/http://venturestores.com/talk.htm"   
## [3] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states//web/19980111105025/http://venturestores.com/locator.htm"
## [4] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states//web/19980111105025/http://venturestores.com/profile.htm"
## [5] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/cities/evansville.htm"                                   
## [6] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/cities/indianapolis.htm"                                 
## [7] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/cities/southbend.htm"                                    
## [8] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/cities/chicagoin.htm"                                    
## [9] "https://web.archive.org/web/19980111103922/http:/venturestores.com/states/cities/mishawaka.htm"

Success! The function worked, scraping the city-level directories and placing them in a list. There are quite a few of them, too.

Sub_State_2 <- Sub_States %>% unlist()

I then unlisted the results in order to do my final bit of scraping.

Step 4: Scraping Store Names, Locations, and Addresses

Now that I had scraped down to the city-level directories, I could begin scraping the city pages themselves, extracting the locations of stores within each city.

But first, I wanted to make sure that I was only scraping the city directories, and not some other random pages that may have gotten caught up in my last scrape.

Sub_States_3 <- grep("cities", Sub_State_2, value = TRUE)

Since I knew that the city subdirectories contained “cities” in their paths, I was able to use base R’s grep() to keep only the links I wanted.

# Given a city directory link, return the text of every <strong> tag nested
# inside an <address> block (the store name, street, city, and phone fragments).
City_Scraper = function(city_link){
  city_directory = read_html(city_link) #download and parse the city page
  locations = city_directory %>% html_nodes("address strong") %>% html_text() #extract the text fragments
  return(locations)
}

I created another function for my next scrape, this time using the city links as inputs. Unlike my previous scraping efforts, this time I was only interested in extracting text from each HTML page, which necessitated the use of html_text(). Further, the store locations were now contained in strong tags nested within address tags, hence the “address strong” selector.
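
As a toy illustration (the snippet and the toy_city name are made up for demonstration, not taken from the archived page), this is the kind of markup the “address strong” selector targets:

toy_city <- read_html('<address><strong>E. Wichita # 59</strong><br><strong>2057 N. Rock Rd.</strong></address>') #hypothetical snippet
toy_city %>% html_nodes("address strong") %>% html_text()
# yields "E. Wichita # 59" "2057 N. Rock Rd."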

Venture_Locations <- sapply(Sub_States_3, FUN = City_Scraper)
Venture_Locations <- unlist(Venture_Locations)

Once again I applied my function, this time to my city child directories. Knowing that I would receive a list as an output, I promptly unlisted my newly extracted data.

head(Venture_Locations)
## https://web.archive.org/web/19980111103922/http:/venturestores.com/states/kansas.htm51 
##                                                                      "E. Wichita # 59" 
## https://web.archive.org/web/19980111103922/http:/venturestores.com/states/kansas.htm52 
##                                                                     "2057 N. Rock Rd." 
## https://web.archive.org/web/19980111103922/http:/venturestores.com/states/kansas.htm53 
##                                                                    "Wichita, KS 67206" 
## https://web.archive.org/web/19980111103922/http:/venturestores.com/states/kansas.htm54 
##                                                                         "316/687-3313" 
## https://web.archive.org/web/19980111103922/http:/venturestores.com/states/kansas.htm55 
##                                                                      "W. Wichita # 60" 
## https://web.archive.org/web/19980111103922/http:/venturestores.com/states/kansas.htm56 
##                                                                     "350 S. Tracy St."

The result was a jumbled mess of address fragments.

Quickly examining the data using head() revealed that there were four types of fragments:

  1. Store Names: These generally featured a city or shopping mall name, followed by a pound sign and store number.
  2. Street Addresses: A general street address, excluding city and state.
  3. Cities: City, state, and ZIP code.
  4. Store Telephones: Telephone numbers, including area codes and a forward slash.

Though they were still in fragments, having these pieces meant that the extraction phase of this project was a success.
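
To make those categories concrete, here is a rough classification sketch using stringr’s str_detect() with the same patterns I rely on in the transformation step below. It is only approximate; as the cleaning shows, a couple of stray fragments break these rules:

is_name    <- str_detect(Venture_Locations, "#")                  #store names contain a pound sign
is_phone   <- str_detect(Venture_Locations, "/")                  #phone numbers contain a forward slash
is_city    <- str_detect(Venture_Locations, "^[A-Z].*,")          #cities start with a capital letter and contain a comma
is_address <- str_detect(Venture_Locations, "^[0-9]") & !is_phone #street addresses start with a digit and have no slash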

Step 5: Transformation of Store Address Fragments

I needed to transform my store address fragments into a usable format. I also needed to do some data cleaning.

Based on my quick analysis of the fragments, it was clear that a few contained a Unix line feed (\n). First, I wanted to get rid of these.

Venture_Locations_2<- Venture_Locations %>% str_replace_all("[\n]", " ")

I decided the quickest way to get rid of the line feeds was to simply replace them with a space. This also put the affected fragments in line with the other fragments, which contained spaces.

Next I wanted to parse out the store name fragments and the city fragments.

Here, grep() came in handy again. Since I knew that store name fragments contained a pound sign, while city fragments started with a capital letter and contained a comma, separating them was easy.

store_name <- Venture_Locations_2 %>% grep("#",., value = TRUE) 
length(store_name)
## [1] 97
city <- Venture_Locations_2 %>% grep("^[A-Z].*[,]",., value = TRUE)
length(city)
## [1] 96

After parsing the fragments apart, it was clear that a slight discrepancy existed between the two. While there were 96 city fragments, there were 97 store names. This necessitated more exploration.

unique(store_name)
##  [1] "E. Wichita # 59"       "W. Wichita # 60"       "Overland Park # 9"    
##  [4] "State Ave. # 11"       "Roeland Park # 22"     "Shawnee Mission # 166"
##  [7] "Topeka # 127"          "S. Shields # 77"       "Midwest # 78"         
## [10] "Edmond # 137"          "Oklahoma City # 159"   "North Tulsa # 52"     
## [13] "Pasadena Park # 130"   "Voss # 140"            "Baytown # 143"        
## [16] "Corpus Christi # 144"  "East Plano # 68"       "S. Arlington # 71"    
## [19] "Irving # 72"           "Amarillo #145"         "Valley View # 196"    
## [22] "Bruton Masters # 197"  "Richardson # 198"      "Amarillo # 145"       
## [25] "Fort Smith # 124"      "Paducah # 175"         "LOC #600"             
## [28] "E. Evansville # 57"    "W. Evansville # 58"    "E. Washington # 66"   
## [31] "Washington # 163"      "South Bend # 171"      "Merrillville # 34"    
## [34] "Griffith # 36"         "Mishawaka # 172"       "Springfield # 75"     
## [37] "Fairview Heights # 2"  "Alton # 4"             "Fairmont City # 20"   
## [40] "Bellville # 31"        "Decatur # 62"          "Peoria # 8"           
## [43] "Moline # 61"           "Oaklawn # 14"          "Calumet City # 15"    
## [46] "Oakbrook # 16"         "Metteson # 17"         "Mount Prospect # 18"  
## [49] "Norridge # 25"         "Countryside # 26"      "Addison # 30"         
## [52] "Orland Park # 33"      "Downers Grove # 37"    "Kostner # 38"         
## [55] "Skokie # 39"           "Pulaski # 42"          "Schaumburg # 44"      
## [58] "Forest Park # 63"      "Naperville # 65"       "Niles # 69"           
## [61] "Homewood # 73"         "Cresthill # 74"        "Elgin # 79"           
## [64] "Geneva # 142"          "Rockford # 151"        "Mundelein # 152"      
## [67] "Waukegan # 155"        "Melrose Park # 156"    "Aurora # 158"         
## [70] "Crystal Lake # 160"    "Peterson # 167"        "Elston # 168"         
## [73] "Bridgeview # 170"      "Arlington # 173"       "Bradley # 177"        
## [76] "Crestwood # 178"       "Davenport # 49"        "S.E. Des Moines # 51" 
## [79] "Dubuque # 27"          "St. Joseph # 153"      "Independence # 10"    
## [82] "Bannister # 12"        "Vivion Rd. # 21"       "Joplin # 157"         
## [85] "Springfield # 3"       "Cape Girardeau # 128"  "Page Ave. # 1"        
## [88] "Kirkwood # 5"          "Christy # 6"           "Dunn Road # 7"        
## [91] "Lemay Ferry # 13"      "Cave Springs # 19"     "Maplewood # 23"       
## [94] "Crystal City # 35"     "Woodcrest # 41"        "Bridgeton # 132"      
## [97] "Manchester # 150"

Fortunately, this was a small dataset. I decided to use unique() to manually inspect the store name fragments, to see if any seemed out of place. Sure enough, one called “LOC #600” caught my eye. It clearly did not follow the naming conventions of the other stores.

store_name <- store_name[-27]
length(store_name)
## [1] 96

I decided to remove it based on its index, which brought my number of store name fragments in line with the city fragments. Comparing them side by side showed that they matched up well with each other.

All that remained was to parse out the address and phone number fragments. Since this retail chain has been defunct for two decades, the phone numbers were not something I was very interested in; the addresses, however, were.

addresses <- Venture_Locations_2 %>% grep("^[0-9]",., value = TRUE) %>% grep("/",., value = TRUE, invert = TRUE)
length(addresses)
## [1] 97

Once again I used grep() to parse out the address fragments, though this time my pipe was a little more complex. I knew that all of the addresses started with numeric characters, but so did the phone numbers. The phone numbers contained forward slashes, while the addresses did not, so I applied a second, inverted grep() to keep only the fragments without forward slashes.

This narrowed my results to 97 fragments, one away from my target of 96 fragments.

unique(addresses)
##  [1] "2057 N. Rock Rd."             "350 S. Tracy St."            
##  [3] "9600 Metcalf"                 "4301 State Ave."             
##  [5] "4950 Roe Ave."                "13110 W. 62nd Terrace"       
##  [7] "913-631-1030"                 "5311 Southwest 22nd Place"   
##  [9] "7401 S. Shields Blvd."        "5701 E. Reno Ave."           
## [11] "2501 S. Broadway St."         "8315 N. Rockwell Ave."       
## [13] "2029 South Sheridan"          "6802 Spencer Hwy."           
## [15] "7600 Westheimer Rd. Space A"  "4553 Garth Rd."              
## [17] "4717 South Padre Island Rd."  "600 Accent Drive"            
## [19] "4628 S. Cooper Street"        "3200 W. Irving Blvd."        
## [21] "2610 Soncy Rd."               "13000 Josey Lane, Ste. 108"  
## [23] "10060 Bruton Rd., Ste. 207"   "1718 E. Beltline Rd."        
## [25] "136 Phoenix Village Mall"     "5101 Hinkleville Rd."        
## [27] "101 N. Green River Rd."       "2500 First Ave."             
## [29] "7339 E. Washington St."       "6650 W. Washington St."      
## [31] "4600 South High St."          "6063 Broadway"               
## [33] "430 West Ridge Road"          "5802 Grape Rd."              
## [35] "2115 McArthur Blvd."          "6525 N. Illinois"            
## [37] "2600 E. Hommer M. Adams Pky." "5401 Collinsville Rd."       
## [39] "7230 Westfield Plaza Dr."     "2800 N. Water St."           
## [41] "901 West Lake Ave."           "2000 36th Ave."              
## [43] "4101 West 95th Street"        "500 River Oaks W."           
## [45] "17W734 22nd St."              "21000 S. Cicero Ave."        
## [47] "1500 S. Elmhurst Rd."         "4210 N. Harlem Ave."         
## [49] "140 Countryside Plaza"        "1050 N. Route 53"            
## [51] "15701 S. Harlem"              "7401 Lemont Rd."             
## [53] "1740 N. Kostner Ave."         "9449 Skokie Blvd."           
## [55] "4433 S. Pulaski Rd."          "1311 Golf Road"              
## [57] "7600 W. Roosevelt Rd."        "510 S. Route 59"             
## [59] "8500 W. Golf Rd."             "17920 Halsted St."           
## [61] "1701 N. Larken"               "450 Airport Rd."             
## [63] "2100 S. Randall Rd."          "5880 E. State Street"        
## [65] "1555 S. Lake St."             "2700 Bellvidere Rd."         
## [67] "2031 N. Mannheim"             "1250 N. Lake St."            
## [69] "6250 A Northwest Hwy"         "2050 W. Peterson"            
## [71] "5033 North Elston"            "7725 South Harlem"           
## [73] "750 E. Rand"                  "1602 N. State Rt. 50"        
## [75] "13200 S. Cicero"              "3808 Brady Street"           
## [77] "4900 S.E. 14th St."           "255 John F. Kennedy Road"    
## [79] "137 N. Belt Hwy"              "4023 S. Nolan Rd."           
## [81] "4707 Bannister Rd."           "4820 N. Oak Trafficway"      
## [83] "101 North Rangeline"          "3101 S. Glenstone"           
## [85] "300 West Park Mall"           "8901 Page Ave."              
## [87] "1225 S. Kirkwood Rd."         "4930 Christy Blvd."          
## [89] "2855 Dunn Road"               "3901 Lemay Ferry Road"       
## [91] "3979 Bogey Rd."               "3200 Laclede Station Rd."    
## [93] "155 Twin City Mall"           "955 Woodcrest Exec. Dr."     
## [95] "12222 St. Charles Rock Rd."   "14425 Andersohn Drive"

I inspected the addresses to see if anything looked out of place, and sure enough, one phone number had accidentally been parsed in with the rest of the addresses. It slipped past my inverted grep() because it was written with dashes (“913-631-1030”) rather than a forward slash.

addresses <- addresses[-7]

Once I removed it, I was left with three vectors of the same length.
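
A quick sanity check along these lines (a sketch, not something the analysis above required) would confirm programmatically that all three vectors line up at 96 elements before combining them:

stopifnot(length(store_name) == 96, length(addresses) == 96, length(city) == 96) #halts if any vector is off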

My last step was to combine them in a data frame, and write them into a .csv file.

Venture <- data.frame(store_name, addresses, city)
write.csv(Venture, "Venture_Stores.csv")
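
One small optional tweak (a sketch, not what I ran above): write.csv() includes row names by default, which here would likely be the long archive URLs carried over from the scraped vectors. Passing row.names = FALSE keeps them out of the file:

write.csv(Venture, "Venture_Stores.csv", row.names = FALSE) #drop the row names from the output file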

Looking Ahead

With my initial ETL done, I can now work with this data. Simple visualizations, both aspatial and spatial, can be performed on it. I can also dig into real estate records and location data to see what has become of the company’s former real estate holdings. Stay tuned to this RPubs page to see what I do with it next.
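
As a taste of the aspatial side, here is a minimal sketch, assuming the two-letter state code can be pulled out of each “City, ST 12345” fragment (state_codes is just an illustrative name), that counts stores per state and plots the result:

state_codes <- str_extract(as.character(Venture$city), "[A-Z]{2}(?= \\d{5})") #grab the state abbreviation
barplot(sort(table(state_codes), decreasing = TRUE),
        main = "Venture stores per state, January 1998",
        ylab = "Number of stores")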