sparklytd is a sparklyr extension to read and write TD (Treasure Data) data from/to R.
You can install this package via devtools with:
install.packages("devtools")
devtools::install_github("chezou/sparklytd")
Here is a basic example of handling TD data:
# Assuming the TD API key is set in the environment variable "TD_API_KEY".
library(sparklyr)
library(sparklytd)
# Enable Java 9 if needed
options(sparklyr.java9 = TRUE)
Before the first execution, you need to download the td-spark jar:
download_jar()
# Connect to the td-spark API
default_conf <- spark_config()
default_conf$spark.td.apikey=Sys.getenv("TD_API_KEY")
default_conf$spark.serializer="org.apache.spark.serializer.KryoSerializer"
default_conf$spark.sql.execution.arrow.enabled="true"
sc <- spark_connect(master = "local", config = default_conf)
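Once connected, you can sanity-check the session with sparklyr's built-in helper (spark_version() comes from sparklyr itself, not sparklytd):
# Confirm the session is alive by asking Spark for its version
spark_version(sc)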
Read and manipulate data on TD with dplyr.
library(dplyr)
##
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
df <- spark_read_td(sc, "www_access", "sample_datasets.www_access")
df %>% count()
df %>%
filter(method == "GET", code != 200) %>%
head(50) %>%
collect
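As a further sketch of what pushes down to Spark, standard dplyr verbs such as arrange also translate to Spark SQL, so sorting happens remotely before the result is collected (this reuses the size column from the sample www_access schema):
# Fetch the 10 largest responses; the sort runs in Spark, not in R
df %>%
  arrange(desc(size)) %>%
  head(10) %>%
  collect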
Next, plot summarized info from the access log.
library(ggplot2)
codes <- df %>% group_by(code) %>% summarise(count = n(), size_mean = mean(size)) %>% collect
## Warning: Missing values are always removed in SQL.
## Use `AVG(x, na.rm = TRUE)` to silence this warning
codes$code <- as.character(codes$code)
ggplot(data=codes, aes(x=code, y=count)) + geom_bar(stat="identity") + scale_y_continuous(trans='log10')
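The same collected summary can also be plotted against mean response size, using only the columns computed above:
# Plot mean response size per status code, also on a log scale
ggplot(data=codes, aes(x=code, y=size_mean)) + geom_bar(stat="identity") + scale_y_continuous(trans='log10')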
You can copy an R data.frame to TD:
iris_tbl <- copy_to(sc, iris)
spark_write_td(iris_tbl, "aki.iris", mode="overwrite")
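To confirm the write, you can read the table back with spark_read_td (this assumes the destination database "aki" from the example exists in your TD account):
# Read back the table we just wrote and count its rows
spark_read_td(sc, "iris_check", "aki.iris") %>% count()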
If you want to execute Presto SQL on TD, you can use the spark_read_td_presto function. The execution result will be written as a temporary view in Spark.
spark_read_td_presto(sc,
"sample",
"sample_datasets",
"select count(1) from www_access") %>% collect()
You can run DDL using the spark_execute_td_presto function:
spark_execute_td_presto(sc,
"aki",
"create table if not exists orders (key bigint, status varchar, price double)")
## NULL
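As a quick check that the DDL took effect, you can query the new table through spark_read_td_presto (again assuming the "aki" database; the table is empty right after creation, so the count should be 0):
# Count rows in the newly created table
spark_read_td_presto(sc,
                     "orders_check",
                     "aki",
                     "select count(1) from orders") %>% collect()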
You can also execute Presto or Hive SQL on TD as a regular TD job with the spark_read_td_query function:
spark_read_td_query(sc,
"sample",
"sample_datasets",
"select count(1) from www_access",
engine="presto") %>% collect()