The R package litsearchr provides various functions to help with planning a systematic search of the scientific literature on a given topic. This tutorial gives an example of how to use litsearchr, along with some brief explanations of its workings. This tutorial is adapted from an original GitHub repository created by Luke Tudge. The data sets have been replaced with the results of my own naive search as the source content.
litsearchr was created by Eliza Grames and parts of this search use examples she has created as well. These can be found in her vignette for the package.
As well as litsearchr itself, we will use a few other R packages and define some helper string functions:
library(dplyr)
library(ggplot2)
library(ggraph)
library(igraph)
library(readr)
library(stopwords)
library(bibtex)
library(readxl)
# install.packages("remotes")
library(remotes)
#install_github("elizagrames/litsearchr", ref="main")
library(litsearchr)
library(data.table)
# Excel-style string helpers, used later to pull years out of date fields
left = function(text, num_char) {
  substr(text, 1, num_char)
}
mid = function(text, start_num, num_char) {
  substr(text, start_num, start_num + num_char - 1)
}
right = function(text, num_char) {
  substr(text, nchar(text) - (num_char - 1), nchar(text))
}
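As a quick illustration of why these helpers are defined: right() is used later to pull a four-digit year off the end of date-like fields such as issue_date and Y1. A small, illustrative check:
# Last four characters of a date-like string
right("Issue date: April 2021", 4) # "2021"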
To use the litsearchr package, we need to install it first.
litsearchr isn’t (yet) stored in the package repository of CRAN, the Comprehensive R Archive Network. This means that the usual install.packages() function or the Install button in RStudio won’t find it. Instead, we can install the package from Eliza’s GitHub repository.
The devtools package provides an install_github() function for installing R packages from GitHub instead of from CRAN (the remotes package loaded above offers the same function), so we can use this.
library(devtools)
install_github("elizagrames/litsearchr", ref="main")
Now that we have installed litsearchr we can load it.
library(litsearchr)
litsearchr is a new package and is currently in development. So we should keep track of which version we are using, in case we later work with a newer version and find that the examples in this tutorial no longer work.
packageVersion("litsearchr")
## [1] '1.0.0'
Our overall topic consists of one primary phrase, Facial Reenactment. The goal is to reduce the risk of bias and ensure that related topics within this area are captured, even if they are not labelled with this phrase. This starts with a naive search, whose discovered papers are then processed with the R package litsearchr. A larger set of search terms is constructed from them, ensuring a more comprehensive collection of studies and reducing selection bias and missed terms.
We start by going to each of the sources below and entering the search in its query box. The following search term was used:
(Facial Reenactment) OR (Face Reenactment)
These results were saved to a single directory. litsearchr can batch-import a variety of citation formats: BibTeX, RIS and CSV are all acceptable. However, some clean-up is required since the formats differ slightly.
We can load results from a file using the litsearchr function import_results(). We give the file name as the file argument.
dir_Citations <- "./NAIVE CITATIONS/"
list.files(dir_Citations)
## [1] "NAIVE-SpringLink-SearchResults.csv"
## [2] "NAIVE_acm.bib"
## [3] "NAIVE_arXiv.csv"
## [4] "NAIVE_GOOGLESCHOLAR_V1"
## [5] "NAIVE_IEEE Xplore Citation Download 2021.04.22.22.22.11.ris"
## [6] "NAIVE_MA-ReadingList-FacialReenactment_v3.ris"
## [7] "NAIVE_ScienceDirect_citations_1619060036568.bib"
## [8] "NAIVE_scopus.bib"
## [9] "NAIVE_WoS-savedrecs.bib"
naiveimport <- litsearchr::import_results(dir_Citations, verbose = TRUE)
## Reading file ./NAIVE CITATIONS/NAIVE-SpringLink-SearchResults.csv ... done
## Reading file ./NAIVE CITATIONS/NAIVE_acm.bib ... done
## Reading file ./NAIVE CITATIONS/NAIVE_arXiv.csv ... done
## Reading file ./NAIVE CITATIONS/NAIVE_GOOGLESCHOLAR_V1 ... done
## Reading file ./NAIVE CITATIONS/NAIVE_IEEE Xplore Citation Download 2021.04.22.22.22.11.ris ... done
## Reading file ./NAIVE CITATIONS/NAIVE_MA-ReadingList-FacialReenactment_v3.ris ... done
## Reading file ./NAIVE CITATIONS/NAIVE_ScienceDirect_citations_1619060036568.bib ... done
## Reading file ./NAIVE CITATIONS/NAIVE_scopus.bib ... done
## Reading file ./NAIVE CITATIONS/NAIVE_WoS-savedrecs.bib ... done
# colnames(naiveimport)
#Sources:
Sources <- rownames(naiveimport)
Sources <- gsub(dir_Citations,"",Sources)
Sources <- substr(Sources,1,9)
table(Sources)
## Sources
## NAIVE-Spr NAIVE_acm NAIVE_arX NAIVE_GOO NAIVE_IEE NAIVE_MA- NAIVE_Sci NAIVE_sco
## 4 53 17 112 20 24 10 43
## NAIVE_WoS
## 15
#Clean up required due to column misalignment
#Abstract, Title, Keywords
# colnames(naiveimport)
naiveresults <- naiveimport
import_results() gives us a dataframe in which each result is a row. We can see from the number of rows how many results our search got.
nrow(naiveimport)
## [1] 298
We can take a look at the first few.
head(naiveimport,5)
There are columns for the title, authors, date, abstract, and so on. We can check the names of all the columns to see all the information we have on each search result.
colnames(naiveimport)
## [1] "item_title" "publication_title"
## [3] "book_series_title" "journal_volume"
## [5] "journal_issue" "item_doi"
## [7] "author" "publication_year"
## [9] "url" "content_type"
## [11] "type" "title"
## [13] "year" "isbn"
## [15] "publisher" "address"
## [17] "doi" "abstract"
## [19] "booktitle" "pages"
## [21] "numpages" "keywords"
## [23] "location" "series"
## [25] "issue_date" "volume"
## [27] "number" "issn"
## [29] "journal" "articleno"
## [31] "o_bf_doi" "organization"
## [33] "school" "source_type"
## [35] "source" "start_page"
## [37] "end_page" "Y1"
## [39] "issue" "VO"
## [41] "date_generated" "L1"
## [43] "LK" "M2"
## [45] "note" "references"
## [47] "document_type" "art_number"
## [49] "author_keywords" "page_count"
## [51] "month" "language"
## [53] "affiliation" "eissn"
## [55] "keywords_plus" "research_areas"
## [57] "web_of_science_categories" "author_email"
## [59] "orcid_numbers" "funding_acknowledgement"
## [61] "funding_text" "number_of_cited_references"
## [63] "times_cited" "usage_count_last_180_days"
## [65] "usage_count_since_2013" "journal_iso"
## [67] "doc_delivery_number" "unique_id"
## [69] "da" "book_group_author"
## [71] "article_number" "researcherid_numbers"
## [73] "filename"
And just as a check, let’s take a look at the title of the first result.
naiveimport[1, "item_title"]
## [1] "Neural Voice Puppetry: Audio-Driven Facial Reenactment"
Note the various names for the title in the list above. Another process is required to consolidate the various columns into a single common column for each of DOI, Title, Keywords and Year. This is a manual step and will depend on the data formats and sources you plan to use. I used three different file formats from nine different databases, and unfortunately the results were not consistent across all the sources.
naiveresults <- data.frame(doi_good = naiveresults$doi,
doi_item = naiveresults$item_doi,
doi_arXi = naiveresults$o_bf_doi,
title_good = naiveresults$title,
title_item = naiveresults$item_title,
abst_good = naiveresults$abstract,
kywrd_good = naiveresults$keywords,
kywrd_auth = naiveresults$author_keywords,
kywrd_plus = naiveresults$keywords_plus,
year = naiveresults$year,
year_pub = naiveresults$publication_year,
year_iss = right(naiveresults$issue_date,4),
year_fd = right(naiveresults$Y1,4),
Sources = Sources
)
naiveresults$DOI <- ifelse(is.na(naiveresults$doi_good),naiveresults$doi_item, naiveresults$doi_good)
naiveresults$DOI <- ifelse(is.na(naiveresults$DOI),naiveresults$doi_arXi, naiveresults$DOI)
naiveresults$DOI <- gsub("https://doi.org/","",naiveresults$DOI)
naiveresults$YEAR <- ifelse(is.na(naiveresults$year),naiveresults$year_pub, naiveresults$year)
naiveresults$YEAR <- ifelse(is.na(naiveresults$YEAR),naiveresults$year_iss, naiveresults$YEAR)
naiveresults$TITLE <- ifelse(is.na(naiveresults$title_good), naiveresults$title_item, naiveresults$title_good)
naiveresults$ABSTRACT <- naiveresults$abst_good
naiveresults$KEYWORDS <- paste(naiveresults$kywrd_good, naiveresults$kywrd_auth, naiveresults$kywrd_plus, sep = ",")
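As an aside, the chained ifelse() calls above can be written more compactly with dplyr's coalesce(), which returns the first non-missing value across its arguments. A minimal sketch for the DOI columns only (assuming the columns share a common character type):
# Equivalent consolidation of the DOI columns using dplyr::coalesce()
naiveresults$DOI <- dplyr::coalesce(naiveresults$doi_good, naiveresults$doi_item, naiveresults$doi_arXi)
naiveresults$DOI <- gsub("https://doi.org/", "", naiveresults$DOI)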
Based on the results, we can see the various years of publication below:
yearOut <- table(naiveresults$YEAR)
barplot(yearOut, main="Years of Publication for Facial Reenactment",
xlab="Year", ylab = "Number of Papers Returned")
Please note that there are a few articles which are repeated in the set. This is because they were found in multiple databases. The titles which were discovered in at least three of the databases include:
tableOfTitles <- as.data.frame(table(toupper(naiveresults$TITLE)))
tableOfTitles <- tableOfTitles[rev(order(tableOfTitles$Freq)),]
colnames(tableOfTitles) <- c("Title of Research Paper", "Freq")
knitr::kable(tableOfTitles[tableOfTitles$Freq>2,], caption = "Most Common Research Papers - Different Database Sources")
|  | Title of Research Paper | Freq |
|---|---|---|
| 58 | FACE2FACE: REAL-TIME FACE CAPTURE AND REENACTMENT OF RGB VIDEOS | 9 |
| 32 | DEEPFACEFLOW: IN-THE-WILD DENSE 3D FACIAL MOTION ESTIMATION | 7 |
| 86 | HEAD2HEAD: VIDEO-BASED NEURAL HEAD SYNTHESIS | 6 |
| 36 | DEFERRED NEURAL RENDERING: IMAGE SYNTHESIS USING NEURAL TEXTURES | 6 |
| 17 | ANCHOR CASCADE FOR EFFICIENT FACE DETECTION | 6 |
| 22 | AUTOMATIC FACE REENACTMENT | 5 |
| 19 | ANY-TO-ONE FACE REENACTMENT BASED ON CONDITIONAL GENERATIVE ADVERSARIAL NETWORK | 5 |
| 137 | REAL-TIME EXPRESSION TRANSFER FOR FACIAL REENACTMENT | 4 |
| 64 | FACEVR: REAL-TIME GAZE-AWARE FACIAL REENACTMENT IN VIRTUAL REALITY | 4 |
| 59 | FACE2FACE: REAL-TIME FACIAL REENACTMENT | 4 |
| 42 | DETECTING FACE2FACE FACIAL REENACTMENT IN VIDEOS | 4 |
| 39 | DEMO OF FACEVR: REAL-TIME FACIAL REENACTMENT AND EYE GAZE CONTROL IN VIRTUAL REALITY | 4 |
| 31 | DEEP VIDEO PORTRAITS | 4 |
| 24 | BRINGING PORTRAITS TO LIFE | 4 |
| 171 | UNCONSTRAINED FACIAL EXPRESSION TRANSFER USING STYLE-BASED GENERATOR | 3 |
| 158 | STATE OF THE ART ON MONOCULAR 3D FACE RECONSTRUCTION, TRACKING, AND APPLICATIONS | 3 |
| 147 | REENACTNET: REAL-TIME FULL HEAD REENACTMENT | 3 |
| 132 | PROTECTING REAL-TIME VIDEO CHAT AGAINST FAKE FACIAL VIDEOS GENERATED BY FACE REENACTMENT | 3 |
| 127 | PHOTOREALISTIC AUDIO-DRIVEN VIDEO PORTRAITS | 3 |
| 124 | PCA-BASED 3D FACIAL REENACTMENT FROM SINGLE IMAGE | 3 |
| 113 | NEURAL VOICE PUPPETRY: AUDIO-DRIVEN FACIAL REENACTMENT | 3 |
| 108 | MULTI-TASK LEARNING FOR DETECTING AND SEGMENTING MANIPULATED FACIAL IMAGES AND VIDEOS | 3 |
| 99 | LEARNING IDENTITY-INVARIANT MOTION REPRESENTATIONS FOR CROSS-ID FACE REENACTMENT | 3 |
| 88 | HEAD2HEADFS: VIDEO-BASED HEAD REENACTMENT WITH FEW-SHOT LEARNING | 3 |
| 87 | HEAD2HEAD++: DEEP FACIAL ATTRIBUTES RE-TARGETING | 3 |
| 83 | GENERATIVE VIDEO FACE REENACTMENT BY AUS AND GAZE REGULARIZATION | 3 |
| 78 | FREENET: MULTI-IDENTITY FACE REENACTMENT | 3 |
| 56 | FACE REENACTMENT BASED FACIAL EXPRESSION RECOGNITION | 3 |
| 38 | DEMO OF FACE2FACE: REAL-TIME FACE CAPTURE AND REENACTMENT OF RGB VIDEOS | 3 |
| 20 | APB2FACE: AUDIO-GUIDED FACE REENACTMENT WITH AUXILIARY POSE AND BLINK SIGNALS | 3 |
| 12 | ACTGAN: FLEXIBLE AND EFFICIENT ONE-SHOT FACE REENACTMENT | 3 |
| 8 | A REVIEW ON FACE REENACTMENT TECHNIQUES | 3 |
naiveresults <- litsearchr::remove_duplicates(naiveresults, field = "TITLE", method = "string_osa")
naiveresults <- litsearchr::remove_duplicates(naiveresults, field = "DOI", method = "exact")
The original search yielded a significant number of research articles. However, this is a fairly new field of study, and relying only on these terms risks bias: it could miss several other valuable research areas, as well as include research which is out of scope for the Systematic Literature Review.
The original search needs to be expanded.
> (Facial Reenactment) OR (Face Reenactment)
There are two primary places to pull terms from in the citations collected so far: the first is the keywords, and the second is the title and abstract. Both methods have issues in this case, as the keywords are not available for some sources, while the abstract is missing from others. Therefore, using both techniques together should yield a powerful collection of search terms for the post-naive search.
Several databases have well-defined keywords, as well as a search field and a citation export option to collect them. Although this set is incomplete, it provides terms which are specific to the 'domain' of the research.
The cleaned keywords from the first ten results are:
knitr::kable(naiveresults[1:10, "KEYWORDS"], caption = "Keywords Discovered")
| x |
|---|
| Computer vision; Deep neural networks; Three dimensional computer graphics, 3-D face modeling; 3d representations; Digital assistants; Photo-realistic; State-of-the-art techniques; Synthetic voices; Temporal stability; Video synthesis, 3D modeling,NA,NA |
| NA,NA,NA |
| NA,NA,NA |
| NA,NA,NA |
| generative adversarial networks, graph convolutional networks, image synthesis, face reenactment,NA,NA |
| Lighting; Video streaming, Depth camera; Expression transfers; Faces; Facial deformations; Parametric modeling; Real time; Real world environments; Source expressions, Rendering (computer graphics),Depth camera; Expression transfer; Faces; Real-time,NA |
| Computer graphics; Digital storage; Eye movements; Goggles; Helmet mounted displays; Interactive computer graphics; Rendering (computer graphics); Video conferencing; Virtual reality, Data-driven approach; Expression transfers; Face capture; Face reconstruction; Facial motion capture; Facial reenactment; Head mounted displays; Reconstruction process, Stereo image processing,Expression transfer; Face capture; Facial reenactment,NA |
| Eye tracking; Helmet mounted displays; Stereo image processing; Teleconferencing; Three dimensional computer graphics; Virtual reality, Data-driven approach; Face Tracking; Facial motion capture; Head mounted displays; Image-based techniques; Model based approach; Tracking performance; Video teleconferencing, Face recognition,Eye tracking; Face tracking; Virtual reality,NA |
| Computer graphics; Constraint theory; Interactive computer graphics; Video streaming, Consistency measures; Deformation transfer; Expression transfers; Face capture; Facial Expressions; Facial reenactment; Target sequences; Under-constrained, Rendering (computer graphics),Expression transfer; Face capture; Facial reenactment,NA |
| Image texture; Pipelines; Textures; Three dimensional computer graphics, Facial reenactment; High dimensional feature; Metric reconstruction; Neural rendering; Novel view synthesis; Photorealistic images; State-of-the-art approach; Static and dynamic environments, Rendering (computer graphics),Facial reenactment; Neural rendering; Neural texture; Novel view synthesis,NA |
The keywords are missing (NA) from three of the first four articles. The keywords were combined from three columns, so a record with no keywords at all appears as NA,NA,NA. Counting the number of such rows:
#sum(is.na(naiveresults[, "KEYWORDS"]))
nrow(naiveresults[naiveresults$KEYWORDS == "NA,NA,NA",])
## [1] 82
At 47.7% NA, nearly half of the discovered research papers have no keywords available for the analysis. Relying on the provided keywords might not always be such a great approach, but for the purposes of demonstration let's see how we could use them in litsearchr.
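The percentage quoted above can be reproduced directly from the de-duplicated results with a quick calculation (a sketch):
# Share of de-duplicated records whose three keyword columns were all NA
n_missing <- nrow(naiveresults[naiveresults$KEYWORDS == "NA,NA,NA", ])
round(100 * n_missing / nrow(naiveresults), 1)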
litsearchr has a function extract_terms() that can gather the keywords from this column of our search results. The keywords argument is where we put the column of keywords from our results dataframe. The method="tagged" argument lets extract_terms() know that we are getting keywords that the article authors themselves have provided (or 'tagged' the article with).
Extracted Terms from Keywords
extract_Keywords <- extract_terms(keywords=naiveresults[, "KEYWORDS"], method="tagged")
knitr::kable(as.data.frame(extract_Keywords))
| extract_Keywords |
|---|
| 3-d face modeling |
| 3d imaging |
| 3d modeling |
| 3d reconstruction |
| adversarial networks |
| artificial intelligence |
| arts computing |
| big data |
| body reenactment |
| computer graphics |
| computer science |
| computer vision |
| conditional gan |
| consistency measures |
| constraint theory |
| convolutional neural networks |
| data-driven approach |
| data driven animation |
| deep learning |
| deep neural networks |
| deepfake detection |
| deformation transfer |
| depth camera |
| digital storage |
| expression matching |
| expression transfer |
| expression transfers |
| eye movements |
| eye tracking |
| face alignment |
| face animation |
| face capture |
| face expressions |
| face forensics |
| face generation |
| face images |
| face landmarks |
| face modeling |
| face recognition |
| face reconstruction |
| face reenactment |
| face swapping |
| face tracking |
| facial animation |
| facial expression recognition |
| facial expression synthesis |
| facial expressions |
| facial images |
| facial landmark |
| facial motion capture |
| facial movements |
| facial performance capture |
| facial puppetry |
| facial reenactment |
| generative adversarial network |
| generative adversarial networks |
| generative models |
| geometry information |
| gesture recognition |
| head mounted displays |
| helmet mounted displays |
| high dimensional feature |
| identity preserving |
| image-based rendering |
| image classification |
| image reconstruction |
| image synthesis |
| in the wild |
| interactive computer graphics |
| large dataset |
| learning systems |
| low resolution |
| model based approach |
| monocular video |
| motion capture |
| neural networks |
| neural rendering |
| novel view synthesis |
| pattern recognition |
| photo-realistic video |
| photorealistic images |
| real time |
| recurrent neural networks |
| rendering computer graphics |
| solid modeling |
| state-of-the-art approach |
| state-of-the-art methods |
| state-of-the-art techniques |
| state of the art |
| stereo image processing |
| still images |
| synthetic rendering |
| target images |
| target sequences |
| temporal clustering |
| temporal consistency |
| three-dimensional displays |
| three dimensional computer graphics |
| video conferencing |
| video editing |
| video portraits |
| video streaming |
| video synthesis |
| video synthesisi |
| video teleconferencing |
| virtual reality |
| visual qualities |
Overall, a respectable number of keywords is returned. Some of the suggested terms are not beneficial for the study, and context needs to be provided. Also, no single-word terms are returned.
The function extract_terms() can be set up with several parameters, which include:
- min_freq=3: a keyword must be discovered in a minimum of 3 sources.
- min_n=1: minimum n-gram of 1 word.
- max_n=3: maximum n-gram of 3 words.
Experimenting with these parameters will tune the list. At this point, the n-gram range is set to return anything from a single word up to a maximum of three words.
Extracted Terms from Keywords Using Parameters
terms_Keywords <- extract_terms(keywords=naiveresults[, "KEYWORDS"], method="tagged", min_n=1, max_n = 3, min_freq = 3)
knitr::kable(terms_Keywords)
| x |
|---|
| 3d modeling |
| adversarial networks |
| animation |
| blendshapes |
| computer graphics |
| computer science |
| computer vision |
| computers |
| decoding |
| deep learning |
| deep neural networks |
| deformation transfer |
| dubbing |
| expression transfer |
| expression transfers |
| face |
| face alignment |
| face capture |
| face generation |
| face recognition |
| face reenactment |
| face swapping |
| face tracking |
| faces |
| facial animation |
| facial expression recognition |
| facial expression synthesis |
| facial expressions |
| facial images |
| facial motion capture |
| facial movements |
| facial performance capture |
| facial reenactment |
| generative adversarial networks |
| generative models |
| geometry |
| image-based rendering |
| in the wild |
| interactive computer graphics |
| motion capture |
| neural networks |
| neural rendering |
| pattern recognition |
| photo-realistic |
| photorealistic images |
| real-time |
| reenactment |
| rendering computer graphics |
| state-of-the-art approach |
| state-of-the-art methods |
| target images |
| three-dimensional displays |
| video conferencing |
| video editing |
| video streaming |
| virtual reality |
This gets us more search terms. Some of these might be useful new terms to include in our literature search, but others are clearly too broad, for example animation, or come from tangential topics, for example virtual reality. Using this method with these arguments, 56 extracted terms were pulled from the keyword fields.
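Before combining these with the title-derived terms, obviously over-broad or tangential terms could also be screened out by hand. A small sketch; the vector of terms to drop is purely illustrative, not a recommendation:
# Illustrative only: drop terms judged too broad or tangential for this topic
too_broad <- c("animation", "computers", "decoding", "virtual reality")
terms_Keywords_screened <- setdiff(terms_Keywords, too_broad)
length(terms_Keywords_screened)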
In addition to the keyword search, extracting terms from the titles and abstracts can provide more candidates. Further 'interesting' words may be pulled from the collected studies using Rapid Automatic Keyword Extraction (RAKE). litsearchr can apply this method via the method="fakerake" argument, a slightly simplified version of the full RAKE technique.
Arguments to set include min_n and max_n, plus min_freq to exclude phrases or words that occur in too few of the titles in our original search.
Extracted Terms from Titles Using Parameters
terms_Title <- extract_terms(text=naiveresults[, "TITLE"], method="fakerake", min_freq=2, min_n=2)
knitr::kable(terms_Title,caption = "Title Based Keywords using fakerake")
| x |
|---|
| actor videos |
| adversarial networks |
| conditional generative |
| conditional generative adversarial |
| conditional generative adversarial networks |
| consistent facial |
| convolutional networks |
| deepfake detection |
| deferred neural |
| deferred neural rendering |
| dynamic facial |
| emerging technologies |
| expression synthesis |
| expression transfer |
| facial animation |
| facial attributes |
| facial expression |
| facial expression synthesis |
| facial expression transfer |
| facial forgery |
| facial forgery detection |
| facial landmark |
| facial landmark tracking |
| facial performance |
| facial performance capture |
| facial reenactment |
| facial video |
| facial videos |
| facialb reenactment |
| facialb reenactmentb |
| forgery detection |
| fully automatic |
| generative adversarial |
| generative adversarial networks |
| human actor |
| human actor videos |
| image synthesis |
| landmark tracking |
| manipulated facial |
| manipulation detection |
| monocular video |
| neural rendering |
| neural textures |
| performance capture |
| photorealistic facial |
| photorealistic facial expression |
| real-time facial |
| real-time facial reenactment |
| reenactment based |
| single image |
| style transfer |
| u-net conditional |
| video forgery |
| video portraits |
| virtual reality |
Since the title field is rarely blank, this yields many more terms. Using this method with these arguments, 55 extracted terms were pulled from the title field, versus 56 derived from keywords.
To ensure that higher-impact terms are kept while non-essential words, or 'stopwords', are avoided, litsearchr is able to filter out these common words and reduce the risk of them cluttering the results. There can also be domain-specific stopwords; however, litsearchr is not returning non-essential domain terms here, so none need to be defined.
The extract_terms() function provides a stopwords argument that we can use to filter out words. litsearchr also provides a general list of stopwords for English (and for some other languages) via the get_stopwords() function, and we can combine this with our own more specific stopwords if needed.
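If domain-specific stopwords were ever needed, they could simply be appended to the general English list before being passed to extract_terms(). A minimal sketch, with purely illustrative custom terms:
# Illustrative only: combine litsearchr's English stopword list with
# hypothetical domain-specific stopwords
domain_stopwords <- c("experimental results", "proposed method")
all_stopwords_custom <- c(get_stopwords("English"), domain_stopwords)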
Extracted Terms from Titles Using Parameters and Stop Word Removal
all_stopwords <- get_stopwords("English")
terms_Title_NoStopWords <- extract_terms(
text=naiveresults[, "TITLE"],
method="fakerake",
min_freq=2, min_n=2, max_n = 3,
stopwords=all_stopwords
)
knitr::kable(terms_Title_NoStopWords, caption = "Terms from Titles - Without Stop Words")
| x |
|---|
| actor videos |
| adversarial networks |
| conditional generative |
| conditional generative adversarial |
| consistent facial |
| convolutional networks |
| deepfake detection |
| deferred neural |
| deferred neural rendering |
| dynamic facial |
| emerging technologies |
| expression synthesis |
| expression transfer |
| facial animation |
| facial attributes |
| facial expression |
| facial expression synthesis |
| facial expression transfer |
| facial forgery |
| facial forgery detection |
| facial landmark |
| facial landmark tracking |
| facial performance |
| facial performance capture |
| facial reenactment |
| facial video |
| facial videos |
| facialb reenactment |
| facialb reenactmentb |
| forgery detection |
| fully automatic |
| generative adversarial |
| generative adversarial networks |
| human actor |
| human actor videos |
| image synthesis |
| landmark tracking |
| manipulated facial |
| manipulation detection |
| monocular video |
| neural rendering |
| neural textures |
| performance capture |
| photorealistic facial |
| photorealistic facial expression |
| real-time facial |
| real-time facial reenactment |
| reenactment based |
| single image |
| style transfer |
| u-net conditional |
| video forgery |
| video portraits |
| virtual reality |
By dropping stopwords in the titles, 54 extracted terms were returned, versus 55 before. There was essentially no change, suggesting that stopwords did not significantly contribute to the phrases found in the titles.
Search terms from the Titles and Keywords are combined, and only unique terms remain.
terms <- unique(c(terms_Keywords, terms_Title_NoStopWords))
terms_Abstract_NoStopWords <- extract_terms(
text=naiveresults[, "ABSTRACT"],
method="fakerake",
min_freq=10, min_n=2, max_n = 3,
stopwords=all_stopwords
)
knitr::kable(terms_Abstract_NoStopWords, caption = "Terms from Abstract - Without Stop Words")
| x |
|---|
| experimental results |
| facial expression |
| facial expressions |
| facial reenactment |
| generative adversarial |
| neural network |
| proposed method |
| state-of-the-art methods |
| target video |
Terms discovered in the abstracts using the fakerake method yielded 9 extracted terms.
In order to proceed to the next section, all of these terms are combined into a single list of unique terms.
terms <- unique(c(terms_Keywords, terms_Title_NoStopWords, terms_Abstract_NoStopWords))
The list of unique terms is quite comprehensive at 105. Some may be unrelated to the others and to our topic of interest, occurring only in a small number of articles that do not mention many of the other search terms. A systematic way of identifying these 'isolated' search terms is to analyze them as a network, where terms are linked to each other by virtue of appearing in the same articles. By finding which terms tend to occur together in the same article, we can pick out groups of terms that are probably all referring to the same topic, and filter out terms that rarely occur together with any of the main groups.
The title and abstract of each article will suffice as the 'content' of that article; a term counts as having occurred in an article if it is found in either the title or the abstract. For this, we need to join the title of each article to its abstract.
docs <- paste(naiveresults[, "TITLE"], naiveresults[, "ABSTRACT"])
Let’s just check the first one to make sure we did this right.
docs[1]
## [1] "Neural Voice Puppetry: Audio-drivenB FacialB Reenactment We present Neural Voice Puppetry, a novel approach for audio-driven facial video synthesis (Video, Code and Demo: https://justusthies.github.io/posts/neural-voice-puppetry/). Given an audio sequence of a source person or digital assistant, we generate a photo-realistic output video of a target person that is in sync with the audio of the source input. This audio-driven facial reenactment is driven by a deep neural network that employs a latent 3D face model space. Through the underlying 3D representation, the model inherently learns temporal stability while we leverage neural rendering to generate photo-realistic output frames. Our approach generalizes across different people, allowing us to synthesize videos of a target actor with the voice of any unknown source actor or even synthetic voices that can be generated utilizing standard text-to-speech approaches. Neural Voice Puppetry has a variety of use-cases, including audio-driven video avatars, video dubbing, and text-driven video synthesis of a talking head. We demonstrate the capabilities of our method in a series of audio- and text-based puppetry examples, including comparisons to state-of-the-art techniques and a user study. © 2020, Springer Nature Switzerland AG."
We now create a matrix that records which terms appear in which articles. The litsearchr function create_dfm() does this. ‘DFM’ stands for ‘document-feature matrix’, where the ‘documents’ are our articles and the ‘features’ are the search terms. The elements argument is the list of documents. The features argument is the list of terms whose relationships we want to analyze within that set of documents.
dfm <- create_dfm(elements=docs, features=terms)
The rows of our matrix represent the articles (their titles and abstracts), and the columns represent the search terms. Each entry in the matrix records how many times that article contains that term. For example, looking at the first three articles and the first four terms shows how often each of those terms appears in each article.
dfm[1:3, 1:4]
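To inspect a specific term rather than the first few columns, the matrix can also be indexed by term name, assuming the terms of interest survived extraction (both of these did):
# Occurrences of two specific terms in the first three documents
dfm[1:3, c("facial reenactment", "deep learning")]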
We can then turn this matrix into a network of linked search terms, using the litsearchr function create_network(). This function has an argument min_studies that excludes terms that occur in fewer than a given number of articles.
network_g <- create_network(dfm, min_studies=3)
The ggraph package provides visualizations for network graphs; refer to the ggraph tutorials if you want to learn more. The ggraph() function takes the network created using create_network() as its argument and draws it as a graph. In addition, we can specify a layout for the graph; here we use the 'stress' layout, which places closely linked terms close together and less closely linked terms further apart. Term labels are added using geom_node_text(), where check_overlap=TRUE suppresses overlapping labels to keep the visual legible. Lines linking the terms are drawn with geom_edge_link(); more solid lines link terms that appear together in more articles, which is the weight of the link.
ggraph(network_g, layout="stress") +
coord_fixed() +
expand_limits(x=c(-3, 3)) +
geom_edge_link(aes(alpha=weight)) +
geom_node_point(shape="circle filled", fill="white") +
geom_node_text(aes(label=name), hjust="outward", check_overlap=TRUE) +
guides(edge_alpha=FALSE)
Ranking the search terms by importance allows us to prune away some of the least important ones.
The ‘strength’ of each term in the network is the number of other terms that it appears together with. We can get this information from our network using the strength() function from the igraph package (behind the scenes, litsearchr uses igraph for some of the workings of its network analyses). If we then arrange the terms in ascending order of strength we see those that might be the least important.
strengths <- strength(network_g)
data.frame(term=names(strengths), strength=strengths, row.names=NULL) %>%
mutate(rank=rank(strength, ties.method="min")) %>%
arrange(strength) ->
term_strengths
term_strengths
It is also possible to view the node strength. There are two ways to identify important nodes in the litsearchr package:
1. Fitting a spline model to the node importance to select tipping points.
2. Finding the minimum number of nodes to capture a large percent of the total importance of the network.
The distribution of node importance (in this case, node strength) is shown below. From the density and histogram plots, it looks like this network has a lot of fairly weak nodes with a long tail, and there appear to be breaks in the data. A spline model is an appropriate method to identify the cutoff threshold for keyword importance, while a cumulative approach is more appropriate when there are clear breaks in the data.
When using a spline model, we need to specify where to place knots (i.e. where should the fitted model parameters be allowed to change). Looking at the plot of ranked node importance, we should use a second-degree polynomial for the spline and should probably allow four knots.
hist(igraph::strength(network_g), 100, main="Histogram of node strengths", xlab="Node strength")
plot(sort(igraph::strength(network_g)), ylab="Node strength", main="Ranked node strengths", xlab="Rank")
At the top are the terms that are most weakly linked to the others. For some of them you can compare this with their positions on the graph visualization above, where they appear near the margins of the figure. In most cases, terms like these are completely irrelevant and have occurred in a few of the articles in the naive search for arbitrary reasons, for example virtual reality. Some are perhaps still relevant but are just very rarely used.
The cutoff allows for less linked terms to be removed, thus eliminating them from the final search string.
cutoff_fig <- ggplot(term_strengths, aes(x=rank, y=strength, label=term)) +
geom_line() +
geom_point() +
geom_text(data=filter(term_strengths, rank>5), hjust="right", nudge_y=20, check_overlap=TRUE)
cutoff_fig
A cutoff value for term strength needs to be established, such that we discard terms with a strength below that value. The find_cutoff() function implements the methods described above. With the cumulative method, the cutoff is chosen so that the retained terms account for a certain proportion of the total strength of the network; for example, passing method="cumulative" and percent=0.8 to find_cutoff() returns the cutoff strength that retains 80% of the total strength. The percent argument specifies what proportion of the total strength we would like to retain.
cutoff_cum <- find_cutoff(network_g, method="cumulative", percent=0.8)
cutoff_cum
## [1] 51
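As a rough sanity check, the proportion of total node strength retained by terms at or above this cutoff can be computed directly; it should come out at roughly the requested 80% or slightly above (a sketch using the strengths computed earlier):
# Share of total node strength kept by terms at or above the cumulative cutoff
sum(strengths[strengths >= cutoff_cum]) / sum(strengths)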
Let’s see this on our figure.
cutoff_fig +
geom_hline(yintercept=cutoff_cum, linetype="dashed")
Once we have found a cutoff value, the reduce_graph() function applies it and prunes away the terms with low strength. The arguments are the original network and the cutoff. The get_keywords() function then gets the remaining terms from the reduced network.
get_keywords(reduce_graph(network_g, cutoff_cum))
## [1] "adversarial networks" "animation"
## [3] "deep learning" "face"
## [5] "face reenactment" "face tracking"
## [7] "faces" "facial animation"
## [9] "facial expressions" "facial motion capture"
## [11] "facial reenactment" "generative adversarial networks"
## [13] "geometry" "motion capture"
## [15] "neural networks" "photo-realistic"
## [17] "real-time" "reenactment"
## [19] "state-of-the-art methods" "virtual reality"
## [21] "conditional generative" "facial expression"
## [23] "facial performance" "facialb reenactment"
## [25] "generative adversarial" "monocular video"
## [27] "performance capture" "real-time facial"
## [29] "real-time facial reenactment" "experimental results"
## [31] "neural network" "proposed method"
## [33] "target video"
Looking at the figure above, another method of pruning away terms suggests itself. There are certain points along the ranking of terms where the strength of the next strongest term is much greater than that of the previous one (places where the ascending line ‘jumps up’). We could use these places as cutoffs, since the terms below them have much lower strength than those above. There may of course be more than one place where term strength jumps up like this, so we will have multiple candidates for cutoffs. The same find_cutoff() function with the argument method="changepoint" will find these cutoffs. The knot_num argument specifies how many ‘knots’ we wish to slice the keywords into.
cutoff_change <- litsearchr::find_cutoff(network_g, method="changepoint", knot_num=3)
cutoff_change
## [1] 46 95 198 577
Other cutoff methods exist in the litsearchr package, but there is little documentation on these additional cutoff models.
This time we get several suggested cutoffs. Let’s put them on our figure, where we can see that they cut off the search terms just before large increases in term strength.
cutoff_fig +
geom_hline(yintercept=cutoff_change, linetype="dashed")
After doing this, we can apply the same reduce_graph() function that we used for the cumulative strength method. The only difference is that we have to pick one of the cutoffs in our vector. The value 46 is selected, and terms with a strength lower than this are removed from the study. The other calculated cutoffs are:
| x |
|---|
| 46 |
| 95 |
| 198 |
| 577 |
g_redux <- reduce_graph(network_g, cutoff_change[1])
selected_terms <- get_keywords(g_redux)
selected_terms
## [1] "adversarial networks" "animation"
## [3] "deep learning" "face"
## [5] "face reenactment" "face tracking"
## [7] "faces" "facial animation"
## [9] "facial expressions" "facial motion capture"
## [11] "facial performance capture" "facial reenactment"
## [13] "generative adversarial networks" "geometry"
## [15] "motion capture" "neural networks"
## [17] "photo-realistic" "real-time"
## [19] "reenactment" "state-of-the-art methods"
## [21] "virtual reality" "conditional generative"
## [23] "conditional generative adversarial" "facial expression"
## [25] "facial performance" "facialb reenactment"
## [27] "facialb reenactmentb" "generative adversarial"
## [29] "monocular video" "performance capture"
## [31] "real-time facial" "real-time facial reenactment"
## [33] "experimental results" "neural network"
## [35] "proposed method" "target video"
Grouping the selected terms produces a more comprehensive search string. Using the logical operators OR and AND, a search string is constructed. These two operators provide the following selection criteria (a minimal illustration follows below):
- OR operator: within a group, a record matches if it contains any one of the group's terms.
- AND operator: across groups, a record must match at least one term from every group.
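A minimal sketch of how such a string is assembled by hand, with purely illustrative term groups:
# Terms within a group are joined with OR; the groups are then joined with AND
group1 <- c("facial reenactment", "face reenactment")
group2 <- c("generative adversarial", "neural rendering")
paste0("(", paste(group1, collapse = " OR "), ") AND (",
       paste(group2, collapse = " OR "), ")")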
In order to create the string, litsearchr documentation recommends doing this step manually. Each term is grouped together with other ‘like’ terms. A list of separate vectors, one for each subtopic, creates the groups.
For facial reenactment, the groups are defined further below. First, several known relevant terms did not survive the cutoff; to ensure they are included, they are added back manually.
extra_terms <- c(
"transfer facial",
"face swapping",
"facial expression synthesis", "expression synthesis",
"modified target", "performance capture","facial performance capture", "monocular video",
"target actor","target person"
)
selected_terms <- c(selected_terms, extra_terms)
selected_terms <- unique(sort(selected_terms))
selected_terms
write.csv(selected_terms, file = "PostNaiveSelectedTerms_V1.csv")
grouped_terms <-list(
Component = selected_terms[c(8,11,12,16,17,19,25,27,28,34,39,40,41)],
Methods = selected_terms[c(1,3,4,5,6,7,10,14,15,23,24,29,30,31,38)],
Results = selected_terms[c(9,18,20,26,36,37,42)],
Exclude = selected_terms[c(2,13)]
)
grouped_terms
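Because numeric indices are fragile if the term list changes, pattern matching can make the grouping easier to maintain. A sketch only; the patterns below are illustrative and not the grouping actually used above:
# Illustrative alternative: assign terms to a group by regular-expression match
Component_alt <- selected_terms[grepl("reenact|expression|face", selected_terms)]
Methods_alt <- selected_terms[grepl("adversarial|neural|learning", selected_terms)]
head(Component_alt)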
The write_search() function takes our list of grouped search terms and writes the text of a new search. There are quite a few arguments to take care of for this function:
- languages provides a list of languages to translate the search into, in case we want to get articles in multiple languages.
- exactphrase controls whether terms that consist of more than one word should be matched exactly rather than as two separate words. If we have phrases that are only relevant as a whole phrase, then we should set this to TRUE, so that for example social phobia will not also catch all the articles containing the word social.
- stemming controls whether words are stripped down to the smallest meaningful part of the word (its 'stem') so that we make sure to catch all variants of the word, for example catching both behavior and behavioral.
- closure controls whether partial matches are matched at the left end of a word ("left"), at the right ("right"), only as exact matches ("full") or as any word containing a term ("none").
- writesearch controls whether we would like to write the search text to a file.
write_search(
grouped_terms,
languages="English",
exactphrase=TRUE,
stemming=FALSE,
closure="left",
writesearch=TRUE
)
Let’s read in the contents of the text file that we just wrote, to see what our search text looks like.
searchString <- read_file("search-inEnglish.txt")
cat(searchString)
## \(\(face OR "facial expressions" OR "facial motion capture" OR "facial performance capture" OR geometry OR "monocular video" OR "motion capture" OR real-time OR "target actor" OR "target person" OR "target video"\) AND \("adversarial networks" OR "conditional generative" OR "deep learning" OR "experimental results" OR "expression synthesis" OR "face swapping" OR "facial expression" OR "generative adversarial" OR "neural network" OR "performance capture" OR "state-of-the-art methods"\) AND \("face reenactment" OR "facial performance" OR "facial reenactment" OR "modified target" OR "real-time facial reenactment" OR reenactment OR "transfer facial"\) AND \(animation OR "facial animation"\)\)
# searchString <- gsub("\\\\", "", searchString)
# searchString
We can now go back to the search site and copy the contents of this text file into the search field to conduct a new search.
This section checks whether the original works included in the study have remained in the new results. Some may not have, since arXiv titles were also included in the naive set and will be dropped here, as will any Wiley records.
After the new query, the results from the search are loaded again using litsearchr function import_results().
getwd()
## [1] "C:/Users/WesSa/OneDrive/Documents/Athabasca/AU Project/LitReview/FINAL_APRIL21_NAIVE_SELECTION PROCESS/RMD_Litsearchr_Tutorial"
dir_PostCitations <- "./POST CITATIONS/"
list.files(dir_PostCitations)
## [1] "POST IEEE Xplore Citation Download 2021.04.28.23.13.07.ris"
## [2] "POST_acm_101To150.bib"
## [3] "POST_acm_151To200.bib"
## [4] "POST_acm_1To50.bib"
## [5] "POST_acm_201To250.bib"
## [6] "POST_acm_251To280.bib"
## [7] "POST_acm_51To100.bib"
## [8] "POST_ScienceDirect_citations_1619565931828.bib"
## [9] "POST_scopus.bib"
## [10] "POST_SpringerLink_SearchResults.csv"
## [11] "POST_WOS_savedrecs.bib"
postNaiveimport <- litsearchr::import_results(dir_PostCitations, verbose = TRUE)
## Reading file ./POST CITATIONS/POST IEEE Xplore Citation Download 2021.04.28.23.13.07.ris ... done
## Reading file ./POST CITATIONS/POST_acm_101To150.bib ... done
## Reading file ./POST CITATIONS/POST_acm_151To200.bib ... done
## Reading file ./POST CITATIONS/POST_acm_1To50.bib ... done
## Reading file ./POST CITATIONS/POST_acm_201To250.bib ... done
## Reading file ./POST CITATIONS/POST_acm_251To280.bib ... done
## Reading file ./POST CITATIONS/POST_acm_51To100.bib ... done
## Reading file ./POST CITATIONS/POST_ScienceDirect_citations_1619565931828.bib ... done
## Reading file ./POST CITATIONS/POST_scopus.bib ... done
## Reading file ./POST CITATIONS/POST_SpringerLink_SearchResults.csv ... done
## Reading file ./POST CITATIONS/POST_WOS_savedrecs.bib ... done
# colnames(postNaiveimport)
#postSources:
postSources <- rownames(postNaiveimport)
postSources <- gsub(dir_PostCitations,"",postSources)
postSources <- substr(postSources,1,9)
table(postSources)
## postSources
## POST IEEE POST_acm_ POST_Scie POST_scop POST_Spri POST_WOS_
## 48 280 8 101 38 35
postResults <- postNaiveimport
Checking the names of all the columns is required again to ensure they are aligned across the citation sets.
colnames(postNaiveimport)
## [1] "source_type" "author"
## [3] "year" "title"
## [5] "journal" "source"
## [7] "start_page" "end_page"
## [9] "abstract" "keywords"
## [11] "doi" "issn"
## [13] "Y1" "volume"
## [15] "issue" "VO"
## [17] "type" "issue_date"
## [19] "publisher" "address"
## [21] "number" "url"
## [23] "articleno" "numpages"
## [25] "isbn" "booktitle"
## [27] "pages" "location"
## [29] "series" "note"
## [31] "affiliation" "document_type"
## [33] "author_keywords" "art_number"
## [35] "page_count" "item_title"
## [37] "publication_title" "book_series_title"
## [39] "journal_volume" "journal_issue"
## [41] "item_doi" "publication_year"
## [43] "content_type" "month"
## [45] "language" "article_number"
## [47] "eissn" "research_areas"
## [49] "web_of_science_categories" "author_email"
## [51] "researcherid_numbers" "number_of_cited_references"
## [53] "times_cited" "usage_count_last_180_days"
## [55] "usage_count_since_2013" "journal_iso"
## [57] "doc_delivery_number" "unique_id"
## [59] "da" "keywords_plus"
## [61] "orcid_numbers" "funding_acknowledgement"
## [63] "funding_text" "book_group_author"
## [65] "organization" "oa"
## [67] "editor" "filename"
The number of new documents discovered is shown below.
nrow(postNaiveimport)
## [1] 510
We now need to check whether the new results seem to be relevant to our chosen topic. There are a few basic things that we can check.
postResults <- data.frame (doi_good = postNaiveimport$doi,
doi_item = postNaiveimport$item_doi,
title_good = postNaiveimport$title,
title_item = postNaiveimport$item_title,
abst_good = postNaiveimport$abstract,
kywrd_good = postNaiveimport$keywords,
kywrd_auth = postNaiveimport$author_keywords,
kywrd_plus = postNaiveimport$keywords_plus,
year = postNaiveimport$year,
year_pub = postNaiveimport$publication_year,
year_iss = right(postNaiveimport$issue_date,4),
year_fd = right(postNaiveimport$Y1,4),
Sources = postSources
)
postResults$DOI <- ifelse(is.na(postResults$doi_good),postResults$doi_item, postResults$doi_good)
postResults$DOI <- gsub("https://doi.org/","",postResults$DOI)
postResults$YEAR <- ifelse(is.na(postResults$year),postResults$year_pub, postResults$year)
postResults$YEAR <- ifelse(is.na(postResults$YEAR),postResults$year_iss, postResults$YEAR)
postResults$TITLE <- ifelse(is.na(postResults$title_good), postResults$title_item, postResults$title_good)
postResults$ABSTRACT <- postResults$abst_good
postResults$KEYWORDS <- paste(postResults$kywrd_good, postResults$kywrd_auth, postResults$kywrd_plus, sep = ",")
Before removing duplicate papers, a quick check of the number of papers by year is performed.
yearOut_OrigPost <- table(postResults$YEAR)
barplot(yearOut_OrigPost, main="Years of Publication for Facial Reenactment - POST Query Original",
xlab="Year", ylab = "Number of Papers Returned")
Please note that there are a few articles which are repeated in the set. This is because they were found in multiple databases. The titles which were discovered in at least three of the databases include:
tableOfTitlesPost <- as.data.frame(table(toupper(postResults$TITLE)))
tableOfTitlesPost <- tableOfTitlesPost[rev(order(tableOfTitlesPost$Freq)),]
colnames(tableOfTitlesPost) <- c("Title of Research Paper", "Freq")
knitr::kable(tableOfTitlesPost[tableOfTitlesPost$Freq>2,], caption = "Most Common Research Papers - Different Database Sources")
|  | Title of Research Paper | Freq |
|---|---|---|
| 258 | PRACTICAL DYNAMIC FACIAL APPEARANCE MODELING AND ACQUISITION | 5 |
| 179 | HIGH RESOLUTION PASSIVE FACIAL PERFORMANCE CAPTURE | 5 |
| 176 | HIGH-QUALITY PASSIVE FACIAL PERFORMANCE CAPTURE USING ANCHOR FRAMES | 5 |
| 166 | HEAD-MOUNTED PHOTOMETRIC STEREO FOR PERFORMANCE CAPTURE | 4 |
| 142 | FACIAL PERFORMANCE CAPTURE AND EXPRESSIVE TRANSLATION FOR KING KONG | 4 |
| 131 | FACE2FACE: REAL-TIME FACE CAPTURE AND REENACTMENT OF RGB VIDEOS | 4 |
| 293 | RECONSTRUCTING DETAILED DYNAMIC FACE GEOMETRY FROM MONOCULAR VIDEO | 3 |
| 287 | REALISTIC FACIAL EXPRESSION RECONSTRUCTION FOR VR HMD USERS | 3 |
| 283 | REAL-TIME HIGH-FIDELITY FACIAL PERFORMANCE CAPTURE | 3 |
| 282 | REAL-TIME HIERARCHICAL FACIAL PERFORMANCE CAPTURE | 3 |
| 279 | REAL-TIME FACIAL MOTION CAPTURE USING RGB-D IMAGES UNDER COMPLEX MOTION AND OCCLUSIONS | 3 |
| 270 | REAL-TIME 3D FACE-EYE PERFORMANCE CAPTURE OF A PERSON WEARING VR HEADSET | 3 |
| 269 | REAL-TIME 3D EYELIDS TRACKING FROM SEMANTIC EDGES | 3 |
| 236 | NEURAL VOLUMES: LEARNING DYNAMIC RENDERABLE VOLUMES FROM IMAGES | 3 |
| 221 | MODULAR PRIMITIVES FOR HIGH-PERFORMANCE DIFFERENTIABLE RENDERING | 3 |
| 178 | HIGH QUALITY BINOCULAR FACIAL PERFORMANCE CAPTURE FROM PARTIALLY BLURRED IMAGE SEQUENCE | 3 |
| 171 | HIGH-DETAIL 3D CAPTURE AND NON-SEQUENTIAL ALIGNMENT OF FACIAL PERFORMANCE | 3 |
| 116 | EMOTION RECOGNITION VIA FACIAL EXPRESSIONS | 3 |
| 99 | DETAILED SPATIO-TEMPORAL RECONSTRUCTION OF EYELIDS | 3 |
| 93 | DEEPFACEFLOW: IN-THE-WILD DENSE 3D FACIAL MOTION ESTIMATION | 3 |
| 89 | DEEP INCREMENTAL LEARNING FOR EFFICIENT HIGH-FIDELITY FACE TRACKING | 3 |
| 38 | AN ANATOMICALLY-CONSTRAINED LOCAL DEFORMATION MODEL FOR MONOCULAR FACE CAPTURE | 3 |
| 36 | ACTGAN: FLEXIBLE AND EFFICIENT ONE-SHOT FACE REENACTMENT | 3 |
| 19 | A HYBRID APPROACH FOR FACIAL PERFORMANCE ANALYSIS AND EDITING | 3 |
postResults_Unique <- litsearchr::remove_duplicates(postResults, field = "TITLE", method = "string_osa")
postResults_Unique <- litsearchr::remove_duplicates(postResults_Unique, field = "DOI", method = "exact")
After reducing to unique papers, the new distribution of publication years is shown below:
yearOut_OrigPostUnique <- table(postResults_Unique$YEAR)
barplot(yearOut_OrigPostUnique, main="Years of Publication for Facial Reenactment - POST Query Unique",
xlab="Year", ylab = "Number of Papers Returned")
The count of papers in 2017, and the spike in 2016, did not drop after de-duplication.
We can first check whether all of the results of the naive search are in the new search. Since we conducted the naive search using the most important terms that occurred to us for our topic, and since we included these same terms or very similar in our new search, we ought to get the same articles back among our new results, at least if they were really relevant.
naiveresults %>%
mutate(in_new_results=TITLE %in% postResults_Unique[, "TITLE"]) ->
naive_resultsIn
naive_resultsIn %>%
filter(!in_new_results) %>%
select(TITLE, KEYWORDS)
Overall, they all appear to have remained in the new Post Naive citations.
The following are the papers which need to be reviewed, along with the number of the six research databases (IEEE, ACM, Science Direct, Scopus, SpringerLink, Web of Science) in which each was discovered.
rownames(tableOfTitlesPost) <- c(1:nrow(tableOfTitlesPost))
colnames(tableOfTitlesPost) <- c("Discovered Document", "Number of DB")
knitr::kable(tableOfTitlesPost[1:(nrow(tableOfTitlesPost)-1),], caption = "Final List of Discovered Research Papers - Different Database Sources")
| Discovered Document | Number of DB |
|---|---|
| PRACTICAL DYNAMIC FACIAL APPEARANCE MODELING AND ACQUISITION | 5 |
| HIGH RESOLUTION PASSIVE FACIAL PERFORMANCE CAPTURE | 5 |
| HIGH-QUALITY PASSIVE FACIAL PERFORMANCE CAPTURE USING ANCHOR FRAMES | 5 |
| HEAD-MOUNTED PHOTOMETRIC STEREO FOR PERFORMANCE CAPTURE | 4 |
| FACIAL PERFORMANCE CAPTURE AND EXPRESSIVE TRANSLATION FOR KING KONG | 4 |
| FACE2FACE: REAL-TIME FACE CAPTURE AND REENACTMENT OF RGB VIDEOS | 4 |
| RECONSTRUCTING DETAILED DYNAMIC FACE GEOMETRY FROM MONOCULAR VIDEO | 3 |
| REALISTIC FACIAL EXPRESSION RECONSTRUCTION FOR VR HMD USERS | 3 |
| REAL-TIME HIGH-FIDELITY FACIAL PERFORMANCE CAPTURE | 3 |
| REAL-TIME HIERARCHICAL FACIAL PERFORMANCE CAPTURE | 3 |
| REAL-TIME FACIAL MOTION CAPTURE USING RGB-D IMAGES UNDER COMPLEX MOTION AND OCCLUSIONS | 3 |
| REAL-TIME 3D FACE-EYE PERFORMANCE CAPTURE OF A PERSON WEARING VR HEADSET | 3 |
| REAL-TIME 3D EYELIDS TRACKING FROM SEMANTIC EDGES | 3 |
| NEURAL VOLUMES: LEARNING DYNAMIC RENDERABLE VOLUMES FROM IMAGES | 3 |
| MODULAR PRIMITIVES FOR HIGH-PERFORMANCE DIFFERENTIABLE RENDERING | 3 |
| HIGH QUALITY BINOCULAR FACIAL PERFORMANCE CAPTURE FROM PARTIALLY BLURRED IMAGE SEQUENCE | 3 |
| HIGH-DETAIL 3D CAPTURE AND NON-SEQUENTIAL ALIGNMENT OF FACIAL PERFORMANCE | 3 |
| EMOTION RECOGNITION VIA FACIAL EXPRESSIONS | 3 |
| DETAILED SPATIO-TEMPORAL RECONSTRUCTION OF EYELIDS | 3 |
| DEEPFACEFLOW: IN-THE-WILD DENSE 3D FACIAL MOTION ESTIMATION | 3 |
| DEEP INCREMENTAL LEARNING FOR EFFICIENT HIGH-FIDELITY FACE TRACKING | 3 |
| AN ANATOMICALLY-CONSTRAINED LOCAL DEFORMATION MODEL FOR MONOCULAR FACE CAPTURE | 3 |
| ACTGAN: FLEXIBLE AND EFFICIENT ONE-SHOT FACE REENACTMENT | 3 |
| A HYBRID APPROACH FOR FACIAL PERFORMANCE ANALYSIS AND EDITING | 3 |
| VIRTUAL HEADCAM: PAN/TILT MIRROR-BASED FACIAL PERFORMANCE TRACKING | 2 |
| VIDEO FACE REPLACEMENT | 2 |
| UNCONSTRAINED REALTIME FACIAL PERFORMANCE CAPTURE | 2 |
| SIGN CORRELATION SUBSPACE FOR FACE ALIGNMENT | 2 |
| SIGN-CORRELATION PARTITION BASED ON GLOBAL SUPERVISED DESCENT METHOD FOR FACE ALIGNMENT | 2 |
| SEMANTIC DEEP FACE MODELS | 2 |
| REPURPOSING LABELED PHOTOGRAPHS FOR FACIAL TRACKING WITH ALTERNATIVE CAMERA INTRINSICS | 2 |
| REALTIME PERFORMANCE-BASED FACIAL ANIMATION | 2 |
| REAL-TIME FACIAL SEGMENTATION AND PERFORMANCE CAPTURE FROM RGB INPUT | 2 |
| REAL-TIME EXPRESSION TRANSFER FOR FACIAL REENACTMENT | 2 |
| POST-PRODUCTION FACIAL PERFORMANCE RELIGHTING USING REFLECTANCE TRANSFER | 2 |
| PINSCREEN: 3D AVATAR FROM A SINGLE IMAGE | 2 |
| PERFORMANCE RELIGHTING AND REFLECTANCE TRANSFORMATION WITH TIME-MULTIPLEXED ILLUMINATION | 2 |
| NEURAL VOICE PUPPETRY: AUDIO-DRIVEN FACIAL REENACTMENT | 2 |
| MULTIVIEW FACE CAPTURE USING POLARIZED SPHERICAL GRADIENT ILLUMINATION | 2 |
| MULTI-SCALE CAPTURE OF FACIAL GEOMETRY AND MOTION | 2 |
| MOVIERESHAPE: TRACKING AND RESHAPING OF HUMANS IN VIDEOS | 2 |
| MARKERLESS 3D FACIAL MOTION CAPTURE SYSTEM | 2 |
| LIGHTWEIGHT BINOCULAR FACIAL PERFORMANCE CAPTURE UNDER UNCONTROLLED LIGHTING | 2 |
| LEVERAGING MOTION CAPTURE AND 3D SCANNING FOR HIGH-FIDELITY FACIAL PERFORMANCE ACQUISITION | 2 |
| INSTANCE-WEIGHTED TRANSFER LEARNING OF ACTIVE APPEARANCE MODELS | 2 |
| HOLOCHAT: 3D AVATARS ON MOBILE LIGHT FIELD DISPLAYS | 2 |
| HEAD2HEAD: VIDEO-BASED NEURAL HEAD SYNTHESIS | 2 |
| GENERATIVE VIDEO FACE REENACTMENT BY AUS AND GAZE REGULARIZATION | 2 |
| FSGAN: SUBJECT AGNOSTIC FACE SWAPPING AND REENACTMENT | 2 |
| FREENET: MULTI-IDENTITY FACE REENACTMENT | 2 |
| FACIAL PERFORMANCE SYNTHESIS USING DEFORMATION-DRIVEN POLYNOMIAL DISPLACEMENT MAPS | 2 |
| FACIAL MAKEUP TRANSFER COMBINING ILLUMINATION TRANSFER | 2 |
| FACEDIRECTOR: CONTINUOUS CONTROL OF FACIAL PERFORMANCE IN VIDEO | 2 |
| FACE VIDEO GENERATION FROM A SINGLE IMAGE AND LANDMARKS | 2 |
| EGOCENTRIC VIDEOCONFERENCING | 2 |
| DYNAMIC SHAPE CAPTURE USING MULTI-VIEW PHOTOMETRIC STEREO | 2 |
| DRIVING HIGH-RESOLUTION FACIAL SCANS WITH VIDEO PERFORMANCE CAPTURE | 2 |
| DEMO OF FACE2FACE: REAL-TIME FACE CAPTURE AND REENACTMENT OF RGB VIDEOS | 2 |
| DEFERRED NEURAL RENDERING: IMAGE SYNTHESIS USING NEURAL TEXTURES | 2 |
| DEEP REFLECTANCE FIELDS: HIGH-QUALITY FACIAL REFLECTANCE FIELD INFERENCE FROM COLOR GRADIENT ILLUMINATION | 2 |
| COOPERATIVE PATCH-BASED 3D SURFACE TRACKING | 2 |
| COMPREHENSIVE FACIAL PERFORMANCE CAPTURE | 2 |
| BINOCULAR PHOTOMETRIC STEREO ACQUISITION AND RECONSTRUCTION FOR 3D TALKING HEAD APPLICATIONS | 2 |
| AVENGERS: CAPTURING THANOS’S COMPLEX FACE | 2 |
| APB2FACE: AUDIO-GUIDED FACE REENACTMENT WITH AUXILIARY POSE AND BLINK SIGNALS | 2 |
| ANY-TO-ONE FACE REENACTMENT BASED ON CONDITIONAL GENERATIVE ADVERSARIAL NETWORK | 2 |
| ANATOMICAL CONSIDERATIONS IN FACIAL MOTION CAPTURE | 2 |
| ANALYSIS AND SYNTHESIS OF FACIAL EXPRESSIONS WITH HAND-GENERATED MUSCLE ACTUATION BASIS | 2 |
| AN OVERVIEW OF DEEPFAKE: THE SWORD OF DAMOCLES IN AI | 2 |
| A PRACTICAL APPEARANCE MODEL FOR DYNAMIC FACIAL COLOR | 2 |
| A CELL PHONE BASED PLATFORM FOR FACIAL PERFORMANCE CAPTURE | 2 |
| 15TH INTERNATIONAL SYMPOSIUM ON VISUAL COMPUTING, ISVC 2020 | 2 |
| XPRESSION: MOBILE REAL-TIME FACIAL EXPRESSION TRANSFER | 1 |
| XPRESSION : MOBILE REAL-TIME FACIAL EXPRESSION TRANSFER | 1 |
| WHAT IF YOUR CAR WOULD CARE? EXPLORING USE CASES FOR AFFECTIVE AUTOMOTIVE USER INTERFACES | 1 |
| WARP-GUIDED GANS FOR SINGLE-PHOTO FACIAL ANIMATION | 1 |
| VR FACIAL ANIMATION VIA MULTIVIEW IMAGE TRANSLATION | 1 |
| VR CONTENT CREATION AND EXPLORATION WITH DEEP LEARNING: A SURVEY | 1 |
| VMV 2015 - VISION, MODELING AND VISUALIZATION | 1 |
| VISUAL SPEECH ANIMATION | 1 |
| VISION-BASED TARGET TRACKING FOR UNMANNED SURFACE VEHICLE CONSIDERING ITS MOTION FEATURES | 1 |
| VIRTUAL CHARACTER PERFORMANCE FROM SPEECH | 1 |
| VIDEO TO FULLY AUTOMATIC 3D HAIR MODEL | 1 |
| VIDEO SYNTHESIS OF HUMAN UPPER BODY WITH REALISTIC FACE | 1 |
| VIDEO-BASED FACIAL RE-ANIMATION | 1 |
| VIDEO-AUDIO DRIVEN REAL-TIME FACIAL ANIMATION | 1 |
| USING FACIAL EMOTIONAL SIGNALS FOR COMMUNICATION BETWEEN EMOTIONALLY EXPRESSIVE AVATARS IN VIRTUAL WORLDS | 1 |
| UNIFIED APPLICATION OF STYLE TRANSFER FOR FACE SWAPPING AND REENACTMENT | 1 |
| UNCANNY VALLEY IN VIRTUAL REALITY | 1 |
| U-NET CONDITIONAL GANS FOR PHOTO-REALISTIC AND IDENTITY-PRESERVING FACIAL EXPRESSION SYNTHESIS | 1 |
| TRANSFIGURING PORTRAITS | 1 |
| TRANSFER MODEL COLLABORATING METRIC LEARNING AND DICTIONARY LEARNING FOR CROSS-DOMAIN FACIAL EXPRESSION RECOGNITION | 1 |
| TOWARDS PROPAGATION OF CHANGES BY MODEL APPROXIMATIONS | 1 |
| TOWARDS OPTIMAL NON-RIGID SURFACE TRACKING | 1 |
| TOWARDS HIGHER QUALITY CHARACTER PERFORMANCE IN PREVIZ | 1 |
| TOWARDS A DATA-DRIVEN FRAMEWORK FOR REALISTIC SELF-ORGANIZED VIRTUAL HUMANS: COORDINATED HEAD AND EYE MOVEMENTS | 1 |
| TOWARD HUMAN-MAGIC INTERACTION: INTERFACING BIOLOGICAL, TANGIBLE, AND CULTURAL TECHNOLOGY | 1 |
| TOTAL MOVING FACE RECONSTRUCTION | 1 |
| THIN SKIN ELASTODYNAMICS | 1 |
| THE VR BOOK: HUMAN-CENTERED DESIGN FOR VIRTUAL REALITY | 1 |
| THE STATE OF FAKERY | 1 |
| THE RELIGHTABLES: VOLUMETRIC PERFORMANCE CAPTURE OF HUMANS WITH REALISTIC RELIGHTING | 1 |
| THE PRACTICE OF SMILING: FACIAL EXPRESSION AND REPERTORY PERFORMANCE IN PROFESSOR PALMAI’S “SCHOOL FOR SMILES” | 1 |
| THE PRACTICE OF SMILING: FACIAL EXPRESSION AND REPERTORY PERFORMANCE IN PROFESSOR PALMAI’S ``SCHOOL FOR SMILES{’’ | 1 |
| THE PARAMETER OPTIMIZATION OF THE PID AND PIDD CONTROLLER FOR A DISCRETE OBJECT | 1 |
| THE JENA SPEAKER SET (JESS) | 1 |
| FULL BODY PERFORMANCE CAPTURE UNDER UNCONTROLLED AND VARYING ILLUMINATION: A SHADING-BASED APPROACH | 1 |
| THE IMPACT OF STYLIZATION ON FACE RECOGNITION | 1 |
| THE IMPACT OF A VIRTUAL AGENT’S NON-VERBAL EMOTIONAL EXPRESSION ON A USER’S PERSONAL SPACE PREFERENCES | 1 |
| THE HISTORY OF DIGITAL SPAM | 1 |
| THE FACES OF “THE POLAR EXPRESS” | 1 |
| THE EYES HAVE IT: AN INTEGRATED EYE AND FACE MODEL FOR PHOTOREALISTIC FACIAL ANIMATION | 1 |
| THE DIGITAL EMILY PROJECT: PHOTOREAL FACIAL MODELING AND ANIMATION | 1 |
| THE CREATION AND DETECTION OF DEEPFAKES: A SURVEY | 1 |
| THE ACADEMY’S SCIENTIFIC AND TECHNICAL AWARDS: THE TECHNOLOGY, THE AWARDEES, AND THE PROCESS | 1 |
| TEXTURE DEFORMATION BASED GENERATIVE ADVERSARIAL NETWORKS FOR MULTI-DOMAIN FACE EDITING | 1 |
| TEXT-BASED EDITING OF TALKING-HEAD VIDEO | 1 |
| TEMPORALLY CONSISTENT WIDE BASELINE FACIAL PERFORMANCE CAPTURE VIA IMAGE WARPING | 1 |
| TEMPORAL UPSAMPLING OF PERFORMANCE GEOMETRY USING PHOTOMETRIC ALIGNMENT | 1 |
| TECHNICAL PERSPECTIVE: PHOTOREALISTIC FACIAL DIGITIZATION AND MANIPULATION | 1 |
| TAKETOONS: SCRIPT-DRIVEN PERFORMANCE ANIMATION | 1 |
| SYNTHESIZING OBAMA: LEARNING LIP SYNC FROM AUDIO | 1 |
| SYNTHESIS OF FACIAL EXPRESSIONS IN PHOTOGRAPHS: CHARACTERISTICS, APPROACHES, AND CHALLENGES | 1 |
| SUPERVISED COORDINATE DESCENT METHOD WITH A 3D BILINEAR MODEL FOR FACE ALIGNMENT AND TRACKING | 1 |
| STYLE TRANSFER FOR HEADSHOT PORTRAITS | 1 |
| STUDY OF HOLOPORTATION: USING NETWORK ERRORS FOR IMPROVING ACCURACY AND EFFICIENCY | 1 |
| STRUCTURES OF UNFEELING: MYSTERIOUS SKIN | 1 |
| STRUCTURE-AWARE TRANSFER OF FACIAL BLENDSHAPES | 1 |
| STABILIZED REAL-TIME FACE TRACKING VIA A LEARNED DYNAMIC RIGIDITY PRIOR | 1 |
| SPEECH-DRIVEN FACE REENACTMENT FOR A VIDEO SEQUENCE | 1 |
| SPARSE LOCALIZED DEFORMATION COMPONENTS | 1 |
| SPACETIME EXPRESSION CLONING FOR BLENDSHAPES | 1 |
| SONY PICTURES IMAGEWORKS | 1 |
| SOCIAL TELEPRESENCE BAKEOFF: SKYPE GROUP VIDEO CALLING, GOOGLE+ HANGOUTS, AND MICROSOFT AVATAR KINECT | 1 |
| SMOOTH CONTACT-AWARE FACIAL BLENDSHAPES TRANSFER | 1 |
| SKIN MICROSTRUCTURE DEFORMATION WITH DISPLACEMENT MAP CONVOLUTION | 1 |
| SINGLE IMAGE PORTRAIT RELIGHTING VIA EXPLICIT MULTIPLE REFLECTANCE CHANNEL MODELING | 1 |
| SINGLE-SHOT HIGH-QUALITY FACIAL GEOMETRY AND SKIN APPEARANCE CAPTURE | 1 |
| SIGGRAPH ’18: ACM SIGGRAPH 2018 TALKS | 1 |
| SIGGRAPH ’18: ACM SIGGRAPH 2018 REAL-TIME LIVE! | 1 |
| SIGGRAPH ’18: ACM SIGGRAPH 2018 ART GALLERY | 1 |
| SIGGRAPH ’17: ACM SIGGRAPH 2017 TALKS | 1 |
| SIGGRAPH ’17: ACM SIGGRAPH 2017 POSTERS | 1 |
| SIGGRAPH ’17: ACM SIGGRAPH 2017 PANELS | 1 |
| SIGGRAPH ’17: ACM SIGGRAPH 2017 EMERGING TECHNOLOGIES | 1 |
| SIGGRAPH ’16: ACM SIGGRAPH 2016 TALKS | 1 |
| SIGGRAPH ’16: ACM SIGGRAPH 2016 POSTERS | 1 |
| SIGGRAPH ’16: ACM SIGGRAPH 2016 EMERGING TECHNOLOGIES | 1 |
| SIGGRAPH ’16: ACM SIGGRAPH 2016 COURSES | 1 |
| SIGGRAPH ’15: ACM SIGGRAPH 2015 TALKS | 1 |
| SIGGRAPH ’15: ACM SIGGRAPH 2015 POSTERS | 1 |
| SIGGRAPH ’15: ACM SIGGRAPH 2015 EMERGING TECHNOLOGIES | 1 |
| SIGGRAPH ’15: ACM SIGGRAPH 2015 COURSES | 1 |
| SIGGRAPH ’14: ACM SIGGRAPH 2014 TALKS | 1 |
| SIGGRAPH ’14: ACM SIGGRAPH 2014 COURSES | 1 |
| SIGGRAPH ’13: ACM SIGGRAPH 2013 EMERGING TECHNOLOGIES | 1 |
| SIGGRAPH ’13: ACM SIGGRAPH 2013 COMPUTER ANIMATION FESTIVAL | 1 |
| SIAMC: A SOCIALLY IMMERSIVE AVATAR MEDIATED COMMUNICATION PLATFORM | 1 |
| SENTENCE GUIDED OBJECT COLOR CHANGE BY ADVERSARIAL LEARNING | 1 |
| SEMANTICALLY-AWARE BLENDSHAPE RIGS FROM FACIAL PERFORMANCE MEASUREMENTS | 1 |
| SEMANTIC FACIAL SCORES AND COMPACT DEEP TRANSFERRED DESCRIPTORS FOR SCALABLE FACE IMAGE RETRIEVAL | 1 |
| SELFIE VIDEO STABILIZATION | 1 |
| SECONDARY MOTION FOR PERFORMED 2D ANIMATION | 1 |
| SA ’19: SIGGRAPH ASIA 2019 TECHNICAL BRIEFS | 1 |
| SA ’18: SIGGRAPH ASIA 2018 REAL-TIME LIVE! | 1 |
| SA ’18: SIGGRAPH ASIA 2018 EMERGING TECHNOLOGIES | 1 |
| SA ’16: SIGGRAPH ASIA 2016 VR SHOWCASE | 1 |
| SA ’16: SIGGRAPH ASIA 2016 TECHNICAL BRIEFS | 1 |
| SA ’16: SIGGRAPH ASIA 2016 EMERGING TECHNOLOGIES | 1 |
| SA ’15: SIGGRAPH ASIA 2015 TECHNICAL BRIEFS | 1 |
| SA ’14: SIGGRAPH ASIA 2014 AUTONOMOUS VIRTUAL HUMANS AND SOCIAL ROBOT FOR TELEPRESENCE | 1 |
| ROBUST 3D HUMAN FACE RECONSTRUCTION BY CONSUMER BINOCULAR-STEREO CAMERAS | 1 |
| RIGID STABILIZATION OF FACIAL EXPRESSIONS | 1 |
| REENACTNET: REAL-TIME FULL HEAD REENACTMENT | 1 |
| REDUCED MARKER LAYOUTS FOR OPTICAL MOTION CAPTURE OF HANDS | 1 |
| RECYCLING A LANDMARK DATASET FOR REAL-TIME FACIAL CAPTURE AND ANIMATION WITH LOW COST HMD INTEGRATED CAMERAS | 1 |
| RECORDING AND REENACTMENT OF COLLABORATIVE DIAGNOSIS SESSIONS USING DICOM | 1 |
| RECONSTRUCTION OF PERSONALIZED 3D FACE RIGS FROM MONOCULAR VIDEO | 1 |
| REALTIME PERFORMANCE-BASED FACIAL AVATARS FOR IMMERSIVE GAMEPLAY | 1 |
| REALTIME FACIAL ANIMATION WITH ON-THE-FLY CORRECTIVES | 1 |
| REALTIME 3D EYE GAZE ANIMATION USING A SINGLE RGB CAMERA | 1 |
| REALISTIC RETARGETING OF FACIAL VIDEO | 1 |
| REAL-TIME, SINGLE CAMERA, DIGITAL HUMAN DEVELOPMENT | 1 |
| REAL-TIME VLSI ARCHITECTURE FOR HYPERSPECTRAL IMAGE CLASSIFICATION USING THE CONSTRAINED LINEAR DISCRIMINANT ALGORITHM | 1 |
| REAL-TIME LINE-OF-SIGHT RATE ESTIMATOR FOR SURFACE-TO-AIR MISSILES WITH A RF SEEKER | 1 |
| REAL-TIME FACIAL TRACKING IN VIRTUAL REALITY | 1 |
| REAL-TIME FACIAL ANIMATION WITH IMAGE-BASED DYNAMIC AVATARS | 1 |
| REAL-TIME FACIAL ANIMATION ON MOBILE DEVICES | 1 |
| REAL-TIME FACIAL ANIMATION FROM LIVE VIDEO TRACKING | 1 |
| REAL-TIME FACE VIEW CORRECTION FOR FRONT-FACING CAMERAS | 1 |
| REAL-TIME FACE VIDEO SWAPPING FROM A SINGLE PORTRAIT | 1 |
| REAL-TIME EXPRESSION-SENSITIVE HMD FACE RECONSTRUCTION | 1 |
| REAL-TIME CLEANING AND REFINEMENT OF FACIAL ANIMATION SIGNALS | 1 |
| REAL-TIME 3D EYE PERFORMANCE RECONSTRUCTION FOR RGBD CAMERAS | 1 |
| READING BETWEEN THE DOTS: COMBINING 3D MARKERS AND FACS CLASSIFICATION FOR HIGH-QUALITY BLENDSHAPE FACIAL ANIMATION | 1 |
| RAPID SATELLITE CAPTURE BY A SPACE ROBOT BASED ON DELAY COMPENSATION | 1 |
| RAPID PHOTOREALISTIC BLENDSHAPE MODELING FROM RGB-D SENSORS | 1 |
| RAMPAGE: A PRODUCT OF EVOLUTION | 1 |
| PROTECTING REAL-TIME VIDEO CHAT AGAINST FAKE FACIAL VIDEOS GENERATED BY FACE REENACTMENT | 1 |
| PROGRESS AND CHALLENGES OF REMOTE SENSING EDGE INTELLIGENCE TECHNOLOGY | 1 |
| PRODUCTION-LEVEL FACIAL PERFORMANCE CAPTURE USING DEEP CONVOLUTIONAL NEURAL NETWORKS | 1 |
| PROCEEDINGS - DIGIPRO 2020: ACM SIGGRAPH DIGITAL PRODUCTION SYMPOSIUM | 1 |
| PROCEEDINGS - 13TH INTERNATIONAL CONFERENCE ON COMPUTER-AIDED DESIGN AND COMPUTER GRAPHICS, CAD/GRAPHICS 2013 | 1 |
| POSE-SPACE ANIMATION AND TRANSFER OF FACIAL DETAILS | 1 |
| PLAYING HIDE-AND-SEEK WITH A FOCUSED MOBILE ADVERSARY IN UNATTENDED WIRELESS SENSOR NETWORKS | 1 |
| PLAYABLE UNIVERSAL CAPTURE: COMPRESSION AND REAL-TIME SEQUENCING OF IMAGE-BASED FACIAL ANIMATION | 1 |
| PINSCREEN AVATARS IN YOUR POCKET: MOBILE PAGAN ENGINE AND PERSONALIZED GAMING | 1 |
| PIE: PORTRAIT IMAGE EMBEDDING FOR SEMANTIC CONTROL | 1 |
| PHYSICAL FACE CLONING | 1 |
| PHOTOREALISTIC FACIAL SYNTHESIS IN THE DIMENSIONAL AFFECT SPACE | 1 |
| PHACE: PHYSICS-BASED FACE MODELING AND ANIMATION | 1 |
| PERSONA: A METHOD FOR FACIAL ANALYSIS IN VIDEO AND APPLICATION IN ENTERTAINMENT | 1 |
| PERFORMING ANGST | 1 |
| PERFORMANCE-BASED EXPRESSIVE CHARACTER ANIMATION | 1 |
| PERCEPTION OF LINEAR AND NONLINEAR MOTION PROPERTIES USING A FACS VALIDATED 3D FACIAL MODEL | 1 |
| PAGAN: REAL-TIME AVATARS USING DYNAMIC TEXTURES | 1 |
| OPTIMIZATIONS OF VR360 ANIMATION PRODUCTION PROCESS | 1 |
| OPTIMAL MARKER SET FOR MOTION CAPTURE OF DYNAMICAL FACIAL EXPRESSIONS | 1 |
| ONLINE MODELING FOR REALTIME FACIAL ANIMATION | 1 |
| ONLINE GENERATIVE MODEL PERSONALIZATION FOR HAND TRACKING | 1 |
| ON-SET PERFORMANCE CAPTURE OF MULTIPLE ACTORS WITH A STEREO CAMERA | 1 |
| NEURAL STYLE-PRESERVING VISUAL DUBBING | 1 |
| NEURAL RENDERING AND REENACTMENT OF HUMAN ACTOR VIDEOS | 1 |
| NEURAL FACE MODELS FOR EXAMPLE-BASED VISUAL SPEECH SYNTHESIS | 1 |
| MUSIC AS AN ENHANCER FOR IMAGINED MOVEMENT | 1 |
| MUSCLE SIMULATION FOR FACIAL ANIMATION IN KONG: SKULL ISLAND | 1 |
| MULTI - BIOMETRICS APPROACH FOR FACIAL RECOGNITION | 1 |
| MULTI-VIEW STEREO ON CONSISTENT FACE TOPOLOGY | 1 |
| MULTI-TASK LEARNING FOR DETECTING AND SEGMENTING MANIPULATED FACIAL IMAGES AND VIDEOS | 1 |
| MULTI-SUBSPACE SUPERVISED DESCENT METHOD FOR ROBUST FACE ALIGNMENT | 1 |
| MOOD CONGRUITY AND EPISODIC MEMORY IN YOUNG CHILDREN | 1 |
| MODELING AND CAPTURING THE HUMAN BODY: FOR RENDERING, HEALTH AND VISUALIZATION | 1 |
| MODEL-BASED TEETH RECONSTRUCTION | 1 |
| MODEL-BASED SYNTHESIS OF VISUAL SPEECH MOVEMENTS FROM 3D VIDEO | 1 |
| MODALITY DROPOUT FOR IMPROVED PERFORMANCE-DRIVEN TALKING FACES | 1 |
| MIG ’15: PROCEEDINGS OF THE 8TH ACM SIGGRAPH CONFERENCE ON MOTION IN GAMES | 1 |
| MICROEMULSION SYSTEMS BASED ON A C8/10 ALKYL POLYGLUCOSIDE: A REENTRANT PHASE INVERSION INDUCED BY ALCOHOLS? | 1 |
| MENTAL MODELS OF A MOBILE SHOE RACK: EXPLORATORY FINDINGS FROM A LONG-TERM IN-THE-WILD STUDY | 1 |
| MARKER OPTIMIZATION FOR FACIAL MOTION ACQUISITION AND DEFORMATION | 1 |
| MAKING DIGITAL CHARACTERS: CREATION, DEFORMATION, AND ANIMATION | 1 |
| MAKEITTALK: SPEAKER-AWARE TALKING-HEAD ANIMATION | 1 |
| LOCAL SHAPE BLENDING USING COHERENT WEIGHTED REGIONS | 1 |
| LIGHTWEIGHT WRINKLE SYNTHESIS FOR 3D FACIAL MODELING AND ANIMATION | 1 |
| LIGHTWEIGHT EYE CAPTURE USING A PARAMETRIC MODEL | 1 |
| LEARNING TEMPORAL COHERENCE VIA SELF-SUPERVISION FOR GAN-BASED VIDEO GENERATION | 1 |
| LEARNING IDENTITY-INVARIANT MOTION REPRESENTATIONS FOR CROSS-ID FACE REENACTMENT | 1 |
| LEARNING FACIAL EXPRESSIONS WITH 3D MESH CONVOLUTIONAL NEURAL NETWORK | 1 |
| LEARNING DETAILED FACE RECONSTRUCTION FROM A SINGLE IMAGE | 1 |
| LEARNING CHARACTER-AGNOSTIC MOTION FOR MOTION RETARGETING IN 2D | 1 |
| LEARNING AN APPEARANCE-BASED GAZE ESTIMATOR FROM ONE MILLION SYNTHESISED IMAGES | 1 |
| LEARNING A MODEL OF FACIAL SHAPE AND EXPRESSION FROM 4D SCANS | 1 |
| JOINT FACE ALIGNMENT AND SEGMENTATION VIA DEEP MULTI-TASK LEARNING | 1 |
| JANUS | 1 |
| BUILDING ANATOMICALLY REALISTIC JAW KINEMATICS MODEL FROM DATA | 1 |
| DATA-DRIVEN FACIAL EXPRESSION SYNTHESIS VIA LAPLACIAN DEFORMATION | 1 |
| ROBOTIC FACIALITY: THE PHILOSOPHY | 1 |
| JALI: AN ANIMATOR-CENTRIC VISEME MODEL FOR EXPRESSIVE LIP SYNCHRONIZATION | 1 |
| JALI-DRIVEN EXPRESSIVE FACIAL ANIMATION AND MULTILINGUAL SPEECH IN CYBERPUNK 2077 | 1 |
| INTUITIVE FACIAL ANIMATION EDITING BASED ON A GENERATIVE RNN FRAMEWORK | 1 |
| INTRODUCTION | 1 |
| INTERACTIVE EDITING OF PERFORMANCE-BASED FACIAL ANIMATION | 1 |
| INTELLIGENT SYSTEMS CONFERENCE, INTELLISYS 2019 | 1 |
| IMPROVING THE TARGET IMPEDANCE METHOD FOR PCB DECOUPLING OF CORE POWER | 1 |
| IMPROVEMENTS IN APPROXIMATION PERFORMANCE AND PARALLELIZATION OF NONNEGATIVE MATRIX FACTORIZATION WITH NEWTON ITERATION | 1 |
| IMAGE-BASED FACE ILLUMINATION TRANSFERRING USING LOGARITHMIC TOTAL VARIATION MODELS | 1 |
| ILM FACIAL PERFORMANCE CAPTURE | 1 |
| HYBRID REGRESSION AND ISOPHOTE CURVATURE FOR ACCURATE EYE CENTER LOCALIZATION | 1 |
| HUMAN UPPER-BODY INVERSE KINEMATICS FOR INCREASED EMBODIMENT IN CONSUMER-GRADE VIRTUAL REALITY | 1 |
| HUMAN FACE PROJECT | 1 |
| HUMAN-LIKE FACIAL EXPRESSION IMITATION FOR HUMANOID ROBOT BASED ON RECURRENT NEURAL NETWORK | 1 |
| HIGH FIDELITY FACIAL ANIMATION CAPTURE AND RETARGETING WITH CONTOURS | 1 |
| HIGH-QUALITY FACE CAPTURE USING ANATOMICAL MUSCLES | 1 |
| HIGH-FIDELITY FACIAL REFLECTANCE AND GEOMETRY INFERENCE FROM AN UNCONSTRAINED IMAGE | 1 |
| HIGH-FIDELITY FACIAL PERFORMANCE CAPTURE WITH NON-SEQUENTIAL TEMPORAL ALIGNMENT | 1 |
| HIGH-FIDELITY FACIAL AND SPEECH ANIMATION FOR VR HMDS | 1 |
| HEADON: REAL-TIME REENACTMENT OF HUMAN PORTRAIT VIDEOS | 1 |
| HEAD2HEAD++: DEEP FACIAL ATTRIBUTES RE-TARGETING | 1 |
| HEAD MOVEMENTS AND ANIMATING FACIAL EXPRESSIONS BASED ON MULTI-CURVE SPECTRUM | 1 |
| GUEST EDITORIAL: ADVANCED UNDERSTANDING AND MODELLING OF HUMAN MOTION IN MULTIDIMENSIONAL SPACES | 1 |
| GREEK SIGN LANGUAGE VOCABULARY RECOGNITION USING KINECT | 1 |
| GESTURE-TO-GESTURE TRANSLATION IN THE WILD VIA CATEGORY-INDEPENDENT CONDITIONAL MAPS | 1 |
| GEOMETRY GUIDED ADVERSARIAL FACIAL EXPRESSION SYNTHESIS | 1 |
| GENERATING FACIAL EXPRESSION DATA: COMPUTATIONAL AND EXPERIMENTAL EVIDENCE | 1 |
| GENERATING AND RANKING DIVERSE MULTI-CHARACTER INTERACTIONS | 1 |
| FUSION4D: REAL-TIME PERFORMANCE CAPTURE OF CHALLENGING SCENES | 1 |
| FULLY AUTOMATIC GENERATION OF ANATOMICAL FACE SIMULATION MODELS | 1 |
| FULLY AUTOMATED AND HIGHLY ACCURATE DENSE CORRESPONDENCE FOR FACIAL SURFACES | 1 |
| FSE 2016: PROCEEDINGS OF THE 2016 24TH ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON FOUNDATIONS OF SOFTWARE ENGINEERING | 1 |
| FEATURE-PRESERVING DETAILED 3D FACE RECONSTRUCTION FROM A SINGLE IMAGE | 1 |
| FATAL ERROR: ARTIFICIAL CREATIVE INTELLIGENCE (ACI) | 1 |
| FAST AND DEEP FACIAL DEFORMATIONS | 1 |
| FAKEBUSTER: A DEEPFAKES DETECTION TOOL FOR VIDEO CONFERENCING SCENARIOS | 1 |
| FACIAL VIDEO AGE PROGRESSION CONSIDERING EXPRESSION CHANGE | 1 |
| FACIAL TRACKING AND ANIMATION FOR DIGITAL SOCIAL SYSTEM | 1 |
| FACIAL PERFORMANCE TRANSFER VIA DEFORMABLE MODELS AND PARAMETRIC CORRESPONDENCE | 1 |
| FACIAL PERFORMANCE SENSING HEAD-MOUNTED DISPLAY | 1 |
| FACIAL PERFORMANCE ENHANCEMENT USING DYNAMIC SHAPE SPACE ANALYSIS | 1 |
| FACIAL MOTION RETARGETING | 1 |
| FACIAL MODELING AND ANIMATION | 1 |
| FACIAL EXPRESSION SYNTHESIS BY U-NET CONDITIONAL GENERATIVE ADVERSARIAL NETWORKS | 1 |
| FACIAL EXPRESSION MAPPING INSIDE HEAD MOUNTED DISPLAY BY EMBEDDED OPTICAL SENSORS | 1 |
| FACIAL ANIMATION (PANEL): PAST, PRESENT AND FUTURE | 1 |
| FACEVR: REAL-TIME GAZE-AWARE FACIAL REENACTMENT IN VIRTUAL REALITY | 1 |
| FACESYNCNET: A DEEP LEARNING-BASED APPROACH FOR NON-LINEAR SYNCHRONIZATION OF FACIAL PERFORMANCE VIDEOS | 1 |
| FACELAB: SCALABLE FACIAL PERFORMANCE CAPTURE FOR VISUAL EFFECTS | 1 |
| FACE2FACE MANIPULATION DETECTION BASED ON HISTOGRAM OF ORIENTED GRADIENTS | 1 |
| FACE/OFF: LIVE FACIAL PUPPETRY | 1 |
| FACE REENACTMENT BASED FACIAL EXPRESSION RECOGNITION | 1 |
| FACE POSER: INTERACTIVE MODELING OF 3D FACIAL EXPRESSIONS USING FACIAL PRIORS | 1 |
| FACE-OFF: A FACE RECONSTRUCTION TECHNIQUE FOR VIRTUAL REALITY (VR) SCENARIOS | 1 |
| EYEOPENER: EDITING EYES IN THE WILD | 1 |
| EYEGLASS-BASED HANDS-FREE VIDEOPHONE | 1 |
| EXPRESSION TRANSFER FOR FACIAL SKETCH ANIMATION | 1 |
| EXPRESSION ANALYSIS IN THE WILD: FROM INDIVIDUAL TO GROUPS | 1 |
| EXPLOITING COHERENCE IN TIME-VARYING VOXEL DATA | 1 |
| EXPLICIT FACIAL EXPRESSION TRANSFER VIA FINE-GRAINED REPRESENTATIONS | 1 |
| EXAMPLE-BASED SYNTHESIS OF STYLIZED FACIAL ANIMATIONS | 1 |
| EULERIAN SOLIDS FOR SOFT TISSUE AND MORE | 1 |
| EMOTION INFORMATION VISUALIZATION THROUGH LEARNING OF 3D MORPHABLE FACE MODEL | 1 |
| EMOTION CHALLENGE: BUILDING A NEW PHOTOREAL FACIAL PERFORMANCE PIPELINE FOR GAMES | 1 |
| EMOTION CAPTURE: EMOTIONALLY EXPRESSIVE CHARACTERS FOR GAMES | 1 |
| ELECTROMYOGRAPHIC EVALUATION OF TEMPORALIS MUSCLE FOLLOWING TEMPORALIS TENDON TRANSFER (FACIAL REANIMATION) SURGERY | 1 |
| EDITING SELF-IMAGE | 1 |
| DYNAMIC HAIR CAPTURE USING SPACETIME OPTIMIZATION | 1 |
| DYNAMIC FACIAL ASSET AND RIG GENERATION FROM A SINGLE SCAN | 1 |
| DYNAMIC 3D AVATAR CREATION FROM HAND-HELD VIDEO INPUT | 1 |
| DISPLACED DYNAMIC EXPRESSION REGRESSION FOR REAL-TIME FACIAL TRACKING AND ANIMATION | 1 |
| DIGITAL IRA AND BEYOND: CREATING REAL-TIME PHOTOREAL DIGITAL ACTORS | 1 |
| DIGITAL FABRICATION AND ITS MEANINGS FOR PHOTOGRAPHY AND FILM | 1 |
| DIGITAL ALBERT EINSTEIN, A CASE STUDY | 1 |
| DEVELOPMENT OF PLC BASED RELUCTANCE TYPE TARGET FLOW CONTROL SYSTEM | 1 |
| DEMOCRATISING MOCAP: REAL-TIME FULL-PERFORMANCE MOTION CAPTURE WITH AN IPHONE X, XSENS, AND MAYA | 1 |
| DEMO OF FACEVR: REAL-TIME FACIAL REENACTMENT AND EYE GAZE CONTROL IN VIRTUAL REALITY | 1 |
| DEEPRHYTHM: EXPOSING DEEPFAKES WITH ATTENTIONAL VISUAL HEARTBEAT RHYTHMS | 1 |
| DEEP VIDEO PORTRAITS | 1 |
| DEEP REFLECTANCE FIELDS HIGH-QUALITY FACIAL REFLECTANCE FIELD INFERENCE FROM COLOR GRADIENT ILLUMINATION | 1 |
| DEEP AUDIO-VISUAL LEARNING: A SURVEY | 1 |
| DEEP APPEARANCE MODELS FOR FACE RENDERING | 1 |
| DE-IDENTIFICATION WITHOUT LOSING FACES | 1 |
| DATA-DRIVEN SIMULATION METHODS IN COMPUTER GRAPHICS: CLOTH, TISSUE AND FACES | 1 |
| DATA-DRIVEN EXTRACTION AND COMPOSITION OF SECONDARY DYNAMICS IN FACIAL PERFORMANCE CAPTURE | 1 |
| CSCW ’17: PROCEEDINGS OF THE 2017 ACM CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK AND SOCIAL COMPUTING | 1 |
| CROSS-MAPPING | 1 |
| CREATING CONNECTION WITH AUTONOMOUS FACIAL ANIMATION | 1 |
| CORRECTIVE 3D RECONSTRUCTION OF LIPS FROM MONOCULAR VIDEO | 1 |
| CORRECTING MOTION DISTORTIONS IN TIME-OF-FLIGHT IMAGING | 1 |
| CONVOLUTIONAL NEURAL NETWORKS FOR FACE ILLUMINATION TRANSFER | 1 |
| CONTROLLING THE CRITICAL CURRENT ANISOTROPY OF YBCO SUPERCONDUCTING FILMS BY INCORPORATING HYBRID ARTIFICIAL PINNING CENTERS | 1 |
| CONTROLLABLE HIGH-FIDELITY FACIAL PERFORMANCE TRANSFER | 1 |
| CONTROLLABLE HAND DEFORMATION FROM SPARSE EXAMPLES WITH RICH DETAILS | 1 |
| CONTENT RETARGETING USING PARAMETER-PARALLEL FACIAL LAYERS | 1 |
| CONSTRAINING DENSE HAND SURFACE TRACKING WITH ELASTICITY | 1 |
| CODED MODULATION FOR E/SUP 2/PR4 AND ME/SUP 2/PR4 CHANNELS | 1 |
| CINEMA, SPACE, GENDER | 1 |
| CHALLENGES AND SOLUTIONS FOR CORE POWER DISTRIBUTION NETWORK DESIGNS | 1 |
| CAPTURING THE HUMAN BODY: FROM VR, CONSUMER, TO HEALTH APPLICATIONS | 1 |
| BUILDING AND ANIMATING USER-SPECIFIC VOLUMETRIC FACE RIGS | 1 |
| BUILDING AN EMPIRE: ASSET PRODUCTION IN RYSE | 1 |
| BRINGING THE KID BACK INTO YOUTUBE KIDS: DETECTING INAPPROPRIATE CONTENT ON VIDEO STREAMING PLATFORMS | 1 |
| BRINGING PORTRAITS TO LIFE | 1 |
| BODIES AND PROFESSIONAL PRACTICES | 1 |
| BECOMING AND CONTRADICTION IN THE MUSLIM COURTESAN | 1 |
| HYBRID HUMAN MODELING: MAKING VOLUMETRIC VIDEO ANIMATABLE | 1 |
| AVATAR DIGITIZATION FROM A SINGLE IMAGE FOR REAL-TIME RENDERING | 1 |
| AUTOMATING THE DESIGN FLOW FOR DISTRIBUTED EMBEDDED AUTOMOTIVE APPLICATIONS: KEEPING YOUR TIME PROMISES, AND OPTIMIZING COSTS, TOO | 1 |
| AUTOMATIC REAL TIME DERIVATION OF BREATHING RATE FROM THERMAL VIDEO SEQUENCES | 1 |
| AUTOMATIC FACIAL PARALYSIS EVALUATION AUGMENTED BY A CASCADED ENCODER NETWORK STRUCTURE | 1 |
| AUTOMATIC FACE REENACTMENT | 1 |
| AUTOMATIC ACQUISITION OF HIGH-FIDELITY FACIAL PERFORMANCES USING MONOCULAR VIDEOS | 1 |
| ART FACING SCIENCE: ARTISTIC HEURISTICS FOR FACE DETECTION: TRACKING GAZE WHEN LOOKING AT FACES | 1 |
| ANIMATING THROUGH WARPING: AN EFFICIENT METHOD FOR HIGH-QUALITY FACIAL EXPRESSION ANIMATION | 1 |
| ANIMATING BLENDSHAPE FACES BY CROSS-MAPPING MOTION CAPTURE DATA | 1 |
| ANCHOR CASCADE FOR EFFICIENT FACE DETECTION | 1 |
| ANALYZING GROWING PLANTS FROM 4D POINT CLOUD DATA | 1 |
| ANALYSIS OF CBRN SENSOR FUSION METHODS | 1 |
| AN EMPIRICAL RIG FOR JAW ANIMATION | 1 |
| AN AUTOSTEREOSCOPIC PROJECTOR ARRAY OPTIMIZED FOR 3D FACIAL DISPLAY | 1 |
| AN ART-DIRECTED WORKFLOW FOR TRANSFERRING FACIAL ACTION CODING BETWEEN MODELS WITH DIFFERENT MESH TOPOLOGIES | 1 |
| ALIENATING THE FAMILIAR WITH CGI: A RECIPE FOR MAKING A FULL CGI ART HOUSE ANIMATED FEATURE | 1 |
| ACM TRANSACTIONS ON GRAPHICS | 1 |
| ACM SIGGRAPH 2010 PAPERS, SIGGRAPH 2010 | 1 |
| ACCURATE MARKERLESS JAW TRACKING FOR FACIAL PERFORMANCE CAPTURE | 1 |
| ACCURATE AND ROBUST 3D FACIAL CAPTURE USING A SINGLE RGBD CAMERA | 1 |
| A VIRTUAL REALITY AGENT-BASED PLATFORM FOR IMPROVISATION BETWEEN REAL AND VIRTUAL ACTORS USING GESTURES | 1 |
| A TECHNIQUE THAT FACILITATES AN ACCURATE COMPARISON BETWEEN STEREOTACTIC AND IMRT PLANS | 1 |
| A SURVEY ON HUMAN MOTION ANALYSIS FROM DEPTH DATA | 1 |
| A SURVEY ON GAIT RECOGNITION VIA WEARABLE SENSORS | 1 |
| A SPACE-TIME DEPTH SUPER-RESOLUTION SCHEME FOR 3D FACE SCANNING | 1 |
| A REVIEW ON FACE REENACTMENT TECHNIQUES | 1 |
| A PRACTICAL AND CONFIGURABLE LIP SYNC METHOD FOR GAMES | 1 |
| A NEW TARGET RESPONSE WITH PARITY CODING FOR HIGH DENSITY MAGNETIC RECORDING CHANNELS | 1 |
| A NEUROBEHAVIOURAL FRAMEWORK FOR AUTONOMOUS ANIMATION OF VIRTUAL HUMAN FACES | 1 |
| A MULTIMODAL SYSTEM FOR NONVERBAL HUMAN FEATURE RECOGNITION IN EMOTIONAL FRAMEWORK | 1 |
| A MECHANISM TO PROMOTE PRODUCT RECOVERY AND ENVIRONMENTAL PERFORMANCE | 1 |
| A HIGH-RESOLUTION GEOMETRY CAPTURE SYSTEM FOR FACIAL PERFORMANCE | 1 |
| A GENERATIVE APPROACH FOR DYNAMICALLY VARYING PHOTOREALISTIC FACIAL EXPRESSIONS IN HUMAN-AGENT INTERACTIONS | 1 |
| A FUSION METHOD FOR ROBUST FACE TRACKING | 1 |
| A FRAMEWORK FOR LOCALLY RETARGETING AND RENDERING FACIAL PERFORMANCE | 1 |
| A FRAMEWORK FOR GENERIC FACIAL EXPRESSION TRANSFER | 1 |
| A FACE REPLACEMENT NEURAL NETWORK FOR IMAGE AND VIDEO | 1 |
| A DATA-DRIVEN METHOD FOR VARIATION IN ANIMATED SMILES: EXTENDED ABSTRACT | 1 |
| A DATA-DRIVEN APPEARANCE MODEL FOR HUMAN FATIGUE | 1 |
| A BRIEF SURVEY OF VISUAL SALIENCY DETECTION | 1 |
| 3D SHAPE REGRESSION FOR REAL-TIME FACIAL ANIMATION | 1 |
| 3D MORPHABLE FACE MODELS - PAST, PRESENT, AND FUTURE | 1 |
| 3D FACIAL GEOMETRY ANALYSIS AND ESTIMATION USING EMBEDDED OPTICAL SENSORS ON SMART EYEWEAR | 1 |
| 3D FACE TEMPLATE REGISTRATION USING NORMAL MAPS | 1 |
| HEADON: REAL-TIME REENACTMENT OF HUMAN PORTRAIT VIDEOS | 1 |
| FROM THE EYE TO THE HEART: EYE CONTACT TRIGGERS EMOTION SIMULATION | 1 |
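For reference, a title count like the one in the right-hand column of this table can be reproduced with dplyr. This is a minimal sketch only: it assumes the combined imported records sit in a data frame called naive_results with a title column (both names are my assumption, not taken from the output above).

library(dplyr)
# Sketch only: naive_results and its title column are assumed names.
title_counts <- naive_results %>%
  mutate(title = toupper(trimws(title))) %>%    # normalise case and whitespace
  count(title, sort = TRUE, name = "n_records") # tally records sharing each title
head(title_counts)

A count above 1 suggests the same paper was returned by more than one source, which is exactly the duplication we want to identify before moving on to the keyword analysis.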