Python Challenge number 2 in R

I am sure there is a more efficient way to tackle this challenge in R, but after many months of thinking and many hours of trying, I would like to share this with the RPubs world. It is a function that takes a text string and lets the user decode it with a single offset to the alphabet.

text<-"g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj."

translate_text<-function(letters_offset, text) {
  if(letters_offset > 26) {
    stop("Maximum offset is 26")
  }
  require(tokenizers)
  let<-letters
  # Build the shifted alphabet: an offset of 3 maps "a" to "c", "b" to "d", etc.
  let_offset<-c(letters[letters_offset:length(letters)], letters[1:(letters_offset-1)])
  # Split the text into single characters, keeping punctuation and spaces
  text_lets<-unlist(tokenize_characters(text, strip_non_alphanum = FALSE))
  # Position of each character in the alphabet (0 for punctuation and spaces)
  test<-vector()
  for(i in 1:length(text_lets)) {
    test[i]<-ifelse(text_lets[i] %in% let, which(let == text_lets[i]), 0)
  }
  # Swap each letter for its shifted counterpart; leave everything else alone
  trans<-vector()
  for(i in 1:length(test)) {
    trans[i]<-ifelse(text_lets[i] %in% let, let_offset[test[i]], text_lets[i])
  }
  return(paste(trans, collapse = ""))
}
translate_text(3,text)
## Loading required package: tokenizers
## Warning: package 'tokenizers' was built under R version 3.4.4
## [1] "i hope you didnt translate it by hand. thats what computers are for. doing it in by hand is inefficient and that's why this text is so long. using string.maketrans() is recommended. now apply on the url."
## The answer to the problem is actually...
translate_text(3,"map")
## [1] "ocr"

If you have any other comments or suggestions to make this more efficient, please let me know. Thanks.