The purpose of this project is to build a natural language model that suggests an appropriate next word for a user-specified word sequence. Three text sources were used to train the model: Twitter posts, news articles, and blog entries. Appropriate cleaning and subsetting techniques were applied to produce the final training data. Word combinations (n-grams) were then built from the cleaned data sets, and a predictive model with Kneser-Ney smoothing was applied to suggest the next word. The final model was optimized to run as a Shiny application.
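To make the prediction step concrete, the sketch below shows a simplified interpolated Kneser-Ney model over bigrams in base R. The toy corpus, the discount value d = 0.75, and all function and variable names here are illustrative assumptions, not taken from the project itself; the actual model is built from higher-order n-grams over the full cleaned corpus.

```r
# Minimal bigram Kneser-Ney sketch (base R); corpus and names are illustrative.
corpus <- c("the cat sat on the mat", "the dog sat on the log")

# Basic cleaning: lower-case, strip non-letters, split into tokens
sent_tokens <- strsplit(tolower(gsub("[^a-z ]", "", corpus)), "\\s+")

# Build bigrams within each sentence and count distinct (w1, w2) pairs
bigrams <- do.call(rbind, lapply(sent_tokens, function(t)
  data.frame(w1 = head(t, -1), w2 = tail(t, -1))))
bigram_counts <- aggregate(list(n = rep(1L, nrow(bigrams))),
                           by = bigrams, FUN = sum)

# c(w1): how often w1 occurs as a bigram context
context_counts <- tapply(bigram_counts$n, bigram_counts$w1, sum)

# Continuation probability: share of distinct bigram types ending in w2
p_cont <- table(bigram_counts$w2) / nrow(bigram_counts)

d <- 0.75  # absolute discount (a common default; tuned on held-out data in practice)

# Interpolated Kneser-Ney:
#   P(w2 | w1) = max(c(w1, w2) - d, 0) / c(w1) + lambda(w1) * Pcont(w2)
kn_prob <- function(w1, w2) {
  c_w1 <- context_counts[[w1]]
  hit  <- bigram_counts$n[bigram_counts$w1 == w1 & bigram_counts$w2 == w2]
  c_bi <- if (length(hit)) hit else 0
  lambda <- d * sum(bigram_counts$w1 == w1) / c_w1  # mass reserved for unseen pairs
  max(c_bi - d, 0) / c_w1 + lambda * p_cont[[w2]]
}

# Suggest the most likely next word after "sat"
cands  <- unique(bigram_counts$w2)
scores <- sapply(cands, function(w) kn_prob("sat", w))
cands[which.max(scores)]   # "on"
```

For the frequent pair "sat on", the discounted bigram term dominates, so "on" is suggested; for unseen pairs the continuation term takes over, which is what distinguishes Kneser-Ney from simpler backoff schemes.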