Creating the fake data.

options(scipen = 999) # tells R to print decimals instead of exponential (scientific) notation

# the following lines are used only to create my fake dataset
dating <- c(1,2,3,4,5,3,4,5,6,7,4,5,6,7,8)
group <- c(rep("haircut",5), rep("child",5),rep("puppy",5))
df <- data.frame(group,dating)
library(ggplot2)
library(psych)
# I will start by getting descriptive statistics
describeBy(df$dating, group=df$group)

 Descriptive statistics by group 
group: child
   vars n mean   sd median trimmed  mad min max range skew kurtosis   se
X1    1 5    5 1.58      5       5 1.48   3   7     4    0    -1.91 0.71
----------------------------------------------- 
group: haircut
   vars n mean   sd median trimmed  mad min max range skew kurtosis   se
X1    1 5    3 1.58      3       3 1.48   1   5     4    0    -1.91 0.71
----------------------------------------------- 
group: puppy
   vars n mean   sd median trimmed  mad min max range skew kurtosis   se
X1    1 5    6 1.58      6       6 1.48   4   8     4    0    -1.91 0.71

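If you do not want to load the psych package, base R can produce the group means and standard deviations directly (this chunk is my own addition, just a sketch of one option):

# group means and standard deviations without the psych package
tapply(df$dating, df$group, mean)
tapply(df$dating, df$group, sd)
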
# Running an ANOVA
anova.results <- aov(dating ~ group, data=df)
anova(anova.results)
Analysis of Variance Table

Response: dating
          Df Sum Sq Mean Sq F value  Pr(>F)  
group      2 23.333  11.667  4.6667 0.03168 *
Residuals 12 30.000   2.500                  
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Our results are significant (p = .03). Therefore, we should follow up with post hoc testing. I chose to test all pairwise comparisons. I will apply no correction in the test itself, but I will divide alpha by the number of tests to protect the familywise error rate.
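With three groups there are three pairwise comparisons, so the cutoff I will compare each uncorrected p-value against is .05 / 3. A quick sketch of that arithmetic (the variable name here is just illustrative):

# number of pairwise comparisons among k = 3 groups: choose(3, 2) = 3
n.tests <- choose(3, 2)
.05 / n.tests # about .0167; compare each uncorrected p-value to this cutoff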

# Conducting post hoc tests (I will test all pairwise comparisons)
pairwise.t.test(x=df$dating, g=df$group, p.adjust.method="none")

    Pairwise comparisons using t tests with pooled SD 

data:  df$dating and df$group 

        child haircut
haircut 0.069 -      
puppy   0.337 0.011  

P value adjustment method: none 
# other options for p.adjust.method include "bonferroni" and "holm"
# note that with "bonferroni" R multiplies the p-values by the number of tests instead of adjusting alpha, so you should interpret the corrected tests at your usual alpha (.05) 
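To see what that multiplication looks like, here is a small illustration of my own using p.adjust and the (rounded) uncorrected p-values printed above:

# Bonferroni adjustment multiplies each p-value by the number of tests (and caps at 1)
p.adjust(c(.069, .337, .011), method = "bonferroni")
# roughly .207, 1.000, and .033 (the middle value is capped at 1)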

The pairwise.t.test function does not have an option for a Tukey correction. We can use the TukeyHSD function instead.

# remember that we saved the results of our ANOVA as anova.results
TukeyHSD(anova.results)
  Tukey multiple comparisons of means
    95% family-wise confidence level

Fit: aov(formula = dating ~ group, data = df)

$group
              diff        lwr       upr     p adj
haircut-child   -2 -4.6678637 0.6678637 0.1545800
puppy-child      1 -1.6678637 3.6678637 0.5907706
puppy-haircut    3  0.3321363 5.6678637 0.0277219

Notice that the two sets of results agree (this will not always be the case, so in practice you should choose just one correction ahead of time). The post hoc tests show that the only significant difference is between the puppy and haircut groups; neither of those groups differed from the child group. Now I will graph the results.

ggplot(data=df, aes(x=group, y=dating, fill=group)) +
  geom_bar(stat="summary", fun="mean") +
  stat_summary(geom = "errorbar", fun.data = mean_se, width=.4) +
  scale_fill_manual(values=c("darkgoldenrod1", "coral", "azure3")) 
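If you also want a picture of the post hoc comparisons themselves, base R can plot the Tukey confidence intervals from the object we already saved (a minimal sketch; las = 1 simply turns the axis labels horizontal):

# plot the 95% family-wise confidence intervals for each pairwise difference
plot(TukeyHSD(anova.results), las = 1)
# intervals that do not cross zero (puppy-haircut here) are the significant differences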
