Overview

This document analyzes the responses from the Discuss the Undiscussables meeting that took place on 5/20/20. Each respondent was asked to submit at least five answers to each of the questions below. The original prompt read:

"Please take some time to thoughtfully respond to each of these questions below. Please respond with at least 5 things for each of the questions below and leave your name next to your response. Thanks for being you, can't wait to read these."

The original sheet is available here. Because experiences differed between newer and older Shapers, the analysis was disaggregated by cohort.

Response Rate

In total, 23 Shapers responded, broken out into newer and older Shaper cohorts.

Analysis - State of Hub 2020

This report summarizes the key themes that emerged in the Discuss the Undiscussables document and uses them to paint a clearer picture of the state of the Sacramento Hub. It also offers feedback and opportunities for both the hub as a whole and the curatorship to act on, so the hub can improve in a meaningful and intentional way.

Important Note: We did our best to organize the feedback into broader categories and highlight the themes that emerged, but everything should be taken with a grain of salt and considered within its broader context. We highly recommend reading the original comments.

What's working?

  • Projects (18+)
  • Recruitment (8+)
  • Engagement
  • Hub Culture

What’s not working?

  • Feeling Disconnected (9+)
  • Feels like work (9+)
  • Lack of Accountability (9+)
  • Closed off teams / groups (9+)

Takeaways

1) What’s working?

Overall

Trigrams

Word Frequency
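
The word-frequency figures for this question were generated with the same tidytext pipeline shown in the code later in this report. Here is a minimal sketch, assuming the responses for question 1 are stored under the label Working_1 (a hypothetical name mirroring the Notworking_2 label the code uses for question 2):

# Minimal word-frequency sketch. Assumes dta has `name`/`value` columns
# and that Q1 responses are labeled "Working_1" (a hypothetical name).
library(dplyr)
library(tidytext)

word_counts <- dta %>%
  filter(name == "Working_1") %>%
  unnest_tokens(word, value) %>%          # one row per word
  anti_join(stop_words, by = "word") %>%  # drop common stop words
  count(word, sort = TRUE)

head(word_counts, 20)  # the 20 most frequent words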

Sentiment
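
Sentiment can be scored along the same lines by joining each word against a sentiment lexicon. A minimal sketch using tidytext's built-in Bing lexicon (the lexicon choice and the Working_1 label are assumptions, not confirmed by the original analysis):

# Minimal sentiment sketch: score each word against the Bing lexicon.
# The lexicon choice and the "Working_1" label are assumptions.
library(dplyr)
library(tidytext)

sentiment_counts <- dta %>%
  filter(name == "Working_1") %>%
  unnest_tokens(word, value) %>%
  inner_join(get_sentiments("bing"), by = "word") %>%  # keep scored words
  count(sentiment, word, sort = TRUE)

sentiment_counts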

By Older/Newer Shapers

Word Frequency
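
To disaggregate the counts by cohort, the same pipeline can group on a cohort indicator. A minimal sketch, assuming dta carries a cohort column distinguishing newer from older Shapers (a hypothetical encoding; the Working_1 label is assumed as above):

# Word frequency by cohort. Assumes dta has a `cohort` column with
# values like "newer" / "older" -- a hypothetical encoding.
library(dplyr)
library(tidytext)

cohort_counts <- dta %>%
  filter(name == "Working_1") %>%
  unnest_tokens(word, value) %>%
  anti_join(stop_words, by = "word") %>%
  count(cohort, word, sort = TRUE)

cohort_counts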

Sentiment

2) What’s not working?

Overall

Trigrams

## Create bigram dataset (Q2: "What's not working?")
library(dplyr)
library(tidyr)
library(tidytext)

# Tokenize the responses into two-word phrases. (The original call used
# n = 3, n_min = 2, but the rest of the pipeline works on word pairs,
# so we tokenize into bigrams directly.)
bigrams <- dta %>%
  filter(name == "Notworking_2") %>%
  unnest_tokens(bigram, value, token = "ngrams", n = 2)

# Split each bigram so stop words can be filtered from either position
bigrams_separated <- bigrams %>%
  separate(bigram, c("word1", "word2"), sep = " ")
bigrams_filtered <- bigrams_separated %>%
  filter(!word1 %in% stop_words$word) %>%
  filter(!word2 %in% stop_words$word)

# Count how often each remaining word pair occurs
bigram_counts <- bigrams_filtered %>%
  count(word1, word2, sort = TRUE)

# Re-join the filtered words for phrase frequency tables
bigrams_united <- bigrams_filtered %>%
  unite(bigram, word1, word2, sep = " ")

#######
library(igraph)
# Keep only relatively common combinations and build a directed word graph
bigram_graph <- bigram_counts %>%
  filter(n > 2) %>%
  graph_from_data_frame()

library(ggraph)
set.seed(2017)  # fix the seed so the force-directed layout is reproducible

ggraph(bigram_graph, layout = "fr") +
  geom_edge_link() +
  geom_node_point() +
  geom_node_text(aes(label = name), vjust = 1, hjust = 1)
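
In the resulting network, each node is a word and each directed edge links the first word of a common pair to the second, so clusters of connected words surface recurring phrases in the responses. The filter(n > 2) threshold keeps only pairs mentioned more than twice, and the fixed seed makes the force-directed ("fr") layout reproducible between runs.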

Word Frequency

Sentiment

By Older/Newer Shapers

Word Frequency

Sentiment

3) What do we do about it?

Overall

Word Frequency

Sentiment

By Older/Newer Shapers

Word Frequency

Sentiment

4) Anything else?

Overall

Word Frequency

Sentiment

By Older/Newer Shapers

Word Frequency

Sentiment

Creston Analysis

Below is Creston's analysis of the survey responses.

Methodology

This text analysis guide served as a helpful reference point: https://jwinternheimer.github.io/blog/churn-survey-text-analysis/
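
For reference, the analysis code in this report assumes the responses live in a long data frame dta with a name column (the question label, e.g. Notworking_2) and a value column (the response text). Here is a minimal sketch of producing that shape, assuming the sheet was exported to a CSV with one column per question (the file name and the respondent column are hypothetical):

# Hypothetical reshaping step: one CSV column per question -> long format.
# The file name and the `respondent` column are assumptions.
library(dplyr)
library(readr)
library(tidyr)

dta <- read_csv("undiscussables_responses.csv") %>%
  pivot_longer(-respondent,          # keep who answered, stack the rest
               names_to = "name",    # question label, e.g. "Notworking_2"
               values_to = "value")  # the free-text answer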