23/08/2020

Introduction

  • The anonymous survey was designed to support a self-assessment of the proposal by the involved team, as well as an evaluation of the support and tools used, including lessons learned.

  • The proposal evaluation used the same questions and scoring scale as the call for proposals.

  • The resulting dataset contains 20 variables (including metadata) from 4 participants.

  • Median answering time: 9.1 minutes (min.: 3.8 minutes; max.: 19.5 minutes).

Aims

  • Collect opinions from participants on the proposal, following the official evaluation grid.

  • Collect feedback from participants on tools, consultancy support, lessons learned, and suggestions for improvement.

Results of the proposal evaluation

Scoring system

Qualitative assessment option     Numerical interval
Very good / Very high             4.21 to 5.00
Good / High                       3.41 to 4.20
Regular / Average                 2.61 to 3.40
Poor / Low                        1.81 to 2.60
Very poor / Very low              1.00 to 1.80


  • Minimum score for acceptance: 30 out of 50 points.

  • Mean score given by the participants: 47 points (57% above the 30-point minimum). A small helper mapping criterion scores to the qualitative bands follows below.
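As an illustration of the scale, the intervals above can be expressed as a small lookup. The band boundaries come from the scoring table; the assumption that the 50-point total aggregates ten criteria scored on this 1-to-5 scale is ours and is not stated in the survey.

    # Illustrative helper mapping a criterion score (1.00-5.00) to its
    # qualitative band, using the intervals from the scoring table above.
    def qualitative_band(score: float) -> str:
        bands = [
            (4.21, "Very good / Very high"),
            (3.41, "Good / High"),
            (2.61, "Regular / Average"),
            (1.81, "Poor / Low"),
            (1.00, "Very poor / Very low"),
        ]
        for lower_bound, label in bands:
            if score >= lower_bound:
                return label
        raise ValueError("score must be between 1.00 and 5.00")

    # 47 of 50 points over an assumed ten criteria averages 4.7 per criterion
    print(qualitative_band(47 / 10))  # -> Very good / Very high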

Predicted score (Bayesian bootstrap)

Results of a reproducible Bayesian bootstrap re-sampling

  • We used a re-sampling statistical algorithm to bootstrap a 95% confidence interval for the mean participant score. The algorithm re-samples the dataset with replacement 4,000 times, recording each draw, to generate representative estimates from a small sample (a minimal code sketch follows this list).

  • The histogram on the next slide shows the Bayesian highest density interval (HDI), which indicates a predicted score of 47 points for the proposal based on the survey results.

  • This prediction lies within a confidence interval of 44.7 to 49.3 points at the 95% confidence level (minimum score threshold for approval: 30 points).
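A minimal, reproducible sketch of this kind of Bayesian bootstrap is shown below. It is an illustration, not the exact analysis pipeline: the scores array is a placeholder rather than the real participant totals, it uses the common Dirichlet-weight formulation of the Bayesian bootstrap, and a percentile interval stands in for the HDI.

    # Minimal sketch of a reproducible Bayesian bootstrap for the mean score.
    # The four scores below are placeholders, not the actual survey data.
    import numpy as np

    rng = np.random.default_rng(2020)             # fixed seed for reproducibility
    scores = np.array([44.0, 46.0, 48.0, 50.0])   # hypothetical participant totals

    n_draws = 4000
    # Bayesian bootstrap: weight each observation with Dirichlet(1, ..., 1) draws
    weights = rng.dirichlet(np.ones(len(scores)), size=n_draws)
    boot_means = weights @ scores                 # one weighted mean per draw

    # 95% interval from the draws (percentile approximation of the HDI)
    lo, hi = np.percentile(boot_means, [2.5, 97.5])
    print(f"mean: {boot_means.mean():.1f}, 95% interval: [{lo:.1f}, {hi:.1f}]")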

Predicted score (Bayesian bootstrap)

Results / team’s feedback

Results from qualitative data

  • Visualisation of text-based questions uses simple word clouds (see the next slides), automatically filtering out “stop words” (e.g., articles and prepositions).

  • The size of each word in a word cloud represents the number of times (frequency) it occurs in the dataset.

  • Each frequency is also associated with a colour, so words with the same frequency in the dataset share the same colour.

  • Computer-based random sampling was used to select example answers from the dataset in a reproducible way (a code sketch follows this list).
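The sketch below illustrates this approach under stated assumptions: it uses the open-source Python wordcloud package (whose default colouring is random rather than frequency-keyed, so the colour-by-frequency rule would need a custom colour function), a placeholder answers list rather than the real responses, and a fixed seed for the reproducible sampling.

    # Sketch of the word-cloud generation and reproducible answer sampling.
    # The "answers" list is illustrative, not the actual survey responses.
    import random
    from wordcloud import WordCloud, STOPWORDS  # pip install wordcloud

    answers = [
        "Last-minute changes prolonged the process considerably",
        "High team spirit and technical expertise, very good result",
    ]

    # Reproducible random selection of example answers from the dataset
    random.seed(2020)
    examples = random.sample(answers, k=1)

    # Word cloud with automatic stop-word filtering; word size reflects frequency
    cloud = WordCloud(stopwords=STOPWORDS, background_color="white")
    cloud.generate(" ".join(answers))
    cloud.to_file("word_cloud.png")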

Word cloud - Liked the least

Liked the least

  • Last-minute changes (e.g., the inclusion of the climate change aspect) prolonged the process considerably.
  • The first few drafts provided by the Movimentar team were not helpful, as they were largely cut and pasted from the tables filled in by the partner/client team. We were expecting a lot more technical expertise from the consultants. We also had some problems when many PDF versions were generated at the end, while there were still things to be updated and edited.
  • At the end of the process we got a bit lost among the different concept-note versions, which could have been avoided if all of us had worked with the collaborative versions on Teamworks. The time schedule agreed at the beginning was not kept because we received some “insider information” that was important to consider in our concept note to increase our chances of success. In the end I felt we were struggling with the different versions, amendments, and back-and-forth communication; I was not sure we would make it in time, and the different versions also made mistakes more likely. However, it went well and we submitted in time, though the process would have been more efficient if we had planned more time for ad-hoc revisions.
  • The pressure at the end of the timeline.

Word cloud - Liked the most

Liked the most

  • Interesting topic.
  • The collaborative application/software was quite a learning experience for us. It helps to increase communication effectiveness significantly and efficiently.
  • Team spirit, a very high level of motivation, high technical expertise from the movimentar team, a very good result (the high quality of the concept note we submitted), and working with Teamworks, Mural, and Google Docs, which allowed participatory proposal development even during the COVID-19 pandemic and travel restrictions.
  • The immediate reflection of changes at the beginning.

Word cloud - Lessons learned

Lessons learned

  • If possible, provide more clarity about the main issues of the proposal before drafting the concept note.
  • Some information becomes available only at the final stage, so we need to be flexible enough to respond to change so that the unique selling points of the collaboration and the proposed project are ensured.
  • Working only with collaborative document versions; calculating (more) time for ad-hoc revisions (also at the end); continuously revising the time schedule.
  • More time investment is necessary to get a full overview of all the online tools available, including time to practise at the beginning.

Word cloud - Additional comments

Additional comments

  • We highly appreciate the leadership and strong commitment shown by the client in Berlin and at the Regional Office. Their critical but strategic points kept us in good shape until the end. The movimentar team was quite efficient. Thank you all, and we look forward to working together again soon for the next phase :-).
  • A BIG THANK YOU to MOVIMENTAR and OUR TEAM for this great team effort!
  • Thanks a lot for your support and patience.

Recommendations (part 1)

  • Inputs to the results chain, outputs, and beneficiaries were shared by the local team right at the beginning of the assignment. The team made very good use of the online collaborative templates, including the results chain, outputs by activity, and beneficiary tables. Most importantly, the team followed the recommended sequence of steps (first the results chain and outputs, and only then budgeting). This ensured that the budget discussion could be based on output and beneficiary numbers (concrete results orientation). This was very important for the design process and is a best practice.

  • Local partners should communicate deadline changes during last-minute reviews to allow better coordination of the proposal-submission task. It is important to inform movimentar of the target completion date so that we can generate the final PDF version of the funding application.

Recommendations (part 2)

  • Join online workshop sessions individually (each person with one device), using headphones with a microphone, for an improved sound experience for the entire team. This is important even when participants are grouped together in one room, and it can increase the team’s overall productivity with collaborative tools such as Mural. We used this tool for graphical thinking and brainstorming activities as well as risk and stakeholder analyses; it also supported presenting content and voting on titles.

  • Throughout the collaboration, and especially during the final document reviews, use only the provided collaborative documents and tables (activating suggestion mode, equivalent to track changes, or adding comments) to ease quality assurance and reduce the risk of mistakes. This also avoids team members working on different versions and the unnecessary work of comparing and integrating changes from separate text documents.

Recommendations (part 3)

  • Contribute to the proposal design using a single text base to avoid the duplication caused by adding different text versions of individual sections.

  • Text suggestions made by client and partner staff are generally preferred by us when we need to choose between those and drafts from our side of the team. We can provide technical advice based on our past work experience; however, we see ourselves as facilitators of participatory processes for designing the funding application by, and following inputs and ideas from, the local teams. We try to include local knowledge as much as possible for improved relevance, ownership, and better adaptation to local needs and context. That is why we prefer to remove our drafts in favour of the text suggested by local partners when they share their versions. To improve this process, we recommend changing/adjusting the text directly instead of preparing multiple versions of sections that already have a draft in the collaborative version. It is less risky and more efficient to work together online as a team on the same collaborative document.

Recommendations (part 4)

  • Involve all staff members who will have a role in the process as early as possible, well before the final review. This avoids last-minute changes, as team members can provide their inputs earlier.

  • As part of finalising the funding application, we recommend that the local partner generate the PDF versions from the collaborative version. It helps if this happens independently of the consultancy team, since the partner needs to confirm the version before submission; this also avoids double work on all sides.

  • Scale up and develop capacities in the use of online collaborative document editing and of management information systems such as Teamwork Projects. These can help increase productivity and reduce face-to-face meetings and risks, particularly in the context of the COVID-19 pandemic. We recommend enabling hybrid collaborative environments (online meetings in which some members join from a face-to-face meeting). It is good practice for members gathered face-to-face to join the online meeting individually with their own devices, wearing masks to reduce contagion risk and improve health safety.