# Statistics Discussion Peer Responses


Discussion Question: Please calculate the mean, median, mode and standard deviation of student satisfaction scores for Monday versus Saturday. Student satisfaction scores are represented across 20 survey items. In your post, detail the statistics you found, key themes (which items were rated favorably and which items were rated unfavorably?), as well as any suggestions you would make to the client based on these statistics.

Please be sure that you are posting in a client-ready format, using professional and appropriate language for talking to an organization. When responding to your peers react to their statistics and subsequent suggestion points for Teton Grand.

**Peer 1:**

Through statistical analysis of the Monday versus Saturday Environmental Immersion classes at Teton Grand, we are better able to understand and interpret overall student satisfaction scores. Specifically, by looking at the **mean**, **median**, and **mode**, we can learn more about how the training classes are performing in terms of student satisfaction. Mean, median, and mode are each different ways to describe distributions; they provide us with numerical information about the distribution of a given data set (Teton Grand, in this case). Mean, median, and mode are all measures of centre or central tendency that help us identify key themes.

**Mean** refers to the arithmetic average: the sum of all data values divided by the total number of data values. The mean is also written as x̄ (read "x-bar"). **Mode** refers to the data value that occurs most frequently in a data set. And **median** refers to the data value positioned in the middle of a given data set. To determine the median, the data must first be ordered (usually smallest to largest). It can be helpful to use the formula (n+1)/2, where n is the total number of data values; the result gives us the position of the median. When n is an even number, we average the two values on either side of the median position.
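As a quick illustration of these definitions, the following Python sketch computes all four statistics the assignment asks for. The sample ratings are made up for demonstration; they are not the actual Teton Grand survey data.

```python
import statistics

# Hypothetical 5-point Likert ratings (1 = strongly disagree, 5 = strongly agree);
# not the actual Teton Grand survey responses.
scores = [2, 3, 4, 4, 4, 5, 3, 4, 2, 5]

mean = statistics.mean(scores)      # arithmetic average (x-bar)
median = statistics.median(scores)  # middle value of the sorted data
mode = statistics.mode(scores)      # most frequently occurring value
stdev = statistics.stdev(scores)    # sample standard deviation

print(mean, median, mode, round(stdev, 2))  # → 3.6 4 4 1.07
```

Note that `statistics.median` handles the even-n case automatically by averaging the two middle values, exactly as described above.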

On Monday, the mean student satisfaction score was 3.39, the median was 4, and the mode was 4. On Saturday, the mean student satisfaction score was 3.63, the median was again 4, and the mode was also 4. These numbers tell us that:

- Average student satisfaction (mean) differed by 0.24 between Saturday and Monday;
- Most students agreed that they were satisfied on both Monday and Saturday. On both days, the centre of the dataset was 4, meaning that half the rankings fell at or below 4 and half fell at or above 4.
- And, on both Monday and Saturday, the most frequent ranking was 4 (mode).
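These results show how two days can share the same median and mode while their means differ. A short Python sketch makes the point concrete (the ratings below are invented for illustration, not the real survey responses):

```python
import statistics

# Hypothetical ratings for two class days; not the real Teton Grand data.
monday = [2, 3, 4, 4, 5]
saturday = [3, 4, 4, 4, 5]

for day, scores in [("Monday", monday), ("Saturday", saturday)]:
    print(day,
          statistics.mean(scores),    # differs between days
          statistics.median(scores),  # 4 on both days
          statistics.mode(scores))    # 4 on both days
```

Because the median and mode only look at position and frequency, a handful of lower ratings shifts the mean without moving the other two measures.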

Looking at the statistics, we can see the following key themes (on a 5 point scale, with 1 being strongly disagree and 5 being strongly agree):

- Most students disagreed that they received regular feedback from their teachers (mean=2.27, median=2, mode=2);
- Most students disagreed that they had a good understanding of the mission statement (mean=2.52, median=2);
- Most students agreed that their instructors were committed (mean=3.88, median=4, mode=4);
- Students rated their training rooms more favorably than any other survey item (mean=3.9, median=4, mode=4).

Based on these statistics, I would suggest the following:

- Ensure teachers commit to the provision of regular feedback to their students. This feedback should be meaningful, student-specific, as well as provide constructive options for improvement;
- Integrate the school's mission more fully into the curriculum and courses. Provide tangible ways for students to relate course work and personal experience to the mission statement;
- Continue to encourage teachers in their commitment to students. Provide tangible rewards or acknowledgement for teachers who continue to demonstrate this commitment, and help less-involved teachers (or hands-off instructors) become more committed to their curriculum and students, and;
- Find out what makes the training rooms so successful, and attempt to model future rooms according to this model. Renovate older, less functional spaces to become more relevant and efficient.

**Peer 2:**

I have an odd combination of education and experience, so the things that stuck out to me about this data are directly related to that but seem to be unrelated to each other.

Understanding the mission and philosophy of the organization had a below-average mean and median on both days; however, participants seemed to believe they were collectively upholding the mission, as evidenced by above-average scores on the item "The people I was in class with cooperated to uphold Teton Grand's mission." Coming from a non-profit background, I know how imperative a mission statement is, and I can understand how easily it can be forgotten by the public or by participants even as they take part in your programming. The mission can be reinforced through marketing and branding (specifically in the way courses are marketed), through introductions at the start of a class, through materials presented within class, through follow-up surveys, and so on. This seems like a fairly easy fix, but it will require some strategic conversations about how the mission and philosophy are communicated.

The other things that stood out were directly related to the courses themselves, and as an educator and instructional designer, they were a bit concerning.

Books for the class, for instance, rated below average on both days, which suggests it may be time to reevaluate the curriculum for a shift away from its original intentions, or at the very least to investigate more relevant materials for the classes to work from. This needs to be a direct conversation between the organization and the instructors so they can agree on how best to provide appropriate materials. It may even be worthwhile to gather past participants into a focus group to help evaluate these resources.

Participants having confidence that their instructors cared about their success, and participants receiving feedback from instructors, also rated below average on both days. The first thing that comes to my mind is that instructors may need support on how to continuously get and give feedback during class, in the form of a professional development session or, at the very least, an instructor observation and rating system. There's a big difference between training and facilitating, and often, less experienced or less engaged trainers are not consistently able to close the feedback loop. By implementing an observation and rating program amongst instructors, they can be exposed to other teaching and presentation styles and see how they can personally improve.

Overall, I don’t see anything catastrophically challenging to this organization’s survival, but in the interest of consistent improvement, there are always steps to be taken.