School Climate Inventory Assessment Protocol
Step 1: Select your Target Population(s)
This instrument can be administered through a variety of means. However, the most reliable data will be obtained by incorporating a sample of ratings representing the broadest possible range of stakeholders; it is therefore recommended that data be drawn from both teacher and student groups. Parent and staff data will further strengthen the reliability of the results, as will having an independent evaluation team perform an assessment. It is also recommended that the sample size be as large as possible (n = 40+ or 20%+ for students, 50%+ for teachers, 6+ for staff, 20+ for parents, and 3+ for independent evaluators).
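The recommended minimums above can be expressed as a simple check. This is a minimal sketch: the function name and the example counts are illustrative assumptions, not part of the protocol.

```python
# Hypothetical helper encoding the recommended minimum sample sizes:
# students: n = 40+ or 20%+ of population; teachers: 50%+;
# staff: 6+; parents: 20+; independent evaluators: 3+.

def sample_is_adequate(group, n_sampled, n_population):
    """Return True if the sample meets the recommended minimum for its group."""
    if group == "students":
        return n_sampled >= 40 or n_sampled >= 0.20 * n_population
    if group == "teachers":
        return n_sampled >= 0.50 * n_population
    minimums = {"staff": 6, "parents": 20, "evaluators": 3}
    return n_sampled >= minimums[group]

print(sample_is_adequate("students", 45, 600))  # meets the n = 40+ rule
print(sample_is_adequate("teachers", 12, 30))   # below the 50% threshold
```

Such a check can flag under-sampled groups before administration rather than after the data are collected.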
Step 2: Gather Data
Sub-Scales of the School Climate Assessment Instrument:
It is important for those facilitating the administration of the survey to provide accurate directions (see directions on Page One of the instrument) to participants, especially students. Mis-marked surveys cannot be used. A common problem is that participants make too many marks, assuming that each of the 3 descriptions for each item must be rated separately. Participants must feel uninhibited, anonymous, and relaxed for results to be meaningful. It is recommended that participants be given pre-labeled inventories coding their group category and number (e.g., P12 = parent group participant #12).
Focus group interview data can provide a powerful adjunct to the survey data, offering both greater reliability and a better sense of causality for the ratings on each dimension. It is recommended that focus group interviewers target fewer dimensions with greater depth. This can be accomplished by dividing dimensions among a group of interviewers. It is not recommended that school administrators conduct focus group interviews.
It is essential that interviewers are perceived as neutral to ensure honest and open participation. It is recommended that notes from the focus groups be transcribed and compiled for later analysis.
Step 3: Aggregate the Data
It is recommended that each item be aggregated for each separate group of participants. Each item should be given a score corresponding to its mean (marks at level 3 are scored a 5, marks between levels 3 and 2 are scored a 4, marks in the middle of level 2 receive a 3, and so forth; the mean score is obtained by dividing the total number of points for each item by the number of participants). Item mean scores will range from 5.0 (high) to 1.0 (low).
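The item-level calculation above can be sketched as follows. The mark-to-point conversion (level 3 = 5 points, between levels 3 and 2 = 4, and so on) is taken from the protocol; the sample marks themselves are illustrative.

```python
# Item mean: total points for the item divided by the number of participants.
# Each entry is one participant's mark, already converted to a 1-5 point value.

def item_mean(scores):
    """Mean of per-participant scores for a single item (each on a 1-5 scale)."""
    return sum(scores) / len(scores)

# Six hypothetical participants rating one item:
scores_for_item_1 = [5, 4, 4, 3, 5, 2]
print(round(item_mean(scores_for_item_1), 2))  # 3.83
```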
Next, a mean should be calculated for each group for each dimension. For example, School X may have a mean of 2.7 for Dimension 3: Student Interactions as rated by students, a mean of 3.3 as rated by parents, and so on. It is also recommended that an overall mean be calculated for each separate group.
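The group-by-dimension aggregation can be sketched as below. The nested structure (group, then dimension, then a list of item means) and the numbers are illustrative assumptions chosen to reproduce the School X example.

```python
# Group-by-dimension means, plus an overall mean per group (Step 3).
# Values are hypothetical item means on the 1.0-5.0 scale.

ratings = {
    "students": {"Dimension 3: Student Interactions": [2.5, 2.9, 2.7]},
    "parents":  {"Dimension 3: Student Interactions": [3.2, 3.4, 3.3]},
}

def dimension_mean(group, dimension):
    """Mean of one group's item means within a single dimension."""
    item_means = ratings[group][dimension]
    return sum(item_means) / len(item_means)

def overall_mean(group):
    """Mean of all of a group's item means across every dimension."""
    all_items = [m for items in ratings[group].values() for m in items]
    return sum(all_items) / len(all_items)

print(round(dimension_mean("students", "Dimension 3: Student Interactions"), 1))  # 2.7
print(round(dimension_mean("parents", "Dimension 3: Student Interactions"), 1))   # 3.3
```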
Step 4: Data Analysis
Creating a graphic representation of the data is recommended; it offers ease of interpretation and analysis. A table representing group means for each dimension can be effective, as can a bar graph or other type of chart. (See sample evaluations provided by ASSC.)
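One simple form of the recommended table, a grid of group means by dimension, can be produced as plain text. The groups, dimensions, and values below are illustrative placeholders, not real assessment data.

```python
# A minimal text table of group means by dimension (hypothetical values).

means = {
    "Dimension 1": {"Students": 3.1, "Teachers": 3.4, "Parents": 3.6},
    "Dimension 2": {"Students": 2.8, "Teachers": 3.0, "Parents": 3.2},
}

groups = ["Students", "Teachers", "Parents"]
header = f"{'':<14}" + "".join(f"{g:>10}" for g in groups)
print(header)
for dim, row in means.items():
    # One row per dimension, one right-aligned column per group.
    print(f"{dim:<14}" + "".join(f"{row[g]:>10.1f}" for g in groups))
```

The same data structure feeds equally well into a spreadsheet or charting tool for a bar graph.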
Teachers or teams can make assessment judgments at any of three levels.
- First, evaluations can be made at the individual item level. These data will provide implications for potential remediation and improvements related to practice.
- Second, each of the eight sub-scales should be scored as a unit. These data will provide the team a sense of which areas are sources of strength and which are areas of weakness/opportunity.
- Third, using the rubric holistically, the entire school could be judged to be at one of the three performance levels. This level of assessment can be used to make a global judgment as to where the school is in its process of growth.
Doing some degree of assessment at each level of judgment is recommended. Depending upon the purpose of the assessment, and your reporting audience, you may wish to communicate your findings with or without a high level of specificity.
As you examine the data you will likely notice that the means for both dimensions and groups are rather consistent. The implications of this are: 1) the instrument tends to be quite reliable; 2) groups tend to recognize relatively similar conditions in any school; and 3) each of the eight dimensions is interrelated.
Remember, this rubric is not intended to provide a quantifiable rating for purposes of school-to-school comparison. It is simply an instrument intended to furnish an evaluation team with an overall qualitative sense of the current school climate, as well as the specific aspects of that climate, in their varying stages of development, at any particular school.
Step 5: Use of the Data
It is recommended that those involved in the assessment process are also involved in the process of action planning. The insights drawn from the data analysis process (especially the focus group interviews) will be invaluable in any process of implementation.
Examining dimension-level data will be useful in identifying areas of need. Further focus group data may be useful after discovery of an area that has been rated very low. Examining item-level data is useful when examining forms of practice that may be either particularly strong or weak. Curriculum experts and/or ASSC consultants may be helpful in suggesting practices that will target areas for improvement as identified by particular items. Specialized in-services can be one possible solution to these areas. It is recommended that any action plan be developed immediately following the completion of the data analysis. Delay can lead to stagnation and is typically a mistake. Moreover, it is essential that those charged with the task of identifying needs and developing a plan of action have the necessary power to implement those changes.
The duties of those on either the assessment and/or planning teams should be undertaken with an awareness of the process's potential for political damage to the school or to themselves. Items have been purposefully developed to be as incisive as possible. While this leads to greater validity of the assessment, it also contributes to the sensitivity of the data. This process should never be used to assign blame to other faculty, put students down, indict leadership, or promote the perception that certain "individuals" are the problem. Solutions in the area of school climate improvement most often come as a result of raising the faculty's collective awareness by relating the systemic patterns operating within the school to the choices that have contributed to their existence. This exercise will help the learning community address and collaboratively act on those areas of concern in an effort to promote shared accomplishment.
IMPORTANT: This work can be of profound value in your school's efforts toward improvement. But make no mistake. You are entering a very sensitive and intimate realm - the heart and mind of your school. It is critical that your endeavors are not perceived by others as personal accusations, careless bashing, and/or political gamesmanship, or your efforts will result in more harm than good.