Evaluation of the Basis and Effectiveness of Habitat Assessments in Wetland Functional Assessment Methods
Author: Gardner, Amy Elizabeth
Keywords: Wetland Assessment Methods; Habitat Assessment Technique
Abstract: I studied the basis and effectiveness of wetland assessment methods in providing habitat assessments. While it is well understood that wetlands and riparian areas provide important ecological functions and habitat for a wide variety of wildlife species, much is still to be learned about providing meaningful, accurate and repeatable methods for assessing them. I examined and evaluated four assessment methods to determine their accuracy and usefulness in assessing a site's provision of habitat. One hypothesis I tested is that if the assessment methods studied provide an accurate assessment of wetland functions, then the resulting site scores for the methods should be correlated. The second hypothesis is that there is a correlation between the site scores and an independent measure of function, specifically the number of riparian-associated bird and butterfly species observed at each site. Biological and physical data collected from 47 riparian sites in California's Central Valley were used to calculate site scores using Habitat Assessment Technique (HAT), Rocky Mountain Riparian Hydrogeomorphic (HGM), Southern California Riparian Model, and Reference Wetland assessment methods. The rankings of these site scores were also calculated for each method. Correlation coefficients (r) were calculated between the site scores of the four methods, as well as between the site scores and the numbers of riparian-associated bird and butterfly species for each plot. The site scores were mostly uncorrelated. Only one statistically significant correlation was demonstrated between the site scores for the Southern California Riparian Model and Reference Wetland methods (df = 46, r = 0.46, p = 0.00103, with Bonferroni correction). With Bonferroni corrections (p < 0.00625), the site scores were also uncorrelated with the numbers of riparian-associated bird and butterfly species.
Without Bonferroni corrections, only two statistically significant correlations were demonstrated: between the number of riparian-associated bird species and the HAT score (df = 46, r = 0.37, p = 0.0095), and between the number of riparian-associated butterfly species and the Reference Wetland score (df = 46, r = 0.38, p = 0.0092). I rejected both hypotheses, demonstrating that the assessment tools currently available do not consistently produce precise or reproducible results. Possible reasons for these problems include attempting to assess a function that is too broadly defined, inappropriately or subjectively selected variables, subjectively assigned variable values, or inappropriately selected reference sites. The existing attempts at assessing wetland or riparian function are important steps toward assessment of wetland and riparian sites and achievement of "no net loss," but functional assessment must be considered a work in progress.
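The statistical procedure the abstract describes, pairwise Pearson correlations tested against a Bonferroni-adjusted significance threshold (0.05 divided across 8 comparisons gives the p < 0.00625 cutoff cited above), can be sketched in a few lines of Python. This is an illustrative reconstruction only; the function names and sample data below are hypothetical and are not drawn from the thesis itself:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def bonferroni_alpha(alpha, n_tests):
    """Per-test significance threshold under a Bonferroni correction."""
    return alpha / n_tests

# A family-wise alpha of 0.05 split across 8 comparisons
# reproduces the abstract's per-test threshold of 0.00625.
threshold = bonferroni_alpha(0.05, 8)
print(threshold)  # 0.00625

# Hypothetical site scores from two methods (not the study's data):
method_a = [0.2, 0.5, 0.7, 0.9, 0.4]
method_b = [0.3, 0.4, 0.8, 0.85, 0.5]
print(round(pearson_r(method_a, method_b), 3))
```

Under this scheme, a correlation such as the Southern California Riparian Model vs. Reference Wetland result (p = 0.00103) survives the corrected threshold, while the uncorrected bird and butterfly correlations (p ≈ 0.009) do not.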
Description: Repository staff redacted information not essential to the integrity of this thesis to protect privacy.
Showing items related by title, author, creator and subject.
An analysis of language difficulties in Algebra I (Common Core) assessments versus Integrated Algebra assessments. Spoth, Amy (State University of New York College at Fredonia, 2016-05). The purpose of this study was to determine if the difficult linguistic features of mathematics assessments correspond to teachers' perceptions of the assessments. A mixed methods research design was used in order to analyze the linguistic features of each exam and also gain insight into how teachers feel about the assessments. The assessments analyzed in this study were the June 2008 Integrated Algebra Examination and the 2015 Algebra I (Common Core) Assessment. In addition to comparing linguistic features of the two assessments, interviews were conducted. Two teachers were interviewed in one school district. The results of the data collection indicated that while the Algebra I (Common Core) Assessment contained more difficult linguistic features in fourteen of the sixteen categories, readability tests showed the Integrated Algebra Examination is written at a higher reading and grade level. The results of the interviews concluded that while students may struggle with linguistically difficult features in mathematics, there are strategies which may be incorporated into instruction in order to help these students overcome these challenges. Some of these strategies may include practice reading texts with difficult linguistic features in mathematics classrooms, explicitly teaching students how to separate mathematics and language, and collaborating with other teachers to determine what strategies may work best for their students. [from abstract]
How Does the Assessment Information Gained From the Literacy Software Program Raz Kids Compare to the DRA Assessment Information? Mackmin, Jennifer M.; The College at Brockport (2010-01-01). This study looked at two critical questions concerning the use of Raz Kids: 1) How do students' reading levels assigned by the computer program Raz Kids compare to their reading levels according to the DRA reading assessment? 2) How does the assessment information gained from the DRA assessment compare to students' level/performance on the Raz Kids reading assessment? Investigating these questions helped to determine whether Raz Kids is an appropriate tool for students to use in the classroom and what type of data teachers could gather from this program to inform their future instruction of that student. I answered these questions by comparing the data gathered by the DRA assessment with the computer-generated data from the Raz Kids program. I looked for consistency between the two assessments and examined what type of data I was able to gather from each. I also took into account the attitudes that the classroom teacher and students have about each program through observations of the students and an interview with the classroom teacher. Students need to be prepared for the literacy demands they will face inside and outside of the classroom, and this research was helpful in finding out whether computer literacy programs are helping them meet this demand or not.
Congruity between Assessment Criteria and Cooperating Teacher Assessment of Student Teachers. Ocansey, Reginald T-A.; Sofo, Seidu; The College at Brockport (1998-12-01). This study investigated the congruity between cooperating teachers' assessment of student teachers and an established set of criteria for assessment during student teaching. The study also examined the substance of the comments of cooperating teachers about student teachers' performances. The final evaluation forms submitted by the cooperating teachers to the student teaching coordinator served as the main source of data. These forms were content analyzed to determine the congruity of cooperating teachers' assessment and the set of assessment criteria. The researcher developed the Brockport Supervision Analysis System—Physical Education (BSASPE) instrument for data analysis. Subjects for the study included 41 cooperating teachers (27 males and 14 females) who supervised 32 student teachers for the period Fall 1995 through Spring 1998. The student teachers (22 males and 10 females) were enrolled in the physical education teacher certification program at SUNY Brockport. The student teachers in this study taught in 34 different schools during the period covered by the study. These included 17 elementary schools, 11 middle schools, and six high schools. The results indicated that while most cooperating teachers awarded outstanding and highly competent grades to their student teachers, these grades were incongruent with the set of assessment criteria established by the university. However, the assessment of the one student teacher awarded a non-competent grade was congruent with the assessment criteria. It was also found that the cooperating teachers' comments were related to the competencies under which they were written. The study showed that cooperating teachers' comments differed across the grade levels taught by student teachers.
There is a need for further research to ascertain why most cooperating teachers' assessments were not congruent with the established assessment criteria, even though the teachers were able to make comments related to the major competencies for student teaching.