Examining the STEM-Science Achievement Test (SSAT) Using Rasch Dichotomous Measurement Model

J. Jamaludin, Y. F. Lay, C. H. Khoo, A. S. Y. Leong

Abstract

The main purpose of this study is to develop a valid and reliable instrument to measure the STEM-Science achievement of primary school students in Malaysia. Six Year 4 Science topics (Scientific Skills, Life Processes of Humans, Properties of Materials, Measurement, Solar System, Importance of Technologies in Life) and six Year 5 Science topics (Rules and Regulations in the Science Lab, Life Processes of Plants, Acids and Alkalis, Electricity, Earth and Space Science, Technology and Sustainable Life) were included in the development of the STEM-Science Achievement Test (SSAT). A total of 226 Year 4 and 226 Year 5 primary school students in Sabah responded to the developed instrument to test their STEM-Science knowledge. The Rasch dichotomous measurement model was used to evaluate the validity and reliability of the SSAT. Validity was assessed using the Point-Measure Correlation (PTMEA CORR), Principal Component Analysis of Residuals (PCAR), and Infit and Outfit Mean Squares (MNSQ). Reliability was assessed using Cronbach's alpha, item reliability, and the item separation index. The analysis revealed unidimensionality for both the objective and subjective items. For the objective items, Cronbach's alpha was .81 and .83, item reliability was .95 and .95, and item separation was 4.21 and 4.25 for Year 4 and Year 5 students, respectively. Standardised residual correlations for the Year 4 and Year 5 subjective items also showed satisfactory values. The assessment using the Rasch measurement model demonstrated that the SSAT is a valid and reliable instrument for measuring Malaysian primary school students' STEM-Science-related knowledge.
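The statistics named in the abstract can be illustrated with a minimal, self-contained sketch (plain Python, illustrative only; the study's actual analysis would have been run in dedicated Rasch software, and the function names here are the author's own):

```python
import math

def rasch_probability(theta, delta):
    """Rasch dichotomous model: probability of a correct response for a
    person of ability theta on an item of difficulty delta (both in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - delta)))

def outfit_mnsq(observed, expected):
    """Outfit mean-square for one item: the mean squared standardized
    residual across persons. Values near 1.0 indicate good model fit."""
    z2 = [(x - p) ** 2 / (p * (1 - p)) for x, p in zip(observed, expected)]
    return sum(z2) / len(z2)

def cronbach_alpha(scores):
    """Cronbach's alpha for a persons-by-items matrix of 0/1 scores.
    `scores` is a list of rows, one row of item scores per person."""
    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

    k = len(scores[0])  # number of items
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

When ability equals difficulty the model predicts a .5 chance of success, an outfit MNSQ near 1.0 signals responses consistent with the model, and alpha approaches 1 when items rank persons consistently.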

Keywords

validation; Rasch measurement model; science achievement test


