Exploring and Comparing the Content Validity and Modern Test Theory Assumptions of an Integrated Assessment: A Critical Thinking–Chemical Literacy Study

S. Sadhu, E. Ad'hiya, E. W. Laksono

Abstract

The development of chemical literacy and critical thinking is a prominent objective of science education, and both have become essential skills in the 21st century. Little emphasis has been placed on measuring the two skills together within high school chemistry content. A further problem arises during instrument development: teachers often do not know whether the content and constructs they build into a tool actually measure the skills it is supposed to measure. Once an instrument has been constructed, the first step is to study its content validity, and psychometric testing is also needed. This study therefore explores the content validity evidence, as well as the assumptions of modern test theory, for an integrated assessment that measures students’ chemical literacy and critical thinking, and compares the results of the content validity study with the assumption checks to establish which is mandatory for a research instrument. The initial integrated assessment consisted of 37 items. Content validity was first examined through a review by six experts, and 133 participants were then involved in testing the assumptions of modern test theory for the integrated assessment. The study confirms that, on the evidence of content validity, the integrated assessment can measure 13 integrated skills of chemical literacy and critical thinking on the topic of chemical equilibrium, and that the assumptions of modern test theory meet the standards of a valid instrument. Once the quality of the test items has been examined with more sophisticated statistical analysis, the integrated assessment is appropriate for large-scale paper-and-pencil testing. The outcome of this research will help teachers and lecturers analyze the quality of the assessments they use in student evaluation.
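As a concrete illustration of the two analyses the abstract describes, the sketch below shows how an item-level content validity index (I-CVI) can be computed from six experts’ relevance ratings, and how the unidimensionality assumption of modern (item response) theory can be screened with an eigenvalue decomposition of the inter-item correlations. This is a minimal sketch: all data, item names, and the helper `item_cvi` are invented for illustration and are not the authors’ instrument, dataset, or results; the 0.83 cutoff is a commonly cited acceptance threshold for panels of six experts.

```python
# Minimal sketch of (1) quantifying content validity from expert ratings
# and (2) screening the unidimensionality assumption of modern test theory.
# All data below are invented for illustration.
import numpy as np

def item_cvi(ratings):
    """Item-level content validity index: the proportion of experts rating
    the item 3 or 4 on a 4-point relevance scale."""
    return (np.asarray(ratings) >= 3).mean()

# Hypothetical ratings from six experts for three of the 37 items.
expert_ratings = {
    "item_01": [4, 4, 3, 4, 3, 4],
    "item_02": [4, 3, 4, 4, 4, 2],   # one expert disagrees
    "item_03": [2, 3, 2, 4, 3, 2],   # weak relevance, candidate for revision
}

for item, ratings in expert_ratings.items():
    cvi = item_cvi(ratings)
    # With six experts, an I-CVI of at least 0.83 (at most one dissenting
    # rating) is a common acceptance threshold.
    verdict = "acceptable" if cvi >= 0.83 else "revise or drop"
    print(f"{item}: I-CVI = {cvi:.2f} ({verdict})")

# Scale-level index: the mean of the item-level indices.
s_cvi = np.mean([item_cvi(r) for r in expert_ratings.values()])
print(f"S-CVI/Ave = {s_cvi:.2f}")

# --- Unidimensionality screen for the modern-theory assumption ----------
# Simulated 0/1 responses for 133 examinees on 37 items, standing in for
# the real dataset. A dominant first eigenvalue of the inter-item
# correlation matrix is the usual quick evidence of one latent trait.
rng = np.random.default_rng(0)
ability = rng.normal(size=(133, 1))
difficulty = rng.normal(size=(1, 37))
responses = (ability - difficulty + rng.normal(size=(133, 37)) > 0).astype(int)

# Eigenvalues in descending order; their sum equals the number of items.
eigvals = np.linalg.eigvalsh(np.corrcoef(responses, rowvar=False))[::-1]
print(f"First eigenvalue explains {eigvals[0] / eigvals.sum():.1%} of variance")
print(f"Ratio of first to second eigenvalue: {eigvals[0] / eigvals[1]:.1f}")
```

In practice the eigenvalue screen would be followed by a confirmatory factor analysis and a formal item response theory model fit; the ratio of the first two eigenvalues only gives a quick first check that a single dominant trait underlies the responses.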

Keywords

assumption of modern theory, chemical equilibrium, chemical literacy, content validity, critical thinking


