The Development of an Android-Based Assessment Instrument to Assess Fifth-Grade Students' Cognitive Ability

Article Info: Received 17 May 2021; Accepted 11 June 2021; Published 30 August 2021


INTRODUCTION
Economic, technological, demographic, political, and sociocultural forces have altered how people work and live in the twenty-first century. Change will continue to occur at a breakneck pace, and education must adapt to it. Education's role in preparing students who can learn and innovate, use technology and information media, and work and survive using life skills is becoming increasingly important (Arifin, 2017; Wechsler et al., 2018; Wijaya, Sudjimat, & Nyoto, 2016).
According to Levy and Murnane (in Boyaci & Atalay, 2016), with the advancement of technology, scientific innovation, globalization, shifts in job orientation, economic pressures, and significant changes in social life, students must possess skills that contribute to and improve their social lives. It has further been argued that life in the twenty-first century is synonymous with the advancement of science and technology, which places increasing demands on every element of life, most notably the world of education.
In response to the issues raised above, educators and researchers are racing to conduct research and development on 21st-century skills that will benefit students in the future. Collaboration, communication, digital literacy, citizenship, problem-solving, critical thinking, creativity, and productivity are examples of 21st-century skills (Ahonen & Kinnunen, 2015; Laar, Deursen, Dijk, & Haan, 2017; Voogt & Roblin, 2012).
In the United States, organizations representing diverse fields such as the National Academy of Sciences, the National Academy of Engineering, the Institute of Medicine, and the National Research Council have collaborated to form The National Academies, a special committee tasked with developing 21st-century skills. The National Academies have classified the 21st-century skills they have developed into three broad categories: cognitive competence, interpersonal competence, and intrapersonal competence (The National Academies, 2012).
Critical thinking is a fundamental cognitive ability that should be developed in elementary schools (Tim Pusat Penilaian Pendidikan, 2019). According to Kumalasari & Sulistyorini (2019), critical thinking is a type of higher-level thinking that falls under cognitive processes such as analyzing, evaluating, and creating. The ability to think critically is one of the skills necessary for someone to contribute to the lives of others (Facione, 2011). Ennis (2011) classified 12 indicators of critical thinking ability into five categories: 1) providing simple explanations, 2) developing fundamental skills, 3) concluding, 4) providing additional explanations, and 5) strategies and tactics.

According to Susanto (2013), developing a thought process is critical for problem-solving and decision making. Students who possess high-level thinking abilities can accept differences, are more receptive to diversity, are skeptical of news or information that is not yet clear, and are not easily influenced or provoked by negative things. Students who develop higher-order thinking abilities are capable of thinking and acting independently and of distinguishing between essential and secondary concerns (Arifin, 2017). The acquisition and assessment of higher-order thinking skills enable students to 1) transfer and apply knowledge and skills to the real world in a more complex manner; 2) engage in critical thinking, such as deducing the truth from data and facts or producing work steps that adhere to the criteria; and 3) troubleshoot, analyze, and resolve life's problems (Damayanti, Rusilowati, & Linuwih, 2017; Mulnix, 2012).

Based on observations and interviews with fifth-grade teachers and preliminary data from previous studies, it was concluded that school-based assessments were neither effective nor efficient in meeting the 2013 curriculum's expectations.
This is because the preparation and administration of authentic manual assessments are viewed as impractical and inefficient. The assessment instruments included in the teacher's book are insufficient and do not measure accurately, so they have not fully aided teachers in conducting authentic assessments; the instruments provided for each competency assessment are still quite limited.
The needs analysis results indicate that a tool is required to conduct practical and efficient assessments and accurately assess cognitive competencies that include aspects of students' knowledge acquired through thematic learning. Additional data and information were gathered during the needs analysis stage via a questionnaire about the needs of teachers and students.
Numerous previous studies have been conducted on instrument development, critical thinking, and even android applications. According to research conducted by Alfansuri, Rusilowati, & Ridlo (2018), the development of an instrument aims to help students understand which competencies they have mastered and which they have not; of course, the teacher can see the extent to which students' competencies have developed. Additionally, teachers are capable of organizing and managing the appropriate learning process.
Research by Aziz, Kustiono, & Lestari (2019) states that the development of technology-based instruments can be a discovery for elementary school teachers, who need to replace old assessment instruments with new ones.
The gap between the research that has been conducted and the facts on the ground points to a solution to the problem identified in the needs analysis. The solution chosen is to create a flexible, effective, and efficient tool that helps teachers facilitate the assessment process (Aziz et al., 2019).
These tools are expected to assist teachers in conducting assessments objectively through precise and accurate indicators. The needs are met through the development of authentic assessment instruments and the application of technological advancements. As a result, an android-based cognitive assessment instrument was developed to meet the needs of teachers and students involved in the assessment's implementation.

METHODS
The research method used is research and development (R&D). The study's development procedure is divided into four stages: definition, design, development, and dissemination, collectively referred to as 4-D (Four D) (Kumalasari & Sulistyorini, 2019). The defining stage consists of 1) analysis through observation, interviews, and documentation, and 2) needs analysis through the collection of data and information on the needs of teachers and students via a questionnaire.
The design phase consists of the following steps: 1) goal setting and literature review, beginning with an analysis of core competencies (CI), fundamental competencies (FC), indicators, and learning objectives related to Theme 2, Clean Air for Health; a literature review of the grand theory of critical thinking was also conducted to identify the critical thinking indicators used to determine items; and 2) drafting the assessment instrument, which produces a grid of questions, knowledge question sheets, answer keys, and scoring guidelines. The completed instrument is then verified by the validator.
The development stage consists of the following: 1) creating a prototype for an android-based assessment instrument. 2) expert validation of prototype development results through the use of a questionnaire. 3) product trial involving one teacher and fifteen students from Randudongkal Elementary School 02, grade 6. 4) revision of the product in light of evaluations made during the learning process and the outcome of discussions with the teacher. The process of product improvement is guided by suggestions and input from teachers and experts to perfect the product and obtain the final product.
The dissemination stage consists of 1) product implementation on a larger scale with two teachers and 85 students. The product was implemented in class V of Randudongkal Elementary School 01 and Randudongkal Elementary School 02. The implementation phase is used to determine the product's feasibility and practicality.
2) product enhancement and dissemination via seminars, scientific journal publications, and educator manuals.

RESULTS AND DISCUSSION
According to Hadzhikoleva et al. (2019), HOTS is necessary both for personal and professional development and for the social and economic development of society. Indeed, according to Zohar & Kohen (2016), the development of higher-order thinking skills (HOTS) is an international priority for education, through which students can train themselves to face the demands of the modern era of the digital revolution 4.0.
Saavedra & Opfer (2012) likewise note that students need HOTS in the 21st century as a prerequisite for progress: initiative has become a must as a result of globalization, technological development, international competition, a developing economy, and global environmental and political challenges.
Data and additional information were gathered during the needs analysis stage via a teacher needs questionnaire. Two fifth-grade teachers from Elementary Schools 01 and 02 Randudongkal completed the questionnaire. The teacher needs questionnaire used as a data collection instrument contains 19 indicators.
According to the analysis of teacher needs, teachers agree that an assessment using an Android-based application would be effective and efficient and would help them learn to master technological devices. Applications must be developed in accordance with the 2013 Curriculum's core competencies (CI), fundamental competencies (FC), and instructional materials.
According to the needs analysis results based on the teacher's needs questionnaire, teachers require tools for implementing practical, effective, and efficient assessments that measure cognitive competence accurately and objectively. Teachers require Android-based tools to implement a time- and resource-efficient assessment process.
The developed instrument's design is based on the grand theory of critical thinking indicators. Ennis (2011) classified 12 indicators of critical thinking abilities into five categories; these 12 indicators are reduced to three indicators that are manifested in the creation of items.
The stages of developing items correspond to the grand theory of critical thinking indicators, followed by the development of scoring guidelines. The scoring guide is intended to assist teachers in determining the appropriate score for each item. The results of the knowledge assessment analysis are then converted into criteria for assessing student attitudes based on the Minimum Completeness Criteria (MCC).
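The conversion of raw assessment results into criteria against the Minimum Completeness Criteria (MCC) can be sketched as follows. This is a hypothetical illustration only: the MCC value of 70 and the band widths are assumptions for demonstration, not values taken from the study.

```python
MCC = 70  # hypothetical Minimum Completeness Criteria threshold (assumed, not from the study)

def score_to_criterion(score: float, mcc: float = MCC) -> str:
    """Map a 0-100 knowledge score to a qualitative achievement criterion.

    The band boundaries below are illustrative assumptions.
    """
    if score < mcc:
        return "Below MCC (needs remediation)"
    elif score < mcc + 10:
        return "Meets MCC (sufficient)"
    elif score < mcc + 20:
        return "Good"
    return "Very good"
```

A teacher (or the Android application itself) could apply such a mapping to each student's converted knowledge score to produce the criterion labels used in reporting.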
The stages of developing multiple-choice items are determined by the predetermined indicators of critical thinking. The purpose of this study is to create written test items in the form of multiple-choice questions. Ten multiple-choice questions were generated from the items developed using the three critical thinking indicators.
Expert judgment was used to validate the cognitive competency assessment instrument, involving three experts in their fields, namely two instrument experts and one practitioner. Each validator assigns a value between one and four. The analysis reveals that all ten items of the cognitive competency assessment instrument have a validity value greater than 0.3, indicating that the instrument is valid.
The reliability of the cognitive competence assessment instrument was tested using the Intraclass Correlation Coefficient (ICC) with the aid of SPSS 22. According to Suharsimi (2008: 75), an assessment instrument is considered reliable if r_xx > 0.6. The ICC calculation yields a reliability coefficient of r_xx = 0.779, indicating that the cognitive competency assessment instrument is reliable and ready for testing.
On a small scale, the instrument was tested on twenty student respondents. Item validity was determined using point biserial correlation analysis in Excel. The condition is that the point biserial r-count coefficient must be greater than the r-table value of 0.444 (for 20 samples). If an item's correlation coefficient is less than 0.444, that item of the assessment instrument is considered invalid and is not used.
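The point biserial validity check described above can be reproduced outside Excel. The sketch below is a minimal Python implementation of the standard point-biserial formula; the function names and the example data are illustrative, not from the study.

```python
import math

def point_biserial(item_correct, total_scores):
    """Point-biserial correlation between a dichotomous item (0/1 per student)
    and the students' total test scores."""
    n = len(total_scores)
    group1 = [s for c, s in zip(item_correct, total_scores) if c == 1]
    group0 = [s for c, s in zip(item_correct, total_scores) if c == 0]
    mean1 = sum(group1) / len(group1)          # mean score of students who got the item right
    mean0 = sum(group0) / len(group0)          # mean score of students who got it wrong
    mean_all = sum(total_scores) / n
    sd = math.sqrt(sum((s - mean_all) ** 2 for s in total_scores) / n)  # population SD
    p = len(group1) / n                        # proportion answering correctly
    q = 1 - p
    return (mean1 - mean0) / sd * math.sqrt(p * q)

R_TABLE_20 = 0.444  # critical r value for n = 20 at the 5% level, as used in the study

def is_valid(item_correct, total_scores):
    """An item is retained only if r-count exceeds r-table."""
    return point_biserial(item_correct, total_scores) > R_TABLE_20
```

An item whose coefficient falls at or below 0.444 would be flagged invalid and dropped, mirroring the decision rule the study applies.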
The validity results indicate that the ten items of the cognitive competency assessment instrument have a coefficient of validity greater than rtable, 0.444. It indicates that all ten items on the cognitive competency assessment instrument are valid, and none of them fail to meet the criteria. The cognitive competency assessment instrument's items may be used in large-scale trials.
In this small-scale trial, the reliability of the cognitive competency assessment instrument was determined using the Cronbach Alpha reliability test in SPSS 22.0. Table 1 illustrates the small-scale Cronbach Alpha reliability analysis, which yielded an instrument reliability coefficient of 0.739. An index equal to or greater than 0.70 is considered reliable (Mardapi, 2016). According to this analysis, the cognitive competency assessment instrument has a high degree of reliability in small-scale trials.
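The Cronbach Alpha coefficient that SPSS reports follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch, with illustrative data rather than the study's actual item scores:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal-consistency reliability.

    item_scores: list of per-student lists, one score per item,
    e.g. [[1, 0, 1, ...], ...] for 10 dichotomous items.
    """
    k = len(item_scores[0])   # number of items
    def var(xs):              # sample variance (n - 1 denominator), as SPSS uses
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[i] for row in item_scores]) for i in range(k)]
    total_var = var([sum(row) for row in item_scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

RELIABLE_THRESHOLD = 0.70  # Mardapi (2016): index >= 0.70 is considered reliable
```

With the study's small-scale data this calculation would yield the reported 0.739, clearing the 0.70 threshold.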
Exploratory Factor Analysis (EFA) was used to determine construct validity. A random sampling technique was used to select 60 respondents.
Data are suitable for exploratory factor analysis if the Kaiser-Meyer-Olkin Measure of Sampling Adequacy (KMO MSA) is greater than 0.5 and the Chi-Square significance value is less than 0.05. Based on the field test data, the KMO value is 0.730, which is greater than 0.5, and the Chi-Square significance is 0.000. The analysis findings are summarized in Table 2. According to the SPSS output, the Measure of Sampling Adequacy (MSA) coefficients, marked with the letter "a" along the diagonal of the Anti-Image Correlation matrix, include no item with a value below the 0.5 criterion. This demonstrates that the test data are suitable for exploratory factor analysis.
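The KMO statistic compares the items' ordinary correlations with their partial correlations (obtained from the inverse of the correlation matrix). The sketch below implements that standard formula in Python with NumPy; the function name and example matrix are illustrative, not from the study.

```python
import numpy as np

def kmo(corr):
    """Overall KMO Measure of Sampling Adequacy from an item correlation matrix.

    KMO = sum(r^2) / (sum(r^2) + sum(partial^2)), summed over off-diagonal entries.
    """
    inv = np.linalg.inv(corr)
    # Partial correlations are derived from the inverse correlation matrix
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d
    np.fill_diagonal(partial, 0.0)      # keep only off-diagonal partials
    r = corr.copy()
    np.fill_diagonal(r, 0.0)            # keep only off-diagonal correlations
    r2, p2 = (r ** 2).sum(), (partial ** 2).sum()
    return r2 / (r2 + p2)
```

A KMO above 0.5 (the study obtained 0.730) indicates the correlation structure is compact enough for factor analysis to yield distinct, reliable factors.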
The output of factor analysis consists of four major components that must be considered: (1) Total Variance Explained; (2) Scree Plot; (3) Component Matrix; and (4) Rotated Component Matrix. According to the Total Variance Explained, three factors are formed. The number of factors is determined by the eigenvalues that remain greater than one; the eigenvalues represent each factor's relative importance in accounting for the variance of the ten items analyzed.
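The eigenvalue-greater-than-one rule (the Kaiser criterion) can be checked directly from the item correlation matrix. A minimal NumPy sketch, with synthetic data standing in for the study's 60-respondent item scores:

```python
import numpy as np

def n_factors_kaiser(scores):
    """Kaiser criterion: count eigenvalues of the item correlation matrix > 1.

    scores: (n_respondents, n_items) array of item scores.
    Returns (number of factors, eigenvalues sorted descending).
    """
    corr = np.corrcoef(scores, rowvar=False)   # item-by-item correlation matrix
    eigenvalues = np.linalg.eigvalsh(corr)     # symmetric matrix -> real eigenvalues
    return int(np.sum(eigenvalues > 1.0)), sorted(eigenvalues, reverse=True)
```

Applied to the study's ten items, this count would yield the three factors reported in the Total Variance Explained table.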
Total variance can also be examined through the scree plot in Figure 2. The scree plot shows that the eigenvalues decrease as the number of factors increases; only the first three factors retain eigenvalues greater than 1, indicating three factors.
A component factor loading must be at least 0.3; when an item has two loadings of similar magnitude, the larger one is used. Without rotation, the analysis places items 1, 2, 3, 4, 5, 7, 9, and 10 on component factor 1, item 6 on component factor 2, and item 8 on component factor 3.
The Rotated Component Matrix demonstrates a more defined and realistic factor distribution: large factor loadings increase while small factor loadings decrease. The Rotated Component Matrix produces the following results: items 1, 5, 7, and 9 are included in component factor 1; items 2, 3, and 4 are included in component factor 2; and items 6, 8, and 10 are included in component factor 3. The results of the reliability analysis are shown in Table 3. As shown in Table 3, the reliability coefficient obtained using SPSS version 22 is 0.866. An index equal to or greater than 0.70 is considered reliable (Mardapi, 2016). According to this analysis, the cognitive competency assessment instrument has a high degree of reliability in large-scale trials.
Apart from being valid and reliable, a cognitive competency assessment instrument should also be practical for elementary school teachers to use in assessing students' cognitive abilities. To determine practicality, data on practitioners' perceptions of the students' cognitive competency assessment instrument were collected through a questionnaire. The questionnaire statement items were developed around subjectivity, systematics, construction, linguistic, and practicality considerations. The assessor's score for the instrument's practicality is interpreted against the score criteria listed in Table 4.
The results of the calculation of the teachers' responses to the cognitive assessment instrument's practicality are shown in Table 4. Across all aspects, the lowest practicality score among the five teacher respondents was 37 and the highest was 41. According to the score criteria in Table 4, each respondent deemed the cognitive competency assessment instrument to be practical.
In fact, HOTS is one of the key aspects of education. As the highest level in the cognitive process hierarchy, HOTS is used by students when they encounter unfamiliar problems, uncertainties, dilemmas, or new information. When faced with these situations, students recall and compile facts, relate them to prior knowledge, and use this information to achieve goals or solve complex situations (Yee et al., 2015; Hadromi et al., 2021). As a result, many researchers and educators conclude that higher-order thinking enables better problem-solving (Abosalem, 2016). Handayani & Lestari (2019) state that developing instruments related to higher-order thinking skills can make students think critically and creatively and solve problems; such instruments can serve as an assessment model for teachers to measure students' learning processes and outcomes.
According to the findings of a preliminary study on Android applications conducted by Ashari, Lestari, & Hidayah (2016), Android-based assessment instruments can provide feedback on the competencies students have mastered. The use of Android as a learning platform can be a strategy for developing learning objectives that emphasize increasing students' motivation and engagement (Lin & Jou, 2013).

CONCLUSION
According to the analysis of teacher needs, teachers agree that an assessment using an Android-based application would be effective and efficient and would help them learn to master technological devices. The developed application must adhere to the 2013 Curriculum's core competencies, fundamental competencies, and instructional materials. The resulting product, an Android-based cognitive assessment application, was demonstrated to be valid, reliable, and practical.