Analysis of the Measurement Equivalence of a Translated Test in a State Assessment Program

Authors

DOI:

https://doi.org/10.15359/ree.20-3.9

Keywords:

Measurement equivalence, structural equation modeling, confirmatory factor analysis

Abstract

When tests are translated into one or more languages, the question arises of whether the items are equivalent across languages. This equivalence can be studied at the scale level through a multiple-group confirmatory factor analysis (CFA) within the framework of structural equation modeling. This study examined the measurement equivalence of the Spanish version of a test originally developed in English, using a multiple-group CFA. The samples consisted of native speakers of the target language, specifically Spanish speakers, who took the test in both English and Spanish. The test items were grouped into 12 parcels of similar content, and a mean score was computed for each parcel. Four models were analyzed to examine equivalence across groups. The factor loadings and the results obtained for the two versions of the test (Spanish and English) suggest that the two versions are equivalent. The statistical techniques used in this study can also be applied to analyze test performance in terms of variables that are dichotomous, or can be treated as dichotomous, such as gender, socioeconomic status, geographic location, and other variables of interest.
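The parceling step described above (grouping items by content and averaging each group into a single indicator) can be sketched as follows. This is a minimal illustration, not the authors' code; the parcel names, item names, and scores are hypothetical, and a real application would use the study's 12 content-based parcels per examinee before fitting the multiple-group CFA.

```python
from statistics import mean

def build_parcels(responses, parcel_map):
    """Average one examinee's item scores within each content parcel.

    responses  : dict mapping item name -> score (e.g., 0/1 for dichotomous items)
    parcel_map : dict mapping parcel name -> list of item names in that parcel
    Returns a dict mapping parcel name -> mean item score (the parcel indicator).
    """
    return {parcel: mean(responses[item] for item in items)
            for parcel, items in parcel_map.items()}

# Hypothetical example: 6 items grouped into 2 content parcels
parcel_map = {
    "algebra":  ["i1", "i2", "i3"],
    "geometry": ["i4", "i5", "i6"],
}
examinee = {"i1": 1, "i2": 0, "i3": 1, "i4": 1, "i5": 1, "i6": 0}
parcels = build_parcels(examinee, parcel_map)
# Each parcel score is the mean of its items, e.g. algebra = (1 + 0 + 1) / 3
```

The resulting parcel scores, computed separately for the English-language and Spanish-language groups, would then serve as the observed indicators in the multiple-group CFA.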

Author Biographies

Jorge Carvajal-Espinoza, Universidad de Costa Rica

He holds a Licenciatura in Mathematics Education and a Master's in Educational Evaluation from the Universidad de Costa Rica, and a PhD in Educational Measurement from the University of Kansas. He is a professor at the School of Mathematics, Universidad de Costa Rica, where he has taught for more than 20 years, and a researcher at the Centro de Investigaciones Matemáticas y Meta-Matemáticas, Universidad de Costa Rica. He has published internationally and has presented at international conferences in the field of educational measurement. He supervises the development and statistical analysis of the Prueba de Diagnóstico, an entrance placement test administered by the School of Mathematics, Universidad de Costa Rica.

Greg Welch, University of Nebraska

He received a Bachelor's in Psychology and a Master's in Applied Statistics from the University of Wyoming, and a Master's and Doctorate in Research Methodology in Education from the University of Pittsburgh. Welch currently leads the evaluation efforts of the Center for Research on Children, Youth, Families & Schools at the University of Nebraska-Lincoln (UNL) and has provided formative and summative evaluation expertise on a number of privately and federally funded projects. He also serves as an adjunct faculty member in the Quantitative, Qualitative, and Psychometric Methods Program in the Department of Educational Psychology at UNL. Welch has taught numerous graduate-level courses, including Introduction to Educational Measurement, Structural Equation Modeling, and Program Evaluation, and is a regular member of doctoral committees for students in programs throughout the College of Education and Human Sciences. His research agenda focuses on using advanced methodological approaches to address important educational policy issues.

References

American Educational Research Association (AERA), American Psychological Association (APA), & National Council on Measurement in Education (NCME). (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.

August, D., & Hakuta, K. (Eds.). (1997). Improving schooling for language-minority students. A research agenda. Washington, DC: National Academy of Science.

Bentler, P. M. (1995). EQS: Structural equations program manual. Encino, CA: Multivariate Software.

Byrne, B. M. (2006). Structural equation modeling with EQS: Basic concepts, applications, and programming. Mahwah, NJ: Lawrence Erlbaum. doi: http://dx.doi.org/10.1207/s15328007sem1302_7

Carvajal, J. (2015). Using DIF to monitor equivalence of translated tests in large scale assessment: A comparison of native speakers in their primary and the test's source language. The Tapestry Journal, 7(1), 14-21. Retrieved from http://journals.fcla.edu/tapestry/article/view/88133/84742

Gierl, M., Rogers, W. T., & Klinger, D. A. (1999). Using statistical and judgmental reviews to identify and interpret translation differential item functioning. The Alberta Journal of Educational Research, 45(4), 353-376. Retrieved from http://ajer.journalhosting.ucalgary.ca/index.php/ajer/article/view/107/99

Hall, R. J., Snell, A. F., & Singer M. (1999). Item parceling strategies in SEM: Investigating the subtle effects of unmodeled secondary constructs. Organizational Research Methods, 2(3), 233-256. doi: http://dx.doi.org/10.1177/109442819923002

Hirschfeld, G., & von Brachel, R. (2014). Multiple-group confirmatory factor analysis in R: A tutorial in measurement invariance with continuous and ordinal indicators. Practical Assessment, Research & Evaluation, 19(7), 1-12. Retrieved from http://pareonline.net/pdf/v19n7.pdf

Holmes, D., Hedlund, P., & Nickerson, B. (2000). Accommodating ELLs in state and local assessments. Washington, DC: National Clearinghouse for Bilingual Education.

Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1-55. doi: http://dx.doi.org/10.1080/10705519909540118

Lara, J., & August, D. (1996). Systemic reform and limited English proficient students. Washington, DC: Council of Chief State School Officers.

Lievens, F., Anseel, F., Harris, M. M., & Eisenberg, J. (2007). Measurement invariance of the Pay Satisfaction Questionnaire across three countries. Educational and Psychological Measurement, 67(6), 1042-1051. doi: http://dx.doi.org/10.1177/0013164406299127

Price, L. R. (1999). Differential functioning of items and tests versus the Mantel-Haenszel technique for detecting differential item functioning in a translated test. Paper presented at the annual meeting of the American Alliance of Health Physical Education, Recreation, and Dance. Boston, MA.

Robin, F., Sireci, S. G., & Hambleton, R. (2003). Evaluating the equivalence of different language versions of a credentialing exam. International Journal of Testing, 3(1), 1-20. doi: http://dx.doi.org/10.1207/S15327574IJT0301_1

Sireci, S. G., & Khaliq, S. N. (2002, April). An analysis of the psychometric properties of dual language test forms (Center for Educational Assessment, Report No. 458). Paper presented at the Annual Meeting of the National Council on Measurement in Education. Amherst: University of Massachusetts, School of Education. Retrieved from http://files.eric.ed.gov/fulltext/ED468489.pdf

Wu, A. D., Li, Z., & Zumbo, B. D. (2007). Decoding the meaning of factorial invariance and updating the practice of multi-group confirmatory factor analysis: A demonstration with TIMSS data. Practical Assessment, Research and Evaluation, 12(3), 1-26. Retrieved from http://pareonline.net/getvn.asp?v=12&n=3

Published

2016-09-01

How to Cite

Análisis de la equivalencia de la medida de la traducción de un test en un programa de evaluación estatal (J. Carvajal-Espinoza & G. Welch, Trans.). (2016). Revista Electrónica Educare, 20(3), 1-18. https://doi.org/10.15359/ree.20-3.9

