Aiken’s V Coefficient: Differences in Content Validity Judgments

Authors

  • Cesar Merino-Soto, Universidad Católica Los Ángeles de Chimbote, Peru

DOI:

https://doi.org/10.15359/mhs.20-1.3

Keywords:

psychometric, psychological test, statistical analysis, validation studies, methodology

Abstract

Objective: When two independent groups of expert judges assess content validity, their judgments may differ, so a formal test of the difference between them is required. In general, however, content validity research does not examine this likely source of discrepancy. This report describes the implementation of a method for evaluating the difference between Aiken's V coefficients, applied to research work in sports science.

Methodology: The procedure adapts an approach to construct a confidence interval for the difference between Aiken's V coefficients and also implements a standardized estimator of the size of that difference, based on the arcsine transformation of the V coefficients.
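
To make the computation concrete, the minimal Python sketch below illustrates one plausible implementation consistent with the references cited (Penfield & Giacobbi, 2004; Zou & Donner, 2008; Merino-Soto, 2018): Aiken's V for two independent panels, a score confidence interval for each coefficient, an interval for their difference built from the two separate intervals, and an arcsine-based standardized difference. The ratings, panel sizes, rating scale, and function names are illustrative assumptions, not data or code from the article.

# Minimal sketch (not the article's own code). The panels, ratings, and the
# 1-4 rating scale below are hypothetical.
import math

def aiken_v(ratings, low, high):
    # Aiken's V = sum(r_i - low) / (n * (high - low))
    n = len(ratings)
    return sum(r - low for r in ratings) / (n * (high - low))

def score_ci(v, n, c_minus_1, z=1.96):
    # Wilson-type score interval for V, using n*(c-1) as the effective
    # denominator (the type of score interval discussed by Penfield & Giacobbi, 2004).
    k = n * c_minus_1
    center = 2 * k * v + z ** 2
    half = z * math.sqrt(4 * k * v * (1 - v) + z ** 2)
    return (center - half) / (2 * (k + z ** 2)), (center + half) / (2 * (k + z ** 2))

def diff_ci(v1, ci1, v2, ci2):
    # Interval for V1 - V2 built from the two separate intervals,
    # in the spirit of Zou & Donner's (2008) general approach.
    (l1, u1), (l2, u2) = ci1, ci2
    lower = (v1 - v2) - math.sqrt((v1 - l1) ** 2 + (u2 - v2) ** 2)
    upper = (v1 - v2) + math.sqrt((u1 - v1) ** 2 + (l2 - v2) ** 2)
    return lower, upper

def arcsine_effect(v1, v2):
    # Standardized size of the difference via the arcsine transformation
    # (analogous to Cohen's h for proportions).
    return 2 * math.asin(math.sqrt(v1)) - 2 * math.asin(math.sqrt(v2))

panel_a = [4, 4, 3, 4, 4, 3, 4]   # hypothetical ratings from panel A (scale 1-4)
panel_b = [3, 2, 3, 4, 3, 3]      # hypothetical ratings from panel B (scale 1-4)
v_a, v_b = aiken_v(panel_a, 1, 4), aiken_v(panel_b, 1, 4)
ci_a = score_ci(v_a, len(panel_a), 3)
ci_b = score_ci(v_b, len(panel_b), 3)
print(f"V_A = {v_a:.3f}, 95% CI ({ci_a[0]:.3f}, {ci_a[1]:.3f})")
print(f"V_B = {v_b:.3f}, 95% CI ({ci_b[0]:.3f}, {ci_b[1]:.3f})")
print("CI for V_A - V_B:", tuple(round(x, 3) for x in diff_ci(v_a, ci_a, v_b, ci_b)))
print("Arcsine effect size:", round(arcsine_effect(v_a, v_b), 3))

Read this way, an interval for the difference that excludes zero suggests that the two panels' judgments differ beyond sampling error, and the arcsine-based value gives a scale-free magnitude for that difference.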

Results: Within a secondary data analysis framework, two examples are developed with data extracted from the two source publications, demonstrating the difference between conclusions based on impressionistic judgment and those based on formal empirical evaluation. Statistical differences that had not previously been observed were detected.

Conclusions and implications: The method for estimating differences between Aiken's content validity coefficients represents a methodological advance for validating measurement instruments. The applicability of this procedure in sports and education sciences, as well as within the research designs involved, is assessed.

References

Agresti, A. y Coull, B. A. (1998). Approximate is better than ‘exact’ for interval estimation of binomial proportions. The American Statistician, 52(2), 119-126. https://doi.org/10.1080/00031305.1998.10480550

Aiken, L. (1980). Content validity and reliability of single items or questionnaires. Educational and Psychological Measurement, 40(4), 955-959. https://doi.org/10.1177/001316448004000419

American Educational Research Association, American Psychological Association, National Council on Measurement in Education. (2014). Standards for Educational and Psychological Testing. American Educational Research Association.

Anscombe, F. (1948). The transformation of Poisson, binomial and negative-binomial data. Biometrika, 35(3/4), 246-254. https://doi.org/10.1093/biomet/35.3-4.246

Burgueño, R., Macarro-Moreno, J. y Medina-Casaubón, J. (2020). Psychometry of the Multidimensional Perceived Autonomy Support Scale in Physical Education with Spanish secondary school students. SAGE Open. https://doi.org/10.1177/2158244019901253

Cabero, J. y Llorente, M. (2013). La aplicación del juicio de experto como técnica de evaluación de las tecnologías de la información (TIC). Eduweb. Revista de Tecnología de Información y Comunicación en Educación, 7(2), 11-22. http://servicio.bc.uc.edu.ve/educacion/eduweb/v7n2/art01.pdf

Calonge-Pascual, S., Fuentes-Jiménez, F., Casajús Mallén, J. A., y González-Gross, M. (2020). Design and validity of a choice-modeling questionnaire to analyze the feasibility of implementing physical activity on prescription at primary health-care settings. International Journal of Environmental Research and Public Health, 17(18), 6627. https://doi.org/10.3390/ijerph17186627

Claeys, C., Nève, J., Tulkens, P. M. y Spinewine, A. (2012). Content validity and inter-rater reliability of an instrument to characterize unintentional medication discrepancies. Drugs Aging, 29, 577-591. https://doi.org/10.1007/bf03262275

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Lawrence Erlbaum Associates.

Collet, C., Nascimento, J. V., Folle, A. e Ibáñez, S. J. (2018). Construcción y validación de un instrumento para el análisis de la formación deportiva en voleibol. Cuadernos de Psicología del Deporte, 19(1), 178-191. https://doi.org/10.6018/cpd.326361

Cox, D. R. (1970). Analysis of binary data. Chapman and Hall/CRC.

Domínguez-Lara, S. (2017). Construcción de una escala de autoeficacia para la investigación: primeras evidencias de validez. Revista Digital de Investigación en Docencia Universitaria, 11(2), 308-322. https://doi.org/10.19083/ridu.11.514

Escobar, J. y Cuervo, Á. (2008). Validez de contenido y juicio de expertos: una aproximación a su utilización. Avances en Medición, 6(1), 27-36.

Fitch, K., Bernstein, S. J., Aguilar, M. D., Burnand, B., LaCalle, J. R., Lazaro, P., ... Kahan, J. P. (2001). The RAND/UCLA Appropriateness Method User’s Manual. RAND Corporation.

Freeman, M. F. y Tukey, J. W. (1950). Transformations related to the angular and the square root. Annals of Mathematical Statistics, 21, 607-611. https://doi.org/10.1214/aoms/1177729756

Gamonales, J., León, K., Muñoz, J., González-Espinosa, S. e Ibáñez, S. (2018). Validación del IOLF5C para la eficacia del lanzamiento en fútbol para ciegos. Revista Internacional de Medicina y Ciencias de la Actividad Física y del Deporte, 18(70). https://doi.org/10.15366/rimcafd2018.70.010

Glass, G. V., McGaw, B. y Smith, M. L. (1981). Meta-analysis in social research. Sage.

Hambleton, R. K. (1984). Validating the test score. En R. A. Berk (ed.), A Guide to Criterion-Referenced Test Construction (pp. 199-230). Johns Hopkins University Press.

Hernández-Nieto, R. A. (2002). Contributions to Statistical Analysis. Universidad de Los Andes.

Koller, I., Levenson, M. R. y Glück, J. (2017). What do you think you are measuring? A mixed-methods procedure for assessing the content validity of test items and theory-based scaling. Frontiers in Psychology, 8, 126. https://doi.org/10.3389/fpsyg.2017.00126

Lawshe, C. H. (1975). A quantitative approach to content validity. Personnel Psychology, 28, 563-575. https://doi.org/10.1111/j.1744-6570.1975.tb01393.x

Lipsey, M. y Wilson, D. (2001). Practical meta-analysis. Sage.

Lynn, M. R. (1986). Determination and quantification of content validity. Nursing Research, 35, 382-385. https://doi.org/10.1097/00006199-198611000-00017

McCullagh, P. y Nelder, J. (1989). Generalized Linear Models. Chapman and Hall.

Merino-Soto, C. (2016). Percepción de la claridad de los ítems: Comparación del juicio de estudiantes y jueces-expertos. Revista Latinoamericana de Ciencias Sociales, Niñez y Juventud, 14(2), 1469-1477. https://doi.org/10.11600/1692715x.14239120615

Merino-Soto, C. (2018). Confidence interval for difference between coefficients of content validity (Aiken's V): a SPSS syntax. Anales de Psicología, 34(3), 587-590. https://doi.org/10.6018/analesps.34.3.283481

Moreno, E. y Gómez, M. (2017). Validación herramienta observacional para el análisis de rachas de lanzamiento en baloncesto. Revista de Psicología del Deporte, 26(1), 87-93.

Moscoso, M. S. y Merino-Soto, C. (2017). Construcción y validez de contenido del Inventario de Mindfulness y Ecuanimidad: una perspectiva iberoamericana. Mindfulness & Compassion, 2(1), 9-16. https://doi.org/10.1016/j.mincom.2017.01.001

Newcombe, R. G. (2012). Confidence Intervals for Proportions and Related Measures of Effect Size. CRC Biostatistics Series.

Ortega, G., Abad, M., Giménez, F., Durán, L., Franco, J., Jiménez, A. y Robles, J. (2018). Design and validation of a satisfaction questionnaire with sports programmes in penitentiaries. Apunts. Educación Física y Deportes, 131(1), 21-33. https://doi.org/10.5672/apunts.2014-0983.es.(2018/1).131.02

Pedrosa, I., Suárez-Álvarez, J. y García-Cueto, E. (2013). Evidencias sobre la validez de contenido: avances teóricos y métodos para su estimación. Acción Psicológica, 10(2), 3-18. https://doi.org/10.5944/ap.10.2.11820

Penfield, R. y Giacobbi, P. (2004). Applying a score confidence interval to Aiken’s item content-relevance index. Measurement in Physical Education and Exercise Science, 8(4), 213-225. https://doi.org/10.1207/s15327841mpee0804_3

Penfield, R. D. y Miller, J. M. (2004). Improving content validation studies using an asymmetric confidence interval for the mean of expert ratings. Applied Measurement in Education, 17(4), 359-370. https://doi.org/10.1207/s15324818ame1704_2

Robles, A., Robles, J., Giménez, F. y Abad, M. (2016). Validación de una entrevista para estudiar el proceso formativo de judokas de élite. Revista Internacional de Medicina y Ciencias de la Actividad Física y del Deporte, 64. https://doi.org/10.15366/rimcafd2016.64.007

Robles, P. y Rojas, M. (2015). La validación por juicio de expertos: dos investigaciones cualitativas en lingüística aplicada. Revista Nebrija de Lingüística Aplicada, (18). https://www.nebrija.com/revista-linguistica/files/articulosPDF/articulo_55002aca89c37.pdf

Rodríguez, P. L., Pérez, J. J., García, E. y Rosa, A. (2015). Adaptación transcultural de un cuestionario que evalúa la actividad física en niños de 10 y 11 años. Archivos Argentinos de Pediatría, 113(3), 198-204.

Rovinelli, R. J. y Hambleton, R. K. (1977). On the use of content specialists in the assessment of criterion-referenced test item validity. Dutch Journal of Educational Research, 2, 49-60.

Rubio, D. M., Berg-Weger, M., Tebb, S. S., Lee, E. S. y Rauch, S. (2003). Objectifying content validity: Conducting a content validity study in social work research. Social Work Research, 27(2), 94-104. https://doi.org/10.1093/swr/27.2.94

Rücker, G., Schwarzer, G. y Carpenter, J. (2008). Arcsine test for publication bias in meta-analyses with binary outcomes. Statistics in Medicine, 27(5), 746-763. https://doi.org/10.1002/sim.2971

Rücker, G., Schwarzer, G., Carpenter, J. y Olkin, I. (2009). Why add anything to nothing? The arcsine difference as a measure of treatment effect in meta-analysis with zero cells. Statistics in Medicine, 28(5), 721-738. https://doi.org/10.1002/sim.3511

Sánchez-Meca, J., Marín-Martínez, F. y Chacón-Moscoso, S. (2003). Effect-size indices for dichotomized outcomes in meta-analysis. Psychological Methods, 8(4), 448-467. https://doi.org/10.1037/1082-989X.8.4.448

Urrutia, M., Barrios, S., Gutiérrez, M. y Mayorga, M. (2014). Métodos óptimos para determinar validez de contenido. Educación Médica Superior, 28(3), 547-558.

Warton, D. I. y Hui, F. K. C. (2011). The arcsine is asinine: the analysis of proportions in ecology. Ecology, 92, 3-10. https://doi.org/10.1890/10-0340.1

Zou, G. Y. y Donner, A. (2008). Construction of confidence limits about effect measures: a general approach. Statistics in Medicine, 27, 1693-1702. https://doi.org/10.1002/sim.3095

Published

2023-01-01

How to Cite

Merino-Soto, C. (2023). Aiken’s V Coefficient: Differences in Content Validity Judgments. MHSalud: Revista En Ciencias Del Movimiento Humano Y Salud, 20(1), 1-10. https://doi.org/10.15359/mhs.20-1.3
