REFERENCES

Baykul, Y. (2021). Eğitimde ve Psikolojide Ölçme: Klasik Test Teorisi ve Uygulaması [Measurement in education and psychology: Classical test theory and its application]. Ankara: Pegem Akademi.

Bejar, I. I. (2002). Adaptive generative testing: From conception to implementation. In S. H. Irvine & P. C. Kyllonen (Eds.), Item generation for test development (pp. 199–218). Mahwah, NJ: Lawrence Erlbaum.

Bezirhan, U., & von Davier, M. (2023). Automated reading passage generation with OpenAI's large language model. arXiv preprint arXiv:2304.04616.

BİLSEM Online. (2023). Sıkça sorulan sorular: BİLSEM sınav soruları yeteneğe göre değişir mi? [Frequently asked questions: Do BİLSEM exam questions vary depending on ability?] Retrieved from https://www.bilsemonline.com/sss

Bormuth, J. R. (1970). On the theory of achievement test items. Chicago, IL: University of Chicago Press.

Cohen, R. J., & Swerdlik, M. E. (2015). Psychological testing and assessment. New York: McGraw-Hill Education.

Crocker, L., & Algina, J. (1986). Introduction to classical and modern test theory. Orlando, FL: Holt, Rinehart and Winston.

Drasgow, F., Luecht, R. M., & Bennett, R. E. (2006). Technology and testing. In R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 471–515). Westport, CT: American Council on Education/Praeger.

Ebel, R. L. (1951). Estimation of the reliability of ratings. Psychometrika, 16, 407–424. https://doi.org/10.1007/BF02288803

Embretson, S. E. (2002). Generating abstract reasoning items with cognitive theory. In S. H. Irvine & P. C. Kyllonen (Eds.), Item generation for test development. Mahwah, NJ: Lawrence Erlbaum.

Falcão, F., Costa, P., & Pêgo, J. M. (2022). Feasibility assurance: A review of automatic item generation in medical assessment. Advances in Health Sciences Education, 27(2), 405–425. https://doi.org/10.1007/s10459-022-10092-z

Gierl, M. J., & Lai, H. (2016). Automatic item generation. In S. Lane, M. Raymond, & T. Haladyna (Eds.), Handbook of test development (2nd ed., pp. 410–429). New York: Routledge.

Gierl, M. J., & Haladyna, T. M. (Eds.). (2013). Automatic item generation: Theory and practice. New York: Routledge.

Guttman, L. (1957). A necessary and sufficient formula for matric factoring. Psychometrika, 22, 79–81. https://doi.org/10.1007/BF02289212

Gütl, C., Lankmayr, K., Weinhofer, J., & Höfler, M. (2011). Enhanced Automatic Question Creator – EAQC: Concept, development and evaluation of an automatic test item creation tool to foster modern e-education. Electronic Journal of e-Learning, 9, 23–38.

Haladyna, T. M., & Rodriguez, M. C. (2013). Developing and validating test items. New York: Routledge.

Irvine, S. H., & Kyllonen, P. C. (Eds.). (2002). Item generation for test development. Mahwah, NJ: Lawrence Erlbaum.

Kosh, A. E., Simpson, M. A., Bickel, L., Kellogg, M., & Sanford-Moore, E. (2019). A cost–benefit analysis of automatic item generation. Educational Measurement: Issues and Practice, 38(1), 48–53. https://doi.org/10.1111/emip.12237

Kyllonen, P. C. (2009). New constructs, methods, and directions for computer-based assessment. In F. Scheuermann & J. Björnsson (Eds.), The transition to computer-based assessment (pp. 151–156). Luxembourg: Office for Official Publications of the European Communities.

MEB. (2018). Ortaöğretim Türk Dili ve Edebiyatı Dersi Öğretim Programı (9, 10, 11 ve 12. Sınıflar) [Secondary education Turkish language and literature curriculum (Grades 9, 10, 11, and 12)]. Ankara: MEB.

Narayan, S., Gupta, A., Khan, F. S., Snoek, C. G., & Shao, L. (2020). Latent embedding feedback and discriminative features for zero-shot classification. In Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXII (pp. 479–495). Springer International Publishing. https://doi.org/10.1007/978-3-030-58542-6_29

ÖSYM. (2018). Alan Yeterlilik Testleri (AYT) ve Cevap Anahtarları [Field proficiency tests (AYT) and answer keys]. Retrieved June 7, 2022, from https://www.osym.gov.tr/yks_2018_ayt.pdf

Parshall, C. G., Spray, J. A., Kalohn, J. C., & Davey, T. (2002). Practical considerations in computer-based testing. New York: Springer.

Ryoo, J. H., Park, S., Suh, H., Choi, J., & Kwon, J. (2022). Development of a new measure of cognitive ability using automatic item generation and its psychometric properties. SAGE Open, 1–13. https://doi.org/10.1177/21582440221095016

Sayın, A., & Gierl, M. J. (2024). Using OpenAI GPT to generate reading comprehension items. Educational Measurement: Issues and Practice, 43(1). https://doi.org/10.1111/emip.12590

Sayın, A., & Gierl, M. J. (2023a). Using automated item generation to provide individualized feedback in formative tests. Paper presented at the annual meeting of the National Council on Measurement in Education (NCME), April 12–15, 2023, Chicago, IL, USA.

Sayın, A., & Gierl, M. J. (2023b). Automatic item generation for online measurement and evaluation: Turkish literature items. International Journal of Assessment Tools in Education, 10(2), 218–231. https://doi.org/10.21449/ijate.1249297

Sayın, A., Bozdağ, S., & Gierl, M. J. (2023). Automatic item generation for non-verbal reasoning items. International Journal of Assessment Tools in Education, 10(Special Issue), 132–148. https://doi.org/10.21449/ijate.1359348

Shin, J., & Gierl, M. J. (2022). Generating reading comprehension items using automated processes. International Journal of Testing, 22(3–4), 289–311.

Singley, M. K., & Bennett, R. E. (2002). Item generation and beyond: Applications of schema theory to mathematics assessment. In S. H. Irvine & P. C. Kyllonen (Eds.), Item generation for test development (pp. 361–384). Mahwah, NJ: Lawrence Erlbaum.

Sinharay, S., & Johnson, M. (2005). Analysis of data from an admissions test with item models. ETS Research Report Series, 2005(1), i–32. https://doi.org/10.1002/j.2333-8504.2005.tb01983.x

von Davier, M. (2018). Automated item generation with recurrent neural networks. Psychometrika, 83(4), 847–857. https://doi.org/10.1007/s11336-018-9608-y