
ATP Conference 2016 – A whole different ballgame, so what to learn from it?

This past week my test-expert colleague and I went to the ATP Conference 2016 in Florida. ATP stands for the Association of Test Publishers. In the US, national tests are everywhere: for every type of education (K-12, middle school, high school, college and especially professional education), tests are developed for nationwide use. The crowd at the conference was full of companies that develop and deploy tests for school districts or for professional fields (health care, legal, navy, military, etc.). I soon found out that these people play in the major league of test development, usability studies and item analysis, while we at our university would be lucky to find ourselves in the little league.

Fair is fair: the number of students taking exams at our university will never come near the number of test takers these companies develop for. Reuse of the items we develop is a requirement only in some cases; in many of our cases reuse is not even desirable. And their means (manpower, time and money spent) are impossible for us to match. So what can we learn from the professionals?

Our special interest at this conference was the sessions on Technology Enhanced Items (TEIs), since we are conducting research into certain test form scenarios that include the use of TEIs and constructed response items. For me, the learning points lie mainly in the usability studies these companies carry out.

Apparently, the majority of the tests created by these test publishers still consist of multiple choice questions (the so-called bubble sheet tests), and test developers overseas are starting to look towards TEIs for more authentic testing. So TEIs were a hot topic. In my first session I was somewhat surprised that all question types other than multiple choice were considered TEIs. Drag & drop items in particular were quite popular. I did not think this kind of question type would cause many problems for test takers, but apparently I was wrong.

Usability of these drag & drop items turned out to be much more complicated than I thought: Is it clear where students would need to drop their response? Will near placement of the response cause the student to fail or not: What if the response is partly within and partly outside the (for the student invisible) designated area? How accurate do they need to be? When to use the hotspot and when to drag an item onto an image? Do our students encounter the same kind of insecurities as the American test takers? Or are they more savvy on this matter. We do experience that students make mistakes in handling our ‘adaptive question’-type and since we are introducing different scenarios, I am aware even more than before, that we should be more clear to our students what to expect. We got a better idea what to expect when conducting our usability study on ‘adaptive questions’ and don’t assume that taking a test with TEI’s is as easy as it seems.

This conference made me aware that we need to make an effort to teach our instructors and support staff the principles of usability, and to be stricter about the application of these principles. This fits perfectly into our research project deliverables and the online instruction modules we are about to set up.
