
Should curriculum and assessment authors be certified? A response to Rolf Molich


Authors: Gilbert Cockton
Posted: Mon, May 20, 2019 - 2:31:39

Rolf's blog begins with a question: “Should usability testers be certified?”—answering “Yes” and then, more emphatically, “YES!” I will focus on his closing question: “Is certification worth the effort?” However, his comparisons with certified doctors and pilots (or domestic gas engineers, psychotherapists, etc.) are mistaken. These individuals can cause considerable harm through incompetence. Usability testers need extensive help from others to cause harm beyond wasted resources. Furthermore, we are nowhere near making a political case for mandatory certification for usability and user experience professionals. Only legislation or industry-wide action would make certification mandatory. It is politically naive to think that this would be realistic within the next decade, and probably longer. Rolf’s opening question is clearly rhetorical. It makes for a good headline and opening to his blog, so let that, rather than its credibility, be its worth.

Worth is a very clever word in English (few other languages distinguish it from value). It can be a predicative adjective, taking an object like verbs do, as in that “because you’re worth it” strapline. Used this way, worth indicates that positives (feminine appeal, etc.) justify negatives (cost, time, poor unadorned body image, etc.). So, what are the positives of the CPUX accreditation, and what are the negatives?

The positives relate to Advanced Level Usability Testing being one of a family of three qualifications that are underpinned by impressive curriculum documents authored and edited by experts. These cover much ground and include some up-to-date material and valuable wisdom. For UX workers with no higher education or extensive industry courses, I would expect all three qualifications to be a good place to start. However, only Advanced Level User Requirements Engineering is adequate as a standalone qualification. This leads us into the negatives.

The foundation and user testing curricula are too narrow and overly dogmatic, and would not be approved in any well-run higher education institution. I have a degree in education, wrote my undergraduate dissertation on curriculum design, have a postgraduate certificate, and am on the UK national register of qualified teachers. Despite the expertise behind the CPUX curricula, they lack sufficient alternative perspectives and critical positions to equip anyone in interaction design (IxD), whatever their specialism, to develop an adequate playbook for working strategically in contemporary innovation. There is no springboard to professional growth in these curricula. They are currently closed-terminal qualifications with no indication that they are not the only word on the topics covered (and still less the last word).

I have taught high school curricula with criterion-based assessment and carried this expertise over into university teaching. I have never used, and never will use, multiple-choice questions (MCQs). They are not learner-centered. Too often they attempt to trip students up (try Questions 1 and 3 in https://uxqb.org/wp-content/uploads/documents/CPUX-F_EN_Public-Test-Questions.pdf). Too often the “correct” answers are debatable, possibly to the point of being wrong. They make marking cheap and easy, and thus have no place in genuine human-centered practice. They also discriminate, rewarding language skills as much as domain knowledge, when many individuals will lack the required (second) language skills. Their use for decades in UK medical qualifications has spawned a whole industry of courses and guidance on how to handle MCQs. Yes, that’s a help desk, a fat user manual, and near-mandatory courses on how to pass exams. If we can fight for users, then surely we can fight for our own learners too. Are MCQs really necessary? What are the pass mark and first-time pass rate for these exams, and the demographics of who has passed so far?

Is there evidence that the curriculum and assessment authors hold any formal certification in the relevant educational practices? Given Rolf’s position on certification, he must logically step down, along with his collaborators, if any of them are not properly qualified educationally. Their extensive domain expertise in user-centered IxD, developed over decades, does not by itself qualify them as educators who can professionally set a syllabus and assess learners (sadly, this is true of almost every academic in higher education, even those who have had some training and accreditation).

So, is certification worth the effort? You may be surprised at my answer, which is Yes! Study of content that is more up to date and professionally informed than many current academic HCI textbooks is very valuable (although fewer professional books suffer here). I would urge everyone teaching HCI and IxD to look at the curricula at https://uxqb.org/en/documents/ and see how their syllabi match up. The only negative that could make certification not worth the effort is the MCQs. There are other negatives above, but none immediately outweighs the benefits of a “good enough for getting started” curriculum and the non-MCQ-assessed elements. I cannot imagine anyone engaging with the certification process and not benefiting (myself included, although I’d expect to fail on many MCQs, as I know far too much to correctly guess the examiners’ “correct” answer).

However, a Master’s level above Foundation and Advanced is needed. As Rolf notes: “Certification certainly will not make you a usability test expert in 10 days” (unlike the certification of doctors and pilots). Yet no route toward such expertise is offered, and any such route could well be severely obstructed by a foundation curriculum that sticks too uncritically to ISO 9241-210, a very dated, idealized engineering design subprocess model with no empirical basis in typical design innovation work. It lacks links to strategic business considerations and vital creative practices; indeed, it does not even acknowledge or respect either, and it cannot be followed after a product or service is released. The Advanced Level User Requirements Engineering curriculum does much to compensate with its “model-based context of use analysis,” however idiosyncratic its naming and its contrast with the “classic” version (plus a useful section on cooperation with others).

The current curricula are no basis for a terminal qualification. For me, they will get you to advanced novice at best, without preparation for developing further. The user-testing curriculum is overly focused on fixed-task user testing, overlooking other practices that can equally be viewed as standard activity models (e.g., learnability testing, free use in a lab or workplace). As far as evaluation for IxD is concerned, this comes too close to turning out one-trick ponies. Even so, they may be better one-trick ponies than some minimal HCI education turns out. This will be even more true for the Advanced Level User Requirements Engineering curriculum. In this sense, CPUX is worthwhile (with reservations, potentially calamitous for some) not only for an individual’s professional development, but also for formal HCI and IxD education, as well as for well-informed employers. I would expect graduates with HCI education to know what is in the three curricula, be able to follow their “standard activity models,” know when and why these need to be adapted, improved, or replaced, and know where the gaps in the curricula are and how to fill them. I must thus end with a proper acknowledgment of Rolf and colleagues’ valuable work here: Tusind tak! (but please spot the Kool-Aid and don’t drink it).




Gilbert Cockton

An Editor-in-Chief of Interactions, Gilbert Cockton has retired from permanent employment but still works part-time as a professor. [email protected]


