Thursday, 30 October 2008

Week 4: UCD - Understanding users

In this week’s lecture we looked at design from the user's perspective in greater detail. First we considered the user at a cognitive level. The three key cognitive aspects related to HCI are attention, perception and memory. Interfaces should be designed to draw our attention to the right details, be perceptually unambiguous, use output appropriate to the sensory modality, and rely on recognition over recall to reduce the load on memory (although, thanks to cognitive techniques like scanning and chunking, there is no need to keep the number of items on a page within the 7±2 rule). Secondly we looked at user research methods, which we also covered in the second half of the seminar. The interactive designer's toolbox of research methods is very diverse, including ethnography (which follows a real-world-context research paradigm), focus groups, card sorting, eye tracking, cognitive walkthroughs, and many more. Some are carried out purely by usability experts (which makes them cheap and quick), while others involve the user more directly (and are generally better suited to user-centred design).

In the seminar we found out that we would be evaluating and re-designing four of the Traveline websites, which provide public transport information to the public. Two speakers came in to talk about Traveline and the class asked questions. It appears that our ideas may actually effect real change on the websites, which is great. However, political and financial influences mean that only changes to the front-end interface are likely to be taken into serious consideration (because they are cheap and easy to implement). Nonetheless, I’m convinced it will be an interesting and highly valuable experience; it’s just the sort of thing we will be doing as real interactive designers. I want to bring interactive maps to the site, but we will have to see whether this is viable and, most importantly, whether users actually want maps. The needs of the user, discovered through our own user studies, are a key part of this exercise. The user always comes first in UCD!

Thursday, 23 October 2008

Week 3: UCD - Evaluating Existing Technologies

This week we looked at the methods that interactive designers use to evaluate interactive products. The key is to start by getting the utility right and then look at the user's experience (although both are equally important). In the HCI industry, either experts perform the product evaluation (known as the discount usability method because it’s cheap and quick), or, ideally, users are involved in the evaluation process. Experts often follow Nielsen's (2001) design principles, which provide heuristics for testing products. These include things like visibility, matching, error prevention, aesthetics and many more. Another method is the cognitive walkthrough, where the expert attempts to put themselves in the user's shoes and work through the system. Clearly, though, this surrogation is not as good as the real thing, so the best testing involves collecting both quantitative (generally usability) and qualitative (generally user experience) user data. One thing to point out is that we shouldn't try to quantify the user's experience. I couldn't agree more! The idea that we could turn someone’s experiences into, effectively, a list of numbers is ridiculous! We will be looking at some of the user evaluation methods in more detail next week.

In the seminar we first reported back on usability issues we had found in games. One quibble I have with the new Pro Evolution Soccer is that they have actually gone backwards in designing the menu system, making things much more confusing and complex. It’s as if the project managers told the game designers, "We have to make some changes, but don't worry whether they improve anything; there are millions of loyal fans of the series who will fork out 40 quid for a copy regardless, thanks to the squad list updates." It’s interesting that the Wii version has a more interactive control system, yet has not taken over in popularity, presumably because football game fans are perfectly happy with the mapping on the normal controllers. If I want to play football, I go out and play it!

After this we were told to evaluate the interface (both physical and software design) of our mobiles. There seemed to be a lot of issues surrounding the iPhone and touchscreens. Then, in groups, we came up with concept ideas for potential new mobile phone products. We rather cheekily stole the Nokia Morph concept (just search YouTube), which relies on nanotechnology to let the phone change shape depending on task and function. I personally rate the idea, but we will have to see whether it takes off if it's released in the next 5 to 10 years!

Sunday, 19 October 2008

Week 2: Design Principles and Conceptual Models

This week we started looking at the conceptual frameworks and main principles that HCI professionals use to understand and assess good and bad design. Some design principles are more obvious than others. Clearly the functionality of the system being designed needs to be visible, and feedback can help users know their current state in the system (so to speak). The idea of physically constraining the user so that they are almost "forced" onto the correct path is similar to the very important concept of affordances (see Norman 1988), which we looked at in detail in Cognitive Ergonomics earlier in the week. For example, why would you put a handle on a door that needs to be pushed? These are all over campus; the affordance of a handle indicates pulling, not pushing. I was thinking that the fire exits make good use of affordances (just as well, really): applying any pressure to the handle pushes it away from you and the door opens. Why can't they just do that for all doors? On the other side (the pull side) they could have the same mechanism but with the handle upside down so that it could only be pulled. The affordances of the door handles would thus match their functions.

As well as design principles we also looked at conceptual models in the context of human-to-computer interfaces: from text commands (instructing), through speech (conversing), to direct manipulation (mice, Wii remotes, etc.), and beyond. A big question raised was: what comes next? Direct Brain Interfaces? Something further still? I think DBI is surely the holy grail of HCI. Once perfected, the interface would be completely transparent, in a sense uniting the user and the computer system into (at the very least) a functional whole.

In the seminar we started by looking at examples of good and bad design that people had sent in (as well as some rather irrelevant photos of people playing UNO, I think). Then we headed to an exhibition at Sussex, which made use of technology such as screens, directional speakers and a chair with pressure sensors, so that a talk started when the listener sat down. It inspired me to think that it would be great if, every time you got up while watching TV or a DVD, the seat recognised it and paused the programme or film. I'm all for more smart technology in the home, and I think it may be an area I will research during this degree. What about pressure sensors on hobs in the kitchen that turned the gas off when no weight had been on them for over, say, 30 seconds?
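The hob idea is really just a sensor reading plus a timer, so I had a go at sketching the logic in Python. Everything here is my own invention for illustration (the 30-second cutoff, the class name, the idea of a weight reading in grams); it's just the shape the control logic might take, not any real appliance's firmware:

```python
import time

IDLE_CUTOFF_SECS = 30  # assumed cutoff: how long the hob may sit empty

class PressureHob:
    """Toy model of a gas hob that shuts off when its pan is removed."""

    def __init__(self, cutoff=IDLE_CUTOFF_SECS, clock=time.monotonic):
        self.cutoff = cutoff
        self.clock = clock        # injectable clock, handy for testing
        self.gas_on = True
        self.empty_since = None   # time the weight last disappeared

    def update(self, weight_grams):
        """Call on every sensor reading; returns True while the gas stays on."""
        if weight_grams > 0:
            self.empty_since = None            # pan is back: reset the timer
        elif self.empty_since is None:
            self.empty_since = self.clock()    # pan just removed: start timing
        elif self.clock() - self.empty_since >= self.cutoff:
            self.gas_on = False                # empty too long: cut the gas
        return self.gas_on
```

The same pattern (reading, timer, cutoff) would work for the pausing armchair too, which is what I like about it.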

Anyway, that's about it for this week. Looking forward to next week and will blog again then!

Sunday, 12 October 2008

Direct brain interfaces

Last week in the HCI introductory lecture I mentioned the possibility that Direct Brain Interfaces might halt the usual trend of ageing generations abandoning the latest technology (because it is so different from what they are used to), since it would become easy for everyone, regardless of technical expertise, to interact with technology. My lecturer commented that this is an interesting area of research, but as yet impractical due to the levels of concentration required for the user to operate such systems. I'd just point out that our scientific understanding of such technology, both in terms of the brain's complex neural networks and the patterns of activity in brainwaves, is still at a very early stage, and I'm an optimist about the boundaries of science and what it can achieve. As I understand it, devices such as the Emotiv headset (follow the link at the bottom), which reads users' brainwaves to let them move virtual objects around a screen, work by picking up fairly primitive signals. Every time we think "left", for example, the pattern of activation is different, but the patterns across occurrences are similar, especially compared with those for "right". By interpreting these general patterns the headset can function. However, as neuroscience and cognitive science become more advanced, I have no doubt that the technology will be able to follow and interpret more complex patterns and work without the user concentrating so hard. Anyway, I would love to try out the Emotiv headset, but it's pretty unlikely given that it'll set anyone back £150 this Xmas.


Course start

I’ve created this blog to make my learning diary for my HCI module public. I will be posting at least once a week, depending on what I come across in my studies. I’ve just finished the first week of my course, and it looks like it should be good, including this module: lots of opportunity to be creative, as well as to apply my Cognitive Science knowledge to the field of HCI and interactive design.