
When Introducing UCD in an Organization, Technical Capability is Only Half the Story

Summary
This article is a reproduction of my chapter in the book UX Storytellers – Connecting the Dots, edited by Jan Jursa of IATV. You can download it for free or get it for the Kindle at Amazon. Other contributing authors include Deborah Mayhew (author of Cost-Justifying Usability), Aaron Marcus (author of The Cross-GUI Handbook for Multiplatform User-Interface Design) and Cennydd Bowles (author of Undercover UX).

“In this eBook, ‘UX Storytellers – Connecting the Dots’, 42 UX masterminds tell personal stories of their exciting lives as User Experience professionals. The book brings together authors from around the world who paint a very entertaining picture of our multifaceted community.” – Amazon book description

My story, “Technical Capability is Half the Story”, aims to help User Experience professionals understand the real challenges involved in introducing User-Centered Design (UCD) techniques into an organization with the goal of ultimately integrating UCD into the organization’s Product Development Life Cycle (PDLC). It explains how arming oneself with technical capability is only half the story, the other half being a team’s ability to effectively deal with soft issues and successfully engage with stakeholders. I hope you will learn from it and be able to put it to good use if you ever come across such a (tricky) situation, or are already in one.

A Story

Once upon a time, there were two internet companies that had recently introduced interaction design teams. Both managed to hire talented interaction designers (IxDs), but neither company was following a User-Centered Design (UCD) process.

Their process was basically as follows: the product team would come up with the information architecture (IA) of a new product, and the interaction design team would come up with the user interface (UI) and iron out creases in the flow handed to them at a low level (micro IA). Changes would be made to the product after it went live, based on how it was performing according to reports from the Management Information System (MIS) and web analytics.

This was not a very optimal way of doing things. It would often turn out, after going live, that the IA and UI needed to be modified. That would have been perfectly acceptable, except that these modifications had to be made to high-level structures in both the IA and the UI. This, of course, was not easy to do, since the entire web application or website rested on those high-level structures. All of it cost the companies dearly in time and money, spent on issues that had never been anticipated.

At this time, neither company was conducting any form of user research or usability evaluation. The interaction design teams in both companies were aware of this, and around the same time both teams, frustrated with the state of affairs and acting for the good of their respective companies, decided to try to introduce UCD techniques into their organizations.

The interaction designers thought that conducting user research would help them give product management better inputs for developing a better IA, validated by representative users. It would also help them design a better UI by validating its ease of use through usability evaluations conducted throughout the Product Development Life Cycle (PDLC). Around the same time, both teams managed to grab opportunities for usability testing during the development of new products.

After a year had gone by, one team had managed to set up a small, unofficial but recognized user research group, which their company was quite pleased with. Not only that, they had a pipeline of projects to keep themselves busy for the coming months. The second team, on the other hand, was faring rather badly. Nobody wanted to let them conduct any usability tests, and when a new product was in the making, they did not get to conduct any user research for it either.

Their plan to introduce UCD techniques into the organization had pretty much failed, and they were beginning to give up for lack of results. They were unimpressed by the organization’s response. Likewise, the folks in their organization were unimpressed by the results of this group of interaction designers, who, they thought, would have saved everyone time had they simply stuck to what they were assigned to do in the first place.

If both these teams were equally talented and technically capable, what exactly did one team do so right and the other so wrong? They both had the same destination but took different paths to reach it. The successful team put in extra effort, just as much as it did to technically execute its usability evaluations, to make sure that all their stakeholders (business leads, technical leads, product bosses, programmers and marketing folks) were happy and upbeat about the entire process right from day one, whatever compromise that required. The other team simply conducted usability tests and cold-bloodedly revealed its findings, which quite openly razed much of the work the other teams were doing.

Takeaway

When you are trying to introduce UCD techniques into your organization, with the goal of ultimately integrating UCD into its Product Development Life Cycle, arming yourself with technical capability is only half the story. The other half is your team’s ability to effectively deal with soft issues and successfully engage with stakeholders. With either half missing, you will not be able to go very far.

Technical capability is your team’s ability to draw on its collective knowledge of user research and usability evaluation, along with IA and IxD methods, and to apply the right method, or combination of methods, to find out who a product’s users are and what they want and need. The team can then validate assumptions about how users will use the product, and continually gather information about how easily the product being developed can be used by its intended users.

If you have strong technical capability (and by that I mean a good understanding of, or experience in, how to execute UCD techniques), you will naturally be able to demonstrate how valuable UCD can be when used on a project. But doing so is not as straightforward as it seems. When you try, you will be met with varying levels of skepticism and quiet opposition. This is because, while you are in essence simply trying to improve the efficiency and effectiveness of the overall product life cycle, others often interpret it as exposing inefficiency in the way they currently work, which does not reflect well on them. Nobody wants to look bad, especially when they can avoid it.

In order to put your technical skills to work, you first need to get hold of opportunities to demonstrate value. Then you need to be appreciated for the work you are doing, and get the right noises made, so that stakeholders and other influential people in your organization hear about the value the product has derived from it. Going further, you need your stakeholders to be willing to take a few pains themselves to help you get further projects and set the ball rolling. To achieve all this, you will need to deal effectively with soft issues all the way and successfully engage with your stakeholders. This ability, otherwise known as ‘soft skills’, is what I will call soft capability.

Who are your stakeholders? They are anyone and everyone affected by your actions. This includes folks in product or project management, business, programming, analytics and marketing. That’s a lot of people, and that’s just how much opposition you might face when your user research actively crosses their paths. Folks from the product team may already accompany the marketing guys to conduct interviews with users (they probably do focus groups too).

The product guys already use sales and customer checkpoint data to keep a pulse on how users feel about the product, and the programmers simply don’t agree with the itsy-bitsy changes you make to the interface and flow to enhance the user experience, since these increase their workload in an already tight plan.

You may think that soft skills are not unique to the situation I’m describing, and that they are required in any sort of occupation across the industry. That’s correct. But the difference here is how important they are: the difference between ‘good to have’ and ‘required’. If an organization already has a systematic usability process in place, then technical capability basically translates into successful implementation of UCD techniques; there, dealing with soft issues, just as in any other area of work, helps increase the department’s efficiency. But if you are trying to introduce UCD methods into your organization, technical capability alone does not translate into successful implementation.

It is here that technical capability is indeed half the story, and ‘soft’ capability is the other half. The following two stories will give you a better understanding of what soft capabilities are, and how and when they can be used.

Example 1: The Key Keepers

Continuing from the previous story, this story puts a lens on the team that tried to introduce User-Centered Design (UCD) methods into its organization but had made very little progress after a year.

One of the first opportunities these folks got to try to introduce UCD techniques into their organization was not on a company product but on the intranet. They had been assigned to work together on the redesign of the much-complained-about search page and search results page, and on a new Collaborative Question Answering (CQA) feature. The interaction designers managed to convince the Vice President of Engineering, who headed the project, to give them the opportunity to conduct a series of usability tests on the new search that was being developed.

They were unable to get him to let them conduct usability tests right from the beginning, from ideation through paper prototyping and then low-fidelity wireframes, since he thought their time at that point would be better spent simply getting the UI design off the ground, based on stakeholder and product inputs. In any case, this would be a trial usability test, and he could not afford to assign resources to experiments. However, he did agree to let them conduct usability tests once they were ready with interactive prototypes, since by then they would have achieved something concrete. He also agreed to let them do the same in the next iteration, during CQA development.

The Vice President (VP) was quite pleased about the initiative taken by the interaction designers. Steve Krug’s book, ‘Don’t Make Me Think’, had been lying on his work table for quite a while, and anybody who entered his cabin was sure to catch a glimpse of it. He had read a bit of it, and it did make sense, though he would rather see results than simply read about how usability could improve a product on paper. He enjoyed talking about how he was trying to bring a ‘usability culture’ into the company, and considered himself quite the usability evangelist.

All in all, the interaction designers could not have found a better person to get an opportunity from: things were already slightly in their favor, thanks to his positive outlook towards usability. Now all that was required was to show him that this stuff really worked!

When the time came, they set about conducting their usability tests. Their plan was an elaborate one, longer than the VP expected, especially since he had asked them to simply send over a quick one-page plan by email with the dates of the test, a short high-level overview of what they would be doing, and the test scenarios and tasks. The plan they mailed over, attached as a lengthy eight-page Word document, had too much information in it according to the VP, so he simply called one of the three interaction designers over to his cabin and asked him to explain how they planned to proceed.

The VP wanted all eight test sessions done in a day and wanted to see the findings the very next day. The interaction designers, however, said it would take them at least two days to conduct the usability tests and another two days to analyze the data, after which they would have the report ready with the findings he wanted. So the usability testing would not wrap up within the two days the VP would have liked, but would take around a week, something he was not too pleased about.

The interaction designers chose to set up camp in a large war room. This would be their usability lab, with space for one facilitator, a note-taker and an observer in each session. They mailed everyone on the intranet project mailing list, informing them that there could be one observer for each of the eight sessions and that anybody interested could mail back to book a slot. So when the VP decided to drop by on a test session, he was requested to come to the next one, since there was already an observer in the room. In the next session, when he chose to ask a few questions while the participant was working through a task, he was not allowed to do so and was requested to note down any questions he wanted to ask and raise them after the test, at the end, during debriefing.

In order to keep a pulse on what the tests were revealing, he had asked the interaction designers to give him a summary of the test findings at the end of each day. But they were not keen to do this, since they did not want the findings to become known and spread by word of mouth before they could present a report to the intranet team once all the tests were done. When the usability tests were over and they were analyzing session videos and making notes, the VP came by and inquired what direction the results were generally pointing in, and what the findings were, since by now they would surely have a fair idea.

Being the guy who was introducing usability tests into the organization, he wanted to have a look at the results before they were presented to everyone else, so that he could casually talk over lunch with his colleagues about ‘how most of his assumptions were validated’ by the test. It was his baby, after all. However, the interaction designers were vague about it again; they really did not want everyone to know the results before they presented the findings and recommendations, thinking it would dilute the whole effort.

Eventually the day arrived for presenting the findings. They were presented to the VP and most of the leads from the programming and product teams, in addition to a few other programmers and product folks working on search. The presentation was well made and the recommendations were convincingly put forth. All in all, there were reluctant but agreeing nods to the findings. The report did not say much about what was working well in the search and SERP (search engine results page); perhaps there were indeed not many positives. It concentrated on the utter failure of the faceted search, which was neither noticed nor understood by most of the participants, on the excessive unused elements that cluttered each search result listing, and on how the positioning of the search user interface failed to imply that it was to be used for both global and local section searches.

The report also let the interaction designers vent their frustration: they had pointed out the issues with the faceted search and the positioning of the search UI to the product team and the VP at the paper-prototyping stage, but not much attention had been paid to them at the time. This point, too, was made quite a few times during the presentation.

Once the presentation was over, a bunch of programmers, impressed with the empirical findings, went over and congratulated the interaction designers on their work, even though they now had a lot more UI fixes to make thanks to the usability test recommendations that were agreed upon. And this was all the praise the interaction designers got.

They barely received any praise from any of the senior folks on the project. The report did not make the VP look good in any way; in fact, it made him look bad. So had his experience of interacting with the interaction designers, from the first day of planning the usability test right up to the end with the report. The findings criticized most of the project at a structural level, constantly reinforcing the message that the related risks had been pointed out but nothing had been done about them. What had started with a positive outlook towards usability had turned into a bad exercise for him.

The VP’s final take was that the project could be seen as a waste of time, since it took up a week and most of the findings they agreed upon were very minor changes they could have done without. At this stage, the plan could not accommodate structural changes, and the VP, along with the other heads, dismissed the results and quickly agreed that the sample size was too small.

This made it easy for them to stick to their gut feeling that the faceted search would do just fine. The same was the case for the positioning of the search UI on the page. The VP also managed to get the product leadership’s consensus that the data was skewed and bent towards supporting the interaction designers’ viewpoint.

That was pretty much the end of their UCD gig for as long as they were on this project under the VP. He spread the word about how difficult they were to work with, how they were not aligned with project goals, and how they could do with more professionalism. The VP also cancelled his earlier agreed plan to let them perform usability tests in the next iteration, when they worked on CQA. With the VP spreading such a negative influence, it was not going to be easy for them to get opportunities to conduct usability tests or user research on other projects in the future either.

Takeaway

Who are the key keepers? Those who give you or your team the opportunity to apply UCD techniques, some form of usability evaluation or user research, on a product are the key keepers. Usually higher up the organization chart and very influential, they hold the keys to the kingdom: the kingdom where you get to keep yourself busy improving the user experience of products by incorporating UCD techniques into not just a few, but all projects. While they hold the power to give you a continual stream of opportunities over time, they also hold the power to close the gates and shut you down. In other words, they make or break your group. Make sure you never make them look bad in any way at the expense of trying to achieve perfection. Because ultimately, if they decide not to like your work, it will not look good, however good you may think it is, or however good it actually is.

Looking back at the story, the interaction designers had not really done anything wrong, but they could have made a few concessions for the VP. They could have sent over the one-page email plan the VP had requested; they could have allowed him to attend any usability test session he wanted, and let him ask questions in one of them before requesting that he hold them until the end; and they could have provided him with a summary of findings at the end of each day, just the way he wanted. When it came to the report, cushioning the hard findings with plenty of good things to say about the project, even if superficial, would have harmed nobody. They should also have avoided venting their frustration about how the findings were in line with the risks they had pointed out earlier. Had they done all this, it is much more likely they would not have been shut down.

By capitalizing on the VP’s initial positive outlook, the same exercise could have come to a very fruitful end. So if there is anyone across your range of stakeholders you should use your soft skills with, it is the key keepers, because their voice matters the most in getting approval for the UCD activities you are trying so hard to introduce as a better way of executing projects in your organization.

Example 2: Stakeholder Goals Over User Goals

This story is about a group of interaction designers newly hired by a company that had grown huge and done very well making business-to-business (B2B) portals in the apparel domain. As in the previous story, it was primarily the product team that shaped the information architecture (IA). The interaction designers did not have much influence in determining the IA, although their inputs were taken for micro IA. Their main task was to come up with the user interface. No form of user research or usability evaluation existed, and the interaction designers saw the current product development process as grossly inefficient. They would often have to go back to square one from a stage where the UI was not only prototyped to high fidelity but was already being developed by the programming team.

It was thus not surprising how much they wanted to introduce usability testing, which they would like to see happening throughout the software development life cycle (SDLC), from beginning to end. This would help not only them but entire project teams deliver products faster, with less time and effort wasted. A year down the line, after a lot of talk, two interaction designers managed to break through with their product head. He wanted their help in understanding the major pain points users faced while using one of the company’s portals for fabric manufacturers and traders. He also wanted to know how best to go about improving the portal in relation to the pain points that were to be identified.

The product head gave them a week to do all they wanted, as long as they did it on a zero budget. The interaction designers’ plan was to first conduct telephone interviews to understand the most common problems their users faced, and then follow up with a usability test to validate those concerns. But since they had just a week, they decided to conduct only the telephone interviews and present their findings to the stakeholders. On the assumption that things would go well and the stakeholders would be impressed with what they uncovered, they would then ask for another week to conduct the usability tests. After getting a list of phone numbers out of the customer database, they began their interviews. Working extremely hard over four days, they managed to successfully complete 100 semi-structured interview sessions. They spent another long, hard day affinity diagramming.

Once they completed the data analysis, they put in a few extra hours at the cafe below their office. After many cups of coffee and bagels, they were ready with their report, which they refined over the weekend. Monday came and they presented their report to the project stakeholders. Besides letting the stakeholders know what was working well, along with a number of interesting UI concerns they had uncovered, they talked about how much users complained about the clutter on the homepage and the search engine results page, caused mainly by the large number of banner ads and featured listings. They pointed out that participants were unaware of the free registration option, which was hard to locate on the portal. The report also mentioned that participants found the faceted search very difficult to understand, since the portal used full page refreshes instead of partial page rendering (PPR). In addition, they let their stakeholders know that when participants used a search engine to look for the best prices on the fabrics they were interested in purchasing, they were taken to the portal’s registration page instead of the product details page.

By the end of the meeting, everyone was tired and the stakeholders were not impressed. The interaction designers were not in a good mood either. While there was agreement on a few findings, most of the meeting was spent arguing about how the majority of the findings either clashed directly with business and marketing goals or could not be implemented due to technical limitations. The stakeholders did find some of the findings useful, but thought there was too much noise relative to signal. In their opinion, the interaction designers were not aligned with the business goals or the technical constraints the project was working within. The stakeholders did agree to work on certain findings they thought made sense, but they asked the interaction designers to skip the usability test the product head had earlier agreed to: they did not want validation of concerns that could not be addressed, since those clashed with business goals and technical constraints.

The interaction designers’ hard work had gone to waste, and they did not get any user research opportunities for almost another year. Moreover, their findings about clutter on the website due to ads and paid listings were not actually revealing at all to the stakeholders, who had known all about it; this was simply where user goals and business goals clashed. Ads and paid listings were a substantial portion of the company’s revenue, and a trial user research activity was not going to change its business model.

Takeaway

More often than not, business goals and user goals don’t align well. When trying to introduce UCD into an organization, this is something you should take note of. When you get an opportunity to conduct UCD activities, you are trying to demonstrate value. Begin by showing that you are aligned with business goals, unless you want to start off on the wrong foot. When you focus on findings that are essentially user goals clashing with stakeholder goals, you dilute the effort and impact of your activity. So focus on, and present, findings that do not clash with business and marketing goals in order to get maximum mileage from your effort.

If it is of any consolation, as your stakeholders begin to trust you more and give you more UCD projects, you can make your case down the line, once you have established credibility with them and gathered data over multiple research activities, if you foresee an alternate business model for increasing revenue. Also, respect technical limitations. There is nothing you can do about them, and making recommendations that cannot be incorporated does not magically lift those limitations, so avoid making them. An ideal solution is not a solution; a realistic one is. Concentrate on what is achievable and you will do a lot better.