The divide

There’s a divide in our world.

Workers at your company live with it every day.

It’s so much a part of our landscape – a part of normal – that many do not see it.

It has to do with our software. You might not care about that yet. But this is short, so stay with me.

Productivity suites and apps are tied to the Web, and Web apps are tied to the desktop. Microsoft, Apple, Google, Oracle and SAP (the MAGOS) are all making sure this happens. The evolution fattens their bottom lines and lets them move to the subscription business models that create the annuities they love to see on their balance sheets. This transition has been underway for a long while. It’s mom and apple pie.

And then there are corporate apps – the parts of your world where you touch your company’s data. You might have tens of these. Or even hundreds. Some were built by your IT staff. Some were purchased. Many are Web deployed – via what they used to call your Intranet.

And these do yeoman’s work. Meat and potatoes work. (For vegans, quinoa and kale work.)

These have in common:

  1. They are stovepipes. Sealed off. The databases are closed unless somebody has spent the effort to pry them open.
  2. They don’t look and feel like each other. They were built or bought at different points in your evolution, from different managers or different companies.
  3. They lock you into infrastructure: browser types and releases, operating systems, databases, middleware. Whatever it is, you’re stuck supporting it until you move away from the app.

Meantime, the MAGOS suites and apps have fewer of these problems. They are open – at least interoperable with other apps from the same vendor. They tend to have a consistent user experience. And they are relatively free of lock-in…

… except they lock you into a MAGOS.

For most businesses, it’s tempting to dump the in-house apps, move to vendor suites, and accept the lock-in.

Even though the transition will cost: hundreds of thousands, millions, tens of millions.

And once that money is spent, you are well and truly locked in.

There has to be a better way.

There is.

Stay tuned.

A transformation

There’s a transformation happening in the world of work.

Most older companies run on a hierarchical structure. Within that structure they tend to keep their people in formal roles and to limit mobility between roles (promotion to the next level on success, demotion or firing on non-success). Staff must often get approval from the upper hierarchy to make decisions. The thinking is that only executives have the interests of the organization in mind, and that their review of decisions keeps the lower levels from breaking the organization.

Most startups run flatter, without strictly formalized roles. Decisions are often the result of peer review, or are even made unilaterally. New projects form, are marketed, integrated and absorbed in a flurry of massive creativity. After some time (months or years), a startup either succeeds enough to discover and build a working business, gaining funding from customers, investors or an acquirer, or it dies.

Once they move past being a startup, they tend to turn into the other kind of organization: one with formal roles, formal planning, and management by the “traditional” hierarchical structure.

Most A workers – the creative people who generate new product and process ideas – prefer the flatter organization because it keeps the organization from stifling their ideas and creativity. Most B and C workers prefer the hierarchy, as it absolves them of responsibility.

What’s the result? Startups get the creative gold and the A players. Older organizations stratify and die.

It doesn’t have to be this way.

Solve your training problem with software

I’m a software designer and manager of people by trade, and a trainer by necessity.

I want to solve a problem we managers have.

The Problem

I hire new developers to work on our software products. My approach is “hire for talent, not skills.” That is, I hire for raw talent and engineering aptitude, without caring much about prior skills in particular computer languages, operating systems or technologies. So I hire future engineering geniuses who, while working on our team, pick up specific skills in languages and technologies and eventually become brilliant, productive, creative developers working on our products.

So how do we get them productive? We train them. We show them how to debug JavaScript, how to code SQL, how to use SVN to commit changes to our code, and our tools and methods for functional and unit testing – all of which are always going to differ from other companies’ code, methods and processes.

After years of this, I’ve got a basic metric: it takes 12-18 months to get a new developer fully productive. During that time we’re paying them a salary and benefits, and it’s taking far more effort from me and other staff to get them up to speed than they’re returning to us in ideas and quality code. We’re fixing the bugs they accidentally introduce. We’re showing them how to use tools. Etc.
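To put a rough number on that ramp-up cost, here’s a minimal back-of-envelope model in Python. Every figure in it – the fully loaded monthly cost, the ramp length, the mentoring hours and rate, and the linear productivity curve – is an assumption for illustration, not data from my company:

```python
# Rough onboarding-cost model. Every number here is an assumption
# for illustration -- plug in your own.
def onboarding_cost(monthly_cost, ramp_months, mentor_hours_per_month, mentor_rate):
    """Estimate the cost of bringing a new developer up to speed.

    Assumes productivity ramps linearly from 0% to 100% over ramp_months,
    so the average shortfall is 50% of salary+benefits, plus the
    mentoring time other staff put in.
    """
    lost_productivity = monthly_cost * ramp_months * 0.5
    mentoring = mentor_hours_per_month * ramp_months * mentor_rate
    return lost_productivity + mentoring

# Example: $10k/month fully loaded, 15-month ramp,
# 20 mentor-hours a month at $75/hour.
print(onboarding_cost(10_000, 15, 20, 75))  # 97500.0
```

Even crude inputs make the scale of the problem visible.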

Sure, we use an Agile process, team-building is continuous, and we build knowledge-base wikis and keep our knowledge stored and accessible in other ways. Yes, we actively write down every part of everyone’s job that we can isolate and identify. But it changes all the time. We change tools. We change methods. We adopt new processes. So we’re always rewriting. It never ends.

It’s the expense and effort and time and lost productivity I’m getting at here.

Aside: Some managers hire for skills; they get people who may need less training in some areas. But can we agree that every manager – no matter the industry or the type of staff – has to train new people in what’s proprietary about the company, the tools it uses, and its processes? No new hire is instantly productive.

The questions

  • What is your time to productivity for staff? Do you have a “productivity problem” for new staff or even for job changes?
  • What processes do you use to solve this? What do they cost (time or money or both)?
  • What tools do you use to solve this? What do they cost? (time, setup, maintenance, money)?

The framework

I’ll lay out some suggestions for a framework for answers – that is, if you’re going to contribute ideas:

  1. Always say what industry you’re in, the positions for which you’re training, your hiring philosophy.
  2. Characterize the people you’re training (mindset, skill set, personality). If it’s varied, specify a couple examples.
  3. State your metrics: time to productivity, hours spent per person on training, etc.
  4. Say what tools and processes you use that make it easier. Use names – and point to websites if you like.
  5. Be honest about your biggest challenges – don’t worry about impressing anyone. These could be challenges with the people, with the budget, with the tools, or anything else I can’t think of. For this exercise we need to see your pains, and I will always share mine.
  6. Lastly, and I hate to have to say it, but: please be polite and considerate of everyone’s time. No spamming, no selling productivity products, no insults or smarter-than-thou stuff. Don’t criticize other people, don’t brag on yourself, and don’t make us wade through ads. Be of help.


Design applications users love

You shipped! Congratulations!

You did everything by the book: spent up-front time planning the project, got client-management sign-off, wrote a functional spec, coded like mad and made the rollout date. The app is solid, on time and on budget. The client oozes confidence.

It is only then that things start to go wrong.

The calls start… end-users want to know how to save a report. How to email it. How to request a second copy. Basic stuff. Are these people idiots? It’s right there on the first menu… and in Help. Can’t they even press F1?

Your voice mail is full. What about your schedule? You’ve got a week to do the spec for version two. You’d vow never to offer service/support on a client contract again, but clients would never go for it.

How can you avoid “schedule-suck” and lost money? (National averages say calls to your help desk cost you $32.00 apiece.)
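At $32 a call, the arithmetic is easy to sketch. The call volumes below are hypothetical:

```python
# How fast help-desk calls eat a budget, at the $32/call figure above.
# The user count and call rate are hypothetical.
COST_PER_CALL = 32.00

def annual_support_cost(calls_per_user_per_year, user_count):
    """Yearly help-desk cost for an application's user base."""
    return calls_per_user_per_year * user_count * COST_PER_CALL

# 200 users averaging one confused call a month:
print(annual_support_cost(12, 200))  # 76800.0
```

Money that design-time attention to users could have kept in the budget.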

Well… what if you brought users in from the start? Let them have a say in how your application gets designed?


“What?” you say, “Users can’t design applications!” Have you talked to these… these people? Didn’t you hear that horror story—that project where we involved end users in design? Indecision, fights, slipped schedules, cost overruns…

That’s right, of course. Giving users control over your design process would be counterproductive. End users generally don’t know how a finished application should be coded. That’s why they hired you.

Still, there are many things users know that you don’t. Wouldn’t it be great if you could actually work more closely with the users? You’d know:

  • The ins and outs of their project workflow
  • What “little things” you could add to make their jobs easier
  • What to do (and not to do) before coding, avoiding costly midstream changes
  • What 20% of functionality will please 80% of your users and save you a lot of time coding stuff users don’t really care about
  • Where the problem areas are so you can solve them—while they’re still solvable

You must involve users in your design process, gather the information you need, and still retain enough control to get your project out on time and on budget. How? Here’s a structured method that lets you guide and counsel your users through the design process.

I. Build a design team

To start, don’t leave the design process to the programmers alone… build an application design team using people from your company. Ideally, you want to include people from several disciplines who complement each other, so you can cover most of the bases. You’ll need:

  • A trainer/facilitator—a “people person” who can handle communicating with end users and orienting them in clear, non-technical language
  • A technical writer to record things and assemble the spec
  • A programmer who can say what can and can’t be coded—and who can play the “computer” (you’ll see what that means in a bit)
  • An expert in application content and business use. This can be a full-fledged professional software designer, or a “power user” from inside your company.

The team has to be just big enough, but not so big it becomes an Olympian feat just to schedule a meeting. Four people is a minimum (facilitator, writer, programmer, content expert), eight an absolute maximum.

Meet with your team prior to project kick-off. Have lunch together and get a feel for how they work together. Over appetizers and drinks, give them an idea of the process you plan to use (this one!). Explain the benefits: reduced development cost, faster time to deployment, increased user acceptance and lower support costs. Seal the deal with a nice dessert (I hear the creme brulee is excellent).

Now it’s time to get down to work. As soon as possible, meet with your team and develop a list of product-function questions to ask your end-users. Typical questions: “What tasks do you perform in your job?” “Do you use paper forms… if so, are there samples?” Write the questions down.

II. Conduct onsite interviews with users

Now it’s time to visit your users where they work. You’ll need to schedule a meeting with all of them in a conference room, bring that list of product function questions, and write down every answer that they give you.

But you’ll do more than that. You’ll look around at each user’s workspace and note:

  • Where is their computer located: On their desk? On a common table where six people share it? In a conference room? In their boss’ office?
  • Do they have to switch from one application to another a lot? How many interruptions a day from peers, clients, ringing phones?
  • Do they know how to use a mouse, or are they typewriter jockeys who love to memorize keystroke chords that would make Brahms jealous?
  • Do they live in their manuals, hoarding every doc page like a treasure map, or do the books gather dust on the shelves?
  • How many hours a day do they use the computer?
  • Is the computer a “microwave oven” or a “gourmet range” to them?

Write this all down too and develop a profile of each user.

After you start seeing repetition, summarize similar users and give these summaries names, like “Jim” and “Betty.” The names will help later on, when the team makes a case for a design or feature (Example: ‘Jim would love this Excel export function!’ ‘Yeah, but Betty wouldn’t care.’). Write these down on one sheet of paper and write “user profiles” on top of the sheet.

From the data you’ve gathered, develop and record a list of five to nine common tasks your users want to perform with your product.

Task list rule of thumb: Yes, “nine” sounds like a tough upper limit, but past a certain point the total number of tasks causes “feature crash,” where you’ll find it hard to make your design specific to any task. Cognitive designers say there are hard limits to human perception and short-term memory: the maximum number of discrete units the average person can retain is “seven, plus or minus two.” That’s a maximum of nine and a minimum of five units. So if the number of tasks exceeds nine, you might want to reframe your design into smaller chunks. Example: if your application will have data entry and reporting functions and that puts you over nine tasks, split the design effort into a “data entry” chunk and a “printing” chunk. Each chunk can have up to nine associated tasks. Later on, you can unify the chunks.
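The chunking rule above can be sketched mechanically. This is just an illustrative helper (the function name and sample data are mine, not part of the method): group tasks by functional area and flag any chunk that breaks the nine-task ceiling.

```python
from collections import defaultdict

MAX_TASKS_PER_CHUNK = 9  # "seven, plus or minus two" upper bound

def chunk_tasks(tasks):
    """Group (area, description) pairs into design chunks.

    Returns the chunks plus a list of areas that exceed the
    nine-task ceiling and need further splitting.
    """
    chunks = defaultdict(list)
    for area, description in tasks:
        chunks[area].append(description)
    oversized = [a for a, t in chunks.items() if len(t) > MAX_TASKS_PER_CHUNK]
    return dict(chunks), oversized

# Invented example: a data-entry chunk and a printing chunk.
tasks = [("data entry", f"task {i}") for i in range(6)] + \
        [("printing", f"task {i}") for i in range(4)]
chunks, oversized = chunk_tasks(tasks)
print(len(chunks["data entry"]), len(chunks["printing"]), oversized)  # 6 4 []
```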

Word the tasks in non-technical language. Now commit them to a single sheet of paper and write “Task List” on top.

You have now completed two deliverables for this step: User profile and task list. Put them in your design project file and let’s move on.

III. Crank out low-tech prototypes fast (with paper!)

The common software prototyping buzz says, “don’t write your prototypes in C++. Use an easier, higher-level language, like Visual Basic or Delphi, to crank them out quickly.” But there are costs even with these “cheaper” prototyping methods:

  • VB and Delphi coding still require work, so designers quickly get attached to their design investments. After doing the grunt work, some designers start to resent constant changes to their code. They can end up defending their work, rejecting even minor suggestions.
  • If you put a programmer into a development environment, no matter how simplified, they will find it hard to resist the temptation to “app-smith,” perfecting every dialog box and painstakingly lining up buttons. They might even add unasked-for and unneeded features and try to debug the prototype so it never crashes.
  • Sometimes a middle manager likes the prototype so much they say, “Great! Ship it tomorrow.” And so you end up spending the next four years supporting a product that was knocked out in two days as a quick prototype.

This will be a prototype, not an application. The basic idea is to:

  • Be able to prototype and make changes quickly, so you can respond to user feedback from usability testing (see the next section for more on this)
  • Develop prototypes you can feel OK about throwing away
  • Make a prototype nobody (especially those pesky middle managers) will confuse with an actual product

That’s why you should use paper “low-tech” prototypes and stay away from the computer.

The materials are simple: Paper, colored pens, acetate sheets (which can provide a wipe-off surface for mocked-up dialogs when users have to “type data in” during testing), glue pens, adhesive tape. Sure, you can add sophistication if you want… some users of this method draw dialogs, windows, buttons and list boxes in a high-end drawing program, making them look as realistic as possible. But for your purposes, “crudely drawn” can also be effective; the raw, immediate look can communicate to your testers exactly what’s being accomplished: a “sketch” to use for evaluating your system design.

IV. Design with users in mind

You’ve got your user profiles and task list from step II, and your low-tech prototyping kit from step III. Now you’re at the “blank sheet of paper” step. How to fill that in with a usable design?

  1. Brainstorm with your team to find a central, consistent “metaphor” for how your system will look and feel. (To “brainstorm” means that everyone can contribute ideas, and nobody can refine them, argue, or shoot any down.)

Your system’s metaphor can come from real-world objects (remember the “desktop” metaphor used originally by the Xerox Star and Macintosh?). You can use a tool, document, or other object as the basis for your metaphor. A metaphor can come from:

  • The real world (a “telescope” can let you see data from far away, a “keypad” as found on a telephone can provide a way to let users punch in data quickly, an “assembly line” can process information using a series of discrete “machines,” etc.).
  • The world of computers (e.g., a “spreadsheet” metaphor, a “control panel,” etc.)

It’s best to draw metaphors from objects your users actually work with and tasks they perform—with or without a computer—in their workday.

Write the metaphors down on a whiteboard.

  2. Try to sketch out how each metaphor would work with the data. Don’t take it too far—maybe five minutes per metaphor—just enough to examine how it would actually function. It’s okay to hit dead ends and change direction. That’s what this design phase is for.
  3. Narrow the field to the two or so metaphors that seem particularly strong. Split your group into two teams, and spend an hour sketching out how each system would look and feel. Now pick one. But keep the other on the back burner—you might decide you prefer it later.
  4. Whip out your design kits and start to cut, draw and paste your systems. Think about the points where users will have to interact with the system, how the system will indicate what the user should do, and how it will respond at each point. Start to script these out. Pick someone from your group to play a “computer” and another to play one or more of the users from your profiles. Rehearse their interactions.

V. Usability test on $5.00 a day

Now it’s time to bring users back into the loop.

When they hear “usability test,” many managers blanch. They’ve heard about how much Microsoft spent on its Windows 95 usability lab… high-tech video setups, sophisticated GSR measurement systems, one-way mirrors, sound baffling so reviewers could watch without disturbing testers, etc. And Microsoft brought in thousands of users… with a total bill that ended up in the millions.

But really, to set up your “usability lab,” you don’t need anything more than a quiet, private room in which to run and monitor your tests. Maybe a video camera if you want videotaped results; but even this is optional.

And if you cannot test with a hundred users, test with ten… three… even one. The basic idea is that any test is better than none. Remember, you don’t want to get too far away from users’ needs at any point—that is, unless you’ve grown attached to the sound of a ringing phone.

To prepare for the test:

  1. Pull out the list of the users you interviewed and ask for additional candidates from your marketing department or sales force. Contact the clients and schedule the tests; let them in on the exciting news that they’ve been selected to help you design the new application’s interface.
  2. The team should write up a scenario for using your product that makes sense in the prospective testers’ business. (Example: “You are a sales assistant for the XYZ corporation; you provide field support for the sales force, enter orders, etc….”)
  3. Zoom in on key tasks to be performed under the scenario, map them to your task list, and write up a test script that incorporates these tasks. Provide sample data—as real world as possible—for the test. Provide no information about how to accomplish the tasks, only what tasks need to be done. Provide any data that would need to be entered (better to take care of this for them so they do not get distracted by having to think up data).
  4. Determine ahead of time how you’ll quantify the test. Will you measure the time it takes the user to complete each task? Will you count positive vs. negative comments? Will you create a usability ratings system?
  5. Rehearse the test using your new script with someone from your company who hasn’t already seen your prototype. Try to knock as many kinks out of the prototype (and the test script) as possible.
  6. If your system needs to display error messages or prompts, make sure you design these ahead of time. There probably won’t be time to do this during an actual test.
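For point 4, the quantification scheme can stay very simple. Here is one possible per-task tally of completion time and success rate, sketched in Python with invented test records:

```python
# One simple way to quantify a usability test: per-task completion
# times and success rates. The records below are invented data.
def summarize(results):
    """results: list of dicts with 'task', 'seconds', 'completed' keys."""
    summary = {}
    for r in results:
        s = summary.setdefault(r["task"],
                               {"runs": 0, "completed": 0, "total_seconds": 0})
        s["runs"] += 1
        s["completed"] += r["completed"]
        s["total_seconds"] += r["seconds"]
    for s in summary.values():
        s["avg_seconds"] = s["total_seconds"] / s["runs"]
        s["success_rate"] = s["completed"] / s["runs"]
    return summary

results = [
    {"task": "save report", "seconds": 40, "completed": True},
    {"task": "save report", "seconds": 90, "completed": False},
    {"task": "email report", "seconds": 65, "completed": True},
]
s = summarize(results)
print(s["save report"]["success_rate"])  # 0.5
```

Comparing these numbers between prototype iterations shows whether your changes are actually helping.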

When running the tests, you’ll need three people to work with the user and prototype:

  • A “facilitator,” who can put the user at ease, explain the parameters of the test, and basically monitor the status of the test. Your team’s “people-person” is the best fit for this. This is the only speaking role in the test; all other team participants should just watch.
  • A “computer,” who will manipulate the paper prototype in response to user input. The computer should not talk at all; they can optionally “beep” if the user makes an error (if that’s not too silly). Usually the team’s programmer takes on this task.
  • A “scribe,” who will record at each test step (1) what the user did (2) what the user was trying to do (3) an analysis of any stumbling blocks found. The team’s technical writer is a natural for this job.

Here’s how a typical test should go:

  1. Bring your users in one by one. Greet them in the lobby and lead them to the test room. Offer them some juice or coffee. Make them feel comfortable and at home.
  2. The facilitator should introduce each test user with a little “spiel” that explains the following:
  • You’re testing a prototype of a new system
  • Any response the user makes is valid; there are no “wrong answers”
  • What’s being tested is the prototype, not the user! (Stress this one!)
  • The scenario for product use (“you are a sales assistant…”)
  • Any additional needed materials (e.g., draft documentation, a telephone if you want to simulate phone support, etc.)
  3. Offer to answer any questions they have at this point, and note that there will be another Q&A session after they finish the test.
  4. Hand them the printed list of test tasks and scenarios and get started with the test.
  5. As your test progresses, you will definitely notice areas where your product needs work. Don’t let that discourage you! You’re trying to find the problem areas in the design. The facilitator should let the user struggle just to the point of frustration, but not beyond.

In some cases, this can be painful to watch for the designers as well as the test users. Designers might want to call out hints to the test user. Try to avoid this. It’s up to the facilitator to keep the user focused on the product for as long as is needed. If the user gets so frustrated it would stop the test, it’s OK for the facilitator to stop that test step and give the user hints to keep things moving. But we’re not trying to teach the user how to use our prototype system. We’re trying to find the stumbling blocks now, when we can still make changes easily (it’s only paper).

  6. When the test is done, ask the user: “How did you find the experience?” “Is this product hard or easy to use in general?” “What areas of product function are easiest or hardest to grasp?”

VI. Iterate your design

Between each test, have a quick meeting with the design team to discuss your “gut feelings” about the test. Decide if there are any areas you’d change right now. Then get out the scissors and tape and change them! Bring the changed version to the next test and retest your changes. Re-work again if needed.

The idea is to quickly refine the product design between tests. That’s how you maximize the “bang for the buck” inherent in this process.

VII. Where to go from here

Beyond this point, you move into the realm of actually coding your application. It’s important to use all the information you’ve gathered. Most commonly, the “scribe” who wrote up the tests goes on to write a complete specification for the product’s look, feel and function. Be very specific about every aspect of the application: How does it behave? How does it respond? How does it cue users what to do next? How does it report errors? How does it show success?

Once parts of your application are coded and can run fairly well, it’s probably also a good idea to re-test the “high-tech” version with a new set of users (make sure they fit the profile you first developed). At this point you want to confirm your previous results.

Now what?

This article has dealt very generally with a design workflow you can use for applications. Below you’ll find a few good books to take your design journey even further.

Basic design scenario work and cognitive issues

  • Carroll, John M.; Scenario-Based Design; New York, John Wiley and Sons (ISBN 0-471-07659-7)
  • Booth, Paul; An Introduction to Human-Computer Interaction; 1989, New York, Psychology Press (ISBN 0-86377-123-8)
  • Zetie, Carl; Practical User Interface Design: Making GUIs Work, 1995, New York, McGraw-Hill (ISBN 0-07-709167-1)
  • Tognazzini, Bruce; Tog on Interface, 1993, New York, Addison-Wesley (ISBN 0-201-60842-1)
  • Nielsen, Jakob; Usability Engineering, 1994, Academic Press (ISBN 0-12-518405-0)

A style guide (good for building your paper prototype kits)

  • Fowler, Susan and Victor Stanwick; GUI Design Handbook; 1997, New York, McGraw-Hill (ISBN 0-12-263590-6)

Usability testing:

  • Rubin, Jeffrey; Handbook of Usability Testing; New York, 1994, John Wiley and Sons (ISBN: 0471594032)
  • Dumas, J., Redish, J. (1993) A practical guide to usability testing. Norwood, NJ: Ablex (ISBN: 089391990X)

Good to give your manager if s/he says “usability is too expensive”

  • Bias, Randolph G. and Mayhew, Deborah J. Cost-Justifying Usability, New York, Academic Press (ISBN: 0120958104)

Theory (if you’re so inclined…)

  • Laurel, Brenda; (1993) Computers as theatre. Wokingham, UK: Addison-Wesley (ISBN: 0201550601)
  • Laurel, Brenda; The Art of Human-Computer Interface Design, New York, Addison-Wesley (ISBN 0-201-51797-3)
  • Norman D. A.; The Psychology of Everyday Things; New York, Currency/Doubleday (ISBN: 0385267746)
  • Norman D. A.; Things That Make Us Smart: Defending Human Attributes in the Age of the Machine; New York, Addison-Wesley (ISBN: 0201626950)


Sidebar: The Five Basic Interface Dos

1. Put the User in Control

The user should always control your application, not vice versa. This means:

  • Know the user’s tasks. Make the users’ tasks easy and don’t call attention to the interface. Users want to do things with the software, not to it—they want to “project earnings for next year,” not “Create an earnings DEFINE based on multiplying the CUR_YR value in the EARNING segment times the… etc.” The best interface blends into the user’s environment. If a car used a pull-down menu interface with dialogs, highway deaths would soar.
  • Stay interactive. For processes with long wait times with no system feedback, let the user cancel.
  • Avoid modes that limit users’ choices. If you do use modes, make them visually obvious, easy to learn and easy to get out of.
  • Let the user customize the application, but don’t make them do it. Users’ abilities and preferences vary. Let them customize the interface—aesthetics, color and power level. But also provide good defaults so casual users are not required to customize the interface.

2. Communicate Directly

  • Give users direct and intuitive ways to perform their tasks. Let users manipulate objects directly, rather than typing or selecting commands. Example: It’s easier to move a window by clicking and dragging its border than by estimating and typing destination coordinates into a dialog box.
  • Keep word use in your application clean, consistent and free of jargon. If you must use jargon, use jargon from the user’s industry rather than computer jargon. A rule of thumb: If it’s hard to describe how to perform routine tasks to uninitiated users, your application requires redesign.

3. Act Consistently

Your application should act consistently so the user can quickly learn it and know what the outcome of any action would be before committing the action. Consistency is a major aspect of the “intuitiveness” of applications. Therefore, you should:

  • Make your application consistent with the real world. Build on users’ real-world relationships by exploiting concrete metaphors and natural mappings. To reduce the learning curve, use familiar concepts that may already be in place.

As a corollary, don’t use real-world words for system functions in ways that might confuse the user. An IBI example: we call it a “table” but everyone else calls it a “report,” so we should call it a “report.”

  • Make your application consistent with itself and with other applications in the environment. If pressing Enter closes a dialog box and performs the highlighted action, it should always work that way. Windows programs use the Ctrl-C keypress to copy text; don’t use Ctrl-A because of a personal (though possibly valid) preference. Decide whether the benefits of the change are greater than the problems users will experience because of the inconsistent behavior.

When developing a cross-platform application where conventions are different, favor the idiosyncrasies of each environment. Your users work within their environment, not with your applications across platforms.

  • Be consistent in reference. Don’t refer to the same object or action by different terms in different places.

4. Be Forgiving

  • “Error” is normal—we learn by trial and error. You want to encourage users to experiment with your application—it’s usually the most effective way to learn it. But users may not be aware of the pitfalls in your application. Even with the best-designed interface, users make physical mistakes (pressing a different key than they meant to) and mental mistakes (an incorrect decision about which command or data to select). Your interface should:
    • Minimize opportunities for error (through clarity, consistency, and changes made based on specific usability testing for common user errors)
    • Accommodate mistakes without pain or penalty
    • Handle errors with grace. Error messages must never imply the user is at fault. Instead, they should state the problem and offer solutions.
  • There are limits to human perception, memory and reasoning. Don’t expect people to overcome their limits to use your application. Don’t require the user to calculate information (such as the day of week corresponding to a particular date) or remember information (such as a code that was typed on a screen hidden by an immovable dialog). If your application can provide this information, it must.
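The day-of-week example is easy to honor in code: have the application compute it instead of making the user work it out. A minimal Python sketch:

```python
# Don't ask the user "what day of the week is 2024-03-15?" --
# compute it and show it alongside the date field.
from datetime import date

def weekday_name(year, month, day):
    """Return the full weekday name for a given calendar date."""
    return date(year, month, day).strftime("%A")

print(weekday_name(2024, 3, 15))  # Friday
```

The same principle applies to any derivable value: totals, durations, codes already entered elsewhere.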

5. Strive for Beauty

  • Aesthetics are actually quite important. Testing shows that people instinctively prefer attractiveness to feature-richness. What’s more, applying good graphic design principles to your application can increase the clarity of your model, decrease errors, and make the user enjoy the experience. Pay close attention to spatial grouping, contrast and three-dimensionality.

Make your application so beautiful it is a delight to use.