Why Will Medical Professionals Use Laptops?

Steve Jobs famously said that “laptops are like trucks. They’re going to be used by fewer and fewer people. This transition is going to make people uneasy.”

Are medical professionals truck drivers or bike riders?

We have witnessed truck drivers turn into bike riders in almost every computing context:

Big businesses used to buy mainframes. Then they replaced mainframes with minicomputers, and minicomputers with desktops and servers. Small businesses began adopting technology in meaningful ways once they could deploy a local server and clients inside their businesses at reasonable cost. As web technologies exploded and mobile devices became increasingly prevalent, large numbers of mobile professionals began traveling with laptops, tablets, and smartphones. Over the past few years, many have even stopped traveling with laptops; now they travel with just a tablet and smartphone.

Consumers have been just as fickle, if not more so. They adopted build-it-yourself computers, then Apple IIs, then mid-tower desktops, then laptops, then ultra-light laptops, and now smartphones and tablets.

Mobile is the most under-hyped trend in technology. Mobile devices – smartphones, tablets, and soon, wearables – occupy an ever-larger share of total computing time. Although mobile devices tend to have smaller screens and less robust input methods than traditional PCs (see why the keyboard and mouse are the most efficient input methods), users often prefer them because they value ease of use, mobility, and access more than raw efficiency.

The EMR is still widely conceived of as a desktop app with a mobile add-on. A few EMR companies, such as Dr Chrono, are mobile-first, but even in 2014 the vast majority are not. The legacy holdouts cite battery life, screen size, and the lack of a keyboard as reasons why mobile won’t eat healthcare. Let’s consider each of these constraints and the innovations happening along each front:

Battery – Batteries are the only major computing component whose performance isn’t doubling every 2-5 years; battery density improves at a measly 1-2% per year. The battery challenge will be overcome on two fronts: breakthroughs in battery density, and increasing efficiency in the most battery-hungry components – screens and CPUs. We are on the verge of the transition to OLED screens, which will drive an enormous improvement in screen energy efficiency. Mobile CPUs are also about to undergo a shift as OEMs’ values change: mobile CPUs have become good enough that the majority of future CPU improvements will emphasize battery performance rather than raw compute.
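To make the gap concrete, here is a minimal sketch of how those growth rates compound over a decade. The rates are the figures cited above (1-2% per year for battery density; doubling every 2-5 years for other components); the ten-year horizon and the three-year doubling period are illustrative assumptions, not figures from the post.

```python
def compound(annual_rate: float, years: int) -> float:
    """Total improvement factor after `years` of compounding at `annual_rate`."""
    return (1 + annual_rate) ** years

# Battery density at the cited 1-2% per year, over an assumed 10-year horizon:
battery_low = compound(0.01, 10)    # ~1.10x improvement
battery_high = compound(0.02, 10)   # ~1.22x improvement

# A component that doubles every 3 years (one point in the 2-5 year range):
moore_like = 2 ** (10 / 3)          # ~10x improvement

print(f"Battery @ 1%/yr over 10y:  {battery_low:.2f}x")
print(f"Battery @ 2%/yr over 10y:  {battery_high:.2f}x")
print(f"Doubling every 3y over 10y: {moore_like:.1f}x")
```

Even at the optimistic end, batteries gain barely 20% in a decade while everything around them improves roughly tenfold – which is why the argument above leans on efficiency gains in screens and CPUs rather than on battery chemistry alone.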

Lack of a keyboard – Virtual keyboards will never match the speed of physical keyboards. But the laggards miss the point: providers won’t have to type as much. NLP is finally allowing people to speak freely. The problem with keyboards isn’t the characteristics of any given keyboard, but rather the presence of the keyboard itself. Through a combination of voice, natural language processing, and scribes, doctors will type less and yet document more than ever before. I’m friends with the CEOs of at least half a dozen companies attempting to solve this problem across a number of dimensions. Given how challenging and fragmented the technology problem is, I suspect we won’t see a single winner, but a variety of solutions, each with unique compromises.

Screen size – We are on the verge of foldable, bendable, and curved screens. These traits will help resolve the screen-size problem on touch-based devices. As eyewear devices blossom, screen size will become increasingly trivial, because eyewear devices have an enormous canvas to work with. Devices such as the MetaPro and AtheerOne will face the opposite problem: data overload. These new user interfaces can present extremely large volumes of rich data across three dimensions. They will mandate a complete rethinking of how information is presented and interacted with at the point of care.

I find it nearly impossible to believe that laptops have more than a decade of life left in clinical environments. They simply do not accommodate the ergonomics of care delivery. As mobile devices catch up to PCs in terms of efficiency and perceived screen size, medical professionals will abandon laptops in droves.

This raises the question: what is the right form factor for medical professionals at the point of care?

To tackle this question in 2014 – while we’re still in the nascent years of wearables and eyewear computing – I will address the question “what software experiences should the ideal form factor enable?”

The ideal hardware* form factor of the future is:

Transparent: The hardware should melt away, and the seams between hardware and software should blur. Modern tablets are already quite svelte and light; there isn’t much more value to be had by improving their portability, because users simply can’t perceive the difference between a 0.7 lb and a 0.8 lb tablet. However, there is enormous opportunity for improvements in portability and accessibility when devices go hands-free.

Omnipresent, yet invisible: There is far too much friction separating medical professionals from the computers they interact with all day long: physical distance (even the pocket is too far) and passwords. The ideal device of the future is friction-free. It’s always there and always authenticated. In order to always be there, it must appear as if it’s not there. It must be transparent. Although Glass isn’t there just yet, Google describes the desired paradox eloquently: “It’s there when you need it, and out of sight when you don’t.” Eyewear devices will trend this way.

Interactive: Despite their efficiency, PC interfaces are remarkably uninteractive. Almost all interaction boils down to a click on a pixel location or a keyboard command. Interacting with healthcare information in the future will be diverse and rich: natural physical movements, subtle winks, voice, and vision will all play significant roles. Although these interactions will require some learning (and un-learning of bad habits) for existing staff, new staff will pick them up and never look back.

Robust: Mobile devices of the future must be able to keep up with medical professionals. The devices must have shift-long battery life and be able to display large volumes of complex information at a glance.

Secure: This is a given, but I’ll emphasize it as physical security becomes increasingly important in light of the number of unencrypted hospital laptops being stolen or lost.

Supports third-party communication: As medicine becomes increasingly complex, specialized, and team-based, medical professionals will share ever more information with one another, with patients, and with families. They will need a device that supports sharing what they’re seeing and interacting with.

I’m fairly convinced (and, to be fair, highly biased as CEO of a Glass-centric company) that eyewear devices will define the future of computer interaction at the point of care. Eyewear devices have the potential to exceed tablets, smartphones, watches, jewelry, and laptops across every dimension above, except perhaps third-party communication: eyewear devices are intrinsically personal and don’t accommodate others’ prying eyes. If this turns out to be a major detriment, I suspect the problem will be solved through software that shares what the wearer is seeing.

What do you think? What is the ideal form factor at the point of care?

*Software tends to dominate most health IT discussions; however, this blog post is focused on ergonomics of hardware form factors. As such, this list avoids software-centric traits such as context, intelligence, intuition, etc.

About the author

Kyle Samani

Kyle is co-founder and CEO of Pristine, a VC-backed company based in Austin, TX, that builds software for Google Glass for healthcare, life sciences, and industrial environments. Pristine has over 30 healthcare customers. Kyle blogs regularly about business, entrepreneurship, technology, and healthcare at kylesamani.com.


  • Wake me up when this happens.

    In the meantime, chugging along here with a free Flex/Flash-based web app EHR that isn’t going HTML5 fast enough.

  • You lost me at “Mobile is the most under-hyped trend in technology.” Really??
    The article you link to is…meh…and itself seems to be talking more about “wearables” than mobile.

Your mention of “raw efficiency” is humorous, as this is not what EHRs do for a practice…provide efficiency, that is. If they did, docs would have gone to them long ago and wouldn’t need monetary incentives from the government – the savings from efficiency alone would have made EHRs a must-have.

    If you are actually in the “trenches” you’ll see that, like most businesses, desktops and laptops just make sense in a medical practice.

    Does this mean there is no room for a tablet? No, but tablets are mainly output devices…EHRs are generally input systems.

Sure, you can connect a keyboard to a tablet, tap, pinch, slide, and glop together some method of remotely accessing your Windows-based EHR (which most are), but this is far from efficient.

    Right now, I think the best answers to mobile in a medical practice are the Lenovo Yoga 2 Pro or the MS Surface. I mention Windows oriented devices as a majority of EHR software is for Windows, this is just reality.

I got a Yoga about 4 months ago and it is excellent. Perfect for a doc to carry around. It can act as a tablet when desired, or a laptop when needed (I rarely use it as a tablet). My wife has been using a Surface for about 3 months and thinks it is great, though I like the Yoga better (bigger screen).

Mobile has its place in medical, but let’s not forget that a medical practice is a data-intensive business, and as much as one wants to make a tablet do this work, they are just not made for it.

  • Just what we need are more toys to distract Medical Professionals from actually talking and listening to patients…When you get the technology down to where it can be a chip in my head or contacts and not distract me then I’m in otherwise……Laptop/tablet is the closest thing we have…

Lenovo Helix and Asus Taichi are the closest things to what we need currently…Surface doesn’t have the horses for voice recognition and to get all the tasks done for an input-driven operation like an EMR…Helix is awesome but somewhat pricey. Weight, for equipment that has to be lugged around all day, gets to be a consideration, along with battery life. The Lenovo Twist deserves honorable mention, but for some reason they have stopped using i7 chips in the newer models.

  • “Through a combination of voice, natural-language-processing, and scribes, doctors will type less and yet document more than ever before.”

…and let’s not forget the impact of ICD-10. The laptop’s demise is not going to come soon. There are too many things they have to cover.

I can’t speak for doctors, but laptops to me are very important. Far more computing power than pads or phones, much better and larger screens, decent real keyboards (usually). No, I don’t normally walk around with them. But I use them at home and when I travel, and if I’m working in a place like a bank I take one with me to many meetings. I can type at high speed on them, instead of pecking away. They are overall far more productive for me than pads or phones.

In some hospitals I’ve seen doctors walking around with smaller laptops, taking notes along the way. I’ve seen nurses and others walking around ERs and patient floors with them as well. Oh, and I’d rather see those than clipboards any day!

  • Great overview of hardware form factors. Yet what matters most is ease of capturing data. It has to be a combination of advanced software technologies and hardware devices. Whatever device (combination of h/w and s/w) allows clinicians to capture their encounter best will work in the long run. I know you are trying to bring another variable into the EMR equation, and that is great.

Thank you for this summary. I think the future will be the “exocortex,” and that future is closer than we can imagine. To use Arthur C. Clarke’s phrase, this will be a reality four years after everyone stops laughing. The main thing that makes me believe this is the fact that we don’t have enough bandwidth for all the stimuli that IT sends us every day, and we will have to integrate with our devices in order to receive and process all that information in a competitive way.
