Data security in healthcare

Privacy in digital health: Trust in a scandal-plagued era

Source: Adapted from Healthcare IT (Trent Yarwood & Justin Warren)

Healthcare data security compliance

For medical practitioners to help us to stay healthy, we must – by necessity – give them information about ourselves. This information is deeply personal; indeed, information about our health is some of the most personal information that exists.

Changing risk profile

It is increasingly important for healthcare workers to have not only good procedures but also a good understanding of issues in digital privacy and security.

Doctors and other healthcare staff are very familiar with the principles of confidentiality. However, new digital models of healthcare mean that the “paper chart in a locked cupboard” mindset is no longer sufficient to minimise the risks to patient privacy.

One of the challenges of digital privacy is that healthcare workers often do not have a high degree of technical knowledge; doctors and practice managers may take at face value the assurances offered by health software vendors – and impressive-sounding yet empty phrases like “bank-grade security” – without a deep understanding of the risks.

As part of the ethical imperative to “first, do no harm”, healthcare staff must remember that the loss of patient confidentiality is a very serious harm, regardless of whether it relates to a “sensitive” issue such as mental or sexual health.

The benefits of digital health come through sharing of information; this sharing, by necessity, increases the risk to privacy.

According to research by the Office of the Australian Information Commissioner, health service providers are generally thought to be the most trustworthy, outranking financial institutions, governments and charities. Once this trust is broken, however, it is very difficult and time-consuming to rebuild.

More importantly, for individuals whose confidentiality is lost, it can never be regained; once personal information is in the wild, it can never be unseen.

Data capitalism

Personal information has been monetised very successfully by the likes of Google and Facebook. Large tech companies are also moving into health data. Alphabet (Google) subsidiary DeepMind entered into a contract with the UK National Health Service, and Amazon has this year announced a healthcare partnership with JPMorgan Chase and Berkshire Hathaway.

While the privacy implications of your search engine history or Facebook likes are concerning enough, health information is considered far more personal. Health insurance company NIB has already indicated its desire to gain access to the vast trove of information soon to be stored in the My Health Record system. In the UK, the Google DeepMind/NHS partnership was found to have breached the UK’s data protection laws, which are far more stringent than those in Australia.

Yet this doesn’t appear to be what people want. The 2017 OAIC report into Australians’ attitudes towards privacy notes that 85 per cent of people are either ‘annoyed’ by unsolicited marketing activity or concerned about where the marketer obtained their information. The majority (86 per cent) of people believe an organisation has misused their information if it was provided with the information for one purpose and used it for another. Combining these figures with Australians’ sensitivity about health data means the ‘surveillance capitalism’ business model of selling targeted advertising based on personal data is a particularly poor fit for health data.

The HealthEngine betrayal

It is for this reason that the story of HealthEngine – a venture capital-backed Australian company offering healthcare appointment booking services – is particularly instructive, both for patients and for the practices that use its services.

It is undoubtedly useful for medical practices to be able to automate appointment bookings and take pressure off their front-of-house staff, while patients benefit from being able to arrange appointments more conveniently. Nonetheless, as part of their responsibility for patient confidentiality, healthcare staff must understand what these third-party tools are doing with their patients’ data.

The Australian Broadcasting Corporation revealed that HealthEngine shared some information about patient appointment bookings with third parties, including a personal injury law firm. HealthEngine insisted that this sharing of data is done only with patients’ “express consent” and that it is therefore acceptable. However, others, including civil society groups and peak medical bodies, disagree, saying that these practices are not in line with community expectations of privacy.

With over $37 million of venture funding riding on HealthEngine, it is understandable that the company would seek to downplay its actions. Nonetheless, it must ensure that patients are fully informed when ‘consenting’ to share their data with third parties for services unrelated to booking an appointment with their GP.

Matters of trust

The incentives of VC-backed technology start-ups and those of data capitalism-focused companies are frequently not aligned with those of patients. While sharing people’s data with third parties may not matter much when it relates to fairly innocuous content such as cat photos, using this same business model for healthcare-related systems can have much more serious consequences.

If patients cannot trust the systems their doctors use, they will be less inclined to share the information their doctors need to provide effective healthcare. Patient health will suffer if patients can no longer trust that their healthcare providers will put their needs first. Healthcare staff must choose to prioritise those needs, the digital health sector must ensure that innovations respect privacy by design, and developers must not compromise patient care in the pursuit of profit.
