Rex Liao
Wellington School of Medicine
Otago Medical School
University of Otago
Logan Z. J. Williams
School of Medicine
Faculty of Medical and Health Sciences
University of Auckland
Gisela A. Kristono
Wellington School of Medicine
Otago Medical School
University of Otago
Angela Ballantyne is an Associate Professor of bioethics at the
University of Otago, Wellington. She has an interest in the ethics
of big data, such as that contained within electronic health records
(EHRs) and has published several papers on this topic.
Electronic health records are digital files containing patient informa-
tion that are used by medical practitioners to guide management. In
some circumstances they are also used for research.
This interview has been edited for clarity and conciseness with Asso-
ciate Professor Ballantyne’s approval.
Are there particular benefits of conducting research
from EHR data over other study methods?
EHRs usually give you a much broader picture. They do not replace
something like a randomised controlled trial (RCT), but in combination
they can be really helpful. Populations that are typically excluded
from traditional research can be represented by EHRs. RCTs give
you high-quality evidence by reducing the variables and therefore
are often not representative of real-world populations. Historically, women, particularly if they were of child-bearing age or pregnant, were excluded from trials. This still has an impact today, for example, on the accuracy of our cardiovascular clinical guidelines, which were based on study populations in which women were systematically under-represented.
However, all of these patients will typically receive clinical care, so
often the only place you can find a picture of how interventions are
working for these populations is in the clinical data.
What are some ethical considerations of using this
data for research?
Research with clinical records is a type of secondary use of the data.
You collect the data for clinical care, and subsequently use it to an-
swer research questions. There is so much data-sharing, linking, and
secondary use going on – it is a very complex ecosystem. The first
ethical challenge is that it is very difficult to get patients’ consent for
each use of their data, but we could take a more transparent ap-
proach. If we are not getting explicit consent from patients to use
their data for research, we must increase transparency so patients
can easily find out what is happening with their data, the justification
for its use, and who is responsible for managing data security. After transparency comes public engagement. There are concerns about backlash if the public are not engaged. For example, with EHRs in Australia, more than 2,500,000 people opted out of the new system. 1
There are concerns around bias in the data as well. If the input data
are biased, the result will also be biased, and 'knowledge' based on
the data can risk embedding and perpetuating bias. If the data coming
in is not representative, then the results will naturally not be repre-
sentative. There was the case in Aotearoa where the passport photo
of an Asian applicant was rejected by the automated photo checker
when it concluded his eyes were shut. 2 Most of the faces used to train
facial recognition programs are European, so these programs are much less accurate with non-European faces. While clinical data is much more representative than traditional research data, it still reflects the bias that results from differences in access to care across ethnic, gender, and geographic lines.
We know that doctors systematically undertreated African Ameri-
cans for pain, and if we do not effectively correct for this bias when
using the clinical data, there is a risk the resulting algorithm could sug-
gest African Americans need less pain relief than European patients. 3
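To make the point about biased inputs concrete, here is a minimal, purely illustrative sketch in Python. The data are synthetic and every variable name is hypothetical; it is not drawn from the interview or from the study cited above. It simply shows that a model fitted to records in which one group was historically under-treated will go on to recommend less treatment for that group, even at identical severity.

```python
# Illustrative sketch only: synthetic data, hypothetical variable names.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                 # 0 = group A, 1 = group B
severity = rng.normal(5.0, 1.5, n)            # same severity distribution for both groups

# Historical records: group B was treated less often at the same severity.
p_treat = 1 / (1 + np.exp(-(severity - 5.0)))
p_treat = np.where(group == 1, 0.6 * p_treat, p_treat)
treated = rng.random(n) < p_treat

# A model trained on these records learns the historical disparity.
model = LogisticRegression().fit(np.column_stack([severity, group]), treated)

sev = np.full(1_000, 6.0)                     # identical severity for both groups
rate_a = model.predict_proba(np.column_stack([sev, np.zeros(1_000)]))[:, 1].mean()
rate_b = model.predict_proba(np.column_stack([sev, np.ones(1_000)]))[:, 1].mean()
print(f"Recommended treatment probability: group A {rate_a:.2f}, group B {rate_b:.2f}")
```

With this construction the model recommends treatment less often for group B at identical severity, which is the "embedding and perpetuating bias" risk described above.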
Governance is another important ethical consideration. Te Mana
Raraunga, the Māori Data Sovereignty Network, is a group of
Māori academics who advocate for formal co-governance and pow-
er-sharing models for the use of datasets containing Māori data. 4
Are there circumstances where ethics review may
not be needed?
The regulatory system is incredibly fragmented, so health data can fall
under many different pieces of regulation. This makes it very com-
plex for researchers to know how they can use it. Under the Health
Information Privacy Code, health agencies can release data if it is not
identifiable, or if it is within the parameters of the purpose for which
the information was disclosed. So if a patient is in hospital, and disclos-
es their health information to the clinician, the clinician can share that
with the rest of the clinical team or call other departments for advice.
The patient would expect that to happen in a hospital, so you do not
have to ask for consent each time.
However, research is outside the original purpose for which the data
was collected from the patient. So if clinicians or researchers want
to use it for research (in an identifiable form), they either need to go
back and re-consent each patient or ask a research ethics committee
for approval to use the data without explicit consent. One argument
I have made in a recent paper is that I think we need some sort of
data-specific research ethics committee in Aotearoa. 5 Research eth-
ics committees have expertise in clinical and observational research,
but do not really have expertise in data security, computer science,
or statistics. A data-specific research ethics committee would include
experts in data science, data ethics, lawyers, Māori data governance,
and health.
Also, in a lot of health research, people want access to identifiable data. EHRs are often not accurate enough for researchers to be prepared to use them in their current state. Typically, they need to
go back and recheck things, and to do this they need the identifiable
datasets. As soon as you want access to identifiable data you need to
go through research ethics approval, which can be burdensome and
discouraging for researchers.
If the data has been made non-identifiable, are there
any issues with using the data for research without
going through an exhaustive process?
The glaring problem with this is that existing regulation assumes there
is a clear difference between identifiable data and de-identified data.
However, this is not really true, and our regulations have not caught
up with that.
There have been cases where data scientists have proved they can
re-identify individuals from supposedly de-identified datasets. Al-
though there is probably not a huge incentive for someone to do that,
it is very misleading to the public to say that de-identified datasets
are unable to be re-identified. Take for example the Integrated Data
Infrastructure (IDI). A lot of the rhetoric around it is that it is all anon-
ymous, but Aotearoa has a population of around 4,700,000 people,
and there are data about many aspects of our lives in there. I think
we need to have a much more nuanced conversation with the public
about this. I trust the IDI because I trust their process of vetting and
training researchers, and I trust the professionalism of the research-
ers; just like I trust clinicians and medical students not to share my
clinical data inappropriately when I see them at the hospital or at the
general practitioner’s (GP) clinic.
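The re-identification risk described here is often demonstrated with a linkage attack: a "de-identified" extract that still carries quasi-identifiers (date of birth, sex, postcode) can be joined to another dataset that names people. A minimal sketch, with entirely made-up records and field names:

```python
# Illustrative sketch only: all records and field names are made up.
import pandas as pd

deidentified_health = pd.DataFrame({
    "dob":       ["1985-03-12", "1990-07-01", "1985-03-12"],
    "sex":       ["F", "M", "F"],
    "postcode":  ["6021", "6011", "6012"],
    "diagnosis": ["asthma", "diabetes", "depression"],
})

public_register = pd.DataFrame({
    "name":     ["A. Example", "B. Example", "C. Example"],
    "dob":      ["1985-03-12", "1990-07-01", "1985-03-12"],
    "sex":      ["F", "M", "F"],
    "postcode": ["6021", "6011", "6012"],
})

# Anyone whose quasi-identifier combination is unique in both datasets
# is re-identified, diagnosis and all.
linked = deidentified_health.merge(public_register, on=["dob", "sex", "postcode"])
print(linked[["name", "diagnosis"]])
```

The usual mitigations (coarsening dates and locations, suppressing rare combinations, restricting access to vetted researchers) reduce but do not eliminate this risk.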
I am also bothered by agencies who claim their datasets are secure,
but then act surprised when there is a data breach. There will be
data breaches, just as there will be medical errors in the medical sys-
tem. The question is how often we think it is going to happen, what
plans we have to mitigate that harm, and how these risks are weighed
against the benefit we think we can achieve from sharing and using
the data.
Should the primary function of EHRs be for patient
care or for research?
This relates to the concept of a learning health-care system. Clini-
cal care operates under a best interest model – the goal is to help
this patient get better. In research, you are trying to generate knowl-
edge to inform medicine and so you are weighing the goals of society
against the interests of the research subjects. In the past, we sepa-
rated clinical care and research in response to high-profile research
ethics scandals. These were cases of doctors exploiting their patients
by conducting research on them at the expense of the patients’ best
interests. Some examples include the Tuskegee study in the United
States and the “unfortunate experiment” in Aotearoa (addressed in the
Cartwright Inquiry). 6,7 In response to the public outrage – which was
justified – many governments held public inquiries, the results of which effectively split research off from clinical care.
Proponents of a learning health-care system challenge this separa-
tion, and argue that it would be better to have a constant feedback
cycle where you are providing clinical care, evaluating that care, then
feeding this new knowledge back into clinical care. 8 They are arguing
for much more integration of research into clinical practice, and this
could take a whole range of different forms: from the use of EHRs
for research, to pragmatic trials. For example, you could take two GP
clinics; one might roll out a new policy on how to treat back pain while
the other continues their existing care, and then we compare results.
Some have argued that for these sorts of minimal-risk trials you could
add a simplified informed consent process into the clinical consulta-
tion, rather than having the full research informed consent process.
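As a rough sketch of the comparison described above (the clinic counts are hypothetical, and this is no substitute for proper pragmatic trial design, randomisation, or adjustment), a simple two-proportion test might look like this:

```python
# Illustrative sketch only: clinic counts are hypothetical.
from math import sqrt
from statistics import NormalDist

recovered_new, n_new = 86, 140      # clinic rolling out the new back-pain policy
recovered_usual, n_usual = 71, 150  # clinic continuing existing care

p_new, p_usual = recovered_new / n_new, recovered_usual / n_usual
p_pool = (recovered_new + recovered_usual) / (n_new + n_usual)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_new + 1 / n_usual))
z = (p_new - p_usual) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"Recovery: {p_new:.2f} vs {p_usual:.2f} (z = {z:.2f}, p = {p_value:.3f})")
```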
I recently published papers arguing that, in certain contexts, patients
have an ethical obligation to share their data. 9,10 In Aotearoa the
health care we receive is evidence-based, and we have this evidence because prior patients (from around the world) have
contributed to the research enterprise. So as part of paying that for-
ward, we should, under certain circumstances, be willing to share
our clinical data for research. This enables future patients to benefit
from the knowledge gained from our data just as we have benefit-
ted from previous patients. Ethically, I think this is much clearer in a
public health system, in the sense that there is solidarity with all of
us generating knowledge and benefitting from each other. I think this
would be different in a private health system. Regardless, there have
to be parameters of some kind to ensure the data is being used in a
trustworthy way, and governed appropriately.
Overall, I think the primary function of EHRs should still be patient
care, but I think a very important secondary function is research.
Can there be a conflict between these two goals?
One way there could be conflict is if groups who have high levels of
distrust of the medical community choose not to seek needed health care because they are worried about a lack of data confidentiality. One place we saw this was the controversy involving the Ministry
of Social Development (MSD) data-for-funding contracts. The MSD
argued that it had a right to individual client level data (rather than
aggregated data) because it needed the client level data to properly
evaluate non-governmental organisation (NGO) services, particular-
ly where clients were using multiple services. Some NGOs, such as
Rape Crisis, pushed back on that. 11 They serve a vulnerable commu-
nity and they warned that people would stop seeking their services
(or lie about their personal information) in fear of the NGO passing
that information to the MSD. So we need to avoid a situation where
public distrust of data sharing and/or secondary research leads to
patients failing to seek care, or being reluctant to disclose sensitive
information to their providers.
Are there times where public health research
involving these datasets outweighs the individual
interests of patients in control over their data?
Again, I think it is a spectrum. We already have accepted public health
principles for when we can take data, whether a patient consents to
it or not. For example, with notifiable diseases the potential threat
to the public outweighs the interests of the individual. We must still
minimise restrictions on autonomy and liberty, and maximise data
security and de-identification as much as possible.
Ethics committees do grant consent waivers to allow researchers to
use health data without consent when the public interest in the re-
search outweighs the personal interest in privacy. I think this is broad-
ly reasonable (though I would argue for slightly different criteria). For
example, maybe we want to look at the relationship between influ-
enza immunisation during pregnancy and fetal and neonatal health
outcomes; we would need to link the mother and child’s health re-
cords, and might also want to link to Births, Deaths and Marriages
Registration to include data on stillbirths. I think this sort of study,
prima facie, has high public interest and could potentially justify pro-
ceeding without consent. Transparency, community engagement, and
governance would be important issues to consider here.
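As a sketch of the record linkage such a study would need (identifiers, field names, and values below are all hypothetical), maternal immunisation records can be joined to a births register, which captures stillbirths, and then to infant outcome records:

```python
# Illustrative sketch only: identifiers, field names, and values are hypothetical.
import pandas as pd

maternal = pd.DataFrame({
    "mother_id": ["M1", "M2", "M3"],
    "flu_vaccinated_in_pregnancy": [True, False, True],
})

births_register = pd.DataFrame({   # includes stillbirths
    "mother_id": ["M1", "M2", "M3"],
    "infant_id": ["B1", "B2", "B3"],
    "stillbirth": [False, False, True],
})

infant_outcomes = pd.DataFrame({   # live-born infants only
    "infant_id": ["B1", "B2"],
    "nicu_admission": [False, True],
})

cohort = (maternal
          .merge(births_register, on="mother_id")
          .merge(infant_outcomes, on="infant_id", how="left"))
print(cohort)
```

The left join keeps the stillbirth record even though it has no live-birth outcome row, which is why linking to the register matters for this question.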
It has been suggested that alongside basic
demographic and clinical information, EHRs should
also include a more comprehensive evaluation of
societal and behavioural determinants of health.
What do you think about that?
A lot of that information is probably being discussed informally but not comprehensively collected. I can see the benefit of
collecting social data, though GPs do not have a huge amount of time
anyway, so you are weighing up how valuable it is going to be with
how long it will take to collect. You also need to try to ensure con-
sistency in how the information is recorded and coded, and the more
information you collect the more variability you are going to have to
manage. Another concern is how quickly that information changes and how to keep it updated. For example, living situations
might change reasonably often. However, if you can also use those re-
cords for research purposes, you are maximising the benefit relative
to the investment in data collection.
I also think you are going to run into trust issues with patients. When
questions arise organically and are relevant to the clinical consultation,
I think patients find that quite natural and understand its purpose.
However, they might be wary if they suddenly feel like they are get-
ting this interrogation from their doctor, the sort they might expect
from Work and Income. For any data you are collecting, you have got
to make sure that you are still operating within the spectrum of trust
and that patients understand why these questions are being asked
and feel that it is safe to tell you. One thing we know in relation to
data collection is that patients make things up if they do not trust you. Trust is core to the clinical relationship, and we cannot lose that.
Theoretically, if collection of this information was
normalised, could this information be used in a way
that affects health inequities?
It is a question of what you do with the data. You have to ask what
is the purpose of collecting the data, what is the context, have you
communicated appropriately with the target group, and is everybody
on the same page? It could decrease health inequities if that data en-
sures more vulnerable patients get the care they need. For example,
we can map populations to show where the health need is greatest.
One way you could see an increase in health inequities is if there is
backlash among certain populations who suddenly feel like they are
being surveilled in a way they do not trust. They might start to disen-
gage from the health system.
It is also important how you present the output of the research. Do
you frame the results according to a deficit narrative (why certain
populations are failing to achieve good health) or do you have a resil-
ience narrative (why, despite systemic racism, some populations are doing well, and how we can learn from that)? These narratives can be
really powerful.
Part of what is tricky about EHRs is that on one hand, they give
you the most comprehensive picture of health needs in Aotearoa.
They are often better than research that systematically excludes a lot
of populations from the research pool. So, they are especially useful
for planning health service delivery and trying to address complex
multi-dimensional problems such as the relationship between pov-
erty and health, and to target high needs groups. On the other hand,
vulnerable marginalised groups tend to have more distrust of cen-
tralised systems. They are the ones who may be more reticent about
volunteering their data to the government, and often for very good
reason. When you look at the history of research and public health,
we see that governments have collected data about populations in
order to implement policies around segregation, forced re-education
of children, dispossession of land, and so on. This is why it is so im-
portant to proceed at the pace of trust and involve communities in
setting a research agenda that meets their needs.
Are there any unique perspectives that we should
keep in mind as future doctors of Aotearoa that
international research will not necessarily cover?
First, it is important to consider the extent to which research based
on overseas data will be relevant and applicable to our popula-
tion – both in terms of biological samples and health data. Māori and
Pasifika populations are not well represented in the international
genomic resource base. There is a risk of increasing health inequity
if this under-representation is not addressed, because the research
results will not deliver genomic technologies with clinical utility for
these ethnic groups.
A second challenge is how to honour Te Tiriti o Waitangi and the
need to develop appropriate co-governance models for big data (de-
rived from EHRs or biological samples). There is lots of debate about
social license. Social license is the degree to which a community ac-
cepts a practice, in this case data sharing, linking, and re-use. Often
you do not know you have breached the social license until you have
stepped too far and you get public backlash. So the idea is to keep data use within the bounds of the social license. Te Mana Raraunga has argued that we also need a cultural license, meaning the
extent to which iwi and Treaty partners think data use is culturally
appropriate.
Finally, if people wanted more information about
this topic what do you recommend?
I would suggest people look at the United Kingdom (UK) Nuffield
Council Reports. 12 They do high-quality and accessible work on all
sorts of medical ethics topics, with recent reports on artificial intel-
ligence and big data. Also, the UK health system is similar enough to
what we have in Aotearoa that a lot of the information is still very
relevant to us.
One thing they do not cover is the Aotearoa-specific focus on Te Tiriti o Waitangi. Te Mana Raraunga's website has links to great resources on data sovereignty.
References
1. Gothe-Snape J. ABC News [Internet]. Sydney: ABC. My Health
Record opt-outs top 2.5m as service moves to ‘evolving’ choice; 2019
[updated 2019 Feb 20, cited 2019 Mar 06]. Available from: https://
www.abc.net.au/news/2019-02-20/my-health-record-opt-outs-top-
2.5-million/10830220
2. Regan J. Stuff.co.nz [Internet]. Wellington: Stuff Limited. New
Zealand passport robot tells applicant of Asian descent to open eyes;
2016 [updated 2016 Dec 08, cited 2019 Mar 06]. Available from:
https://www.stuff.co.nz/travel/travel-troubles/87332169/new-zealand-
passport-robot-tells-applicant-of-asian-descent-to-open-eyes
3. Singhal A, Tien YY, Hsia RY. Racial-ethnic disparities in opioid
prescriptions at emergency department visits for conditions
commonly associated with prescription drug abuse. PLoS One.
2016;11(8):e0159224.
4. Te Mana Raraunga [Internet]. New Zealand. [cited 2019 Mar 06].
Available from: https://www.temanararaunga.maori.nz
5. Ballantyne A, Style R. Health data research in New Zealand:
updating the ethical governance framework. N Z Med J.
2017;130(1464):64–71.
6. Centers for Disease Control and Prevention [Internet]. Atlanta:
U.S. Department of Health and Human Services. U.S. Public Health
Service Syphilis Study at Tuskegee [updated 2015 Dec 14, cited 2019
Mar 6]. Available from: https://www.cdc.gov/tuskegee
7. Ministry of Health [Internet]. Wellington: MoH. The Cartwright
Inquiry 1988 [updated 2017 May 03, cited 2019 Mar 06]. Available
from: https://www.health.govt.nz/publication/cartwright-inquiry-1988
8. The Learning Healthcare Project [Internet]. Newcastle: Institute
of Health and Society, Newcastle University. Background: Learning
Healthcare System [cited 2019 Mar 06]. Available from: http://
www.learninghealthcareproject.org/section/background/learning-
healthcare-system
9. Ballantyne A, Schaefer GO. Consent and the ethical duty to
participate in health data research. J Med Ethics. 2018;44(6):392–6.
10. Ballantyne A. Adjusting the focus: a public health ethics approach
to data research. Bioethics. 2019;33(3):357–66.
11. Robson S. Radio New Zealand [Internet]. Wellington: RNZ.
Controversial data-for-funding plan scrapped; 2017 [updated 2017
Nov 07, cited 2019 Mar 06]. Available from: https://www.radionz.
co.nz/news/political/343233/controversial-data-for-funding-plan-
scrapped
12. Nuffield Council on Bioethics [Internet]. London: Nuffield Council
on Bioethics; 2014–2019 [cited 2019 Mar 06]. Available from: http://
nuffieldbioethics.org/
Acknowledgements:
The authors would like to thank Associate Professor Angela Ballantyne
for her time and contribution towards this interview.
Conflicts of Interest
Rex Liao is an NZMSJ student reviewer.
Logan Zane John Williams is the Editor-in-Chief of the NZMSJ.
Gisela Kristono is the Deputy Editor of the NZMSJ.
This article has gone through a double-blinded peer review process
applied to all articles submitted to the NZMSJ, and has been accepted
after achieving the standard required for publication. The authors
have no other conflict of interest.
Correspondence
Rex Liao: [email protected]