
Episode 100

Naked and Infamous: Exposing Fertility Apps’ Use and Abuse of Health Data

Law as stated: 22 September 2023. This episode was published and is accurate as at this date.
Dr Katharine Kemp of UNSW's Faculty of Law and Justice joins David in Curiosity to explore her research paper Your Body, Our Data: Unfair and Unsafe Privacy Practices of Popular Fertility Apps. They touch on the use and misuse of users' health information, how and why that information is gathered, and how users can best protect themselves.
Substantive Law
22 September 2023
Katharine Kemp
UNSW
1 hour = 1 CPD point
What area(s) of law does this episode consider? Unfair and unsafe privacy practices in popular fertility apps.
Why is this topic relevant? Fertility apps have transformed how consumers monitor their reproductive health. Sitting within the broader FemTech market, they are marketed as a convenient, purportedly data-driven way for women to track their menstrual cycles, assist in conceiving a child, or manage a pregnancy.

However, these seemingly convenient tools too often come at the cost of compromised privacy. The sensitive personal data that fertility apps can collect goes straight to the heart of some of the most intimate moments in our lives – and as the research of today’s guest shows, that intimate data can be obtained unfairly and retained unsafely.

Dr Katharine Kemp’s research paper Your Body, Our Data: Unfair and Unsafe Privacy Practices of Popular Fertility Apps found serious privacy flaws in many fertility apps popular with Australian women.

What are the main points?
  • Fertility apps can collect extensive personal data – not just from user entries, but through tracking – and potentially misuse this information without the users’ knowledge or consent.
  • Aside from the expected behavioural data input by the user (such as menstrual cycle tracking, fertility symptoms, mood changes, and digestive symptoms), the apps also analyse usage data.
  • This involves monitoring user behaviour within the app, such as the articles they read, the support groups they join, and the time they spend on certain information.
  • The presentation of user options and application interfaces in popular fertility apps might encourage users to unknowingly give away more data or permit extensive tracking of their online behavior.
  • Certain apps do not clearly state what differentiates their subscription and membership tiers, presenting vague or misleading descriptions.
  • In one app the only difference between a gold and silver membership was that the gold permitted invasive tracking, an option that was not clearly stated to the user.
  • Katharine also found issues around consent for the use of personal data for targeted ad purposes.
  • Sentences bolded in the app’s terms to imply consent contradicted other unbolded terms within the text in meaning and intent, leading to potential confusion.
  • In terms of data retention, some apps say they will keep the data for three years after the user stops using the app, and others for six or more. There is no good reason for keeping health data this long.
  • Such long retention times are adding to the possibility of misuse internally within an organisation or by external bad actors.
  • In Katharine’s view, what’s needed is not necessarily time limits, but strict enforcement of the rules to discourage data handling laziness.
What are the practical takeaways?
  • In relation to using fertility apps, Katharine advises potential users to spend time searching for and adjusting privacy settings to better protect personal data.
  • Sometimes, but not always, EU-headquartered apps tend to have better privacy practices or privacy options, because they are generally subject to the GDPR.
  • She advises users to avoid answering unnecessary questions, and to delete their account and data from the app when they are no longer using it, rather than just deleting the app itself.
  • For those interested in a career in legal academia, Katharine stresses the importance of understanding the balance of power within privacy regulation and how it pertains to equality and social justice.
  • She advises students to be flexible in their career, stating that often a specialty finds the individual, and to always adhere to their principles.
Show notes: Katharine Kemp, Your Body, Our Data: Unfair and Unsafe Privacy Practices of Popular Fertility Apps (2023)

David Turner = DT; Katharine Kemp = KK; Ross Davis = RD

0:00:00DTIf you’ve been listening to recent episodes of Hearsay, you might have noticed we’ve been doing a little bit of a deep dive into data and privacy law. And today we’re looking at a particularly fascinating example: fertility apps.

Fertility apps have transformed how consumers monitor their reproductive health. Sitting within what’s sometimes called the broader “femtech” market, these apps are marketed as a convenient and purportedly data-driven way for women to track menstrual cycles, assist in conceiving a child, or to track the progress of a pregnancy.

But these seemingly convenient tools often come at the cost of compromised privacy. The sensitive personal data that fertility apps can collect goes straight to the heart of some of the most intimate moments in our lives. And as the research of today’s guest shows, that intimate data can be obtained unfairly and retained unsafely.

Joining us today in the Curiosity Recording Room is Dr. Katharine Kemp, Associate Professor at the Faculty of Law and Justice at UNSW, and the Deputy Director of the Allen’s Hub for Technology, Law, and Innovation. Katharine’s research paper, Your Body, Our Data: Unfair and Unsafe Privacy Practices of Popular Fertility Apps, found serious privacy flaws in many fertility apps popular with Australian women.

Katharine, thank you so much for joining me today on Hearsay!

KKThanks so much for having me.
DTNow before we get into this topic, and it’s a fascinating one, tell us a little bit about your career in the law, because you’ve had a very varied career in the law, haven’t you?
KKYes, I’ve actually had quite an unusual path from practicing law to becoming an academic. When I was in my late 20s, I was a barrister at the Melbourne bar when my father-in-law actually became very ill in South Africa, and so we moved to Cape Town. And I survived a serious injury during a violent crime soon after we arrived and was hospitalized. And so in the space of a few months, I went from having a busy practice at the Melbourne bar, a very helpful clerk, good friends, and a lot of the things that we take for granted as Australians living in Australia, to being in a new country with no job, no right to work because they’d changed the immigration laws while I was in hospital. I was obviously recovering from the physical and psychological side of the crime, and I couldn’t drive, I couldn’t even get a bank account because the South African bank said, you actually need to have a job before you can deposit money with us. So it was quite a bleak time, to say the least. But after a struggle with all of those things, I discovered that I could actually work a certain number of hours a week if I was enrolled in the Master of Laws program. And so I started the Master of Laws, and that is ultimately how I ended up lecturing at the university there, co-authoring a book, consulting to the competition commission in South Africa; working with some really amazing people. And it meant that ultimately when I six years later came back to Australia and completed my PhD at UNSW, I also got the chance to work with some very inspiring academics at UNSW, the likes of Professor Graham Greenleaf and Professor Lyria Bennett-Moses, and be part of a faculty that has a very genuine focus on social justice. And so I’m very glad to be part of that with my research on the intersection of competition and consumer and privacy regulation.
DTThat is such a unique and kind of harrowing story of a journey into legal academia. It’s certainly not what I expected when I asked the question. Thank you for sharing it with us. And you’re so right that this sort of data and privacy field of legal practice and legal research is really one that’s concerned with social justice. I think we’ve talked on the show before about the opportunities in commercial practice to contribute to or work on social issues. And this feels like one of the richest examples of that, that the way in which consumer data is handled really can pose a risk to users and also is an opportunity to show respect and compassion for users of software applications. Now your focus in your recent research has been on fertility applications. I’m familiar with some of them in the market, but for some of our listeners who maybe haven’t used one before or are otherwise unfamiliar with how they work, give us a bit of a precis about what a fertility app is.
0:05:35KKFertility apps can be really useful tools for people who are perhaps trying to understand their menstrual cycles, or prevent themselves from getting pregnant, or get pregnant, or just track something that looks like it’s going wrong with their fertility symptoms. Because what they do is allow the user to log a lot of their own information and to be able to see over time what kind of trends are evident there, what are the patterns of their cycle or anomalies within their cycle and so forth. So people for a long time did that using paper charts and tracking their cycles in that way. But as you can imagine, having that single source of information and being able to have it all easily in the one place and potentially speak with your doctor about it and show them some of the trends that are emerging can be a really helpful way of understanding that part of our health better.
DTAnd the kind of user experience of these sorts of applications is they’re often presented a bit like a calendar and the user logs the start and end of a period, for example, and it extrapolates from that data collected over the user’s period of usage, things like ovulation dates and that can then carry on into tracking the progress of a pregnancy.
KKYeah, so it gives people supposedly – this doesn’t always work out necessarily depending on the quality of the app and what theories they’re basing their predictions on – but potentially it can help people to understand when they might be ovulating during a month, when they might be fertile, so that they can either avoid having sex or make sure they have sex in those times depending on what their goals are. And so they do have that potential to not just record but to help in predicting and see what’s coming.
DTAnd so your research into these applications, tell us a little bit about the methodology of that research. How did you go about investigating the privacy practices here?
KKWhat I was looking at was what are those privacy terms and user interfaces that a person sees when they decide to use one of these apps. Or that they could see, rather, because a lot of these are relatively hidden in fine print and in obscure corners of the app itself. And so I looked at 12 of the most popular fertility apps in Australia based on their usage data, for example, and downloads – a combination of these factors. And then went and saw; what are their privacy policies? What are the privacy notices that come up? What are those initial screens that are shown to the user that may – or most commonly may not – show them privacy choices in how the app is going to use their information? And if you dig down into the fine print of the privacy policies, what is it that you discover about what they say they’ll do with the data and how they treat it? What extra purposes like research, targeting, profiling, and so forth are mentioned in that fine print? And this is something that a lot of consumers can’t do for themselves because these policies might run to thousands of words that are in themselves very difficult to understand if you’re not spending your life reading privacy policies and analysing them like this as I happen to be.
DTYeah, well absolutely, and I suppose for an application that is marketed as a convenient alternative to a manual process, very few users have the patience, or the time, or the inclination to read a policy even if they were so inclined.
KKIt’s simply a matter of practicality. You know, something I hate is that often people speaking about this will say; “oh, we’re so terrible as consumers, we just never bother to read the policies” and so forth. But the researchers have actually presented the evidence on this and shown that for the average person, it would take them six working weeks a year to read all of the privacy policies that apply to them. So trying to push it down on the consumer as them somehow being negligent for not keeping up with this when it wouldn’t allow them any choices in any case is pretty ridiculous.
0:10:22DTI agree completely. It’s a complete misconception of where the onus should lie in terms of these contractual relationships. We have extremely frictionless processes for accessing these applications, for getting started with one of these contractual relationships, and we expect the user to add a whole lot of friction, really optionally, by choosing to read through a 2,000 word set of terms and conditions or privacy policy in a tiny scroll box on a mobile phone before they use a game or a fertility app or something like that and it’s absurd.
KKSort of, often “we need more consumer education about how companies are being sneaky” rather than “just stop being sneaky”.
DTYeah, that’s right, yeah. And especially, and again, I like that you said before that you’re investigating the intersection of competition law, consumer law and privacy because we often think of privacy law as a discrete field, that it’s the Privacy Act, it’s the Australian Privacy Principles, it’s privacy policies but there are broader consumer law considerations. In Australia, unfair contract term provisions should render a lot of those sort of click-wrapped terms and conditions – which we have no capacity to negotiate, which we’re told to either accept or reject by a body with far greater negotiating power than us – really should be rendered void.
KKYeah, there’s certainly room to argue that some of these privacy terms are unfair contract terms. The way we define unfairness under the Australian consumer law and particularly the examples provided are not very well adapted to privacy terms because they’re tending to focus on things like when you can terminate a contract, for example, or what kind of extra charges might be imposed and so forth. So if we better adapted some of that law, it could certainly come within that or potentially an unfair practices prohibition which has been advocated for already and also looking at our misleading or deceptive conduct and false representations, prohibitions in the Australian consumer law are all possibilities.
DTAnd I want to come back to some of that discussion about law reform proposals both in terms of reform of the Privacy Act and around some of those consumer law proposals in a little bit. First though, as I said at the top of the episode, this is sort of a deep dive into an example to cap off a bit of a mini-series we’ve been doing on privacy and data in Australia. And one of our recent guests talking about the consumer law dimension of privacy and data collection practices with popular software applications talked about this concept of dark patterns, especially ones that involve features of the user experience or the user interface of an application that don’t necessarily match what the terms and conditions or the privacy policy say. That there’s a bit of a mismatch between the purported rights of the consumer in those documents and how easily or effectively they can exercise those rights when they’re using the application itself.

TIP:  As you might have seen, there’s so much happening in the privacy space at the moment. We first touched on the space with Alec Christie in Episode 83 – The Privacy Parables: Understanding Australia’s Privacy Act in the GDPR Age. That episode explored the idea of GDPR adequacy and came in just around the time of the Privacy Act Review Report. We had Alec back in Episode 91 – The Privacy Parables II: The AG’s Privacy Report and the Future of Privacy in Australia. To take us through the recommendations in that Report and whether the privacy regime is fit for purpose. Definitely have a listen to both of those episodes if you’re interested in the space. The recent guest David mentions is Shaun Temby. Shaun came onto the podcast to discuss the consumer law dimensions of data collection such as the use of dark patterns. Shaun’s episode will be released very shortly. So definitely look out for that one!

Tell us a little bit about dark patterns and these applications because that’s sort of what you’re describing when you’re describing a lot of these choices being hidden under several layers of abstraction.

KKYeah, so this comes down to the way that those user interfaces are designed, what it is that we see on the screen, and what it is that we can find within the app when we are actually using them. And so having a look at these fertility apps, there were some striking examples of ways that choices were presented to consumers if they were presented. In one case, there was a fertility app that gave you the choice between gold membership or silver membership. Those were the two possibilities. And at the time that I was originally looking at these, they actually gave exactly the same description in the little box presented to consumers of both of those memberships. So all you’re left with is really the impression that we generally think gold must be better than silver. So you tick that and it was ticked already. And so the general impression was you were getting the best. If you dug down far into the fine print of the privacy policy, you would discover that the difference between gold and silver membership is actually that the gold membership permitted more deterministic as we might say, more specific, tracking of your behaviour across the app and other apps than the silver membership. So it was actually for gold membership, you get less privacy. It was more specific targeting and so forth. And so if for instance, that company tried to argue; “well, we actually think it’s a good thing to be more finely tracked and targeted”, then why didn’t they put that in the highlighted explanation for consumers? And that is what we’d expect to see if we have proper design of that user interface; is clear information that doesn’t potentially trick consumers into acting against their own best interest. Another example, they had a box that you had to tick about use of your information for targeted advertising. And they had in bold; “we don’t share your health information with advertisers”. And then in a non-bold sentence after that said, they do share your information with “advertising partners”. So directly contradicting each other, those sentences, but the one that suited their purposes was in bold and what the person would notice and therefore make them much more inclined to tick the box and give the so-called consent.
0:17:17DTWas that just an error or was there some intended difference in meaning between advertisers and advertising partners?
KKThat is what I was attempting to discover and looking even into the privacy policy itself, there wasn’t an explanation of that. So for the average consumer, they would just to be realistic, they would just notice what was in bold; “we don’t share your health information with advertisers”. So it is very difficult to tell just to what extent companies might be attempting to be actually dishonest or whether they’ve just been completely slack and not edited properly and so conveyed this misleading information.
DTIt sounds like there’s broadly two categories in which these dark patterns are being deployed or two purposes to which they’re being deployed, I suppose. One is in terms of the collection of different types of information and obscuring the extent to which information is collected and the other is sort of obscuring the extent to which it’s used. Let’s start with that first category. What information are these apps actually collecting? Because I suppose a lot of users would expect, well, it’s collecting the information that I enter, it’s collecting the dates on which I log the start and end of a period, and that’s what it’s collecting. So what other forms of data are these applications collecting that users might not be aware of?
KKYeah, you’re right. There’s that obvious active provision of information for that; what we might call the logging data of. I enter when I have my period, I enter when I have these fertility symptoms, when I have sex, when I have mood changes, digestive symptoms and so forth, and you’re aware of that. But what I noticed in the privacy policies was that the apps tend to create this different category of information that they often call something like usage data. And then what they’re getting at there is that they are monitoring not just what you enter into the app, but how you’re using it. So for example, did you read three articles in a row on infertility, and what it’s like to use Clomid for infertility, what it’s like to use particular drugs for infertility? And did you then go and join a support group via the app for people who are trying to conceive after miscarriage or after they’re 35? And so these kinds of indications of how long you read certain things and what people you connect with and what you’re interested in on the app can be used to form a pretty detailed profile on your likely health status and age concerns and so forth. And yet that kind of usage data, according to the privacy terms, was shared a lot more broadly and for more purposes, as if it wasn’t sensitive information. And there’s this tendency among quite a lot of companies, and I think particularly data brokers, in dealing with that kind of behavioural data to quite disingenuously classify it as not health information and therefore not sensitive information. For instance, they might say, this is not a list of people who suffer from post-traumatic stress disorder and chronic pain. It’s a list of people who are interested in post-traumatic stress disorder and chronic pain. And so they know what that’s code for, but to pretend that’s not sensitive information and therefore requires more careful handling and a higher standard of obligation under the privacy legislation is thoroughly disingenuous.
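To make that distinction concrete, here is a minimal, hypothetical sketch in Python of how "usage data" alone could be rolled up into an interest profile that functions as health information in all but name. The event names, topics and dwell-time weighting are invented for illustration; they are not taken from any app examined in Katharine's paper.

```python
# Hypothetical sketch only: the event names, topics and dwell-time weighting
# below are invented, not taken from any app studied in the paper.
from collections import Counter

usage_events = [
    {"type": "article_read", "topic": "infertility", "seconds": 240},
    {"type": "article_read", "topic": "clomid_for_infertility", "seconds": 310},
    {"type": "group_joined", "topic": "conceiving_after_miscarriage", "seconds": 0},
    {"type": "article_read", "topic": "conceiving_over_35", "seconds": 180},
]

# Roll behaviour up into inferred "interest" tags - the kind of profile a data
# broker might label "interested in infertility" rather than "infertile".
profile = Counter()
for event in usage_events:
    weight = 1 + event["seconds"] // 60  # longer dwell time, stronger signal
    profile[event["topic"]] += weight

print(profile.most_common(3))
# [('clomid_for_infertility', 6), ('infertility', 5), ('conceiving_over_35', 4)]
```

Nothing in this sketch is entered by the user as a symptom or diagnosis, yet the resulting tags read as a detailed picture of their likely health status and concerns.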
DTFor some of our listeners who don’t regularly practise in the area, can you give us a little bit of a reminder on how health information is treated differently to personal information under the Act?
KKYeah, so essentially what you need in the case of health information, which comes within the broader category of sensitive information, is to, for example, get the consent of the individual it concerns before it’s collected. And in order to disclose that information to another party, likewise, you generally need the consent of the individual with some narrow exceptions. And so there are greater constraints on your use of that information compared to the broader category of personal information, where, for example, when you’re collecting it, you can give notice to the person it concerns rather than having to get their consent to that collection in the first place.
DTAnd so we can immediately see why it’s so important how this information is categorised; why whether you’ve read an article on conceiving after miscarriage is health information or just personal information really affects the degree to which it can be disclosed further without the user being made aware of that disclosure. The other category of sensitive information is about finances, is that right?
0:22:43KKNot necessarily. There can be some financial information that would be sensitive, but not as a broad category. It’s not classified as sensitive information. So we would have things like biometric data and health information and so forth that would be classified as sensitive information. But certainly there is some financial information that wouldn’t reveal the, kind of – there’s a long list within the Privacy Act that includes things like about your political beliefs, your race, religious beliefs and so forth. So yeah, financial data broadly isn’t one of those in the list.

TIP: The meaning of “health information” is contained in section 6FA of the Privacy Act 1988 (Cth). That section broadly specifies that information or opinion about a person’s health, illness, disability or injury, their wishes about future health services, or health services provided is health information. More broadly still, that health information comes under the definition of sensitive information contained in the section 6 general definitions. 

DTThat disclosure of, I suppose what might in practical terms be health information, but is often classified as more easily disclosable personal information. For what purposes is it being disclosed and does that drive revenue for these applications?
KKYes, so these apps will have different business models. So some of them are very much driven by sharing data with other companies and saying in the fine print in some cases that they will share information about somebody’s usage data or their profile and the like with other companies for advertising purposes or straight out sell the information which they might claim is de-identified but not explain how it’s de-identified in a way that would give you any confidence in that. Others seem to be focused on using that information for their own research or for the research of other organisations that they’re in partnership with. There’s one app that’s owned by a drug development company that seems to link with its other drug development business and business with pregnancy tests and ovulation tests and so forth. So they seem to have a lot of different interests, ultimately, whether through broad commercial interests or using a targeted advertising model within their app or that research interest for wanting to use that personal information for other purposes.
DTOn one of the earlier episodes in this series, speaking with Alec Christie at Clyde & Co, we talked about de-identification and how it’s not always as easy as just removing a username and the argument that information can’t really be regarded as de-identified if it’s capable of re-identification, that is it can be inferred who the individual is based on some other data.
KKThat’s especially the case with some of the information that we’re talking about here because there’s really interesting research by Vanessa Teague and some other Melbourne researchers who were the ones who revealed that the Medicare data that had been released as supposedly de-identified was in fact re-identifiable. And one of the interesting things with information like that collected by the apps is that they sometimes collect the parent’s date of birth along with the children’s dates of birth. And then if they believe that they can just remove the email address or the name of this person and start sharing that file and it’s not re-identifiable and that’s very misguided. Because, in fact, if you have, let’s say, a particular mother who has two children, then that combination of their dates of birth usually means that within a particular region of Australia, there is only one mother with two children born on those particular days and so that is re-identifiable. And likewise, even if it’s only one child, if the mother’s either above a certain age, a particularly older mother or a very young mother, they can take only one child’s birth date in order to re-identify that information. So it’s an area where we would want to be very sure of advanced de-identification techniques or just, let’s not say advanced, adequate de-identification techniques for actually ensuring that this is not personal information.
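A rough illustration of the re-identification risk Katharine describes: a small, k-anonymity-style check, run on an entirely synthetic file, counting how many records are the only ones with their particular combination of region, mother's date of birth and children's dates of birth. All field names and values are invented for the sketch.

```python
# Hypothetical sketch only: a tiny synthetic file, not real data. The question
# is how many "de-identified" records are unique on their quasi-identifiers.
from collections import Counter

records = [
    {"region": "NSW-2000", "mother_dob": "1988-03-14", "child_dobs": ("2019-06-02", "2022-01-17")},
    {"region": "NSW-2000", "mother_dob": "1991-11-30", "child_dobs": ("2021-09-08",)},
    {"region": "NSW-2000", "mother_dob": "1985-07-22", "child_dobs": ("2017-04-11", "2020-12-03")},
]

# Quasi-identifier: region + mother's DOB + children's DOBs (name and email
# have already been "removed", as in the claims described above).
groups = Counter(
    (r["region"], r["mother_dob"], r["child_dobs"]) for r in records
)

unique = sum(count == 1 for count in groups.values())
print(f"{unique} of {len(records)} records are unique on these fields alone, "
      "so anyone holding the same dates elsewhere can re-identify them.")
```

The point is not the toy numbers but the check itself: if a combination of dates is unique within a region, removing the name and email address does not make the record anonymous.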
DTAbsolutely. One of those uses to which this personal information that may in fact be health information is being put by these applications is for research purposes. I think in your paper, you identified that half of the applications you studied were doing that. What are some of the particular issues around the research use of that information? What are some of the concerns there?
KKI’d say I’ve got three main concerns with that. The first is that they tend not to give the consumer any choice about whether their information is used for that research purpose and so there was one app that allowed you to write to them and ask to please not be included in the research. But others just by default said; “your information will be used for these research purposes” and they would often claim; “well, it’s de-identified when we do that”. But some of them were either completely vague about how that happened or mentioned de-identification techniques which were entirely inadequate like we remove your name and email address and then share this with other companies or research organizations or use it in our own research. And in addition to that, none of them were promising that they would abide by certain ethics guidelines or submit to ethics oversight of their research which makes it quite different to, for example, a university researcher – who is subject to those ethical guidelines and ethical oversight – that might be taking great care in the way both the information is handled and what the outcomes, potential risks from the research itself will be. Compared to these cases where the reassurances sometimes were things like; “we hand pick the research organizations” as if that should give some confidence to the user that this will all be okay.
DTYeah, I suppose you can especially see with those research purposes by commercial organizations that the secondary purposes to which that research information might be put once it’s in the hands of a third party are very difficult to discern. One of the other concerns you identified in your paper is unnecessary retention of data. In an earlier episode in this series, Alec Christie talked us through his summary of the Privacy Act in sort of three can’ts – and the last being you can’t keep it forever, you can’t keep the data that you collect forever. But you identified that these applications are often retaining data for longer than it is necessary for the purposes for which it’s collected. Tell us a bit about that.
0:30:16KKIn some cases the apps were saying in the fine print we’ll keep this information for three years after you stop using the app. Or for six years, seven years after you stop using the app and clearly there’s no good reason for keeping people’s health information for that period of time. And not just no good reason but it is exposing the information to potential misuse either within the organisation or from bad actors outside the organisation for every day longer that they keep that information. And so as we saw with data breaches like Optus and Medibank last year, holding on for longer than is required is already problematic under the Privacy Act. I think that we need more specific regulation about that, not necessarily a period of time for which that information can be kept because across different sectors and different kinds of companies that is going to vary. But a really important thing is that we need strict enforcement of those rules so that companies don’t opt in favour of being lazy about that because the reality is that it is more costly nowadays to go and delete data than to keep it. That wasn’t always the case. And so without the rules in place, the incentive is actually just to hang onto it because that saves you a task and maybe one day you’ll find a use for that information. And we need to bend the incentives in the other direction so that it protects people’s information from those hacks and misuses.
DTYeah, we’ve talked about that kind of collect everything approach driven by the perception that; “well, data is valuable, more data is more valuable, we might as well collect everything”. But the potential liabilities that presents have kind of come home to roost over the last 24 months with some very high profile data breaches, especially around the collection of identifying documents and things like that. But you mentioned that we need to be enforcing the existing laws around this. How does retention of health information for up to seven years after a user deletes their account sit well with the existing obligations in the Privacy Act around retaining information for as long as is required for the permitted purpose?
KKYeah, well, obviously, I think that would be a very difficult argument for the app to make to prove that that health information is actually necessary for that period of time. And what we have argued generally, there is generally a consensus that our privacy regulator is under-resourced and that if it were better resourced, say more in line with the likes of the ACCC – not necessarily having the same funding as the ACCC because the ACCC covers many different areas and sectors and so forth – but proportionately at least, then we would see better enforcement. And we need that political will to make our privacy regulator strong and effective for these rules to actually bite.
DTLet’s talk about law reform now, we’ve alluded to it a couple of times in the show so far. And let’s start with privacy law reform and then maybe we’ll talk about some of the consumer law changes that are on the horizon. There has been a recent review of the Privacy Act which has proposed some very broad changes to bring Australian law into, kind of, GDPR-parity status. We’re not presently regarded as sufficiently on the level with GDPR to comprise GDPR adequacy here in Australia.

TIP: On the 16th of February 2023, the Commonwealth Attorney General’s Department published its long-awaited Privacy Act Review Report.  The Report represented the finalisation of two years of consultation and review of the Privacy Act 1988. The question at the heart of the Report was whether the Act and its mechanisms are still fit for purpose in modern day Australia. 

So in the context of your research into these fertility applications, what are some of the proposals that have come out of the recent report for the amendment of the Privacy Act that might help in addressing some of these practices?

0:34:45KKI think central is our definition of personal information. It is so important to the kind of arguments that companies are often making at the moment as to why their data practices actually aren’t covered by the privacy legislation at all or why you get claims that this is privacy safe or privacy compliant, privacy respecting and so forth. Quite vague explanations of why a particular data practice is okay. And often that comes down to either a claim that the information has been de-identified when it hasn’t or that it wasn’t personal information in the first place when in fact the company knows that it is actually using that information to address a particular individual or to combine data to engage in this data matching with other organisations about a particular individual. And that gets to the level of having two companies saying; “we didn’t use any personal information in this data matching exercise because we hashed the email address, and created a unique code for the person in place of the email address, and then we put our two files together and ended up magically with a much more detailed profile in both of our hands. But no, that wasn’t personal information”. Now, that is clearly not in keeping with the spirit of the law and many would say not in keeping with the letter of the current law. But we need absolute clarity on that and on concepts such as individuation where increasingly what we’re concerned about is the ability to address a particular individual, knowing that that person without necessarily a legal name attached to them, because that is not the prime concern here anymore. We know that we all walk around with that particular phone and that if you are talking to our phone, you’re talking to us. So this concept that it’s name and address that are still the concern rather than individuation is very problematic. I think other big ticket items in the reform of the Privacy Act would be the definition of consent; that it should not be implied consent as sufficient to constitute consent. But that we have that unambiguous active consent that is always informed and to have those changes so that you don’t have this pretense that this has been chosen by the individual. And beyond that, not putting it all down to that mechanistic approach to privacy of saying; “we showed you a notice, you ticked the box so we get to do whatever we like with the information”. But having a test of fairness and reasonableness that says often we really actually don’t have a choice about it, our employer tells us this is the software you will use and you’re not going to make a big fuss at your job or your kid’s school, your daycare, whatever. You are not making a real choice. So the data practice should be fair and reasonable aside from any claim of consent.
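To illustrate the hashed-email matching Katharine describes, here is a minimal, hypothetical sketch: because both parties derive the same "unique code" from an email address, their two files can still be joined into a richer combined profile even though neither file contains the email itself. The emails, field names and values are invented.

```python
# Hypothetical sketch only: emails, field names and values are invented.
import hashlib

def pseudonym(email: str) -> str:
    # The "unique code in place of the email address": a plain SHA-256 hash.
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Each company holds a file keyed by the hashed email, with no email in sight.
company_a = {pseudonym("jane@example.com"): {"cycle_tracking": True, "age_band": "30-34"}}
company_b = {pseudonym("jane@example.com"): {"recent_purchases": ["ovulation tests"]}}

# Because the hash is deterministic, the two files join on the same key and a
# richer profile of the same person ends up in both companies' hands.
merged = {
    key: {**company_a[key], **company_b[key]}
    for key in company_a.keys() & company_b.keys()
}
print(merged)
```

The hash is stable and repeatable, which is exactly what makes the join work; calling the result "not personal information" rests on the label, not on any real loss of the ability to single the person out.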
DTYeah, absolutely. You mentioned that you’d wanna get past this practice of providing the user with notice of a particular privacy practice or asking for consent usually in the form of an application or a website, getting that consent and then going on to do whatever the company wants to do with the data they collect from the user, whether or not the user’s truly understood the consent that they’ve given even actively. Although the proposed changes to the Privacy Act that the Attorney General’s Department have suggested will bring us closer to the GDPR in terms of requiring data collection notices and a more active sort of consent, it doesn’t seem like we’ll have that layer of fairness and reasonableness that you were describing, that moving past the more mechanistic, as you said, approach to user consent. Is there a jurisdiction in the world that has that sort of approach to privacy? Is it something that we could emulate here? What does the path to that look like?
KKWell, the Commonwealth Attorney General’s Department in its final report on the Privacy Act review has raised this possibility of a fair and reasonable test. And we don’t know whether the government is going to pick that up at this point, and it would be a test that isn’t satisfied by mere consent. If you were making comparisons to other jurisdictions, then you might think of the legitimate interest test in the EU, which is separate to the consent justification. And that will always be a bit different to our situation because it is fundamentally connected to privacy as a human right being balanced with other interests, and because we don’t have that recognition of privacy as a fundamental right in Australia, say as part of a bill of rights, then that couldn’t work in the same way as in Australia. And so that’s one of the reasons we speak in terms of this fairness and reasonableness of the data practice. So it’s a possibility. There’s a lot of people lobbying against it, especially, for example, from the ad tech industry and saying; “well, fairness is a vague concept, and how could we be expected to meet this standard?”. In fact, though, we have fairness as a standard in other laws under the Australian consumer law, for example, or financial services regulation. It’s certainly not unheard of, and it is possible to develop that case law and guidance, and even at the outset to have some indication of what that means sufficient to comply with the law.
0:41:14DTI suppose on the flip side of that, a test of fairness and reasonableness, it’s a little bit harder for a regulator to have the confidence that a prosecution or a civil penalty action or something like that is likely to succeed, no? I mean, we see that with unfair contract terms. Granted, unfair contract terms are only very soon going to become something that’s subject to a civil penalty, but we were saying earlier in this episode that many of these privacy practices might be regarded as unfair contract terms, but the capacity of an individual consumer to exercise their right to have that contractual term declared void is kind of a paper tiger.
KKYes, we’ve seen, for example, from the Federal Court’s treatment of the unconscionable conduct prohibition that these kinds of standards can end up in very high thresholds being imposed for their application. And so it is possible that if we had such a test of fairness and reasonableness, it might, in its interpretation by the Australian courts, be given a narrow interpretation. That’s, I think, a very real risk and one that can’t be completely controlled, but also a reason that the concerns that are raised that this would be somehow desperately unfair for companies trying to deal with personal information shouldn’t be given too much weight.
DTThat’s a good point. You can’t really have it both ways, can you? You can’t say it imposes an enormous regulatory burden, but also it couldn’t possibly be used by regulators because it’s too unfair. Yeah, that makes sense. Speaking of unfairness tests, let’s talk about some of the consumer law changes that are on the horizon. You mentioned unfair practices prohibitions earlier in the episode. Recently, we talked to Shaun Temby about the unfair practices prohibition around dark patterns and some of the, I suppose, practical issues in terms of the developers who might be responsible for designing user interfaces. Are they likely to have that level of exposure to a compliance or regulatory team who can pick up on these sorts of issues? Are we going to have the same issues in identifying what does constitute a breach and what just constitutes a behavioural nudge or clever design? Tell us your perspective on some of the unfair practices suggestions.
KKI think such a prohibition would fill some of the existing gaps. For example, in potentially capturing instances where a business might not be directly dealing with consumers itself, and so it might not be engaging in any conduct or making any representations to consumers themselves, but could come up with a particular business product that assists others to either mislead consumers or act unfairly towards consumers. And so their entire business model might be unfair and risk this serious harm to consumers. In addition to that, there are a number of practices that we’ve seen that don’t fit neatly within concepts of misleading or deceptive conduct. Because as we’ve said, sometimes the real information might be there, but the practice is such that it doesn’t give consumers a choice about this, and consumers aren’t always capable of understanding the full consequences of those data practices, which are becoming more and more complex and require specialised knowledge to understand. And so that seems to be a real area where it’s important not just to look at; is this actually misleading? Is it creating a false impression? Or is it a matter of unfairness that goes beyond the misleading label or fits below that very high threshold for unconscionable conduct and doesn’t fall within our misuse of market power provisions because the company might not have substantial market power in providing this service. So I think data practices are a key area where that kind of prohibition would be useful.
DTThere’s really sort of two examples that come to my mind where the existing protections aren’t really adequate. One is where the statement is true, but is really designed to make you act against your interests. In our last episode, our guest described the example of closing an account with an application, and when doing so, you’re prompted with an option to choose whether your personal data should be deleted when you close your account, but it’s accompanied by a big red box that says, warning, this can never be undone. And so you’re given this sense of peril that maybe you should just leave it just in case you change your mind, which of course is probably not in your interests and is almost certainly in the interests of the party trying to retain the data. And the other is where the terms and conditions that govern the user’s use of the application are eminently fair and reasonable, but they can’t actually be exercised because of the design of the application.
0:46:34KKYeah, there’s many cases where you can see because of where the settings are located in the app and how the information is arranged, that even though technically you might as a user be able to, if you put all the time in and had the expertise, find the settings and use them to your advantage, everything in the design of that interface is stacked against you being able to act in your interests and so require more time and more understanding and more attempts because sometimes these settings are set so that you actually have to continually go back to them and change them all so that they’re so broad that they create unnecessary inconvenience for you. So there are lots of ways in which you would not necessarily be able to say; “that’s misleading”, but it might very well be unfair.
DTI just recently experienced an example of that. I think this morning, I was on a website, I was presented with an option to accept or reject cookies. I rejected them. The application stopped working because I had rejected necessary cookies. So I was like, well, if you reject all of them, then it won’t work. So you’ve got to accept the necessary ones, but there was only an option to accept everything or reject nothing or reject everything. And I suppose if I had drilled down three or four more menu tiers, I probably would have found some ability to select some and not others. But the options that were presented that were easy to find were: make this impossible to use, or, just get on with my day and accept the advertising cookies.
KKYeah, and I think what we’ve seen from the Federal Court so far, in its views on user interfaces, particularly in the two cases, the ACCC brought against Google, doesn’t give us a real confidence that consumers are just going to be protected by the misleading conduct prohibition. In the first Google case, which was the one about the location data, the Federal Court gave that clear opinion that by sort of showing these settings that seem to indicate to the user that by turning location data history off, they were stopping that physical tracking when in fact there was another setting they had to go to. But in the second Google case, the court was acknowledging that when Google created this pop-up notice to be shown to consumers that involved a change in its privacy terms, that it had spent months perfecting this pop-up box with sort of test groups and so forth to be most likely to get user’s consent. And in the process to remove references to privacy and privacy policies and to construct a notice that would require a lot of scrolling down and interpreting of graphics to understand what was going on. And yet that was found not to be misleading. And so I think we couldn’t have confidence that the ACL is going to do the job of protecting consumers against these dark patterns as it stands.
DTOne area that I’m quite interested in terms of how unfair practices might apply to dark patterns is how they might also apply to what we sometimes call anti-patterns. So it’s just bad patterns. Because I think that Google example is a fascinating one because it shows the level of thought that might go into a behavioural nudge that might be regarded by some as a dark pattern or almost rising to the level of misleading the consumer. But I think many developers of consumer applications are not putting nearly that level of thought into user interfaces. And sometimes these dark patterns arise as a result of poor design rather than by intentional design. That the ability for the user to exercise their rights under the terms and conditions or to adequately understand the uses to which their data is put is really because of the design choices or the lack thereof that have been made by the developer rather than by some sort of well thought out process. Because the proposal around unfair practices, there wouldn’t be an intention requirement here would there? In the same way there’s no intention requirement for misleading and deceptive conduct.
0:51:05KKYeah, exactly right. And so that would have a bearing on a couple of matters. It would, as with the misleading or deceptive conduct prohibition, be relevant to what recourse is available and what kind of penalty or absence of penalty would apply if this prohibition were infringed. And it would also have an impact on the fact that from the designer’s side, they would have that higher obligation to not be reckless in the way they’re designing things and to make sure that they have in place processes that ensure that the design actually doesn’t work against the consumer like that.
DTWe’re nearly out of time, but before we finish up, I had two questions for you. One, to return to the fertility apps that you’ve studied. I imagine many of our listeners have used or are using these applications. Knowing intimately how they work, do you have any tips for users in order to keep themselves privacy safe?
KKYeah, I think a couple. Just as a bit of a rule of thumb, you might sometimes find that some of the apps where the developers are based in the EU will sometimes have higher standards. Not always, but because they will be subject to the GDPR on the whole. In some cases we’ll find that they give you better choices and some better treatment, although I didn’t find any that were perfect. In addition to that, spend the time and hunt for the privacy settings because some of them will have privacy settings. They will be set in favour of less privacy and you will have to do the work of switching that to greater privacy in, for example, restricting tracking, which you might also be able to do through your mobile phone’s general settings in restricting tracking for advertising purposes. And when you stop using the app, then also don’t just delete the app itself. You will sometimes find that there is a buried setting that allows you to delete the account and delete the data. And if you can’t find one, get in touch with the company to ask to delete the account and the data itself, rather than potentially leaving it sitting there exposed for a longer period of time. Some of these apps have got long lifestyle questionnaires in them which ask amazingly sensitive questions like; “do you feel safe at home? Have you had trouble paying your bills?”. And so forth; got nothing to do with fertility. Just find the skip or the little X that might be hidden and very faint, speaking of design choices, and avoid answering those unnecessary questions when it comes to what’s needed for your fertility tracking.
DTAll great tips. And my last question for you, for some of our younger listeners who are maybe just starting their career in the law or still at law school, we talked earlier about your unusual journey into legal academia, but for some of our listeners who might be interested in pursuing a career in academia, what tips do you have for them to start them on that journey?
KKWell, I think in this area of law in particular, it’s so important in that interaction of law at the technological frontier in terms of how well we treat each other as humans. And in the context of the privacy issues we’ve been talking about, to understand that privacy is power, and taking away somebody’s privacy is often about control. Whether by governments, even in controlling personal relationships. And that’s what’s at stake when we’re talking about privacy. So these are really critical areas as a matter of social justice and basically how we treat and respect each other as humans. There are some great resources available – to give my own UNSW Allen’s Hub for Technology, Law and Innovation a plug – we have a newsletter that goes around and a great website and run events that focus on technology, law and innovation, of course, and have opportunities for student internships. There are in the area of privacy in particular, other great blogs like the Salinger Privacy Blog and elevenM, where people can keep up to date with what’s coming up and really understand this area as they go along. I think more broadly, I enjoy being an academic in part because of the autonomy that it allows you in choosing what you think is important and being able to take a principled approach to your work. But with all of these kind of plans that we make, as I was saying at the start, so often what you end up doing was not your plan A or B. It might be plan G, who knows? And you adapt to those changes. Very often your specialty finds you – probably always! But when it comes to matters of principle, that’s where you don’t adapt. I think it’s vital to know the difference.
0:56:44DTI think that’s a great tip, knowing what to be flexible about in your career and what to hold fast on. And I’m going to remember that heuristic of privacy is power. I think when we talk about privacy, both in a legal context and just in an ethical one, sometimes we struggle to identify precisely why we care. And we often hear these sort of trite aphorisms of; “well, if you’ve got nothing to hide, why do you care?”. But you’ve really hit the nail on the head with this idea that it is about the balance of power between individuals, between individuals and corporations, between government and its subjects, and the idea that good privacy regulation keeps that balance in check.
KKI think it’s very important to understand those “I’ve got nothing to hide arguments”. Very often what people mean is; I think that I personally would look pretty good because of my fortunate, in quotes, ethnic background, educational background, gender, and the fact that I happen to be born on the right side of the tracks and not taking into account that privacy is actually the most important for the most vulnerable among us, the people who can be particularly excluded or discriminated against and manipulated because of the information that is discovered about them. And that we can’t only look to what are our interests and some misguided idea that any of us have nothing to hide.
DTVery true. Katharine, thank you so much for joining me today on Hearsay.
KKOnly a pleasure.
0:58:23RDAs always, you’ve been listening to Hearsay the Legal Podcast. I’d like to thank today’s guest, Katharine Kemp, for being a part of it.

As you well know, if you’re an Australian legal practitioner, you can claim one Continuing Professional Development point for listening to this episode. Whether an activity entitles you to claim a CPD unit is self-assessed, but we suggest this episode entitles you to claim a substantive law unit. More information on claiming and tracking your points on Hearsay can be found on our website.

Hearsay the Legal Podcast is, as always, brought to you by Lext Australia, a legal innovation company that makes the law easier to access and easier to practice, and that includes your CPD.

Hearsay is recorded on the lands of the Gadigal People of the Eora nation and we would like to pay our respects to elders past and present. Thanks for listening and see you all on the next episode of Hearsay!