[00:00:11.750] – Liz Fraley
Greetings and welcome to Room 42. I'm Liz Fraley from Single-Sourcing Solutions, I'm your moderator. This is Janice Summers from TC Camp, she's our interviewer. And welcome to Dr. Bill Hart-Davidson, today's guest in Room 42. Bill is a professor in the Department of Writing, Rhetoric, and American Cultures, and Associate Dean for Research and Graduate Education in the College of Arts and Letters at Michigan State University.
[00:00:34.150] – Liz Fraley
He earned his PhD in rhetoric and composition from Purdue University. He's a Senior Researcher in the Writing and Digital Environments Research Center, the co-inventor of Eli Review, a software service that supports writing instruction, and the co-founder of Drawbridge, a learning technology company.
[00:00:51.630] – Liz Fraley
Today, Bill is here to help us start answering the question, “How do good communication practices produce better patient outcomes?” Welcome.
[00:01:00.570] – Bill Hart-Davidson
Thank you. It's great to be here.
[00:01:02.680] – Janice Summers
It is really a delight to talk to you and I'm excited to have you here today. So one of the things I wanted to ask you about, and it's something that we touched on before, and it's one of those key things that I think you had said is in all of the work that you've done—which is a lot of work that you've got published—
[00:01:24.030] – Janice Summers
One of the key things you mentioned was feedback and how that plays into the clinical work that you've done, as well as the machine learning I think, also you had tied that in. Could you explain a little bit more about that?
[00:01:42.450] – Bill Hart-Davidson
Feedback is so important to us as writers and as learners. I think it's what you're always looking for as a learner to know if you're on the right path. I've done something and I'm looking for a reaction either from the world outside or from another person that I'm interacting with. I play music and you hit a note and you want to hear if it's the right note—
[00:02:16.050] – Janice Summers
Does it resonate?
[00:02:17.430] – Bill Hart-Davidson
Yeah, and so there's a whole tradition of understanding feedback and systems as a recursive process being the underlying mechanism for learning. But I think it becomes so ubiquitous, so foundational that we can forget just how important it is. We can take it for granted.
[00:02:46.210] – Bill Hart-Davidson
So a lot of my work over the years has just been bringing it back to the surface and asking folks to pay attention to it again, as an important thing. Are people getting the right feedback at the right time to help them meet their goals?
[00:03:04.290] – Janice Summers
Well, how do you manage that? Is that part of the feedback, too: “I'm not getting feedback. I'm not getting the expected outcome, so I have to change”?
[00:03:19.740] – Bill Hart-Davidson
If our little research team has brought anything to chronic illness care as an approach, I would credit several collaborators of mine with it, both from the rhetoric, writing, and technical communication side and from the human medicine side.
[00:03:44.130] – Bill Hart-Davidson
Our research team focuses on heart disease and diabetes. So these metabolic illnesses, they don't resolve or get really serious quickly. When you're treating them, you are really instituting an intervention, you're doing something different. Whether that's taking a medication, or changing your behavior, changing what you eat, changing how much you exercise. Then for a long time, maybe nothing happens that directly impacts your outcome, and so the question is, “Am I doing the right thing?”
[00:04:24.020] – Janice Summers
Right. How do you get feedback in that void? There's a huge gap.
[00:04:28.610] – Bill Hart-Davidson
So that feedback, if it's missing, is often the thing that gets people to go off-course. They're like, “Well, I don't know if this is even doing anything,” and so maybe then they'll stop doing it. So it could be a lack of feedback. It could be just the timing of it. It could be that the feedback they're getting is confusing. They don't know if it's the right thing that they should be doing or not.
[00:05:02.800] – Bill Hart-Davidson
I think I offered this analogy when we spoke before. If we think about chronic illness as a learning problem, that is, we have to learn how our body responds differently, make some changes, and see if those changes pull it back into normal range, then it's a learning problem that's going to take a long time.
[00:05:27.270] – Bill Hart-Davidson
So what we have to do is make sure that all the players involved, whether it's the healthcare provider, the patient, all of their people who play a role in that—the pharmacist maybe—are on the same page about whether we're monitoring these things adequately to make sure that what we are doing with a care plan is yielding the right results. It's not easy to do.
[00:05:59.450] – Janice Summers
Well, and the interesting thing, too, is because that's feedback-driven, and that feedback is important to make course corrections and changes, is it possible to insert intentional feedback points that reinforce behaviors?
[00:06:20.200] – Bill Hart-Davidson
This is where our expertise as writers and communicators, really… If you think about how we set up editorial processes. So if we're a technical communicator working with a subject matter expert, we might actually ask them for shorter pieces with more iterative loops so that we can translate more often.
[00:06:48.400] – Bill Hart-Davidson
The technical information builds up and we go, “Okay, wait a minute. Before we go any further, let me check and make sure I understand what you're saying, because I have to communicate this to an audience of non-specialists.” That's a common scenario.
[00:07:04.710] – Bill Hart-Davidson
My strategy for that as a technical writer is going to be to go a little bit shorter in terms of the interval between feedback interventions. That's really what we're doing with chronic illness: we are being more intentional and introducing more frequency to the feedback moments we get.
[00:07:32.700] – Bill Hart-Davidson
So if you are a typical patient being treated for diabetes in the US, it's recommended that you see your doctor once every three months. You might see them less often than that if you have fewer risk factors, maybe once every six months or once a year.
[00:07:56.610] – Bill Hart-Davidson
If you think about having to make a bunch of daily decisions about eating, about exercise, that's a lot of decisions over six months. And it's only in that seven minutes you get to talk to the doctor, if everybody is paying attention, that you maybe get the information you're looking for to say, “Is everything I'm doing now okay? All these four medications I'm taking, plus all these diet modifications I've made, plus…”
[00:08:33.090] – Bill Hart-Davidson
So what we have done with our clinical trial is exactly what you mentioned. We've tried to make sure that those patient encounters with the provider are much more intentional. We use a simple tool for that, a checklist, so that we ensure that each time they talk to their provider at three months, at six months, at nine months, they're talking about those four or five key things every single time in order of priority. So that's one thing that we're doing.
[00:09:06.710] – Janice Summers
So nothing's left for chance.
[00:09:10.170] – Bill Hart-Davidson
Yeah, because that's a very important meeting, and if somebody is just having an off day, you might forget one of those things: “Oh, yeah, we didn't talk about my lipid panel,” or “I don't know how my cholesterol is doing this six-month period, and so now I have to wait six more months for that?” That seems like a bad idea.
[00:09:33.710] – Bill Hart-Davidson
That checklist is one of the key things. Our study is called Office GAP, Office Guidelines Applied to Practice. The checklist is just the standard guidelines that all of these providers know for treating diabetes and cardiovascular disease, and so we're not introducing any new information. We're just putting it in order and putting it in a list.
[00:10:00.410] – Bill Hart-Davidson
Most of the time, the providers are like, “Actually, this is great, because now I can keep these patients on track, and I can remember not to forget anything.” If you get handed off from a nurse practitioner to a physician, maybe the nurse practitioner does one or two of the items. That's fine.
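The Office GAP checklist idea can be sketched as data: a fixed, ordered list of guideline items that gets reviewed at every visit, regardless of who runs it. The item names below are illustrative stand-ins, not the study's actual checklist.

```python
# Hypothetical sketch of a priority-ordered visit checklist.
# Item names are illustrative, not the Office GAP study's real list.
CHECKLIST = [
    "blood pressure",
    "HbA1c",
    "lipid panel",
    "medication review",
    "diet and exercise plan",
]

def visit_summary(discussed):
    """Report which guideline items were covered this visit and which were missed."""
    return {
        "covered": [item for item in CHECKLIST if item in discussed],
        "missed": [item for item in CHECKLIST if item not in discussed],
    }
```

Because the list is the same at every visit, a nurse practitioner can complete the first items and a physician can pick up the rest, and nothing falls through the gap.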
[00:10:23.900] – Janice Summers
They're all on the same page.
[00:10:26.230] – Janice Summers
Now, are the patients also collaborators on this list of questions?
[00:10:30.630] – Bill Hart-Davidson
Yeah. If you think about the existence of the checklist as a feedback mechanism, it's two-way. What can also happen is let's say you go in—this happens quite a bit—you're going down the list and you get to blood pressure. That's one of the items.
[00:10:54.800] – Bill Hart-Davidson
The patient says, “Well, my blood pressure is out of range. So are we doing everything we need to be doing? Because I don't see on here what we might be doing differently.” So the doctor can receive that feedback from the patient and go, “You're right. We might actually change the dose here and see if that will bring you into range for the next time that we meet.”
[00:11:27.190] – Bill Hart-Davidson
There's an element of the checklist that allows for a shared responsibility for the care plan, and especially for folks who might be… There's a power differential between the patient and the provider in many cases. So the checklist opens up a possibility for you not to necessarily challenge your provider, but to say, “Are we doing everything we can here? Because I'm still not seeing what I'd like to see in terms of the results.”
[00:12:00.130] – Janice Summers
So it's giving a little more equal power?
[00:12:06.160] – Bill Hart-Davidson
Yeah. In the research group that I'm part of, this is actually one of the underlying principles. We really have three underlying principles. One of them is shared decision-making. We think that shared decision-making in chronic illness is not optional, and that's because most of the time you are managing your own care plan.
[00:12:38.360] – Janice Summers
Yes. It's what you do when you're not in the doctor's office.
[00:12:44.550] – Janice Summers
You have to be an owner of it.
[00:12:46.760] – Bill Hart-Davidson
You do. You have to understand it first. You have to know what each of the interventions you're being asked to do is for in a straightforward way, and then you have to know if they're working or not, and so that's really what the checklist helps you do. So shared decision-making is one.
[00:13:04.070] – Bill Hart-Davidson
The other two are…we can think about them as a nice Venn diagram. We want to be right in the middle. One is evidence-based practice. So we want to make sure all the providers are using the most up-to-date evidence-based interventions.
[00:13:21.880] – Bill Hart-Davidson
For that, it's surprising that… These providers are very busy and the technology and the medications change really quickly. So we just want to make sure everybody is on the same page there. That they know that for a person with this particular set of diagnoses, this is the standard of care that we should be doing.
[00:13:50.340] – Bill Hart-Davidson
Then the third pillar, or the third circle in our Venn diagram, is really trying to address medical inequities. Inequities that are based on a variety of factors, really. In Michigan, we are attentive to factors that relate to socioeconomic status as well as race and gender in terms of care outcomes, but also geography.
[00:14:23.810] – Bill Hart-Davidson
We talked a little bit before we started about the EP [test that checks for abnormal heart rhythms], and we have some clinics in our trial in northern Michigan. The sheer fact of the distance that folks have to travel to see their doctor or to get to a pharmacy in some of those more remote areas makes a big difference in how often and the overall relationship they have with their care provider, and that actually leads to care disparities.
[00:14:56.030] – Bill Hart-Davidson
So for those folks, that's one of the reasons why another element of our trial is the text messaging. So to get back to feedback, they get daily text messages from our service and that's where we were talking about technology playing a role. Another intervention that is communicative in nature is just letting folks know that the practice that they engage with once every three months face-to-face is also there every day.
[00:15:33.780] – Bill Hart-Davidson
The messages that we send are really reminders of that. They do have some informational content. But if I had to guess in terms of a hypothesis, what we have suggested is, it's the pulse of the feedback being steady that is as important as any of the individual message content.
[00:16:01.230] – Janice Summers
So it's the consistency and steadiness, rather than something sporadic and inconsistent?
[00:16:07.500] – Janice Summers
And it's using the technology to push information out to create that feeling of engagement?
[00:16:17.130] – Bill Hart-Davidson
Yeah. They have a real connection with, in this case, our study, but also, because they're enrolled in the study through their care provider's office, with the office that they go to. So now they're getting a regular message, and they can set the threshold. They get at least one a day, but we don't want to annoy them either. They don't have to do anything with most of the messages, although there are some they respond to, in order to check on that engagement.
[00:16:59.130] – Janice Summers
So you're using the machines to simulate that feedback?
[00:17:03.530] – Bill Hart-Davidson
Yeah. They can respond back whenever they want to, and there's someone listening. It's a real service in that way, that if they said, “I have a question about getting refills on my medication,” then that message would get to their provider, and then the next day they would get a call and say, “Yeah, how can we help?”
[00:17:30.740] – Bill Hart-Davidson
I would say…Try not to overthink this, but what does it take to make people feel cared for in healthcare? One of them is just knowing that you're there. Knowing that you're listening when they need you. Knowing that this care plan that you've created together is something that is on their minds and on our minds, and we want them to succeed.
[00:17:58.650] – Janice Summers
Now, this is very… to me sounds very human to human. Can you apply the same feedback to machine learning? How would you do that?
[00:18:10.170] – Bill Hart-Davidson
Yeah. I think that it gets a little tricky. We've tried to stay on… I'll give you a few terms of art and then we can get into where I think that line lives. So in the machine learning world, there are two flavors that people often talk about. One is supervised learning, one is unsupervised learning.
[00:18:37.120] – Bill Hart-Davidson
Unsupervised learning is…you just feed it a bunch of data. It learns on its own, and it does its own thing. Supervised learning is usually involving some element of human touch and human regulation of the quality of the outcome. It's also sometimes called human-assisted versus human replacement technology.
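The two flavors can be contrasted on toy data: unsupervised learning finds structure in unlabeled data on its own, while supervised learning fits a rule to labels a human has supplied. The readings, labels, and threshold rule below are invented for illustration, not from the research described here.

```python
# Toy blood-glucose-style readings; all numbers are invented for illustration.
readings = [5.1, 5.4, 9.8, 10.2, 5.0, 9.9]

# Unsupervised flavor: no labels. Let the data split itself around its own
# midpoint -- a crude stand-in for a clustering algorithm doing "its own thing."
midpoint = (min(readings) + max(readings)) / 2
clusters = [int(r > midpoint) for r in readings]

# Supervised flavor: a human labels each example, and the decision rule is
# learned *from* those labels -- here, a threshold halfway between the classes.
labels = [0, 0, 1, 1, 0, 1]  # human-reviewed: 1 = elevated
hi_normal = max(r for r, y in zip(readings, labels) if y == 0)
lo_elevated = min(r for r, y in zip(readings, labels) if y == 1)
threshold = (hi_normal + lo_elevated) / 2
```

The human touch lives in the `labels` list: change what the humans say is "elevated" and the learned threshold moves with it, which is what makes the supervised version a human-assisted technology.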
[00:19:10.430] – Bill Hart-Davidson
So we've always been on the side of supervised and technology that is part of a human ecosystem rather than substituting for one. I have high hopes and expectations for those kinds of hybrid systems, in part because we use a lot of them now. If you let computers do what they're good at and you let humans do what we're good at, good things can result. But I think if you ask either of us to do the other's job, that's when things start to fail.
[00:19:56.670] – Janice Summers
That's when the wheels fall off. Because there's so much going on. I think the example that you talked about, the clinical trial with the question sheet, and then you brought in the texting. Now that can be automated. That is an extension of human interaction. I can feel things. I'm still involved on the receiving side, so I can still monitor things.
[00:20:22.720] – Janice Summers
But I'm letting you know and I'm treating everyone the same. So I'm not forgetting, it's like the checklist. I'm not forgetting, people aren't falling off of the grid here. So it allows me as a care person to try and manage a lot of people in remote situations.
[00:20:42.290] – Bill Hart-Davidson
That's right. So if you think about what you would do, ideally, in a practice where you're caring for lots of people, you'd see them every day and you'd ask them how they're doing. It's not feasible for a lot of reasons. It might not be even desirable. Maybe the people don't want you doing that every day. But that kind of check-in does afford feelings of care, and the feedback that you need to make sure everything is going okay.
[00:21:14.710] – Bill Hart-Davidson
So what we're asking is, how might we supplement that? How might we get to something that approximates that, something that's okay and feasible today? That's part of where the text messaging comes from. I think critical to that is we don't claim, nor do we present, this technology as substituting for other kinds of interactions with your provider. The patient knows that it's just a robot sending a message. They know what it is. They like it anyway.
[00:21:53.270] – Bill Hart-Davidson
We designed all the messages. So the robot doesn't say anything spontaneous. It says only the things that are in the American Diabetes Association guidelines for self-management. So what it's saying is a curriculum, in a way, that we wrote. Of course, it only says things when we tell it to at nine o'clock, at noon or at six o'clock each day. It doesn't text you at inappropriate hours or anything like that. So it's got rules that we've-
[00:22:31.410] – Janice Summers
Or sends you nasty messages. “Locked you out of the bay door, Hal.” [Reference to the movie 2001: A Space Odyssey]
[00:22:34.510] – Bill Hart-Davidson
We don't want it to be a threatening presence at all. Yet what's really interesting is people can still recognize that as a genuine expression of care on the part of the people involved.
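The rules Bill describes, a curated message bank and fixed send times, might look something like this sketch. The message texts and the exact times are assumptions for illustration, not the trial's actual content.

```python
from datetime import time

# Hypothetical rule-bound sender: the robot says only pre-written things,
# and only at approved times of day -- never spontaneously, never at 3 a.m.
SEND_TIMES = (time(9, 0), time(12, 0), time(18, 0))  # 9:00, noon, 18:00

MESSAGE_BANK = [  # curated by the care/research team; illustrative texts
    "Your care team is rooting for you. We're proud of your success. Hooray!",
    "Helpful hint: a short walk after meals can help manage blood sugar.",
]

def may_send(now):
    """A message goes out only at a scheduled slot, never at odd hours."""
    return now in SEND_TIMES

def pick_message(day_index):
    """Rotate through the curated bank; the robot never improvises."""
    return MESSAGE_BANK[day_index % len(MESSAGE_BANK)]
```

Keeping the bank and the schedule as plain data is the design point: everything the robot can ever say was written, and timed, by people.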
[00:22:51.030] – Janice Summers
That is interesting, too. That you talk about genuineness. Because I think it's how you write, the words you choose, and how you phrase things that lets people know that it is a genuine concern, even though it is automated. It's a machine sending you this message. It's in the phrasing. The words you choose.
[00:23:13.190] – Bill Hart-Davidson
Yeah. So we have different messages, for example. Some of them are factual, they're little helpful hints. Some of them are purely motivational. So one of my favorite ones that we put in there a lot is a variation of “Your care team is rooting for you. We're proud of your success. Hooray.” That's it.
[00:23:39.790] – Janice Summers
That's a nice one.
[00:23:41.440] – Bill Hart-Davidson
Right. That puts a smile on both sides' faces. They're like, “Oh, yeah, we are. We do root for our patients.”
[00:23:53.440] – Janice Summers
Yeah, they do care. And we all know what that's like to feel like somebody's in our corner and rooting for us, even if we're out there feeling alone.
[00:24:04.110] – Janice Summers
So I think it helps with that emotional connection, and it helps support our actions, even though we know it's an automated machine. It's like reading memes. We like them because they help tap into emotions that we're familiar with. They give us comfort.
[00:24:18.870] – Bill Hart-Davidson
That's right. I don't think we need anything more than that. Meaning I don't think people expect or need a text messaging robot to pretend to be a human. Nobody's going to be fooled by that. Not anytime soon.
[00:24:39.490] – Janice Summers
And wouldn't they feel insulted?
[00:24:41.310] – Bill Hart-Davidson
They would. What if they find out? What if they figure like, “Am I not important enough to you?” But if you just say, “Look, this isn't a person. You're just going to get a reminder every day, but if you need us just respond back and we'll hear that.”
[00:24:58.770] – Janice Summers
Yeah. There is a human on the other end monitoring feedback.
[00:25:04.090] – Liz Fraley
It's opening the door for them to talk back to the care team.
[00:25:08.800] – Bill Hart-Davidson
Right. In the same way that we talked about the checklist doing. The checklist attenuates that power relationship; the text messaging service, always being there on their phone, even if they don't use it, attenuates that time and space problem of “I haven't talked to my doctor in months.” Well, I could, though, because I get a message every day.
[00:25:36.080] – Janice Summers
And I can reply back to that message and there's a human back there monitoring replies back. So it's not a text message that says you can't reply back to this text message.
[00:25:51.350] – Bill Hart-Davidson
It gives you a message that says… It depends. It listens for some keywords, especially for emergency-related keywords, and if it hears some of those things… So there is a little bit of fancy technology going on. If it sees some of those, it says, “If you're having an acute problem, you should call 911,” so we make sure that they know that this isn't an emergency service. It doesn't substitute for that.
[00:26:20.230] – Bill Hart-Davidson
But if there is a genuine need there, as I mentioned, like, let's say it's about needing a refill for a prescription. They'll get a call back and the message will tell them that. It'll say, “You can expect to be called tomorrow by the office.” So that's pretty reassuring, and I think especially now when it can be very frustrating to have to go through phone trees and call automation. It's more straightforward.
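The keyword triage described here could be sketched as follows. The keyword list and the reply texts are illustrative assumptions, not the service's real rules.

```python
# Hypothetical triage for incoming patient replies: scan for emergency-related
# keywords first, then route everything else to a human on the care team.
EMERGENCY_KEYWORDS = ("chest pain", "can't breathe", "emergency")

def triage_reply(text):
    """Return the automatic acknowledgment for an incoming patient message."""
    lowered = text.lower()
    if any(keyword in lowered for keyword in EMERGENCY_KEYWORDS):
        # Acute problems are redirected immediately, not queued for the office.
        return "If you're having an acute problem, you should call 911."
    # Everything else is flagged for a human to follow up on.
    return "Your message was sent to your care team. You can expect a call tomorrow from the office."
```

Note that neither branch pretends to answer the patient's question; the automation only routes, and a person does the actual caring.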
[00:27:03.240] – Bill Hart-Davidson
On the AI [artificial intelligence] side, I don't know how you all feel about this. I prefer my robots to be partners rather than substitutes for humans.
[00:27:19.110] – Janice Summers
Yes. I'm with you on that one, because even if you think about it, we have a couple of roles and responsibilities as technical writers. That's to make the complex easy to understand for the general audience who needs to be informed and instructed on whatever it is. Technology, medicine, everything.
[00:27:38.400] – Janice Summers
But the other underlying key role for technical writers is to establish trust, and not violate that trust. Because humans have to act on the information, they need to believe what we're telling them.
[00:27:57.930] – Janice Summers
And I think if you remove humans… I like the human-assisted AI and not the one that's left on its own because if I'm caught into a loop where it's just all AI, I am frustrated—
[00:28:15.010] – Liz Fraley
Pressing that pound key or zero to try and get out of it.
[00:28:18.470] – Janice Summers
Yeah. So I agree with you.
[00:28:23.450] – Bill Hart-Davidson
I'm the same way. We can relate that back to feedback, too. I always tell people that my insights for how humans come to trust each other were changed for me based on another hobby I have, which is riding bikes. When I started cycling in a group, especially if you're going fast, one of the things that you quickly pick up is whose wheel you want to be behind and whose wheel you don't.
[00:29:00.030] – Bill Hart-Davidson
The formula is very simple. The trustworthy wheel is someone who is predictable, who behaves the same way in the same situations every time. The person who you can't predict what they're going to do, that's the wheel you don't want. If they're grabbing their brakes one minute and speeding up the next minute. No one's following that person. I can tell you that right now.
[00:29:28.200] – Janice Summers
No. How can you? You can't adjust. They'll kill you.
[00:29:34.190] – Bill Hart-Davidson
So we learned to trust based on repeated feedback and I think it's that simple in many cases. Now, the circumstances under which we are monitoring that get really complicated. Part of what makes it obvious in a cycling context when you're moving really quickly is that it stops being complicated because you don't have a choice. You're just watching whose bars are moving all over the road and who's going at a consistent rate of speed. So the variables are fewer.
[00:30:13.930] – Bill Hart-Davidson
But it's the same. It's essentially the same, and one insight that we've had while we're doing some of this work with… I like the word machine learning rather than artificial intelligence because I think AI is an umbrella term and it connotes that autonomy that I'm not all that comfortable with. Machine learning is a little more reserved, and I think it's a little more accurate for describing what people do with the technology anyway.
[00:30:57.610] – Bill Hart-Davidson
But one of the interesting things is that machines learn and become reliable, or produce reliable results, I'll say, the same way. That is, they need to see, over and over and over and over and over again, an example of this thing being this category or this label.
[00:31:23.930] – Bill Hart-Davidson
So if you talk about a data set for training a machine learning classifier, it's going to have some number, thousands, or tens of thousands or hundreds of thousands of examples of this equals this. That's really all that these training sets are, and it will start to form a relationship between those two things. The more examples it has of this being these features equal this label, that's how that classifier builds its own internal confidence, if you will, to label that thing as that. Well, we do the same thing. The remarkable thing is that we do it with far fewer examples.
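The “this equals this” idea can be made concrete with a bare-bones frequency model; this is not any particular production classifier, and the feature and label strings are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy illustration: a training set is just many repeats of
# "these features equal this label," and the classifier's internal
# confidence grows from how often it has seen that pairing.

def train(examples):
    """Count how often each feature string co-occurs with each label."""
    counts = defaultdict(Counter)
    for features, label in examples:
        counts[features][label] += 1
    return counts

def predict(counts, features):
    """Return the most frequent label and a confidence built from repetition."""
    seen = counts.get(features)
    if not seen:
        return None, 0.0
    label, n = seen.most_common(1)[0]
    return label, n / sum(seen.values())

# Nine examples say one thing, one says another; repetition wins.
model = train([("elevated glucose", "diabetes")] * 9
              + [("elevated glucose", "healthy")])
label, confidence = predict(model, "elevated glucose")  # "diabetes", 0.9
```

A real classifier generalizes over feature vectors rather than exact strings, but the underlying mechanism is the same: association strength accumulated from thousands of repeated examples.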
[00:32:08.800] – Janice Summers
I was just going to say, so we do this with a lot less data. A lot less empirical data.
[00:32:16.000] – Bill Hart-Davidson
Humans jump to those conclusions, but they also intuit those relationships much more quickly, at least right now. I shouldn't say quickly; I mean with fewer examples in the data set, because the computer can process a million examples in the time I can look at three carefully. But we might form the same response, me based on a few and the computer needing tens of thousands. So go humans, at least for now.
[00:32:55.980] – Janice Summers
Yeah, go humans. Well, and I think too, always keeping them in the loop because I think there's a dark side to that. I have a problem with AI as well. I don't like the word artificial. When you think in terms of machine learning versus artificial, it is actually not artificially building, it's learning through data that's tangible.
[00:33:22.580] – Bill Hart-Davidson
Yeah. If you wanted to be even more accurate with that, I would say it is finding statistically relevant, or statistically more consistent, associations: this with that, to some level of statistical measurement. All of those are human decisions. What statistical threshold you deem appropriate for that, for example, is one.
[00:34:06.990] – Bill Hart-Davidson
Another is what data you're feeding it. Your data set is, if it was created by humans, it's going to have human weaknesses in it. I think we hear that critique a lot, and I think we should pay attention to it more in this way. Whenever we see something that's being generated for us, based on a recommendation from a classifier, we should ask, how is the classifier trained? What was the data set used to train it?
[00:34:43.530] – Bill Hart-Davidson
Because that data set and how it was processed, how it was gathered, what its origins are is going to tell us a lot about the human beings who made the stuff to train the classifier to begin with.
[00:35:01.890] – Liz Fraley
Kind of funny. We talked about the checklist, and consistency through structure, through repetition, and the ability to use feedback just to give novice people a way to talk to expert people. All of these things seem very almost impersonal. You use the same checklist for everyone, regardless of who they are. And yet, you're finding a better relationship through these structures, this consistent patterning. It's interesting.
[00:35:37.780] – Bill Hart-Davidson
Yeah. I like that juxtaposition. I think you're right. We're trying to create space in almost all these instances for the people to do people things. I will say this: it means acknowledging where humans have weaknesses, in our perceptions, in our processing capacity, in our biases, and recruiting technologies that might help us make up for some of those weaknesses.
[00:36:15.390] – Bill Hart-Davidson
But then not going so far as to imagine that we can replace all of the good things that humans do with technologies that are equally limited and have those kinds of damages. So yeah, I think that's a great point, Liz. I don't know if I've solved where that balance point would always be.
[00:36:40.530] – Bill Hart-Davidson
But I do try to think of it in straightforward terms. So, for example, you're right that there are ways in which a lot of the techniques I'm talking about are associated with things that people maybe think of as regressive ideas now, like behaviorism, where you put rats in a maze with a piece of cheese.
[00:37:09.770] – Bill Hart-Davidson
We are actually biased towards behavior in a lot of the work that I have done, rather than disposition or psychology. I want to know what people do over and over and over again because what people do is a good predictor of what they will do. In many cases, we want to change what they do, either to get them to practice more if I'm being a teacher, or practice slightly different things, or to eat differently, make different choices at mealtime.
[00:37:55.670] – Bill Hart-Davidson
So if I want to influence behavior with my writing, with my communication, I have to also think about what their communication behaviors are, if that makes any sense. So I would accept the term that I'm kind of a neo-behaviorist. I don't know if that's a thing—
[00:38:19.130] – Liz Fraley
I was thinking more, and it wasn't until you were just talking… You're sort of pulling out the stuff that's chores. Chores. You're pulling out the chores to make space for people to be people again and not focus so much on the chore.
[00:38:33.770] – Bill Hart-Davidson
Yeah. That's really true. Because one of the things that computers are really good at and that humans have a harder time doing that the checklist helps with is just remembering more than five things. We know that magic 5-7 number. But now, put it into a busy clinic, and you have 14 patients lined up in your workday and you get them back to back with no breaks.
[00:39:03.880] – Bill Hart-Davidson
You're charting and answering questions and looking up who's got what healthcare coverage for what kinds of things between every single visit and you're slowly falling behind. You're texting your kid because your kid's sick and you have to pick them up from school. Now, how hard is it to remember to do the same five or six things with every patient every single time? So that checklist just stabilizes that for you.
[00:39:36.730] – Janice Summers
The automation is a blessing. It's a necessity because we're all in information overload, and we can only handle so much information as humans, processing at one time. We can only have our cognitive awareness on so much at one time. So having the checklist helps balance that so that we're not forgetting things.
[00:40:02.980] – Bill Hart-Davidson
Yeah, and it doesn't make you less human to use it. That's the interesting thing. It allows you to be more human. You can go, “Okay, how are you?” I know I'm not going to forget the things because they're right here.
[00:40:18.380] – Janice Summers
I've got seven minutes for a meeting. This will give me a chance to say, "Hey, how are you?" Because if I've got to get through these things, that'll give me a little extra time. Now, what happens when… Because this is very much humans interacting with automation. Humans interacting with machine learning. Human-supervised, as you said. What happens if the humans are removed?
[00:40:46.050] – Bill Hart-Davidson
It can get to be a mess. There are always going to be human motives behind that, and so I would say it's rarely the case that humans are removed entirely. We have to then ask the critical question about whose interests are being served here. So that's where these ethical principles that we try to work with… They act as real boundaries to what we would introduce as an intervention in a healthcare setting because we're trying to decrease medical disparity, not increase it.
[00:41:31.090] – Bill Hart-Davidson
There are medical cures you could offer that would make some people healthy and leave a lot of people behind, and we've generally opted for a lot of those when we're talking about chronic illness. That's why we have medical disparities now.
[00:41:49.010] – Bill Hart-Davidson
We even have biases in the diagnostic tests themselves, which mean that for people with certain traits in their red blood cells, for example, ones similar to sickle cell trait, the standard tool for detecting and diagnosing diabetes works less well, and that disproportionately affects African Americans. That test is the HbA1c.
[00:42:20.250] – Bill Hart-Davidson
Well, that explains a fair amount of the disparity in diagnosis and treatment of that disorder in the African American population. That was always there. That was always an error. But we get ahead of ourselves and say, well, all you need is this automated A1c machine and everyone does it.
[00:42:41.360] – Bill Hart-Davidson
If you're in range, you're fine, and we don't stop and ask, "Well, are we fine? Isn't A1c a lagging indicator of the thing we're actually trying to detect?" It's pretty reliable, unless your red blood cells have this other trait, in which case it's suddenly not reliable.
[00:43:04.030] – Bill Hart-Davidson
And so in that line of reasoning, you can see how, on the path of automation and with good intentions, as long as the statistics look good, well, this is going to catch 80 percent. Well, what about the other 20 percent? We can actually worsen some of the social problems we're trying to solve if we're not careful.
[00:43:32.280] – Bill Hart-Davidson
You have to be values-driven and willing to examine what your motives are whenever you're introducing these powerful technologies at scale, because every one of them has the power to increase inequity and disparity rather than level it out, even if that's what you're trying to do. That's how I think about it, anyway. I know it's a bit heady, but…
[00:44:12.970] – Janice Summers
Well, I think it's important that the human factor stays with all of the automation, all the clever things we do, and all the machine learning. We still need to keep humans at the center. That's who we're serving. But on the back end, we need to supervise those machines, not abdicate the responsibility, and carefully reexamine every step along the way.
[00:44:51.370] – Bill Hart-Davidson
Yeah. The other useful thing I would add is this: I mentioned before that feedback is often a two-way process. The critical thing with all of these artificial intelligence technologies, or embedded smart devices, whatever we might want to call them, is that they have as much power to gather and aggregate information as they have to send it. That question also has to be asked: in whose ultimate benefit is that going to be used?
[00:45:46.750] – Janice Summers
All the data that… We're being surveilled; our smartphones are watching us.
[00:45:56.650] – Bill Hart-Davidson
I'm a fan of using technologies like a checklist, and I do think of a checklist as a simple technology—
[00:46:03.370] – Janice Summers
It is a technology.
[00:46:05.560] – Liz Fraley
I'm a huge checklist fan.
[00:46:07.650] – Bill Hart-Davidson
And it's got a great benefit, which is that it attenuates that power structure we talked about, the patient-provider dynamic, without gathering any creepy data about the patient. It's just a checklist. But if we move that checklist to the iPad and the phone and then ask people to interact with it, all of a sudden it becomes potentially an object of surveillance.
[00:46:38.650] – Bill Hart-Davidson
So the question is where we draw that limit. Well, for us, we want that shared decision-making benefit to be what we're… Again, it's another polestar for us. Decreasing medical disparity and increasing shared decision-making mean that we're going to opt for a less smart technology in the room with the patient in that instance. Because otherwise you take what could be a net benefit of increasing trust and decreasing the power disparity and turn it into the exact opposite.
[00:47:16.710] – Bill Hart-Davidson
If I were filling out an electronic checklist in that exam room, I'd be asking, where's the data going? Who am I really doing this for? All of a sudden this isn't about me. It's about the fact that you're collecting information about a member of a population you think I'm part of. I'm smart enough to know where that data is likely to end up, but even I don't know what they're going to use it for.
[00:47:49.850] – Janice Summers
No. You don't.
[00:47:53.650] – Bill Hart-Davidson
All of a sudden, what could have been a trust-building moment becomes the opposite of that. So that's what I'm talking about where I don't know if we think all those implications through all the time, but we should.
[00:48:11.830] – Janice Summers
And if data is being collected, are we informing people? Are they being informed? Not everybody is going to be as wary. They could innocently fall into being tracked, with other data being collected that they're oblivious to.
[00:48:34.870] – Bill Hart-Davidson
Yeah. Maybe the defining issue of the day with using these technologies in our homes and in our lives is this: when I first started thinking about technical communication as a career, one of the interesting things was that almost all of the technology we worked with was encountered by people in their workplace.
[00:49:09.170] – Bill Hart-Davidson
It was often the case that tech comm was synonymous with workplace writing because the home was not a place where all of these technologies were.
[00:49:22.130] – Janice Summers
Right. That's changed.
[00:49:23.870] – Bill Hart-Davidson
Man. What a massive change. Now it's almost the opposite.
[00:49:32.070] – Bill Hart-Davidson
Your home is full of devices and data streams and all these other things that you bring into your life outside of work, or that merge the two and make your life constantly work, or, if you're me, inescapable.
[00:49:56.190] – Bill Hart-Davidson
We redid this kitchen a couple of years ago, and we got a new stove. It was maybe 18 months old when it started doing weird things. It would just turn off randomly. It turns out, I did some Googling and found out, it had been designed in such a way that if you have the oven on and you open the door just right, it causes a rush of air to sweep up.
[00:50:36.390] – Bill Hart-Davidson
And over time, if you do that too many times, it fries the control board in the stove, the one that allowed the stove to connect to the internet and also controlled things like whether the stove was on or off. I had never set my stove up, by the way, to communicate with the internet. I don't need my stove to have Wi-Fi.
[00:51:00.760] – Janice Summers
I don't need my refrigerator to be connected to the internet. I don't need to control my refrigerator from my smartphone.
[00:51:07.420] – Bill Hart-Davidson
No, and in fact, it sounds vaguely dangerous.
[00:51:14.890] – Bill Hart-Davidson
It's fire. It's a box of fire, as I always like to say, and I don't need a box of fire connected to the internet. There's a metaphor there, I'm sure. My point is, we got rid of that stove, and I have one now that has no electronics in it whatsoever. Not even a clock, because I don't need a clock in my stove. I have a clock right above the stove.
[00:51:45.850] – Janice Summers
I used to have an old Wedgewood stove. You know what those are. Wood burning and then they have the gas conversion. No technology on that stove at all. You had to manually light the burner.
[00:51:59.770] – Bill Hart-Davidson
The box of fire. The crazy thing is you have to go out of your way now to find those devices. But I think that's probably the thing, believe it or not, reflecting for folks who identify with these trends in their lives as technical communicators: early in my career, I was always thinking about helping people integrate technology more. Now I find I'm asking, "Are you sure you want Wi-Fi?" I'm almost doing the opposite.
[00:52:40.240] – Janice Summers
Hasn't it switched to now it's like, let me help you control your life so the devices aren't running you? Let me help you be aware that every time you're plugging into these things, they're collecting data, and you don't know where that data is going. The whole surveillance thing.
[00:53:00.720] – Bill Hart-Davidson
In one of my jobs at IBM, I worked in a group that made enterprise software and worked on workflow design. Back then, it was all the rage; all the people who had service contracts with IBM Global Services were asking, how do I automate my workflows? And now everybody is asking, how do I simplify all these damn automated workflows?
[00:53:25.050] – Janice Summers
You're right. How do I simplify? It's all about simplification.
[00:53:30.240] – Bill Hart-Davidson
Yeah, so we're taking all those things out.
[00:53:32.800] – Janice Summers
Yes. It's the new evolution of it, which is fine. That's the whole thing about tech comm. It's ever-evolving.
[00:53:44.150] – Janice Summers
Thanks for joining us.
[00:53:45.760] – Bill Hart-Davidson
I very much appreciate the opportunity, and I'll look forward to the next time we get to talk.
[00:53:53.300] – Janice Summers
Yes. Same here. Bye, everybody.
[00:53:55.380] – Liz Fraley
So long, everyone.