The Importance of Reconsidering

This blog has largely been about informed consent i.e., reconsidering things after a while and determining whether to give informed consent now, or withhold it. But reconsidering things is important in itself, because you can consent to events or dissent at one point in life, but later see the need to think again. Prejudices and knee-jerk reactions, particularly those born of fear, may form a regrettable foundation for the future. New technologies often spark these reactions. You can easily get by, guided by suspicion from of old, without reconsidering the matter. Here I deal with several changes which have forced me to reconsider my attitude toward new technology. Two news articles in recent Globe and Mail editions have caught my eye. They report on ways technology and humans can work well together. One has to do with a human-based behaviour, interpreted or explained by neuroscience. This behaviour has improved safety at underground transit systems. It focuses the train staff’s mind on a routine task by moving the body intentionally: the trainman, observing that the subway train is about to stop, stands, opens a window, and extends an arm outside the window, and points at a green arrow on a wall opposite. This helps align the train with the platform, and minimizes the gap between them. The human is like a mechanical device which augments the engineer’s ability to see that the train is properly aligned. According to the article, the second trainman will be eliminated by an automated system, leaving only the engineer overseeing the train. The juxtaposition of human-based methods (enabled by learnings from fMRIs and other devices), and their replacement entirely by technology, is a sign of the times.

Riders may have noticed these subway guards pointing out the window to green triangles stuck on station walls when the train stops. This is the “point and acknowledge” system that ensures the train is in the right spot before a guard opens the doors. Passengers risk falling onto the power rail if the train is not spotted properly to the platform, or if doors are opened on the wrong side.
The system consists of four steps – stand up, open the window, point to the green triangle, then open the doors – and taps into how the brain processes information, which helps explain why it has led to a dramatic safety improvement.
Inspired by the Japanese shisa kanko method, also known as pointing and calling, the TTC adapted its system in July, 2014.
Within seven months of its introduction, the TTC saw a 50-per-cent decrease in critical safety incidents, a rate that has since held steady….
https://www.theglobeandmail.com/canada/toronto/article-automation-will-mean-the-end-of-an-unusual-but-effective-safety/
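Part of why the method works is that it turns door-opening into a forced sequence: each step must be completed, in order, before the next is allowed, and a skipped or reordered step fails loudly rather than silently. A minimal sketch of that logic in Python, with the step names and the `DoorInterlock` class invented for illustration (the TTC’s procedure is human, not software):

```python
# Illustrative only: the four-step procedure as a forced-sequence interlock.
# Step names and the class are my invention, not anything the TTC runs.

REQUIRED_STEPS = ["stand_up", "open_window", "point_to_green_triangle"]

class DoorInterlock:
    def __init__(self):
        self.completed = []

    def acknowledge(self, step: str) -> None:
        """Record a step, but only if it is the next one in the required order."""
        if len(self.completed) >= len(REQUIRED_STEPS):
            raise RuntimeError("All steps already acknowledged")
        expected = REQUIRED_STEPS[len(self.completed)]
        if step != expected:
            raise RuntimeError(f"Out of order: expected {expected!r}, got {step!r}")
        self.completed.append(step)

    def doors_may_open(self) -> bool:
        # Opening the doors is permitted only after every step, in order.
        return self.completed == REQUIRED_STEPS

guard = DoorInterlock()
for step in REQUIRED_STEPS:
    guard.acknowledge(step)
assert guard.doors_may_open()
```

The physical ritual accomplishes in the guard’s attention roughly what the interlock accomplishes in code: it makes the easy mistake, skipping a step, impossible to make quietly.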

The second article describes a proposal in Edmonton, Alberta, to use technology to get an overview of all the early-life elements which can lead to criminal behaviour:

The proposal is to use AI to gather information from health care, social services, child welfare and police, in order to anticipate a congruence of circumstances and people which could result in criminal behaviour. This would add to human-level conferences already underway: in this Hub program, a group of local service providers – police, educators, social services workers, doctors and others – meets regularly to discuss emerging community concerns.
https://www.theglobeandmail.com/canada/alberta/article-edmonton-police-create-community-solutions-accelerator-with-aim-to/
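I don’t know what form Edmonton’s system will take, but the privacy stakes become concrete once you imagine the mechanics. A deliberately simplified sketch, with agencies, fields, identifiers, and thresholds all invented for illustration, of what “gathering information from health care, social services, child welfare and police” amounts to in code:

```python
# Hypothetical sketch of cross-agency record linkage; every name, field,
# and threshold here is invented for illustration, not Edmonton's design.

health = {"p123": {"missed_appointments": 4}}
child_welfare = {"p123": {"open_file": True}}
police = {"p123": {"contacts_last_year": 2}}

def risk_flags(person_id: str) -> list[str]:
    """Collect simple indicators for one person across the agencies."""
    flags = []
    if health.get(person_id, {}).get("missed_appointments", 0) >= 3:
        flags.append("health: repeated missed appointments")
    if child_welfare.get(person_id, {}).get("open_file"):
        flags.append("child welfare: open file")
    if police.get(person_id, {}).get("contacts_last_year", 0) >= 2:
        flags.append("police: repeated contacts")
    return flags

print(risk_flags("p123"))  # three flags, from three agencies, on one identifier
```

Notice what the join depends on: a single identifier meaningful to every agency. That shared identifier is precisely the kind of information-sharing that was at issue in the Wraparound effort I describe next.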

There was an attempt to run a similar program in my municipality in Ontario some fifteen years ago. It was called Wraparound. Practitioners from those varied fields gathered to discuss the effects of life circumstances, together with insufficient support systems and groups, on already risky behaviour. The individuals and/or families affected were contacted by social workers and others, including volunteers, and offered an opportunity to learn ways to interdict the risky patterns. The individual and family, neighbours, and support agencies (including clergy, which is how I became involved) identified strengths already present, rather than problems, and developed strategies for emphasizing and building on them. These groups were to meet regularly with the individual or family to evaluate progress, or the lack of it, and to continue to find work-arounds, solutions, and potential additional resources to break the undesired patterns.

The deputy chief of the municipal police convened several meetings to formalize the protocols for sharing information among agencies, and to devise protections for privacy and safety. At the second-last meeting, the Crown Attorney’s delegate came into the room, slammed a huge pile of paper files on the conference table, and said, “These are the legislative blocks to sharing information in the way you want.” That wasn’t our final meeting, but it effectively ended the program locally. Only a few years later the provincial government ceased funding the pilot program, notwithstanding its demonstrated success and the thorough research and theory which underlay it.

I suspect that Edmonton’s AI efforts will meet similar problems with privacy and safety. There are differences between my province and Alberta, so perhaps Alberta can find a way. The issues of privacy and AI have been considered in the years since our effort, so perhaps solutions have finally been found. While I hope for benefits, there may be undesirable, unanticipated consequences.

One concern is that too much information would be shared and stored without anyone’s consent or knowledge. Having spent seven years on a research ethics board at a nearby university, and also at a hospital system, I am familiar with the requirements for informed consent as specified in the Tri-Council Statement.* This document requires that the consent process identify the following (see the sketch after this list):
• the immediate and long-term purposes for collecting the information;
• who will have access to the information;
• where and how it will be stored, and for how long;
• secondary uses of the information later on;
• the possible benefits to the researcher and sponsor, and to the contributor;
• the downsides for all three;
• how the contributor’s identity is protected; and
• to what extent the information is being anonymized, and how.
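One way to see how demanding these requirements are is to imagine a system that refuses to accept a consent record until every question is answered. A minimal sketch, my own construction rather than any official Tri-Council schema:

```python
from dataclasses import dataclass, fields

@dataclass
class ConsentRecord:
    """Each field answers one of the questions above; field names are mine."""
    purposes: str             # immediate and long-term purposes of collection
    who_has_access: str
    storage: str              # where and how it is stored, and for how long
    secondary_uses: str
    benefits: str             # to researcher, sponsor, and contributor
    downsides: str            # for all three
    identity_protection: str  # how the contributor's identity is protected
    anonymization: str        # to what extent, and how

def is_complete(record: ConsentRecord) -> bool:
    # Consent with any question left blank is not informed consent.
    return all(getattr(record, f.name).strip() for f in fields(record))
```

A record with a blank `secondary_uses` field would fail the check, which mirrors the board’s practice: the process is not complete until every question has an answer.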

The Edmonton proposal may not formally come under the Tri-Council Statement, but these are good standards, and they should have a place in those experiments. As well, AI experts across the country tell me that they don’t always know everything their AI can do, nor what it is doing. So the results of using AI in the ways described in the article may be much more than the sum of the known parts, yet may not be known to the human operators and program participants. On the other hand, knowing the likelihood and location of criminal behaviour may make it possible to stop it before it occurs. That, I think, is good, not only because it prevents crime but because it may help potential criminals change to a healthy, successful, and societally beneficial lifestyle. Rather than being a radically new thing, this use of AI can be an extension of Wraparound, which was successful even without computer assistance.

I am concerned that machines can exceed our own conceptual abilities. Perhaps this is a generational conceit. I am accustomed to the apprenticeship model: newbies are taught how things have been understood and done in the past, and why. They practice those methods, except those we already know have failed. Having learned to use the methods we have taught, newbies graduate and become responsible for developing new ways, which we presume will be extensions of what we taught, except for the geniuses who intuit ideas and methods which work, but not in expected ways. In academia, we submit their new ways to peer review. This not only subjects them to our disciplines, but gives the more traditional among us the opportunity to compare and contrast our ways with the new. Our minds may be opened a bit.

We need that same opportunity to open up to the new things AI might show us. We need the opportunity to learn from them, to catch up. But while we are catching up, there must be a pause in AI activity. We must be able to decide whether to give informed consent to what has already been discovered or done; that is, we must be able to evaluate the recent history (see my blog post “Relearning History” https://uponreconsidering.blog/2019/03/29/re-learning-history/ ) as well as consent to the future. If we can’t do the first, we certainly have no business doing the second.

Reconsidering the history of our earlier consents spotlights interesting ironies at the moment. As I write this we are in the process of “social distancing” because of COVID-19. Restaurants and coffee houses are shutting down indoor service; instead, orders may be placed in advance for delivery or for drive-through or walk-up service. Those of us who have always believed that drive-throughs are bad, because idling cars pollute the atmosphere, now see an unexpected value: those drive-throughs enable the servers and cooks to continue to be employed, and enable the restaurants and coffee houses to remain in business. These are important considerations. I don’t know that any of us who opposed drive-throughs for the sake of the environment would have given thought to their value in a locked-down economy. And now we have food banks using single-use plastic bags (so very much the bugbear of environmentalists) to put out food for the needy, which is much safer for those picking up the food. These are truly opportunities for reconsideration, which is the point of this blog. Reconsidering is an important value, perhaps as important as the initial informed consent.
The opportunity to reconsider should always be sought, because informed consent given in the past could not possibly take into account all the possibilities of the future. Consent, or dissent, to something in the past should not be “carved in stone.” Like all ethical considerations, there should be frequently sought opportunities to evaluate not only things as they are at the moment, but as they were at the times of first decisions, and as they progressed. We should look for the subsequent events which diverged from that first decision: watch the spread of wrong new decisions, and look for the points at which the diversion ought to have been noted and the initial path found again. Or, understand how a wrong path was diverted to the good, and identify when that should have been noted, appreciated, and developed more fully; perhaps the original path would have failed or caused harm had it been kept. The Vietnam War and the military actions in the Mideast and Africa over the past seventeen years are topics for these studies. So also is the response to COVID-19 and planning for the next plague.

We must avoid being so captured by the urgent rather than the important (see my blog post “The Next New Thing” https://uponreconsidering.blog/2019/02/18/the-next-new-thing/) that we cannot find time to reconsider and ponder. Organizations may need to set up a second team to stand aside and do this for us.

Reconsidering our initial informed consent is important, and must be done as we continue to use AI to assist our human endeavours. The weakness in all this is not the threat of AI assisting human endeavour; it is in making sure that AI can assist only in an authorized and approved way. Because machines can decide to do things differently than humans would (think of the Go strategies developed by AI: https://www.nature.com/news/self-taught-ai-is-best-yet-at-strategy-game-go-1.22858), humans must be able to identify exactly what happens, when it happens, the up- and downsides, including the ethics, of those decisions, and how humans can anticipate and proscribe those decisions in the future. It must be the humans, not the machine, who decide whether those processes may be used again or be forbidden. It is not safe simply to let the machines run and hope for no bad consequences. There is also the danger that something will be hacked, which is particularly concerning on the subway (first article cited above). It is a matter not only of informed consent to what is planned, but of informed consent to the methods of monitoring the AI and of having control over what is and is not done (a minimal sketch of such a control follows the links below).

With the COVID-19 pandemic, we are seeing new aspects of this issue: the use of surveillance technology to identify gatherings of people who should be isolated.
https://www.washingtonpost.com/technology/2020/03/17/white-house-location-data-coronavirus/
https://www.vice.com/en_us/article/epg8xe/surveillance-company-deploying-coronavirus-detecting-cameras
https://www.wsj.com/articles/to-track-virus-governments-weigh-surveillance-tools-that-push-privacy-limits-11584479841
https://www.npr.org/2020/03/19/818327945/israel-begins-tracking-and-texting-those-possibly-exposed-to-the-coronavirus
https://www.cbc.ca/news/opinion/opinion-covid-19-cellphone-tracking-containment-1.5512231

We can see the benefits. We can see the threats, especially in the hands of authoritarian politicians (at this time Netanyahu is facing criminal trial, has lost his election, and is using surveillance techniques on citizens which had previously been used only against terrorism suspects). We have profound need for government to forestall the worse consequences of COVID-19; we have profound reason to fear government use of these methods on everyday citizens. There will be much to reconsider even weeks from now, and careful thought to be given about which of these acts, if any, receive our informed consent.
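Returning to the question of keeping AI within authorized bounds: if humans, not the machine, are to decide whether a process may be used again or be forbidden, the minimal machinery is a human-edited allowlist plus an audit log that records every proposed action, permitted or not. A sketch under those assumptions (the action names and the `execute` interface are hypothetical, not any real system):

```python
import datetime

# Edited only by humans; the machine cannot add to it. Names are hypothetical.
APPROVED_ACTIONS = {"align_train_with_platform", "open_doors"}

audit_log: list[dict] = []

def execute(action: str, details: dict) -> bool:
    """Permit an AI-proposed action only if its type is pre-approved;
    log every proposal either way, so humans can reconsider later."""
    allowed = action in APPROVED_ACTIONS
    audit_log.append({
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "action": action,
        "details": details,
        "allowed": allowed,
    })
    return allowed  # the caller acts only on True

execute("align_train_with_platform", {"station": "King"})  # permitted, logged
execute("track_citizen_phones", {"region": "city-wide"})   # refused, still logged
```

The log is what makes reconsideration possible: weeks from now, someone can read exactly what was proposed and what was allowed, and decide whether the allowlist itself still deserves our consent.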

*The Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (TCPS or the Policy) is a joint policy of Canada’s three federal research agencies – the Canadian Institutes of Health Research (CIHR), the Natural Sciences and Engineering Research Council of Canada (NSERC), and the Social Sciences and Humanities Research Council of Canada (SSHRC), or “the Agencies.”
