
An Interview with Distinguished University Professor Gordon Guyatt, OC, FRSC (Part Three)


Author(s): Scott Douglas Jacobsen

Publication (Outlet/Website): In-Sight: Independent Interview-Based Journal

Publication Date (yyyy/mm/dd): 2017/05/15


An Interview with Distinguished University Professor Gordon Guyatt, OC, FRSC. He discusses: Sigmund Freud, Michel Foucault, the Hirsch index, and a secure place in the annals of medical and general history; evidence-based medicine (EBM) and its definition; the three principles of EBM; and what one should do with evidence as value dependent.

Keywords: biostatistics, epidemiology, evidence-based medicine, Gordon Guyatt, Hirsch Index, McMaster University, research.

An Interview with Distinguished University Professor Gordon Guyatt, OC, FRSC (Part Three)[1],[2],[3],[4]

*Footnotes in & after the interview, & citation style listing after the interview.*

*This interview has been edited for clarity and readability.*

1. Scott Douglas Jacobsen: In list after list of the most cited researchers in Canada and in the world — lists that include the dead, such as Sigmund Freud and Michel Foucault — the rankings are done by the Hirsch index (h-index): the largest number h such that h of a researcher’s papers have each been cited at least h times.[5] You have over 187,000 citations with an h-index of about 217. In short, you are one of the most cited researchers, or the most cited researcher, in Canada, and the 12th most cited researcher in the world, circa the second week of February, 2017. Your position in the annals of medical and general history is secure. What does this accomplishment mean to you?
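For readers unfamiliar with the metric, the h-index calculation — the largest h such that at least h of a researcher’s papers have h or more citations each — can be sketched in a few lines. The citation counts below are invented for illustration, not Guyatt’s actual record.

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break  # counts are sorted, so no later paper can qualify
    return h

# Five hypothetical papers with 10, 8, 5, 4, and 3 citations:
# four papers have at least 4 citations each, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Sorting descending means the check can stop at the first paper whose citation count falls below its rank.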

Professor Gordon Guyatt: You described an evolution during my career. This electronic counting of citations was not around until about a decade ago. It became a standard by which people are judged because you can count it. In the past, you could say, “This paper is good. It seems to have influenced people. People seem to like it. I get the impression people are using it.” However, that is different from having the figures in front of you. Now you can say, “Okay, people are reading this, they are using it, researchers are citing it in their own work, and so on.”

It has downsides, and journals are judged this way, too. Journals are rated by their impact factor, which measures how much they are cited. That invites gaming. The impact factor is citations per article. One way to improve your impact factor is to publish fewer studies — only the ones that are going to be cited. Then you make a deal: “Okay, this type of article is really just an opinion piece; it should not count in the denominator of my ranking.”
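The gaming Guyatt describes can be made concrete with a small sketch. The figures below are hypothetical: because the impact factor is a ratio of citations to “citable” items, reclassifying opinion pieces as non-citable shrinks the denominator and raises the score even though the citations are unchanged.

```python
def impact_factor(citations, citable_items):
    """Impact factor: citations in a year to articles from the prior
    two years, divided by the number of 'citable' items from those years."""
    return citations / citable_items

# Hypothetical journal: 600 citations to 120 published items.
baseline = impact_factor(600, 120)      # 5.0
# Reclassify 20 opinion pieces as non-citable: same 600 citations,
# but only 100 items remain in the denominator.
after_gaming = impact_factor(600, 100)  # 6.0
```

Nothing about the journal’s content changed between the two numbers — only the bookkeeping.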

It potentially has negative effects, as opposed to using other criteria for important research — at least, important to some people. Is it well done? Is it good research? Those things may still matter, but the questions become: Is it going to be cited? How much is it going to be cited? Sometimes, complete baloney gets lots of citations. Leading journals have always published partly on the newspaper value of their articles, but perhaps even more so now, because the way a journal is evaluated is on the basis of this impact factor, which is all about citations.

Even so, it is nice to have an objective standard showing that the work has made an impact, though I am not sure the system is healthy. Usually, what happens with an article is that it comes out, has 2 or 3 years of high citations, the citations fall off, and then 5 or 10 years later it is not cited. Probably, that is the pattern for most publications. It is gratifying for me. I have papers cited 100 times during a year. That is a lot of citations. Some are 20 years old and still getting about 100 citations per year.

Even if 20 years later they get 25 citations per year, that says, “It has stood a major test of time. People find it useful.” That is, you do a piece of work, then somebody builds on it. When people cite the paper, it tells you that they have built on it, particularly if it gets cited 20 years later. The original work is still compelling enough that they say, “Okay, I’m citing the work that started us down this road.” That aspect of the electronic counting is nice.

It has downsides. It is distracting. One colleague made fun of me. I was saying, “Hey! I was checking my h-factor, and it is still going up.” My colleague responded, “Mirror, mirror on the wall…”, referring to the queen in the fairy tale asking, “Mirror, mirror on the wall, who’s the most beautiful of them all?” The jab was warranted. There are downsides, but it is nice to have objective criteria that say, “People pay attention to your work and value it.”

2. Jacobsen: The phrase “evidence-based medicine” (EBM), sometimes mistaken for a mere term, originated in a paper by you. What defines EBM?

Guyatt: In 1990, I coined the term. The first published paper to use it appeared in 1991 — people often don’t notice that one. The 1992 paper was the one that caught the world’s attention.

3. Jacobsen: You summarized its principles. Principle one: summarize the evidence to help make and guide the best decisions. Principle two: a hierarchy of evidence, with randomized trials ranking high. Principle three: evidence in the context of values and preferences for decision making. What else defines evidence-based decision making? What are some examples?

Guyatt: To start, what you listed was not there at the beginning; it evolved. The values and preferences material was not there at the start. We didn’t get it — people thought values and preferences were in the sub-conscious — so it had to be added. The early 1990s established the EBM aspect; about 5 years later, we added the values and preferences. The way we characterize it now, one principle is that you need systematic summaries of all of the highest quality evidence to make good decisions.

An illustration would be that in many areas one paper says, “This treatment is great.” Another paper says, “It is not at all great.” A focus on either one will result in a misleading presentation. You need systematic summaries of the best available evidence. I tell stories. The stories illustrate treatments for myocardial infarction, where there was one treatment — this has since been superseded — in which we gave clot-busting drugs that broke up the clots causing the heart attack.

It turns out that these clot-busting drugs reduce mortality by about a quarter. Yet it was 10 years after the answer came back from randomized trials before the community got it — this was before the era of systematic summaries. Another story is about another drug. People have heart attacks. They have arrhythmias — abnormalities of the heart beat — which can kill them. The drug was given to obliterate or decrease nasty-looking arrhythmias. We thought, “Okay, if you get rid of the nasty-looking arrhythmias, you’re going to get rid of the ones that kill people.”

It didn’t. In fact, a number of such promising-looking drugs have ended up killing more people. When I was in training, I was giving one such drug out all of the time. The evidence said this wasn’t a good idea, but nobody had systematically summarized it; people were picking studies here and there. We systematically summarize the best evidence to avoid that problem. Next, we need to know what makes the best evidence.

You mentioned a hierarchy of evidence. EBM has been criticized for being excessively randomized-trial focused; in the past, that might have been true, but it has evolved. Now, we have a much more sophisticated system that acknowledges randomized trials may be poorly done. They may give inconsistent results. They may not be applicable to your patient. I work as a general internist. I have a lot of patients over 90, and while there are a lot of randomized trials out there, it raises questions about the extent to which I can apply those trials to people over 90.

Trials may be small and less trustworthy. In any case, we recognize randomized trials as a good thing, although you might lose confidence in them for a variety of reasons. Similarly, we don’t need randomized trials to show insulin works in diabetic ketoacidosis — where people are dead pretty quickly if you don’t use it. We don’t need randomized trials to show epinephrine works in people with anaphylactic shock who are about to die. We don’t need randomized trials to show that dialysis is a good thing for people with renal failure, et cetera.

There’s an explicit formulation: “Yes, in general, randomized trials give higher quality evidence, but they are not without limitations; and in general, observational studies give lower quality evidence, but when effects are large and clear, they can be trustworthy.” So we developed a much more sophisticated hierarchy: some evidence is more trustworthy than other evidence, and the system reflects that.

The third principle is values and preferences. I introduce values and preferences by saying, “What do you think about antibiotics for pneumonia?” Even lay people will say, “Good idea! Yeah, antibiotics work for pneumonia; we all agree on the evidence. Antibiotics for pneumonia.” I say, “Let me tell you about a patient. He’s 95 years old. He’s severely demented, incontinent of bowel and bladder, and lives in a long-term-care institution. Nobody has been to visit him for 5 years, and he moans in apparent discomfort from morning to night. This individual develops pneumonia. Do you think it’s a good idea that he gets antibiotics?”

In North America, 95% of people say, “No.” They think this guy would be better off dead, so treating the pneumonia is not doing him any favours. If you ask most people to put themselves in the situation of such an individual — would you want to be treated? — most would say, “No, thank you.” In North America, 5% of people say, “Yes, it is a good idea to treat the person.” So we all agree on the evidence. Our disagreement as to whether this individual should be treated has nothing to do with the evidence.

It has to do with something else. We label that “values and preferences.” So I go on with the story. I have used this example repeatedly to illustrate values and preferences. I went to Peru probably 10 or 15 years ago, having already used the story in North America many times. I asked, “Who thinks it is a good idea to treat this patient?” About two-thirds of the people raised their hands and said, “Yes.” I thought, “Wow, something’s wrong here. This is a Spanish-speaking audience; I’m speaking English; I have not communicated properly.” I went over it slowly, again. Two-thirds of the people still said, “Yes.”

I asked the hosts afterwards, “How come it is so different?” They said, “Catholic culture.” That was their attribution. Then I went to Saudi Arabia, where 95% of the people said, “Yes, the patient should be treated.” All of us agreed on the evidence. That’s not why there are differences; it is something else. That’s what we call values and preferences. Then I tell stories of people at risk of stroke. The treatment reduces the risk of stroke but increases the risk of bleeding. Some people say, “Yes, use the treatment,” because they place great value on preventing stroke. Some people say, “No,” because they are terrified of bleeding, and so on.

In other words, evidence never tells you what to do. Whenever there are trade-offs, values, preferences, and judgements are always important in making the right decision.

4. Jacobsen: This goes back to the early empiricists, like David Hume with his is/ought distinction. You can get the highest quality evidence available, even with modern technology, but what you should do with that evidence is going to be culture and value dependent.

Guyatt: That is exactly right. That is exactly right.


  1. Bennett, K. (2014, October 31). New hospital funding model ‘a shot in the dark,’ McMaster study says. Retrieved from
  2. Blackwell, T. (2015, February 1). World Health Organization’s advice based on weak evidence, Canadian-led study says. Retrieved from
  3. Branswell, H. (2014, January 30). You should be avoiding these products on drug-store shelves. Retrieved from
  4. Canadian News Wire. (2015, October 8). The Canadian Medical Hall of Fame announces 2016 inductees. Retrieved from
  5. Cassar, V. & Bezzina, F. (2015, March 25). The evidence is clear. Retrieved from
  6. Clarity Research. (2016). Clinical Advances Through Research and Information Translation. Retrieved from
  7. Craggs, S. (2015, July 21). We can actually win this one, Tom Mulcair tells Hamilton crowd. Retrieved from
  8. Escott, S. (2013, December 2). Mac professor named top health researcher. Retrieved from
  9. Feise, R. & Cooperstein, R. (2014, February 1). Putting the Patient First. Retrieved from
  10. Frketich, J. (2016, July 8). 63 McMaster University investigators say health research funding is flawed. Retrieved from
  11. Helsingin yliopisto. (2017, March 23). Clot or bleeding? Anticoagulants walk the line between two risks. Retrieved from
  12. Hopper, T. (2012, August 24). You’re pregnant, now sign this petition: Group slams Ontario doctors’ ‘coercive’ tactics to fight cutbacks. Retrieved from
  13. Kerr, T. (2011, May 30). Thomas Kerr: Insite has science on its side. Retrieved from
  14. Kirkey, S. (2015, October 29). WHO gets it wrong again: As with SARS and H1N1, its processed-meat edict went too far. Retrieved from
  15. Kolata, G. (2016, August 3). Why ‘Useless’ Surgery Is Still Popular. Retrieved from
  16. Maxmen, A. (2011, July 6). Nutrition advice: The vitamin D-lemma. Retrieved from
  17. McKee, M. (2014, October 2). The Power of Single-Person Medical Experiments. Retrieved from
  18. McMaster University. (2016). Gordon Guyatt. Retrieved from
  19. Neale, T. (2009, December 12). Doctor’s Orders: Practicing Evidence-Based Medicine Is a Challenge. Retrieved from
  20. Nolan, D. (2011, December 31). Mac’s Dr. Guyatt to enter Order of Canada. Retrieved from
  21. O’Dowd, A. (2016, July 21). Exercise could be as effective as surgery for knee damage. Retrieved from
  22. Palmer, K. & Guyatt, G. (2014, December 16). New funding model a leap of faith for Canadian hospitals. Retrieved from
  23. Park, A. (2012, February 7). No Clots in Coach? Debunking ‘Economy Class Syndrome’. Retrieved from
  24. Picard, A. (2015, May 25). David Sackett: The father of evidence-based medicine. Retrieved from
  25. Priest, L. (2012, June 17). What you should know about doctors and self-referral fees. Retrieved from
  26. Rege, A. (2015, August 5). Why medically unnecessary surgeries still happen. Retrieved from
  27. Science Daily. (2016, October 26). Ultrasound after tibial fracture surgery does not speed up healing or improve function. Retrieved from
  28. Spears, T. (2016, July 7). Agriculture Canada challenged WHO’s cancer warnings on meat: newly-released documents. Retrieved from
  29. Tomsic, M. (2015, February 10). Dying. It’s Tough To Discuss, But Doesn’t Have To Be. Retrieved from
  30. Webometrics. (2010). 1040 Highly Cited Researchers (h>100) according to their Google Scholar Citations public profiles. Retrieved from

Appendix I: Footnotes

[1] Distinguished University Professor, Health Research Methods, Evidence and Impact, McMaster University.

[2] Individual Publication Date: May 15, 2017 at; Full Issue Publication Date: September 1, 2017 at

[3] B.Sc., University of Toronto; M.D., General Internist, McMaster University Medical School; M.Sc., Design, Management, and Evaluation, McMaster University.

[4] Credit: McMaster University.

[5] Webometrics. (2010). 1040 Highly Cited Researchers (h>100) according to their Google Scholar Citations public profiles. Retrieved from


In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Based on a work at


© Scott Douglas Jacobsen and In-Sight Publishing 2012-Present. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Scott Douglas Jacobsen and In-Sight Publishing with appropriate and specific direction to the original content. All interviewees and authors co-copyright their material and may disseminate for their independent purposes.
