Professor Gordon Guyatt on GRADE, Core Grade, and EBM
Author(s): Scott Douglas Jacobsen
Publication (Outlet/Website): Oceane-Group
Publication Date (yyyy/mm/dd): 2024/12/24
*Transcript edited for readability.*
Gordon Guyatt holds a joint medical appointment and is a Professor of Health Research Methods, Evidence, and Impact at McMaster University’s Faculty of Health Sciences. He is a distinguished member of the Michael G. DeGroote Cochrane Canada Centre and the Centre for Medicinal Cannabis Research (CMCR) at McMaster. Professor Guyatt specializes in evidence-based medicine, developing and applying rigorous research methodologies to enhance healthcare practices and policies. His influential work ensures that clinical decisions are supported by the best available scientific evidence, improving patient outcomes and public health. In addition to leading cutting-edge research initiatives, Professor Guyatt is dedicated to mentoring students and professionals, fostering the next generation of health scientists. His commitment bridges the gap between scientific research and practical healthcare solutions, driving innovation and excellence in the health sciences.
Scott Douglas Jacobsen: The last time we talked was probably–I don’t know–3 or 4 years ago. I believe the last touchpoint for us was the red meat study. You were critiquing some general dietary health recommendations. The red meat study raised questions about the degree of risk that can be reasonably proposed to people and how much personal preferences and values play a role in whether they’ll choose to consume three servings of meat per week or so.
Professor Gordon Guyatt: Right.
Jacobsen: Regarding more recent events, you received the Henry G. Friesen International Prize in Health Research—yet another award! How does it feel?
Guyatt: Nice.
Jacobsen: Was this in recognition of your overall work in health science, or was it for something specific?
Guyatt: It was for something other than a specific piece of work. It was for my overall lifetime contribution.
Jacobsen: Have you had any updates on evidence-based medicine, especially its definition, use, and practice?
Guyatt: There’s been an evolution. We’re always trying to improve shared decision-making, but it’s challenging. Do you remember what GRADE is?
Jacobsen: I remember the acronym, but I can’t recall what each part stands for.
Guyatt: I am also trying to remember what each part stands for.
Jacobsen: Wasn’t it about appropriate systematic reviews?
Guyatt: GRADE stands for Grading of Recommendations, Assessment, Development, and Evaluation. It’s a framework for assessing the quality of evidence and deciding what’s trustworthy. It also helps move from evidence to recommendations or action. GRADE has been a big hit and is now used by over 110 organizations worldwide. Many consider it the standard for systematic reviews and guideline development.
However, GRADE has become too complex. Over 50 papers explain various aspects of applying it, and some of the guidance contradicts itself because of evolving changes. Some of it is too sophisticated for many users. As a result, we are creating something called “Core GRADE.” It’s meant to simplify things by focusing on the essential components people need to know. We’re producing a series of papers about Core GRADE.
Jacobsen: What is in Core GRADE, and what is in GRADE but not in Core GRADE?
Guyatt: Well, it’s a bit difficult because it’s highly technical. We first say that methods are now available to compare a whole range of treatments simultaneously. But for Core GRADE, we’re comparing treatment A to treatment B. The more complex evidence evaluation methods are not part of our Core GRADE. We’ve identified benefits and harms, certainty of evidence, and values and preferences as key criteria for moving from evidence to recommendations.
But we’ve also identified that issues like cost, resources, acceptability, feasibility, and equity may be involved. There’s a more advanced “evidence-to-decision” structure where you check off boxes for each factor. In Core GRADE, we say, “Please consider these issues.” However, we ask people to consider these issues without requiring them to fill out the entire chart, which can be time-consuming and energy-intensive. We’re trying to eliminate what you might call the “flat of the curve”—in other words, tasks that consume time and energy without significantly improving the result.
That’s an example of the kind of simplification we’re aiming for, where we say: “Think about these issues, but you don’t need to go through the whole process.”
Jacobsen: In addition to these modifications, are you developing new review methodologies, or are you primarily focused on improving existing ones, such as GRADE and Core GRADE?
Guyatt: Another key issue within the methods community is the ongoing tension between simplicity and methodological sophistication. What has happened to GRADE and some other areas is that there’s been an excessive focus on methodological sophistication without enough attention to keep things simple and manageable for users. So, we’ve just submitted a paper to The BMJ after going through a process of creating a simpler, yet still rigorous, way of assessing the risk of bias in randomized trials.
We’ll be introducing a new risk-of-bias instrument for randomized trials. A few years ago, we also developed a systematic approach to assessing the credibility of subgroup analyses, which is gaining traction and proving effective. These projects aren’t entirely new frameworks like GRADE, which fits under the broad umbrella of evidence-based medicine (EBM). Instead, they’re components of the broader EBM and guideline process that aim to simplify and improve specific aspects.
Jacobsen: One of your papers was titled “Successes, Shortcomings, and Learning Opportunities for Evidence-Based Medicine from the COVID-19 Pandemic.” What were the successes, shortcomings, and lessons learned from the pandemic?
Guyatt: As a global EBM community, one of our successes was rapidly producing evidence from randomized trials. One of the key innovations was using “adaptive trials,” also known as “platform trials”—probably a better term. Platform trials involve:
- Setting up multiple centers worldwide or within a jurisdiction, following a single protocol.
- Using the same data collection forms.
- Adhering to the same ethical standards that we would follow for any trial.
But in this case, it’s for a series of trials.
So, for example, if you’re testing Drug A for a particular condition, you’ll collect the same types of data and measure the same outcomes across all sites.
And when you finish with Drug A, you don’t have to start all over again. You have all your centers signed up for a series of trials, all your data collection systems in place, your ethics approvals set, and everything ready. You move from one drug to the next. We had several of these platform trials running worldwide. As a result, we quickly identified three treatments that work for non-severe COVID-19 and three classes of treatments that work for severe COVID-19. That all happened rapidly. So, that was one big success.
The next step was quickly synthesizing the evidence from these trials. Up to 20 trials were published weekly at the height of the pandemic. Two major groups, including one at McMaster University, set up large operations to process this data. We had the resources to do this because many high-level grad students and junior faculty could handle the volume. We established this operation to process the 20 weekly trials, produce analyses, and identify what treatments worked and what didn’t.
We also incorporated network meta-analyses, which I referred to earlier, that allow for simultaneous comparisons of multiple treatments. So, instead of comparing Treatment A to a placebo or no treatment, you can compare A to B, C, D, E, and F and B to C, D, and so on. We weren’t just synthesizing data from these trials; we were conducting network meta-analyses.
The next step was to incorporate the evidence into the guidelines quickly. We streamlined the process of developing guidelines, building on work we’d already done. I’ve worked extensively with the World Health Organization on developing COVID-19 guidelines. We managed to accelerate the entire process.
We could quickly produce evidence from randomized trials, synthesize it into systematic reviews, and develop trustworthy guidelines to help clinicians manage their patients. That was a big success.
There were limitations, particularly in the public health sector. Public health responses were not always managed as well as they could have been from an evidence-based perspective. One mistake that stands out is the failure to acknowledge uncertainty in decisions.
For instance, policies often shifted without explaining the reasoning: “Do this, now do that. Oh, no, do the opposite.” One significant error, in hindsight, was closing schools. It became apparent relatively early that children were at low risk. Yet, schools were closed, causing significant harm, particularly to vulnerable and disadvantaged low-income families. The cost of this decision was huge.
The question is, how could that decision have been made better? Acknowledging the uncertainty upfront would have helped.
Jacobsen: When did you first start writing for newspapers?
Guyatt: Oh, God. About 25 years ago—maybe 20 years. I’d have to check. It’s been long enough that I’ve forgotten exactly when I started.
Jacobsen: You posted on X about avoiding paragraphs longer than three sentences. Why that specific length?
Guyatt: When I started writing for newspapers, I realized I needed to adjust my writing style. I had been reading newspapers all my life, but I hadn’t noticed how they were written. I decided to analyze what makes good newspaper writing. I was shocked that most newspaper paragraphs are only one or two sentences long. Occasionally, they’ll have paragraphs with three sentences, but that’s about it.
I thought, “Whoa, if I’m going to write well for newspapers, I must follow this style.” So, I started writing paragraphs that were at most three sentences, often just two and sometimes even one. Then, I realized that if this approach makes writing clearer in newspapers, it might also work in scientific articles. And, in my experience, it does.
That evolution of my writing significantly affected how I approach scientific writing.
Jacobsen: Do you have any tips for individuals who want to write about science but don’t have a background in it? I’m thinking of journalists and others, such as poets or writers, who want to express scientific ideas.
Guyatt: Sure. I wrote a paper more than 20 years ago specifically addressing this issue—journalists writing about health. How can journalists do a good job writing about health? Assuming the writer is already good—that’s another issue entirely—one major problem health journalists face is that scientific findings are often oversold.
A good health journalist will repeatedly caution, “There’s much hype around this, but it’s probably oversold. Let’s be careful and wait for more evidence.” The problem is, this doesn’t make it into the newspaper. The editor will likely say, “Boring, boring, boring. Give me something exciting.” So there’s this huge incentive to declare, “Great breakthrough!” because that will make the article newsworthy. But if you write, “This isn’t such a great breakthrough,” the article often gets ignored.
It’s a tough position for health journalists, but if you want to do a good job, you must emphasize skepticism. One piece of advice: when there’s a purported breakthrough, don’t talk exclusively to the person who made the discovery. Talk to other experts in the field and see what they think about this so-called breakthrough.
And if you do talk to the discoverer, be aware of their inherent conflict of interest. They have every incentive to make people believe they made a significant breakthrough—they want invitations to speak worldwide, recognition, and more research opportunities. There’s a natural incentive to oversell the discovery. Also, follow the money. Who funded the research? Often, it’s a drug company with a vested interest in promoting the findings. There are multiple incentives to oversell.
Jacobsen: The last time we spoke, you mentioned a colleague working on something related to stroke risk. You said he might have found a way to reduce that risk. Was it Devereaux?
Guyatt: Yes, that’s right. Devereaux has done incredible work, but it focuses more on preventing complications after surgery. Specifically, he’s shown that low doses of anticoagulants can prevent cardiovascular events, including heart attacks and strokes, after surgery. That’s probably what you’re referring to.
Jacobsen: What kind of risk reduction are we talking about?
Guyatt: I don’t know off the top of my head, but it’s around a 30% relative risk reduction.
Jacobsen: There’s been much discussion about losing trust in vaccines. What do you think are the causes and costs of that?
Guyatt: One of the things I’ve learned as an evidence-based practitioner is to quickly identify when I don’t know the evidence on a particular question. I avoid launching into speculative answers. I’m not a sociologist, and I don’t know which branch of social science would be best suited to address your question. I could speculate, but I wouldn’t be better at it than anyone else.
Jacobsen: That’s a fair point. You’ve made similar points in some of your posts. You’ve mentioned that when we receive criticism, we immediately get defensive. What is a more constructive response to that, rather than feeling threatened?
Guyatt: Well, the first thing I do is label it red alert. I’m feeling defensive and likely to respond in a sub-optimal way. Generally, the optimal way to respond is to say, “You may have a point.” Someone is pointing out a possible limitation in your work, so the first step is acknowledging that.
If you’re feeling defensive, it’s often a sign that the person has a valid point. So, you acknowledge it and say, “This doesn’t mean that everything I’ve put forth is fundamentally flawed, but it almost certainly means there are some limitations.” Considering those limitations and recognizing that your defensive feelings likely mean the other person has a point is a better way to handle the situation. Quickly acknowledging when someone has a point—even if it’s one I’d prefer not to admit—has been helpful.
Jacobsen: When we discussed red meat studies, we touched on some evidence that countered traditional health guidelines, specifically relative risks. Hypothetically, suppose someone wants to live the longest, healthiest life using evidence-based medicine. What tend to be the things most supportive of those goals and values?
Guyatt: Don’t smoke! The number one thing is: if you’re a smoker, stop. If you’ve never started, don’t. That’s the most impactful step for a long and healthy life.
After that, we’re talking about lifestyle factors. The evidence for dietary recommendations is limited. The Mediterranean and low-fat diets may increase lifespan, but the evidence isn’t robust. It’s not conclusive, but it’s still worth paying attention to.
Exercise seems like a good idea, but the evidence isn’t as strong as you might expect. While it’s generally beneficial, I can tell you from personal experiences—such as my biking accidents—that it can also lead to injuries. I even had a subdural hematoma once. So, while I might have said, “Exercise probably won’t hurt you,” it depends on the type of exercise you choose. It certainly can hurt you.
Jacobsen: Outside of that, is there evidence in general to pick your parents well?
Guyatt: Absolutely, yes.
Jacobsen: What’s your general assessment of the current landscape of popular health reporting? As a non-expert journalist, has there been improvement, or are things largely the same?
Guyatt: I haven’t focused much on critically reading popular health articles, so I’m not well-equipped to answer that in detail. However, as mentioned earlier, health journalists face a very difficult position. There’s a demand for bold, eye-catching statements, even when the evidence doesn’t necessarily support them. The challenge of balancing evidence with the need for sensational headlines remains unsolved.
Jacobsen: If we take a generalized approach to evidence-based evaluation, how do standardized tests compare to high school grades in predicting academic success?
Guyatt: Completely outside of my expertise.
Jacobsen: Are there any other lessons from COVID?
Guyatt: One thing I should have mentioned earlier about the success of evidence-based medicine during COVID-19 was how we handled journal publications. Traditionally, from the time you submit your paper to the time it’s published, months go by. And if you talked about your findings beforehand, top journals would refuse to publish your work because they wanted the scoop.
During COVID, it became clear that this was completely irresponsible. Journals softened their stance and allowed pre-publications or preprints to circulate, which helped get critical findings out quickly. However, now that the crisis has passed, we’re seeing a return to the old ways. Even though important findings should be published quickly, they don’t get out as quickly as they should.
There were all these preprints. Before COVID, if you posted a preprint, journals would say, “No way.” Thank goodness they changed that in these situations. But as soon as the crisis passed, things went back to the way they were before. We haven’t lost everything, but much of what we gained during COVID proved temporary.
Jacobsen: Who are the main academic opponents of evidence-based medicine and the GRADE approach? I may be framing it improperly, too.
Guyatt: There is slower uptake in certain areas. The opposition has gone underground because everyone now calls themselves “evidence-based.” People use the label without necessarily being evidence-based in the way we think about it. There are mutterings here and there, but the fundamental challenge that once existed is not there anymore.
There are areas of slower uptake. Concerning GRADE, the oncology community has been slower to adopt it. That one occurs to me. So, it is not opposition. It is a limited uptake, with more enthusiastic uptake in some areas than others.
Jacobsen: How do you see sloganeering as a problem in reporting on evidence-based medicine? To clarify: you were noting how everything is “evidence-based this” and “evidence-based that.” The way you say it, I sense that public reportage on evidence-based medicine—or people wanting to use the phrase because of its weight—can lead to misunderstandings, not only about how it is done but also about what it truly means to be appropriately evidence-based.
Guyatt: Here is the biggest limitation. For getting on 25 years, we’ve been making a big fuss that a central core of EBM is that evidence by itself doesn’t tell you what to do; it only informs decisions in the context of patient preferences and values. Yet, people still have trouble grasping that. They think evidence-based medicine is all about randomized trials, but it’s not. It’s about finding the best available evidence to inform the decision one is facing. People have difficulty getting that, as well.
Jacobsen: Are there areas of medicine where “GOBSAT” (Good Old Boys Sitting Around a Table) is still a methodology?
Guyatt: I’m not aware of any surveys on this, but there are areas where it’s still likely to occur, particularly in situations where high-quality evidence is unavailable or unlikely to emerge. For example, I have gone to meetings for rare diseases. Understandably, you have kids with terrible genetic diseases whose function is steadily declining. Something comes up. “We cannot wait to find out whether it works. You have to save the kid now.” This reaction is completely understandable from an emotional standpoint but presents challenges from a scientific perspective.
But if someone says, “Our values and preferences are such that we’re ready to spend $1,000,000 a year,” that’s a serious consideration. They may spend that much money to give a child something that may have no beneficial effect and could cause harm. But if they value possible and unlikely improvement, then fine—let’s do it.
However, let’s keep the same rules rather than avoid acknowledging low-quality evidence. People don’t like calling it “low-quality evidence,” but let’s recognize that some things are simply more trustworthy than others. GRADE labels “low-quality evidence” as less trustworthy, but they want to rename it.
For instance, the nutrition community has developed the NutriGRADE approach. Essentially, they say, “What you guys call low-quality evidence, we consider good evidence.” I understand their position and am sympathetic to their dilemma, but it’s still problematic.
Jacobsen: That reminds me of something we discussed in a previous interview that is worth re-emphasizing: fraud in the medical community. While it does happen, it doesn’t happen that frequently. For the most part, when fraud occurs, it gets caught, and the perpetrators are penalized. This seems to be true for academia as a whole, too. What are the key points to emphasize regarding fraud in the medical community?
Guyatt: I can’t think of anything specific at the moment. What exactly are you asking about?
Jacobsen: I’m asking about the skepticism some people might have regarding the prevalence of fraud in the medical community. You’ve mentioned before that fraud is rare and usually gets caught. Can you elaborate on that?
Guyatt: Ah, now I see what you’re getting at. Yes, I believe fraud in the medical community doesn’t happen very often. When it does, it generally gets caught. It might happen more frequently than I used to think, but still, it’s uncommon.
After digging deeper, I found that there have been cases where people have uncovered more instances of fraud than expected. However, these are usually low-impact studies that receive little attention. If someone commits fraud in an area that few people care about, it’s less likely that anyone will put in the effort to expose it.
Large-scale fraud that significantly impacts medical practice or research is rare. It is also unusual for fraud to lead to changes in major medical protocols or treatments.
Jacobsen: You mentioned the NutriGRADE approach earlier. Could you expand on that?
Guyatt: The NutriGRADE approach is used in nutrition and ranks evidence differently than in GRADE. They’re more willing to consider certain kinds of evidence “good” that we would label as low-quality. This creates challenges, as their system doesn’t align with how we assess the reliability of evidence. Still, it reflects the different values and needs within their field.
Jacobsen: What is NutriGRADE?
Guyatt: I don’t know all the details, but it was developed about a decade ago. Essentially, they say, “We’re going to move the goalposts.” For example, the observational studies that GRADE would classify as low-quality evidence, NutriGRADE calls moderate-quality evidence. They claim that their nutrition studies produce more trustworthy evidence than GRADE suggests.
Jacobsen: Would you consider NutriGRADE reliable at all?
Guyatt: When you use the word “reliable,” it has a specific technical meaning for me as a methodologist. But if you mean in a broader sense—whether it’s trustworthy—here’s how I’d explain it. Let’s say you have two identical bodies of evidence. They are the same regarding how the studies were conducted, and the inferences you draw from them are identical.
Now, in one case, you could conduct a randomized trial. On the other hand, it’s impossible to conduct one. Are these two bodies of evidence equally trustworthy? The people who can’t conduct randomized trials might say, “Yes, let’s consider this more trustworthy since we’ll never have a trial.” But that’s not a tenable position. If the evidence is identical, it should be treated the same, whether or not a trial is feasible.
Jacobsen: You are a fan of acronyms. What is MAGIC, or the Making GRADE the Irresistible Choice initiative?
Guyatt: MAGIC is a group I’m involved with, and it’s focused on improving what we call the “evidence ecosystem.” An evidence ecosystem involves several steps: basic science informs observational studies, which inform randomized trials. Then, randomized trials inform systematic reviews, and systematic reviews inform guidelines. These guidelines then inform dissemination strategies to get evidence-based information out to clinicians and patients. It’s all about making the flow of evidence more efficient and actionable.
MAGIC’s role is to improve this evidence ecosystem. For example, during the pandemic, MAGIC helped enhance the system by establishing a collaboration with The BMJ for what we call “BMJ Rapid Recommendations.” We scan the literature for new, practice-changing evidence, quickly conduct systematic reviews, assemble a guideline panel, and produce trustworthy guidelines. These are then rapidly published in The BMJ.
During COVID-19, having already built this collaboration with The BMJ and the World Health Organization (WHO), MAGIC brokered a further collaboration between The BMJ and WHO. We served as consultants and partners with WHO to make sure the evidence ecosystem worked as efficiently as possible, especially when rapid decision-making was crucial.
At McMaster, we were one of the groups involved in a living network meta-analysis, where we processed all these trials to gather the necessary evidence. This evidence informed the World Health Organization (WHO) guidelines. So, while we didn’t create the evidence from the trials, we summarized it and brought it to the WHO, saying, “Here’s the latest evidence.”
We also acted as methodologists, helping the guideline panels move from evidence to recommendations. The day WHO publishes its recommendations, they’re also published in The BMJ. This way, the guidelines reach two different audiences simultaneously. WHO’s audience includes decision-makers, particularly in low-income countries, and The BMJ reaches a clinical audience. It was the first time this type of coordinated publication had been done.
This was MAGIC fulfilling its mission: processing evidence quickly, feeding that evidence into a trustworthy guideline process, producing trustworthy guidelines as fast as possible, and then disseminating the information effectively.
Jacobsen: I saw a tweet from September 25, 2023, that said, “Every high-income country with universal public healthcare has universal public prescription drug coverage, except Canada. It is time to change that with a public pharmacare program.” Does that sound correct?
Guyatt: You’re quoting me! We should have a universal pharmacare system. However, claiming that every other country has universal coverage might stretch the truth, but it makes a political point. The gist is accurate: Canada is one of the few high-income countries without universal prescription drug coverage.
Jacobsen: Can you elaborate on that?
Guyatt: It’s true that in Europe, for example, well over 50% of drug payments are publicly funded, while in Canada, a large portion—over 50%—comes out of people’s pockets. In some European countries, it’s as high as 60-70% publicly funded. Canada did something odd—we decided to pay for doctors and hospitals. Still, we didn’t include prescription drugs in our universal healthcare system. Other countries have a more balanced approach to covering healthcare costs.
Jacobsen: Why did Canada take that approach? Was there a historical reason?
Guyatt: It goes back to the 1960s, to Tommy Douglas in Saskatchewan. The initial idea was to include drugs in the healthcare system, but it was something the government said they would get around to. They never did.
Jacobsen: Which European countries that offer universal prescription drug coverage are the most efficient in terms of cost and efficacy of outcomes?
Guyatt: My knowledge here is somewhat superficial, but I haven’t seen a single “role model” system that Canada could copy exactly. Some countries do certain things better, while others excel in different areas. It’s not as straightforward as saying one system is the most efficient overall.
Whether one system works better depends on local culture or specific policies. I’m unclear about which factors are most important.
Jacobsen: Speculative question: What gaps in the GRADE approach or evidence-based medicine could theoretically be addressed in the future, either as a new methodology or something outside its current scope?
Guyatt: I’m hard-pressed to identify any major gaps in GRADE, but we still face big challenges in efficient shared decision-making. Clinicians worldwide are time-constrained, and figuring out how to implement shared decision-making optimally remains a challenge.
Jacobsen: Could you break that down for those who might not be familiar with the concept?
Guyatt: Sure. One example we often use involves atrial fibrillation, an abnormal heart rhythm that significantly increases the risk of stroke. We have anticoagulants that reduce the risk of stroke but also increase the risk of serious bleeding. How do you present this information to patients so they can make informed trade-offs? It’s a delicate balance. Another example is breast cancer screening—if women fully understood both the magnitude of the benefits and the downsides, many would likely say “no thanks” to screening. But we don’t always present these choices in a way that helps people fully understand what they’re deciding.
Jacobsen: Could future systems, like large language models, help make this information more accessible?
Guyatt: Large language models won’t solve this issue. We still need to improve how we present the information. The key is conducting randomized trials on different methods of presenting choices to patients, but that takes work.
Jacobsen: Gordon, thank you again for your time, sir. I appreciate it.
Guyatt: Oh, are we finished? That’ll give me a few minutes to say hello to the person who just came into the room—my 101-year-old stepmother.
Jacobsen: Take care. Bye for now.
Guyatt: You too. Bye.
—
Scott Douglas Jacobsen is the Founder of In-Sight Publishing and Editor-in-Chief of In-Sight: Independent Interview-Based Journal (ISSN 2369–6885). He is a Freelance, Independent Journalist with the Canadian Association of Journalists in Good Standing, a Member of PEN Canada, and a Writer for The Good Men Project. Email: Scott.Douglas.Jacobsen@Gmail.Com.
License & Copyright
In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. ©Scott Douglas Jacobsen and In-Sight Publishing 2012-Present. Unauthorized use or duplication of material without express permission from Scott Douglas Jacobsen strictly prohibited, excerpts and links must use full credit to Scott Douglas Jacobsen and In-Sight Publishing with direction to the original content.
