
On High-Range Test Construction 28: Dr. Kristóf Kovács on Accuracy in IQ, Intelligence, and Cognitive Abilities

2025-01-22

Scott Douglas Jacobsen
In-Sight Publishing, Fort Langley, British Columbia, Canada

Correspondence: Scott Douglas Jacobsen (Email: scott.jacobsen2025@gmail.com)

Received: January 10, 2025
Accepted: N/A
Published: January 22, 2025

*Updated January 27, 2025.*

Abstract

This interview includes a detailed conversation between Scott Douglas Jacobsen and Dr. Kristóf Kovács, a Senior Research Fellow and Lecturer at the Institute of Psychology and the Department of Counselling and School Psychology. Dr. Kovács leads the Cognitive Abilities Lab, focusing on research in cognitive abilities, intelligence, psychometrics, and their measurement. He critiques the limitations of IQ tests in assessing creativity, sensorimotor skills, or interpersonal abilities, emphasizing the need for detailed profiles for diagnostics over societal “IQ fetishism.” Dr. Kovács explores the importance of ethical and transparent research practices and provides a nuanced understanding of IQ scores and their applications. The discussion includes the historical context of IQ testing, its practical applications, and the sociological implications of the g-factor as a statistical construct.

Keywords: Cognitive Abilities, Diagnostic Context, Educational Interventions, Fluid Reasoning, IQ Distribution, IQ Fetishization, IQ Measurement, IQ Tests, Multiple Intelligences, Percentiles, Psychometrics, Sensorimotor Abilities, Standard Deviation, Working Memory

Introduction

The document features an engaging interview with Dr. Kristóf Kovács, conducted in 2025 by Scott Douglas Jacobsen, as a recommendation from Björn Liljeqvist, former chair of Mensa International. Dr. Kovács, a Senior Research Fellow and Lecturer at the Institute of Psychology and the Department of Counselling and School Psychology, shares his insights on the measurement of intelligence, cognitive abilities, and psychometric tools. Leading the Cognitive Abilities Lab, Dr. Kovács critiques the limitations of IQ tests, emphasizing their inability to measure creativity, sensorimotor skills, and interpersonal abilities. He highlights the importance of providing detailed diagnostic profiles rather than relying on singular IQ scores. The interview delves into societal misconceptions, such as “IQ fetishism,” and clarifies the statistical construct of the g-factor, noting its utility in sociological studies but limited relevance for individual diagnostics. Dr. Kovács’ work underscores the need for ethical and transparent research practices and the refinement of tools to better capture the complexities of cognitive abilities. His perspectives challenge conventional views on intelligence testing and advocate for a more nuanced understanding of cognitive profiles for practical applications, ranging from education to legal contexts.

Main Text (Interview)

Interviewer: Scott Douglas Jacobsen

Interviewee: Dr. Kristóf Kovács

Section 1: Introduction and Context: Setting the Stage

Scott Douglas Jacobsen: So, today, we are here with Dr. Kristóf Kovács. This interview is a recommendation from Björn Liljeqvist, so thank you, Björn. I interviewed him a while ago. I have been interviewing many individuals from various groups, including Mensa. Given my time in high-IQ communities, I wanted to get a professional opinion about testing. So, let me pose the first big question that people might have if they stumble upon this interview: How much do IQ tests measure intelligence? What is the overlap between IQ and intelligence? In other words, what is the overlap in this Venn diagram?

Section 2: Defining Intelligence: Beyond the Traditional Views

Dr. Kristóf Kovács: That is a very old question. Whether IQ tests measure intelligence is a controversial issue. I do not think it is a particularly useful question because, to a large extent, it depends on how we define intelligence. While intelligence traditionally meant some form of cognitive ability, today, with enough searching of the literature, one can find references to all sorts of intelligences.

There is a paradox I perceive here. People who are very critical of IQ tests and the concept of intelligence argue that IQ testing is flawed. Yet, simultaneously, they are quick to embrace the term intelligence. There is always an alternative concept proposed to counter IQ. The first major alternative was emotional intelligence, which, after 20–25 years of research, became a meaningful scientific construct, in my opinion. However, it does not necessarily need to be called intelligence—it could be termed emotional ability. Nevertheless, now we see references to concepts like spiritual intelligence, naturalist intelligence, and other types of intelligence.

Of course, IQ tests clearly do not measure intelligence if intelligence is defined broadly enough to include aspects such as one’s relationship to spirituality. IQ tests do not assess spirituality, emotionality, one’s connection to nature, interpersonal skills, self-awareness, or other qualities often labelled as intelligence today. Therefore, the extent to which IQ tests measure intelligence depends entirely on how intelligence is defined. Debates over definitions, in my experience, are not particularly useful.

I try to avoid using the term “intelligence” whenever possible. Interestingly, I used to work extensively with Mensa, which is probably how you found me through Björn. However, I am primarily a researcher specializing in individual differences in cognition. My academic work at the university involves a research position.

In my research, I cannot entirely avoid using the term “intelligence,” particularly in contexts related to Mensa, but I prefer to frame my research interests as focusing on cognitive abilities rather than intelligence. When we discuss cognitive abilities, there is no meaningful way to include aspects like spirituality.

Section 3: Cognitive Abilities vs. Intelligence: A Conceptual Shift

My research lab is called the Cognitive Abilities Lab—it is not called the Intelligence Lab. In my work, I consciously use the term cognitive abilities because it is plural. Intelligence, by contrast, is singular. As a researcher, discussing a range of specific abilities, such as fluid reasoning or crystallized knowledge, is far more meaningful.

Specific constructs such as working memory or perceptual speed are more meaningful than a single general intelligence. General intelligence, in my opinion, is an index derived from various specific cognitive abilities; it is not an ability in itself. For this reason, I prefer discussing cognitive abilities rather than intelligence. This approach avoids the type of definitional debate you raised. That said, I don’t want to circumvent the question completely.

IQ tests do a reasonable job if we define intelligence as cognitive ability. There’s a famous saying from Winston Churchill that democracy is the worst form of government, except for all the others humanity has tried. When I teach this or present at conferences, I often draw a parallel, saying that IQ tests are the worst instruments for measuring intelligence—apart from all the others psychology has ever tried.

Jacobsen: That’s good. A different way to frame it is from an empirical basis. If we’re examining cognitive abilities, what has emerged from research over the past century or so regarding what IQ tests measure? Also, what do the tests not measure that we know fall under cognitive abilities?

Section 4: IQ Tests and Their Purpose: Strengths and Limitations

Kovács: That’s an interesting question. If we consider creativity a cognitive ability, IQ tests do not measure it. Creativity is assessed using creativity-specific tests, but it is a much harder construct to define, operationalize, and measure with psychometrically sound instruments.

Sensorimotor abilities are another relatively underexplored area in cognitive ability testing, especially in young children. In my lab, we are conducting a research project on this topic. Our findings suggest that in preschool children, sensorimotor abilities—such as balance or other basic motor skills—are strong predictors of cognitive abilities required in school settings. Interestingly, these correlations diminish after about age seven. However, in preschoolers aged four, five, and six, sensorimotor abilities are significantly linked to skills like memory and the ability to focus, which are crucial as children begin formal education.

Sensorimotor abilities and creativity are two areas that, while reasonably considered cognitive, are not measured by IQ tests. IQ tests have historically focused on educational settings and, later, workplace applications. The military was among the first workplaces to use intelligence tests to predict achievement or trainability. What schools and workplaces require has heavily influenced the development of these instruments.

Section 5: Standard Deviation and Interpretability of Scores

Jacobsen: People researching IQ might encounter terms like standard deviation, whether 15, 16, or other values, and lists of IQ scores—highest IQ score lists, historical figures, famous people, etc. What should people think critically about when they encounter these references? Regarding some of these popularized extraordinary IQ scores, what can we reasonably say about their accuracy? Specifically, how do high and low scores relate to rarity percentiles?

Kovács: That’s a great question. There are two parts here: one about standard deviation and the other about the interpretability of the range. The most common standard deviation is the 15-point standard deviation, which was established with the Wechsler scale. This is the standard IQ distribution you’ll find in textbooks. IQ is typically presented as a scale with a mean of 100 and a standard deviation of 15.

Here’s how it works: your raw test score is standardized, converting it into a z-score, expressing your performance in standard deviation units. Then, we assign 15 points for every standard deviation. For example, if you score exactly one standard deviation above the mean, your IQ score will be 115. If you score two standard deviations above the mean, your IQ score will be 130.
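The raw-score-to-IQ pipeline described here is a simple linear rescaling. A minimal Python sketch (the norming-sample figures are invented for illustration):

```python
def raw_to_z(raw, sample_mean, sample_sd):
    """Standardize a raw test score against the norming sample."""
    return (raw - sample_mean) / sample_sd

def z_to_iq(z, mean=100, sd=15):
    """Rescale a z-score onto an IQ scale (Wechsler-style by default)."""
    return mean + sd * z

# One standard deviation above the mean -> IQ 115; two -> IQ 130.
print(z_to_iq(1.0))  # 115.0
print(z_to_iq(2.0))  # 130.0
```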

You’re right, though, that other standard deviations are in use. For instance, some tests historically used a 16-point standard deviation. However, I’m unsure if that is still true with the Stanford-Binet scales. The Cattell scale, on the other hand, used to have a standard deviation of 24. As someone who has provided feedback on IQ tests, I find this variability somewhat frustrating.

Many people, understandably, don’t realize that IQ is simply a relative scale. Without a background in statistics, interpreting it can be confusing. IQ is not an absolute measure.

For example, you could express even something like height on an IQ scale. You do not need to, since height has an absolute zero, so we use absolute measures like centimetres. IQ, by contrast, lacks an absolute zero—it’s purely comparative. Everyone is compared to the mean, and differences are expressed in standard deviation units before being translated into IQ scores. But if you really wanted to, you could express height using an IQ-style scale, in which case it becomes a relative score. For instance, let us assume that the average height for Canadian males is 175 centimetres, with a standard deviation of 6 centimetres. If someone is one standard deviation above the mean, their “height IQ” would be 115. This approach standardizes the data for easier comparison.

Jacobsen: Centimetres work—we’re Canadian and use both metric and imperial measurements.

Kovács: Perfect. So, if we continue with that example, a two-standard-deviation height above the mean—187 centimetres—would correspond to a “height IQ” of 130. Of course, this is just an analogy to explain how IQ operates as a comparative scale rather than an absolute measure.

IQ scores can always be translated back into standard z-scores. For example, if you’re exactly as tall as the average Canadian male, your height as a z-score would be 0; if you’re one standard deviation above the mean, your z-score is +1. Theoretically, you could translate that into an IQ scale, but why would you? Height has an absolute zero, so you don’t need a relative scale like IQ.

IQ, conversely, is purely a relative scale. If you know someone has an IQ of 150 but don’t know the standard deviation being used, you can’t determine whether it’s more than three standard deviations above the mean or just over two. With a standard deviation of 15, an IQ of 150 is more than three standard deviations above the mean; with a standard deviation of 24, it is just over two. People often don’t realize the importance of standard deviation in interpreting IQ scores.
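Because any IQ score is just a rescaled z-score, a score quoted on one scale can be converted to another by passing through the z-score. A small Python sketch:

```python
def iq_to_z(iq, sd, mean=100):
    """Recover the z-score from an IQ score on a given scale."""
    return (iq - mean) / sd

def z_to_iq(z, sd, mean=100):
    """Express a z-score as an IQ score on another scale."""
    return mean + sd * z

# "IQ 150" on the old Cattell scale (SD = 24)...
z = iq_to_z(150, sd=24)             # about +2.08 standard deviations
# ...is only about 131 on a 15-point (Wechsler-style) scale:
print(round(z_to_iq(z, sd=15), 2))  # 131.25
```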

Section 6: Percentiles vs. IQ Scores: Simplifying the Complexity

At the same time, there’s this strange IQ fetish in society. For example, you often hear claims from celebrities—actors or actresses—saying they have an IQ of 180. These numbers are thrown around, but they lack context. In my experience, percentiles are far more useful and comprehensible for the general public.

If you have a normal distribution of scores, any z-score can be converted into a percentile or an IQ score. Theoretically, these measures are interchangeable, but percentiles are much easier for most people to understand. For instance, if you tell a parent their 12-year-old outperforms 95 out of 100 children of the same age, they will understand what that means. Similarly, if you say, “Your child has a better vocabulary than 98 out of 100 children their age,” it’s immediately relatable.

If you tell the parent that the 98th percentile corresponds to a z-score of +2 or an IQ of 130, it becomes more abstract. If you say their child has an IQ of 130, most people won’t know how to react. Should they be ecstatic? Perhaps they read in the paper that morning about a celebrity claiming an IQ of 190, and they might feel disappointed. In reality, an IQ of 130 is excellent—it’s in the top 2% and qualifies for Mensa membership.

If I were in charge, I’d eliminate IQ scores entirely and only use percentiles. In my experience, IQ scores create more confusion than clarity. Unless someone works in this field and understands the statistical nuances, they often misinterpret the scores. Since IQ scores can always be converted to percentiles, the latter are more intuitive and effective for communication.

On the other hand, it couldn’t be clearer to a parent if you say, “Your child outperforms 90 out of 100 peers,” or, “Your child is weaker than 80 out of 100 peers.” That immediately highlights whether a specific area is a strength or a weakness for the child.
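The z-score/IQ/percentile interchangeability described above comes down to the standard normal cumulative distribution function, which Python’s standard library provides:

```python
from statistics import NormalDist

def iq_to_percentile(iq, mean=100, sd=15):
    """Share of the population scoring below a given IQ, in percent."""
    return NormalDist(mean, sd).cdf(iq) * 100

# IQ 130 on a 15-point scale sits at roughly the 98th percentile:
# "better than 98 out of 100" same-age peers.
print(round(iq_to_percentile(130)))  # 98
print(round(iq_to_percentile(115)))  # 84
```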

Section 7: Norming Samples and the Limits of Extreme Scores

The other question was about the range of interpretable scores. Typically, all scores are normed against a sample, usually a few thousand people. For example, in a representative sample in the U.S., you might have 5,000 or 6,000 participants, with around 200 individuals for a specific age group, such as 12-year-olds. When you compare an individual to that age group, anything beyond one in 200 is based on extrapolation.

The more you project beyond your data, the less accurate the interpretation becomes. For instance, if someone claims a child is “smarter than one in a million,” but the comparison is based on only 200 children, that projection is highly speculative. Typically, scores within plus or minus two standard deviations from the mean are interpretable. A third standard deviation can also be meaningful, especially for individually administered tests that take significant time to complete.

IQ scores are often composite scores derived from multiple subtests. If someone scores in the top 2% across five subtests, the likelihood of that occurring across all subtests is much rarer than 2%. To explain this with an analogy: imagine you’re looking for people who are taller than 98% of Canadians and have driven more miles than 98% of Canadians. The probability of finding someone who satisfies both criteria is much smaller than 2%.
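Under the simplifying (and unrealistic) assumption that subtests are statistically independent, the joint probability is just the product of the individual probabilities. Real subtests correlate positively, so the true figure is far less extreme, but the multiplication illustrates the point:

```python
p_single = 0.02   # top 2% on a single subtest
n_subtests = 5

# Joint probability under independence (in reality, positive
# correlations between subtests make the true figure much larger):
p_all = p_single ** n_subtests
print(p_all)  # ~3.2e-09, i.e. roughly 1 in 300 million
```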

Similarly, if someone scores very highly on multiple subtests, it provides a stronger basis for interpreting their overall IQ as being exceptional. By contrast, if someone scores high on just one test, that result is more likely to be “noisy,” with a larger margin of error.

In statistical textbooks, normal distributions are usually illustrated up to plus or minus three standard deviations because this range covers 99.7% of the entire distribution. Only 0.3% of scores fall outside this range—0.15% on each end. For example, anything above three standard deviations would represent about 3 individuals out of every 2,000. That’s why illustrations of normal distributions in textbooks typically stop at three standard deviations; beyond that, the probabilities become increasingly rare and harder to measure accurately.
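These textbook figures can be checked directly against the standard normal distribution with Python’s standard library (note that the exact share above +3 SD is about 0.135%, slightly rarer than the rounded 0.15%):

```python
from statistics import NormalDist

std_normal = NormalDist()  # mean 0, standard deviation 1

for k in (1, 2, 3, 4):
    within = std_normal.cdf(k) - std_normal.cdf(-k)  # share within +/- k SD
    upper = 1 - std_normal.cdf(k)                    # share above +k SD
    print(f"+/-{k} SD: {within:.2%} within, about 1 in {1 / upper:,.0f} above +{k} SD")
```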

Up to plus or minus three standard deviations is meaningful and reliable. I know there are groups like the higher sigma societies, but I don’t want to comment. I’ll leave that to someone you might interview from those societies. For the record, what I’m describing here is what you’ll find in standard statistical textbooks. Reliable and valid testing generally falls within plus or minus three standard deviations. Beyond that, scores become far less reliable.

I’d be skeptical of scores above +3 standard deviations, and especially above +4, where claims of one-in-a-million rarity begin to circulate. Someone claiming, “My child is smarter than 999,999 other children,” raises the obvious question: how do you know?

Section 8: High-Stakes Testing: Proctoring and Security

Jacobsen: These issues often tie into statistical limitations, such as sample size and whether the test was properly proctored. Then, there are potential conflicts of interest. For example, if someone takes a test designed by someone they know, the results could be biased. Setting aside those issues, we’ve covered a lot so far: definitions of intelligence, the scope of IQ tests, reframing to cognitive abilities, standard deviations, and reliable ranges. What about the context in which these tests are proctored? For example, tests developed with significant investment and large sample sizes are conducted in secure environments where answers aren’t leaked—what is the importance of those measures when trying to measure what IQ tests aim to assess?

Kovács: In short, high stakes. Suppose you want an elaborate and thorough measurement, especially when the stakes are high. In that case, ensuring the test is secure, properly administered, and statistically sound is essential. This is particularly critical in diagnostic contexts.

One high-stakes example is the death penalty in the U.S. Individuals with an IQ below 70 cannot be sentenced to death. Determining whether someone’s IQ is below this threshold becomes a matter of life and death—the highest stakes imaginable. While that’s not my area of research, it’s an extreme case where the reliability of IQ testing carries enormous weight.

More commonly, professionally proctored IQ tests are administered for diagnostic purposes, particularly in school settings. In the U.S. alone, millions of individually administered IQ tests are conducted yearly. These tests help identify cognitive strengths and weaknesses to guide educational and developmental interventions.

Section 9: Diagnostic Profiles, Multiple Intelligences, and the g-Factor

This is why a comprehensive profile, derived from a range of subtests, is so important. It provides a detailed view of strengths and weaknesses. For example, one of the most common recommendations by school psychologists is to suggest that a child be given extra time on tasks or exams.

Imagine a child with a profile showing excellent fluid reasoning (nonverbal problem-solving), strong verbal ability, and strong spatial ability but only slightly above average working memory and average perceptual speed. This profile often leads to frustration because the child’s abilities outpace their processing speed. In other words, their strengths cannot fully compensate for the slower speed at which they process information. This kind of detailed profile allows a school psychologist to make targeted recommendations to address the child’s specific challenges.

Individually administered tests are resource-intensive, typically taking one to one-and-a-half hours of a psychologist’s time in a one-on-one setting. This level of investment is far greater than administering a group test to 30 students, so it’s generally reserved for high-stakes situations. For instance, if a child is underachieving, frustrated, or showing signs of learning difficulties, then creating a full-ability profile is worth the investment. A detailed profile highlights individual strengths and weaknesses. It is far more useful for diagnostic purposes than a single overall score.

When I teach this, I often use an analogy to explain the limitations of an overall IQ score. Imagine visiting your doctor and receiving a detailed lab analysis of your blood sample. You see values for glucose levels, cholesterol, vitamin levels, and so on. Imagine the doctor told you, “Your health IQ is 70.” What would you learn from that? You’d know you’re in trouble—only 2% of people your age are less healthy than you—but it wouldn’t help you or your doctor determine what’s wrong or how to address it.

That’s the issue with relying solely on an overall IQ score. It’s like receiving a “health IQ” score that says you’re less healthy than 95% of your peers. While that might motivate you to worry, it doesn’t provide actionable insights. Similarly, while overall IQ scores can be useful to an extent—such as for Mensa membership, where the goal is to identify the top 2% of cognitive performers—they don’t provide the diagnostic depth necessary to understand and address specific challenges.

A health quotient (HQ) might be useful if your goal is to create a society comprising the healthiest 2% of people. However, if someone is unhealthy, an HQ score won’t help them. What they need is a detailed diagnostic to identify the specific problem. That’s why we use detailed tests and invest significant resources and time to assess a child individually and create a profile of their strengths and weaknesses.

Jacobsen: These are important cautionary tales about interpreting results. What about multiple intelligences, Sternberg’s triarchic theory of intelligence, and the g-factor? While there’s no general consensus, what is the prevailing view?

Kovács: These are all controversial topics. Regarding multiple intelligences, I think Howard Gardner’s work is more a critique of the educational system than a true theory of individual differences. Gardner has never shown much interest in rigorously measuring these intelligences. Essentially, his theory advocates focusing on children who might not be conventionally “smart” but excel in areas like social skills or the arts. It’s an example of extending the concept of intelligence, which is valuable in its own way. However, Gardner hasn’t developed reliable assessment tools for most of these proposed intelligences.

Whether we should call someone “intelligent” for having exceptional interpersonal skills despite not being conventionally smart is a matter of perspective. I’ll leave that judgment to others. As for the g-factor, that’s closer to my area of research. My work focuses extensively on interpretations of the g-factor, and I’ve published on this topic. We have a framework called the Process Overlap Theory, which explains the g-factor without requiring the assumption of a general intelligence or overarching ability. Naturally, I’m biased because this is my research field. Still, I see the g-factor as a summary or index score of separate cognitive abilities.

The g-factor is statistically advantageous in many ways. While it doesn’t represent a single ability, it’s a latent construct useful for certain purposes. For example, suppose you’re conducting large-scale sociological research and want to study how cognitive functioning predicts income. In that case, the g-factor is a highly effective tool. In that context, it doesn’t matter whether someone excels in working memory, perceptual speed, or vocabulary—the overall level of cognitive functioning matters.

However, the utility of the g-factor depends entirely on your purpose. For diagnostics, the g-factor is not particularly helpful. Like the HQ analogy—it provides an overall score but doesn’t tell you much about specific strengths or weaknesses. If your goal is to diagnose and support individuals, identifying patterns of cognitive strengths and weaknesses is far more informative. On the other hand, if you’re studying broad trends, such as the relationship between cognitive functioning and socioeconomic outcomes, the g-factor is invaluable.

If you want to predict someone’s salary based on their cognitive abilities, overall scores or indicators like the g-factor are very useful. However, I don’t see the g-factor as a proxy for a single “general intelligence.” Instead, it’s an index score calculated from various distinct abilities.

Jacobsen: That’s a very interesting perspective. I hadn’t heard the g-factor framed as an index that is useful at a sociological level rather than as a single overarching ability. Viewing it as an index aligns with your emphasis on distinct cognitive abilities rather than one general factor. That makes the research clearer, too.

Kovács: Exactly. I’m glad it makes sense.

Section 10: Final Reflections: Caution and Clarity in Assessment

Jacobsen: Any final important things people should remember when they look at scores or assessments?

Kovács: That topic would take over a minute to address, so I’ll leave it at that for now. If that’s okay with you, my part is complete. I look forward to seeing the transcript.

Jacobsen: Excellent. 

Kovács: Thank you for your time and patience. 

Jacobsen: I truly appreciate this conversation.

Kovács: Thank you so much. Cheers!

Discussion

The interview between Scott Douglas Jacobsen and Dr. Kristóf Kovács provides a detailed exploration of how modern psychology understands and measures cognitive abilities. Dr. Kovács challenges the traditional notion of “intelligence” as a singular construct, emphasizing instead the pluralistic nature of cognitive abilities such as fluid reasoning, crystallized knowledge, perceptual speed, and working memory. By moving beyond a single “IQ” score, he advocates for a more nuanced view that can guide targeted educational and diagnostic interventions. A recurring theme in the conversation is the distinction between intelligence as a broad concept and IQ scores as comparative, standardized metrics. Dr. Kovács underscores that IQ testing, while not perfect, remains one of the best available tools for evaluating cognitive performance—reminiscent of Winston Churchill’s remark about democracy being the “worst form of government except for all the others.” The interview critiques the widespread fetishization of extreme IQ scores, highlighting that many of these extraordinary claims lack robust statistical grounding, especially beyond three standard deviations from the mean.

Another significant thread is the question of what IQ tests fail to measure. Dr. Kovács points to creativity and sensorimotor abilities as cognitive functions often overlooked in conventional testing. Additionally, the conversation addresses multiple intelligences (e.g., emotional or spiritual intelligence) and how broadening the definition of “intelligence” can move us away from precise measurement, potentially conflating distinct skill sets under one umbrella term. The importance of standardized, proctored testing environments also features prominently. High-stakes scenarios—such as determining if an individual’s cognitive functioning meets legal thresholds—demand rigorous procedures to ensure both validity and reliability. Dr. Kovács illustrates how a more detailed cognitive profile, built from a series of subtests, can offer actionable insights. By examining strengths and weaknesses, educators and clinicians can better tailor interventions for individual needs.

Ultimately, the conversation highlights that while IQ tests serve as valuable predictors in large-scale sociological research—such as forecasting educational or occupational outcomes—their utility in diagnosing and guiding individuals hinges on deeper, more granular analyses of cognitive abilities. Dr. Kovács calls for a balance between recognizing the broad applications of IQ tests and acknowledging the complexity of human cognition, urging educators, psychologists, and policymakers alike to interpret scores with both caution and context in mind.

Methods

The interview with Dr. Kristóf Kovács was conducted in a semi-structured format prior to its receipt on January 10, 2025; it was published on January 22, 2025. Scott Douglas Jacobsen coordinated this conversation after receiving a recommendation from Björn Liljeqvist, former chair of Mensa International. Questions were designed to elicit detailed responses about IQ measurement, cognitive abilities, and the practical implications of test usage in educational and diagnostic settings. The session was recorded with the informed consent of both parties to ensure accuracy in transcription. Post-interview, the recording was transcribed verbatim and subsequently organized into thematic sections to align with the central topics covered, including the definition of intelligence, the role of standard deviations, and the limitations of IQ testing. This thematic organization aimed to provide readers with a coherent narrative, linking empirical research to real-world applications. By employing a semi-structured interview technique, Jacobsen allowed Dr. Kovács the flexibility to elaborate on specific areas of his expertise, while ensuring the conversation remained focused on key issues of interest to high-IQ communities, educators, and psychologists. This methodological choice facilitated a balanced dialogue, blending guiding questions with open-ended discussions that illuminate the complexities of measuring and interpreting human cognitive abilities.

Data Availability

No datasets were generated or analyzed during the current article. All interview content remains the intellectual property of the interviewer and interviewee.

References

(No external academic sources were cited for this interview.)

Journal & Article Details

  • Publisher: In-Sight Publishing
  • Publisher Founding: March 1, 2014
  • Web Domain: http://www.in-sightpublishing.com
  • Location: Fort Langley, Township of Langley, British Columbia, Canada
  • Journal: In-Sight: Interviews
  • Journal Founding: August 2, 2012
  • Frequency: Four Times Per Year
  • Review Status: Non-Peer-Reviewed
  • Access: Electronic/Digital & Open Access
  • Fees: None (Free)
  • Volume Numbering: 13
  • Issue Numbering: 2
  • Section: E
  • Theme Type: High-Range Test Construction
  • Theme Premise: “Outliers and Outsiders”
  • Theme Part: 33
  • Formal Sub-Theme: None
  • Individual Publication Date: January 22, 2025
  • Issue Publication Date: April 1, 2025
  • Author(s): Scott Douglas Jacobsen
  • Word Count: 3,622
  • Image Credits: Photo by Ben Mullins on Unsplash
  • ISSN (International Standard Serial Number): 2369-6885

Acknowledgements

The author thanks Dr. Kristóf Kovács for his time and willingness to participate in this interview.

Author Contributions

S.D.J. conceived and conducted the interview, transcribed and edited the conversation, and prepared the manuscript.

Competing Interests

The author declares no competing interests.

License & Copyright

In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
© Scott Douglas Jacobsen and In-Sight Publishing 2012–Present.

Unauthorized use or duplication of material without express permission from Scott Douglas Jacobsen is strictly prohibited. Excerpts and links must use full credit to Scott Douglas Jacobsen and In-Sight Publishing with direction to the original content.

Supplementary Information

Below are various citation formats for On High-Range Test Construction 28: Dr. Kristóf Kovács on Accuracy in IQ, Intelligence, and Cognitive Abilities.

  1. American Medical Association (AMA 11th Edition)
    Jacobsen S. On High-Range Test Construction 28: Dr. Kristóf Kovács on Accuracy in IQ, Intelligence, and Cognitive Abilities. January 2025;13(2). http://www.in-sightpublishing.com/high-range-28
  2. American Psychological Association (APA 7th Edition)
    Jacobsen, S. (2025, January 22). On High-Range Test Construction 28: Dr. Kristóf Kovács on Accuracy in IQ, Intelligence, and Cognitive Abilities. In-Sight Publishing. 13(2).
  3. Brazilian National Standards (ABNT)
    JACOBSEN, S. On High-Range Test Construction 28: Dr. Kristóf Kovács on Accuracy in IQ, Intelligence, and Cognitive Abilities. In-Sight: Interviews, Fort Langley, v. 13, n. 2, 2025.
  4. Chicago/Turabian, Author-Date (17th Edition)
    Jacobsen, Scott. 2025. “On High-Range Test Construction 28: Dr. Kristóf Kovács on Accuracy in IQ, Intelligence, and Cognitive Abilities.” In-Sight: Interviews 13 (2). http://www.in-sightpublishing.com/high-range-28.
  5. Chicago/Turabian, Notes & Bibliography (17th Edition)
    Jacobsen, S. “On High-Range Test Construction 28: Dr. Kristóf Kovács on Accuracy in IQ, Intelligence, and Cognitive Abilities.” In-Sight: Interviews 13, no. 2 (January 2025). http://www.in-sightpublishing.com/high-range-28.
  6. Harvard
    Jacobsen, S. (2025) ‘On High-Range Test Construction 28: Dr. Kristóf Kovács on Accuracy in IQ, Intelligence, and Cognitive Abilities’, In-Sight: Interviews, 13(2). http://www.in-sightpublishing.com/high-range-28.
  7. Harvard (Australian)
    Jacobsen, S 2025, ‘On High-Range Test Construction 28: Dr. Kristóf Kovács on Accuracy in IQ, Intelligence, and Cognitive Abilities’, In-Sight: Interviews, vol. 13, no. 2, http://www.in-sightpublishing.com/high-range-28.
  8. Modern Language Association (MLA, 9th Edition)
    Jacobsen, Scott. “On High-Range Test Construction 28: Dr. Kristóf Kovács on Accuracy in IQ, Intelligence, and Cognitive Abilities.” In-Sight: Interviews, vol. 13, no. 2, 2025, http://www.in-sightpublishing.com/high-range-28.
  9. Vancouver/ICMJE
    Jacobsen S. On High-Range Test Construction 28: Dr. Kristóf Kovács on Accuracy in IQ, Intelligence, and Cognitive Abilities [Internet]. 2025 Jan;13(2). Available from: http://www.in-sightpublishing.com/high-range-28

Note on Formatting

This layout follows an adapted Nature research-article structure, tailored for an interview format. Instead of Methods, Results, and Discussion, we present Interview transcripts and a concluding Discussion. This design helps maintain scholarly rigor while accommodating narrative content.

 
