Conversation with Bob Williams on Practical and Impractical Intelligence Testing: Retired Nuclear Physicist (7)

2024-04-01

Publisher: In-Sight Publishing

Publisher Founding: March 1, 2014

Web Domain: http://www.in-sightpublishing.com

Location: Fort Langley, Township of Langley, British Columbia, Canada

Journal: In-Sight: Independent Interview-Based Journal

Journal Founding: August 2, 2012

Frequency: Three (3) Times Per Year

Review Status: Non-Peer-Reviewed

Access: Electronic/Digital & Open Access

Fees: None (Free)

Volume Numbering: 12

Issue Numbering: 2

Section: A

Theme Type: Idea

Theme Premise: “Outliers and Outsiders”

Theme Part: 30

Formal Sub-Theme: None.

Individual Publication Date: April 1, 2024

Issue Publication Date: May 1, 2024

Author(s): Scott Douglas Jacobsen

Word Count: 6,790

Image Credits: None.

International Standard Serial Number (ISSN): 2369-6885

*Please see the footnotes, bibliography, and citations, after the publication.*

Abstract

Bob Williams is a Member of the Triple Nine Society, Mensa International, and the International Society for Philosophical Enquiry. He discusses: satisfactory retirement in 1996; how standardized tests were not widely utilized for nuclear physics job admissions; microfiche as a valuable research tool; entering the workforce in 1966 without testing; transition from male-dominated colleges to coeducation; early 90s intelligence research material; Richard Lynn’s work in Mensa Research Journal; influential books on intelligence research; statistical methods for high sigma tests facing challenges; challenges to psychometric g including alternative intelligence models; Network Neuroscience Theory exploring brain networks’ role in intelligence; intelligence decline trends observed in developed nations; statistical methods not applicable in intelligence studies; the validity of high sigma IQ tests; constructing culture-fair tests for high sigma ranges facing practical and theoretical challenges; AI advancements and intelligence measurement; DNA analysis and intelligence estimation; AI conversational agents estimating human intelligence; fear of controversy may hinder certain research topics; respect for disciplines may be affected by controversial research topics; unaided smart kids in education; “woke” in context of left-leaning educational policies; potential avenues for measurement, exploring animal studies and leveraging AI technologies; concept of “magic multipliers”; decoupling of the Flynn Effect (FE) from general intelligence (g); ethical considerations of reproductive technologies, particularly in context of assisted reproduction and genetic screening; potential development of artificial general intelligence (AGI) based on our understanding of brain structures and processes related to intelligence; and integration of modern network models with existing theories of intelligence, signaling potential direction for future research in this field.

Keywords: admissions, challenges, conferences, diffusion tensor imaging, intelligence, interviews, libraries, microfiche, myths, networks, psychometric g, research, standardized tests, statistics, twin studies.

Conversation with Bob Williams on Practical and Impractical Intelligence Testing: Retired Nuclear Physicist (7)

Scott Douglas Jacobsen: How was the retirement in 1996? Were standardized tests of note utilized in admissions for particular jobs in workspaces requiring nuclear physics? I have used microfiche in some research at one of the libraries in a postsecondary institution here. It is still a good resource. I’m pro-microfiche, but a minority user!

Bob Williams: I entered the workforce in 1966.  There was no testing, just a face-to-face interview.  The thing that is interesting (to me) about the outcome of this is that hiring people largely on the basis of the degrees they held resulted in a fairly homogeneous group of people who ranged from bright to very bright.  In 1966, we were still in an era in which a much smaller fraction of men went to college/university and a still smaller fraction of women went.  Of the women who did attend college, most were in colleges for women (including some very well-known schools with respected academics) or went to colleges for teachers, which was a subset of the former.  By the time I retired, women were a majority in some colleges and the colleges that previously admitted only men were open to women.  I think by then colleges for women were admitting men and the real, women-only, colleges were headed for change or closure.

I am surprised that MicroFiche still exists!  I love being able to locate papers and books with a computer and often obtain the found document instantly by downloading it.

Jacobsen: Consider the period between the 1990s and 2003/04, when you joined and began attending conferences of the International Society for Intelligence Research. What were the first realizations in this independent research for you?

Williams: Back then, good material was not only more difficult to find, but there was much less of it.  In the early 90s I subscribed to the Mensa Research Journal.  It was mostly filled with reprints from various sources, but occasionally had a direct submission.  I recall seeing Richard Lynn’s work there and reading about his ideas on the evolution of intelligence.  They presented him with an award for his intelligence research contributions.  At about that time, I joined the International Society for Philosophical Enquiry and met Miles Storfer.  I bought his recently written book from him (he carried them around): Storfer, Miles D. (1990).  Intelligence and giftedness: The contributions of heredity and early environment. San Francisco, CA, US: Jossey-Bass.  Then a big one arrived:  Herrnstein, R. J., & Murray, C. (1994). The Bell Curve: Intelligence and Class Structure in American Life. New York: Free Press.  By this time, I had found and read enough that I already knew the material they reviewed, so the interesting part was the new analysis of the National Longitudinal Survey of Youth data.  A few years later, the most cited book in the history of intelligence research publications arrived: Jensen, A. R. (1998). The g factor: The science of mental ability. Westport, CT: Praeger.  I had already read some of Jensen’s papers and some references to his work in various other sources.  By the time I met Jensen in 2004, he had become my passive mentor.

My realizations were, first that I had to learn some statistical methods that I had not previously encountered, and second that the science of intelligence is inherently messy.  Coming from a physics background, I was used to things being precisely measurable and repeatable.  The niche of intelligence within differential psychology was much like mud wrestling.  I quickly learned to appreciate the challenge of extracting meaning from data that was full of confounds.  It is a fascinating challenge and I think it is rewarding, particularly when most of the real meat of the science is hidden to a much greater extent than happens in physics and chemistry.

In the innately fuzzy world of life sciences there are studies that we cannot do for social or practical reasons, but someone finds a brilliant way to extract the information from natural experiments.  For example we cannot inflict a famine on an experimental group, but since real famines have happened (such as the Dutch famine during WW2), it is sometimes possible to find data that relates directly to those events.  Besides the Dutch data, there was the interesting question of how to determine if head sizes had changed over time.  If you want to consider a long time, direct measurements are impossible, unless they were performed and recorded (they were not).  In this case, Rushton found Army data on the number of military helmets that were issued by size.  Yes, he found an increase.

Jacobsen: Were there points of collaboration?

Williams: Yes, a few.  Most of the material I published was solo, but there were a few papers where I was a coauthor.  These were all publications in academic journals.  I have published much more in the private journals Noesis, Gift of Fire, Vidya, and Telicom.

Jacobsen: Let’s call this the exploratory years, or something friendly like that. What were the major realizations upon entering the field at the time? What were the first myths dispelled?

Williams: I don’t recall having heard and believed any of the many popular myths that persist about intelligence.  There were lots of new things to learn that I had not previously encountered.  Learning how the twin studies and adoption studies were conceived, executed, and reported was important and impressive.  Both Robert Plomin and Thomas Bouchard initiated these somewhat challenging studies.  I met Bouchard in 2004 and recall having asked him enough questions to have been a pest.  He was very helpful in explaining things that few people understand.  For example, I learned that it was true that twins have a statistically lower intelligence than singletons and that the issue of the heavier twin being more intelligent was true, but had been solved by prenatal care.  I also learned that the attacks against some researchers were much worse than I imagined.  Among those who really suffered (in the time frame you mentioned) were Nyborg and Brand, both of whom lost their jobs.  Jensen took more flack than anyone, but he seemed unfazed by it.  In fact, he told me to watch for the upcoming paper he did with Rushton.  He said that he expected it would cause “quite a stir.”  [Rushton, J.P. and Jensen, A.R. (2005). Thirty Years of Research on Race Differences in Cognitive Ability. Psychology, Public Policy, and Law, Vol. 11, No. 2, 235-294.]  After the paper came out, I asked him if there was any notable reaction to it.  He said “no,” and seemed disappointed.  It led me to suspect that he was looking forward to another rant from the left, which did not happen.

Jacobsen: Now, to those first realizations and myths taken away by truths, what ones have remained true?

Williams: I wish I had a list of such myths that involved me, but as I explained, there were none.  I was disconnected from the field of intelligence research until my interest developed in the early 90s.  When I became interested, I was lucky (or careful) to ease into the new field by following the real experts.  The job was one of reading books and papers and those generally do not get far off target.

There was one common belief that was disproved to the surprise of everyone.  One of the things that was consistently reported was the correlation between brain size and intelligence.  When structural MRI became available, the correlation was found to be about r = 0.40.  That was challenged by a meta-analysis that showed a somewhat smaller correlation coefficient, but then it was shown that the meta-analysis consisted of a large number of studies that used low quality IQ tests.  When only high quality tests were used, the old number turned out to be correct.  But that was not the surprise.  The surprise appeared in this paper:

Erhan Genç, et al. (2018). Diffusion markers of dendritic density and arborization in gray matter predict differences in intelligence. Nature Communications, 9:1905.  It can best be appreciated from a figure in the paper.

The caption accompanying that figure explains what was found.  Genç was using diffusion tensor imaging for this work.  I have had the great pleasure of getting to know him a bit.  His most recent work combines brain imaging with polygenic scores.

Jacobsen: After the exploratory years and the interaction with individuals who wrote papers and books on the subject of intelligence, what first struck you about the professional community of intelligence researchers? Some see intelligence as the most important human trait.

Williams: Of course, intelligence is not only the most important human trait, but it is even more.  Detterman expressed this perfectly:

Detterman, D. K. (2016). Was Intelligence necessary? Intelligence, 55.

“From very early, I was convinced that intelligence was the most important thing of all to understand, more important than the origin of the universe, more important than climate change, more important than curing cancer, more important than anything else. That is because human intelligence is our major adaptive function and only by optimizing it will we be able to save ourselves and other living things from ultimate destruction. It is as simple as that.”  

As for the professional community, my impression was that the researchers were brighter than I expected and some were strong mathematicians (statistics).  I also found that they were open to having a non-psychologist asking a lot of questions. 

Jacobsen: What have been the most significant challenges to psychometric g as the definition of intelligence and as a psychological construct in the past? How have those been met with sufficient time and evidence?

Williams: The two well known challenges to g theory are Gardner’s multiple intelligence model and the emotional intelligence construct.  Both are wildly popular among laymen and shunned by researchers.  Both models contend that g theory is incorrect, but both are based on arguments in which g is present.  For example, of the multiple intelligences claimed by Gardner, most are just statements of factors that are linked to the one and only g.  Most book authors feel obligated to mention these models, then explain that they are not sound.

Jacobsen: What remain challenges to psychometric g?

Williams: There are some new models that are being discussed, but the literature that I have seen does not show a fully constructed model for any of them.  Instead, they mention aspects of recent research that point to other model configurations.  One of these is Network Neuroscience Theory.  Relatively recent technologies, such as Diffusion Tensor Imaging, have made it possible to see and study brain networks.  Characteristics of these networks have been shown to be indicators of intelligence.  The brain is, per this research, organized as a small-world network.  This means that there are dense local networks (anatomically localized modules) that communicate with global networks.  The modules have the advantage of close proximity within the small network, making them fast and efficient for related tasks.

If the brain suffers focal injury, a module can alter its function to help compensate for lost modules in the damaged volume.  This results in a more robust brain that can deal with trauma (to some extent).

Much of this is similar to the way we use networks for information movement between computers.  It is my understanding that one of the difficulties is the wide range of structural differences between people.  This is yet another demonstration of the messiness encountered when trying to use neurological data statistically.  It can be done, but requires a lot of separate observations, followed by good statistical analysis.
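As a rough, illustrative sketch of the small-world property described above (synthetic graphs built with the networkx library, not neuroimaging data), the snippet below compares a Watts-Strogatz small-world graph with a size-matched random graph on the two defining measures, local clustering and average path length:

```python
# Rough illustration of the small-world idea: high local clustering (the "modules")
# combined with short global paths. Synthetic graphs via networkx, not brain data.
import networkx as nx

def summarize(g):
    # Guard against a (rare) disconnected random graph by using the largest component.
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    return nx.average_clustering(g), nx.average_shortest_path_length(giant)

small_world = nx.watts_strogatz_graph(n=200, k=10, p=0.1, seed=1)  # mostly local links, a few long-range ones
random_graph = nx.gnm_random_graph(n=200, m=small_world.number_of_edges(), seed=1)

for name, g in (("small-world", small_world), ("random", random_graph)):
    clustering, path_length = summarize(g)
    print(f"{name:12s} clustering={clustering:.3f} avg path length={path_length:.2f}")

# Expected pattern: the small-world graph keeps clustering much higher than the
# random graph while its average path length stays nearly as short.
```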

Anyone wanting to find and read material on this topic should begin by searching for papers by Aron K. Barbey.  I have read his work for years and always found it to be outstanding.

Jacobsen: Regarding “IQ improvements for each generation is at odds with a substantial amount of data showing that real intelligence has been declining for a long time in virtually all developed nations,” what regions of the world have the strongest data and have the weakest data? What is the reason for the gap in depth of data?

Williams: Intelligence studies tend to start in Western Europe and North America, then are extended to other locations.  One obvious reason for this is that there are more intelligence researchers in those two locations and it is much easier for them to do local studies.  In the case of intelligence decline, there are multiple specifics that apply:

•    The dysgenic effect was identified and described in The Bell Curve in 1994.  Richard Lynn published a book on it in 2011, then Woodley and Dutton published another book (Wits’ End) in 2018.  The Bell Curve included only a small box on the topic, but the two books from Britain were focused on the decline.  So, virtually all of the book-level work was British; that British dominance shows clearly in Wits’ End (2018).

•    Since the cause of the dysgenic effect is the negative correlation between IQ and fertility rate, the effect would be muted–probably to zero–in very low IQ nations and breeding groups (e.g. sub-Saharan Africa and Australian Aborigines).

•    Since the effect size is small, it was easily masked by gains in the Flynn Effect (these are non-g artifacts).  In order to study the actual changes over time, it is necessary to have data that goes back for over a century.  Such data can be found in Britain and possibly a couple of other nations.  So, we cannot learn much about other nations from direct data.  These are discussed in Wits’ End.

•    The findings from the 1870s onward can be extrapolated to more recent reports, which now include essentially all developed nations.

Jacobsen: When there are gaps in data, are there statistical methods used to fill those gaps if they exist?

Williams: Not in this case.  Per my comment above, the cause and effect has been established by data, largely from Britain, that goes back to Galton.  Once the process has been shown by a variety of independent measures, we are left to accept the default hypothesis (that the same thing happens consistently) until something is identified to point to another outcome.

Jacobsen: If so, how do those statistical methods work?

Williams: I haven’t seen any attempt to do more than demonstrate that the fertility rate is negatively correlated with IQ.  There was some discussion of the role of increasing mutation load as a cause of the dysgenic effect.  That topic died, probably due to the realization that tens of thousands of SNPs are the genetic basis of intelligence.  With tiny effect sizes, accumulated mutations would take a very long time to show an effect.
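As a toy sketch of how a negative IQ-fertility correlation implies a per-generation decline, the snippet below applies standard breeder's-equation logic: the expected genotypic change per generation is roughly the heritability times the fertility-weighted selection differential on IQ. Every number here, including the fertility schedule and the heritability value, is invented for illustration rather than taken from any study:

```python
# Toy breeder's-equation sketch of the dysgenic effect (all numbers are made up).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
iq = rng.normal(100, 15, n)
# Hypothetical fertility schedule with a small negative correlation with IQ.
fertility = np.clip(2.0 - 0.01 * (iq - 100) + rng.normal(0, 0.8, n), 0, None)

# Selection differential: fertility-weighted mean IQ of parents minus the population mean.
selection_differential = np.average(iq, weights=fertility) - iq.mean()
h2 = 0.7  # assumed narrow-sense heritability, for illustration only
print("selection differential:", round(selection_differential, 2), "IQ points")
print("expected change per generation:", round(h2 * selection_differential, 2), "IQ points")
```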

 

One interesting and related area of research is the study of past civilizations by using polygenic scores.  I have comments on this a few answers down.  It may eventually be possible to use polygenic scores to make statistically reliable estimates of the changes in mean intelligence (for a given location) over time.

Jacobsen: What might be a hypothetical test with the ability to tap into 1-sigma and 6-sigma g? In theory, if the data continues to follow one after the other in a convergent direction, then we should have high-range tests with potential for large, properly controlled samples of the general population without compromises to the test. Chris Cole, a longstanding member of the Mega Society, and his team have been working for years on an adaptive, cheat-resistant test. David Redvaldsen’s recent norming of the Mega Test and the Titan Test shows test scores legitimate up to the one-in-a-million level, but barely, and nowhere near many of the claimed scores of one-in-a-billion or more. Those remain false, but seemed true in an earlier time, and the newer norms seem more reasonable given the newer spate of testing devoted, mostly independently, to the high range. It is a testament to the contribution of Hoeflin to high-range testing to get above 4-sigma tests, but shy of 5-sigma.

Williams: There are two parts to my belief that measurements above 4 sigma are not informative: 1) norming is impractical; 2) the construct of intelligence and its measure (IQ) are difficult to impossible to defend.  There is also a problem of demonstrating that high sigma tests can be compared over the same range.

As we all know, IQ is measured relative to a group of real people who are selected to statistically represent the full population.  Typical professional IQ tests are designed to cover a range of ± 2.5 sigma, which is adequate to reach the 99th percentile.  Some professional IQ tests (the WISC 4 & 5, Stanford-Binet 5, and DAS2 are the ones I am aware of) claim extended scales.  They claim to use developmental markers instead of norming group data. Obviously, this restricts the scales to children.  The largest adult norming group I am aware of is 8,000 for the Woodcock-Johnson.  Some tests have considerably smaller groups and presumably take a hit in the error bands for that reason.  To test at 4 sigma, you would need over 31,000 people in the norming group in order to hopefully have one datum.  It is easy to see that even at 4 sigma, the cost of dealing with a huge norming group would be prohibitive.  The process effectively reaches an unbearable cost with very little return. [If Item Response Theory is used, norming is not required, but the need for a large reference group does not vanish.]
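The arithmetic behind those norming-group sizes follows directly from the normal distribution's upper tail; the snippet below (illustrative only) prints the approximate rarity at each sigma level:

```python
# Expected norming-group size needed before ~1 member falls beyond a given sigma
# threshold, assuming normally distributed scores. Illustrative arithmetic only.
from scipy.stats import norm

for sigma in (2.5, 3, 4, 5, 6):
    tail = norm.sf(sigma)  # upper-tail probability beyond `sigma`
    print(f"{sigma} sigma: about 1 in {1 / tail:,.0f}")

# Roughly: 2.5 sigma ~1 in 161; 3 sigma ~1 in 741; 4 sigma ~1 in 31,600;
# 5 sigma ~1 in 3.5 million; 6 sigma ~1 in 1 billion.
```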

Now, let’s deal with construct validity and predictive validity.  As we go beyond 4 sigma (and possibly before reaching it) we have to ask if the construct of IQ is the same as it is at lower levels.  Because of Spearman’s Law of Diminishing Returns (SLODR – if we accept it as fact), we expect that very high intelligence becomes heavily influenced by group factor residuals.  [group factors = broad abilities, these are Stratum II in a three stratum model]  In other words, the thing that we are doing at the usual levels is using a tool that had enough g variance that it can be used as a proxy for g, but SLODR tells us that g contributes less and less to the variance in intelligence as we move to high levels.  Although the analogy is not perfect, you can think of this as being similar to the change of state of a solid as it is heated and becomes liquid, and then goes to a third state as a gas.  The properties of the same element in each state cannot be meaningfully compared.  In the case of measuring above 4 sigma, there is the likelihood that most of the variance is not g variance, so it is necessarily variance in the residuals of broad abilities, after g is factored out.  Here we have a case of measuring where there is not a single g that is accounting for the interindividual differences, so different people may score very high on any of the group factors.  In the CHC model, these factors should be present:

  • Gc __ breadth and depth of acquired knowledge
  • Gf __ fluid reasoning – reason, form concepts, solve problems
  • Gq __ quantitative knowledge
  • Grw __ reading and writing ability
  • Gsm __ short-term memory
  • Glr __ long-term memory
  • Gv __ visual processing – think and recall with visual patterns
  • Ga __ auditory processing – process and discriminate speech sounds
  • Gs __ processing speed – clerical task speed

If g has already reached near saturation, factors such as Gf and Gc (top g loadings) probably will not turn out to be the source of most variance.  Just guessing, I would expect Gq, Gv, and Ga might turn out to be dominant.  If someone scores at a level taken to be at 5 sigma due to a very high Gq, would it make sense to say that he is as smart as someone at the same 5 sigma level who made it on the basis of a high Ga?  To me, the reason intelligence is meaningfully measurable over the usual range is that it can ultimately be reduced to one single factor (g).

If we ignore all of the small details and have a test that specifies rarity up to 6 sigma, there must be real-world measures that confirm that the test is differentiating something that happens differently as a function of IQ in the very high range.  The sorts of things that work in measurable ranges are similar to these: income, SES, job status, number of patents issued (engineers), age at tenure (professors), scientific publications, major awards*, having a role in work that is domain changing, etc.  If outcomes cannot be statistically predicted for different levels (i.e., 5 sigma vs. 5.5 sigma), then the test is not meeting the requirement of predictive validity and must be classified as an ethereal exercise.

* Examples from the awards received by Feynman:  Putnam Fellow · Nobel Prize in Physics · Albert Einstein Award · Oersted Medal · National Medal of Science for Physical Science · Foreign Member of the Royal Society.
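As a toy illustration of the predictive-validity check described above (synthetic scores and an invented outcome variable, not data from any real test), the snippet below asks whether an outcome still varies with score inside the extreme band; a near-zero correlation is exactly the failure mode being warned about:

```python
# Toy predictive-validity check within the extreme band (synthetic data only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Scores a test assigns in the 5-to-6 sigma band, and a hypothetical outcome
# (say, log publication count) that does NOT depend on score in that band.
scores = rng.uniform(5.0, 6.0, 400)
outcome = rng.normal(10, 2, 400)

r, p = stats.pearsonr(scores, outcome)
print(f"r = {r:.3f}, p = {p:.3f}")  # near-zero r: the 5 vs. 5.5 sigma distinction predicts nothing
```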

Since I have already made this answer long, I will not expand much on the various other items that relate to difficulties in measuring above 4 sigma, but I will list some of the things that have to be resolved if a test is to be useful at any level:

•    Is it invariant with respect to breeding groups, sex, and age?

•    Is it properly and confidently age corrected so as to meet the definition of IQ?  [I think this is an important one.]

•    Is it subject to Flynn Effect artifacts?   Are they properly handled?

•    Is the g loading of the test known? [Requires testing a large group.]

•    Is the reliability coefficient derived from sound measurement?  Is it 0.90 or higher?  [A sketch of one common estimate, Cronbach’s alpha, appears after this list.]

•    Is construct validity established by comparison between its factorial structure and that of a major comprehensive test (WAIS or Woodcock-Johnson)?

•    Are the broad ability factors balanced, so that the test is not unduly weighted by a small number of factors? [This impacts the factor loadings of the test.]

•    Is the test administered by a qualified person (psychologist)?  If not, how is the use of new and powerful artificial intelligence prevented?

[These and similar items were discussed in my article, High Range IQ Tests — Are They Psychometrically Sound?, Noesis, #207, February 2021.] All of these things are difficult to satisfy and are usually quite costly.  It may be impossible to actually demonstrate some, or most, of these for ceilings above 4 sigma.
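For the reliability item in the checklist above, one common internal-consistency estimate is Cronbach's alpha. The sketch below computes it from a synthetic respondent-by-item score matrix; the data and the 20-item design are invented for illustration:

```python
# Cronbach's alpha from a respondents x items score matrix (synthetic data).
import numpy as np

def cronbach_alpha(scores):
    """scores: respondents x items matrix of item scores."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(7)
ability = rng.normal(size=(300, 1))                      # latent trait of 300 test-takers
items = ability + rng.normal(scale=1.0, size=(300, 20))  # 20 noisy items loading on it
print(round(cronbach_alpha(items), 3))  # a well-built test would want this at 0.90 or higher
```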

Jacobsen: How could we use techniques for translating regular gold-standard tests like the WAIS and SB to make culture fair tests up to a 6-sigma range?

Williams: Given my long answer (above), I believe that the problems I listed are unlikely to be resolved unless something startling appears from AI.  The surprises that are coming from AI are more than a step up; they are dramatic.  One study in particular illustrates how AI can do things that are not only unexpected, but also not understood by researchers:  Banerjee, I., Bhimireddy, A.R., Burns, J.L., Celi, L.A., Chen, L.C., Correa, R., Dullerud, N., Ghassemi, M., Huang, S.C., Kuo, P.C. and Lungren, M.P., 2021. Reading race: AI recognises a patient’s racial identity in medical images. arXiv preprint arXiv:2107.10356.

This x-ray analysis, based on AI, demonstrates that something totally unforeseen might happen that changes how intelligence is best measured and understood.  One area that I am watching is the analysis of genome wide association studies, using AI.

Jacobsen: If g is largely innate while still susceptible to environmental blunting, can we estimate the contexts of g for ancient civilizations and peoples, as a general comparative metric in current times, so making a within-species general comparative metric across times? People likely encountered more bodily traumas and malnutrition in the past, for instance. Modern Western types, in most cases, tend to be well-fed, pampered, and comfortable in contrast with ancient humanity.

Williams: There is IQ work ongoing now, based on DNA samples from ancient groups.  The first paper I encountered on this topic: Intelligence Trends in Ancient Rome: The Rise and Fall of Roman Polygenic Scores; Davide Piffer, Edward Dutton, Emil O. W. Kirkegaard; OpenPsych July 2023; DOI: 10.26775/OP.2023.07.21.  There is a long video interview of Piffer by Kirkegaard that discusses this topic in depth.  I assume readers can find it with a search engine.  Piffer mentioned that DNA data is pouring in from various ancient groups and that there is ongoing work to analyze it via polygenic scores.  There are some obvious limitations, such as not being able to identify insults to the DNA that might have reduced individual intelligence.  As the sample sizes increase, the confidence levels of this work will improve, but even now, the results are useful in tracking intelligence over wide time intervals.
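As a bare-bones illustration of what a polygenic score is (synthetic genotypes and effect weights, not the actual variants or GWAS weights used by Piffer, Dutton, and Kirkegaard), the snippet below computes dosage-weighted scores and compares group means across two time periods:

```python
# Minimal polygenic-score sketch: dosage-weighted sum of per-SNP effect sizes,
# then group means compared across time periods. Entirely synthetic data.
import numpy as np

rng = np.random.default_rng(42)
n_snps, n_individuals = 1_000, 500
effect_sizes = rng.normal(0, 0.02, n_snps)                     # per-allele weights from a hypothetical GWAS
dosages = rng.binomial(2, 0.5, size=(n_individuals, n_snps))   # 0/1/2 copies of each effect allele

polygenic_scores = dosages @ effect_sizes
period = rng.choice(["early", "late"], n_individuals)          # stand-in for dated ancient samples

for p in ("early", "late"):
    print(p, "mean PGS:", round(polygenic_scores[period == p].mean(), 3))
```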

Jacobsen: In the future, could we use artificial intelligences mimicking various general levels of intelligence of people to do wordplay and that converse with human interlocutors to estimate g in the tested human? It would be a step away from a direct brain scan estimate, but it would be cheaper and more output oriented.

Williams: I assume that AI will advance from the already impressive performance (certain applications) to reach levels that will be startling.  AI should be able to learn from various data sets, such as the norming data for the Woodcock-Johnson that has been made available to researchers.  It would seem to be a natural fit for the use of Item Response Theory.  AI should be able to determine Item Characteristic Curves, or something similar, but which is developed from within the AI system.  I wouldn’t be surprised if it is eventually able to make good estimates of intelligence by simply examining discussions by various people, either in video or text format.  We already do that when we watch someone who is either obviously dull or obviously brilliant.  It would be interesting to see what a trained AI system would, perhaps in a few years from now, observe from videos of Joe Biden, Kamala Harris, Christopher Hitchens, and Sabine Hossenfelder.
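As a small sketch of the Item Response Theory objects mentioned here (a generic two-parameter logistic model, not the calibration of any actual test), an item characteristic curve can be written directly:

```python
# Two-parameter logistic (2PL) item characteristic curve: probability of a correct
# answer as a function of ability (theta), difficulty (b), and discrimination (a).
import numpy as np

def icc(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

thetas = np.array([-2.0, 0.0, 2.0, 4.0])  # ability in z-score units
print(icc(thetas, a=1.5, b=1.0))          # a fairly hard, discriminating item

# Adaptive testing (as in the cheat-resistant test mentioned earlier) would pick the
# next item whose difficulty b is closest to the current ability estimate.
```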

Jacobsen: In theory, could we use such a system to establish what a human general intelligence – whatever the culture and native tongue – would likely produce as output in conversation if intent on showing the real general intelligence, even if we have not found such an individual through regular testing channels with a psychometrician? There is popular theatrical commentary on an LLM with an IQ of 155 for verbal intelligence. Stuff like this. However, I mean a real correlation matrix extended or extrapolating based on live human input and incredible amounts of data and deep learning, ANNs. So, “Human with a cognitive rarity of 1 in a 1,000 sounds like this on either side of the curve. 1 in 30,000 sounds like this. Therefore, based on these sophisticated algorithms and extrapolations, the 1-in-10,000,000 person should sound like this.” It would reverse the sample size problem to an artificial sample size solution in a way. An artificial constellation of language used to determine where someone sits in cognitive rarity with the ANN constantly learning, improving with each additional human interlocutor. It would be a narrow band artificial intelligence with this specific purpose, especially good with the large amount of correlation with g and verbal ability, e.g., like Hogwarts’s Sorting Hat minus the magic.

Williams: Yes, I agree with the likelihood that AI will be able to match behavior or language to a given specification.  It would be the reverse direction of the prior question.  I think that it would have a lot of leeway for a given level of intelligence, since we already know that you can name a percentile and find a wide range of behaviors at that level.  AI should be able to match the intended IQs of fictional characters that are described as input.  

I have doubts that this sort of thing would retain meaning when the end of the range of the definition of intelligence (per my prior comments) is reached.

Jacobsen: Do newer generations of intelligence researchers feel a tinge of fear about asking particular research questions when seasoned researchers encounter “careers ruined, people losing their jobs, physical threats, physical attacks, vandalism, denied promotions”? I sense a chill among both conservatives and liberals, oddly less amongst centrists, in sociopolitical contexts. Both use cancellation as a tactic. That’s not new. Lots of us have experienced it. I don’t care about it much, personally. The advancement of knowledge is the key part. For a field with key impacts, it raises legitimate, serious concerns about the advancement of research, in terms of the potential for rapid developments that benefit humanity as a whole, especially the floor of societies, who benefit from smart, dedicated people with ethics bent towards general humanitarian efforts. Identification and nurturance efforts matter. You noted this in the last part.

Williams: I see two things happening.  The first is that some researchers are fearful of discussing anything that might lead to a hot topic or even allow someone to claim that they have commented on one.  The fear is what I assume went on when the Roman Catholic church punished Galileo in 1633.  Other scientists could see that there were serious hazards to be faced in the pursuit of truth.

The second thing is that wording becomes so delicate as to be silly.  Blunt comments don’t happen, even when they would express a point more accurately.  Besides having to dance around what is being written, the comments are now followed by lots of extra boilerplate, such as pointing out that any group can have bright people and that IQ tests are not deterministic.  I must admit that I have fallen into this protective kind of language (at least when I write something that could cause blowback).

Jacobsen: What will happen to respected disciplines where international standing matters with individuals selected in such a manner?

Williams: So far, we are in a mode of having some people who are willing to take on dangerous topics and those who will not.  Although there are only a few researchers who are willing to research race and sex differences, they seem to me to be doing good work.  I don’t think their work has actually harmed the reputations of the nonparticipants, though I have seen examples of people feeling as if they were unfairly grouped with the not-woke researchers.

Jacobsen: Truly intelligent kids will use their intelligence in one way or another. What will likely happen to these smart kids without guidance and support?

Williams: A case can be made that not supporting bright students will result in them not reaching the levels of performance that would more likely be reached with support.  As you observed, bright students will pursue their interests, despite barriers from school administrators and politicians.  Douglas Detterman, founder of ISIR and of the journal Intelligence, wrote a good article pointing out that 90% of the variance in educational outcomes is due to the individual students (intelligence).  The remaining variance is split between teachers and schools, with teachers accounting for 1 to 7% of the variance.  This is one of those things that lots of people will want to challenge, but Detterman has the research findings on his side.

I can’t imagine what the consequences will be if the present rate of irrational policies in education continues to increase.  The people who are driving things, such as equal outcomes, apparently have no idea of the magnitude of the bell curve range.  Yet, they are pushing to really have college educations for every child of every ability level.  Economically and practically, this is insane.

Jacobsen: How are you defining woke here?

Williams: “Woke” has become the tag for the left, with all of the policies that they push (socialism and irresponsible spending on things that are waste).  In the things I have been discussing, I use “woke” in reference to policies that relate to education, such as the canceling of gifted programs; the failure to recognize student achievement out of fear that a nonachiever might feel bad about his failure; school administrator embarrassment over the suggestion that a student is brilliant; etc.

Recently, Thomas Jefferson High School for Science and Technology was denied the use of tests for admission.  The student body has typically been about 70% Asian, 20% White, and 2% Black, with the balance consisting mostly of Hispanic students.  The school board ruling that they could not use tests was challenged and went through the state justice system.  The school lost.  Then it was appealed to the Supreme Court but was not accepted, despite the Court’s willingness to rule against Harvard for similar discrimination against Asians.

The links below are largely redundant.  They report the court’s choice.

https://virginiamercury.com/2024/02/20/supreme-court-wont-hear-thomas-jefferson-admissions-case/

https://reason.com/volokh/2024/02/20/supreme-court-refuses-to-hear-case-involving-use-of-race-neutral-means-to-facilitate-anti-asian-discrimination-at-selective-public-high-school/

https://www.edweek.org/policy-politics/supreme-court-declines-case-on-selective-high-school-aiming-to-boost-racial-diversity/2024/02

https://thehill.com/regulation/court-battles/4478329-supreme-court-racial-discrimination-challenge-tj-high-school-admissions/

Now the school must admit on the basis of race, not ability.  They are in a bind.  If they maintain their former standards, they will have to fail most of the quota students.  If they are afraid to fail them (most likely), they will have to either provide an easy option for them or simply award diplomas for attending classes.

Jacobsen: An assumption: censorship of research tends to make people – of all stripes – become creative and then pursue different means by which to explore the original subject matter. Smart, creative people are forced to get more creative and use their intelligence more. With discouragement and a reduced focus on general intelligence and on IQ in formal tests, how are intelligence researchers pursuing paths for measuring intelligence, if at all? I am making a historical extrapolation as if it will happen or has already happened, potentially a bias to be optimistic about researchers and intellectual pursuits. (I’m sorry!)

Williams: At the last ISIR conference, one of my friends wondered out loud if animal studies could be used to show the things that are so obvious among humans, then use the findings as comparisons to human behaviors.  Curiously, we already have a very wide range of intelligence in dogs that is quite similar to the range seen in people.  There are border collies at the top and Afghan hounds at the bottom.

I think the twist that might not be anticipated by the anti-intelligence faction, is AI.  [Mentioned previously.]

Jacobsen: What were magic multipliers? The term “magic” tells a bit of the story.

Williams: It came from this paper: Dickens, W.T. and Flynn, J.R., 2001. Heritability estimates versus large environmental effects: the IQ paradox resolved. Psychological review, 108(2), p.346.  In the paper, Dickens and Flynn described their imagined explanation for how imagined environmental effects could cause large impacts on intelligence.  Their argument was reminiscent of the “butterfly effect” which was used in the discussion of weather.  With no supporting data, the authors invented a process that they claim could convert tiny unobserved environmental effects into large factors that impact intelligence.  After the inane model was offered, there were no publications showing anything that could possibly support the model.  I called their model “magic multipliers” because that describes their invention.  To me, this is much like inventing a story where Noah builds an ark and stocks it with two of every species, so that the flood story can be supported.

Jacobsen: Why did Plomin stop giving updates every 2 years?

Williams: Probably because the SNPs were found.  I don’t recall that he ever spoke to ISIR after the breakthrough that he details in Robert Plomin – Blueprint: How DNA Makes Us Who We Are, Penguin Books Ltd., 2018, ISBN 9780241282076.

ISIR honored Plomin with the Lifetime Achievement Award in 2011.  He spoke to ISIR in 2013 (Cyprus), but I did not attend because of the very remote location.  I recall (sitting a few feet away) that he received the Distinguished Career Interview, but I am not sure of the year.  By 2018, the new age of genetics arrived.  Besides Blueprint (above), there is a related paper that is worthwhile: Plomin R, von Stumm S. The new genetics of intelligence. Nat Rev Genet. 2018 Mar;19(3):148-159. doi: 10.1038/nrg.2017.104. Epub 2018 Jan 8. PMID: 29335645; PMCID: PMC5985927.

Jacobsen: If the Flynn Effect (FE) is decoupled from g, as in not a Jensen Effect (JE), how much is the decoupling – complete, or is it on a sliding scale depending on context?

Williams: My take, as of today, is that the decoupling is close to total, but there are suggested FE causes that should show some g loading.  One example would be a decrease in mean family size.  If this were to happen (it obviously has happened at the high end), it should be largely due to smaller low-IQ families.  That would cause a real gain in intelligence, which would probably be little more than a recovery of the already lower mean due to the negative correlation between IQ and fertility rate.  Besides just hitting the low end of the IQ spectrum, there is also a small birth order effect.  A reduction in family size would mean fewer children born with high birth order numbers.  These children are statistically less intelligent than their older siblings.  I don’t think either of these has been demonstrated to show an FE.

It is a bit frustrating to see the large number of references to the FE accompanied by comments that the population is becoming more intelligent.  The opposite is happening.  People simply do not understand that the FE is a time and location effect that can be positive or negative at any given observation; that it is not always up; and that it is rarely (or never) a Jensen Effect.

Jacobsen: Are societies giving screening of gametes for parents with reproductive issues, single parents with means who select surrogates or sperm donors based on verified characteristics, or individuals who want to know risk factors associated with their reproductive capabilities in genetics alone, making an ethical decision in conscious, evidence-based, reasoned reproduction in a non-totalitarian, democratic fashion? Is this likely to become widespread? It is, in a way, a more precise form of the sexual selection individuals have engaged in for millennia.

Williams: That takes in a lot!  It is my understanding that IVF usage is large in some nations and varies down to zero in many nations.  I am not familiar with the policies of the nations where IVF is most prevalent.  I looked at the web and found that the US has 1.7% of all infants born through Assisted Reproductive Technology, whereas Denmark has an estimated 8 to 10% conceived through ART.  That strikes me as a relatively large fraction.  It seems that IVF or ART might be used more in the future, but by educated people.  It is difficult for me to imagine it as equally attractive for low IQ families.

Jacobsen: Once we get the structure and networks and processes most likely connected to g in the brain, what would this mean for the development of simulations of this in computers, artificial g?

Williams: It is difficult to rule anything out for the future.  The rate of development of computer technology remains high.  The expected diminishing returns are being crushed by new technologies.  We already see optical technology that claims to offer petabytes of storage on an optical disk that is the size of the old ones we have mostly discarded.  [Using that kind of storage may be another matter, but we keep thinking of barriers that fall.]  And we have been seeing research in quantum computing for some time.  It seems to be real and progressing towards ultimate implementation.  With what appears to be unlimited speed and storage, plus AI, getting to the point of using brain structures and processes in computers may be a matter of time.

Some time ago, I read a paper [Jung, R.E. and Haier, R.J., 2007. The Parieto-Frontal Integration Theory (P-FIT) of intelligence: converging neuroimaging evidence. Behavioral and brain sciences, 30(2), pp.135-154.] that discussed what the brain is doing with information that gives us the neurology of g.  The answer, in part, is that the brain carries out an information integration process, that is either g or is strongly related to g.  In 2007, there was limited understanding of networks, as compared to today.  I have not seen a merging of modern network models with the Parieto-Frontal Integration Theory, but I think there are papers that attempt to update the P-FIT model.

Bibliography

None

Footnotes

None

Citations

American Medical Association (AMA 11th Edition): Jacobsen S. Conversation with Bob Williams on Practical and Impractical Intelligence Testing: Retired Nuclear Physicist (7). April 2024; 12(2). http://www.in-sightpublishing.com/williams-7

American Psychological Association (APA 7th Edition): Jacobsen, S. (2024, April 1). Conversation with Bob Williams on Practical and Impractical Intelligence Testing: Retired Nuclear Physicist (7). In-Sight Publishing. 12(2).

Brazilian National Standards (ABNT): JACOBSEN, S. Conversation with Bob Williams on Practical and Impractical Intelligence Testing: Retired Nuclear Physicist (7). In-Sight: Independent Interview-Based Journal, Fort Langley, v. 12, n. 2, 2024.

Chicago/Turabian, Author-Date (17th Edition): Jacobsen, Scott. 2024. “Conversation with Bob Williams on Practical and Impractical Intelligence Testing: Retired Nuclear Physicist (7).” In-Sight: Independent Interview-Based Journal 12, no. 2 (Spring). http://www.in-sightpublishing.com/williams-7.

Chicago/Turabian, Notes & Bibliography (17th Edition): Jacobsen, S. “Conversation with Bob Williams on Practical and Impractical Intelligence Testing: Retired Nuclear Physicist (7).” In-Sight: Independent Interview-Based Journal 12, no. 2 (April 2024). http://www.in-sightpublishing.com/williams-7.

Harvard: Jacobsen, S. (2024) ‘Conversation with Bob Williams on Practical and Impractical Intelligence Testing: Retired Nuclear Physicist (7)’, In-Sight: Independent Interview-Based Journal, 12(2). <http://www.in-sightpublishing.com/williams-7>.

Harvard (Australian): Jacobsen, S 2024, ‘Conversation with Bob Williams on Practical and Impractical Intelligence Testing: Retired Nuclear Physicist (7)’, In-Sight: Independent Interview-Based Journal, vol. 12, no. 2, <http://www.in-sightpublishing.com/williams-7>.

Modern Language Association (MLA, 9th Edition): Jacobsen, Scott. “Conversation with Bob Williams on Practical and Impractical Intelligence Testing: Retired Nuclear Physicist (7).” In-Sight: Independent Interview-Based Journal, vol. 12, no. 2, 2024, http://www.in-sightpublishing.com/williams-7.

Vancouver/ICMJE: Scott J. Conversation with Bob Williams on Practical and Impractical Intelligence Testing: Retired Nuclear Physicist (7) [Internet]. 2024 Apr; 12(2). Available from: http://www.in-sightpublishing.com/williams-7.

License

In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Based on work at www.in-sightpublishing.com. Copyright © 2012-Present by Scott Douglas Jacobsen and In-Sight Publishing. Authorized use/duplication only with explicit and written permission from Scott Douglas Jacobsen. Excerpts, links only with full credit to Scott Douglas Jacobsen and In-Sight Publishing with specific direction to the original. All collaborators co-copyright their material and may disseminate for their purposes.
