
Editorial note:
We recorded this episode on March 4. This is an important detail for contextualizing the timeline of events discussed in this episode, as well as our understanding of the matter at the time of recording.
Since we recorded this episode, the paper discussed has been officially retracted. We will provide additional updates on this story as they become available.
Episode summary:
Join Eric Trexler and Eric Helms as they dissect the chaotic rollout of a controversial study on LDL cholesterol and lean mass hyper responders, explore issues in science communication, and debate the integrity of research in the age of social media.
If you're in the market for some new lifting gear or apparel, be sure to use code "MRR10" at elitefts.com for a 10% discount
Chapters
00:00 Intro
09:18 Social media dust-up
17:27 Revisiting the "Lean Mass Hyper Responder Study"
23:25 New controversy surrounding the study
33:14 Investigations regarding data integrity
41:43 Historical Context: NUSI and Its Implications
50:55 Steelmanning the argument
55:22 Potential issues with the "citizen science" movement
01:01:50 The death of expertise and the future of science communication
What's up, everybody? Welcome back to Iron Culture, presented by MASS. It is me, Eric
Trexler, joined as always by the Dr. Eric Helms. How are we doing this morning, or tomorrow
morning in New Zealand?
Hey, first, thank you for recognizing the time zone difference and accepting that even
if it is not the most dominant preferred and financially respected time zone in the world
that New Zealand does have a time zone. And it's an especially important thing because
this morning, my wife, now officially Doctor Lyon, successfully defended her PhD in geology
at my enemy university, the Auckland University, or University of Auckland, I should say.
I bet when you guys have football games, it is just, I mean, the butting of heads in that
football rivalry. You've talked up collegiate sport in New Zealand enough that I know how
intense it gets.
I only recently found out that there actually is such a thing as AUT sport, and there is
a very little-known university league that's basically the equivalent of, like, the chess
club, you know? They reached out to us a couple of years ago, to our research institute,
and they're like, hey, do you guys have any postgraduate students who might be doing strength
and conditioning related stuff and can help the AUT sport teams? And I was like, we have AUT
sport teams? But yeah, it is an entirely different world. People don't understand in either place,
like, people from the US are like, what do you mean the university teams aren't massive? And,
like, who do you support? And I get questions from Americans all the
time, like, oh, so what teams is AUT good at? And I'm like, that's not a
thing, bro. And then outside of the US, everyone's like, what do you mean 90% of the university
is funded by the athletics department, and that it basically costs the GDP of a small nation
to, you know, to do this? Because, yeah, when your
football team alone is a $100 million enterprise annually.
Yes. Yeah, that's a different kind of thing. So it's not like university towns are not a thing
in other countries or places, but the economies of scale are a little different, typically.
And then they're even more confused in the sport and exercise world. Like, oh, that's really
cool, then you must get the opportunity to work with these D1 athletes. And we're like, oh,
no, no, no, we don't get to touch them. We do extra credit for the Anatomy 101 class students,
you know, to come across. And then you also find out that's probably why formal academic
sports science took off after it occurred in Europe and the UK, which
probably had arguably the first sports science program, if you look into the history
of it; not exercise science. Anyway, I'm doing great because I'm so proud of my wife.
She crushed it. And now we have one real doctorate in the house, because sport and exercise
is, you know, the equivalent of a solid associate's degree in a real science topic. No, I'm
so proud of her. It's amazing. I rarely post things about my personal life, but this, you
know, I'm overflowing with love, happiness, joy, and respect. So I had
to go on the gram. So I'm doing well. And I think that's the biggest news
in my life right now. What's going on? You know, well, I'm conflicted because I have
to hold two things at once. Number one, I'm very happy for your wife. Very, very pleased
that she's made it to this huge, huge accomplishment in her life. On the other hand, one of the
things I like to do is any of my close friends or associates, I try to think of what are
the things that I have that they don't have? Things I can hold over them to reinforce
my belief of superiority. And one of the things that had been working for me as it pertains
to you is I had a wife with a PhD, and you didn't. Right. And now that whole armor that
I built around my ego, that has shattered. And now I'm very concerned about how I'm going
to navigate our relationship where I no longer have this kind of leg up on you in my
very regressive perspective. Trex, I think you're forgetting that literally last year,
I tried to take a shot on goal and failed. I went up for an attempt to get my pro card
in classic bodybuilding once it opened up in the WNBF, and I fell short. I am only
a one-category pro in natural bodybuilding. You are still the Deion Sanders. Now,
I have to say that the person who beat me for that pro card went on to then win the
World Championship, which is why I interviewed them, to uplift my stock. Yeah, the better
they do. But I'm not sure if any of the people I beat for my pro card necessarily
won any WNBF World titles. And I haven't checked, and I certainly won't. I don't need to be
looking into that kind of stuff.
And nonetheless, amateur, pro, there's a hard line. And as we know in natural bodybuilding,
who is a pro is independent of the organization or the circumstances under which they won it;
a pro is better than all amateurs. That is 100%. One of the benefits of having such a clearly
defined objective sport with criteria, leagues, funding. In fact, they're working it into
the NCAA structure as we speak at the moment. They're just worried about it drawing too
much of a crowd and taking away from football. They got to uplift these smaller programs,
you know. So yeah, I hear you. But no, I relate; my wife also foolishly
chose an actual science for her PhD. Her PhD is in psychology. And I happen to believe,
though I can't prove this because I'm a little bit intellectually outgunned, that she
uses it to manipulate me emotionally. So at least you don't have to navigate that
in your household. Whenever I interact with her, I have no idea what's going on. Am I
being manipulated? Is this some kind of psychological testing or conditioning that I'm being
subjected to without any kind of consent? I have no idea. So I live my life in a constant
state of paranoia. Unfortunately, Trex. Well, my wife, prior to this... she's a
late-stage-career academic in this field. On paper, she's in her 40s; she looks
like she's 28. But her career before rocks was animals. She was
actually a trainer at the Auckland Zoo, and did a lot of things with the primates as a primate
zookeeper. And she was a dog trainer before that. So she actually explicitly learned
how to train and manipulate the behavior of primates through positive reinforcement. So I'm
fine with it. And considering we're both intellectually outgunned, her ability
to effectively train a, you know, pygmy marmoset is probably just as effective on me, if not more so.
Yeah. So she looks at you and she basically sees a bonobo who just needs a little bit of
guidance, right? Just a little bit of nurturing some of your behaviors and getting them to where
they need to be. So a very, very healthy situation. But no, seriously though, huge congrats to your
wife, super exciting. And I hope you have a wonderful celebration planned, because that is your job.
100%. All right, Helms, before we get into it, we need to do all of our promotional activities,
right? So if you like the show, do make sure that you are supporting it in any number of ways.
Make sure you like, rate, subscribe, review, tell a friend, share a link, post it all over social
media. Post a story when you're listening and say, wow, this is a wonderful thing I'm listening
to give a five star rating. That's really the only kind of rating that we're currently in the
market for. Those are the only ones we're accepting at the moment. Of course, if you want to support
our friends, use the code MRR10; that stands for MASS Research Review 10. Use that at elitefts.com.
Next time you're looking for some lifting gear or lifting apparel, that'll get you a 10% discount
off of your order. And of course, you're thinking, wait, MRR10 is the code for Iron Culture? Well, duh.
MASS Research Review is what makes this whole thing go. It's what's funding the whole operation.
So if you like Iron Culture, you'll like MASS Research Review. And if you really want to support
us, the number one way to do it is head over to massresearchreview.com. Check it out. Take a look at
some of the free articles. We've got some great stuff. And if you really love it, become a member of
the MASS community. We publish a new issue the first of the month, every single month. Never been
late since 2017. And we bring you the best, freshest, hardest-hitting, most useful evidence related to
exercise, nutrition, you name it. If it's fitness or health adjacent, we cover it. And I think
you're going to love it. Helms, what did I forget? Nothing. In fact, I just want to shout out the most
recent free article that is on massresearchreview.com. And that is the cover story for our recently dropped
issue, which Dr. Lauren Colenso-Semple wrote, titled "The Peptide Problem." And I want to address
that there might have been a bit of a social media kerfuffle in which I took
many an L. In fact, some of the comments were: Helms taking a rare L here. Helms
is clearly a shill trying to mislead his audience. And on and on and on. And I spent
most of the morning we posted that essentially just having attacks on my credibility for hitting
accept on a collaboration. I love that. Yeah. Not to get too much inside baseball
here, but you neither wrote the article nor made the post. And then, yeah, the whole comment section
was like Helms has jumped the shark. His credibility is tanked. It's all over. When will he just go
away? You know what? I am the consummate, you know, influencer who just drops clickbait. You
know, in defense of the people who attacked me, which is a ridiculous thing that I don't
need to be doing, I can understand where they came from. Like, we had talked about it
on Front Page Fitness. We had read the article; you know, it was written two weeks prior to this
post. And our headspace was all like, everyone knows what peptides we're talking about. It's the ones
that have not gone through FDA approval, which you can only get through compounding pharmacies,
which are for research purposes only, which have no human data, except for, like, a case report of
people saying, yeah, I did that a year ago, I feel great. Yeah. And even, like... yeah, go ahead.
Like during the peer review process, because we review each other's work before it goes to publication
really, really thoroughly. And I even left a comment in the article saying like, in this part
right here, you should probably state the specific peptides we're talking about here, because obviously
insulin is a peptide, it's a miracle peptide, right? Like, you know,
semaglutide, all these weight loss drugs, the GLP-1 agonists. So even within our
conversations, like you said, we were so far past the, like, hemming and hawing of how exactly we were
going to frame these very delicately in the article. But yeah, go on. Yeah. Like, I think we were
operating from this place of, everyone knows what peptides we're talking about. And then some of
the first comments were like, man, what an idiot. Don't you know that GLP-1s are peptides? No
human data? Clearly blatantly false clickbait. And I was like, God damn it. We didn't mean insulin,
we didn't mean semaglutide. What are you talking about? Yeah. No, everyone, listen. Yeah.
Everyone woke up saying, wow, I guess Helms is shaming type one diabetics for using insulin.
What the hell? Yeah. Honestly, it was a good reminder. And there was even one comment like, wow,
Helms is always talking about being careful with this language, and he drops this one. And I was like,
okay, you know what, pot calling the kettle black, I'm good with it. Like, this is a useful learning
experience. So I spent the morning responding to these comments saying, yes, to be clear,
we didn't mean that. And then we changed the caption, which made me realize that no one reads
captions because the comments kept coming. And then they were claiming that we were just trying
to do this to get people to just, you know, chill, like, oh, you dropped a little hint,
didn't tell us enough, made it rage bait. And now we have to sign up for your research review.
Last time I checked, people don't rage bait into paying you a $30 per month subscription.
So just to really clear the air, what did we decide to do? We released the article for
free, so you can read the whole thing. That still did not alleviate it, Trex. I don't know if
you realize this. My prediction was, okay, if they're only reading one slide of a carousel,
and not the caption, they're definitely going to read a 4,000-word article
if they go to an external link. Unfortunately, that very logical premise was not shown to be true.
So I started responding to comments just by saying, hey, thanks for the feedback.
Have you looked into the full article that we released? I think it'll clarify which specific
ones we're talking about (in parentheses: which we also mentioned in the caption). And unbeknownst to
me, unpredictably, and to everyone's surprise, people continued to not read the caption, nor the
full article, and simply just attacked me for being a shill based on just the first image of the
carousel. So yeah. So Helms, could I, if I worked really hard, could I win you over to the thesis?
If I stated, if I laid it out very academically and very thoroughly, could I win you over to
acknowledging the idea that frequent social media posters may, in the aggregate, be high in
disagreeableness and relatively low in conscientiousness? You know, it does mostly comport with my
experiences. So I'm amenable to that. I would just need to see the data that you, you know, already
presented. So, the reason I bring it up: unfortunately, I'm a coward,
but I did have half a mind, and it's also hard with algorithmic, you know, stuff. I wasn't
sure how much visibility I would get, and if it would be funny, or if it would just be a total flop,
I had half a mind to actually post in that thread and say, by the way, everybody, like,
totally unrelated, I don't even know why I'm bringing it up, but also this month, we have an article
about why sometimes on social media, totally reasonable people end up getting into these very
disagreeable and contentious discussions that really could be as simple as, hey, what do you mean by
peptide? Oh, glad you asked. Obviously, not the ones that have gone through FDA approval,
this new wave of peptides that are being obtained in wellness clinics and online pharmacies,
for which we have minimal data regarding safety and efficacy. And then the person would say,
oh, thank you for clarifying. Good day. Yep, that's the way social media works.
People who use the principle of charity assume the thing that makes sense, not the thing that
makes you out to be an asshole, or a moron, or disingenuous, or a shill. Yeah. So I had to really,
like... I did. So, one last thing. I got several people who made a very, very similar comment,
and they started with the premise, which I like, because they're trying to canvas the potential
options. What could be happening here? Two options. Either one, you're morons and you don't know
what a peptide drug is, or two, you're blatantly lying to your audience. I'm like, sweet.
Is it possible there's a third option? I guess not. So which one do I choose? Or is it both? You know,
two options, neither one is good. You're an idiot, or you're a jerk, or maybe you're a jerk who's
an idiot. Yeah. I would say next time you're in that conundrum, choose that you're a smart jerk,
because I think at least you could frame a redemption arc down the line versus like, if you're
just fundamentally stupid, that's going to be really tough for your credibility. There's been a few
cancellations in fitness lately, and I've been amazed at the gall. I'm not going
to get into names and who and all that stuff, but there's a lot of people who nowadays, like,
the internet moves so fast and is so treacherously aggressive that they'll like literally just,
like, lay low for 72 hours, and then come back and be like, you know, guys, I've done a lot of
thinking. And I've decided like the things I've been doing for the last like 11 years straight
publicly, I don't want to do that anymore. I'm just going to be a straight shooter and just like,
you can trust me now. So anyway, are we cool picking up where we left off? And they expect the
audience to be like, yeah, I mean, it's been 72 hours. I mean, honestly, what a commendable way
of owning up to your mistakes. I'm yeah, I applaud you, sir, for the this drastic change you've made.
Yeah, I saw one person who changed their facial hair, waited 72 hours, and then came back,
and they're like, all right, like we're good, right? I've undergone a lot of deep work mostly
with an electric shaver. Yeah. All right. So anyway, Helms, are we ready to dive into the real
juicy, substantive content for today? Yeah, as I understand it, you have been watching this space
for almost a year now, because I think it was back in May of last year, where we addressed the,
um, honestly, borderline comical and entertaining train wreck of retraction, internal pre-publication
retraction versus preprint, then public re-analysis, science communication snafu,
just a traffic jam of nonsense, of what was dubbed the lean mass hyper responder study.
And for those who don't remember, and this is me recalling, because I actually haven't been
paying a tremendous amount of attention to this, because it's a little more in the public health
domain versus the shame the public for not being as jacked as you are domain where I operate.
So this was looking at basically active, healthy people on keto who were, quote, unquote,
lean mass hyper responders, an egregiously misrepresentative name, which basically means: hey, I'm healthy,
I'm fit, and I have really good blood markers, except for a really high LDL, and that's probably
not a problem. So let's look at the plaque accumulation. Oh shit, it looks like it is a problem,
but it's probably not a problem, because in my Facebook group, no one has reported that they died.
Is that a good summary of where we were at and we left off last time?
So yeah, where we left off last time: if you're a MASS member, you can jog your
memory by going back to an article I wrote when the study first came
out, Volume 9, Issue 6. So coming up on a year ago at this point, June of last year. So I wrote
an article that addressed it, and then of course in episode 328 of Iron Culture, I talked
through some of the chaos with how this study
rolled out, right? And so, yeah, there wasn't a retraction element, at least not one that
comes to memory, but basically the details of the paper started leaking on, like, podcasts
and Twitter posts, and then an actual copy of the
paper leaked that was already formatted for the journal. And then authors were getting asked
questions about the paper, and they're like, I've never heard that value in my life, and I'm
going to make sure it gets corrected before this gets published, or, I haven't seen the proofs.
And it's like, but they're floating around the internet.
It was just really chaotic. And I don't mean like, listen, chaotic stuff can happen to people that
are very, very buttoned up. And so like, I'm not saying like that it's necessarily their fault,
but the, I had never seen a more disorganized rollout of just communication of research findings.
Right. There's drafts that were floating around to people saying that can't be the real draft,
because I haven't seen the real draft. And it's like, well, it's formatted for the journal. I don't
think someone's like, you know, going around mimicking that. But it was just really, really chaotic.
And yeah, there was a lot of tumult, a lot of argument over what the study actually found. And so
in a nutshell, basically, they kind of came out with the paper and said, hey, these findings
generally suggest that, you know, there are these lean mass hyperresponders. The fact that you have
high cholesterol doesn't seem to be massively predictive of plaque accumulation, which
theoretically would mean it's not particularly predictive of the downstream consequences of that,
you know, like major cardiovascular disease and cardiac events. And this is specifically
LDL cholesterol, right? Not just total. Oh, yeah. Yeah. Specifically LDL cholesterol that,
I mean, we're talking about many folks in the paper that are over 200, right? Yeah. So we're
talking about double the limit of what is considered high, and, yeah, an LDL higher than most
people's total cholesterol. Yes. Yes. Yeah. So yeah, I don't remember the exact
inclusion criteria, but basically to participate in the study, you had to have remarkably high LDL
cholesterol. And some of them it was like sky high. So the general, the way the results were kind of
framed was like, yeah, it looks like the term that they used in the paper is plaque begets plaque.
And they're like, your initial plaque at the beginning of the study seemed to be predictive of
how much you would accumulate after one year, but you know, it looks like, you know, it looks like
generally speaking, these really high LDL values in these special lean mass hyperresponders don't
really seem to be of tremendous concern in terms of the progression of plaque accumulation and
atherosclerosis, a beautiful term. But in any case, you know, there was multiple folks who had
you know, written commentaries and even submitted them to the journal and said like, hey,
we're looking at the results here. First of all, the primary kind of stated primary outcome in the
pre-registered research report: they pre-registered their protocol on ClinicalTrials.gov, and
that's great; pre-registration is awesome. However, their primary outcome that was stated in
that pre-registration wasn't really clearly reported in the paper. There was a lot more emphasis
on secondary and tertiary outcomes. And when you actually look at the percentage increase from
baseline, especially occurring over a single year in these individuals, a lot of people who are
really focused on cardiology as it relates to the diet, they were looking at it and they're like,
this is like a really high-risk group. If you look at the rate of progression that's being reported
over one year. So basically, that was kind of where we left off was that the authors were like,
hey, this looks like a win for us. And then most other people were interpreting it in a fundamentally
different way. And the rollout was really chaotic. But Helms, I am pleased to report that there is a
new revelation. Now, I want to get out ahead of this. And some people are going to say: Eric, you
don't seem to have all the facts committed to memory. And this is me speaking as Eric, in the third
person: Eric Trexler, you don't seem to have all the facts committed to memory. You seem to be flying
by the seat of your pants. You seem to be generally jumbled, discombobulated, and disorganized. But I want to
give myself the benefit of the doubt. The stuff's new. The stuff is like kind of emerging on Twitter
and YouTube over the last 48 hours. So I'll be honest, there are details that I like would like to
have that I don't currently have yet. But here is kind of the general premise of what's going on
here. So shortly after, the researchers received a lot of pushback, and people said, bro, look at
your data. The data in question were processed using something called Cleerly; it's, like,
an AI kind of software, algorithmic thing that looks at the scans and, you know, calculates
the plaque accumulation and calcification and things of that nature. So they used this technology
called Cleerly, and those Cleerly scans were tracking this plaque progression. And basically, as
people were starting to say, hey, these Cleerly scans don't really seem to support your
conclusions very much, the authors said, well, we kind of agree, and now we're a little bit skeptical
of the Cleerly scans. And so what has happened over the last 48 hours is there's been
this kind of flurry of activity where some of the original authors of the paper, particularly the
ones who are more social media engaged have big audiences, things of that nature. They've come out
and said, okay, so here's the deal. We analyzed these scans using two other methods, and one of
them, they're saying, was... it's called QAngio, and, I actually did go find it, it is not
listed in their pre-registration on ClinicalTrials.gov, but it is listed in their protocol paper,
which was published in a journal in June 2024. So I would have loved to see that in the
pre-registration. Ideally, you don't want to pre-register until you have all the details buttoned up.
But they're like, hey, this was the primary analysis approach anyway. Which then leads me to wonder:
why did you publish a paper a year ago and say, after we publish the findings,
we will get around to doing the primary method of analysis for these scans, right? So something
happened where they said, we're going to do the Cleerly first, or we're going to do the Cleerly
instead. It's not, pun intended, clear to me how that decision was made, that we're
going to go with Cleerly for these scans instead of the QAngio that was listed in this protocol
paper. And so, just so people know, it's a very normal thing, and a great, very transparent
thing, to pre-register what you're going to do, either as some kind of preprint or on
ClinicalTrials.gov, basically saying: here's our document of what we're going to do, and how, and why.
And then you can also publish a protocol paper where you go into a lot of detail about the
methods, so that when you're writing up the findings, you can just kind of cite it and say,
well, if you want to really dig into the details, it's over here. And so these are both
kind of forms of pre-registration, in a way. But yeah, there are some, I wouldn't say necessarily
contradictions, but some areas where the protocol paper has a little more detail than the
ClinicalTrials.gov pre-registration. But in any case, it is true that they can say, well, hey, we're just
doing the primary analysis. But again, theoretically, the primary analysis should be the main analysis
that's reported in that first paper. But they're saying: we did that, we did the Cleerly analysis
in the first paper, we lost faith in it, and now we've done two other versions of the analysis,
QAngio and one other, and they don't agree with the Cleerly analysis. So what we did was, of
the 100 participants in the study who did the Cleerly analysis, we were able to
get eight of them to independently kind of redo their Cleerly analysis. So a subset of eight out
of 100. And they're saying that basically something happened where they thought the original
Cleerly analysis was going to be blinded, and they have come upon the information that it was not
blinded. I'm not clear on how that information became available. I'm not really sure
of the ramifications of what that means, because theoretically, if it's a repeatable AI
algorithm, you would think that the blinding, while good, would probably not be that critical. But,
to be candid, again, I don't really understand how this Cleerly scan works, right? So I'm not going
to come out here and point fingers and make all sorts of bold claims. I don't have the details
or the information yet. This is, like, journalism work more than science work. I don't understand how
that information became available that somehow these scans were unblinded. I don't understand
why there are only eight reanalyses of the Cleerly scans rather than, I don't know, most if not all.
My understanding is that they asked Cleerly to re-analyze all of them, and that didn't happen.
I don't know why. So a lot of this is hearsay. A lot of it is, I think, Helms, the key message here:
very messy. Yeah. Why so messy? That's one big question. But in any case, they're saying
the original analysis of 100 scans, or 100 people with two scans each, pre and post, that original
analysis does not seem to be supported by the QAngio approach, the other approach that they used,
and then the reanalysis of eight of the 100 scans. So it sounds like they are posturing. I believe
that what I've gathered is that there's an expression of concern at the journal that published
the original paper. I believe the authors themselves wrote in and said: we are concerned with
the validity of these scans, and we are going to, I don't know, Helms, write a correction? Retract
and try to republish? Try to publish a fundamentally different paper entirely and say, forget about that
other one? It's not really clear to me. There's a preprint that I just saw that was uploaded,
I think, in January of 2026. So relatively recently. And they're looking at data from this study
using the QAngio results, and it makes absolutely no mention of the Cleerly scans, at least on my quick
read of it before recording. So, Helms, the short version of this is: I do not know what's happening
with this paper, and that in and of itself is a very damning thing. It doesn't sound like they,
internally, know what's happening with the paper. I mean, normally when a retraction happens,
there's two ways it goes, right? One: you have concerns about another paper, and you ask for the
journal's investigation, and you might make a recommendation for a retraction depending upon
how you do that, but ultimately it's kind of in the journal's hands, right? And as we talked about
before, unfortunately, nobody wants that to happen. It's not in the best interest of the university,
nor the journal, and yeah, it's going to take an infinitely higher amount of effort and way
more time to retract the paper. And even when it does happen, the retraction is not well read
or cited compared to the original paper; much of the damage has been done. So there's that whole
problem in and of itself. The other, probably less common, more responsible way of doing a retraction,
and I've seen this in many, many forms. There was actually an accommodating resistance
meta-analysis that came out years ago where the authors themselves said: hey, we made a mistake here,
we're pulling this, we would like to correct it and retract it. This one, it sounds like, is an
expression of concern on themselves and a request to re-analyze. So it's: hey, journal,
we would like you to investigate us, because we don't know what we're doing.
Paraphrase, of course, in a sense, we're wearing our journalist hats here. But yeah, that seems
to kind of be where it's at is basically the authors themselves have said, we are concerned about
the veracity of the data we published. And basically the expression of concern, as I bring it up here,
basically just says the editors of the journal, JACC, wish to inform readers that concerns have
been raised regarding the integrity of the data and or analyses presented in this paper. These
concerns are currently under confidential review in accordance with the journal's editorial
policies and the committee on publication ethics guidelines. While this process is ongoing,
the editors believe it is important to alert readers to the existence of these concerns.
So it's, I'm just, again, Helms, I'm just being totally candid, not being snarky,
no ulterior motive here. I'm extremely confused about what the plan is here because there's this
expression of concern, which like, I don't know what the investigation would need to, if the
researchers say, you know, we don't think our data are accurate, in theory you'd think that would
be quick, but it's actually not always the case. But I'm really confused by this preprint that I
stumbled upon today, which is dated January 2026, which does not cite the paper with the Cleerly
results. It basically, unless I'm missing something, seems to operate in a universe where that
original analysis never happened. Well, I'll tell you what it sounds like to me, and you tell me
what you think, Trex. And I think we should also just make sure that, in the confusion around this paper,
people who maybe didn't listen to a prior episode or read that MASS article aren't totally
confused as to what the hell we're talking about. So I'm going to do two things. If you give me
permission, I'm just going to give a little bit of context, and then I'll tell you what I think is going
on here on the part of the authors. Okay, first, context for everyone is just, okay, what are we
talking about here? The original study premise and hypothesis was, hey, people following a long
term ketogenic diet who are active and have otherwise really good looking blood work and metabolic
panels, except for having high LDL. Maybe that's not a problem, because they have low triglycerides,
great blood markers, all the other risk factors. We don't think LDL is an issue in those cases. Maybe,
rather than a causal mechanism of atherosclerosis, it is a correlate, or it's a moderator or
a mediator, right? So they're probably fine. Here's our paper. And everyone went, actually, we think
the actual scans of people's hearts, or rather their arteries, using ultrasound indicate this is a
problem. And they went, well, now I don't trust the scans. Lots of weird stuff going on online,
which is kind of where you are now entering, dear listener, if you don't know what we're talking
about. And Cleerly, the method of analyzing the ultrasound data at large for those hundred participants...
I think it is computed tomography rather than an ultrasound. Oh, excuse me. Sorry, CT scans,
my bad, of the arteries. Now that's in question. And, end scene, you've now caught up.
So now, what I think is going on here: if they've got a preprint and they're questioning their own
data, it sounds like they want to get the old article pulled, right? Because it looks bad. And I'm
just going to call a spade a spade. You know, you've been very diplomatic. I don't have any faith
in these folks actually wanting the scientific process or truth to come out. I think they have an
extreme bias; whether they come by it honestly or not, it doesn't matter. But that's just what I
think. So I think they're just trying to make the old paper go away. And they're happy to throw what
they did under the bus and take the L, but they're going to blame it on the analysis. And they're
going to justify it with this new analysis. And now they're trying to get a new preprint that's
out now and they know, hey, you know what, if you just do a preprint, it sort of doesn't matter if
it doesn't go through peer review these days, you know, one downside of the open science movement.
In this case, maybe they're going to publish it. Maybe they won't. And I mean, obviously it's
better if you get a respected journal or even a semi-respected journal. And that becomes the, you
know, quick look over here and result. Now here's where I think they maybe and I'd love to get
your thoughts here. You can control the narrative to a degree as a large scale influencer on Twitter
and on social media. And even if it doesn't go your way, like you said, you get the electric
shaper out, you don't, you remove Twitter off your phone for three days, you're back, you're good,
start over, slate clean, attention spans, you're fine, new audience, maybe even better. Now you get
the new crowd who is, who likes it when people get canceled or as anti-canceling. We see it happen
all the time. But when you actually put in a retraction request to a journal and they start the
process of an independent review, they're not just going to review the one thing. They're going to
bring in independent reviewers who look at the whole process. And I mean, some of these people
are actual academics, some are not so much, some are attached to universities. But there's a lot of
things that could be critiqued and reported. And the journal could come out and say some things that
maybe don't look so good on the record. And I am kind of like someone watching a train wreck,
maybe eagerly, or rather, that sounded terrible, watching a car race. When I don't think about
the actual humans driving it, you kind of watch and you're like, oh, it's just going to be an
exciting crash. I wonder if they haven't set themselves up for potentially a bit of an unintended
crash on the proverbial racetrack. I listen, I don't know. And like Helm's, I know the way on
presenting this sounds a little like overly wishy-washy, not making a huge definitive stand. And the
reason is, Helms, I don't know, man. I want to see what the company that did these scans says;
Cleerly, as the company, owns this algorithmic technology. I want to see what they say,
whether they say no, the story you've heard is not true, or the story you heard is true. And we, you know,
so that's going to be really big is hearing what they say to the general premise that not only do
the original authors appear to lack confidence in the data, the analysis of the scans,
but they're saying they also lack confidence in the process of how the data were handled,
a matter of blinded versus unblinded analysis. And so it'll be really interesting because now like,
like you said, like, that's kind of a big thing to accuse somebody of. And so we may see that the
company comes out and either says, you're right, sorry, we messed up, or they come out and say,
that's quite a claim, and our lawyers are very interested in the fact that you said that, and we'd like to
dig more into that with rounds of discovery and, you know, get plenty of billable hours
racked up here. I have no idea how this goes, Helms. And so basically what I wanted to do
in this episode is basically walk down memory lane and remember how much of a mess it was
to get this study published in the first place because I'm very comfortable saying that definitively.
The initial rollout of the data was a mess, which is crazy, because the way this normally works
is you write it up, you get it to the journal, they publish it, and then you begin talking about
it: look at the thing we did, right? So the rollout was an absolute mess. It was very chaotic. And
believe it or not, the current kind of flurry of activity in the last 48 hours is more of a mess.
And I have way more questions. What's the journal going to do? And how is Cleerly, the company that
owns this, um, you know, analysis technology, how are they going to defend or not defend themselves
against these, um, you know, people questioning their technology?
Yeah, there are so many, uh, loose ends here. And, and I'll be the first one to say very candidly,
this is not my primary area of research. I'm sure some people who listen, people who are
into this topic, are into this topic. They're going to go through this podcast. They're going to hear it.
Or they may not listen to this podcast. If it crosses their desk, they're going to say,
these idiot bodybuilders sound like they don't know much about imaging of the heart. And,
you know what, I'm not a heart imaging guy, right? So when we're talking CT versus ultrasound,
you've got me. When we're talking about QAngio versus Cleerly, you've got me. But what I do know, Helms,
what I do know is the scientific process, how you're supposed to do this stuff in a buttoned
up way. And so I actually don't think it's, um, unless you're, unless you're alleging fraud,
which is a big F word Helms, unless you're alleging fraud, um, I don't think it is vindication for
a researcher to say, hey, I used a black box technology I don't understand, and I don't think
it actually went through proper validation, and I just have no idea what the data were that I
published. That's not an excuse. It's saying I trusted this new tool and had no idea if it was
valid or not. So I don't know how they're going to handle this. And I'm not saying that Cleerly hasn't been
validated. I don't know; I'm not a cardiac imaging expert. But you can't just say as a researcher,
you know, all that stuff I published, I guess, now that I've decided after publication
to look into it, I don't think we should have done that. I think we should have done
the other two analyses, um, that we may or may not have identified as the primary analysis approach.
But then, it's not like Cleerly was the primary analysis approach, because it's being done
second or third in line. It's just really chaotic, Helms. And so like I said, I'm not a heart
imaging expert, but I do know a thing or two about getting a paper across the finish line doing
the analysis in a thorough way and communicating the results of research. And this is honestly just
a dumpster fire. Well said. And you mentioned taking a trip down memory lane. And I would encourage
people, on the same topic of how science is intended to be done, and at what point you critique it,
and at what point you look back and go, well, why didn't you do that originally?
Let's just go a little further down memory lane to remember a thing called NuSI, the Nutrition Science Initiative.
Remember that one? Mm-hmm. Yes. You know, this was
an independent group that formed and said, yeah, you know what? We really think the,
you know, carbohydrate-insulin model is legit. We think the cause of the obesity epidemic is
the rise in carbohydrates. And we think if we really look under the hood and do proper controlled
experiments, we're going to see the ketogenic diet coming out on top as the solution to all
of our woes. And you know what? Just to show how committed we are to this, we're actually going to
get NIH researcher, now former NIH researcher, Kevin Hall, to do metabolic ward research. And we're going to compare these
suckers in the most controlled way. And they spent more than a year contracting this and
vetting the funding. And you know how funding goes, because both of us have been through this process.
It's an intense back and forth to make sure that everyone involved thinks that the protocol
that you're doing will answer the research question that you put forth. And then you go, okay,
we're all happy, independent researcher, above board, let's get her done. And then what did they do
when the data came out and they didn't like it? They tried to throw Kevin Hall under the bus.
And they said, well, why didn't you do X, Y, and Z? And the answer is, em-effers, because you told
me to do this. And I did it. And you didn't like the data. And now I am a biased jerk, you know,
which couldn't be further from the truth. And knowing Kevin Hall pretty well, the guy is not even,
like, he's not part of some big-RD, nutrition-pyramid, pro-carbohydrate camp. He is as unbiased
a researcher as they come. He's a data man, you know, and he will go wherever it leads. And I
always tremendously respected him. And you know, the reason why he's not at NIH anymore is because he
didn't feel he could do unbiased research there anymore. So if you thought, like, oh, yeah, well, the
government didn't allow NuSI to show the truth: this is a guy who parted ways with
the government because he didn't feel like the government allowed him to research the truth.
So I think that, historically, you know, history tends to
paint a clearer picture with distance. Kevin Hall looks good. And I don't know what's going on with
NuSI these days, but this seems to be repeating the past, where you have a vested interest, in my
opinion, elements of it at least, a biased group who is doing
quote-unquote citizen science, and even in this case contracted it out. And then they just say, well,
you know what, we're going to throw that under the bus, and it's sort of throwing
themselves under the bus, in my opinion, which is sort of what's happening here too.
Yeah. And just to clarify a couple of things. First of all, when you say that you think there's
bias involved, it's really important to understand that, like, bias does not always mean conspiracy,
right? It doesn't even always mean financial bias.
Totally. Yeah. It can be being long on a hypothesis. Yeah. I mean, it can be tough when you are,
you know, when you have been a vegan for 41 years and you study plant based diets, right?
Because, you know, maybe the reason you adopted that diet is something that is
very meaningful to you personally, right? And you know, sometimes I'll read papers
from specialized institutes that are fully committed to a particular branch of science.
And they're studying that branch of science. And it's like, well, what are you going to do if your
results said this doesn't work, right? And not to throw it under the bus because it's not unique.
But like, for example, I've seen studies on, you know, Ayurvedic herbal remedies published out of
the so-and-so college of Ayurvedic medicine. And it's like, it's going to be tough
to be the professor of Ayurvedic medicine whose every third paper is saying, oh, by the way,
that, you know, traditionally very important remedy we use doesn't work, right? Like, that's tough.
It's not to say no one would do it, but it is tough. So I just want to be clear, like,
bias, we all take them into our research. The question is, you know, I think the scary ones are
the ones that are number one, just like absolutely fully entrenched or even financially reinforced
or number two, the ones we don't know. Like those are the ones that are really scary as a researcher.
But in any case, that's important. The second thing I wanted to mention is I wasn't sure if you
were doing this as an allusion to my MASS article, but in my MASS article, I actually talked about
NuSI. And I said, like, hey, there is a parallel here, where at one point somebody
slapped a number on NuSI and said, this is a $40 million enterprise. And basically they funded a
couple of these little studies, the hall one being the one that got the most interest. And yeah,
the kind of chaotic aftermath and science communication breakdown basically led to a point
where I was looking at some journalism that described it in 2018 as nearly broke. And this
is something that really, I think it was founded in 2012. So within six years, it was described as
nearly broke from being a $40 million enterprise. And I think by 2021, it was formally dissolved.
So it was this big kind of like, hey, all these idiot, you know, carbohydrate
shills have been wrong forever. We're going to get the money together. We're going to do it the
right way. We're going to make sure that they commit to a protocol that's going to be fair
so that we can actually see once and for all that this hypothesis works. And they approved the
protocol to the best of my knowledge. And then when the data came out differently, they said,
what was wrong with that protocol we approved? And of course, they didn't frame the question that
way, but that was the question. And so yeah, it does have those parallels of, like,
hey, you know, we know that lean mass hyper responders are going to be just fine despite this sky-high
LDL. You have an LDL of 300? Who cares? It's fine. And we're going to call our shot. We're going to
pre-register our methods. And we're going to report the data, you know, clearly and, you know,
do everything above board. And we're going to finally show you guys that this is how it works.
And to their credit, like pre-registering the protocol, good move, you know, publishing the protocol
paper, good move. And even, frankly, the initial analysis. Like, I don't really have a, I mean,
I have a belief of what the evidence supports, like I would have been very surprised
if it's a good idea to have your LDL in the 300s for a couple of decades. So I had an expectation,
because there is science on this topic. But I don't have a big, like, you know, I don't have
a horse in the race here that I'm really pulling for in a meaningful way. And so I think even with
the initial rollout, while it was chaotic and hectic, it kind of came out and said, okay, given that
this is, first of all, observational, not randomly sampled, basically recruited: hey, are you a
lean mass hyper responder? Come on out. It's a very selectively recruited group, only one year,
which, I mean, hey, that's a long time to do a study in this context with imaging and stuff. But
we know atherosclerosis is a long term, you know, it's a slow progression, right? So there
are all these limitations. And even with all that, you know, the paper comes out and says,
it kind of looks like a high risk group, even though they fit the criteria for lean mass hyper
responder up to that point. I'm like, you know what? That's pretty much how it's supposed to go,
right? You call your shot. Even though there were some deviations from the registered
protocol, it seemed like they were not particularly flagrant necessarily, although I would
love to see the primary outcome highlighted more clearly. But in any case, where it breaks down is
now the data don't support the hypothesis, and then what? And now it's turning into, well,
what if the protocol should have been different all along? And that's the key parallel with the
NuSI stuff, which again was, ironically enough, from the same general, not the same individuals,
but the same general camp of low carb, high fat, ketogenic is the way to go. So we are seeing
a pattern in that realm. Yeah. And I think I want to make two real brief points, a bit of a
steel man case, throwing a bone and maybe a silver lining, before I think we talk about
the real issue here around public science communicators, if I want to use a nicer term,
as one of them myself. Or maybe people who are helping produce science who aren't the
scientists leading the charge on the communication. And maybe some things we really need to think about
in the future as we move into this era where that is the means by which science communication
occurs. It's happening on YouTube, it's happening in press releases, it's happening on Twitter. It's not
like it used to be, where we had a different set of problems, where media would get it wrong or
be controversial. Now there are people actively involved in the process who weren't actually actively
involved in the research, you know, or at least they have a primary motive of using the research
to build clout, promotion, et cetera, or something like that. But first, these two points, silver
lining slash throwing a bone here. This kind of LDL denialism has led to some useful things.
I could be wrong here, but before all this stuff happened, before these kind of repeated attempts
to downplay LDL, because high LDL is a pretty repeatable outcome of following a high fat diet,
even if it's low carb, even if you're healthy, I don't remember there being much focus on
like lipoprotein fractions, like ApoB and stuff like that. And it did lead to
a now well-accepted consensus in the cardiovascular health community of saying, no, it's actually true:
if you have high LDL, but you have high HDL, total cholesterol under control,
everything else looks good, triglycerides are low, and you take a look at the specific fractions
which are more likely to lead to plaque accumulation, like ApoB, and that's in the lower, normal range,
your risk profile is actually substantially lower. So LDL risk is maybe better represented
by looking at, say, your ApoB, or the ApoB to ApoA1 ratio, which is kind of another version of an HDL-style
ratio, and that might be the best thing second to actually scanning your heart.
So that's been a positive thing that's come out of this. And the other thing I'd say is there's
even a world where you could do this observational analysis with this pre-selected sample where you
just have a slightly different research question, which would be, hey, LDL does seem to predict
pretty consistently in various contexts problems of the heart. I wonder if these other
modifiable risk factors such as being healthy and exercising and having a relatively robust
exercise protocol, and a diet that could potentially be healthy broadly, as it includes whole foods.
I mean, if it's pure carnivore, maybe not, but you could look at these people and they're just
following like a healthy ketogenic diet. And by that, I mean, it includes fruits, vegetables,
micronutrients, all that good stuff. You can definitely make a pretty high fat Mediterranean diet,
for example, right? It doesn't have to be all saturated fat, although I'm not sure that you would
see really high LDL in that case. But let's just say we took a group of people who they had all the
other modifiable risk factors except for high LDL. Is that sufficient to offset plaque accumulation?
This isn't an RCT, it's observational. And we want to see because we have this data on independent
risk factors from epidemiology and short term data, this could be useful. And we go, hey, look at
that. They do have high LDL, but because of all these other factors, activity levels, we don't seem to
see a progression of plaque. Or, unfortunately, which is probably what the data seem to reflect better:
there is a high accumulation of plaque, therefore LDL really, really is something you must
control, and is not a mere mediator. We now know, not in a pure causal way, because we can't do that with
this study design, but we have the presence of these other modifiable risk factors,
and they were insufficient, at least in this group of 100 dudes (or were they men only,
or is it mixed, for example? Honestly, it doesn't matter from my point of view), but in this group of
people, insufficient to actually lower their progression of plaque development. I think that would be a
really valid thing to look at, right? So, yeah, I just want to put that out there.
Coming in here with some clarifications: 59% of the sample was male, average age was 55,
and the mean value for LDL cholesterol was 254. Sheesh, yeah. That's really high, dude.
Yeah, the mean. Now keep this in mind: the mean value was 254 with a standard deviation of 85,
but also keep in mind, for the inclusion criteria, you couldn't sign up unless your LDL
was at least 190. So, like, we know there were people that were pushing, you know, 300-plus
easily. Yeah, like way up there, if you're talking two standard deviations, dude. Okay, wow. Yeah.
All right. So, I just wanted to put that out there, trying to do our due diligence of:
there are ways that this is still useful data, even though it seems to be a dumpster fire, as you said.
But let's not just talk about the data. Let's talk about us, Trex. You and I both purport to be
researchers as well as science communicators. Not everyone does those in equal parts. So
I'm really not trying to throw any shade, but I would say that there are more and more people
who are maybe on the promotional, social media, science-as-my-brand tip first, and then science
second. And maybe they're not attached to a university; maybe at most they make a lot of
money and decide to make their own lab, or fund something like NuSI or whatever, and they then
interact with the scientific apparatus. But we see stuff like this happen more often than
is ideal. And I think the problem is the perception from the public, because this can poison the
well, in my opinion, and we've seen that happen in a lot of places, especially around COVID,
that there is a willingness to, you know, discard science and to view it in a light that could
potentially harm society. And I think we see this, you could say stuff about RFK, the dietary
guidelines, the current anti-science kind of sub-movement within exercise science, and I will say,
I kind of get it when stuff like this is happening, right? Yeah.
Yeah, I mean, I think this kind of saga and others like it, I think highlight kind of two big things
that are, I don't know if you want to call them issues, or if you want to call them misconceptions
that lead to problems, but here's the kind of two things that come to mind as we shift focus
away from the subject matter and into the kind of communication meta-narrative, right?
So number one is, I think there's a misunderstanding about,
when we look at the science, what actually is the selling point? Or in other words,
what are the barriers between, you know, any old person and doing good, high quality science,
right? So I think one of the misunderstandings is, and this is going
to sound so conceited and ivory tower, blah, blah, blah, whatever, but I think the misconception is that
if only a person had access to the measurement devices, or if only a person had access to the
money for funding a study, or if only a person had access to the data from a study, then that
automatically is just science, like you go from non-science to the absolute gold standard of science.
And I really implore people to recognize, as this stuff, um, continues to happen,
that science is a philosophy, science is a set of skills, and most importantly, science is
a toolkit that is honed through training, you know? And I don't want to be in the position
of acting like a gatekeeper, but I do want to say: if we start to see consistently that
the rollout, the execution, the second-guessing of protocols, if this starts to be a hallmark
of some of this more, like, citizen-initiated science, it's not good for science in general.
If we decide that the way this is going to go is sloppy rollouts, second-guessing, um, saying,
well, we analyzed it three ways and our next paper is just going to focus on two of them, and we're
going to pretend the first one didn't happen, like, there are elements of kind of basic first
principles of science, um, key foundational elements of how you do it thoroughly and appropriately,
how you communicate it, what the processes are, I think it's worth noting that the execution
really matters here, and it's not quite as simple as if every person just had, you know, an ultrasound,
then they could be doing any, any level of hypertrophy research they wish to do, right? There are,
and again, it's not gatekeeping, it's just like, okay, you've lifted weights a few times,
you read lots of articles, are you now prepared to be the best trainer in the world? Well, no,
you're going to have to like figure out how it works and acquire the skills and get some experience,
and hopefully get that experience under the watchful eye of someone who's pretty good at it,
right? And so I think that's one thing that we have to really keep in mind as this kind of thing
develops more, right? And so that's the one thing, the second thing that I think is an issue is,
as much as it sucks, science communication has generally worked through a slow, rigid process,
and it's probably slower than it needs to be, it's probably more rigid than it needs to be, but
here's the nice thing about that process of: do the work, analyze it thoroughly, write it up,
everybody interrogates it internally, submit it to a journal, get it peer reviewed, and now it's
published. The nice thing about that is that process creates a document that in theory gives
you the information that you need to interpret that data. One of my biggest gripes with all this
stuff going on right now is that now science communication in this context is happening on Twitter and
YouTube. Twitter and YouTube move fast. The content of a tweet, the content of a YouTube video,
is decided by one person: this is what's on my mind, and this is how I'm going to frame it. I don't
have all the data or all the information that I need to actually really understand what's happening
with this whole saga, this whole back and forth, and I think that's a microcosm of the problem.
We are kind of saying here, these are like the general tools that seem to be associated with
science, go do it, communicate it as you see fit, do things out of order. We're totally fine if
the results turn out a certain way looking back at it and say, well, what if we measured it the
wrong way? That's normal science, but like Helms, when a researcher in theory, what you're supposed
to do is measure a thousand times cut once, make sure that you feel really good about the measurements
you're taking, and then after you interpret your research, it's published, maybe new stuff comes
out. It's okay to second guess, maybe there's a better way to measure the stuff we used to measure,
but it's a slow, it's a deliberate process, and that decision of having faith in your measurements
is not tied to whether or not they support the hypothesis you were testing. Perfectly said on both
counts. Correct. I'm going to try to expand on this, because you made some excellent points and I want to
help the listener understand. The first thing, about gatekeeping: the efforts by the scientific
community to remain independent, be transparent, and ultimately democratize the scientific process
have been tremendous. It's called the open science movement, and the efforts that academics have
had to try to maintain independence in both private and public institutions from both private
and public influence, whether that is an administration that wants to do things a certain way,
which is shown through the public university or the funding mechanisms, or the money that might
be involved in a private institution and their potential financial incentives, you can find
researchers consistently trying to remain independent so they can do good unbiased work.
Science does not work perfectly; there are many issues with it, we've talked about them on Iron Culture, and
the most vocal proponents of fixing it, the people who have issues with it, are scientists themselves.
They have issues with the journals, they have issues with universities, they have issues with
funding mechanisms, and they are constantly critiquing one another, right? So that is a separate thing:
democratizing the scientific process and ensuring independence is different from saying that it does not
require expertise. These are happening in parallel; they're two separate
problems, the death of expertise, as well as trying to break down some of the institutional barriers
to access and understanding of science by the public, especially in the fields of health,
right, or anything related to it. And those actually have been at odds, because you'll have people
who are focused on one deny the other, which is kind of what we saw happen around
COVID. And it largely comes down to mistrust, and the mistrust is the second point I want to make,
and there is some blame, and this is structural blame. Scientists are not trained to be science
communicators, and what we have right now is that a lot of academics have been around a long time while
we're moving into a very new era of science communication. Even the media training that does
exist in a university, the kind that's fairly accessible to the academics there, is out of touch,
and I hate to say that. But as someone who, like you, self-trained to be a science communicator,
and maybe we're good at it because we were personal trainers, then coaches, and our field specifically
is about helping people understand these topics, and I was a trainer before I was an academic,
right, I have these skill sets, and I also understand the social media landscape to a degree,
I can see how far behind we are, but there is now a skill set of effectively science communicating
on the existing modern social media information age platforms, and there's the skill of doing good
research, and they're separate, and that means that we need to have a hand-in-hand collaboration
process. That's not really happening; there are attempts at it, we're getting there, it's going to grow,
but these are examples of it going really poorly, right? The third and final point I have is,
like, what do we do about that, and what should it look like, and what is the result of these kind of
old mechanisms and scientists not being trained in this, and kind of having a narrow view of what
their purview is? If you look at forward-thinking universities, they have changed the
processes by which they promote and hire people. There are many universities that are now acknowledging
that impact is more important than traditional metrics like the number of publications you get
or the quote-unquote impact factor, and there are things like altmetrics, which literally means
alternative metrics of looking at how a publication is doing, which incorporates its social media engagement
and posting. So now an academic is being charged with a skill set that they have
not traditionally had: not just doing good science and publishing it in journals that
are kind of separate from the worldview of the public, but also ensuring that it gets viewed,
through the alternative metrics on social media, and that stakeholders and the public, in the case of
health science and all the related fields, including sport, exercise, and nutrition, are taking it
up and using it. So now we have a mismatch between the skill set of scientists and the actual metrics by
which they are hired and rewarded, and there's going to be a lag time here. And if you look at the
way that funding occurs in university structures, someone approaches a researcher,
a university group, what have you, and they go, hey, do this research for us, and I've become far more
engaged with this than I ever wanted to, now as the co-director of the institute I'm in.
There's a pretty good effort by the universities involved, they get their legal team
involved, to make sure that the science itself is rock solid. The methods are done well,
that you're stating openly any types of conflicts of interest, you're managing any problems,
and there's independence in the research process, and maybe the actual publication. And then the
scientists go like this, Trex: they wash their hands of it and walk away, because science
communication has not traditionally been their job, and these contracts don't protect against
the dissemination or the communication of the work, just the actual publication itself. And that is what
needs to change moving forward. These contracts need to actually start stipulating that, okay,
if we have a company, or someone who is a social media personality, or has a YouTube channel,
or is an independent foundation with its own media arm and ability to disseminate and communicate
scientific findings, the contract states that those posts, interviews, podcasts, even tweets
must involve the original researchers for, say, X amount of time; the researchers need to approve
the content, the interviews need to be done with them, and the content creation needs to actively engage
the people who understand the nuances. And of course that'll be a negotiated
process, because you don't want to kill your engagement; you don't want someone with no
social media training droning on about, you know, a complex linear mixed model for 30 minutes,
when you want to tell someone, let's say everything goes well, what they need to do with
their nutrition or training. So I get it, but we're seeing the costs of when they're not involved.
We're seeing the misrepresentation, the confusion, and sometimes the loss of faith in science.
So how did I do, Trex? I feel like I had a pretty good grasp on maybe where we need to go
moving forward, because I can look at this from a high level and say, I know how this could be
done right. Yeah, and I think it's important, like, you know, I,
again, begrudgingly was like, okay, I don't want to engage in like cliche, ivory tower,
gatekeeping, but also like we might want to keep an eye on the gate to some extent of like,
how we're going to, you know, what we're going to frame as being scientific inquiry and treat as
kind of similar levels of investigation. But I like the fact that you kind of acknowledge,
and I totally agree that what we need to have is a symbiotic relationship of people who are
very skilled communicators, which most scientists are not, and people who are very skilled scientists,
which most communicators are not, professional communicators, right? You won't find a
lot of people who, just as a side thing, go, oh, I happen to be an extremely skilled researcher
despite no training in it or, you know, having dedicated any of my time to it, right?
So we need to have the symbiotic relationship where researchers are doing research very thoroughly
and transparently, communicators are playing a huge role in disseminating that information
while working in collaboration with folks who can say, you're a great communicator.
I know very thoroughly what happened in the study. Let's team up and communicate those findings
in a rigorous and clear way. And Helms, just to add, I feel like I expressed many
feelings in this podcast and many thoughts without a clear narrative. I do want to just kind of leave
one last fragment of one of the reasons why I'm so jumbled and just like, this is not going well,
how this is kind of panning out. So I'm looking at, you know, one of the people who was an author
on this paper we've been talking about, right, with the Cleerly scans and all that, who came out with
a blog post that, I can't read the whole thing, it's paywalled on Substack.
But they write about this particular project. It's somehow lumped in with these Cleerly scans
and these revelations in the last 48 hours. But the quote is, while I've long been aware of the
shortcomings of the scientific process and the meritocratic failures within academic medicine,
my experience with this project was, without exaggeration, the most disillusioning of my career.
He says, what unfolded here defied, disregarded, and wholly disrespected
those aspirational ideals of research in ways I find difficult to fully articulate.
I recognize that to many, particularly within academic circles, the facts I have recently
and will continue to describe as the story unfolds may strain credibility or even sound
conspiratorial. To be candid, if I were hearing them as an outside observer, I'm not sure I would
accept them at face value either. There is also a thumbnail on his Twitter where it says "the
people's era of science," and there's, like, an army of masked doctors that look very nefarious.
It's kind of being framed as, somehow, we did this study, and yet there is what seems to
be framed as a conspiracy of, like, the medical science apparatus that we are fighting against. And
there's also part of this tweet, with the scary army of establishment medical doctors, that says,
and special thanks for those behind the scenes who are nothing short of warriors for intellectual
integrity and scientific truth, especially when it's inconvenient or even personally dangerous
then tagging some of the co-authors on the paper. So the narrative's there, right?
It's us versus the establishment. It sounds like a conspiracy theory, but it's just a legitimate
conspiracy: we are the freedom fighters trying to make everything right in science, up against
tall odds, but we're just brave enough and crazy enough to be the ones to pull it off.
Shit, Helms. All that because you have decided that you don't believe your data anymore.
Like, give me a break, dude. My eyes have rolled back so far in my head they may never return.
They're going to come back around. That's the only hope. Yeah. And then you're going to see the
world upside down. So yeah. But you understand why I'm like, no, no, dude, this is
chaos. And it's exactly the point I was making: the death of expertise
and the democratization of information and access to science need to be seen as separate.
And this is exactly the kind of post that confuses them and actually adds to the problem.
And I'm going to try to demonstrate this by analogy, and I'll leave the listener,
dear listener, with this as the final piece from Helms.
Let's say we established that the healthcare system in a given made-up country was maybe not
functional, because getting surgery would not be covered by most insurance, or even if it were,
it would still leave you bankrupt. And we decided, well, you know what the solution is?
We just need to make it so that anyone can do a quick online registration
to become a doctor, and then they can do surgeries themselves. So don't worry if you can't afford
surgery. Your wife, your partner, hell, your dog if it's a certain breed in some states,
can go online, sign up, and now they're a legitimate surgeon, and they can just rent some space
for far cheaper at your local clinic and do heart surgery on your friends.
That would be an example of confusing the two. And that's kind of where we're getting to here.
We can't trust the established experts; in fact, their very expertise is what makes them
untrustworthy. I've heard that narrative related to PhDs. So therefore, we need to drain
the swamp, clean them out, and you need to become your own health expert, or in this case, your
own scientific expert; you can trust the data, you don't need to have training from people who
have done research. These things do not need to be conflated. We can absolutely do everything
in our power to make it so that health information, research, and knowledge are disseminated to the
public as best as possible. We remove jargon as much as possible from scientific publications,
and we have a healthy marriage between science communicators and researchers. And then we also
upskill researchers in science communication, and science communicators
in the research process, so that that handoff looks less like a bungled high five
between two very white guys and actually has a really clean, predefined
baton pass, so you can win that race and everybody benefits. But yeah, I would just really urge
people, please, we don't need to kill the idea of expertise to democratize information or health.
Yeah. And just to solidify your point, the title of that
Substack article I was reading from is "Is Medicine Captured? A Deep Dive Into How
Pharma Funding Shapes Medical Narratives, How the Status Quo Perpetuates Itself, and Why Science
Isn't the Meritocracy We Imagine." So yes, you are correct. So Helms, I think in a nutshell, I just
want to clarify: listen, we talked about a lot of weird stuff that I don't have answers to in
this podcast, which is generally not a good way to run a podcast, saying, here are some things
I don't really understand or know about. But we're just going to have to hold on to our seats
and see how this stuff shakes out. So in future episodes of Iron Culture, as more details emerge
from this still-developing story, we'll do brief updates, nothing major. But the short
version is, we are in a new era of science and science communication, where what used to
be a wall between the two is really not a wall anymore. There's a lot of reaching across,
not through collaboration, but through saying, hey, maybe I'll put that hat on instead.
And I think the collaborative approach is probably where we utilize each other's strengths versus
leading to a little bit of chaos. And like you said, ultimately, potentially some things that
undermine trust in the science itself, which is completely the opposite idea of good science
communication, right? We should be always skeptical of any individual piece of science. But
what we don't want to do is rattle faith in the concept of science. And so that's why execution
matters here. Would you say, perhaps, that it is about being skeptical, not cynical? Listen,
I learned my lesson. I'm not putting cynical and skeptical in the same sentence ever again after
you slapped my wrist for accidentally putting them on a spectrum where they don't belong. But yes,
you are correct. We should be skeptical and not cynical. And yeah, with the kind of "don't trust
the science, we need to do it ourselves" approach, while I appreciate the spirit and I appreciate the
embrace of rolling up our sleeves and getting involved in science, I think that we need to make
sure we're approaching that in a nuanced and grounded way. Don't do heart surgery on your
partner just because they can't afford to have a real surgeon do it, folks. That's a good point;
that's very, very good advice. So Helms, I think that probably does it for this episode. Anything
to say to the good people before we sign off? No, just, you know, make sure that you recognize
our ulterior motives: we just need to make money. Head over to MASS Research Reviews and sign up,
head over to EliteFTS and use the code MRR10, and make sure to give us a five-star rating,
thumbs up, likes, shares, and do things that help me perpetuate my financial motives, and no other
motives that I could ever possibly have. Yeah, I just want to say that, to that question from the
blog post, is medicine captured? Yes, and it's been captured by Eric Helms. That's right.
Perfect. All right, folks. Thanks so much for listening. As always, we appreciate you,
and we will be back in seven days with yet another episode.
