

(Recommended) Popular Videos : [Veritasium] Is Most Published Research Wrong?

 

This time, I'm reviewing a popular YouTube video.

These days, even when a video is worth watching, people often skip it or don't watch it at all if it's too long.

When you watch YouTube, do you scroll down and read the comments first?

To save your busy schedule some time, why not check out the fun highlights, summary, and most-relatable comments of a popular YouTube video first, and then decide whether to watch?

(Recommended) Popular Videos : [Veritasium] Is Most Published Research Wrong?

https://www.youtube.com/watch?v=42QuXLucH3Q

 

 

Playtime Comments : [Veritasium] Is Most Published Research Wrong?

bl*******:
At 2:24 he chooses there to be 10% true relationships and 90% false. This seems a bit arbitrary to me, and it does seem to affect the final result.

Wi************:
Kudos for a very clear and insightful analysis! I have a math Ph.D. and taught stats for many years. Here's one of my favorite puzzles I liked to ask my class (I have never found it in a book): There is a Dread Disease (DD) which, on average, 1 person in 100,000 gets. There is a test for DD with a false positive rate of 1%. In other words, on average, 1 out of every 100 people who don't have the disease will still test positive. You get tested and the result is positive for DD. How worried should you be? This problem is most easily solved not with formulas, but with a picture like the one you show e.g. at 9:07. Keep up the good work!
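The commenter's puzzle is a base-rate problem, and the arithmetic is easy to check with Bayes' theorem. A minimal sketch, assuming (hypothetically) that the test is perfectly sensitive, i.e. everyone who has DD tests positive:

```python
# Posterior probability of the Dread Disease given a positive test,
# computed with Bayes' theorem.
prevalence = 1 / 100_000   # P(disease): 1 person in 100,000
fpr = 0.01                 # P(positive | no disease): 1% false positive rate
sensitivity = 1.0          # P(positive | disease): assumed perfect

p_positive = sensitivity * prevalence + fpr * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive

print(f"P(disease | positive test) = {posterior:.4%}")  # about 0.1%
```

The surprising answer: despite the positive result, the chance of actually having the disease is only about 1 in 1,000, because the 1-in-100,000 base rate swamps the test's 1% false positive rate.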

 


 

Top Comments : [Veritasium] Is Most Published Research Wrong?

Ro******:

Anyone who reads articles online about "new research" needs to watch this


Mi******:
Excellent video. I once constructed an "experiment" for a stats class using winning lottery number combinations, gleaned from reported winners of previous contests in our state. The purpose of the "experiment" was to show how easily we could manipulate the outcome, and create false conclusions ("lucky numbers," so to speak), simply by adjusting the sample size. By increasing the sample size, we obviously reached the conclusion that the chance of any number being chosen was equal to that for all other numbers -- i.e. they are random. Despite this, I was amazed to find that some of my students really believed it was possible to predict future winners based on past results, even after we completed the project. The will to believe there must be some sort of hidden relationship between the various numbers occasionally overwhelmed reason -- which suggests something about our willingness to believe objectively foolish propositions ... and conspiracy theories.
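A toy version of this classroom exercise is easy to simulate. The sketch below assumes a hypothetical 45-number lottery and draws numbers uniformly at random: with a small sample, some numbers look "lucky", while a large sample flattens the frequencies toward uniform.

```python
import random
from collections import Counter

random.seed(1)

def draw_frequencies(n_draws, numbers=tuple(range(1, 46))):
    """Count how often each number appears across n_draws uniform picks."""
    return Counter(random.choices(numbers, k=n_draws))

# With few draws some numbers seem "lucky"; with many, rates converge to 1/45.
for n in (50, 500_000):
    freq = draw_frequencies(n)
    rates = [freq.get(k, 0) / n for k in range(1, 46)]
    print(f"n={n:>7}: min rate {min(rates):.4f}, max rate {max(rates):.4f}")
```

The spread between the "luckiest" and "unluckiest" number shrinks as the sample grows, which is exactly the adjustment-of-sample-size effect the commenter describes.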

Sa*******:

This is why statistics should be a mandatory course for anyone studying science at university.
Knowing how to properly interpret data can be just as important as the data itself.


Mr***************:

Publish or perish ... and quality goes down the drain


hz***:

Veritasium is being kind here. One professor I knew admitted that he intentionally withheld small but critical details of a particular sample preparation technique in his paper, to make it virtually impossible to reproduce (in Nature Nanotechnology, no less). The reasoning goes kind of like this:
If people can reproduce it easily, they produce their own samples and perform their experiments. The result: one citation for this prof.
If people can't reproduce it easily, they need to have the sample produced and delivered by the prof. The result: he gets an authorship on any resulting paper.
Publish or perish is real. Papers get you research grants, and research grants get you money and personnel to produce more papers. I think any discussion on this subject should not assume entirely innocent behavior. The incentive structure of the publication system does not reward reproducibility or strong scientific ethics, but in some ways punishes them.
In my opinion we need a complete overhaul of publications as a whole. Right now we are using a 19th-century approach to sharing information, ported to honestly rather poor websites which we call journals. I think platforms like GitHub give us a clue how collaborative development should be done in the modern day. Not that reforming the process is going to be easy.


ps********:

It’s almost impossible to publish negative results. This majorly screws with the top tier of evidence: the meta-analysis. Meta-analyses can only include information from studies that have actually been published. This bias to preferentially publish only the new and positive skews scientific understanding enormously. I’ve been an author on several replication studies that came up negative. Reviewers sometimes went to quite silly lengths to avoid recommending publication. Just last week a paper was rejected because it both 1. didn’t add anything new to the field, and 2. disagreed with previous research in the area. These two things cannot simultaneously be true.


Pe******:
As a researcher, I find those numbers very conservative, even arriving 4 years late to the video.
I also feel there's a missing category of false-positive results: deviation from the main objective. Some "true positive" results shouldn't be considered as such once you analyze their methods, statistics, and final findings in detail, for the simple reason that, mid-study, parts of the objective were changed to accommodate the findings. This is an issue that really bothers me, especially in my research field, where there's such a huge mix of different scientific areas that it's next to impossible to verify anything in detail, because everyone just pulls the results their way.
And as some people mentioned here, some researchers do withhold critical pieces of information for citation boosts. If people can't reproduce something from a study, they can neither prove it wrong from the paper's information alone (as long as it checks out in theory) nor deny its authors authorships and citations on other papers, which effectively boosts their 'worth'. The fact that researchers are evaluated by citation and authorship counts is also one of the leading reasons false positives exist in such large numbers (I don't believe false positives are only ~30% for a damn second, but that's my biased opinion), and why some papers, even though everything checks out in theory, can never truly be peer-reviewed on the practical-results side of things.

Anyone who works in research knows there's a lot of... misbehaving in most published works, regardless of the results. Therefore I have to disagree with the claim that researchers are fixing some of these problems.
We can sift through p-hacked results. We can't, however, sift through p-hacked results when the objective is mismatched with the reported findings (if someone told me that was involuntary, I'd believe them, because I know how easy it is to deviate from it), nor through a paper that withholds critical information. And the worst part is that this is further fueled by higher-degree theses, such as master's or PhDs, where it's mandatory to cite other people's work for yours to be 'accepted' as 'valid'.

You have to approach published works with a very high level of cynicism, and with some time and patience on your hands, if you even dream of finding a published work that remotely fits your needs and actually shows a positive result, in most scientific areas.

MK***:

First-hand experience: a lot of researchers in medicine and physiology simply don't have a good understanding of statistics...


Cu***:

I like your take-away message: the scientific method is not perfect, but it's the best tool we have to reach knowledge!


Co********:

An engineer with a masters in nuclear engineering, a mathematician with PhDs in both theoretical and applied mathematics, and a recent graduate with a bachelors in statistics are all applying for a job at a highly classified ballistics laboratory. Having even been given the opportunity to interview for the job meant that each candidate was amply qualified, so the interviewers ask each the simple question, "what's one third plus two thirds?"

The engineer quickly, and quite smugly calls out, "ONE! How did you people get assigned to interview me!?"

The mathematician's eyes get wide, and he takes a page of paper to prove to the interviewers that the answer is both .999... and one without saying a word.

The statistician carefully looks around the room, locks the door, closes the blinds, cups his hands around his mouth, and whispers as quietly as he can, "what do you want it to be?"


to**************:
“An uneducated man believes what he is told; an educated man questions it.”

La*********:

P-values only have credibility if the experiment is designed well. P-values are just a tool or indicator; you could also show a positive correlation between the decline of pirates and global warming if you wanted to. It's about idiots who read these papers and don't analyse them, and about random, disreputable journals publishing them.
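The "any correlation you want" point is easy to demonstrate by brute force. This hypothetical sketch correlates pairs of pure-noise series; using the critical |r| corresponding to p < 0.05 for n = 20 (about 0.444), roughly 5% of the noise pairs come out "significant" by chance alone.

```python
import random
import statistics

random.seed(0)

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Correlate 1,000 pairs of pure-noise series of length 20.
# |r| > 0.444 corresponds roughly to two-tailed p < 0.05 at n = 20.
n, trials, r_crit = 20, 1000, 0.444
hits = 0
for _ in range(trials):
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [random.gauss(0, 1) for _ in range(n)]
    if abs(pearson_r(xs, ys)) > r_crit:
        hits += 1
print(f"{hits} of {trials} noise pairs look 'significant'")
```

Test enough unrelated variables and "significant" relationships appear for free, which is why a lone p < 0.05 deserves the skepticism the commenter recommends.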


da********:
we should open up a journal for replication studies only

Ch********:
Still waiting on an update. Is there a massive re-verification of study results going on in the social sciences?

Ba*******:
You should update the video. Pentaquarks are indeed real now.

J*:

Meanwhile us mathematicians are laughing at all the accuracy errors you scientists make


Ae*****:

148% of people don't really understand statistics.


Er****:

Why would anyone give this a thumbs down?

Spent most of my life in research, painful yet true....


Pe***********:
Sadly these incorrect published studies cause people to distrust all of science entirely.

Be*************:
Somebody should make a journal that is dedicated to publishing replicated studies.

Mi*******:
I have an hypothesis. I think getting in car accidents decreases your chances of dying from cancer



...but increases your chances of dying in a car accident.

Kr*****:
A p-value less than 5% being "good enough" is something I always wondered about. It seemed arbitrary to me, but I ignored it. After 3 years, you've answered the question that was always in the back of my head.

Fe***********:

Hey Derek, thanks for the video. I consider your channel to be one of the best science channels on YouTube (if not the best), because it touches on exactly a point neglected in the academic world today: intuition. It's not only about technical experiments but also the reasoning behind them and the understanding/intuition involved. This video shows exactly one of the big flaws in the scientific world today, besides the "publication competition", and touches a very sensitive point that I saw at some very good, world-renowned universities: lack of skepticism. Unfortunately, people seem more and more ready to research what is already given and less prone to question "known truths". In that sense, my intention with this message is to raise the question of the whole climate change story. I saw your old video about it and agree with much of it, but I still have the strong feeling that the hysteria around it is much more political and ideological than scientific. This video makes a very interesting point that would be worth connecting to the climate change topic, as some of the "known facts and studies" may also fall into this research problem. I'm not saying the climate is not changing, or that CO2 is not rising, or that one doesn't affect the other; but as a skeptic, I think there are still many fundamental links missing, or at least lacking scientific proof, to be so sure about the future of our world, especially when we still don't entirely understand the climate system. Anyway, congratulations on the video, and I leave you with this suggestion for another video, or maybe some thoughts about it. Cheers.


Ma****:

The xkcd "Jelly Beans" comic deserves a mention. I'm so glad it became popular because it illustrates the whole issue so well, and in just one frame. It should be required reading for the whole world!


Po*************:
P values of 0.05 are a joke.
Look, I'm going to sound biased, and that's because I am.
This is a much bigger problem in fields like Psychology than in fields like Physics. The emphasis on constant publication and on positive results is still a massive problem. Researcher bias is still a massive problem (although still, not as much as in Psych/Sociology). The existence of tenure helps a little since researchers become able to research whatever they want rather than what the system wants.
But we aren't claiming world-changing discoveries with P=.05. Derek brushed right past this like he was afraid of sounding biased but I'll repeat: 5 sigma is a 1 in 3 million chance of getting a false positive purely by chance. Every physicist "knew" the Higgs had been discovered years before we finally announced it and started celebrating. But we still waited for 5 sigma.
I did some research with one of my Psych professors in my freshman year. She was actually quite careful outside of the fact that her sample sizes were pathetic. We went to a convention where we saw several dozen researchers presenting the results of their studies, and it was the most masturbatory display I could have imagined. There were some decent scientists there, no doubt, but the majority of them were making claims too grandiose for their P-values and sample sizes, confusing correlation with causation, and most of all failing to isolate variables. If a freshman is noticing glaring problems in your research method, your research method sucks.
The next year I had a Physics prof. who had a friend of mine and his grad students run an experiment 40,000 times. There is no comparison. We need a lot more rigor in the soft sciences than we have right now. Mostly because science. (But also because they're making us all look bad...)
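The 5-sigma threshold this commenter contrasts with p = 0.05 maps to a p-value via the normal tail. A quick stdlib computation (using the one-sided convention common in particle physics) shows it is roughly 1 in 3.5 million, close to the commenter's 1-in-3-million figure:

```python
import math

def sigma_to_p(z):
    """One-sided tail probability of a standard normal beyond z sigma."""
    return 0.5 * math.erfc(z / math.sqrt(2))

for z in (2, 5):
    p = sigma_to_p(z)
    print(f"{z} sigma -> p = {p:.3g} (about 1 in {1 / p:,.0f})")
```

At 2 sigma (roughly the p < 0.05 convention of the soft sciences) a false positive happens about 1 time in 44 by chance; at 5 sigma, about 1 time in 3.5 million.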

Cl****************:
inb4 Someone misconstrues the message of this video into "Scientists are lying about global warming."

Lu************:
For people freaking out in the comments: we don't need to change the scientific method; we need to change the publication incentives that encourage this behavior.

Va******:

Research shows lots of research is actually wrong
spoopy


Sy****:
I'm taking a science research class, and this is literally what I was thinking about with, like, 90% of my peers' projects.

Al***********:

Just wanted you all to know that the pentaquark, since 2019, is considered a real thing again.


cr********:

@Veritasium Thanks for sharing this very interesting video. I'm an MD, and while we studied all these criteria for statistical analysis, unbiased scientific study design, etc., this particular phenomenon was never mentioned (that I recall) at my medical school. Interestingly, it corresponds to a concept we use in evaluating testing modalities called Positive Predictive Value: the likelihood that someone who tests positive for a disease actually has that disease. Unlike sensitivity and specificity (the true positive and true negative rates), which remain the same for a given test regardless of disease prevalence, the PPV is hugely influenced by the prevalence of the disease in the population. This corresponds very closely with your idea that the validity of a statistically significant result with p<0.05 decreases as the number and ratio of studies investigating untrue hypotheses increases.

I think it's important to note that many studies have much more stringent p-values; I've seen numerous studies and meta-analyses with p<0.001. For many of the most important topics in medicine, we have excellent studies, reproduced numerous times, that guide our practice.
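The PPV analogy can be made concrete with a few lines of arithmetic. This sketch uses made-up test characteristics (90% sensitivity, 95% specificity), plus the video's framing where "prevalence" is the share of tested hypotheses that are true, "sensitivity" is statistical power, and "specificity" is 1 − α:

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value: P(condition | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Same hypothetical test (90% sensitive, 95% specific), very different PPV
# depending on how common the condition is:
for prev in (0.5, 0.1, 0.001):
    print(f"prevalence {prev:>5}: PPV = {ppv(prev, 0.90, 0.95):.1%}")

# Research analog: 10% of tested hypotheses true, 80% power, alpha = 0.05.
print(f"PPV of a p<0.05 positive finding: {ppv(0.10, 0.80, 0.95):.0%}")  # 64%
```

The same formula drives both the medical and the publication version of the problem: the rarer true effects are among the hypotheses being tested, the less a "positive" result means.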


Ya******************************:
Missed assumption:
Did the random generation in the software actually produce equal numbers of left and right screen choices? If it really generated random numbers, the reported results must be compared against the actual sequence the random number generation routine produced.
This would strengthen your conclusions!
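The commenter's sanity check, whether a "random" left/right generator actually produced a roughly balanced split, can be sketched with a normal approximation to the binomial. This is a hypothetical simulation, not the software from the video:

```python
import math
import random

random.seed(42)

# Simulate 10,000 left/right choices from the generator under test.
n = 10_000
lefts = sum(random.random() < 0.5 for _ in range(n))

# z-score of the observed left-count under a fair 50/50 model.
expected = n * 0.5
sd = math.sqrt(n * 0.25)
z = (lefts - expected) / sd
print(f"{lefts} lefts out of {n}; z = {z:+.2f}")
# |z| < 1.96: consistent with a fair 50/50 generator at the 5% level.
```

The same check applies to the experiment itself: compare subjects' hit rates against the sequence the generator actually produced, not against an assumed perfect 50/50 split.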

Yu***********:

The very first day I started googling how to write my first paper, this video appeared. Good job. The more people know, the better the results may be!


12****:

"I don't want to be too cynical about this."

That doesn't sound like the Derek Muller we've all come to love.


Ja*********:
The lack of incentives for replication studies is obviously the biggest problem. The fact that some of those "landmark" studies were only attempted again recently...

Hopefully, as people become more aware of this (it's happening), all those journals will change their mind about replications. They should release a separate issue for them, even.

Ch****************:
I've been a world-class AI researcher for almost three decades now. During this time I have personally witnessed much deliberate scientific fraud, including rigged demos, fake results, and outright lies. Additionally, numerous colleagues have admitted to committing scientific fraud, and I've even been ordered to do so myself. I have always refused. I will not, as a scientist, report results I know or suspect to be misleading. My family and I have been severely punished for this. So I recently returned to mathematics, where true and false still seem to reign. And lo and behold, instead of abusive rejection letters written on non-scientific grounds, I get best-paper nominations. PS: don't believe any of the current hype around AI.

ca*************:

This reminds me of my time in college, trying to find trends in data by any means possible just to reach a conclusion that would earn a good research grade.

I think when your motivation becomes solely about money or grades (or whatever other comparable unit you might think of), you lose sight of the actual purpose behind what you're doing. In my case, out of fear of getting a bad grade, I twisted the research process to show results that would impress my teacher, but which ultimately were false and useless. This video made me realize how many systems (in education, business, science) are actually structured so that their participants waste their time pursuing arbitrary goals rather than the ones that are actually valuable. If a thorough and honest process were rewarded just as well as a flashy result, a lot more true value would be generated by these systems.

This has been on my mind in school recently, so I'm really curious to hear what others think, if anyone wants to reply. Great video!


Ne*************:
One of the reasons I stopped doing research after my master's is the same: I saw that 90% of the academic crowd selects the values that support their already-assumed results and ignores the rest. I totally agree with you, bro, because researchers cheat p-values. If I can do it, others can too, so I don't blindly believe the research references someone gives for their statements. Most academics I've met just want to increase their publication count, because it brings revenue to the university and even boosts their (so-called) credibility.
The main culprits are the prestige of being called a researcher and the pressure to reach some useful result from your years of research. No one wants you to fail, so you can't afford to.

 

 

We gathered the comments on this popular [Veritasium] video and summarized them: highlights, playtime comments, and top comments in order of popularity.

If there's a good video or channel you find too long to watch, leave a link to the YouTube channel or video and I'll cover it on this blog.

 


 

[Veritasium] Channel Posting

[Veritasium] 2017년 개기일식

[Veritasium] 3 Perplexing Physics Problems

[Veritasium] 4가지 혁명적 수수께끼 : 해답편!

[Veritasium] 5 Fun Physics Phenomena

[Veritasium] 6단계 분리 법칙의 과학

[Veritasium] Electromagnetic Levitation Quadcopter

[Veritasium] Engineering with Origami

[Veritasium] Explained: 5 Fun Physics Phenomena

[Veritasium] Facebook Fraud

[Veritasium] Gyroscopic Precession

[Veritasium] How Does a Transistor Work?

[Veritasium] How Trees Bend the Laws of Physics

[Veritasium] Ice Spikes Explained

[Veritasium] Inside the Svalbard Seed Vault

[Veritasium] Is America Actually Metric?

[Veritasium] Misconceptions About the Universe

[Veritasium] Musical Fire Table!

[Veritasium] My Video Went Viral. Here's Why

[Veritasium] Parallel Worlds Probably Exist. Here’s Why

[Veritasium] Should This Lake Exist?

[Veritasium] Spinning Tube Trick

[Veritasium] The Absurdity of Detecting Gravitational Waves

[Veritasium] The Bizarre Behavior of Rotating Bodies, Explained

[Veritasium] The Infinite Pattern That Never Repeats

[Veritasium] The Most Radioactive Places on Earth

[Veritasium] The Truth About Toilet Swirl - Southern Hemisphere

[Veritasium] Turbulent Flow is MORE Awesome Than Laminar Flow

[Veritasium] Veritasium Trailer

[Veritasium] Why Apollo Astronauts Trained at a Nuclear Test Site

[Veritasium] Why Life Seems to Speed Up as We Age

[Veritasium] Why Women Are Stripey

[Veritasium] Will This Go Faster Than Light?

[Veritasium] World's First Car!

[Veritasium] World's Longest Straw

[Veritasium] 슬로우 모션을 통한 레이저 제모의 과학

[Veritasium] 시간 대칭성을 무너뜨리는 입자

[Veritasium] 어떻게 중성자가 모든 것을 바꿔놓은걸까?

[Veritasium] 유리는 과연 액체일까?

[Veritasium] 체르노빌 근처를 걷다

 

 