The color combination here struck me as unusual but interesting--coral, cobalt, and brown leopard print.
From shethoughtshecould.com
Seems like a promising way to try my new short-sleeved cobalt blue cardigan as we transition into the fall season. (A less stubborn person would have substituted their long-sleeved cobalt blue cardigan given the 54 F high temperature!) I had a lot of options but decided to stick somewhat close to the inspiration photo with a coral top and leopard scarf. Not sure why I went with the striped flats, but what the hell. It's Friday. I did end up wearing my black denim jacket to and from work, but my office continued its warm streak so I had the fan on all day.
Coral short-sleeved T (Walmart), $5.00/wear
Cobalt blue short-sleeved t-shirt cardigan (thrifted, Lane Bryant), $2.62/wear
Brown leopard scarf (Kohls), $2.08/wear
Skinny jeans (JCP), $0.66/wear
Pink/tan/black striped flats (Payless), $1.14/wear
Outfit total: $11.50/wear
And to match my outfit, a hungry Tan rabbit. This striking color pattern comes from the at gene, which puts different colors on the top/sides versus the belly (the otter coloration comes from the same gene), and the ww gene, which turns the belly a bright orange/red.
In other news...Tam sent this article about how a group of patients, journalists, and scientists have exposed deep problems with a famous study finding that people with chronic fatigue syndrome will improve or "recover" by doing more exercise--a finding that became a treatment recommendation at many reputable hospitals/facilities. (This despite the fact that post-exertion exhaustion is a hallmark of the disease.)
I was irritated by the university characterizing Freedom of Information requests for access to basic study data as "vexatious" (which basically means it would be too much of a pain in the ass to provide the information--"a disproportionate or unjustifiable level of distress, disruption or irritation"), though I find the terminology amusing. In that request, the person asked for the mean and standard deviation for one measure for each of the four experimental groups at each of the four measurement points during the study--that's 2 statistics x 4 groups x 4 time points = 32 numbers. The data were shown in a figure, but the figure was too small/low-resolution for the reader to extract the numbers. Um, how is this a vexatious request? That's very, very basic stuff that should have been published in a readable form to begin with. The authors must already know those numbers to have made a (poor) graph with them, so no additional calculation is needed (and the data are already aggregated, so there's no issue with providing individual-level data that could be misused).
If you would like to dig into the basic problems with the study's design, this is an excellent source. Some of the issues include:
--Recruiting people who don't necessarily have CFS while excluding those with severe CFS. Apparently there's a whole lot of technical shit around how CFS is identified/defined that they were kind of like "eh, whatever" about. So they were generating recommendations for treating those with a disease that their participants didn't even all have. I got the impression that they ignored the significance of fatigue in response to exertion as a defining characteristic of the disease, which, you know, might be problematic in a study trying to show that exercise helps people with this specific disease (and not just people who are generally fatigued/depressed/etc.).
--Starting the study with a certain set of criteria for success and then lowering them dramatically partway through, to the point that the same score could qualify you for inclusion in the study AND count as "recovery" (see the first toy example after this list). It's unclear how much access to the data the authors had before they changed their success criteria--it's very possible they revised the criteria in response to the incoming data, or they may simply have lost faith at some point and defined success more broadly to improve their odds of getting the result they wanted. Either way, bad. And how they can count people as "recovered" who are still sick enough to qualify for the study boggles the mind. The authors apparently responded that they didn't mean "recovery" the way people generally use the term. Paging Dr. Dumpty...
--Using measures with serious floor-effect issues (which the article characterizes as a ceiling effect, which confuses me a lot; I asked Robert what he'd call this--since he was trained in stats but not experimental design--and he said "censored data," which is legit). For example, a bunch of people start out feeling really bad, so on the initial questionnaire they pick the lowest response category. Then after treatment, half the people feel better and half feel worse. The ones who feel better move up to a higher response category, but the ones who are doing worse can't give a lower response than they gave before--they're already at the floor. So the scores show an improvement on average even though in reality there's no net change (the improvers and the decliners cancel each other out); the second toy example after this list walks through exactly this scenario.
--They also swapped out some of the statistical tests their original study plan called for. At least some of the replacement tests make as much sense as the originals, but it's odd that they didn't stick with their analysis plan, and it raises the question of whether the original tests came back non-significant or what (the third toy example after this list shows why after-the-fact test-shopping is a red flag).
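About the moving recovery goalposts: here's a toy Python sketch of how the same score can both qualify someone for a trial and count as "recovery." The 0-100 scale and the two cutoffs are numbers I made up for illustration--the real study used its own physical-function questionnaire and thresholds--but the arithmetic problem is the same: once the recovery cutoff drops below the entry cutoff, the ranges overlap.

    # Toy numbers (mine, not the study's): functioning scored 0-100, higher = better.
    ENTRY_MAX = 65     # sick enough to enroll: score of 65 or below
    RECOVERY_MIN = 60  # revised "recovery" criterion: score of 60 or above

    for score in (58, 62, 65):
        enrollable = score <= ENTRY_MAX
        recovered = score >= RECOVERY_MIN
        print(f"score {score}: can enroll = {enrollable}, counts as recovered = {recovered}")

Anyone scoring in the 60-65 overlap is simultaneously sick enough to get into the study and well enough to be declared recovered--without their score changing at all.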
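And the floor effect: this little simulation (mine, with made-up numbers) shows how a bounded questionnaire can report average improvement when there is no net change. A hundred hypothetical patients all start at the bottom category of a 1-5 scale; half then truly improve by one category and half truly worsen by one, but the scale can't record anything below 1.

    # Toy floor-effect simulation on a 1-5 questionnaire item (made-up data).
    n = 100
    FLOOR, CEILING = 1, 5

    baseline = [FLOOR] * n                           # everyone starts at the floor
    true_change = [+1] * (n // 2) + [-1] * (n // 2)  # half improve, half worsen

    # Observed follow-up scores are clamped to the scale, so the people who
    # got worse can't report anything below the floor.
    followup = [max(FLOOR, min(CEILING, b + c)) for b, c in zip(baseline, true_change)]

    mean_true = sum(true_change) / n
    mean_observed = sum(f - b for f, b in zip(followup, baseline)) / n
    print(f"mean true change: {mean_true:+.2f}")         # +0.00 -- no net change
    print(f"mean observed change: {mean_observed:+.2f}")  # +0.50 -- looks like improvement

The apparent half-point improvement is entirely an artifact of the scale's floor: the deterioration is invisible.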
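On switching tests after the fact: this is my own toy demo (not a reanalysis of the study) of why deviating from a pre-specified analysis is a red flag. If you generate two groups from the same distribution--so the null is true and nothing real is going on--and let yourself pick whichever of several common tests gives the best p-value, your false-positive rate climbs past the nominal 5%.

    # Toy simulation of "test-shopping" under the null (requires numpy and scipy).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    trials, n = 2000, 30
    hits = 0
    for _ in range(trials):
        a = rng.normal(size=n)
        b = rng.normal(size=n)  # same distribution: any "effect" is pure noise
        pvals = [
            stats.ttest_ind(a, b).pvalue,
            stats.mannwhitneyu(a, b, alternative="two-sided").pvalue,
            stats.ks_2samp(a, b).pvalue,
        ]
        hits += min(pvals) < 0.05  # report whichever test looks best

    print(f"false-positive rate with test-shopping: {hits / trials:.1%}")  # > 5%

The three tests are correlated, so the inflation is modest here, but the direction is the point: flexibility after seeing the data buys you significance you didn't earn.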
So yeah, the recommendation that people with a disease featuring post-exertion exhaustion should start exercising to recover? Bogus.
I can't help but think about how many people in the psychology community have pointed to these kinds of "gold standard" medical studies as a model for how psych research should operate: publicly laying out in advance exactly what the interventions will be, what measures will be used, what the success criteria are, what statistical tests will be run, etc., so there's no temptation to fudge things when the data come in. But this study shows that the model is no guarantee the researchers won't fudge things anyway. I'd argue that in some ways, having the appearance of a gold-standard study while being shoddy underneath is even worse, because people are more inclined to believe the results.
4 comments:
I really like the tan rabbit. Beautiful coloring.
I agree, a gorgeous bun.
I feel bad for people with diseases like chronic fatigue syndrome that a lot of people don't really believe in, and that seem somewhat easy to fake. The study made me angry because that idea--that it's all in your head and you can get better by starting with gentle exercise and working your way up--is exactly the commonsense response a person might have. So it's very detrimental for that idea to seem to be scientifically confirmed. I'm sure a lot of people saw those results, had what they already thought confirmed, and will never see the arguments against the study that came out later.
That's a good point, Tam--it is very harmful to people with the disease who are like, Fuck NO That Doesn't WORK! to have everyone around them thinking, Science says they just need to exercise and they will recover.