American military veterans have a suicide problem.
Some have theorized the reason is deployment-related trauma.
Leveraging the random assignment of new soldiers to units with different deployment cycles, Bruhn et al. found that theory to be wrong.
Deployment did not increase suicides.
Even looking only at violent deployments (ones with peer casualties), there were no effects on noncombat mortality either.
What explains veteran suicide rates?
The answer seems to be that the premise is wrong: veterans do not have elevated suicide risk per se.
This may seem surprising, but it shouldn't be. Veterans' suicide rates exceed the general population's because veterans are disproportionately young White men, and that group has a suicide problem.
There are good and bad parts to this observation.
On the one hand, it means that there is no selection of suicidal people into the military.
On the other, the demographic composition makes this a problem that agencies like the VA will probably not be able to fix on their own, because it's not a soldier problem; it's a young White male problem.
I don't know how this can be fixed, but presumably tackling opioid use would help.
Soliman (2022) found that DEA crackdowns on overprescribing pharmacies resulted in fewer local suicide deaths.
Soliman also found that sanctioning specific doctors affected opioid-related mortality more broadly without impacting suicide rates. Effects were generally larger for males than for females, and larger for people aged 30-49 than for those aged 15-29 or 85+. No race data were reported.
Kennedy-Hendricks et al. found that Florida's pill mill crackdown reduced opioid overdose mortality considerably.
Their supplement contained details on the characteristics of the people who died from opioid overdoses, but I wasn't able to access it.
Why do people who live in cities tend to have liberal immigration attitudes?
Using data from Swiss movers, it appears the answer is selection!
Movers already held their attitudes up to seven years before moving, and those attitudes didn't change for up to six years afterward.
And this replicates!
Using data on Germans who moved to neighborhoods in the top quartile of foreign population, the same finding cropped up again: the people who live in those places held their attitudes long before they moved.
Later work showed this held for other attitudes as well, and for people who moved both to and from cities.
People who move do not become more or less likely to want to join the EU or support the radical right, nor do their immigration attitudes shift.
TL;DR: nice, careful paper in desperate need of a sibling control to address residual confounding.
It doesn't seem they'd have a large enough sample size or enough variation in place of birth within sibling pairs unless they could get Scandinavian register or American Census data.
Other sorting work looks at things like "Do people's attitudes toward immigrants systematically differ after they move to a city?"
Those designs can deal with residual confounding. But when we're analyzing the effects of birthplaces, we have no information about kids prior to birth, so the same trick isn't available.
Most of the residual confounding in sorting designs is still familial, because the most common type of relevant confounding is genetic rather than due to family environmental effects.
Genetic confounders are stable within a person, so you can handle them by observing the same person prior to, say, a move across neighborhoods.
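To make this concrete, here's a toy simulation (mine, not from any of the papers above; every number in it is made up) of why a within-person movers design and a sibling control address different slices of the confounding:

```python
import numpy as np

rng = np.random.default_rng(0)
n_fam = 5_000  # simulated families, two siblings each

# Stable confounder behind both attitudes and moving:
# a shared familial component plus an individual component.
family = rng.normal(size=n_fam)
trait1 = family + rng.normal(scale=0.7, size=n_fam)  # sibling 1
trait2 = family + rng.normal(scale=0.7, size=n_fam)  # sibling 2

# Trait-driven sorting: higher-trait people are likelier to move to a city.
moved1 = rng.random(n_fam) < 1 / (1 + np.exp(-trait1))
moved2 = rng.random(n_fam) < 1 / (1 + np.exp(-trait2))

def attitude(trait):
    """A noisy measurement of the stable trait; moving has NO causal effect."""
    return trait + rng.normal(scale=0.5, size=n_fam)

pre1, post1 = attitude(trait1), attitude(trait1)  # sibling 1, before/after
pre2, post2 = attitude(trait2), attitude(trait2)  # sibling 2, before/after

# 1) Naive cross-section: city residents differ, purely from selection.
naive = post1[moved1].mean() - post1[~moved1].mean()

# 2) Within-person movers design: all stable person-level confounding
#    (including genetics) differences out.
within = (post1[moved1] - pre1[moved1]).mean()

# 3) Sibling control on discordant pairs (one moved, one didn't): the shared
#    familial component differences out, but sibling-specific selection remains.
discordant = moved1 & ~moved2
sibling = (post1[discordant] - post2[discordant]).mean()

print(f"naive cross-sectional gap: {naive:+.2f}")   # large, pure selection
print(f"within-person change:      {within:+.2f}")  # ~0: trait differenced out
print(f"sibling-pair gap:          {sibling:+.2f}") # >0: within-pair selection
```

Contrast (2) is what the movers papers above effectively run; contrast (3) is the sibling control the TL;DR asks for, which handles shared-family confounding but not selection on differences between siblings.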
There are intelligence differences between groups, but here's something useful: thanks to universal screening programs, we know that Hispanic students are disproportionately underidentified for gifted programs.
Many gifted students of all races aren't identified.
Smart kids tend to have smart parents. Those parents tend to be more involved in their kids' educations, pushing them into gifted programs and good schools. But this isn't universal, and it's amplified by parents' socioeconomic status.
Teachers notice very smart kids, but there's some bias. For example, if a very smart kid comes from a family that doesn't speak English, that kid's exposure to English may be limited, making them seem less intelligent than they are.
The EEF (U.K.) and NCEE (U.S.) have strict preregistration and analysis plan requirements for educational interventions they fund: it's hard to cheat when you take their money.
So, what do their results look like?
They find very small effects of educational interventions.