Hi all, I'm looking for some suggestions on what the best quantitative methodology may be for a study I am thinking of doing.
I want to compare crime rate between periods of times to see if an intervention implemented by the county impacted crime rate. I am going to compare the number of calls for service and crimes in 2020 to the same in the first six months of 2021. There are about 25 different call types I will be comparing. What kind of regression analysis would be best for this? Thoughts? Thanks!
What software do you plan on using?
Personally, I would compare the first 6 months of one year with the same 6 months of the next year, rather than a full year with half a year.
There are helpful guides online, like Laerd; Minitab has its own guide, etc.
But it all depends on the software you use.
You are asking a causal question about the effect of an intervention, so you need to think carefully about design. A simple before-and-after comparison is likely to pick up the effect of existing time trends. More perniciously, if the policy was implemented in response to a rising or falling trend in crime rates, and there is a several-year lag in the effect of the policy, you might end up with the complete opposite answer (i.e. erroneously finding that the policy increased crime).
Difference-in-difference and synthetic controls are both common methods for estimating the effects of policies. The first uses similar/nearby counties which didn't implement the policy as a baseline for the existing trend in crime rates. The second constructs a counterfactual county also based on similar counties which didn't implement the policy.
I would strongly consider looking into those methods before worrying about your specific outcome measure.
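To make the difference-in-differences idea concrete, here is a minimal sketch using simulated monthly call counts and statsmodels. Everything here is invented for illustration (the county labels, the shared trend, the assumed -10 policy effect); the point is only that the coefficient on the treated-by-post interaction is the DiD estimate of the policy effect, net of the common time trend.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
# Two hypothetical counties observed over 24 months (2020-2021);
# only one implements the policy, assumed to start in month 12.
for county, treated in [("treated_county", 1), ("control_county", 0)]:
    for m in range(24):
        post = int(m >= 12)             # 1 once the policy is in force
        trend = 100 + 0.5 * m           # time trend shared by both counties
        effect = -10 * treated * post   # hypothetical policy effect
        calls = trend + effect + rng.normal(0, 3)
        rows.append({"county": county, "month": m,
                     "treated": treated, "post": post, "calls": calls})
df = pd.DataFrame(rows)

# The treated:post coefficient is the difference-in-differences estimate.
model = smf.ols("calls ~ treated + post + treated:post", data=df).fit()
print(model.params["treated:post"])
```

In a real analysis you would of course use observed counts (likely with a count model such as Poisson or negative binomial rather than OLS), multiple comparison counties, and checks that the pre-period trends are actually parallel.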
This is interestingly close to the crux of how the general public see data vs how researchers see data.
In many glossy lifestyle magazines you'll see "eating more (or less) red meat reduces cancer risk". This often refers to a study in which there's a correlation. But, as per the cliche, correlation does not equal causation; a researcher views this data skeptically (what if people who eat more red meat are also more predisposed to smoking?).
It sounds a bit like you've been set up with a PhD tied to this intervention in order to evaluate it. From my experience, this may well mean you have a board of public-sector stakeholders who won't go so far as to say they want you to show it worked, but who will probably be more critical of evidence that shows it didn't. You will thus learn early on to navigate these waters as a researcher, which is not easy, but is a valuable skill.
In the same situation, my first move would probably be to explain that, because of the confounding factors, it's not possible to show empirically that it "worked"; the real value of the research would lie in a qualitative, critical-realist approach: understanding, for the small sample for whom it did work, how and why that happened. It's not dissimilar to managing expectations as a consultant: what they might want is a golden seal that, empirically, it's fantastic, but to keep your integrity intact you need to work along the lines of researching the positives (which will exist, which is a reasonable thing to do, and which will placate them), whilst avoiding any claim that it's possible to evaluate the intervention empirically or, especially, that such an evaluation will give them the result they want.
You can trivially look at openly available crime statistics to show whether crime dropped or rose during the intervention. This might placate stakeholders who want this form of empirical evidence that it "worked", but scientifically it's bad evidence (which is often enough for politicians!). I'd sincerely doubt you can look at the macro level and empirically reach a conclusion that holds up scientifically. Realistically, it's unlikely that a statistically significant number of offenders accessed the intervention, never mind reacted to it.
If you really want to understand whether this intervention worked, qualitative research really seems the only viable route to meaningful conclusions. The thing is, the intervention might not have changed thousands of lives, but if it cost £100k and kept a single person out of prison for 10 years, it's actually far more than cost-efficient! This is of course less clean, easy, and tidy than a simple ANOVA delivering "yes it did, p=", but if it were possible to assess behavioural interventions that cleanly and iterate on them, we'd be a zero-crime, carbon-neutral planet.