
Best Methodology?

D

Hi all, I'm looking for some suggestions on what the best quantitative methodology may be for a study I am thinking of doing.

I want to compare crime rates between time periods to see if an intervention implemented by the county affected the crime rate. I am going to compare the number of calls for service and crimes in 2020 to the same figures in the first six months of 2021. There are about 25 different call types I will be comparing. What kind of regression analysis would be best for this? Thoughts? Thanks!

N

What software do you plan on using?
Personally, I would compare the first 6 months of one year with the same 6 months of the next year, rather than a full year with half a year.
There are helpful guides online, like Laerd; Minitab has its own guide, etc.
But it all depends on the software you use.

You are asking a causal question about the effect of an intervention, so you need to think carefully about design. A simple before-and-after comparison is likely to pick up the effect of existing time trends. More perniciously, if the policy was implemented in response to a rising or falling trend in crime rates and there is a several-year lag in the effect of the policy, you might end up with the complete opposite answer (i.e. erroneously finding that the policy increased crime).
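To make that concrete, here's a minimal sketch (in Python, with made-up numbers) of how a pre-existing downward trend can masquerade as an intervention effect in a naive before/after comparison:

```python
# Minimal sketch: a pre-existing downward trend makes a naive
# before/after comparison look like an intervention effect.
# All numbers here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(24)                    # two years of monthly data
trend = 500 - 5 * months                  # crime already falling ~5 calls/month
counts = rng.poisson(trend)               # observed monthly call counts

# Pretend an intervention happened at month 12 (it has no real effect here)
before, after = counts[:12], counts[12:]
print(before.mean() - after.mean())       # ~60-call "drop" due purely to the trend
```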

Difference-in-differences and synthetic control are both common methods for estimating the effects of policies. The first uses similar/nearby counties which didn't implement the policy as a baseline for the existing trend in crime rates. The second constructs a counterfactual county as a weighted combination of similar counties which didn't implement the policy.

I would strongly consider looking into those methods before worrying about your specific outcome measure.
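For what it's worth, if you do go the difference-in-differences route, the usual two-way fixed effects regression looks roughly like this sketch (the file name and the column names county, month, calls, treated, post are all hypothetical):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format panel: one row per county per month, with
# columns calls, treated (did this county get the policy?) and
# post (is this month after the intervention?)
df = pd.read_csv("county_monthly_calls.csv")
df["treated_post"] = df["treated"] * df["post"]

# Two-way fixed effects OLS; the treated and post main effects are
# absorbed by the county and month fixed effects, and the coefficient
# on treated_post is the difference-in-differences estimate.
model = smf.ols("calls ~ treated_post + C(county) + C(month)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["county"]}
)
print(model.summary().tables[1])
```

Clustering the standard errors by county is the usual default here, since monthly counts within a county are correlated over time.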

A

This is interestingly close to the crux of how the general public see data versus how researchers see data.

In many glossy lifestyle magazines you'll see 'eating more (or less) red meat reduces cancer risk'. This often refers to a study in which there's a correlation. But, as per the cliché, correlation does not equal causation; a researcher views this data skeptically (what if people who eat more red meat are also more predisposed to smoking?).

It sounds a bit like you've been set up with a PhD tied to this intervention in order to evaluate it. From my experience, this might well mean you have a board of public-sector stakeholders who won't go so far as to say they want you to show it worked, but will probably be more critical of evidence that shows it didn't. You will thus learn early on to navigate these waters as a researcher, which is not an easy task, but a valuable skill.

If I were in the same situation, my first step would probably be to explain that, due to the confounders, it's not possible to empirically show it 'worked'; the significant value of the research would instead lie in a qualitative, critical-realist approach: understanding, for the small sample for whom it did work, how and why that happened. It's not dissimilar to managing expectations as a consultant. What they might want is a golden seal saying that empirically it's fantastic, but to keep your integrity intact you need to work along the lines of researching the positives (which will exist, which is a reasonable thing to do, and which will placate them), whilst avoiding saying it's possible to evaluate the intervention empirically or, especially, that such an evaluation will give them the result they want.

You can trivially look at openly available crime statistics to show whether crime dropped or rose during the intervention. This might placate stakeholders who want that form of empirical evidence that it 'worked', but scientifically it's bad evidence (which is often enough for politicians!). I'd sincerely doubt you can look at the macro level and empirically reach a conclusion that holds up scientifically. Realistically, it's unlikely that a statistically significant number of offenders even accessed the intervention, never mind responded to it.

If you really want to understand whether this intervention worked, qualitative methods really seem the only viable route to meaningful conclusions. The thing is, the intervention might not have changed thousands of lives, but if it cost £100k and kept a single person out of prison for 10 years, it's actually much more than cost-efficient! This is of course less clean, easy, and perfect than a simple ANOVA giving 'yes it did, p =', but if it were possible to assess behavioural interventions that cleanly and iterate on them, we'd be a zero-crime, carbon-neutral planet.

D

Quote From Nead:
What software do you plan on using?
Personally, I would compare the first 6 months of one year with the same 6 months of the next year, rather than a full year with half a year.
There are helpful guides online, like Laerd; Minitab has its own guide, etc.
But it all depends on the software you use.


Hello! I am planning on using SPSS, since I had to purchase it previously for a course. It seems quite capable of various types of regression analysis, but I am unsure which option would be most appropriate.

D

Quote From abababa:
This is interestingly close to the crux of how the general public see data versus how researchers see data.

In many glossy lifestyle magazines you'll see 'eating more (or less) red meat reduces cancer risk'. This often refers to a study in which there's a correlation. But, as per the cliché, correlation does not equal causation; a researcher views this data skeptically (what if people who eat more red meat are also more predisposed to smoking?).

It sounds a bit like you've been set up with a PhD tied to this intervention in order to evaluate it. From my experience, this might well mean you have a board of public-sector stakeholders who won't go so far as to say they want you to show it worked, but will probably be more critical of evidence that shows it didn't. You will thus learn early on to navigate these waters as a researcher, which is not an easy task, but a valuable skill. If I were in the same situation, my first step would probably be to explain that, due to the confounders, it's not possible to empirically show it 'worked';


Yes! You have indeed brought up many of the concerns I had in approaching this. I do recognize this will be limited in its influence. However, I am approaching this as a correlational study and am not looking for causation. Essentially, the intervention was put into place, and a noted politician stated it would not impact the crime rate despite leaving people out of jail. I want to look at the simple question of whether or not there was a notable correlation. Would a regression analysis of some sort be the most appropriate way to accomplish this?

D

As I consider this further, perhaps a time-series analysis would be best for this?
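For instance, an interrupted time series (segmented regression) on monthly counts is roughly what I have in mind; a rough sketch (the file and column names are hypothetical, and I'd have to translate this into SPSS):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file: one row per month with columns month_index, calls
df = pd.read_csv("monthly_calls.csv")
df["post"] = (df["month_index"] >= 12).astype(int)            # after intervention?
df["months_since"] = (df["month_index"] - 12).clip(lower=0)   # time since intervention

# Segmented regression: baseline trend (month_index), level change at
# the intervention (post), and slope change afterwards (months_since).
# Counts are usually modelled as Poisson/negative binomial rather than OLS.
model = smf.poisson("calls ~ month_index + post + months_since", data=df).fit()
print(model.summary())
```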

D

I feel like a Pearson correlation test is the right one here, but I'm not sure I can enter data from a dozen call types on a monthly basis that way. Any ideas?
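To clarify what I'm picturing, roughly this loop, one test per call type (the file and column names are hypothetical):

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical file: one row per call type per month
df = pd.read_csv("monthly_calls_by_type.csv")   # call_type, month_index, calls

# One Pearson test per call type: month index vs. monthly count.
# Note this only measures a linear time trend, not an intervention effect.
for call_type, grp in df.groupby("call_type"):
    r, p = pearsonr(grp["month_index"], grp["calls"])
    print(f"{call_type}: r={r:.2f}, p={p:.3f}")
```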
