Research methods/psychology/social sciences question

T

Hello all

Couldn't quite think what to call this thread, and not quite sure how to word what I'm asking. Here goes...

When you design an experiment, you review the literature and come up with research questions/hypotheses, plus ways to obtain and analyse data that will answer those questions/test those hypotheses. But what if, as I am sure is often the case, the data aren't as you predicted, yet you notice other interesting things in them? So you write up your results for the original questions, discuss your findings, and then explore further into what you found...

When does this become data "fishing"?

H

Interesting question. I don't think it's a problem exploring other questions in addition to the one you originally wanted answered.

You could change your research questions when writing up, or explain that the research question evolved from the data. Either way, I don't think it should be a problem.

C

I haven't heard of the term 'fishing' before in connection with data, but it is an interesting question. I've been keeping things quite open in terms of seeing what's actually there in my data - I do have research questions, but I'm also looking to see what else emerges. I hadn't thought of that possibly being an issue!

T

I'm not sure it is so much of an issue. I've always been told that thesis questions will change as the research evolves, based on what results are found and what the literature holds.

T

Thanks people. I think I am just struggling with the notion that it should be a deductive approach with clearly set hypotheses that you test. Take changing research questions based on what you observed in the data: I see this happening all the time in my research group, and I am sure people do it even when they don't state or explain it - they just present their evolved RQs as if they were the original ones. But this does not seem very "hypothetico-deductive". I don't think the scientific method as widely practised is what people claim it to be. Or am I missing something here?!

C

I know what you mean - I am doing a mix of quantitative and qualitative studies, with one being 'hypothetico-deductive' and the other not. I have written a section in my methodology chapter explaining that I don't think there's actually such a hard and fast divide between the two in practice, which sounds along the same lines as your point!

T

That is exactly what I'm talking about. I am a quants person (so to speak), but I took a masters module in qualitative research methods and came to respect it A LOT. There is a transparency and a willingness to disclose possible biases, preconceptions, and the evolution of your thought processes and reasoning. This seems to be entirely missing in a lot of quantitative research, which is often presented as if it were almost divine, with no human intervention at all! Hello!!!

T

Yes, you are right - I didn't mention that I changed my objectives, I just rephrased them around my data... My supervisors encouraged this - after all, if I set objective A in my first year but didn't do it and did objective B instead, the thesis might end up looking a bit odd.

T

Yes, I think that is what I shall be doing too.

I

I think it becomes fishing if you don't follow up on your reframed hypotheses to check the outcomes match with your new way of thinking.

For example, you have some hypotheses, you run an experiment, and the outcome isn't what you expected, but you look at the data and find an interpretation that seems to fit. If you stop there and report your conclusion in an absolute way, then this, in my view, is based on a fishing exercise. You've looked for something, and found it. Wow. Big deal.*

However, if you carry on and do another experiment with the reframed question in mind, perhaps changing a few parameters or changing the method entirely, then you're trying to get closer to the truth of the matter.

In my research, I've done this with about 5 iterations and I think I now have a sensible explanation for what is going on and why the answers weren't as obvious as they perhaps first seemed. My final conclusion is being reported as a further question, i.e. I'm at the point where I'm pretty sure a) is the cause of this weird thing, but I don't have sufficient evidence to state that for sure. But, I can make a justified, logical argument to explain what I'm seeing in the data.

I see this type of science as a process of getting closer to the truth.

*it's possible to report these inferences and ideas as conclusions without stating them as the absolute truth. It's hard, but not impossible.

C

I think it becomes data fishing if you analyse without a clear hypothesis and then write up based on what was significant, with a new hypothesis attached that matches the findings.
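To see why that's a problem, here's a minimal sketch in Python (simulated, pure-noise data; every number is invented for illustration). If you test 20 outcome measures at p < .05 when no true effect exists, you can expect roughly one to come out "significant" by chance alone - and picking your hypothesis after seeing which test worked all but guarantees a spurious finding.

```python
# Minimal sketch (all numbers invented for illustration): two groups drawn
# from the SAME distribution, so there is no true effect to find, tested
# across 20 outcome measures.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_outcomes, n_per_group = 20, 30

significant = 0
for _ in range(n_outcomes):
    a = rng.normal(0, 1, n_per_group)  # group 1: pure noise
    b = rng.normal(0, 1, n_per_group)  # group 2: the same pure noise
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        significant += 1

print(f"{significant} of {n_outcomes} outcomes 'significant' at p < .05")
# Chance of at least one false positive across 20 independent tests:
# 1 - 0.95**20 = ~0.64, so fishing "finds" something about two times in three.
```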

I think this is definitely a big problem in psychology (which is my field), as many PIs don't think it's the wrong thing to do. There is a move towards preregistration, which is an attempt to crack down on this, but I don't know many people who actually preregister their studies (I know I don't!)

T

This has been a really interesting discussion. I think the conditions of academia (e.g., publish or perish, significant results usually needed TO publish, no real accountability in peer review, peers responsible for who gets promotion) mean that a lot of stuff out there may be codswallop.

The preregistration thing is progress. I know what you mean - I am seeing it a lot in psychology too. At a PhD student's presentation the other day I wanted to ask "so what are your hypotheses?", but two professors were present, one of them the supervisor of the PhD student who was presenting, and neither of them seemed to consider it an issue! I need to get braver. Maybe...

I'm usually an optimist. Just feel a bit disillusioned lately!

I

I think pre-registration is a good move too, as long as there is also a move to appreciate and value research where the data didn't support the initial hypothesis. It's still interesting to publish work where the initial hypothesis wasn't quite right, as long as the outcomes that are shown are framed in the right way.

I've been finding it very hard to publish my work with comments like "an interesting story in the presence of a failed experiment" - which is absolute crap. The experiment didn't fail, the experiment brought us closer to understanding a complex phenomenon. The initial hypotheses of the experiment were justified from our previous understanding of what was happening. Now, we're closer to knowing a bit more, which means that hopefully our next hypothesis will be closer.

Apologies for the rant, I am a little bitter about this particular topic :-)

(Oh, and don't get me started on people who report p values and leave out the effect sizes. Come on!)
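To put a number on that: here's a quick sketch (Python, simulated data, all values hypothetical) of how a large sample makes a negligible effect "significant". The p value alone looks impressive; the effect size gives the game away.

```python
# Quick sketch (hypothetical numbers): a true difference of only 0.05 SD,
# negligible in practice, becomes "significant" given a big enough sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 10_000                      # large n shrinks p values; it can't grow effects
a = rng.normal(0.00, 1, n)
b = rng.normal(0.05, 1, n)      # tiny true effect: 0.05 standard deviations

t, p = stats.ttest_ind(a, b)
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
cohens_d = (b.mean() - a.mean()) / pooled_sd

print(f"p = {p:.4f}, Cohen's d = {cohens_d:.3f}")
# Expect p < .05 on most runs, yet d of ~0.05 is well below even the ~0.2
# conventionally labelled a "small" effect. Reporting p without d misleads.
```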

Note - I'm not in psychology per se, it's more applied psychology. And lots of people do their experimental design/analysis really, really poorly.

T

Quote From IntoTheSpiral:
I've been finding it very hard to publish my work with comments like "an interesting story in the presence of a failed experiment" - which is absolute crap. The experiment didn't fail, the experiment brought us closer to understanding a complex phenomenon. The initial hypotheses of the experiment were justified from our previous understanding of what was happening. Now, we're closer to knowing a bit more, which means that hopefully our next hypothesis will be closer.


Yes, I fully agree! And re the p value only problem! There are some really good APA journals that cover things like this (formal rants :D), and reading them is so refreshing!
