I'll preface this by saying I'm not speaking specifically about you, OP, so please don't take anything I say below personally.
Every so often I get motivated to do a post on studies, but then decide it isn't worth my time since most people don't seem to actually look at the science or talk about anything that isn't simple to understand, so I don't...then something like this pops up and I wish I had, so I could refer back to it.
I think most people here don't really understand what much of it even means, and just grab abstracts to confirm their biases rather than engaging with the nuance of the situation.
I'll admit I'm not motivated enough for a super in-depth post, but I'll cover some of this briefly. I won't even pull direct quotes (feeling lazy) since it doesn't feel worth the effort, but I can later if needed.
First, this isn't the first study to address this. Other meta-analyses have shown the same effect.
Before we even look at this study, it's important to keep in mind that the effect being shown was small, and there doesn't appear to be any underlying mechanism explaining why it would happen. You could argue that cuts both ways. My view is that it gives researchers a reason to dig into why this is being seen, and to determine whether it's actually an issue or something unrelated. There's nothing wrong with finding new information and then digging to see if it's relevant; that can lead to other discoveries along the way.
This study in particular, though, probably isn't that helpful on that front. Not all studies are equal: "study" covers a wide range of designs, and even within the same design, some are executed better than others (hence peer review and post-publication scrutiny, including fully disclosing what was done in the study). This one wouldn't land at the peak of the evidence hierarchy (or the bottom, to be fair) because it's a cohort study. I don't want to get too off track going through all the positives and negatives of cohorts, because I'm not saying cohorts are "bad", just that the information they present has to be interpreted in a specific way. One thing they are good for is flagging associations like the one being pointed out here, so that further studies can look for underlying causes. The issue is that we've already kind of seen this (although replication should be appreciated), so it doesn't tell us anything necessarily "new".

This study in particular (as is often the case with these types of studies) did not actually track the dose, frequency, or composition of the fish oil taken. I don't want to delve too far into speculation (I didn't fully dissect this particular study), but one quick thought: those who already had issues and were prescribed fish oil were likely getting a specific "quality" and dose, and their compliance in taking it was probably higher than that of previously "healthy" people, who may just be buying random fish oil with varying dosage, purity, and compliance.
I'd always recommend looking at something more in depth than what a news outlet posts about a study; outlets tend to do a poor job explaining things.
Fish oil has lots of studies behind it, though, so there's a lot to look at when determining whether it fits each person's specific use case.