Of course yeah. If you're gonna use ChatGPT as a resource, try to be specific. It pulls from the ENTIRE internet, remember. That means forums like this, where information may not be accurate.
In your instance, phrase it like: "does eating carbs blunt exogenous GH? Cite and link scientific data to support your answer".
You can then go through the links and read them yourself to put together a more well-rounded conclusion.
Something worth talking about that I don't see discussed enough in this space: the way AI tools like ChatGPT are actually designed, and why blindly trusting them in research-heavy fields like ours can get you into trouble fast.
A good friend of mine is head of operations at one of the big five tech companies, and with the huge fucking push for AI there, what he explained about how these systems actually work was genuinely eye-opening. They aren't optimized for truth, they're optimized for engagement. The goal is to generate the highest-probability answer that you're going to like and agree with. It reads how you're prompting it, picks up on what you seem to believe, and calibrates toward what's most likely to resonate with you. Validating by design.
A perfect real-world example: SLU-PP-332. For a while, some influencers on IG and YT were calling it a PPAR agonist, when it's actually an ERR agonist. Where did that come from? AI said so confidently, nobody checked the actual research, and it got copy-pasted across the internet. When you correct ChatGPT on it, it immediately says something along the lines of "you're totally right!", which tells you everything. It wasn't sure, it just told people what sounded good, and that got regurgitated by idiots who like to sound smart and have an agenda to push shit.
The scary part is the feedback loop. Wrong AI answers get posted publicly, that content gets absorbed back into future training data, and the error gets reinforced. In a field where mechanism details actually matter, that's a real problem.
The key is knowing what to use it for. Using AI to pull key bullet points from studies you already have in front of you? Totally reasonable, you're using it as a reading tool and you can verify against the source in real time.
The high-risk move is using AI as your starting point and asking it to explain a compound or mechanism cold, with no primary source to check against. That's exactly how you end up confidently posting something that's wrong.
Find the source first. Get to the actual paper, even if it's just the abstract or the figure legends. Then use AI to help you process and summarize it faster. You stay anchored to real evidence instead of letting the algorithm guess at what you want to hear.
AI is a useful tool. It's just not a reliable authority, and in this space, knowing the fucking difference matters.
.... The bottom line is: don't let AI think for you!!! Use it so it doesn't burn you!!!