If you tell a professional that the answer is “B” while the professional had “A” in mind, you will have to convince them why “B” is the correct answer, or they will ignore your suggestion. I think a good LLM should be able to tell which features it weighted most heavily in its reasoning. It would be much easier to get used to as a tool that way.
I agree that they will be skeptical. However, research data collected over time should establish sensitivity and specificity, just like for any other test.
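For concreteness, sensitivity and specificity are just the true-positive and true-negative rates computed from a confusion matrix. A minimal sketch in Python (the counts are made-up illustrative numbers, not real study data):

```python
def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: of the cases that are actually positive,
    what fraction did the model flag?"""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: of the cases that are actually negative,
    what fraction did the model correctly clear?"""
    return tn / (tn + fp)

# Hypothetical evaluation: 90 true positives, 10 false negatives,
# 85 true negatives, 15 false positives.
print(f"sensitivity = {sensitivity(90, 10):.2f}")  # 0.90
print(f"specificity = {specificity(85, 15):.2f}")  # 0.85
```

With enough evaluations like this, an LLM's track record on a given task can be judged the same way any new diagnostic test would be.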