wuphysics87@lemmy.ml to Privacy@lemmy.ml · 21 days ago — Can you trust locally run LLMs?
I’ve been playing around with ollama. Given that you download the model yourself, can you trust that it isn’t sending telemetry?
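One practical way to settle this (a sketch, not something from the thread): deny the server network access entirely, so any telemetry attempt simply fails, or watch its sockets while it runs. The Docker flags below are real; the image name `ollama/ollama` is the official image, and the volume name is an assumption.

```shell
# Sketch: run ollama with no network interfaces except loopback.
# Nothing can phone home even if it tries. Note that with --network none
# the API port is unreachable from the host, so you interact via docker exec.
docker run -d --network none -v ollama:/root/.ollama --name ollama ollama/ollama
docker exec -it ollama ollama run llama3   # model name assumed for illustration

# Alternatively, run it natively and watch for outbound connections
# with ss (standard on most Linux distros via iproute2):
ss -tupn | grep -i ollama
```

The model weights themselves are inert data; it is the runtime that could make network calls, so auditing or sandboxing the runtime is what answers the question.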
acockworkorange@mander.xyz · edited 20 days ago — Is the overhead because of containers, or because you’re running something meant to run on Linux through a compatibility layer like MinGW?