Stamets@lemmy.world to People Twitter@sh.itjust.works · 11 months ago
The dream (image post)
candle_lighter@lemmy.ml · 11 months ago
I want said AI to be open source and run locally on my computer.
CeeBee@lemmy.world · 11 months ago
It's getting there. In the next few years, as hardware gets better and models get more efficient, we'll be able to run these systems entirely locally. I'm already doing it, but I have some higher-end hardware.
Xanaus@lemmy.ml · 11 months ago
Could you please share your process for us mortals?
CeeBee@lemmy.world · 11 months ago
Stable Diffusion's SDXL Turbo model running in Automatic1111 for image generation. Ollama with ollama-webui for an LLM. I like the Solar:7b model; it's lightweight, fast, and gives really good results. I have some beefy hardware that I run it on, but it's not necessary to have.
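For anyone wanting to script against a setup like this, both tools expose local HTTP APIs once they're running: Ollama listens on port 11434 by default, and Automatic1111 serves a REST API on port 7860 when launched with the `--api` flag. Below is a minimal sketch, assuming default ports, that the model has already been pulled (e.g. `ollama pull solar`), and placeholder prompts; the model name and generation settings are illustrative, not a prescription.

```python
import base64
import requests

# Ask the local Ollama server for a completion (default port 11434).
# Assumes the model was pulled beforehand, e.g. `ollama pull solar`.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "solar",
        "prompt": "Explain in two sentences why running LLMs locally matters.",
        "stream": False,  # return a single JSON object instead of a stream
    },
)
print(resp.json()["response"])

# Generate an image via Automatic1111's txt2img endpoint (default port
# 7860; the webui must be started with the --api flag enabled).
resp = requests.post(
    "http://localhost:7860/sdapi/v1/txt2img",
    json={
        "prompt": "a cozy reading nook, soft morning light",
        "steps": 4,  # SDXL Turbo is tuned for very low step counts
        "width": 512,
        "height": 512,
    },
)
# The API returns images as base64-encoded strings.
with open("out.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```

The web UIs (ollama-webui and Automatic1111's Gradio interface) sit on top of these same endpoints, so anything you can do in the browser you can automate this way.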