Sounds like TPU? Maybe soft one too.
Thanks, I think I get it. There’s a lot of humidity where I live too, so while not at the same scale, the problems are at least relatable. Best of luck with the project, it sounds like a cool but lengthy and complex journey that can really pay off!
Oh! The “brassic” guy! I don’t have much help to offer, but I didn’t know that term, had to look it up and found the TV show :D so thank you.
A tiny bit of potential help: you mention wanting to use desiccant in the boat. I’m obviously not an expert, but it sounds like a bad idea, as the stuff only absorbs water until it saturates, and with that much humidity around it would saturate fast… but maybe you mean in small amounts, so that wouldn’t make a difference.
Had access to the CLI, restarted HA and quickly disabled the Alexa integration: so far everything is working as intended :)
Similarly unfortunate situation for me, using the backup didn’t really help. But I DO have the Alexa integration, I guess next time HA stays up long enough between reboots I’ll disable that.
I think on my system it’s causing reboots. Not fun.
I have a few examples that I hope retain their metadata.
Seed mode is… basically, I stopped using Automatic1111 a long time ago and kinda lost track of what goes on there, but in the app I use (Draw Things) there’s a seed mode called Scale Alike. Could be exclusive, could be the standard everywhere for all I know. It does what it says: changing resolution will keep things looking close enough.
Edit: obviously at some point they had to lose the bloody metadata….
“Better quality” is an interesting concept. Increasing steps, depending on the sampler, changes the image. The seed mode usually changes the image when the size changes.
So, what exactly do you mean by “better quality”?
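If you want to see that effect outside of any specific app, here’s a minimal sketch with Hugging Face diffusers (my assumption of a convenient test setup, not what Draw Things or Automatic1111 do internally): same prompt, same seed, only the step count changes, and you’ll still get three noticeably different images.

```python
# Sketch: fixed prompt and seed, varying step count, to see how "more steps"
# changes the image rather than just refining it. Assumes a CUDA GPU and the
# diffusers + torch packages installed; the model choice is arbitrary.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a lighthouse at sunset, oil painting"

for steps in (20, 30, 50):
    # Recreate the generator each run so every image starts from the same seed.
    generator = torch.Generator("cuda").manual_seed(1234)
    image = pipe(prompt, num_inference_steps=steps, generator=generator).images[0]
    image.save(f"lighthouse_{steps}_steps.png")
```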
I’m thinking it looks like the print gets to a spot where it can go faster, and your hot end can’t keep the filament at the temperature it needs at that speed, causing under extrusion. If my guess is correct, it would show on a small test print (same settings) where you get looooong straight lines that allow for speed, and would disappear if you slow down. Since it looks like a relatively expensive filament, I suggest you wait for more feedback before trying my test, just in case I got it wrong and my test would waste some filament for nothing.
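If you do want to sanity check the hot end theory without wasting much filament, the napkin math is just line width × layer height × speed. Everything below is an example number I made up, plug in your own settings and whatever flow your hot end is rated for:

```python
# Rough sketch: estimate the volumetric flow a long straight line asks for,
# and compare it with an assumed hot end limit. All values are examples.

def volumetric_flow(line_width_mm: float, layer_height_mm: float, speed_mm_s: float) -> float:
    """Approximate extrusion flow in mm^3/s for a straight printed line."""
    return line_width_mm * layer_height_mm * speed_mm_s

requested = volumetric_flow(0.45, 0.2, 150)  # 0.45 mm lines, 0.2 mm layers, 150 mm/s
MAX_FLOW = 12.0  # assumed hot end limit in mm^3/s, check your own hardware/filament

print(f"Requested flow: {requested:.1f} mm^3/s (hot end limit ~{MAX_FLOW} mm^3/s)")
if requested > MAX_FLOW:
    print("Likely under extrusion on fast straights: slow down or raise the temperature.")
```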
Fortunately my thermometers don’t do that, because they are a good choice, Zigbee wise. Always on the lookout for replacements, if the need arises…
The bloody morons… why do they say 16 tops if it can do better? It’s not like they don’t have access to 16GB sticks to test two of them! Like, I get it when it’s “this supports up to” and that’s the largest available at launch, but this is just stupid. Thanks for correcting me!
> super easy to upgrade to 32/48gb
Not on an N95/N97/N100, as they support a max of 16GB… https://ark.intel.com/content/www/us/en/ark/products/231803/intel-processor-n100-6m-cache-up-to-3-40-ghz.html so they can be repaired, but not upgraded.
This isn’t what you asked specifically, but it’s related enough… have a look into https://apps.apple.com/it/app/draw-things-ai-generation/id6444050820?l=en-GB as it’s free, ad free, free from tracking and really well optimized. With that I can run Schnell on my iPhone 13 Pro!
Yep. No artistic skills here, fun shitposts like this one for me are only fueled by some sort of generative AI. In this instance I tried a few different models (initially I got a woman with a red shirt uniform and a bee logo, then a red shirt with yellow and black highlights, a different model gave me a buff, hairy bee in a red shirt uniform… on top of the enterprise, holding a small enterprise in a hand) but had to give up and use Flux even if it’s the most intensive to run. Hardly perfect, but it’s fun and easy to recognize :)
LiDAR sucks, accuracy wise. If you want accuracy, and hate yourself, then you need an iPhone XR/XS because that was the generation with the most accurate Face ID (for whatever reason). Or go photogrammetry; the LiDAR can help there but isn’t the main thing… that route is both free and great. With a Mac you can get the data processed faster, or it can be done (paid) via cloud, or, with less accuracy and a bit of patience, on device. It’s not going to be a professional solution, but depending on the task it works, and chances are the hardware is already there :)
SV07 then? It comes pretty much fully loaded, but I believe they didn’t open source the whole thing.
You might want to double check this, but as far as I remember both the Sovol SV06 and SV08 are open source. The SV06 sounds in line with your desired budget, IF I remember the open source part correctly. And as others have said, Cura, PrusaSlicer and Orca are open source and cross platform.
Sounds about right. But a multimodal one? Ehh… sticking with Meta, their smallest LLaMa is a 7b, and as such without any multimodal features it’s already going to use most of the Quest’s 8gb and it would be slow enough that people wouldn’t like it. Going smaller is fun, for example I like (in the app I linked) to use a 1.6b model, it’s virtually useless but it sure can summarize text. And to be fair, there are multimodal ones that could run on the Quest (not fast), but going small means lower quality. For example the best one I can run on my phone takes… maybe 20 seconds? To generate this description “ The image shows three high-performance sports cars racing on a track. The first car is a white Lamborghini, the second car is a red Ferrari, and the third car is a white Bugatti. The cars are positioned in a straight line, with the white Lamborghini in the lead, followed by the red Ferrari, and then the white Bugatti. The background is blurred, but it appears to be a race track, indicating that the cars are racing on a track.” and it’s not bad. But I’m not sure I’d call it trustworthy :D
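If anyone wants the napkin math behind “most of the Quest’s 8gb”, it’s just parameter count times bytes per weight; the quantization levels below are generic examples, not exact figures for any particular model:

```python
# Back-of-the-envelope: approximate weight size of a 7B model at common precisions.
# Ignores the KV cache, the OS and the rest of the app, which all eat into 8 GB too.

def model_size_gib(params_billions: float, bits_per_weight: int) -> float:
    """Approximate size of the weights alone, in GiB."""
    return params_billions * 1e9 * bits_per_weight / 8 / (1024 ** 3)

for label, bits in [("fp16", 16), ("8-bit", 8), ("4-bit", 4)]:
    print(f"7B at {label}: ~{model_size_gib(7, bits):.1f} GiB just for the weights")
```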
If you happen to have an iPhone and want to get a sense of how difficult LLMs are to run on a mobile device, there’s a free app https://apps.apple.com/app/id6497060890 that allows just that. If your device has at least 8GB of memory then it will even show the most basic version of LLaMa (text only there), and since everything is done on device, you can even try it in airplane mode. Less meaningful would be running it on a computer; for that I suggest https://lmstudio.ai/, which is very easy to use.
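Small bonus if you go the LM Studio route: once a model is loaded you can enable its local server, which speaks an OpenAI-compatible API (on port 1234 by default, as far as I remember), so a few lines of Python are enough to talk to it. Prompt and settings below are just placeholders:

```python
# Sketch: query a model loaded in LM Studio through its OpenAI-compatible local
# server. Assumes the server is enabled and listening on the default port 1234.
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "local-model",  # LM Studio answers with whichever model is loaded
        "messages": [{"role": "user", "content": "Summarize what an LLM is in one sentence."}],
        "temperature": 0.7,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```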
You might have some polypropylene there. Really strong material! Won’t stick to shit, temperature resistant, chemical resistant, can bend without breaking… never tried it personally, but it’s interesting stuff.