I’d guess that it’s most likely not systemd itself causing the problem, but rather something kernel- or hardware-side.
I’m not familiar enough with Cloudflare’s error messages — or deployment with Cloudflare — to know exactly what behavior that corresponds to, but my guess is that it can open a TCP connection to port 443 on what it thinks is your server, but either it’s not getting HTTPS on that port, or your server isn’t configured to serve the right certificate for that hostname, or the web server software running on it is otherwise broken. It might also be some sort of intervening firewall.
I don’t know where your actual server is; it may not even be accessible to me. But if you have a Linux machine that can talk to it directly – including, perhaps, the server itself – you should be able to see what certificate it’s handing back via:
$ openssl s_client -showcerts -servername akaris.space -connect IP-address-of-actual-server:443
That’ll try to establish a TLS connection, will send the specified server name so that if you’re using vhosting on the server, it knows which site to return, and then will tell you what certificate the web server used. Would probably be my first diagnostic step if I thought that there was a problem with the TLS handshake on a machine I was running.
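If you just want the fields that matter for a mismatch check rather than the full handshake transcript, you can inspect a certificate with `openssl x509`. A sketch: here I generate a throwaway self-signed certificate just to have something to inspect (the filenames are made up for the demo); against your live server you’d feed in the output of the `s_client` command above instead.

```shell
# Generate a throwaway self-signed cert for akaris.space so there's
# something local to inspect (demo only).
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout demo-key.pem -out demo-cert.pem \
    -subj "/CN=akaris.space" -days 1 2>/dev/null

# Show just the fields you care about when chasing a cert mismatch:
# who the cert is for, who issued it, and whether it's expired.
openssl x509 -in demo-cert.pem -noout -subject -issuer -dates
```

If the subject (or its Subject Alternative Names) doesn’t cover the hostname, or the dates show it expired, that’s your Cloudflare problem right there.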
That might provide enough information to you to let you resolve the issue yourself.
Beyond that, trying to provide much more information probably isn’t possible without more information about how your server is set up and what actually is working. You can censor IP addresses if you want to keep that private.
Less energy density, though.
On the other hand, maybe a less-fire-risky battery would be grounds for increasing the current 100Wh maximum that the FAA places on laptop batteries.
While details of the Pentagon’s plan remain secret, the White House proposal would commit $277 million in funding to kick off a new program called “pLEO SATCOM” or “MILNET.”
Please do not call it “MILNET”. That term’s already been taken.
https://en.wikipedia.org/wiki/MILNET
In computer networking, MILNET (fully Military Network) was the name given to the part of the ARPANET internetwork designated for unclassified United States Department of Defense traffic.
cultural wasteland
https://en.wikipedia.org/wiki/List_of_U.S._states_and_territories_by_historical_population
According to this, Nevada only had 110k people statewide in 1940.
In 1940, New York City had 7.5 million.
Gotta have people to produce cultural output.
These are not official state foods. They are what the source website has decided to appoint as the favorite food for each.
This is a list of official state foods:
https://en.wikipedia.org/wiki/List_of_U.S._state_foods
EDIT: Corrected link; source page is down and had originally linked to wrong page. Used archive.org to get to original.
Well, under GE Proton, presumably.
Just giving an example; translate to your preferred environment!
If you use pixz, you can get indexing (permitting random access), parallel compression/decompression, and LZMA compression (generally superior to gzip’s LZ77) with tarballs.
$ sudo apt install pixz
$ tar cvf blahaj.tar.pixz -Ipixz blahaj/
Also responding here to a private message, in hopes that some of the information might be useful to others:
To be honest, I understood about half of it haha.
rubs chin
So, I’m not sure which bits aren’t clear. For most terms in my comments, you can just search for them and get a straightforward explanation, but if I had to guess:
inpainting
Inpainting is when you basically “erase” part of an already-generated image that you’re mostly happy with, and then generate a new image, but only for that tiny bit. It’s a useful way to fine-tune an image that you’re basically happy with.
“Image-to-image”.
That’s an Automatic1111 term, I think. Oh, Automatic1111 is a Web-based frontend to run local image generation, as opposed to ArtBot, which appears to be a Web-based frontend to Horde AI, which is a bunch of volunteers who donate their GPU time to people who want to do generation on someone else’s GPU. I’m guessing that ArtBot got it from there.
Automatic1111 is still widely used, and IMHO is easier to start out with, but ComfyUI, which has a much steeper learning curve but is a lot more powerful, is displacing it as the big Web UI for local generation.
Basically, Automatic1111, as it ships without extensions, has two “tabs” where one does image generation. The first is “text-to-image”. You plug in a prompt, you get back an image. The second is “image-to-image”. You plug in an image and a prompt and process that image to get a new image. My bet is that ArtBot used that same terminology.
prompt
This is just the text that you’re feeding a generative image AI to get an image. A “prompt term” is one “word” in that.
Stable Diffusion
This is one model (well, a series of models). That’s what converts your text into an image. It was the first really popular one. Flux, which I referenced above, is a newer one. It’s possible for people who have enough hardware and compute time to create “derived models” — start from one of those and then train models on additional images and associated terms to “teach” them new concepts. Pony Diffusion is an influential model derived from Stable Diffusion, for example.
A popular place to download models — the ones that are freely distributable — for local use is civitai.com. That also has a ton of AI-generated images and shows the model and prompts used to generate them, which IMHO is a good way to come up to speed on what people are doing.
Horde AI — unfortunately but understandably — doesn’t let people upload their own models to the computers of the people volunteering their GPUs, so if you’re using that, you’re going to be limited to using the selection of models that Horde has chosen to support.
Models have different syntax. Unfortunately, it looks like ArtBot doesn’t provide a “tutorial” for each or anything. There are guides for making prompts for various “base” models, like Stable Diffusion and Flux, and generally you want to follow the “base” model’s conventions.
SD
A common acronym for “Stable Diffusion”.
sampler
So, the basic way these generative AIs work is by starting with what amounts to an image full of noise – think of a TV just showing static. That static is randomly-generated. On computers, random numbers are usually generated via pseudo-random number generators. These PRNGs start with a “seed” value, and that determines what sequence of random numbers they come up with.

Lots of generative AI frontends will let you specify a “seed”. That will, thus, determine what static you’re starting out with. You can have a seed that changes each generation, which many frontends do and I think that ArtBot does, looking at its Web UI, since it has a “seed” field that isn’t filled in by default. IMHO, this is a bad default, since if you do that, each image you generate will be totally different – you can’t “refine” one by slightly changing the prompt to get a slightly-different image.
Anyway, once they have that “static” image, then they perform “steps”. Each “step” takes the existing image and uses the model, the prompt, and the sampler to determine a new state of the image. You can think of this as “trying to see images in the static”. They just repeat this a number of times, however many steps you have them set to run. They’ll tend to wind up with an image that is associated with the prompt terms you specified.
An easy way to see what they’re doing is to run a generation with a fixed seed set to 0 steps, then one set to 1 step, and so forth.
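The seed-determines-the-sequence idea isn’t specific to image generation; you can see it with any PRNG. A minimal sketch using bash’s built-in `RANDOM` (assigning to `RANDOM` seeds it):

```shell
# Assigning to RANDOM seeds bash's built-in PRNG; the same seed always
# produces the same sequence of numbers -- the same "static".
RANDOM=42
first="$RANDOM $RANDOM $RANDOM"

RANDOM=42
second="$RANDOM $RANDOM $RANDOM"

echo "$first"
echo "$second"   # same three numbers as the first line

# A different seed starts a different sequence:
RANDOM=1337
echo "$RANDOM $RANDOM $RANDOM"
```

Same deal with the image generators: fix the seed and you get the same starting static, so small prompt tweaks give you small image changes.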
You seem super knowledgeable on the topic, where did you learn so much?
Honestly, I don’t know all that much, because for me, this is a part-time hobby. Probably the most-familiar people that you can reach are on subreddits on Reddit dedicated to this stuff. I’m trying to bring some of it over to the Threadiverse.
Civitai.com is a good place to see how people are generating images, look at their prompt terms.
Here and related Threadiverse communities, though there’s not a lot of talk on here, mostly people showing off images (though I’m trying to improve that with this comment and some of my past ones!). [email protected] tends towards more the technical side. [email protected] has porn, but not a lot of discussion, though I remember once posting an introduction to use of the Regional Prompting extension for Automatic1111 there.
Reddit’s got a lot more discussion; last I looked, mostly on /r/StableDiffusion, though the stuff there isn’t all about Stable Diffusion.
There are lots of online tutorials talking about designing a prompt and such, and these are good for learning about a particular model’s features.
Some stuff is specific to one particular model or frontend, and some spans multiple, and while there’s overlap today, that information isn’t exactly nicely and neatly categorized. For example, “negative prompts” (prompt terms that the model tries to avoid rather than include) are a feature of Stable Diffusion, and are invaluable there, but Flux doesn’t support them. DALL-E, a commercial service, doesn’t support negative prompts. Midjourney, another commercial service, does. Commercial services also aren’t gonna tell everyone exactly how everything they do works. Also, today this is a young and very fast-moving field, and information that’s a year old can be kind of obsolete. There isn’t a great fix for that, I’m afraid, though I imagine that it may slow down as the field matures.
It does look like they have at least one Flux model in that ArtBot menu of models, so you might try playing around with that and see if you’re happier with the output. I also normally use 25 steps with Flux rather than 20, and the Euler sampler, both of which it looks like it can do.
EDIT: Looks like for them, “Euler” is “k_euler”.
I’m not familiar with Artbot.
investigates
Yes, it looks like it supports inpainting:
https://tinybots.net/artbot/create
Look down in the bottom section, next to “Image-to-image”.
That being said, my experience is that inpainting is kind of time-consuming. I could see fine-tuning the specific look of a feature – like, maybe an image is fine except for a hand that’s mangled, and you want to just tweak that bit. But I don’t know if it’d be the best way to do this.
I don’t know if this is actually true, but I recall reading that prompt term order matters for Stable Diffusion (assuming that that is the model you are using; it looks like ArtBot lets you select from a variety of models). Earlier prompt terms tend to define the scene. While I’ve tended to do this, I haven’t actually tried to experiment enough to convince myself that this is the case. You might try sticking the “dog” bit earlier in the prompt.
If this is Stable Diffusion or an SD-derived model and not, say, Flux, prompt weighting is supported (or at least it is when running locally on Automatic1111, and I think that that’s a property of the model, not the frontend). So if you want more weight to be placed on a prompt term, you can indicate that. Adding additional parentheses will increase weight of a term, and you can provide a numeric weight: A cozy biophilic seaport village. In the distance there are tall building and plants. There are spaceships flying above. In the foreground there is a cute ((dog)) sitting on a bench.
or A cozy biophilic seaport village. In the distance there are tall building and plants. There are spaceships flying above. In the foreground there is a cute (dog:3) sitting on a bench.
In general, my experience with Stable Diffusion XL is that it’s not nearly as good as Flux at taking in English-language descriptions of relationships between objects in a scene. That is, “dog on a bench” may result in a dog and a bench, but maybe not a dog on a bench. The prompts I use with Stable Diffusion XL tend to be lists of keywords, rather than English-language sentences. The drawback with Flux is that it’s heavily weighted towards creating photographic images, and I’m guessing, from what you submitted, that you’re looking more for a “created by a graphic artist” look.
EDIT: Here’s the same prompt you used fed into stoiquoNewrealityFLUXSD35f1DAlphaTwo, which is derived from Flux, in ComfyUI:
Here it is fed into realmixXL, which is not derived from Flux, but just from SDXL:
The dog isn’t on the bench in the second image.
You don’t pipe salt water through the data center. You have a heat exchanger that touches the salt water.
The fragrance is available in two versions – one for men and one for women – and will set back supporters a whopping $249 for each 100ml bottle.
A Profile of Trump Voters: The Demographics of his MAGA Enthusiasts and Their Relationship to Him
In their majority they tend to be white, male, and mainly older, are highly conservative, support traditional values such as religion and proud patriotism, are less likely to have a college degree, are more likely to be rural or small town-based and lower-income
I dunno if a $250 bottle of cologne is highly aligned with what Trump’s demographic wants, but I suppose he has more experience in brand-building than I do.
It takes more work to avoid salt buildup, but you can evaporate saltwater as a place to dump heat, and we aren’t gonna run out of saltwater any time soon. 'Course, only so many places have saltwater access.
EDIT: You evaporate enough water for cooling, you can increase rainfall somewhat in the local area, which boosts crop growth measurably. I remember reading an article about nuclear power plants that use evaporative cooling producing that effect.
kagis
The growing prevalence of clean energy raises the question of possible associated externalities. This article studies the effects of nuclear power plant development (and, as a result, the increased amount of water in the atmosphere from evaporative cooling systems) on nearby crop yields and finds that an average nuclear power plant increases local soybean yields by 2 and corn yields by 1 percent.
As @[email protected] said.
https://en.wikipedia.org/wiki/Active_noise_control
Historically, if you were in a noisy environment, you could get closed-back, circumaural headphones — headphones that fit around your ears and had a lot of sound-absorption padding — to help soak up the sound. I still use decent non-ANC circumaural headphones at home.
There are also some people who are more-willing to tolerate discomfort than I am who get in-ear buds, which block noise in their ear canal, and on top of that, fit ear protectors intended for industrial use over them, like 3M Peltor X5 ear protectors, which have even more passive sound absorption than current circumaural headphones do, and are even larger.
That sort of thing works well on higher frequency sound, but not as well on low-frequency stuff, like engine noise, large fans, stuff like that.
ANC basically has microphones in your headphones, picks up on what sounds are showing up at your ear, and then tries to compute and play back a sound that produces destructive interference at your ear. That is, if you look at the sound waves, where the environmental sound is low pressure, it plays back high pressure signal, and when the environmental sound is high pressure, it plays back low pressure signal. It’s not perfect, or it could make environmental sound totally inaudible. But high-end ANC headphones are pretty impressive these days. I have a pair of Sennheiser Momentum 4 headphones — good, though not the best ANC out there in 2025, and I don’t personally recommend these for other reasons — and when they kick on, the headphones are designed to have the ANC fade in; same thing happens in reverse, fades out when you flip the ANC off. It sounds almost as if fans and the like around you are powering up and down when that happens, very eerie if you’ve never experienced it before. Even the sounds that it doesn’t do so well on, like people talking, it significantly reduces in volume.
And ANC does best with the other side of the spectrum, the side that passive sound absorption doesn’t – the low-frequency stuff, especially regular sounds like fans. So having both a lot of passive sound absorption and ANC on a given pair of headphones lets the two work well together.
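The cancellation idea can be sketched numerically. This is a toy illustration only, not how real ANC DSP works (real systems have to model the acoustic path through the headphones and react with near-zero latency): a “noise” waveform plus an equal-and-opposite “anti-noise” waveform sums to silence.

```shell
# Toy sketch of destructive interference: for each sample of
# environmental "noise", the ANC circuitry plays back the inverse,
# and the sum -- what your ear hears -- is zero.
awk 'BEGIN {
    for (i = 0; i < 8; i++) {
        noise = sin(i)        # environmental sound sample
        anti  = -sin(i)       # what the ANC circuitry plays back
        printf "noise=%7.3f  anti=%7.3f  heard=%7.3f\n", noise, anti, noise + anti
    }
}'
```

Real cancellation is imperfect because the anti-noise is a prediction that’s always slightly late and slightly wrong, which is why ANC works best on regular, predictable sound like fan drone.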
People often use cell phones in noisy environments, with a lot of people around, and ANC makes it a lot easier to hear music or whatever without background sound interfering. I think that it’s very likely that people will, long term, mostly wind up using headphones with ANC (short of moving to something more elaborate like a direct brain interface or something). It’s not really all that important if you’re in a quiet environment, and I don’t bother using ANC headphones on my desktop at home. But if you’re in random environments — waiting a grocery store line, in a restaurant with music playing over the restaurant’s speakers, on an airplane with the drone of the airplane engines, whatever — it really helps to reduce that background sound. ANC isn’t that new. I think that I remember it mostly being billed as useful for airplane engine noise back when, which they’re a good fit for. But it’s gotten considerably better over the years. For me, in 2025, good ANC is something that I really want to have for smartphone use.
The problem is that in order to do ANC, you need at least a microphone, preferably an array, and somewhere you need to have a model of the sound transmission through the headphones and be running signal processing on the input sound to generate that output sound. In theory, you could do it on an attached computer if you had a fast data interface, but in practice, ANC-capable headphones are sold as self-contained units that handle all that themselves. So you gotta power the little computer in the headphones. That means that you probably have batteries and at least for full size headphones (rather than earbuds) you might as well stick a USB interface on them to charge them, even if the user is using Bluetooth for wireless connectivity. And if you’ve done that, it isn’t much more circuitry to just let the headphones act as USB headphones, so in general, ANC headphones tend to also be USB-capable. My Momentum 4 headphones have all of Bluetooth, USB-C, and a traditional headphones interface, but…I just haven’t really wound up using the headphones interface if I have the other options available on a given device. Might be convenient if I were using some device that only had headphones output. shrugs
I mean, there were legitimate technical issues with the standard, especially on smartphones, which is where they really got pushed out. Most other devices do have headphones jacks. If I get a laptop, it’s probably got a headphones jack. Radios will have headphones jacks. Get a mixer, it’s got a headphones jack. I don’t think that the standard is going to vanish anytime soon in general.
I like headphones jacks. I have a ton of 1/8" and 1/4" devices and headphones that I happily use. But they weren’t doing it for no reason.
From what I’ve read, the big driver pushing the jack out on smartphones was that it just takes up a lot more physical space in the phone than USB-C or Bluetooth. I’d rather just have a thicker phone, but a lot of people wouldn’t, and if you’re going all over the phone trying to figure out what to eject to buy more space, that’s gonna be a big target. For people who do want a jack on smartphones, which invariably have USB-C, you can get a similar effect by just leaving a small USB-C audio interface with a headphones jack on the end of your headphones (one with a passthrough USB-C port if you also want to use the port for charging).
A second issue was that the standard didn’t have a way to provide power (there was a now-dead extension from many years back, IIRC for MD players, that let a small amount of power be provided with an extra ring). That didn’t matter for a long time, as long as your device could put out a strong enough signal to drive headphones of whatever impedance you had. But ANC has started to become popular now, and you need power for ANC. This is really the first time I think that there’s a solid reason to want to power headphones.
Another issue: the jack’s contacts get briefly shorted when plugging in and unplugging, which could result in loud sound at the driver membrane.
USB-C is designed so that the springy tensioning stuff that’s there to keep the connection solid is on the (cheap, easy to replace) cord rather than the (expensive, hard to replace) device; I understand from past reading that this was a major reason that micro-USB replaced mini-USB. Instead of your device wearing out, the cord wears out. Not as much of an issue for headphones as mini-USB, but I think that it’s probably fair to say that it’s desirable to have the tensioning on the cord side.
On USB-C, the right part breaks. One irritation I have with USB-C is that it is…kind of flimsy. Like, it doesn’t require that much force pushing sideways on a plug to damage it. However — and I don’t know if this was a design goal for USB-C, though I suspect it was — my experience has been that if that happens, it’s the plug on the (cheap, easy to replace) cord that gets damaged, not the device. I have a television with a headphones jack that I destroyed by tripping over a headphones cord once, because the headphones jack was nice and durable and let me tear components inside the television off. I’ve damaged several USB-C cables, but I’ve never damaged the device they were connected to while doing so.
On an interesting note, the standard is extremely old, probably one of the oldest data standards in general use today; the 1/4" mono standard was from phone switchboards in the 1800s.
EDIT: Also, one other perk of using USB-C instead of a built-in headphones jack on a smartphone is that if the DAC on your phone sucks, going the USB-C-audio-interface route means that you can use a different DAC. Can’t really change the internal DAC. I don’t know about other people, but last phone I had that did have an audio jack would let through a “wub wub wub” sound when I was charging it on USB off my car’s 12V cigarette lighter adapter — dirty power, but USB power is often really dirty. Was really obnoxious when feeding my car’s stereo via its AUX port. That’s very much avoidable for the manufacturer by putting some filtering on the DAC’s power supply, maybe needs a capacitor on the thing, but the phone manufacturer didn’t do it, maybe to save space or money. That’s not something that I can go fix. I eventually worked around it by getting a battery-powered Bluetooth receiver that had a 1/8" headphones jack, cutting the phone’s DAC out of the equation. The phone’s internal DAC worked fine when the phone wasn’t charging, but I wanted to have the phone plugged in for (battery hungry) navigation stuff when I was driving.
Sony WH-1000XM4/5/6
I don’t have one of those, but they’re pretty popular as headphones with good ANC.
Jlab Epic Air Sport ANC
I do have those, though.
I’ve never broken a plastic ice cube tray twisting it. There are plenty of plastic trays on Amazon.
I have tried a silicone one once and didn’t like it, as it took more doing to get the ice cubes out than the plastic tray, where they tend to all readily slide out after the tray’s been given a twist.
Specifically what is the behavior you’re seeing? I can’t guess exactly from the meme.