Lua.
Don’t call the ambulance, it’s too late for me
Seeing as it’s their river and they’re operating it, I’m not sure what exactly you want as evidence.
Bvckup (not a typo)
Made by a little Swiss company, extremely light but very competent. It stays completely out of your way unless it absolutely must get your attention (which is almost never).
I think it’s paid only, but the price is very reasonable. It works great in intermittent situations, i.e. it won’t blow up if it tries to run a scheduled backup while the source or target is disconnected, etc. It has worked very well for me for a decade.
Personally, I’m not so sure we’re missing that much; I think it’s more just sheer scale, as well as the complexity of the input and output connections (unlike machine-learning networks, living things tend to have a much fuzzier sense of inputs and outputs). And of course raw computational speed: our virtual networks are basically at a standstill compared to the parallelism of a real brain.
Just my thoughts though!
I lived on a farm that got some organic farming approvals; it depends on the country, and perhaps even your region. In my country, you can get certain approvals/certifications for organic farming, and the regulations for them are very strict. Things like “chemical” (synthetic) pesticides are forbidden outright, as are strong fertilizers. There is government oversight, so randomized sampling and testing are done on approved entities (farms, companies).
Sadly, this often leads to higher costs and more land use. Like it or not, a lot of the forbidden things do lead to much higher yields. The end result is higher prices; certified organic products are quite expensive here.
While it’s true that the x nm nomenclature doesn’t match physical feature size anymore, it’s definitely not just marketing BS. The process nodes are very significant and very (very) challenging technological steps that usually yield power efficiency gains of tens of percent.
To me, what is surprising is that people refuse to see the similarity between how our brains work and how neural networks work. I mean, it’s in the name. We are fundamentally the same, just at different scales. I believe we work exactly like that, but with far more inputs and outputs and much deeper networks; the fundamental principles, I think, are the same.
Band-Maid.
I’ll assume a lot of people are unfamiliar with them, so for a sample, try Dice or From Now On (an instrumental track), or, for their more punk-y era, Choose Me, Alone…
They have a really wide repertoire, style-wise. As someone who listened to the usual punk/rock suspects in the early 2000s, Band-Maid is now my favorite band.
I can’t give an authoritative answer (not my domain), but I think there are two ways these types of things are done.
The first is just observing the page or service as an external entity: basically requesting a page or hitting an endpoint and tracking whether you get a response (if not, it must be down), or, to measure load in a very naive way, tracking the response time. This is easy in the sense that you need no special access to the target, but it’s also limited in its accuracy. A rough sketch of such a probe follows.
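For illustration only, here’s a minimal Python sketch of that kind of external “black box” probe. The URL is a placeholder, and real monitoring products are obviously more sophisticated (retries, multiple regions, alerting, etc.):

```python
import time
import urllib.request

def probe(url: str, timeout: float = 5.0) -> dict:
    """Request a URL; report whether it answered and how long it took."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            elapsed = time.monotonic() - start
            # "Up" here just means we got an HTTP 200 back in time;
            # response time serves as a very naive load signal.
            return {"up": resp.status == 200, "latency_s": round(elapsed, 3)}
    except Exception:
        # Timeout, DNS failure, connection refused... all count as "down".
        return {"up": False, "latency_s": None}

# Placeholder target, not a real monitored service:
print(probe("https://example.com"))
```

Run on a schedule (say, once a minute), the history of these results is essentially what a simple third-party uptime tracker shows you.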
The second way, like what your GitHub example is doing, is having access to special API endpoints for (or direct access to) performance metrics. Since the GitHub status page is literally run by GitHub, they obviously have easy access to any metric they could want. They probably (certainly) run services whose entire job is to produce reliable data for their status page.
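To make the second approach concrete, here’s a sketch of the general shape of an internal metrics endpoint that a status page with privileged access might poll. This is not GitHub’s actual setup; the path, metric names, and values are all made up for the example:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class MetricsHandler(BaseHTTPRequestHandler):
    """Hypothetical internal-only endpoint serving service health metrics."""

    def do_GET(self):
        if self.path == "/internal/metrics":
            # Hard-coded placeholder values; a real service would read
            # these from its own instrumentation or a metrics store.
            body = json.dumps({
                "requests_per_second": 1234,
                "error_rate": 0.002,
                "p99_latency_ms": 87,
            }).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # The status page (or its collector) would poll this endpoint
    # from inside the network, where external observers can't reach.
    HTTPServer(("localhost", 8000), MetricsHandler).serve_forever()
```

The key difference from the first approach is that you’re reading numbers the service reports about itself, rather than inferring health from the outside.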
The minute details of each of these options are pretty open-ended; there are many ways to do it.
Just my 5¢ as a non-web developer.