That’s mostly accounting for the resolution and motion sensitivity in different parts of the eye. With enough cameras a car should be able to “see” more than we could at any one time.
I think car automation peaked at adaptive cruise control. It’s a simple, tractable problem that’s generally well confined and improves the driver’s ability to concentrate on other road risks.
I can see the argument that visible light should be enough, given we humans can drive with just two eyes and a few mirrors. However, that argument probably misses the millions of years of evolution our neural networks have gone through while hunting and tracking threats, which happens to make predicting where other cars might be mostly fine.
I have a feeling regulators aren’t going to be happy with a claim of driving better than the average human. FSD should be aiming to be at least 10x better than the best human drivers and we’re a long way off from that.
For portability Vulkan is the way (it also gets you GPU compute for free without needing vendor libraries). That said, the rutabaga encapsulation is handy for things like Wayland over virtio-gpu, which some use cases need.
I should note that for even closer-to-native performance you want virtio-gpu with native context. Patches for that are currently being reviewed on the mailing list: https://patchew.org/QEMU/20241024233355.136867-1-dmitry.osipenko@collabora.com/
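If you want to try the Vulkan (Venus) path in the meantime, the invocation looks roughly like the sketch below. This assumes a QEMU 9.2+ build against a Venus-enabled virglrenderer and a guest with a recent Mesa; the device and property names are how I remember them from the upstream virtio-gpu docs, so check them against your version.

#!/usr/bin/env python3
# Rough sketch: launch a guest with Vulkan passed through via Venus.
# Assumes QEMU 9.2+ and a Venus-enabled virglrenderer on the host;
# the guest needs a Mesa with the virtio-gpu Vulkan driver.
import subprocess

cmd = [
    "qemu-system-x86_64",
    "-machine", "q35,accel=kvm",
    "-m", "8G",
    # Host-side GL context for virglrenderer to render into.
    "-display", "gtk,gl=on",
    # virtio-gpu with blob resources and the Venus (Vulkan) capset.
    "-device", "virtio-gpu-gl,hostmem=4G,blob=true,venus=true",
    # Hypothetical guest image, just for illustration.
    "-drive", "file=guest.qcow2,if=virtio",
]
subprocess.run(cmd, check=True)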
It depends what they want to do. They can fork and take on the burden of maintaining the whole tree, in which case good luck with that: Linux is too much of a fire hose for a third party to assemble something similar while making different choices about what they merge. Otherwise they can maintain a rebased fork that tracks the Torvalds tree, and then congratulations, you’ve just reinvented the feature tree: contribution with extra steps.
I don’t think the algorithms themselves are to blame but what they are tuned for. While engagement/eyeball hours for the ad server is the prime metric, the quality of experience will be subservient to it. If the algorithms could better measure your mood and stimulation levels and maximise for that, the effect would be less toxic. Ideally, if one realised you were just mindlessly consuming, it could suggest you’ve done enough today and should try something else. But I fear that is not something the owners of the various ecosystems want.
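To make the tuning point concrete, here’s a toy sketch (every signal name below is made up) of how the same ranking loop changes character purely based on what the score rewards.

# Toy sketch of the tuning argument: same code, different objective.
from dataclasses import dataclass

@dataclass
class Post:
    predicted_watch_seconds: float   # what engagement-driven feeds reward
    predicted_mood_lift: float       # -1..1, how the viewer tends to feel after
    mindless_scroll_score: float     # 0..1, likelihood it is just filler

def engagement_rank(p: Post) -> float:
    # Today’s objective: keep eyeballs on the ad server.
    return p.predicted_watch_seconds

def wellbeing_rank(p: Post) -> float:
    # Alternative objective: reward mood/stimulation, penalise filler.
    return p.predicted_mood_lift - 2.0 * p.mindless_scroll_score

feed = [Post(120, -0.4, 0.9), Post(45, 0.6, 0.1), Post(300, -0.7, 0.95)]
print(sorted(feed, key=engagement_rank, reverse=True)[0])  # doomscroll bait wins
print(sorted(feed, key=wellbeing_rank, reverse=True)[0])   # the healthier post wins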
He has certainly been weirdly selective in the data he quotes while trying not to come across as a complete loon.
So this is like extending Mastodon replies into your blog post, but with more syndication options?
I met a man whose focus was on making ASIC fabrication accessible to students and hobbyists. The audacity of the ambition made me smile, and the fact they have succeeded using an open source workflow made me happy.
In all DRM devices there are private signed certificates that can be used to establish a secure, authenticated connection. To get at them you need to crack/hack/file the top off the chip to exfiltrate the certificate. More modern “Trusted Computing”-style platforms also include verified boot chains, so even if you extract the certificate you couldn’t use it, because you would also need to sign the boot chain to prove no code has been altered.
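As a toy sketch of why the verified boot chain matters (not any real DRM scheme, just an illustration): the usable key is only derived when every measured stage matches the golden values, so a lifted certificate on tampered firmware gets you nothing.

# Toy model of a measured/verified boot chain, not any real DRM scheme.
import hashlib
import hmac

# Imagined vendor-signed "golden" measurements for each boot stage.
GOLDEN = {
    "bootloader": hashlib.sha256(b"bootloader v1.2").hexdigest(),
    "kernel":     hashlib.sha256(b"kernel 6.12").hexdigest(),
    "drm_client": hashlib.sha256(b"drm client 3.4").hexdigest(),
}

# Stand-in for the per-device secret fused at manufacture.
DEVICE_SECRET = b"per-device private key material"

def boot(stages: dict) -> "bytes | None":
    """Measure each stage; only unseal the working key if all match."""
    for name, code in stages.items():
        if hashlib.sha256(code).hexdigest() != GOLDEN[name]:
            print(f"measurement mismatch in {name}, key stays sealed")
            return None
    # The working key is bound to the measurements, so altered code
    # never gets to see it even with the raw certificate in hand.
    binding = "|".join(GOLDEN.values()).encode()
    return hmac.new(DEVICE_SECRET, binding, hashlib.sha256).digest()

good = {
    "bootloader": b"bootloader v1.2",
    "kernel": b"kernel 6.12",
    "drm_client": b"drm client 3.4",
}
tampered = dict(good, kernel=b"kernel 6.12 + stream dumper")

print(boot(good) is not None)      # True: untouched chain unseals the key
print(boot(tampered) is not None)  # False: extracted cert alone is useless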
Absolutely - modern pirates are extracting the digital streams with the DRM removed. However, they closely guard their methods of operation, because once the exploits or compromised keys are known they can be revoked and the cracking has to start again. They likely have hardware with reverse-engineered firmware which won’t honour key revocation but still needs to be kept up to date with recent-ish keys.
For example, the Blu-ray encryption protocols are well enough known that you can get things working if you have the volume keys. However, getting hold of them is tricky, and you have to be careful your Blu-ray player never reads a disc that revokes the old keys.
For streaming, things are a little easier, because if you can get on the right side of the DRM you can simply copy the stream. However, things like HDCP and moving DRM into secure enclaves are trying to ensure that the decryption process cannot be watched from the outside. I’m sure there are compromised HDCP devices, but again, once their keys get leaked they will no longer be able to accept a digital stream of data (or may negotiate down to a sub-HD rate).
I remember programming the VCR when VHS was first a thing and I’m definitely not nostalgic for it. It was the best most people could afford at the time but it certainly wasn’t good.
When did people get nostalgic for the crappy analogue definition that was VHS? What’s next, Betamax special editions?
I do pretty much everything in Firefox but during the week I keep a Chrome window up for Hangouts and Jira.
All the cache and prediction logic that eventually gave us Spectre is basically compensating for whatever crap random compilers kick out.
Itanium was an interesting architecture, but it relied on compilers to build efficient code for its bundles. This compares to x86, which dedicates loads of silicon to getting the best performance even out of mediocre code. Unfortunately, the Itanic architecture was really ill-suited to emulating the other.
Church of England? They are pretty vanilla and low-key in my experience.
I don’t quite follow what this is. Is it a from-scratch implementation of the VS Code experience, or a fork which has removed the proprietary bits and telemetry?
So the entire article basically comes down to: democracy is messy, and with PR you can’t necessarily predict who you are going to get in the coalitions.