Thoth, this Apple document describes how to update the software on your Mac.
Statements from various informed sources suggest that the average user will not notice the difference for most normal usage. My employer upgraded all servers yesterday, including the web and database servers underpinning a high-traffic internet application. I understand that we are not seeing any measurable change in performance.
Interesting; most of the informed sources I've been talking to are adopting a wait-and-see approach, as there are still many unknowns. Updates for CVE-2017-5754 ("Meltdown") are being made available, but I personally feel it is too early to tell. Apple's update for Meltdown doesn't appear to have had a noticeable impact on performance. Likewise, they say of their upcoming Spectre mitigations for Safari that there will be "no measurable impact" on the Speedometer and ARES-6 tests, and an impact of less than 2.5% on the JetStream benchmark. This is certainly encouraging, but note that the latter figures are for Safari only.
The deployment of the Windows patches has been delayed in organisations I've dealt with due to conflicts with AV software that can cause stop errors (BSODs). AV vendors are rushing out updates that set the registry key Microsoft requires as a compatibility signal before it will offer the patch.
Of course YMMV, and I still think the impact of current and upcoming updates will vary a lot depending on hardware and use cases. As I said upthread, day-to-day PC tasks (web browsing, word processing, email, games) are unlikely to see much impact from the Meltdown patches. (Spectre is more of an unknown.) Intel themselves acknowledge that a large system running kernel-intensive tasks might experience a hit of up to 30%. Even if it's less, that's still a big impact (and cost) where it necessitates additional hardware. Linus Torvalds notes that an average 5% drop might be expected, but that systems making a lot of small system calls might see double-digit slowdowns. Willy Tarreau says he's seen a performance drop of about 17% on a system using an i7-4790K, with a noticeable drop in network performance. As he notes, older processors without PCID are likely to be worse hit.
No it isn't, but it doesn't look to me like that is what has happened. The fact that AMD and ARM have similar faults suggests that this is not deliberate. Rather, it is an unanticipated side effect of measures put in place to enhance performance. Given the complexity of the code needed to attack this vulnerability, I am not surprised it was missed by the hardware designers.
That's a valid point of view, though AMD and ARM aren't affected as much. Personally, I'm going to wait for more information before I defend them. Hardware and microcode designers are used to dealing with exceptional complexity. Processor design for performance is highly competitive, and there's unfortunately a long history of technology companies placing profit before security. I read somewhere fairly recently (can't remember where) that Lenovo are still paying for the Superfish fiasco of a couple of years ago.
I haven't read his blog but, if he says that, he is, to say the least, simplifying. A user mode process cannot get direct access to the results of speculative execution. It would be more accurate to say that user mode processes can, with some difficulty, slowly figure out what the results were. Essentially, the approach is to time actual execution of a piece of code and use that to figure out, one bit at a time, a value cached by speculative execution.
There are three distinct vulnerabilities. The Fogh blog post, applying only to Meltdown, is referenced in the GPZ blog post linked above thus: "Basically, read Anders Fogh's blogpost". It is expected behaviour that a user mode process cannot access the results of a kernel mode instruction, but that's the problem. The GPZ post goes on to say: "the memory read could make the result of the read available to following instructions immediately and only perform the permission check asynchronously." This is why it's a vulnerability, effectively overcoming the kernel space/user space memory isolation barrier. It's why it's called "Meltdown". It affects Intel processors.
It is difficult to write code that actually works, and the extraction is slow in computer terms. Plus, you don't know what you are looking at. It is a piece of memory that doesn't belong to you, and it would take a lot of work to figure out what is actually stored in it. It may be a password (or, far more likely, a hash of a password) or an encryption key, but figuring out what it is and where it starts and ends would be a huge amount of work.
In the paper by Lipp et al. (referenced in the GPZ blog post) the researchers report successfully using a Meltdown attack to dump kernel memory at up to 503 KB/s. They also demonstrate dumping memory from both Linux and Windows 10 systems. In the Linux example the dump reveals plaintext passwords used by the Firefox 56 password manager. And this is part of the problem: you can't assume that passwords are only ever present in kernel memory as hashes. User processes don't have direct access to I/O subsystems, so I/O calls always pass through kernel space. This is why threat actors, some with huge resources, put so much effort into kernel attacks. Meltdown is quite easy to exploit. The good news is it's been patched.
Spectre is certainly more complex. It's very difficult to exploit but by no means impossible, and I'm sure some attacker groups are trying right now. The real problem with it is that any software, not just the OS, could be vulnerable, and patching it is equally complex. There are two variants: a bounds-check bypass (CVE-2017-5753) and branch target injection (CVE-2017-5715). They can be used to bypass the syscall boundary (both variants) and the guest/host boundary (the second variant), potentially subverting hypervisor security. Software isolation techniques are commonly deployed in operating systems and application software, and have relied on the fact that the CPU will faithfully execute software, including its safety checks. As noted in the Kocher paper: "speculative execution unfortunately violates this assumption".
The big issue here is that many software packages will need to be patched, and patches may not work consistently across different hardware. I think the situation will remain complicated for some time. It seems to me that it's Intel and AMD who are attempting to over-simplify. Paul Kocher has said in an interview with the New York Times that this may be a "festering problem over hardware life cycles. It's not going to change tomorrow or the day after. It's going to take awhile."
Personally, I genuinely hope that prh47bridge is proved correct, it would certainly make my life a lot less stressful, but I'm not going to assume insignificant performance or security impacts until I know more. The situation isn't simple.
Anyway, it's been a long day and I'm off to the pub :)