Quarterly Updates
We post a newsletter-y update quarterly on security-dev@chromium.org. It's an open list, so subscribe if you're interested in updates, discussion, or feisty rants related to Chromium security.
Q3 2024
Chrome Security 2024 Q3 Update
Hello everyone,
Here’s our regular update on what Chrome Security has been doing in Q3 of 2024.
In our efforts to address abusive web notifications, we launched the ability for Chrome to auto-disable the notification permission from abusive sites. This is integrated with the Safety Check in Chrome, giving users visibility and control.
As we wrapped up a long series of planned changes in Chrome’s download flow, we blogged about how the redesign allowed us to improve download protection for users in a number of ways that weren’t practical before, such as for malicious password-protected archives.
We published a draft spec for Device Bound Session Credentials (explainer). The goal of that protocol is to change the game against cookie-theft malware. We are landing code in Chromium to implement the spec while we simultaneously experiment with a prototype on Google properties.
The Chrome Root Program removed two CAs from the Chrome Root Store, each with a sustained history of compliance issues that posed risk to Chrome users and the integrity of the Web PKI.
We continue to lead security-forward initiatives within the CA/Browser Forum, with recent efforts focused on strengthening domain control validation (DCV) through Ballot SC-67 (“Require Multi-Perspective Issuance Corroboration”) and Ballot SC-80 (“Sunset the use of WHOIS to identify Domain Contacts and relying DCV Methods”). We updated “Moving Forward, Together”, which serves as a public roadmap for our top priorities. The update highlights many of our recent accomplishments.
On the engineering side of the Chrome Root Store, we landed a read-only UI on Windows and Mac to show the contents of the Chrome Root Store and any local trust anchors imported from the platform. You can see the new UI at chrome://certificate-manager. We also announced the rough shape of plans to allow for the new Monologue / Static CT API logs to be used in Certificate Transparency. We anticipate this new log format will be considerably cheaper to run.
We launched a version of HTTPS-First Mode that only warns on public sites. It does not warn on plaintext HTTP accesses to sites on private networks, where there is often no good way to be issued a publicly-trusted certificate for HTTPS. You can try it out by selecting “Warns you for insecure public sites” under the “Secure Connections” header on chrome://settings/security.
In the web security space, we completed the initial implementation of Document-Isolation-Policy, a mechanism for more easily enabling cross-origin isolation. Document-Isolation-Policy should be available for origin trial in Chrome 132 for desktop. Some of us attended TPAC, the W3C plenary conference, where we presented proposals for improving HTTPS for local network devices.
The transition to post-quantum cryptography continues! NIST standardized ML-KEM, a post-quantum key encapsulation mechanism, and ML-DSA, a post-quantum signature algorithm. We have implemented both in BoringSSL. We blogged about our plans to cut over from the draft specification for Kyber to the final specification of ML-KEM. We’re continuing to work on DTLS 1.3 in BoringSSL, which is necessary to provide post-quantum security to connections that use DTLS, such as WebRTC. We anticipate DTLS 1.3 will be available in BoringSSL in early Q4. We also continued our BoringSSL refactor to support new FIPS validation requirements.
At IETF Vancouver we presented two proposals for Trust Anchor Agility—Trust Expressions, and our alternative draft, Trust Anchor Identifiers. We also further discussed the problem space at an interim meeting of the IETF TLS working group in September. There was broad consensus that the TLS working group should work to solve the problem statement we believe is addressed by Trust Anchor Negotiation.
In Q3, the CSA team started experiments to improve process reuse behavior and prepared for an Origin Isolation experiment. We also moved closer to enabling RenderDocument for main frames, origin computation in the browser process, and using SiteInstanceGroup for data URLs. We investigated issues caught in the isolated sandboxed frames launch, and made progress on simplifying Site Isolation checks in ChildProcessSecurityPolicy. Finally, we started adding process model concepts to our memory-safe browser kernel model in Rust, to explore ideas for improving the equivalent C++ code in Chromium.
The platform security team successfully launched app-bound encryption for cookies, which has led to a marked decrease in detected cookie theft on Windows (we expect attackers to adapt). This is the first step of a longer-term effort: we're now extending this protection to other types of sensitive data handled by Chromium, and we are carefully monitoring our metrics for any adverse effects in the process.
On the macOS front, we're continuing work on adhoc code signatures for progressive web apps, as well as on peer validation for native messaging hosts, which will allow NMHs to be confident they're talking to an authentic Chromium binary. We're also investigating ways to gather field metrics on sandbox violations, which would let us be a lot more ambitious in tightening our macOS sandbox policies. Stay tuned!
We're also continuing work on our project to retarget the ANGLE WebGL translator to use Dawn as a backend. This project is a long-term effort that will allow for moving ANGLE into the renderer where it will be protected by a much stronger sandbox.
The Safe Coding team, working with our partners in Skia, landed a Rust PNG decoder into the Chromium tree and has made significant progress toward plumbing it in. We've also been productively engaged with the upstream owners to improve performance and add new APIs we've found a need for. We are on track to run Finch experiments with the new Rust decoder in the first half of next year, with the ultimate goal of switching all uses to the Rust implementation and removing the C libpng library from the Chromium codebase.
Our Rust JSON parser implementation has rolled out to 100% on all platforms except Android WebView. Since the new parser is memory-safe, it does not need to be sandboxed and can run in-process without an IPC hop; as with previous work (the QR code generator), we saw improvements in wall-clock end-to-end decoding time of 25-40% at p50 and 42-99.5% at p99.
We were delighted to review and land work from an external contributor to wire Rust's log crate to Chromium's existing //base logging implementation, and we have continued to help with the integration of Fontations into Chrome.
Safe Coding also ratcheted up work on Spanification in our C++ codebase. We have expanded the reach of the -Wunsafe-buffers-usage warning to 8.5 million lines of shipping C++ code, significantly reducing the likelihood of new out-of-bounds (OOB) access bugs in Chromium. We continue to move more library APIs to use base::span.
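To illustrate the kind of change involved, here is a minimal sketch (SumBytes is a hypothetical function, not actual Chromium code) of what Spanification typically looks like: a pointer-plus-length parameter becomes a base::span, so the length can no longer drift out of sync with the buffer and indexing into the data is bounds-checked.

```cpp
// Hypothetical example, not actual Chromium code.
#include <cstddef>
#include <cstdint>

#include "base/containers/span.h"

// Old shape (the kind of code -Wunsafe-buffers-usage is designed to flag once
// a file is opted in): the length is passed separately and indexing is
// unchecked.
//
//   uint32_t SumBytes(const uint8_t* data, size_t len) {
//     uint32_t sum = 0;
//     for (size_t i = 0; i < len; ++i) sum += data[i];
//     return sum;
//   }

// Spanified shape: the bounds travel with the buffer.
uint32_t SumBytes(base::span<const uint8_t> data) {
  uint32_t sum = 0;
  for (uint8_t byte : data) {
    sum += byte;
  }
  return sum;
}
```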
On the V8 side, we focused in Q3 on some of the remaining issues that need to be resolved for the V8 sandbox to become a security boundary. In particular, we worked on issues around JavaScript signature verification, and we designed and implemented “Leaptiering” to improve performance, reduce code complexity, and guarantee fine-grained control-flow integrity for JavaScript calls, thereby resolving those sandboxing issues. Beyond sandbox improvements, we also pushed forward with forward-edge CFI for Wasm by ensuring that all Wasm function calls go through a function pointer table.
The Infra Security team has been working on a number of initiatives to help Chromies manage the health of their third party dependencies. We continue to work on rolling out our metadata bug filing pipeline to uplift the health of our metadata across the board and are backfilling as much as we can from other sources. Thank you to everyone who’s received and actioned one of these bugs - you’re making a big difference to how meaningful our risk tracking is!
We’ve been working hard on our security review process. Internally, Chrome has had several different security review gates for each feature — web platform review, requirements document review, launch review — and sometimes this has been a bit duplicative. We’ve been moving all of this into one place to ensure feature owners get more consistent answers. We’ve also identified some types of features which are low risk and can proceed with their launch in parallel with security review — in particular, those features using entirely memory safe code.
In Q3 we launched significant changes to our Chrome VRP reward structure and amounts to incentivize deeper research into more complex and impactful security bugs. As the Chrome browser has matured and gained significant security investments in architecture and mitigations such as MiraclePtr, the bar for finding the most exploitable and impactful bugs has risen. Our reward amounts have been raised to reflect that, and to better reward the effort that goes into deeper research of those vulnerabilities. We have also adjusted the amounts and structure across all reward categories to better incentivize more thorough research and reporting, and to help ensure reward amounts more deterministically reflect the impact and exploitability of the bug being reported.
We also prepared for the October 2024 ESCAL8 event in Málaga, Spain. Part of this event is the bugSWAT event for top VRP researchers across all the Google VRP programs, including Chrome VRP.
The Fuzzing team has been busy with a couple of different projects, continuing the themes of a) writing novel and interesting fuzzers and b) maintaining our fuzzing infrastructure.
Our Mojo IPC fuzzer ran for a few weeks, then underwent a couple tweaks to help it explore deeper into Mojo interfaces. Since then, it has been finding stability issues in IPC code. This has led us to explore merging coverage-guided and grammar-based fuzzing in a new way, which we are excited to apply to IPC and WebIDL fuzzing. At the same time, we have worked out a way to harness the whole of Chrome with Fuzzilli, in collaboration with the V8 Security team. We also explored ways to improve our WGSL fuzzing efforts.
As for fuzzing infrastructure, we have been working with the maintainers of ClusterFuzz to introduce automated monitoring and alerting to the system, in an effort to improve its reliability and start measuring SLIs. Implementation work has started and we are beginning to reap the benefits. We also made further progress in integrating ClusterFuzz with Chrome’s test execution platform, Swarming.
Until next time,
Adrian
On behalf of Chrome Security
Q2 2024
Chrome Security 2024 Q2 Update
Greetings,
This is what the Chrome Security team has been doing in the second quarter of 2024.
Real-time phishing protection for Safe Browsing is now rolling out on Android (Desktop and iOS landed last quarter), bringing >25% more phishing protection. We’ve switched desktop users to asynchronous checks, which removes any performance impact of these real-time checks while retaining the protective value.
In extension safety, we published a blog post detailing our efforts and what users can do to stay safe. Additionally, we added several more triggers in Chrome that will flag unwanted or unexpected extensions for the user in the Safety Check.
In the cookie theft space, we launched auto-deep scanning of suspicious downloads for Enhanced Safe Browsing users, and expanded encrypted-archive scanning as an option for standard Safe Browsing users.
We fully launched post-quantum TLS key exchange on Chrome Desktop platforms, and it is now used in 19-26% of all forward-secret HTTPS connections. We also added a device policy to disable it on the login screen for managed ChromeOS devices. We blogged about some of the challenges in deploying post-quantum cryptography, and explained our strategy of focusing on agility.
We’ve added support for certificate revocations due to key compromise to CRLSet, and enabled enforcement. Any certificate revoked with the key compromise reason code should now be blocked by Chrome clients within 24-48 hours. This approach should work for day-to-day revocation, but will not work for mass revocation events, due to a limit on the max size of a CRLSet.
The Chrome Root Program announced the distrust of two CAs—e-commerce Monitoring GmbH and Entrust—for compliance failures. Both distrusts used a new gradual approach to distrust, where certificates logged to a Certificate Transparency log prior to a well-defined enforcement date continue to be trusted until expiry.
Within the CA/Browser Forum, the Chrome Root Program contributed to Ballot SC-75 (passed), which focused on linting. This ballot was partially motivated by “Moving Forward, Together” and the widespread certificate mis-issuance detected by our team in the spring. We also continued pushing forward with Ballot SC-67 (moving to vote on approximately 7/15), which is focused on strengthening security practices via multi-perspective domain validation. At the CA/Browser Forum Face-to-Face, we presented our expectations around incident response.
The Chrome Security Architecture team reached an exciting milestone in Q2, enabling isolated sandboxed frames by default! This adds a process boundary between origins and untrustworthy content they host, and it required solving numerous challenges with srcdoc URLs, data URLs, base URLs, and other corner cases. We also shipped RenderDocument for all subframes, ensuring that a new RenderFrameHost is consistently used for each new subframe document. We added several new security enforcements against compromised renderer processes as well, including opaque origin checks and expanding the new CanCommitURL checks to Android WebView. To prepare for future experiments, we made progress on Origin Isolation and SiteInstanceGroup modes. Finally, we expanded our memory-safe browser kernel model in Rust to simulate documents, navigations, and session history.
On Windows, the platform security team has continued work on app-bound encryption support for cookies. The first stages of this project are being rolled out through the release channels now. Work has also begun on expanding this protection to other secrets.
The ACG mitigation for the browser process on Windows, which was previously only behind a feature flag, was promoted to a new enterprise policy, and we encourage folks to try it out and report any incompatibilities.
On macOS, we are continuing to refine our approach to ad-hoc PWA signing. We have a new solution which we believe ought to interoperate well with all sorts of deployed host security software, and are now finalizing enterprise policies and other deployment requirements before we ship this.
We have brought the Skia font manager to our PDF reader, allowing us to strengthen the PDF sandbox, and we have made changes in several areas of Chromium to enable compiler protection against unsafe buffer uses. We have also continued our long-term architectural work to port ANGLE to target Dawn, which is partly complete but unlikely to be finished this year.
The Offensive Security team discovered and reported two high-severity bugs in Chrome.
Spanification, one of our Safe Coding projects, made significant progress in Q2. We have updated almost all APIs in //base and //net so they can take buffers as spans. Thanks to our infrastructure work in Q1, we can roll out the -Wunsafe-buffers warning incrementally across the codebase, and as of the end of Q2 Chrome has files comprising 1.5 million lines of non-test code that are protected by that warning and are therefore less likely to have out-of-bounds access bugs introduced in the future. Our process and docs are mature enough that we're ready for early-adopter teams to start Spanifying their code; to that end, we made a formal announcement to chromium-dev about the project and its goals.
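As a concrete sketch of what these span-taking signatures buy callers (WriteFrame is a hypothetical API, not a real //net function), the pointer arithmetic and separately tracked lengths at call sites are replaced by checked span operations:

```cpp
// Hypothetical caller-side example, not actual Chromium code.
#include <cstddef>
#include <cstdint>
#include <vector>

#include "base/containers/span.h"

// Hypothetical //net-style API that now takes a span rather than
// (const uint8_t*, size_t).
void WriteFrame(base::span<const uint8_t> frame);

void SendPayload(const std::vector<uint8_t>& packet) {
  // Before: WriteFrame(packet.data() + 4, packet.size() - 4);
  // The arithmetic is easy to get wrong and nothing checks it.
  constexpr size_t kHeaderSize = 4;
  base::span<const uint8_t> bytes(packet);
  // subspan() is bounds-checked, so an undersized packet fails loudly instead
  // of reading out of range.
  WriteFrame(bytes.subspan(kHeaderSize));
}
```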
As for MiraclePtr, we are at the stage of starting the rollout to the renderer process. As part of that work, we have been especially focused on investigating the performance overhead of MiraclePtr. We now have bots and a dashboard to monitor the overhead over time, so as not to introduce raw_ptr into performance hot spots.
Earlier this year the Chrome Security team stood up automation to build a CodeQL database for Chrome once a day. This quarter we started engaging more proactively with GitHub to identify and drive fixes in CodeQL's C++ extractor (e.g. this issue) that have led to incomplete CodeQL databases.
Early this quarter we launched the V8 Sandbox VRP alongside a technical blog post about the sandbox. While the V8 sandbox is still in development and not yet considered a security boundary, the VRP inclusion is an important step towards that goal. In May we also presented on the sandbox at OffensiveCon in Berlin. On the CFI side, we investigated an approach for forward-edge CFI based on memory protection keys, which appears promising as it has very low performance overhead. We are now aiming to use it to achieve forward-edge CFI for both JavaScript and WebAssembly calls. Finally, helped by events such as Pwn2Own and the V8CTF (our exploit bounty program), we collected some statistics about the types of bugs being exploited in V8.
The Chrome Fuzzing team continues work on two fronts: writing novel fuzzers and maintaining the tools and infrastructure used by Chromium engineers to write and run fuzzers.
We landed truly automated IPC fuzzing, which found a bug, though a couple infra bugs remain. We are experimenting with integration of Chrome and Fuzzilli and an accessibility tree based UI fuzzer. We also considered an experiment to unblock coverage-guided fuzzers using AI/LLMs.
On the infrastructure side, we have been working to diversify and expand our fuzzing fleet with newer devices. We are investing in aligning our fleet management with the rest of chrome test infrastructure to reduce operational toil and allow us to grow the fleet further next year.
After months of behind the scenes work and integration, the Chrome VRP, in conjunction with Google VRP, launched the new payments integration with Bugcrowd, resulting in a more flexible option for reward payments for all VRPs across Google, including Chrome VRP.
The Chrome VRP has also been working on forthcoming updates to the Chrome VRP reward structure as well as working on plans for upcoming VRP events, including the BugSWAT events in Las Vegas in August (in conjunction with Black Hat USA, Def Con, and Google’s 0x0G event) and as part of the annual ESCAL8 event in October in Málaga, Spain.
Until next time,
Adrian
On behalf of Chrome Security
Q1 2024
Chrome Security 2024 Q1 Update
Hello,
It’s 2024! Here’s a summary of what Chrome Security has been working on in the first three months of the year.
We rolled out real-time phishing protection to all Safe Browsing-enabled Chrome users on Desktop and iOS platforms (blog post) and observed a close to 25% increase in phishing warnings shown to those users. This protection will be landing for all Safe Browsing-enabled Chrome users on Android soon.
The number of users who opt in to Enhanced Protection in Chrome continues to grow, with a 19% increase since the start of 2024.
We publicly described the challenges of how cookie theft malware can compromise users’ accounts, and one way we’re addressing it, in a blog post on Device Bound Session Credentials.
After several months of experimentation for compatibility and performance impacts, we’re launching a hybrid postquantum TLS key exchange to desktop platforms in Chrome 124. This protects users’ traffic from so-called “store now decrypt later” attacks, in which a future quantum computer could decrypt encrypted traffic recorded today.
We continue to build out tooling and engineering support for the Chrome Root Program. This quarter, we began experimenting with expanded support for revocation of leaf certificates. We also built policy support for enterprises to customize root stores, and implemented new capabilities for distrusting CAs with minimal compatibility impact.
In the policy realm, our experiments with certificate linting tools revealed a large number of misissued certificates, leading to various active incident reports from CAs. Responding to these incidents should become less burdensome as more certificate issuance becomes automated and misissued certificates can be replaced more quickly and with less manual intervention. To that end, we published our latest Chrome Root Program policy update; among other changes, Chrome now requires new CA applicants to support automated certificate issuance.
We re-evaluated our efforts to make crossOriginIsolation more deployable. To that end, we propose a new policy, DocumentIsolationPolicy, that enables process isolation for a document and allows it to become crossOriginIsolated, without restrictions on popups it can communicate with and frames it can embed. On the XSS mitigation side, we’ve been following up on spec issues raised by Mozilla as TrustedTypes are implemented in Firefox. We also produced a first draft of the Sanitizer specification.
The Chrome Security Architecture team moved closer to launching some long term efforts, including isolated sandboxed iframes (with a 1% stable trial and more enforcements) and RenderDocument. We also strengthened the CanCommitURL checks when a navigation occurs, and refactored the PolicyContainerHost keepalive logic to support more types of navigation state, unblocking work on SiteInstanceGroup. We also cleaned up legacy RenderProcessHost privilege buckets and continued work to simplify checks in ChildProcessSecurityPolicy. Finally, we kicked off an experimental project to build a model of a memory-safe browser kernel in Rust, to help inform future refactoring work in Chrome.
On Windows, the platform security team has continued development of the app-bound encryption mechanism, which makes it more difficult for malware running on the same system as Chrome to extract cookies and other secrets. It also forces malware to use techniques which are easier to detect when stealing secrets from the browser. This feature will continue to roll out over the next few milestones, with increased protection being gradually applied to other kinds of data held by Chrome.
Also on Windows, we've made improvements to the sandbox by having the browser process provide already-created handles to needed log files and removing a sandbox exemption that allowed sandboxed processes to access those files. We’ve worked closely with engineers from the Edge team to reduce access to the GDI subsystem, helping us reduce the complexity of the sandbox.
On macOS, we've continued to work towards proper ad hoc code signing for installed progressive web apps. We've encountered significant complexity getting this improvement to play well with various host security software but believe we have a path forward.
We continue to add security improvements to Chrome’s IPC system (Mojo), lately including feature annotations on interfaces and restrictions on opaque string types. Engineers from the Edge team, collaborating with members of the Chrome platform security team, added non-writable memory on some platforms to make it more difficult for some attackers to use JavaScript to talk directly to Mojo in their exploit chains.
Across all platforms, we've continued our effort (tracking bug) to retarget the ANGLE OpenGL ES implementation to use the Dawn WebGPU implementation as a backend. This effort will allow moving ANGLE into a more tightly-sandboxed process and considerably reduce GPU attack surface. It is ongoing and relatively early, and we don't have an estimated completion milestone yet.
Our vulnerability research in Q1 continued focusing on codec code, with at least three security bugs (one of them high-severity) found and fixed. Along the way we added or meaningfully improved fuzz coverage for sixteen security-relevant targets.
The MiraclePtr rewrite has been applied to the Dawn repository, so that code is now protected from use-after-free exploits. We have also increased coverage by rewriting collections of pointers (e.g. std::vector<T*> => std::vector<raw_ptr<T>>, and more). While we continue our experiment to enable MiraclePtr for the renderer process, we have started experimenting with ADVANCED_MEMORY_SAFETY_CHECK and the Extreme Lightweight UAF Detector, both of which aim to detect memory safety issues in the wild.
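For readers unfamiliar with what the rewrite does mechanically, here is a small hedged sketch (the Renderer/Texture classes are hypothetical, not code from Dawn or Chromium): raw pointer fields, and containers of raw pointers, become raw_ptr<T>.

```cpp
// Hypothetical example, not actual Dawn or Chromium code.
#include <vector>

#include "base/memory/raw_ptr.h"

class Texture;

class Renderer {
 public:
  explicit Renderer(Texture* default_texture)
      : default_texture_(default_texture) {}

  void Track(Texture* texture) { textures_.push_back(texture); }

 private:
  // Before the rewrite these members were:
  //   Texture* default_texture_;
  //   std::vector<Texture*> textures_;
  // raw_ptr<T> is backed by BackupRefPtr, which quarantines the pointee's
  // allocation while references remain, so a use-after-free no longer
  // dereferences attacker-controllable reallocated memory.
  raw_ptr<Texture> default_texture_;
  std::vector<raw_ptr<Texture>> textures_;
};
```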
We've begun work in earnest on Spanification – finding sharp edges that need filing down or gaps that need filling for the project to succeed at Chromium scale. We introduced a number of new APIs within base::span, primarily for working with byte spans and strings. We added HeapArray to replace use of unique_ptr<T[]>, cstring_view to replace use of const char*, span-based endian conversion functions, and SpanReader/SpanWriter for working with a dynamic series of bytes in a span. Finally, we implemented a novel Clang mechanism for incrementally rolling out a warning (-Wunsafe-buffer-usage) across the codebase without it being viral across textual #includes, which will allow us to ratchet Spanification progress forward. We tested this support out in //base and //pdfium.
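A hedged sketch of how these helpers might fit together follows; the header paths and method names (WithSize, as_span, ReadU16BigEndian) are assumptions based on the description above rather than verified signatures, and ParseRecord is a made-up example.

```cpp
// Hypothetical example; method names and headers are assumed from the
// newsletter description, not verified against base/ as it exists today.
#include <cstdint>
#include <optional>

#include "base/containers/heap_array.h"
#include "base/containers/span.h"
#include "base/containers/span_reader.h"

struct Record {
  uint16_t id = 0;
  uint16_t length = 0;
};

// SpanReader walks a span without manual index bookkeeping; each read returns
// false instead of running off the end of the buffer.
std::optional<Record> ParseRecord(base::span<const uint8_t> bytes) {
  base::SpanReader<const uint8_t> reader(bytes);
  Record record;
  if (!reader.ReadU16BigEndian(record.id) ||
      !reader.ReadU16BigEndian(record.length)) {
    return std::nullopt;
  }
  return record;
}

void Example() {
  // HeapArray replaces unique_ptr<uint8_t[]> and, unlike it, knows its own
  // size, so it can hand out a span without a separately tracked length.
  auto buffer = base::HeapArray<uint8_t>::WithSize(64);
  ParseRecord(buffer.as_span());
}
```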
On the V8 side, the major achievement this quarter was to launch VRP integration for the V8 sandbox. Even though there are still a number of open issues that need to be fixed, this will hopefully help prioritize the important ones and provide a better idea of the types of bugs encountered on this new attack surface. We also released a technical blog post about the motivation for the sandbox, its high-level design, and the goals we’re trying to accomplish with it.
There’s been a big win this quarter for third party dependency health thanks to a new process which will update our Chrome Rust dependencies each week. This is the result of a lot of love and investment in the Rust ecosystem within Chrome, and will smooth the path for replacing some of our C/C++ dependencies with Rust, and improving our memory safety.
The fuzzing team has been making progress on a few fronts. We’ve been working on a ClusterFuzz rearchitecture project for a while, with the goal of eventually allowing VRP reporters to upload test cases directly to ClusterFuzz. The last few steps are being worked through, and we expect to start testing this out with a few security shepherds in Q2. Building on the InProcessFuzzer work of last quarter, we have extended the framework to allow running test cases within the renderer process. We have used this to build an experimental fuzzer for the renderer-browser Mojo boundary, and are now automatically fuzzing all interfaces vended by BrowserInterfaceBroker in a browsertest environment, on ClusterFuzz. Work remains to identify important Mojo interfaces to fuzz and to simplify the introduction of other similar fuzzers.
Android fuzzing is truly back online with several phones fuzzing a HWASAN build of Chrome from Android’s test lab on Android’s fuzzing infra, with integration issues being addressed as we find them. We are working to enable HWASAN fuzzing on our own devices in Chrome test labs, as well as EOL devices graciously provided by the Android team.
Until next time,
Adrian
On behalf of Chrome Security
Q4 2023
Hello,
Here are some of the things Chrome Security has been working on in Q4 of last year.
To continue to combat the problem of Cookie Theft, we started offering the ability to scan suspicious passphrase-encrypted archives via Google servers for users who have enabled Enhanced Safe Browsing in Chrome. We also launched a refresh of the various types of download warnings shown in Chrome to make them consistent and understandable, and that, along with prior launches, has successfully lowered the warning click-through rate in H2’23.
We’re also working on protecting Chrome users even after they’ve gotten malware. We’ve been iterating on the design and implementation of the Device Bound Sessions Credentials (DBSC) API, which will allow sites to bind cookies to the device (without enabling tracking), and make it much harder for malware to do harm at scale.
To help protect Chrome’s locally-encrypted secrets from similar-privileged processes, we’re adding app-bound encryption on Windows, starting with cookies. Most of the mechanics of that encryption have landed and we expect to begin experimenting with it soon.
In our ongoing efforts to preemptively prevent quantum computers from destroying the internet, we continued to gradually roll out postquantum TLS key exchange. After getting experiment results, we implemented some performance optimizations to the underlying cryptographic algorithm. We published a proposal called TLS Trust Expressions and presented it at IETF in Prague. This proposal aims to improve the agility and robustness of the web PKI, and also ties into our previous Merkle Tree Certificates proposal that may be able to reduce the number of huge postquantum signatures needed on the wire to establish fully postquantum-secure TLS connections.
We launched new warnings on HTTP downloads, to help protect users from network attackers who might inject malicious binaries or other content into downloads. This is an expansion of our previous effort to block “mixed” downloads, or HTTP downloads initiated from HTTPS pages.
Finally, we finished porting Chromium’s certificate verifier into BoringSSL (Google’s cryptography/TLS library), and adapted Chromium to use the new BoringSSL version. After further development of a public API, this BoringSSL port will provide an alternative to the legacy OpenSSL certificate verifier for other users of BoringSSL.
We launched OAC-by-default, which deprecates document.domain by default, removing one of the footguns of the Web Platform. We participated in Interop discussions with other browser vendors around TrustedTypes, and Mozilla is now positive about the spec. Some organizations are interested in the spec being implemented in all browsers as they hope it will allow them to comply with imminent regulatory changes in the Netherlands and the broader EU, as outlined in the eIDAS Regulation (which restricts the usage of an upcoming EU identity mechanism to pages that do not use “eval”).
The Chrome Security Architecture team made good progress on security architecture improvements in Q4. The first RenderDocument field trial (for "non-local-root subframes") reached stable channel, and the compositor reuse field trial for remaining cases reached beta channel. We also finished the base URL inheritance launch and fixed most remaining issues for isolated sandboxed iframes to unblock a beta trial, which is now in progress. Citadel mode is now enabled everywhere to provide better protections against unlocked renderer processes. We're also excited that OAC-by-default is now fully launched, so that document.domain changes are blocked by default and we are free to experiment with Origin Isolation. Finally, we ran a team hackathon to clean up and simplify existing code, such as in ChildProcessSecurityPolicy.
The Platform Security team focused on continuing rollouts of the features we developed earlier this year. Our main priority was continuing the rollout of the network service sandbox to Windows, Linux, and ChromeOS platforms. For Windows, we're most of the way to fully supporting the Direct Socket API with the network service sandbox, which is a blocker for further experimentation; for Linux and ChromeOS, we're investigating performance issues on low-memory devices with the sandbox enabled.
We also continued work on adhoc PWA signing; during early Q4 we encountered an incompatibility between adhoc signing and Santa, a binary authorization system for macOS. While we're working on resolving that, rollout of adhoc PWA signing has been paused.
In the last three months of the year we reached important milestones in our efforts to improve memory safety in Chrome. First, we expanded the launch of our first Rust-backed product feature – in-process QR code generation – to 100% of the Stable (non-ChromeOS) population after experiments showed significant (95%+) reductions in 95th-percentile generation times with no negative impact on any topline metrics. That rollout helped us validate our Rust toolchain across all supported platforms and we were able to announce that the Rust toolchain is production-ready as of Chrome 119.
Elsewhere in memory safety, we were able to enable GWP-ASAN on Android, which massively improves our insight into memory safety bugs that might get hit in the wild. We are also laying the groundwork for addressing the class of out-of-bounds memory access vulnerabilities through broader use of base::span in the Chromium codebase: we added APIs to improve ergonomics and started updating existing APIs to track lengths where they hadn't before.
The V8 security team has continued the development of the V8 sandbox and CFI, as well as improved our open-source fuzzer, Fuzzilli. On the sandbox side, we shipped the trusted space for V8 this quarter and migrated BytecodeArrays into it, resolving a long-standing sandbox blocker. Since then, we’ve also migrated the trusted parts of the WasmInstanceObject (another long-standing issue), and expect to migrate further objects soon. For CFI, we worked on a number of under-the-hood changes, for example a refactoring of the way V8 manages heap chunks to be able to use the new mseal syscall to seal all executable memory in the future. Finally, in October we launched the V8CTF and soon after received the first successful submission!
In Q4, the infra security team rolled out a number of security changes to secure the Chromium code base, including the 2P committer review change, which requires two people to review code authored by someone who is not a Chromium committer.
We also deployed an automated pipeline called CPESuggest that creates automated CLs to add Common Platform Enumerations to our README.chromium metadata files so we can better keep on top of identifying CVEs which might affect our third party dependencies.
In our bug-zapping team, DEET, we’ve been focusing on polishing the things we did in Q3. For instance, our fuzzing coverage dashboard now much more accurately represents the percentage of Chromium code which is really covered by fuzzers (about 22% - please add fuzzers!). We also succeeded in running fuzz cases on more modern Android devices using HWASAN, which can detect more types of security bugs. We’ve continued to work on making Centipede work reliably in Chrome, and especially worked on the complex “InProcessFuzzers” which in future will allow fuzzing in a realistic browser_test-like environment. Work has also been continuing apace on rearchitecting ClusterFuzz to enable running untrusted workloads safely. There is one new thing this quarter: we’ve been experimenting with FuzzTest and we’re starting to see new FuzzTest fuzz targets be added. These are simpler and quicker to write than the old libFuzzer technology, though they’re not yet supported on as many platforms.
The Chrome VRP had the opportunity to meet up with some of our top researchers from past and present years in Tokyo for our BugSWAT event as part of ESCAL8 2023. BugSWAT 2023 was three days of talks, live hacking, and socializing with VRP researchers and Googlers from other VRPs across Google. We also just recently announced our Top 20 VRP reporters for 2023, celebrating the achievements of the Chrome VRP researcher community this year.
We’ve also been heavily preparing for Chromium’s imminent move to a new issue tracker, which changes our world radically!
Finally, lots of different parts of Chrome Security came together to meet with various graphics teams to chart a course towards simplifying our graphics architecture in future and ensuring all the remaining attack surface is fuzzed to the extent possible.
Until next time,
Adrian
On behalf of Chrome Security
Q3 2023
Hello!
We’d like to share a few things that Chrome Security has been doing in the third quarter of 2023.
In the Counter Abuse team, we launched the overhauled Download UX to 100% (popular in the press)! The new UX provides a platform on which to build a number of security-focused features, including many to counter abuse via cookie theft.
In the extensions safety space, we launched the “Extension Module” in the Safety Hub to help users remove unwanted extensions, and the launch was well received by the press (e.g. The Verge, Engadget).
Within the Trusty Transports team, we’ve been working on TLS post-quantum key agreement which is currently rolling out to 1% Stable. This quarter we resolved server incompatibilities that appeared while on Beta and we are now proceeding with a gradual rollout. We also published an IETF draft to ease future compatibility and security problems with upcoming post-quantum transitions. Elsewhere in the TLS stack, we fully launched Encrypted Client Hello (ECH), in partnership with Mozilla, Cloudflare, and the IETF, and we completed the removal of SHA1 in signatures in the handshake.
We announced an upcoming iteration of our root program policy, focusing on requiring automation for new applicants, as previously explored in our Moving Forward Together roadmap. We blogged about why automation is critical for a secure and agile PKI. To improve the robustness of the web PKI’s Certificate Transparency (CT) infrastructure, we announced a significant update to our CT log monitoring tooling, which will allow us to detect many more types of CT log issues before they have broader ecosystem impact.
After a gradual rollout, we fully launched HTTPS upgrading in Chrome 117, which automatically attempts all plaintext navigations over HTTPS, and silently falls back to plaintext HTTP if the upgrade fails. This helps protect against passive eavesdropping, and marks a notable step in our continued march to make plaintext an aberration on the web. Along those lines, the replacement of the lock icon started to make its debut on Chrome Stable.
Finally, we hosted interns who made some improvements to our HSTS preload infrastructure. One of these improvements was automatically pruning sites from the list if they stop meeting the requirements for an extended period of time, which will help keep the list size more manageable.
In Q3, the Web Platform security team launched an Origin Trial for COOP restrict-properties. COOP restrict-properties allows a web page to limit its interactions while supporting OAuth and payments popups. It protects against a variety of XS-Leaks. If the page deploys COEP, it is also crossOriginIsolated. We believe this will make the deployment of crossOriginIsolation much easier, allowing web pages to gain access to powerful APIs like SharedArrayBuffers safely. We’ve also made progress on the Sanitizer API, and have a general agreement with other browser vendors on the shape of the API we want to build.
In Q3, the Security Architecture team spent significant effort designing a possible early RFH swap to unload one document before another commits, to unblock a potential feature. We also fixed crashes to unblock the base URL inheritance and isolated sandboxed iframes launches, and we helped respond to issues from the OAC-by-default rollout (to unblock origin isolation). In RenderDocument, we realized it would be necessary to reuse compositors across documents, so we started work on that and launched a "non-local-root subframe" experiment group for cases that aren't affected. We also made progress on giving data: URLs their own SiteInstances within SiteInstanceGroups, as a step towards always having an accurate SiteInstance even in shared processes.
In Q3, the Platform Security team focused our efforts on improving our sandboxes, researching and prototyping uses of new OS security primitives, and providing security guidance and leadership throughout Chrome (including security review, consultation, other ad-hoc engagements with other teams, and a very well-received public blog post).
The main sandbox improvements from Q3 were in the network service sandbox, including designing (but not yet implementing) a solution for the Direct Sockets web API and brokering some other APIs. We also raised the Windows sandbox memory limit to enable more WASM features, and reduced the renderer's access to KsecDD and fonts on Windows.
The main new OS security primitive work was implementing ad-hoc code signing of PWAs on Mac and working closely with Microsoft to implement and ship FSCTL syscall filter for LPAC processes. We also did a considerable amount of research on the Windows VBS Enclave API and started prototyping for future use.
We also had a couple of lowlights this quarter: we had to stop work on the Android network service sandbox after it became clear the resource usage would be too high to justify having a network service process, and we didn't manage to get Kerberos brokering on ChromeOS done, which leaves a big sandbox hole on that platform.
Offensive Security released a technical report describing our approach to attacking WebGPU in Chrome. We hope the report gives Chrome Vulnerability Rewards Program participants a boost to help us keep Chrome 0-day hard. Our ad hoc research also led to discovery and fixes for two vulnerabilities: crbug.com/1464680 (Critical) and crbug.com/1479104 (High), which will be de-restricted per the usual 14-week disclosure process.
We completed support for the myriad Chromium build configurations in the Rust toolchain, with the public announcement of readiness to ship Rust code coming shortly into Q4. The Rust-backed QR code generator experiment has made its way to Stable on our largest platforms, with 10% of users making use of the Rust implementation and no stability or other issues identified. The last platform is ChromeOS, where it has now reached Beta without any issues. Our next steps will be to improve the process for updating third-party dependencies, our supply chain security processes, and sharing of third-party dependencies with our downstream projects such as Skia.
DanglingPointerDetector coverage continues to grow. It is now enabled by default for developers on Linux, and a PRESUBMIT was added to discourage developers cargo-culting the dangling pointer annotations over fixing the dangling pointers. Coverage has been expanded to all test types, except on Android. It has also been enabled on ChromeOS Ash which checks 7,742 more pointers, and uncovered 2,076 more dangling pointers that are now annotated. We organized 2 code health rotation projects to fix and remove dangling pointers.
The PartitionAlloc repository can now be fetched and included from arbitrary locations. This will allow our downstream projects such as Skia and Dawn to make use of it. We have written out plans to adopt MiraclePtr (through PartitionAlloc) in Dawn and ANGLE. We concluded the experiment around rewriting vector<T*> to vector<raw_ptr<T>> and decided to proceed with the rewrite.
In Q3, the V8 Security team worked on CFI, on the_hole mitigations, and on sandboxing. On the CFI side, we enabled PKEY-based code protection on supported platforms and have started to work on code validation. Read more about our CFI plans in this recent blog post. We also worked on more domain-specific mitigations, in particular to mitigate the effect of the_hole leaks, of which there have been a few recently. On the V8 sandboxing side, we worked on creating a “trusted heap space” that will contain sensitive objects such as bytecode and code metadata that must not be corrupted by an attacker. Find more details in this design document. Finally, we launched the V8CTF this quarter, which is essentially an exploit bounty for n-day renderer exploits. With that, we hope to get early feedback on our future exploit mitigations such as CFI or the V8 sandbox. Check out the rules here!
Our new bug-finding team is called DEET. In fuzzing, we’ve got three major projects going on: first, we’re upgrading our Android fuzzing devices so we can decommission our old fleet before it literally catches fire, and in the process revisiting our ASAN integration. Secondly, ClusterFuzz is being split so that fuzzers are run on untrusted virtual machines. (Part of our eventual goal here is to allow VRP reporters to upload their test cases directly to ClusterFuzz, though we don’t have a timeframe for that yet.) And thirdly, we’re progressing towards efficient coverage-guided fuzzing of the whole Chromium browser using Centipede, with a view to exploring our whole IPC and UI attack surface in future. Finally, we reinstated our Chromium fuzzing code coverage dashboard - please look for gaps and write fuzzers!
In the VRP space, we’ve been helping out with ChromeOS’s launch of an independent Vulnerability Rewards Program. Additionally, we sunset the additional V8 Exploit Bonus (V8 exploit submissions may now be covered by the V8CTF) and launched the new reward structure for V8 bugs impacting Stable and older versions of Chrome. The goal of the new V8 rewards structure is to incentivize the reporting of long-existing exploitable security issues in V8, which has already resulted in a $30,000 reward for a report of a V8 bug that has existed since at least Chrome 91. We’ve also been prepping talks and other engagement activities for the top Chrome VRP researchers invited to ESCAL8 in Tokyo in October 2023.
Lastly, we’re preparing (heavily!) for Chromium’s upcoming change to a new issue tracker.
Until next time,
Adrian
On behalf of Chrome Security
Q2 2023
Greetings,
I'm pleased to share an update on some of the things the Chrome Security team has been up to in the second quarter of the year.
In addition to much behind-the-scenes work, the Chrome Counter Abuse team disabled the “--load-extension” flag for Enhanced Safe Browsing users to remove a common and easy technique to silently load malicious extensions. We also added support to scan additional archive types for malware, including those that are nested.
The Trusty Transport team advanced our years-long march towards ubiquitous HTTPS. We announced a major milestone: the upcoming removal of the address bar lock icon in Chrome 117. We’re now experimenting on stable with silently upgrading all navigations to HTTPS (falling back to HTTP if the upgrade fails).
We continue to improve the technologies underlying HTTPS via the Chrome Root Program and our BoringSSL library. We integrated the postquantum-secure X25519Kyber768 key encapsulation mechanism for TLS into BoringSSL and Chrome, and plan to begin experimenting with it in Chrome 116. The Chrome Root Store is now launched on stable for all platforms except iOS, bringing significant performance improvements to Android in particular. On the policy front, we passed a CA/Browser Forum ballot to incentivize short-lived and automated certificates and promote more privacy-preserving revocation infrastructure, and we distrusted the e-Tugra root certificates after a researcher discovered significant security issues in their systems.
The Web Platform Security team started an OT for a new COOP mode (restrict-properties) in Chrome 116. This allows websites to deploy cross-origin isolation, unlocking access to powerful web features, as well as to secure themselves against cross-site leaks. On the road to enabling origin isolation, deprecating document.domain (aka Origin-keyed Agent Clustering by Default) is now enabled on 1% stable, and will keep ramping up to 100%. ORB (Opaque Response Blocking) v0.1 shipped to stable, improving on CORB to better protect cross-origin subresources from Spectre attacks. ORB v0.2 was scoped down to avoid web compatibility concerns and align with Firefox. We sent the I2S and are aiming to ship soon. Implementation continues on a new permission prompt allowing secure websites to bypass mixed content when accessing the private network. We are aiming to start an OT with an MVP in Chrome 117.
The Security Architecture team made progress on several launch experiments in Q2, aimed at improving security. The new base URL inheritance rules were approved and are in a beta field trial, which allowed us to restart the experiment for Site Isolation for sandboxed iframes. The trials for RenderDocument and navigation queueing are also in progress. In parallel, we built a new mode for origin isolation (OriginKeyedProcessesByDefault) built on top of OAC-by-default, with plans for performance experiments, and started work on a SiteInstanceGroup mode that uses a separate SiteInstance in the same group for data: URLs. We also made some improvements to BrowserContext lifetime to reduce the risk of use-after-frees. Finally, we started work on a new early RenderFrameHost swap approach to replace the old early commit optimization, and formed a navigation bug triage rotation to better manage the queue of bugs.
The Platform Security team continues to make progress on sandboxing the network service. UDP sockets can now be brokered into the sandbox, which will be used on Android and Windows. An initial network service sandbox policy was developed for Linux. On Mac, we audited use of CFPreferences within the sandbox. And on Windows, we worked on removing unnecessary device handles from the renderer sandbox. With Project Sandbake, we worked on improvements to the dangling pointer detector and investigated improvements to raw_ptr flags. We also revisited some of the memory limits on renderers and were able to unblock some WebAssembly use cases without impacting memory usage or performance. In addition, work was done on hardening the renderer process against attacks on kernel devices such as \device\cng, and on some general continued sandbox refactoring, including adding a generic capability to preload DLLs needed by sandboxed processes before the sandbox fully engages.
The OffSec team finished and circulated our WebGPU analysis documents internally within Chrome, which led to ongoing engagement with graphics colleagues. We're particularly proud of two impactful engagements:
- A fix for SwitchShader that squashes 4 bug variants and also led to insightful discussion with graphics colleagues.
- Collaboration with Android and partners to verify the fix for crbug.com/1420130 reported in Q1.
We did several presentations about attacking Chrome, including a quick summary of our WebGPU findings for Parisa Tabriz, a Learning Lunch about fuzzing Chrome with partners at Intel, and a Mojo bug walk-through at a sandbox escape analysis session with our colleagues in the Chrome Security Architecture team. Speaking of attacking Chrome, we continue to find and report security bugs (crbug.com/1431761, crbug.com/1430985, crbug.com/1430221), as well as landing new automation to find bugs in areas of interest, such as this protobuf-mutator fuzzer for Dawn, which landed upstream after a multi-quarter review process. Finally, we hosted the inaugural Browser Vulnerability Research Summit, with much gratitude to colleagues at Google, WebKit, Microsoft and Mozilla who made it possible by participating and establishing an atmosphere of collaboration.
In memory safety news, MiraclePtr (BackUpRefPtr) is now enabled by default for Linux, macOS, and ChromeOS, following its previous enabling for Windows and Android last year. Usage of MiraclePtr (i.e. raw_ptr<T>) is now enforced for most code using a clang plugin.
The Lightweight Use-after-Free Detector experiment has demonstrated the potential to make use-after-free bug reports more actionable, and progress is being made to ship it. The DanglingPointerDetector experiment on the CQ concluded with positive feedback, and was made permanent. Coverage has been extended to all tests. It has been enabled by default on Linux/Debug, and under build flags for other platforms.
Rust support is now enabled for all bots and developers, and we’re shipping chrome://crash/rust toward Stable on all platforms with Blink in Chrome 117.
A Rust-based QR code generator will also be shipping in Chrome 117 behind a Finch flag on Android, Windows, macOS, and Linux (ChromeOS in Chrome 118) for experimenting with a real Rust feature. It will replace an asynchronous out-of-process C++ component with a synchronous in-the-browser-process Rust implementation.
In Q2, the V8 Security team continued to improve Fuzzilli, our JavaScript engine fuzzer. With the refactored HybridEngine, it is now possible to build “mini-fuzzers” on top of Fuzzilli, for example for fuzzing regular expressions or data serialization. A full changelog of all Fuzzilli changes can be found here. On the CFI side, we laid the foundation for JIT validation by tracking JIT-code regions in PKEY-protected memory, and for the V8 sandbox, we worked on a design to protect code pointers that should be performance neutral. Last but not least, we began refactoring the_hole in V8 to mitigate vulnerabilities that leak this internal value to JavaScript code.
In Q2, the Chrome VRP launched the Full Chain Exploit Bonus – until 1 December 2023, the first report of a full chain exploit in Chrome is eligible for triple the full reward amount, and any subsequent report of a full chain is eligible for double the full reward amount. MiraclePtr was enabled across Linux, Mac, and ChromeOS in M115; with that, MiraclePtr-protected non-renderer UAFs are now considered to be highly mitigated security bugs. In parallel to this change, we launched the MiraclePtr Bypass Reward, offering a $100,115 reward for an eligible submission of a MiraclePtr bypass.
In fuzzing, we’ve been building upon the new possibilities of Centipede by experimenting with a browser_test-based fuzzing target framework. The hope is that this will enable us to fuzz the whole of Chrome in a realistic fashion, while also getting the benefits of coverage guidance, though this is still quite experimental. We’ve been working on a Chrome UI fuzzer using this technology. We’ve also reinstated a Chrome fuzzing coverage dashboard - please look for gaps and submit fuzzers! We’ve also been preparing for the new Chromium issue tracker.
Until next time,
Andrew
On behalf of Chrome Security
Q1 2023
Greetings,
It's been a busy start to the year, and I'm delighted to share some of what the Chrome Security team has been up to.
We announced the turndown of the Chrome Cleanup Tool due to several factors including improvements in the platform ecosystem and changing trends in the malware space – learn more by reading our blog post: Thank you and goodbye to the Chrome Cleanup Tool.
At IETF 116 in Yokohama, the Trusty Transport team proposed Merkle Tree Certificates, an optional new “PKI 2.0” for the post-quantum future. The draft is still in the incredibly early stages, but it was well received, and builds off of ideas popularized by Certificate Transparency.
In collaboration with the ChromeOS and Chrome Memory Safety teams, we’re beginning work on a purpose-built Rust interface for BoringSSL that can be used by other Rust code at Google that needs access to cryptographic primitives, especially code within ChromeOS, Android, and Chromium.
HTTPS Upgrades are now available on Beta. This feature automatically attempts all plaintext HTTP navigations over HTTPS, and silently falls back to plaintext HTTP if the upgrade fails. You can think of it as a “warning-free” version of HTTPS-First Mode (the Always Use Secure Connections setting). The next steps are to run trials on stable and go through the W3C process for spec changes. We also improved enterprise support for HTTPS-First Mode: this setting can now be force-enabled via enterprise policy, and enterprises can specify an optional allowlist of known HTTP-only sites. HTTPS-First Mode is now also automatically enabled for users in the Advanced Protection Program.
Also in the HTTPS space, we’re launching mixed content autoupgrading for iOS in Chrome 112, bringing Chrome for iOS to parity with other Chrome platforms.
Following our announcement in December that we planned to distrust the Trustcor certification authority, we posted a blog post about our plan and removed Trustcor in Chrome 111.
The Chrome Root Program continues to operate effectively and look to the future. We updated our public-facing, non-normative, forward-looking “Moving Forward, Together” document about future directions for the Chrome Root Program and the Web PKI. We included our intent to eventually reduce the maximum intermediate CA validity to 3 years, reduce the max leaf certificate validity and domain validation reuse period to 90 days, require ACME/ARI, and require multi-perspective domain validation. The CA/Browser Forum passed a ballot defining a certificate profile for Web PKI certificates, which reduces the set of X.509 features that can be included in trusted certificates to those relevant to authenticating TLS connections. This furthers our goal of agility, helping to ensure that the Web PKI can safely evolve without impacting other uses of X.509 certificates.
The Web Platform Security team made progress on implementing COOP: restrict-properties and is targeting an Origin Trial in Chrome 115. COOP: restrict-properties will allow crossOriginIsolated websites to communicate with cross-origin popups, and is an important step in making crossOriginIsolation more deployable.
We’re going back to the drawing board with the Sanitizer API following internal and external discussions. We aim to find a compromise that supports Declarative Shadow DOM and keeps up with updates to the HTML parser, while guaranteeing output that is always sanitized in a way that can be checked by static analysis.
We’ve relaunched ORB v0.1 and have a design for a simple JavaScript/JSON distinguisher to be written in Rust.
We’ve launched Origin-Agent-Cluster by default (aka document.domain deprecation) on Beta, and are looking to move the deprecation to Stable.
The Chrome Security Architecture team finished support for "citadel-style" enforcements for unlocked processes, contingent on a separate refactor of blob URL support. We also finished the implementation of new base URL inheritance rules and have started trials, in order to unblock Site Isolation for sandboxed iframes. We continued to make progress on RenderDocument and SiteInstanceGroup, including support for navigation queueing so that pending navigation commits cannot be canceled. Finally, we continued cleanup and simplifications in the code for navigation and the process model, while fixing several invariants that were found not to hold in the wild.
The Chrome Offensive Security team wrapped up our first audit of WebGPU and started the lengthy process of documenting what we learned. We created and updated four fuzzers targeting graphics features - including Blink APIs for WebGPU and WebGL, the new Tint shader compiler, and the GPU Command Buffer - so far finding two high-severity vulnerabilities and various stability bugs. We also experimented with Centipede and were impressed by its ergonomics compared to libfuzzer. Our Q1 vulnerability reports [1,2,3,4] include details of our findings, although be warned the reports are not yet publicly accessible at the time of publication. Last but not least, in partnership with Project Zero, we delivered a presentation on Variant Analysis concepts using examples from Chrome.
Chrome Platform Security continues to work on sandboxing the network service across all operating systems, with significant progress made on Linux/ChromeOS in Q1. We also added restrictions on transferring writable file handles from high-privilege to low-privilege processes, to help mitigate sandbox escapes. On Mac and Windows, we removed support for old OS versions in the sandbox policies. And we started Project Sandbake, to improve C++ memory safety by removing dangerous code paths and patterns.
We ran an experiment to collect performance data on MiraclePtr (BackUpRefPtr) on Linux, macOS, and ChromeOS. We are analyzing the data and hope to have MiraclePtr enabled on those platforms soon. We also started enabling a clang plugin that enforces MiraclePtr (i.e. raw_ptr<T>) usage.
In an effort to make use-after-free bug reports more actionable, we started implementation of a Lightweight Use-after-Free Detector.
We are running an experiment that enforces Dangling Pointer Detection on Commit Queue. The experiment prevents developers from submitting code that has a dangling pointer (identified through the test suites currently running Dangling Pointer Detection) and outputs helpful information for the developer to debug the issue.
We have written a vector<T*> rewriter, and will evaluate the performance impact of rewriting vector<T*> to vector<raw_ptr<T>> (backed by BackUpRefPtr).
We made progress toward providing production-quality Rust toolchains for most platforms targeted by Chrome. We added the very first lines of Rust that will ship in Chrome later this year – a new crash handler that we will use to test downstream handling of crash reports from Rust code.
The V8 Security team spent most of our time in Q1 improving our fuzzers and have implemented and open-sourced many improvements to Fuzzilli. Noteworthy examples include the new JavaScript-to-FuzzIL compiler which makes it possible to import existing JavaScript code into Fuzzilli, improvements to the HybridEngine (combined generative and mutation-based fuzzing), a new static-corpus fuzzing mode, and better coverage of JavaScript language features, such as loops. A full changelog of all Fuzzilli changes can be found here. We’ve also improved our fuzzing automation around Fuzzilli through which we have found and filed many bugs. Finally, we’ve worked on refactoring how V8 represents code and code metadata to prepare for future CFI changes.
We’re taking steps to make Chrome fuzzing a bit more reliable, starting with a program to monitor reliability issues with the Chrome builders which make fuzz builds. We also want to make ClusterFuzz more readily usable by Chrome Security Sheriffs, so we’ve put in place a simpler (internal) upload UI for test cases. We have also added support in Chrome for the new centipede fuzzing framework, which is currently similar in role to libfuzzer, but may in future allow us to fuzz more complex noisy binaries to find deeper bugs.
The Chrome Vulnerability Rewards Program (VRP) updated the program scope to combat the churn from the growing number of reports of bugs in newly landed code in Trunk and Canary builds. Bugs in code landed within the last 48 hours are no longer eligible for VRP rewards, and bugs introduced within the last seven days are only eligible for VRP rewards on a case-by-case basis. This has already reduced the number of VRP report collisions with issues discovered by ClusterFuzz or other internal means, and reduced the churn on security sheriffs and engineers chasing down duplicates and ensuring they are merged in the appropriate direction.
Additionally, the Chrome VRP has put together plans for bonus reward opportunities for 2023, which should begin rolling out in Q2 2023. Please keep an eye out for announcements in the near future!
Until next time,
Andrew
On behalf of Chrome Security
Q4 2022
Greetings,
With 2023 well underway, here's a look back at what the Chrome Security team got up to in the last quarter of last year.
After multiple years of laying policy and engineering groundwork, Chrome’s built-in certificate verifier and root store launched on Chrome for Windows and Mac – bringing both security and performance benefits. Chrome’s recently launched root program governs the certificates that are included in the root store, and this quarter we continued to refine Chrome’s root program policies and improve workflows for CA applicants, particularly through Common CA Database integration. To help keep users safe and ensure the integrity of certificates accepted by Chrome, we announced that Chrome will no longer trust the TrustCor CA as of Chrome 111.
Want more HTTPS in your life? On Canary and Dev, you can now enable the #https-upgrades and #https-first-mode-v2 flags at chrome://flags to tell Chrome to automatically attempt all your navigations over HTTPS. You can also enable #block-insecure-downloads to protect yourself from any download delivered over an insecure connection.
We’ve been working to bring Chrome for iOS users the same transport security features as Chrome has on other platforms, with HTTPS-First Mode, omnibox HTTPS upgrading, and mixed content autoupgrading all in various stages of launch on Chrome for iOS.
We shipped iframe credentialless in Chrome 110, allowing developers to easily embed iframes in COEP environments, even if those frames don’t deploy COEP themselves.
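As a rough sketch of what this looks like from a developer's point of view (illustrative only; the third-party URL is made up, and the credentialless property may need a cast while TypeScript's DOM typings catch up):

```ts
// Embed a third-party frame without credentials so a COEP: require-corp page can stay
// cross-origin isolated even though the embeddee doesn't send CORP/COEP headers itself.
const frame = document.createElement("iframe");
(frame as any).credentialless = true; // equivalent to the <iframe credentialless> markup attribute
frame.src = "https://third-party.example/widget";
document.body.appendChild(frame);
```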
We started applying Private Network Access checks to web workers. Currently, they only trigger warnings, except for fetches within dedicated workers in insecure contexts. We are looking at launching enforcement when we better understand metrics.
The deprecation of document.domain — enabling origin-based Agent Clustering by default — is still on track. We are receiving a low-frequency stream of issues around the deprecation, as site owners notice document.domain is going away. We're working through these, and so far nothing appears to be blocking. With a bit of luck we will be able to finish this on the current schedule, in Chrome 112 or 113.
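For readers wondering what the deprecation means in practice, here's a hedged TypeScript sketch (the hostname is made up, and window.originAgentCluster may need a cast in older DOM typings): once agent clusters are origin-keyed by default, setting document.domain no longer relaxes the origin, and sites that still rely on it have to opt out with an Origin-Agent-Cluster: ?0 response header.

```ts
// Illustrative only: feature-detect origin-keyed agent clusters before relying on document.domain.
if ((window as any).originAgentCluster) {
  // Origin-keyed: document.domain assignments are effectively no-ops here.
  console.log("Origin-keyed agent cluster; document.domain can no longer relax the origin.");
} else {
  // Legacy same-site relaxation, only available while the site opts out of origin keying.
  document.domain = "example.com";
}
```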
The first step of moving from CORB to ORB — ORB "v0.1" — is now enabled on 50% of stable, with no reported issues. We'd previously landed a fix for SVG images, and the last known origin mismatches between the browser and renderer processes. This makes us confident that we can launch "v0.2" next, which will change error handling to conform to the ORB proposal.
The Chrome Security Architecture team wrapped up 2022 by shipping Site Isolation for <webview> tags.
The Platform Security team continues to work on sandboxing the Network Service across several platforms. On Linux-based OSes, we made proxy resolution asynchronous, and we are doing the same for UDP connection initiation, which is needed for brokering those sockets from the browser process. On Windows, we added a mitigation to restrict sending writable file handles to executable files to low-privileged processes, filtered potentially-sensitive environment variables from such processes, blocked some more pipes from being accessible, and continued work on making sandboxed process startup faster. We also worked to integrate the V8 memory cage improvements into PDFium and worked with the BackupRefPtr team on additional improvements to pointer safety. And finally on Mac, we started rolling out performance improvements to sandbox initialization.
In memory safety news, we have announced a new policy for using Rust in Chromium, which has been getting some good press. We’re working on productionizing the Rust toolchain, which means making the compiler available on all of our development platforms (Linux, Windows, Mac), and the ability to cross-compile for Android/iOS/etc. And we have been working with partners to bring our first uses of Rust into Chromium.
We’ve continued building toward automated C++ bindings to Rust libraries, building out the design of the tool, implementing function calls, and addressing some difficult safety design topics.
C++ reference members are now protected by BackupRefPtr: we created and ran a clang plugin to rewrite them as raw_ref<T>.
We ran an experiment banning callbacks invoked with dangling pointers. It reached canary and dev. Discovered violations were listed and fixed in part by the code health rotation. The plan is to enable it by default after reaching stable.
We identified every pre-existing dangling raw_ptr in tests. This will allow us to start enforcing the DanglingPointerDetector on the CQ in 2023Q1.
A new clang GC plugin is now banning some problematic unsafe patterns of GC objects owning non-GC objects.
The V8 Security team landed many improvements to Fuzzilli, our JavaScript engine fuzzer, such as new mutators and support for more language features. We shipped the External Pointer Table for the V8 Sandbox in Chrome 107 and started working on code pointer sandboxing - further design and prototype work around CFI for V8.
The Chrome Offensive Security team continued our deep dive into Chromium graphics acceleration with an emphasis on inter-process communication (IPC) channels, namely the Dawn Wire protocol introduced by WebGPU and the enduring Command Buffer protocol. We filed two high severity security bugs, both found through manual analysis. Beyond IPC, we started an audit of Tint, the WebGPU shader compiler. We also made good progress writing two new fuzzers, one for Dawn Wire and the other for the Command Buffer, which will land separately in 2023Q1. We're excited to integrate them for cross-protocol fuzzing.
The Chrome Vulnerability Reward Program updated our policies and rewards. One of the changes was the introduction of a Bisect Bonus, and since then we've seen an increase in reporters providing bisections: some weeks up to 40% of VRP reports include a bisection, with a general average of 27%. This has reduced the amount of manual reproduction required by Security Sheriffs during bug triage to determine how far back bugs reproduce in active release channels.
The Chrome VRP also wrapped up another unparalleled year with a total of $4 million awarded to VRP researchers in 2022. $3.5 million was awarded to researchers for 363 reports of security bugs in Chrome Browser and $500,000 for 110 reports of security bugs in ChromeOS. To help show our appreciation for helping keep Chrome more secure in 2022, in collaboration with Google VRP, we sent end of year gifts to our top 22 Chrome VRP Researchers for 2022, and publicly celebrated their achievements.
Meanwhile, the Smithy team — working on security tooling — automated CVE information submission so that enterprises can get a reliable feed of security bugs we’ve fixed.
Until next time,
Andrew
On behalf of Chrome Security
Q3 2022
Greetings,
Chrome Security is hiring! We're looking for a software engineer to join the team as a macOS/iOS platform security expert (posting). More Chrome open positions at https://goo.gl/chrome/hiring
It's been a busy quarter for Chrome Security, and we're pleased to share this summary of what we've been up to.
On Chrome’s Counter-Abuse team, we expanded phishing protection on Android by enabling support for our client-side visual TFlite model. On Desktop and Android we made improvements for users with the Enhanced Protection mode of Safe Browsing enabled, effectively doubling the model’s ability to flag previously-undetected phishing sites by using higher fidelity visual features.
Our client-side telemetry framework for Chrome extensions is fully launched now and has helped flag a few more malware campaigns that were cloaking from our server-side scans. We have more signals lined up that we’ll be launching Q4.
We continued to land improvements to our new downloads UX, while keeping it enabled for 1% of Stable users to collect metrics and feedback. We did observe regressions in some key metrics, but some of them turned out to be red herrings because of the way the metrics were being logged.
We drove a 16% quarter-over-quarter growth in the number of Chrome users who opted in to Enhanced Protection!
The Trusty Transport team officially launched the Chrome Root Program! We are now maintaining our own list of trusted Certification Authorities (CAs), and are open for processing inclusion applications from CAs. We investigated various metrics issues in the ongoing rollout of our own certificate verifier and root store on Windows and Mac, and began the slow rollout towards 100% Stable. We also began prefactoring work towards extracting Chromium’s certificate verifier to BoringSSL so that it can be used by other Google (and non-Google) customers.
Encrypted Client Hello (ECH), which encrypts the server name in the TLS handshake, launched to 50% on Canary and Dev with a server-side partner. While there is still additional work to do to gather more data and increase coverage, eventually this feature will give users better privacy as to what websites they are visiting.
To help decrease HTTP navigations, we published an explainer for opportunistically upgrading all navigations to HTTPS. We also brought some of our existing HTTPS upgrading features to iOS, including HTTPS-First Mode.
The Open Web Platform Security team shipped an MVP of the Sanitizer API in Chrome 105. The Sanitizer API is an easy-to-use, safe-by-default HTML sanitizer, which web developers can use to remove content that may execute script from arbitrary, user-supplied HTML.
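For a feel of the API, here's a minimal TypeScript sketch of the rough shape shipped in the MVP (the element ID and input string are made up, and the hand-declared typings below exist only because lib.dom may not include the API yet; the surface may evolve):

```ts
// Hand-declared, approximate typings for the experimental API (illustrative only).
declare class Sanitizer { constructor(config?: object); }
interface Element { setHTML(input: string, options?: { sanitizer?: Sanitizer }): void; }

const out = document.getElementById("out")!;
const untrusted = '<img src="x" onerror="alert(1)"><b>hello</b>';
// Script-capable content (the onerror handler) is dropped; the harmless <b> markup is kept.
out.setHTML(untrusted, { sanitizer: new Sanitizer() });
```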
In Chrome 106 to 108 we ran an Origin Trial for Anonymous iframes. Following positive feedback, we are looking to ship Anonymous iframes (renamed Credentialless iframes) in Chrome 110. This will allow websites that embed arbitrary 3rd party iframes to deploy COEP and enable crossOriginIsolation.
We started sending preflights for access to private resources from public pages as part of the Private Network Access project. Currently, they only trigger a warning. We are looking at launching enforcement when we better understand metrics.
We plan to launch Origin-Agent-Cluster by default at 50% in Beta in Chrome 109, followed by a full launch on Stable in Chrome 110. This will restrict access to document.domain by default, and allow Chrome to more easily experiment with origin based process isolation.
The Security Architecture team ran a stable channel trial of Site Isolation for <webview> tags.
The Platform Security team landed several components of the ongoing work to sandbox the network service: TCP socket brokering on Android, making the proxy resolver asynchronous on POSIX systems, and socket handle transmission over Mojo on Windows. In other sandboxing news, we audited all Chromium’s service and sandbox types to identify places where we could lock things down more. We then moved several services to tighter sandboxes! We also performed an initial analysis of using virtualization for sandboxing, and we have identified several areas of further research for the future.
PDFium was upgraded to the latest version of PartitionAlloc. We added IPC types to enforce passing read-only file handles between privileged processes.
We made progress on building the foundations needed for stronger protection of client secrets on Windows. This required a re-design of os_crypt to support asynchronous operations. Meanwhile, progress has also been made on wiring this new code into application bound data encryption.
We implemented a dangling pointer detector and fixed ~150 dangling pointers in the codebase.
Chrome's new Offensive Security team reported a bug in Blink's V8 bindings and then created an exploit (bug currently restricted) for it, which was notable because it establishes new techniques to achieve code execution in the renderer sandbox. Separately in V8 land, we delivered a domain specific mitigation for a historically exploitable bug class.
We also continued our WebGPU audit that began in Q2 and will extend into 2023. In addition to reporting more security bugs in Q3, we're developing multiple fuzzers and staying engaged with the WebGPU team. Google Summer of Code gave us an opportunity to host Rares Moiseanu, a talented student who helped us add new Mojo IPC fuzzers and advance our prototype Chrome snapshot fuzzer based on Nyx. We're planning to apply snapshot fuzzing widely across Chrome, including to WebGPU.
Finally, variant analysis remains a priority for us and we conduct variant analysis on select bug reports as time permits. We're always on the lookout for ways to make variant analysis more scalable.
The V8 security team launched the “2nd pillar” of the V8 Sandbox: the External Pointer Table. We implemented many new features for Fuzzilli, our JavaScript engine fuzzer, and released version 0.9.2.
We continued our work on the CFI proposal for V8 and started implementing the necessary building blocks, such as PKEY support in PartitionAlloc.
The Chrome Vulnerability Rewards Program conducted an overhaul of reward amounts and policies, increasing reward amounts for more impactful and exploitable bug reports and updating bonuses and policies with the goal of incentivizing better quality bug reporting. So far this has resulted in a 25% increase of bisections included in reports and a small increase in the number of reports with functional exploits.
We've also stood up automated CVE filing, taking advantage of the new MITRE CVE Services API, so that downstream users can get an immediate feed of what bugs are fixed in a given release.
Until next time,
Andrew
On behalf of Chrome Security
Q2 2022
Greetings,
We're well over half way through 2022, so it's time to look back at what Chrome Security got up to in the 2nd quarter of the year.
The Chrome Counter-Abuse team launched redesigned downloads UX to 1% of Stable in Chrome 102. The new UX is more modern and usable, and provides surface area for experimentation. We’ll continue to collect metrics and feedback from this rollout to improve the design and identify future improvements.
We drove a 13% quarter-over-quarter growth in the number of Chrome users who opted in to Enhanced Protection!
Separately, we also landed changes to resolve some performance regressions on mobile with the use of TFLite models for reducing phishing false negatives, and will roll these out later in the year.
Our new extension telemetry signals have proven useful by helping the Chrome Web Store to catch and quickly take down a malware campaign.
In Trusty Transport news, at the June CA/Browser Forum meeting, we announced a significant update to the Chrome Root Store policy. This update introduces improved security requirements for new Certificate Authority applicants to our program, and details some of our future priorities for the web public key infrastructure. We also announced that we’ll be beginning to process applications – the official launch of our root program – in September. We implemented a cross-platform certificate viewer UI (currently in Canary) and mechanism for dynamically updating Chrome’s root store (launched to Stable) in preparation for this launch.
We built a mechanism for dynamically updating the static key pinning list, and are using that capability to launch key pinning support on Chrome for Android (currently in a stable experiment).
We revised the timeline for retiring several old Certificate Transparency logs after investigating unexpected breakage. We also shortened the timeline for compliance monitoring for bringing new logs online.
HTTPS-First mode, available on desktop and Android platforms, is now in beta on iOS.
In Q2, the Security Architecture team started experimental trials of Site Isolation for <webview> tags.
The Platform Security team continues to make good progress on our top priority for the year: sandboxing the network service across Windows, Android, and Linux/Chrome OS. Initial support for brokering socket creation, needed on Windows and Android, has landed, and a long standing issue launching sandboxed processes on Windows was diagnosed! We’ve also created designs for brokering various network subsystems on Linux/Chrome OS. In addition, we added the ability to specify sandboxing requirements directly to .mojom files, to ease readability and reviewability. And on Windows, work is progressing on the app-bound encryption service, to help protect against cookie theft.
The new Offensive Security team audited a portion of Chrome’s forthcoming WebGPU features, which led to the discovery of several security bugs (1348733, 1346041, 1340654, 1336014, 1334865 — not currently visible, as they'll be restricted until 14 weeks after they've been marked as fixed, per usual). Separately, the team hardened a V8 feature abused by multiple previous exploits.
On the Web Platform APIs front, to protect private networks, we’re starting to deploy preflight checks when accessing private resources from secure HTTP pages as part of the Private Network Access spec implementation. We will start with warnings in Chrome 104, and will follow with enforcement in Chrome 107. Insecure public pages will still not be allowed to access private resources. To help existing services migrate to HTTPS, we will be implementing a permission for a secure page to access insecure content on the private network, effectively allowing the user to relax mixed content restrictions for a private IP.
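To make the preflight flow concrete, here's a hedged sketch of how a device on the private network could opt in; the header names come from the Private Network Access draft, while the origin, path, and port are invented for illustration — this is not code from Chrome or from the spec.

```ts
// Minimal Node/TypeScript HTTP handler answering the Private Network Access preflight.
import { createServer } from "node:http";

createServer((req, res) => {
  if (req.method === "OPTIONS" &&
      req.headers["access-control-request-private-network"] === "true") {
    // Opt this private-network resource into requests from the public origin below.
    res.writeHead(204, {
      "Access-Control-Allow-Origin": "https://public.example",
      "Access-Control-Allow-Private-Network": "true",
      "Access-Control-Allow-Methods": "GET",
    });
    res.end();
    return;
  }
  res.writeHead(200, {
    "Content-Type": "application/json",
    "Access-Control-Allow-Origin": "https://public.example",
  });
  res.end(JSON.stringify({ status: "ok" }));
}).listen(8080);
```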
We will be releasing an MVP of the Sanitizer API in Chrome 105.
In web-based isolation news, we are preparing for an Origin Trial of Anonymous Iframes in Chrome 106. We are also converging on a solution for combining crossOriginIsolation with cross-origin popups, called COOP: restrict-properties.
In Q2 we rolled out the minimal version of the V8 sandbox to Desktop in Chrome 103 and Android (targeting Chrome 105). It currently only prevents attackers from abusing ArrayBuffers in an exploit, and is still easy to bypass, but we will gradually make it stronger until it can become a security boundary by itself.
Besides that, we developed a CFI strategy for V8 that can deal with the additional challenges to CFI introduced by JIT compilation. This requires per-thread memory protections which likely needs special hardware support.
Until next time,
Andrew
On behalf of Chrome Security
Q1 2022
Greetings,
The first quarter of 2022 was a busy one for Chrome Security, as you can read below. This was all in addition to our evergreen role providing security review, consulting, and support to teams across Chrome. If you'd like to be part of this fantastic team Chrome is hiring for security positions! See goo.gl/chrome/hiring for more details.
We collaborated with the Google Accounts team to launch an integration that will help users opt-in to Chrome’s Enhanced Safe Browsing protection via a similar setting for their Google account.
We’ve almost completed the implementation for the initial version of a redesigned downloads experience, and will soon run an experiment with it on Chrome 102. To stop the spread of malware through macros embedded in Microsoft Office documents, we fully launched the parsing of downloaded Office documents in Chrome 97 to identify whether they contain macros and include this information when contacting Safe Browsing to determine if they’re unsafe.
Two extension-telemetry signals are active on Chrome early channels, feeding client-side data to Safe Browsing to suss out suspicious extensions.
We also completed the launch of a new TfLite-based client-side phishing detection model on desktop platforms in Chrome 97, which showed 2.5x as many warnings as the previous model.
This quarter we launched a major new Certificate Transparency policy that removes Google from the critical path of global HTTPS certificate issuance, made possible in part by expanding our SCT Auditing efforts. This quarter also saw CT enforcement and protections coming to Android, vastly expanding the number of users protected by CT.
In preparation for the upcoming rollout of our own Chrome Root Store, we've also been developing several major policies and processes for interacting with certificate authorities, and the engineering to deliver root certificates to Chrome out-of-band. This enables Chrome to directly validate site certificates, rather than relying on each operating system’s verification.
Following last quarter's investments in better infrastructure for handling lookalike warnings appeals, and this quarter's work on safer rollout mechanisms, we are rolling out a new heuristic to detect additional lookalike domains and prepping for an intern on the project starting in Q2. Our initial implementation of TLS ECH is also now nearly code complete, with only polish work remaining.
We made great progress on our Rust-in-Chromium experiments. Rust would have security, productivity and performance benefits over C++, but we don’t yet know if we can ergonomically mix it with C++ in Chromium. This quarter, we landed a Rust JSON parser, achieving some compile-time safety while wrapping existing C++ APIs. We also landed support for a C++ -> Rust bindings generator called autocxx. In the next quarter we’ll be using that, plus another tool called crubit, to build some ambitious demos.
Work continues on sandboxing the network service across Windows, Android, and Linux/CrOS. We are making good progress on brokering or servicifying the numerous network stack subsystems that do not work within the confines of a sandbox. On Windows, we also successfully landed CFG and investigated sandbox improvements. On Mac, we experimented with Apéritif, but hit roll-out issues on older macOS versions.
We’re on track for a new attempt at preflight warnings for Private Network Access requests in Chrome 102. IoT developers reported that Web Transport was insufficient as the only workaround to the PNA secure context restriction, so we’re looking at a permission-based alternative and are seeking feedback on it. The initial attempt was rolled back due to various bugs, in particular one affecting partially-cached range requests.
We created a specification for anonymous iframe and are nearing code completion. Origin Trial is expected for Chrome 106. This resolves a common difficulty: embedding arbitrary 3rd party iframes inside a crossOriginIsolated page.
We have made progress towards a decision on a new COOP policy (restrict-properties), to solve the crossOriginIsolation + popups integration.
On continued progress towards safer defaults, we shipped warnings for document.domain usage without opt-in, to prepare for eventual deprecation. And Chrome 103 saw us block sandboxed iframes from opening external applications.
In Web Platform memory safety news, we implemented a C++ dangling pointer detector. We are now working on fixing all the occurrences, and refactoring Chrome for using safer memory ownership patterns.
In Q1, the Security Architecture team continued several projects to improve Site Isolation and related defenses, including implementation work for isolating <webview> tags.
Until next time,
Andrew
On behalf of Chrome Security
Q4 2021
Greetings,
As we enter the last month of the first quarter of 2022, here's a look back to what Chrome Security was doing in the last quarter of 2021.
Chrome is hiring for security positions! See goo.gl/chrome/hiring for more details.
For extension security, we are working on a telemetry framework that monitors suspicious extension activity and transmits associated signals to Safe Browsing, for users who have opted in to sharing this data. The signals are analyzed server-side (both manually and through automated analysis) to detect and mitigate extension abuse patterns.
We proposed a redesigned downloads experience for Chrome on desktop platforms that moves downloads into the toolbar. This would be a better overall user experience and also allow us to build advanced downloads features in the future. We plan to launch the MVP in Q1 2022.
In preparation for an HTTPS-first world, we conducted Stable experiments to determine the impact of changing the lock icon (which has been shown to be misleading to users) to a more security-neutral and obviously-clickable icon, with 1% stable results from Chrome 96. Results from this experiment were positive, indicating that the new icon increased engagement with the Page Info surface without regressing user activity or security metrics.
We’re running an experiment to expand Certificate Transparency (CT) enforcement to Chrome for Android, improving our ability to detect malicious certificates and unifying certificate validation across platforms. This experiment is rolling out in Chrome 98.
We launched support for Control Flow Guard on Windows, and continue to make good progress with network process sandboxing on multiple platforms. We’ve also been involved in the “unseasoned PDF” project, which removes NaCl as a dependency from PDFium.
We’re experimenting with Rust in Chrome, to give easier options to write safe code. These experiments aren’t yet switched on in shipping code, but they help us learn what it would take to do so. For example, we’ve landed a memory-safe JSON parser which can save the overhead of creating a utility process.
We continued our progress towards increased isolation between websites and networks on the one hand, and cross-site scripting mitigation on the other. For isolation, we've started a Private Network Access experiment to ensure that preflights aren't going to cause problems for subresource requests, shipped COEP: credentialless, and reworked our document.domain deprecation plans based on feedback from the ecosystem. For injection, we've solidified the design and implementation of the Sanitizer API (you can poke at it with this handy Playground!) in coordination with our friends at Mozilla, whose implementation is also proceeding apace.
The Security Architecture team was honored to receive an IEEE Cybersecurity Award for Practice for Site Isolation's impact on browser security! We continued work on full Site Isolation on some Android devices, extension and citadel enforcements, ORB, and SiteInstanceGroups. We also started designing Site Isolation for the <webview> tags used in Chrome Apps and WebUI pages. We updated code to support new plans for turning on Origin-Agent-Cluster by default, which could allow isolating origins instead of sites. For memory safety, we updated several unsafe uses of RenderFrameHost pointers and continued local work with Rust and C++ lifetime annotations.
The Chrome VRP just achieved some new records as we closed out 2021 with close to $3.3 million in total rewards to 115 Chrome VRP researchers for 333 valid unique reports of Chrome browser and Chrome OS security bugs. Of that total, just over $3M was rewarded for Chrome browser bugs and $250,500 for Chrome OS bugs, with $45,000 being the highest reward for an individual Chrome OS report and $27,000 for a Chrome browser report. $58,000 was rewarded for security issues discovered by fuzzers contributed by VRP researchers to the Chrome Fuzzer program, the highest reward being $16,000 for an individual fuzzer-based report. To show our appreciation for helping us keep Chrome safe in 2021, in collaboration with Google VRP, we sent end of year gifts to our Top 20 researchers of 2021 and also celebrated their achievements publicly on Twitter.
Cheers,
Andrew
Q3 2021
Greetings,
Here's what the Chrome Security team has been up to in Q3 of this year,
Chrome is hiring, including for security positions! See goo.gl/chrome/hiring. In particular we're looking for a lead security product manager to work with the teams doing all the great things in this update, and more across the Chrome Trust and Safety organisation.
Through a series of in-product integrations and promotions on the new tab page on Desktop and Android, we saw a growth of almost 70% in the number of users who chose to opt-in to Enhanced Safe Browsing in Chrome.
We deployed two new machine learning models on Android to detect and block phishing pages: one operates on the contents of the DOM, the other is a TfLite model that operates on the overall appearance of the page. Both models led to a 30+% drop in password reuse on phishing pages and also helped us identify new, previously-unknown phishing pages. Following up from that, in Q4, we’ll try to launch the TfLite model on Desktop platforms also.
We landed protections that disabled installations of Chrome extensions that had been found to be violating Chrome Web Store policies previously but were still enabled on users’ machines.
We ran an experiment to understand whether users respond to a cookie-theft specific warning at the time of download any differently than our regular malware warning, and initial results suggest no change in the warning bypass rate.
To close a loophole currently being abused by a large cookie-theft campaign, we landed changes in Chrome 96 to stop circumvention of Chrome’s tracking of referrers.
This quarter we also launched an experiment to remove the padlock icon, a long-misunderstood component of browser security UI. This change will roll out to a small percentage of users gradually in Chrome 94+. We also launched HTTPS-First Mode, a setting that will cause Chrome to load all pages over HTTPS by default.
Chrome is now distributing Certificate Transparency log lists outside the binary update cycle, allowing faster and more reliable updates. This change will allow us to begin exploring Certificate Transparency on Chrome for Android as well as removing the requirement for all certificates to be logged to Google logs.
Our long-term goal has been to use Chrome’s own certificate verifier and root store on all Chrome platforms. This quarter we began rolling out our certificate verifier and transitional root store on Windows, with a metrics-only trial currently running in Chrome 95. We are also continuing to experiment and investigate compatibility issues on Mac.
To help people understand the domain names to which they’ve connected, we began experimenting with a new heuristic to identify typosquatting domain names such as “googel[.]com”. We also built a new workflow for developers of co-owned domains to opt out of warnings for lookalike domain names.
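For intuition about what a lookalike heuristic has to catch, here's a toy TypeScript sketch based on plain edit distance; it is illustrative only and not Chrome's heuristic, which weighs many more signals (site engagement, character skeletons, top-domain lists, and so on), and the threshold below is arbitrary.

```ts
// Toy lookalike check: flag a domain that is within a small edit distance of a popular target.
function editDistance(a: string, b: string): number {
  const d = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0)));
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      d[i][j] = Math.min(
        d[i - 1][j] + 1,                                   // deletion
        d[i][j - 1] + 1,                                   // insertion
        d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return d[a.length][b.length];
}

const looksLike = (domain: string, target: string) =>
  domain !== target && editDistance(domain, target) <= 2;

console.log(looksLike("googel.com", "google.com")); // true
```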
The Platform Security team has started experimenting with Rust in the Chromium tree as part of the memory safety effort. Also in the name of memory safety, we are experimenting with using WasmBoxC to create in-process sandboxes. The team is also making progress on sandboxing the network service on Windows, Android, and Linux. And we deprecated and removed an unsafe IPC pattern. Finally, we are keeping busy by helping review all the new features being launched in Chrome.
The Security Architecture team was excited to launch Site Isolation for additional sites on Android (including those using OAuth or COOP headers) as well as Strict Extension Isolation on desktop; see the Google Online Security blog and the Keyword blog. We are now experimenting with full Site Isolation on Android devices with sufficient RAM. Our work continues on adding more enforcements for extensions, protecting data with ORB, isolating sandboxed iframes, and improving Origin Agent Cluster. On the memory safety front, we have started local experiments with Rust in the tree, while also investigating approaches for improving C++ memory safety.
To make cross-origin isolation easier to deploy, we launched COEP credentialless in Chrome 96. We’ve also made good progress on the COOP same-origin-allow-popups-plus-coep spec and started implementation.
We launched the first part of Private Network Access checks in Chrome 94, which prevents non-secure websites on the public internet from pivoting through users' privileged network positions to make requests to private network resources. We’re planning to extend these protections in the Chrome 98 timeframe to include a preflight requirement ensuring that the private network resource opts-into communication with the public internet. We'll start with devtools warnings and outreach to give websites time to update their devices to respond to the preflights, and hopefully can roll things out more broadly in 2022.
Beyond isolation, we're working with our friends at Mozilla to finalize our implementation of a new Sanitizer API, which we hope can be an important tool for developers to mitigate injection attacks. You can play with both Chrome and Firefox's implementations by flipping the relevant flag and hopping over to the https://sanitizer-api.dev/ playground.
Cheers,
Andrew
Q1 and Q2 2021
Greetings,
With apologies to those still waiting patiently for our Q1 update, here instead is a look back at what the Chrome Security teams have been up to in the first half of 2021.
Chrome is hiring, including for security positions! See goo.gl/chrome/hiring. In particular we're looking for a lead security product manager to work with the teams doing all the great things in this update, and more across the Chrome Trust and Safety organisation.
The first half of 2021 is trending toward record-setting totals for the Chrome Vulnerability Reward Program (VRP) with the security researcher community awarded $1.7M for reporting close to 200 unique, valid security bugs. Of these reward-eligible reports, 84 were reports for Critical and High severity issues that impacted stable channel users. The Chrome VRP continues to be a vital part of our security ecosystem and we greatly appreciate the efforts of the Chrome VRP researcher community to help keep Chrome users more secure!
In a collaborative effort led by the Google VRP, the new Google BugHunters site was launched. Chrome bugs can be reported via that site, as well as at crbug.com/new using the Security Bug template as before.
In collaboration with the other VRP programs across Google, bonuses were paid out to VRP researchers impacted by recent payment delays. We are additionally working on ways to proactively decrease future delays, and improve the efficiency and processes of the program.
In Q1 the Safe Browsing team grew the Enhanced Safe Browsing population by more than 400% through in-product integrations with the security interstitial pages and Safety Check. We also started using machine learning models to protect users who have real-time Safe Browsing lookups against phishing attacks which, along with heuristic-based enforcement, allowed us to decrease our phishing false negatives by up to 20%.
We designed improvements to our client-side phishing detection subsystem, which will allow us to innovate faster in that area in the coming quarters.
In Q2 we rolled out a new set of protections for Enhanced Safe Browsing users in Chrome 91: Improved download protection by offering scanning of suspicious downloads, and better protection against untrusted extensions. We continued to see a phenomenal growth in the number of users who opt in to Enhanced Safe Browsing to get Chrome’s highest level of security.
We helped land improvements to the client-side phishing detection subsystem in Chrome 92 which made image-based phishing classification up to 50 times faster. And we landed improvements to the Chrome Cleanup Tool to remove new families of unwanted software from the users’ machines.
In Chrome 90, we launched a milestone for a secure web: Chrome’s omnibox now defaults to HTTPS when users don’t specify a scheme. We later announced a set of changes to prepare the web for an HTTPS-first future. We’re implementing HTTPS-First Mode as an option for Chrome 94, a setting that will cause Chrome to automatically upgrade navigations to HTTPS, and show a full-page warning before falling back to HTTP. We’ll also be experimenting with a new security indicator icon for HTTPS pages in Chrome 93, inspired by our research showing that many users don’t understand the security assurances of the padlock icon. Finally, we announced a set of guiding principles for protecting and informing users on the slice of the web that is still HTTP.
To try out HTTPS-First Mode in Chrome Canary, toggle “Always use secure connections” in chrome://settings/security. You can also preview our new HTTPS security indicator by enabling “Omnibox Updated connection security indicators” in chrome://flags and then re-launching Chrome.
In June, Chrome passed a huge milestone in the history of Certificate Transparency. The last certificates issued before Chrome required CT logging have now expired. That eliminates a hole where a malicious or compromised CA key could backdate a cert to avoid logging it. Congratulations to all who've worked on CT over the years, and those who continue to keep the ecosystem thriving.
To further strengthen the Certificate Transparency ecosystem, we launched the first phase of SCT auditing, which helps verify that CT logs are behaving honestly, and designed and began implementing subsequent phases to improve coverage and reliability. In Chrome 94 we’ll launch a change to distribute CT log information to clients faster and more reliably, which will help unblock CT enforcement on Chrome for Android.
We’ve made progress on under-the-hood improvements to certificates and TLS. We proposed a set of changes in the CA/Browser Forum to better specify how website (and other) certificates should be structured, and we helped make improvements such as tightening validation procedures for wildcard certificates and sunsetting an unvalidated certificate field. We distrusted the Camerfirma CA, initially planned for Chrome 90 but later delayed until Chrome 91 due to the exceptional circumstances of some Covid-19 related government websites being slow to migrate.
On the TLS front, we launched a performance improvement to the latest version of TLS — zero round-trip handshakes in TLS 1.3 — to Canary and Dev. We announced and implemented the removal of the obsolete 3DES cipher. Finally, a new privacy feature for TLS, Encrypted Client Hello, is now implemented in our TLS library, with integration into Chrome ongoing.
The Open Web Platform Security team implemented and specced a first version of the Sanitizer API, that will help developers avoid pesky XSS bugs. In combination with Trusted Types that we released last year, it will help websites defend against XSS attacks.
CORS-RFC1918 got renamed to Private Network Access. We are ready to ship restrictions on accessing resources from private networks from public HTTP pages in Chrome 94: public HTTP pages will no longer be able to request resources from private networks. We will have a reverse Origin Trial in place until our preferred workaround (WebTransport) has shipped. We are also working on the next stage of Private Network Access restrictions, where we will send a CORS preflight when a public page tries to access a private resource.
CrossOriginIsolated is really difficult for websites to adopt. We’re planning to make a few changes to help with deployment. First, we have a new version of COEP, credentialless, currently undergoing an Origin Trial. It will help developers deploy COEP when they embed third-party subresources. We’re also working on anonymous iframes, to deploy COEP on pages that embed legacy 3rd party iframes. And we want to have COOP same-origin-allow-popups + COEP enable crossOriginIsolated, to help support OAuth and payment flows.
In Q1, the Security Architecture team continued work on several Site Isolation efforts: isolating sites that use COOP or OAuth on Android, metrics for protecting data with ORB, better handling of about:blank origins and tracking of content scripts, and helping with the communications for the Spectre proof-of-concept and recommendations. Additionally, Origin Agent Cluster shipped in Chrome 88, offering process isolation at an origin granularity (for performance reasons rather than security). We explored new options for memory safety and helped with the MiraclePtr experiments. Finally, we made several stability improvements, continued refactoring for SiteInstanceGroup, and helped unblock the MPArch work.
Q2 saw work improving Site Isolation protections for Origin headers (via request initiator enforcements) and extension IPCs. We started several Site Isolation related beta trials, including isolating more sites on Android and isolating extensions from each other on desktop. We started an early prototype of isolating same-site sandboxed iframes and analyzed metrics for protecting data with ORB, as well. We also contributed to several efforts to improve memory safety in Chrome, solved long-standing speculative RenderFrameHost crashes, and improved support for Origin Agent Cluster.
The Platform Security team had a busy first half of the year. We have now deployed Hardware-enforced stack protection for Windows (also known as Control-flow Enforcement Technology, CET) to most Chrome processes, on supported hardware. CET protects against control flow attacks attempting to subvert the return from a function, and we blogged about this earlier in the year.
With the returns from functions now protected by CET, we are making good headway in protecting the function calls themselves — indirect calls, or 'icalls' — using CFG (Control Flow Guard). We have full CFG support for all processes behind a compile-time flag 'win_enable_cfg_guards = true', so please try it; in the meantime we are working on ironing out performance issues so we can roll it out to as many processes as possible.
The stack canary mitigation has been significantly strengthened on Linux and Chrome OS. These platforms use the zygote for launching new processes, so the secret stack canary value was the same in each process, which meant the mitigation was useless once an attacker had taken over a single process. The stack canaries are now re-randomized in each process.
On macOS, we finished our complete rollout of our V2 sandbox architecture with the launch of the new GPU process sandbox. That marks the end of a nearly four-year project to eliminate the unsandboxed warm-up phase of our processes, which reduces the amount of attack surface available to a process. In addition, we enabled macOS 11’s new RIDL CPU mitigation for processes that handle untrustworthy arbitrary compute jobs (e.g. renderers).
GWP-ASAN is being field trialed on Linux, Chrome OS, and Android. GWP-ASAN is a sampling allocation tool designed to detect heap memory errors occurring in production with negligible overhead, providing allocation/deallocation/crashing stack traces for production crashes. It has already been launched on macOS and Windows but hopefully launching on new platforms should help us find and fix bugs in platform-specific code.
XFA (the form-filling part of PDF) is now using Blink's garbage collector Oilpan, protecting against use-after-frees in this code. The PDF code is also being moved from its own process that uses the legacy Pepper interface (previously used for Flash) into the same process as web content.
The work on the network service sandbox continues apace. Previously the sandbox technology being used on Windows was the same one used for the renderer (the restricted token sandbox). However, this tighter sandbox caused issues with parts of the network stack such as Windows Authentication and SSPI providers, so we are moving to an LPAC (Less Privilege App Container) sandbox which should play much nicer with enterprises.
Speaking of enterprises, we landed a new set of policies to control the use of the JIT (Just-In-Time) compiler in V8 (our JavaScript engine). These policies allow enterprises to set a default policy and also to enable or disable JIT for certain sites. The V8 JIT has often been a juicy target for exploit writers, and by not having any dynamically generated code we can also enable OS mitigations such as CET (see above) and ACG (Arbitrary Code Guard) in renderer processes to help prevent bugs from being turned into exploits as easily. Disabling the JIT does have some drawbacks on web compatibility and performance — but our friends in Edge subsequently wrote a great blog exploring this debate which we encourage you to read before deploying this policy.
In Q1 the extended team working on permissions was excited to start rolling out the fruits of several collaborations from last year, including the MVP of the Chrome Permission Suggestion Service (CPSS) to suppress very-unlikely-to-be-granted prompts, the automatic revocation of notification permission on abusive sites, a complete revamp of chrome://settings/content pages, and experiments for Permission Chip and one-time permission grants. CPSS reduces unwanted interruptions (number of explicit decisions which are dismissed, denied or granted) by 20 to 30%. Additionally, the less disruptive 'chip' permission UI is now live for all users for location permission requests and we’re migrating other permissions to the new pattern.
We organized a virtual workshop on next-gen permissions, identifying the core themes – modes, automation, and awareness – for future explorations, and we conducted our very first qualitative UXR study to better understand users’ mental models and expectations around permissions.
We'll be back to our quarterly update cadence with news from Q3 later in the year.
Cheers,
Andrew
Q4 2020
Greetings,
Even as 2021 is well underway, here's a look back at what Chrome Security was up to in the last quarter of 2020.
Interested in helping to protect users of Chrome, Chromium, and the entire web? We're hiring! Take a look at goo.gl/chrome/hiring, with several of the roles in Washington, DC: g.co/chrome/securityprivacydc.
The Usable Security team fully launched a new warning for lookalike domain names: low-quality or suspicious domains that make it hard for people to understand which website they’re actually visiting. We continued to place some final nails in the coffin of mixed content (insecure subresources on secure pages). Secure pages are no longer allowed to initiate any insecure downloads as of Chrome 88. We uncovered some issues with our new warning on mixed form submissions due to redirects, and this warning will be re-launching in Chrome 88 as well.
With HTTPS adoption continuing to rise, it’s now time to begin treating https:// as the default protocol, so we began implementing a change to the Chrome address bar to default to https:// instead of http:// if the user doesn’t type a scheme. Stay tuned for more information about this change in Q1.
To improve the security of the Certificate Transparency (CT) ecosystem, we began dogfooding an opt-in approach to audit CT information seen in the wild, and we started designing improvements to make this approach more resilient.
The Chrome Safe Browsing team helped the Chrome for iOS team roll out real-time Safe Browsing protections in Chrome 86 for iOS. Also, in addition to our existing mechanism to disable malicious Chrome Extensions with a large install base, we rolled out a new mechanism that allows us to also disable malware extensions with a small install base.
On the memory safety front, we've been getting ready to ship Oilpanned XFA and continue to engage with the MiraclePtr and *Scan project. As those initiatives are treating the symptom rather than the cause, we continue to investigate what a safer dialect of C++ would look like, and to improve Rust/C++ interoperability ahead of any possible future rust experiments. Ongoing work on exploit mitigations includes Control-flow Enforcement Technology, GWP-ASan, and Control Flow Guard.
We’re also working on reducing the privilege of the network service sandbox on Windows. We’re planning to do the same on Android later in the year.
FuzzBench continues to help the research community benchmark and create more efficient fuzzing engines (e.g. AFL++ 3.0, SymQEMU, etc). We added support for bug-based benchmarking (sample report), a fuzzer stats API, and saturated-corpora testing. Our OSS-Fuzz platform now has first-class support for Python fuzzing, and continues to grow at a brisk pace (~400 projects, 25K+ bugs). Based on community feedback, we created a lightweight, standalone ClusterFuzz Python package (alpha) for common fuzzing use cases, e.g. stacktrace parsing. We have refactored the AFL fuzzing integration to use the engine interface. We have been working on a solution to better track vulnerabilities in third-party dependencies. We have also bootstrapped several open source security efforts under the OpenSSF foundation, e.g. security scorecards, finding critical projects, etc.
We implemented blocking of requests from insecure contexts to private networks (first part of CORS-RFC1918), and are analyzing metrics to chart a path to launch.
We introduced the PolicyContainer to squash bugs around inheritance of security policies to about:blank, srcdoc, or javascript: documents.
We also implemented a first version of a Sanitizer API and started the specification process.
With CrossOriginOpenerPolicy (COOP) and CrossOriginEmbedderPolicy (COEP) launched, we were able to re-enable SharedArrayBuffers on Android gated behind crossOriginIsolated (a.k.a COOP+COEP), which Firefox has also done. We have a plan to deprecate all SAB usage without crossOriginIsolated in Chrome 91 (with reverse Origin Trial until Chrome 93).
This will require users of SharedArrayBuffers to adopt COOP and COEP. Adopting COEP has proved difficult: we have heard that deploying COEP is hard for a number of websites that embed third-party content. We are considering a new form of COEP that might alleviate those issues: credentialless. To help drive adoption of COOP we moved the COOP reporting API out of Origin Trial to on by default in Chrome 89.
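As a concrete illustration of the gating described above, here's a minimal TypeScript sketch of how a page can feature-detect cross-origin isolation before relying on SharedArrayBuffer (the COOP/COEP headers are assumed to be set server-side; this is not Chrome code):

```ts
// SharedArrayBuffer is only available when the page is crossOriginIsolated, i.e. served with
// COOP: same-origin plus COEP: require-corp (or, later, COEP: credentialless).
function tryMakeSharedBuffer(bytes: number): SharedArrayBuffer | null {
  if (!self.crossOriginIsolated) {
    console.warn("Not crossOriginIsolated: deploy COOP+COEP before using SharedArrayBuffer.");
    return null;
  }
  return new SharedArrayBuffer(bytes);
}
```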
We have started to collect metrics on dangerous web behaviors, with the hope of driving them down. The first one we’ll likely be looking at is document.domain.
The Security Architecture team completed the CORS for content scripts migration in Chrome 87, removing the allowlist for older extensions and strengthening Site Isolation for all desktop users! Opt-in origin isolation was renamed to Origin-Keyed Agent Clusters and is on track to launch in Chrome 88. We are making progress towards additional Android Site Isolation for OAuth and COOP sites, and we helped secure SkBitmap IPCs against memory bugs. Finally, we have been investing in architecture changes, including SiteInfo to better track principals and SiteInstanceGroup to simplify the process model, along with significant reviews for Multiple Page Architecture and Multiple Blink Isolates.
Cheers,
Andrew, on behalf of the Chrome security team
Q3 2020
Greetings,
Here's an update on what the teams in Chrome Security have been up to in the third quarter of 2020.
The Chrome Safe Browsing team continued the roll-out of Enhanced Safe Browsing by launching it on Android in Chrome 86, and releasing a video with background on the feature. We also launched deep scanning of suspicious downloads, initially for users of Google’s Advanced Protection program, which received positive coverage.
This quarter the Usable Security team vanquished a longtime foe: http:// subresources on https:// pages. Mixed content is either upgraded to https:// or blocked. We also built new warnings for mixed forms and continued rolling out mixed download blocking. These launches protect users’ privacy and security by decreasing plaintext content that attackers can spy on or manipulate.
In Chrome 86, we are beginning a gradual rollout of a new low-confidence warning for lookalike domains. We also expanded our existing lookalike interstitial.
Finally, we rolled out a 1% Chrome 86 experiment to explore how simplifying the URL in the address bar can improve security outcomes.
The Platform Security team continued to move forward on memory safety: with Rust currently not approved for use in Chromium, we must try to improve C++. Toward that end, the PDFium Oilpan and MiraclePtr/*Scan projects are moving forward quickly and should be ready to try in Q4 2020 and Q1 2021.
In sandboxing news, we made changes to Linux and to our calling code to handle upcoming glibc changes, continued servicifying the Certificate Verifier (unblocking work to isolate the network service), and worked on getting a better grip on Mojo.
Bugs-- has started encouraging Chrome developers to submit a vulnerability analysis after a bug is fixed (example). This guides our future work on eliminating common bug patterns. We collaborated with fuzzing teams across Google to host 50 summer interns, with strong impact across Chrome and other critical open source software (see blog post). We have added automated regression testing of past fixed crashes for engine-based fuzzers (e.g. libFuzzer, AFL). We have made several changes to our underlying fuzzing and build infrastructure: UI improvements, Syzkaller support, an OSS-Fuzz builder rewrite, and more. Lastly, we continue to push fuzzing research across the industry using our FuzzBench benchmarking platform, which has led to improvements in the AFL++, libFuzzer, and Honggfuzz fuzzing engines.
The Open Web Platform security team continues to focus on two problems: injection attacks, and isolation primitives.
Regarding injection, we're polishing our Trusted Types implementation, supporting Google's security team with bug fixes as they continue to roll it out across Google properties. We're following that up with experimental work on a Sanitizer API that's making good progress, and some hardening work around policy inheritance to fix a class of bugs that have cropped up recently.
For isolation, we're continuing to focus on COOP deployment. We shipped COOP's report-only mode as an origin trial, and we're aiming to re-enable SharedArrayBuffers behind COOP+COEP in Chrome 88 after shipping some changes to the process model in Chrome 87 to enable `crossOriginIsolated`.
In Q3, Chrome's Security Architecture team enabled CORS for extension content scripts in Chrome 85, moving to a model that is more robust against compromised renderers. We made further progress on opt-in origin isolation, and we took the first steps towards several improved process model abstractions for Chrome. MiraclePtr work is progressing towards experiments, and we wrapped up the test infrastructure improvements from last quarter.
The CA/Browser Forum guidelines got big updates, with ballots to overhaul the guidelines to better match browser requirements, including certificate lifetimes, and long overdue cleanups and clarifications. One good revamp deserves another, and the Chrome Root Certificate Policy got a big facelift, as part of transitioning to a Chrome Root Store.
CT Days 2020 was held in September, including the big announcement that Chrome was working to remove the One Google Log requirement by implementing SCT auditing.
This summer, we also hosted an intern who worked on structure-aware ASN.1 fuzzing, and began integration with BoringSSL.
Cheers,
Andrew, on behalf of the Chrome security team
Q2 2020
Greetings,
The 2nd quarter of 2020 saw Chrome Security make good progress on multiple fronts, all helping to keep our users, and the web safe.
The Chrome Safe Browsing team launched real-time phishing protection for all Android devices, and observed a 164% increase in phishing warnings for main-frame URLs. We also completed the rollout of Enhanced Safe Browsing to all users of Chrome on desktop platforms.
We helped the Chrome for iOS team implement hash-based Safe Browsing protection in Chrome 84, the first time it has shipped on iOS. Working with various teams, most notably Mobile UX, we also made significant progress towards shipping Enhanced Safe Browsing in Chrome 86 for Android.
For desktop platforms, we landed changes to the in-browser phishing detection mechanism to help reduce phishing false negatives using new machine learning models for Chrome 84 and beyond. We also finalized the plan to disable more malicious Chrome Extensions, starting with Chrome 85.
The Enamel team put the finishing touches on our work to prevent https:// pages from loading insecure content. We built a new warning for https:// pages with forms targeting insecure endpoints, and prepared to start rolling out mixed download warnings in Chrome 84. This release will also include mixed image autoupgrading and the second phase of TLS 1.0/1.1 deprecation.
Even on an https:// website, users need to accurately understand which website they’re visiting. We expanded our lookalike domain warning with new triggering heuristics, and prepared to launch an additional warning (pictured) for lower-precision heuristics in M86.
The Platform Security team continued to make good progress on many of our longer-term projects, including sandboxing the network service (and the associated certificate verification servicification), adopting Oilpan garbage collection in PDFium's XFA implementation, and investigating memory safety techniques and exploit mitigation technologies.
Along with our colleagues in Chrome Security Architecture, we've sharpened the security focus on Mojo, Chrome's IPC system, and started looking at what's needed to improve developer ergonomics and make it easier to reason about communicating over security boundaries. Also with CSA, we've worked on how MiraclePtr could help prevent use after free bugs in C++ code.
Bugs-- continued to develop and improve the FuzzBench platform, which has helped the security research community develop more efficient fuzzing engines (Honggfuzz and AFL++ got several improvements and lead the benchmarking results). Based on FuzzBench results, we have successfully integrated Entropic as a fuzzing strategy in ClusterFuzz. We have started rewriting/improving several Chrome blackbox fuzzers (e.g. dom, webbot, media, ipc), and also deprecated ~50 duplicate/unneeded fuzzers. In the OSS-Fuzz service, we added first-class fuzzing support for the Go and Rust languages (better compiler instrumentation, crash parsing, and easier project integration) and improved CI (e.g. Honggfuzz checks). Lastly, we worked closely with Android Security and improved ClusterFuzz for on-device and host fuzzing use cases (e.g. syzkaller support, Pixel hardware fuzzing).
The Open Web Platform Security team remained focused on mitigating injection attacks on the one hand, and improving isolation of sensitive content on the other. Q2 was exciting on both fronts!
We shipped an initial implementation of Trusted Types, which gives developers the ability to meaningfully combat DOM XSS, and nicely complements CSP's existing mitigations against other forms of injection. Google has deployed Trusted Types in high-value applications like My Google Activity, and we're excited about further rollouts. We also rolled out our first pass at two new isolation primitives: Cross-Origin Opener Policy and Cross-Origin Embedder Policy. Opting into these mechanisms improves our ability to process-isolate your pages, mitigating some impacts of Spectre and XSLeaks, which makes it possible to safely expose powerful APIs like SharedArrayBuffers.
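Returning to Trusted Types, here is a hedged sketch of what adoption looks like for a site (the policy name and the naive escaping below are placeholders, not a recommended sanitizer): once a page enforces Trusted Types via CSP, raw strings can no longer flow into sinks like innerHTML and must first pass through a policy.

    // Assumes the page is served with a CSP along the lines of:
    //   Content-Security-Policy: require-trusted-types-for 'script'
    const untrusted = '<img src="x" onerror="alert(1)">';
    const policy = (window as any).trustedTypes?.createPolicy('app-policy', {
      createHTML: (input: string) => input.replace(/</g, '&lt;'), // placeholder sanitizer
    });
    if (policy) {
      // Assigning the resulting TrustedHTML is allowed; assigning `untrusted`
      // directly would be rejected under the policy above.
      document.body.innerHTML = policy.createHTML(untrusted);
    }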
The Chrome Security Architecture team has started Origin Trials for opt-in origin isolation, allowing origins to use separate processes from the rest of their site. We have also made progress on securing extension content script requests and enforcements for request initiators, and we improved the update mechanism for Android Site Isolation's list of isolated sites. Much of Q2 was spent on cleanup and documentation, though, particularly test infrastructure and flaky test improvements. Finally, we also contributed to MiraclePtr efforts to reduce memory bugs, and we helped more teams use WebUI by adding support for web iframes.
In the world of the Web PKI, TLS certificates issued from default-trusted CAs after 2020-09-01 will be rejected if their lifetime is greater than 398 days, beginning with Chrome 85. See the documentation and FAQ. This is part of a number of changes adopted by the CA/Browser Forum with unanimous support from major browsers, which align the Baseline Requirements with many existing browser root program requirements.
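As a back-of-the-envelope illustration only (this is not Chrome's implementation, and real certificate verification handles many more details), the new rule amounts to comparing a certificate's validity window against the 398-day cap for anything issued on or after 2020-09-01:

    const MAX_VALIDITY_DAYS = 398;
    const ENFORCEMENT_START = new Date('2020-09-01T00:00:00Z');

    // Returns true if a certificate with this validity window falls foul of
    // the lifetime limit described above.
    function exceedsLifetimeLimit(notBefore: Date, notAfter: Date): boolean {
      const days = (notAfter.getTime() - notBefore.getTime()) / (24 * 60 * 60 * 1000);
      return notBefore >= ENFORCEMENT_START && days > MAX_VALIDITY_DAYS;
    }

    // A two-year certificate issued in October 2020 would be rejected:
    console.log(exceedsLifetimeLimit(new Date('2020-10-01'), new Date('2022-10-01'))); // true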
We continued informal cross-browser collaboration and met with the European Union on their eIDAS Regulation, exploring how certificates can be used to provide identity information for domains in a manner consistent with the Web Platform.
Until next time, on behalf of Chrome Security, I wish you all the very best.
Andrew
Q1 2020
Greetings,
Amongst everything the first quarter of 2020 has thrown at the world, it has underlined the crucial role the web plays in our lives. As always, the Chrome Security teams have been focusing on the safety of our users, and on keeping Chrome secure and stable for all those who depend on it.
The Chrome Safe Browsing team, with the support of many teams, introduced a new Safe Browsing mode that users can opt-in to get “faster, proactive protection against dangerous websites, downloads, and extensions.”
We launched the previously announced faster phishing protection to Chrome users on high-memory Android devices. This led to a 116% increase in the number of phishing warnings shown to users for main-frame URLs.
We also launched predictive phishing protections to all users of Chrome Password Manager on Android, which warn users when they type a saved password on an unsafe website. The initial estimate from the Beta launch suggests an 11% increase in the number of warnings shown, compared to Windows.
Chrome's Enamel team finalized plans to bring users a more secure HTTPS ecosystem by blocking mixed content, mixed downloads, and legacy TLS versions. These changes have now been delayed due to changing global circumstances, but are still planned for release at the appropriate time.
To improve how users understand website identity, we experimented with a new security indicator icon for insecure pages. We also experimentally launched a new warning for sites with spoofy-looking domain names. We’re now analyzing experiment results and planning next steps for these changes.
The Platform Security team made significant forward progress on enabling the network service to be sandboxed on all platforms (it already is on macOS). This required getting significant changes into Android R, migrating to a new way of using the Data Protection API on Windows (which had the side-effect of breaking some crime rings’ operations, albeit temporarily), and more. When complete, this will reduce the severity of bugs in that service from Critical to High.
We also made progress on Windows sandboxing, working towards adopting AppContainer, and are refactoring our Linux/Chrome OS sandbox to handle disruptive upstream changes in glibc and the kernel.
Discussions about the various ways we can improve memory safety continue, and we laid plans to migrate PDFium’s XFA support to Oilpan garbage collection, with the help of Oilpan and V8 teams. This will enable us to safely ship XFA in production, hopefully in 2020.
The Bugs-- team launched FuzzBench, a fuzzer benchmarking platform to bridge the gap between academic fuzzing research and industry fuzzing engines (e.g. libFuzzer, AFL, Honggfuzz). We integrated new techniques in ClusterFuzz to improve fuzzing efficiency and break coverage walls: dataflow-trace-based fuzzing and in-process grammar mutators (Radamsa, Peach). We also launched CIFuzz for OSS-Fuzz projects to catch obvious security regressions in a project's continuous integration before they are checked in.
The Chrome Security Architecture (née Site Isolation) team has been strengthening Site Isolation this quarter. We're securing extension content script requests to unify CORS and CORB behavior, and we're progressing with a prototype to let websites opt in to origin-level isolation. To improve Chrome's security architecture, the team is working on a proposal for a new SecurityPrincipal abstraction. We have also cleaned up RenderWidget/RenderView lifetimes. Finally, we are starting to formalize our thinking about privilege levels and their interactions in Chrome. We are enumerating problem spots in IPC and other areas as we plan the next large projects for the team.
For the past five years, Chrome, along with counterparts at browser vendors such as Mozilla, Microsoft, Apple, Opera, and Vivaldi, has been discussing technical challenges involved in the eIDAS Regulation with members of the European Commission, ETSI, and the European Union Agency for Cybersecurity (ENISA). These discussions saw more activity this past quarter, with browsers publicly sharing an alternative technical proposal to the current ETSI-defined approach, in order to help the Commission make the technology easier to use and interoperate with the web and browsers.
We announced Chrome’s 2020 Certificate Transparency plans with a focus on removing “One Google Log” policy dependency. Pending updates to travel policy, we have tentatively planned CT Days 2020 and sent out an interest survey for participants.
Until next time, on behalf of Chrome Security I wish you all the very best.
Andrew
Q4 2019
As we start 2020 and look forward to a new year and a new decade, the Chrome Security Team took a moment to look back at the final quarter of 2019.
The Safe Browsing team launched two features that significantly improve phishing protections available to Chrome users:
We reduced the false negative rate for Safe Browsing lookups in Chrome by launching real-time Safe Browsing lookups for users who have opted in to “Make Searches and Browsing better.” Early results are promising, with up to 55% more warnings shown to users who had this protection turned on, compared to those who did not.
A while ago we launched predictive phishing protections to warn users who are syncing history in Chrome when they enter their Google Account password into suspected phishing sites that try to steal their credentials. With Chrome 79, we expanded this protection to everyone signed in to Chrome, even if you have not enabled Sync. In addition, this feature will now work for all the passwords that the user has stored in Chrome's password manager; this will show an estimated 10 times more warnings daily.
We also had two telemetry-based launches on Android: sending pings to Safe Browsing when users who have opted into Safe Browsing Extended Reporting focus on password fields, and when they reuse their passwords.
HTTPS adoption has risen dramatically, but many https:// pages still include http:// subresources — known as mixed content. In October, the Usable Security team published a plan to eradicate mixed content from the web. The first phases of this plan started shipping in Chrome 79, where we relocated the setting that allows users to load mixed content when it's blocked by default. This setting used to be a shield icon in the omnibox, and is now available in Site Settings instead. In Chrome 80, mixed audio and video will be automatically upgraded to https://, and they will be blocked if they fail to load. We started work on a web standard to codify these changes. See this article for how to fix mixed content if you run an affected website.
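For readers newer to the topic, a minimal sketch of what counts as mixed content (the URLs are made up):

    // On a page served from https://shop.example/, this image is "mixed
    // content": the page is secure but the subresource is fetched over plain
    // HTTP. Under the plan above, Chrome either auto-upgrades such requests to
    // https:// or blocks them.
    const logo = document.createElement('img');
    logo.src = 'http://cdn.example/logo.png';
    document.body.appendChild(logo);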
Website owners should keep their HTTPS configurations up-to-date with the latest security settings. Back in 2018, we (alongside other browsers) announced plans to remove support for legacy TLS versions 1.0 and 1.1. In October, we updated these plans to announce the specific UI treatments that we’ll use for this deprecation. Starting in January 2020, Chrome 79 will label affected websites with a “Not Secure” chip in the omnibox. Chrome 81 will show a full-page error. Make sure your server supports TLS >=1.2 to avoid this warning treatment.
To continue to polish our security UI, we iterated on our warning for lookalike domains to make the warning more understandable. We introduced a new gray triangle icon for http:// sites to make a clearer distinction between http:// and https://. This icon will appear for some users as part of a small-scale experiment in Chrome 80. Finally, we cleaned up a large backlog of low severity security UI vulnerabilities. We fixed, closed, or removed visibility restrictions on 33 out of 42 bugs.
The Platform Security Team sandboxed the network service on macOS in Chrome 79, and continued the work on sandboxing it on other Desktop platforms. There is also some forward momentum for reducing its privilege in version R of Android.
You can now check the sandboxing state of processes on Windows by navigating to chrome://sandbox. Also on Windows, we experimented with enabling the renderer App Container but ran into crashes likely related to third party software, and are now working to improve error reporting to support future experimentation. Chrome 79 also saw Code Integrity Guard enabled on supported Windows versions, blocking unsigned code injection into the renderer process.
We have also begun investigating new systemic approaches to memory unsafety. Look for news in 2020, as well as continual improvements to the core libraries in Chromium and PDFium.
In Q4, the Bugs-- team moved closer to our goal of achieving 50% fuzzing coverage in Chrome (it's currently at 48%). We added new features to our ClusterFuzz platform, such as Honggfuzz support, libFuzzer support for Android, improved fuzzer weights, and a more accurate statistics-gathering pipeline. We also enabled several new UBSan features across both Chrome and OSS-Fuzz. As part of OSS-Fuzz, we added Go language support and on-boarded several new Go projects. We also gave a talk about the ClusterFuzz platform at Black Hat Europe.
In conversation with our friends and colleagues at Mozilla over the course of Q4, the Open Web Platform Security team made substantial progress on Cross-Origin-Opener-Policy and Cross-Origin-Embedder-Policy. These isolation primitives will make it possible for us to ensure that process isolation is robust, even as we ship new and exciting APIs that give developers more capability. Implementations of both are mostly complete behind a flag, and we're looking forward to getting them out the door, and to beginning the process of relying upon them when deciding whether to allow cross-thread access to shared memory.
Similarly, we're polishing our implementation of Trusted Types based on feedback from origin trials and other vendors' review of the spec. We're still excited about its potential for injection mitigation, and we're looking forward to closing out the last few issues we know about in our implementation.
The Site Isolation team posted to the Google Security Blog and the Chromium Blog about our recent milestones for Site Isolation on Android and defending against compromised renderer processes. We also gave a talk at Black Hat Europe about Site Isolation and how to look for new bypasses for the VRP. At the same time, we made progress on additional enforcement, and we ran experiments to expand Android coverage to more devices. Finally, we also used Q4 to clean up a lot of core Site Isolation code, and we started updating Chrome's WebUI framework to better support new types of Chrome features without large risks of privilege escalation.
In the world of Web PKI Security, as part of our ongoing collaboration with Microsoft and Mozilla on the Common CA Database, "Audit Letter Validation" is now enabled for the full set of publicly trusted Certificate Authorities. This tool, developed by Microsoft and Mozilla, automatically validates the contents of audit letters to ensure they include the information required of a publicly trusted CA. Audit letter validation was previously done by hand, which did not scale to the CAs' 2,500+ intermediate certificates.
Audit Letter Validation enabled us and other root stores to detect a wide variety of issues in the Web PKI that had previously gone unnoticed. We’ve spent the past quarter leading the incident response effort, working with non-compliant CAs to remediate issues and mitigate future risk. This helps not only Chrome users, but all users who trust these CAs. We can now automatically detect issues as they happen, ensuring prompt remediation.
We also collaborated with Mozilla to provide detailed reviews of organizations applying to become CAs, completing several in Q4. These public reviews take an extremely detailed look at how the CA is operated, examining both compliance and risky behaviour not explicitly forbidden, as well as opportunities for improvement based on emerging good practices.
Certificate Transparency (CT) continues to be an integral part of our work. Beyond helping protect users by allowing quick detection of potentially malicious certificates, the large-scale analysis that CT enables has been essential in helping improve the Web PKI. Analysis of CT logs this quarter revealed a number of systemic flaws in how Extended Validation certificates are validated, which has spurred industry-wide effort to address these issues.
We took steps to protect users from trusting harmful certificates that might be installed by software or which they might be directed to install. Working with the Enamel team, we built on steps we’d previously taken to protect users from certificates used to intercept their communications by adding the ability to rapidly deploy targeted protections via our CRLSet mechanism. CRLSets allow us to quickly respond, using the Component Updater, without requiring a full Chrome release or respin.
More generally, we continue to work on the “patch gap”, where security bug fixes are posted in our open-source code repository but then take some time before they are released as a Chrome stable update. We now make regular refresh releases every two weeks, containing the latest severe security fixes. This has brought down the median “patch gap” from 33 days in Chrome 76 to 15 days in Chrome 78, and we continue to work on improving it.
Finally, you can read what the Chrome (and other Google) Vulnerability Rewards Programs have been up to in 2019 in our recent blog post.
Cheers,
Andrew, on behalf of the Chrome security team
Q3 2019
Greetings!
With the equinox behind us, it's time for an update on what the Chrome security team has been up to in the third quarter of 2019.
The Chrome Safe Browsing team launched Stricter Download Protections for Advanced Protection users in Chrome, significantly reducing those users' exposure to potentially risky downloads.
In Q3, Safe Browsing also brought Google password protection to signed in, non-sync users. This project is code complete, and the team plans to roll it out in Chrome 79.
Enamel, the Security UX team, has been looking at mixed content: http:// subresources on https:// pages. Mixed content presents a confusing UX and a risk to user security and privacy. After a long-running data-gathering experiment on pre-stable channels, the Enamel team publicized plans to start gradually blocking mixed content. In Chrome 79, the team plans to relocate the setting to bypass mixed content blocking from a shield icon in the omnibox to Site Settings. In Chrome 80, we will start auto-upgrading mixed audio and video to https://, blocking those resources if they fail to auto-upgrade. Chrome 80 will also introduce a "Not Secure" omnibox chip for mixed images, which we plan to start auto-upgrading in a future version of Chrome.
Furthering our quest to improve the quality of HTTPS deployments, we announced a new UI plan for the upcoming legacy TLS deprecation in early 2020.
In Q3, Enamel also made improvements to our lookalike domain warning, with clearer strings and new heuristics for detecting spoofing attacks. We also added additional signals in our Suspicious Site Reporter extension for power users to identify suspicious sites that they can report to Safe Browsing for scanning. In Chrome 77, we relocated the Extended Validation certificate UI to Page Info; we presented the user research that inspired this change at USENIX Security 2019.
The Platform Security team continues to help improve the memory safety of the PDFium code base, and has finished removing all bare new/delete pairs and ad-hoc refcounting. We continued to push for greater memory safety on a number of fronts, and are busy working on plans for the rest of the year and 2020. Q3 saw a number of projects enter trials on Beta and Stable, including the V2 sandbox for the GPU process and the network service sandbox on macOS, and Code Integrity Guard on Windows. Look out for news of their launch in next quarter's update!
The XSS Auditor, which attempted to detect and prevent reflected XSS attacks, was removed in Chrome 78. It had a number of issues, and in the end the cons outweighed the pros.
The Bugs-- team added FuzzedDataProvider (FDP) as part of Clang, making it simple to write fuzz targets that require multiple inputs with just a single header file include. We refactored ClusterFuzz code to make it easier to add new fuzzing engines, and migrated libFuzzer to use this new interface. We rewrote the ClusterFuzz reproduce tool, which is now part of the main ClusterFuzz GitHub repo. On the OSS front, we launched new features in OSS-Fuzz: Go support, x86 config support, FDP support, and OSS-Fuzz badges. We also adjusted fuzzer strategy weights based on multi-armed bandit experiments. Jonathan Metzman presented at Black Hat (USA) on structure-aware fuzzing.
The Open Web Platform Security team has been working on Trusted Types, the Origin Trial for which is about to finish. We are making a number of changes to the feature, mainly to aid the deployment and debugging of Trusted Types, along with some overall simplifications. We expect this work to finish in early Q4, and to launch in the same quarter.
The Site Isolation team reached two more important milestones in Q3. First, we enabled Site Isolation for password sites on Chrome for Android (on devices with at least 2GB of memory), bringing Spectre mitigations to mobile devices! Second, we added enough compromised-renderer protections on Chrome for Desktop that cross-site data disclosure is now included in the Chrome VRP! We're very excited about the new protections, and we continue to improve the defenses on both Android and Desktop. Separately, we presented our USENIX Security paper in August and launched OOPIF-based PDF support, clearing the way to remove BrowserPlugin.
In the Web PKI space, the government of Kazakhstan recently created a Root CA and, with local ISPs, engaged in a campaign to encourage all KZ citizens to install and trust the CA. RIPE Atlas detected this CA being used to man-in-the-middle social media sites. Chrome blocked this certificate to prevent it from being used to MITM Chrome users. In conjunction with several other major browsers, we made a joint PR statement against this type of intentional exploitation of users. Following this incident, we began working on a long-term solution for handling MITM CAs in Chrome.
In hacker philanthropy news, in July we increased the amounts awarded to security researchers who submit security bugs to us under the Chrome Vulnerability Reward Program. The update aligned both categories and amounts with the areas we'd like researchers to focus on. This generated some good press coverage which should help spread the word about the Chrome VRP. Tell your friends, and submit your Chrome security bugs here and they'll be considered for a reward when they're fixed!
In Chrome security generally we've been working to address an issue called the “patch gap”, where security bug fixes are posted in our open-source code repository but then take some time before they are released as a Chrome stable update. During that time, adversaries can use those fixes as evidence of vulnerabilities in the current version of Chrome. To reduce this problem, we’ve been merging more security fixes directly to stable, and we’re now always making a security respin mid-way through the six-week development cycle. This has reduced the median patch gap from ~33 days in Chrome 76 to ~19 days in Chrome 77. This is still too long, and we’re continuing to explore further solutions.
Cheers,
Andrew, on behalf of the Chrome security team
Q2 2019
Greetings,
With 2019 already more than 58% behind us, here's an update on what Chrome Security was up to in the second quarter of this year.
Chrome Safe Browsing is launching stricter download protections for Advanced Protection users; internal testing (teamfood) of the policy began in M75, and the feature will launch broadly with M76. This significantly reduces an Advanced Protection user's exposure to potentially risky downloads by showing them warnings when they try to download "risky" files (executable files that haven't been vetted by Safe Browsing) in Chrome.
Users need to understand site identity to make safe decisions on the web. Chrome Security UX published a USENIX Security paper exploring how users understand modern browser identity indicators. To help users understand site identity from confusing URLs, we launched a new warning detecting domains that look similar to domains you’ve visited in the past. We published a guide to how we triage spoofing bugs involving such domains. We also built a Suspicious Site Reporter extension that power users can use to report deceptive sites to Google’s Safe Browsing service, to help protect non-technical users who might not be able to discern a deceptive site’s identity as well.
Site identity is meaningless without HTTPS, and we continue to promote HTTPS adoption across the web. We implemented an experimental flag to block high-risk nonsecure downloads initiated from secure contexts. And we continued to roll out our experiment that auto-upgrades mixed content to HTTPS, pushing to 10% of beta channel and adding new metrics to quantify breakage.
In addition to helping with the usual unfaltering flow of security launch reviews, Platform Security engineers have been continuing to investigate ways to help Chrome engineers create fewer memory safety bugs for clusterfuzz to find. While performance is a concern when adding checks to libraries, some reports of regressions nicely turned out to be red herrings. On macOS, Chrome executables are now signed with the hardened runtime options enabled. Also on macOS, the change to have Mojo use Mach IPC, rather than POSIX file descriptors/socket pairs, is now fully rolled out. On Windows, we started to enable Arbitrary Code Guard on processes that don't need dynamic code at runtime.
We've done a lot of analysis on the types of security bugs which are still common in Chromium. The conclusion is that memory safety is still our biggest problem, so we've been working to figure out the best next steps to solve that, both in terms of safer C++ and by investigating whether we can parse data in a safe language without disrupting the Chromium development environment too much.
We've also been looking at how security fixes are released, to ensure fixes get to our users in the quickest possible way. In addition, we have improved some of the automatic triage that ClusterFuzz does to make sure that bugs get the right priority.
To augment our fuzzing efforts and find vulnerabilities matching known bad patterns, we have decided to invest in static code analysis with Semmle. We have written custom QL queries and reported 15 bugs so far (some of them developed in collaboration with Project Zero).
We have made several changes to improve fuzzing efficiency, including leveraging DFSan for focused mutations, support for custom mutators, build-type optimizations (sanitizers without instrumentation), and libFuzzer fork mode on Windows. We have upstreamed a helper module in libFuzzer to make it easy to split fuzz input and decrease fuzz target complexity.
The Open Web Platform Security team was mainly focused on Trusted Types, and conducted an Origin Trial for the feature in Q2. The team is presently scrambling to address the issues raised by public feedback, to modify the feature to make it easier to deploy, and to generally make Trusted Types fit for a full launch.
The Site Isolation team published their Usenix Security 2019 paper about the desktop launch (Site Isolation: Process Separation for Web Sites within the Browser), which will be presented in August. We now have a small Stable channel trial of Android Site Isolation, which isolates the sites that users log into rather than all sites. That work included persisting and clearing the sites to isolate, fixing text autosizing, and adding more metrics. Separately, we ran a trial of isolating origins rather than sites to gauge overhead, and we helped ship Sec-Fetch-Site headers. We also started collecting data on how well CORB is protecting sensitive resources in practice, and we've started launch trials of out-of-process iframe based PDFs (which adds CORB protection for PDFs).
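On the Sec-Fetch-Site work: Chrome now attaches that header to outgoing requests, which lets servers refuse requests initiated by other sites. A hedged sketch using Node's built-in http module (the policy and paths are illustrative, not a complete defense):

    import * as http from 'http';

    http.createServer((req, res) => {
      // Chrome populates this header with values such as 'same-origin',
      // 'same-site', 'cross-site', or 'none'.
      const site = req.headers['sec-fetch-site'];
      if (req.method === 'POST' && site === 'cross-site') {
        res.statusCode = 403;
        res.end('cross-site POST rejected');
        return;
      }
      res.end('ok');
    }).listen(8080);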
The Chrome OS Security team has been working on the technology underlying Chrome OS verified boot. Going forward, dm_verity will use SHA256 as its hashing algorithm, replacing SHA1. So long, weak hashing algorithm!
We also spent some time making life easier for Chrome OS developers. Devs now have access to a time-of-check-time-of-use safe file library, and a simplified mechanism for building system call filtering policies.
Cheers,
Andrew, on behalf of the Chrome security team
Q1 2019
Greetings,
Here's an update on what Chrome Security was up to in the first quarter of 2019!
The Site Isolation team finished the groundwork for Android Beta Channel field trials, and the trials are now in progress. This Android mode isolates a subset of sites that users log into, to protect site data with less overhead than isolating all sites. We also started enforcing Cross-Origin Read Blocking for extension content script requests, maintaining a temporary allowlist for affected extensions that need to migrate. We tightened compromised renderer checks for navigations, postMessage, and BroadcastChannel. We also continued cross-browser discussions about Long-Term Web Browser Mitigations for Spectre, as well as headers for isolating pages and enabling precise timers. Finally, we are close to migrating PDFs from BrowserPlugin to out-of-process iframes, allowing BrowserPlugin to be deleted.
In the last several years, the Usable Security team has put a lot of effort into improving HTTPS adoption across the web, focusing on getting top sites to migrate to HTTPS for their top-level resources. We're now starting to turn our attention to insecure subresources, which can harm user security and privacy even if the top-level page load is secure. We are currently running an experiment on Canary, Dev, and Beta that automatically upgrades insecure subresources on secure pages to HTTPS. We also collected metrics on insecure downloads in Q1 and have started putting together a proposal to block high-risk insecure downloads initiated from secure pages.
People need to understand website identity to make good security and trust decisions, but lots of research suggests that they don’t. We summarized our own research and thinking on this topic in an Enigma 2019 talk. We open-sourced a tool that we use to help browser developers display site identity correctly. We also published a set of URL display guidelines and subsequently incorporated them into the URL standard.
The Safe Browsing team increased coverage against malware and unwanted software downloads by changing the logic for which file types are checked against Safe Browsing. We flipped the heuristic to an allow-list of known-safe file extensions and made the rest require verification. This adds protection both from uncommon file extensions (where attackers convince users to rename them to a common executable after scanning) and from Office document types, where the incidence of malware has increased significantly.
The Chrome Cleanup Tool is now in the Chromium repository! This lets the public audit the data collected by the tool, which is a win for user privacy, and gives an example of how to sandbox a file scanner. The open source version includes a sample scanner that detects only test files, while the version shipped in Chrome will continue to depend on internal resources for a licensed engine.
The Bugs-- team has open sourced ClusterFuzz, a fuzzing infrastructure that we have been developing over the past 8 years! This army of robots has found 30,000+ bugs in Chrome and 200+ open source projects. To improve the efficiency of our cores, we have developed automated fuzzer weights management based on fuzzer quality, freshness, and code changes. Additionally, we have developed several new WebGL fuzzers (some of them leverage GraphicsFuzz) and found 63 bugs. We have significantly scaled up fuzzing Chrome on Android (x86) by using Cuttlefish over GCE. Lastly, we have transitioned Chrome code coverage tools development to the Chrome Infra team; see the new dashboard here.
The Platform Security team added some checks for basic safety to our base and other fundamental libraries, and are investigating how to do more while maintaining efficiency (run-time, run space, and object code size). We hope to continue to do more, as well as investigate how to use absl without forgoing the safety checks. We’ve been having great success with this kind of thing in PDFium as well, where we’ve found that the compiler can often optimize away these checks, and investigating where it hasn’t been able to has highlighted several pre-existing bugs. On macOS, we have re-implemented the Mojo IPC Channel under the hood to use Mach IPC, which should help reduce system resource shortage crashes. This also led to the development of two libprotobuf-mutator (LPM) fuzzers for Mach IPC servers. We’re working on auto-generating an LPM based fuzzer from Mojo API descriptions to automatically fuzz Mojo endpoints, in-process. We also continue to write LPM fuzzers for tricky-to-reach areas of the code like the disk cache. We are also investigating reducing the privilege of the network process on Windows and macOS.
Our next update will be the first full quarter after joining Chrome Trust and Safety. We're looking forward to collaborating with more teams who are also working to keep our users safe!
Cheers,
Andrew on behalf of Chrome Security
Q4 2018
Greetings,
With the new year well underway, here's a look back at what Chrome Security was up to in the last quarter of 2018.
In our quest to make HTTPS the default, we started marking HTTP sites with a red Not Secure icon when users enter data into forms. This change launched to stable in Chrome 70 in October. A new version of the HTTPS error page also launched to the stable channel as an experiment: it looks the same but is much improved under the hood. We built a new version of the HTTPS Transparency Report for top sites; the report now displays aggregate statistics for the top sites instead of individual sites. We also built a new interstitial warning to notify Chrome users of unclear mobile subscription billing pages. The new warning and policy launched in Chrome 71.
The Bugs-- team ported libFuzzer to work on Windows, which previously lacked coverage-guided fuzzing support; this resulted in 93 new bugs. We hosted a month-long Fuzzathon in November, focused on improving fuzz coverage for Chrome's browser process and Chrome OS. This effort led to 85 submissions and 157 bugs. We added more automation to auto-adjust the CPU cycles allocated to various fuzzers based on code coverage changes and the recency of fuzzer submission. Lastly, we added Linux x86 fuzzing configurations (1, 2) for libFuzzer, which resulted in 100 new bugs.
In Platform Security, we started sandboxing the network service on macOS. On Windows, we’re starting to experiment with an improved GPU sandbox. The network service has the beginnings of a sandbox on Windows, and we’ll be working on tightening it in future work. We’re also continuing to gradually harden the implementations of core Chromium libraries in base/ and elsewhere. We had a great adventure finding and fixing bugs in SQLite as well, including an innovative and productive new fuzzer. We’re continuing to hammer away at bugs in PDFium, and refactoring it significantly.
To help sites defend against cross-site scripting (XSS), we are working on Trusted Types. This aims to bring a derivative of Google's "Safe HTML Types" — which relies on external tooling that may be incompatible with existing workflows or code bases — directly into the web platform, thus making it available to everyone. Both Google-internal and external teams are presently working on integrating Trusted Types into existing frameworks which, if successful, offers the chance to rapidly bring this technique to large parts of the web. Chrome 73 will see an origin trial.
The work on Site Isolation continues as we focus on enabling it on Android — support for adding isolated origins at runtime, fixing issues with touch events, and balancing process usage for maximizing stability. We added improvements to CORB to prevent bypasses from exploited renderers, we announced extensions changes for content script requests, and we reached out to affected authors with guidance on how to update. Additionally, we continue to add more enforcements to mitigate compromised renderers, which is the ultimate end goal of the project. Last but not least, we have worked to improve code quality and clean up architectural deficiencies which accumulated while developing the project.
Chrome OS 71 saw the initial, limited release of USBGuard, a technology that improves the security of the Chrome OS lock screen by (carefully) blocking USB devices on the lock screen.
As ever, many thanks to all those in the Chromium community, and our VRP reporters, who help make the Web more secure!
Cheers,
Andrew, on behalf of the Chrome security team
Q3 2018
Greetings!
Chrome turned 10 in September! Congrats to the team on a decade of making the web more secure.
In the quest to find security bugs, the Bugs-- team incorporated machine learning into the ClusterFuzz infrastructure, using an RNN model to improve corpus quality and code coverage. We experimented with improving fuzzing efficiency by adding instability handling and mutation stats strategies inside libFuzzer. We added a new Mojo service fuzzer by extending the Mojo JavaScript bindings, and found security bugs with it. We also migrated our fuzzing infrastructure to provide Clang Source-based Code Coverage reports and deprecated Sancov.
The Platform Security team continued to add hardening and checks to fundamental classes and libraries in base/, and did some of the same work in PDFium and other parsers and interpreters in Chromium. We also provided some sandboxing consulting to other teams for their new services including audio and networking.
Chrome on macOS now has a new sandbox architecture, launched in Chrome 69, which immediately initializes when a new process executes. This reduces Chrome’s attack surface and allows better auditing of system resource access between macOS versions.
Chrome OS Security wrapped up the response to the L1TF vulnerability, fixes for which enabled shipping Linux apps on Chrome OS without exposing users to extra risk. Moreover, we received an (almost) full-chain exploit for Chrome OS that both validated earlier sandboxing work (like for Shill, Chrome OS’s connection manager) and also shed light on further hardening work that was wrapped up in Q3.
Chrome 70 shipped TLS 1.3, although we did have to disable a downgrade check in this release due to a last-minute incompatibility with some network devices.
After the excitement enabling Site Isolation by default on desktop platforms in Q2, the team has been focused on building a form of Site Isolation suitable for devices that run Android, which have more limited memory and processing power. We've been fixing Android-specific issues (alongside a lot of maintenance for the desktop launch), we have started field trials for isolating a subset of sites, and we are working on ways to add more sites to isolate at runtime. Separately, we added several more enforcements to mitigate compromised renderers, to extend the protection beyond Spectre.
Users should expect that the web is safe by default, and they’ll be warned when there’s an issue. In Chrome 68, we hit a milestone for Chrome security UX, marking all HTTP sites as “not secure”. We continued down that path in Chrome 70, showing the “not secure” string in red when users enter data on an HTTP page. We began stepping towards removing Chrome’s positive security indicators so that the default unmarked state is secure, starting by removing the “Secure” wording in Chrome 69.
We would like to experiment with mixed content autoupgrading to simplify (i.e. improve) the user experience, and are currently collecting metrics about the impact. We’re also working to improve Chrome security UX under the hood -- we launched committed HTTPS interstitials on Canary and Dev.
As ever, many thanks to all those in the Chromium community, and our VRP reporters, who help make the Web more secure!
Cheers,
Andrew, on behalf of the Chrome security team
Q2 2018
Greetings and salutations,
It's time for another (rather belated!) update from your friends in Chrome Security, who are hard at work to keep Chrome the most secure platform to browse the Internet.
We're very excited that Site Isolation is now enabled by default as a Spectre mitigation in M67 for Windows, macOS, Linux, and Chrome OS users! This involved an incredible number of fixes from the team in Q2 to make out-of-process iframes fully functional, especially in areas like painting, input events and performance, and it included standardizing Cross-Origin Read Blocking (CORB). Stay tuned for more updates on Site Isolation coming later this year, including additional protections from compromised renderers. Chris and Emily talked about Spectre response, Site Isolation, and necessary developer steps at I/O. We also announced that security bugs found in Site Isolation could qualify for higher VRP reward payments for a limited time.
In their quest to find security bugs, the Bugs-- team integrated Clang Source-based Code Coverage into the Chromium project and launched a dashboard to make it easy for developers to see which parts of the code are not covered by fuzzers and unit tests. We wrote a Mojo service fuzzer that generates fuzzing bindings in JS and found some scary vulnerabilities. We added libFuzzer fuzzing support in Chrome OS, got new fuzz target contributions from Chrome OS developers, and found several bugs. We made numerous improvements to our ClusterFuzz fuzzing infrastructure; examples include dynamically adjusting CPU allocation for inefficient fuzz targets until their performance issues are resolved, cross-pollinating corpora across fuzz targets and projects, and more.
The Platform Security team has been working on adding bounds checks and other sanity checks to base/containers, as part of an overarching effort to harden heavily-used code and catch bugs. We’ve had some good initial success and expect to keep working on this for the rest of the year. This is a good area for open source contributors and VRP hunters to work on, too!
In our quest to move the web to 100% HTTPS, we prepared for showing Not Secure warnings on all http:// pages which started in M68. We sent Search Console messages to affected sites and expanded our enterprise controls for this warning. We announced some further changes to Chrome’s connection security indicators: in M69, we’ll be removing the Secure chip next to https:// sites, and in M70 we’ll be turning the Not Secure warning red to more aggressively warn users when they enter data on a non-secure page.
We also added some features to help users and developers use HTTPS more often. The omnibox now remembers pages that redirect from http:// to https://, so that users don’t get sent to the http:// version in the future. We fixed a longstanding bug with the upgrade-insecure-requests CSP directive that helps developers find and fix mixed content: it now upgrades requests when following redirects. Finally, we added a setting to chrome://flags#unsafely-treat-insecure-origin-as-secure to let developers more easily test HTTPS-only features, especially on Android and ChromeOS.
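As a small, hedged example of that CSP directive in action (Node's built-in http module, with made-up URLs): a page served with upgrade-insecure-requests has its http:// subresources, now including those reached via redirects, fetched over https:// instead.

    import * as http from 'http';

    http.createServer((req, res) => {
      res.setHeader('Content-Security-Policy', 'upgrade-insecure-requests');
      res.setHeader('Content-Type', 'text/html');
      // The browser rewrites this request to https://cdn.example/logo.png
      // before sending it, instead of flagging it as mixed content.
      res.end('<img src="http://cdn.example/logo.png">');
    }).listen(8080);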
To better protect users from unwanted extensions, we announced the deprecation of inline installation for extensions. This change will result in Chrome users being directed to the Chrome Web Store when installing extensions, helping to ensure users can make a better-informed decision.
Chrome OS spent a big chunk of Q2 updating and documenting our processes to ensure we can better handle future incidents like Spectre and Meltdown. We expanded our security review guidelines so that they can be used both by security engineers while reviewing a feature, as well as by SWE and PM feature owners as they navigate the Chrome OS launch process.
We continued our system hardening efforts by making Shill, the Chrome OS network connection manager, run in a restrictive, non-root environment starting with M69. Shill was exploited as part of a Chrome OS full-chain exploit, so sandboxing it was something that we’ve been wanting to do for a long time. With PIN sign-in launching with M68, the remaining work to make the underlying user credential brute force protection mechanism more robust is underway, and we plan to enable it for password authentication later this year. Hardening work also happened on the Android side, as we made progress on functionality that will allow us to verify generated code on Android using the TPM.
Q2 continued to require incident response work on the Chrome OS front, as the fallout from Spectre and Meltdown included several researchers looking into the consequences of speculative execution. The good news is that we started receiving updated microcode for Intel devices and these updates will start to go out with M69.
As ever, many thanks to all those in the Chromium community, and our VRP reporters, who help make the Web more secure!
Cheers
Andrew
on behalf of the Chrome Security Team
Q1 2018
Greetings and salutations,
It's time for another update from your friends in Chrome Security, who are hard at work trying to keep Chrome the most secure platform to browse the Internet. We'd also like to welcome our colleagues in Chrome OS security to this update - you'll be able to hear what they've been up to each quarter going forward.
In our effort to find and fix bugs, we collaborated with the Skia team and integrated 21 fuzz targets into OSS-Fuzz for continuous 24x7 fuzzing on Skia trunk. So far, we have found 38 security vulnerabilities! We also added several new fuzz targets as part of a 2-week bug bash (e.g. a multi-msg Mojo fuzzer, an audio decoder fuzzer, an AppCache manifest parsing fuzzer, and JSON fuzzer improvements) and found an additional vulnerability through code review. We added libFuzzer support for Chrome OS and integrated it with ClusterFuzz. A sample puffin fuzzer found 11 bugs (including 2 security bugs). We made several improvements to the AFL fuzzing engine integration and fuzzing strategies. This brings it on par with libFuzzer in terms of the number of bugs found -- it's now ~3X more productive than before! We added support for building MSan-instrumented system libraries for newer Debian distros (1, 2).
To help users infected with unwanted software, we moved the standalone Chrome Cleanup Tool into Chrome. Scanning and cleaning Windows machines can now be triggered by visiting chrome://settings/cleanup. There was some misunderstanding on Twitter about why Chrome was scanning, which we clarified. We also pointed people to the unwanted software protection section of Chrome's privacy whitepaper so they can understand what data is and isn’t sent back to Google.
In our effort to move the web to 100% HTTPS, we announced that Chrome will start marking all HTTP pages with a Not Secure warning in July. This is a big milestone that concludes a multi-year effort to roll out this warning to all non-secure pages. Alongside that announcement, we added a mixed content audit to Lighthouse, an automated tool for improving webpage quality. This audit helps developers find and fix mixed content, a major hurdle for migrating to HTTPS. We also announced the deprecation of AppCache in nonsecure contexts.
In addition to MOAR TLS, we also want more secure and usable HTTPS, or BETTER TLS. With that goal in mind, we made changes to get better metrics about features intended to help users with client or network misconfigurations that break their HTTPS connections (like our customized certificate warnings). We also added more of these “helper” features too: for example, we now bundle help content targeted at users who are stuck with incorrect clocks, captive portals, or other configuration problems that interfere with HTTPS. Finally, we started preparing for Chrome’s upcoming Certificate Transparency enforcement deadline by analyzing and releasing some metrics about the state of CT adoption so far.
To help make security more usable in Chrome, we’re exploring how URLs are problematic. We removed https/http schemes and www/m subdomains from the steady-state omnibox, and we’re studying the impact of removing positive security indicators that might mislead or distract from the important security information in the origin.
Chrome OS Security had a busy Q1. The vulnerabilities known as Meltdown and Spectre were disclosed in early January, and a flurry of activity followed as we rushed to patch older kernels against Meltdown in Chrome OS 66, and incorporated Spectre fixes for ARM Chrome OS devices in Chrome OS 67. We also started codifying our security review guidelines in a HOWTO doc, to allow the larger Chrome OS team to better prepare for security reviews of their features. Moreover, after being bitten by symlinks and FIFOs used as part of several exploit chains, we finally landed symlink and FIFO blocking in Chrome OS 67. On the hardware-backed security front, we've split off the component that allows irreversible once-per-boot decisions into its own service, bootlockboxd. Finally, work is nearing completion for a first shipping version of a hardware-backed mechanism to protect user credentials against brute force attacks. This will allow PIN codes as a new authentication mechanism for Chrome OS meeting our authentication security guidelines, and we'll subsequently use it to upgrade password-based authentication to a higher security bar.
Spectre kept us busy on the Chrome Browser side as well. The V8 team landed a large number of JIT mitigations to make Spectre exploits harder to produce, and high resolution timers like SharedArrayBuffer were temporarily disabled; more details on our response here. In parallel, the Site Isolation team significantly ramped up efforts to get Site Isolation launched as a Spectre mitigation, since it helps avoid having data worth stealing anywhere in a compromised process. In Q1, we substantially improved support for the Site Isolation enterprise policies that launched prior to the Spectre disclosure, including:
- Reducing total memory overhead from 20% to 10%.
- Significantly improving input event handling in out-of-process iframes (OOPIFs).
- Significantly improving DevTools support for OOPIFs.
- Adding ChromeDriver support for OOPIFs.
- Adding support for printing OOPIFs.
- Improving rendering performance for OOPIFs.
- Starting standards discussions for Cross-Origin Read Blocking (CORB).
Thanks to these improvements, we have been running field trials and are preparing to launch the strict Site Isolation policy on desktop. We talked about much of this work at Google I/O.
Finally, we continue to work on exploit mitigations and other security hardening efforts. For example, Oilpan, Blink's garbage-collecting memory management system, removed its inline metadata, which makes it more difficult to overwrite with memory corruption bugs. This was the culmination of several years of effort, as performance issues were worked through. In Android P, we refactored the WebView zygote to become a child of the main app_process zygote, reducing memory usage and helping with the performance of future Site Isolation efforts. Members of Platform Security also helped coordinate the response to Spectre and Meltdown, and still managed to find time to conduct their routine reviews of new Chrome features.
Q4 2017
Greetings and salutations,
It's time for another update from your friends in Chrome Security, who are hard at work trying to make Chrome the most secure platform to browse the Internet. As it's the start of 2018, we reflected on a year’s worth of security improvements, and announced new stats around our VRP and Safe Browsing warnings.
Here are some highlights from the last quarter of 2017:
In an effort to find and fix bugs, we (Bugs--):
Wrote a new and easily extensible JavaScript fuzzer using Babel, which has found 100+ bugs in V8 (list) and in other browsers' JavaScript engines (list); a minimal sketch of the Babel-based mutation idea follows this list.
Started integrating Clang Source-based Code Coverage into the Chromium build system and are deprecating Sanitizer Coverage. Clang coverage is very precise, shows hit frequencies, and is much easier to visualize. You can follow progress here.
Hosted a month-long fuzzathon in October in which Chromium developers wrote new fuzz targets and fixed blockers for existing ones. This resulted in 93 bugs, several of them in previously uncovered areas of the codebase; results here.
Fixed several bugs in our automated owner and component assignment pipeline and expanded our builder infrastructure to archive builds more frequently, for more accurate blame results. Faster and more accurate bug triaging means faster fixes for users!
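For the curious, here is a minimal sketch of the Babel-based mutation idea behind that fuzzer. It is not the fuzzer itself (which generates and mutates programs far more aggressively); the boundary values and seed program below are purely illustrative.

```ts
// Hypothetical sketch of grammar-aware JS mutation with Babel, not the real fuzzer.
import { parse } from '@babel/parser';
import traverse from '@babel/traverse';
import generate from '@babel/generator';

// Parse a seed program, nudge its numeric literals toward boundary values,
// and print the mutated program back out as a new test case.
function mutateNumbers(seed: string): string {
  const interesting = [0, 1, 255, 2 ** 31 - 1, 2 ** 53 - 1];
  const ast = parse(seed);
  traverse(ast, {
    NumericLiteral(path) {
      path.node.value = interesting[Math.floor(Math.random() * interesting.length)];
    },
  });
  return generate(ast).code;
}

console.log(mutateNumbers('const a = [1, 2, 3]; a.length = 4;'));
```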
Other than fixing bugs, we (MOAR TLS, Enamel, Safe Browsing) also:
Blogged about the massive uptick in HTTPS we saw in 2017: 71 of the top 100 sites on the Web use HTTPS by default, up from 37 a year ago. We also announced our continuing platinum sponsorship of Let’s Encrypt.
Delivered a change (in M62, announced in April) extending the “Not Secure” omnibox warning chip to non-secure pages loaded while in Incognito mode and to non-secure pages after the user edits a form field.
Added an enterprise policy (in M65) for treating insecure origins as secure contexts, to help with development, testing, and intranet sites that are not secured with HTTPS.
More tightly integrated Chrome’s captive portal detection with operating system APIs. This feature helps users log in to captive portals (like hotel or airport wifi networks) rather than seeing unhelpful TLS certificate errors.
Launched predictive phishing protection to warn users when they’ve typed their Google password into a never-seen-before phishing site.
As always, we invest a lot in security architecture and exploit mitigations. Last quarter, we (Platform Security / Site Isolation):
Started rolling out the Mac Sandbox v2, bringing both greater security and cleaner code.
Refactored sandbox code out of //content to make it easier to use across the system.
Helped coordinate Chrome's response to the recently announced Spectre and Meltdown CPU vulnerabilities, and worked with the V8 team, who spearheaded Chrome's JavaScript and WebAssembly mitigations that are rolling out to users now.
Accelerated the rollout of Site Isolation as a mitigation for Spectre/Meltdown. Enabling Site Isolation reduces the amount of valuable cross-site data that can be stolen by such attacks. We're working to fix the currently known issues so that we can start enabling it by default.
Implemented cross-site document blocking (in M63) when Site Isolation is enabled. This ensures that cross-site HTML, XML, JSON, and plain text files are not given to a renderer process on subresource requests unless allowed by CORS (a simplified sketch of the blocking decision follows this list). Both --site-per-process and --isolate-origins modes are now available via enterprise policy in Chrome 63.
Ran field trials of --site-per-process and --isolate-origins on 50% of Chrome Canary instances to measure performance, fix crashes, and spot potential issues. In a separate launch involving out-of-process iframes and Site Isolation logic, https://accounts.google.com now has a dedicated renderer process in Chrome 63 to support upcoming requirements for Chrome Signin.
Landed a large number of functional and performance improvements for Site Isolation, including fixes for input events, DevTools, OAuth, hosted apps, and crashes, as well as same-site process consolidation, which reduces memory overhead.
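As a rough illustration of the cross-site document blocking mentioned above, the sketch below captures the core decision in simplified form. The type and function names are hypothetical; Chromium's real implementation also sniffs response bodies and has additional carve-outs.

```ts
// Simplified sketch of the cross-site document blocking decision (hypothetical
// names; the real logic also sniffs bodies and handles more edge cases).
interface ResponseInfo {
  mimeType: string;             // e.g. "text/html"
  corsAllowsInitiator: boolean; // did Access-Control-Allow-Origin permit the requester?
  sameSiteAsInitiator: boolean;
}

const PROTECTED_TYPES = ['text/html', 'text/xml', 'application/xml', 'application/json', 'text/plain'];

function shouldBlockFromRenderer(r: ResponseInfo): boolean {
  if (r.sameSiteAsInitiator) return false;      // same-site data may be delivered
  if (r.corsAllowsInitiator) return false;      // explicitly shared via CORS
  return PROTECTED_TYPES.includes(r.mimeType);  // keep sensitive document types out
}

// A cross-site JSON fetch without CORS never reaches the renderer process:
console.log(shouldBlockFromRenderer({
  mimeType: 'application/json',
  corsAllowsInitiator: false,
  sameSiteAsInitiator: false,
})); // true
```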
To help users who inadvertently install unwanted software, we (Chrome Protector):
Launched a new Chrome Cleanup Tool UI, which we think is more comprehensible for users.
Launched a sandboxed, ESET-powered Chrome Cleanup Tool, which was running for 100% of Chrome users by Nov 23.
Lastly, we (BoringSSL) deployed TLS 1.3 to Chrome Stable for a couple of weeks in December and gathered valuable data.
As ever, many thanks to all those in the Chromium community who help make the Web more secure!
Cheers
Andrew
on behalf of the Chrome Security Team
Q3 2017
Greetings and salutations,
It's time for another update from your friends in Chrome Security, who are hard at work trying to make Chrome the most secure platform to browse the Internet. Given you're reading this, you might well be interested in two whitepapers evaluating enterprise browser security that were released recently.
Beyond that, here's a recap from last quarter:
Bugs-- team
We've been researching ways to fuzz grammar-based formats efficiently. We experimented with in-process fuzzing with libprotobuf-mutator and added a sample.
Recent ClusterFuzz improvements for developers:
The [reproduce tool](https://github.com/google/clusterfuzz-tools) is now out of beta and supports Linux and Android platforms.
Made performance improvements to the ClusterFuzz UI and migrated it to [Polymer 2](https://www.polymer-project.org/2.0/docs/about_20).
We finished the remaining pieces of our end-to-end bug triage automation and are now auto-assigning [owners](https://bugs.chromium.org/p/chromium/issues/list?can=1&q=label%3ATest-Predator-AutoOwner&colspec=ID+Pri+M+Stars+ReleaseBlock+Component+Status+Owner+Summary+OS+Modified&x=m&y=releaseblock&cells=ids) and [components](https://bugs.chromium.org/p/chromium/issues/list?can=1&q=label%3ATest-Predator-AutoComponents&colspec=ID+Pri+M+Stars+ReleaseBlock+Component+Status+Owner+Summary+OS+Modified&x=m&y=releaseblock&cells=ids) for all newly filed bugs.
We have also made infrastructure improvements to OSS-Fuzz to better isolate workloads between different projects. OSS-Fuzz continues to improve the security of the overall web (74 projects running 24x7, 636 security bugs fixed)!
Enamel, Permissions
We began marking FTP as Not Secure with Chrome 63.
We published a paper in CCS 2017 describing years of work we’ve done to investigate and mitigate false-positive certificate errors. We also launched new improvements to help users who see lots of these spurious errors:
In Chrome 63, we launched an interstitial to help users with buggy MITM software (see chrome://interstitials/).
In Chrome 61, we launched an interstitial to help users affected by Superfish (see chrome://interstitials/).
Better integration with the OS for captive portal detection on Android and Windows in Chrome 63.
Launched the new Site Details page.
Removed non-factory-default settings from PageInfo and added back the Certificate Viewer link.
Launching modal permission prompts on Android in M63.
Removed the ability to request Notification permission from iframes and over HTTP.
MOAR TLS
The change to mark HTTP pages as Not Secure in Incognito mode or after form field editing shipped in Chrome 62. We sent more than 1 million Search Console messages warning webmasters about this change.
Google is preloading HSTS for more TLDs, the first new ones since .google was preloaded in 2015.
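For individual sites (as opposed to whole TLDs), getting onto the preload list starts with serving a Strict-Transport-Security header along these lines. This is just a sketch: the max-age value is illustrative, so check hstspreload.org for the current requirements.

```ts
// Sketch: serving an HSTS header in the shape the preload list expects
// (includeSubDomains and preload are required; max-age here is illustrative).
import { createServer } from 'http';

createServer((_req, res) => {
  // Only meaningful when the response is actually delivered over HTTPS;
  // browsers ignore the header on plain HTTP.
  res.setHeader('Strict-Transport-Security', 'max-age=63072000; includeSubDomains; preload');
  res.end('hello');
}).listen(8080);
```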
Chrome Safe Browsing
Launched the PVer4 database-update protocol to all users, saving them 80% of the bandwidth used by Safe Browsing.
Platform Security
Added support for new Win10 sandbox mitigations in M61 as part of our continued Windows Sandbox efforts.
To help block third-party code from being injected into Chrome processes on Windows, we enabled third-party blocking on all child processes, after warmup (a delayed mitigation), in M62.
New in Android O, the Chrome-powered WebView component now renders content in a separate, sandboxed process! This brings the same security and stability benefits of Chrome to web pages rendered within apps.
Site Isolation
We launched OOPIF-based <webview> in M61 for ChromeOS/Mac/Linux and in M62 for Windows. This eliminates BrowserPlugin for everything except PDFs (which we're working on now), helping to clean up old code.
Running an experiment in M63 to give process isolation to accounts.google.com, to improve Chrome Signin security.
Finished design plans for using isolated processes when users click through SafeBrowsing malware warnings.
Improved some of the Site Isolation enforcement mechanisms, including passwords and localStorage.
Improved OOPIF support in several areas, including basic frame architecture (e.g., how proxy frames are created), touch selection editing, and gesture fling. Also making progress on OOPIF printing support.
As ever, many thanks to all those in the Chromium community who help make the web more secure!
Cheers
Andrew, on behalf of the Chrome Security Team
Q2 2017
Greetings and salutations,
It's time for another update from your friends in Chrome Security, who are hard at work trying to make Chrome the most secure platform to browse the Internet. Here’s a recap from last quarter:
The Bugs-- team has released a new tool to make ClusterFuzz test case reproduction easy for developers. Our open source fuzzing efforts (aka OSS-Fuzz) continue to improve the security of the overall web (86 projects, 1859 bugs, see recent blog post here). We have written a new JavaScript fuzzer that has filed 102 bugs to date, many with security implications. We also found some interesting vulnerabilities (1, 2, 3) through our code auditing efforts.
We integrated the Safe Browsing API with WebView starting in Android O, allowing custom interstitial blocking pages. WebView developers will be able to opt in to checking URLs against Google Safe Browsing’s list of unsafe websites.
We understand that sites which repeatedly prompt for powerful permissions often annoy users and generate warning fatigue. In Chrome 59, we started temporarily blocking permission requests if users have dismissed a permission prompt from a site multiple times. We’re also moving forward with plans to deprecate permissions in cross-origin iframes by default. Permission requests from iframes have the potential to mislead users into granting access they didn’t intend.
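For site developers who want to stay on the right side of this, a little prompt hygiene goes a long way. The sketch below is generic guidance rather than Chrome's internal embargo logic: check the current permission state and only prompt in response to a user gesture.

```ts
// Sketch of prompt hygiene for sites (not Chrome's embargo implementation):
// check the current state first, and only ask from a user gesture.
async function maybeEnableNotifications(button: HTMLButtonElement): Promise<void> {
  const status = await navigator.permissions.query({ name: 'notifications' });
  if (status.state !== 'prompt') return; // already granted or denied -- don't nag
  button.addEventListener('click', async () => {
    const result = await Notification.requestPermission();
    console.log('notification permission:', result);
  });
}
```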
The Platform Security team has concluded several years of A/B experimentation on Android, and with Chrome 58 we have turned on the Seccomp-BPF sandbox for all compatible devices. This sandbox filters system calls to reduce the attack surface of the Linux kernel in renderer processes. Currently about 50% of Android devices support Seccomp, and this number is rising at a steady rate. In Chrome 59, you can navigate to about:sandbox to see whether your Android device supports Seccomp.
We have migrated PDFium to use PartitionAlloc for most allocations, with distinct partitions for strings, array buffers, and general allocations. In Chrome 61, all three partitions will be active.
We continue to work on MOAR+BETTER TLS and announced the next phase of our plan to help people understand the security limitations of non-secure HTTP. Starting in Chrome 62 (October), we’ll mark HTTP pages as “Not secure” when users enter data in forms, and on all HTTP pages in Incognito mode. We presented new HTTPS migration case studies at Google I/O, focusing on real-world site metrics like SEO, ad revenue, and site performance.
We experimented with improvements to Chrome’s captive portal detection on Canary and launched them to stable in Chrome 59, to avoid a predicted 1% of all certificate errors that users see.
Also, users can now restore the Certificate information to the Page Info bubble!
Those working on the Open Web Platform have implemented three new Referrer Policies, giving developers more control over their HTTP Referer headers and bringing our implementation in line with the spec. We also fixed a longstanding bug so that site owners can now use upgrade-insecure-requests in conjunction with CSP reporting, allowing them to both upgrade and remediate HTTP references on their HTTPS sites.
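If you want to both upgrade stray HTTP references and keep an eye on where they remain, the combination looks roughly like the sketch below; the reporting endpoint path is hypothetical.

```ts
// Sketch: upgrade stray http:// subresources while also collecting reports.
// The /csp-report endpoint is a placeholder.
import { createServer } from 'http';

createServer((_req, res) => {
  // Enforced: rewrite http:// subresource URLs to https:// before fetching.
  res.setHeader('Content-Security-Policy', 'upgrade-insecure-requests');
  // Report-only: tell us where insecure references still live in the markup.
  res.setHeader('Content-Security-Policy-Report-Only', 'default-src https:; report-uri /csp-report');
  res.setHeader('Content-Type', 'text/html');
  res.end('<img src="http://example.com/logo.png">');
}).listen(8080);
```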
After our launch of --isolate-extensions in Chrome 56, the Site Isolation team has been preparing for additional uses of out-of-process iframes (OOPIFs). We implemented a new --isolate-origins=https://example.com command line flag that can give dedicated processes to a subset of origins, which is an important step towards general Site Isolation. We also prepared the OOPIF-based <webview> field trial for Beta and Stable channels, and we ran a Canary field trial of Top Document Isolation to learn about the performance impact of putting all cross-site iframes into one subframe process. We've been improving general support for OOPIFs as well, including spellcheck, screen orientation, touch selection, and printing. The DevTools team has also helped out: OOPIFs can now be shown in the main frame's inspector window, and DevTools extensions are now more fully isolated from DevTools processes.
As ever, many thanks to all those in the Chromium community who help make the web more secure!
Q1 2017
Greetings and salutations,
It's time for another update from your friends in Chrome Security, who are hard at work trying to make Chrome the most secure platform to browse the Internet. Here’s a recap from last quarter:
Our Bugs-- effort aims to find (and exterminate) security bugs. In order to get bugs fixed faster, we released a new tool to improve the developer experience when reproducing ClusterFuzz bugs. We have overhauled a significant part of the ClusterFuzz UI, which now features a new fuzzer statistics page, a crash statistics page, and a fuzzer performance analyzer. We’ve also continued to improve our OSS-Fuzz offering, adding numerous features requested by developers and reaching the 1,000-bug milestone across 47 projects in just five months since launch.
Members of the Chrome Security team attended the 10th annual Pwn2Own competition at CanSecWest. While Chrome was again a target this year, no team was able to demonstrate a fully working chain to Windows SYSTEM code execution in the time allowed!
Bugs still happen, so our Guts effort builds in multiple layers of defense. Chrome 56 takes advantage of Control Flow Guard (CFG) on Windows for Microsoft system DLLs inside the Chrome.exe processes. CFG makes exploiting corruption vulnerabilities more challenging by limiting valid call targets, and is available from Win 8.1 Update 3.
Site Isolation makes the most of Chrome's multi-process architecture to help reduce the scope of attacks. The big news in Q1 is that we launched --isolate-extensions to Chrome Stable in Chrome 56! This first use of out-of-process iframes (OOPIFs) ensures that web content is never put into an extension process. To maintain the launch and prepare for additional uses of OOPIFs, we fixed numerous bugs, cleaned up old code, reduced OOPIF memory usage, and added OOPIF support for more features (e.g., IntersectionObserver, and hit testing and IME on Android). Our next step is expanding the OOPIF-based <webview> trial from Canary to Dev channel and adding more uses of dedicated processes.
Beyond the browser, our web platform efforts foster cross-vendor cooperation on developer-facing security features. Over the holidays, Google's security team gave us a holiday gift consisting entirely of interesting ways to bypass CSP's nonces. We've fixed some obvious bugs they uncovered, and we'll continue working with other vendors to harden the spec and our implementations. In other CSP news, we polished a mechanism to enforce CSP on child frames, shipped a `script-sample` property in CSP reports, and allowed hashes to match external scripts. We're also gathering data to support a few dangling markup mitigations, and dropped support for subresource URLs with embedded credentials and legacy protocols.
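For context, the nonce pattern under attack looks roughly like the sketch below: every response mints a fresh nonce, and only script tags carrying it may run. This is an illustrative server, not a hardened deployment.

```ts
// Sketch of a per-response CSP nonce (illustrative, not a hardened setup).
import { createServer } from 'http';
import { randomBytes } from 'crypto';

createServer((_req, res) => {
  const nonce = randomBytes(16).toString('base64');
  res.setHeader('Content-Security-Policy', `script-src 'nonce-${nonce}'; object-src 'none'`);
  res.setHeader('Content-Type', 'text/html');
  res.end(`
    <script nonce="${nonce}">console.log('runs: carries the nonce');</script>
    <script>console.log('blocked: no nonce');</script>
  `);
}).listen(8080);
```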
We also spend time building security features that users see. To protect users from Data URI phishing attacks, Chrome shows the “not secure” warning on Data URIs and intends to deprecate and remove content-initiated top-frame navigations to Data URIs. We also brought AIA fetching to Chrome for Android, and early metrics show over an 85% reduction in the fraction of HTTPS warnings caused by misconfigured certificate chains on Android. We made additional progress on improving Chrome’s captive portal detection. Chrome now keeps precise attribution of where bad downloads come from, so we can catch malware and UwS earlier. Chrome 57 also saw the launch of a secure time service, for which early data shows detection of bad client clocks when validating certificates improving from 78% to 95%.
We see migration to HTTPS as foundational to any web security whatsoever, so we're actively working to drive #MOARTLS across Google and the Internet at large. To help people understand the security limitations of non-secure HTTP, Chrome now marks HTTP pages with passwords or credit card form fields as “not secure” in the address bar, and is experimenting with in-form contextual warnings. We’ll remove support for EME over non-secure origins in Chrome 58, and we’ll remove support for notifications over non-secure origins in Chrome 61. We talked about our #MOARTLS methodology and the HTTPS business case at Enigma.
In addition to #MOARTLS, we want to ensure more secure TLS through work on protocols and the certificate ecosystem. TLS 1.3 is the next major version of the Transport Layer Security protocol. In Q1, Chrome carried out the first significant deployment of TLS 1.3 by a browser. Based on what we learned from that, we hope to fully enable TLS 1.3 in Chrome in Q2.
In February, researchers from Google and CWI Amsterdam successfully mounted a collision attack against the SHA-1 hash algorithm. It had been known to be weak for a very long time, and in Chrome 56 we dropped support for website certificates that used SHA-1. This was the culmination of a plan first announced back in 2014, which we've updated a few times since.
As ever, many thanks to all those in the Chromium community who help make the web more secure!
Cheers
Andrew, on behalf of the Chrome Security Team
For more thrilling security updates and feisty rants, subscribe to security-dev@chromium.org. You can find older updates at https://www.chromium.org/Home/chromium-security/quarterly-updates.
Q4 2016
Greetings and salutations,
It's time for another update from your friends in Chrome Security, who are hard at work trying to make Chrome the most secure platform to browse the Internet. Here’s a recap from the last quarter of 2016:
Our Bugs-- effort aims to find (and exterminate) security bugs.
We announced OSS-Fuzz, a new Beta program developed over the past years with the Core Infrastructure Initiative community. This program will provide continuous fuzzing for select core open source software. See full blog post here. So far, more than 50 projects have been integrated with OSS-Fuzz and we found ~350 bugs.
Security bugs submitted by external researchers can receive cash money from the Chrome VRP.
Last year the Chrome VRP paid out almost one million dollars! More details in a blog post we did with our colleagues in the Google and Android VRPs.
Bugs still happen, so our Guts effort builds in multiple layers of defense.
Win32k lockdown for Pepper processes, including Adobe Flash and PDFium, was shipped to Windows 10 clients on all channels in October 2016. Soon after the mitigation was enabled, a Flash 0-day that used win32k.sys as a privilege escalation vector was discovered being used in the wild, and it was successfully blocked by this mitigation! James Forshaw from Project Zero also wrote a blog post about the process of shipping this new mitigation.
A new security mitigation for Windows 8 and above hit Stable in October 2016 (Chrome 54). This mitigation disables extension points (legacy hooking), blocking a number of third-party injection vectors. It is enabled on all child processes (CL chain). As usual, you can find the Chromium sandbox documentation here.
Site Isolation makes the most of Chrome's multi-process architecture to help reduce the scope of attacks.
Our earlier plan to launch --isolate-extensions in Chrome 54 hit a last minute delay, and we're now aiming to turn it on in Chrome 56. In the meantime, we've added support for drag and drop into out-of-process iframes (OOPIFs) and for printing an OOPIF. We've fixed several other security and functional issues for --isolate-extensions as well. We've also started an A/B trial on Canary to use OOPIFs for Chrome App <webview> tags, and we're close to starting an A/B trial of --top-document-isolation.
Beyond the browser, our web platform efforts foster cross-vendor cooperation on developer-facing security features.
After a good deal of experimentation, we (finally) tightened the behavior of cookies' `secure` attribute. Referrer Policy moved to a candidate recommendation, we made solid progress on Clear-Site-Data, and we expect to start an origin trial for Suborigins shortly.
Looking to the future, we've started to flesh out our proposal for stronger origin isolation properties, continued discussions on a proposal for setting origin-wide policy, and began working with the IETF to expand opt-in Certificate Transparency enforcement to the open web. We hope to further solidify all of these proposals in Q1.
We also spend time building security features that users see.
Our security indicator text labels launched in Chrome 55 for “Secure” HTTPS, “Not Secure” broken HTTPS, and “Dangerous” pages flagged by Safe Browsing. As part of our long-term effort to mark HTTP pages as non-secure, we built address-bar warnings into Chrome 56 to mark HTTP pages with a password or credit card form fields as “Not secure”.
We see migration to HTTPS as foundational to any web security whatsoever, so we're actively working to drive #MOARTLS across Google and the Internet at large.
We added a new HTTPS Usage section to the Transparency Report, which shows how the percentage of Chrome pages loaded over HTTPS increases with time. We talked externally at O’Reilly Security NYC + Amsterdam and Chrome Dev Summit about upcoming HTTP UI changes and the business case for HTTPS. We published positive stories about HTTPS migrations.
In addition to #MOARTLS, we want to ensure more secure TLS.
We concluded our experiment with post-quantum key agreement in TLS. We implemented TLS 1.3 draft 18, which will be enabled for a fraction of users with Chrome 56.
And here are some other areas we're still investing heavily in:
Keeping users safe from Unwanted Software (UwS, pronounced 'ooze') and improving the Chrome Cleanup Tool, which has helped millions remove UwS that was injecting ads, changing settings, and otherwise blighting their machines.
Working on usable, understandable permissions prompts. We're experimenting with different prompt UIs, tracking prompt interaction rates, and continuing to learn how best to ensure users are in control of powerful permissions.
As ever, many thanks to all those in the Chromium community who help make the web more secure!
Cheers
Andrew, on behalf of the Chrome Security Team
For more thrilling security updates and feisty rants, subscribe to security-dev@chromium.org. You can find older updates at https://www.chromium.org/Home/chromium-security/quarterly-updates.
Q3 2016
Greetings and salutations!
It's time for another update from your friends in Chrome Security, who are hard at work trying to make Chrome the most secure platform to browse the Internet. Here’s a recap from last quarter:
Our Bugs-- effort aims to find (and exterminate) security bugs.
We have continued to improve upon our libFuzzer and AFL integration with ClusterFuzz, which includes automated performance analysis and quarantining of bad units (like slow units, leaks, etc). We have scaled our code coverage to ~160 targets with help from Chrome developers, who contributed these during the month-long Fuzzathon. We have improved our infrastructure reliability and response times by adding a 24x7 monitoring solution and fixing more than two dozen fuzzers in the process. Finally, we have refined our crash bucketization algorithm and enabled automatic bug filing to remove human latency in filing regression bugs — long live the machines!
For Site Isolation, the first uses of out-of-process iframes (OOPIFs) have reached the Stable channel in Chrome 54!
We're using OOPIFs for --isolate-extensions mode, which ensures that web content is never put into a privileged extension process. In the past quarter, we made significant progress and fixed all our blocking bugs, including enabling the new session history logic by default, supporting cross-process POST submissions, and IME in OOPIFs. We also fixed bugs in painting, input events, and many other areas. As a result, --isolate-extensions mode has been enabled for 50% of M54 Beta users and is turned on by default in M55. From here, we plan to further improve OOPIFs to support --top-document-isolation mode, Chrome App <webview> tags, and Site Isolation for real web sites.
We also spend time building security features that users see.
We overhauled Chrome’s site security indicators in Chrome 52 on Mac and Chrome 53 on all other platforms, including adding new icons for Safe Browsing. These icons were the result of extensive user research which we shared in a peer-reviewed paper. Lastly, we made recovering blocked-downloads much less confusing.
We like to avoid showing unnecessarily scary warnings when we can. We analyzed data from opted-in Safe Browsing Extended Reporting users to quantify the major causes of spurious TLS warnings, like bad client clocks and misconfigured intermediate certificates. We also launched two experiments, Expect-CT and Expect-Staple, to help site owners deploy advanced new TLS features (Certificate Transparency and OCSP stapling) without causing warnings for their users.
Beyond the browser, our web platform efforts foster cross-vendor cooperation on developer-facing security features.
We continued to lock down the security of the web platform while also expanding capabilities to developers. We helped lock down cookies by starting to ship Strict Secure Cookies. Similarly, we also shipped the Referrer Policy spec and policy header. Content Security Policy was expanded with the strict-dynamic and unsafe-hashed-attributes directives. Our work on suborigins continued, updating the serialization and adding new web platform support.
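As a quick illustration of the new header, a site can trim what the Referer header leaks with a single response header like the hedged sketch below; pick whichever policy value fits your site.

```ts
// Sketch: the Referrer-Policy response header. This value sends the full URL
// same-origin, only the origin cross-origin, and nothing on HTTPS -> HTTP downgrades.
import { createServer } from 'http';

createServer((_req, res) => {
  res.setHeader('Referrer-Policy', 'strict-origin-when-cross-origin');
  res.end('ok');
}).listen(8080);
```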
We've also been working on making users feel more in control of powerful permissions.
In M55 and M56 we will be running experiments on permission prompts to evaluate how they affect acceptance and decision rates: letting users make temporary decisions, auto-denying prompts that users keep ignoring, and making permission prompts modal.
We see migration to HTTPS as foundational to any web security whatsoever, so we're actively working to drive #MOARTLS across Google and the Internet at large.
We announced concrete steps towards marking HTTP sites as non-secure in Chrome UI — starting with marking HTTP pages with password or credit card form fields as “Not secure” starting in Chrome 56 (Jan 2017). We added YouTube and Calendar to the HTTPS Transparency Report. We’re also happy to report that www.google.com uses HSTS!
In addition to #MOARTLS, we want to ensure more secure TLS.
We continue to work on TLS 1.3, a major revision of TLS. For current revisions, we’re also keeping the TLS ecosystem running smoothly with a little grease. We have removed DHE based ciphers and added RSA-PSS. Finally, having removed RC4 from Chrome earlier this year, we’ve now removed it from BoringSSL’s TLS logic completely.
We launched a very rough prototype of Roughtime, a combination of NTP and Certificate Transparency. In parallel we’re investigating what reduction in Chrome certificate errors a secure clock like Roughtime could give us.
We also continued our experiments with post-quantum cryptography by implementing CECPQ1 to help gather some real world data.
As ever, many thanks to all those in the Chromium community who help make the web more secure!
Cheers
Andrew on behalf of the Chrome Security Team
Q2 2016
Greetings Earthlings,
It's time for another update from your friends in Chrome Security, who are hard at work trying to make Chrome the most secure platform to browse the Internet. Here’s a recap from last quarter:
Our Bugs-- effort aims to find (and exterminate) security bugs. At the start of the quarter, we initiated a team-wide Security FixIt to trim the backlog of open issues… a bit of spring cleaning for our issue tracker, if you will :) With the help of dozens of engineers across Chrome, we fixed over 61 Medium+ severity security bugs in 2 weeks and brought the count of open issues down to 22! On the fuzzing front, we’ve added support for AFL and continued to improve the libFuzzer-ClusterFuzz integration, both of which allow coverage-guided testing on a per-function basis. The number of libFuzzer-based fuzzers has expanded from 70 to [115](https://cs.chromium.org/search/?q=TestOneInput%5C(const+-file:third_party/llvm+-file:third_party/libFuzzer/src&sq=package:chromium&type=cs), and we’re processing ~500 billion test cases every day! We’re also researching new ways to improve fuzzer efficiency and maximize code coverage (example). In response to recent trends from Vulnerability Reward Program (VRP) and Pwnium submissions, we wrote a new fuzzer for V8 builtins, which has already yielded bugs. Not everything can be automated, so we started auditing parts of Mojo, Chrome’s new IPC mechanism, and found several issues (1, 2, 3, 4, 5).
Bugs still happen, so our Guts effort builds in multiple layers of defense. Many Android apps use WebView to display web content inline within their app. A compromised WebView can get access to an app’s private user data and a number of Android system services / device drivers. To mitigate this risk, in the upcoming release of Android N, we’ve worked to move WebView rendering out-of-process into a sandboxed process. This new process model is still experimental and can be enabled under Developer Options in Settings. On Windows, a series of ongoing stability experiments with App Container and win32k lockdown for PPAPI processes (i.e. Flash and pdfium) have given us good data that puts us in a position to launch both of these new security mitigations on Windows 10 very soon!
For Site Isolation, we're getting close to enabling --isolate-extensions for everyone. We've been hard at work fixing launch blocking bugs, and out-of-process iframes (OOPIFs) now have support for POST submissions, fullscreen, find-in-page, zoom, scrolling, Flash, modal dialogs, and file choosers, among other features. We've also made lots of progress on the new navigation codepath, IME, and the task manager, along with fixing many layout tests and crashes. Finally, we're experimenting with --top-document-isolation mode to keep the main page responsive despite slow third party iframes, and with using OOPIFs to replace BrowserPlugin for the <webview> tag.
We also spend time building security features that users see. We’re overhauling the omnibox security iconography in Chrome -- new, improved connection security indicators are now in Chrome Beta (52) on Mac and Chrome Dev (53) for all other platforms. We created a reference interstitial warning that developers can use for their implementations of the Safe Browsing API. Speaking of Safe Browsing, we’ve extended protection to cover files downloaded by Flash apps, we’re evaluating many more file types than before, and we closed several gaps that were reported via our Safe Browsing Download Protection VRP program.
Beyond the browser, our web platform efforts foster cross-vendor cooperation on developer-facing security features. We shipped an implementation of the Credential Management API (and presented a detailed overview at Google I/O), iterated on Referrer Policy with a `referrer-policy` header implementation behind a flag, and improved our support for SameSite cookies. We're continuing to experiment with Suborigins with developers both inside and outside Google, built a prototype of CORS-RFC1918, and introduced safety nets to protect against XSS vulnerabilities due to browser bugs[1].
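As a taste of the Credential Management API, the sketch below shows the silent sign-in flow as the API looks today. The /login endpoint is hypothetical, and PasswordCredential is Chrome-specific, hence the loose typing.

```ts
// Sketch of silent sign-in with the Credential Management API.
// PasswordCredential is Chrome-specific, so we type loosely here;
// the /login endpoint is a placeholder.
async function autoSignIn(): Promise<void> {
  const cred: any = await navigator.credentials.get(
    { password: true, mediation: 'silent' } as any // only succeed without showing UI
  );
  if (cred && cred.type === 'password') {
    await fetch('/login', {
      method: 'POST',
      credentials: 'include',
      body: new URLSearchParams({ id: cred.id, password: cred.password ?? '' }),
    });
  }
}
```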
We've also been working on making users feel more in control of powerful permissions. All permissions will soon be scoped to origins, and we've started implementing permission delegation (which is becoming part of feature policy). We’re also actively working to show fewer permission prompts to users, and to improve the prompts and UI we do show... subtle, critical work that makes web security more human-friendly (and thus more effective).
We see migration to HTTPS as foundational to any web security whatsoever, so we're actively working to drive #MOARTLS across Google and the Internet at large. Emily and Emily busted HTTPS myths for large audiences at Google I/O and the Progressive Web App dev summit. The HSTS Preload list has seen 3x growth since the beginning of the year – a great problem to have! We’ve addressed some growth hurdles by a rewrite of the submission site, and we’re actively working on the preload list infrastructure and how to additionally scale in the long term.
In addition to #MOARTLS, we want to ensure more secure TLS. Some of us have been involved in the TLS 1.3 standardization work and implementation. On the PKI front, and as part of our Expect CT project, we built the infrastructure in Chrome that will help site owners track down certificates for their sites that are not publicly logged in Certificate Transparency logs. As of Chrome 53, we’ll be requiring Certificate Transparency information for certificates issued by Symantec-operated CAs, per our announcement last year. We also launched some post-quantum cipher suite experiments to protect everyone from... crypto hackers of the future and more advanced worlds ;)
For more thrilling security updates and feisty rants, subscribe to security-dev@chromium.org. You can find older updates at https://www.chromium.org/Home/chromium-security/quarterly-updates.
Happy Hacking,
Parisa, on behalf of Chrome Security
[1] Please let us know if you manage to work around them!
Q1 2016
Greetings web fans,
The Bugs-- effort aims to find (and exterminate) security bugs. On the fuzzing front, we’ve continued to improve the integration between libFuzzer and ClusterFuzz, which allows coverage-guided testing on a per-function basis. With the help of many developers across several teams, we’ve expanded our collection of fuzzing targets in Chromium (that use libFuzzer) to 70! Not all bugs can be found by fuzzing, so we invest effort in targeted code audits too. We wrote a guest post on the Project Zero blog describing one of the more interesting vulnerabilities we discovered. Since we find a lot of bugs, we also want to make them easier to manage. We’ve updated our Sheriffbot tool to simplify the addition of new rules and expanded it to help manage functional bugs in addition to just security issues. We’ve also automated assigning security severity recommendations. Finally, we continue to run our vulnerability reward program to recognize bugs discovered by researchers outside of the team. As of M50, we’ve paid out over $2.5 million since the start of the reward program, including over $500,000 in 2015. Our median payment amount for 2015 was $3,000 (up from $2,000 for 2014), and we want to see that increase again this year!
Bugs still happen, so our Guts effort builds in multiple layers of defense. On Android, our seccomp-bpf experiment has been running on the Dev channel and will advance to the Stable and Beta channels with M50.
Chrome on Windows is evolving rapidly in step with the operating system. We shipped four new layers of defense in depth to take advantage of the latest capabilities in Windows 10, some of which patch vulnerabilities found by our own research and feedback! There was great media attention when these changes landed, from Ars Technica to a Risky Business podcast, which said: “There have been some engineering changes to Chrome on Windows 10 which look pretty good. … It’s definitely the go-to browser, when it comes to not getting owned on the internet. And it’s a great example of Google pushing the state of the art in operating systems.”
For our Site Isolation effort, we have expanded our on-going launch trial of --isolate-extensions to include 50% of both Dev Channel and Canary Channel users! This mode uses out-of-process iframes (OOPIFs) to keep dangerous web content out of extension processes. (See here for how to try it.) We've fixed many launch blocking bugs, and improved support for navigation, input events, hit testing, and security features like CSP and mixed content. We improved our test coverage and made progress on updating features like fullscreen, zoom, and find-in-page to work with OOPIFs. We're also excited to see progress on other potential uses of OOPIFs, including the <webview> tag and an experimental "top document isolation" mode.
We spend time building security features that people see. In response to user feedback, we’ve replaced the old full screen prompt with a new, lighter weight ephemeral message in M50 across Windows and Linux. We launched a few bug fixes and updates to the Security panel, which we continue to iterate on and support in an effort to drive forward HTTPS adoption. We also continued our work on removing powerful features on insecure origins (e.g. geolocation).
We’re working on preventing abuse of powerful features on the web. We continue to support great “permissions request” UX, and have started reaching out to top websites to directly help them improve how they request permissions for powerful APIs. To give top-level websites more control over how iframes use permissions, we started external discussions about a new Permission Delegation API. We also extended our vulnerability rewards program to support Safe Browsing reports, in a first program of its kind.
Beyond the browser, our web platform efforts foster cross-vendor cooperation on developer-facing security features. We now have an implementation of Suborigins behind a flag, and have been experimenting with Google developers on usage. We polished up the Referrer Policy spec, refined its integration with ServiceWorker and Fetch, and shipped the `referrerpolicy` attribute from that document. We're excited about the potential of new CSP expressions like 'unsafe-dynamic', which will ship in Chrome 52 (and is experimentally deployed on our shiny new bug tracker). In that same release, we finally shipped SameSite cookies, which we hope will help prevent CSRF. Lastly, we're working to pay down some technical debt by refactoring our Mixed Content implementation and X-Frame-Options to work in an OOPIF world.
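For reference, the SameSite attribute we shipped looks like this on the wire; cookie names and values below are illustrative.

```ts
// Sketch: SameSite cookies, which keep session cookies off cross-site requests
// and so blunt CSRF. Names and values are illustrative.
import { createServer } from 'http';

createServer((_req, res) => {
  res.setHeader('Set-Cookie', [
    // Never sent on cross-site requests, not even top-level navigations:
    'session=abc123; Secure; HttpOnly; SameSite=Strict',
    // Sent on top-level cross-site navigations, but not on cross-site subresources:
    'prefs=dark; Secure; SameSite=Lax',
  ]);
  res.end('ok');
}).listen(8080);
```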
We see migration to HTTPS as foundational to any security whatsoever (and we're not the only ones), so we're actively working to drive #MOARTLS across Google and the Internet at large. We worked with a number of teams across Google to help publish an HTTPS Report Card, which aims to hold Google and other top sites accountable, as well as encourage others to encrypt the web. In addition to #MOARTLS, we want to ensure more secure TLS. We mentioned we were working on it last time, but RC4 support is dead! The insecure TLS version fallback is also gone. With help from the libFuzzer folks, we got much better fuzzing coverage on BoringSSL, which resulted in CVE-2016-0705. We ended up adding a "fuzzer mode" to the SSL stack to help the fuzzer get past cryptographic invariants in the handshake, which smoked out some minor (memory leak) bugs.
Last, but not least, we rewrote a large chunk of BoringSSL's ASN.1 parsing with a simpler and more standards-compliant stack.
For more thrilling security updates and feisty rants, subscribe to security-dev@chromium.org. You can find older updates at https://www.chromium.org/Home/chromium-security/quarterly-updates.
Happy Hacking,
Parisa, on behalf of Chrome Security
Q4 2015
Happy 2016 from the Chrome Security Team!
For those that don’t know us already, we do stuff to help make Chrome the most secure platform to browse the Internet. Here’s a recap of some work from last quarter:
The Bugs-- effort aims to find (and exterminate) security bugs. We’ve integrated libFuzzer into ClusterFuzz, which means we can do coverage-guided fuzz testing on a per-function basis. The result, as you may have guessed, is several new bugs. The Bugs-- team has a larger goal this year to help Chromium developers write a ClusterFuzz fuzzer alongside every unittest, and libFuzzer integration is an important step toward achieving that goal. Separately, we’ve made security improvements and cleanups in the Pdfium codebase and fixed lots of open bugs. We also started some manual code auditing efforts, and discovered several high severity bugs (here, here, and here), and 1 critical severity bug.
Bugs still happen, so our Guts effort builds in multiple layers of defense. On Android, we’re running an experiment that adds an additional seccomp-bpf sandbox to renderer processes, like we already do on Desktop Linux and Chrome OS. On Windows 8 (and above), a Win32k lockdown experiment has been implemented for PPAPI plugins including Flash and Pdfium to help reduce the kernel attack surface for potential sandbox escapes. Also on Windows 8 (and above), an AppContainer sandbox experiment has been introduced, which further reduces kernel attack surface and blocks network communication from renderers.
Our Site Isolation effort reached a large milestone in December: running trials of the --isolate-extensions mode on real Chrome Canary users! This mode uses out-of-process iframes to isolate extension processes from web content for security. (Give it a try!) The trials were made possible by many updates to session history, session restore, extensions, painting, focus, save page, popup menus, and more, as well as numerous crash fixes. We are continuing to fix the remaining blocking issues, and we aim to launch both --isolate-extensions and the broader Site Isolation feature in 2016.
We also spend time building security features that users see. The Safe Browsing team publicly announced a new social engineering policy, expanding Chrome’s protection against deceptive sites beyond phishing. One major milestone is the launch of Safe Browsing in Chrome for Android, protecting hundreds of millions of additional users from phishing, malware, and other web threats! This is on by default and is already stopping millions of attacks on mobile Chrome users. The next time you come across a Safe Browsing warning, you can search for the blocked website in the new Site Status section of the Transparency Report to learn why it’s been flagged by our systems. On the other hand, we’re also trying to show users fewer security warnings in the first place by decreasing our false positive rate for HTTPS warnings. We spent a large part of the quarter analyzing client errors that contribute to false alarm HTTPS errors; check out our Real World Crypto talk for more details.
Beyond the browser, our web platform efforts foster cross-vendor cooperation on developer-facing security features. We've made good progress with folks in the IETF to make some meaningful changes to cookies; cookie prefixes and locking down 'secure' cookies will be shipping shortly. Subresource Integrity and Mixed Content are trucking along the W3C Recommendation path, we've solidified our Suborigins proposal, and have our eyes on some new hotness like HSTS Priming, CSP3 bits and pieces, and limiting access to local network resources.
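To give a feel for the cookie prefixes, here is a hedged sketch: the browser refuses to accept a prefixed cookie unless it meets the prefix's requirements, so a network attacker on plain HTTP can't plant or overwrite it. Names and values are illustrative.

```ts
// Sketch: cookie prefixes. The browser rejects prefixed cookies that don't
// meet the stated requirements. Names and values are illustrative.
import { createServer } from 'http';

createServer((_req, res) => {
  res.setHeader('Set-Cookie', [
    // __Secure- : must be set with Secure, from a secure origin.
    '__Secure-id=abc123; Secure; HttpOnly; Path=/account',
    // __Host- : additionally requires Path=/ and no Domain attribute,
    // binding the cookie to exactly this host.
    '__Host-session=xyz789; Secure; HttpOnly; Path=/',
  ]);
  res.end('ok');
}).listen(8080);
```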
We see migration to HTTPS as foundational to any security whatsoever (and we're not the only ones), so we're actively working to drive #MOARTLS across Google and the Internet at large. We've continued our effort to deprecate powerful features on insecure origins by readying to block insecure usage of geolocation APIs. We also took to the stage at the Chrome Dev Summit to spread the word, telling developers about what we’re doing in Chrome to make deploying TLS easier and more secure.
In addition to more TLS, we want to ensure more secure TLS, which depends heavily on the certificate ecosystem. Via Certificate Transparency, we detected a fraudulent Symantec-issued certificate in September, which subsequently revealed a pattern of additional misissued certificates. Independent of that incident, we took proactive measures to protect users from a Symantec Root Certificate that was being decommissioned in a way that puts users at risk (i.e. no longer complying with the CA/Browser Forum’s Baseline Requirements). Other efforts include working with Mozilla and Microsoft to phase out RC4 ciphersuite support, and continuing the deprecation of SHA-1 certificates, which were shown to be even weaker than previously believed. To make it easier for developers and site operators to understand these changes, we debuted a new Security Panel that provides enhanced diagnostics and will continue to be improved with richer diagnostics in the coming months.
For more thrilling security updates and feisty rants, subscribe to security-dev@chromium.org. You can find older updates at https://www.chromium.org/Home/chromium-security/quarterly-updates.
Happy Hacking,
Parisa, on behalf of Chrome Security
Q3 2015
Hello from the Chrome Security Team!
For those that don’t know us already, we do stuff to help make Chrome the most secure platform to browse the Internet. Here’s a recap of some work from last quarter:
The Bugs-- effort aims to find (and exterminate) security bugs. We’ve continued our collaboration with the Android Security team and now have a fully functional AddressSanitizer (ASan) build configuration of AOSP master (public instructions here). ClusterFuzz is helping the Android Security team triage and verify bugs, including incoming vulnerability reward submissions, and now supports custom APK uploads and the ability to launch commands. Back on the Chrome front, we’re working on enabling Control Flow Integrity (CFI) checks on Linux, which converts invalid vptr accesses into non-exploitable crashes; 8 bugs discovered so far! We’ve made numerous improvements to how we fuzz Chrome on Android with respect to speed and accuracy. We also made some progress toward our goal of expanding ClusterFuzz platform support to include iOS. In our efforts to improve Chrome Stability, we added LeakSanitizer (LSan) into our list of supported memory tools, which has already found 38 bugs.
Bugs still happen, so our Guts effort builds in multiple layers of defense. Plugin security remains a very important area of work. With the final death of unsandboxed NPAPI plugins in September, we’ve continued to introduce mitigations for the remaining sandboxed PPAPI (Pepper) plugins. First, we implemented support for Flash component updates on Linux, a long-standing feature request, which allows us to respond to Flash 0-day incidents without waiting to qualify a new release of Chrome. We’ve also been spending time improving the code quality and test coverage of Pdfium, the now open-source version of the Foxit PDF reader. In addition, we have been having some success with enabling Win32k syscall filtering on Windows PPAPI processes (PDFium and Adobe Flash). This makes it even tougher for attackers to get out of the Chromium Flash sandbox, and can be enabled on Windows 8 and above on Canary channel right now by toggling the settings in chrome://flags/#enable-ppapi-win32k-lockdown.
We’ve been making steady progress on Site Isolation, and are preparing to enable out-of-process iframes (OOPIFs) for web pages inside extension processes. You can test this mode before it launches with --isolate-extensions. We have performance bots and UMA stats lined up, and we'll start with some early trials on Canary and Dev channel. Meanwhile, we've added support for hit testing in the browser process, scrolling, context menus, and script calls between all reachable frames (even with changes to window.opener).
Not all security problems can be solved in Chrome’s guts, so we work on making security more user-friendly too. To support developers migrating to HTTPS, starting with M46, Chrome is marking the “HTTPS with Minor Errors” state using the same neutral page icon as HTTP pages (instead of showing the yellow lock icon). We’ve started analyzing invalid (anonymized!) TLS certificate reports gathered from the field, to understand the root causes of unnecessary TLS/SSL warnings. One of the first causes we identified and fixed was certificate hostname mismatches due to a missing ‘www’. We also launched HPKP violation reporting in Chrome, helping developers detect misconfigurations and attacks by sending a report when a pin is violated. Finally, in an effort to support the Chrome experience across languages and locales, we made strides in improving how the omnibox is displayed in RTL languages.
Beyond the browser, our web platform efforts foster cross-vendor cooperation on developer-facing security features. We shipped Subresource Integrity (SRI), which defends against resource substitution attacks by allowing developers to specify a hash against which a script or stylesheet is matched before it's executed. We’re excited to see large sites, like Github, already deploying SRI! We've sketched out a concept for a Clear Site Data feature which we hope will make it possible for sites to reset their storage, and we're hard at work on the next iteration of Content Security Policy. Both of these will hopefully start seeing some implementation in Q4.
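Generating the integrity value for SRI is straightforward; the sketch below hashes the exact bytes you intend to serve (the file name and CDN URL are placeholders).

```ts
// Sketch: computing a Subresource Integrity value for a script you serve.
// The file name and CDN URL are placeholders.
import { createHash } from 'crypto';
import { readFileSync } from 'fs';

const body = readFileSync('framework.js'); // hash the exact bytes you will serve
const digest = createHash('sha384').update(body).digest('base64');

console.log(
  `<script src="https://cdn.example.com/framework.js"
        integrity="sha384-${digest}"
        crossorigin="anonymous"></script>`
);
// The browser re-hashes the fetched script and refuses to execute it on a
// mismatch, so a tampered CDN copy fails closed.
```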
We see migration to HTTPS as foundational to any security whatsoever (and we're not the only ones), so we're actively working to drive #MOARTLS across Google and the Internet at large. We shipped Upgrade Insecure Requests, which eases the transition to HTTPS by transparently correcting a page's spelling from `http://` to `https://` for all resources before any requests are triggered. We've also continued our effort to deprecate powerful features on insecure origins by solidifying the definition of a "Secure Context", and applying that definition to block insecure usage of getUserMedia().
For more thrilling security updates and feisty rants, subscribe to security-dev@chromium.org.
Happy Hacking,
Parisa, on behalf of Chrome Security
Q2 2015
Hello from the Chrome Security Team!
For those that don’t know us already, we do stuff to help make Chrome the most secure platform to browse the Internet. Here’s a recap of some work from last quarter:
The Bugs-- effort aims to find (and exterminate) security bugs. At the start of the quarter, we initiated a Security FixIt to trim back the fat backlog of open issues. With the help of dozens of engineers across Chrome, we fixed over 40 Medium+ severity security bugs in 2 weeks and brought the count of issues down to 15! We also collaborated with the Android Security Attacks Team and added native platform fuzzing support to ClusterFuzz (and imported their fuzzers), which resulted in ~30 new bugs discovered. ClusterFuzz now supports fuzzing on all devices of the Nexus family (5,6,7,9) and Android One and is running on a few dozen devices in the Android Lab. On top of this, we have doubled our fuzzing capacity in Compute Engine to ~8000 cores by leveraging Preemptible VMs. Lastly, we have upgraded all of our sanitizer builds on Linux (ASan, MSan, TSan and UBSan) to report edge-level coverage data, which is now aggregated in the ClusterFuzz dashboard. We’re using this coverage information to expand the data bundles used by existing fuzzers and improve our corpus distillation.
Bugs still happen, so our Guts effort builds in multiple layers of defense. Our Site Isolation project is getting closer to its first stage of launch: using out-of-process iframes (OOPIFs) for web pages inside extension processes. We've made substantial progress (with lots of help from others on the Chrome team!) on core Chrome features when using --site-per-process: OOPIFs now work with back/forward, DevTools, and extensions, and they use Surfaces for efficient painting (and soon input event hit-testing). We've collected some preliminary performance data using Telemetry, we've fixed lots of crashes, and we've started enforcing cross-site security restrictions on cookies and passwords. Much work remains, but we're looking forward to turning on these protections for real users!
On Linux and Chrome OS, we’ve made changes to restrict one PID namespace per renderer process, which strengthens and cleans up our sandbox (shipping in Chrome 45). We also finished up a major cleanup necessary toward deprecating the setuid sandbox, which should be happening soon. Work continued to prepare for the launch of Windows 10, which offers some opportunities for new security mitigations; the new version looks like the most secure Windows yet, so be sure to upgrade when it comes out!
Not all security problems can be solved in Chrome’s guts, so we work on making security more user-friendly too. We’ve continued our efforts to avoid showing unnecessary TLS/SSL warnings: decisions are now remembered for a week instead of a session, and a new checkbox on TLS/SSL warnings allows users to send us invalid certificate chains that help us root out false-positive warnings. Since developers and power users have been asking for more tools to debug TLS/SSL issues, we’ve started building more security information into DevTools and plan to launch a first version in Q3!
Another large focus for the team has been improving how users are asked for permissions, like camera and geolocation. We’ve finalized a redesign of the fullscreen permission flow that we hope to launch by the end of the year, fixed a number of bugs relating to permission prompts, and launched another round of updates to PageInfo and Website Settings on Android.
Beyond the browser, our web platform efforts foster cross-vendor cooperation on developer-facing security features. The W3C's WebAppSec working group continues to be a fairly productive venue for a number of important features: we've polished the Subresource Integrity spec and shipped an implementation in Chrome 46, published first drafts of Credential Management and Entry Point Regulation, continue to push Content Security Policy Level 2 and Mixed Content towards "Recommendation" status, and fixed some longstanding bugs with our Referrer Policy implementation.
Elsewhere, we've started prototyping Per-Page Suborigins with the intent of bringing a concrete proposal to WebAppSec, published a new draft of First-Party-Only cookies (and are working through some infrastructure improvements so we can ship them), and poked at sandboxed iframes to make it possible to sandbox ads.
We see migration to HTTPS as foundational to any security whatsoever (and we're not the only ones), so we're actively working to drive #MOARTLS across Google and the Internet at large. As a small practical step on top of the HTTPS webmasters fundamentals section, we’ve added some functionality to Webmaster Tools to provide better assistance to webmasters when dealing with common errors in managing a site over TLS (launching soon!). Also, we're now measuring the usage of pre-existing, powerful features on non-secure origins, and are now printing deprecation warnings in the JavaScript console. Our ultimate goal is to make all powerful features, such as Geolocation and getUserMedia, available only to secure origins.
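Pages can get ahead of this by checking whether they are running in a secure context before reaching for a powerful feature. The sketch below shows the idea with geolocation, using the isSecureContext flag available in today's browsers.

```ts
// Sketch: gate a powerful feature on being in a secure context.
function requestLocation(): void {
  if (!window.isSecureContext) {
    console.warn('Geolocation will stop working here: serve this page over HTTPS.');
    return;
  }
  navigator.geolocation.getCurrentPosition(
    (pos) => console.log('lat/lng:', pos.coords.latitude, pos.coords.longitude),
    (err) => console.error('geolocation error:', err.message),
  );
}
```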
For more thrilling security updates and feisty rants, subscribe to security-dev@chromium.org.
Happy Hacking,
Parisa, on behalf of Chrome Security
Q1 2015
Hello from the Chrome Security Team!
For those that don’t know us already, we do stuff to help make Chrome the most secure platform to browse the Internet. Here’s a recap of some work from last quarter:
The Bugs-- effort aims to find (and exterminate) security bugs. Last quarter, we rewrote our IPC fuzzer, which resulted in lots more bugs discovered by ClusterFuzz! We also expanded fuzzing platform support (Android Lollipop, Linux with Nvidia GPU), added archived builds for proprietary media codecs testing on all platforms, and used more code annotations to find bugs (like this or this). We auto-add previous crash tests to our data corpus, which helps to catch regressions even if a developer forgets to add a test (example). We’ve also started experimenting with enabling and leveraging code coverage information from fuzzing. Contrary to what some reports may imply, we don’t think vulnerability counting is a good standalone metric for security, and more bugs discovered internally (653 bugs in 2014 vs. 380 bugs in 2013), means more bugs fixed, means safer software! Outside of engineering, inferno@ gave a talk at nullcon about Chrome fuzzing (slides) and we launched never-ending Pwnium with a rewards pool up to $∞ million!
Bugs still happen, so our Guts effort builds in multiple layers of defense. On Linux and Chrome OS, we did some work to improve the seccomp-BPF compiler and infrastructure. On modern kernels, we finally completed the switch from the setuid sandbox to a new design using unprivileged namespaces. We’re also working on a generic, re-usable sandbox API on Linux, which we hope can be useful to other Linux projects that want to employ sandboxing. On Android, we’ve been experimenting with single-threaded renderer execution, which can yield performance and security benefits for Chrome. We’ve also been involved with the ambitious Mojo effort. On OSX, we shipped crashpad (which was a necessary project to investigate those sometimes-security-relevant crashes!). Finally, on Windows, the support to block Win32k system calls from renderers on Windows 8 and above is now enabled on Stable - and renderers on these systems are also running within App Containers on Chrome Beta, which blocks their access to the network. We also ensured all Chrome allocations are safe - and use less memory (!) - by moving to the Windows heap.
On our Site Isolation project, we’ve made progress on the underlying architecture so that complex pages are correct and stable (e.g. rendering any combination of iframes, evaluating renderer-side security checks, sending postMessage between subframes, keeping script references alive). Great progress has also been made on session history, DevTools, and test/performance infrastructure, and other teams have started updating their features for out-of-process iframes after our Site Isolation Summit.
Not all security problems can be solved in Chrome’s guts, so we work on making security more user-friendly too. In an effort to determine the causes of SSL errors, we’ve added a new checkbox on SSL warnings that allows users to send us invalid certificate chains for analysis. We’ve started looking at the data, and in the coming months we plan to introduce new warnings that provide specific troubleshooting steps for common causes of spurious warnings. We also recently launched the new permissions bubble UI, which solves some of the problems we had with permissions infobars (like better coalescing of multiple permission requests). And for our Android users, we recently revamped PageInfo and Site Settings, making it easier than ever for people to manage their permissions. Desktop updates to PageInfo and Site Settings are in progress, too. Finally, we just launched a new extension, Chrome User Experience Surveys, which asks people for in-the-moment feedback after they use certain Chrome features. If you’re interested in helping improve Chrome, you should try it out!
Beyond the browser, our web platform efforts foster cross-vendor cooperation on developer-facing security features. We're working hard with the good folks in the W3C's WebAppSec working group to make progress on a number of specifications: CSP 2 and Mixed Content have been published as Candidate Recommendations, Subresource Integrity is implemented behind a flag and the spec is coming together nicely, and we've fixed a number of Referrer Policy issues. First-Party-Only Cookies are just about ready to go, and Origin Cookies are on deck.
We see migration to HTTPS as foundational to any security whatsoever (and we're not the only ones), so we're actively working to define the properties of secure contexts, deprecate powerful features on insecure origins, and to make it simpler for developers to Upgrade Insecure Requests on existing sites.
For more thrilling security updates and feisty rants, subscribe to security-dev@chromium.org.
Happy Hacking,
Parisa, on behalf of Chrome Security
P.S. Go here to travel back in time and view previous Chrome security quarterly updates.
Q4 2014
Hello from the Chrome Security Team!
For those that don’t know us already, we do stuff to help make Chrome the most secure platform to browse the Internet. Here’s a recap of some work from last quarter:
The Bugs-- effort aims to find (and exterminate) security bugs. Last quarter, we incorporated more coverage data into our ClusterFuzz dashboard, especially for Android. With this, we hope to optimize our test cases and improve fuzzing efficiency. We also incorporated 5 new fuzzers from the external research community as part of the fuzzer reward program, which has resulted in 33 new security vulnerabilities. Finally, we wrote a multi-threaded test case minimizer from scratch based on delta debugging (a long-standing request from Blink devs!) that produces clean, small, reproducible test cases. In reward program news, we've paid over $1.6 million for externally reported Chrome bugs since 2010 ($4 million total across Google). In 2014, over 50% of reward program bugs were found and fixed before they hit the stable channel, protecting our main user population. Oh, and in case you didn’t notice, the rewards we’re paying out for vulnerabilities went up again.
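For a rough sense of the delta-debugging idea behind that minimizer (a deliberately simplified, single-threaded sketch with made-up helper names, not the actual ClusterFuzz code): repeatedly try deleting chunks of the failing input, keep any deletion that still reproduces the crash, and shrink the chunk size once nothing more can be removed.

  #include <functional>
  #include <string>

  // Simplified delta-debugging reducer: |still_crashes| runs the target on a
  // candidate input and reports whether the original crash still reproduces.
  std::string MinimizeTestCase(
      std::string input,
      const std::function<bool(const std::string&)>& still_crashes) {
    size_t chunk = input.size() / 2;
    while (chunk >= 1) {
      bool reduced = false;
      for (size_t pos = 0; pos + chunk <= input.size();) {
        // Tentatively delete |chunk| bytes starting at |pos|.
        std::string candidate = input.substr(0, pos) + input.substr(pos + chunk);
        if (still_crashes(candidate)) {
          input = candidate;  // Keep the smaller reproducer; retry this position.
          reduced = true;
        } else {
          pos += chunk;       // This chunk is needed; move on.
        }
      }
      if (!reduced)
        chunk /= 2;           // Nothing removable at this granularity; refine.
    }
    return input;
  }

The real minimizer parallelizes the candidate runs and is considerably smarter about chunk selection, but the keep-it-if-it-still-crashes loop is the core idea.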
Bugs still happen, so our Guts effort builds in multiple layers of defense. We’re most excited about progress toward a tighter sandbox for Chrome on Android (via seccomp-bpf), which required landing seccomp-bpf support in Android and enabling TSYNC on all Chrome OS and Nexus kernels. We’ve continued to improve our Linux / Chrome OS sandboxing by (1) adding full cross-process interaction restrictions at the BPF sandbox level, (2) making API improvements and some code refactoring of //sandbox/linux, and (3) implementing a more powerful policy system for the GPU sandbox.
After ~2 years of work on Site Isolation, we’re happy to announce that out-of-process iframes are working well enough that some Chrome features have started updating to support them! These include autofill (done), accessibility (nearly done), <webview> (prototyping), devtools, and extensions. We know how complex a rollout this will be, and we’re ready with testing infrastructure and FYI bots. As we announced at our recent Site Isolation Summit (video, slides), our goal for Q1 is to finish up OOPIF support with the help of all of Chrome.
Not all security problems can be solved in Chrome’s Guts, so we work on making security more user-friendly too. For the past few months, we’ve been looking deeper into the causes of SSL errors by looking at UMA stats and monitoring user help forums. One source of SSL errors is system clocks with the wrong time, so we landed a more informative error message in Chrome 40 to let users know they need to fix their clock. We’ve also started working on a warning interstitial for captive portals to distinguish those SSL errors from the rest. Finally, we proposed a plan for browsers to migrate their user interface towards marking insecure origins (i.e. HTTP) as explicitly insecure; the initial discussion and external attention have been generally positive.
Over the past few years, we’ve worked on a bunch of isolated projects to push security on the Open Web Platform forward and make it possible for developers to write more secure apps. We recognized we can move faster if we get some of the team fully dedicated to this work, so we formed a new group that will focus on web platform efforts.
As usual, for more thrilling security updates and feisty rants, subscribe to security-dev@chromium.org.
To a safer web in 2015!
Parisa, on behalf of Chrome Security
Q3 2014
Hello from the Chrome Security Team!
For those that don’t know us already, we do stuff to help make Chrome the most secure platform to browse the Internet. Here’s a recap of some work from last quarter:
The Bugs-- effort aims to find (and exterminate) security bugs. We increased ClusterFuzz cores across all platforms (Mac, Android, Windows, and Linux), resulting in 155 security and 275 functional bugs since the last update! We also started fuzzing D-Bus system services on Chrome OS, which is our first attempt at leveraging ClusterFuzz for the operating system. One of the common security pitfalls in C++ is bad casting (often rooted in aggressive polymorphism). To address this, one of our interns tweaked UBSan's (Undefined Behavior Sanitizer) vptr check to detect bad casts at runtime, which resulted in 11 new security bugs! We’ve continued to collaborate with external researchers on new fuzzing techniques to find bugs in V8, Pdfium, Web Workers, IDB, and more. Shout out to attekett, cloudfuzzer, decoder.oh, and therealholden for their attention and bugs over the past quarter!
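To make the bad-casting pitfall mentioned above concrete, here's a contrived example of the kind of bug the UBSan vptr check (-fsanitize=vptr) flags at runtime; the class names are purely illustrative:

  // Compiles cleanly, but |b| never points to a Derived object.
  struct Base {
    virtual ~Base() = default;
  };
  struct Derived : Base {
    virtual void Frobnicate() {}
    int extra_state = 0;
  };
  struct Unrelated : Base {
    int other_state = 0;
  };

  void BadCastExample() {
    Base* b = new Unrelated();
    // Undefined behavior: the static_cast is wrong, and UBSan's vptr check
    // reports the mismatch when the object is used through |d|.
    Derived* d = static_cast<Derived*>(b);
    d->Frobnicate();
    delete b;
  }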
Finding bugs is only half the battle, so we also did a few things to make it easier to get security bugs fixed, including (1) a new security sheriff dashboard and (2) contributing to the FindIt project, which helps narrow down the suspected CL(s) for a crash (given a regression range and stacktrace), thereby saving manual triage cycles.
Bugs still happen, so our Guts effort builds in multiple layers of defense. We did a number of things to push seccomp-bpf onto more platforms and architectures, including: (1) adding support for MIPS and ARM64, (2) adding a new capability to initialize seccomp-bpf in the presence of threads (bringing us a big step closer to a stronger sandbox on Android), (3) general tightening of the sandboxes, and (4) writing a domain-specific language to better express BPF policies. We also helped ensure a safe launch of Android apps on Chrome OS, and continued sandboxing new system services.
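For readers who haven't seen seccomp-bpf up close, here's a tiny standalone sketch of what a filter boils down to once a policy is compiled to BPF; it bears no resemblance to the real policies in //sandbox/linux, which also validate the architecture, cover many syscalls, and are expressed in the higher-level policy language mentioned above.

  #include <errno.h>
  #include <linux/filter.h>
  #include <linux/seccomp.h>
  #include <stddef.h>
  #include <sys/prctl.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  int main() {
    // Deny getpid() with EPERM, allow everything else. (A real policy must
    // also check seccomp_data.arch before trusting the syscall number.)
    struct sock_filter filter[] = {
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_getpid, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | (EPERM & SECCOMP_RET_DATA)),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog;
    prog.len = sizeof(filter) / sizeof(filter[0]);
    prog.filter = filter;

    // NO_NEW_PRIVS lets an unprivileged process install the filter.
    prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
    prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);

    // Now blocked by the filter: the raw syscall fails with EPERM.
    return syscall(__NR_getpid) == -1 && errno == EPERM ? 0 : 1;
  }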
On Windows, we launched Win64 to Stable, giving users a safer, speedier, and more stable version of Chrome! On Windows 8, we added Win32k system call filtering behind a switch, further reducing the kernel attack surface accessible from the renderer. We also locked down the alternate desktop sandbox tokens and refactored the sandbox startup to cache tokens, which improves new tab responsiveness.
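As a rough illustration of the mitigation involved (a minimal sketch using the documented Win32 API and applied to the current process; the actual sandbox applies the equivalent setting to renderer processes, behind the switch mentioned above):

  #include <windows.h>

  // Requires Windows 8 or later. Once this succeeds, any win32k.sys system
  // call made by this process fails, cutting off a large chunk of kernel
  // attack surface.
  bool DisableWin32kSyscallsForCurrentProcess() {
    PROCESS_MITIGATION_SYSTEM_CALL_DISABLE_POLICY policy = {};
    policy.DisallowWin32kSystemCalls = 1;
    return SetProcessMitigationPolicy(ProcessSystemCallDisablePolicy, &policy,
                                      sizeof(policy)) != FALSE;
  }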
Finally, work continues on site isolation. Over the past few months, we’ve started creating RemoteFrames in Blink's frame tree to support out-of-process iframes (OOPIF) and got Linux and Windows FYI bots running tests with --site-per-process. We’ve also been working with the Accessibility team as our guinea pig feature to support OOPIF, and since that work is nearly done, we’re reaching out to more teams over the next few months to update their features (see our FAQ about updating features).
Not all security problems can be solved in Chrome’s guts, so we work on making security more user-friendly too. SSL-related warnings are still a major source of user pain and confusion. Over the past few months, we’ve focused on determining the causes of false-positive SSL errors (by adding UMA stats for known client / server errors) and investigating pinning violation reports. We’ve also been experimenting with strategies for remembering certificate decisions, and with surfacing relevant detail when we detect a (likely) benign SSL error caused by a captive portal or a bad clock.
Developers are users too, so we know it’s important to support new web security features and ensure new features are safe to use by default. In that vein, we recently landed a first pass at Subresource Integrity support behind a flag (with useful console errors), we’re shipping most of CSP 2 in M40, we’ve continued to tighten up handling of mixed content, and we're working to define and implement referrer policies. We’ve also been helping with some security consulting for Service Worker; kudos to the team for making changes to handle plugins more securely, restrict usage to secure origins, and address some memory caching issues. If you want to learn more about what’s going on in the Blink security world, check out the Blink-SecurityFeature label.
And then there’s other random things, like ad-hoc hunting for security bugs (e.g. local privilege escalation bug in pppd), giving Chromebooks to kids at DEFCON, and various artistic endeavors, like color-by-risk diagramming and security-inspired fashion.
For more thrilling security updates and feisty rants, subscribe to security-dev@chromium.org.
Happy Hacking (and Halloween),
Parisa, on behalf of Chrome Security
Q2 2014
Hello from the Chromium Security Team!
For those that don’t know us already, we do stuff to help make Chrome the most secure platform to browse the Internet. Here’s a recap of some work from last quarter:
One of our primary responsibilities is acting as security advisers, and the main way we do this is via security reviews. A few weeks ago, jschuh@ announced a new and improved security review process that helps teams better assess their current security posture and helps our team collect more meaningful data about Chrome engineering. All features for M37 went through the new process, and we’ll be shepherding new projects and launches through this process going forward.
The Bugs-- effort aims to find (and exterminate) security bugs. One of our best ways of finding bugs and getting them fixed quickly is fuzz testing via ClusterFuzz. This quarter, we started fuzzing Chrome on Mac OS (extending the existing platform coverage on Windows, Linux, and Android). We also added code coverage stats to the ClusterFuzz UI, which some teams have been finding helpful as a complement to their QA testing, as well as fuzzer stats, which the V8 team now checks when rolling out new versions. Finally, we added some new fuzzers (WebGL, GPU commands) and integrated a number of memory debugging tools to find new classes of bugs (e.g. AddressSanitizer on Windows found 22 bugs, Dr. Memory on Windows found 1 bug, MemorySanitizer on Linux found 146 bugs, and LeakSanitizer on Linux found 18 bugs).
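If you haven't used these tools, here's the sort of contrived bug AddressSanitizer turns from a silent memory-safety hazard into an immediate, well-annotated crash (build with -fsanitize=address):

  #include <iostream>

  int main() {
    int* buffer = new int[16];
    buffer[0] = 42;
    delete[] buffer;
    // Heap use-after-free: ASan aborts here and prints stack traces for the
    // allocation, the free, and this bad read.
    std::cout << buffer[0] << "\n";
    return 0;
  }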
Another source of security bugs is our vulnerability reward program, which saw a quiet quarter: only 32 reports opened in Q2 (the lowest participation in 12 months) and an average payout of $765 per bug (the lowest value in 12 months). This trend is likely due to (1) fuzzers, both internal and external, finding over 50% of all reported bugs in Q2, (2) the increasing difficulty of finding bugs combined with outdated reward amounts that are no longer competitive, and (3) researcher fatigue / lack of interest or stimulus. Plans for Q3 include reinvigorating participation in the rewards program through a more generous reward structure and coming up with clever ways to keep researchers engaged.
Outside of external bug reports, we spent quite a bit of time improving the security posture of Pdfium (Chrome's recently opensourced PDF renderer) via finding / fixing ~150 bugs, removing risky code (e.g. custom allocator), and using secure integer library for overflow checks. Thanks to ifratric@, mjurczyk@, and gynvael@ for their PDF fuzzing help!
Bugs still happen, so our Guts effort builds in multiple layers of defense. We did lots of sandboxing work across platforms last quarter. On Mac OS, rsesek@ started working on a brand new bootstrap sandbox (//sandbox/mac), and on Android he got a proof-of-concept renderer running under seccomp-bpf. On Linux and Chrome OS, we continued to improve the sandboxing testing framework and wrote dozens of new tests; all our security tests are now running on the Chrome OS BVT. We also refactored all of the NaCl-related “outer” sandboxing to support a new and faster Non-SFI mode for NaCl, which is being used to run Android apps on Chrome, as you may have seen demoed at Google I/O.
After many months of hard work, we’re ecstatic to announce that we released Win64 on dev and canary to our Windows 7 and Windows 8 users. This release takes advantage of High Entropy ASLR on Windows 8, and the extra bits help improve the effectiveness of heap partitioning and mitigate common exploitation techniques (e.g. JIT spraying). The Win64 release also eliminated about a third of the crashes we were seeing on Windows, so it’s more stable too!
Finally, work continues on site isolation: lots of code written / rewritten / rearchitected and unknown unknowns discovered along the way. We're close to having "remote" frames for each out-of-process iframe, and you can now see subframe processes in Chrome's Task Manager when visiting a test page like this with the --site-per-process flag.
Not all security problems can be solved in Chrome’s guts, so we work on making security more user-friendly too. The themes of Q2 were SSL and permissions. For SSL, we nailed down a new "Prefer Safe Origins for Powerful Features" policy, which we’ll transition to going forward; kudos to palmer@ and sleevi@ for ironing out all the details and getting us to a safer default state. We’ve also been trying to improve the experience of our SSL interstitial, which most people ignore :-/ Work includes launching new UX for SSL warnings and incorporating captive portal status (ongoing). Congrats to agl@ for launching boringssl - if boring means avoiding Heartbleed-style hysteria, sounds good to us!
On the permissions front, we’re working on ways to give users more control over application privileges, such as (1) reducing the number of install-time CRX permissions, (2) running UX experiments on the effectiveness of permissions, and (3) working on building a security and permissions model to bring native capabilities to the web.
For more thrilling security updates and feisty rants, subscribe to security-dev@chromium.org.
In the meantime, happy hacking!
Parisa, on behalf of Chrome Security
P.S. A big kudos to the V8 team, and jkummerow@ in particular, for their extra security efforts this quarter! The team rapidly responded to and fixed a number of security bugs on top of doing some security-inspired hardening of V8 runtime functions.
Q1 2014
Hello from the Chrome Security Team!
For those that don’t know us already, we help make Chrome the most secure platform to browse the Internet. In addition to security reviews and consulting, running a vulnerability reward program, and dealing with security surprises, we instigate and work on engineering projects that make Chrome safer. Here’s a recap of some work from last quarter:
The Bugs-- effort aims to find (and exterminate) exploitable bugs. A major accomplishment from Q1 was getting ClusterFuzz coverage for Chrome on Android; we’re aiming to scale up resources from a few devices on inferno@’s desk to 100 bots over the next few months. On the fuzzer front, mbarbella@ wrote a new V8 fuzzer that helped shake out 30+ bugs; kudos to the V8 team for being so quick to fix these issues and for prioritizing additional proactive security work this quarter. Spring welcomed a hot new line of PoC exploits at Pwn2Own and Pwnium 4: highlights included a classic ensemble of overly broad IPC paired with a Windows “feature,” and a bold chain of 5 intricate bugs for persistent system compromise on Chrome OS; more details posted soon here. Beyond exploit contests, we’ve rewarded $52,000 for reports received this year (from 16 researchers for 23 security bugs) via our ongoing vulnerability reward program. We also started rewarding researchers for bugs in Chrome extensions developed "by Google.” Outside of finding and fixing bugs, jschuh@ landed a safe numeric class to help prevent arithmetic overflow bugs from being introduced in the first place; use it and you'll sleep better too!
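As a sketch of how the safe numeric class is meant to be used (the header lives under base/numerics/ in Chromium; treat the exact spelling here as approximate): the arithmetic carries a validity bit instead of silently wrapping, so overflow becomes an explicit error path rather than an undersized allocation.

  #include <cstddef>
  #include <cstdint>

  #include "base/numerics/safe_math.h"

  // Computes element_count * element_size + header without risking overflow.
  bool ComputeBufferSize(size_t element_count, size_t element_size, size_t* out) {
    base::CheckedNumeric<size_t> total = element_count;
    total *= element_size;
    total += sizeof(uint32_t);  // Hypothetical fixed-size header.
    if (!total.IsValid())
      return false;  // Overflowed: refuse rather than allocate a short buffer.
    *out = total.ValueOrDie();
    return true;
  }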
Bugs still happen, so we build in multiple layers of defense. One of our most common techniques is sandboxing, which helps to reduce the impact of any single bug. Simple in theory, but challenging to implement, maintain, and improve across all platforms. On Linux and Chrome OS, we spent a lot of the quarter paying back technical debt: cleaning up the GPU sandbox, writing and fixing tests, and replacing the setuid sandbox. On Android, we reached consensus with the Android Frameworks team on a path forward for seccomp-bpf sandboxing for Clank. We've started writing the CTS tests to verify this in Android, landed the baseline policy in upstream Clankium, and are working on the required upstream Linux kernel changes to be incorporated into Chrome Linux, Chrome OS, and Android L. The site isolation project (i.e. sandboxing at the site level) landed a usable cross-process iframe implementation behind --site-per-process, which supports user interaction, nested iframes (one per doc), sad frames, and basic DevTools integration. Major refactoring of Chrome and Blink, performance testing, and working with teams that need to update for site isolation continues this quarter. On Windows, we shipped Win64 canaries, landed code to sandbox the auto-update mechanism, and improved the existing sandboxing, reducing the win32k attack surface by ~30%. Thanks to the Windows Aura team, we’ve also made tremendous progress on disabling win32k entirely in the Chrome sandbox, which will eventually eliminate most Windows-specific sandbox escapes.
Not all security problems can be solved in Chromium’s Guts, so we work on making security more user-friendly too. We finally landed the controversial change to remember passwords even when autocomplete='off' in M34, which is a small but significant change to return control back to the user. We also made some tweaks to the malware download UX in M32; previously users installed ~29% of downloads that were known malware, and that number is now down to <5%! We’ve recently been thinking a lot about how to improve the security of Chrome Extensions and Apps, including experimenting with several changes to the permission dialog to see if we can reduce the number of malicious CRXs users install without reducing installs of non-malicious items. Separately, we want to make it easier for developers to write secure APIs, so meacer@ wrote up some security tips to help developers avoid common abuse patterns we’ve identified from bad actors.
Finally, since Heartbleed is still on the forefront of many minds, a reminder that Chrome and Chrome OS were not directly affected. And if you're curious about how and why Chrome does SSL cert revocation the way it does, agl@ wrote a great post explaining that too.
For more thrilling security updates and feisty rants, subscribe to security-dev@chromium.org.
Happy Hacking, Parisa, on behalf of Chrome Security
Q4 2013
Hello from the Chrome Security Team!
For those that don’t know us already, we help make Chromium the most secure browsing platform in the market. In addition to security reviews and consulting, running a vulnerability reward program, and dealing with security surprises, we instigate and work on engineering projects that make Chrome more secure. The end of last year flew by, but here are a couple of things we’re most proud of from the last quarter of 2013:
Make security more usable: We made a number of changes to the malware download warning to discourage users from installing malware. We also worked on a reporting feature that lets users upload suspicious files to Safe Browsing, which will help Safe Browsing catch malicious downloads even faster. Since PDFs are a common vehicle for exploit delivery, we’ve modified PDF handling in Chrome so that they're all opened in Chrome’s PDF viewer by default. This is a huge security win because we believe Chrome’s PDF viewer is the safest, most hardened, and most security-tested viewer available. Malware delivered via Microsoft Office documents is also common, so we’re eagerly awaiting the day we can open Office docs in Quickoffice by default.
Find (and fix) more security bugs: We recently welcomed a new member to the team, Sheriffbot. He’s already started making the mortal security sheriffs’ lives easier by finding new owners, adding Cr- area labels, helping apply and fix bug labels, and reminding people about open security bugs they have assigned to them. Our fuzzing mammoth, ClusterFuzz, is now fully supported on Windows and has helped find 32 new bugs. We’ve added a bunch of new fuzzers to cover Chromium IPC (5 high severity bugs), networking protocols (1 critical severity bug from a certificate fuzzer, 1 medium severity bug from an HTTP protocol fuzzer), and WebGL (1 high severity bug in ANGLE). Want to write a fuzzer to add security fuzzing coverage to your code? Check out the ClusterFuzz documentation, or get in touch. In November, we helped sponsor a Pwn2Own contest at the PacSec conference in Tokyo. Our good friend, Pinkie Pie, exploited an integer overflow in V8 to get reliable code execution in the renderer, and then exploited a bug in a Clipboard IPC message to get code execution in the browser process (by spraying multiple gigabytes of shared memory). We’ll be publishing a full write-up of the exploit on our site soon, and are starting to get excited about our upcoming Pwnium in March.
Secure by default, defense in depth: In Chrome 32, we started blocking NPAPI by default and have plans to completely remove support by the end of the year. This change significantly reduces Chrome’s exposure to browser plugin vulnerabilities. We also implemented additional heap partitioning for buffers and strings in Blink, which further mitigates memory exploitation techniques. Our Win64 port of Chromium is now continuously tested on the main waterfall and is on track to ship this quarter. Lastly, we migrated our Linux and Chrome OS sandbox to a new policy format and did a lot of overdue sandbox code cleanup. On our site isolation project, we’ve started landing infrastructure code on trunk to support out-of-process iframes. We are a few CLs away from having functional cross-process iframes behind a flag and expect the work to be complete by the end of January!
Mobile, mobile, mobile: We’ve started focusing more attention on hardening Chrome on Android. In particular, we’ve been hacking on approaches for strong sandboxing (e.g. seccomp-bpf), adding Safe Browsing protection, and getting ClusterFuzz tuned for Android.
For more thrilling security updates and feisty rants, catch ya on security-dev@chromium.org.
Happy Hacking,
Parisa, on behalf of Chrome Security
Q3 2013
An early boo and (late) quarter update from the Chrome Security Team!
For those that don’t know us already, we help make Chromium the most secure browsing platform in the market. In addition to security reviews and consulting, running a vulnerability reward program, and dealing with security surprises, we instigate and work on engineering projects that make Chrome more secure.
Last quarter, we reorganized the larger team into 3 subgroups:
Bugs--, a group focused on finding security bugs, responding to them, and helping get them fixed. The group is currently working on expanding ClusterFuzz coverage to other platforms (Windows and Mac), adding fuzzers to cover IPC, networking, and WebGL, and adding more security ASSERTs to catch memory corruption bugs. They're also automating some of the grungy and manual parts of being security sheriff to free up human cycles for more exciting things.
Enamel, a group focused on usability problems that affect end user security or the development of secure web applications. In the near-term, Enamel is working on: improving the malware download warnings, SSL warnings, and extension permission dialogs; making it safer to open PDFs and .docs in Chrome; and investigating ways to combat popular phishing attacks.
Guts, a group focused on ensuring Chrome’s architecture is secure by design and resilient to exploitation. Our largest project here is site isolation, and in Q4, we’re aiming to have a usable cross-process iframe implementation (behind a flag ;) Other Guts top priorities include sandboxing work (stronger sandboxing on Android, making Chrome OS’s seccomp-bpf easier to maintain and better tested), supporting NPAPI deprecation, launching 64-bit Chrome for Windows, and Blink memory hardening (e.g. heap partitioning).
Retrospectively, here are some of the notable security wins from recent Chrome releases:
In Chrome 29, we tightened up the sandboxing policies on Linux and added some defenses to the Omaha (Chrome Update) plugin, which is a particularly exposed and attractive target in Chrome. The first parts of the Blink heap partitioning work were released, and we’ve had “backchannel” feedback that we made an impact on the greyhat exploit market.
In Chrome 30 we fixed a load of security bugs! The spike in bugs was likely due to a few factors: (1) we started accepting fuzzers (7 total) from invited external researchers as part of a Beta extension to our vulnerability reward program (resulting in 26 new bugs), (2) we increased reward payouts to spark renewed interest from the public, and (3) we found a bunch of new buffer (over|under)flow and casting bugs ourselves by adding ASSERT_WITH_SECURITY_IMPLICATIONs in Blink. In M30, we also added a new layer of sandboxing to NaCl on Chrome OS, with seccomp-bpf.
Last, but not least, we want to give a shout out to individuals outside the security team that made an extraordinary effort to improve Chrome security:
- Jochen Eisinger for redoing the pop-up blocker... so that it actually blocks pop-ups (instead of hiding them). Beyond frustrating users, this bug was a security liability, but, due to the complexity of the fix, it languished in the issue tracker for years.
- Mike West for his work on CSP, as well as tightening downloading of bad content types.
- Avi Drissman for fixing a longstanding bug where PDF password input was not masked.
- Ben Hawkes and Ivan Fratric for finding four potentially exploitable Chrome bugs using WinFuzz.
- Mateusz Jurczyk for finding a ton of bugs in the VP9 video decoder.
Happy Hacking, Parisa, on behalf of Chrome Security
Q2 2013
Hello from the Chrome Security Team!
For those that don’t know us, we’re here to help make Chrome a very (the most!) secure browser. That boils down to a fair amount of work on security reviews (and other consulting), but here’s some insight into the other things we were up to last quarter:
Bug Fixin’ and Code Reviews
At the start of the quarter, we initiated a Code 28 on security bugs to trim back the fat backlog of open issues. With the help of dozens of engineers across Chrome, we fixed over 100 security bugs in just over 4 weeks and brought the count of Medium+ severity issues to single digits. (We’ve lapsed a bit in the past week, but hopefully will recover once everyone returns from July vacation :)
As of July 1st, ClusterFuzz has helped us find and fix 822 bugs! Last quarter, we added a new check to identify out-of-bounds memory accesses and bad casts (ASSERT_WITH_SECURITY_IMPLICATION), which resulted in ~72 new bugs identified and fixed. We’re also beta testing a “Fuzzer Donation” extension to our vulnerability reward program.
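Roughly speaking, that macro marks assertions whose failure would imply a memory-safety problem, so the fuzzing infrastructure can triage those crashes as likely security bugs rather than ordinary functional ones. A sketch of the pattern (the macro definition and class names here are stand-ins, not Blink's real code):

  #include <cassert>

  // Stand-in definition; in Blink this is a dedicated macro whose failures
  // are flagged as security-relevant by the fuzzing infrastructure.
  #define ASSERT_WITH_SECURITY_IMPLICATION(condition) assert(condition)

  class Node {
   public:
    virtual ~Node() = default;
    virtual bool IsElement() const { return false; }
  };

  class Element : public Node {
   public:
    bool IsElement() const override { return true; }
  };

  Element* ToElement(Node* node) {
    // A wrong-type downcast here would be exploitable type confusion, hence
    // the security-implication assertion rather than a plain ASSERT.
    ASSERT_WITH_SECURITY_IMPLICATION(!node || node->IsElement());
    return static_cast<Element*>(node);
  }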
Anecdotally, this quarter we noticed an increase in the number of IPC reviews and a marked decrease in security issues! Not sure if our recent security tips doc is to credit, but well done to all the IPC authors and editors!
Process hardening
We’ve mostly wrapped up the binding integrity exploit mitigation changes we started last quarter, and they’ve now landed on all desktop platforms and Clank. Remaining work entails making additional V8 wrapped types inherit from ScriptWrappable so that more Chrome code benefits from this protection. We also started a new memory hardening change that aims to place DOM nodes inside their own heap partition. Why would we want to do that? Use-after-free memory bugs are common. By having a separate partition, the attacker gets a more limited choice of what to overlap on top of the freed memory slot, which makes these types of bugs substantially harder to exploit. (It turns out there is some performance improvement in doing this too!)
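To make that intuition concrete, here's a toy sketch of the partitioning idea; it bears no resemblance to Blink's real allocator, and every name in it is made up. The point is simply that memory freed by a Node can only ever be reused by another Node, not by an unrelated, attacker-controlled object:

  #include <cassert>
  #include <cstddef>
  #include <new>
  #include <vector>

  // Toy fixed-size-slot arena: freed slots are only handed back out to other
  // Node allocations, never to objects from the general heap.
  class NodeArena {
   public:
    static constexpr size_t kSlotSize = 64;

    void* Allocate(size_t size) {
      assert(size <= kSlotSize);
      if (!free_slots_.empty()) {
        void* slot = free_slots_.back();
        free_slots_.pop_back();
        return slot;
      }
      return ::operator new(kSlotSize);
    }

    void Free(void* slot) { free_slots_.push_back(slot); }

   private:
    std::vector<void*> free_slots_;
  };

  NodeArena& GetNodeArena() {
    static NodeArena arena;  // One shared partition for all Node objects.
    return arena;
  }

  class Node {
   public:
    // Route every Node allocation through the dedicated partition.
    static void* operator new(size_t size) { return GetNodeArena().Allocate(size); }
    static void operator delete(void* ptr) { GetNodeArena().Free(ptr); }
    virtual ~Node() = default;
  };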
Sandboxing++
We’re constantly trying to improve Chrome sandboxing. On Chrome OS and Linux, the GPU process is now sandboxed on ARM (M28) and we’ve been working on sandboxing NaCl under seccomp-bpf. We’ve also increased seccomp-bpf test coverage and locked down sandbox parameters (i.e. less attack surface). Part of the Chrome seccomp-bpf sandbox is now used in google3 (//third_party/chrome_seccomp), and seccomp-legacy and SELinux have been deprecated as sandboxing mechanisms.
Chrome work across platforms
Mobile platforms pose a number of challenges to replicating some of the security features we’re most proud of on desktop, but with mobile usage only expected to grow, we know we need to shift some security love there. We’re getting more people ramped up to help on consulting (security and code reviews) and making headway on short- and long-term goals.
On Windows, we’re still chugging along sorting out tests and build infrastructure to get a stable Win64 release build for canary tests.
On Chrome OS, work on kernel ASLR is ongoing, and we continued sandboxing system daemons.
Site Isolation Efforts
After some design and planning in Q1, we started building the early support for out-of-process iframes so that Chrome's sandbox can help us enforce the Same Origin Policy. In Q2, we added a FrameTreeNode class to track frames in the browser process, refactored some navigation logic, made DOMWindow own its Document (rather than vice versa) in Blink, and got our prototype to handle simple input events. We'll be using these changes to get basic out-of-process iframes working behind a flag in Q3!
Extensions & Apps
This quarter, we removed ~N bad extensions from the Web Store that were either automatically detected or manually flagged as malicious or violating our policies. We’ve started transitioning manual CRX malware reviews to a newly formed team, which is staffing up and ramping up to handle this significant workload. Finally, we’ve been looking at ways to improve the permission dialog for extensions so that it’s easier for users to understand the security implications of what they’re installing, and working on a set of experiments to understand how changes to the permissions dialog affect user installation of malware.
Happy Q3!
Parisa, on behalf of Chrome Security
Q1 2013
Hi from the Chrome Security Team!
For those that don’t know us already, we’re here to help make Chrome the most secure browser in the market. We do a fair bit of work on security reviews of new features (and other consulting), but here’s a summary of some of the other things we were up to last quarter:
Bug, bugs, bugs
Though some time is still spent handling external security reports (mainly from participants of our vulnerability reward program), we spent comparatively more time in Q1 hunting for security bugs ourselves. In particular, we audited a bunch of IPC implementations after the two impressive IPC-based exploits from last year - aedla found some juicy sandbox bypass vulnerabilities (161564, 162114, 167840, 169685) and cdn and cevans found / fixed a bunch of other interesting memory corruption bugs (169973, 166708, 164682). Underground rumors indicate many of these internally discovered bugs collided with discoveries from third-party researchers (who were either sitting on them or using them for their own purposes). At this point, most of the IPCs that handle file paths have been audited, and we’ve started putting together a doc with security tips to mind when writing IPC.
On the fuzzing front, we updated and added a number of fuzzers to ClusterFuzz: HTML (ifratric, mjurczyk), Flash (fjserna), CSS (bcrane), V8 (farcasia), Video VTT (yihongg), extension APIs (meacer), WebRTC (phoglund), Canvas/Skia (aarya), and Flicker/media (aarya); aarya also taught ClusterFuzz to look for dangerous ASSERTs with security implications, which resulted in even more bugs. Kudos to ClusterFuzz and the ASan team for kicking out another 132 security bugs last quarter! One downside to all these new bugs is that our queue of open security bugs across Chrome has really spiked (85+ as of today). Please help us fix these bugs!
Process hardening
We’re constantly thinking about proactive hardening we can add to Chrome to eliminate or mitigate exploitation techniques. We find inspiration not only from cutting-edge security defense research, but also from industry chatter around what the grey and black hats are using to exploit Chrome and other browsers. This past quarter jln implemented more fine-grained sandboxing support on Linux, in addition to some low-level tcmalloc changes that improve ASLR and general allocator security on 64-bit platforms. With jorgelo, they also implemented support for a stronger GPU sandbox on Chrome OS (which we believe was instrumental in avoiding a Pwnium 3 exploit). tsepez landed support for V8 bindings integrity on Linux and Mac OS, a novel feature that ensures DOM objects are valid when bound to JavaScript; this avoids exploitation of type confusion bugs in the DOM, which Chrome has suffered from in the past. palmer just enabled bindings integrity for Chrome on Android, and work is in progress on Windows.
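The gist of bindings integrity, very loosely sketched (this is a conceptual toy, not the actual V8/Blink mechanism, and every name below is invented): scriptable native objects carry a recognizable marker, and the binding layer refuses to use a pointer pulled out of a JavaScript wrapper unless the marker checks out.

  #include <cstdint>
  #include <cstdlib>

  // Every scriptable native object embeds a known marker value.
  class ScriptWrappableLike {
   public:
    static constexpr uint32_t kMarker = 0x5717ab1e;
    uint32_t marker = kMarker;
  };

  class DivElementLike : public ScriptWrappableLike {
    // ... DOM state would live here ...
  };

  // Hypothetical stand-in for extracting the native pointer that was stored
  // in a JavaScript wrapper object.
  void* UnwrapFromJavaScriptWrapper() {
    static DivElementLike element;
    return &element;
  }

  DivElementLike* ToNativeChecked() {
    auto* wrappable =
        static_cast<ScriptWrappableLike*>(UnwrapFromJavaScriptWrapper());
    // A stale or type-confused pointer is very unlikely to carry the marker,
    // so the check turns an exploitation attempt into a safe crash.
    if (wrappable == nullptr || wrappable->marker != ScriptWrappableLike::kMarker)
      std::abort();
    return static_cast<DivElementLike*>(wrappable);
  }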
Work across platforms
One of our key goals is to get Chrome running natively on 64-bit Windows, where the platform mitigations against certain attacks (such as heap spray) are stronger than when running within a WOW64 process. (We’ve also seen some performance bump on graphics and media on 64-bit Windows!) We made serious progress on this work in Q1, coordinating with engineers on a dozen different teams to land fixes in our codebase (and dependencies), working with Adobe on early Flapper builds, porting components of the Windows sandbox to Win64, and landing 100+ generic Win64 build system and API fixes. Thanks to all that have made this possible!
As Chrome usage on mobile platforms increases, so too must our security attention. We’ve set out some short and long-term goals for mobile Chrome security, and are excited to start working with the Clank team on better sandboxing and improved HTTPS authentication.
Site isolation
Work continues on the ambitious project to support site-per-process sandboxing, which should help us prevent additional attacks aimed at stealing or tampering with user data from a specific site. Last quarter, we published a more complete design for out-of-process iframes, set up performance and testing infrastructure, and hacked together a prototype implementation that helped confirm the feasibility of this project and surface some challenges and open questions that need more investigation.
Extensions
When not feeding the team fish, meacer added a lot of features to Navitron to make flagged extensions easier to review and remove from the Web Store. To put this work in perspective, each week ~X new items are submitted to the Web Store, ~Y of them are automatically flagged as malware (and taken down), and ~Z malware escalations are raised by extension reviewers (and then reviewed again by security). meacer also added a fuzzer for extensions and apps APIs, and has been fixing the resulting bugs.
Until we meet again (probably in the issue tracker)...
Parisa, on behalf of Chrome Security