
Create a Chrome Speed Metrics folder for documentation

In this CL, we add a speed_metrics folder, and in it we include:
* A README with links to other relevant content.
* The metrics changelog, moved from the speed folder.
* Web performance OKRs.

Change-Id: Idc392ab96bb98ce6f0e4b8100b7383a5b92e5e34
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/2169360
Commit-Queue: Nicolás Peña Moreno <npm@chromium.org>
Reviewed-by: Annie Sullivan <sullivan@chromium.org>
Cr-Commit-Position: refs/heads/master@{#764447}
Nicolás Peña Moreno
2020-04-30 22:17:12 +00:00
committed by Commit Bot
parent 2f220b2cec
commit c7391bbcd2
16 changed files with 234 additions and 2 deletions

@@ -1693,7 +1693,7 @@
'filepath': 'third_party/blink/renderer/(core|modules|platform)/.*\.idl',
},
'speed_metrics_changelog': {
-'filepath': 'docs/speed/metrics_changelog/.*.md',
+'filepath': 'docs/speed_metrics/metrics_changelog/.*.md',
},
'spellcheck': {
'filepath': 'chrome/browser/spellchecker/'\

@@ -379,6 +379,10 @@ used when committed.
* [Mojo “Style” Guide](security/mojo.md) - Recommendations for best practices
from Mojo and IPC reviewers
### Speed
* [Chrome Speed](speed/README.md) - Documentation for performance measurements and regressions in Chrome.
* [Chrome Speed Metrics](speed_metrics/README.md) - Documentation about user experience metrics in the web and their JavaScript APIs.
### WebXR
* [Running OpenVR Without Headset](xr/run_openvr_without_headset.md) -
Instructions for running OpenVR on Windows without a headset

@@ -29,6 +29,6 @@
* Benchmark-specific discussion: benchmarking-dev@chromium.org
<!--- TODO: Requests for new benchmarks: chrome-benchmarking-request mailing list link -->
* Performance dashboard, bisect, try jobs: speed-services-dev@chromium.org
-* **Chrome Speed Metrics**: provides a set of high-quality metrics that represent real-world user experience, and exposes these metrics to both Chrome and Web Developers.
+* **[Chrome Speed Metrics](../speed_metrics/README.md)**: provides a set of high-quality metrics that represent real-world user experience, and exposes these metrics to both Chrome and Web Developers.
* General discussion: speed-metrics-dev@chromium.org
* The actual metrics: [speed launch metrics survey.](https://docs.google.com/document/d/1Ww487ZskJ-xBmJGwPO-XPz_QcJvw-kSNffm0nPhVpj8/edit#heading=h.2uunmi119swk)

@@ -0,0 +1 @@
# COMPONENT: Blink>PerformanceAPIs

@@ -0,0 +1,106 @@
# Chrome Speed Metrics
[TOC]
## Mission
The Chrome Speed Metrics team aims to quantify users' experience of the web to
provide Chrome engineers and web developers the metrics, insights, and
incentives they need to improve it. We aim to:
* **Quantify** web UX via a high quality set of UX metrics which Chrome devs
align on.
* **Expose** these metrics consistently to Chrome and Web devs, in the lab and
the wild.
* **Analyze** these metrics, producing actionable reports driving our UX
efforts.
* **Own** implementation for these metrics for TBMv2, UMA/UKM, and web perf
APIs.
## Goals
### Quantify Users' Experience of the Web
Chrome needs a small, consistent set of high quality user experience metrics.
Chrome Speed Metrics is responsible for authoring reference implementations of
these metrics implemented using Trace Based Metrics v2 (TBMv2) in
[tracing/metrics](https://source.chromium.org/chromium/chromium/src/+/master:third_party/catapult/tracing/tracing/metrics/).
These reference implementations will often require adding C++ instrumentation.
Some metrics work will also be driven by more focused metrics teams, such as the
work on Frame Throughput. Chrome Speed Metrics also owns UMA/UKM metrics, and
speed metrics related Web Perf APIs.
The wider set of folks involved in defining these metrics will include:
* Area domain experts.
* Focused metrics teams.
* Devtools folks.
* DevX, documenting what these metrics mean for external developers.
* Occasional other experts (e.g., UMA folks).
### Expose Consistent Metrics Everywhere
Chrome Speed Metrics is responsible for ensuring that our core metrics are
exposed everywhere. This includes collaborating with devtools, lighthouse, etc.
to make sure our metrics are easy to expose, and are exposed effectively.
### Analyze Chrome Performance, providing data to drive our performance efforts
Metrics aren't useful if no one looks at them. Chrome Speed Metrics performs
detailed analysis on our key metrics and breakdown metrics, providing actionable
reports on how Chrome performs in the lab and in the wild. These reports are
used to guide regular decision making processes as part of Chrome's planning
cycle, as well as to inspire Chrome engineers with concrete ideas for how to
improve Chrome's UX.
### Own Core Metrics
The Chrome Speed Metrics team will gradually gain ownership of
[tracing/metrics](https://source.chromium.org/chromium/chromium/src/+/master:third_party/catapult/tracing/tracing/metrics/),
and will be responsible for the long-term code health of this directory. We're
also ramping up ownership in the Web Perf API space.
## Contact information
* **Email**: speed-metrics-dev@chromium.org
* **Tech lead**: sullivan@chromium.org
* **File a bug**:
* For issues related to web performance APIs, file a bug
[here](https://bugs.chromium.org/p/chromium/issues/entry?template=Defect+report+from+developer&components=Blink%3EPerformanceAPIs)
* For other kinds of issues, file a bug
[here](https://bugs.chromium.org/p/chromium/issues/entry?template=Defect+report+from+developer&components=Speed%3EMetrics)
## APIs we own
* [Element Timing](https://github.com/WICG/element-timing)
* [Event Timing](https://github.com/WICG/event-timing)
* [HR Time](https://github.com/w3c/hr-time/)
* [Largest Contentful Paint](https://github.com/WICG/largest-contentful-paint)
* [Layout Instability](https://github.com/WICG/layout-instability)
* [Longtasks](https://github.com/w3c/longtasks/)
* We own some of the implementation details of [Navigation
Timing](https://github.com/w3c/navigation-timing/)
* We are ramping up on [Page
Visibility](https://github.com/w3c/page-visibility/)
* [Paint Timing](https://github.com/w3c/paint-timing/)
* [Performance Timeline](https://github.com/w3c/performance-timeline)
* We own some of the implementation details of [Resource
Timing](https://github.com/w3c/resource-timing)
* [User Timing](https://github.com/w3c/user-timing)
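Most of these APIs surface their data through the standard PerformanceObserver
interface. A minimal sketch (illustrative only, not part of any spec; the entry
type strings come from the specs linked above):

```js
// Observe LCP candidates and layout shifts as they are recorded.
// `buffered: true` replays entries from before the observer was registered.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(entry.entryType, entry.startTime, entry);
  }
});
observer.observe({type: 'largest-contentful-paint', buffered: true});
observer.observe({type: 'layout-shift', buffered: true});
```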
## Web performance objectives
* See our [web performance objectives](webperf_okrs.md).
## Metrics changelog
We realize it's important to keep developers in the loop about important
changes to our metric definitions. For this reason, we have created a [metrics
changelog](metrics_changelog/README.md), which will be updated over time.
## User Docs
* [Metrics for web developers](https://web.dev/metrics/).
* [Properties of a good metric](../speed/good_toplevel_metrics.md)
* [Survey of current
metrics](https://docs.google.com/document/d/1Ww487ZskJ-xBmJGwPO-XPz_QcJvw-kSNffm0nPhVpj8/edit)
## Talks
* [Lessons learned from performance monitoring in
Chrome](https://www.youtube.com/watch?v=ctavZT87syI), by Annie Sullivan.
* [Shipping a performance API on
Chromium](https://ftp.osuosl.org/pub/fosdem/2020/H.1309/webperf_chromium_development.webm),
by Nicolás Peña Moreno.
* [Understanding Cumulative Layout
Shift](https://www.youtube.com/watch?v=zIJuY-JCjqw), by Annie Sullivan and
Steve Kobes.

@@ -0,0 +1,121 @@
# Web Performance Objectives
[TOC]
## 2020 Q2
### New web performance APIs
* Work towards shipping
**[performance.measureMemory](https://github.com/WICG/performance-measure-memory)**.
This API intends to provide memory measurements for web pages without
leaking information. It will replace the non-standard performance.memory and
provide more accurate information, but will require the website to be
[cross-origin
isolated](https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/crossOriginIsolated).
Try it out with the Origin Trial
[here](https://web.dev/monitor-total-page-memory-usage/#using-performance.measurememory())!
A usage sketch appears after this list. Deliverables for this quarter:
* Extend the scope of the API to workers.
* Draft a spec.
* Work towards web perf support for **Single Page Apps** (SPAs). SPAs have
long been mistreated by our web performance APIs, which mostly focus on the
initial page load for multi-page apps. It will be a long process to
resolve all measurement gaps, but we intend to start making progress on
better performance measurements for SPAs by using a data-driven approach.
Deliverables for this quarter:
* Implement a strategy for measuring the performance of SPA navigations in
RUM, based on explicit navigation annotations via User Timing (see the
sketch after this list).
* Partner with some frameworks to gather data using said strategy.
* Socialize an explainer with our ideas.
* Work towards web perf support for **page abandonment**. Currently, our APIs
are blind to a class of users that decide to leave the website very early
on, before the performance measurement framework of the website is set into
place. This quarter, we plan to create and socialize a proposal about
measuring early page abandonment.
* Ship the full **[Event Timing](https://github.com/WICG/event-timing)** API.
Currently, Chrome ships only first-input to enable users to measure their
[First Input Delay](https://web.dev/fid/). We intend to ship support for all
event types so that developers can track any slow event. Each entry will
include a target attribute identifying the affected EventTarget. We'll
support a durationThreshold parameter in the observer to tweak the duration
of events being observed. Finally, we'll also have performance.eventCounts
to enable computing estimated percentiles based on the data received (a
sketch follows this list).
* Ship a **[Page Visibility](https://github.com/w3c/page-visibility/)**
observer. Right now, the Page Visibility API allows registering an event
listener for future changes in visibility, but any visibility states prior
to that are missed. The solution to this is having an observer which enables
buffered entries, so a full history of the visibility states of the page
is available. An alternative considered was having a boolean flag in the
PerformanceEntry stating that the page was backgrounded before the entry was
created, but there was overwhelming
[support](https://lists.w3.org/Archives/Public/public-web-perf/2020Apr/0005.html)
for the observer instead.
* Provide support for two Facebook-driven APIs:
[isInputPending](https://github.com/WICG/is-input-pending) and [JavaScript
Self-Profiling](https://github.com/WICG/js-self-profiling). The
**isInputPending** API enables developers to query whether the browser has
received but not yet processed certain kinds of user input. This way, work
can be scheduled on longer tasks while still allowing the task to be stopped
when higher-priority work arises. The **JS Self-Profiling** API enables
developers to collect JS profiles from real users, given a sampling rate and
capacity. It enables measuring the performance impact of specific JS
functions and finding hotspots in JS code.
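A minimal usage sketch for the measureMemory item above (the result shape
follows the Origin Trial write-up linked there and may still change):

```js
// Assumes a cross-origin isolated page; the call rejects otherwise.
async function logMemoryEstimate() {
  if (!performance.measureMemory) return;  // Origin Trial / Chrome-only for now
  try {
    const result = await performance.measureMemory();
    console.log('Estimated memory use:', result.bytes, 'bytes');
  } catch (error) {
    console.error('measureMemory failed:', error);  // e.g. SecurityError
  }
}
logMemoryEstimate();
```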
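The SPA measurement strategy above builds on the existing User Timing API; a
sketch with hypothetical mark names (the names are ours, not part of any
proposal):

```js
// Annotate a client-side route change so a RUM library can measure it.
performance.mark('spa-nav-start');
// ... fetch data and render the new view ...
performance.mark('spa-nav-end');
performance.measure('spa-navigation', 'spa-nav-start', 'spa-nav-end');

const [nav] = performance.getEntriesByName('spa-navigation');
console.log('SPA navigation took', nav.duration, 'ms');
```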
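And a sketch of the full Event Timing API from the item above, once it ships
beyond first-input (parameter and attribute names follow the explainer):

```js
// Report every event slower than 100 ms, including the element it hit.
const eventObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(entry.name, 'took', entry.duration, 'ms on', entry.target);
  }
});
eventObserver.observe({type: 'event', durationThreshold: 100});

// Counts of all dispatched events, for estimating percentiles.
if (performance.eventCounts) {
  console.log('Clicks so far:', performance.eventCounts.get('click'));
}
```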
### Existing web performance API improvements
* Ship the
[sources](https://github.com/WICG/layout-instability#Source-Attribution)
attribute for the
**[LayoutInstability](https://github.com/WICG/layout-instability)** API. The
Layout Instability API provides excellent information about content shifting
on a website. This API is already shipped in Chrome. However, it's often hard
to figure out which content is shifting. This new attribute will inform
developers about the shifting elements and their locations within the
viewport (see the sketch after this list).
* **[LargestContentfulPaint](https://github.com/WICG/largest-contentful-paint)**:
gather data about LCP without excluding DOM nodes that were removed. The
Largest Contentful Paint API exposes the largest image or text that is painted
in the page. Currently, content removed from the website is also removed as a
candidate for LCP. However, this negatively affects some websites, for
instance those with certain types of image carousels. This quarter, we'll
gather data internally to determine whether we should start including removed
DOM content. The API itself will not change for now.
* _(Stretch)_ Work on exposing the **final LargestContentfulPaint** candidate.
Currently LCP just emits a new entry whenever a new candidate is found. This
means that a developer has no way to know when LCP is done, which can happen
early on if there is some relevant user input in the page. We could consider
surfacing an entry to indicate that LCP computations are finished and
including the final LCP value, when possible. There's also an
[idea](https://github.com/WICG/largest-contentful-paint/issues/43#issuecomment-608569132)
to include some heuristics to get a higher quality signal regarding whether
the LCP obtained seems valid. If we have time this quarter, we'd be happy to
do some exploration on this.
* _(Stretch)_ **[ResourceTiming](https://github.com/w3c/resource-timing)**:
outline a plan to fix the problem of TAO (Timing-Allow-Origin) being an opt-in
for non-timing information such as transferSize. This may mean using a new
header or relying on some of the upcoming new security primitives in the web.
If we have time this quarter, we'd like to begin tackling this problem by
socializing a concrete proposal for a fix.
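A sketch of how the proposed sources attribute above could be consumed
(attribute names follow the explainer and may change):

```js
// Log each layout shift along with the elements that moved.
const shiftObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.hadRecentInput) continue;  // shifts right after input are typically ignored
    console.log('Layout shift, score:', entry.value);
    for (const source of entry.sources || []) {
      console.log(source.node, source.previousRect, source.currentRect);
    }
  }
});
shiftObserver.observe({type: 'layout-shift', buffered: true});
```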
### Interop and documentation
* **[Paint Timing](https://github.com/w3c/paint-timing)**: change the Chromium
implementation so it passes [new web platform
tests](https://wpt.fyi/results/paint-timing/fcp-only?label=experimental&label=master&aligned).
These tests are based on feedback from WebKit, who intend to ship First
Contentful Paint in the near future (a short observer sketch appears at the
end of this document).
* Improve the **documentation** of our APIs on MDN and web.dev. We have been
busily shipping new web perf APIs, but some of their documentation has lagged
behind. For instance, we'll make sure that there are MDN pages for all of the
new APIs we've shipped, and we'll collaborate with DevRel to ensure that the
documentation on web.dev is accurate.
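For the Paint Timing item above, the behavior those web platform tests cover is
observable from a page with a few lines (illustrative sketch):

```js
// 'first-paint' and 'first-contentful-paint' arrive as 'paint' entries.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(entry.name, 'at', entry.startTime, 'ms');
  }
}).observe({type: 'paint', buffered: true});
```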