This is a proposal for using 2D and 3D <canvas> to customize the rendering of HTML content.
This is a living explainer which is continuously updated as we receive feedback.
The APIs described here are implemented behind a flag in Chromium and can be enabled with chrome://flags/#canvas-draw-element.
There is no web API to easily render complex layouts of text and other content into a <canvas>. As a result, <canvas>-based content suffers in accessibility, internationalization, performance, and quality.
- Styled, Laid Out Content in Canvas. There’s a strong need for better styled text support in Canvas. Examples include chart components (legend, axes, etc.), rich content boxes in creative tools, and in-game menus.
- Accessibility Improvements. There is currently no guarantee that the fallback content used for <canvas> accessibility matches the rendered content, and such fallback content can be hard to generate. With this API, elements drawn into the canvas will match their corresponding canvas fallback.
- Composing HTML Elements with Shaders. A limited set of CSS shaders, such as filter effects, are already available, but there is a desire to use general WebGL shaders with HTML.
- HTML Rendering in a 3D Context. 3D aspects of sites and games need to render rich 2D content into surfaces within a 3D scene.
The solution introduces three main primitives: an attribute to opt-in canvas elements, methods to draw child elements into the canvas, and an observer to handle updates.
The layoutsubtree attribute on a <canvas> element opts in canvas descendants to have layout and participate in hit testing. It causes the direct children of the <canvas> to have a stacking context, become a containing block for all descendants, and have paint containment.
The drawElementImage(element) method renders the DOM element and its subtree into the canvas, and returns a transform that can be applied to the transform property on element to align its DOM location with its drawn location.
Requirements & Constraints:
- layoutsubtree must be specified on the <canvas>.
- The element must be a direct child of the <canvas>.
- The element must generate boxes (i.e., not display: none).
- Transforms: The canvas's current transformation matrix is applied when drawing into the canvas. CSS transforms on the source element are ignored for drawing (but continue to affect hit testing/accessibility; see below).
- Clipping: Overflowing content (both layout and ink overflow) is clipped to the element's border box.
- Sizing: The optional width/height arguments specify a destination rect in canvas coordinates. If omitted, the width/height arguments default to sizing the element so that it has the same on-screen size and proportion in canvas coordinates as it does outside the canvas.
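The default sizing behavior can be illustrated with a small sketch. The helper and its parameters below are hypothetical, not part of the proposed API; they assume a canvas whose backing store is gridW × gridH grid pixels, displayed at cssW × cssH CSS pixels:

```javascript
// Hypothetical sketch of the default destination sizing described above;
// none of these names are part of the proposed API.
function defaultDestSize(elemCssW, elemCssH, gridW, gridH, cssW, cssH) {
  const scaleX = gridW / cssW;  // CSS px → canvas grid px
  const scaleY = gridH / cssH;
  return { width: elemCssW * scaleX, height: elemCssH * scaleY };
}

// A 100×50 CSS px element drawn into a 400×400 canvas displayed at
// 200×200 CSS px defaults to a 200×100 grid px destination rect, so it
// keeps its on-screen size and proportions.
const size = defaultDestSize(100, 50, 400, 400, 200, 200);
// size is { width: 200, height: 100 }
```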
WebGL/WebGPU Support:
Similar methods are added for 3D contexts: WebGLRenderingContext.texElementImage2D and copyElementImageToTexture.
A fireOnEveryPaint option is added to ResizeObserverOptions. This allows script to be notified whenever descendants of a <canvas> may render differently and may need to be re-drawn. The callback runs at Resize Observer timing (after DOM style/layout, but before paint).
Browser features like hit testing, intersection observer, and accessibility rely on an element's DOM location. To ensure these work, the element's transform property should be updated so that the DOM location matches the drawn location.
Calculating a CSS transform to match a drawn location
The general formula for the CSS transform is:

$$\text{transform} = T_{\text{origin}}^{-1} \cdot S_{\text{css} \to \text{grid}}^{-1} \cdot T_{\text{draw}} \cdot S_{\text{css} \to \text{grid}} \cdot T_{\text{origin}}$$

Where:

- $$T_{\text{draw}}$$: Transform used to draw the element in the canvas grid coordinate system. For drawElementImage, this is $$CTM \cdot T_{(\text{x}, \text{y})} \cdot S_{(\text{destScale})}$$, where $$CTM$$ is the Current Transformation Matrix, $$T_{(\text{x}, \text{y})}$$ is a translation from the x and y attributes, and $$S_{(\text{destScale})}$$ is a scale from the width and height attributes.
- $$T_{\text{origin}}$$: Translation matrix of the element's computed transform-origin.
- $$S_{\text{css} \to \text{grid}}$$: Scaling matrix converting CSS pixels to Canvas Grid pixels.
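The calculation can be sketched with plain 2D affine matrices (using the {a, b, c, d, e, f} layout of DOMMatrix). The helper names here are illustrative only; in practice the browser computes this for you via the return value of drawElementImage or getElementTransform:

```javascript
// m ∘ n: apply n first, then m (2D affine, DOMMatrix-style components)
const mul = (m, n) => ({
  a: m.a * n.a + m.c * n.b,
  b: m.b * n.a + m.d * n.b,
  c: m.a * n.c + m.c * n.d,
  d: m.b * n.c + m.d * n.d,
  e: m.a * n.e + m.c * n.f + m.e,
  f: m.b * n.e + m.d * n.f + m.f,
});
const translate = (x, y) => ({ a: 1, b: 0, c: 0, d: 1, e: x, f: y });
const scale = (s) => ({ a: s, b: 0, c: 0, d: s, e: 0, f: 0 });

// transform = T_origin⁻¹ · S_css→grid⁻¹ · T_draw · S_css→grid · T_origin
function cssTransformFor(tDraw, cssToGrid, originX, originY) {
  const tOrigin = translate(originX, originY);
  const s = scale(cssToGrid);
  const invTOrigin = translate(-originX, -originY);
  const invS = scale(1 / cssToGrid);
  return mul(mul(mul(mul(invTOrigin, invS), tDraw), s), tOrigin);
}

// Drawing at (10, 0) grid px on a canvas whose grid is 2× its CSS size
// moves the element by 5 CSS px, independent of its transform-origin:
const t = cssTransformFor(translate(10, 0), 2, 5, 5);
// t is { a: 1, b: 0, c: 0, d: 1, e: 5, f: 0 }
```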
To assist with synchronization, drawElementImage returns the CSS transform which can be applied to the element to keep its location synchronized. For 3D contexts, the getElementTransform(element, draw_transform) helper method is provided, which returns the CSS transform given a general draw transformation matrix.
<canvas id="canvas" style="width: 200px; height: 200px;" layoutsubtree>
<div id="form_element">
name: <input>
</div>
</canvas>
<script>
const canvas = document.getElementById('canvas');
const ctx = canvas.getContext('2d');
const observer = new ResizeObserver(([entry]) => {
canvas.width = entry.devicePixelContentBoxSize[0].inlineSize;
canvas.height = entry.devicePixelContentBoxSize[0].blockSize;
let transform = ctx.drawElementImage(form_element, 0, 0);
form_element.style.transform = transform.toString();
});
observer.observe(canvas, {box: 'device-pixel-content-box', fireOnEveryPaint: true});
</script>

interface HTMLCanvasElement {
attribute boolean layoutSubtree;
[RaisesException]
DOMMatrix getElementTransform(Element element, DOMMatrix draw_transform);
};
interface CanvasRenderingContext2D {
[RaisesException]
DOMMatrix drawElementImage(Element element, unrestricted double x, unrestricted double y);
[RaisesException]
DOMMatrix drawElementImage(Element element, unrestricted double x, unrestricted double y,
unrestricted double dwidth, unrestricted double dheight);
};
interface WebGLRenderingContext {
[RaisesException]
void texElementImage2D(GLenum target, GLint level, GLint internalformat,
GLenum format, GLenum type, Element element);
};
interface GPUQueue {
[RaisesException]
void copyElementImageToTexture(Element source, GPUImageCopyTextureTagged destination);
};
dictionary ResizeObserverOptions {
boolean fireOnEveryPaint = false;
};

See here for a demo using the drawElementImage API to draw rotated complex text.
See here for a demo using the WebGL texElementImage2D API to draw HTML onto a 3D cube.
A demo of the same thing using an experimental extension of three.js is here. Further instructions and context are here.
See here for a demo of interactive content in canvas.
The fireOnEveryPaint resize observer option is used to update the canvas as needed. The effect is a fully interactive form in canvas.
Both painting (via canvas pixel readbacks or timing attacks) and invalidation (via fireOnEveryPaint) have the potential to leak sensitive information, and this is prevented by excluding sensitive information when painting. While an exhaustive list cannot be enumerated, sensitive information includes:
- Cross-origin data in embedded content (e.g., <iframe>, <img>), <url> references (e.g., background-image, clip-path), and SVG (e.g., <use>). Note that same-origin iframes would still paint, but cross-origin content in them would not.
- System colors, themes, or preferences.
- Spelling and grammar markers.
- Search text (find-in-page) and text-fragment (fragment url) markers.
- Visited link information.
- Form autofill information not otherwise available to JavaScript.
SVG's <foreignObject> can be combined with data uri images and canvas to access the pixel data of HTML content (example), and implementations currently have mitigations to prevent leaking sensitive content. As an example, an <input> with a spelling error is still painted, but any indication of spelling errors, which could expose the user's spelling dictionary, is not painted. Similar mitigations should be used for drawElementImage, but need to be expanded to cover additional cases.
The HTML-in-Canvas features may be enabled with chrome://flags/#canvas-draw-element in Chrome Canary.
We are most interested in feedback on the following topics:
- What content works, and what fails? Which failure modes are most important to fix?
- How does the feature interact with accessibility features? How can accessibility support be improved?
Please file bugs or design issues here.