The Setup
In large software companies, it's common to find multiple teams working on one document (i.e. web page). Moreover, different teams might work on their own completely separate processes which all contribute to a single output appearing in the end user's browser. It may not be possible or desirable for those separate teams to perfectly coordinate their efforts at all times: the more approvals and sign-offs needed for each change, the fewer changes you'll be able to make.
At first glance, adding a component to a web page may seem like a simple thing, but in large organizations where many teams all contribute to the final output on their own schedules and with their own processes, it's anything but. Take a look at this hypothetical workflow, in which content, marketing, admin, and design teams all contribute their efforts to a single page:
Barriers to Collaboration
When it comes to JavaScript resources, those multiple inputs may all try to load the same (or similar) code. In the best case you ship a bloated document, with unnecessary JavaScript slowing down page load and interactivity: your users are annoyed, your conversions and sales suffer, and ultimately it impacts the bottom line.
At worst, though, this setup can lead to runtime conflicts and errors, in particular the infamous "custom-element double registration" error, which occurs when a script tries to register the same custom element tag name twice. This happens because custom element tag names are registered globally and must be unique.
For example, a domain admin might create a minified bundle of version 1.0.0 of a design system and load it on every page. Subsequently, a page content author who (correctly) wishes to declare all their dependencies might load a CDN link to an individual design system element. Even if it's the same package version (1.0.0), when the individual element module tries to register a tag name already registered by the bundle, it will fail, and the rest of that author's script will not run.
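To make the failure mode concrete, here is a minimal sketch (the tag name and the exact error message are illustrative; the wording varies by browser):

// Loaded first, as part of the domain admin's minified bundle:
customElements.define('pf-button', class extends HTMLElement { /* ... */ });

// Loaded later from the content author's CDN link to the individual element.
// Even at the same package version, this second call throws a DOMException
// ("the name 'pf-button' has already been used with this registry"),
// and the statements after it in that module never run.
customElements.define('pf-button', class extends HTMLElement { /* ... */ });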
If the page admin has perfect a priori knowledge of which elements are loaded on which pages (or simply loads all elements on all pages), and if all content authors are strictly disciplined about not loading their own copies of those JavaScript resources, we can be reasonably confident that server-rendered content will avoid these errors.
Known Unknowns
But then the marketing department or a related product team wants to run some client-rendered JavaScript to pull in marketing materials, or run an SPA or microfrontend. This code gets loaded dynamically, after the CMS admins have already produced the final page output. If those teams are able to coordinate with the domain admin, they can agree to use the preloaded bundle (which must now load every element on every page, even if only one element is needed). They will not be able to declare their dependencies, and will have to work closely with CMS admins to clarify which dependencies are available to them at each release.
But if those marketing or product teams need to ship their content or apps to multiple domains with separate admins and diverse processes (as is likely to be the case in very large organizations), it can very quickly become prohibitively complicated to coordinate between admins, authors, and teams for every release of every page on every domain.
How can we enable multiple teams, at multiple levels in an organization, to work on the same document and use the same web components, à la carte, without requiring them to load the entire bundle in advance?
Two Complementary Solutions
Organizations can benefit from either or both of two web standards in this situation: scoped custom element registries and import maps. These two specs can be used independently or in concert to help with the many-to-one problems described above.
Scoped Custom Element Registries
The idea here is to give custom-element authors the ability to re-register a tag name which exists elsewhere on the page, but scoped to a particular shadow root. This is useful in the microfrontend example above. The product team could load the specific versions of the web components they want and privately register their tag names within the shadow root (or shadow roots) of their microfrontend app.
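As a rough sketch of how that might look, assuming the API proposed by the spec and implemented by the @webcomponents/scoped-custom-element-registry polyfill (DsButton is a hypothetical "pure" element class, i.e. one that doesn't register itself as a side effect of being imported):

import { DsButton } from './ds-button.js'; // hypothetical pure module

// The microfrontend keeps its own registry, so its tag names never
// touch (or collide with) the page's global registry.
const registry = new CustomElementRegistry();
registry.define('ds-button', DsButton);

class MarketingApp extends HTMLElement {
  constructor() {
    super();
    // Associate the scoped registry with this component's shadow root...
    this.attachShadow({ mode: 'open', customElements: registry });
    // ...so <ds-button> resolves to the version this team chose.
    this.shadowRoot.innerHTML = '<ds-button>Sign up</ds-button>';
  }
}

customElements.define('marketing-app', MarketingApp);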
Pros
- Spec track solution, so it's future-proof
- Teams can load the specific versions they want
- Multiple versions can coexist on the page
Cons
- The polyfill must be specifically included (opted in) in every component's definition
- Requires Shadow DOM (in fact, this is a plus, but some may consider it a dealbreaker)
- Custom element authors must provide "pure" modules, which is not always the case
In many cases, the cons listed here may not be of any consequence, but for well-established projects with tight integration requirements, it may be prohibitive to refactor all the components, just to get a polyfilled solution which doesn't match downstream expectations. Although teams at large organizations have successfully deployed the scoped custom element registries polyfill, in my opinion it is best to wait for this feature to land cross-browser and use it in combination with a future HTML module / declarative custom elements syntax.
Import Maps
Available today cross-browser and even in server-side runtimes like Deno, import maps let page authors customize the way the browser loads JavaScript modules. For more about how to use import maps, read my earlier post.
An import map is a JSON object that specifies the URLs to use for a given import specifier or path prefix. At the moment, import maps must be included inline in the page in order for the browser to apply them.
<script type="importmap">
{
"imports": {
"@patternfly/elements/": "https://esm.sh/@patternfly/elements@2.0.1/"
}
}
</script>
Import maps let page or app authors write familiar "bare specifiers", meaning import statements that look like:
import '@patternfly/elements/pf-button/pf-button.js';
Without import maps, they'd either have to write URL-path import statements or use a bundler on the server side:
import '/assets/packages/@patternfly/elements/pf-button/pf-button.js';
Pros
- Developers can reference modules by package, instead of by URL
- Shipped cross-browser (91.53% support globally as of this writing, according to caniuse)
- Can scope module resolutions
Cons
- Can only use one import map at a time
- Can't yet load import maps by URL (i.e. <script type="importmap" src="...">)
- Doesn't scope custom element names, so some level of cross-team communication is still required
A Cross-Team Plan for Import-Map Adoption
Import maps increase internal collaboration by aligning teams around package names instead of resource addresses. By adopting import maps, the surface area for disagreement between teams can be reduced to "which version" of a package to load, rather than "which version, in what format, and at what address". A practical sketch of how this might look follows.
Three Spheres
Large organizations can adopt import maps by thinking of three spheres of interest for each page:
- CMS admins / domain-level teams / tool-and-process owners
- page-or-app level teams / section owners
- microfrontend teams / content authors / content injection authors
The first sphere is responsible for establishing and maintaining the tools and processes which produce each page. As a rough guide: they have the last word and are generally in charge of shared resources in the <head>.
The second sphere is responsible for producing the content of entire areas of the site, but is not directly responsible for shared tools and processes. Generally speaking, these teams use the tools and processes but don't write or define them.
Lastly, teams in the third sphere produce content which is injected into the page after-the-fact. As far as resource loading goes, this content is arbitrary and independent of the content and planning done by the second sphere, but it may be beholden to the process decisions from the first sphere.
With that concept of the division of responsibilities in mind, teams shall:
- agree ahead of time on the major version line of the shared packages available for use,
- write page code which imports modules by package instead of by URL, and
- establish a release schedule for major version updates, with lead time for teams to adapt their code.
It's important to recognize that import maps don't have to be the same across an entire domain. Each page can have its own import map, so pages which don't dynamically load content from the third sphere have a smaller surface area for disagreement between teams. Second-sphere teams that want to customize the import map on a given page can put in a request with admins, or admins might provide a per-page escape hatch (think CMS content knob) for app teams to override or customize the domain-wide import map. Tools like the @jspm/generator package help admins manage the dependencies within each page's import map.
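For instance, a small script along these lines could generate a page's map (a sketch assuming @jspm/generator's Generator, install(), and getMap() API; the package and version range are examples):

import { Generator } from '@jspm/generator';

const generator = new Generator({
  env: ['production', 'browser', 'module'], // resolution conditions for this page
});

// Pin the agreed-upon major version line for this page's shared packages.
await generator.install('@patternfly/elements@2');

// Emit the JSON to inline into the page's <script type="importmap">.
console.log(JSON.stringify(generator.getMap(), null, 2));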
This means that third-sphere teams only have to go one organizational level up (to the page owners) in order to solve their dependency issue, rather than having to go two levels up to the domain admins.
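One way to provide that escape hatch is the import map's scopes field: resolutions can be overridden only for modules loaded from under the microfrontend's own path, while the rest of the page keeps the domain-wide defaults (the paths and versions below are illustrative):

<script type="importmap">
  {
    "imports": {
      "@patternfly/elements/": "https://esm.sh/@patternfly/elements@2.0.1/"
    },
    "scopes": {
      "/marketing-widget/": {
        "@patternfly/elements/": "https://esm.sh/@patternfly/elements@2.4.0/"
      }
    }
  }
</script>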
Potential Problems
This approach is useful in limiting areas of potential clash between domain admins, page authors, and content-injecting teams, but it doesn't eliminate them. Even if the organization adopts import maps writ large, teams in the second or third spheres who do not update their pages (or who do not opt them out of updates) in time for major package releases may find their experiences breaking when shared libraries are updated.
Similarly, third-sphere contributors, who are organizationally the furthest from the decision makers in the first and second spheres, may not be aware that breaking changes are incoming, so effort will have to be expended to keep them up to date.
These problems exist when bundlers or import-by-URL are employed as well, but import maps reduce their severity by rationalizing imports around package names.