
# OTA - mind the runtime version

2023-01-30
What the diagram is saying is pretty obvious: remember to ask yourself the following question before pushing an update and you'll be fine:

"Are we sure we're not referencing new native code that wasn't there the last time we uploaded the app to the stores?"
\"],\"collection\":[0,\"blog\"],\"data\":[0,{\"draft\":[0,false],\"title\":[0,\"OTA - mind the runtime version\"],\"summary\":[0,\"It's important to know what you're doing when pushing out over the air to your users\"],\"publishedAt\":[0,\"2023-01-30\"],\"tags\":[1,\"[[0,\\\"expo\\\"],[0,\\\"eas\\\"]]\"],\"image\":[0,\"/static/images/expomental.png\"]}],\"render\":[0,null]}],[0,{\"id\":[0,\"astro-blog.mdx\"],\"slug\":[0,\"typesafe-astro-blog\"],\"body\":[0,\"\\nimport { H1, H2, H3, P, Tagline } from '../../components/Base'\\n\\nAstro just released their 2.0 version with lots of goodies, and the treat most important to this post is the introduction of content collections.
With content collections you get to write typesafe markdown files by specifying a Zod schema that describes your posts, e.g:
```javascript
import { defineCollection, z } from 'astro:content';

const blog = defineCollection({
  schema: z.object({
    // Define your expected frontmatter properties
    title: z.string(),
    // Mark certain properties as optional
    draft: z.boolean().optional(),
    // Transform date strings into full Date objects
    publishDate: z.string().transform((val) => new Date(val)),
    // Improve SEO with descriptive warnings
    description: z.string(),
    // ...
  }),
});
```
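To get a feel for what the schema buys you, here's a hedged sketch of querying the collection - entries come back with `data` typed according to the schema above, so a typo in a property name becomes a build error:

```javascript
// e.g. in a page's frontmatter script
import { getCollection } from 'astro:content';

// Filter out drafts; `data.draft` is typed as boolean | undefined
const posts = await getCollection('blog', ({ data }) => data.draft !== true);
```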
Saying that this is useful would be a gross understatement; head on over to the **Astro docs** to find out more about this awesome feature!

/Nico
\\n\"],\"collection\":[0,\"blog\"],\"data\":[0,{\"draft\":[0,false],\"title\":[0,\"Typesafe markdown blog with Astro 2.0\"],\"summary\":[0,\"With the recent release of Astro 2.0 creating a type-safe blog has never been easier!\"],\"publishedAt\":[0,\"2023-01-29\"],\"tags\":[1,\"[[0,\\\"astro\\\"],[0,\\\"typescript\\\"],[0,\\\"content collection\\\"]]\"],\"image\":[0,\"/static/images/astro/astro.png\"]}],\"render\":[0,null]}],[0,{\"id\":[0,\"expo-ota-azure-pipelines.mdx\"],\"slug\":[0,\"expo-ota-azure\"],\"body\":[0,\"import { H1, H2, P } from '../../components/Base'\\n\\nAlthough EXPO has a lot of information regarding OTA updates, they lack information on how to setup automatic updates using Azure.
In this example pipeline, the code below runs every time code is pushed to a specific branch. We set up Node, check out the repo and install our dependencies. Then we log in to Expo using our username and an environment variable we called EXPO_CLI_PASSWORD. Lastly, we specify the EAS branch and use the commit message as our update message.

Example pipeline:
```yaml
trigger:
  - (branch name)

pool:
  vmImage: 'ubuntu-latest'

jobs:
  - job: EAS
    displayName: 'Run Expo EAS update'
    steps:
      - task: NodeTool@0
        inputs:
          versionSpec: '16.14'
        displayName: 'Install Node'

      - checkout: self
        displayName: 'Checkout repo'

      - script: yarn install --frozen-lockfile
        workingDirectory: '$(System.DefaultWorkingDirectory)'
        displayName: 'Install app'

      - script: yarn global add expo-cli
        workingDirectory: '$(System.DefaultWorkingDirectory)'
        displayName: 'Install Expo CLI'

      - script: yarn global add eas-cli
        workingDirectory: '$(System.DefaultWorkingDirectory)'
        displayName: 'Install EAS CLI'

      - script: npx expo login -u (your expo username) -p $(EXPO_CLI_PASSWORD)
        env:
          EXPO_CLI_PASSWORD: $(EXPO_CLI_PASSWORD)
        workingDirectory: '$(System.DefaultWorkingDirectory)'
        displayName: 'Login to Expo'

      - script: yarn run sourceEnvVariables && eas update --branch (eas branch) --message "$(Build.SourceVersionMessage)"
        workingDirectory: '$(System.DefaultWorkingDirectory)'
        displayName: 'Perform OTA update'

      - script: |
          # done
          echo 'EAS update on commit:' $(Build.SourceVersionMessage)
```

Note - it's important to source your environment variables, if you have any, before running the update command.
We declare the variable under `env` to make the secret available to the script step.

Since this example is set up to trigger when code gets pushed to a specific branch, you could for instance save this pipeline as ota-preview.yml and trigger it when you push code to a git branch called preview (which in turn is connected to an EAS branch with the same name).

OTA updates with Expo EAS are amazing, and I hope you liked this article showing one way of configuring the process using Azure Pipelines.
**/ ND**

---

# Web perf basics - Performance Budgets

2021-10-01

Now that we've talked about performance metrics and introduced ourselves to **SpeedCurve**, I would like to continue with performance budgets and the role they play when building performant web experiences.
Performance budgets are made up of two things: resource budgets - limiting asset file sizes - and metric budgets - monitoring **Synthetic** & **RUM** data and the many metric acronyms that belong to that segment.

Let's begin on the less complicated side and talk about resource budgets first. Resource budgets are simply file-size constraints on, normally, all of your resources: the HTML document, your CSS, your fonts, your images, your JavaScript, and so on.

To define a resource budget, ask yourself the question:

What's important for my site to be able to perform, to be valuable?

Odds are pretty high that you'll need to cut down on a few things to reach your target, and you might be surprised to find that cutting down doesn't have to be a bad thing.

If you don't know where to begin, there are many resources out there that can assist you in creating your own resource budget; a favorite of mine is the site performancebudget.io.

Their idea is simple - you create your budget by filling in the blanks in a sentence along the lines of "I want my site to load in ___ seconds on a ___ connection".

The generated budget is purely a result of full resource load versus connection speed, so to reach the goal of loading a site in 5 seconds on a Fast 3G mobile connection (1.6 Mbps × 5 s = 8 Mb, i.e. roughly 1,000 kB to spend), your budget would look something like this:
HTML - 25kb | CSS - 32kb | JavaScript - 165kb | Images - 630kb
Video - 97kb | Fonts - 50kb

**Total budget: 999kb**

Now let's look at a full load in 5 seconds, but on a slower 3G connection (780 Kbps × 5 s ≈ 3.9 Mb, roughly 480 kB):

HTML - 12kb | CSS - 15kb | JavaScript - 79kb | Images - 302kb
Video - 46kb | Fonts - 24kb

**Total budget: 478kb**
The numbers paint a pretty clear picture - images and scripts are seemingly our largest resources and thus our biggest problem area.

And although the "reduce one, increase the other" mindset does work, remember that it's generally considered a bad idea to increase the script side, due to its blocking (parsing, compiling, and execution) nature.

So what's a good resource budget?

Well, it depends entirely on your application. While keeping JavaScript to a minimum is sought after in an e-commerce context where speed equals money, you might actually be working on something that resembles an app more than anything else. In that case, your budget needs to take that into account.

In other words: whatever you have - use that as your initial baseline and make sure you don't exceed it from now on. Then start questioning whether you really need [ X ], or whether [ X ] isn't more of a nice-to-have.

Making sure your team doesn't exceed a set resource budget can be done in several ways; something I find to work well is adding a step to your deployment or PR pipeline that runs Lighthouse CI with Puppeteer, making sure that pull requests introducing budget regressions are invalidated.
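As a sketch of what that can look like, Lighthouse accepts a budget file with sizes in kilobytes that maps neatly onto the resource budgets above; the path and numbers here are illustrative, and Lighthouse CI can assert against the file (via its budgetsFile option) so regressions fail the run:

```json
[
  {
    "path": "/*",
    "resourceSizes": [
      { "resourceType": "document", "budget": 25 },
      { "resourceType": "stylesheet", "budget": 32 },
      { "resourceType": "script", "budget": 165 },
      { "resourceType": "image", "budget": 630 },
      { "resourceType": "font", "budget": 50 },
      { "resourceType": "total", "budget": 999 }
    ]
  }
]
```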
The reason metric-based budgets are considered more difficult is that they require additional insight into the world of performance: reasoning about file size feels more intuitive than knowing what a good threshold for a CLS score is, or whether CLS even matters for you (bad example - it does).

That being said, it doesn't need to be that hard either.

So what do you measure?

Before the Web Vitals era, I would say this question was a lot harder to answer, and it still is to an extent. The main reason being:

Performance metrics are highly personal to your type of client and their interests.

Your e-commerce client might be interested in their visitors being able to check out as fast as humanly possible, while the client your colleague is working with needs a certain UI flow to run as smoothly as possible from a certain point in time, and is thus more interested in runtime performance (hi Ivan Akulov 👋).

Still, I would say: if you don't know where to start, start with the Google Web Vitals metrics and continue on from there.

If you feel there's nothing out there that really matters for your project, then why not follow the likes of Twitter (they measure TTFT - Time To First Tweet) and create your own metric using the PerformanceObserver API or the User Timing API.
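As a sketch of what a homegrown metric can look like - the mark name and analytics endpoint below are made up:

```javascript
// Report every custom measure to an analytics endpoint (hypothetical URL)
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    navigator.sendBeacon('/analytics', JSON.stringify({
      metric: entry.name,
      duration: entry.duration,
    }));
  }
}).observe({ entryTypes: ['measure'] });

// Mark the moment your "first tweet" equivalent hits the screen...
performance.mark('first-product-visible');

// ...and measure from navigation start up to that mark
performance.measure('time-to-first-product', undefined, 'first-product-visible');
```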
How do I measure all of these unknowns?

With a tool like SpeedCurve, measuring is made dead easy, as every dashboard you create gives you the opportunity to set a performance budget (and not only that, but email alerts too). The real question is what that budget threshold should be.

One way of figuring that out, in the true spirit of competition, is to use SpeedCurve to benchmark your competitors and then use that output to produce a target you strive to beat by 20%.

Performance budgeting is a big fish in the bigger pond of web performance and it needs to be taken seriously, but it's by no means the one answer. Think of it as a complement to the other tools in your belt, like above-the-fold optimizations, caching strategies, image optimizations, etc.

The possibilities are many - it's in knowing what to measure, and how that metric, whatever it may be, directly affects your client's needs and business values, that you find the good stuff.
\\n\"],\"collection\":[0,\"blog\"],\"data\":[0,{\"draft\":[0,false],\"title\":[0,\"Web perf basics - Performance Budgets\"],\"summary\":[0,\"Climbing the performance ladder requires hard work, learn how to prevent regressions and enjoy the view.\"],\"publishedAt\":[0,\"2021-10-01\"],\"tags\":[1,\"[[0,\\\"performance\\\"]]\"],\"image\":[0,\"/static/images/perfbudgets.png\"]}],\"render\":[0,null]}],[0,{\"id\":[0,\"cls-speedcurve.mdx\"],\"slug\":[0,\"cls-speedcurve\"],\"body\":[0,\"import { H1, H2, H3, P, Tagline } from '../../components/Base'\\n\\nIf you’ve ever played around with SpeedCurve and its charts you probably know you have the option to go further into each test that makes up a chart by clicking on a chart sample and selecting view test.
Clicking on a chart sample takes you to a detailed view, giving you the opportunity to really focus on the output of a certain metric. In this particular example, we're going to look at one of the Web Vitals metrics:

**CLS - Cumulative Layout Shift.**

CLS occurs when something forces a recalculation of the page layout (a shift). The score is derived from how much elements shift before stabilizing, and a CLS score greater than 0.1 means you have things to improve:

- Good - CLS below 0.1
- Needs improvement - CLS between 0.1 and 0.25
- Poor - CLS above 0.25
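If you want to see where your own pages land against those thresholds, the web-vitals library gives you the number in a few lines; a minimal sketch (recent versions expose `onCLS`, older ones used `getCLS`):

```javascript
import { onCLS } from 'web-vitals';

// Logs the page's CLS value once it's ready to be reported
onCLS((metric) => {
  console.log('CLS:', metric.value);
});
```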
The word cumulative means the sum or collection of (in our case, shifts), and it's important to look at all the shifts that collectively result in our CLS score. In practice though, when working to improve your score, your efforts will obviously pay off more if you deal with the bigger shifts first.

To identify these, know that greater shifts result in larger blocks of red coloration on the screenshot sample, and that you can compare screenshots against each other by looking at the cumulative score for each one.

Although it's sometimes tricky to mitigate shifts, remember that they're often caused by web fonts without fallbacks (or fallbacks that don't resemble the web font), late CSS imports hidden from the preload scanner (avoid @import at all costs!), ads, images without set dimensions, and third parties manipulating the DOM.
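Two of those causes have cheap fixes worth showing; a minimal sketch where the file and font names are made up:

```html
<!-- Give images explicit dimensions so the browser reserves the space up front -->
<img src="/static/images/hero.png" width="800" height="400" alt="Hero" />

<!-- Swap in the web font over a fallback that resembles it, instead of hiding text -->
<style>
  @font-face {
    font-family: 'BrandFont';
    src: url('/fonts/brand.woff2') format('woff2');
    font-display: swap;
  }
  body {
    font-family: 'BrandFont', Arial, sans-serif;
  }
</style>
```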
**/ ND**

---

# Composition and Classification of micro frontends

2020-11-02

Web composition is getting a spotlight thanks to the trending **"micro frontends"** term. There is a lot going on under that umbrella. In this post we'll try to zoom out, look at this larger puzzle and classify a lot of the different scenarios.
The first book on the subject is out - **“Micro frontends in Action”**.

We also see the downsides and misuse of micro frontends pop up in the ThoughtWorks Tech Radar under “micro frontend anarchy”.

Nicolas Delfino has also been publishing a few new articles on the subject on his blog, expanding on the ideas and focusing on different aspects of a larger puzzle.
There needs to be something on the other side of a user route.

A resource that could own content and/or behavior while utilizing composition - a page where we could compose.

Composition could be conditional on cookies/user/feature-set/etc:

- Layout - composition templates
- Cookies
- Login
- A/B tests (conditional composition)
The page resource could be served statically or processed (SSR).

Pages could also contain content and behavior that is not intended for sharing, i.e. not taking part in composition elsewhere.

Any resource with content counts (page or part/fragment).

Depending on the type of content, different strategies for rendering and serving the resource are applicable.
Content:

- Static
- Dynamic (cache)
- Dynamic - personal

Rendering:

- Static
- Client
- SSR

Serving:

- Pull
- Push
Utilizing a CDN adds options for caching, performance and composition.

A CDN treats resources as a static cache; resources can be pushed to the cache/CDN or refreshed/pulled when they expire.

This gives you many different options for rendering content and handling changes to content.

With very dynamic/ad-hoc content, the cache is insufficient.

The “readiness” of the content - how much processing is left to the client - is also something to consider for overall performance and cache efficiency.

In addition to the push/pull options, we see static site generators for CMSes and blogs using push-on-build (with hydration, see the next section).

There is probably not one approach to apply across the board in a more complex scenario; rather, you want to mix and match.
Adding behavior to a page or fragment introduces new considerations: runtime dependencies that might break different parts and have a negative impact on performance, and whether execution happens on the client or the server.

- Over content
- Component
- Forms

- Intrusive
- Progressive
- Hydration
Hydration is a process popularized by “isomorphic” applications, where rendering happens both on the server and on the client, usually using Node on the server to maximize code reuse.

Due to the cost of delaying TTI by hydrating a complete tree, efforts are being made by several front-end libraries to achieve partial / progressive hydration.

More about hydration under SPA/Applications.
In a more complex, full-app experience there might be other considerations, where a “global” runtime and component model is better suited.

But even here, a complete solution/system might be a set of web sites and web apps:

- Component -> component communication
- App-like behavior
This is the area SPAs were traditionally intended for - hence single page applications. But the ever-changing ecosystems these days tend to be complex and costly.

Since a lot of apps are catered for internal use within companies/enterprises, where client/front-end competence is expensive to gain and keep, we now see attraction to WASM-based app models like Blazor.

Some SPAs could be seen as a shell, so there is an option to refactor components towards micro-frontends.
- While isomorphic SPAs suffer from worsened TTI as a result of the SSR + hydration process, loading subsequent routes may feel faster due to lower data volumes between routes and the browser not having to parse and recompile previously loaded JS.

- Isomorphic SPAs give users a fully rendered page, compared to a blank screen in a client-side rendered SPA.

- Full hydration of a SPA is costly and delays TTI - partial / progressive hydration is advised.

- While a good fit for web applications like Slack or Spotify, we see a trend of excessive SPA usage in the regular web space due to framework availability and popularity. This introduces unnecessary complexity and the costs associated with SPAs (e.g. handling SEO). **You probably don't need a single-page application**
- Blazor | Technology Radar | ThoughtWorks
- Micro frontend anarchy | Technology Radar | ThoughtWorks
- Micro frontends in Action
- You probably don’t need a single-page application
**/ Per Ökvist & Nicolás Delfino**
\\n\"],\"collection\":[0,\"blog\"],\"data\":[0,{\"draft\":[0,false],\"title\":[0,\"Composition and Classification of micro frontends\"],\"summary\":[0,\"In this post, Per Ökvist and I will try to zoom-out and take a look at the large puzzle regarding web-composition\"],\"publishedAt\":[0,\"2020-11-02\"],\"tags\":[1,\"[[0,\\\"micro-frontends\\\"],[0,\\\"composition\\\"],[0,\\\"classification\\\"]]\"],\"image\":[0,\"/static/images/micro-frontends-composition-classification/splash.jpg\"]}],\"render\":[0,null]}],[0,{\"id\":[0,\"component-library-module-federation.mdx\"],\"slug\":[0,\"component-library-module-federation\"],\"body\":[0,\"\\nimport { H1, H2, H3, P, Tagline, HR } from '../../components/Base'\\nimport { Recap } from '../../components/federated/Recap'\\n\\nIn my previous post **Micro frontends with Module Federation and Webpack 5**, we looked at how to utilise the new Module Federation plugin available with Webpack 5 (**MF**) to chop up a SPA into multiple, independently owned micro-frontends.
Since this post is about showcasing a component UI library, I'm going to skip some of the setup boilerplate explained in my **last post** and walk you through what we're working with.

Just as last time, we're using a monorepo for convenience, and an app shell containing two routes - the base route and the UI catalog route.
```json
{
  "private": true,
  "scripts": {
    "installDependencies": "yarn workspaces run deps",
    "build": "yarn workspaces run build",
    "start": "concurrently \"wsrun --parallel start\"",
    "clean": "rm -fr node_modules sites/**/node_modules && yarn run clean:dist",
    "clean:dist": "rm -fr node_modules sites/**/dist"
  },
  "workspaces": [
    "sites/*"
  ]
}
```

The app shell lazily loads the routes exposed by each team:
```jsx
const Home = React.lazy(() => import('team-home/Home'));
const Catalog = React.lazy(() => import('team-ui/Catalog'));

const Routes = () => (
  <React.Suspense fallback={null}>
    {/* ... <Home /> on the base route, <Catalog /> on the catalog route ... */}
  </React.Suspense>
);
```

And the home team's Module Federation config consumes the UI team's remote:
```javascript
new ModuleFederationPlugin({
  name: "home",
  filename: "remoteEntry.js",
  remotes: {
    "team-ui": "ui@http://localhost:5000/remoteEntry.js"
  },
  shared: {
    ...deps,
    react: {
      singleton: true,
      requiredVersion: deps.react,
    },
    "react-dom": {
      singleton: true,
      requiredVersion: deps["react-dom"],
    },
  },
}),
```

Now on to the actual UI library exposed by the UI team through MF. By the way - read my previous post about **micro-frontends SPAs** using MF if you feel you need more examples to follow along.

First off, let's look at the remotes and exposes properties of the UI team, top to bottom:
```javascript
new ModuleFederationPlugin({
  remotes: {
    "team-ui": "ui@http://localhost:3001/remoteEntry.js",
  },
  exposes: {
    "./BaseStyles": "./src/federated/styles/base.css",
    "./Components": "./src/federated/components/",
    "./Catalog": "./src/federated/catalog/",
    "./Components/Utils": "./src/federated/components/utils/"
  },
}),
```

**BaseStyles** - e.g. styles for wrapping a page, base fonts, etc.

**Components** - the library's component index:
```javascript
export * from './Buttons';
export * from './Headings';
export * from './Boxs';
export * from './Flexs';
export * from './Sections';
export * from './Dividers';
export * from './Texts';
export * from './Avatars';
```

**Catalog** - the catalog page, consuming the library just like any other team would:
```jsx
import 'team-ui/BaseStyles';
import {
  ConfirmButton,
  RejectButton,
  Button,
  Heading,
  Box,
  PromptBox,
  FlexSpread,
  Section,
  Divider,
  AvatarBox
} from 'team-ui/Components';

const Catalog = () => (
  <Section>
    {/* ... a showcase of each imported component ... */}
  </Section>
);
```
**Components/Utils** - a React-specific prop sanitation utility:
```javascript
export const getValidProps = (props) => {
  // Pull out custom, non-DOM props so they never reach the underlying element
  const { customProp, ...rest } = props;

  // Only produce the style when the custom prop is actually set
  const styles = {
    ...(customProp != null && { something: customProp }),
  };

  return {
    props: rest,
    styles,
  };
};
```

Here's how a base component consumes it:
```jsx
import './styles/ButtonStyles.css';
import { getValidProps } from 'team-ui/Components/Utils';

const BaseHeading = (props) => {
  const Props = getValidProps(props);
  const Tag = props.tag;

  // Render the dynamic tag with the sanitized props and derived styles
  return (
    <Tag {...Props.props} style={Props.styles}>
      {props.children}
    </Tag>
  );
};
```
Creating a federated UI library this way works really well, and it's something I feel could be quite advantageous for larger teams working with Single Page Applications that want an alternative to NPM or the like.

If you're interested in how **resilience** comes into play with MF - e.g. **what happens if the server is down?** - I highly recommend checking out Jack Herrington's YouTube video **"How to build a resilient shared Header/Footer using Module Federation"**, where he walks you through creating a resilient federated header / footer using a mix of techniques (including **MF**), custom React error boundaries and Yarn workspaces.

Like always, the code for this example is on **my Github** in case you feel like checking it out.
/ ND

---

# Micro frontends with Module Federation and Webpack 5

The release of Webpack 5 delivered something special besides performance improvements like improved tree-shaking and persistent caching across builds - an architectural possibility called **Module Federation**.
Module Federation (**MF**) enables applications to seamlessly consume and expose code in a way that hasn't been possible before, paving the way for micro frontend composition in JS land - and for general federation of whatever you choose to pass through Webpack.

To demonstrate how Module Federation works, let's tackle our simplified e-commerce scenario once again, but this time from React SPA land.

The final application consists of a landing page with a bunch of products, a minicart and a checkout page. What's special about it is that each page is a standalone, isolated micro frontend.

Although not a requirement from a Module Federation point of view, we went the **monorepo** route along with **Yarn workspaces** to facilitate running all of the separate applications simultaneously.
```json
{
  "private": true,
  "scripts": {
    "installDependencies": "yarn workspaces run deps",
    "build": "yarn workspaces run build",
    "start": "concurrently \"wsrun --parallel start\"",
    "clean": "rm -fr node_modules sites/**/node_modules && yarn run clean:dist",
    "clean:dist": "rm -fr node_modules sites/**/dist"
  },
  "workspaces": [
    "sites/*"
  ]
}
```
The app shell is our main SPA; it contains the routes and the Redux store, and is the host for all of our remote applications:
```jsx
import { Provider } from 'react-redux';
import 'team-shell/BaseStyles';
import store from 'team-shell/Store';

const Shell = () => (
  <Provider store={store}>
    {/* ... routes ... */}
  </Provider>
);
```
```jsx
const Landing = React.lazy(() => import('team-landing/Landing'));
const Checkout = React.lazy(() => import('team-checkout/Checkout'));
const Cart = React.lazy(() => import('team-checkout/Cart'));

const LandingRoute = () => (
  <React.Suspense fallback={null}>
    {/* ... <Landing /> plus the federated <Cart /> ... */}
  </React.Suspense>
);
```
```jsx
const reducer = (state = { items: [] }, { type, payload }) =>
  produce(state, (draft) => {
    switch (type) {
      case 'cart/add': {
        draft.items.push(payload);
        return draft;
      }
      case 'cart/delete': {
        const { id } = payload;
        draft.items.splice(id, 1);
        return draft;
      }
      default: {
        return draft;
      }
    }
  });
```

The shell's Module Federation config:
```javascript
new ModuleFederationPlugin({
  name: "shell",
  filename: "remoteEntry.js",
  remotes: {
    "team-shell": "shell@http://localhost:3000/remoteEntry.js",
    "team-landing": "landing@http://localhost:3001/remoteEntry.js",
    "team-checkout": "checkout@http://localhost:3002/remoteEntry.js",
  },
  exposes: {
    "./Store": "./src/federated/store",
    "./BaseStyles": "./src/styles/federated/base.css"
  },
  shared: {
    ...deps,
    react: {
      singleton: true,
      requiredVersion: deps.react,
    },
    "react-dom": {
      singleton: true,
      requiredVersion: deps["react-dom"],
    },
  },
}),
```
The checkout team exposes the checkout page route, the buy button and the cart:
```javascript
name: "checkout",
filename: "remoteEntry.js",
remotes: {
  "team-shell": "shell@http://localhost:3000/remoteEntry.js",
  "team-landing": "landing@http://localhost:3001/remoteEntry.js"
},
exposes: {
  "./Checkout": "./src/federated/Checkout",
  "./BuyButton": "./src/federated/BuyButton",
  "./Cart": "./src/federated/Cart",
},
```
```jsx
import { connect } from 'react-redux';

// A button that dispatches its payload to the shared store on click
const BuyButton = ({ payload, addToCart, children }) => (
  <button onClick={() => addToCart(payload)}>{children}</button>
);

export default connect(null, (dispatch) => ({
  addToCart: (payload) => dispatch({ type: 'cart/add', payload })
}))(BuyButton);
```

The checkout page itself reads the cart items from the shared store:
```jsx
import { connect } from 'react-redux';

const Checkout = ({ items }) => {
  return (
    <>{/* ...map items */}</>
  );
};

const mapStateToProps = (state) => ({
  items: state.items
});

export default connect(mapStateToProps)(Checkout);
```
Besides being set up to share and consume UI, checkout also exposes itself as a standalone application. This setup makes it easy for the team to develop their application, and it also enables them to create catalogs of the micro frontends they own and provide to the outside world:
```jsx
import { products } from 'team-landing/MockedProducts';

const Standalone = () => (
  <>
    {/* ... the checkout page rendered against the mocked products ... */}
  </>
);
```

The landing team, in turn, exposes its landing page and its mocked products:
```javascript
name: "landing",
filename: "remoteEntry.js",
remotes: {
  "team-shell": "shell@http://localhost:3000/remoteEntry.js",
  "team-landing": "landing@http://localhost:3001/remoteEntry.js",
  "team-checkout": "checkout@http://localhost:3002/remoteEntry.js",
},
exposes: {
  "./Landing": "./src/federated/Landing",
  "./MockedProducts": "./src/federated/mocks/products",
},
```

And consumes the checkout team's buy button for each product:
```jsx
import BuyButton from "team-checkout/BuyButton";

products.map((product, index) => {
  return (
    <BuyButton key={index} payload={product}>
      Buy
    </BuyButton>
  );
});
```

Lastly, every team declares its shared dependencies:
```javascript
shared: {
  ...deps,
  react: {
    singleton: true,
    requiredVersion: deps.react,
  },
  "react-dom": {
    singleton: true,
    requiredVersion: deps["react-dom"],
  },
},
```
This tells Webpack: treat all of the runtime dependencies specified inside **package.json** as dependencies shared with others (**deps**), but treat libraries like React and react-dom (libraries that don't allow multiple instantiations) as singletons, ensuring that Webpack only loads them once.

Module Federation gives us the opportunity to have multiple teams output small subsets of a site and consume them at runtime within Single Page Applications, to share UI components without NPM, to consume complete configurations, business logic, etc. - **anything you can run through Webpack is now shareable.**

The nature of MF also makes it possible to federate parts of an existing application one feature at a time, offloading responsibilities team by team instead of continuously adding complexity to a monolithic app.

Source code for this post is, as always, available on my **Github**.

There's a simplified SSR example over at **Module Federation Examples**.

Concepts & inspiration for the examples in this post come from the only book out there today about Module Federation, **"The Practical Guide to Module Federation"** by **Jack Herrington** & **Zach Jackson** _(Zach is the creator of Module Federation)_. It's really well written and full of information that'll surely help you going forward with Module Federation.
---

# Authoring progressive enhanced fragments with Alpine

2020-10-07

Progressive enhancement is a content-first strategy that separates presentation from content, providing an essential baseline of functionality to the majority of users while serving a fuller experience to browsers that support a given technical requirement.
Let's continue to explore **micro-frontends and SCS** and the contract that allows multiple teams to share content with each other - **fragments**.

As mentioned in an **earlier post**, there are many viable options to choose from when authoring fragments, all with pros and cons depending on which architectural trade-offs you're willing to take. For the context of this post, the trade-off I'm willing to take is a runtime dependency on Alpine.js.

Before diving into how we're going to build our cart fragment using progressive enhancement, let's quickly brush up on what Alpine is.

This is how the author of Alpine describes it:

> Alpine.js offers you the reactive and declarative nature of big frameworks like Vue or React at a much lower cost. You get to keep your DOM, and sprinkle in behavior as you see fit. Think of it like Tailwind for JavaScript.
With its wide range of APIs for enhancing markup, Alpine will take you a pretty long way for very little effort, as you can see in the video of the cart fragment in action along with its markup / JS further down.

Here is the markup for the cart fragment; everything is wrapped in an anchor tag whose default behavior is prevented when Alpine takes over:
```html
<!-- abridged sketch of the cart fragment -->
<div x-data="cart()" x-on:buy-event.window="updateCart($event)">
  <a href="/checkout" x-on:click.prevent="active = !active">
    Cart (<span x-text="items.length"></span>)
  </a>
  <ul x-show="active">
    <template x-for="item in items">
      <li x-text="item.name"></li>
    </template>
  </ul>
</div>
```
Without Alpine, this markup serves an **alternate version**: instead of buy buttons on the product page we have direct links to each product page, and instead of the expanding minicart we have a link to the checkout page.
This could come from anywhere - Razor / EJS / **static Svelte**, you name it.

The important thing to note is that we've progressively enhanced our markup using unobtrusive syntax that gets ignored by the browser if the CDN serving Alpine goes down or if JS is disabled. The **active** property is used to show / hide the enhanced version.
```html
<!-- simplified sketch of the enhanced buy button -->
<button x-on:click="buyItem(item)">Buy</button>
```
The buyItem method dispatches the buy event along with a payload describing the item that was clicked. This event is listened to by the cart fragment like so:
```html
x-on:buy-event.window="updateCart($event)"
```
The data object, with the properties and methods bound through the **x-data="cart()"** call in our cart component:
```html
<script>
  // Simplified sketch of the data object
  function cart() {
    return {
      active: false,
      items: [],
      updateCart(event) {
        this.items.push(event.detail);
      },
    };
  }
</script>
```
The built-in dispatch method, extracted and used to send the **buy-event**:
```html
<!-- simplified sketch - Alpine's $dispatch helper handed to the data function -->
<div x-data="product($dispatch)">...</div>

<script>
  function product(dispatch) {
    return {
      buyItem(item) {
        dispatch('buy-event', item); // emits a CustomEvent named 'buy-event'
      },
    };
  }
</script>
```
I'm amazed at how much you can do with a progressive enhancement library like Alpine, and even though the cart example is simple, from what I've seen I'm willing to bet that choosing Alpine for something more advanced would still be viable.

All in all, I give Alpine two thumbs up: it's easy to reason about, it has an overall better **DX story** than some of the other frontend libraries / frameworks out there, and because of that I see it being a good fit as a **micro-frontend architectural baseline**.

So with that being said, in light of its possibilities and ease of use, I feel **Alpine.js** is a pretty **solid investment** of your time should you want to get into the world of frontend alternatives.

Lastly, here's the link to the **example repo** if you're interested in checking out the code.
**/ND**

---

# Prerendering static, hydratable fragments with Svelte

2020-10-03

In my prior blog post **Introduction to micro-frontends and SCS** I wrote about certain rules you want to follow when leveraging a composition strategy using fragments.
Fragments are decoupled UI blocks; they have no external dependencies (markup, styling and behavior) and are in charge of their own caching strategies.

Out of the three resources a fragment needs - markup, styling and behavior - the one where it's most apparent that you want good **DX** is when you need to do some sort of markup **templating**.

Here are some options, rated on **DX** and the general "best practices" of **SCS**:
Option #1 - vanilla JS with template literals:

```javascript
// example (illustrative):
const cartItems = `<ul>${items.map((item) => `<li>${item.name}</li>`).join('')}</ul>`;
```

Option #2 - a client-side library like React:

```javascript
// example (illustrative):
const cartItems = items.map((item) => <li key={item.id}>{item.name}</li>);
ReactDOM.render(<ul>{cartItems}</ul>, document.getElementById('cart'));
```

Option #3 - a compiler-based framework like Svelte (the route we'll explore in the rest of this post).

Option #4 - templating on the server, progressively enhancing behavior on the client:

```javascript
Server -> transclusion on the client -> progressively enhance behavior
```
Prerendering is basically hydrating static content: instead of using **SSR** (Server Side Rendering), where the page gets rendered on the server and then **hydrated** on the client, we swap out the server part by prerendering (statically generating) the page at build time, and keep the hydration part.
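On the client side, hydration then just means mounting the component over the prerendered markup; a minimal sketch of src/main.js, relying on the `hydratable: true` compiler option shown in the Rollup config further down:

```javascript
import App from './App.svelte';

// Attach to the statically generated markup instead of recreating it from scratch
const app = new App({
  target: document.body,
  hydrate: true,
});

export default app;
```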
There's a great user discussion on GitHub (in the Svelvet repo) - **"prerendering html files and hydrating"** - where you can read in more depth about some of the community efforts regarding prerendering. In its simplest form, you achieve prerendering by combining the output of an SSR build with a build for the browser.

Before we look at the configuration files, let's have a look at the fragment we're building.

```javascript
// Event from fragment consumer host
document.dispatchEvent(new CustomEvent('buyEvent'));
```
```html
<h1>Cart</h1>

{#await promise}
  Updating cart...
{:then data}
  {#if data}
    <ul>
      {#each data as item}
        <li>{item.name}</li> <!-- item markup simplified -->
      {/each}
    </ul>
  {:else}
    Cart is empty
  {/if}
{:catch error}
  {error}
{/await}
```

Svelte's amazing templating engine makes the logic in this component pretty self-explanatory, but to recap what's happening: while the promise is pending we show "Updating cart...", when it resolves we render the items (or "Cart is empty" when there are none), and if it rejects we print the error.
Writing this logic "vanilla", as in option #1, is of course also doable, but it can get quite messy the more features you add, and chances are you'll lure yourself into writing custom abstractions that you may not want to own per fragment.

The following Rollup config and prerender.js (further down) are modified versions of **akaSybe's** **svelte-prerender-example**, adapted to the requirement of creating fragment resources.
```javascript
// ... imports
import FConfig from './fragment.config.json'; // <- fragment config

const production = !process.env.ROLLUP_WATCH;
const { dist, name } = FConfig;

export default [
  {
    /*
      first pass:
    */
    input: 'src/main.js',
    output: {
      format: 'iife',
      name: 'app',
      file: `${dist}/${name}.js`
    },
    plugins: [
      svelte({
        dev: !production,
        hydratable: true
      }),
      resolve({
        browser: true,
        dedupe: (importee) =>
          importee === 'svelte' || importee.startsWith('svelte/')
      }),
      commonjs()
    ]
  },
  {
    /*
      second pass:
    */
    input: 'src/App.svelte',
    output: {
      format: 'cjs',
      file: `${dist}/.temp/ssr.js`
    },
    plugins: [
      svelte({
        dev: !production,
        generate: 'ssr'
      }),
      resolve({
        browser: true,
        dedupe: (importee) =>
          importee === 'svelte' || importee.startsWith('svelte/')
      }),
      commonjs(),
      execute('node src/prerender.js') // <-
    ]
  }
];
```
```json
{
  "name": "fragmentName",
  "dist": "dist"
}
```
When **prerender.js** is executed, it renders the application and grabs the HTML and CSS by using the CJS output of the second pass - **.temp/ssr.js**.

We save our CSS & HTML resources (the JS resource is created in the first pass) and generate the inclusion HTML files:
```javascript
// ... imports
const FConfig = require('../fragment.config.json');

const { dist, name } = FConfig;

const App = require(path.resolve(process.cwd(), `${dist}/.temp/ssr.js`));

const baseTemplate = fs.readFileSync(
  path.resolve(process.cwd(), 'src/template.html'),
  'utf-8'
);

/*
base template:
...
*/
```
- fragmentName.css
- fragmentName.js
- fragmentName.html
- fragmentName.js.html
- fragmentName.css.html

Our output consists of two types of resources - fragment resources, e.g:
```html
<!-- e.g. cart.html - the fragment's markup, shipped with its own styles -->
```
And inclusion files for your endpoints serving the generated resources.

This is where caching comes into play: your fragment consumers only need to worry about requesting **/fragments/cart/cart.js.html** for the behavior part of the cart fragment, since the caching for that file is handled by your team:
```html
<!-- cart.js.html - a simplified sketch; the hashed file name is illustrative -->
<script src="/fragments/cart/cart.a1b2c3.js" defer></script>
```
There's a lot more to write about when it comes to fragment composition; an upcoming post will take a more in-depth look at the **fourth option** I mentioned earlier, where we utilize the server for templating and progressively enhance behavior client side.

Again, I encourage you to check out the **discussion** on GitHub to get a feel for where this is going.

Adding **Typescript** to the mix **_seems to work_**, which should improve DX even more. Also, the same author behind the prerendering concept mentioned in this post has released a Rollup plugin called **rollup-plugin-svelte-ssr**, which he states does the same thing but is easier to use.

For those of you interested, my fork demoing the fragment setup we covered in this post (with Typescript) can be found **here**.
\\n\"],\"collection\":[0,\"blog\"],\"data\":[0,{\"draft\":[0,false],\"title\":[0,\"Prerendering static, hydratable fragments with Svelte\"],\"summary\":[0,\"In this post we’ll look at some of the different options we have when writing fragments, how they stand in terms on DX and take a deeper view into prerendering with Svelte.\"],\"publishedAt\":[0,\"2020-10-03\"],\"tags\":[1,\"[[0,\\\"svelte\\\"],[0,\\\"prerendering\\\"],[0,\\\"micro-frontends\\\"],[0,\\\"scs\\\"],[0,\\\"dx\\\"]]\"],\"image\":[0,\"/static/images/prerendering-fragments/sve.png\"]}],\"render\":[0,null]}],[0,{\"id\":[0,\"micro-frontends-scs.mdx\"],\"slug\":[0,\"micro-frontends-scs\"],\"body\":[0,\"\\nimport { H1, H2, H3, P, Tagline, HR } from '../../components/Base'\\n\\nIt’s safe to say that frontend development has gotten pretty complicated over the years, even though I consider myself to be reasonably up to date with modern tech stacks, I’ll be the first one to admit that I’m far from an expert in a lot of it - nor do I strive to be.
If we take a look at some of the things we come across - like Docker, React, Preact, Vue, Svelte, Typescript, linters, testing libraries, Storybook, Redux, Recoil, Tailwind, CSS-in-JS libs, Flutter, React Native, Expo, Webpack, Rollup, Express, Koa, GraphQL, Apollo, URQL, ORMs, Next, Gatsby, Sapper & plain vanilla JS - you quickly realize that claiming anything other than what I wrote above is simply impossible. Nobody knows everything (...except full stack devs, of course).

Luckily, adding **micro-frontends** to the mix doesn't necessarily require you to learn a new framework or library, or to stop using any of the libraries above.

But what it does ask of you is that you rethink how you look at frontend development, and that you let go of the typical control associated with monolithic frontends.

In return, together with **SCS**, you end up having options you didn't have before, some being:

- Being able to scale to multiple teams, each being independent and owning their own SCS.
- Having autonomous release cycles per team - no more big bang releases.
- A way to opt out of the frontend refactoring meta taking place every other year.
Micro-frontend architecture is the adoption of micro-services extended all the way to the UI layer. Other typical characteristics are that micro-frontends are built vertically, owned by dedicated cross-functional teams, and span from the database to the interface.

Due to the nature of our ever-changing frontend ecosystem, micro-frontends offer us the option to independently invest in certain key areas of our UI and to optimize throughput by grouping the right people together.

So **why now**? Why are micro-frontends getting popular in 2020? I think one answer is that frontend development has gone full circle: from our main line of business being dressing things up and making HTML look good, to the age of preferring **CSR** (client side rendering), and now back to acknowledging that rendering on the server / serving static content is good.
Some of the general skepticism towards micro-frontends comes from what I believe are misguided assumptions, where the contrast between wanting to build things like we always have and actually working towards the end goal of breaking up a monolith into smaller decoupled UIs can easily lead us to perceive design advantages as disadvantages along the way. I'm referring to:

- Global styling
- Global state
- Global common code
- How can we make a unified experience in terms of UI / UX?
- And what about DX?

All of these _"pain points"_ are actual benefits in a micro-frontend architecture, so let's go left to right and address each problem area with its simplest, most naive solution.

But first - context:
Let's say we have a larger team of people working on a monolithic frontend / backend; the end product is a food app built as a Node-powered isomorphic React SPA.

Having had issues for quite a while with their ability to scale and deploy, and with their CTO fresh back from vacation after reading **Team Topologies**, they decide to dive in head first and adopt SCS and micro-frontends, resulting in changes to the current team structure - now divided into two separate teams with clear responsibilities:

**Team A** - in charge of the start page, which contains all products.

**Team B** - in charge of the checkout page. But not only that - they're now also in charge of the cart shown on the start page.

As an effect of these changes, their isomorphic SPA has been chopped into two separate projects (repos), both now with independent CI/CD setups, and there's something in front handling routing between the two, laying the ground for everything to still behave like it used to.
Suddenly, with doubled costs and their main React competence jumping ship, they collectively decide to sunset isomorphism and serve views using EJS / Razor or some other view engine on the server (or statically).

Some devs in team B panic at first, since they're the team owning the more complicated checkout application with behavior. But they keep their cool when they realize they can move logic to the server and progressively enhance on the client.

Things are going great, but they do have some valid concerns...

Buttons - copy the markup and CSS; achieving visual consistency is only a challenge if something hinders the teams from communicating properly. Base styles and media queries are infrastructure and should reside in some kind of baseline (**CDN**).
The same thing can be achieved by dispatching custom events:

```javascript
document.dispatchEvent(new CustomEvent('name', { detail: 'payload' }));
```

```javascript
document.addEventListener('name', eventHandler);
```
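On the receiving side, the payload rides along on the event's detail property; a quick sketch:

```javascript
const eventHandler = (event) => {
  console.log(event.detail); // 'payload'
};
```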
I would say: tell yourself that **D.R.Y. is dead** and start by duplicating X. Adding X to the common baseline opens a door that you want to keep closed. Chances are you'll start adding more and more things to it, and end up building a frontend library that your teams are dependent on.

(D.R.Y. stands for Don't Repeat Yourself)

- CSS base and media queries.
- Typography.
- Polyfills.

Let's answer that with the next segment - fragments.
Fragments are composable UIs available through Server Side Inclusion (**SSI**) using Edge Side Includes (**ESI**), or Client Side Inclusion (**CSI**) using **h-include**. They bring their own HTML, CSS and JS (optional) and are what teams produce / consume to enrich their pages.
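For the server side, an ESI include looks something like this (the URL is illustrative):

```html
<esi:include src="https://team-checkout.example.com/fragments/cart/cart.html" />
```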
It's important to remember that your fragment needs to work on any host without needing anything from the host. Achieving this is crucial for the composition strategy; e.g. a fragment containing an icon cannot assume that the hosting environment will provide that icon. Instead, it should include the icon itself as an **inline SVG**. The same goes for text, CSS and behavior.

It's the fragment-producing team's responsibility to own the fragment's resources (versioning) and its caching strategy. The consuming micro-frontend doesn't care, or doesn't need to care, about cache busting, since that's handled by the provider.
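To make that concrete, here's a hedged sketch of what a self-contained cart fragment could look like - all names and markup are illustrative:

```html
<!-- Served from the checkout team's own endpoint, e.g. /fragments/cart/cart.html -->
<div class="cart">
  <style>
    /* styles ship with the fragment instead of being expected from the host */
    .cart { font-family: Arial, sans-serif; }
  </style>
  <!-- the icon is inlined as SVG rather than assumed to exist on the host -->
  <svg viewBox="0 0 16 16" width="16" height="16" aria-hidden="true">
    <circle cx="8" cy="8" r="7" />
  </svg>
  <a href="/checkout">Cart (2)</a>
</div>
```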
Having two separate ways of working with fragment composition enables you to optimize for performance by using ESI and long cache strategies on the server, and CSI for personalization / lazy loading on the client.

In short, all of this means that team B - the one owning the checkout page - now has a way of exposing their cart as a fragment.

Knowing when to opt in or out of an architecture can be hard, and like with everything else, micro-frontends have downsides too, as they introduce redundancy and the costs associated with running X per team (stacks / CI pipelines, etc.). This is something to be aware of, although these costs are generally considered worth it - and cheaper than maintaining monolithic ventures over time - should you want to invest in micro-frontends & SCS.
\\n\"],\"collection\":[0,\"blog\"],\"data\":[0,{\"draft\":[0,false],\"title\":[0,\"Micro-frontends & SCS\"],\"summary\":[0,\"Micro-frontends architecture is the adoption of micro-services extended all the way to the UI layer. Due to the nature of our ever-changing frontend eco system, micro-frontends offers us the option to independently invest in certain key areas of our UI and to optimize throughput by grouping the right people together.\"],\"publishedAt\":[0,\"2020-09-15\"],\"tags\":[1,\"[[0,\\\"micro-frontends\\\"],[0,\\\"scs\\\"],[0,\\\"performance\\\"],[0,\\\"architecture\\\"]]\"],\"image\":[0,\"/static/images/micro-frontends-scs/frag.png\"]}],\"render\":[0,null]}],[0,{\"id\":[0,\"h-include.mdx\"],\"slug\":[0,\"h-include\"],\"body\":[0,\"\\nimport Types from \\\"./../../components/h-include/Types\\\";\\nimport { H1, H2, H3, P } from '../../components/Base'\\n\\nIn its most basic, **h-include** is simply a custom element that accepts a src attribute which\\nit will use to fetch content through AJAX, then depending on the outcome\\nof that call, it will transclude the content of the ajax response into itself.
The outcome of the attempted inclusion is represented in class attributes - `included_(request status)`.

h-include also ships with a few different element types out of the box.

Since I started using h-include at a client, I've gotten more involved in the project and have made some contributions that further mimic the ESI spec, one of them being the **WHEN** and **WHEN-FALSE-SRC** attributes:
```html
<!-- simplified sketch - include the fragment only when the predicate returns true -->
<h-include src="/fragments/cart/cart.html" when="isLoggedIn"></h-include>
```
The predicate function supports namespacing, meaning you can have different predicates per project:

```html
<!-- simplified sketch - the predicate lives under a project namespace -->
<h-include src="/fragments/cart/cart.html" when="myProject.isLoggedIn"></h-include>
```
**WHEN-FALSE-SRC** can be used not only as a backup for when the predicate returns false, but also to handle request errors:

```html
<!-- simplified sketch - falls back when the predicate is false or the request fails -->
<h-include
  src="/fragments/cart/cart.html"
  when="myProject.isLoggedIn"
  when-false-src="/fragments/cart/cart-fallback.html"
></h-include>
```
Extending h-include to handle different use cases is quite simple. For instance, if we imagine needing an h-include element that adds a date as a data attribute to the included element, we could extend the h-include element prototype and override the connectedCallback method:

```javascript
window.HInclude.HincludeDateElement = (function () {
  var proto = Object.create(HInclude.HIncludeElement.prototype);

  var addDate = function (element) {
    element.setAttribute('data-date', new Date());
  };

  proto.connectedCallback = function () {
    // Run the original inclusion logic, then stamp the element with a date
    HInclude.HIncludeElement.prototype.connectedCallback.call(this);
    addDate(this);
  };

  var HincludeDateElement = function () {
    return Reflect.construct(HTMLElement, arguments, HincludeDateElement);
  };
  HincludeDateElement.prototype = proto;

  customElements.define('h-include-date', HincludeDateElement);
  return HincludeDateElement;
})();
```
The h-include library referred to in this article is written by Gustaf Nilsson Kotte and is based on hinclude.js by @mnot. Make sure to check out the GitHub repo if this sounds interesting - there's a lot more that I haven't mentioned in this article.
\\n\"],\"collection\":[0,\"blog\"],\"data\":[0,{\"draft\":[0,false],\"title\":[0,\"Declarative client-side transclusion with h-include\"],\"summary\":[0,\"h-include is a javascript library for including fragments client side (client side transclusion), perfect fit for micro-frontend architecture in combination with server-side transclusion technologies like ESI.\"],\"publishedAt\":[0,\"2020-09-12\"],\"tags\":[1,\"[[0,\\\"H-include\\\"]]\"],\"image\":[0,\"/static/images/h-include/h2.png\"]}],\"render\":[0,null]}],[0,{\"id\":[0,\"postcss-extract-media-queries.mdx\"],\"slug\":[0,\"postcss-extract-media-queries\"],\"body\":[0,\"\\nimport { H1, H2, H3, P, Tagline, HR } from '../../components/Base'\\nimport Files from \\\"./../../components/postcss-extract/Files\\\";\\n\\nThere’s a neat little plugin for postCSS called **postcss-extract-media-query** and I think it's awesome. It generates separate CSS files for every media query you specify you want to extract.
When using PostCSS without any options, it will look for a **postcss.config** file at the root of your project:

```javascript
// gulp example:
.pipe(postcss()...
```

As in the example below, the config file is where you tell the plugin which queries in your CSS you want it to act upon:

```javascript
module.exports = {
  plugins: {
    "postcss-extract-media-query": {
      output: {
        path: path.join(__dirname, "public/css/optimized"),
      },
      queries: {
        "screen and (min-width: 1024px)": "desktop",
      },
      stats: false,
    },
  },
};
```
Using this config, the plugin will split the base styles and the desktop styles into separate files:

```css
/* base.css - before extraction */

.foo {
  color: red;
}
@media screen and (min-width: 1024px) {
  .foo {
    color: green;
  }
}
.bar {
  font-size: 1rem;
}
@media screen and (min-width: 1024px) {
  .bar {
    font-size: 2rem;
  }
}
```

```css
/* base.css - after extraction */

.foo {
  color: red;
}
.bar {
  font-size: 1rem;
}
```

```css
/* base.desktop.css */

@media screen and (min-width: 1024px) {
  .foo {
    color: green;
  }
  .bar {
    font-size: 2rem;
  }
}
```
Having two files means that devices with a resolution lower than what's specified in the media query will still download the resource, but the resource won't be render-blocking, since the browser knows it doesn't need to be applied and sets its priority to lowest:
```html
<link rel="stylesheet" href="base.css" />
<link
  rel="stylesheet"
  href="base.desktop.css"
  media="screen and (min-width: 1024px)"
/>
```

Their README sums it up quite nicely.
\\n\\n\\n\\n Check out the repo\\n\\n
\\n\"],\"collection\":[0,\"blog\"],\"data\":[0,{\"draft\":[0,false],\"title\":[0,\"PostCSS media queries\"],\"summary\":[0,\"There’s a neat little plugin for postCSS called postcss-extract-media-query and I think it's awesome.\"],\"publishedAt\":[0,\"2020-06-14\"],\"tags\":[1,\"[[0,\\\"performance\\\"]]\"],\"image\":[0,\"/static/images/postcss.png\"]}],\"render\":[0,null]}],[0,{\"id\":[0,\"introducing-spritelove.mdx\"],\"slug\":[0,\"introducing-spritelove\"],\"body\":[0,\"\\nimport { H1, H2, P } from '../../components/Base'\\n\\nThe React Native version I released a couple of years ago on the App Store drew its second breath a couple of months back when I released app.spritelove.com, a much richer client intended for the desktop, unbound by the restrictions of JS-based drawing operations in RN.
I started this project because I was getting more into pixel graphics at the time and wanted to try building something like Piskel or Aseprite myself using React Native.

To answer the question: building something that could compete with these two giants wasn't remotely on the radar, but as with all personal projects, expectations and scope tend to build up over time, which is exactly what happened with Spritelove too.

I actually talked with the author of Piskel about collaborating, but we dropped the idea since the way I was saving pixel data differed too much from how Piskel was doing it.

Although the RN version I released felt OK, it's miles away from how the desktop version is turning out, now that I get to add features without having to worry about screen real estate as much as I did before.

In this initial post I'll let the animations below introduce some of the things you're capable of doing with Spritelove; in future posts I'll write about the code & concepts more in depth.
---

# Cypress - preserve cookies between tests

2020-03-12

By default, Cypress clears all cookies before each test. This is in the Cypress docs, but it's something that can be said more than once.
\\n\\nAlthough this is a sensible default, it can leave you scratching your head for a second or two, so here's a quick example of what doesn't work and how to fix it:
\\n\\n\\n```javascript\\ncontext('Foo', () => {\\n describe('something', () => {\\n it('sets a cookie', () => {\\n cy.setCookie('cookieA', 'a');\\n })\\n\\n it('gets a cookie', () => {\\n cy.getCookie('cookieA').should('have.property', 'value', 'a') // <- won't work\\n })\\n })\\n})\\n```\\n
\\n\\ncookieA is set in the first test but cleared before the second test runs, unless you specifically tell Cypress to back off:
\\n\\n\\n```javascript\\ncontext('Foo', () => {\\n\\n beforeEach(function () {\\n Cypress.Cookies.preserveOnce('cookieA');\\n })\\n ...\\n```\\n
\\n\\n\\n\\n There's also an option that lets you preserve cookies by whitelisting them.\\n\\n
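\\n\\nA minimal sketch of that approach, placed in your support file (Cypress.Cookies.defaults with the whitelist option was the documented API at the time of writing):
\\n\\n\\n```javascript\\n// support / index.js\\n// cookies matching the whitelist survive all tests, no beforeEach needed\\nCypress.Cookies.defaults({\\n whitelist: ['cookieA'],\\n});\\n```\\n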
\\n\\n\\nWhen in doubt, read the docs.
\\n\"],\"collection\":[0,\"blog\"],\"data\":[0,{\"draft\":[0,false],\"title\":[0,\"Cypress - preserve cookies between tests\"],\"summary\":[0,\"Cypress - preserve cookies between tests\"],\"publishedAt\":[0,\"2020-03-12\"],\"tags\":[1,\"[[0,\\\"cypress\\\"],[0,\\\"testing\\\"]]\"],\"image\":[0,\"/static/images/cls/cls_2.png\"]}],\"render\":[0,null]}],[0,{\"id\":[0,\"speedcurve-compare-tests.mdx\"],\"slug\":[0,\"speedcurve-compare-tests\"],\"body\":[0,\"\\nimport { H1, H2, H3, P, Tagline } from '../../components/Base'\\n\\nLet's find two tests we want to compare, for this example I'm using a site (Le Monde) found in the newly released benchmark section of SpeedCurve.
\\n\\nYeah, there's a spike there worth comparing. Click on the lowest point (Sun 23 Feb), right click on the View Test link, choose Copy Link Address and grab the id part of the copied url.
\\n\\nNow do the exact same thing for the higher point:
\\n\\nThe deploy view is where the magic happens. Right click on Synthetic - Deploy and grab the base url:
\\n\\n\\n```html\\nhttps://speedcurve.com/benchmark/media-eu/deploy/\\n```\\n
\\n\\nNow use the ids like this: deploy/?previous=(id1)&latest=(id2)
\\n\\n\\n```html\\n// base\\nhttps://speedcurve.com/benchmark/media-eu/deploy/\\n\\n// previous test\\n200224_1X_51b5f12d0f73e8736436aa14f1f46c0a\\n\\n// latest test\\n200225_20_ae7bd97a07831b63d50042c18772bef8\\n```\\n
\\n\\n\\nCompare url\\n
\\n\\n\\n```html\\nhttps://speedcurve.com/benchmark/media-eu/deploy/\\n?previous=200224_1X_51b5f12d0f73e8736436aa14f1f46c0a\\n&latest=200225_20_ae7bd97a07831b63d50042c18772bef8\\n```\\n
\\n\\nPasting the url in the browser takes you to the deploy view, where you now have access to the filmstrip / video comparison of the two tests and the nitty gritty WebPageTest data.
\\n\\nVisit the compare page yourself to figure out what that spike was all about!
\\n\"],\"collection\":[0,\"blog\"],\"data\":[0,{\"draft\":[0,false],\"title\":[0,\"Comparing tests in SpeedCurve\"],\"summary\":[0,\"Comparing tests in SpeedCurve\"],\"publishedAt\":[0,\"2020-03-10\"],\"tags\":[1,\"[[0,\\\"speedCurve\\\"],[0,\\\"Performance\\\"],[0,\\\"webPageTest\\\"]]\"],\"image\":[0,\"/static/images/speedcurve.png\"]}],\"render\":[0,null]}],[0,{\"id\":[0,\"cypress-azure-devops.mdx\"],\"slug\":[0,\"cypress-azure-devops\"],\"body\":[0,\"\\nimport { H1, H2, H3, P, Tagline } from '../../components/Base'\\n\\nWorking on a micro frontend architecture usually means splitting up a monolithic site and its sections into different parts, where each part (or individual site) can be owned and worked on independently by different teams, each with their own product owner, stakeholders and autonomous release trains, and yes you guessed it - integration tests.
\\n\\n\\n Going forward, try to imagine a site where the /foo url is owned by team foo,\\n the /bar url by team bar and global tests are owned by both.\\n\\n\\nThe project scaffolding for supporting multiple sites is easy: just think of it as a bunch of test commands pointing to different folders like /integration/foo and /integration/bar.
\\n\\nThis way each site and team could have their own root folder housing all their tests.
\\n\\n\\n```json\\n{\\n \\\"test:all\\\": \\\"cross-env cypress run\\\",\\n \\\"test:foo\\\": \\\"cross-env cypress run --spec './cypress/integration/foo/*'\\\",\\n \\\"test:bar\\\": \\\"cross-env cypress run --spec './cypress/integration/bar/*'\\\"\\n}\\n```\\n
\\n\\nFolder structure
\\n\\n\\n```\\n├── integration\\n│ ├── global\\n│ │ └── foobar.spec.js\\n│ ├── foo\\n│ │ ├── foo.visuals.spec.js\\n│ │ ├── foo.interactions.spec.js\\n│ ├── bar\\n│ │ ├── bar.visuals.spec.js\\n│ │ ├── bar.interactions.spec.js\\n├── src\\n│ ├── Settings\\n│ │ └── index.js\\n├── support\\n│ ├── commands.js\\n│ └── index.js\\n```\\n
\\n\\nFor DX, it's important to set up a file that knows about global things like settings and environment variables.
\\n\\nCreate a file somewhere in your project and link to it in your Cypress support file:
\\n\\n\\n```javascript\\n// support / index.js\\nimport \\\"../src/settings\\\";\\n```\\n
\\n\\nThis is what the settings file could look like:
\\n\\n\\n```javascript\\n// src / settings / index.js\\nconst settings = {\\n foo: Cypress.env(\\\"foo\\\"),\\n bar: Cypress.env(\\\"bar\\\"),\\n randomSetting: 3000,\\n};\\ncy.settings = settings;\\n\\n// cy.settings is available everywhere from now on\\n```\\n
\\n\\nCypress falls back to a local cypress.env.json file if it cannot find the specified environment variables:
\\n\\n\\n```json\\n{\\n \\\"foo\\\": \\\"https://sample-site-qa.com/foo\\\",\\n \\\"bar\\\": \\\"https://sample-site-qa.com/bar\\\"\\n}\\n```\\n
\\n\\nA sample test utilizing cy.settings:
\\n\\n\\n```javascript\\ncontext('Foo', () => {\\n before(() => {\\n cy.visit(cy.settings.foo);\\n });\\n\\n describe('something', () => {\\n it('does something successfully', () => {\\n // ...\\n })\\n })\\n});\\n```\\n
\\n\\nLet's look at the main test pipeline.\\nFirst off, we set it up to trigger when our master branch changes.\\nThen, after requiring NPM, we specify that we want to pull in a variable group for QA (more about that further down).
\\n\\n\\n```yaml\\ntrigger:\\n - master\\n\\npool:\\n vmImage: \\\"ubuntu-latest\\\"\\n demands: npm\\n\\nvariables:\\n - group: \\\"site-qa\\\"\\n```\\n
\\n\\nNext, we set up a scheduled cron job (note that */45 fires at minute 0 and minute 45 of every hour rather than strictly every 45 minutes) and install the dependencies specified in the package-lock.
\\n\\nThe order of the cron syntax goes like this:
\\n\\n\\n- minutes\\n- hours\\n- days\\n- months\\n- days of week\\n
\\n\\n\\n```yaml\\nschedules:\\n - cron: \\\"*/45 * * * *\\\"\\n displayName: \\\"Run once every 45 minutes\\\"\\n branches:\\n include:\\n - master\\n always: \\\"true\\\"\\n\\nsteps:\\n - task: Npm@1\\n displayName: \\\"Npm CI\\\"\\n inputs:\\n command: \\\"custom\\\"\\n workingDir: cypress\\n verbose: true\\n customCommand: \\\"ci\\\"\\n\\n - task: Npm@1\\n displayName: \\\"Npm clean\\\"\\n inputs:\\n command: \\\"custom\\\"\\n workingDir: cypress\\n customCommand: \\\"run clean\\\"\\n```\\n
\\n\\nIn this step we use a script block to grab all variables available within the variable group we imported in step 1 and pass them along to Cypress by prefixing them with CYPRESS\\\\_
\\n\\n(doing so makes them available to us within our tests)
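\\n\\nFor example, a variable exposed as CYPRESS_foo is read inside a test with the prefix stripped, which is exactly what the settings file above relies on:
\\n\\n\\n```javascript\\n// CYPRESS_foo=https://sample-site-qa.com/foo on the agent becomes:\\nconst foo = Cypress.env('foo'); // 'https://sample-site-qa.com/foo'\\n```\\n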
\\n\\nThen we run all of our tests, meaning every .spec file that Cypress finds.
\\n\\nThe reason we run all of our tests rather than site-specific ones is that the overall outcome decides whether or not we publish an artifact.
\\n\\n\\n```yaml\\n# on the ubuntu agent, plain 'set' would not export anything -\\n# the ##vso logging command makes the variables available to later steps\\n- script: |\\n echo \\\"##vso[task.setvariable variable=CYPRESS_foo]$(foo)\\\"\\n echo \\\"##vso[task.setvariable variable=CYPRESS_bar]$(bar)\\\"\\n failOnStderr: true\\n workingDirectory: cypress\\n displayName: \\\"Set Cypress env variables\\\"\\n\\n- task: Npm@1\\n displayName: \\\"Npm run test:all\\\"\\n inputs:\\n command: \\\"custom\\\"\\n workingDir: cypress\\n customCommand: \\\"run test:all\\\"\\n```\\n
\\n\\nAfter running our tests we make sure to publish the test results and the captured videos regardless of whether the tests pass or fail.
\\n\\n\\n```yaml\\n- task: PublishTestResults@2\\n displayName: \\\"Publish Test Results **/test-result-*.xml\\\"\\n condition: succeededOrFailed()\\n inputs:\\n searchFolder: \\\"$(System.DefaultWorkingDirectory)\\\"\\n testResultsFormat: \\\"JUnit\\\"\\n testResultsFiles: \\\"**/test-result-*.xml\\\"\\n failTaskOnFailedTests: false\\n\\n- task: CopyFiles@2\\n displayName: \\\"Copy videos\\\"\\n inputs:\\n SourceFolder: cypress/videos\\n TargetFolder: \\\"$(build.artifactstagingdirectory)\\\"\\n condition: succeededOrFailed()\\n```\\n
\\n\\nLastly, we publish the main artifact, given that all tests pass.
\\n\\n\\n```yaml\\n- task: ArchiveFiles@2\\n displayName: \\\"Zip artifact\\\"\\n inputs:\\n rootFolderOrFile: \\\"$(Build.SourcesDirectory)\\\"\\n includeRootFolder: false\\n archiveFile: \\\"$(Build.ArtifactStagingDirectory)/Cypress.zip\\\"\\n\\n- task: PublishBuildArtifacts@1\\n displayName: \\\"Publish Artifact: cypress-drop\\\"\\n inputs:\\n PathtoPublish: \\\"$(Build.ArtifactStagingDirectory)\\\"\\n ArtifactName: \\\"cypress-drop\\\"\\n publishLocation: \\\"Container\\\"\\n```\\n
\\n\\nThis is how the testing phase of one of our micro sites could look:
\\n\\n\\n\\n\\n This phase would typically come after the deployment phase so that the tests\\n are run on the live environment.\\n\\n\\nAs you can see in the image, we're using an Azure task group, since the only thing that differs between the test phases of our foo and bar pipelines is which tests we should run (notice the npm script).
\\n\\nVariable / task groups are great when you need to reuse functionality across multiple pipelines.\\nLet's have a look at how both of these work.
\\n\\nYou create a variable group under Pipelines -> Library -> Variable groups.
\\n\\nIn our case we create two:
\\n\\n\\n- site-qa\\n- site-prod\\n
\\n\\nRemember our script phase in our YAML file? This is where the variables **foo** and **bar** are specified and given different values depending on the environment.
\\n\\n\\n\\nA task group is a set of reusable commands, written in either GUI blocks or with YAML.
\\n\\nAs I wrote earlier, we need one of these because we're going to have multiple pipelines and want to avoid writing the same thing over and over again. Right now we only have the foo and bar sites, but in the future this could scale to more sites.
\\n\\nOur task group will test one of our sites based on a dynamic argument, more specifically:
\\n\\n\\n- Download and extract the latest artifact\\n- Run npm CI\\n- Run a specific test script (command line)\\n- Publish test results\\n
\\n\\nThe dynamic element of our task group is set up like this:
\\n\\n\\n```bash\\n# export (not 'set') so the variables reach the npm process\\nexport CYPRESS_foo=$(foo)\\nexport CYPRESS_bar=$(bar)\\n\\nnpm run $(npm.script)\\n```\\n
\\n\\nAs you can see from the code above, the task group depends on the pipeline using it to not only include the appropriate variable group but also to specify which test script the task group should initiate.
\\n\\nRemember that if everything goes as planned and all of our tests pass, the YAML file has instructions to create and publish a second artifact named **cypress-drop**.
\\n\\nThis artifact is what you could use as a trigger to set up a release pipeline that runs your tests against some other environment like prod or pre-prod; just remember to load in the correct variable group.
\\n\\nThe idea behind this post was to show a simple way of having one single repo host the tests for multiple sites, and how to connect the dots in Azure DevOps.
\\n\"],\"collection\":[0,\"blog\"],\"data\":[0,{\"draft\":[0,false],\"title\":[0,\"Cypress micro-frontend architecture on Azure Devops\"],\"summary\":[0,\"Learn to setup Cypress for a micro-frontend architecture on Azure Devops\"],\"publishedAt\":[0,\"2020-02-18\"],\"tags\":[1,\"[[0,\\\"azure devops\\\"],[0,\\\"cypress\\\"],[0,\\\"integration tests\\\"]]\"],\"image\":[0,\"/static/images/cypress.png\"]}],\"render\":[0,null]}],[0,{\"id\":[0,\"introducing-react-metro.mdx\"],\"slug\":[0,\"introducing-react-metro\"],\"body\":[0,\"import { H1, H2, H3, P, Tagline } from '../../components/Base'\\n\\nAbout two weeks ago I found myself needing to animate a sequence of React components as they mount / unmount, and since I couldn’t google up a lib that did just that, I decided to make one myself, for funsies.
\\n\\nThe idea behind Metro is simply to combine the power of TransitionGroup(Plus) and GSAP TweenMax, and provide a set of helper methods to enhance everyday data.
\\n\\nA set of components should animate away when interacted with, accentuating the one that got selected in some way. Then, when the animation finishes, dispatch something and go on to a new page.
\\n\\nSince animations are most often considered ‘nice to have’, chances are you already have working components mapped with all the necessary data, like emojis. Therefore I knew I wanted something that wouldn’t force me to rewrite my presentational components: **Metro.sequence**.
\\n\\n\\n\\nThe gist is to check whether the sequence should be shown or not, then build a sequence by providing your data and mapping your presentational component through the Metro.animation HOC.
\\n\\nThis is enough to animate your components as they mount / unmount: Metro uses a default preset, and each component accesses its data through this.props.emoji.
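\\n\\nTo make the flow concrete, here's a rough sketch of the idea. Treat the exact Metro.sequence signature as a placeholder of mine and check the repo for the real API:
\\n\\n\\n```javascript\\n// hypothetical sketch, not the verbatim react-metro API\\nimport React from 'react'\\nimport Metro from 'react-metro'\\n\\nconst data = [{ emoji: '🌮' }, { emoji: '🍕' }, { emoji: '🍔' }]\\n\\n// untouched presentational component, reads its data via props\\nconst Emoji = props => <span onClick={props.onClick}>{props.emoji}</span>\\n\\n// render the sequence only when it should be shown;\\n// Metro.sequence maps each item through the Metro.animation HOC\\nconst Page = ({ show }) => (show ? Metro.sequence(data, Emoji) : null)\\n```\\n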
\\n\\nThe real power of Metro lives between your data and the map:
\\n\\n\\n\\nThe animationMap gives the developer total control of how each item in a sequence should be animated. When provided, Metro spreads your map on top of the default preset (which can also be overridden). What you do with your map is totally up to you; you just have to make sure to provide a map whose length equals that of your data.
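\\n\\nAs an illustration, an animationMap could look something like the sketch below; the property names are hypothetical and depend on which preset / TweenMax options you use:
\\n\\n\\n```javascript\\n// hypothetical animationMap, one entry per item in the data array\\nconst animationMap = [\\n { delay: 0, time: 0.4 },\\n { delay: 0.2, time: 0.4 },\\n { delay: 0.4, time: 0.4 }, // length must equal data.length\\n]\\n\\nMetro.sequence(data, Emoji, animationMap)\\n```\\n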
\\n\\nEven though the developer has total control of an animation through the use of custom animationMaps, I created a helper method called **Metro.generateFocusMap** for cases where you want to accentuate a specific item within your sequence without having to invest time and effort in writing a custom animationMap.
\\n\\nToday there’s only a couple of presets (all domino based) but the plan is to build a small library with all kinds of variations. Hopefully with the kind help of our generous community this will be a breeze! 💨💨
\\n\\nAs mentioned earlier, you can override the default animation by passing Metro.sequence an animation object as the last argument. Remember that this setting will affect all your items equally, no matter how many items your sequence may contain:
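\\n\\nSketched out, that could look like this (again, the option names are placeholders of mine):
\\n\\n\\n```javascript\\n// hypothetical override passed as the last argument,\\n// applied equally to every item in the sequence\\nMetro.sequence(data, Emoji, animationMap, { time: 0.8, delay: 0.1 })\\n```\\n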
\\n\\n\\n\\nThere are four of them, all optional: one for setting the element type, one for handling clicks and two for binding to sequence mount / unmount complete events:
\\n\\n\\n\\nA detail article coming soon, for now this will get you started:
\\n\\n\\n\\nNicolás Delfino
\\n\"],\"collection\":[0,\"blog\"],\"data\":[0,{\"draft\":[0,false],\"title\":[0,\"Introducing React Metro\"],\"summary\":[0,\"A tiny configurable wrapper for animating dom elements as they mount or unmount\"],\"publishedAt\":[0,\"2017-09-27\"],\"tags\":[1,\"[[0,\\\"React\\\"],[0,\\\"Animation\\\"]]\"],\"image\":[0,\"/static/images/cls/cls_2.png\"]}],\"render\":[0,null]}]]"],"tags":[1,"[[0,\"expo\"],[0,\"eas\"],[0,\"astro\"],[0,\"typescript\"],[0,\"content collection\"],[0,\"azure devops\"],[0,\"CI/CD\"],[0,\"react native\"],[0,\"performance\"],[0,\"speedcurve\"],[0,\"CLS\"],[0,\"web vitals\"],[0,\"micro-frontends\"],[0,\"composition\"],[0,\"classification\"],[0,\"micro frontends\"],[0,\"module federation\"],[0,\"component library\"],[0,\"ui\"],[0,\"webpack\"],[0,\"progressive-enhancement\"],[0,\"scs\"],[0,\"alpine\"],[0,\"svelte\"],[0,\"prerendering\"],[0,\"dx\"],[0,\"architecture\"],[0,\"H-include\"],[0,\"spritelove\"],[0,\"animation\"],[0,\"cypress\"],[0,\"testing\"],[0,\"speedCurve\"],[0,\"Performance\"],[0,\"webPageTest\"],[0,\"integration tests\"],[0,\"React\"],[0,\"Animation\"]]"]}" renderer-url="/_astro/client.38423ee9.js" ssr="" uid="1WXXhl">