<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Harvey's Blog]]></title><description><![CDATA[This is Harvey's blog. A blog in which he discusses his learnings and life.]]></description><link>https://blog.harveydelaney.com/</link><image><url>https://blog.harveydelaney.com/favicon.png</url><title>Harvey&apos;s Blog</title><link>https://blog.harveydelaney.com/</link></image><generator>Ghost 3.0</generator><lastBuildDate>Fri, 03 Apr 2026 15:30:02 GMT</lastBuildDate><atom:link href="https://blog.harveydelaney.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Setting Up Automated Database (PostgreSQL) Backups Using Node.js and Bash]]></title><description><![CDATA[<p>In the event of a hardware or software failure, you risk losing your application’s entire database along with your important data. So obviously you should have a disaster recovery plan in place that will cause the least user disruption and data loss.</p><p>To be sure you can recover your</p>]]></description><link>https://blog.harveydelaney.com/setting-up-automated-database-postgresql-backups-using-node-js-and-bash/</link><guid isPermaLink="false">614e8178e7468a00012e9073</guid><dc:creator><![CDATA[Harvey Delaney]]></dc:creator><pubDate>Sat, 25 Sep 2021 01:59:22 GMT</pubDate><content:encoded><![CDATA[<p>In the event of a hardware or software failure, you risk losing your application’s entire database along with your important data. 
You should therefore have a disaster recovery plan in place that minimises user disruption and data loss.</p><p>To be sure you can recover your data with minimal data loss:</p><ul><li>Keep an up-to-date backup of your database.</li><li>Restore it in production in case of failure.</li></ul><p>You can schedule automated database backups and save yourself the hassle of doing them manually. This way, if there is a database server failure, you can always use the latest backup to restore the database.</p><p>In this post, you will learn how to use Node.js to run the bash commands that prepare a backup of a <a href="https://blog.harveydelaney.com/setting-up-graphql-express-and-postgresql/">PostgreSQL</a> database. The backup will then be uploaded to another server using Node.js. Finally, we will write a cron job to schedule this backup process so it runs in a defined window. Let’s get started.</p><h2 id="taking-postgresql-database-backup">Taking PostgreSQL database backup</h2><p><strong>pg_dump</strong> is a utility used to take logical backups of a PostgreSQL database. It is installed automatically as part of the PostgreSQL installation. Below is the syntax of the <strong>pg_dump</strong> command.</p><pre><code>ubuntu@ubuntu:~$ pg_dump [connection options] [options] [database name]</code></pre><p><strong>NOTE:</strong> This post contains commands specific to <strong>PostgreSQL 13</strong>. For other versions, commands may vary.</p><p>The following connection options are used with the <strong>pg_dump</strong> command to take the database backup.</p><ul><li><strong>-U</strong> specifies the database user used to run the command.</li><li><strong>-h</strong> specifies the hostname of the PostgreSQL server. 
It may be an IP address or a DNS name.</li><li><strong>-p</strong> specifies the port the database server is listening on.</li></ul><p>Other options for database backup include:</p><ul><li><strong>-f</strong> specifies the name of the output file.</li><li><strong>-F</strong> specifies the output file format.</li><li><strong>-d</strong> specifies the name of the database to back up.</li></ul><p>If you wish to dig deeper and explore more options, there is a list of them <a href="https://www.postgresql.org/docs/13/app-pgdump.html">here</a>.</p><p>To back up the entire database using the <strong>pg_dump</strong> utility along with the necessary options, use this syntax:</p><pre><code>ubuntu@ubuntu:~$ pg_dump -U admin -h localhost -p 5432 -f db_backup.tar -F t -d pg_database</code></pre><p>The above command will generate a file <strong>db_backup.tar</strong>, a backup of the entire PostgreSQL database named <strong>pg_database</strong>. We will run this command from Node.js code and then upload the backup file to another server.</p><h2 id="running-the-bash-command-using-node-js">Running the bash command using Node.js</h2><p>In this section, we will write Node.js code that runs the above-mentioned bash “pg_dump” command to take the PostgreSQL database backup, and then uploads the backup file to another server.</p><p>First of all, install the required dependencies using <strong>npm</strong>, the package manager for Node.js. We will use the <strong>dotenv</strong> and <strong>@getvim/execute</strong> packages. <strong>dotenv</strong> is used to manage environment variables; we will use it to pass the database connection parameters. 
The <strong>@getvim/execute</strong> package will be used to run bash commands from Node.js code.</p><pre><code>ubuntu@ubuntu:~$ npm install dotenv @getvim/execute</code></pre><p>Create a file <strong>.env</strong> in the root directory of the Node.js project and enter the database parameters such as the database user, host, and database name.</p><pre><code>DB_USER=admin
DATABASE=pg_db
DB_HOST=localhost
DB_PORT=5432</code></pre><p>Now create a file <strong>index.js</strong> and write the Node.js code to take the backup of the database.</p><pre><code class="language-javascript">// importing required modules
const { execute } = require('@getvim/execute');
const dotenv = require('dotenv').config();

// getting db connection parameters from environment file
const username = process.env.DB_USER;
const database = process.env.DATABASE;
const dbHost = process.env.DB_HOST;
const dbPort = process.env.DB_PORT;
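// NOTE (assumption, not in the original article): pg_dump prompts for a
// password unless one is provided. If your server requires password
// authentication, add DB_PASSWORD to .env; child processes inherit
// process.env, so pg_dump will pick up PGPASSWORD automatically.
// (Using a ~/.pgpass file is an alternative.)
if (process.env.DB_PASSWORD) {
    process.env.PGPASSWORD = process.env.DB_PASSWORD;
}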

// defining backup file name
const date = new Date();
const today = `${date.getFullYear()}-${date.getMonth() + 1}-${date.getDate()}`; // getMonth() is zero-based
const backupFile = `pg-backup-${today}.tar`;
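// Note: the date parts above are not zero-padded, so backup names will not
// sort chronologically as plain strings ("2021-9-5" sorts after "2021-10-5").
// An ISO-style stamp (illustrative alternative) avoids that:
const isoToday = new Date().toISOString().slice(0, 10); // "YYYY-MM-DD" (UTC)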

// writing postgresql backup function
const takePGBackup = () =&gt; {
    execute(`pg_dump -U ${username} -h ${dbHost} -p ${dbPort} -f ${backupFile} -F t -d ${database}`)
    .then( async () =&gt; {
        console.log(`Backup created successfully`);
    })
    .catch( (err) =&gt; {
        console.log(err);
    });
}
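
// Optional sanity check (illustrative, not part of the original article):
// confirm the dump file exists and is non-empty before trusting the backup.
// You could call verifyBackup(backupFile) inside the .then() above.
function verifyBackup(file) {
    const fs = require('fs');
    const stats = fs.statSync(file); // throws if the file does not exist
    if (stats.size === 0) {
        throw new Error('Backup file ' + file + ' is empty');
    }
    return stats.size;
}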

// calling postgresql backup function
takePGBackup();</code></pre><p>The above code gets the db parameters from the <strong>.env</strong> file and then takes a backup of the PostgreSQL database. This backup now needs to be compressed, as the original database backup may be very large.</p><p>Extend the existing <strong>takePGBackup()</strong> function and add the code to compress the backup file. Before writing the code, install the <strong>gzipme</strong> package, which is used to compress files.</p><pre><code>ubuntu@ubuntu:~$ npm install gzipme</code></pre><p>After installing the package, do not forget to import the <strong>fs</strong> and <strong>gzipme</strong> packages into the <strong>index.js</strong> file using the <strong>require()</strong> method.</p><pre><code class="language-javascript">const { execute } = require('@getvim/execute');
const dotenv = require('dotenv').config();
const compress = require('gzipme');
const fs = require('fs');
const takePGBackup = () =&gt; {

execute(`pg_dump -U ${username} -h ${dbHost} -p ${dbPort} -f ${backupFile} -F t -d ${database}`)
    .then( async () =&gt; {
        // add these lines to compress the backup file
        await compress(backupFile);
        fs.unlinkSync(backupFile);
        console.log("Gzipped backup created");
    })
    .catch( (err) =&gt; {
        console.log(err);
    });
}
// calling postgresql backup function
takePGBackup();</code></pre><p>So far, this code takes the backup from the database and compresses it into a gzip file. Now write a function to upload this backup to another server. This function will make a POST request to the other server with the backup file in the body.</p><p>Next on the to-do list is to install the <strong>axios</strong> and <strong>form-data</strong> modules. The <strong>axios</strong> module will be used to make the POST request, while the <strong>form-data</strong> module will be used to send the file in the body of the POST request.</p><pre><code>ubuntu@ubuntu:~$ npm install axios form-data</code></pre><pre><code class="language-javascript">const { execute } = require('@getvim/execute');
const dotenv = require('dotenv').config();
const compress = require('gzipme');
const fs = require('fs');
const axios = require('axios');
const formData = require('form-data');
// function to upload backups
const uploadBackups = (backupFile) =&gt;
{
    // making post request using axios
    const form = new formData();
    form.append('file', fs.createReadStream(backupFile)); // stream the file contents rather than just its name
    axios.post('http://backup-server.url/api/upload-form', form, { headers: form.getHeaders()})
    .then(result =&gt; {
        fs.unlinkSync(backupFile);
        console.log(result.data);
    })
    .catch(err =&gt; {
    	console.log(err);
    })
}

// calling uploadBackups function
uploadBackups(`${backupFile}.gz`); // upload the gzipped file produced by gzipme</code></pre><h2 id="schedule-the-backups-using-cron-job">Schedule the backups using cron job</h2><p>You want to be sure your data is backed up regularly; otherwise, if an event causes you to lose your database, your backup will only be as current as the last time you remembered to back it up. It would be a shame to suffer a major setback simply because you didn’t schedule your backups.</p><p>The code to take the automated backup and upload it to the server is now complete. It is time to schedule its execution, and we will use a cron job for that.</p><p>cron is a scheduler available on Linux machines. A job may be a command or a script, and its schedule is defined using a set of fields:</p><pre><code>&lt;m&gt; &lt;h&gt; &lt;dom&gt; &lt;mon&gt; &lt;dow&gt; &lt;cmd&gt;</code></pre><ul><li><strong>minute:</strong> The first field from the left; defines the minute of the hour the job runs. It ranges from 0 to 59.</li><li><strong>hour:</strong> Second field from the left; specifies the hour. It has values from 0 to 23, where 0 is midnight.</li><li><strong>dayOfMonth:</strong> Third field from the left; specifies the day of the month the command runs. It has values from 1 to 31.</li><li><strong>month:</strong> Fourth field from the left; specifies the month the command runs. It has values from 1 to 12.</li><li><strong>dayOfWeek:</strong> Fifth field from the left; specifies the day of the week the task runs. 
It has values from 0 to 6, with 0 being Sunday.</li><li><strong>Command:</strong> The last field is the command that is executed on the specified schedule.</li></ul><p>To run this code using a cron job, first find the location of the node binary using the <strong>which</strong> command, as cron requires the absolute path of the command.</p><pre><code>ubuntu@ubuntu:~$ which node</code></pre><p>Open the crontab in the terminal using the <strong>-e</strong> option along with the <strong>crontab</strong> command.</p><pre><code>ubuntu@ubuntu:~$ sudo crontab -e</code></pre><p>It will open the crontab. At the end of the file, use the following pattern to run the code every day at 8:00 AM.</p><pre><code>0 8 * * * /usr/local/bin/node &lt;absolute path to node script&gt;</code></pre><p>An asterisk (*) in the above pattern means every possible value for the field. There are other options that can be used to customise the schedule of the cron job; explore and play around with all the different options <a href="https://crontab.guru/">here</a>. After scheduling the job, verify it by listing all the scheduled jobs using the following command.</p><pre><code>ubuntu@ubuntu:~$ sudo crontab -l</code></pre><p>The scheduled jobs are listed at the end of the output. To remove the scheduled jobs, use the following command (note that it clears the entire crontab for the user, not a single job).</p><pre><code>ubuntu@ubuntu:~$ sudo crontab -r</code></pre><p>If you edit the cron configuration files directly rather than via <strong>crontab -e</strong>, restart the cron service afterwards.</p><pre><code>ubuntu@ubuntu:~$ sudo systemctl restart cron</code></pre><p>OR</p><pre><code>ubuntu@ubuntu:~$ sudo service cron reload</code></pre><h2 id="wrapping-up">Wrapping up</h2><p>In this post, we’ve looked at how to use a Node.js program to make database backups. 
We’ve identified how to schedule this task using a cron job and how to use bash commands inside our Node.js code to automate the database backup, compress the backup, and upload the compressed database backup to another server for disaster recovery.<br><br></p>]]></content:encoded></item><item><title><![CDATA[Creating your own Vue Component Library]]></title><description><![CDATA[<p>Previously, I wrote an article outlining the steps required to get a React component library up and running. You can read it here: <a href="https://blog.harveydelaney.com/creating-your-own-react-component-library/#creating-the-component-library">https://blog.harveydelaney.com/creating-your-own-react-component-library</a></p><p>It's had some pretty good feedback so I decided to branch out and create another component library template for Vue! I've not</p>]]></description><link>https://blog.harveydelaney.com/creating-your-own-vue-component-library/</link><guid isPermaLink="false">5f327b56d8c8ff0001203fae</guid><category><![CDATA[Vue]]></category><category><![CDATA[Component Library]]></category><category><![CDATA[Front-end]]></category><dc:creator><![CDATA[Harvey Delaney]]></dc:creator><pubDate>Tue, 11 Aug 2020 12:28:31 GMT</pubDate><content:encoded><![CDATA[<p>Previously, I wrote an article outlining the steps required to get a React component library up and running. You can read it here: <a href="https://blog.harveydelaney.com/creating-your-own-react-component-library/#creating-the-component-library">https://blog.harveydelaney.com/creating-your-own-react-component-library</a></p><p>It's had some pretty good feedback so I decided to branch out and create another component library template for Vue! I didn't have much experience with Vue before this article, so it was a great learning experience. 
</p><h1 id="vue-component-library-code">Vue Component Library Code</h1><p>If you think you'll find it helpful to have the code up while you read through the article, have a look at the following GitHub repository I created:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/HarveyD/vue-component-library"><div class="kg-bookmark-content"><div class="kg-bookmark-title">HarveyD/vue-component-library</div><div class="kg-bookmark-description">A project skeleton to get your very own Vue component library up and running using Rollup, Typescript + Vue - HarveyD/vue-component-library</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/favicons/favicon.svg"><span class="kg-bookmark-author">HarveyD</span><span class="kg-bookmark-publisher">GitHub</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://avatars1.githubusercontent.com/u/5586128?s=400&amp;v=4"></div></a></figure><h1 id="component-library-overview">Component Library Overview</h1><p>We'll be using the following to create the library:</p><ul><li>Vue (obviously)</li><li>TypeScript</li><li>Rollup</li></ul><p>I'll also be showing you how to publish your component library to NPM and how to consume it.</p><h1 id="creating-the-component-library">Creating the Component Library</h1><h2 id="project-setup">Project Setup</h2><p>Initialise NPM - <code>npm init</code> to generate a basic <code>package.json</code> file.</p><p>Initialise Git - <code>git init</code>.</p><p>We're going to have the following structure for our component library:</p><pre><code>.gitignore
package.json
rollup.config.js
tsconfig.json
src/
  SampleComponent/
    SampleComponent.vue
  index.ts
  shims-tsx.d.ts
  shims-vue.d.ts</code></pre><p>We will be outputting our bundled + compiled files into the <code>lib</code> directory.</p><h2 id="vue">Vue</h2><p>Since we're building a Vue component library, we need to install Vue and its required dependencies to get it working. Run:</p><pre><code>npm i -D vue vue-class-component vue-property-decorator @vue/compiler-sfc vue-template-compiler</code></pre><p>You'll notice we're installing these as <code>devDependencies</code>, which won't be included when we install the component library in another project. We expect the project that installs the component library to already have Vue installed.</p><p>To make sure this is the case, we can specify the Vue dependencies as <code>peerDependencies</code>. To do this, open <code>package.json</code> and add:</p><pre><code class="language-json">...
"peerDependencies": {
    "vue": "^2.6.11",
    "vue-class-component": "^7.2.3",
    "vue-property-decorator": "^8.4.2"
  },
  ...</code></pre><p>When the component library is installed as a dependency, but the consuming project does not have the <code>peerDependencies</code>, a warning will be output.</p><h3 id="vue-component">Vue Component</h3><p>What use is a component library without any components? Let's add our first <strong>Vue </strong>component. Within <code>src/SampleComponent/SampleComponent.vue</code> add:</p><figure class="kg-card kg-code-card"><pre><code class="language-vue">&lt;template&gt;
  &lt;div class="sample-component-container"&gt;
    &lt;h2&gt;{{ headingText }}&lt;/h2&gt;
    &lt;h3&gt;{{ bodyText }}&lt;/h3&gt;
  &lt;/div&gt;
&lt;/template&gt;

&lt;script lang="ts"&gt;
import { Component, Prop, Vue } from "vue-property-decorator";

@Component
export default class SampleComponent extends Vue {
  @Prop() private headingText: string = "";
  @Prop() private bodyText: string = "";
}
&lt;/script&gt;

&lt;style&gt;
.sample-component-container {
  padding: 40px;
  background-color: black;
  color: white;
}
&lt;/style&gt;
</code></pre><figcaption>src/SampleComponent/SampleComponent.vue</figcaption></figure><p>Next, let's expose this component in the root file of our library <code>src/index.ts</code>:</p><pre><code class="language-typescript">import SampleComponent from "./SampleComponent/SampleComponent.vue";

export { SampleComponent };
</code></pre><h2 id="typescript">TypeScript</h2><p>Install TypeScript by running:</p><pre><code>npm i -D typescript</code></pre><p>After installing TypeScript, create a <code>tsconfig.json</code> file. Within <code>tsconfig.json</code>, add:</p><pre><code class="language-json">{
  "compilerOptions": {
    "declarationDir": "lib",
    "target": "es5",
    "module": "es2015",
    "sourceMap": true,
    "declaration": true,
    "importHelpers": true,
    "strict": true,
    "experimentalDecorators": true
  },
  "include": ["src/**/*.ts", "src/**/*.vue"]
}
</code></pre><p>To get Vue types working with TypeScript, we need to introduce some shims. In <code>src/shims-tsx.d.ts</code> add:</p><pre><code class="language-typescript">import Vue, { VNode } from 'vue'

declare global {
  namespace JSX {
    // tslint:disable no-empty-interface
    interface Element extends VNode {}
    // tslint:disable no-empty-interface
    interface ElementClass extends Vue {}
    interface IntrinsicElements {
      [elem: string]: any
    }
  }
}
</code></pre><p>And in <code>src/shims-vue.d.ts</code> add:</p><pre><code class="language-typescript">declare module '*.vue' {
  import Vue from 'vue'
  export default Vue
}
</code></pre><h2 id="rollup">Rollup</h2><p>First, add Rollup and some additional Rollup plugins that'll help us get our component library transpiled and bundled:</p><pre><code>npm i -D rollup rollup-plugin-peer-deps-external @rollup/plugin-node-resolve @rollup/plugin-commonjs rollup-plugin-typescript2 rollup-plugin-vue</code></pre><p>In <code>rollup.config.js</code> add:</p><pre><code class="language-javascript">import peerDepsExternal from "rollup-plugin-peer-deps-external";
import resolve from "@rollup/plugin-node-resolve";
import commonjs from "@rollup/plugin-commonjs";
import typescript from "rollup-plugin-typescript2";
import vue from "rollup-plugin-vue";

import packageJson from "./package.json";

export default {
  input: "src/index.ts",
  output: [
    {
      format: "cjs",
      file: packageJson.main,
      sourcemap: true
    },
    {
      format: "esm",
      file: packageJson.module,
      sourcemap: true
    }
  ],
  plugins: [peerDepsExternal(), resolve(), commonjs(), typescript(), vue()]
};
</code></pre><p>Let's go through this config file and understand what it's doing.</p><h3 id="input">Input</h3><p>This points to <code>src/index.ts</code>. Rollup will build up a dependency graph from this entry point and then bundle all the components that are imported/exported. </p><h3 id="output">Output</h3><p>This is an array with two config objects. These two configs tell Rollup to output two bundles in two different JavaScript module formats:</p><ul><li><strong>CommonJS - CJS</strong></li><li><strong>ES Modules - ESM</strong></li></ul><p><strong>IMPORTANT</strong>: You'll notice in <code>output</code>, values from <code>package.json</code> are used. In <code>package.json</code>, add:</p><pre><code class="language-json">...
  "main": "lib/index.js",
  "module": "lib/index.esm.js",
...</code></pre><p>This tells packages that install our component library where its entry points are. We re-use these values in <code>rollup.config.js</code> to instruct Rollup where to output our bundled files.</p><h3 id="plugins">Plugins</h3><ul><li>peerDepsExternal - if you remember, earlier we specified some <code>peerDependencies</code>. This plugin takes these dependencies and assigns them to Rollup's <code>external</code> field. Any module in <code>external</code> is not added to the Rollup bundle. This reduces our bundle size, as we already know these dependencies will be present in any project that consumes our library.</li><li>resolve (<code>@rollup/plugin-node-resolve</code>) - efficiently bundles third-party dependencies we've installed and use from <code>node_modules</code></li><li>commonjs (<code>@rollup/plugin-commonjs</code>) - converts CommonJS dependencies to ES modules so Rollup can bundle them</li><li>typescript (<code>rollup-plugin-typescript2</code>) - transpiles our TypeScript code into JavaScript. This plugin will use all the settings we have set in <code>tsconfig.json</code></li><li>vue (<code>rollup-plugin-vue</code>) - allows us to author our Vue components as <a href="https://vue-loader.vuejs.org/spec.html" rel="noopener noreferrer">Single-File Components (SFCs)</a> and still have Rollup bundle them as expected</li></ul><h2 id="running-rollup">Running Rollup</h2><p>Now we need a way to instruct Rollup to do its thing. In <code>package.json</code>, add:</p><pre><code class="language-json">...
  "scripts": {
    "build": "rollup -c"
  },
  ...</code></pre><p>The <code>-c</code> flag tells Rollup to use the <code>rollup.config.js</code> file we've created.</p><p>Try it by running <code>npm run build</code>; you should see:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/08/image-2.png" class="kg-image"></figure><p>Then in <code>/lib</code> you should see something like:<br> </p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/08/image-3.png" class="kg-image"></figure><p>If so, your component library has been successfully configured. It's now time to use it!</p><p><em><strong>Note</strong></em>: it's probably worth having the following <code>.gitignore</code>:</p><pre><code>node_modules
lib</code></pre><h2 id="final-npm-config">Final NPM Config</h2><p>As a reference, here's what your final <code>package.json</code> should look like:</p><pre><code class="language-json">{
  "name": "vue-component-library",
  "main": "lib/index.js",
  "module": "lib/index.esm.js",
  "files": ["lib"],
  "scripts": {
    "build": "rollup -c"
  },
  "peerDependencies": {
    "vue": "^2.6.11",
    "vue-class-component": "^7.2.3",
    "vue-property-decorator": "^8.4.2"
  },
  "devDependencies": {
    "vue": "^2.6.11",
    "vue-class-component": "^7.2.3",
    "vue-property-decorator": "^8.4.2",
    "@rollup/plugin-commonjs": "^14.0.0",
    "@rollup/plugin-node-resolve": "^8.4.0",
    "@vue/compiler-sfc": "^3.0.0-rc.5",
    "rollup": "^2.23.1",
    "rollup-plugin-peer-deps-external": "^2.2.3",
    "rollup-plugin-typescript2": "^0.27.2",
    "rollup-plugin-vue": "^5.1.6",
    "typescript": "^3.9.7",
    "vue-template-compiler": "^2.6.11"
  }
}
</code></pre><h1 id="publishing-the-component-library">Publishing the Component Library</h1><p>Now it's time to publish our component library. We can either publish our library on the public <a href="https://www.npmjs.com/">NPM</a> registry, or we can use a self-hosted private NPM registry alternative like <a href="https://github.com/verdaccio/verdaccio">Verdaccio</a>. After you've made your choice and configured NPM to point to the registry, run:</p><pre><code>npm publish</code></pre><p>NPM will then read the <code>files</code> field to see what files it should include when publishing. In our case it's <code>lib</code>, so it'll publish the bundled/transpiled code output by Rollup.</p><p>You should see something like:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/08/image-4.png" class="kg-image"></figure><p>For example, I've published a version of this Vue component library at: <a href="https://www.npmjs.com/package/harvey-vue-component-library">https://www.npmjs.com/package/harvey-vue-component-library</a></p><p><strong><em>Note:</em></strong> You have to make sure that the files are available in <code>lib</code> before publishing.</p><h1 id="using-the-component-library">Using the Component Library</h1><h2 id="locally">Locally</h2><p>We <strong>don't </strong>have to publish the component library to an NPM registry before installing it and testing out our components. </p><p>Let's say you had a Vue project on your local machine called <code>harvey-test-app</code>. In <code>harvey-test-app</code>, run (making sure the path to your component library is correct):</p><pre><code>npm i -D ../vue-component-library</code></pre><p>This creates <code>vue-component-library</code> as a symlinked dependency. 
Read more at: <a href="https://docs.npmjs.com/cli/link">https://docs.npmjs.com/cli/link</a></p><h2 id="from-npm-registry">From NPM Registry</h2><p>If your component library is available at: <a href="https://www.npmjs.com/package/harvey-vue-component-library">https://www.npmjs.com/package/harvey-vue-component-library</a>, you would install it in another Vue project by running:</p><pre><code>npm i -D harvey-vue-component-library</code></pre><h2 id="using-components">Using Components</h2><p>Make sure that you register your Vue component. Then you can simply import and use components in another project like:</p><pre><code class="language-vue">&lt;template&gt;
  &lt;div id="app"&gt;
    &lt;h1&gt; Hello I'm consuming the component library &lt;/h1&gt;
    &lt;SampleComponent headingText="This is a test component" bodyText="Made with love by Harvey"/&gt;
  &lt;/div&gt;
&lt;/template&gt;

&lt;script lang="ts"&gt;
import { Component, Vue } from "vue-property-decorator";
import { SampleComponent } from "vue-component-library";

@Component({
  components: {
    SampleComponent
  }
})
export default class App extends Vue {}
&lt;/script&gt;
</code></pre><p>Here's a CodeSandbox with an example:</p><!--kg-card-begin: html--><iframe src="https://codesandbox.io/embed/romantic-keller-egbey?fontsize=14&hidenavigation=1&theme=dark" style="width:100%; height:500px; border:0; border-radius: 4px; overflow:hidden;" title="romantic-keller-egbey" allow="accelerometer; ambient-light-sensor; camera; encrypted-media; geolocation; gyroscope; hid; microphone; midi; payment; usb; vr; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-presentation allow-same-origin allow-scripts"></iframe><!--kg-card-end: html--><p></p><h1 id="github-repository">GitHub Repository</h1><p>Again, here's the GitHub repo I created with everything in this article:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/HarveyD/vue-component-library"><div class="kg-bookmark-content"><div class="kg-bookmark-title">HarveyD/vue-component-library</div><div class="kg-bookmark-description">A project skeleton to get your very own Vue component library up and running using Rollup, Typescript + Vue - HarveyD/vue-component-library</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/favicons/favicon.svg"><span class="kg-bookmark-author">HarveyD</span><span class="kg-bookmark-publisher">GitHub</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://avatars1.githubusercontent.com/u/5586128?s=400&amp;v=4"></div></a></figure>]]></content:encoded></item><item><title><![CDATA[Choosing a Usenet Provider - a Beginner's Guide]]></title><description><![CDATA[<p>I've recently started using Usenet instead of torrents for downloading content. 
Read about my Usenet articles below: </p><ul><li><a href="https://blog.harveydelaney.com/switching-from-torrents-to-usenet-the-why-and-how/">Switching from Torrents to Usenets - The Why and How</a></li><li><a href="https://blog.harveydelaney.com/configuring-your-usenet-provider-and-indexer-with-sonarr-radarr/">Automating your Usenet downloads</a></li></ul><p>One thing I spent the most time on in my Usenet setup was the research into what Usenet provider</p>]]></description><link>https://blog.harveydelaney.com/choosing-a-usenet-provider/</link><guid isPermaLink="false">5e9b8fc89786f60001093c0a</guid><dc:creator><![CDATA[Harvey Delaney]]></dc:creator><pubDate>Sun, 31 May 2020 11:21:39 GMT</pubDate><content:encoded><![CDATA[<p>I've recently started using Usenet instead of torrents for downloading content. Read about my Usenet articles below: </p><ul><li><a href="https://blog.harveydelaney.com/switching-from-torrents-to-usenet-the-why-and-how/">Switching from Torrents to Usenets - The Why and How</a></li><li><a href="https://blog.harveydelaney.com/configuring-your-usenet-provider-and-indexer-with-sonarr-radarr/">Automating your Usenet downloads</a></li></ul><p>The thing I spent the most time on in my Usenet setup was researching which Usenet provider to use. I found that it is the most important service you will choose within your Usenet setup. Usenet providers usually offer paid subscription plans, and whenever I pay for a service (especially a subscription), I want to make sure I choose the best provider and plan for my needs.</p><p>This article will guide you through what I think are the most important things to consider when choosing a Usenet provider. I recommend you consult other articles as well, to make sure their advice aligns with my recommendations here!</p><p>I've identified a number of "core" features/offerings that are the most important things to consider when choosing a Usenet provider. 
I'll also cover "nice to have" features/offerings, which are unessential extras that Usenet providers bundle into their plans to sweeten the deal.</p><h2 id="core">Core</h2><ul><li>Download speeds</li><li>Download limits</li><li>Number of connections</li><li>Binary retention duration</li><li>Available server regions</li><li>Pricing and subscriptions</li></ul><h2 id="nice-to-haves">Nice to haves</h2><ul><li>Trial periods</li><li>Customer support</li><li>SSL</li><li>VPN</li><li>Newsreaders / Search / Browser</li></ul><h1 id="core-1">Core</h1><h2 id="download-limits">Download Limits</h2><p>The most important aspect to consider for a provider. This is the amount of data that you're able to download per month. Usenet providers will usually advertise this amount clearly. The amounts are usually specified in GB/month. Some select (more expensive) plans will offer unlimited download limits.</p><p>Try to select a plan that closely meets your download needs (and you can save money!).</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-39.png" class="kg-image"><figcaption>An example of a plan with a set download limit/month</figcaption></figure><h2 id="download-speeds">Download Speeds</h2><p>This is the maximum download speed that their servers will provide you. Please note, this is a maximum, not a minimum, so expect to see speeds below this "maximum" amount.</p><p>I find this to be one of the most important points due to what Usenet providers are used for. I don't think anyone has ever said that their download speeds were too fast and asked for them to be slower. Since Usenet providers are used to download content, people want their downloads to be quick!</p><p>Make sure you're choosing a provider that meets your download speed needs.</p><h2 id="number-of-connections">Number of Connections</h2><p>At first, you may not think this applies to you, but it does! 
This is NOT the number of simultaneous file downloads that can occur, but the number of parallel connections your download client can open to your provider's servers.</p><p>A file stored on the Usenet is split into multiple parts. Your download client will attempt to download these file parts concurrently. The more connections you have, the more file parts you can download at any one time. This is important for downloading files and even more important for downloading multiple files at the same time.</p><p>This has a direct impact on your download speeds. To help visualise this, I've experimented with downloading a <strong>single file</strong> using a maximum of 1 connection, 15 connections and 30 connections using my download client (NZBGet):</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-30.png" class="kg-image"></figure><h3 id="connections-1-">Connections (1)</h3><p>Using 1 connection, I was only able to reach speeds of ~1.3 MB/s:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-31.png" class="kg-image"></figure><h3 id="connections-15-">Connections (15)</h3><p>Using 15 connections, I was able to reach speeds of ~5 MB/s: </p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-32.png" class="kg-image"></figure><h3 id="connections-30-">Connections (30)</h3><p>Using 30 connections, I was able to reach speeds of ~5.6 MB/s (the max of my ISP):</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-33.png" class="kg-image"></figure><p><em><strong>Please note</strong></em>, the above will vary based on the content you're downloading and your network speeds; it's just a rough guide :).</p><h2 id="binary-retention-duration">Binary Retention Duration</h2><p>A "binary" is a fancy word for a non-text file. 
This includes images, videos, executables etc. Once a file is uploaded to the Usenet network, a Usenet provider will attempt to store that file on their servers for a set amount of time (usually specified as days). </p><p>The higher this retention period is, the more likely you'll find older content. I've found that the upper bound of what's offered is ~4300 days. This means that if you join a provider offering that retention period, they will likely still have files that were uploaded to the Usenet almost 12 years ago. </p><p>So, if you <strong>know </strong>that you're only going to be downloading recently released content, you could save money by choosing a provider that has a shorter retention period. However, if you're a power user, I highly recommend not skimping on this aspect.</p><p>I've experienced using a provider which had an advertised <strong>1100 day</strong> retention and a provider with a <strong>4300 day</strong> retention. My experience with the 1100 day retention provider was quite poor (and I almost stopped using Usenet because of it). With most content I tried to download, I would run into failures. The failures were encountered because some "blocks" that made up a file were missing. This was likely because the retention period was on the lower end:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-38.png" class="kg-image"></figure><p>Once I switched to the 4300 day retention provider, my experience drastically improved. Most, if not all, of my content downloaded successfully the first time. For this reason, I recommend not cheaping out here: choose a provider with a long retention period and save yourself some sanity :).</p><h2 id="available-server-regions">Available Server Regions</h2><p>This is a point that is often overlooked, yet still important. Most people understand that the closer you are geographically to a server, the lower your latency and the faster your download speeds will be. 
This is also the case for Usenet providers - you want to be as close as possible to their servers.</p><p>A Usenet provider will have servers in a set number of regions. I've found that most Usenet providers don't clearly advertise what server regions they have available. But you'll find that most have US and EU servers.</p><p>You'll often have to dig through their support documentation to find out what servers are available. For example, I found the available regions (US, EU, NL, DE) for <a href="https://support.newshosting.com/kb/article/104-newshosting-nntp-server-information/">Newshosting here</a>:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-34.png" class="kg-image"></figure><h2 id="pricing-and-plans">Pricing and Plans</h2><p>All of the above points are variable based on how much you're willing to pay! The more expensive a plan, the better your Usenet experience will be. </p><p>There are two different ways that you can gain access to a Usenet provider's services:</p><ul><li>Subscription</li><li>Block amounts</li></ul><p>Subscription models are the most common way to gain access to a Usenet provider. You'll pay a set amount per month to gain access to the service. You can also pay for a year of monthly payments upfront, which will reduce the amount you pay per month.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-40.png" class="kg-image"><figcaption>An example of a cheap, monthly subscription plan</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-36.png" class="kg-image"><figcaption>An example of a more expensive unlimited subscription plan</figcaption></figure><p>Block amounts are plans that offer a fixed amount of download data for a one-time payment. 
These plans allow you to use a Usenet provider's service up until you have downloaded the agreed upon "block" amount. Be wary of the expiration dates on these blocks; make sure they don't expire earlier than you expect.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-35.png" class="kg-image"><figcaption>An example of a one time payment block account</figcaption></figure><p>Block amount plans are ideal for people new to Usenet. Subscription plans, on the other hand, are better for power users as they usually offer unlimited download amounts.</p><h1 id="nice-to-haves-1">Nice to haves</h1><p>There is a lot of competition in the Usenet provider space. Each provider can compete on the <strong>core </strong>offerings (that we covered previously), but it's become increasingly difficult to compete on price/core offerings alone (they have to pay for server and storage costs somehow!). So, to help convince people to use their service, providers offer additional benefits to help new customers choose their service over other providers.</p><h2 id="trial-periods">Trial Periods</h2><p>Most Usenet providers are confident enough in their services to offer a free trial to all new customers. I highly recommend using these trials to see if the provider meets your needs. </p><p>These free trials often limit the number of days and the amount of data (in GB) you can use the service for before the trial ends. You <strong>will </strong>have to provide your credit card information for the trial. I recommend being conscious of these numbers if you don't want to accidentally get charged when you weren't ready to commit to a plan.</p><h3 id="ssl-security-">SSL (Security)</h3><p>I was considering categorising this under "core" offerings, but it's become so common now that I've moved it here. Double-check that the provider you're signing up with provides downloads over SSL. 
This allows you to download your content over an encrypted connection - so no one can spy on what you're downloading! If the provider doesn't offer SSL, I recommend not using them.</p><p>You can usually identify this by seeing if they provide SSL ports on their support page:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-37.png" class="kg-image"></figure><h3 id="vpn-virtual-private-network-">VPN (Virtual Private Network)</h3><p>A number of Usenet providers' monthly subscription plans come with VPN services bundled. This can be super handy if you don't currently have a VPN, but if you already do, then there's less incentive to sign up with that provider. </p><p>These VPN services are usually not as good as standalone services, but they are still nice to have if you don't have one already.</p><h3 id="newsreaders-usenet-search-browsers">Newsreaders / Usenet Search / Browsers</h3><p>Some Usenet providers also grant you access to their own Usenet searching software. Sometimes they call them "Newsreaders", browsers, search or something similar. I like to think of these as a form of Usenet indexer (think Google, but for Usenet, and not as good). These searching tools simply allow you to browse the files within the Usenet. </p><p>I have found these bundled indexers to be quite poor. They don't accurately find what I'm looking for and often return unrelated results. However, they can be a good introduction to Usenets. It can be an easy way to start downloading Usenet content immediately. Since they're bundled for free within certain plans, there's no harm in giving them a try.</p><p>My personal recommendation here is that you find an independent Usenet indexing service. These services are specialised in organising, categorising and searching Usenet content. 
They also provide an API which allows you to automate your Usenet download setup (essential for me).</p><h3 id="customer-support">Customer support</h3><p>I've never needed to use customer support with any Usenet provider. Still, I would recommend looking at the provider's support page and checking whether any live customer support is provided. This may prove useful if you're new to using Usenets. </p><p>Although, I think most help and resources relating to Usenets that you will need can be found on the world wide web ;).</p><h1 id="conclusion">Conclusion</h1><p>That's it! That's all the points that I considered when selecting a Usenet provider. In summary, find a Usenet provider that most accurately meets your <strong>core</strong> needs. But also don't forget to consider if the nice to haves are... nice to have!</p>]]></content:encoded></item><item><title><![CDATA[Setting up Plex Media Server on your Unraid Server (2020)]]></title><description><![CDATA[<p>We've all tried to catalogue our downloaded media content manually on our machines. As our media content library grows, it becomes a headache to use and manage. Downloaded media content names are borderline illegible, you forget what you have and haven't watched and when you want to watch something, you</p>]]></description><link>https://blog.harveydelaney.com/setting-up-plex-media-server-on-your-unraid-server/</link><guid isPermaLink="false">5ea80a079786f60001093c71</guid><dc:creator><![CDATA[Harvey Delaney]]></dc:creator><pubDate>Sat, 30 May 2020 09:48:24 GMT</pubDate><media:content url="https://blog.harveydelaney.com/content/images/2020/05/unraid-plex.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.harveydelaney.com/content/images/2020/05/unraid-plex.jpg" alt="Setting up Plex Media Server on your Unraid Server (2020)"><p>We've all tried to catalogue our downloaded media content manually on our machines. 
As our media content library grows, it becomes a headache to use and manage. Downloaded media content names are borderline illegible, you forget what you have and haven't watched, and when you want to watch something you have to scour through tens, hundreds or thousands of files. </p><p>Here's where Plex comes to the rescue.</p><blockquote>  Plex magically scans and organizes your files, automatically sorting your media beautifully and intuitively in your Plex library. </blockquote><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.plex.tv/your-media/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Your Media | Plex</div><div class="kg-bookmark-description">Stream your media with a consistent experience on all devices.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.plex.tv/wp-content/themes/plex/assets/img/favicons/apple-touch-icon-152x152.png" alt="Setting up Plex Media Server on your Unraid Server (2020)"><span class="kg-bookmark-publisher">Plex</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://www.plex.tv/wp-content/uploads/2018/03/hero-my-media-2-1024x576.jpg" alt="Setting up Plex Media Server on your Unraid Server (2020)"></div></a></figure><p><br>Think Netflix, but for your own media content. Plex groups similar media titles, remembers what you have and haven't watched and where you're up to in videos, has excellent search functionality, provides additional information derived from media content metadata and much, much more.</p><p>Plex was one of the biggest reasons behind me buying and setting up my NAS. I wanted the server to hold all my media content, and all of Plex's features helped me justify the costs associated with building the NAS. </p><p>In this article, I'm going to go through how I set up Plex on my Unraid NAS Server!</p><p>Note: This article assumes that you've already set up Unraid. 
<a href="https://blog.harveydelaney.com/building-up-my-unraid-nas-server/">If you haven't, check out this article.</a></p><h1 id="create-an-account-on-plex">Create an account on Plex</h1><p>Navigate to <a href="https://www.plex.tv/">https://www.plex.tv/</a>, click <code>Sign up</code>, enter your email and password and create your account! All pretty standard stuff.</p><h1 id="setting-up-an-unraid-share">Setting up an Unraid share</h1><p>Before we install Plex on our server, we need to create an Unraid "Share" that is dedicated to holding the media content that we want Plex to catalogue.</p><p>Open up Unraid, click on the <code>Shares</code> tab then click <code>Add Share</code>:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-21.png" class="kg-image" alt="Setting up Plex Media Server on your Unraid Server (2020)"></figure><p>On the <code>Share Settings</code> page, fill out <code>Share name</code> and <code>Comments</code> and click <code>Add Share</code>:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-22.png" class="kg-image" alt="Setting up Plex Media Server on your Unraid Server (2020)"></figure><p>Once this share has been created, I recommend creating folders that separate the types of media content you wish to store. To do this, open Unraid in your file browser (\\{UNRAID_IP} on Windows).</p><p>For example, I created folders called <code>Photos</code> and <code>Videos</code>. So I now have the two directories:</p><ul><li>\\{UNRAID_IP}\Media\Photos</li><li>\\{UNRAID_IP}\Media\Videos</li></ul><h2 id="installing-plex">Installing Plex</h2><h3 id="community-applications">Community Applications</h3><p>We're going to be using Docker to help us install/manage Plex. 
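</p><p>For reference, the container we'll configure through the Unraid UI below is roughly equivalent to this plain <code>docker run</code> sketch. It's only a rough sketch, not an exact command: I'm assuming host networking, the <code>Media</code> share we created earlier, and the claim token is a placeholder:</p><pre><code># Sketch of the Plex container we'll create via Unraid's Docker UI.
# PLEX_CLAIM value is a placeholder - get your real token from https://www.plex.tv/claim/
docker run -d \
  --name plex \
  --net=host \
  -e PLEX_CLAIM="claim-XXXXXXXXXX" \
  -v /mnt/user/Media:/data \
  -v /tmp:/transcode \
  plexinc/pms-docker</code></pre><p>You won't need to run this yourself; Unraid's Docker UI fills in the same image, volume mappings and environment variable for you.</p><p>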
Unraid has excellent support for Docker and comes with it already set up (one of the biggest reasons why I chose the Unraid OS for my NAS).</p><p>Before we install Plex, we need to install an Unraid plugin which makes it super easy to search for and install Docker images/containers. The plugin is called <code>Community Applications</code>.</p><p>Navigate to <code>Plugins</code> -&gt; <code>Install Plugin</code>, add the following URL and click <code>Install</code>:</p><pre><code>https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg</code></pre><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-23.png" class="kg-image" alt="Setting up Plex Media Server on your Unraid Server (2020)"></figure><p>You will then have access to a new tab called <code>Apps</code>. </p><h3 id="plex-docker-image">Plex Docker Image</h3><p>Navigate to the new <code>Apps</code> tab, search for "Plex" and click the install icon for <code>Plex Media Server</code> by <code>plexinc</code> (the official Plex Docker image): </p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-24.png" class="kg-image" alt="Setting up Plex Media Server on your Unraid Server (2020)"></figure><p>You'll be taken to a screen called <code>Add Container</code>. </p><p>There are three fields you'll have to add:</p><ul><li><strong>Container Path: /transcode (Host Path 2):</strong> <code>/tmp/</code></li><li><strong>Container Path: /data (Host Path 3):</strong> <code>/mnt/user/{YOUR_NEW_SHARE_NAME}</code></li><li><strong>Container Variable: PLEX_CLAIM (Key 1):</strong> Navigate to <a href="https://www.plex.tv/claim/">https://www.plex.tv/claim/</a>. 
Log in, then copy and paste the Plex Claim code here.</li></ul><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/plex-docker-container.jpg" class="kg-image" alt="Setting up Plex Media Server on your Unraid Server (2020)"></figure><p>Click <code>Apply</code>, then the Docker image will download and the Docker container will be set up. </p><h1 id="setting-up-plex">Setting up Plex</h1><p>Now that we've set up our Plex Docker container, we have to perform some final steps within our Plex instance before we can start using it.</p><p>Navigate to <code>Docker</code>, left click the Plex logo and click <code>WebUI</code>:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-25.png" class="kg-image" alt="Setting up Plex Media Server on your Unraid Server (2020)"></figure><p>You'll be redirected to your Plex server: http://{UNRAID_IP}:32400 (which I recommend bookmarking in your browser).</p><p>Now we need to have Plex create libraries for our 2 content directories, <code>Videos</code> and <code>Photos</code>. </p><p>Click the settings icon in the top right:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-27.png" class="kg-image" alt="Setting up Plex Media Server on your Unraid Server (2020)"></figure><p>In the side menu, under your new server click <code>Library</code>. Then click <code>Add Library</code>.</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/plex-settings-1.jpg" class="kg-image" alt="Setting up Plex Media Server on your Unraid Server (2020)"></figure><p>First, let's set up a Plex library for <code>Videos</code>. 
In the popup, select the <code>TV programmes</code> library type, enter a name for your library then go to <code>Add folders</code>: </p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/plex-tv-add.jpg" class="kg-image" alt="Setting up Plex Media Server on your Unraid Server (2020)"></figure><p>Click <code>Browse for Media Folder</code> and under <code>data</code>, you'll find the two folders we created in our <code>Media</code> share. Click <code>Videos</code> and click <code>Add</code>.</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-28.png" class="kg-image" alt="Setting up Plex Media Server on your Unraid Server (2020)"></figure><p>Then click <code>Add Library</code>:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/plex-tv-add-folders.jpg" class="kg-image" alt="Setting up Plex Media Server on your Unraid Server (2020)"></figure><p>The reason it's a bit tricky to find these folders is that we've mapped the <code>data</code> directory in Plex to our Media share (if you look at the Docker container settings).</p><p>Repeat the process for <code>Photos</code>, but using the <code>Photos</code> library type. Go back home, and you should see the new Photos and Videos libraries. If you have any content in these directories, I recommend running <code>Scan Library Files</code> so the Plex catalogue gets updated (you should do this anytime you add new content):</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-29.png" class="kg-image" alt="Setting up Plex Media Server on your Unraid Server (2020)"></figure><p>That's it! Plex is now set up on your Unraid server.</p><p>Now, move all your existing media content to these new directories we created in our Unraid share. 
Don't forget to run <code>Scan Library Files</code> anytime you add new content.</p><p><strong>Note: </strong>You can also visit <a href="https://www.plex.tv/">https://www.plex.tv/</a>, log in and click <code>Launch</code> to watch your media content (as opposed to opening the Unraid Plex web instance).</p><h2 id="next-steps">Next steps</h2><p>I've written a few articles which provide instructions on how to set up software that <strong>automates </strong>the downloading of media content. Check them out if you're interested in setting up such a thing:</p><ul><li><a href="https://blog.harveydelaney.com/installing-radarr-sonar-and-deluge-on-your-unraid-setup/">Installing Radarr, Sonarr and Deluge on your Unraid Server</a></li><li><a href="https://blog.harveydelaney.com/configuring-your-usenet-provider-and-indexer-with-sonarr-radarr/">Configuring your Usenet Provider and Indexer with Sonarr/Radarr on Unraid</a></li></ul>]]></content:encoded></item><item><title><![CDATA[Switching from Torrents to Usenets - The Why and How]]></title><description><![CDATA[This article is a beginner friendly guide to using Usenets. It covers what benefits Usenets have over torrents and explains how to start downloading content in no time with them!]]></description><link>https://blog.harveydelaney.com/switching-from-torrents-to-usenet-the-why-and-how/</link><guid isPermaLink="false">5ec8d14c1ffd5c00010ccc10</guid><dc:creator><![CDATA[Harvey Delaney]]></dc:creator><pubDate>Sat, 30 May 2020 05:11:51 GMT</pubDate><media:content url="https://blog.harveydelaney.com/content/images/2020/05/usenet-vs-torrents.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.harveydelaney.com/content/images/2020/05/usenet-vs-torrents.jpg" alt="Switching from Torrents to Usenets - The Why and How"><p>Everyone knows about torrents. Just about any kind of file is shared through them in a P2P manner. 
Torrents only work because of the generosity of people donating their computing and networking resources by seeding (uploading) content for others to download.</p><p>As you probably know (if you've found this article), there exists something called the Usenet. Usenets can be used to achieve the same things torrents do, but with a number of benefits!</p><p>I discovered Usenets a few months ago while surfing the web and decided to give them a go. I found Usenets to provide a much better means of downloading media content. I haven't looked back at torrents since. </p><p>This article is a beginner friendly guide to using Usenets. I'll be covering what Usenets are, weighing the pros and cons of <strong>torrents vs Usenets</strong> and why/how I switched from using torrents to Usenet. I'll be using Newshosting, NZBGeek and NZBGet to download content.</p><h1 id="overview">Overview</h1><h2 id="torrents">Torrents</h2><p>Before we go into Usenets, let's cover what we know about torrents - how people use them and the pros/cons they have.</p><h3 id="usage-flow">Usage Flow</h3><p>Here's how the average torrent user would download content:</p><ol><li>Browse your favourite torrent indexer and look for your desired content</li><li>Either download or get the magnet URL (a URL pointing to where the torrent file lives) for a torrent file</li><li>Open the torrent file or enter the magnet URL into your torrent client</li><li>Wait for the download to complete</li><li>Enjoy the downloaded content</li><li>Seed your downloaded content so others in the network can download it too (optional)</li></ol><h3 id="pros-and-cons">Pros and Cons</h3><p>I've constructed a table of my views of the pros and cons of torrents below. Please note, these are only my opinions:</p><!--kg-card-begin: html--><table>
    <tbody style="font-size: 20px">
        <tr style="color: darkgreen">
            <td>+</td>
            <td style="text-align: left">Torrenting is free. It doesn't require any paid subscriptions to any services for you to download your content!</td>
        </tr>
        <tr style="color: darkgreen">
            <td>+</td>
            <td style="text-align: left">
             Setup and configuration to use Torrents is simple and can be automated!      
            </td>
        </tr>
        <tr style="color: darkgreen">
            <td>+</td>
            <td style="text-align: left">
             Torrents are a well-known method of downloading content. Most people know about torrents and already have a basic understanding of how they work.
            </td>
        </tr>
        <tr style="color: darkred">
            <td>-</td>
            <td style="text-align: left">Torrents rely on seeders. A low number of seeders will result in slow download speeds.</td>
        </tr>
        <tr style="color: darkred">
            <td>-</td>
            <td style="text-align: left">Seeding is usually required while/before downloading</td>
        </tr>
        <tr style="color: darkred">
            <td>-</td>
            <td style="text-align: left">Torrents can have malicious or low quality files. Although this largely depends on the indexers you use, and I've personally found this to be very rare if you ignore all .exe files</td>
        </tr>
    </tbody>
</table><!--kg-card-end: html--><h4 id="vpn">VPN</h4><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2021/06/Screen-Shot-2021-06-18-at-8.04.05-am.png" class="kg-image" alt="Switching from Torrents to Usenets - The Why and How"></figure><p>If you're using torrents, I recommend using a VPN to keep your torrenting activity private. My choice of VPN is <strong><a href="https://privadovpn.com/#a_aid=harveyd&amp;a_bid=e690309e">PrivadoVPN</a>.</strong> I've found it to be cheaper than other VPN services and it hasn't slowed down my torrent downloads at all. They provide a free option (10GB free every month) which you can try out here:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://privadovpn.com"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Fast and Secure VPN You Can Trust | PrivadoVPN</div><div class="kg-bookmark-description">PrivadoVPN is the fastest, most private VPN service on the planet. Protect yourself online wherever you go with our fast &amp; easy-to-use VPN that you can trust.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://privadovpn.com/img/privado-icon.ico" alt="Switching from Torrents to Usenets - The Why and How"><span class="kg-bookmark-publisher">PrivadoVPN</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://privadovpn.com/img/privadoVPN-logo.png" alt="Switching from Torrents to Usenets - The Why and How"></div></a></figure><h2 id="usenets">Usenets</h2><p>The Usenet has a lot of history behind it. In summary, it was originally designed as a bulletin-board service. Usenet eventually became a popular place to store and sort any kind of file. An organisation called Newzbin created the NZB file, which pointed to where files existed on the Usenet. A whole ecosystem around Usenet and the NZB file then grew until it became what it is today. 
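</p><p>If you've never seen one, an NZB is just a small XML file listing the Usenet message IDs ("segments") that make up a file. Here's a stripped-down, purely illustrative sample; the poster, group and message IDs are made up:</p><pre><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt;
&lt;nzb xmlns="http://www.newzbin.com/DTD/2003/nzb"&gt;
  &lt;file poster="uploader@example.com" subject="example.mkv (1/2)"&gt;
    &lt;groups&gt;&lt;group&gt;alt.binaries.example&lt;/group&gt;&lt;/groups&gt;
    &lt;segments&gt;
      &lt;segment bytes="512000" number="1"&gt;part1of2@news.example.com&lt;/segment&gt;
      &lt;segment bytes="512000" number="2"&gt;part2of2@news.example.com&lt;/segment&gt;
    &lt;/segments&gt;
  &lt;/file&gt;
&lt;/nzb&gt;</code></pre><p>Your Usenet downloader reads this file, fetches each listed segment from your provider's servers and reassembles the original file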
</p><p>Usenets are different to torrents in that files are stored on, and downloaded from, centralised servers (your Usenet provider), as opposed to from multiple other "peers" as with torrents.</p><h3 id="usage-flow-1">Usage flow</h3><ol><li>You log onto your Usenet indexer service and search for the files you wish to download</li><li>You obtain the NZB (<code>.nzb</code>) file associated with the content. The NZB file is equivalent to the <code>.torrent</code> file.</li><li>You open the NZB file in your Usenet downloader</li><li>The Usenet downloader communicates with the provider to access the file(s) associated with the NZB</li><li>The file is downloaded</li></ol><p>As you can see, the flow of using Usenets is very similar to torrents! I'd say the main difference is that you get faster, higher quality downloads from Usenets, but at a price.</p><h3 id="pros-and-cons-1">Pros and Cons</h3><!--kg-card-begin: html--><table>
    <tbody style="font-size: 20px">
        <tr style="color: darkgreen">
            <td>+</td>
            <td style="text-align: left">Usenet providers offer unlimited download speeds. You are only limited by your network</td>
        </tr>
        <tr style="color: darkgreen">
            <td>+</td>
            <td style="text-align: left">
             Most Usenet providers have SSL ports, so no one can snoop on what you are downloading and your IP address is kept private
            </td>
        </tr>
        <tr style="color: darkgreen">
            <td>+</td>
            <td style="text-align: left">Usenet providers, indexers and downloaders all have a large amount of support, documentation and automation functionality available, as well as large, active communities supporting them</td>
        </tr>
        <tr style="color: darkgreen">
            <td>+</td>
            <td style="text-align: left">Don't have to seed (upload) before/while downloading</td>
        </tr>
        <tr style="color: darkred">
            <td>-</td>
            <td style="text-align: left">To use Usenets, you need a subscription to a Usenet provider and an indexer service. These subscriptions cost money</td>
        </tr>
        <tr style="color: darkred">
            <td>-</td>
            <td style="text-align: left">Usenets are a less popular, less familiar alternative to torrenting. There may be some apprehension about using Usenets for these reasons (new things can be scary)</td>
        </tr>
    </tbody>
</table><!--kg-card-end: html--><h1 id="services">Services</h1><p>Before we start, we'll need to choose and configure <strong>three </strong>services:</p><ul><li>Usenet indexer (paid)</li><li>Usenet provider (paid)</li><li>Usenet downloader (free)</li></ul><h2 id="usenet-indexer">Usenet Indexer</h2><p>A Usenet indexer can be thought of as the Usenet equivalent of a torrent indexer! These services provide a web interface or a programmatic means to search for NZB files (via API). For this article, we'll be focusing on the web interface way to find NZB files.</p><p>A Usenet indexer works similar to how search engines work. They scour the Usenet and use the metadata available in file headers to catalogue and organise the files they discover.</p><p>Using a Usenet indexer is not mandatory. Some Usenet providers give you a "Newsreader" or their own web interface to search files. Although, I find with some services that it can be tricky to navigate and find the content you're looking for. I recommend using an indexer, as they're highly specialised and designed to make your search for content easier!</p><h2 id="usenet-providers">Usenet Providers</h2><p>A Usenet provider is the most important service that we need. It provides the servers that store Usenet files and serve them to us. </p><p>When looking at choosing a provider, we should be aware of <strong>four </strong>main things:</p><ol><li><strong>Download speeds -</strong> The maximum download speed that their servers will provide you. Please note, this is a maximum, not a minimum value, so expect to see speeds below this "maximum" amount.</li><li><strong>Download limits</strong> - This is the amount of data that you can download from your provider per month.</li><li><strong>Retention periods </strong>- This is the number of days that the provider will keep a file after it's posted. 
The higher the retention period, the more access to older files you'll have.</li><li><strong>Connections</strong> - At first, you may not think this applies to you, but it does! As previously mentioned, a file stored on the Usenet is split into multiple parts. Your download client will attempt to download these file parts concurrently. The more connections you have, the more file parts you can download at any one time. </li></ol><p>With these points in mind, let's choose a provider!</p><h2 id="recommended-providers">Recommended Providers</h2><h4 id="newshosting"><a href="https://www.newshosting.com/partners/?a_aid=harvey-delaney&amp;a_bid=5ecfe99b">Newshosting</a></h4><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image.png" class="kg-image" alt="Switching from Torrents to Usenets - The Why and How"></figure><p>Newshosting is my personal choice and what I'll be using for this article. Newshosting is one of the most popular Usenet providers out there (and for good reason). They have been in the game since 1999, and offer a super long retention period (4299 days!), an easy-to-use web interface, unlimited downloads and uncapped speeds.</p><p>On certain Newshosting plans, a zero-log VPN service called <a href="https://privado.io/">PrivadoVPN</a> comes for free. 
I've been using PrivadoVPN and have found the speeds, available regions and VPN desktop app to be top notch.</p><p><strong><a href="https://www.usenetserver.com/partners?a_aid=harvey-delaney&amp;a_bid=5725b6ed">I'm using the special discounted Newshosting <s>$12.95 USD</s> $8.33 USD monthly subscription plan - exclusive to this article!</a></strong></p><p>If you're only looking to give Usenets a go, I highly recommend using <a href="https://www.newshosting.com/?a_aid=harvey-delaney">Newshosting's free 30GB, 14-day trial.</a></p><p>Two other providers I've used before and would recommend are:</p><ul><li><a href="https://signup.easynews.com/best-usenet-search/?a_aid=harvey-delaney&amp;a_bid=ef2f9ea1">Easynews - <s>$14.95</s> $7.50/month (7 day free trial included)</a></li><li><a href="https://www.usenetserver.com/partners?a_aid=harvey-delaney&amp;a_bid=5725b6ed">UsenetServer - <s>$9.99</s> $7.95/month (14 day 10GB free trial included)</a></li></ul><h3 id="recommended-indexers">Recommended Indexers</h3><h4 id="nzbgeek-harvey-s-pick-"><a href="https://nzbgeek.info/">NZBGeek</a> (Harvey's Pick)</h4><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/04/image-6.png" class="kg-image" alt="Switching from Torrents to Usenets - The Why and How"></figure><p>My personal pick: I chose NZBGeek because it was on sale for Black Friday, has good reviews and has a good reputation for quality indexes. It's made searching for content easy and has most of what I'm looking for. 
</p><p>If you're not looking to use a dedicated Usenet indexer, <a href="https://signup.easynews.com/best-usenet-search/?a_aid=harvey-delaney&amp;a_bid=ef2f9ea1">I recommend using Easynews, which comes with an inbuilt indexer (web search only).</a></p><p>Other indexers I would recommend looking at are:</p><ul><li><a href="https://ninjacentral.co.za/">NinjaCentral</a></li><li><a href="http://www.dereferer.org/?https://www.miatrix.com/">Miatrix</a></li><li><a href="https://www.gingadaddy.com/">GingaDaddy</a></li><li><a href="https://drunkenslug.com/">DrunkenSlug</a> (if you can get a referral)</li></ul><h2 id="usenet-download-client">Usenet Download Client</h2><p>A Usenet download client is an application that takes an NZB file and works with your Usenet provider to download the files onto your computer. </p><p>There are two main contenders in the Usenet downloader space: <strong>SABnzbd </strong>and <strong>NZBGet. </strong>Both have a large community, lots of features and lots of support. You can't go wrong choosing either. </p><p>I personally went with NZBGet and will be using it for this article.</p><p><a href="https://nzbget.net/">Download NZBGet here</a>.</p><p><a href="https://sabnzbd.org/">Download SABnzbd here.</a></p><h1 id="using-the-services">Using the services</h1><h3 id="obtaining-provider-credentials">Obtaining Provider Credentials</h3><p>Once you have signed up with your Usenet provider of choice, you'll either be e-mailed your credentials or find them on the provider's web portal. These credentials are necessary to start downloading content and are:</p><ul><li>Server Address</li><li>Port</li><li>Username</li><li>Password</li></ul><p>As previously mentioned, I'm using <a href="https://www.newshosting.com/partners/?a_aid=harvey-delaney&amp;a_bid=5ecfe99b">Newshosting</a>. 
Your <strong>Username </strong>and <strong>Password </strong>are what you used to sign up to Newshosting. <a href="https://support.newshosting.com/home/">Server Address and Port can be found on their support page:</a></p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-4.png" class="kg-image" alt="Switching from Torrents to Usenets - The Why and How"></figure><h3 id="configuring-usenet-download-client">Configuring Usenet Download Client</h3><p>Once you have downloaded and installed NZBGet, open it! It will open in your browser at the address: <code>127.0.0.1:6789</code>. Bookmark this page and tag it as NZBGet.</p><p>Once NZBGet is opened, navigate to Settings:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-5.png" class="kg-image" alt="Switching from Torrents to Usenets - The Why and How"></figure><p>Navigate to <code>Paths</code> and update the <code>MainDir</code> and <code>DestDir</code> fields to where you want NZBGet to download your content:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-16.png" class="kg-image" alt="Switching from Torrents to Usenets - The Why and How"></figure><p>Then navigate to News-Servers:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-6.png" class="kg-image" alt="Switching from Torrents to Usenets - The Why and How"></figure><p>You'll need to change the following values:</p><ul><li><strong>Name </strong>- any name you want to use to identify this server, e.g. Newshosting</li><li><strong>Host </strong>- the Server Address we obtained earlier</li><li><strong>Port </strong>- the port we obtained earlier. Usually it's <code>119</code> for unencrypted and <code>563</code> for SSL (encrypted). 
I highly recommend using <code>563</code>.</li><li><strong>Username </strong>- the username we use to log into our Provider service.</li><li><strong>Password </strong>- the password we use to log into our Provider service.</li><li><strong>Encryption </strong>- switch to <code>No</code> if the port value is <code>119</code> or <code>Yes</code> if the port value is <code>563</code>.</li><li><strong>Connections </strong>- enter the maximum number of connections supported by your provider. My Newshosting subscription had a max of 60!</li></ul><p>Scroll down and click <code>TestConnection</code> to make sure we've configured everything correctly. If it succeeds, click <code>Save all changes</code> at the bottom left of the screen:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-8.png" class="kg-image" alt="Switching from Torrents to Usenets - The Why and How"></figure><p>You'll then need to reload NZBGet:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-14.png" class="kg-image" alt="Switching from Torrents to Usenets - The Why and How"></figure><p>NZBGet supports having multiple providers. 
So if you decide to add another one, you can simply click <code>Add another Server</code> and repeat the steps.</p><h3 id="using-usenet-indexer">Using Usenet Indexer</h3><p>Log into your indexer, then search for your desired content: </p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-13.png" class="kg-image" alt="Switching from Torrents to Usenets - The Why and How"></figure><p>Click on the search result that you think will have the closest match to what you're looking for:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-12.png" class="kg-image" alt="Switching from Torrents to Usenets - The Why and How"></figure><p>Inside, you need to find the "Download" icon, which looks like a cloud. There are two ways of obtaining the NZB file you need:</p><ol><li>Download the NZB file onto your computer</li><li>Copy the URL to where the NZB file lives</li></ol><p>I prefer going with option 2, but it's up to you!</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-17.png" class="kg-image" alt="Switching from Torrents to Usenets - The Why and How"></figure><h3 id="downloading-content">Downloading Content</h3><p>Now, with your NZB file (or URL to the NZB file), open up NZBGet and go to the <code>Downloads</code> tab. Click <code>+ Add</code> in the top left:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-18.png" class="kg-image" alt="Switching from Torrents to Usenets - The Why and How"></figure><p>In the popup:</p><ol><li>If you have the NZB URL, paste it into <code>Add from URL</code>.</li><li>If you have the file, upload it via <code>Add local files</code>.</li></ol><p>Then click <code>Submit</code>. 
</p><p>NZBGet will do its thing by communicating with your Usenet provider and start the download!</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-19.png" class="kg-image" alt="Switching from Torrents to Usenets - The Why and How"></figure><p>Once the download has completed, navigate to where we set up NZBGet to save our downloads and we should see our fresh content there! </p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-20.png" class="kg-image" alt="Switching from Torrents to Usenets - The Why and How"></figure><p>That's it! Hopefully the whole process wasn't as daunting as you originally thought. As previously mentioned, I don't mind paying the subscription to gain access to a more reliable, faster and more secure way of downloading media content!</p><h2 id="automating-your-usenet-setup">Automating your Usenet setup</h2><p>If you find that performing your Usenet downloads manually is becoming a hassle, read the next article in the series to help automate your media downloading workflow: <a href="https://blog.harveydelaney.com/configuring-your-usenet-provider-and-indexer-with-sonarr-radarr/">https://blog.harveydelaney.com/configuring-your-usenet-provider-and-indexer-with-sonarr-radarr/</a></p>]]></content:encoded></item><item><title><![CDATA[Creating your own mini Redux in React using useReducer, React Context and TypeScript]]></title><description><![CDATA[<p>I was working on building out an <a href="https://atomicdesign.bradfrost.com/chapter-2/">organism level</a> component for the RateSetter component library. 
This component used a number of <strong>atom/molecule </strong>level components from the library in addition to having a sizeable amount of logic to maintain its state (it's a medium-sized form with a number of</p>]]></description><link>https://blog.harveydelaney.com/creating-your-own-mini-redux-in-react/</link><guid isPermaLink="false">5e71c78ebd5dd300013430f6</guid><category><![CDATA[React]]></category><category><![CDATA[redux]]></category><dc:creator><![CDATA[Harvey Delaney]]></dc:creator><pubDate>Mon, 13 Apr 2020 06:08:08 GMT</pubDate><content:encoded><![CDATA[<p>I was working on building out an <a href="https://atomicdesign.bradfrost.com/chapter-2/">organism level</a> component for the RateSetter component library. This component used a number of <strong>atom/molecule </strong>level components from the library in addition to having a sizeable amount of logic to maintain its state (it's a medium-sized form with a number of different API calls).</p><p>Normally, to solve a problem like this, I would look to use <a href="https://redux.js.org/introduction/getting-started">Redux</a> and potentially <a href="https://redux-saga.js.org/">Redux Saga</a> for global state and side effect management for my application/component.</p><p>However, this component was meant to be distributed across a number of other React projects. The other projects may or may not be using Redux and Redux Saga, and I didn't want to enforce the usage of these libraries in order to be able to use this component.</p><p>I had experimented with using <a href="https://reactjs.org/docs/context.html">React Context</a> before, but found that the Contexts I built became so large and unmaintainable that I just ended up switching to Redux + Redux Saga.</p><p>From surfing the web, I found inspiration to utilise a very handy React hook: <code>useReducer</code> in combination with <strong>React Context</strong> to build my own mini Redux! 
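</p><p>Before building the real store, it may help to see the pattern in miniature. The following is an illustrative, framework-free sketch (the names here are mine, not part of the project): a reducer is just a pure function from the previous state and an action to the next state, and <code>useReducer</code> simply re-renders with whatever that function returns.</p>

```typescript
// A reducer is a pure function: (previous state, action) -> next state.
type CounterState = { count: number };
type CounterAction = { type: "increment" } | { type: "decrement" };

const counterReducer = (
  state: CounterState,
  action: CounterAction
): CounterState => {
  switch (action.type) {
    case "increment":
      return { count: state.count + 1 };
    case "decrement":
      return { count: state.count - 1 };
  }
};

// Because it is pure, the reducer can be exercised without React at all:
const next = counterReducer({ count: 0 }, { type: "increment" });
console.log(next.count); // 1
```

<p>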
</p><p>The basic idea of this approach is that I'd create a <code>Provider</code> which would export:</p><ul><li>The current state of the reducer (our "store")</li><li>The <code>dispatch</code> method given to us by <code>useReducer</code>.</li></ul><p>That way any component that sat under the <code>Provider</code> would have access to the "global" state and also be able to <code>dispatch</code> actions which would update the global state! </p><p>I've also used <code>useEffect</code> in the store to handle basic side effects. For example, when a value changes in the store, it can fire an API request to retrieve some value, then save it in the store.</p><h1 id="project">Project</h1><p>For this article, I'm going to be creating a contrived project that will:</p><ol><li>Allow the user to <strong>increment </strong>and <strong>decrement </strong>a count which starts at 0</li><li>Allow the user to select an option. This option will dictate how much each increment/decrement will add/subtract from the current value</li><li>On change of values, post the value to an API (a side effect). The API we'll be using is: <a href="https://docs.magicthegathering.io/#api_v1cards_get">https://docs.magicthegathering.io/#api_v1cards_get</a>. It will return the details of a <a href="https://magic.wizards.com/en">Magic the Gathering</a> card with the ID we supply: <a href="https://api.magicthegathering.io/v1/cards/1">https://api.magicthegathering.io/v1/cards/</a>{ID}</li><li>Save the card data in our store and display it</li></ol><p>As previously mentioned, to help us create this project with a global state, we're going to be using <strong><code>useReducer</code> </strong>and <strong><code>React Context</code></strong>. 
I love <strong>TypeScript </strong>and will be using it to provide static type checking for our mini Redux solution.</p><p>I've made the project we're going to build available on GitHub if looking at the code directly helps you more:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/HarveyD/mini-redux-example"><div class="kg-bookmark-content"><div class="kg-bookmark-title">HarveyD/mini-redux-example</div><div class="kg-bookmark-description">An example project created for: https://blog.harveydelaney.com/creating-your-own-mini-redux-in-react/ - HarveyD/mini-redux-example</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/favicons/favicon.svg"><span class="kg-bookmark-author">HarveyD</span><span class="kg-bookmark-publisher">GitHub</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://avatars1.githubusercontent.com/u/5586128?s=400&amp;v=4"></div></a></figure><h2 id="project-structure">Project Structure</h2><pre><code>package.json
src/
  App.tsx
  store/
    store.actions.ts
    store.tsx
    store.types.ts
  components/
    CardDetails.tsx
    CardInput.tsx</code></pre><h1 id="store">Store</h1><p>Let's start off by creating the global state of our project. But before that, let's identify all the <strong>Actions</strong> that our application will need. I've come up with:</p><ol><li>Incrementing the current count by the <code>change</code> value</li><li>Decrementing the current count by the <code>change</code> value</li><li>Seting the <code>change</code> value</li><li>Seting the API response</li></ol><h2 id="actions-store-actions-ts">Actions - <code>store.actions.ts</code></h2><p>First, let's create the actions we will be using for our application. In <code>store.actions.ts</code> add all of the following:</p><figure class="kg-card kg-code-card"><pre><code class="language-typescript">import { ICardDetails } from "./store.types";

export enum ActionType {
  IncrementId = "counter/increment",
  DecrementId = "counter/decrement",
  SetChangeValue = "value/change/set",
  SetCardDetails = "api/set"
}

interface IIncrementId {
  type: ActionType.IncrementId;
}

interface IDecrementId {
  type: ActionType.DecrementId;
}

interface ISetChangeValue {
  type: ActionType.SetChangeValue;
  payload: number;
}

interface ISetCardDetails {
  type: ActionType.SetCardDetails;
  payload: ICardDetails;
}

export type Actions =
  | IIncrementId
  | IDecrementId
  | ISetChangeValue
  | ISetCardDetails;
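// Because every interface above carries a unique literal `type`, `Actions`
// is a discriminated union: once a switch (or if) checks `action.type`,
// TypeScript narrows `action` to the matching interface. `action.payload`
// is then typed as a number for SetChangeValue, an ICardDetails for
// SetCardDetails, and doesn't exist at all on the increment/decrement
// actions.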

export const IncrementId = (): IIncrementId =&gt; ({
  type: ActionType.IncrementId
});

export const DecrementId = (): IDecrementId =&gt; ({
  type: ActionType.DecrementId
});

export const SetChangeValue = (value: number): ISetChangeValue =&gt; ({
  type: ActionType.SetChangeValue,
  payload: value
});

export const SetCardDetails = (response: ICardDetails): ISetCardDetails =&gt; ({
  type: ActionType.SetCardDetails,
  payload: response
});
</code></pre><figcaption>src/store/store.actions.ts</figcaption></figure><p>First, we're defining the interfaces of our four actions. The payload is optional; it's the data our future reducer will receive.</p><p>Defining an interface for each action may seem like it doesn't produce much value and adds unnecessary overhead. But the value of doing this comes from our IDE's IntelliSense feature. By utilising <a href="https://www.typescriptlang.org/docs/handbook/advanced-types.html#union-types">TypeScript's Discriminated Unions</a> (as you can see in <code>Actions</code>), our IDE is able to infer the type of our <code>payload</code> given the type of action. I'll go into more detail about this when we create our reducer.</p><p>We then create an action creator for each interface that we defined. If the action requires a payload, we accept it as a parameter and assign it to <code>payload</code>. Creating all our actions in this way allows us to dispatch actions like:</p><pre><code class="language-typescript">dispatch(SetCardDetails(details));</code></pre><p>as opposed to:</p><pre><code class="language-typescript">dispatch({ type: ActionType.SetCardDetails, payload: details });</code></pre><h2 id="store-store-tsx">Store - <code>store.tsx</code></h2><p>Now, with our actions created, let's get to writing our store. In <code>store.tsx</code> add:</p><figure class="kg-card kg-code-card"><pre><code class="language-typescript">import React, { createContext, useReducer, useEffect } from "react";

import { SetCardDetails, Actions, ActionType } from "./store.actions";
import { ICardDetails } from "./store.types";

interface IStoreState {
  id: number;
  changeValue: number;
  cardDetails: ICardDetails | null;
}

interface IAppContext {
  state: IStoreState;
  dispatch: React.Dispatch&lt;Actions&gt;;
}

const initialState: IStoreState = {
  id: 1,
  changeValue: 1,
  cardDetails: null
};

const store = createContext&lt;IAppContext&gt;({
  state: initialState,
  dispatch: () =&gt; null
});

const { Provider } = store;

const reducer = (state: IStoreState, action: Actions) =&gt; {
  const { id: count, changeValue } = state;

  switch (action.type) {
    case ActionType.IncrementId:
      return {
        ...state,
        id: count + changeValue
      };
    case ActionType.DecrementId:
      return {
        ...state,
        id: count - changeValue
      };
    case ActionType.SetChangeValue:
      return {
        ...state,
        changeValue: action.payload
      };
    case ActionType.SetCardDetails:
      return {
        ...state,
        cardDetails: action.payload
      };
    default:
      return state;
  }
};
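// Note that the reducer above is a pure function: it never mutates the
// incoming state, it only ever returns a new object. That also makes it
// easy to unit test without React - for example,
// reducer(initialState, { type: ActionType.IncrementId }) returns
// { id: 2, changeValue: 1, cardDetails: null } (the id of 1 plus the
// changeValue of 1).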

const AppProvider = ({ children }: { children: JSX.Element }) =&gt; {
  const [state, dispatch] = useReducer(reducer, initialState);

  useStoreSideEffect(state, dispatch);

  return &lt;Provider value={{ state, dispatch }}&gt;{children}&lt;/Provider&gt;;
};

const useStoreSideEffect = (
  state: IStoreState,
  dispatch: React.Dispatch&lt;Actions&gt;
) =&gt; {
  useEffect(() =&gt; {
    fetch(`https://api.magicthegathering.io/v1/cards/${state.id}`)
      .then(async (res) =&gt; {
        const data: { card: ICardDetails } = await res.json();
        dispatch(SetCardDetails(data.card));
      })
      .catch((err) =&gt; {
        // do some error handling!
        console.error(`Failed to load card with ID: ${state.id}`);
      });
  }, [state.id, dispatch]);
};
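// The [state.id, dispatch] dependency array means this effect only re-runs
// when `id` changes (React guarantees the `dispatch` returned by useReducer
// has a stable identity), so updating `changeValue` or `cardDetails` alone
// won't trigger another fetch.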

export { store, AppProvider };
</code></pre><figcaption>src/store/store.tsx</figcaption></figure><p>Let's step through this file to understand what's going on:</p><p>We define the shape of our store state, and create an initial state for it.</p><p>We define what we want to be shared by our Context (state + dispatch) and use <code>createContext</code> to create the initial state of our Context. <code>() =&gt; null</code> gets around type errors for: <code>React.Dispatch</code>.</p><p>We extract the <a href="https://reactjs.org/docs/context.html#contextprovider">Provider</a> from the Context returned by <code>createContext</code>. We use it to create a Higher Order Component called <code>AppProvider</code>. </p><p>We define our reducer to be used in <code>useReducer</code>. It accepts a "previous" state and an action (from the union type we've defined). It handles each action type we've created and, depending on the type/payload passed, returns a "new" state. We're careful not to mutate the state and to always return a new state given the action type + payload. </p><p>Since we utilised discriminated unions, our IDE's IntelliSense is able to identify what the payload must be within each case:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/04/image-9.png" class="kg-image"></figure><p>In AppProvider, we utilise <code>useReducer</code> to maintain the state of our store. We pass it the reducer function we just created and the initial store state we created earlier. From <code>useReducer</code> we get <code>state</code> and <code>dispatch</code>. 
<code>state</code> holds the current state of our store, and <code>dispatch</code> is a function that accepts an action of type <code>Actions</code>, which will be fed to our reducer to update the store state.</p><p><a href="https://reactjs.org/docs/hooks-reference.html#usereducer">Read more about useReducer here.</a></p><p>Then we simply pass <code>state</code> and <code>dispatch</code> to our Provider wrapper.</p><h3 id="side-effects">Side Effects</h3><p>We utilise the power of <code>useEffect</code> to perform side effects on changes in our store. </p><p>The <code>useStoreSideEffect</code> function takes <code>state</code> and <code>dispatch</code>, and "listens" for an update to the <code>id</code> field saved in our store. Once there's an update, it'll perform the API request to the MTG API to retrieve the card details. Once the request is finished, the <code>SetCardDetails</code> action is dispatched to save the card details in our store. </p><p>You can probably see how this is pretty handy. It's definitely not as good as Redux side effect management libraries such as <code>Thunk</code> or <code>Saga</code>, but it's still pretty good and super simple to set up.</p><h1 id="components">Components</h1><p>With our store and actions all complete, it's time to utilise them within our app.</p><h2 id="cardinput-tsx"><code>CardInput.tsx</code></h2><p>This component will allow the user to increment/decrement (using buttons), update the <code>change</code> value (via a dropdown) and display the current ID. We utilise <code>useContext</code> to retrieve our <code>state</code> and <code>dispatch</code> from the Provider.</p><figure class="kg-card kg-code-card"><pre><code class="language-typescript">import React, { useContext, useCallback } from "react";
import { store } from "../store/store";
import {
  IncrementId,
  DecrementId,
  SetChangeValue
} from "../store/store.actions";

const CardInput = () =&gt; {
  const {
    state: { id, changeValue },
    dispatch
  } = useContext(store);

  const decrementEvent = useCallback(() =&gt; dispatch(DecrementId()), [dispatch]);
  const incrementEvent = useCallback(() =&gt; dispatch(IncrementId()), [dispatch]);
  const changeValueEvent = useCallback(
    (event: React.ChangeEvent&lt;HTMLSelectElement&gt;) =&gt;
      dispatch(SetChangeValue(Number(event.currentTarget.value))),
    [dispatch]
  );

  return (
    &lt;section&gt;
      &lt;button type="button" onClick={decrementEvent}&gt;
        -
      &lt;/button&gt;
      Current ID: {id}
      &lt;button type="button" onClick={incrementEvent}&gt;
        +
      &lt;/button&gt;
      &lt;label htmlFor="change-select"&gt;&lt;/label&gt;
      &lt;select
        id="change-select"
        value={changeValue}
        onChange={changeValueEvent}
      &gt;
        {[1, 2, 3, 4, 5].map((val) =&gt; (
          &lt;option key={val} value={val}&gt;
            {val}
          &lt;/option&gt;
        ))}
      &lt;/select&gt;
    &lt;/section&gt;
  );
};
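// useCallback memoises each handler so its identity stays stable across
// re-renders; with the stable `dispatch` as the only dependency, each
// handler is created once rather than recreated on every render.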

export default CardInput;
</code></pre><figcaption>src/components/CardInput.tsx</figcaption></figure><h2 id="carddetails-tsx"><code>CardDetails.tsx</code></h2><p>This is a simple component that grabs the card details from our store and renders them. Again, we utilise <code>useContext</code> to get this data.</p><figure class="kg-card kg-code-card"><pre><code class="language-typescript">import React, { useContext } from "react";
import { store } from "../store/store";

const CardDetails: React.FC = () =&gt; {
  const {
    state: { cardDetails }
  } = useContext(store);

  if (!cardDetails) {
    return null;
  }

  return (
    &lt;section&gt;
      &lt;h2&gt;{cardDetails.name}&lt;/h2&gt;
      &lt;h3&gt;{cardDetails.manaCost}&lt;/h3&gt;
      &lt;div&gt;{cardDetails.text}&lt;/div&gt;
    &lt;/section&gt;
  );
};

export default CardDetails;
</code></pre><figcaption>src/components/CardDetails.tsx</figcaption></figure><h2 id="app-tsx"><code>App.tsx</code></h2><p>To be able to utilise <code>useContext</code> and access <code>state</code> and <code>dispatch</code>, we need to wrap our app with <code>AppProvider</code>:</p><figure class="kg-card kg-code-card"><pre><code class="language-typescript">import React from "react";
import { AppProvider } from "./store/store";
import CardInput from "./components/CardInput";
import CardDetails from "./components/CardDetails";

const CardApp = () =&gt; (
  &lt;&gt;
    &lt;CardInput /&gt;
    &lt;CardDetails /&gt;
  &lt;/&gt;
);

const App = () =&gt; (
  &lt;AppProvider&gt;
    &lt;CardApp /&gt;
  &lt;/AppProvider&gt;
);

export default App;
</code></pre><figcaption>App.tsx</figcaption></figure><p>By running our app and clicking increment a few times, we can see it's retrieving card details:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/04/image-10.png" class="kg-image"></figure><p>Again, this project is a very contrived example that I whipped up pretty quickly. Some important enhancements we'd want to add to the project would be: </p><ul><li>loading indicators</li><li>error handling</li><li>validation </li><li>debouncing  </li></ul><p>Hope this article was enough to help you get started building out your own mini Redux!</p>]]></content:encoded></item><item><title><![CDATA[Extending create-react-app to make your own React CLI Scaffolding Tool]]></title><description><![CDATA[<p>At RateSetter, we've recently spun up a bunch of new React projects. Our React projects use <code>create-react-app</code> with specific config for the app to be deployed and hosted by our platform.</p><p>To spin up a new frontend project, teams would often clone preexisting React projects, delete irrelevant files and update</p>]]></description><link>https://blog.harveydelaney.com/extending-create-react-app-to-make-your-own-react-cli-scaffolding-tool/</link><guid isPermaLink="false">5e4716dabd5dd30001342e70</guid><category><![CDATA[React]]></category><dc:creator><![CDATA[Harvey Delaney]]></dc:creator><pubDate>Sun, 05 Apr 2020 13:29:15 GMT</pubDate><media:content url="https://blog.harveydelaney.com/content/images/2020/04/react-cli.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.harveydelaney.com/content/images/2020/04/react-cli.jpg" alt="Extending create-react-app to make your own React CLI Scaffolding Tool"><p>At RateSetter, we've recently spun up a bunch of new React projects. 
Our React projects use <code>create-react-app</code> with specific config for the app to be deployed and hosted by our platform.</p><p>To spin up a new frontend project, teams would often clone preexisting React projects, delete irrelevant files and update files with the new name of the app. This resulted in problems such as git history not being cleaned from the cloned project, teams wasting time cleaning up the project and errors arising when config wasn't correctly updated for the new project.</p><p>I saw a real need for a frontend project scaffolding tool. The tool needed to run <code>create-react-app</code> and then <strong>augment </strong>it with all the configuration and templates necessary to get a frontend app up and running at RateSetter.</p><p>I've never made a CLI tool before this, so it was an excellent opportunity to learn how to do so. This article focuses on how I built it, and what I learned from building it.</p><h1 id="building-the-scaffolding-tool">Building the Scaffolding Tool</h1><p>I've created a GitHub repository for the scaffolding tool. 
It might help to have this open while following this article:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/HarveyD/create-frontend-app"><div class="kg-bookmark-content"><div class="kg-bookmark-title">HarveyD/create-frontend-app</div><div class="kg-bookmark-description">Contribute to HarveyD/create-frontend-app development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/favicons/favicon.svg" alt="Extending create-react-app to make your own React CLI Scaffolding Tool"><span class="kg-bookmark-author">HarveyD</span><span class="kg-bookmark-publisher">GitHub</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://avatars1.githubusercontent.com/u/5586128?s=400&amp;v=4" alt="Extending create-react-app to make your own React CLI Scaffolding Tool"></div></a></figure><p>Feel free to clone, fork, star or suggest improvements for it :).</p><h2 id="overview">Overview</h2><p>I designed the CLI tool to have the following flow:</p><ol><li>Ask the user for the app name (must follow <code>create-react-app</code>'s naming convention)</li><li>Present the user with a list of available front-end libraries/frameworks</li><li>Present a list of options that the user can choose from to augment their frontend application with</li><li>Run <code>create-react-app</code> followed by:</li><li>Installing additional NPM dependencies </li><li>Adding/modifying <code>package.json</code> entries </li><li>Adding templates</li></ol><p>This article will focus on creating the CLI tool by augmenting <code>create-react-app</code> - a very popular <strong>React </strong>scaffolding tool. 
Since we are pragmatic engineers, we will be structuring our CLI in an extensible manner to allow future support of other frontend libraries/frameworks (Angular and Vue).</p><p>For the React side of the tool, we're going to provide the user with <strong>three </strong>options they can augment their React app with:</p><ul><li>Adding Redux and Redux Saga</li><li>Adding a Mock API server. This will be a simple Node/Express server that can run alongside the app to provide mock API responses</li><li>Adding static code analysis tools (Prettier, TsLint, StyleLint)</li></ul><h2 id="scaffold-dependencies">Scaffold Dependencies</h2><p>I've uncreatively called the tool: <code>create-frontend-app</code>. Create that directory and initialise Git (<code>git init</code>) and NPM (<code>npm init</code>). Then, add the following project structure:</p><pre><code>.gitignore
package.json
index.js
react/
    config/
        mockApi/
            templates/
                ...
            index.js
        redux/
            templates/
                ...
            index.js
        staticCodeAnalysis/
            index.js
        index.js
    reactApp.js
    </code></pre><p>We will need to install the following Node packages:</p><ul><li><a href="https://www.npmjs.com/package/inquirer">inquirer</a> - provides our users with an interactive CLI interface</li><li><a href="https://www.npmjs.com/package/ora">ora</a> - loading spinner to display while our CLI is running commands</li><li><a href="https://www.npmjs.com/package/fs-extra">fs-extra</a> - extension utils to <code>fs</code> that help make our CLI code more concise</li><li><a href="https://www.npmjs.com/package/colors">colors</a> - to provide our users with a <em>colourful </em>CLI experience :)</li><li><a href="https://www.npmjs.com/package/shelljs">shelljs</a> - portable Unix shell commands, used later to run <code>create-react-app</code>, NPM and Git from Node</li><li><a href="https://www.npmjs.com/package/lodash.set">lodash.set</a> - to dynamically set nested <code>package.json</code> keys</li></ul><p>In <code>create-frontend-app</code> run:</p><pre><code>npm install --save inquirer ora fs-extra colors shelljs lodash.set</code></pre><h2 id="option-config-setup">Option Config Setup</h2><p>As previously mentioned, we're going to allow the user to add up to three optional augmentations:</p><ul><li>Adding Redux/Redux Saga</li><li>Adding static code analysis tools (Prettier/TsLint/StyleLint)</li><li>Adding a mock API server</li></ul><p>My aim for this tool was to have one config file for each augmentation option that drives what each will provide to the app. I've created the following (<strong>TypeScript</strong>) schema of how I thought I could best handle the requirement of:</p><ol><li>Installation of additional NPM dependencies</li><li>Additional/modified <code>package.json</code> entries</li><li>Additional/updated project files</li></ol><pre><code class="language-typescript">interface IConfig {
  name: string;
  question: string;
  dependencies: string[];
  devDependencies: string[];
  packageEntries: Array&lt;{
    key: string;
    value: string;
  }&gt;;
  templates: Array&lt;{
    path: string;
    file: string;
  }&gt;;
}</code></pre><ul><li><code>name</code> is the name of the config.</li><li><code>question</code> describes what selecting this augmentation will provide; it is presented to the user as a yes/no question.</li><li><code>dependencies</code> / <code>devDependencies</code> are lists of NPM packages that will be installed into the project</li><li><code>packageEntries</code> is a key/value pair list which will be added to the project's <code>package.json</code></li><li><code>templates</code> are a list of <code>path/file</code> entries. The CLI will iterate through each, create a new file using the specified <strong>template </strong>at the <strong>path </strong>location. </li></ul><p>The config I've created for the <strong>Mock API </strong>option looks like:</p><figure class="kg-card kg-code-card"><pre><code class="language-javascript">const apiIndex = require("./templates/apiIndex");
const mockController = require("./templates/mockController");

module.exports = {
  name: "withMockApi",
  question: "Do you want to include a local mock API using Node and Express?",
  dependencies: [],
  // run-p comes from npm-run-all, so it must be installed as well
  devDependencies: ["express", "body-parser", "npm-run-all"],
  packageEntries: [
    { key: "proxy", value: "http://localhost:9001" },
    {
      key: "scripts.mock-api",
      value: "node mock-api/index.js",
    },
    {
      key: "scripts.dev",
      value: "run-p start mock-api",
    },
  ],
  templates: [
    { path: "mock-api/index.js", file: apiIndex },
    { path: "mock-api/mockController.js", file: mockController },
  ],
};
</code></pre><figcaption>react/config/mockApi/index.js</figcaption></figure><p>You'll notice that we're importing <code>apiIndex</code> from <code>"./templates/apiIndex"</code>. This is a template file that should be added into the project and looks like:</p><figure class="kg-card kg-code-card"><pre><code class="language-javascript">module.exports = `
const express = require('express');
const app = express();
const bodyParser = require('body-parser');
const port = 9001;

app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));

const mockController = require('./mockController');

app.get(
  '/mock-endpoint',
  mockController.mockGetEndpoint
);

app.post(
  '/mock-endpoint',
  mockController.mockPostEndpoint
);

app.listen(port, () =&gt; console.log('Example app listening on port: ' + port + '!'));
`;
</code></pre><figcaption>react/config/mockApi/templates/apiIndex.js</figcaption></figure><p>It simply exports the contents of the template as a string. This string content will be used to construct template files in projects created by the tool. All the other template files have similar formats, but I won't display them here because there are too many!</p><p>You can see all the config files for the other two options in my GitHub repo:</p><ul><li>Redux/Redux Saga: <a href="https://github.com/HarveyD/create-frontend-app/tree/master/react/config/redux">https://github.com/HarveyD/create-frontend-app/tree/master/react/config/redux</a></li><li>Static Code Analysis: <a href="https://github.com/HarveyD/create-frontend-app/tree/master/react/config/staticCodeAnalysis">https://github.com/HarveyD/create-frontend-app/tree/master/react/config/staticCodeAnalysis</a></li></ul><p>We then need to create an <code>index.js</code> file in the root of <code>/config</code> to import and export the three config files:</p><figure class="kg-card kg-code-card"><pre><code>const withStaticCodeAnalysis = require("./staticCodeAnalysis");
const withRedux = require("./redux");
const withMockApi = require("./mockApi");

module.exports = [withStaticCodeAnalysis, withRedux, withMockApi];
</code></pre><figcaption>react/config/index.js</figcaption></figure><h2 id="the-cli-entry-point">The CLI Entry-point</h2><p>With our config all sorted, we now have to construct the base of our CLI. This entry point needs to extract two pieces of information from our user: the desired app name and framework/library. In our entry point (<code>index.js</code>) add the following:</p><figure class="kg-card kg-code-card"><pre><code class="language-javascript">#!/usr/bin/env node
const inquirer = require("inquirer");
const reactApp = require("./react/reactApp");

const askAppQuestions = () =&gt; {
  const questions = [
    {
      type: "input",
      name: "appName",
      message:
        "What name do you want to give your app (should be in kebab case format: `your-app-name`)?"
    },
    {
      type: "list",
      name: "appType",
      message: "What framework do you want to use?",
      choices: ["react", "angular", "vue"]
    }
  ];
  return inquirer.prompt(questions);
};

const appDict = {
  react: reactApp
};

const run = async () =&gt; {
  const answer = await askAppQuestions();
  const { appName, appType } = answer;

  // Todo: Perform some validation on appName here to make sure it's kebab case
  if (!appName || appName.length &lt;= 0) {
    console.log(`Please enter a valid name for your new app.`.red);
    return process.exit(0);
  }
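
  // A possible kebab-case check (a sketch only -- create-react-app will
  // also reject names that break npm's naming rules):
  if (!/^[a-z][a-z0-9]*(-[a-z0-9]+)*$/.test(appName)) {
    console.log(`App name must be kebab-case, e.g. my-new-app.`.red);
    return process.exit(0);
  }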

  const app = appDict[appType];

  if (!app) {
    console.log(
      `App type: ${appType} is not yet supported by this CLI tool.`.red
    );
    return process.exit(0);
  }

  const appDirectory = `${process.cwd()}/${appName}`;

  const res = await app(appName, appDirectory);

  if (!res) {
    console.log("There was an error generating your app.".red);
    return process.exit(0);
  }

  return process.exit(0);
};

run();
</code></pre><figcaption>index.js</figcaption></figure><p>Here we're using <code>inquirer</code> to prompt users to:</p><ol><li>Enter an app name</li><li>Select an app type from 3 options: React, Angular and Vue</li></ol><p>It'll then look up the next part of the CLI tool using the selected app type, passing it the app name and target directory.</p><p>As you can see, <code>appDict</code> only has an entry for React at the moment as we're only focusing on React in this article.</p><h2 id="react-cli">React CLI</h2><p>If React is selected, <code>index.js</code> will run the function in <code>react/reactApp.js</code> with the name and directory of our app. Let's build it out.</p><p><code>react/reactApp.js</code> is a relatively big file, so we'll be breaking it down. First let's add the dependencies required:</p><figure class="kg-card kg-code-card"><pre><code>require("colors");
const shell = require("shelljs");
shell.config.silent = true;
const inquirer = require("inquirer");
const fse = require("fs-extra");
const set = require("lodash.set");
const ora = require("ora");

const reactConfigList = require("./config");</code></pre><figcaption>react/reactApp.js</figcaption></figure><p>Now, we create the default function to be exported by <code>reactApp.js</code>:</p><figure class="kg-card kg-code-card"><pre><code class="language-javascript">module.exports = async (appName, appDirectory) =&gt; {
  const selectedConfigList = await askQuestions();

  await createReactApp(appName);
  await installPackages(selectedConfigList);
  await updatePackageDotJson(selectedConfigList);
  await addTemplates(selectedConfigList);
  await commitGit();

  console.log(
    `Created your new React app with settings: ${selectedConfigList
      .map(_ =&gt; _.name)
      .join(", ")}. cd into ${appName} to get started.`.green
  );

  return true;
};</code></pre><figcaption>react/reactApp.js</figcaption></figure><p>This function is performing a number of steps. Each step is separated out into a function and returns a promise (hence why we're <code>await</code>ing them). We'll flesh out the functions one at a time.</p><h3 id="askquestions">askQuestions</h3><p>The first thing we need to do is to iterate through our list of augmentation options and ask the user if they want to include it within their app. </p><figure class="kg-card kg-code-card"><pre><code class="language-javascript">const askQuestions = async () =&gt; {
  const selectedConfigList = [];

  const questions = reactConfigList.map(config =&gt; ({
    type: "list",
    name: config.name,
    message: config.question,
    choices: ["yes", "no"]
  }));

  const answers = await inquirer.prompt(questions);

  reactConfigList.forEach(config =&gt; {
    const matchingAnswer = answers[config.name];

    if (matchingAnswer &amp;&amp; matchingAnswer === "yes") {
      selectedConfigList.push(config);
    }
  });

  return selectedConfigList;
};</code></pre><figcaption>react/reactApp.js</figcaption></figure><p>Here we're transforming our list of configs into a list of <code>inquirer</code> questions. We then ask the user each question sequentially. We then return a list of options that were selected as <code>yes</code>. This list is passed to subsequent steps in the CLI.</p><h3 id="createreactapp">createReactApp</h3><p>The most essential function of our CLI tool is this function! We need to run <code>create-react-app</code> to have something we can augment!</p><figure class="kg-card kg-code-card"><pre><code class="language-javascript">const createReactApp = appName =&gt; {
  const spinner = ora("Running create-react-app...").start();

  return new Promise((resolve, reject) =&gt; {
    shell.exec(
      `npx create-react-app ${appName}`,
      code =&gt; {
        // shelljs passes the command's exit code as the first callback
        // argument, so we can surface create-react-app failures too
        if (code !== 0) {
          console.log("create-react-app failed.".red);
          spinner.fail();
          return reject();
        }

        const cdRes = shell.cd(appName);

        if (cdRes.code !== 0) {
          console.log(`Error changing directory to: ${appName}`.red);
          spinner.fail();
          return reject();
        }

        spinner.succeed();
        resolve();
      }
    );
  });
};</code></pre><figcaption>react/reactApp.js</figcaption></figure><p>We're utilising <code>ora</code> to show the user a spinner while <code>create-react-app</code> is doing its thing.</p><p>We utilise <code>shelljs</code> to execute <code>npx</code>, which simply runs <code>create-react-app</code> with the app name provided to the CLI tool in <code>index.js</code>. This is wrapped in a <strong>Promise </strong>due to <code>shelljs</code> being asynchronous, but only providing a callback to indicate its completion.</p><p>Once it's done, we need to change the current directory to this freshly created React app, stop the spinner and resolve our Promise.</p><h3 id="installpackages">installPackages</h3><p>This function installs all the dependencies and dev dependencies our augmentations require.</p><figure class="kg-card kg-code-card"><pre><code class="language-javascript">const installPackages = async configList =&gt; {
  let dependencies = [];
  let devDependencies = [];

  configList.forEach(config =&gt; {
    dependencies = [...dependencies, ...config.dependencies];
    devDependencies = [...devDependencies, ...config.devDependencies];
  });
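
  // Note (not in the original repo): `npm install --save` with an empty
  // package list still runs, so a guard like
  //   if (dependencies.length === 0 &amp;&amp; devDependencies.length === 0) return;
  // could skip the two install steps below when nothing extra is needed.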

  await new Promise(resolve =&gt; {
    const spinner = ora("Installing additional dependencies...").start();

    shell.exec(`npm install --save ${dependencies.join(" ")}`, () =&gt; {
      spinner.succeed();
      resolve();
    });
  });

  await new Promise(resolve =&gt; {
    const spinner = ora("Installing additional dev dependencies...").start();

    shell.exec(`npm install --save-dev ${devDependencies.join(" ")}`, () =&gt; {
      spinner.succeed();
      resolve();
    });
  });
};</code></pre><figcaption>react/reactApp.js</figcaption></figure><p>Here, for each selected augmentation provided from <code>askQuestions</code>, we're grabbing the list of NPM dependencies it requires and installing them! We use <code>shelljs</code> to run the NPM command to install the dependency! We perform the same thing for dev dependencies.</p><h3 id="updatepackagedotjson">updatePackageDotJson</h3><p>This function updates <code>package.json</code> by adding entries, or modifying them.</p><figure class="kg-card kg-code-card"><pre><code class="language-javascript">const updatePackageDotJson = (configList) =&gt; {
  const spinner = ora("Updating package.json scripts...").start();

  const packageEntries = configList.reduce(
    (acc, val) =&gt; [...acc, ...val.packageEntries],
    []
  );

  return new Promise((resolve) =&gt; {
    const rawPackage = fse.readFileSync("package.json");
    const package = JSON.parse(rawPackage);

    packageEntries.forEach((script) =&gt; {
      // Lodash `set` allows us to dynamically set nested keys within objects
      // i.e. scripts.foo = "bar" will add an entry to the foo field in scripts
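    // e.g. set({ scripts: {} }, "scripts.dev", "run-p start mock-api")
    //      mutates the object into { scripts: { dev: "run-p start mock-api" } }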
      set(package, script.key, script.value);
    });

    fse.writeFile("package.json", JSON.stringify(package, null, 2), function (
      err
    ) {
      if (err) {
        spinner.fail();
        return console.log(err);
      }

      spinner.succeed();
      resolve();
    });
  });
};</code></pre><figcaption>react/reactApp.js</figcaption></figure><p>First we grab all the entry modifications we need to perform from our selected config list. Then we use <code>fse</code> to grab the existing contents of <code>package.json</code> and parse it. We loop over the list of entries, and use <code>lodash</code> set to output the key value pair in <code>package.json</code>. We then use <code>fse</code> to overwrite the old <code>package.json</code> with the new, mutated <code>package.json</code>.</p><h3 id="addtemplates">addTemplates</h3><p>This function adds the template files associated with the augmentation option to our new project.</p><figure class="kg-card kg-code-card"><pre><code class="language-javascript">const addTemplates = configList =&gt; {
  const spinner = ora("Adding templates...").start();

  const templateList = configList.reduce(
    (acc, val) =&gt; [...acc, ...val.templates],
    []
  );

  // outputFile creates any missing directories; called without a callback it
  // returns a promise, so we can wait for every file to be written before
  // reporting success
  return Promise.all(
    templateList.map(template =&gt; fse.outputFile(template.path, template.file))
  ).then(() =&gt; {
    spinner.succeed();
  });
};</code></pre><figcaption>react/reactApp.js</figcaption></figure><p>Similar to the <code>updatePackageDotJson</code> function, we merge the templates we need to add from all the selected augmentation configs. For each, we utilise <code>fse</code> to output the contents of the template at the desired path.</p><h3 id="commitgit">commitGit</h3><p><code>create-react-app</code> initialises a Git repository. Since we've added and modified a number of files, we will have untracked changes. We want our user to have a fresh experience, free of these untracked files. So we simply add all the files and commit them!</p><figure class="kg-card kg-code-card"><pre><code class="language-javascript">const commitGit = () =&gt; {
  const spinner = ora("Committing files to Git...").start();

  return new Promise(resolve =&gt; {
    shell.exec(
      'git add . &amp;&amp; git commit --no-verify -m "Secondary commit from Create Frontend App"',
      () =&gt; {
        spinner.succeed();
        resolve();
      }
    );
  });
};</code></pre><figcaption>react/reactApp.js</figcaption></figure><p>That's it! View the file in its entirety at: <a href="https://github.com/HarveyD/create-frontend-app/blob/master/react/reactApp.js">https://github.com/HarveyD/create-frontend-app/blob/master/react/reactApp.js</a></p><h2 id="running">Running</h2><p>Let's test out our new CLI tool. Run <code>node index.js</code> or <code>npm start</code> (if you add a <code>start</code> script that runs <code>index.js</code>) and you should see:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/04/image-2.png" class="kg-image" alt="Extending create-react-app to make your own React CLI Scaffolding Tool"></figure><p>Followed by our three options:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/04/image-1.png" class="kg-image" alt="Extending create-react-app to make your own React CLI Scaffolding Tool"></figure><p>It'll then run all the steps:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/04/image-3.png" class="kg-image" alt="Extending create-react-app to make your own React CLI Scaffolding Tool"></figure><p>If everything went okay, you should see a new <code>test-app</code> directory where you ran the CLI tool:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/04/image-4.png" class="kg-image" alt="Extending create-react-app to make your own React CLI Scaffolding Tool"></figure><p>It should have all the dependencies, templates and <code>package.json</code> entries we specified! </p><h2 id="publishing-and-consuming">Publishing and Consuming</h2><p>Before publishing, make sure to add the following to your <code>package.json</code>:</p><figure class="kg-card kg-code-card"><pre><code>  ...
  "bin": {
    "create-frontend-app": "./index.js"
  },
  ...</code></pre><figcaption>package.json</figcaption></figure><p>This allows the user to execute the script from anywhere once the CLI tool is installed globally via NPM!</p><p>Now all that's left to do is publish your new CLI tool to NPM! To find out how to do this, follow: <a href="https://blog.harveydelaney.com/setting-up-a-private-npm-registry-publishing-ci-cd-pipeline/">https://blog.harveydelaney.com/setting-up-a-private-npm-registry-publishing-ci-cd-pipeline/</a></p><p>After publishing your library to an NPM registry, you should be able to install it by running: <code>npm i -g create-frontend-app</code>. </p><p>Then by running <code>create-frontend-app</code>, you should be presented with the CLI's questions! After answering the questions and letting the tool run, your new project should be created in the directory you ran <code>create-frontend-app</code> in.</p>]]></content:encoded></item><item><title><![CDATA[Integrating React Components into an Angular 2+ Project]]></title><description><![CDATA[Learn how to integrate React Components into your Angular 2+ project!]]></description><link>https://blog.harveydelaney.com/integrating-react-components-into-an-angular-2-project/</link><guid isPermaLink="false">5e82e25fbd5dd30001343217</guid><category><![CDATA[React]]></category><category><![CDATA[Angular]]></category><dc:creator><![CDATA[Harvey Delaney]]></dc:creator><pubDate>Tue, 31 Mar 2020 10:45:29 GMT</pubDate><media:content url="https://blog.harveydelaney.com/content/images/2020/04/react-in-angular-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.harveydelaney.com/content/images/2020/04/react-in-angular-1.jpg" alt="Integrating React Components into an Angular 2+ Project"><p>In my final few weeks at RateSetter, I was working on one last, <a href="https://atomicdesign.bradfrost.com/chapter-2/">organism</a> level component to add to our <a href="https://blog.harveydelaney.com/creating-your-own-react-component-library/">RateSetter Component 
Library</a>.</p><p>We had created multiple "vehicle lookup" UIs that were implemented differently across four different projects (two React and two Angular 2+). When we wanted to add features to this vehicle lookup UI, we found it difficult and time-consuming to go through each project, understand how the vehicle lookup was implemented and then update it. </p><p>We decided to create one component that would be consumed by each of the UI projects. This allowed us to write code once in the component library, publish it and then simply update the component library dependency in each project. It resulted in more rapid, bug-free development.</p><p>I whipped up the vehicle lookup component using React (with TypeScript), Hooks and Context and added it to our component library. Integrating the component within React projects was trivial, but integrating it within Angular 2+ projects was an unknown for me.</p><p>After a bit of research and experimenting, I found that integrating a React Component within an Angular 2+ project is surprisingly simple! Follow along to find out how I did it.</p><p><strong>Note: </strong>This article uses <code>react + react-dom @ 16.13.1</code> and <code>Angular @ 9.1.0</code>. I used <code>@angular/cli</code> to spin up a basic Angular project called <code>react-integration-app</code>.</p><h1 id="integration-">Integration!</h1><h2 id="react-component">React Component</h2><p>Let's say you wanted to integrate the following React component into an Angular 2+ project:</p><pre><code class="language-typescript">import React, { useCallback, useState } from 'react';

export interface IFeelingFormProps {
  name: string;
  onSubmit: (feelingUpdate: string) =&gt; void;
}

const FeelingForm: React.FC&lt;IFeelingFormProps&gt; = ({ name, onSubmit }) =&gt; {
  const [currentFeeling, setCurrentFeeling] = useState('');

  const onFeelingChange = useCallback(
    (event: React.ChangeEvent&lt;HTMLInputElement&gt;) =&gt; {
      setCurrentFeeling(event.currentTarget.value);
    },
    []
  );

  const onSubmitEvent = useCallback(() =&gt; {
    onSubmit(`${name} is feeling: ${currentFeeling}`);
  }, [name, currentFeeling]);

  return (
    &lt;form onSubmit={onSubmitEvent}&gt;
      &lt;label htmlFor="feeling-input"&gt;How are you feeling?&lt;/label&gt;
      &lt;input
        id="feeling-input"
        onChange={onFeelingChange}
        value={currentFeeling}
      /&gt;
      &lt;button type="submit"&gt;Send feeling&lt;/button&gt;
    &lt;/form&gt;
  );
};

export default FeelingForm;
</code></pre><h2 id="dependencies">Dependencies</h2><p>To get React components rendering in your Angular project, we need to install React. In <code>react-integration-app</code> run:</p><pre><code>npm i --save react react-dom</code></pre><p>It also helps to have React types installed:</p><pre><code>npm i -D @types/react @types/react-dom</code></pre><p>Also, don't forget to install your component library!</p><h2 id="angular-wrapper">Angular Wrapper</h2><p>In your Angular project, create a new directory in <code>/src</code> called <code>react-feeling-form</code> and in it create <code>react-feeling-form.component.ts</code>. In it, add:</p><pre><code class="language-typescript">import {
  Component,
  OnChanges,
  Input,
  Output,
  EventEmitter,
  AfterViewInit
} from '@angular/core';
import * as React from 'react';
import * as ReactDOM from 'react-dom';
import { FeelingForm } from '@harvey/harvey-component-library';
import { IFeelingFormProps } from '@harvey/harvey-component-library/build/feeling-form/feeling-form';

@Component({
  selector: 'app-react-feeling-form',
  template: '&lt;div [id]="rootId"&gt;&lt;/div&gt;'
})
export class ReactFeelingFormComponent implements OnChanges, AfterViewInit {
  @Input() name: string;
  @Output() submitEvent = new EventEmitter&lt;string&gt;();

  public rootId = 'feeling-form-root';
  private hasViewLoaded = false;

  public ngOnChanges() {
    this.renderComponent();
  }

  public ngAfterViewInit() {
    this.hasViewLoaded = true;
    this.renderComponent();
  }

  private renderComponent() {
    if (!this.hasViewLoaded) {
      return;
    }

    const props: IFeelingFormProps = {
      name: this.name,
      onSubmit: (res: string) =&gt; this.submitEvent.emit(res)
    };

    ReactDOM.render(
      React.createElement(FeelingForm, props),
      document.getElementById(this.rootId)
    );
  }
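
  // Worth adding (not covered in the original article): unmount the React
  // tree when Angular destroys this component so repeated mounts don't leak
  public ngOnDestroy() {
    const container = document.getElementById(this.rootId);

    if (container) {
      ReactDOM.unmountComponentAtNode(container);
    }
  }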
}
</code></pre><p>Before trying to understand what's being done here, have a good read through <a href="https://reactjs.org/docs/react-without-jsx.html">React without JSX</a>. </p><p>We've created a template with a <code>div</code> element that will act as our container to mount our React component on. We're binding the <code>id</code> attribute to the <code>rootId</code> field in our Angular component to make it easier to find in the DOM.</p><p>We're accepting props that match what our React component requires - an <code>@Input()</code> for <code>name</code> and an <code>@Output()</code> Event Emitter for our <code>onSubmit</code> callback. </p><p><code>ngAfterViewInit</code> is used to inform our component that our container (view) has loaded and is ready to have the component mounted on it. It also performs the first render.</p><p>If we don't check the view has loaded before attempting to mount our React component (which will happen because <a href="https://angular.io/guide/lifecycle-hooks"><code>ngOnChanges</code> runs before <code>ngAfterViewInit</code></a>), we'll run into this error:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/03/image.png" class="kg-image" alt="Integrating React Components into an Angular 2+ Project"></figure><p><code>ngOnChanges</code> is used to "re-render" our React component. Anytime one of the Angular inputs changes, so should the props provided to our React component. From <a href="https://reactjs.org/docs/react-dom.html">React's documentation</a>:</p><blockquote>If the React element was previously rendered into <code>container</code>, this will perform an update on it and only mutate the DOM as necessary to reflect the latest React element.</blockquote><p><code>React.createElement</code> is used to create our React component, passing through the props our Angular component received. This is identical to <code>&lt;FeelingForm {...props} /&gt;</code> in JSX. 
</p><p><code>ReactDOM.render</code> takes our freshly created React component and mounts it to our container element (which is retrieved from the DOM by <code>rootId</code>).</p><p><strong>Important: </strong>We will also need to add the following to<strong> <code>tsconfig.json</code></strong>:</p><pre><code>...
"skipLibCheck": true,
"allowSyntheticDefaultImports": true
...</code></pre><p><code>skipLibCheck</code> will prevent TypeScript compiler issues that occur when React is present. <code>allowSyntheticDefaultImports</code> is needed to prevent errors due to how React is exported.</p><p>Also, don't forget to add the new component to your <code>app.module.ts</code> <code>declarations</code> array.</p><h2 id="using-the-wrapper">Using the wrapper</h2><p>You would then use the wrapper component like any other Angular component. For example:</p><pre><code class="language-typescript">import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  template: `
    &lt;div className="app-container"&gt;
      &lt;h1&gt;Below is the React Component!&lt;/h1&gt;
      &lt;app-react-feeling-form
        [name]="'Harvey'"
        (submitEvent)="submitEvent($event)"
      &gt;&lt;/app-react-feeling-form&gt;
    &lt;/div&gt;
  `
})
export class AppComponent {
  submitEvent($event: string) {
    alert($event);
  }
}
</code></pre><p>After running <code>ng serve</code>, you should see:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/03/image-1.png" class="kg-image" alt="Integrating React Components into an Angular 2+ Project"></figure><p>And by clicking <code>Send feeling</code> you'll see the alert:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/03/image-2.png" class="kg-image" alt="Integrating React Components into an Angular 2+ Project"></figure><h2 id="component-library-styles">Component Library Styles</h2><p>If your component library exports styles as a standalone CSS file, you'll need to add it to <code>angular.json</code> under the <code>styles</code> field, for example:</p><pre><code>...
 "styles": ["src/styles.scss", "./node_modules/@harvey/harvey-component-library/build/styles.css"]
...</code></pre>]]></content:encoded></item><item><title><![CDATA[Setting up a Private NPM Registry and Publishing CI/CD Pipeline]]></title><description><![CDATA[Learn how to set up a private NPM registry and create a BuildKite NPM publishing CI/CD pipeline.]]></description><link>https://blog.harveydelaney.com/setting-up-a-private-npm-registry-publishing-ci-cd-pipeline/</link><guid isPermaLink="false">5de2eddb1234610001cfd55d</guid><category><![CDATA[Continuous Integration and Continuous Delivery (CI/CD)]]></category><category><![CDATA[buildkite]]></category><category><![CDATA[npm]]></category><dc:creator><![CDATA[Harvey Delaney]]></dc:creator><pubDate>Mon, 30 Dec 2019 07:00:01 GMT</pubDate><media:content url="https://blog.harveydelaney.com/content/images/2020/01/npm.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.harveydelaney.com/content/images/2020/01/npm.jpg" alt="Setting up a Private NPM Registry and Publishing CI/CD Pipeline"><p>This article follows on from how I set up a React component library at work. Read it here: <a href="https://blog.harveydelaney.com/creating-your-own-react-component-library/">https://blog.harveydelaney.com/creating-your-own-react-component-library/</a>.</p><p>With the RateSetter component library all set up, we now needed a way to privately host, distribute and version it. Private hosting was required as we only wanted engineers at RateSetter to see and use the package. Versioning was essential as we needed to be able to manage this evolving library being used in a number of projects throughout the company. 
The component library being an NPM package met all these requirements.</p><p><strong>Note: </strong>For this article<strong> </strong>I'll be creating a <strong>scoped NPM package:</strong></p><blockquote>Scopes are a way of grouping related packages together, and also affect a few things about the way npm treats the package.</blockquote><p>Read more about<strong> </strong><a href="https://docs.npmjs.com/misc/scope">scoped packages here</a>. </p><h1 id="choosing-an-npm-registry">Choosing an NPM Registry</h1><p>First up, we have to choose how we want to host our registry. There are two options for hosting a private NPM registry for your organisation:</p><h2 id="paid">Paid</h2><p>The benefit of a paid service is that you receive a working solution out of the box. There is little to no configuration required. You also receive great documentation and support that help you use the service. The downside is that you have to pay for the service, and risk having your packages removed if they violate the service's T&amp;Cs or being compromised in the unlikely event of a breach.</p><p>Some of these paid services are:</p><ul><li><a href="https://www.npmjs.com/products">NPM</a> ($7 / user / month)</li><li><a href="https://inedo.com/proget/pricing">ProGet</a> ($1000 / year)</li><li><a href="https://www.myget.org/">MyGet</a> ($165 / year)</li><li><a href="https://bintray.com/">JFrog Bintray</a> ($150 / month)</li></ul><p>It's important to note that while some services are more expensive than others, they offer additional features like hosting other package formats such as NuGet (.NET), Maven (Java) and Gems (Ruby).</p><h2 id="free">Free</h2><p>Benefits of free services are that they are... free and all the packages are hosted on your servers. 
The downside is that there is usually some overhead in setting up these services, as they have to be self-hosted.</p><p>Some examples of free services are:</p><ul><li><a href="https://verdaccio.org/">Verdaccio</a></li><li>ProGet (self-hosted)</li><li><a href="https://github.com/cnpm/cnpmjs.org">CNPMJS</a></li><li><a href="https://github.com/rlidwka/sinopia">Sinopia</a></li><li><a href="https://docs.npmjs.com/misc/registry">Building your own NPM registry</a></li></ul><h1 id="setting-up-the-private-npm-registry">Setting up the Private NPM Registry</h1><p>We chose ProGet to host our private NPM registry, which would initially host the component library and, later, all other future RateSetter NPM packages.</p><p>The main reason behind this decision was that we were already self-hosting ProGet, which was being used for our NuGet packages. It made sense to just use this and add an NPM registry alongside the NuGet registry.</p><p>However, if we didn't already have ProGet set up, I would have chosen Verdaccio. It's very popular (8.5k GitHub stars), actively maintained (as of 21/12/2019) and has many questions asked/answered about it on Stack Overflow, in addition to <a href="https://verdaccio.org/docs/en/installation">excellent documentation on its website</a>.</p><h1 id="manual-publishing">Manual Publishing</h1><p>As mentioned at the start of this article, we will be publishing to a scoped package registry. For this example, I'm using the scope: <code>@harvey</code> and the package name: <code>react-component-library</code>. 
</p><p>You'll need some information handy from your NPM registry before we can publish our first package:</p><ul><li>Your package repository</li><li>Registry URL</li><li>The email, username and password that you wish to use to log in to the registry</li></ul><p>With your registry URL, run:</p><p><code>npm config set @harvey:registry <a href="http://nuget.rsdev:10009/npm/npm-rs/">http://YOUR_REGISTRY:PORT</a></code> </p><p>This command saves an NPM config entry that points all scoped registry requests to our private NPM registry.</p><p>Now create a new user by running:</p><p><code>npm adduser --registry <a href="http://nuget.rsdev:10009/npm/npm-rs/">http://YOUR_REGISTRY:PORT</a></code></p><p><em><strong>Note: </strong>you may have seen other articles use: <code>npm login</code>. <code>npm login</code> is an alias of <code>adduser</code> and behaves exactly the same way.</em></p><p>Follow the interactive tool by inputting a username, password and email address. The command will then communicate with the registry to create configuration entries that are saved in <code>.npmrc</code>. <code>.npmrc</code> (aka NPM runtime configuration) lives in your home directory, e.g. at <code>/Users/Username/.npmrc</code> on macOS. For example, for me it created: <code>//localhost:4873/:_authToken="MaDSxMyURlERcdThvWbg6A=="</code>. 
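</p><p>For a self-contained picture of what those entries do, here's a runnable sketch (the registry host and token are placeholders - your real token comes from <code>npm adduser</code>):</p>

```shell
# Illustration only: recreate the kind of entries that `npm config set` and
# `npm adduser` leave in your user-level .npmrc, using a placeholder registry
# and a fake token, written to a local file so this sketch runs anywhere.
cat > example.npmrc <<'EOF'
@harvey:registry=http://YOUR_REGISTRY:PORT/
//YOUR_REGISTRY:PORT/:_authToken="FAKE_TOKEN"
EOF

# The scope line routes all @harvey/* requests to the private registry;
# the _authToken line authenticates requests to that host.
grep '@harvey' example.npmrc
```

<p>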
This file and its contents are used by NPM to authenticate all requests made to registries (public and private).</p><p>Read more about <code>.npmrc</code> here: <a href="https://docs.npmjs.com/files/npmrc">https://docs.npmjs.com/files/npmrc</a></p><p>Now in <code>package.json</code> in your package repository, change the name from <code>package-name</code> to <code>@scope/package-name</code>, for example:</p><p><code>"name": "react-component-library",</code> =&gt; <code>"name": "@harvey/react-component-library",</code></p><p>NPM will then know to publish to your scoped, private registry as opposed to the public NPM registry. </p><p>Now running <code>npm publish</code> in your repository's root directory should work:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2019/12/image-10.png" class="kg-image" alt="Setting up a Private NPM Registry and Publishing CI/CD Pipeline"></figure><h1 id="automated-publishing-pipeline-">Automated Publishing (Pipeline)</h1><p>Manual publishing can work if you're working solo or in a small team. But it'll get to a point where it becomes too hard to keep track of what versions of the package have been deployed. It is also risky and error-prone, as developers will have permission to publish from their local machines and could publish packages without running tests or without having their code reviewed through a pull request.</p><p>For this section, I'll be going through how I automated NPM publishing by creating our NPM CI publishing pipeline using <strong><a href="https://buildkite.com/">BuildKite</a></strong>.</p><p>First we need to create a BuildKite agent and pipeline, and link the pipeline to our library repo on GitHub. 
Read my previous article to find out how: <a href="https://blog.harveydelaney.com/setting-up-buildkite-and-your-first-ci-pipeline-in-2-hours/">https://blog.harveydelaney.com/setting-up-buildkite-and-your-first-ci-pipeline-in-2-hours/</a>.</p><p>We will be defining our <a href="https://buildkite.com/docs/pipelines/defining-steps">pipeline steps through <code>.yml</code></a> and <code>.bash</code> scripts in the <code>.buildkite</code> directory of our package repository.</p><p><strong>Note: </strong>To have BuildKite read from the .buildkite directory, you need to add <code>buildkite-agent pipeline upload</code> to the <strong>Commands to Run</strong> section in your pipeline:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2019/12/Screen-Shot-2019-12-30-at-5.48.57-pm.png" class="kg-image" alt="Setting up a Private NPM Registry and Publishing CI/CD Pipeline"></figure><p>Our pipeline will have two steps:</p><ol><li>Run linting and tests</li><li>Build, package and publish the package to our private NPM registry</li></ol><h2 id="step-1-linting-and-tests">Step 1 - Linting and Tests</h2><p>The first step on your BuildKite pipeline will be a simple one liner: <code>npm run lint &amp;&amp; npm run test</code>.</p><p>These commands help make sure code warnings/errors aren't present and components are behaving as expected before the package is published to the registry. You can add other checks such as Prettier to this step as well.</p><p>Our <code>pipeline.yml</code> under <code>.buildkite</code> will be: </p><pre><code class="language-yaml">steps:
  - label: "Install and Lint"
    command: "npm run lint &amp;&amp; npm run test"</code></pre><h2 id="step-2-build-and-publishing">Step 2 - Build and Publishing</h2><p>Before Step 2, we will be adding a <a href="https://buildkite.com/docs/pipelines/block-step"><strong>Block Step</strong></a>. This <strong>Block Step</strong> will allow the engineer to elect what kind of update they pushed to the component library. We use <a href="https://semver.org/">Semantic Versioning</a> to help us determine what type of release it should be (<code>Major</code> | <code>Minor</code> | <code>Patch</code>):</p><pre><code class="language-yaml">steps:
  - label: "Install and Lint"
    command: "npm run lint &amp;&amp; npm run test"
          
  - block: Publish to NPM
    fields:
    - select: "NPM Package Version"
      key: "npm-semver-type"
      required: true
      options:
        - label: "Major"
          value: "major"
        - label: "Minor"
          value: "minor"
        - label: "Patch"
        value: "patch"</code></pre><p>The above <code>.yaml</code> file creates a block step that presents three options to the engineer: Major, Minor and Patch. The selection is saved to the BuildKite <a href="https://buildkite.com/docs/pipelines/build-meta-data">build meta-data</a>, which can be accessed in subsequent steps.</p><p>Now we need to add the final step, <strong>Publish</strong>:</p><pre><code class="language-yaml">steps:
  - label: "Install and Lint"
    command: "npm run lint &amp;&amp; npm run test"
          
  - block: Publish to NPM
    fields:
    - select: "NPM Package Version"
      key: "npm-semver-type"
      required: true
      options:
        - label: "Major"
          value: "major"
        - label: "Minor"
          value: "minor"
        - label: "Patch"
          value: "patch"

  - label: "Publish"
    command: "bash ./.buildkite/publish.bash"</code></pre><p>As you can see, our final step is running a bash command as we need to run a number of commands. The <code>publish.bash</code> file will be:</p><pre><code class="language-bash">#!/usr/bin/env bash
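# SUGGESTED ADDITION (mine, not in the original article): stop on the first
# failed command so a failed build or publish never reaches 'git push' below.
set -euo pipefail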

# INCREMENT PACKAGE VERSION
semverIncrementType="$(buildkite-agent meta-data get npm-semver-type)"
npm version "$semverIncrementType"

# INSTALL REQUIRED PACKAGES
npm ci
npm run build

# CONFIGURE NPM SETTINGS
echo "@harvey:registry=http://YOUR_REGISTRY:PORT" &gt;&gt; .npmrc
echo "unsafe-perm=true" &gt;&gt; .npmrc
# note: npm expects the _password value to be base64-encoded
echo "//YOUR_REGISTRY:PORT/:_password=YOUR_NPM_PASSWORD" &gt;&gt; .npmrc
echo "//YOUR_REGISTRY:PORT/:username=YOUR_NPM_USERNAME" &gt;&gt; .npmrc
echo "//YOUR_REGISTRY:PORT/:email=YOUR_NPM_EMAIL" &gt;&gt; .npmrc
echo "//YOUR_REGISTRY:PORT/:always-auth=false" &gt;&gt; .npmrc
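# OPTIONAL SANITY CHECK (my addition, not in the original article):
# 'npm whoami' exits non-zero if the registry rejects the credentials
# written above, failing this step before a broken publish.
npm whoami --registry http://YOUR_REGISTRY:PORT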

# Publish!
npm publish

# UPDATE PACKAGE VERSION IN REPOSITORY
git push
</code></pre><p>Let's break this bash script down. First, we grab the semver type from the block step using <code>buildkite-agent meta-data get</code> and use it to increment the version of the package appropriately - <a href="https://docs.npmjs.com/updating-your-published-package-version-number">read more about <code>npm version</code> here</a>.</p><p>We then get our package ready to be published by running <code>npm ci &amp;&amp; npm run build</code>, which will produce our built package in the output folder <code>/build</code>.</p><p>The next block of <code>echo</code>s redirecting to a <code>.npmrc</code> file may seem a bit strange. It's how I decided to configure the build instance to set up the correct NPM settings for publishing to my private NPM registry. It creates all the settings that we created in the <strong>Manual Publishing</strong> section above. It was done like this because <code>npm adduser</code> is an interactive command that can't be interacted with in our pipeline. You could also replace <code>_password</code>, <code>username</code> and <code>email</code> with a single <code>_authToken</code> field instead.</p><p>Next, the script runs the <code>npm publish</code> command!</p><p>Finally, we need to update our repository to have the latest version by simply pushing to it (this requires the BuildKite agent to have write permissions to your repository). We only have to push because <code>npm version</code> adds a commit with the latest package version in <code>package.json</code> and <code>package-lock.json</code>. 
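</p><p>As a rough, pure-bash illustration of what each bump type does to the version number itself (the real <code>npm version</code> also rewrites <code>package.json</code>, commits and tags):</p>

```shell
# Sketch of semver bumping (illustration only - use `npm version` for real).
bump() {
  local type=$1 version=$2 major minor patch
  IFS=. read -r major minor patch <<< "$version"
  case "$type" in
    major) echo "$((major + 1)).0.0" ;;
    minor) echo "${major}.$((minor + 1)).0" ;;
    patch) echo "${major}.${minor}.$((patch + 1))" ;;
  esac
}

bump major 1.4.2   # -> 2.0.0
bump minor 1.4.2   # -> 1.5.0
bump patch 1.4.2   # -> 1.4.3
```

<p>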
The push should only happen when publishing succeeds - for that, the script needs to stop on failure (e.g. <code>set -e</code> at the top), otherwise <code>git push</code> runs even if <code>npm publish</code> fails.</p><h1 id="end-result">End Result</h1><p>Once an update has been pushed to your library repository, BuildKite should read the config we created in <code>.buildkite</code> in the repository, then generate and kick off the pipeline:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2019/12/Screen-Shot-2019-12-30-at-5.48.06-pm.png" class="kg-image" alt="Setting up a Private NPM Registry and Publishing CI/CD Pipeline"></figure><p>Clicking <strong>Publish to NPM </strong>should present the modal: </p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2019/12/Screen-Shot-2019-12-30-at-5.51.02-pm.png" class="kg-image" alt="Setting up a Private NPM Registry and Publishing CI/CD Pipeline"></figure><p>On clicking continue, the <code>publish.bash</code> script should be run. If everything is successful, you should see that your new package was published, in addition to the <code>package.json</code> version being incremented appropriately:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2019/12/Screen-Shot-2019-12-30-at-5.54.35-pm.png" class="kg-image" alt="Setting up a Private NPM Registry and Publishing CI/CD Pipeline"></figure><p>Now go and enjoy the benefits of having your own private NPM registry and NPM publishing CI/CD pipeline!</p>]]></content:encoded></item><item><title><![CDATA[Configuring your Usenet Provider and Indexer with Sonarr/Radarr on Unraid]]></title><description><![CDATA[Learn how to automate your Usenet media content downloads using Unraid, Sonarr, Radarr!]]></description><link>https://blog.harveydelaney.com/configuring-your-usenet-provider-and-indexer-with-sonarr-radarr/</link><guid isPermaLink="false">5de307a21234610001cfd569</guid><category><![CDATA[Unraid]]></category><category><![CDATA[radarr]]></category><category><![CDATA[sonarr]]></category><dc:creator><![CDATA[Harvey 
Delaney]]></dc:creator><pubDate>Sun, 01 Dec 2019 10:22:32 GMT</pubDate><media:content url="https://blog.harveydelaney.com/content/images/2019/12/unraid-usenet-radarr-sonarr.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.harveydelaney.com/content/images/2019/12/unraid-usenet-radarr-sonarr.jpg" alt="Configuring your Usenet Provider and Indexer with Sonarr/Radarr on Unraid"><p>Last year I installed Unraid on my NAS in addition to Sonarr/Radarr/Deluge. This setup helped me automatically download and manage my media content. <a href="https://blog.harveydelaney.com/installing-radarr-sonar-and-deluge-on-your-unraid-setup/">Read more about how I set it up here</a>.</p><p>Recently, I was surfing the web and discovered Usenet. To me, Usenet was a new, different method of downloading media content! I decided to weigh up the pros and cons of <strong>torrents</strong> and <strong>Usenet</strong> by giving Usenet a go!</p><p><strong>NOTE: </strong>this article assumes you've already set up Unraid/Sonarr/Radarr - follow the above article if you haven't.</p><h1 id="torrent-setup">Torrent Setup</h1><p>You probably know what torrents are and how they work. If not, <a href="https://techterms.com/definition/torrent">read more here.</a></p><p>A typical torrent setup using Sonarr/Radarr might work as follows: </p><ol><li>Use Sonarr/Radarr to select a media file to download</li><li>Sonarr/Radarr communicates with torrent indexers</li><li>Sonarr/Radarr adds the torrent file to Deluge and kicks off the download</li><li>Deluge downloads the content and, once complete, copies the file over to a completed directory</li><li>Sonarr/Radarr detects the download is complete, then renames and copies the file to the appropriate media directory used by Plex</li></ol><p>My take on the pros and cons of using torrents:</p><!--kg-card-begin: html--><table>
    <tbody style="font-size: 20px">
        <tr style="color: darkgreen">
            <td>+</td>
            <td style="text-align: left">Torrenting is free. It doesn't require a paid subscription to any service for you to download your content!</td>
        </tr>
        <tr style="color: darkgreen">
            <td>+</td>
            <td style="text-align: left">
             Setup and configuration to use Torrents is simple and can be automated!      
            </td>
        </tr>
        <tr style="color: darkgreen">
            <td>+</td>
            <td style="text-align: left">
             Torrents are a well-known method of downloading content. More people know about torrents and already have a basic understanding of how they work.
            </td>
        </tr>
        <tr style="color: darkred">
            <td>-</td>
            <td style="text-align: left">Torrents rely on seeders. A low number of seeders will result in slow download speeds.</td>
        </tr>
        <tr style="color: darkred">
            <td>-</td>
            <td style="text-align: left">Seeding is usually required while/before downloading</td>
        </tr>
        <tr style="color: darkred">
            <td>-</td>
            <td style="text-align: left">Torrents can have malicious or low-quality files. This largely depends on the indexers you use, though - I've personally found it to be very rare if you ignore all .exe files</td>
        </tr>
    </tbody>
</table><!--kg-card-end: html--><h1 id="usenet-setup">Usenet Setup</h1><p>Usenet has a lot of history behind it. In summary, it was originally designed as a bulletin-board service. Usenet eventually became a popular place to store and sort any kind of file. An organisation called Newzbin created the NZB file, which pointed to where files existed on Usenet. A whole ecosystem around Usenet and the NZB file then grew until it became what it is today. Usenet differs from torrents in that files are downloaded from a single server, as opposed to from multiple other "peers".</p><p> <a href="https://www.lifehacker.com.au/2010/08/how-to-get-started-with-usenet-in-three-simple-steps/">Read more about the history of Usenet and how it works here.</a></p><p>An automated Usenet download flow would work as follows:</p><ol><li>Use Sonarr/Radarr to select a file or wait for a file to be released</li><li>Sonarr/Radarr communicates with a <strong>Usenet Indexer</strong> to find a matching file</li><li>Using the index, Sonarr/Radarr sends the file location to a <strong>Usenet Downloader</strong></li><li>The <strong>Usenet Downloader</strong> communicates with a <strong>Usenet Provider</strong>, which serves the content to the downloader</li><li>Once the download is complete, it will copy the file to a <strong>completed</strong> directory</li><li>Sonarr/Radarr detects the download is complete, then renames and copies the file to the appropriate media directory</li></ol><p>My take on the pros and cons of Usenet:</p><!--kg-card-begin: html--><table>
    <tbody style="font-size: 20px">
        <tr style="color: darkgreen">
            <td>+</td>
            <td style="text-align: left">Usenet providers offer unlimited download speeds. You are only limited by your network</td>
        </tr>
        <tr style="color: darkgreen">
            <td>+</td>
            <td style="text-align: left">
             Most Usenet providers have SSL ports, so no one can snoop on what you are downloading and your IP address is kept private
            </td>
        </tr>
        <tr style="color: darkgreen">
            <td>+</td>
            <td style="text-align: left">Usenet providers, indexers and downloaders all have a large amount of support, documentation and automation functionality available as well as having large, active communities supporting them</td>
        </tr>
        <tr style="color: darkgreen">
            <td>+</td>
            <td style="text-align: left">You don't have to seed (upload) before/while downloading</td>
        </tr>
        <tr style="color: darkred">
            <td>-</td>
            <td style="text-align: left">To use Usenet, you need a subscription to a Usenet provider and an indexer service. These subscriptions cost money</td>
        </tr>
        <tr style="color: darkred">
            <td>-</td>
            <td style="text-align: left">Usenet is a less popular, less familiar alternative to torrenting. There may be some apprehension about using it for these reasons (new things can be scary) </td>
        </tr>
    </tbody>
</table><!--kg-card-end: html--><p>In both approaches, there are positives and negatives. As previously mentioned, I decided to give Usenet a go to see if I would like it better than torrents.</p><p>Before we get started, we'll need to pick a <strong>provider</strong> and an <strong>indexer</strong> to use. There are many indexers and providers out there; I'll just be suggesting a few of them, and I would encourage you to do your own research as well.</p><h1 id="recommended-usenet-providers">Recommended Usenet Providers</h1><h3 id="newshosting-harvey-s-recommendation-"><a href="https://www.newshosting.com/partners/?a_aid=harvey-delaney&amp;a_bid=5ecfe99b">Newshosting</a> (Harvey's Recommendation)</h3><!--kg-card-begin: html--><img src="https://blog.harveydelaney.com/content/images/2020/05/image.png" alt="Configuring your Usenet Provider and Indexer with Sonarr/Radarr on Unraid"><!--kg-card-end: html--><p>Newshosting is my personal choice and what I'll be using for this article. It is one of the most popular Usenet providers out there (and for good reason). They have been in the game since 1999 and have a super-long retention period (4299 days), an easy-to-use web interface, unlimited downloads and uncapped speeds. Another plus is that it integrates seamlessly with Sonarr and Radarr.</p><p>On certain Newshosting plans, a zero-log VPN service called <a href="https://privado.io/">PrivadoVPN</a> comes for free. 
I've been using PrivadoVPN and have found the speeds, available regions and VPN desktop app to be top notch.</p><p><strong><a href="https://www.usenetserver.com/partners?a_aid=harvey-delaney&amp;a_bid=5725b6ed">I'm using the special discounted Newshosting <s>$12.95 USD</s> $7.95 USD monthly subscription plan - exclusive to this article!</a></strong></p><p>If you're only looking to give Usenet a go, I highly recommend using <a href="https://www.newshosting.com/?a_aid=harvey-delaney">Newshosting's free 30GB 14 day trial.</a></p><p>Two other providers I've used before and would recommend are:</p><ul><li><a href="https://signup.easynews.com/best-usenet-search/?a_aid=harvey-delaney&amp;a_bid=ef2f9ea1">Easynews - <s>$14.95</s> $7.50/month (7 day free trial included)</a></li><li><a href="https://www.usenetserver.com/partners?a_aid=harvey-delaney&amp;a_bid=5725b6ed">UsenetServer - <s>$9.99</s> $7.95/month (14 day free trial included)</a></li></ul><p><a href="https://blog.harveydelaney.com/choosing-a-usenet-provider/">Read this article to learn more about choosing the best Usenet provider for you.</a></p><h1 id="recommended-usenet-indexers">Recommended Usenet Indexers</h1><h3 id="nzbgeek-harvey-s-pick-"><a href="https://nzbgeek.info/">NZBGeek</a> (Harvey's Pick)</h3><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/04/image-6.png" class="kg-image" alt="Configuring your Usenet Provider and Indexer with Sonarr/Radarr on Unraid"></figure><p>NZBGeek is my personal choice - I chose it because it was on sale for Black Friday, as well as having good reviews and a good reputation for quality indexes.</p><p>Other indexers I would recommend are: </p><ul><li><a href="https://ninjacentral.co.za/">NinjaCentral</a></li><li><a href="https://www.miatrix.com/">Miatrix</a></li><li><a href="https://www.gingadaddy.com/">GingaDaddy</a></li><li><a href="https://drunkenslug.com/">DrunkenSlug</a> (if you can get a referral)</li></ul><h1 
id="usenet-downloaders">Usenet Downloaders</h1><p>Sonarr/Radarr can be configured to use a <a href="https://github.com/Sonarr/Sonarr/wiki/Supported-DownloadClients">number of download clients</a>. Sonarr/Radarr support four different Usenet clients:</p><ul><li><a href="http://sabnzbd.org/">Sabnzbd</a></li><li><a href="https://nzbget.net/">Nzbget</a></li><li>Pneumatic</li><li>UsenetBlackhole</li></ul><p>I was tossing up between <strong>Sabnzbd </strong>and <strong>NZBGet </strong>(due to both having a high amount of features/support/community). I ended up going with NZBGet as I preferred its UI slightly more, and will be using it for this article.</p><h1 id="unraid-setup">UnRAID Setup</h1><p>In UnRAID, navigate to <strong>Plugins</strong> and open the <strong>Community Applications</strong> plugin (assumed to have been installed already). Search for <code>nzbget</code> and click <code>Install</code>:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2019/12/image.png" class="kg-image" alt="Configuring your Usenet Provider and Indexer with Sonarr/Radarr on Unraid"></figure><p>Use the default port (6789), set <code>Host Path 2</code> to <code>/mnt/user/Downloads/NZB</code> and hit <code>Done</code>: </p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2019/12/image-1.png" class="kg-image" alt="Configuring your Usenet Provider and Indexer with Sonarr/Radarr on Unraid"></figure><p>Once the Docker image has been downloaded and the container set up, open NZBGet in your browser. You'll be prompted for a username and password, which are:</p><ul><li>Username: <code>nzbget</code> </li><li>Password: <code>tegbzn6789</code></li></ul><p>Then in the top menu select <code>Settings</code>. 
Then select News-Servers on the left: </p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2019/12/image-2.png" class="kg-image" alt="Configuring your Usenet Provider and Indexer with Sonarr/Radarr on Unraid"></figure><p>Your Usenet provider should provide four essential pieces of information:</p><ul><li>Server address</li><li>Port (usually <code>119</code> for unencrypted and <code>563</code> for encrypted)</li><li>Username</li><li>Password</li></ul><p>All four of these details were available in Newshosting immediately after I joined the monthly subscription plan. Enter them under the correct fields:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-3.png" class="kg-image" alt="Configuring your Usenet Provider and Indexer with Sonarr/Radarr on Unraid"></figure><p>Set <strong>Encryption</strong> to <strong>Yes </strong>and change the port from <strong>119 -&gt; 563 </strong>if you want to encrypt your Usenet download traffic (highly recommended).</p><p>Test the connection, then click save all changes in the bottom left and you'll be prompted to restart NZBGet. Restart and NZBGet will be all ready to use!</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/05/image-2.png" class="kg-image" alt="Configuring your Usenet Provider and Indexer with Sonarr/Radarr on Unraid"></figure><p>Since we're configuring both <strong>Sonarr </strong>and <strong>Radarr </strong>to work with NZBGet, we need to add two categories in NZBGet. Do this by going to <code>Settings -&gt; Categories -&gt; Add Another Category</code>. 
Enter <strong>Radarr </strong>in the <strong>Name </strong>field then click Save:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/04/image-33.png" class="kg-image" alt="Configuring your Usenet Provider and Indexer with Sonarr/Radarr on Unraid"></figure><p><strong>Repeat this for Sonarr as Category2.</strong></p><h1 id="radarr-and-sonarr-setup">Radarr (and Sonarr) Setup</h1><p>With our Usenet download client and indexer (doesn't require setup) all ready, now we just have to configure Sonarr/Radarr to use them.</p><p>I'll just be going through how to set Radarr up - Sonarr will be identical, you'll just have to repeat the steps.</p><h2 id="add-indexer">Add Indexer</h2><p>Step one is to add the indexer. Navigate to <code>Settings -&gt; Indexers -&gt; Add Indexer (+)</code>:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2019/12/image-3.png" class="kg-image" alt="Configuring your Usenet Provider and Indexer with Sonarr/Radarr on Unraid"></figure><p>Under <code>Usenet -&gt; Newznab -&gt; Presets</code>, find your indexer of choice (I use NZBGeek). 
Then simply add the API Key you got from your indexer, click <code>Test</code> and if everything is okay, click Save:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2019/12/image-4.png" class="kg-image" alt="Configuring your Usenet Provider and Indexer with Sonarr/Radarr on Unraid"></figure><p>Now if you navigate to any content you have and perform a manual search - you will see a number of results from NZBGeek:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/04/nzbresults.jpg" class="kg-image" alt="Configuring your Usenet Provider and Indexer with Sonarr/Radarr on Unraid"></figure><h2 id="add-download-client">Add Download Client</h2><p>Navigate to <code>Settings -&gt; Download Client -&gt; Add new client (+)</code>:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/04/image-31.png" class="kg-image" alt="Configuring your Usenet Provider and Indexer with Sonarr/Radarr on Unraid"></figure><p>Then select <strong>NZBGet </strong>(first option):</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2019/12/image-7.png" class="kg-image" alt="Configuring your Usenet Provider and Indexer with Sonarr/Radarr on Unraid"></figure><p>Then in the popup, add:</p><ul><li><strong>Host:</strong> IP of your Unraid Server</li><li><strong>Port:</strong> 6789</li><li><strong>Username</strong>: <code>nzbget</code> </li><li><strong>Password</strong>: <code>tegbzn6789</code></li></ul><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/04/image-34.png" class="kg-image" alt="Configuring your Usenet Provider and Indexer with Sonarr/Radarr on Unraid"></figure><p><strong>Note: </strong>you'll notice that <code>Category</code> has <strong>Radarr</strong>. 
If you are setting up Sonarr, you need to make it <strong>Sonarr</strong>, which will map to the correct category that we added to NZBGet previously.</p><p>Test it and, if successful, click <strong>Save</strong>.</p><p>That's it! You've set up your Usenet indexer and Usenet download client.</p><p><em><strong>NOTE: Remember to repeat the above steps for Sonarr!</strong></em></p><h1 id="testing-it-out">Testing it out</h1><p>In Radarr, search for some new content and click <code>Add and Search</code>. After a few seconds, you should see your new media file start downloading in NZBGet: </p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/04/image-36.png" class="kg-image" alt="Configuring your Usenet Provider and Indexer with Sonarr/Radarr on Unraid"></figure><p>After the download is complete, NZBGet will move the file to the <code>/downloads/completed/radarr</code> directory. Radarr will then copy the file over to your <code>Media</code> directory automatically. 
You can then open up your Plex server, refresh your library and watch your new media content!</p>]]></content:encoded></item><item><title><![CDATA[Maintaining code formatting and quality automatically on your front-end projects using Prettier, ES/TSLint and StyleLint]]></title><description><![CDATA[Learn how to quickly set up Prettier, TSLint and StyleLint to enforce consistent code formatting and high code quality on your front-end TypeScript projects.]]></description><link>https://blog.harveydelaney.com/maintaining-code-formatting-and-quality-automatically/</link><guid isPermaLink="false">5dd87a881234610001cfd29e</guid><category><![CDATA[Front-end]]></category><category><![CDATA[typescript]]></category><dc:creator><![CDATA[Harvey Delaney]]></dc:creator><pubDate>Sun, 24 Nov 2019 08:04:53 GMT</pubDate><media:content url="https://blog.harveydelaney.com/content/images/2019/12/prettier-tslint-stylelint-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.harveydelaney.com/content/images/2019/12/prettier-tslint-stylelint-1.jpg" alt="Maintaining code formatting and quality automatically on your front-end projects using Prettier, ES/TSLint and StyleLint"><p>At work, I've spun up a handful of React projects that have a number of engineers working on them (onshore and offshore). When reviewing code, I'm very particular about how code is formatted and its quality. This is probably an artifact of me always using formatters and linters in my projects, so it's become a personal standard of mine to make sure these things are done correctly. </p><p>While reviewing pull requests, I find myself sometimes pointing out code formatting issues (inconsistent spacing, incorrect indentation, etc) and quality issues (using <code>any</code>, using <code>var</code>, etc). I try to avoid doing so as I don't think it should be part of the code review - everyone knows how frustrating it is to have formatting issues pointed out. 
So I began thinking of ways that I could encourage adherence to formatting/quality standards without having to point them out or fix them myself. The answer was to automatically <strong>enforce</strong> (not just recommend) them by using a collection of formatters/linters.</p><h1 id="prettier"><a href="https://prettier.io/">Prettier</a></h1><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2019/11/image-9.png" class="kg-image" alt="Maintaining code formatting and quality automatically on your front-end projects using Prettier, ES/TSLint and StyleLint"></figure><p>My absolute favourite code formatter is Prettier! I've used Prettier on every single JavaScript project I've created at home and at work. It's an opinionated code formatter whose opinion I happen to strongly agree with. It formats all code into a very clean, easy-to-read format. To start off with, I just used the Prettier VSCode extension, which I could run over the code I was working on. Sometimes I forgot to run it, so I moved on to running it on file save, and added a script in <code>package.json</code> to run it over all my files.</p><p>From their website, Prettier is described as:</p><blockquote>Prettier is an opinionated code formatter. It enforces a consistent style by parsing your code and re-printing it with its own rules that take the maximum line length into account, wrapping code when necessary.</blockquote><p>It has support for JavaScript, JSX, Angular, Vue, Flow, TypeScript, CSS, Less, SCSS, HTML, JSON, GraphQL, Markdown and YAML.</p><h2 id="installation-and-configuration">Installation and configuration</h2><p>Installing Prettier is simple, run:</p><pre><code>npm i -D prettier</code></pre><p>There are a number of formatting rules that Prettier provides; I'm a fan of just using the default ruleset. 
However, I still think it's handy to have a Prettier config file just to make sure everyone is using the same Prettier config, and in case new formatting rules need to be added. Create a <code>.prettierrc.json</code> file, then place the following in it:</p><pre><code>{
  "trailingComma": "es5",
  "tabWidth": 4,
  "semi": false,
  "singleQuote": true
}</code></pre><p>I would recommend installing the Prettier VSCode extension so you can see formatting errors in real-time.</p><p>Now we have to add an NPM script to run Prettier over all of our files and a script that runs a check over our files. In <code>package.json</code> add:</p><pre><code>"prettier": "prettier --write src/**/*.{js,tsx,scss}"
"prettier:check": "prettier --list-different src/**/*.{js,tsx,scss}"
</code></pre><p>Make sure to update these scripts to run on the file types that you're using in your project. I use <code>prettier:check</code> in the pre-commit hook (to be discussed later) and have the <code>prettier</code> option to just allow people to run Prettier over all their code and do its magic.</p><h1 id="es-tslint">ES/TSLint</h1><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2019/11/image-12.png" class="kg-image" alt="Maintaining code formatting and quality automatically on your front-end projects using Prettier, ES/TSLint and StyleLint"></figure><p>For a majority of my projects I use <a href="https://palantir.github.io/tslint/">TSLint</a> by Palantir. However, I would use ESLint for linting any JavaScript I write. TSLint is described as:</p><blockquote>TSLint is an extensible static analysis tool that checks <a href="http://www.typescriptlang.org/">TypeScript</a> code for readability, maintainability, and functionality errors. It is widely supported across modern editors &amp; build systems and can be customized with your own lint rules, configurations, and formatters.</blockquote><p>TSLint can be used as a code formatter, but I prefer using Prettier for that. We will only be enabling rules that pertain to code quality issues. Examples of these issues might be that the <code>any</code> type is being used, a <code>console.log</code> is present, or <code>let</code> is being used when <code>const</code> should be.</p><h2 id="installation-and-configuration-1">Installation and configuration</h2><p>Install TSLint by running:</p><pre><code>npm i -D tslint</code></pre><p>Then create a <code>tslint.json</code> file and place the following config inside:</p><pre><code class="language-json">{
  "defaultSeverity": "error",
  "extends": ["tslint:recommended"],
  "jsRules": {},
  "rules": {
    "object-literal-sort-keys": false,
    "ordered-imports": false,
    "quotemark": [true, "single", "jsx-double"],
    "trailing-comma": false,
    "semicolon": [true, "always", "ignore-bound-class-methods"],
    "arrow-parens": [true, "ban-single-arg-parens"],
    "interface-over-type-literal": false
  },
  "rulesDirectory": []
}
</code></pre><p>Obviously, feel free to add/remove rules as you prefer, but make sure that the rules here don't conflict with Prettier. For example, the <code>quotemark</code> rule by default wants double quotes, whereas we've configured Prettier to use single quotes.</p><p>We also need to introduce an NPM script to run TSLint over all our files. In <code>package.json</code> scripts, add:</p><pre><code>"tslint:check": "tslint -c tslint.json 'src/**/*.{ts,tsx}'"</code></pre><p>I would also recommend installing the TSLint VSCode extension, again, to see linting errors in real-time.</p><h1 id="stylelint"><a href="https://stylelint.io/">StyleLint</a></h1><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2019/11/image-11.png" class="kg-image" alt="Maintaining code formatting and quality automatically on your front-end projects using Prettier, ES/TSLint and StyleLint"></figure><p>I'm a huge fan of using CSS preprocessors like Sass over CSS (I just cannot live without nesting) and I was using <a href="https://github.com/sasstools/sass-lint">SassLint</a> for all my linting needs. SassLint worked fine, but I wanted a more generic style linter that could be used across multiple stylesheet types. A bit of research led me to StyleLint. I haven't looked back since.</p><p>It has support for SCSS, Sass, Less and SugarSS. It's described as being:</p><blockquote>A mighty, modern linter that helps you avoid errors and enforce conventions in your styles.</blockquote><p>StyleLint provides hundreds of options that you can configure it with. It also has a number of great "base" rule-sets that you can simply install from NPM, add one line in <code>package.json</code> and have everything just work. For example, there are base rule-sets for CSS, SCSS and LESS. 
It's also super easy to extend and override these rule-sets.</p><h2 id="installation-and-configuration-2">Installation and configuration</h2><p>We'll just be configuring StyleLint for <code>css</code>. Install StyleLint using:</p><pre><code>npm i -D stylelint</code></pre><p>The easiest way to get up and running with StyleLint is to extend the basic CSS rule-set. This requires an additional install:</p><pre><code>npm i -D stylelint-config-recommended</code></pre><p>Now create a <code>.stylelintrc.json</code> and in it place:</p><pre><code class="language-json">{
  "extends": "stylelint-config-recommended"
}</code></pre><p>You can read all the basic rules provided by this rule-set <a href="https://github.com/stylelint/stylelint/blob/master/docs/user-guide/rules.md#possible-errors">here</a>. You can also easily extend/override rules by adding a rules key like:</p><pre><code class="language-json">{
  "extends": "stylelint-config-recommended",
  "rules": {
    "at-rule-no-unknown": [ true, {
      "ignoreAtRules": [
        "extends"
      ]
    }],
    "block-no-empty": null,
    "unit-whitelist": ["em", "rem", "s"]
  }
}</code></pre><p>Again, StyleLint has a VSCode extension I recommend installing. Now let's add a <code>package.json</code> script:</p><pre><code>"stylelint:check": "stylelint \"src/**/*.css\""</code></pre><h1 id="pre-commit-hooks">Pre-commit hooks</h1><p>With Prettier, ES/TSLint and StyleLint installed and configured, we just need a way to enforce that these have been run before the pull request is created!</p><p>An excellent NPM library that was recommended to me by a co-worker is called <a href="https://www.npmjs.com/package/pre-commit">pre-commit</a>. <code>pre-commit</code> allows you to run an NPM script before a git commit is performed. When this script is run and an error is thrown, the commit won't occur. This is incredibly useful and I've utilised it to enforce that engineers have run Prettier, ES/TSLint and StyleLint before submitting their code for review.</p><p>To get it working we first have to install it: </p><pre><code>npm i -D pre-commit</code></pre><p>Before we use <code>pre-commit</code>, create a new <code>package.json</code> script:</p><pre><code>"lint-all": "npm run prettier:check &amp;&amp; npm run tslint:check &amp;&amp; npm run stylelint:check"</code></pre><p>This script will run all of our formatters/linters sequentially and output any errors. This is the script we will get pre-commit to run.</p><p>In <code>package.json</code> add:</p><pre><code>  "pre-commit": [
    "lint-all"
],</code></pre><p>Now, anytime a <code>git commit</code> is run, Prettier, TSLint and StyleLint will all be run over your code. If there are any errors from them, the commit will fail and the errors will be output. Only once all errors are resolved will the commit work.</p><p>Hope this article helps you maintain your front-end project's code quality + formatting without having to point these out in pull requests!</p>]]></content:encoded></item><item><title><![CDATA[Creating a React Component Library using Rollup, Typescript, Sass and Storybook]]></title><description><![CDATA[Learn how you can quickly and easily set up your own React Component library using Rollup, TypeScript, Sass and Storybook.]]></description><link>https://blog.harveydelaney.com/creating-your-own-react-component-library/</link><guid isPermaLink="false">5dca718c0c13ba0001b95585</guid><category><![CDATA[React]]></category><category><![CDATA[typescript]]></category><category><![CDATA[Front-end]]></category><dc:creator><![CDATA[Harvey Delaney]]></dc:creator><pubDate>Sat, 23 Nov 2019 01:45:06 GMT</pubDate><media:content url="https://blog.harveydelaney.com/content/images/2020/07/react-component-library-2.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.harveydelaney.com/content/images/2020/07/react-component-library-2.png" alt="Creating a React Component Library using Rollup, Typescript, Sass and Storybook"><p>At work we have a number of front-end projects that are worked on by different project teams. Each of these projects was designed by our internal designer, and they had a lot in common (same input and button designs, etc). Up until now, we had no shared style-sheets or components. Components across projects were created from scratch, rewritten or copy-pasted over each time. I saw a real need for an internal component library. 
This component library would enable teams to pull down an NPM package, import components, provide some props and have their components ready to use.</p><p>I strongly believed the component library would help speed up front-end development by removing the need to write/copy-paste components over to new projects. I thought it would also be great to learn how to create, manage and help adopt a component library used in production across multiple engineering teams and projects. So I created a React component library! </p><p>I wrote this article to share my experience around researching and building the component library. It'll also cover how you can build your own React component library!</p><h3 id="table-of-contents">Table of Contents</h3><!--kg-card-begin: html--><div style="
    background-color: #f5f5f5;
    padding-top: 15px;
    padding-bottom: 15px;
    border: 1px solid #7b7b7b;
  ">
  <ul>
    <li>
      <a href="#component-library-overview">Component Library Overview </a>
    </li>
    <li>
      <a href="#component-library-research">Component Library Research </a>
      <ul>
        <li>
          <a href="#existing-component-libraries">Existing Component Libraries</a>
        </li>
        <li>
          <a href="#component-library-creation-tools">Component Library Creation Tools</a>
        </li>
      </ul>
    </li>
    <li>
      <a href="#custom-library-technology-choices">Custom Library Technology Choices</a>
    </li>
    <li>
      <a href="#creating-the-component-library">Creating the Component Library</a>
      <ul>
        <li>
          <a href="#adding-typescript">Adding TypeScript</a>
        </li>
        <li>
          <a href="#adding-rollup">Adding Rollup</a>
        </li>
        <li>
          <a href="#adding-storybook">Adding Storybook</a>
        </li>
        <li>
          <a href="#adding-jest-and-react-testing-library">Adding Jest and React Testing Library</a>
        </li>
        <li><a href="#final-npm-config">Final NPM Config</a></li>
      </ul>
    </li>
    <li>
      <a href="#publishing-the-component-library">Publishing the Component Library</a>
    </li>
    <li>
        <a href="#using-the-component-library">Using the Component Library</a>
          <ul>
            <li>
              <a href="#installing-from-npm-registry">Installing from NPM Registry</a>
            </li>
            <li>
              <a href="#installing-locally">Installing Locally</a>
            </li>
        </ul>
    </li>
    <li><a href="#adding-more-components">Adding More Components</a></li>
    <li>
      <a href="#introducing-code-splitting-optional-">Introducing Code Splitting</a>
    </li>
    <li>
      <a href="#further-steps">Further steps</a>
    </li>
  </ul>
</div>
<!--kg-card-end: html--><h1 id="component-library-overview">Component Library Overview</h1><p>There were a number of requirements I had for the component library. It should:</p><ul><li>Use React</li><li>Use Sass</li><li>Use TypeScript (and bundle/export TypeScript types)</li><li>House a number of components</li><li>Have each component thoroughly tested (using Jest/React Testing Library)</li><li>Bundle and transpile the components and be able to be published to an internal (or external) NPM registry as a single package</li><li>Have components that are able to be displayed and interacted with on an internal (or external) website without having to set the library up locally</li><li>Be versionable using <a href="https://semver.org/">Semantic Versioning</a></li></ul><h2 id="just-give-me-the-component-library-code-">Just give me the component library code!</h2><p>Sometimes it's just easier to clone a repository as opposed to following along with an article and creating everything from scratch, I get it. 
For your convenience I've created a GitHub repository which has all the configuration needed for creating your own component library:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/HarveyD/react-component-library"><div class="kg-bookmark-content"><div class="kg-bookmark-title">HarveyD/react-component-library</div><div class="kg-bookmark-description">Contribute to HarveyD/react-component-library development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/favicon.ico" alt="Creating a React Component Library using Rollup, Typescript, Sass and Storybook"><span class="kg-bookmark-author">HarveyD</span><span class="kg-bookmark-publisher">GitHub</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://avatars1.githubusercontent.com/u/5586128?s=400&amp;v=4" alt="Creating a React Component Library using Rollup, Typescript, Sass and Storybook"></div></a></figure><p>I've created branches for different variants of the component library that people have requested:</p><ul><li><a href="https://github.com/HarveyD/react-component-library/tree/styled-components">With Styled Components</a></li><li><a href="https://github.com/HarveyD/react-component-library/tree/webpack">With Webpack</a></li><li><a href="https://github.com/HarveyD/react-component-library/tree/code-splitting">With Code Splitting</a></li></ul><p>Give it a star if it helps you out :). If you feel like it can be improved, please raise a pull request or a GitHub issue!</p><h1 id="component-library-research">Component Library Research</h1><h2 id="existing-component-libraries">Existing Component Libraries</h2><p>Before I created our own component library, I had a look around to see what other React component libraries were out there and whether we could use them. 
It turns out there are quite a few great libraries, such as:</p><ul><li><a href="https://react-bootstrap.github.io/">React Bootstrap</a></li><li><a href="https://github.com/palantir/blueprint">Blueprint</a></li><li><a href="https://material-ui.com/">Material UI</a></li><li><a href="https://bitbucket.org/atlassian/atlaskit-mk-2/src/master/">Atlaskit</a></li><li><a href="https://github.com/rebassjs/rebass">Rebass</a></li><li><a href="https://www.codeinwp.com/blog/react-ui-component-libraries-frameworks/">and many more...</a></li></ul><p><a href="https://www.deanpham.com/">Our designer</a> had created the designs for our components with specific styles that none of these libraries could easily achieve without us going into the internals of the components and altering them. So I decided it would be better to create our own components.</p><h2 id="component-library-creation-tools">Component Library Creation Tools</h2><p>I then had a look around at a number of React project skeletons I could use to bootstrap the component library. Here are the libraries I considered:</p><ul><li><a href="https://www.google.com/search?q=create+react+app&amp;rlz=1C1CHBF_en-GBAU730AU730&amp;oq=create+react+app&amp;aqs=chrome..69i57.1911j0j7&amp;sourceid=chrome&amp;ie=UTF-8">Create React App</a> - "Set up a modern web app by running one command"</li><li><a href="https://www.npmjs.com/package/create-react-library">Create React Library</a> - "CLI for creating reusable react libraries"</li><li><a href="https://github.com/insin/nwb">NWB</a> - "a toolkit for React, Preact, Inferno &amp; vanilla JS apps, React libraries and other npm modules for the web"</li></ul><p>All of these libraries had excellent documentation, support and a number of features that made it easy to get different flavours of React projects/libraries up and running. 
But there was always an issue I ran into that hindered my ability to configure the library to meet my requirements (since they abstracted config away).</p><p><strong>Create React App</strong> is designed for creating a web application, not a library. <strong>NWB </strong>was promising, but didn't have great support for TypeScript when I used it. <strong>Create React Library</strong> had inflexible config, making it tricky to get my library transforming and bundling the Sass files correctly.</p><p>Since none of the above matched the requirements I had for the component library, I decided to write all the config from scratch! This approach gave me much more control over the config of the project.</p><h2 id="custom-library-technology-choices">Custom Library Technology Choices</h2><p>Before I began building out the React component library, I needed to choose the languages and libraries to use to build it. As you've seen in the article's title, I've chosen TypeScript, Sass, Rollup and Storybook. Here's my rationale behind choosing them.</p><h3 id="typescript"><a href="https://www.typescriptlang.org/">TypeScript</a></h3><p>TypeScript is my go-to language for all my front-end projects. TypeScript provides type safety over your functions (and components). I've found that this type safety has helped me:</p><ul><li>Catch would-be runtime errors at compile time</li><li>Understand how to use functions/components without looking at the code itself</li><li>Write code quicker (thanks to my IDE's autocomplete)</li><li>Refactor code faster and with more confidence</li><li>Write code that is easier to read and understand</li></ul><p>Building the component library using TypeScript allows you to easily bundle the types of your components for no extra work! 
Anyone who is using TypeScript and installs/uses the library will be eternally grateful to you (it is also kind of expected nowadays).</p><p>There is an overhead in creating types/interfaces for all your functions/components, but it is something that will save you a lot of time later down the track. </p><h3 id="sass"><a href="https://sass-lang.com/">Sass</a></h3><p>Sass is my go-to library for writing CSS. It's a CSS pre-processor that augments CSS with a number of handy features. It allows you to write CSS with variables, nesting, mixins, loops, functions, imports and more! The features I use most are <strong>variables </strong>and <strong>nesting</strong>.</p><p>Sass has always helped me write CSS that's quicker to produce, less verbose, more maintainable and less repetitive.</p><p>If you don't want to use Sass, I'll also be showing you how to use <a href="https://github.com/css-modules/css-modules">CSS Modules</a>, <a href="http://lesscss.org/">LESS</a>, or <a href="https://stylus-lang.com/">Stylus</a> to build your component library. </p><p>If you want to use <code>styled-components</code>, <a href="https://github.com/HarveyD/react-component-library/tree/styled-components">check out this branch of my component library repo</a>.</p><h3 id="rollup"><a href="https://rollupjs.org/guide/en/">Rollup</a></h3><p>My choice of JavaScript module bundler was between <a href="https://webpack.js.org/">Webpack</a> and Rollup. I decided to use Rollup for the component library after researching the differences between Webpack and Rollup. <a href="https://medium.com/webpack/webpack-and-rollup-the-same-but-different-a41ad427058c">This article</a> goes into depth about these differences. The take-home is that:</p><blockquote>Use Webpack for apps, and Rollup for libraries</blockquote><p>This isn't a hard and fast rule, it's more of a guideline (which I have opted to follow). 
Rollup can be used to build apps and <a href="https://webpack.js.org/guides/author-libraries/">Webpack can be used to build libraries</a>.</p><p>I also recommend reading <a href="https://medium.com/jsdownunder/rollup-vs-webpack-javascript-bundling-in-2018-b35758a2268">this Medium article</a> and <a href="https://stackoverflow.com/questions/43219030/what-is-flat-bundling-and-why-is-rollup-better-at-this-than-webpack">this Stack Overflow post</a> that further discuss the differences between Rollup and Webpack.</p><h3 id="storybook"><a href="https://storybook.js.org/">Storybook</a></h3><p>I found out about Storybook in an interview I had with a web agency a few years ago. They explained that they were using Storybook to help them create, experiment with and display their React components. After the interview, I did my own research and experimentation with Storybook and fell in love with the tool. Storybook felt like a perfect fit for the component library.</p><p>Storybook provides a sand-boxed environment for your front-end projects that helps you develop your components in isolation. The sandbox environment encourages engineers to create components that aren't tied to logic within your application (e.g. coupled with and reliant on a Redux store). This results in components that are more generic and more reusable.</p><p>Storybook also has the ability to be <a href="https://storybook.js.org/docs/basics/exporting-storybook/">exported as a static webpage</a>. You can easily host these files, which allows everyone in your organisation to see how components look/interact without having to clone and set up the repository locally. This is perfect for helping out our product manager and designer friends!</p><p>For example, I've exported the component library Storybook files built in this article and currently host them on my <a href="https://www.harveydelaney.com/react-component-library">Express server here</a>. 
Here it is in an iframe:</p><!--kg-card-begin: html--><iframe src="https://www.harveydelaney.com/react-component-library" width="100%" height="640px"></iframe><!--kg-card-end: html--><h1 id="creating-the-component-library">Creating the Component Library</h1><p>First, we have to initialise NPM (<code>npm init</code>), set the <strong>name</strong> field to <code>react-component-library</code> and initialise Git (<code>git init</code>).</p><p>We're creating a React component library and need React to help us build our components:</p><pre><code>npm i --save-dev react react-dom @types/react</code></pre><p>We will also configure <code>react</code> and <code>react-dom</code> as <a href="https://nodejs.org/es/blog/npm/peer-dependencies/">Peer Dependencies</a>. Having them as peer dependencies means that once our library is installed in another project, they won't be automatically installed as dependencies. Instead, NPM will provide a soft assertion by outputting a warning if a matching version of the dependencies hasn't been installed alongside our component library. </p><p>We will later introduce a Rollup plugin called <a href="https://www.npmjs.com/package/rollup-plugin-peer-deps-external">rollup-plugin-peer-deps-external</a>. This plugin prevents packages listed in <code>peerDependencies</code> from being bundled with our component library (reducing our bundle size by <a href="https://reactjs.org/blog/2017/09/26/react-v16.0.html#reduced-file-size">109kb</a>).</p><p>Create an entry in <code>package.json</code> called <code>peerDependencies</code> and add <code>react</code> and <code>react-dom</code>. I've specified the version target as <code>&gt;=16.8.0</code> as we want a version of React that <a href="https://reactjs.org/docs/hooks-intro.html">supports React Hooks</a> (released in 16.8.0):</p><figure class="kg-card kg-code-card"><pre><code class="language-json">...
"peerDependencies": {
  "react": "&gt;=16.8.0",
  "react-dom": "&gt;=16.8.0"
}
...</code></pre><figcaption>package.json</figcaption></figure><p>Now let's create the skeleton of our library by creating the following files and directories:</p><figure class="kg-card kg-code-card"><pre><code>.storybook/
  main.js
.gitignore
package.json
rollup.config.js
tsconfig.json
jest.config.js
jest.setup.ts
src/
  TestComponent/
    TestComponent.tsx
    TestComponent.types.ts
    TestComponent.scss
    TestComponent.stories.tsx
    TestComponent.test.ts
  index.ts
</code></pre><figcaption>Folder Structure</figcaption></figure><p>We will output our transpiled, bundled files within a <code>build</code> directory.</p><p>In <code>TestComponent.tsx</code> we will have a very simple component: </p><figure class="kg-card kg-code-card"><pre><code class="language-typescript">import React from "react";

import { TestComponentProps } from "./TestComponent.types";

import "./TestComponent.scss";

const TestComponent: React.FC&lt;TestComponentProps&gt; = ({ theme }) =&gt; (
  &lt;div
    data-testid="test-component"
    className={`test-component test-component-${theme}`}
  &gt;
    &lt;h1 className="heading"&gt;I'm the test component&lt;/h1&gt;
    &lt;h2&gt;Made with love by Harvey&lt;/h2&gt;
  &lt;/div&gt;
);

export default TestComponent;
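
// Illustrative usage from a consuming app once the library is published.
// The package name "react-component-library" is the name we set via npm init:
//
//   import { TestComponent } from "react-component-library";
//   <TestComponent theme="primary" />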
</code></pre><figcaption>src/TestComponent/TestComponent.tsx</figcaption></figure><p>The prop types are defined in (and imported from) <code>TestComponent.types.ts</code>:</p><pre><code class="language-typescript">export interface TestComponentProps {
  theme: "primary" | "secondary";
}
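// Because "theme" is a union of string literals rather than a plain string,
// consumers get a compile-time error for unsupported values
// such as theme="tertiary".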
</code></pre><p>And in <code>TestComponent.scss</code> we will have:</p><figure class="kg-card kg-code-card"><pre><code class="language-scss">.test-component {
    background-color: white;
    border: 1px solid black;
    padding: 16px;
    width: 360px;
    text-align: center;
    
    .heading {
        font-size: 64px;
    }

    &amp;.test-component-secondary {
        background-color: black;
        color: white;
    }
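
    // Note: the nesting and the "&" parent selector above are Sass features;
    // they compile to flat CSS rules such as ".test-component .heading" and
    // ".test-component.test-component-secondary".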
}</code></pre><figcaption>src/TestComponent/TestComponent.scss</figcaption></figure><p><code>src/index.ts</code> will be used as the entry point for Rollup. We will use a pattern called <strong><a href="https://github.com/basarat/typescript-book/blob/master/docs/tips/barrel.md">Barrel Exports</a></strong> to expose our components in the entry point. We do this by importing, then exporting all our components. Components exported here will be bundled by Rollup. In this file add:</p><figure class="kg-card kg-code-card"><pre><code class="language-typescript">import TestComponent from "./TestComponent/TestComponent";

export { TestComponent };
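
// Future components are exposed the same way ("Button" here is hypothetical):
//
//   import Button from "./Button/Button";
//   export { TestComponent, Button };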
</code></pre><figcaption>src/index.ts</figcaption></figure><p>We will leave <code>TestComponent.stories.tsx</code> and <code>TestComponent.test.ts</code> empty for now.</p><h2 id="adding-typescript">Adding TypeScript</h2><p>Run <code>npm i -D typescript</code> and in <code>tsconfig.json</code> add:</p><figure class="kg-card kg-code-card"><pre><code class="language-json">{
  "compilerOptions": {
    "declaration": true,
    "declarationDir": "build",
    "module": "esnext",
    "target": "es5",
    "lib": ["es6", "dom", "es2016", "es2017"],
    "sourceMap": true,
    "jsx": "react",
    "moduleResolution": "node",
    "allowSyntheticDefaultImports": true,
    "esModuleInterop": true
  },
  "include": ["src/**/*"],
  "exclude": [
    "node_modules",
    "build",
    "src/**/*.stories.tsx",
    "src/**/*.test.tsx"
  ]
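  // (tsconfig.json tolerates // comments.) Stories and tests are excluded
  // above so that no type declarations are emitted for them into the
  // build output.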
}
</code></pre><figcaption>tsconfig.json</figcaption></figure><p>This is the config I recommend using. It's important to take note of <code>"declaration": true</code> and <code>"declarationDir": "build"</code>. This will generate and place the types of our components within our <code>build</code> folder. Feel free to adjust this config to your liking.</p><h2 id="adding-rollup">Adding Rollup</h2><p>Now we need to install Rollup in addition to some plugins required for our component library to be transpiled and bundled correctly:</p><pre><code>npm i -D rollup rollup-plugin-typescript2 @rollup/plugin-commonjs @rollup/plugin-node-resolve rollup-plugin-peer-deps-external rollup-plugin-postcss node-sass</code></pre><p>In <code>rollup.config.js</code> add: </p><figure class="kg-card kg-code-card"><pre><code class="language-javascript">import peerDepsExternal from "rollup-plugin-peer-deps-external";
import resolve from "@rollup/plugin-node-resolve";
import commonjs from "@rollup/plugin-commonjs";
import typescript from "rollup-plugin-typescript2";
import postcss from "rollup-plugin-postcss";

const packageJson = require("./package.json");

export default {
  input: "src/index.ts",
  output: [
    {
      file: packageJson.main,
      format: "cjs",
      sourcemap: true
    },
    {
      file: packageJson.module,
      format: "esm",
      sourcemap: true
    }
  ],
  plugins: [
    peerDepsExternal(),
    resolve(),
    commonjs(),
    typescript({ useTsconfigDeclarationDir: true }),
    postcss()
  ]
};
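
// To build, add a script to package.json; "rollup -c" picks up
// rollup.config.js by default:
//
//   "scripts": { "build": "rollup -c" }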
</code></pre><figcaption>rollup.config.js</figcaption></figure><p>Let's run through everything in this config file.</p><h3 id="input"><code>input</code> </h3><p>points to <code>src/index.ts</code>. Rollup will build up a dependency graph from this entry point and then bundle all the components that are imported/exported.</p><h3 id="output"><code>output</code> </h3><p>is an array with two objects, each specifying output config. We will be outputting two bundles in two different JavaScript module formats: </p><ul><li><strong>CommonJS - CJS</strong> </li><li><strong>ES Modules - ESM</strong> </li></ul><p>This is because we want to support tools that use CommonJS (Webpack, Node.js) and tools that work with ES Modules (Webpack 2+, Rollup). Read more about CJS, ESM and other output formats in <a href="https://rollupjs.org/guide/en/">Rollup's documentation</a>. </p><p>ES Modules have a <a href="https://exploringjs.com/es6/ch_modules.html#static-module-structure">static module structure</a>. This enables bundling tools that target ESM to perform <a href="https://webpack.js.org/guides/tree-shaking/">tree shaking</a>. Tree shaking is a process of dead code elimination where bundlers attempt to only bundle code that is actually being used. </p><p>CommonJS modules have a <a href="https://exploringjs.com/es6/ch_modules.html#static-module-structure">dynamic module structure</a>. This makes it difficult for bundling tools that target CommonJS to perform tree shaking. This means that even if only one component is imported from our library, all components will be bundled.</p><p>Read more about <a href="https://medium.com/better-programming/introduction-to-tree-shaking-e94e57db081e#cjs-vs.-esm">CJS, ESM and tree shaking here</a>.</p><p>We will import the filenames of our desired CommonJS and ES Modules index files from package.json. 
The <code>main</code> field in <code>package.json</code> points to our bundled CommonJS entry point and the <code>module</code> field points to our bundled ES Modules entry point. </p><p><em>If a tool can support ESM, it'll use <code>module</code> otherwise it'll use <code>main</code>. </em></p><p>We will be outputting all our bundles to the <code>build</code> directory. Add the <code>main</code> and <code>module</code> fields in <code>package.json</code> in addition to the <code>files</code> field to instruct NPM what files to include when our component library is installed as a dependency: </p><figure class="kg-card kg-code-card"><pre><code class="language-json">...
"main": "build/index.js",
"module": "build/index.es.js",
"files": ["build"],
...</code></pre><figcaption>package.json</figcaption></figure><h3 id="plugins"><code>plugins</code></h3><p>is an array of third-party Rollup plugins. The plugins I've included are ones that are required to bundle the component library. A complete list of plugins can be found <a href="https://github.com/rollup/plugins">here</a>. Let's go through all the plugins we're using:</p><ul><li><a href="https://www.npmjs.com/package/rollup-plugin-peer-deps-external">peerDepsExternal</a> (<code>rollup-plugin-peer-deps-external</code>) - prevents Rollup from bundling the peer dependencies we've defined in <code>package.json</code> (<code>react</code> and <code>react-dom</code>)</li><li> <a href="https://github.com/rollup/plugins/tree/master/packages/node-resolve">resolve</a> (<code>@rollup/plugin-node-resolve</code>) - efficiently bundles the third-party dependencies we've installed and use in <code>node_modules</code></li><li><a href="https://github.com/rollup/plugins/tree/master/packages/commonjs">commonjs</a> (<code>@rollup/plugin-commonjs</code>) - converts CommonJS modules (such as dependencies) into ES modules so Rollup can include them in the bundle</li><li><a href="https://github.com/rollup/plugins/tree/master/packages/typescript">typescript</a> (<code>rollup-plugin-typescript2</code>) - transpiles our TypeScript code into JavaScript. This plugin will use all the settings we have set in <code>tsconfig.json</code>. We set <code>"useTsconfigDeclarationDir": true</code> so that it outputs the <code>.d.ts</code> files in the directory specified in <code>tsconfig.json</code> </li><li><a href="https://github.com/egoist/rollup-plugin-postcss">postcss</a> (<code>rollup-plugin-postcss</code>) - transforms our Sass into CSS. In order to get this plugin working with Sass, we've installed <code>node-sass</code>. It also supports <strong>CSS Modules</strong>, <strong>LESS</strong> and <strong>Stylus</strong>. 
I recommend <a href="https://github.com/egoist/rollup-plugin-postcss">reading the documentation here</a> if you want to use a different CSS pre-processor and/or to learn all the other settings available.</li></ul><p>Other useful plugins you might want to add are: </p><ul><li><a href="https://github.com/rollup/plugins/tree/master/packages/image">@rollup/plugin-image</a> - import image files into your components</li><li><a href="https://github.com/rollup/plugins/tree/master/packages/json">@rollup/plugin-json</a> - import JSON files into your components</li><li><a href="https://www.npmjs.com/package/rollup-plugin-terser">rollup-plugin-terser</a> - minify the Rollup bundle</li></ul><h3 id="running-rollup">Running Rollup</h3><p>Now we need to add the first <code>package.json</code> script entry that will run our Rollup bundling process: </p><figure class="kg-card kg-code-card"><pre><code class="language-json">...
"scripts": {
  "build": "rollup -c"
}
...</code></pre><figcaption>package.json</figcaption></figure><p>The <code>-c</code> argument is short for <code>--config</code>, which accepts a Rollup config file name as a parameter. If no file is provided it'll attempt to use <code>rollup.config.js</code> within the same directory.</p><p>Now by running <code>npm run build</code> you should see Rollup do its thing and create a <code>/build</code> folder that will contain the compiled (CJS and ESM) component library, ready to be published to an NPM registry:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2019/11/image-7.png" class="kg-image" alt="Creating a React Component Library using Rollup, Typescript, Sass and Storybook"></figure><p><strong>Note: </strong>you might want to create a <code>.gitignore</code> file and add <code>node_modules</code> and <code>build</code> to it. </p><h2 id="adding-storybook">Adding <a href="https://storybook.js.org/">Storybook</a></h2><p><a href="https://storybook.js.org/docs/guides/guide-react/#manual-setup">We will be adding Storybook to our component library manually.</a></p><p>Install Storybook for React, its core dependencies (<a href="https://babeljs.io/">Babel</a>) and <a href="https://webpack.js.org/">Webpack</a> loaders:</p><pre><code>npm install --save-dev @storybook/react @babel/core babel-preset-react-app babel-loader sass-loader</code></pre><p>Storybook uses Webpack to serve and load JS modules. It ships with default config <a href="https://storybook.js.org/docs/configurations/default-config/">outlined here</a>. Since we are using Sass and TypeScript, we'll need to extend the default config with additional Webpack rules to get Storybook working with our library. </p><p>Yes, it's less than ideal that we have to configure both Webpack and Rollup for our component library. 
Until Storybook supports Rollup or Webpack becomes the recommended module bundler for JavaScript libraries, we'll have to stick with this!</p><p>Create <code>.storybook/main.js</code> and add:</p><figure class="kg-card kg-code-card"><pre><code class="language-javascript">const path = require("path");

module.exports = {
  stories: ["../src/**/*.stories.tsx"],
  // Add any Storybook addons you want here: https://storybook.js.org/addons/
  addons: [],
  webpackFinal: async (config) =&gt; {
    config.module.rules.push({
      test: /\.scss$/,
      use: ["style-loader", "css-loader", "sass-loader"],
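      // Loader order matters: Webpack applies loaders right-to-left, so Sass
      // is compiled first (sass-loader), resolved as CSS (css-loader), then
      // injected into the page as style tags (style-loader).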
      include: path.resolve(__dirname, "../")
    });

    config.module.rules.push({
      test: /\.(ts|tsx)$/,
      loader: require.resolve("babel-loader"),
      options: {
        presets: [["react-app", { flow: false, typescript: true }]]
      }
    });
    config.resolve.extensions.push(".ts", ".tsx");

    return config;
  }
};
</code></pre><figcaption>.storybook/main.js</figcaption></figure><p>Here, we're instructing Storybook where to find our stories, specifying our Storybook addons (none so far, but <a href="https://storybook.js.org/addons/">check out all the addons available</a>) and using <code>sass-loader</code> + <code>babel-loader</code> to compile our Sass + TypeScript files. </p><p>Storybook has <code>style-loader</code> and <code>css-loader</code> as dependencies already. That's why we only had to install <code>sass-loader</code>.</p><p>We also need to create <code>package.json</code> script entries for Storybook:</p><figure class="kg-card kg-code-card"><pre><code>...
  "scripts": {
    ...
    "storybook": "start-storybook -p 6006",
    "storybook:export": "build-storybook",
    ...
  }
...</code></pre><figcaption>package.json</figcaption></figure><p><code>storybook:export</code> is optional! It allows you to export Storybook as static files that can be served anywhere. This is helpful for showing off your components to non-technical members of your team. You can chuck the files into an S3 Bucket, use GitHub Pages or spin up a custom ExpressJS server - the choice is yours!</p><p>That's it! Storybook is now configured for our component library.</p><p>Now, we have to create stories for <code>TestComponent</code>. Open <code>src/TestComponent/TestComponent.stories.tsx</code> and place:</p><figure class="kg-card kg-code-card"><pre><code class="language-javascript">import React from "react";
import TestComponent from './TestComponent';

export default {
  title: "TestComponent"
};

export const Primary = () =&gt; &lt;TestComponent theme="primary" /&gt;;

export const Secondary = () =&gt; &lt;TestComponent theme="secondary" /&gt;;
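
// An optional extra story (a suggestion, using only the props shown above)
// that renders both variants together for quick visual comparison:
export const Both = () =&gt; (
  &lt;&gt;
    &lt;TestComponent theme="primary" /&gt;
    &lt;TestComponent theme="secondary" /&gt;
  &lt;/&gt;
);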
</code></pre><figcaption>src/TestComponent/TestComponent.stories.tsx</figcaption></figure><p>This is a very basic story, showing the two variants of our component.</p><p>Now run <code>npm run storybook</code> and Storybook will run its magic and load up your components at <code>http://localhost:6006</code>. Storybook should look like:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2019/11/image-8.png" class="kg-image" alt="Creating a React Component Library using Rollup, Typescript, Sass and Storybook"></figure><h2 id="adding-jest-and-react-testing-library">Adding <a href="https://jestjs.io/">Jest</a> and <a href="https://github.com/testing-library/react-testing-library">React Testing Library</a></h2><p>Maintaining a high level of test coverage on components is extremely important for your component library. We need to have confidence that when we make changes to our components, we won't break how the component is expected to behave in another project. For testing our React components, we will be using <strong>Jest </strong>and <strong>React Testing Library</strong>.</p><p>Start by installing Jest and React Testing Library:</p><pre><code class="language-bash">npm i --save-dev jest ts-jest @types/jest identity-obj-proxy @testing-library/react @testing-library/jest-dom</code></pre><p>In <code>jest.config.js</code> add:</p><figure class="kg-card kg-code-card"><pre><code class="language-javascript">module.exports = {
  roots: ["./src"],
  setupFilesAfterEnv: ["./jest.setup.ts"],
  moduleFileExtensions: ["ts", "tsx", "js"],
  testPathIgnorePatterns: ["node_modules/"],
  transform: {
    "^.+\\.tsx?$": "ts-jest"
  },
  testMatch: ["**/*.test.(ts|tsx)"],
  moduleNameMapper: {
    // Mocks out all these file formats when tests are run
    "\\.(jpg|ico|jpeg|png|gif|eot|otf|webp|svg|ttf|woff|woff2|mp4|webm|wav|mp3|m4a|aac|oga)$":
      "identity-obj-proxy",
    "\\.(css|less|scss|sass)$": "identity-obj-proxy"
  }
};
</code></pre><figcaption>jest.config.js</figcaption></figure><p>The creator of <code>react-testing-library</code> has also created a library called <a href="https://github.com/testing-library/jest-dom">jest-dom</a>. It  extends Jest, providing a number of helpful Jest matchers like <a href="https://github.com/testing-library/jest-dom#tohaveclass"><code>toHaveClass</code></a>, <code>toHaveAttribute</code>, <code>toBeDisabled</code> and so on. If you want to add it, in <code>jest.setup.ts</code> add:</p><figure class="kg-card kg-code-card"><pre><code class="language-javascript">import "@testing-library/jest-dom";
</code></pre><figcaption>jest.setup.ts</figcaption></figure><p>And add two new scripts in <code>package.json</code> to run the tests: </p><figure class="kg-card kg-code-card"><pre><code>...
"scripts":
    {
        ....
        "test": "jest",
        "test:watch": "jest --watch",
        ....
    }
...</code></pre><figcaption>package.json</figcaption></figure><p> <code>test</code> should be used on your CI/CD pipeline and <code>test:watch</code> should be used when you're running your tests locally (they will re-run whenever a file is changed).</p><p>Then in <code>TestComponent.test.tsx</code> create two simple tests:</p><figure class="kg-card kg-code-card"><pre><code class="language-typescript">import React from "react";
import { render } from "@testing-library/react";

import TestComponent from "./TestComponent";
import { TestComponentProps } from "./TestComponent.types";

describe("Test Component", () =&gt; {
  let props: TestComponentProps;

  beforeEach(() =&gt; {
    props = {
      theme: "primary"
    };
  });

  const renderComponent = () =&gt; render(&lt;TestComponent {...props} /&gt;);

  it("should have primary className with default props", () =&gt; {
    const { getByTestId } = renderComponent();

    const testComponent = getByTestId("test-component");

    expect(testComponent).toHaveClass("test-component-primary");
  });

  it("should have secondary className with theme set as secondary", () =&gt; {
    props.theme = "secondary";
    const { getByTestId } = renderComponent();

    const testComponent = getByTestId("test-component");

    expect(testComponent).toHaveClass("test-component-secondary");
  });
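
  // An optional extra assertion (a suggestion): jest-dom's toBeInTheDocument
  // matcher, loaded via jest.setup.ts above, verifies the component rendered.
  it("should render into the document", () =&gt; {
    const { getByTestId } = renderComponent();

    expect(getByTestId("test-component")).toBeInTheDocument();
  });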
});
</code></pre><figcaption>src/TestComponent/TestComponent.test.tsx</figcaption></figure><p>After running <code>npm run test:watch</code> you should see Jest run and output: </p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/04/image-39.png" class="kg-image" alt="Creating a React Component Library using Rollup, Typescript, Sass and Storybook"></figure><p>Indicating we have set it up correctly!</p><h2 id="final-npm-config">Final NPM Config</h2><p>After going through these steps, your <code>package.json</code> should look like:</p><pre><code class="language-json">{
  "name": "react-component-library",
  "version": "1.0.0",
  "main": "build/index.js",
  "module": "build/index.es.js",
  "files": [
    "build"
  ],
  "scripts": {
    "build": "rollup -c",
    "test": "jest",
    "test:watch": "jest --watch",
    "storybook": "start-storybook -p 6006",
    "storybook:export": "build-storybook"
  },
  "peerDependencies": {
    "react": "&gt;=16.8.0",
    "react-dom": "&gt;=16.8.0"
  },
  "devDependencies": {
    "@babel/core": "^7.9.0",
    "@rollup/plugin-commonjs": "^11.1.0",
    "@rollup/plugin-node-resolve": "^7.1.3",
    "@storybook/react": "^5.3.18",
    "@testing-library/jest-dom": "^5.5.0",
    "@testing-library/react": "^10.0.2",
    "@types/jest": "^24.0.24",
    "@types/react": "^16.9.12",
    "@types/react-dom": "^16.9.8",
    "babel-loader": "^8.1.0",
    "babel-preset-react-app": "^9.1.2",
    "identity-obj-proxy": "^3.0.0",
    "jest": "^24.9.0",
    "node-sass": "^4.14.1",
    "react": "^16.13.1",
    "react-dom": "^16.13.1",
    "rollup": "^1.27.4",
    "rollup-plugin-copy": "^3.3.0",
    "rollup-plugin-peer-deps-external": "^2.2.0",
    "rollup-plugin-postcss": "^3.1.2",
    "rollup-plugin-typescript2": "^0.27.0",
    "sass-loader": "^8.0.0",
    "ts-jest": "^24.2.0",
    "typescript": "^3.7.2"
  }
}</code></pre><p></p><h2 id="publishing-the-component-library">Publishing the Component Library</h2><p>To publish our library, we first have to make sure that we have run Rollup (<code>npm run build</code>) and the transpiled/bundled library code exists under <code>/build</code>. We can utilise the <a href="https://docs.npmjs.com/misc/scripts">NPM script</a> <code>prepublishOnly</code> to make sure build is run before publish occurs. Add the following to <code>package.json</code>:</p><figure class="kg-card kg-code-card"><pre><code class="language-json">...
"scripts": {
    ...
    "prepublishOnly": "npm run build"
  },
...</code></pre><figcaption>package.json</figcaption></figure><p>Next, we have to choose an NPM registry to which we want to upload our library. The easiest option is to use the <a href="https://www.npmjs.com/">public NPM registry</a>. Other private (self-hosted) alternatives are <a href="https://github.com/verdaccio/verdaccio">Verdaccio</a> and <a href="https://inedo.com/proget">ProGet</a>. Create an account, then using those credentials (<code>username</code> and <code>password</code>) log into NPM by running: <code>npm login</code>. </p><p>Next, make sure the name of the package in <code>package.json</code> is something you desire and the version is <code>1.0.0</code> (to begin with):</p><pre><code>{
  "name": "react-component-library",
  "version": "1.0.0",
  ...
}</code></pre><p>The name in this article uses <code>react-component-library</code> which has already been taken - so choose something unique. For example, I've used <a href="https://www.npmjs.com/package/harvey-component-library"><code>harvey-component-library</code></a>.</p><p>Now run:</p><pre><code>npm publish</code></pre><p>Any files that are under the directories outlined in <code>files</code> in package.json will be uploaded to the registry. For us, this will be all the files under <code>/build</code> which is the output from Rollup. You should see:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/09/Screen-Shot-2020-09-22-at-11.37.15-pm.png" class="kg-image" alt="Creating a React Component Library using Rollup, Typescript, Sass and Storybook"></figure><p>Anytime you want to update your library, you will have to increment the version in <code>package.json</code> following the <a href="https://semver.org/">Semantic Versioning</a> guide. For example, if I had a "patch" update, I'd change the version from <code>1.0.0</code> to <code>1.0.1</code> and run <code>npm publish</code>.</p><h2 id="using-the-component-library">Using the Component Library</h2><h3 id="installing-from-npm-registry">Installing from NPM Registry</h3><p>Let's say we've followed the above steps to publish our component library to NPM at: <a href="https://www.npmjs.com/package/harvey-component-library">https://www.npmjs.com/package/harvey-component-library</a>. We would install the component library as a dependency like we would any other NPM package:</p><pre><code>npm install --save harvey-component-library</code></pre><h3 id="installing-locally">Installing Locally</h3><p>We probably don't always want to have to publish and then install the component library to see new updates when using the library in other React projects. 
Fortunately, we <strong>don't </strong>have to publish the component library to an NPM registry before installing it and testing out our components.</p><p>Let's say you had a React project on your local machine called <code>harvey-test-app</code>. In <code>harvey-test-app</code> run (making sure the path to your component library is correct):</p><pre><code>npm i --save ../react-component-library</code></pre><p>This will install your local instance of the component library as a dependency of <code>harvey-test-app</code>! This establishes a symlink from the component library to the dependency in the consuming project. Anytime an update is made to the library, it will immediately be reflected in the consuming project. Read more about <a href="https://docs.npmjs.com/cli/link">NPM link here</a>.</p><h3 id="using-the-components">Using the Components</h3><p>For either option, you would then go into the project consuming the component library (<code>harvey-test-app</code>) and import our <code>TestComponent</code> like:</p><pre><code class="language-javascript">import React from "react";
import { TestComponent } from "react-component-library";

const App = () =&gt; (
    &lt;div className="app-container"&gt;
        &lt;h1&gt;Hello I'm consuming the component library&lt;/h1&gt;
        &lt;TestComponent theme="primary" /&gt;
    &lt;/div&gt;
);

export default App;</code></pre><p>Running <code>harvey-test-app</code> should now successfully render our <code>TestComponent</code> component!</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.harveydelaney.com/content/images/2020/02/component-library-usage.jpg" class="kg-image" alt="Creating a React Component Library using Rollup, Typescript, Sass and Storybook"><figcaption>CRA app consuming our TestComponent component</figcaption></figure><p>Check out this CodeSandbox snippet to see how easy it is to use the component!</p><figure class="kg-card kg-embed-card"><iframe width="1000" height="500" src="https://codesandbox.io/embed/harvey-component-library-example-y2b60?file=/src/App.js" style="width:1000px; height:500px; border:0; border-radius: 4px; overflow:hidden;" sandbox="allow-modals allow-forms allow-popups allow-scripts allow-same-origin"></iframe></figure><h2 id="adding-more-components">Adding More Components</h2><p>The way we've structured our project allows for any number of components to be added. You can copy the <code>TestComponent</code> folder, rename it and then build out your new component.</p><p><em>Update 26/04/2020: </em>I found the above steps to create a new component were cumbersome and time-consuming. I decided to create a Node script that helps generate new components:</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.harveydelaney.com/content/images/2020/04/generated-component.min.gif" class="kg-image" alt="Creating a React Component Library using Rollup, Typescript, Sass and Storybook"></figure><p><a href="https://github.com/HarveyD/react-component-library/blob/master/util/create-component.js">Check out <code>util/create-component.js</code> in my GitHub repository if you want to implement something similar.</a></p><p><strong>For both approaches, make sure you export your new component in</strong> <code>index.ts</code>, otherwise it won't be picked up and bundled by Rollup. 
</p><h2 id="introducing-code-splitting-optional-">Introducing Code Splitting (optional)</h2><p>We can introduce <a href="https://medium.com/rollup/rollup-now-has-code-splitting-and-we-need-your-help-46defd901c82">code splitting</a> that enables projects consuming our library to import one or a few components, instead of the whole library. This <strong>can </strong>help us achieve smaller bundle sizes in projects using our component library without tree shaking.</p><p>You'll only see benefit from this if you have a large component library with many components OR if the projects consuming the library do not use modern module bundlers that can perform tree shaking.</p><p>We will be utilising <a href="https://github.com/rollup/rollup-starter-code-splitting">Rollup's Code Splitting</a> functionality to help us achieve this. </p><p><strong>Note 1:</strong> I opted to only target one module format (CJS). This reduces one level of nesting when importing components (<code>import X from library/build/cjs/component/component</code> -&gt; <code>import X from library/build/component/component</code>). You can still target two formats if you want. </p><p><strong>Note 2:</strong> It's also worthwhile to introduce an <code>index.ts</code> file to each of our components. The <code>index.ts</code> file can either contain our component, or import/export the component from another file. This reduces another level of nesting when importing components (<code>import X from library/build/component/component</code> -&gt; <code>import X from library/build/component</code>). <a href="https://github.com/HarveyD/react-component-library/blob/code-splitting/src/TestComponent/index.ts">Here is an example</a>.</p><p><strong>Note 3:</strong> I've found that there is <a href="https://github.com/HarveyD/react-component-library/issues/19">an issue with Rollup code</a> splitting and bundling dependencies from <code>node_modules</code>. 
As such, I've removed <code>@rollup/plugin-node-resolve</code> when code splitting until I can find a better solution.</p><p>There are some changes we have to make to <code>rollup.config.js</code> and <code>package.json</code>. First, in <code>rollup.config.js</code>, update <code>input</code>, <code>output</code> and <code>preserveModules</code> to:</p><figure class="kg-card kg-code-card"><pre><code class="language-json">{
  input: ["src/index.ts", "src/TestComponent/TestComponent.tsx"],
  output: [
    {
      dir: "build",
      format: "cjs",
      sourcemap: true
    }
  ],
  preserveModules: true,
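  // preserveModules keeps the src/ directory structure in the build output so
  // that compiled components and their .d.ts type files stay co-located.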
  ...
}</code></pre><figcaption>rollup.config.js</figcaption></figure><p>Instead of specifying one index file, we're telling Rollup to use a number of entry files. This is how we instruct Rollup to perform code splitting. <strong>Add a path to all components</strong> you want to be split in the array. </p><p>We can use <a href="https://www.npmjs.com/package/@rollup/plugin-multi-entry">@rollup/plugin-multi-entry</a> and provide appropriate file name patterns instead of having to add each of our components.</p><p>Setting <code>preserveModules</code> to <code>true</code> is required to maintain the directory/file structure of our components after being compiled. Without it, Rollup won't co-locate our compiled components and their associated type files correctly:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2020/07/preservemodules.png" class="kg-image" alt="Creating a React Component Library using Rollup, Typescript, Sass and Storybook"></figure><p>We're updating <code>output.file</code> to <code>output.dir</code> because when Rollup bundles with code splitting, it produces output in multiple files/directories instead of one file.</p><p>In <code>package.json</code> <strong>remove </strong>the <code>module</code> entry as we're only shipping CJS:</p><figure class="kg-card kg-code-card"><pre><code class="language-json">{
  ...,
  "main": "build/index.js",
  "files": [
    "build"
  ],
  ...
}</code></pre><figcaption>package.json</figcaption></figure><p>That's it!</p><p>After building/publishing/installing our updated component library, we should be able to import components in two ways:</p><figure class="kg-card kg-code-card"><pre><code class="language-javascript">import { TestComponent } from 'react-component-library';</code></pre><figcaption>Option A</figcaption></figure><p><strong>OR</strong></p><figure class="kg-card kg-code-card"><pre><code class="language-javascript">import TestComponent from 'react-component-library/build/TestComponent';</code></pre><figcaption>Option B</figcaption></figure><p><strong>Option A</strong> will import <code>TestComponent</code> along with all the other components available in the library. This will increase your overall bundle size (assuming there's no tree shaking).</p><p><strong>Option B</strong> will only import <code>TestComponent</code>. This approach can significantly reduce the amount of code that is sent to the client.</p><p><strong><a href="https://github.com/HarveyD/react-component-library/tree/code-splitting">See the component library with code splitting on this branch</a>. Or check out <a href="https://github.com/HarveyD/react-component-library/commit/94631be5a871f3b39dbc3e9bd3e75a8ae5b3b759">this commit</a> to see what changes are required.</strong></p><h1 id="further-steps">Further Steps</h1><h2 id="setting-up-a-private-npm-registry-ci-pipeline-and-consuming-the-library">Setting up a Private NPM Registry, CI Pipeline and Consuming the Library</h2><p>The next step for this component library was to set up a private NPM registry and a pipeline which automatically publishes and maintains library versioning. 
I'll be covering this aspect in another blog post which you can read at: <a href="https://blog.harveydelaney.com/setting-up-a-private-npm-registry-publishing-ci-cd-pipeline/">Setting up a Private NPM Registry and Publishing CI/CD Pipeline</a></p><h2 id="maintaining-code-quality-in-your-component-library">Maintaining Code Quality in your Component Library</h2><p>As this component library will likely be contributed to by other engineers on your team, you'll probably want to maintain a high level of code quality by using static code analysis. Follow along with: <a href="https://blog.harveydelaney.com/maintaining-code-formatting-and-quality-automatically/">Maintaining Code Formatting and Quality Automatically</a> to help achieve this.</p><h2 id="github-repository">GitHub Repository</h2><p>Again, I've created a GitHub repository with all the code necessary to create your own component library. Feel free to clone, fork or star it if it helps you!</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/HarveyD/react-component-library"><div class="kg-bookmark-content"><div class="kg-bookmark-title">HarveyD/react-component-library</div><div class="kg-bookmark-description">A project skeleton to get your very own React Component Library up and running using Rollup, Typescript, SASS + Storybook - HarveyD/react-component-library</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/favicons/favicon.svg" alt="Creating a React Component Library using Rollup, Typescript, Sass and Storybook"><span class="kg-bookmark-author">HarveyD</span><span class="kg-bookmark-publisher">GitHub</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://avatars1.githubusercontent.com/u/5586128?s=400&amp;v=4" alt="Creating a React Component Library using Rollup, Typescript, Sass and Storybook"></div></a></figure>]]></content:encoded></item><item><title><![CDATA[Setting up 
BuildKite and your first Continuous Integration pipeline in 2 hours]]></title><description><![CDATA[Learn how to quickly set up your very first BuildKite CI/CD pipeline!]]></description><link>https://blog.harveydelaney.com/setting-up-buildkite-and-your-first-ci-pipeline-in-2-hours/</link><guid isPermaLink="false">5dce90040c13ba0001b95648</guid><category><![CDATA[buildkite]]></category><category><![CDATA[discaper]]></category><category><![CDATA[Continuous Integration and Continuous Delivery (CI/CD)]]></category><dc:creator><![CDATA[Harvey Delaney]]></dc:creator><pubDate>Sat, 16 Nov 2019 03:40:30 GMT</pubDate><media:content url="https://blog.harveydelaney.com/content/images/2019/11/buildkite-pipeline.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.harveydelaney.com/content/images/2019/11/buildkite-pipeline.jpg" alt="Setting up BuildKite and your first Continuous Integration pipeline in 2 hours"><p>I've written a few blogs in the past about how I set up my continuous integration pipeline for www.harveydelaney.com (<a href="https://blog.harveydelaney.com/jenkins-build-test-deploy-node-app/">https://blog.harveydelaney.com/jenkins-build-test-deploy-node-app/</a> and <a href="https://blog.harveydelaney.com/setting-up-jenkins-on-docker/">https://blog.harveydelaney.com/setting-up-jenkins-on-docker/</a>). My Jenkins pipeline has been quite reliable and has done everything I needed.</p><p>At RateSetter we use Buildkite for our Continuous Integration. I've found it to be an excellent tool that provides a simple, clean UI for all your pipelines, in addition to having great documentation and an easy-to-use API. It also has excellent support for emojis! </p><p>When I joined RateSetter, all the BuildKite configuration had already been done. I had created new pipelines, but I wanted to learn how to set everything up from scratch. 
So I made it a task this weekend to migrate my CI pipeline from Jenkins to BuildKite and document it all here.</p><p>Please note, this article only covers how you can set up your pipeline from BuildKite's UI and <strong>not </strong>from writing BuildKite config files.</p><h2 id="buildkite-agent">BuildKite Agent</h2><p>For this blog, I'll be using an Azure Ubuntu VM. </p><p>After I created an account on BuildKite, it prompted me to run some commands on a VM. So I headed over to Azure to spin up a Standard B1s machine (1 virtual CPU and 1GB RAM; 0.5GB RAM will have trouble installing the agent): </p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2019/11/azure-bk.jpg" class="kg-image" alt="Setting up BuildKite and your first Continuous Integration pipeline in 2 hours"></figure><p>After the VM was spun up, I SSHed into the machine and ran all the steps listed under setting up an Ubuntu agent on BuildKite:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2019/11/bk-ubuntu-setup-1.jpg" class="kg-image" alt="Setting up BuildKite and your first Continuous Integration pipeline in 2 hours"></figure><p>After running all these commands, I got a nice little popup and could then set up my first Pipeline!</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2019/11/image.png" class="kg-image" alt="Setting up BuildKite and your first Continuous Integration pipeline in 2 hours"></figure><h2 id="creating-my-first-pipeline">Creating my first Pipeline</h2><p>I wanted to create a pipeline for <a href="https://www.discaper.com">Discaper</a>. Discaper uses <a href="https://nextjs.org/">NextJS</a> for the front-end (basically a React application that has server-side rendering) and Node for the back-end. I want to have two pipelines, one for the front-end and one for the back-end. 
The pipelines will perform as follows:</p><ul><li>Detect a push to the master branch of Discaper on GitHub</li><li>Clone from GitHub onto the BuildKite agent</li><li><strong>Build the project:</strong> Run <code>npm ci &amp;&amp; npm run build &amp;&amp; npm run export</code> to create an <code>out</code> folder that contains all necessary files to deploy a NextJS app - read more about <a href="https://nextjs.org/learn/excel/static-html-export">static NextJS exports here</a></li><li><strong>Prepare the assets: </strong>Compress the contents of the <code>out</code> folder as a <code>tar</code> file, then upload it as a <a href="https://buildkite.com/docs/pipelines/artifacts">BuildKite Artifact</a> (so it can be downloaded and used in a subsequent BuildKite step)</li><li><strong>Block the pipeline:</strong> Have a step that requires me to press a button before a deployment occurs. Read more about <a href="https://buildkite.com/docs/pipelines/block-step">block steps here</a></li><li><strong>Deploy the assets:</strong> On block step click, download the <code>tar</code> from BuildKite artifacts and then copy it to Discaper's VM (using <a href="https://linuxacademy.com/blog/linux/ssh-and-scp-howto-tips-tricks/">SCP</a>), where it'll be uncompressed and served!</li></ul><h2 id="configuring-build-agent">Configuring Build Agent</h2><p>The first step was to generate an SSH key. It was important to create this while logged in as the buildkite-agent user; on my agent I ran:</p><pre><code>sudo su buildkite-agent
ssh-keygen</code></pre><p>Then I added it as a deploy key for Discaper (Discaper is a private GitHub repository).</p><p>Read more about <a href="https://developer.github.com/v3/guides/managing-deploy-keys/">GitHub Deploy keys here</a>. If you're going to be adding multiple deploy keys, read more about how to do that in my other blog post: <a href="https://blog.harveydelaney.com/configuring-multiple-deploy-keys-on-github-for-your-vps/">https://blog.harveydelaney.com/configuring-multiple-deploy-keys-on-github-for-your-vps/</a>. You should also read <a href="https://buildkite.com/docs/agent/v3/ssh-keys">BuildKite's documentation around adding SSH keys</a>.</p><p>I also needed to set up NodeJS and NPM, as they're required to build Discaper, by following the steps <a href="https://tecadmin.net/install-latest-nodejs-npm-on-ubuntu/">here</a>.</p><h2 id="creating-the-build-step">Creating the Build step</h2><p>After adding a deploy key so that the Agent can clone the repository, I created a new Pipeline with Discaper's GitHub URL and added the script:</p><pre><code>npm ci
npm run build
npm run export</code></pre><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2019/11/image-4.png" class="kg-image" alt="Setting up BuildKite and your first Continuous Integration pipeline in 2 hours"></figure><h3 id="automated-github-deployments">Automated GitHub Deployments</h3><p>BuildKite then prompted me with a list of steps required to set up GitHub webhooks:<br>"Follow the instructions below to integrate Buildkite with your GitHub repository to automatically create new builds when you push new code."</p><p>After doing so, I went back to my BuildKite pipeline; it was ready to perform the first build!</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.harveydelaney.com/content/images/2019/11/buildkite-pipeline-fresh.jpg" class="kg-image" alt="Setting up BuildKite and your first Continuous Integration pipeline in 2 hours"></figure><p>Since I just wanted to test if the agent worked, I created a small commit in Discaper and pushed it to GitHub. BuildKite knew there was a push event and triggered a new build.
BuildKite ran the build commands, NextJS did its magic and I got my first successful BuildKite build:</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.harveydelaney.com/content/images/2019/11/image-1.png" class="kg-image" alt="Setting up BuildKite and your first Continuous Integration pipeline in 2 hours"></figure><h2 id="uploading-artifacts">Uploading Artifacts</h2><p>Now that my agent could clone the repository from GitHub and generate a successful build, I needed to compress and package the contents of the <code>out</code> folder, then tell BuildKite to upload the <code>tar</code> file to its artifacts and block the pipeline.</p><p>To upload artifacts, I went back to Discaper's <em>Pipeline Settings</em> and added the following command to my step:</p><pre><code>buildkite-agent artifact upload discaper-assets.tar.gz</code></pre><p>This tells BuildKite to upload <code>discaper-assets.tar.gz</code> at the very end of the step; read more about how <a href="https://buildkite.com/docs/pipelines/artifacts">BuildKite Artifacts work here</a>:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2019/11/bk-artifact-1.png" class="kg-image" alt="Setting up BuildKite and your first Continuous Integration pipeline in 2 hours"></figure><p>To test this out, I simply triggered another build and could see that the files in <code>out</code> were compressed and uploaded to the pipeline's artifacts, which could be viewed in the <strong>Artifacts </strong>tab:</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.harveydelaney.com/content/images/2019/11/bk-artifact.png" class="kg-image" alt="Setting up BuildKite and your first Continuous Integration pipeline in 2 hours"></figure><h2 id="block-step">Block Step</h2><p>Creating the block step from the BuildKite UI is trivial.
I simply had to go into Pipeline Settings and add a new step, <code>Wait and block pipeline</code>, and gave it the label <code>Deploy to Production</code> along with a nice little rocket emoji:</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2019/11/buildkite-wait-and-block.jpg" class="kg-image" alt="Setting up BuildKite and your first Continuous Integration pipeline in 2 hours"></figure><h2 id="creating-the-deploy-step">Creating the Deploy step</h2><p>My deploy step was relatively simple, although you may want to handle deploying assets in a different way. My strategy was to download <code>discaper-assets.tar.gz</code>, copy it over to my Discaper server, uncompress the <code>tar</code> file, then serve the static files.</p><p>First, I created an additional step called <code>Deploy to Production</code> and told BuildKite to download <code>discaper-assets.tar.gz</code>, copy it over to my Discaper server and then execute a bash script on the server (which uncompresses the <code>tar</code> then re-runs the server with the new files copied over):</p><figure class="kg-card kg-image-card"><img src="https://blog.harveydelaney.com/content/images/2019/11/bk-deploy-1.png" class="kg-image" alt="Setting up BuildKite and your first Continuous Integration pipeline in 2 hours"></figure><p>The commands in the screenshot are:</p><pre><code class="language-bash">buildkite-agent artifact download discaper-assets.tar.gz /tmp
scp /tmp/discaper-assets.tar.gz discaper@discaper-ip:~
ssh discaper@discaper-ip "./publish.sh"</code></pre><p>Running this step in my Pipeline gives the output:</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.harveydelaney.com/content/images/2019/11/buildkite-deploy-step.jpg" class="kg-image" alt="Setting up BuildKite and your first Continuous Integration pipeline in 2 hours"></figure><h2 id="all-together">All Together</h2><figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.harveydelaney.com/content/images/2019/11/bk-all-together.png" class="kg-image" alt="Setting up BuildKite and your first Continuous Integration pipeline in 2 hours"></figure><p>I hope this article helped you set up your first BuildKite pipeline. From here, you can (and should!) extend your pipeline to do more things like:</p><ul><li>Run linting/tests</li><li>Deploy to your test/staging environment</li><li>Clean up environments</li></ul>]]></content:encoded></item><item><title><![CDATA[Creating Breadcrumb Structured Data for your React/Typescript Website]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Recently, I've been working on my new website called <a href="https://www.discaper.com">Discaper</a>. My vision for Discaper is that it will hopefully become the <a href="https://www.zomato.com/sydney">Zomato</a> of Escape Rooms! Meaning that it'll be the place you go to find the most suitable escape room for you.</p>
<p>Anyway, I've implemented a breadcrumb component for</p>]]></description><link>https://blog.harveydelaney.com/creating-breadcrumb-structured-data-for-your-react-typescript-website/</link><guid isPermaLink="false">5d5a8338153b4c000171bde6</guid><category><![CDATA[seo]]></category><category><![CDATA[discaper]]></category><dc:creator><![CDATA[Harvey Delaney]]></dc:creator><pubDate>Mon, 19 Aug 2019 11:58:15 GMT</pubDate><media:content url="https://blog.harveydelaney.com/content/images/2019/11/seo.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.harveydelaney.com/content/images/2019/11/seo.jpg" alt="Creating Breadcrumb Structured Data for your React/Typescript Website"><p>Recently, I've been working on my new website called <a href="https://www.discaper.com">Discaper</a>. My vision for Discaper is that it will hopefully become the <a href="https://www.zomato.com/sydney">Zomato</a> of Escape Rooms! Meaning that it'll be the place you go to find the most suitable escape room for you.</p>
<p>Anyway, I've implemented a breadcrumb component for Discaper that helps users navigate throughout the website. For example, the <a href="https://www.discaper.com/room/assassin-in-the-pub">Assassin in the Pub</a> Escape Room has the breadcrumbs: <code>Home -&gt; Australia -&gt; NSW -&gt; Sydney -&gt; Escape Hunt Sydney -&gt; Assassin in the Pub</code>. Clicking on any section of the breadcrumb will then show you all places/rooms in the next level contained within it. For example, clicking <code>Australia</code> will show you all states that have escape rooms in Australia. Or clicking <code>Escape Hunt Sydney</code> will show you all the escape rooms that are operated by Escape Hunt Sydney.</p>
<p>I've noticed that other big websites' results in Google Search include a neat little breadcrumb. For example, if I search for Indu (a restaurant I went to last weekend) on Google, Zomato's search result looks like:<br>
<img src="https://blog.harveydelaney.com/content/images/2019/08/zomato-indu.jpg" alt="Creating Breadcrumb Structured Data for your React/Typescript Website"><br>
You can see Zomato has a nice little breadcrumb result: <code>https://www.zomato.com › Australia › Sydney › City of Sydney › CBD</code>. I wanted this for Discaper! Currently a result on Discaper looks like:</p>
<p><img src="https://blog.harveydelaney.com/content/images/2019/08/discaper-search.jpg" alt="Creating Breadcrumb Structured Data for your React/Typescript Website"></p>
<p>It turns out there's a relatively easy way to get these search results: implement something called <a href="https://www.webopedia.com/TERM/S/structured_data.html">Structured Data</a>, which search engines consume to produce neat little results like Zomato's.</p>
<p>Discaper is made using NextJS, React and TypeScript and I wanted to document how I got structured data working for Discaper using these technologies!</p>
<h2 id="structureddatabreadcrumbcomponent">Structured Data Breadcrumb Component</h2>
<p>For this example, let's say we want to display the breadcrumbs for our &quot;Place&quot;, which looks like: <code>Home -&gt; Country -&gt; State -&gt; Region -&gt; Place</code>.</p>
<p>Let's also say we have our breadcrumb list with an interface of</p>
<pre><code class="language-js">export interface IBreadcrumb {
  description: string;
  url: string;
}
</code></pre>
<p>and we have our breadcrumb data for our place as:</p>
<pre><code class="language-js">  const breadcrumbList: IBreadcrumb[] = [
    {
      description: &quot;Home&quot;,
      url: &quot;/&quot;
    },
    {
      description: &quot;Australia&quot;,
      url: &quot;/australia&quot;
    },
    {
      description: &quot;NSW&quot;,
      url: &quot;/australia/nsw&quot;
    },
    {
      description: &quot;Sydney&quot;,
      url: &quot;/australia/nsw/sydney&quot;
    },
    {
      description: &quot;Harvey's place&quot;,
      url: &quot;/australia/nsw/sydney/harveys-place&quot;
    }
  ];
</code></pre>
<p>The first thing I did was went to the <a href="https://developers.google.com/search/docs/data-types/breadcrumb">Google documentation</a> for how they wanted the BreadCrumb structured data to be... structured. This documentation gave me an overview, but I found the examples to be lacking.</p>
<p>So, I looked for a concrete example and found one <a href="https://stackoverflow.com/questions/31861260/correct-microdata-markup-for-breadcrumbs">here on Stack Overflow</a>. This post basically gave the outline of what a valid structured data breadcrumb HTML should look like:</p>
<pre><code class="language-html">&lt;ol itemscope itemtype=&quot;http://schema.org/BreadcrumbList&quot;&gt;
  &lt;li itemprop=&quot;itemListElement&quot; itemscope itemtype=&quot;http://schema.org/ListItem&quot;&gt;
    &lt;a itemscope itemtype=&quot;http://schema.org/Thing&quot; itemprop=&quot;item&quot; href=&quot;/&quot; itemid=&quot;/&quot;&gt;
      &lt;span itemprop=&quot;name&quot;&gt;Root page&lt;/span&gt;
    &lt;/a&gt;
    &lt;meta itemprop=&quot;position&quot; content=&quot;1&quot; /&gt;
  &lt;/li&gt;
  &lt;li itemprop=&quot;itemListElement&quot; itemscope itemtype=&quot;http://schema.org/ListItem&quot;&gt;
    &lt;a itemscope itemtype=&quot;http://schema.org/Thing&quot; itemprop=&quot;item&quot; href=&quot;/category&quot; itemid=&quot;/category&quot;&gt;
      &lt;span itemprop=&quot;name&quot;&gt;Category page&lt;/span&gt;
    &lt;/a&gt;
    &lt;meta itemprop=&quot;position&quot; content=&quot;2&quot; /&gt;
  &lt;/li&gt;
  &lt;li itemprop=&quot;itemListElement&quot; itemscope itemtype=&quot;http://schema.org/ListItem&quot;&gt;
    &lt;span itemscope itemtype=&quot;http://schema.org/Thing&quot; itemprop=&quot;item&quot; itemid=&quot;/category/this-page&quot;&gt;
      &lt;span itemprop=&quot;name&quot;&gt;This page&lt;/span&gt;
    &lt;/span&gt;
    &lt;meta itemprop=&quot;position&quot; content=&quot;3&quot; /&gt;
  &lt;/li&gt;
&lt;/ol&gt;
</code></pre>
<p>User <code>FlameStorm</code> also outlined that you can use Google's <a href="https://search.google.com/structured-data/testing-tool">Structured Data Testing Tool</a> to find out whether your structured data is valid or not. Try pasting the above HTML into it!</p>
<p>Using this example, I created a component that allowed me to render the breadcrumbs that are structured data compliant (with Google at least).</p>
<p>I created the component <code>BreadCrumb</code> that looks like:</p>
<script src="https://gist.github.com/HarveyD/7964ac480fe7a6fbb0d85004d3959d39.js"></script>
<p>The gotcha here is to remember that, in React, these attributes have different names from their plain HTML equivalents (it took me a while to figure out why React wasn't rendering things correctly!).</p>
<p>So for example:</p>
<p><code>&lt;li itemprop=&quot;itemListElement&quot; itemscope itemtype=&quot;http://schema.org/ListItem&quot;&gt;</code></p>
<p>would be:</p>
<p><code>&lt;li itemProp=&quot;itemListElement&quot; itemScope={true} itemType=&quot;http://schema.org/ListItem&quot;&gt;</code></p>
<p>in React.</p>
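<p>In case the embedded gist doesn't render in your reader, here is a minimal framework-free sketch of the markup the component produces. The <code>renderBreadcrumbs</code> helper name is mine (not from the gist), and this builds a plain HTML string rather than JSX; it's just to make the attribute mapping concrete. Note that Google's breadcrumb documentation numbers <code>position</code> from 1:</p>

```javascript
// Hypothetical sketch (not the gist's actual code) of the markup the
// BreadCrumb component renders from an IBreadcrumb list.
const renderBreadcrumbs = breadcrumbList =>
  [
    '<ol itemscope itemtype="http://schema.org/BreadcrumbList">',
    ...breadcrumbList.map(({ description, url }, index) =>
      [
        '<li itemprop="itemListElement" itemscope itemtype="http://schema.org/ListItem">',
        `<a itemscope itemtype="http://schema.org/Thing" itemprop="item" href="${url}" itemid="${url}">`,
        `<span itemprop="name">${description}</span>`,
        '</a>',
        // Google's docs number position from 1, not 0
        `<meta itemprop="position" content="${index + 1}" />`,
        '</li>',
      ].join('')
    ),
    '</ol>',
  ].join('');
```

<p>Calling <code>renderBreadcrumbs(breadcrumbList)</code> with the data above yields markup equivalent to the Stack Overflow example, ready to paste into the testing tool.</p>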
<h2 id="exampleusage">Example usage</h2>
<p>Let's use our component and see if our HTML is structured data compliant.</p>
<p>Rendering:</p>
<pre><code class="language-js">    &lt;StructuredBreadcrumb breadcrumbList={breadcrumbList} /&gt;
</code></pre>
<p>Would give us:</p>
<pre><code class="language-html">&lt;ol
  itemscope=&quot;&quot;
  itemtype=&quot;http://schema.org/BreadcrumbList&quot;
  class=&quot;breadcrumbs-container&quot;
&gt;
  &lt;li
    itemprop=&quot;itemListElement&quot;
    itemscope=&quot;&quot;
    itemtype=&quot;http://schema.org/ListItem&quot;
  &gt;
    &lt;a
      href=&quot;/&quot;
      itemscope=&quot;&quot;
      itemtype=&quot;http://schema.org/Thing&quot;
      itemprop=&quot;item&quot;
      itemid=&quot;/&quot;
      &gt;&lt;span itemprop=&quot;name&quot;&gt;Home&lt;/span&gt;&lt;/a
    &gt;&lt;meta itemprop=&quot;position&quot; content=&quot;0&quot; /&gt;
  &lt;/li&gt;
  &lt;li
    itemprop=&quot;itemListElement&quot;
    itemscope=&quot;&quot;
    itemtype=&quot;http://schema.org/ListItem&quot;
  &gt;
    &lt;a
      href=&quot;/australia&quot;
      itemscope=&quot;&quot;
      itemtype=&quot;http://schema.org/Thing&quot;
      itemprop=&quot;item&quot;
      itemid=&quot;/australia&quot;
      &gt;&lt;span itemprop=&quot;name&quot;&gt;Australia&lt;/span&gt;&lt;/a
    &gt;&lt;meta itemprop=&quot;position&quot; content=&quot;1&quot; /&gt;
  &lt;/li&gt;
  &lt;li
    itemprop=&quot;itemListElement&quot;
    itemscope=&quot;&quot;
    itemtype=&quot;http://schema.org/ListItem&quot;
  &gt;
    &lt;a
      href=&quot;/australia/nsw&quot;
      itemscope=&quot;&quot;
      itemtype=&quot;http://schema.org/Thing&quot;
      itemprop=&quot;item&quot;
      itemid=&quot;/australia/nsw&quot;
      &gt;&lt;span itemprop=&quot;name&quot;&gt;NSW&lt;/span&gt;&lt;/a
    &gt;&lt;meta itemprop=&quot;position&quot; content=&quot;2&quot; /&gt;
  &lt;/li&gt;
  &lt;li
    itemprop=&quot;itemListElement&quot;
    itemscope=&quot;&quot;
    itemtype=&quot;http://schema.org/ListItem&quot;
  &gt;
    &lt;a
      href=&quot;/australia/nsw/sydney&quot;
      itemscope=&quot;&quot;
      itemtype=&quot;http://schema.org/Thing&quot;
      itemprop=&quot;item&quot;
      itemid=&quot;/australia/nsw/sydney&quot;
      &gt;&lt;span itemprop=&quot;name&quot;&gt;Sydney&lt;/span&gt;&lt;/a
    &gt;&lt;meta itemprop=&quot;position&quot; content=&quot;3&quot; /&gt;
  &lt;/li&gt;
  &lt;li
    itemprop=&quot;itemListElement&quot;
    itemscope=&quot;&quot;
    itemtype=&quot;http://schema.org/ListItem&quot;
  &gt;
    &lt;a
      href=&quot;/australia/nsw/sydney/harveys-place&quot;
      itemscope=&quot;&quot;
      itemtype=&quot;http://schema.org/Thing&quot;
      itemprop=&quot;item&quot;
      itemid=&quot;/australia/nsw/sydney/harveys-place&quot;
      &gt;&lt;span itemprop=&quot;name&quot;&gt;Harvey's place&lt;/span&gt;&lt;/a
    &gt;&lt;meta itemprop=&quot;position&quot; content=&quot;4&quot; /&gt;
  &lt;/li&gt;
&lt;/ol&gt;
</code></pre>
<p>Putting this HTML into <a href="https://search.google.com/structured-data/testing-tool">the Structured Data Testing Tool</a> gives us a valid result!</p>
<p><img src="https://blog.harveydelaney.com/content/images/2019/08/breadcrumb.jpg" alt="Creating Breadcrumb Structured Data for your React/Typescript Website"></p>
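<p>As an aside, microdata isn't the only encoding Google's breadcrumb documentation accepts: the same list can be expressed as JSON-LD in a <code>&lt;script type="application/ld+json"&gt;</code> tag, with no changes to your visible markup. A sketch (the <code>toJsonLd</code> helper name is mine, not from this post) of serializing the same <code>IBreadcrumb</code> list:</p>

```javascript
// Sketch: serialize an IBreadcrumb list to a JSON-LD BreadcrumbList.
// Positions are 1-based, per Google's breadcrumb documentation.
const toJsonLd = breadcrumbList =>
  JSON.stringify({
    "@context": "https://schema.org",
    "@type": "BreadcrumbList",
    itemListElement: breadcrumbList.map(({ description, url }, index) => ({
      "@type": "ListItem",
      position: index + 1,
      name: description,
      item: url,
    })),
  });
```

<p>The resulting string can be dropped into a <code>&lt;script type="application/ld+json"&gt;</code> tag in the page head and validated with the same testing tool.</p>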
<p>If this helped, give <a href="https://gist.github.com/HarveyD/7964ac480fe7a6fbb0d85004d3959d39">https://gist.github.com/HarveyD/7964ac480fe7a6fbb0d85004d3959d39</a> a star :). Enjoy your breadcrumbs!</p>
<p>Also, put <a href="https://www.discaper.com/room/assassin-in-the-pub">https://www.discaper.com/room/assassin-in-the-pub</a> into the Google Structured Data Testing tool! Pretty cool right?</p>
<p><em>Update 25/08/2019:</em> Assassin in the Pub now has the search result: <img src="https://blog.harveydelaney.com/content/images/2019/08/assasin-in-the-pub-result.jpg" alt="Creating Breadcrumb Structured Data for your React/Typescript Website"></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Running a Test with Multiple Test Cases using Jest/Enzyme (React)]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Last year I wrote an article which outlines some tips I had for running a test with multiple test cases using Jasmine: /running-multiple-test-cases-in-jasmine/</p>
<p>Both at work and on my personal side projects, I've started to use React a lot more than Angular. Since Create React App comes with Jest/Enzyme</p>]]></description><link>https://blog.harveydelaney.com/running-a-test-with-multiple-test-cases-using-jest-enzyme-react/</link><guid isPermaLink="false">5d4e56a0153b4c000171bdda</guid><category><![CDATA[React]]></category><category><![CDATA[testing]]></category><dc:creator><![CDATA[Harvey Delaney]]></dc:creator><pubDate>Sat, 10 Aug 2019 07:18:01 GMT</pubDate><media:content url="https://blog.harveydelaney.com/content/images/2019/08/jest.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.harveydelaney.com/content/images/2019/08/jest.jpg" alt="Running a Test with Multiple Test Cases using Jest/Enzyme (React)"><p>Last year I wrote an article which outlines some tips I had for running a test with multiple test cases using Jasmine: /running-multiple-test-cases-in-jasmine/</p>
<p>Both at work and on my personal side projects, I've started to use React a lot more than Angular. Since Create React App comes with Jest/Enzyme out of the box, I thought I would write another article outlining how I run multiple test cases in Jest/Enzyme for my React projects!</p>
<h2 id="simpletestscases">Simple Test Cases</h2>
<p>If we had the following simple class that we wanted to test:</p>
<pre><code class="language-js">export default class CalculatorService {
  average = numberList =&gt;
    numberList.reduce((acc, val) =&gt; acc + val, 0) / numberList.length;
}
</code></pre>
<p>We might have created a test for our CalculatorService like:</p>
<pre><code class="language-js">import CalculatorService from &quot;./calculator.service&quot;;

describe(&quot;CalculatorService&quot;, () =&gt; {
  let calculatorService;

  beforeEach(() =&gt; {
    calculatorService = new CalculatorService();
  });

  describe(&quot;average&quot;, () =&gt; {
    it(&quot;should average the number list correctly&quot;, () =&gt; {
      const numberList = [1, 4, 5, 10];
      const res = calculatorService.average(numberList);

      expect(res).toEqual(5);
    });
  });
});
</code></pre>
<p>But now we want to easily run multiple test cases without writing a new <code>it</code> statement each time. The best way I've found to structure tests like this is to run a forEach loop over the test cases, with an <code>it</code> statement inside. It's important that the forEach sits outside of the <code>it</code> statement; otherwise we can't interpolate details into the <code>it</code> description that tell us which test case in particular has failed. This is how I would write test cases for <code>CalculatorService</code>:</p>
<pre><code class="language-js">  describe(&quot;average&quot;, () =&gt; {
    const testCases = [
      {
        numberList: [1, 2, 3],
        expected: 2
      },
      {
        numberList: [1, 4, 10],
        expected: 5
      }
    ];

    testCases.forEach(test =&gt; {
      it(`should correctly find the average of: ${test.numberList} which is: ${
        test.expected
      }`, () =&gt; {
        const res = calculatorService.average(test.numberList);

        expect(res).toEqual(test.expected);
      });
    });
  });
</code></pre>
<p>As you can see, having the <code>it</code> statement inside the loop allows us to write descriptive test cases, which provide an output of:</p>
<p><img src="https://blog.harveydelaney.com/content/images/2019/08/jest-testing.jpg" alt="Running a Test with Multiple Test Cases using Jest/Enzyme (React)"></p>
<p>This allows us to pin-point exactly which test case has failed and rectify it.</p>
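<p>As an aside, recent versions of Jest (23+) ship a built-in <code>test.each</code> for exactly this pattern, which interpolates case values into the test title for you. Below is a sketch of the same <code>average</code> cases; the tiny shims at the top exist only so this snippet runs outside Jest (and they don't interpolate titles) — in a real test file <code>describe</code>, <code>test</code> and <code>expect</code> come from Jest itself:</p>

```javascript
// Minimal shims so this sketch runs outside Jest -- delete them in a real test file.
const titles = [];
const describe = (name, fn) => fn();
const test = {
  each: cases => (title, fn) =>
    cases.forEach(c => { fn(c); titles.push(title); }),
};
const expect = actual => ({
  toEqual: expected => {
    if (actual !== expected) throw new Error(`${actual} !== ${expected}`);
  },
});

const average = numberList =>
  numberList.reduce((acc, val) => acc + val, 0) / numberList.length;

// With real Jest, only this part is needed; Jest interpolates
// $numberList and $expected into each test's title.
describe("average", () => {
  test.each([
    { numberList: [1, 2, 3], expected: 2 },
    { numberList: [1, 4, 10], expected: 5 },
  ])("averages $numberList to $expected", ({ numberList, expected }) => {
    expect(average(numberList)).toEqual(expected);
  });
});
```

<p>Whether you prefer <code>test.each</code> or an explicit <code>forEach</code> is mostly taste; the forEach version gives you full control over the title string.</p>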
<h2 id="reactcomponents">React Components</h2>
<p>Applying this <code>forEach</code> loop methodology to testing React components using shallow rendering from the Enzyme library works very well! For this example, I'll be testing the following React component:</p>
<pre><code class="language-js">import React from &quot;react&quot;;
import &quot;./List.css&quot;;

const List = ({ items, itemClickEvent }) =&gt; (
  &lt;div className=&quot;list-container&quot;&gt;
    {items.map(item =&gt; (
      &lt;div
        key={item.id}
        onClick={() =&gt; itemClickEvent(item.id)}
        className=&quot;item-container&quot;
      &gt;
        &lt;h2 className=&quot;heading&quot;&gt;{item.heading}&lt;/h2&gt;
        &lt;div className=&quot;description&quot;&gt;{item.description}&lt;/div&gt;
      &lt;/div&gt;
    ))}
  &lt;/div&gt;
);

export default List;
</code></pre>
<p>I was able to create 3 test cases using the forEach structure:</p>
<pre><code class="language-js">import React from &quot;react&quot;;
import List from &quot;./List&quot;;
import { shallow } from &quot;enzyme&quot;;
import { configure } from &quot;enzyme&quot;;
import Adapter from &quot;enzyme-adapter-react-16&quot;;

configure({ adapter: new Adapter() });

describe(&quot;List&quot;, () =&gt; {
  const testItems = [
    {
      id: 45,
      heading: &quot;Test Item&quot;,
      description: &quot;This is a test item&quot;
    },
    {
      id: 75,
      heading: &quot;Test Item 2&quot;,
      description: &quot;This is a test item&quot;
    },
    {
      id: 90,
      heading: &quot;Test Item 3&quot;,
      description: &quot;This is a test item&quot;
    },
    {
      id: 150,
      heading: &quot;Test Item 4&quot;,
      description: &quot;This is a test item&quot;
    },
    {
      id: 270,
      heading: &quot;Test Item 5&quot;,
      description: &quot;This is a test item&quot;
    }
  ];

  let testCases = [
    {
      itemAt: 0,
      expectedToCallWith: 45
    },
    {
      itemAt: 2,
      expectedToCallWith: 90
    },
    {
      itemAt: 4,
      expectedToCallWith: 265
    }
  ];

  testCases.forEach(test =&gt; {
    it(`should correctly call itemClickEvent with ${
      test.expectedToCallWith
    } when item at: ${test.itemAt} is clicked`, () =&gt; {
      const mockItemClickEventHandler = jest.fn();
      const props = {
        items: testItems,
        itemClickEvent: mockItemClickEventHandler
      };

      const listComponent = shallow(&lt;List {...props} /&gt;);
      listComponent
        .find(&quot;.list-container&quot;)
        .children()
        .at(test.itemAt)
        .simulate(&quot;click&quot;);

      expect(mockItemClickEventHandler).toHaveBeenCalledWith(
        test.expectedToCallWith
      );
    });
  });
});
</code></pre>
<p>A failure of a test would give a helpful error message of:<br>
<img src="https://blog.harveydelaney.com/content/images/2019/08/jest-testing-2-1.jpg" alt="Running a Test with Multiple Test Cases using Jest/Enzyme (React)"></p>
<h2 id="reactcomponentsnapshots">React Component Snapshots</h2>
<p>Again, this can be applied to React Snapshots. If we had the component:</p>
<pre><code class="language-js">import React from &quot;react&quot;;
import &quot;./user.css&quot;;

const User = ({ imgUrl, firstName, lastName, age, address }) =&gt; {
  return (
    &lt;div className=&quot;user-container&quot;&gt;
      &lt;img src={imgUrl} /&gt;
      &lt;div className=&quot;information&quot; /&gt;
      &lt;h1&gt;{`${firstName} ${lastName}`}&lt;/h1&gt;
      &lt;span&gt;{age}&lt;/span&gt;
      &lt;span&gt;{address}&lt;/span&gt;
    &lt;/div&gt;
  );
};

export default User;
</code></pre>
<p>We can easily create multiple Snapshot test cases using <code>react-test-renderer</code> like so:</p>
<pre><code class="language-js">import React from &quot;react&quot;;
import renderer from &quot;react-test-renderer&quot;;
import User from &quot;./User&quot;;

describe(&quot;User&quot;, () =&gt; {
  describe(&quot;Snapshots&quot;, () =&gt; {
    let testCases = [
      {
        imgUrl: &quot;https://test.com&quot;,
        firstName: &quot;Harvey&quot;,
        lastName: &quot;Delaney&quot;,
        age: &quot;69&quot;,
        address: &quot;123 Fake Street&quot;
      },
      {
        imgUrl: &quot;https://test2.com&quot;,
        firstName: &quot;John&quot;,
        lastName: &quot;Doe&quot;,
        age: &quot;35&quot;,
        address: &quot;123 Real Street&quot;
      }
    ];

    testCases.forEach(test =&gt; {
      it(`should have the correct snapshot for ${test.firstName} ${
        test.lastName
      }`, () =&gt; {
        const userComponent = renderer.create(&lt;User {...test} /&gt;).toJSON();
        expect(userComponent).toMatchSnapshot();
      });
    });
  });
});
</code></pre>
<p>This will output two Snapshots:</p>
<pre><code>// Jest Snapshot v1, https://goo.gl/fbAQLP

exports[`User Snapshots should have the correct snapshot for Harvey Delaney 1`] = `
&lt;div
  className=&quot;user-container&quot;
&gt;
  &lt;img
    src=&quot;https://test.com&quot;
  /&gt;
  &lt;div
    className=&quot;information&quot;
  /&gt;
  &lt;h1&gt;
    Harvey Delaney
  &lt;/h1&gt;
  &lt;span&gt;
    69
  &lt;/span&gt;
  &lt;span&gt;
    123 Fake Street
  &lt;/span&gt;
&lt;/div&gt;
`;

exports[`User Snapshots should have the correct snapshot for John Doe 1`] = `
&lt;div
  className=&quot;user-container&quot;
&gt;
  &lt;img
    src=&quot;https://test2.com&quot;
  /&gt;
  &lt;div
    className=&quot;information&quot;
  /&gt;
  &lt;h1&gt;
    John Doe
  &lt;/h1&gt;
  &lt;span&gt;
    35
  &lt;/span&gt;
  &lt;span&gt;
    123 Real Street
  &lt;/span&gt;
&lt;/div&gt;
`;
</code></pre>
<p>I hope this article has helped give you a bit of insight into how duplicated tests can be written more concisely by opting to use a simple <code>forEach</code> loop!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>