Shake off dev dependencies with Angular's environment files

An Angular app generated with the Angular CLI contains two environment files, one for development and one for production. A common use case for those files is to conditionally load utility libraries only during development. But sometimes these dev dependencies accidentally end up in production as well. This post tries to explain why this happens and shows how you can prevent it.

By default the Angular CLI creates a folder called "environments" inside the source directory of an Angular app. It contains two files which both export an "environment" object. This object is just a plain JavaScript object which initially only contains one property called "production". This property is set to true for the production environment and to false for the development environment.

This is what the file looks like for the production environment:

export const environment = {
  production: true
};
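
The development counterpart simply sets the flag to false (leaving aside the explanatory comments the CLI generates in that file):

export const environment = {
  production: false
};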

When building the app, the CLI will make sure to load the appropriate environment file for the targeted environment. This behaviour is controlled by the "fileReplacements" configuration somewhere deep inside the angular.json file.
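
For a freshly generated project, the relevant part looks roughly like this (paths will differ if you renamed or moved the files):

"configurations": {
  "production": {
    "fileReplacements": [
      {
        "replace": "src/environments/environment.ts",
        "with": "src/environments/environment.prod.ts"
      }
    ]
  }
}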

The exported environment object can be imported anywhere in the source code, just like any other TypeScript module. It is for example used in the main.ts file to make sure enableProdMode() is only called in production.

import { enableProdMode } from '@angular/core';
import { environment } from './environments/environment';

if (environment.production) {
  enableProdMode();
}

The environment config shines when used in combination with the optimizations that the build process applies to the emitted JavaScript code. In the code above, for example, the environment object can be eliminated entirely. The build process is smart enough to know that the condition will always evaluate to true, so the if statement can be removed safely.

This is what the emitted code for the production environment looks like before minification:

import { enableProdMode } from '@angular/core';

enableProdMode();

The same technique can also be used to do the opposite. It can be leveraged to eliminate utility libraries from the production build that are only used during development.

The popular state management library NgRx for example provides a dedicated StoreDevtoolsModule. It can be used to connect NgRx to the Redux DevTools extension, which is very helpful for inspecting the store while developing the app. NgRx also comes with a semi-official tool called ngrx-store-freeze which ensures that you don't accidentally mutate the data in the store.
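
A commonly used pattern for the devtools is to register the module conditionally right inside the imports array. Here is a sketch of that pattern (the module name and import paths are only illustrative):

import { NgModule } from '@angular/core';
import { StoreDevtoolsModule } from '@ngrx/store-devtools';

import { environment } from '../environments/environment';

@NgModule({
  imports: [
    // only connect to the Redux DevTools extension outside of production
    !environment.production ? StoreDevtoolsModule.instrument() : []
    // ... StoreModule.forRoot(...), other imports ...
  ]
})
export class AppModule { }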

It is debatable whether the StoreDevtoolsModule should be shipped to production or not. But I think the ngrx-store-freeze module, at least, should definitely not be used in production. A common way to make sure that this doesn't happen is to conditionally add it to the providers of the respective NgModule using the environment config. This can be done in many ways; here is how I did it:

import { NgModule } from '@angular/core';
import { USER_PROVIDED_META_REDUCERS } from '@ngrx/store';
import { storeFreeze } from 'ngrx-store-freeze';

// adjust the relative path to wherever this module lives
import { environment } from '../environments/environment';

const metaReducersProvider = (environment.production)
  ? [ ]
  : {
    provide: USER_PROVIDED_META_REDUCERS,
    useValue: [ storeFreeze ]
  };

@NgModule({
  providers: [
    metaReducersProvider
    // ... all the other providers ...
  ]
  // ... declarations, exports, imports ...
})
export class AnyModule { }

I optimistically believed in the combined power of webpack, ngc, Terser and whatever else the CLI uses when building the app. I was surprised to find out that ngrx-store-freeze was still part of my production build although it is never used. For humans it is relatively easy to grasp that ngrx-store-freeze is unreachable code in the production environment, but unfortunately this seems to be undetectable for machines.

After trying out some variations I settled on this solution:

import { NgModule } from '@angular/core';
import { USER_PROVIDED_META_REDUCERS } from '@ngrx/store';
import { storeFreeze } from 'ngrx-store-freeze';

// adjust the relative path to wherever this module lives
import { environment } from '../environments/environment';

@NgModule({
  providers: [
    (environment.production)
      ? [ ]
      : {
        provide: USER_PROVIDED_META_REDUCERS,
        useValue: [ storeFreeze ]
      }
    // ... all the other providers ...
  ]
  // ... declarations, exports, imports ...
})
export class AnyModule { }

The code looks almost the same, but when written like this ngrx-store-freeze can apparently be removed. Only the empty array (which is equivalent to specifying no provider at all) ends up in the production build.

import { NgModule } from '@angular/core';

@NgModule({
  providers: [
    [ ]
    // ... all the other providers ...
  ]
  // ... declarations, exports, imports ...
})
export class AnyModule { }

I think the reason the second snippet works is that the branching happens directly at the top level of the providers array. In my original code the branching logic was applied beforehand, by assigning the result to a variable. Both snippets behave the same way and cause the same providers to be loaded. The only difference is that the second one can be statically analyzed with today's tools while the first one cannot.

I was of course curious to find out whether this is only something that affects me or a problem that others have as well. I took a look at some of the sites listed on Made with Angular and scanned their JavaScript files to see if I could find some typical dev dependencies. From what I can tell, it looks like others also expected the build process to be a bit more magical than it actually is. A couple of sites did contain dev dependencies as well. The site for the Google Developer Experts, for example, also includes the ngrx-store-freeze module in its production build.

But I don't want to blame other people here for making the same mistake as I did. I just want to show that dev dependencies can slip through quite easily and that there needs to be a way to make sure this doesn't happen. Downloading and parsing dead code is just a waste of resources, but what if it exposes sensitive information about your dev or staging environment instead?

Doing a manual check of the production build after each update is of course not a viable solution. There needs to be an automatic way of testing this. I think the best way to do so is to make sure that the environment object is no longer present in the build. Every time the environment is used to conditionally execute a piece of code, this should be understood by the optimization tools. If they fully understand the logic they can remove the branching and either leave the code in place or remove it entirely. The environment object itself then becomes obsolete and can be removed as well. In other words: if the environment file is still present in the production build, its usage was not fully understood by the build process.

Source Maps can be utilized to check for the presence of a certain file. They are typically used to link transpiled and minified code back to its original source files. To do that, they contain a list of all the files that contributed code to the file they describe. This list can also be used to verify that a particular file is not part of the bundle.
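
A Source Map is just a JSON file, and the part that matters here is its "sources" array, which lists every original file that contributed to the bundle. For an Angular CLI build it might look something like this (trimmed, file names vary per project):

{
  "version": 3,
  "file": "main.js",
  "sources": [
    "webpack:///./src/main.ts",
    "webpack:///./src/app/app.module.ts",
    "webpack:///./src/app/app.component.ts"
  ],
  "mappings": "..."
}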

The simplest way to enable Source Maps is by specifying an additional flag when executing ng build.

ng build --source-map

Not everyone is comfortable with shipping Source Maps to production though, because they often contain the entire original source code. Luckily the Angular CLI also supports hidden Source Maps. These special Source Maps are not referenced from the corresponding JavaScript files and can therefore not be retrieved automatically. Kevin Kreuzer has written a detailed post on the topic called "Debug Angular apps in production without revealing source maps", which appeared on Angular in Depth. It shows how to enable hidden Source Maps with the Angular CLI and also explains how to make sure they are not shipped to production.
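
In short, hidden Source Maps are configured per build configuration in angular.json; the relevant option looks roughly like this (see Kevin's post for the full setup):

"configurations": {
  "production": {
    "sourceMap": {
      "scripts": true,
      "styles": false,
      "hidden": true
    }
  }
}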

When Source Maps are enabled and your production build ends up in the dist folder (as it does by default), the following command will fail if the environment file is still present anywhere in the JavaScript code.

grep -r dist/**/*.map -e '/environments/environment.ts'; test $? -eq 1
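
The check relies on grep's exit code: grep exits with 0 when it finds a match and with 1 when it finds none, so the trailing test $? -eq 1 only succeeds when the environment file is not referenced by any of the Source Maps.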

To further automate this process it might be beneficial to append this check to the regular build task. Here is the relevant part of the package.json file that needs to be changed to do that.

"scripts": {
  "build": "ng build --source-map",
  "postbuild": "grep -r dist/**/*.map -e '/environments/environment.ts'; test $? -eq 1"
}

Now each time you run "npm run build" (but not "ng build") it will fail if the environment file could not be eliminated from the build.

Introducing code that can no longer be statically analyzed by the build process can happen very quickly. But with this little check you can make sure that you discover it before you actually ship something unintended to production.

The Angular team is currently doing a lot to improve the treeshakeability™ of the framework with the new Ivy compiler. It will potentially reduce the amount of code that needs to be sent to the user. I think it would be a pity if we foiled this effort by blindly shipping unused dev dependencies to production at the same time.

I would like to thank Kevin Kreuzer for reviewing this post and for providing very useful feedback.