# Strict TypeScript settings guide
We have recently turned on stricter settings in the TypeScript compiler, including `noUncheckedIndexedAccess` and `noImplicitAny`. While those settings should make it easier to write safe code in the future, it's not always clear how to proceed with existing code.
This guide aims to explain the most common situations and pitfalls, and how to fix and avoid them.
## noImplicitAny

`noImplicitAny` has surfaced a lot of missing type annotations, where the compiler had silently filled in an `any` type for us. A simple example from a React component:
```ts
// ❌
const handleClick = (e) => {
  e.stopPropagation();
  props.onClick();
};
```
Here, we didn't see any error without this compiler setting, but now we get:

```
Parameter 'e' implicitly has an 'any' type.
```
In the migration, we addressed many of those problems by just adding an explicit any:
```ts
// ⚠️
const handleClick = (e: any) => {
  e.stopPropagation();
  props.onClick();
};
```
This is obviously not great, but since we had 3k+ errors, it was the fastest way to get the setting turned on. So there's a good chance you'll see many explicit `any` type annotations now.
- Sometimes, it's enough to change `any` to `unknown` (given that enough type narrowing happens inside the function). `unknown` is totally safe to use, and if you have no errors, this is a good way forward. The above example is not one of them, though.
- Look at the usages of the function and potentially inline it into one of those usages while removing the manual `: any`. You can then check via type inference what the type of `e` would be in that usage, and give it an explicit type - in this case, `React.MouseEvent<HTMLButtonElement>`:
```diff
// ✅
- const handleClick = (e: any) => {
+ const handleClick = (e: React.MouseEvent<HTMLButtonElement>) => {
  e.stopPropagation()
  props.onClick()
}
```
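For the first option, here is a minimal sketch of what an `unknown` parameter with narrowing inside the function can look like (the `handleEvent` handler is a made-up example, not from our code base):

```typescript
// `unknown` forces us to narrow before we can touch the value.
const handleEvent = (e: unknown) => {
  if (e instanceof Event) {
    // Inside this branch, `e` is narrowed to `Event`,
    // so calling its methods is type-safe.
    e.stopPropagation();
  }
  // For non-Event inputs, nothing happens - no unsafe access occurred.
};
```

If the compiler reports no errors with `unknown`, the narrowing inside the function was already sufficient.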
## noUncheckedIndexedAccess

`noUncheckedIndexedAccess` has revealed many cases where we "know" an element exists in an array, but TypeScript does not know it.

Inside of `for` loops, accessing elements is usually safe because the index is bounds-checked. However, TypeScript doesn't know that, so we need the `!` (non-null assertion) operator:
```ts
// ❌
for (let i = 0; i < collection.length; i++) {
  const child = collection[i]!;
  // ...
}
```
The easiest fix is to convert this to a `for..of` loop:
```ts
// ✅
for (const child of collection) {
  // ...
}
```
There is also a stylistic typescript-eslint rule, `@typescript-eslint/prefer-for-of`, that can catch those cases - and this form is arguably easier to read, too.
Another approach would be to switch to a more functional style, like `collection.map` or `collection.forEach`:
```ts
// ✅
collection.forEach(child => {
  // ...
})
```
Whether this is a good approach depends on what's actually happening in the body of the function; if performance is a concern and the collection is large, the loop might be the better choice.
Oftentimes, we need to access a specific element of an array - mostly the first one - after we've checked the length:
```ts
// ❌
if (collection.length > 0) {
  const first = collection[0]!;
  // ...
}
```
There are various ways to address this:
We could check for the existence of the item itself instead of checking the length:
```ts
// ✅
const first = collection[0]
if (first) {
  // ...
}
```
`first` will be properly narrowed here, but note that the `if` check is not 100% identical to a length check: if the array contains `null` or `undefined` at the first index, we would pass the length check, but not the "existence check".
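A small sketch of that difference (the `values` array is a made-up example):

```typescript
// An array whose first slot explicitly holds undefined.
const values: Array<string | undefined> = [undefined, "hello"];

// The length check passes:
const passesLengthCheck = values.length > 0; // true

// ...but the existence check does not, because values[0] is undefined:
const passesExistenceCheck = values[0] !== undefined; // false
```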
We could also skip the length check entirely, embrace that `first` is potentially `undefined`, and continue with optional chaining:
```ts
// ✅
const first = collection[0]
first?.toUpperCase()
```
Sometimes, we know that `collection[0]` exists because we have manually defined it:
```ts
// ❌
const collection = ['hello', 'world']
const first = collection[0]!
```
In this case, accessing `collection[0]` is likely safe, but TypeScript doesn't know that. Arrays can be mutated later, so an `Array<string>` is not guaranteed to keep those two elements.
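To see why TypeScript is cautious here: nothing in the `Array<string>` type prevents the array from being emptied before the access (a contrived sketch):

```typescript
const collection: string[] = ['hello', 'world'];

// Perfectly legal on a mutable string[] - this empties the array.
collection.length = 0;

// Without noUncheckedIndexedAccess, collection[0] is typed as string,
// but at runtime it is undefined.
const first = collection[0];
```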
The fix is to add `as const` to the definition. Const assertions are very powerful: they make sure that the most narrow type possible is inferred, and they also make the collection `readonly` (immutable):
```diff
- const collection = ['hello', 'world']
+ const collection = ['hello', 'world'] as const

const first = collection[0]
```
Especially for checking the first element, which is arguably the most common case, a descriptive helper function that checks whether a list is "non-empty" can be implemented both at runtime and on the type level with a user-defined type guard:
```ts
type NonEmptyArray<T> = readonly [T, ...ReadonlyArray<T>]

export const isNonEmpty = <T,>(
  array: ReadonlyArray<T> | undefined,
): array is NonEmptyArray<T> => !!array && array.length > 0;
```
Since this user-defined type guard narrows our array to a `NonEmptyArray`, we can now continue to use a length check as before:
```ts
// ✅
if (isNonEmpty(collection)) {
  const first = collection[0];
  // ...
}
```
Iterating over every item in an object has drawbacks similar to those of a `for` loop. Traditionally, code may look like this:
```ts
// ❌
function doSomething(input: Record<string, unknown>) {
  Object.keys(input).forEach(key => {
    const item = input[key]!
    // ...
  })
}
```
We'd expect `item` to never be `undefined` here, but with `noUncheckedIndexedAccess`, TypeScript won't know that, as `key` is not narrowed to `keyof typeof input`.
The best solution here is to switch to `Object.entries` or `Object.values` to get access to the `item` directly:
```ts
// ✅
function doSomething(input: Record<string, unknown>) {
  Object.entries(input).forEach(([key, item]) => {
    // ...
  })
}
```
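If the key itself isn't needed, `Object.values` works just as well. A small sketch (the `collectValues` function here is a hypothetical variant with an observable body):

```typescript
function collectValues(input: Record<string, unknown>): unknown[] {
  const items: unknown[] = [];
  // Object.values hands us the items directly,
  // so no indexed access (and no `!`) is needed.
  Object.values(input).forEach(item => {
    items.push(item);
  });
  return items;
}
```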
A very common error we had to disable with `ts-expect-error` during the migration is:

```
TS7053: Element implicitly has an 'any' type because expression of type ... can't be used to index type ...
```
This most commonly happens because we have a structure without an index signature - an object with a "fixed" set of fields:
```ts
interface Person {
  firstName: string;
  lastName: string;
}
```
Similar to the error seen when iterating over indexed objects, iterating over those objects with `Object.keys` won't work:
```ts
// ❌
function print(person: Person) {
  Object.keys(person).forEach(key => {
    console.log(person[key])
  })
}
```
And we also can't do lookups with arbitrary string keys:
```ts
// ❌
function printKey(person: Person, key: string) {
  console.log(person[key])
}
```
The error we are getting in these situations is:

```
TS7053: Element implicitly has an 'any' type because expression of type 'string' can't be used to index type 'Person'.
  No index signature with a parameter of type 'string' was found on type 'Person'.
```
While this is similar to the error before, it can't easily be solved with a non-null assertion; it's not even solvable with `key as any`. TypeScript simply won't allow accessing a seemingly random key on an object that doesn't have an index signature.
How we address this problem depends on our intent:
The easiest way is to disallow lookups by `string`. If we change our `printKey` function to accept a more narrow index type:
```ts
// ✅
function printKey(person: Person, key: keyof Person) {
  console.log(person[key]);
}
```
Of course, this might lead to errors at the call sites if we pass strings there that might not be valid keys.
Sometimes, we want to allow arbitrary strings to be passed in, e.g. because we get them from an endpoint that is typed as `string`. In those cases, we might want to just leverage the fact that JavaScript will return `undefined` at runtime:
```ts
// ❌
function printKey(person: Person, key: string) {
  console.log(person[key]?.toUpperCase() ?? 'N/A');
}
```
While this works fine at runtime and is also quite easy to read and understand, TypeScript doesn't like it, and there is no easy way to narrow `key`. A type assertion might help:
```ts
// ⚠️
function printKey(person: Person, key: string) {
  console.log(person[key as keyof Person]?.toUpperCase() ?? 'N/A');
}
```
But it has the drawback that the optional chaining might be flagged as unnecessary by tools like `eslint`. That's because `person[key as keyof Person]` returns `string`, not `string | undefined`. The correct type assertion would be quite verbose:
```ts
// ✅
function printKey(person: Person, key: string) {
  console.log(
    (person[key as keyof Person] as string | undefined)?.toUpperCase() ??
      'N/A',
  );
}
```
so I don't think we want that littered in our code-base.
The `in` operator is good for checks at runtime, and it also narrows the type correctly if we check for a hard-coded field. However, it can't narrow for arbitrary strings like `key: string`, so this also won't work:
```ts
// ❌
function printKey(person: Person, key: string) {
  console.log(key in person ? person[key].toUpperCase() : 'N/A');
}
```
We need to assert the key here again, but at least this time, the assertion would be safe:
```ts
// ✅
function printKey(person: Person, key: string) {
  console.log(key in person ? person[key as keyof Person].toUpperCase() : 'N/A');
}
```
This pattern seems like the best approach if we apply it sparingly. If we need it more often, a `hasKey` helper function to narrow keys might be a good idea:
```ts
const hasKey = <T extends Record<PropertyKey, any>>(
  obj: T,
  key: PropertyKey
): key is keyof typeof obj => {
  return key in obj;
};
```
With this, we could do:
```ts
// ✅
function printKey(person: Person, key: string) {
  console.log(hasKey(person, key) ? person[key].toUpperCase() : 'N/A');
}
```
Which isn't terrible.
Another common issue revolves around defining mapping objects over a finite set of keys without being exhaustive. As a real-life example, let's look at this `PRODUCTS` mapping:
```ts
const PRODUCTS = {
  [DataCategory.ERRORS]: 'Error Monitoring',
  [DataCategory.TRANSACTIONS]: 'Performance Monitoring',
  [DataCategory.REPLAYS]: 'Session Replay',
  [DataCategory.PROFILES]: 'Profiling',
  [DataCategory.MONITOR_SEATS]: 'Crons',
  [DataCategory.ATTACHMENTS]: 'Attachments',
  [DataCategory.SPANS]: 'Spans',
};
```
We might then reasonably expect that we can access `PRODUCTS` with a variable of type `DataCategory`:
```ts
// ❌
function getProduct(category: DataCategory) {
  return PRODUCTS[category];
}
```
But `PRODUCTS` doesn't contain all `DataCategory` values - it's not exhaustive. We're getting the following error:
```
TS7053: Element implicitly has an 'any' type because expression of type 'DataCategory' can't be used to index type ...
  Property '[DataCategory.PROFILE_DURATION]' does not exist on type ...
```
If our intention was to have an exhaustive mapping - every `DataCategory` should be in `PRODUCTS` - we should ensure this with `satisfies`:
```ts
// ✅
const PRODUCTS = {
  [DataCategory.ERRORS]: "Error Monitoring",
  [DataCategory.TRANSACTIONS]: "Performance Monitoring",
  [DataCategory.REPLAYS]: "Session Replay",
  [DataCategory.PROFILES]: "Profiling",
  [DataCategory.MONITOR_SEATS]: "Crons",
  [DataCategory.ATTACHMENTS]: "Attachments",
  [DataCategory.SPANS]: "Spans",
} satisfies Record<DataCategory, string>;
```
This will give us a good error in the right place if our enum grows:
```
TS1360: Type ... does not satisfy the expected type 'Record<DataCategory, string>'.
  Type ... is missing the following properties from type 'Record<DataCategory, string>': profileDuration, spansIndexed, profileChunks
```
If our intention was to not have all `DataCategory` values mapped, we have to define our mapping object as a `Partial`:
```ts
// ✅
const PRODUCTS: Partial<Record<DataCategory, string>> = {
  // ...
};
```
Accessing with a `DataCategory` will then be possible; however, every access will return `string | undefined`. This isn't great if we know that we've put a certain key into our mapping and we're accessing it statically, e.g. in tests:
```ts
// ❌
const errorProduct = PRODUCTS[DataCategory.ERRORS].toUpperCase();
```
This won't work because we might be calling `.toUpperCase()` on `undefined`, e.g. if we remove `DataCategory.ERRORS` from our `Partial` mapping.
For tests, `!` might be good enough, but for runtime code, embracing the fact that no category is guaranteed to be in the mapping is likely best - so let's use optional chaining:

```ts
// ✅
const errorProduct = PRODUCTS[DataCategory.ERRORS]?.toUpperCase();
```

```ts
// ⚠️
const errorProduct = PRODUCTS[DataCategory.ERRORS]!.toUpperCase();
```