es.next and beyond: removing all first edition text

Kyle Simpson 2020-03-27 05:26:16 -05:00
parent 545b7aad38
commit cb2a49e27d
12 changed files with 3 additions and 8689 deletions


@@ -8,12 +8,4 @@
 * [Foreword](foreword.md) (by TBA)
 * [Preface](../preface.md)
-* [Chapter 1: ES? Now & Future](ch1.md)
+* [Chapter 1: TODO](ch1.md)
-* [Chapter 2: Syntax](ch2.md)
-* [Chapter 3: Organization](ch3.md)
-* [Chapter 4: Async Flow Control](ch4.md)
-* [Chapter 5: Collections](ch5.md)
-* [Chapter 6: API Additions](ch6.md)
-* [Chapter 7: Meta Programming](ch7.md)
-* [Chapter 8: Beyond ES6](ch8.md)
 * [Appendix A: TODO](apA.md)


@@ -1,6 +0,0 @@
# You Don't Know JS Yet: ES.Next & Beyond - 2nd Edition
# Appendix A: TODO
| NOTE: |
| :--- |
| Work in progress |


@@ -5,139 +5,3 @@
| :--- |
| Work in progress |
.
.
.
.
.
.
.
----
| NOTE: |
| :--- |
| Everything below here is previous text from 1st edition, and is only here for reference while 2nd edition work is underway. **Please ignore this stuff.** |
Before you dive into this book, you should have a solid working proficiency over JavaScript up to the most recent standard (at the time of this writing), which is commonly called *ES5* (technically ES 5.1). Here, we plan to talk squarely about the upcoming *ES6*, as well as cast our vision beyond to understand how JS will evolve moving forward.
If you are still looking for confidence with JavaScript, I highly recommend you read the other titles in this series first:
* *Up & Going*: Are you new to programming and JS? This is the roadmap you need to consult as you start your learning journey.
* *Scope & Closures*: Did you know that JS lexical scope is based on compiler (not interpreter!) semantics? Can you explain how closures are a direct result of lexical scope and functions as values?
* *this & Object Prototypes*: Can you recite the four simple rules for how `this` is bound? Have you been muddling through fake "classes" in JS instead of adopting the simpler "behavior delegation" design pattern? Ever heard of *objects linked to other objects* (OLOO)?
* *Types & Grammar*: Do you know the built-in types in JS, and more importantly, do you know how to properly and safely use coercion between types? How comfortable are you with the nuances of JS grammar/syntax?
* *Async & Performance*: Are you still using callbacks to manage your asynchrony? Can you explain what a promise is and why/how it solves "callback hell"? Do you know how to use generators to improve the legibility of async code? What exactly constitutes mature optimization of JS programs and individual operations?
If you've already read all those titles and you feel pretty comfortable with the topics they cover, it's time we dive into the evolution of JS to explore all the changes coming not only soon but farther over the horizon.
Unlike ES5, ES6 is not just a modest set of new APIs added to the language. It incorporates a whole slew of new syntactic forms, some of which may take quite a bit of getting used to. There's also a variety of new organization forms and new API helpers for various data types.
ES6 is a radical jump forward for the language. Even if you think you know JS in ES5, ES6 is full of new stuff you *don't know yet*, so get ready! This book explores all the major themes of ES6 that you need to get up to speed on, and even gives you a glimpse of future features coming down the track that you should be aware of.
**Warning:** All code in this book assumes an ES6+ environment. At the time of this writing, ES6 support varies quite a bit in browsers and JS environments (like Node.js), so your mileage may vary.
## Versioning
The JavaScript standard is referred to officially as "ECMAScript" (abbreviated "ES"), and up until just recently has been versioned entirely by ordinal number (i.e., "5" for "5th edition").
The earliest versions, ES1 and ES2, were not widely known or implemented. ES3 was the first widespread baseline for JavaScript, and constitutes the JavaScript standard for browsers like IE6-8 and older Android 2.x mobile browsers. For political reasons beyond what we'll cover here, the ill-fated ES4 never came about.
In 2009, ES5 was officially finalized (later ES5.1 in 2011), and settled as the widespread standard for JS for the modern revolution and explosion of browsers, such as Firefox, Chrome, Opera, Safari, and many others.
Leading up to the expected *next* version of JS (slipped from 2013 to 2014 and then 2015), the obvious and common label in discourse has been ES6.
However, late into the ES6 specification timeline, suggestions have surfaced that versioning may in the future switch to a year-based schema, such as ES2016 (aka ES7) to refer to whatever version of the specification is finalized before the end of 2016. Some disagree, but ES6 will likely maintain its dominant mindshare over the late-change substitute ES2015. However, ES2016 may in fact signal the new year-based schema.
It has also been observed that the pace of JS evolution is much faster even than single-year versioning. As soon as an idea begins to progress through standards discussions, browsers start prototyping the feature, and early adopters start experimenting with the code.
Usually well before there's an official stamp of approval, a feature is de facto standardized by virtue of this early engine/tooling prototyping. So it's also valid to consider the future of JS versioning to be per-feature rather than per-arbitrary-collection-of-major-features (as it is now) or even per-year (as it may become).
The takeaway is that the version labels stop being as important, and JavaScript starts to be seen more as an evergreen, living standard. The best way to cope with this is to stop thinking about your code base as being "ES6-based," for instance, and instead consider it feature by feature for support.
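For example, here's a minimal sketch of per-feature detection (the helper function here is hypothetical, for illustration only):

```js
// API features can be tested directly:
var hasObjectIs = (typeof Object.is == "function");

// syntax can't be tested with `typeof`; one option is to
// try compiling a tiny snippet and catch the syntax error:
function supportsArrowFunctions() {
    try {
        new Function( "(x => x)" );
        return true;
    }
    catch (err) {
        return false;
    }
}
```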
## Transpiling
Made even worse by the rapid evolution of features, a problem arises for JS developers who may strongly desire to use new features while at the same time being slapped with the reality that their sites/apps may need to support older browsers that lack them.
The way ES5 appears to have played out in the broader industry, the typical mindset was that code bases waited to adopt ES5 until most if not all pre-ES5 environments had fallen out of their support spectrum. As a result, many are just recently (at the time of this writing) starting to adopt things like strict mode, which landed in ES5 over five years ago.
It's widely considered to be a harmful approach for the future of the JS ecosystem to wait around and trail the specification by so many years. All those responsible for evolving the language desire for developers to begin basing their code on the new features and patterns as soon as they stabilize in specification form and browsers have a chance to implement them.
So how do we resolve this seeming contradiction? The answer is tooling, specifically a technique called *transpiling* (transformation + compiling). Roughly, the idea is to use a special tool to transform your ES6 code into equivalent (or close!) matches that work in ES5 environments.
For example, consider shorthand property definitions (see "Object Literal Extensions" in Chapter 2). Here's the ES6 form:
```js
var foo = [1,2,3];

var obj = {
    foo     // means `foo: foo`
};

obj.foo;    // [1,2,3]
```
But (roughly) here's how that transpiles:
```js
var foo = [1,2,3];

var obj = {
    foo: foo
};

obj.foo;    // [1,2,3]
```
This is a minor but pleasant transformation that lets us shorten the `foo: foo` in an object literal declaration to just `foo`, if the names are the same.
Transpilers perform these transformations for you, usually in a build workflow step similar to how you perform linting, minification, and other similar operations.
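For another (rough) illustration, an ES6 arrow function (see Chapter 2) might be transpiled to an equivalent ES5 function expression:

```js
// ES6 source:
var add = (x,y) => x + y;

// (roughly) how that transpiles:
var add = function(x,y) {
    return x + y;
};
```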
### Shims/Polyfills
Not all new ES6 features need a transpiler. Polyfills (aka shims) are a pattern for defining equivalent behavior from a newer environment into an older environment, when possible. Syntax cannot be polyfilled, but APIs often can be.
For example, `Object.is(..)` is a new utility for checking strict equality of two values but without the nuanced exceptions that `===` has for `NaN` and `-0` values. The polyfill for `Object.is(..)` is pretty easy:
```js
if (!Object.is) {
    Object.is = function(v1, v2) {
        // test for `-0`
        if (v1 === 0 && v2 === 0) {
            return 1 / v1 === 1 / v2;
        }
        // test for `NaN`
        if (v1 !== v1) {
            return v2 !== v2;
        }
        // everything else
        return v1 === v2;
    };
}
```
**Tip:** Pay attention to the outer `if` statement guard wrapped around the polyfill. This is an important detail, which means the snippet only defines its fallback behavior for older environments where the API in question isn't already defined; it would be very rare that you'd want to overwrite an existing API.
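The same guard pattern works for other ES6 APIs. For instance, a quick polyfill sketch for `Number.isNaN(..)`:

```js
if (!Number.isNaN) {
    Number.isNaN = function(v) {
        // only `NaN` fails to equal itself,
        // and only numbers can be `NaN`
        return typeof v == "number" && v !== v;
    };
}
```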
There's a great collection of ES6 shims called "ES6 Shim" (https://github.com/paulmillr/es6-shim/) that you should definitely adopt as a standard part of any new JS project!
It is assumed that JS will continue to evolve constantly, with browsers rolling out support for features continually rather than in large chunks. So the best strategy for keeping updated as it evolves is to just introduce polyfill shims into your code base, and a transpiler step into your build workflow, right now and get used to that new reality.
If you decide to keep the status quo and just wait around for all browsers without a feature supported to go away before you start using the feature, you're always going to be way behind. You'll sadly be missing out on all the innovations designed to make writing JavaScript more effective, efficient, and robust.
## Review
ES6 (some may try to call it ES2015) is just landing as of the time of this writing, and it has lots of new stuff you need to learn!
But it's even more important to shift your mindset to align with the new way that JavaScript is going to evolve. It's not just waiting around for years for some official document to get a vote of approval, as many have done in the past.
Now, JavaScript features land in browsers as they become ready, and it's up to you whether you'll get on the train early or whether you'll be playing costly catch-up games years from now.
Whatever labels that future JavaScript adopts, it's going to move a lot quicker than it ever has before. Transpilers and shims/polyfills are important tools to keep you on the forefront of where the language is headed.
If there's any narrative important to understand about the new reality for JavaScript, it's that all JS developers are strongly implored to move from the trailing edge of the curve to the leading edge. And learning ES6 is where that all starts!

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,405 +0,0 @@
# You Don't Know JS Yet: ES.Next & Beyond - 2nd Edition
# Chapter 4: Async Flow Control
| NOTE: |
| :--- |
| Work in progress |
.
.
.
.
.
.
.
----
| NOTE: |
| :--- |
| Everything below here is previous text from 1st edition, and is only here for reference while 2nd edition work is underway. **Please ignore this stuff.** |
It's no secret if you've written any significant amount of JavaScript that asynchronous programming is a required skill. The primary mechanism for managing asynchrony has been the function callback.
However, ES6 adds a new feature that helps address significant shortcomings in the callbacks-only approach to async: *Promises*. In addition, we can revisit generators (from the previous chapter) and see a pattern for combining the two that's a major step forward in async flow control programming in JavaScript.
## Promises
Let's clear up some misconceptions: Promises are not about replacing callbacks. Promises provide a trustable intermediary -- that is, between your calling code and the async code that will perform the task -- to manage callbacks.
Another way of thinking about a Promise is as an event listener, on which you can register to listen for an event that lets you know when a task has completed. It's an event that will only ever fire once, but it can be thought of as an event nonetheless.
Promises can be chained together, which can sequence a series of asynchronously completing steps. Together with higher-level abstractions like the `all(..)` method (in classic terms, a "gate") and the `race(..)` method (in classic terms, a "latch"), promise chains provide a mechanism for async flow control.
Yet another way of conceptualizing a Promise is that it's a *future value*, a time-independent container wrapped around a value. This container can be reasoned about identically whether the underlying value is final or not. Observing the resolution of a Promise extracts this value once available. In other words, a Promise is said to be the async version of a sync function's return value.
A Promise can only have one of two possible resolution outcomes: fulfilled or rejected, with an optional single value. If a Promise is fulfilled, the final value is called a fulfillment. If it's rejected, the final value is called a reason (as in, a "reason for rejection"). Promises can only be resolved (fulfillment or rejection) *once*. Any further attempts to fulfill or reject are simply ignored. Thus, once a Promise is resolved, it's an immutable value that cannot be changed.
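For example, here's a quick sketch of that resolve-once behavior:

```js
var p = new Promise( function pr(resolve,reject){
    resolve( 42 );
    resolve( 43 );          // ignored
    reject( "Oops" );       // ignored
} );

p.then( function fulfilled(v){
    console.log( v );       // 42
} );
```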
Clearly, there are several different ways to think about what a Promise is. No single perspective is fully sufficient, but each provides a separate aspect of the whole. The big takeaway is that they offer a significant improvement over callbacks-only async, namely that they provide order, predictability, and trustability.
### Making and Using Promises
To construct a promise instance, use the `Promise(..)` constructor:
```js
var p = new Promise( function pr(resolve,reject){
    // ..
} );
```
The `Promise(..)` constructor takes a single function (`pr(..)`), which is called immediately and receives two control functions as arguments, usually named `resolve(..)` and `reject(..)`. They are used as:
* If you call `reject(..)`, the promise is rejected, and if any value is passed to `reject(..)`, it is set as the reason for rejection.
* If you call `resolve(..)` with no value, or any non-promise value, the promise is fulfilled.
* If you call `resolve(..)` and pass another promise, this promise simply adopts the state -- whether immediate or eventual -- of the passed promise (either fulfillment or rejection).
Here's how you'd typically use a promise to refactor a callback-reliant function call. If you start out with an `ajax(..)` utility that expects to be able to call an error-first style callback:
```js
function ajax(url,cb) {
    // make request, eventually call `cb(..)`
}

// ..

ajax( "http://some.url.1", function handler(err,contents){
    if (err) {
        // handle ajax error
    }
    else {
        // handle `contents` success
    }
} );
```
You can convert it to:
```js
function ajax(url) {
    return new Promise( function pr(resolve,reject){
        // make request, eventually call
        // either `resolve(..)` or `reject(..)`
    } );
}

// ..

ajax( "http://some.url.1" )
.then(
    function fulfilled(contents){
        // handle `contents` success
    },
    function rejected(reason){
        // handle ajax error reason
    }
);
```
Promises have a `then(..)` method that accepts one or two callback functions. The first function (if present) is treated as the handler to call if the promise is fulfilled successfully. The second function (if present) is treated as the handler to call if the promise is rejected explicitly, or if any error/exception is caught during resolution.
If one of the arguments is omitted or otherwise not a valid function -- typically you'll use `null` instead -- a default placeholder equivalent is used. The default success callback passes its fulfillment value along and the default error callback propagates its rejection reason along.
The shorthand for calling `then(null,handleRejection)` is `catch(handleRejection)`.
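In other words, these two forms are equivalent:

```js
p.then( null, handleRejection );

p.catch( handleRejection );
```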
Both `then(..)` and `catch(..)` automatically construct and return another promise instance, which is wired to receive the resolution from whatever the return value is from the original promise's fulfillment or rejection handler (whichever is actually called). Consider:
```js
ajax( "http://some.url.1" )
.then(
    function fulfilled(contents){
        return contents.toUpperCase();
    },
    function rejected(reason){
        return "DEFAULT VALUE";
    }
)
.then( function fulfilled(data){
    // handle data from original promise's
    // handlers
} );
```
In this snippet, we're returning an immediate value from either `fulfilled(..)` or `rejected(..)`, which then is received on the next event turn in the second `then(..)`'s `fulfilled(..)`. If we instead return a new promise, that new promise is subsumed and adopted as the resolution:
```js
ajax( "http://some.url.1" )
.then(
    function fulfilled(contents){
        return ajax(
            "http://some.url.2?v=" + contents
        );
    },
    function rejected(reason){
        return ajax(
            "http://backup.url.3?err=" + reason
        );
    }
)
.then( function fulfilled(contents){
    // `contents` comes from the subsequent
    // `ajax(..)` call, whichever it was
} );
```
It's important to note that an exception (or rejected promise) in the first `fulfilled(..)` will *not* result in the first `rejected(..)` being called, as that handler only responds to the resolution of the first original promise. Instead, the second promise, which the second `then(..)` is called against, receives that rejection.
In this previous snippet, we are not listening for that rejection, which means it will be silently held onto for future observation. If you never observe it by calling a `then(..)` or `catch(..)`, then it will go unhandled. Some browser developer consoles may detect these unhandled rejections and report them, but this is not reliably guaranteed; you should always observe promise rejections.
**Note:** This was just a brief overview of Promise theory and behavior. For a much more in-depth exploration, see Chapter 3 of the *Async & Performance* title of this series.
### Thenables
Promises are genuine instances of the `Promise(..)` constructor. However, there are promise-like objects called *thenables* that generally can interoperate with the Promise mechanisms.
Any object (or function) with a `then(..)` function on it is assumed to be a thenable. Any place where the Promise mechanisms can accept and adopt the state of a genuine promise, they can also handle a thenable.
Thenables are basically a general label for any promise-like value that may have been created by some other system than the actual `Promise(..)` constructor. In that perspective, a thenable is generally less trustable than a genuine Promise. Consider this misbehaving thenable, for example:
```js
var th = {
    then: function thener( fulfilled ) {
        // call `fulfilled(..)` once every 100ms forever
        setInterval( fulfilled, 100 );
    }
};
```
If you received that thenable and chained it with `th.then(..)`, you'd likely be surprised that your fulfillment handler is called repeatedly, when normal Promises are supposed to only ever be resolved once.
Generally, if you're receiving what purports to be a promise or thenable back from some other system, you shouldn't just trust it blindly. In the next section, we'll see a utility included with ES6 Promises that helps address this trust concern.
But to further understand the perils of this issue, consider that *any* object in *any* piece of code that's ever been defined to have a method on it called `then(..)` can be potentially confused as a thenable -- if used with Promises, of course -- regardless of if that thing was ever intended to even remotely be related to Promise-style async coding.
Prior to ES6, there was never any special reservation made on methods called `then(..)`, and as you can imagine there have been at least a few cases where that method name was chosen before Promises ever showed up on the radar screen. The most likely case of mistaken thenable will be async libraries that use `then(..)` but which are not strictly Promises-compliant -- there are several out in the wild.
The onus will be on you to guard against directly using values with the Promise mechanism that would be incorrectly assumed to be a thenable.
### `Promise` API
The `Promise` API also provides some static methods for working with Promises.
`Promise.resolve(..)` creates a promise resolved to the value passed in. Let's compare how it works to the more manual approach:
```js
var p1 = Promise.resolve( 42 );

var p2 = new Promise( function pr(resolve){
    resolve( 42 );
} );
```
`p1` and `p2` will have essentially identical behavior. The same goes for resolving with a promise:
```js
var theP = ajax( .. );

var p1 = Promise.resolve( theP );

var p2 = new Promise( function pr(resolve){
    resolve( theP );
} );
```
**Tip:** `Promise.resolve(..)` is the solution to the thenable trust issue raised in the previous section. Any value that you are not already certain is a trustable promise -- even if it could be an immediate value -- can be normalized by passing it to `Promise.resolve(..)`. If the value is already a recognizable promise or thenable, its state/resolution will simply be adopted, insulating you from misbehavior. If it's instead an immediate value, it will be "wrapped" in a genuine promise, thereby normalizing its behavior to be async.
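For example, passing the misbehaving `th` thenable from the previous section through `Promise.resolve(..)` gives you a genuine promise that settles only once:

```js
Promise.resolve( th )
    .then( function fulfilled(){
        // gets here only once, despite `th`
        // attempting repeated fulfillments
    } );
```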
`Promise.reject(..)` creates an immediately rejected promise, the same as its `Promise(..)` constructor counterpart:
```js
var p1 = Promise.reject( "Oops" );

var p2 = new Promise( function pr(resolve,reject){
    reject( "Oops" );
} );
```
While `resolve(..)` and `Promise.resolve(..)` can accept a promise and adopt its state/resolution, `reject(..)` and `Promise.reject(..)` do not differentiate what value they receive. So, if you reject with a promise or thenable, the promise/thenable itself will be set as the rejection reason, not its underlying value.
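For example:

```js
var p = Promise.resolve( 42 );

Promise.reject( p )
    .catch( function rejected(reason){
        reason === p;   // true -- the promise itself,
                        // not `42`, is the reason
    } );
```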
`Promise.all([ .. ])` accepts an array of values (e.g., immediate values, promises, thenables). It returns a promise that will be fulfilled if all the values fulfill, or rejected immediately once the first of any of them rejects.
Starting with these values/promises:
```js
var p1 = Promise.resolve( 42 );

var p2 = new Promise( function pr(resolve){
    setTimeout( function(){
        resolve( 43 );
    }, 100 );
} );

var v3 = 44;

var p4 = new Promise( function pr(resolve,reject){
    setTimeout( function(){
        reject( "Oops" );
    }, 10 );
} );
```
Let's consider how `Promise.all([ .. ])` works with combinations of those values:
```js
Promise.all( [p1,p2,v3] )
.then( function fulfilled(vals){
    console.log( vals );            // [42,43,44]
} );

Promise.all( [p1,p2,v3,p4] )
.then(
    function fulfilled(vals){
        // never gets here
    },
    function rejected(reason){
        console.log( reason );      // Oops
    }
);
```
While `Promise.all([ .. ])` waits for all fulfillments (or the first rejection), `Promise.race([ .. ])` waits only for either the first fulfillment or rejection. Consider:
```js
// NOTE: re-setup all test values to
// avoid timing issues misleading you!

Promise.race( [p2,p1,v3] )
.then( function fulfilled(val){
    console.log( val );             // 42
} );

Promise.race( [p2,p4] )
.then(
    function fulfilled(val){
        // never gets here
    },
    function rejected(reason){
        console.log( reason );      // Oops
    }
);
```
**Warning:** While `Promise.all([])` will fulfill right away (with no values), `Promise.race([])` will hang forever. This is a strange inconsistency, and speaks to the suggestion that you should never use these methods with empty arrays.
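To illustrate:

```js
Promise.all( [] )
    .then( function fulfilled(vals){
        console.log( vals );    // []
    } );

Promise.race( [] )
    .then( function fulfilled(){
        // never gets here
    } );
```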
## Generators + Promises
It *is* possible to express a series of promises in a chain to represent the async flow control of your program. Consider:
```js
step1()
.then(
    step2,
    step1Failed
)
.then(
    function step3(msg) {
        return Promise.all( [
            step3a( msg ),
            step3b( msg ),
            step3c( msg )
        ] );
    }
)
.then( step4 );
```
However, there's a much better option for expressing async flow control, and it will probably be far preferable in terms of coding style to long promise chains. We can use what we learned in Chapter 3 about generators to express our async flow control.
The important pattern to recognize: a generator can yield a promise, and that promise can then be wired to resume the generator with its fulfillment value.
Consider the previous snippet's async flow control expressed with a generator:
```js
function *main() {
    try {
        var ret = yield step1();
    }
    catch (err) {
        ret = yield step1Failed( err );
    }

    ret = yield step2( ret );

    // step 3
    ret = yield Promise.all( [
        step3a( ret ),
        step3b( ret ),
        step3c( ret )
    ] );

    yield step4( ret );
}
```
On the surface, this snippet may seem more verbose than the promise chain equivalent in the earlier snippet. However, it offers a much more attractive -- and more importantly, a more understandable and reason-able -- synchronous-looking coding style (with `=` assignment of "return" values, etc.). That's especially true in that `try..catch` error handling can be used across those hidden async boundaries.
Why are we using Promises with the generator? It's certainly possible to do async generator coding without Promises.
Promises are a trustable system that uninverts the inversion of control of normal callbacks or thunks (see the *Async & Performance* title of this series). So, combining the trustability of Promises and the synchronicity of code in generators effectively addresses all the major deficiencies of callbacks. Also, utilities like `Promise.all([ .. ])` are a nice, clean way to express concurrency at a generator's single `yield` step.
So how does this magic work? We're going to need a *runner* that can run our generator, receive a `yield`ed promise, and wire it up to resume the generator with either the fulfillment success value, or throw an error into the generator with the rejection reason.
Many async-capable utilities/libraries have such a "runner"; for example, `Q.spawn(..)` and my asynquence's `runner(..)` plug-in. But here's a stand-alone runner to illustrate how the process works:
```js
function run(gen) {
    var args = [].slice.call( arguments, 1 ), it;

    it = gen.apply( this, args );

    return Promise.resolve()
        .then( function handleNext(value){
            var next = it.next( value );

            return (function handleResult(next){
                if (next.done) {
                    return next.value;
                }
                else {
                    return Promise.resolve( next.value )
                        .then(
                            handleNext,
                            function handleErr(err) {
                                return Promise.resolve(
                                    it.throw( err )
                                )
                                .then( handleResult );
                            }
                        );
                }
            })( next );
        } );
}
```
**Note:** For a more prolifically commented version of this utility, see the *Async & Performance* title of this series. Also, the run utilities provided with various async libraries are often more powerful/capable than what we've shown here. For example, asynquence's `runner(..)` can handle `yield`ed promises, sequences, thunks, and immediate (non-promise) values, giving you ultimate flexibility.
So now running `*main()` as listed in the earlier snippet is as easy as:
```js
run( main )
.then(
    function fulfilled(){
        // `*main()` completed successfully
    },
    function rejected(reason){
        // Oops, something went wrong
    }
);
```
Essentially, anywhere that you have more than two asynchronous steps of flow control logic in your program, you can *and should* use a promise-yielding generator driven by a run utility to express the flow control in a synchronous fashion. This will make for much easier to understand and maintain code.
This yield-a-promise-resume-the-generator pattern is going to be so common and so powerful, the next version of JavaScript after ES6 is almost certainly going to introduce a new function type that will do it automatically without needing the run utility. We'll cover `async function`s (as they're expected to be called) in Chapter 8.
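As a preview of that expected form (a sketch only; see Chapter 8), the earlier `*main()` would likely look something like:

```js
async function main() {
    try {
        var ret = await step1();
    }
    catch (err) {
        ret = await step1Failed( err );
    }

    ret = await step2( ret );

    // step 3
    ret = await Promise.all( [
        step3a( ret ),
        step3b( ret ),
        step3c( ret )
    ] );

    await step4( ret );
}

main();     // no run utility needed
```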
## Review
As JavaScript continues to mature and grow in its widespread adoption, asynchronous programming is more and more of a central concern. Callbacks are not fully sufficient for these tasks, and they totally fall down as the needs grow more sophisticated.
Thankfully, ES6 adds Promises to address one of the major shortcomings of callbacks: lack of trust in predictable behavior. Promises represent the future completion value from a potentially async task, normalizing behavior across sync and async boundaries.
But it's the combination of Promises with generators that fully realizes the benefits of rearranging our async flow control code to de-emphasize and abstract away that ugly callback soup (aka "hell").
Right now, we can manage these interactions with the aid of various async libraries' runners, but JavaScript is eventually going to support this interaction pattern with dedicated syntax alone!


@@ -1,569 +0,0 @@
# You Don't Know JS Yet: ES.Next & Beyond - 2nd Edition
# Chapter 5: Collections
| NOTE: |
| :--- |
| Work in progress |
.
.
.
.
.
.
.
----
| NOTE: |
| :--- |
| Everything below here is previous text from 1st edition, and is only here for reference while 2nd edition work is underway. **Please ignore this stuff.** |
Structured collection and access to data is a critical component of just about any JS program. From the beginning of the language up to this point, the array and the object have been our primary mechanism for creating data structures. Of course, many higher-level data structures have been built on top of these, as user-land libraries.
As of ES6, some of the most useful (and performance-optimizing!) data structure abstractions have been added as native components of the language.
We'll start this chapter first by looking at *TypedArrays*, technically contemporary to ES5 efforts several years ago, but only standardized as companions to WebGL and not JavaScript itself. As of ES6, these have been adopted directly by the language specification, which gives them first-class status.
Maps are like objects (key/value pairs), but instead of just a string for the key, you can use any value -- even another object or map! Sets are similar to arrays (lists of values), but the values are unique; if you add a duplicate, it's ignored. There are also weak (in relation to memory/garbage collection) counterparts: WeakMap and WeakSet.
## TypedArrays
As we cover in the *Types & Grammar* title of this series, JS does have a set of built-in types, like `number` and `string`. It'd be tempting to look at a feature named "typed array" and assume it means an array of a specific type of values, like an array of only strings.
However, typed arrays are really more about providing structured access to binary data using array-like semantics (indexed access, etc.). The "type" in the name refers to a "view" layered on top of the bucket of bits, which is essentially a mapping of whether the bits should be viewed as an array of 8-bit signed integers, 16-bit signed integers, and so on.
How do you construct such a bit-bucket? It's called a "buffer," and you construct it most directly with the `ArrayBuffer(..)` constructor:
```js
var buf = new ArrayBuffer( 32 );
buf.byteLength; // 32
```
`buf` is now a binary buffer that is 32 bytes long (256 bits), pre-initialized to all `0`s. A buffer by itself doesn't really allow you any interaction except for checking its `byteLength` property.
**Tip:** Several web platform features use or return array buffers, such as `FileReader#readAsArrayBuffer(..)`, `XMLHttpRequest#send(..)`, and `ImageData` (canvas data).
But on top of this array buffer, you can then layer a "view," which comes in the form of a typed array. Consider:
```js
var arr = new Uint16Array( buf );
arr.length; // 16
```
`arr` is a typed array of 16-bit unsigned integers mapped over the 256-bit `buf` buffer, meaning you get 16 elements.
### Endianness
It's very important to understand that the `arr` is mapped using the endian-setting (big-endian or little-endian) of the platform the JS is running on. This can be an issue if the binary data is created with one endianness but interpreted on a platform with the opposite endianness.
Endian means if the low-order byte (collection of 8-bits) of a multi-byte number -- such as the 16-bit unsigned ints we created in the earlier snippet -- is on the right or the left of the number's bytes.
For example, let's imagine the base-10 number `3085`, which takes 16-bits to represent. If you have just one 16-bit number container, it'd be represented in binary as `0000110000001101` (hexadecimal `0c0d`) regardless of endianness.
But if `3085` was represented with two 8-bit numbers, the endianness would significantly affect its storage in memory:
* `0000110000001101` / `0c0d` (big endian)
* `0000110100001100` / `0d0c` (little endian)
If you received the bits of `3085` as `0000110100001100` from a little-endian system, but you layered a view on top of it in a big-endian system, you'd instead see value `3340` (base-10) and `0d0c` (base-16).
Little endian is the most common representation on the web these days, but there are definitely browsers where that's not true. It's important that you understand the endianness of both the producer and consumer of a chunk of binary data.
From MDN, here's a quick way to test the endianness of your JavaScript:
```js
var littleEndian = (function() {
    var buffer = new ArrayBuffer( 2 );
    new DataView( buffer ).setInt16( 0, 256, true );
    return new Int16Array( buffer )[0] === 256;
})();
```
`littleEndian` will be `true` or `false`; for most browsers, it should return `true`. This test uses `DataView(..)`, which allows more low-level, fine-grained control over accessing (setting/getting) the bits from the view you layer over the buffer. The third parameter of the `setInt16(..)` method in the previous snippet is for telling the `DataView` what endianness you're wanting it to use for that operation.
**Warning:** Do not confuse endianness of underlying binary storage in array buffers with how a given number is represented when exposed in a JS program. For example, `(3085).toString(2)` returns `"110000001101"`, which with an assumed leading four `"0"`s appears to be the big-endian representation. In fact, this representation is based on a single 16-bit view, not a view of two 8-bit bytes. The `DataView` test above is the best way to determine endianness for your JS environment.
### Multiple Views
A single buffer can have multiple views attached to it, such as:
```js
var buf = new ArrayBuffer( 2 );
var view8 = new Uint8Array( buf );
var view16 = new Uint16Array( buf );
view16[0] = 3085;
view8[0]; // 13
view8[1]; // 12
view8[0].toString( 16 ); // "d"
view8[1].toString( 16 ); // "c"
// swap (as if endian!)
var tmp = view8[0];
view8[0] = view8[1];
view8[1] = tmp;
view16[0]; // 3340
```
The typed array constructors have multiple signature variations. We've shown so far only passing them an existing buffer. However, that form also takes two extra parameters: `byteOffset` and `length`. In other words, you can start the typed array view at a location other than `0` and you can make it span less than the full length of the buffer.
If the buffer of binary data includes data in non-uniform size/location, this technique can be quite useful.
For example, consider a binary buffer that has a 2-byte number (aka "word") at the beginning, followed by two 1-byte numbers, followed by a 32-bit floating point number. Here's how you can access that data with multiple views on the same buffer, offsets, and lengths (note that the `length` argument counts elements of the view's type, not bytes):
```js
var first = new Uint16Array( buf, 0, 1 )[0],
    second = new Uint8Array( buf, 2, 1 )[0],
    third = new Uint8Array( buf, 3, 1 )[0],
    fourth = new Float32Array( buf, 4, 1 )[0];
```
### TypedArray Constructors
In addition to the `(buffer [, byteOffset [, length ]])` form examined in the previous section, typed array constructors also support these forms:
* [constructor]`(length)`: Creates a new view over a new buffer of `length` elements
* [constructor]`(typedArr)`: Creates a new view and buffer, and copies the contents from the `typedArr` view
* [constructor]`(obj)`: Creates a new view and buffer, and iterates over the array-like or object `obj` to copy its contents
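For example (a quick sketch of those three forms):

```js
var a = new Int32Array( 4 );            // new buffer: 4 elements (16 bytes), all zeros
var b = new Int32Array( a );            // new buffer: copies `a`'s contents
var c = new Int32Array( [10,20,30] );   // new buffer: copies from the array
```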
The following typed array constructors are available as of ES6:
* `Int8Array` (8-bit signed integers), `Uint8Array` (8-bit unsigned integers)
* `Uint8ClampedArray` (8-bit unsigned integers, each value clamped on setting to the `0`-`255` range)
* `Int16Array` (16-bit signed integers), `Uint16Array` (16-bit unsigned integers)
* `Int32Array` (32-bit signed integers), `Uint32Array` (32-bit unsigned integers)
* `Float32Array` (32-bit floating point, IEEE-754)
* `Float64Array` (64-bit floating point, IEEE-754)
Instances of typed array constructors are almost the same as regular native arrays. Some differences include having a fixed length and the values all being of the same "type."
However, they share most of the same `prototype` methods. As such, you likely will be able to use them as regular arrays without needing to convert.
For example:
```js
var a = new Int32Array( 3 );
a[0] = 10;
a[1] = 20;
a[2] = 30;

a.map( function(v){
    console.log( v );
} );
// 10 20 30

a.join( "-" );
// "10-20-30"
```
**Warning:** You can't use certain `Array.prototype` methods with TypedArrays that don't make sense, such as the mutators (`splice(..)`, `push(..)`, etc.) and `concat(..)`.
Be aware that the elements in TypedArrays really are constrained to the declared bit sizes. If you have a `Uint8Array` and try to assign something larger than an 8-bit value into one of its elements, the value wraps around so as to stay within the bit length.
This could cause problems if you were trying to, for instance, square all the values in a TypedArray. Consider:
```js
var a = new Uint8Array( 3 );
a[0] = 10;
a[1] = 20;
a[2] = 30;

var b = a.map( function(v){
    return v * v;
} );

b;  // [100, 144, 132]
```
The `20` and `30` values, when squared, resulted in bit overflow. To get around such a limitation, you can use the `TypedArray#from(..)` function:
```js
var a = new Uint8Array( 3 );
a[0] = 10;
a[1] = 20;
a[2] = 30;

var b = Uint16Array.from( a, function(v){
    return v * v;
} );

b;  // [100, 400, 900]
```
See the "`Array.from(..)` Static Function" section in Chapter 6 for more information about the `Array.from(..)` that is shared with TypedArrays. Specifically, the "Mapping" section explains the mapping function accepted as its second argument.
One interesting behavior to consider is that TypedArrays have a `sort(..)` method much like regular arrays, but this one defaults to numeric sort comparisons instead of coercing values to strings for lexicographic comparison. For example:
```js
var a = [ 10, 1, 2, ];
a.sort(); // [1,10,2]
var b = new Uint8Array( [ 10, 1, 2 ] );
b.sort(); // [1,2,10]
```
The `TypedArray#sort(..)` takes an optional compare function argument just like `Array#sort(..)`, which works in exactly the same way.
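For example, a descending numeric sort:

```js
var a = new Uint8Array( [ 10, 1, 2 ] );

a.sort( function(x,y){
    return y - x;
} );

a;  // [10,2,1]
```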
## Maps
If you have a lot of JS experience, you know that objects are the primary mechanism for creating unordered key/value-pair data structures, otherwise known as maps. However, the major drawback with objects-as-maps is the inability to use a non-string value as the key.
For example, consider:
```js
var m = {};
var x = { id: 1 },
y = { id: 2 };
m[x] = "foo";
m[y] = "bar";
m[x]; // "bar"
m[y]; // "bar"
```
What's going on here? The two objects `x` and `y` both stringify to `"[object Object]"`, so only that one key is being set in `m`.
Some have implemented fake maps by maintaining a parallel array of non-string keys alongside an array of the values, such as:
```js
var keys = [], vals = [];
var x = { id: 1 },
y = { id: 2 };
keys.push( x );
vals.push( "foo" );
keys.push( y );
vals.push( "bar" );
keys[0] === x; // true
vals[0]; // "foo"
keys[1] === y; // true
vals[1]; // "bar"
```
Of course, you wouldn't want to manage those parallel arrays yourself, so you could define a data structure with methods that automatically do the management under the covers. Besides having to do that work yourself, the main drawback is that access is no longer O(1) time-complexity, but instead is O(n).
But as of ES6, there's no longer any need to do this! Just use `Map(..)`:
```js
var m = new Map();
var x = { id: 1 },
y = { id: 2 };
m.set( x, "foo" );
m.set( y, "bar" );
m.get( x ); // "foo"
m.get( y ); // "bar"
```
The only drawback is that you can't use the `[ ]` bracket access syntax for setting and retrieving values. But `get(..)` and `set(..)` work perfectly suitably instead.
To delete an element from a map, don't use the `delete` operator, but instead use the `delete(..)` method:
```js
m.set( x, "foo" );
m.set( y, "bar" );
m.delete( y );
```
You can clear the entire map's contents with `clear()`. To get the length of a map (i.e., the number of keys), use the `size` property (not `length`):
```js
m.set( x, "foo" );
m.set( y, "bar" );
m.size; // 2
m.clear();
m.size; // 0
```
The `Map(..)` constructor can also receive an iterable (see "Iterators" in Chapter 3), which must produce a list of arrays, where the first item in each array is the key and the second item is the value. This format for iteration is identical to that produced by the `entries()` method, explained in the next section. That makes it easy to make a copy of a map:
```js
var m2 = new Map( m.entries() );
// same as:
var m2 = new Map( m );
```
Because a map instance is an iterable, and its default iterator is the same as `entries()`, the second shorter form is preferable.
Of course, you can just manually specify an *entries* list (array of key/value arrays) in the `Map(..)` constructor form:
```js
var x = { id: 1 },
y = { id: 2 };
var m = new Map( [
[ x, "foo" ],
[ y, "bar" ]
] );
m.get( x ); // "foo"
m.get( y ); // "bar"
```
### Map Values
To get the list of values from a map, use `values(..)`, which returns an iterator. In Chapters 2 and 3, we covered various ways to process an iterator sequentially (like an array), such as the `...` spread operator and the `for..of` loop. Also, "Arrays" in Chapter 6 covers the `Array.from(..)` method in detail. Consider:
```js
var m = new Map();
var x = { id: 1 },
y = { id: 2 };
m.set( x, "foo" );
m.set( y, "bar" );
var vals = [ ...m.values() ];
vals; // ["foo","bar"]
Array.from( m.values() ); // ["foo","bar"]
```
As discussed in the previous section, you can iterate over a map's entries using `entries()` (or the default map iterator). Consider:
```js
var m = new Map();
var x = { id: 1 },
y = { id: 2 };
m.set( x, "foo" );
m.set( y, "bar" );
var vals = [ ...m.entries() ];
vals[0][0] === x; // true
vals[0][1]; // "foo"
vals[1][0] === y; // true
vals[1][1]; // "bar"
```
### Map Keys
To get the list of keys, use `keys()`, which returns an iterator over the keys in the map:
```js
var m = new Map();
var x = { id: 1 },
y = { id: 2 };
m.set( x, "foo" );
m.set( y, "bar" );
var keys = [ ...m.keys() ];
keys[0] === x; // true
keys[1] === y; // true
```
To determine if a map has a given key, use `has(..)`:
```js
var m = new Map();
var x = { id: 1 },
y = { id: 2 };
m.set( x, "foo" );
m.has( x ); // true
m.has( y ); // false
```
Maps essentially let you associate some extra piece of information (the value) with an object (the key) without actually putting that information on the object itself.
While you can use any kind of value as a key for a map, you typically will use objects, as strings and other primitives are already eligible as keys of normal objects. In other words, you'll probably want to continue to use normal objects for maps unless some or all of the keys need to be objects, in which case map is more appropriate.
**Warning:** If you use an object as a map key and that object is later discarded (all references unset) in attempt to have garbage collection (GC) reclaim its memory, the map itself will still retain its entry. You will need to remove the entry from the map for it to be GC-eligible. In the next section, we'll see WeakMaps as a better option for object keys and GC.
## WeakMaps
WeakMaps are a variation on maps, which has most of the same external behavior but differs underneath in how the memory allocation (specifically its GC) works.
WeakMaps take (only) objects as keys. Those objects are held *weakly*, which means if the object itself is GC'd, the entry in the WeakMap is also removed. This isn't observable behavior, though, as the only way an object can be GC'd is if there are no more references to it -- and once there are no more references to it, you have no object reference to check whether it exists in the WeakMap.
Otherwise, the API for WeakMap is similar, though more limited:
```js
var m = new WeakMap();
var x = { id: 1 },
y = { id: 2 };
m.set( x, "foo" );
m.has( x ); // true
m.has( y ); // false
```
WeakMaps do not have a `size` property or `clear()` method, nor do they expose any iterators over their keys, values, or entries. So even if you unset the `x` reference, which will remove its entry from `m` upon GC, there is no way to tell. You'll just have to take JavaScript's word for it!
Just like Maps, WeakMaps let you soft-associate information with an object. But they are particularly useful if the object is not one you completely control, such as a DOM element. If the object you're using as a map key can be deleted and should be GC-eligible when it is, then a WeakMap is a more appropriate option.
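For example, a sketch of associating metadata with DOM elements (the element ID here is hypothetical):

```js
var elemData = new WeakMap();

var btn = document.getElementById( "my-btn" );

elemData.set( btn, { clicks: 0 } );

// later, if `btn` is removed from the DOM and all other
// references to it are dropped, its entry in `elemData`
// becomes GC-eligible automatically
```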
It's important to note that a WeakMap only holds its *keys* weakly, not its values. Consider:
```js
var m = new WeakMap();
var x = { id: 1 },
y = { id: 2 },
z = { id: 3 },
w = { id: 4 };
m.set( x, y );
x = null; // { id: 1 } is GC-eligible
y = null; // { id: 2 } is GC-eligible
// only because { id: 1 } is
m.set( z, w );
w = null; // { id: 4 } is not GC-eligible
```
For this reason, WeakMaps are in my opinion better named "WeakKeyMaps."
## Sets
A set is a collection of unique values (duplicates are ignored).
The API for a set is similar to map. The `add(..)` method takes the place of the `set(..)` method (somewhat ironically), and there is no `get(..)` method.
Consider:
```js
var s = new Set();
var x = { id: 1 },
y = { id: 2 };
s.add( x );
s.add( y );
s.add( x );
s.size; // 2
s.delete( y );
s.size; // 1
s.clear();
s.size; // 0
```
The `Set(..)` constructor form is similar to `Map(..)`, in that it can receive an iterable, like another set or simply an array of values. However, unlike how `Map(..)` expects an *entries* list (array of key/value arrays), `Set(..)` expects a *values* list (array of values):
```js
var x = { id: 1 },
y = { id: 2 };
var s = new Set( [x,y] );
```
A set doesn't need a `get(..)` because you don't retrieve a value from a set, but rather test if it is present or not, using `has(..)`:
```js
var s = new Set();
var x = { id: 1 },
y = { id: 2 };
s.add( x );
s.has( x ); // true
s.has( y ); // false
```
**Note:** The comparison algorithm in `has(..)` is almost identical to `Object.is(..)` (see Chapter 6), except that `-0` and `0` are treated as the same rather than distinct.
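To illustrate:

```js
var s = new Set();

s.add( NaN );
s.has( NaN );   // true -- `NaN` matches itself here

s.add( 0 );
s.has( -0 );    // true -- `0` and `-0` treated as the same
```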
### Set Iterators
Sets have the same iterator methods as maps. Their behavior is different for sets, but symmetric with the behavior of map iterators. Consider:
```js
var s = new Set();
var x = { id: 1 },
y = { id: 2 };
s.add( x ).add( y );
var keys = [ ...s.keys() ],
vals = [ ...s.values() ],
entries = [ ...s.entries() ];
keys[0] === x;
keys[1] === y;
vals[0] === x;
vals[1] === y;
entries[0][0] === x;
entries[0][1] === x;
entries[1][0] === y;
entries[1][1] === y;
```
The `keys()` and `values()` iterators both yield a list of the unique values in the set. The `entries()` iterator yields a list of entry arrays, where both items of the array are the unique set value. The default iterator for a set is its `values()` iterator.
The inherent uniqueness of a set is its most useful trait. For example:
```js
var s = new Set( [1,2,3,4,"1",2,4,"5"] ),
uniques = [ ...s ];
uniques; // [1,2,3,4,"1","5"]
```
Set uniqueness does not allow coercion, so `1` and `"1"` are considered distinct values.
## WeakSets
Whereas a WeakMap holds its keys weakly (but its values strongly), a WeakSet holds its values weakly (there aren't really keys).
```js
var s = new WeakSet();
var x = { id: 1 },
y = { id: 2 };
s.add( x );
s.add( y );
x = null; // `x` is GC-eligible
y = null; // `y` is GC-eligible
```
**Warning:** WeakSet values must be objects, not primitive values as is allowed with sets.
## Review
ES6 defines a number of useful collections that make working with data in structured ways more efficient and effective.
TypedArrays provide "view"s of binary data buffers that align with various integer types, like 8-bit unsigned integers and 32-bit floats. The array access to binary data makes operations much easier to express and maintain, which enables you to more easily work with complex data like video, audio, canvas data, and so on.
Maps are key-value pairs where the key can be an object instead of just a string/primitive. Sets are unique lists of values (of any type).
WeakMaps are maps where the key (object) is weakly held, so that GC is free to collect the entry if it's the last reference to an object. WeakSets are sets where the value is weakly held, again so that GC can remove the entry if it's the last reference to that object.


@@ -1,823 +0,0 @@
# You Don't Know JS Yet: ES.Next & Beyond - 2nd Edition
# Chapter 6: API Additions
| NOTE: |
| :--- |
| Work in progress |
.
.
.
.
.
.
.
----
| NOTE: |
| :--- |
| Everything below here is previous text from 1st edition, and is only here for reference while 2nd edition work is underway. **Please ignore this stuff.** |
From conversions of values to mathematic calculations, ES6 adds many static properties and methods to various built-in natives and objects to help with common tasks. In addition, instances of some of the natives have new capabilities via various new prototype methods.
**Note:** Most of these features can be faithfully polyfilled. We will not dive into such details here, but check out "ES6 Shim" (https://github.com/paulmillr/es6-shim/) for standards-compliant shims/polyfills.
## `Array`
One of the most commonly extended features in JS by various user libraries is the Array type. It should be no surprise that ES6 adds a number of helpers to Array, both static and prototype (instance).
### `Array.of(..)` Static Function
There's a well known gotcha with the `Array(..)` constructor, which is that if there's only one argument passed, and that argument is a number, instead of making an array of one element with that number value in it, it constructs an empty array with a `length` property equal to the number. This action produces the unfortunate and quirky "empty slots" behavior that's reviled about JS arrays.
`Array.of(..)` replaces `Array(..)` as the preferred function-form constructor for arrays, because `Array.of(..)` does not have that special single-number-argument case. Consider:
```js
var a = Array( 3 );
a.length; // 3
a[0]; // undefined
var b = Array.of( 3 );
b.length; // 1
b[0]; // 3
var c = Array.of( 1, 2, 3 );
c.length; // 3
c; // [1,2,3]
```
Under what circumstances would you want to use `Array.of(..)` instead of just creating an array with literal syntax, like `c = [1,2,3]`? There are two possible cases.
If you have a callback that's supposed to wrap argument(s) passed to it in an array, `Array.of(..)` fits the bill perfectly. That's probably not terribly common, but it may scratch an itch for you.
The other scenario is if you subclass `Array` (see "Classes" in Chapter 3) and want to be able to create and initialize elements in an instance of your subclass, such as:
```js
class MyCoolArray extends Array {
    sum() {
        return this.reduce( function reducer(acc,curr){
            return acc + curr;
        }, 0 );
    }
}

var x = new MyCoolArray( 3 );
x.length;                       // 3 -- oops!
x.sum();                        // 0 -- oops!

var y = [3];                    // Array, not MyCoolArray
y.length;                       // 1
y.sum();                        // `sum` is not a function

var z = MyCoolArray.of( 3 );
z.length;                       // 1
z.sum();                        // 3
```
You can't just (easily) create a constructor for `MyCoolArray` that overrides the behavior of the `Array` parent constructor, because that constructor is necessary to actually create a well-behaving array value (initializing the `this`). The "inherited" static `of(..)` method on the `MyCoolArray` subclass provides a nice solution.
### `Array.from(..)` Static Function
An "array-like object" in JavaScript is an object that has a `length` property on it, specifically with an integer value of zero or higher.
These values have been notoriously frustrating to work with in JS; it's been quite common to need to transform them into an actual array, so that the various `Array.prototype` methods (`map(..)`, `indexOf(..)` etc.) are available to use with it. That process usually looks like:
```js
// array-like object
var arrLike = {
    length: 3,
    0: "foo",
    1: "bar"
};

var arr = Array.prototype.slice.call( arrLike );
```
Another common task where `slice(..)` is often used is in duplicating a real array:
```js
var arr2 = arr.slice();
```
In both cases, the new ES6 `Array.from(..)` method can be a more understandable and graceful -- not to mention less verbose -- approach:
```js
var arr = Array.from( arrLike );
var arrCopy = Array.from( arr );
```
`Array.from(..)` looks to see if the first argument is an iterable (see "Iterators" in Chapter 3), and if so, it uses the iterator to produce values to "copy" into the returned array. Because real arrays have an iterator for those values, that iterator is automatically used.
But if you pass an array-like object as the first argument to `Array.from(..)`, it behaves basically the same as `slice()` (no arguments!) or `apply(..)` does, which is that it simply loops over the value, accessing numerically named properties from `0` up to whatever the value of `length` is.
Consider:
```js
var arrLike = {
    length: 4,
    2: "foo"
};

Array.from( arrLike );
// [ undefined, undefined, "foo", undefined ]
```
Because positions `0`, `1`, and `3` didn't exist on `arrLike`, the result was the `undefined` value for each of those slots.
You could produce a similar outcome like this:
```js
var emptySlotsArr = [];
emptySlotsArr.length = 4;
emptySlotsArr[2] = "foo";
Array.from( emptySlotsArr );
// [ undefined, undefined, "foo", undefined ]
```
#### Avoiding Empty Slots
There's a subtle but important difference in the previous snippet between the `emptySlotsArr` and the result of the `Array.from(..)` call. `Array.from(..)` never produces empty slots.
Prior to ES6, if you wanted to produce an array initialized to a certain length with actual `undefined` values in each slot (no empty slots!), you had to do extra work:
```js
var a = Array( 4 ); // four empty slots!
var b = Array.apply( null, { length: 4 } ); // four `undefined` values
```
But `Array.from(..)` now makes this easier:
```js
var c = Array.from( { length: 4 } ); // four `undefined` values
```
**Warning:** Using an empty slot array like `a` in the previous snippets would work with some array functions, but others ignore empty slots (like `map(..)`, etc.). You should never intentionally work with empty slots, as it will almost certainly lead to strange/unpredictable behavior in your programs.
#### Mapping
The `Array.from(..)` utility has another helpful trick up its sleeve. The second argument, if provided, is a mapping callback (almost the same as the regular `Array#map(..)` expects) which is called to map/transform each value from the source to the returned target. Consider:
```js
var arrLike = {
    length: 4,
    2: "foo"
};

Array.from( arrLike, function mapper(val,idx){
    if (typeof val == "string") {
        return val.toUpperCase();
    }
    else {
        return idx;
    }
} );
// [ 0, 1, "FOO", 3 ]
```
**Note:** As with other array methods that take callbacks, `Array.from(..)` takes an optional third argument that if set will specify the `this` binding for the callback passed as the second argument. Otherwise, `this` will be `undefined`.
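For example (with a hypothetical helper object):

```js
var helper = {
    prefix: "val-",
    format: function(v){ return this.prefix + v; }
};

Array.from( [ 1, 2 ], function mapper(v){
    return this.format( v );
}, helper );
// ["val-1","val-2"]
```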
See "TypedArrays" in Chapter 5 for an example of using `Array.from(..)` in translating values from an array of 8-bit values to an array of 16-bit values.
### Creating Arrays and Subtypes
In the last couple of sections, we've discussed `Array.of(..)` and `Array.from(..)`, both of which create a new array in a similar way to a constructor. But what do they do in subclasses? Do they create instances of the base `Array` or the derived subclass?
```js
class MyCoolArray extends Array {
    // ..
}

MyCoolArray.from( [1, 2] ) instanceof MyCoolArray;  // true

Array.from(
    MyCoolArray.from( [1, 2] )
) instanceof MyCoolArray;                           // false
```
Both `of(..)` and `from(..)` use the constructor that they're accessed from to construct the array. So if you use the base `Array.of(..)` you'll get an `Array` instance, but if you use `MyCoolArray.of(..)`, you'll get a `MyCoolArray` instance.
In "Classes" in Chapter 3, we covered the `@@species` setting which all the built-in classes (like `Array`) have defined, which is used by any prototype methods if they create a new instance. `slice(..)` is a great example:
```js
var x = new MyCoolArray( 1, 2, 3 );
x.slice( 1 ) instanceof MyCoolArray; // true
```
Generally, that default behavior will probably be desired, but as we discussed in Chapter 3, you *can* override if you want:
```js
class MyCoolArray extends Array {
	// force `species` to be parent constructor
	static get [Symbol.species]() { return Array; }
}
var x = new MyCoolArray( 1, 2, 3 );
x.slice( 1 ) instanceof MyCoolArray; // false
x.slice( 1 ) instanceof Array; // true
```
It's important to note that the `@@species` setting is only used for the prototype methods, like `slice(..)`. It's not used by `of(..)` and `from(..)`; they both just use the `this` binding (whatever constructor is used to make the reference). Consider:
```js
class MyCoolArray extends Array {
	// force `species` to be parent constructor
	static get [Symbol.species]() { return Array; }
}
var x = new MyCoolArray( 1, 2, 3 );
MyCoolArray.from( x ) instanceof MyCoolArray; // true
MyCoolArray.of( [2, 3] ) instanceof MyCoolArray; // true
```
### `copyWithin(..)` Prototype Method
`Array#copyWithin(..)` is a new mutator method available to all arrays (including Typed Arrays; see Chapter 5). `copyWithin(..)` copies a portion of an array to another location in the same array, overwriting whatever was there before.
The arguments are *target* (the index to copy to), *start* (the inclusive index to start the copying from), and optionally *end* (the exclusive index to stop copying). If any of the arguments are negative, they're taken to be relative from the end of the array.
Consider:
```js
[1,2,3,4,5].copyWithin( 3, 0 ); // [1,2,3,1,2]
[1,2,3,4,5].copyWithin( 3, 0, 1 ); // [1,2,3,1,5]
[1,2,3,4,5].copyWithin( 0, -2 ); // [4,5,3,4,5]
[1,2,3,4,5].copyWithin( 0, -2, -1 ); // [4,2,3,4,5]
```
The `copyWithin(..)` method does not extend the array's length, as the first example in the previous snippet shows. Copying simply stops when the end of the array is reached.
Contrary to what you might expect, the copying doesn't always proceed in left-to-right (ascending index) order. If the source and target ranges overlap, a naive left-to-right copy would repeatedly re-copy values that had themselves just been copied, which is presumably not the desired behavior.
So when the ranges overlap that way, the algorithm instead copies in reverse (right-to-left) order to avoid that gotcha. Consider:
```js
[1,2,3,4,5].copyWithin( 2, 1 ); // ???
```
If the algorithm was strictly moving left to right, then the `2` should be copied to overwrite the `3`, then *that* copied `2` should be copied to overwrite `4`, then *that* copied `2` should be copied to overwrite `5`, and you'd end up with `[1,2,2,2,2]`.
Instead, the copying algorithm reverses direction and copies `4` to overwrite `5`, then copies `3` to overwrite `4`, then copies `2` to overwrite `3`, and the final result is `[1,2,2,3,4]`. That's probably more "correct" in terms of expectation, but it can be confusing if you're only thinking about the copying algorithm in a naive left-to-right fashion.
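So, to resolve the earlier `???`:

```js
[1,2,3,4,5].copyWithin( 2, 1 );		// [1,2,2,3,4]
```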
### `fill(..)` Prototype Method
Filling an existing array entirely (or partially) with a specified value is natively supported as of ES6 with the `Array#fill(..)` method:
```js
var a = Array( 4 ).fill( undefined );
a;
// [undefined,undefined,undefined,undefined]
```
`fill(..)` optionally takes *start* and *end* parameters, which indicate a subset portion of the array to fill, such as:
```js
var a = [ null, null, null, null ].fill( 42, 1, 3 );
a; // [null,42,42,null]
```
### `find(..)` Prototype Method
The most common way to search for a value in an array has generally been the `indexOf(..)` method, which returns the index the value is found at or `-1` if not found:
```js
var a = [1,2,3,4,5];
(a.indexOf( 3 ) != -1); // true
(a.indexOf( 7 ) != -1); // false
(a.indexOf( "2" ) != -1); // false
```
The `indexOf(..)` comparison requires a strict `===` match, so a search for `"2"` fails to find a value of `2`, and vice versa. There's no way to override the matching algorithm for `indexOf(..)`. It's also unfortunate/ungraceful to have to make the manual comparison to the `-1` value.
**Tip:** See the *Types & Grammar* title of this series for an interesting (and controversially confusing) technique to work around the `-1` ugliness with the `~` operator.
Since ES5, the most common workaround to have control over the matching logic has been the `some(..)` method. It works by calling a function callback for each element, until one of those calls returns a `true`/truthy value, and then it stops. Because you get to define the callback function, you have full control over how a match is made:
```js
var a = [1,2,3,4,5];
a.some( function matcher(v){
	return v == "2";
} );	// true

a.some( function matcher(v){
	return v == 7;
} );	// false
```
The downside to this approach is that you only get a `true`/`false` indicating whether a suitably matched value was found, but not what the actual matched value was.
ES6's `find(..)` addresses this. It works basically the same as `some(..)`, except that once the callback returns a `true`/truthy value, the actual array value is returned:
```js
var a = [1,2,3,4,5];

a.find( function matcher(v){
	return v == "2";
} );	// 2

a.find( function matcher(v){
	return v == 7;
} );	// undefined
```
Using a custom `matcher(..)` function also lets you match against complex values like objects:
```js
var points = [
	{ x: 10, y: 20 },
	{ x: 20, y: 30 },
	{ x: 30, y: 40 },
	{ x: 40, y: 50 },
	{ x: 50, y: 60 }
];

points.find( function matcher(point) {
	return (
		point.x % 3 == 0 &&
		point.y % 4 == 0
	);
} );	// { x: 30, y: 40 }
```
**Note:** As with other array methods that take callbacks, `find(..)` takes an optional second argument that if set will specify the `this` binding for the callback passed as the first argument. Otherwise, `this` will be `undefined`.
### `findIndex(..)` Prototype Method
While the previous section illustrates how `some(..)` yields a boolean result for a search of an array, and `find(..)` yields the matched value itself from the array search, there's also a need for finding the positional index of the matched value.
`indexOf(..)` does that, but there's no control over its matching logic; it always uses `===` strict equality. So ES6's `findIndex(..)` is the answer:
```js
var points = [
	{ x: 10, y: 20 },
	{ x: 20, y: 30 },
	{ x: 30, y: 40 },
	{ x: 40, y: 50 },
	{ x: 50, y: 60 }
];

points.findIndex( function matcher(point) {
	return (
		point.x % 3 == 0 &&
		point.y % 4 == 0
	);
} );	// 2

points.findIndex( function matcher(point) {
	return (
		point.x % 6 == 0 &&
		point.y % 7 == 0
	);
} );	// -1
```
Don't use `findIndex(..) != -1` (the way it's always been done with `indexOf(..)`) to get a boolean from the search, because `some(..)` already yields the `true`/`false` you want. And don't do `a[ a.findIndex(..) ]` to get the matched value, because that's what `find(..)` accomplishes. And finally, use `indexOf(..)` if you need the index of a strict match, or `findIndex(..)` if you need the index of a more customized match.
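To put the three side by side, here's a sketch of the same search expressed with each utility:

```js
var a = [1,2,3,4,5];

a.some( function matcher(v){ return v > 3; } );			// true -- was a match found?
a.find( function matcher(v){ return v > 3; } );			// 4 -- the matched value
a.findIndex( function matcher(v){ return v > 3; } );	// 3 -- the matched position
```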
**Note:** As with other array methods that take callbacks, `findIndex(..)` takes an optional second argument that if set will specify the `this` binding for the callback passed as the first argument. Otherwise, `this` will be `undefined`.
### `entries()`, `values()`, `keys()` Prototype Methods
In Chapter 3, we illustrated how data structures can provide a patterned item-by-item enumeration of their values, via an iterator. We then expounded on this approach in Chapter 5, as we explored how the new ES6 collections (Map, Set, etc.) provide several methods for producing different kinds of iterations.
Because it's not new to ES6, `Array` might not be thought of traditionally as a "collection," but it is one in the sense that it provides these same iterator methods: `entries()`, `values()`, and `keys()`. Consider:
```js
var a = [1,2,3];
[...a.values()]; // [1,2,3]
[...a.keys()]; // [0,1,2]
[...a.entries()]; // [ [0,1], [1,2], [2,3] ]
[...a[Symbol.iterator]()]; // [1,2,3]
```
Just like with `Set`, the default `Array` iterator is the same as what `values()` returns.
In "Avoiding Empty Slots" earlier in this chapter, we illustrated how `Array.from(..)` treats empty slots in an array as just being present slots with `undefined` in them. That's actually because under the covers, the array iterators behave that way:
```js
var a = [];
a.length = 3;
a[1] = 2;
[...a.values()]; // [undefined,2,undefined]
[...a.keys()]; // [0,1,2]
[...a.entries()]; // [ [0,undefined], [1,2], [2,undefined] ]
```
## `Object`
A few additional static helpers have been added to `Object`. Traditionally, functions of this sort have been seen as focused on the behaviors/capabilities of object values.
However, starting with ES6, `Object` static functions are also the home for general-purpose global APIs of any sort that don't already belong more naturally in some other location (the way `Array.from(..)` naturally belongs on `Array`).
### `Object.is(..)` Static Function
The `Object.is(..)` static function makes value comparisons in an even more strict fashion than the `===` comparison.
`Object.is(..)` invokes the underlying `SameValue` algorithm (ES6 spec, section 7.2.9). The `SameValue` algorithm is basically the same as the `===` Strict Equality Comparison Algorithm (ES6 spec, section 7.2.13), with two important exceptions.
Consider:
```js
var x = NaN, y = 0, z = -0;
x === x; // false
y === z; // true
Object.is( x, x ); // true
Object.is( y, z ); // false
```
You should continue to use `===` for strict equality comparisons; `Object.is(..)` shouldn't be thought of as a replacement for the operator. However, in cases where you're trying to strictly identify a `NaN` or `-0` value, `Object.is(..)` is now the preferred option.
**Note:** ES6 also adds a `Number.isNaN(..)` utility (discussed later in this chapter) which may be a slightly more convenient test; you may prefer `Number.isNaN(x)` over `Object.is(x,NaN)`. You *can* accurately test for `-0` with a clumsy `x == 0 && 1 / x === -Infinity`, but in this case `Object.is(x,-0)` is much better.
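For instance, here's a sketch of wrapping that `-0` check up as a helper (the name is just illustrative):

```js
function isNegZero(x) {
	return Object.is( x, -0 );
}

isNegZero( -0 );		// true
isNegZero( 0 / -3 );	// true
isNegZero( 0 );			// false
```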
### `Object.getOwnPropertySymbols(..)` Static Function
The "Symbols" section in Chapter 2 discusses the new Symbol primitive value type in ES6.
Symbols are likely going to be mostly used as special (meta) properties on objects. So the `Object.getOwnPropertySymbols(..)` utility was introduced, which retrieves only the symbol properties directly on an object:
```js
var o = {
	foo: 42,
	[ Symbol( "bar" ) ]: "hello world",
	baz: true
};
Object.getOwnPropertySymbols( o ); // [ Symbol(bar) ]
```
### `Object.setPrototypeOf(..)` Static Function
Also in Chapter 2, we mentioned the `Object.setPrototypeOf(..)` utility, which (unsurprisingly) sets the `[[Prototype]]` of an object for the purposes of *behavior delegation* (see the *this & Object Prototypes* title of this series). Consider:
```js
var o1 = {
	foo() { console.log( "foo" ); }
};

var o2 = {
	// .. o2's definition ..
};
Object.setPrototypeOf( o2, o1 );
// delegates to `o1.foo()`
o2.foo(); // foo
```
Alternatively:
```js
var o1 = {
	foo() { console.log( "foo" ); }
};

var o2 = Object.setPrototypeOf( {
	// .. o2's definition ..
}, o1 );
// delegates to `o1.foo()`
o2.foo(); // foo
```
In both previous snippets, the relationship between `o2` and `o1` appears at the end of the `o2` definition. More commonly, the relationship between an `o2` and `o1` is specified at the top of the `o2` definition, as it is with classes, and also with `__proto__` in object literals (see "Setting `[[Prototype]]`" in Chapter 2).
**Warning:** Setting a `[[Prototype]]` right after object creation is reasonable, as shown. But changing it much later is generally not a good idea and will usually lead to more confusion than clarity.
### `Object.assign(..)` Static Function
Many JavaScript libraries/frameworks provide utilities for copying/mixing one object's properties into another (e.g., jQuery's `extend(..)`). There are various nuanced differences between these different utilities, such as whether a property with value `undefined` is ignored or not.
ES6 adds `Object.assign(..)`, which is a simplified version of these algorithms. The first argument is the *target*, and any other arguments passed are the *sources*, which will be processed in listed order. For each source, its enumerable and own (e.g., not "inherited") keys, including symbols, are copied as if by plain `=` assignment. `Object.assign(..)` returns the target object.
Consider this object setup:
```js
var target = {},
	o1 = { a: 1 }, o2 = { b: 2 },
	o3 = { c: 3 }, o4 = { d: 4 };

// setup read-only property
Object.defineProperty( o3, "e", {
	value: 5,
	enumerable: true,
	writable: false,
	configurable: false
} );

// setup non-enumerable property
Object.defineProperty( o3, "f", {
	value: 6,
	enumerable: false
} );

o3[ Symbol( "g" ) ] = 7;

// setup non-enumerable symbol
Object.defineProperty( o3, Symbol( "h" ), {
	value: 8,
	enumerable: false
} );

Object.setPrototypeOf( o3, o4 );
```
Only the properties `a`, `b`, `c`, `e`, and `Symbol("g")` will be copied to `target`:
```js
Object.assign( target, o1, o2, o3 );
target.a; // 1
target.b; // 2
target.c; // 3
Object.getOwnPropertyDescriptor( target, "e" );
// { value: 5, writable: true, enumerable: true,
// configurable: true }
Object.getOwnPropertySymbols( target );
// [Symbol("g")]
```
The `d`, `f`, and `Symbol("h")` properties are omitted from copying; non-enumerable properties and non-owned properties are all excluded from the assignment. Also, `e` is copied as a normal property assignment, not duplicated as a read-only property.
In an earlier section, we showed using `setPrototypeOf(..)` to set up a `[[Prototype]]` relationship between an `o2` and `o1` object. There's another form that leverages `Object.assign(..)`:
```js
var o1 = {
	foo() { console.log( "foo" ); }
};

var o2 = Object.assign(
	Object.create( o1 ),
	{
		// .. o2's definition ..
	}
);
// delegates to `o1.foo()`
o2.foo(); // foo
```
**Note:** `Object.create(..)` is the ES5 standard utility that creates an empty object that is `[[Prototype]]`-linked. See the *this & Object Prototypes* title of this series for more information.
## `Math`
ES6 adds several new mathematic utilities that fill in holes or aid with common operations. All of these can be manually calculated, but most of them are now defined natively so that in some cases the JS engine can either more optimally perform the calculations, or perform them with better decimal precision than their manual counterparts.
Asm.js/transpiled JS code (see the *Async & Performance* title of this series) is the more likely consumer of many of these utilities, rather than developers authoring such math directly.
Trigonometry:
* `cosh(..)` - Hyperbolic cosine
* `acosh(..)` - Hyperbolic arccosine
* `sinh(..)` - Hyperbolic sine
* `asinh(..)` - Hyperbolic arcsine
* `tanh(..)` - Hyperbolic tangent
* `atanh(..)` - Hyperbolic arctangent
* `hypot(..)` - The square root of the sum of the squares (i.e., the generalized Pythagorean theorem)
Arithmetic:
* `cbrt(..)` - Cube root
* `clz32(..)` - Count leading zeros in 32-bit binary representation
* `expm1(..)` - The same as `exp(x) - 1`
* `log2(..)` - Binary logarithm (log base 2)
* `log10(..)` - Log base 10
* `log1p(..)` - The same as `log(x + 1)`
* `imul(..)` - 32-bit integer multiplication of two numbers
Meta:
* `sign(..)` - Returns the sign of the number
* `trunc(..)` - Returns only the integer part of a number
* `fround(..)` - Rounds to nearest 32-bit (single precision) floating-point value
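A few of these in action:

```js
Math.hypot( 3, 4 );		// 5 -- sqrt( 3^2 + 4^2 )
Math.trunc( -4.7 );		// -4 -- integer part only, no rounding
Math.sign( -4.7 );		// -1
Math.log2( 1024 );		// 10
Math.clz32( 1 );		// 31 leading zeros in the 32-bit representation
```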
## `Number`
For your programs to work properly, they must handle numbers accurately. ES6 adds some additional properties and functions to assist with common numeric operations.
Two additions to `Number` are just references to the preexisting globals: `Number.parseInt(..)` and `Number.parseFloat(..)`.
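In fact, they're specified to be the very same function objects as their global counterparts:

```js
Number.parseInt === parseInt;		// true
Number.parseFloat === parseFloat;	// true
```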
### Static Properties
ES6 adds some helpful numeric constants as static properties:
* `Number.EPSILON` - The difference between `1` and the smallest value greater than `1` that can be represented as a JS number: `2^-52` (see Chapter 2 of the *Types & Grammar* title of this series regarding using this value as a tolerance for imprecision in floating-point arithmetic)
* `Number.MAX_SAFE_INTEGER` - The highest integer that can "safely" be represented unambiguously in a JS number value: `2^53 - 1`
* `Number.MIN_SAFE_INTEGER` - The lowest integer that can "safely" be represented unambiguously in a JS number value: `-(2^53 - 1)` or `(-2)^53 + 1`.
**Note:** See Chapter 2 of the *Types & Grammar* title of this series for more information about "safe" integers.
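For instance, here's a sketch of using `Number.EPSILON` as that tolerance (the helper name is just illustrative):

```js
function numbersCloseEnoughToEqual(n1,n2) {
	return Math.abs( n1 - n2 ) < Number.EPSILON;
}

0.1 + 0.2 === 0.3;								// false -- floating-point imprecision
numbersCloseEnoughToEqual( 0.1 + 0.2, 0.3 );	// true
```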
### `Number.isNaN(..)` Static Function
The standard global `isNaN(..)` utility has been broken since its inception, in that it returns `true` for things that are not numbers, not just for the actual `NaN` value, because it coerces the argument to a number type (which can falsely result in a NaN). ES6 adds a fixed utility `Number.isNaN(..)` that works as it should:
```js
var a = NaN, b = "NaN", c = 42;
isNaN( a ); // true
isNaN( b ); // true -- oops!
isNaN( c ); // false
Number.isNaN( a ); // true
Number.isNaN( b ); // false -- fixed!
Number.isNaN( c ); // false
```
### `Number.isFinite(..)` Static Function
There's a temptation to look at a function name like `isFinite(..)` and assume it's simply "not infinite". That's not quite correct, though. There's more nuance to this new ES6 utility. Consider:
```js
var a = NaN, b = Infinity, c = 42;
Number.isFinite( a ); // false
Number.isFinite( b ); // false
Number.isFinite( c ); // true
```
The standard global `isFinite(..)` coerces its argument, but `Number.isFinite(..)` omits the coercive behavior:
```js
var a = "42";
isFinite( a ); // true
Number.isFinite( a ); // false
```
You may still prefer the coercion, in which case using the global `isFinite(..)` is a valid choice. Alternatively, and perhaps more sensibly, you can use `Number.isFinite(+x)`, which explicitly coerces `x` to a number before passing it in (see Chapter 4 of the *Types & Grammar* title of this series).
### Integer-Related Static Functions
JavaScript number values are always floating point (IEEE-754). So the notion of determining if a number is an "integer" is not about checking its type, because JS makes no such distinction.
Instead, you need to check if there's any non-zero decimal portion of the value. The easiest way to do that has commonly been:
```js
x === Math.floor( x );
```
ES6 adds a `Number.isInteger(..)` helper utility that potentially can determine this quality slightly more efficiently:
```js
Number.isInteger( 4 ); // true
Number.isInteger( 4.2 ); // false
```
**Note:** In JavaScript, there's no difference between `4`, `4.`, `4.0`, or `4.0000`. All of these would be considered an "integer", and would thus yield `true` from `Number.isInteger(..)`.
In addition, `Number.isInteger(..)` filters out some clearly not-integer values that `x === Math.floor(x)` could potentially mix up:
```js
Number.isInteger( NaN ); // false
Number.isInteger( Infinity ); // false
```
Working with "integers" is sometimes an important bit of information, as it can simplify certain kinds of algorithms. JS code by itself will not run faster just from filtering for only integers, but there are optimization techniques the engine can take (e.g., asm.js) when only integers are being used.
Because of `Number.isInteger(..)`'s handling of `NaN` and `Infinity` values, defining an `isFloat(..)` utility is not quite as simple as `!Number.isInteger(..)`. You'd need to do something like:
```js
function isFloat(x) {
	return Number.isFinite( x ) && !Number.isInteger( x );
}
isFloat( 4.2 ); // true
isFloat( 4 ); // false
isFloat( NaN ); // false
isFloat( Infinity ); // false
```
**Note:** It may seem strange, but `Infinity` should be considered neither an integer nor a float.
ES6 also defines a `Number.isSafeInteger(..)` utility, which checks to make sure the value is both an integer and within the range of `Number.MIN_SAFE_INTEGER`-`Number.MAX_SAFE_INTEGER` (inclusive).
```js
var x = Math.pow( 2, 53 ),
	y = Math.pow( -2, 53 );
Number.isSafeInteger( x - 1 ); // true
Number.isSafeInteger( y + 1 ); // true
Number.isSafeInteger( x ); // false
Number.isSafeInteger( y ); // false
```
## `String`
Strings already have quite a few helpers prior to ES6, but even more have been added to the mix.
### Unicode Functions
"Unicode-Aware String Operations" in Chapter 2 discusses `String.fromCodePoint(..)`, `String#codePointAt(..)`, and `String#normalize(..)` in detail. They have been added to improve Unicode support in JS string values.
```js
String.fromCodePoint( 0x1d49e ); // "𝒞"
"ab𝒞d".codePointAt( 2 ).toString( 16 ); // "1d49e"
```
The `normalize(..)` string prototype method is used to perform Unicode normalizations that either combine characters with adjacent "combining marks" or decompose combined characters.
Generally, the normalization won't have a visible effect on how the string renders, but it does change the string's underlying contents, which can affect how things like the `length` property are reported, as well as how access to characters by position behaves:
```js
var s1 = "e\u0301";
s1.length; // 2
var s2 = s1.normalize();
s2.length; // 1
s2 === "\xE9"; // true
```
`normalize(..)` takes an optional argument that specifies the normalization form to use. This argument must be one of the following four values: `"NFC"` (default), `"NFD"`, `"NFKC"`, or `"NFKD"`.
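For example, `"NFD"` decomposes the combined character from the earlier snippet back into its constituent code points:

```js
var s = "\xE9";						// "é"

s.normalize( "NFD" ).length;		// 2
s.normalize( "NFD" ) === "e\u0301";	// true
```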
**Note:** Normalization forms and their effects on strings is well beyond the scope of what we'll discuss here. See "Unicode Normalization Forms" (http://www.unicode.org/reports/tr15/) for more information.
### `String.raw(..)` Static Function
The `String.raw(..)` utility is provided as a built-in tag function to use with template string literals (see Chapter 2) for obtaining the raw string value without any processing of escape sequences.
This function will almost never be called manually, but will be used with tagged template literals:
```js
var str = "bc";
String.raw`\ta${str}d\xE9`;
// "\tabcd\xE9", not " abcdé"
```
In the resultant string, `\` and `t` are separate raw characters, not the one escape sequence character `\t`. The same is true with the Unicode escape sequence.
### `repeat(..)` Prototype Function
In languages like Python and Ruby, you can repeat a string as:
```js
"foo" * 3; // "foofoofoo"
```
That doesn't work in JS, because `*` multiplication is only defined for numbers, and thus `"foo"` coerces to the `NaN` number.
However, ES6 defines a string prototype method `repeat(..)` to accomplish the task:
```js
"foo".repeat( 3 ); // "foofoofoo"
```
### String Inspection Functions
In addition to `String#indexOf(..)` and `String#lastIndexOf(..)` from prior to ES6, three new methods for searching/inspection have been added: `startsWith(..)`, `endsWith(..)`, and `includes(..)`.
```js
var palindrome = "step on no pets";
palindrome.startsWith( "step on" ); // true
palindrome.startsWith( "on", 5 ); // true
palindrome.endsWith( "no pets" ); // true
palindrome.endsWith( "no", 10 ); // true
palindrome.includes( "on" ); // true
palindrome.includes( "on", 6 ); // false
```
For all the string search/inspection methods, searching for the empty string `""` will always succeed, as an empty string can be found at the beginning, the end, or any position in between.
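For example:

```js
palindrome.startsWith( "" );	// true
palindrome.endsWith( "" );		// true
palindrome.includes( "" );		// true
```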
**Warning:** These methods will not by default accept a regular expression for the search string. See "Regular Expression Symbols" in Chapter 7 for information about disabling the `isRegExp` check that is performed on this first argument.
## Review
ES6 adds many extra API helpers on the various built-in native objects:
* `Array` adds `of(..)` and `from(..)` static functions, as well as prototype functions like `copyWithin(..)` and `fill(..)`.
* `Object` adds static functions like `is(..)` and `assign(..)`.
* `Math` adds static functions like `acosh(..)` and `clz32(..)`.
* `Number` adds static properties like `Number.EPSILON`, as well as static functions like `Number.isFinite(..)`.
* `String` adds static functions like `String.fromCodePoint(..)` and `String.raw(..)`, as well as prototype functions like `repeat(..)` and `includes(..)`.
Most of these additions can be polyfilled (see ES6 Shim), and were inspired by utilities in common JS libraries/frameworks.

File diff suppressed because it is too large
@@ -1,463 +0,0 @@
# You Don't Know JS Yet: ES.Next & Beyond - 2nd Edition
# Chapter 8: Beyond ES6
| NOTE: |
| :--- |
| Work in progress |
.
.
.
.
.
.
.
----
| NOTE: |
| :--- |
| Everything below here is previous text from 1st edition, and is only here for reference while 2nd edition work is underway. **Please ignore this stuff.** |
At the time of this writing, the final draft of ES6 (*ECMAScript 2015*) is shortly headed toward its final official vote of approval by ECMA. But even as ES6 is being finalized, the TC39 committee is already hard at work on features for ES7/2016 and beyond.
As we discussed in Chapter 1, it's expected that the cadence of progress for JS is going to accelerate from updating once every several years to having an official version update once per year (hence the year-based naming). That alone is going to radically change how JS developers learn about and keep up with the language.
But even more importantly, the committee is actually going to work feature by feature. As soon as a feature is spec-complete and has its kinks worked out through implementation experiments in a few browsers, that feature will be considered stable enough to start using. We're all strongly encouraged to adopt features once they're ready instead of waiting for some official standards vote. If you haven't already learned ES6, the time is *past due* to get on board!
At the time of this writing, a list of future proposals and their status can be seen here (https://github.com/tc39/ecma262#current-proposals).
Transpilers and polyfills are how we'll bridge to these new features even before all browsers we support have implemented them. Babel, Traceur, and several other major transpilers already have support for some of the post-ES6 features that are most likely to stabilize.
With that in mind, it's already time for us to look at some of them. Let's jump in!
**Warning:** These features are all in various stages of development. While they're likely to land, and probably will look similar, take the contents of this chapter with more than a few grains of salt. This chapter will evolve in future editions of this title as these (and other!) features finalize.
## `async function`s
In "Generators + Promises" in Chapter 4, we mentioned that there's a proposal for direct syntactic support for the pattern of generators `yield`ing promises to a runner-like utility that will resume it on promise completion. Let's take a brief look at that proposed feature, called `async function`.
Recall this generator example from Chapter 4:
```js
run( function *main() {
	var ret = yield step1();

	try {
		ret = yield step2( ret );
	}
	catch (err) {
		ret = yield step2Failed( err );
	}

	ret = yield Promise.all([
		step3a( ret ),
		step3b( ret ),
		step3c( ret )
	]);

	yield step4( ret );
} )
.then(
	function fulfilled(){
		// `*main()` completed successfully
	},
	function rejected(reason){
		// Oops, something went wrong
	}
);
```
The proposed `async function` syntax can express this same flow control logic without needing the `run(..)` utility, because JS will automatically know how to look for promises to wait and resume. Consider:
```js
async function main() {
	var ret = await step1();

	try {
		ret = await step2( ret );
	}
	catch (err) {
		ret = await step2Failed( err );
	}

	ret = await Promise.all( [
		step3a( ret ),
		step3b( ret ),
		step3c( ret )
	] );

	await step4( ret );
}

main()
.then(
	function fulfilled(){
		// `main()` completed successfully
	},
	function rejected(reason){
		// Oops, something went wrong
	}
);
```
Instead of the `function *main() { ..` declaration, we declare with the `async function main() { ..` form. And instead of `yield`ing a promise, we `await` the promise. The call to run the function `main()` actually returns a promise that we can directly observe. That's the equivalent to the promise that we get back from a `run(main)` call.
Do you see the symmetry? `async function` is essentially syntactic sugar for the generators + promises + `run(..)` pattern; under the covers, it operates the same!
If you're a C# developer and this `async`/`await` looks familiar, it's because this feature is directly inspired by C#'s feature. It's nice to see language precedence informing convergence!
Babel, Traceur, and other transpilers already have early support for the current status of `async function`s, so you can start using them now. However, in the next section, "Caveats," we'll see why you perhaps shouldn't jump on that ship quite yet.
**Note:** There's also a proposal for `async function*`, which would be called an "async generator." You can both `yield` and `await` in the same code, and even combine those operations in the same statement: `x = await yield y`. The "async generator" proposal seems to be more in flux -- namely, its return value is not fully worked out yet. Some feel it should be an *observable*, which is kind of like the combination of an iterator and a promise. For now, we won't go further into that topic, but stay tuned as it evolves.
### Caveats
One unresolved point of contention with `async function` is that because it only returns a promise, there's no way from the outside to *cancel* an `async function` instance that's currently running. This can be a problem if the async operation is resource intensive, and you want to free up the resources as soon as you're sure the result won't be needed.
For example:
```js
async function request(url) {
	var resp = await (
		new Promise( function(resolve,reject){
			var xhr = new XMLHttpRequest();
			xhr.open( "GET", url );
			xhr.onreadystatechange = function(){
				if (xhr.readyState == 4) {
					if (xhr.status == 200) {
						resolve( xhr );
					}
					else {
						reject( xhr.statusText );
					}
				}
			};
			xhr.send();
		} )
	);

	return resp.responseText;
}

var pr = request( "http://some.url.1" );

pr.then(
	function fulfilled(responseText){
		// ajax success
	},
	function rejected(reason){
		// Oops, something went wrong
	}
);
```
This `request(..)` that I've conceived is somewhat like the `fetch(..)` utility that's recently been proposed for inclusion into the web platform. So the concern is, what happens if you want to use the `pr` value to somehow indicate that you want to cancel a long-running Ajax request, for example?
Promises are not cancelable (at the time of writing, anyway). In my opinion, as well as many others, they never should be (see the *Async & Performance* title of this series). And even if a promise did have a `cancel()` method on it, does that necessarily mean that calling `pr.cancel()` should actually propagate a cancelation signal all the way back up the promise chain to the `async function`?
Several possible resolutions to this debate have surfaced:
* `async function`s won't be cancelable at all (status quo)
* A "cancel token" can be passed to an async function at call time
* Return value changes to a cancelable-promise type that's added
* Return value changes to something else non-promise (e.g., observable, or control token with promise and cancel capabilities)
At the time of this writing, `async function`s return regular promises, so it's less likely that the return value will entirely change. But it's too early to tell where things will land. Keep an eye on this discussion.
## `Object.observe(..)`
One of the holy grails of front-end web development is data binding -- listening for updates to a data object and syncing the DOM representation of that data. Most JS frameworks provide some mechanism for these sorts of operations.
It appears likely that post ES6, we'll see support added directly to the language, via a utility called `Object.observe(..)`. Essentially, the idea is that you can set up a listener to observe an object's changes, and have a callback called any time a change occurs. You can then update the DOM accordingly, for instance.
There are six types of changes that you can observe:
* add
* update
* delete
* reconfigure
* setPrototype
* preventExtensions
By default, you'll be notified of all these change types, but you can filter down to only the ones you care about.
Consider:
```js
var obj = { a: 1, b: 2 };

Object.observe(
	obj,
	function(changes){
		for (var change of changes) {
			console.log( change );
		}
	},
	[ "add", "update", "delete" ]
);

obj.c = 3;
// { name: "c", object: obj, type: "add" }

obj.a = 42;
// { name: "a", object: obj, type: "update", oldValue: 1 }

delete obj.b;
// { name: "b", object: obj, type: "delete", oldValue: 2 }
```
In addition to the main `"add"`, `"update"`, and `"delete"` change types:
* The `"reconfigure"` change event is fired if one of the object's properties is reconfigured with `Object.defineProperty(..)`, such as changing its `writable` attribute. See the *this & Object Prototypes* title of this series for more information.
* The `"preventExtensions"` change event is fired if the object is made non-extensible via `Object.preventExtensions(..)`.
Because both `Object.seal(..)` and `Object.freeze(..)` also imply `Object.preventExtensions(..)`, they'll also fire its corresponding change event. In addition, `"reconfigure"` change events will also be fired for each property on the object.
* The `"setPrototype"` change event is fired if the `[[Prototype]]` of an object is changed, either by setting it with the `__proto__` setter, or using `Object.setPrototypeOf(..)`.
Notice that these change events are notified immediately after said change. Don't confuse this with proxies (see Chapter 7) where you can intercept the actions before they occur. Object observation lets you respond after a change (or set of changes) occurs.
### Custom Change Events
In addition to the six built-in change event types, you can also listen for and fire custom change events.
Consider:
```js
function observer(changes){
	for (var change of changes) {
		if (change.type == "recalc") {
			change.object.c =
				change.oldValue +
				change.object.a +
				change.object.b;
		}
	}
}

function changeObj(a,b) {
	var notifier = Object.getNotifier( obj );

	obj.a = a * 2;
	obj.b = b * 3;

	// queue up change events into a set
	notifier.notify( {
		type: "recalc",
		name: "c",
		oldValue: obj.c
	} );
}

var obj = { a: 1, b: 2, c: 3 };

Object.observe(
	obj,
	observer,
	["recalc"]
);

changeObj( 3, 11 );

obj.a;	// 6
obj.b;	// 33
obj.c;	// 3
```
The change set (`"recalc"` custom event) has been queued for delivery to the observer, but not delivered yet, which is why `obj.c` is still `3`.
The changes are by default delivered at the end of the current event loop (see the *Async & Performance* title of this series). If you want to deliver them immediately, use `Object.deliverChangeRecords(observer)`. Once the change events are delivered, you can observe `obj.c` updated as expected:
```js
obj.c; // 42
```
In the previous example, we called `notifier.notify(..)` with the complete change event record. An alternative form for queuing change records is to use `performChange(..)`, which separates specifying the type of the event from the rest of the event record's properties (via a function callback). Consider:
```js
notifier.performChange( "recalc", function(){
return {
name: "c",
// `this` is the object under observation
oldValue: this.c
};
} );
```
In certain circumstances, this separation of concerns may map more cleanly to your usage pattern.
### Ending Observation
Just like with normal event listeners, you may wish to stop observing an object's change events. For that, you use `Object.unobserve(..)`.
For example:
```js
var obj = { a: 1, b: 2 };

Object.observe( obj, function observer(changes) {
	for (var change of changes) {
		if (change.type == "setPrototype") {
			Object.unobserve(
				change.object, observer
			);
			break;
		}
	}
} );
```
In this trivial example, we listen for change events until we see the `"setPrototype"` event come through, at which time we stop observing any more change events.
## Exponentiation Operator
An operator has been proposed for JavaScript to perform exponentiation in the same way that `Math.pow(..)` does. Consider:
```js
var a = 2;
a ** 4; // Math.pow( a, 4 ) == 16
a **= 3; // a = Math.pow( a, 3 )
a; // 8
```
**Note:** `**` is essentially the same as it appears in Python, Ruby, Perl, and others.
## Objects Properties and `...`
As we saw in the "Too Many, Too Few, Just Enough" section of Chapter 2, the `...` operator is pretty obvious in how it relates to spreading or gathering arrays. But what about objects?
Such a feature was considered for ES6, but was deferred to be considered after ES6 (aka "ES7" or "ES2016" or ...). Here's how it might work in that "beyond ES6" timeframe:
```js
var o1 = { a: 1, b: 2 },
	o2 = { c: 3 },
	o3 = { ...o1, ...o2, d: 4 };
console.log( o3.a, o3.b, o3.c, o3.d );
// 1 2 3 4
```
The `...` operator might also be used to gather an object's destructured properties back into an object:
```js
var o1 = { b: 2, c: 3, d: 4 };
var { b, ...o2 } = o1;
console.log( b, o2.c, o2.d ); // 2 3 4
```
Here, the `...o2` re-gathers the destructured `c` and `d` properties back into an `o2` object (`o2` does not have a `b` property like `o1` does).
Again, these are just proposals under consideration beyond ES6. But it'll be cool if they do land.
## `Array#includes(..)`
One extremely common task JS developers need to perform is searching for a value inside an array of values. The way this has always been done is:
```js
var vals = [ "foo", "bar", 42, "baz" ];
if (vals.indexOf( 42 ) >= 0) {
	// found it!
}
```
The reason for the `>= 0` check is because `indexOf(..)` returns a numeric value of `0` or greater if found, or `-1` if not found. In other words, we're using an index-returning function in a boolean context. But because `-1` is truthy instead of falsy, we have to be more manual with our checks.
In the *Types & Grammar* title of this series, I explored another pattern that I slightly prefer:
```js
var vals = [ "foo", "bar", 42, "baz" ];
if (~vals.indexOf( 42 )) {
	// found it!
}
```
The `~` operator here conforms the return value of `indexOf(..)` to a value range that is suitably boolean coercible. That is, `-1` produces `0` (falsy), and anything else produces a non-zero (truthy) value, which is what we need for deciding whether we found the value or not.
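Concretely, `~x` is equivalent to `-(x+1)`, so only `-1` maps to the falsy `0`:

```js
~-1;	// 0 -- falsy, meaning "not found"
~0;		// -1 -- truthy
~3;		// -4 -- truthy
```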
While I think that's an improvement, others strongly disagree. However, no one can argue that `indexOf(..)`'s searching logic is perfect. It fails to find `NaN` values in the array, for example.
So a proposal has surfaced and gained a lot of support for adding a real boolean-returning array search method, called `includes(..)`:
```js
var vals = [ "foo", "bar", 42, "baz" ];
if (vals.includes( 42 )) {
	// found it!
}
```
**Note:** `Array#includes(..)` uses matching logic that will find `NaN` values, but will not distinguish between `-0` and `0` (see the *Types & Grammar* title of this series). If you don't care about `-0` values in your programs, this will likely be exactly what you're hoping for. If you *do* care about `-0`, you'll need to do your own searching logic, likely using the `Object.is(..)` utility (see Chapter 6).
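Here's a quick sketch of that matching behavior:

```js
var vals = [ 1, NaN, -0 ];

vals.indexOf( NaN ) >= 0;	// false -- `===` can never match `NaN`
vals.includes( NaN );		// true
vals.includes( 0 );			// true -- `-0` and `0` treated as the same
```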
## SIMD
We cover Single Instruction, Multiple Data (SIMD) in more detail in the *Async & Performance* title of this series, but it bears a brief mention here, as it's one of the next likely features to land in a future JS.
The SIMD API exposes various low-level (CPU) instructions that can operate on more than a single number value at a time. For example, you'll be able to specify two *vectors* of 4 or 8 numbers each, and multiply the respective elements all at once (data parallelism!).
Consider:
```js
var v1 = SIMD.float32x4( 3.14159, 21.0, 32.3, 55.55 );
var v2 = SIMD.float32x4( 2.1, 3.2, 4.3, 5.4 );
SIMD.float32x4.mul( v1, v2 );
// [ 6.597339, 67.2, 138.89, 299.97 ]
```
SIMD will include several other operations besides `mul(..)` (multiplication), such as `sub()`, `div()`, `abs()`, `neg()`, `sqrt()`, and many more.
Parallel math operations are critical for the next generations of high performance JS applications.
## WebAssembly (WASM)
Brendan Eich made a late breaking announcement near the completion of the first edition of this title that has the potential to significantly impact the future path of JavaScript: WebAssembly (WASM). We will not be able to cover WASM in detail here, as it's extremely early at the time of this writing. But this title would be incomplete without at least a brief mention of it.
One of the strongest pressures on the recent (and near future) design changes of the JS language has been the desire that it become a more suitable target for transpilation/cross-compilation from other languages (like C/C++, ClojureScript, etc.). Obviously, performance of code running as JavaScript has been a primary concern.
As discussed in the *Async & Performance* title of this series, a few years ago a group of developers at Mozilla introduced an idea to JavaScript called ASM.js. ASM.js is a subset of valid JS that most significantly restricts certain actions that make code hard for the JS engine to optimize. The result is that ASM.js compatible code running in an ASM-aware engine can run remarkably faster, nearly on par with native optimized C equivalents. Many viewed ASM.js as the most likely backbone on which performance-hungry applications would ride in JavaScript.
In other words, all roads to running code in the browser *lead through JavaScript*.
That is, until the WASM announcement. WASM provides an alternate path for other languages to target the browser's runtime environment without having to first pass through JavaScript. Essentially, if WASM takes off, JS engines will grow an extra capability to execute a binary format of code that can be seen as somewhat similar to a bytecode (like that which runs on the JVM).
WASM proposes a format for a binary representation of a highly compressed AST (syntax tree) of code, which can then give instructions directly to the JS engine and its underpinnings, without having to be parsed by JS, or even behave by the rules of JS. Languages like C or C++ can be compiled directly to the WASM format instead of ASM.js, and gain an extra speed advantage by skipping the JS parsing.
The near term for WASM is to have parity with ASM.js and indeed JS. But eventually, it's expected that WASM would grow new capabilities that surpass anything JS could do. For example, the pressure for JS to evolve radical features like threads -- a change that would certainly send major shockwaves through the JS ecosystem -- is more hopefully answered by a future WASM extension, relieving the pressure to change JS.
In fact, this new roadmap opens up many new roads for many languages to target the web runtime. That's an exciting new future path for the web platform!
What does it mean for JS? Will JS become irrelevant or "die"? Absolutely not. ASM.js will likely not see much of a future beyond the next couple of years, but the majority of JS is quite safely anchored in the web platform story.
Proponents of WASM suggest its success will mean that the design of JS will be protected from pressures that would have eventually stretched it beyond assumed breaking points of reasonability. It is projected that WASM will become the preferred target for high-performance parts of applications, as authored in any of a myriad of different languages.
Interestingly, JavaScript is one of the less likely languages to target WASM in the future. There may be future changes that carve out subsets of JS that might be tenable for such targeting, but that path doesn't seem high on the priority list.
While JS likely won't be much of a WASM funnel, JS code and WASM code will be able to interoperate in the most significant ways, just as naturally as current module interactions. You can imagine calling a JS function like `foo()` and having that actually invoke a WASM function of that name with the power to run well outside the constraints of the rest of your JS.
Things which are currently written in JS will probably continue to always be written in JS, at least for the foreseeable future. Things which are transpiled to JS will probably eventually at least consider targeting WASM instead. For things which need the utmost in performance with minimal tolerance for layers of abstraction, the likely choice will be to find a suitable non-JS language to author in, then target WASM.
There's a good chance this shift will be slow, and will be years in the making. WASM landing in all the major browser platforms is probably a few years out at best. In the meantime, the WASM project (https://github.com/WebAssembly) has an early polyfill to demonstrate proof-of-concept for its basic tenets.
But as time goes on, and as WASM learns new non-JS tricks, it's not too much a stretch of imagination to see some currently-JS things being refactored to a WASM-targetable language. For example, the performance sensitive parts of frameworks, game engines, and other heavily used tools might very well benefit from such a shift. Developers using these tools in their web applications likely won't notice much difference in usage or integration, but will just automatically take advantage of the performance and capabilities.
What's certain is that the more real WASM becomes over time, the more it means to the trajectory and design of JavaScript. It's perhaps one of the most important "beyond ES6" topics developers should keep an eye on.
## Review
If all the other books in this series essentially propose this challenge, "you (may) not know JS (as much as you thought)," this book has instead suggested, "you don't know JS anymore." The book has covered a ton of new stuff added to the language in ES6. It's an exciting collection of new language features and paradigms that will forever improve our JS programs.
But JS is not done with ES6! Not even close. There's already quite a few features in various stages of development for the "beyond ES6" timeframe. In this chapter, we briefly looked at some of the most likely candidates to land in JS very soon.
`async function`s are powerful syntactic sugar on top of the generators + promises pattern (see Chapter 4). `Object.observe(..)` adds direct native support for observing object change events, which is critical for implementing data binding. The `**` exponentiation operator, `...` for object properties, and `Array#includes(..)` are all simple but helpful improvements to existing mechanisms. Finally, SIMD ushers in a new era in the evolution of high performance JS.
Cliché as it sounds, the future of JS is really bright! The challenge of this series, and indeed of this book, is incumbent on every reader now. What are you waiting for? It's time to get learning and exploring!

Binary file not shown (image, 105 KiB)
@@ -8,55 +8,5 @@
* Foreword
* Preface
* Chapter 1: ES? Now & Future
* Versioning
* Transpiling
* Chapter 2: Syntax
* Block-Scoped Declarations
* Spread / Rest
* Default Parameter Values
* Destructuring
* Object Literal Extensions
* Template Literals
* Arrow Functions
* `for..of` Loops
* Regular Expression Extensions
* Number Literal Extensions
* Unicode
* Symbols
* Chapter 3: Organization
* Iterators
* Generators
* Modules
* Classes
* Chapter 4: Async Flow Control
* Promises
* Generators + Promises
* Chapter 5: Collections
* TypedArrays
* Maps
* WeakMaps
* Sets
* WeakSets
* Chapter 6: API Additions
* `Array`
* `Object`
* `Math`
* `Number`
* `String`
* Chapter 7: Meta Programming
* Function Names
* Meta Properties
* Well Known Symbols
* Proxies
* `Reflect` API
* Feature Testing
* Tail Call Optimization (TCO)
* Chapter 8: Beyond ES6
* `async function`s
* `Object.observe(..)`
* Exponentiation Operator
* Object Properties and `...`
* `Array#includes(..)`
* SIMD
* Appendix A: TODO