
problem statement

but seriously though, let us consider some typical situations where concurrency is key


ex1: page loading issue


ex2: networking from JS


callback hell

so far we have seen one tool to deal with concurrency: callbacks
however the code can quickly degenerate into what is known as callback hell


better tools: promises and async

to mitigate this issue, we have 2 additional tools: promises, and the async / await keywords


digression: the REPL

before going further, it’s important to remember the logic of the REPL:

// when this block is pasted in the console, it will print .. 6
// because the REPL only echoes the value of the last expression
10 * 100
10 * 200
2*3

promise example with fetch()

to illustrate the notion of promises:
we see how the browser typically sends its own HTTP requests

our example is about fetching some DNA samples on www.ebi.ac.uk, but the content is not really important, it’s just an example..
and notably, the same technique can be used as-is to send API calls

to achieve this we have a builtin function called fetch(), that returns a promise object

// let us start with defining a few URLs

// NOTE that they do NOT return HTML, it's actually PLAIN TEXT
// in some kind of bio-informatics standard...
// to get a glimpse, point your browser to the first one

// a valid small DNA sample (60 kb)
URL_small = 'https://www.ebi.ac.uk/ena/browser/api/embl/AE000789?download=true'

// valid too, but larger (10 Mb)
URL_large = 'https://www.ebi.ac.uk/ena/browser/api/embl/CP010053?download=true'

// an invalid URL - used later for error management
URL_broken = 'https://some-invalid/web/site'

fetching a small file

with that done, we can fetch one URL (the small one for starters) with this code:

fetch(URL_small)
    .then(response =>  response.text())
    .then(text => console.log(`received ${text.length} characters`))

as you can see, this causes 2 things: the HTTP request goes out, and once the response has arrived, the length of its content gets displayed in the console

next, we’ll redo it with a larger file that takes longer to download, to get a better understanding


again with a larger file

let’s kind of zoom in, and redo the same with a larger URL that will take more time

run the following code, and observe that the last console.log() runs immediately, long before the download completes

// again with a larger file
// observe how the network activity happens "in the background"

fetch(URL_large)
    .then(response =>  response.text())
    .then(text => console.log(`received ${text.length} characters`))

// proceed to running these immediately
console.log("I am still alive...", 10 * 2000)

promises

.then()

typically, you use a library function that returns a promise
like here: fetch() is such a function that returns a promise

creating a promise is like starting a separate task, that will be processed concurrently !

and you can use .then() to specify what should happen next (i.e. when the promise is complete)
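to make this concrete without any network access, here is a minimal hand-made promise; slowDouble is a made-up helper, not part of any library:

```javascript
// a hand-made promise: resolves with twice its input after a short delay
// (slowDouble is a hypothetical name, just for illustration)
const slowDouble = (x) => new Promise(
    (resolve) => setTimeout(() => resolve(2 * x), 100))

// .then() registers what should happen once the promise settles
slowDouble(21).then(result => console.log(`got ${result}`))

// meanwhile, execution continues immediately
console.log("promise created, still pending...")
```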


.then().then()

all this allows for chaining, like we did when fetching the URLs above

specifically, in the examples above: fetch() returns a promise of the response object; the first .then() triggers as soon as the response headers are in, and returns response.text(), itself a promise; the second .then() triggers once the full text is available
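as a toy illustration of how each stage's return value feeds the next one (no network involved):

```javascript
// each .then() receives whatever the previous stage returned
const chain = Promise.resolve(2)
    .then(x => x * 10)     // 2  -> 20
    .then(x => x + 1)      // 20 -> 21

chain.then(x => console.log(`final value ${x}`))   // final value 21
```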


as a function

let us now rewrite our code into a proper function, so we can use it on any URL

// for convenience, just in case we need to copy that again

URL_small = 'https://www.ebi.ac.uk/ena/browser/api/embl/AE000789?download=true'
URL_large = 'https://www.ebi.ac.uk/ena/browser/api/embl/CP010053?download=true'
URL_broken = 'https://some-invalid/web/site'

without error management

in this first iteration, we do not handle errors
for the sake of simplicity, we just display the HTTP status, and the number of characters received:

const get_url1 = (url) => {
    // hope for the best (no error handling)
    let promise = fetch(url)
        .then(response => {
            // display http status from header
            // to illustrate that it is available early
            console.log(`${response.url} returned ${response.status}`)
            // actually get the contents
            // and pass it to next stage
            return response.text()
        })
        .then(text => {
            console.log(`${url} page contains ${text.length} bytes`)
            return text
        })
    return promise
}

and here is how we could use it
since our function returns a promise, we use it with .then(), just like we did with fetch()

// let us display the first 20 characters in the file

get_url1(URL_small)
    .then(text => console.log(`first 20 characters >${text.slice(0, 20)}<...`))

but when called on a broken URL, this code raises an exception:

get_url1(URL_broken)

so we need some tool to handle errors, and that’s the purpose of .catch()


.catch()
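before plugging it into our function, here is .catch() in isolation, on a promise that is made to fail on purpose:

```javascript
// the .then() stage is skipped, and control jumps to .catch()
const recovered = Promise.reject(new Error("boom"))
    .then(() => "never reached")
    .catch(err => `caught: ${err.message}`)

recovered.then(msg => console.log(msg))   // caught: boom
```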


with error management

so we can come up with a second iteration, where we take care of errors
to this end, we add a catch() at the end

const get_url2 = (url) => {
    // let's get rid of the promise variable, not needed
    return fetch(url)
        .then(response => {
            console.log(`${response.url} returned ${response.status}`)
            return response.text()
        })
        .then(text => {
            console.log(`actual page contains ${text.length} bytes`)
            return text
        })

        // this catch will deal with any error in the upstream chain
        .catch(err => console.log(`OOPS with ${url}`, err))

        // just to show that the exception was properly handled
        .then((text) => {
            console.log("the show must go on")
            return text
        })
}

note how .catch() is reminiscent of exception handling: it catches errors raised anywhere upstream in the chain

// and now, no exception occurs on an invalid URL
// we just receive a void result from get_url2 in that case

get_url2(URL_small)
    .then(text => {
        if (text)
            console.log(`first 20 characters >${text.slice(0, 20)}<...`)
    })

no more pyramid of doom

with this model, we can now avoid the pyramid of doom, using chaining
which means that this code (not runnable of course)

// nested / pyramidal

doSomething(function(result) {
  doSomethingElse(result, function(newResult) {
    doThirdThing(newResult, function(finalResult) {
      console.log(`final result ${finalResult}`)
    }, failureCallback)
  }, failureCallback)
}, failureCallback)

becomes this linear form, which describes the logic much better

doSomething()
  .then(function(result) {
     return doSomethingElse(result)
  })
  .then(function(newResult) {
     return doThirdThing(newResult)
  })
  .then(function(finalResult) {
     console.log(`final result ${finalResult}`)
  })
  .catch(failureCallback)

async / await

hopefully you are now convinced that promises are cooler than callbacks - for this kind of processing at least
however the syntax is still a little awkward, and so in order to further improve readability, these 2 keywords have been introduced: async and await


async functions

with async we can create a function that returns a Promise by default
moreover, all functions that return a Promise, including fetch(),
are called asynchronous functions
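for instance, this tiny async function seemingly returns 42, but calling it actually yields a Promise:

```javascript
// the body returns a plain number...
const answer = async () => 42

// ... but the caller receives a Promise object
const p = answer()
console.log(p instanceof Promise)   // true
p.then(v => console.log(v))         // 42
```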


the await keyword

the await keyword allows us to wait for the result of a promise (as opposed to getting the promise object itself !)
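to see the contrast, here is a self-contained sketch; give42 is a made-up async function, and remember that await is only legal inside an async function (or at the toplevel of the console):

```javascript
// give42 is a hypothetical async function, just for illustration
const give42 = async () => 42

const demo = async () => {
    const p = give42()        // without await: a Promise object
    const v = await give42()  // with await: the value itself
    console.log(p instanceof Promise, v)   // true 42
}
demo()
```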


async get_url()

let us see how we could take advantage of these new features to rewrite get_url()

//              ↓↓↓↓↓
const get_url = async (url) => {
    try {
        //             ↓↓↓↓↓
        let response = await fetch(url)
        console.log(`status=${response.status}`)
        //         ↓↓↓↓↓
        let text = await response.text()
        console.log(`length=${text.length}`)
        return text
    } catch(err) {
        console.log(`OOPS with url=${url}`, err)
    }
}

and here is how we would use this code

let text = await get_url(URL_small)
console.log(`first 20 characters >${text.slice(0, 20)}<...`)

see also

this is just an overview, refer to



benefits of promises

promises run as coroutines

// let us fetch the 3 URLs **at the same time**

for (let url of [URL_broken, URL_small, URL_large])
    get_url(url)

Promise.all()

// could also use .map(), but let's keep it simple
promises = [
   get_url(URL_broken), get_url(URL_small), get_url(URL_large)
]

contents = await Promise.all(promises)
// then, once all fetches have completed,
// you find in contents[0] .. contents[2] the 3 texts returned
// e.g. first one being undefined because the url is broken
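to convince ourselves that the tasks really overlap in time, here is a network-free sketch with a classic hand-rolled sleep helper: two 100 ms waits run under Promise.all, and the total elapsed time is about 100 ms, not 200:

```javascript
// sleep is a hand-rolled helper, not a builtin
const sleep = (ms) => new Promise(resolve => setTimeout(resolve, ms))

const timed = async () => {
    const t0 = Date.now()
    // both sleeps start at once, so they overlap
    await Promise.all([sleep(100), sleep(100)])
    return Date.now() - t0
}

timed().then(elapsed => console.log(`elapsed ≈ ${elapsed} ms`))
```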
Footnotes
  1. typically, API calls are also sent over HTTPS, but anyway