How to Make HTTP Requests in Node.js With Fetch API

Flipnode on Jun 12 2023


The inception of the world's first website marked the beginning of a web revolution, with HTML serving as its foundation. However, the landscape has evolved significantly since then. Modern websites rely on a multitude of resources like CSS, images, JavaScript, fonts, JSON, and more. Additionally, dynamic websites further increase the resource load.

JavaScript, as a powerful client-side scripting language, has played a pivotal role in the advancement of websites. It facilitated client-server communication without the need for page reloads through XMLHttpRequest or XHR objects.

While the emergence of the Fetch API has introduced a new dynamic, JavaScript remains popular due to its versatility, including its applicability for server-side code via Node.js.

In a recent development, support for the experimental Fetch API has been added to Node.js. This article delves into the concept of the Fetch API, its application in Node.js, and its advantages over alternatives like Axios or XHR.

What is the Fetch API

The Fetch API is an application programming interface designed for retrieving network resources. It simplifies the process of making HTTP requests, such as GET or POST, by providing a streamlined approach.

One notable feature of the Fetch API is its support for modern standards like Promises, which enables developers to write cleaner code without the need for complex callbacks.

The Fetch API is natively supported in all major browsers, allowing JavaScript developers to use it seamlessly. For server-side JavaScript, developers have long relied on the node-fetch package from npm, which sees millions of weekly downloads.
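On Node.js versions that predate the built-in Fetch API, a common pattern is to fall back to node-fetch when the global is missing. The sketch below assumes node-fetch v2 is installed (v2 still supports CommonJS require(); v3 is ESM-only):

```javascript
// Sketch: prefer the built-in fetch (Node 18+), falling back to the
// node-fetch package on older versions. The right-hand side of ?? is
// only evaluated when globalThis.fetch is missing, so node-fetch is
// never loaded on modern Node.js.
const fetchFn = globalThis.fetch ?? require('node-fetch');

// fetchFn can now be called exactly like the global fetch()
```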

With the release of Node.js version 17.5, experimental support for the Fetch API was introduced. This means you can write server-side JavaScript code using the Fetch API without installing any third-party libraries. On Node.js 17.x the feature sits behind a flag, so run your code with the following command (from Node.js 18 onward, fetch is available globally and the flag is no longer needed):

node --experimental-fetch your_code.js

How to use Fetch API

To demonstrate the usage of the Fetch API, we will use a dummy website as the target. You can employ the fetch().then() syntax to work with the returned Promise. Create a file in your code editor and input the following code:

fetch('https://quotes.toscrape.com/random')
  .then((response) => response.text())
  .then((body) => {
    console.log(body);
  });

This code initiates an HTTP GET request and prints the HTML response.

To delve deeper, the fetch() method returns a Promise object. The first then() extracts the text from the response, while the second then() logs the response HTML.

Save the file as quotes.js, open the terminal, and execute the following command:

node --experimental-fetch quotes.js

This will print the HTML content of the page. You may also encounter a warning stating that Fetch is an experimental feature.
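Before reading the body, you can also inspect the status code via the response object's status and ok properties. A minimal sketch against the same test site:

```javascript
// Sketch: check the HTTP status before consuming the body.
// response.ok is true for any 2xx status.
(async () => {
  const response = await fetch('https://quotes.toscrape.com/random');
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  const body = await response.text();
  console.log(`Got ${body.length} characters of HTML`);
})().catch((error) => console.error(error));
```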

Alternatively, you can use the async-await syntax with Node Fetch, as shown below:

(async () => {
  const response = await fetch('https://quotes.toscrape.com/random');
  const body = await response.text();
  console.log(body);
})();

This code achieves the same functionality as the previous example, but utilizes async-await for a more concise and synchronous-looking structure.

If you intend to expand the code for web scraping purposes, you can install a parsing library like Cheerio to extract specific elements. The following example illustrates extracting a quote:

const cheerio = require("cheerio");

fetch('https://quotes.toscrape.com/random')
  .then((response) => response.text())
  .then((body) => {
    const $ = cheerio.load(body);
    console.log($('.text').text());
  });

By integrating Cheerio, this code loads the HTML response into a Cheerio object ($), allowing you to extract and print the text of the element with the class .text.

HTTP headers in Fetch API

Let's discuss the response headers. The response object contains all the response headers within the response.headers collection. If you want to print the response headers, you can do so using the following code:

const url = 'https://httpbin.org/get';

fetch(url)
  .then((response) => {
    for (const [header, value] of response.headers) {
      console.log(`${header}: ${value}`);
    }
    return response.text();
  })
  .then((data) => {
    console.log(data);
  });

When running this code with Node.js, you will see all of the response headers. Things are different in the browser: for cross-origin requests, the browser restricts which response headers your script can read unless the server explicitly exposes them via the Access-Control-Expose-Headers header.

Without that, the browser only grants access to the CORS-safelisted response headers: Cache-Control, Content-Language, Content-Type, Expires, Last-Modified, and Pragma.
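Besides iterating over the whole collection, individual headers can be read with the headers object's get() and has() methods, both of which match names case-insensitively. A sketch against the same endpoint:

```javascript
// Sketch: look up individual response headers by name.
// Header names are matched case-insensitively.
(async () => {
  const response = await fetch('https://httpbin.org/get');
  console.log(response.headers.get('content-type')); // e.g. "application/json"
  console.log(response.headers.has('x-no-such-header')); // false
})().catch((error) => console.error(error));
```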

It's also possible to send custom request headers using the second parameter of fetch(), where various options can be set, including headers. The following example demonstrates how to send a custom user-agent in the HTTP request:

const url = 'https://httpbin.org/get';

fetch(url, {
  headers: {
    "User-Agent": "My User Agent",
  },
})
  .then((response) => response.json())
  .then((data) => {
    console.log(data);
  });

As discussed in the next section, the second parameter can be utilized for additional functionalities.

Sending POST requests

By default, the Fetch API uses the GET method for requests. However, you can send a POST request by specifying the method as follows:

fetch(url, { method: "POST" });

Let's practice sending some dummy data to a test website. You'll need to convert the data you want to send in the HTTP POST request into a string. Here's an example:

const url = 'https://httpbin.org/post';
const data = {
  x: 1920,
  y: 1080,
};
const customHeaders = {
  "Content-Type": "application/json",
};

fetch(url, {
  method: "POST",
  headers: customHeaders,
  body: JSON.stringify(data),
})
  .then((response) => response.json())
  .then((data) => {
    console.log(data);
  });

Observe how we set method: "POST" and use JSON.stringify(data) to convert the data into a string.

Similarly, you can utilize other HTTP methods such as DELETE, PUT, etc. by specifying the appropriate method in the fetch() call.
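A PUT request, for instance, follows exactly the same pattern. This sketch sends JSON to httpbin's echo endpoint, which returns the parsed body under its "json" key:

```javascript
// Sketch: a PUT request with a JSON body. httpbin echoes the
// parsed JSON back under the "json" key of its response.
fetch('https://httpbin.org/put', {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ x: 1920, y: 1080 }),
})
  .then((response) => response.json())
  .then((data) => console.log(data.json)) // { x: 1920, y: 1080 }
  .catch((error) => console.error(error));
```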

Exception handling

When using the Fetch API in Node.js, you can use the fetch().then().catch() chain to handle errors, since fetch() returns a Promise:

fetch('https://invalid_url')
  .then((response) => response.text())
  .then((body) => {
    console.log(body);
  })
  .catch((error) => {
    console.error('Error in execution:', error);
  });

If you prefer to use the async-await syntax, you can handle errors using the try - catch block like this:

(async () => {
  try {
    const response = await fetch('https://invalid_url');
    const body = await response.text();
    console.log(body);
  } catch (error) {
    console.error(error);
  }
})();

By encapsulating the asynchronous code within the try block, any errors that occur during the await operations will be caught and handled in the catch block.
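One caveat worth noting: fetch() rejects only on network-level failures such as DNS errors, not on HTTP error statuses like 404 or 500. Those resolve normally and must be checked via response.ok, as in this sketch:

```javascript
// Sketch: an HTTP 404 still resolves the promise; only network
// failures reject. Check response.ok and throw to route HTTP
// errors into the catch block.
(async () => {
  try {
    const response = await fetch('https://httpbin.org/status/404');
    if (!response.ok) {
      throw new Error(`HTTP error: ${response.status}`);
    }
    console.log(await response.text());
  } catch (error) {
    console.error(error.message);
  }
})();
```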

Axios vs Fetch API

Axios is a widely used Node package that simplifies making HTTP GET and POST requests. Check out our tutorial on web scraping with JavaScript and Node.js for a practical example of using Axios.

To send a GET request using Axios, you can call the get() method:

const response = await axios.get(url);

Similarly, to send a POST request, use the post() method:

const response = await axios.post(url);

Now let's compare the Node Fetch API with Axios by sending a POST request to https://httpbin.org/post with JSON data. Pay attention to the following details:

  • JSON data
  • Custom request headers
  • Response in JSON format

Here's an example using Axios:

const axios = require('axios');

const url = 'https://httpbin.org/post';
const data = {
  x: 1920,
  y: 1080,
};
const customHeaders = {
  "Content-Type": "application/json",
};

axios.post(url, data, {
  headers: customHeaders,
})
  .then(({ data }) => {
    console.log(data);
  })
  .catch((error) => {
    console.error(error);
  });

And here's the equivalent code using the Fetch API:

const url = 'https://httpbin.org/post';
const data = {
  x: 1920,
  y: 1080,
};
const customHeaders = {
  "Content-Type": "application/json",
};

fetch(url, {
  method: "POST",
  headers: customHeaders,
  body: JSON.stringify(data),
})
  .then((response) => response.json())
  .then((data) => {
    console.log(data);
  })
  .catch((error) => {
    console.error(error);
  });

Both of these code snippets will produce the same output.

Based on the examples provided, here are the differences between Axios and Fetch API:

  • Request Body: Fetch API uses the body property of the request, whereas Axios uses the data property.
  • JSON Data: With Axios, JSON data can be sent directly, while Fetch API requires the conversion of data to a string using JSON.stringify().
  • Handling JSON: Axios can handle JSON directly, while the Fetch API requires calling response.json() to obtain the response in JSON format.
  • Response Data: Axios exposes the parsed body on the response's data property, whereas with the Fetch API you parse the body yourself and can assign it to any variable name.
  • Progress Monitoring: Axios provides onDownloadProgress and onUploadProgress callbacks, whereas the Fetch API has no built-in progress events; progress can only be tracked by reading the response stream manually.
  • Interceptors: Axios supports interceptors, allowing you to intercept and modify requests or responses. Fetch API does not have built-in support for interceptors.
  • Streaming Response: The Fetch API exposes the response body as a stream, which can be useful for large data transfers. In Node.js, Axios can also stream via responseType: 'stream', but that option is Node-specific.

These are some of the notable differences between Axios and the Fetch API. Consider them when choosing between the two for your specific use case.
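To illustrate the streaming point: in Node.js 18+, response.body is a web ReadableStream, which is async-iterable, so large responses can be consumed chunk by chunk instead of buffering everything with text(). A sketch:

```javascript
// Sketch: consume the response body incrementally. In Node 18+,
// response.body is a web ReadableStream and can be iterated with
// for await...of, receiving Uint8Array chunks.
(async () => {
  const response = await fetch('https://quotes.toscrape.com/random');
  let bytes = 0;
  for await (const chunk of response.body) {
    bytes += chunk.length;
  }
  console.log(`Received ${bytes} bytes`);
})().catch((error) => console.error(error));
```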

Conclusion

The inclusion of Fetch API in Node.js has been eagerly anticipated. While it remains an experimental feature at the time of writing this article, you can utilize the node-fetch package for production code, offering the same functionality. When combined with libraries like Cheerio, the Fetch API becomes a valuable tool for web scraping purposes.
