The Only JavaScript Article You'd Ever Need - 3/6

"use strict"; - The Professional Developer’s Safety Net

Before we dive into asynchronous JavaScript and the event loop, we need to discuss something fundamental: how to write JavaScript that fails fast and fails clearly. JavaScript was famously designed in 10 days, and when you create a programming language under such time constraints, you inevitably make some questionable design decisions.

For years, JavaScript had these peculiar silent failures. You’d make a typo, and instead of throwing an error, JavaScript would just quietly do something unexpected in the background, leaving you to spend hours debugging mysterious behavior. It was like having a car that would silently ignore your attempts to brake while appearing to function normally.

Enter "use strict";.

Strict Mode isn’t a new feature; it’s a different parsing and execution context for your code. Think of it as activating a “professional mode” for the JavaScript engine. It doesn’t add new syntax; it tightens the rules on existing syntax. Its primary job is to transform those silent, mysterious errors into loud, obvious, program-stopping errors. This is exactly what you want as a professional developer.

Why It Exists: A Time Machine to a Better Language

The creators of JavaScript couldn’t just fix the language’s original mistakes, because that would break millions of existing websites overnight. So, they gave us an opt-in solution. By putting the simple string "use strict"; at the top of your file or function, you’re telling the browser, “Please run my code with the modern, stricter rules.”

Using Strict Mode is the difference between a car that silently overheats and damages its engine, and a car that displays a clear warning light on the dashboard. You definitely want the warning light.

What Happens When You Forget Your Seatbelt

Let’s look at some cases of the chaos Strict Mode saves you from.

1. Accidental Global Variables (The Silent Bug Factory)

This is the most important protection Strict Mode provides. In “sloppy mode” (the default), if you assign a value to a variable you haven’t declared, JavaScript “helpfully” creates a new global variable for you. This is a maintenance nightmare that leads to global scope pollution and bugs that are nearly impossible to trace.

Without use strict:

function spillTheBeans() {
  // I forgot 'let', 'const', or 'var'
  horribleMistake = "I am now a global variable";
}

spillTheBeans();
console.log(horribleMistake); // "I am now a global variable"
// This variable is now attached to the global object (window in browsers)
// and can be accessed or overwritten from ANYWHERE. Catastrophic.

With use strict:

"use strict";

function containTheMess() {
  // I forgot 'let', 'const', or 'var' again...
  wonderfulMistake = "This will throw an error";
}

containTheMess(); // Uncaught ReferenceError: wonderfulMistake is not defined

Strict Mode throws a ReferenceError, stopping you immediately. It forces you to properly declare your variables, preventing one of the most common and insidious sources of bugs in JavaScript.

2. Deleting Undeletable Things

In non-strict mode, trying to delete things that can’t be deleted (like variables or functions) just fails silently. It does nothing and tells you nothing.

Without use strict:

let myVar = 5;
delete myVar; // Returns 'false'. The variable is NOT deleted. No error. Useless.
console.log(myVar); // 5

With use strict:

"use strict";

let myVar = 5;
delete myVar; // Uncaught SyntaxError: Delete of an unqualified identifier in strict mode.

Strict Mode screams at you, telling you that what you’re trying to do is fundamentally illegal and makes no sense.

3. Duplicate Parameter Names

This is a subtle but problematic bug. In non-strict mode, you can have duplicate parameter names in a function. The last one simply overwrites the others.

Without use strict:

function addThings(a, b, a) {
  // two 'a's!
  console.log(a + b); // Which 'a' does it use? The last one.
}

addThings(10, 20, 30); // It calculates 30 + 20, so it logs 50.
// The first 'a' (value 10) is completely ignored.

With use strict:

"use strict";

function addThings(a, b, a) {
  // Uncaught SyntaxError: Duplicate parameter name not allowed in this context
}

Strict Mode catches this at compile time, saving you from logical errors that would be a pain to debug.

How and Where to Use It

You have two options:

  1. File-level: Put "use strict"; on the very first line of your JavaScript file. This enables it for the entire script.

    "use strict";
    // All the code in this file is now in strict mode.
    console.log("Living on the edge, but safely.");
    
  2. Function-level: Put "use strict"; on the very first line inside a function. This enables it only for that function’s scope. This is useful if you’re working in a legacy codebase and can’t change everything at once.

    function beSloppy() {
      undeclared = "oops"; // Sloppy mode: this silently creates a global when called.
    }
    
    function beStrict() {
      "use strict";
      // The rules apply inside this function.
      anotherUndeclared = "nope"; // ReferenceError!
    }
    

Important Note: If you’re using modern JS modules (import/export), you can relax. All ES6 modules (and, for the record, class bodies) are in strict mode by default. This is one of the many ways modern JavaScript protects you from yourself. But you still need to know why that ReferenceError is popping up, and you should thank the invisible "use strict"; that’s doing its job.
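
To see this for yourself, put the snippet below in a file loaded as a module (e.g. via a script tag with type="module"; the file name is just for illustration). No directive anywhere, and the sloppy assignment still throws:

// my-module.js, loaded with <script type="module" src="my-module.js">
// No "use strict"; needed: modules are always strict.
export const greeting = "hello";
leakedGlobal = "nope"; // Uncaught ReferenceError: leakedGlobal is not defined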

Not using "use strict"; in a non-module file is like willingly disabling the safety features on a power tool. You might get away with it for a while, but eventually, you’re going to lose a finger. Don’t be that developer. Activate the seatbelt.


The Foundation: How Computers “Think” (Essential Background)

I’ve covered this in previous articles, but let’s quickly refresh the fundamentals:

  • A computer’s brain is the CPU. It follows instructions.
  • Your code is a set of instructions.
  • A Process is like an isolated program running on your computer (e.g., your browser). Think of it as a kitchen.
  • A Thread is a sequence of instructions the CPU executes within that process. Think of it as a chef in the kitchen.

Crucially, JavaScript is single-threaded. This means it has one chef, and that one-armed chef can only do one thing at a time from one recipe book (your code). This fact is the source of all our problems and all the brilliant solutions we’re about to discuss.

Synchronous Programming: The One-Track Mind, and Why It Sucks for the Web

“Synchronous” is a fancy word for “in order” or “one at a time.” It’s predictable, dumb, and brutally inefficient for tasks that involve waiting.

Our one-armed synchronous chef is given a simple recipe:

  1. Chop 100 carrots. (Takes 5 minutes)
  2. Microwave a potato. (Takes 5 minutes)
  3. Plate the food. (Takes 30 seconds)

The synchronous chef will:

  1. Spend 5 full minutes chopping carrots.
  2. Put the potato in the microwave, press start, and then stand there watching the microwave for the entire 5 minutes. They won’t grab a plate, they won’t clean up, they will do nothing but watch that timer tick down.
  3. Only after the microwave beeps will they finally plate the food.

That 5 minutes of waiting is called blocking. The long-running task (the microwave) is blocking the entire thread (the chef) from doing anything else.

In JavaScript, this is a death sentence for user experience.

console.log("Starting the party.");

// Let's simulate a disgustingly long, blocking task
const startTime = Date.now();
while (Date.now() - startTime < 5000) {
  // This loop does nothing but burn CPU time for 5 seconds.
  // The entire browser is frozen during this. No clicks, no scrolls, nothing.
}

console.log("Party's over. You can finally click stuff again.");

Run that in your browser console. For 5 seconds, your tab will be a digital corpse. That’s synchronous blocking. It’s the reason why, in the early days of the web, a single slow operation could kill a whole page.

Asynchronous Programming: The Grand Illusion of Multitasking

Asynchronous means “not at the same time.” An async chef is smarter. Faced with the same recipe, they would:

  1. Put the potato in the microwave and press start.
  2. IMMEDIATELY turn around and start chopping the carrots while the microwave runs in the background.
  3. When the microwave beeps (a notification!), they’ll pause the chopping, grab the potato, and then resume their other tasks.

This is non-blocking. The chef delegated the “waiting” task to an appliance (the microwave) and got on with their life.

But wait. If our JS chef only has one arm and one brain, how can they possibly do two things at once? They can’t. This is where the magic trick comes in. The asynchronicity isn’t in the chef (the JS thread), it’s in the kitchen environment.

The Big Reveal: The JavaScript Runtime and the Event Loop

JavaScript itself is synchronous. The asynchronous behavior comes from the JavaScript Runtime Environment (like a browser or Node.js). This environment provides more than just the JS engine; it provides the whole damn kitchen.

Here are the key players:

  1. Call Stack: The Chef’s immediate to-do list. It’s a LIFO (Last-In, First-Out) stack. When a function is called, it’s pushed onto the stack. When it returns, it’s popped off. There’s only one.
  2. Web APIs (The Kitchen Appliances): These are tools provided by the browser that are not part of the JavaScript engine itself. They live in a separate part of the kitchen. Think of setTimeout, fetch (for network requests), and DOM events (like onclick) as magical appliances that can do tasks in the background.
  3. Callback Queue (or Task Queue): The “Done” pile. When a Web API finishes its job (the timer finishes, the data arrives), the function you told it to run later (the callback function) gets placed in this queue, waiting its turn.
  4. Event Loop (The Grumpy Kitchen Manager): This is the most important part. The Event Loop has one, simple, relentless job: constantly check, “Is the Call Stack empty?” If and only if it’s empty, it takes the first item from the Callback Queue and pushes it onto the Call Stack to be executed.

Let’s visualize it with the classic setTimeout example:

console.log("Start"); // 1

setTimeout(() => { // 2
  console.log("Timer is done!"); // 5
}, 2000);

console.log("End"); // 3

Here’s the blow-by-blow, play-by-play:

  1. console.log("Start") enters the Call Stack. It runs immediately, prints “Start”, and is popped off.
  2. setTimeout(...) enters the Call Stack. The JS engine recognizes it’s a Web API function.
  3. The Call Stack says, “Not my job to wait.” It hands the setTimeout call over to the Web API “appliance” and gives it the callback function () => { ... }. The Web API starts a 2-second timer in the background. The setTimeout function is then popped off the Call Stack.
  4. console.log("End") enters the Call Stack. It runs immediately, prints “End”, and is popped off.
  5. At this point, the main script is done. The Call Stack is now empty. The Event Loop is watching.
  6. …2 seconds pass…
  7. The timer in the Web API finishes. The Web API takes the callback function () => { console.log("Timer is done!"); } and moves it to the Callback Queue. It’s now waiting patiently.
  8. The Event Loop does its check. “Is the Call Stack empty?” Yes, it is!
  9. The Event Loop grabs the callback from the Callback Queue and pushes it onto the Call Stack.
  10. The callback function () => { console.log("Timer is done!"); } is now on the stack. It executes, prints “Timer is done!”, and is popped off.

Final Output:

Start
End
Timer is done!

This is the illusion. JavaScript didn’t “do” anything for 2 seconds. It finished its synchronous work immediately and let the browser’s runtime handle the waiting.


A Better Way to Handle Asynchronicity: Promises (A Receipt for Your Sanity)

Callbacks work, but when you have multiple async steps that depend on each other, you get the infamous “Callback Hell” or “Pyramid of Doom.”

// Welcome to Hell. Population: You.
getData(function (a) {
  getMoreData(
    a,
    function (b) {
      getEvenMoreData(
        b,
        function (c) {
          getFinalData(
            c,
            function (d) {
              console.log(d);
            },
            failureCallback
          );
        },
        failureCallback
      );
    },
    failureCallback
  );
}, failureCallback);

This is unreadable, unmaintainable, and a one-way ticket to a mental breakdown. This is why Promises were invented.

A Promise is an object that represents the eventual success or failure of an asynchronous operation. It’s a placeholder for a future value. Think of it as a receipt from a food truck. You don’t have the food yet, but you have a promise that you will, or a promise that they’ll tell you they’re out of tacos.

A Promise has three states:

  • pending: The initial state. The food truck is still making your order.
  • fulfilled (or resolved): The operation succeeded. Your tacos are ready. The promise now has a value.
  • rejected: The operation failed. They ran out of carnitas. The promise now has a reason for failure (an error).

Let’s turn a callback-based function into a Promise:

function willGetYouADog() {
  return new Promise((resolve, reject) => {
    // The functions 'resolve' and 'reject' are passed in by the Promise constructor.
    // You call them to change the promise's state.
    setTimeout(() => {
      const rand = Math.random();
      if (rand < 0.5) {
        resolve("đŸ¶ Here is your new dog!"); // The promise is fulfilled with a value.
      } else {
        reject(new Error("😭 Sorry, no dogs available.")); // The promise is rejected with an error.
      }
    }, 2000);
  });
}

Now, we can consume this promise in a much cleaner way using .then() for success and .catch() for failure.

console.log("I'm asking for a dog...");

willGetYouADog()
  .then((successMessage) => {
    // This block only runs if the promise is RESOLVED.
    console.log(successMessage);
  })
  .catch((errorMessage) => {
    // This block only runs if the promise is REJECTED.
    console.error(errorMessage.message);
  })
  .finally(() => {
    // This block runs REGARDLESS of success or failure.
    // Perfect for cleanup tasks, like closing a loading spinner.
    console.log("The dog transaction is complete.");
  });

The real power is in chaining. You can flatten the Pyramid of Doom into a clean, readable sequence.

// Instead of nesting, we chain.
firstAsyncOperation()
  .then((result1) => {
    console.log("Step 1 Complete");
    return secondAsyncOperation(result1); // Return the next promise in the chain.
  })
  .then((result2) => {
    console.log("Step 2 Complete");
    return thirdAsyncOperation(result2);
  })
  .then((result3) => {
    console.log("All steps finished!", result3);
  })
  .catch((error) => {
    // A single .catch() can handle an error from ANYWHERE in the chain.
    console.error("An operation failed:", error);
  });

This is a monumental improvement. It’s readable, manageable, and error handling is centralized.


The Modern Approach: async/await (Syntactic Sugar That’s Actually Cocaine)

Promises are great, but can we do better? Yes. async/await, introduced in ES2017, is just “syntactic sugar” over Promises. It doesn’t change how things work underneath, but it lets us write asynchronous code that looks synchronous, making it ridiculously intuitive.

The Two Rules:

  1. You can only use the await keyword inside a function that is marked with the async keyword.
  2. An async function always implicitly returns a Promise.
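
Rule 2 trips people up, so here’s a tiny sanity check: even when an async function returns a plain value, the caller receives a Promise wrapping it.

async function giveMeANumber() {
  return 42; // Automatically wrapped in a resolved Promise.
}

console.log(giveMeANumber()); // Logs a Promise (e.g. Promise { 42 }), not 42!
giveMeANumber().then((n) => console.log(n)); // 42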

Let’s rewrite our promise chain with this glorious syntax:

async function doAllTheThings() {
  // Use a try...catch block for error handling. It's the async/await version of .catch()
  try {
    console.log("Starting Step 1...");
    // The 'await' keyword PAUSES the execution of THIS FUNCTION ONLY
    // until the promise resolves. It does NOT block the main thread.
    const result1 = await firstAsyncOperation();

    console.log("Starting Step 2...");
    const result2 = await secondAsyncOperation(result1);

    console.log("Starting Step 3...");
    const result3 = await thirdAsyncOperation(result2);

    console.log("All done!", result3);
    return result3; // This will be the resolved value of the promise returned by doAllTheThings()
  } catch (error) {
    console.error("Something went horribly wrong:", error);
    // If an error is caught, the promise returned by doAllTheThings() will be rejected.
  }
}

Look at that. It reads like a simple, top-to-bottom synchronous script. It’s beautiful. It’s clean. It’s the standard for modern asynchronous JavaScript.

Juggling Multiple Time Bombs: Promise Concurrency Methods

What if you need to run multiple async operations, but not necessarily in a sequence? JavaScript gives you tools for that.

  • Promise.all(promises): The “all or nothing” approach. Give it an array of promises. It returns a single promise that resolves with an array of all the results, but only after all of the input promises have resolved. If even one promise rejects, the whole Promise.all rejects immediately.

    • Use Case: You’re loading a webpage and need to fetch the user’s profile, their posts, and their settings. You need all three to render the page.
    const [user, posts, settings] = await Promise.all([
      fetch("/api/user"),
      fetch("/api/posts"),
      fetch("/api/settings"),
    ]);
    // (These resolve to Response objects; you'd still call .json() on each.)
    
  • Promise.race(promises): The “first one across the finish line wins (or loses).” It takes an array of promises and settles (resolves or rejects) as soon as the very first promise in the array settles.

    • Use Case: You’re fetching a critical resource and have two servers, one primary and one backup. You can request from both and take whichever responds first. Or, you can race a fetch request against a setTimeout to enforce a timeout (sketched right after this list).
    const firstToFinish = await Promise.race([
      fetch("https://primary-server.com/data"),
      fetch("https://backup-server.com/data"),
    ]);
    
  • Promise.allSettled(promises): The “I don’t care if you win or lose, just tell me when you’re all done.” It waits for all promises to settle (either fulfilled or rejected) and returns a promise that resolves with an array of objects describing the outcome of each promise ({status: 'fulfilled', value: ...} or {status: 'rejected', reason: ...}).

    • Use Case: You’re updating multiple unrelated settings. Some might fail due to permissions, but you don’t want that to stop the others. You just want a report of what succeeded and what failed.
    const results = await Promise.allSettled([
      updateProfile(),
      updateSettings(),
      deleteTempFiles(), // this one might fail
    ]);
    console.log(results);
    
  • Promise.any(promises): The “I just need a winner.” It waits for the first promise to be fulfilled. It only rejects if all of the input promises reject.

    • Use Case: You’re trying to load an image and have it hosted on three different CDNs. You don’t care which one you get it from, as long as you get it.
    const successfulImage = await Promise.any([
      fetch("//cdn1.com/img.png"),
      fetch("//cdn2.com/img.png"),
      fetch("//cdn3.com/img.png"),
    ]);
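
Here’s that fetch-versus-timer race from the Promise.race use case, as a minimal sketch (the URL and the 5-second limit are placeholders):

// A promise that never resolves; it only rejects after 'ms' milliseconds.
const timeout = (ms) =>
  new Promise((_, reject) =>
    setTimeout(() => reject(new Error("Request timed out")), ms)
  );

// Whichever settles first wins: the response, or the timeout rejection.
const response = await Promise.race([
  fetch("https://example.com/data"),
  timeout(5000),
]);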
    

So, Is JavaScript Truly Asynchronous?

No. And anyone who says it is, is oversimplifying.

JavaScript has a single-threaded, synchronous execution model with a concurrent event loop.

Let that sink in. Your JS code, the stuff you write, runs one line at a time on one thread. The asynchronicity is an illusion masterfully managed by the runtime environment (the browser, Node.js) which can handle long-running operations in the background on its own threads. When those background tasks are done, the event loop schedules your callback code to run back on JavaScript’s single thread, but only when it’s not busy.

It’s not true parallelism like in languages like Go or Java, where you can have multiple threads of your own code running simultaneously on multiple CPU cores. In JS, you have one main thread for your code, and you delegate tasks. It’s concurrency, not parallelism.




Advanced (and Weird) JavaScript: Beyond async/await

So, you’ve stared into the Event Loop and it stared back. You’ve tamed Promises and mastered async/await. You think you understand JavaScript’s asynchronous nature now. Think again.

Welcome to the weird part of the language. This is the stuff that doesn’t show up in your average “Learn JS in 21 Days” tutorial. This is the black magic that powers frameworks, enables new programming paradigms, and gives you a peek into the raw, untamed power lurking beneath JavaScript’s deceptively simple surface.

Proxy and Handler: The Ultimate Metaprogramming

Ever wanted to spy on an object? To know exactly when a property is read, set, or deleted, and then lie about the result? That’s what a Proxy is for.

A Proxy is an object that wraps another object (the “target”) and lets you intercept fundamental operations on it, like property lookup, assignment, and function invocation. You define the interception logic in a separate object called a handler. Each interception is called a “trap.”

Think of it like hiring a bouncer (Proxy) for your exclusive nightclub (target object). You give the bouncer a set of rules (handler).

  • Someone tries to get in? The bouncer checks their name against the list.
  • Someone tries to set a new rule? The bouncer checks if they have permission.
  • Someone tries to delete a VIP? The bouncer can stop them.

Why would you do this? This is the core of metaprogramming. You’re not just writing code that runs; you’re writing code that defines how other code runs. This is the secret sauce behind modern frameworks like Vue.js for its reactivity system.

Example: A Loud-Mouthed, Defensive Object

"use strict";

// The object we want to spy on
const target = {
  name: "SMM Article",
  rating: 10,
};

// The bouncer's rulebook
const handler = {
  // Trap for getting a property value
  get(obj, prop) {
    console.log(`Someone is trying to access the '${prop}' property.`);
    // You can even lie about the value!
    if (prop === "rating") {
      return "It's over 9000!";
    }
    return obj[prop];
  },

  // Trap for setting a property value
  set(obj, prop, value) {
    if (prop === "rating" && typeof value !== "number") {
      // Don't let them set a non-numeric rating
      throw new TypeError("Rating must be a number, you heathen.");
    }
    console.log(`Property '${prop}' is being set to '${value}'.`);
    obj[prop] = value;
    return true; // You must return true if the set was successful.
  },
};

const articleProxy = new Proxy(target, handler);

console.log(articleProxy.name); // Logs: "Someone is trying to access the 'name' property." -> "SMM Article"
console.log(articleProxy.rating); // Logs: "Someone is trying to access the 'rating' property." -> "It's over 9000!"

articleProxy.author = "A Chad Developer"; // Logs: "Property 'author' is being set to 'A Chad Developer'."
articleProxy.rating = "garbage"; // Throws: TypeError: Rating must be a number, you heathen.

Proxies are powerful, a bit slow, and not something you’ll use every day. But knowing they exist is knowing that the fundamental behavior of JS objects isn’t set in stone.
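
To make the framework connection concrete, here’s a toy version of that reactivity idea. This is a drastically simplified sketch, not how Vue actually implements it; the onChange callback stands in for a real re-render:

// Wraps any object so that every property write triggers a callback.
function reactive(target, onChange) {
  return new Proxy(target, {
    set(obj, prop, value) {
      obj[prop] = value;
      onChange(prop, value); // In a framework, this would schedule a re-render.
      return true;
    },
  });
}

const state = reactive({ count: 0 }, (prop, value) => {
  console.log(`'${prop}' changed to ${value}. Re-rendering!`);
});

state.count++; // Logs: 'count' changed to 1. Re-rendering!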

Generators (function* and yield): Pause-able Functions

Remember how async/await lets you “pause” a function without blocking the main thread? Where did that magic come from? It came from these weirdos: Generators.

A generator is a special kind of function that can be paused in the middle of execution and resumed later, right where it left off. It doesn’t run to completion in one go.

You declare a generator with function* (the asterisk is key). Instead of return, it uses the yield keyword to pause and “yield” a value.

Think of it like a video game. A normal function is a cutscene—it plays from start to finish. A generator is playable gameplay—you can run around, then hit the pause menu (yield), and later resume.

function* numberGenerator() {
  console.log("I'm running...");
  yield 1; // Pause and hand back the value 1

  console.log("...and we're back!");
  yield 2; // Pause and hand back the value 2

  console.log("One last time.");
  return "All done!"; // 'return' finishes the generator
}

const iterator = numberGenerator(); // Calling a generator doesn't run it! It gives you an iterator object.

console.log(iterator.next()); // { value: 1, done: false } -> Logs "I'm running..." first
console.log(iterator.next()); // { value: 2, done: false } -> Logs "...and we're back!" first
console.log(iterator.next()); // { value: "All done!", done: true } -> Logs "One last time." first
console.log(iterator.next()); // { value: undefined, done: true }
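
One thing that example doesn’t show: next() is a two-way street. Whatever you pass to next(value) becomes the result of the yield expression the generator was paused on. Keep that in mind, because it’s the exact mechanism the next subsection relies on.

function* conversation() {
  const answer = yield "What's your name?"; // Pauses here, waiting for a value...
  console.log(`Nice to meet you, ${answer}!`);
}

const convo = conversation();
console.log(convo.next().value); // "What's your name?"
convo.next("Sanchit"); // Logs: "Nice to meet you, Sanchit!"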

How Generators Power async/await (The Real Magic)

async/await is just beautiful syntactic sugar over generators and promises. await is basically a yield for a promise. Here’s a simplified “manual async/await”:

// This is our async function, but written as a generator
function* myAsyncWorkflow() {
  console.log("About to fetch a user...");
  const response = yield fetch("https://api.github.com/users/sanchxt"); // Yield the fetch promise
  const userData = yield response.json(); // Yield another promise (parsing the body)
  console.log(userData.name);
}

// This is the "Task Runner" that simulates the async/await engine
function run(generator) {
  const iterator = generator(); // Get the iterator

  function resume(value) {
    const result = iterator.next(value); // Resume the generator, passing the resolved value back in
    if (result.done) {
      return; // All done
    }
    // If the yielded value is a promise, wait for it to resolve
    result.value.then(resume);
  }

  resume(); // Start the process
}

run(myAsyncWorkflow);

Look at that run function. It takes a generator, calls .next(), waits for the yielded promise to resolve using .then(), and then calls .next() again with the resolved value. This is, conceptually, what the JavaScript engine is doing for you when you use async/await. It’s running your code, pausing at awaits, and resuming when the promise settles.

Web Workers: Finally, Real Threads

“But you said JavaScript is single-threaded!” I did, and it is. Your main script, the one that can touch the DOM and mess with the UI, runs on one thread. If you block it, your page freezes.

But what if you need to do some seriously heavy math, like process a giant image or parse a massive JSON file? If you did that on the main thread, your UI would be dead for seconds, maybe minutes.

Enter Web Workers. A Web Worker is JavaScript’s escape hatch. It allows you to run a script on a genuine background thread, completely separate from the main UI thread. This is true parallelism.

The catch? It’s like hiring a worker and putting them in a windowless basement.

  • They have no access to the document, the window, or any DOM elements.
  • Communication happens exclusively through a message-passing system (postMessage and onmessage), like shouting through a ventilation duct.

Example: Offloading a Heavy Calculation

main.js (The Main Thread)

console.log("Main: Starting up.");

// Create a new worker. The browser will fetch and run this script in a new thread.
const heavyWorker = new Worker("worker.js");

// Listen for messages FROM the worker
heavyWorker.onmessage = function (event) {
  console.log(`Main: Worker sent back a result: ${event.data}`);
  alert(`The 50,000th prime number is ${event.data}!`);
};

console.log("Main: Asking worker to do heavy lifting. UI is still responsive!");
// Send a message TO the worker to kick it off
heavyWorker.postMessage(50000);

// You can click buttons and interact with the page while the worker churns.

worker.js (The Background Thread)

console.log("Worker: Hello from the basement!");

// Listen for messages FROM the main thread
onmessage = function (event) {
  console.log(`Worker: Main thread asked for the ${event.data}th prime.`);

  // Some disgustingly heavy, blocking calculation
  let n = event.data;
  // ... (imagine a complex prime number algorithm here)
  let result = 611953; // the 50,000th prime (let's pretend we calculated it)

  console.log("Worker: Calculation done. Sending result back.");
  // Send the result back to the main thread
  postMessage(result);
};

Web Workers are your tool for CPU-intensive tasks. Don’t use them for simple I/O like fetch—the event loop already handles that perfectly. Use them when you need to crunch numbers without turning your UI into a static image.

Some More Functional Programming Concepts

This isn’t a weird feature, but a weird (to some) way of thinking. Functional Programming (FP) is a paradigm that treats computation as the evaluation of mathematical functions and avoids changing state and mutable data.

  • Higher-Order Functions: You’ve been using these all along. It’s a fancy term for a function that either takes another function as an argument, or returns a function, or both. Array.prototype.map, Array.prototype.filter, and setTimeout are all higher-order functions. You’re already an FP guru and you didn’t even know it.

  • Currying: This is a technique of transforming a function that takes multiple arguments into a sequence of functions that each take a single argument. It’s like building a specialized machine from a general-purpose one.

Normal Function:

const add = (a, b) => a + b;
add(5, 3); // 8

Curried Function:

const curriedAdd = (a) => (b) => a + b;

// This is how you use it
curriedAdd(5)(3); // 8

// But the real power is creating specialized functions
const addTen = curriedAdd(10); // addTen is now a new function: (b) => 10 + b

console.log(addTen(3)); // 13
console.log(addTen(50)); // 60

Currying is powerful for creating reusable, configurable functions and is a cornerstone of many functional libraries.
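
If writing arrows-that-return-arrows by hand gets tedious, you can automate it. Here’s a minimal generic curry helper, a common utility sketch rather than any particular library’s implementation:

// Collects arguments across calls until the wrapped function's arity is met.
function curry(fn) {
  return function curried(...args) {
    if (args.length >= fn.length) {
      return fn(...args); // Enough arguments: actually call the function.
    }
    return (...more) => curried(...args, ...more); // Otherwise, keep collecting.
  };
}

const add3 = curry((a, b, c) => a + b + c);
console.log(add3(1)(2)(3)); // 6
console.log(add3(1, 2)(3)); // 6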

A Peek Under the Hood: The V8 Engine

How is modern JavaScript so damn fast? It’s not interpreted line-by-line like in the old days. It’s handled by an incredibly complex engine, the most famous of which is Google’s V8 (which powers Chrome and Node.js).

The key is the JIT (Just-In-Time) Compiler.

  1. When your code first runs, V8’s interpreter (Ignition) compiles it to bytecode and starts executing that bytecode almost immediately. This allows for a fast startup.
  2. While it’s running, V8’s profiler watches your code. It identifies “hot” functions—pieces of code that are run over and over again.
  3. These “hot” functions are passed to V8’s optimizing compiler (TurboFan). TurboFan makes some assumptions about your code (e.g., “this variable will probably always be a number”) and compiles it down to highly optimized, low-level machine code that runs at near-native speeds.
  4. If one of its assumptions proves wrong (e.g., you suddenly pass a string to that function), it de-optimizes, throws away the fast code, and falls back to the interpreter.

This hybrid approach gives you the best of both worlds: fast startup time from an interpreter and incredible execution speed for critical code from a compiler. It’s why JavaScript can be used for everything from simple button clicks to complex server-side applications and even 3D games.
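
You can’t observe the JIT directly from ordinary code, but here’s the shape of code that triggers step 4. Treat it as an illustration; actual optimization decisions vary by engine and version:

function addStuff(a, b) {
  return a + b;
}

// Run hot with numbers: the engine may compile an optimized version
// that assumes 'a' and 'b' are always numbers.
for (let i = 0; i < 100000; i++) {
  addStuff(i, i + 1);
}

// This call breaks that assumption ('+' becomes string concatenation),
// which can force a de-optimization back to slower, generic code.
addStuff("whoops, a ", "string");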

The Task Queue Deep Dive: Microtasks vs Macrotasks

Here’s where things get really interesting. The Event Loop doesn’t just have one queue—it actually has multiple queues with different priorities. Understanding this distinction is crucial for predicting execution order in complex async scenarios.

Macrotasks (or just “Tasks”):

  • setTimeout and setInterval callbacks
  • DOM events (clicks, scrolls, etc.)
  • I/O operations
  • These go into the regular Callback Queue we discussed earlier

Microtasks:

  • Promise.then(), Promise.catch(), Promise.finally() callbacks
  • queueMicrotask() callbacks
  • These go into a special Microtask Queue

The critical rule: The Event Loop will always empty the entire Microtask Queue before processing even a single item from the Macrotask Queue.

Let’s see this in action:

console.log("1: Script start");

setTimeout(() => console.log("2: setTimeout"), 0);

Promise.resolve().then(() => console.log("3: Promise 1"));
Promise.resolve().then(() => console.log("4: Promise 2"));

setTimeout(() => console.log("5: setTimeout 2"), 0);

Promise.resolve().then(() => {
  console.log("6: Promise 3");
  Promise.resolve().then(() => console.log("7: Nested Promise"));
});

console.log("8: Script end");

Output:

1: Script start
8: Script end
3: Promise 1
4: Promise 2
6: Promise 3
7: Nested Promise
2: setTimeout
5: setTimeout 2

Notice how all Promise callbacks (microtasks) execute before any setTimeout callbacks (macrotasks), regardless of when they were queued. This priority system ensures that Promise chains complete before the next macrotask begins.
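
By the way, queueMicrotask (mentioned in the list above) is the direct way to schedule a microtask without creating a Promise, and it obeys the same priority rule:

setTimeout(() => console.log("macrotask"), 0);

queueMicrotask(() => console.log("microtask"));

console.log("sync");

// Output: sync, then microtask, then macrotask.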

Error Handling in Async Contexts

Error handling becomes more complex in asynchronous code. Here are the key patterns you need to master:

Unhandled Promise Rejections:

// Dangerous: Unhandled promise rejection
Promise.reject(new Error("Something went wrong")); // This will show as an unhandled rejection

// Better: Always handle rejections
Promise.reject(new Error("Something went wrong")).catch((error) =>
  console.error("Handled:", error.message)
);

// Even better with async/await:
async function safeFetch() {
  try {
    const response = await fetch("/api/data");
    return response.json();
  } catch (error) {
    console.error("Fetch failed:", error.message);
    return null; // Return a safe fallback
  }
}

Global Error Handlers:

// Handle unhandled promise rejections globally
window.addEventListener("unhandledrejection", (event) => {
  console.error("Unhandled promise rejection:", event.reason);
  event.preventDefault(); // Prevent the default console error
});

// Handle other uncaught runtime errors (errors in async functions surface as unhandled rejections, caught above)
window.addEventListener("error", (event) => {
  console.error("Uncaught error:", event.error);
});

Canceling Promises with AbortController

Sometimes you need to cancel async operations—like when a user navigates away from a page or cancels a search. The AbortController is your tool for this:

async function cancellableFetch(url) {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), 5000); // 5 second timeout

  try {
    const response = await fetch(url, {
      signal: controller.signal,
    });
    clearTimeout(timeoutId);
    return await response.json();
  } catch (error) {
    if (error.name === "AbortError") {
      console.log("Request was cancelled");
      return null;
    }
    throw error; // Re-throw if it's not an abort error
  }
}

// Usage
const fetchPromise = cancellableFetch("/api/data");
// Note: the controller is created inside cancellableFetch, so only the
// internal 5-second timeout can abort this request; callers can't.
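
If you want the caller to decide when to cancel, the usual pattern is to create the AbortController at the call site and pass its signal in. A quick sketch (the #cancel-button element is hypothetical):

async function fetchWithSignal(url, signal) {
  const response = await fetch(url, { signal });
  return response.json();
}

const controller = new AbortController();
const dataPromise = fetchWithSignal("/api/data", controller.signal);

// The caller owns the controller, so it can abort whenever it wants:
document.querySelector("#cancel-button").addEventListener("click", () => {
  controller.abort(); // The pending fetch rejects with an AbortError.
});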

Conclusion: Go Forth and Break Things

We’ve journeyed from the simple, single-threaded nature of JavaScript, through the grand illusion of the Event Loop, tamed the Pyramid of Doom with Promises, and mastered the elegance of async/await. We then peeked into the abyss and saw the metaprogramming magic of Proxies, the foundational power of Generators, and the true parallelism of Web Workers.

Your brain is probably buzzing. It should be.

But here’s the reality: everything you just learned is purely theoretical until you apply it. This knowledge doesn’t make you a developer—building things does.

The real learning happens when you spend hours debugging why your promise chain isn’t returning the expected value. When you figure out why your Web Worker is failing silently. When you realize your event loop understanding was theoretical until you had to optimize a performance bottleneck.

So here’s your challenge: Build something ambitious. Create an application that requires multiple, chained API calls. Build something that processes large files without freezing the UI. Tackle a project where you’re not entirely sure how you’ll solve every piece.

Yes, it will be challenging. You’ll encounter frustrating bugs and complex problems. But that’s exactly where theory transforms into practical skill. That’s where you develop the intuition that separates competent developers from beginners.

The web development landscape needs more than basic CRUD applications. Build something that pushes your understanding and contributes value.

See you around, developer :)