Learn Data Manipulation Nodes | Basic Workflows
AI Automation Workflows with n8n

Data Manipulation Nodes

Data manipulation nodes are the backbone of clean and reliable automation. They shape raw, inconsistent, or oversized data into a stable format that other nodes can handle safely. The main nodes used for this purpose are:

Code

Use when you need maximum flexibility or custom logic that other nodes can't handle.

Edit Fields

Quickly add, remove, or rename fields without writing any code.

Summarize

Reduce long or noisy data into a shorter, structured version that's easier to work with.

Remove Duplicates

Automatically eliminate repeated entries so the workflow processes only unique items.

Filter

Let only valid, useful, or matching data pass through while blocking the rest.

Together, these tools transform messy API responses into a consistent, predictable structure that keeps workflows efficient, low-cost, and error-free.

Code Node

The Code node runs a small piece of JavaScript to directly edit workflow data. It's the most flexible option for shaping or fixing inconsistent inputs from APIs. While drag-and-drop tools cover most cases, sometimes only custom code can produce the exact format a following node expects.

Use it to add, remove, or normalize fields, or reshape objects and arrays. Just remember that n8n always expects the node to return an array of items, or the workflow will fail.

return items;

If you don't know JavaScript, you can describe the transformation to an AI model like ChatGPT or Gemini, show what comes in and what should go out, and it can generate the needed code. Use Code only when simpler nodes (Edit Fields, Filter, Remove Duplicates) can't achieve the goal.
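A minimal sketch of what a Code node transformation looks like. In n8n the node body receives `items` directly and ends with a `return` statement; here the logic is wrapped in a function so it can run standalone. The field names (`Product_Title`, `price`) are illustrative, not from a real API.

```javascript
// Equivalent of a Code node body: normalize each incoming item.
// In n8n you would write `return items.map(...)` directly.
function transform(items) {
  return items.map((item) => ({
    json: {
      title: item.json.Product_Title.trim(), // rename a cryptic key, trim whitespace
      price: Number(item.json.price),        // coerce a string price to a number
    },
  }));
}

const sample = [
  { json: { Product_Title: "  Widget A ", price: "19.99" } },
  { json: { Product_Title: "Widget B", price: "5" } },
];
const result = transform(sample);
```

Note that every item stays wrapped in a `json` key and the function returns an array, matching the structure n8n expects back from the node.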

Edit Fields

The Edit Fields node is a simple point-and-click tool for adding, removing, or renaming fields. It defines what your data looks like going forward and prevents unnecessary data from slowing the workflow.

It's best used right after getting raw API data to strip junk, rename cryptic keys, or add fixed values before the next steps.
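The node itself is configured visually, but the logic it performs amounts to something like the following sketch, run on a hypothetical raw API record (all field names here are made up for illustration).

```javascript
// A raw record as it might arrive from an API.
const raw = {
  usr_nm: "ada",           // cryptic key worth renaming
  email: "ada@example.com",
  _internal_trace: "xyz",  // junk field to strip
};

// What an Edit Fields step does: keep, rename, and add fixed values,
// dropping everything else.
const edited = {
  username: raw.usr_nm,    // renamed
  email: raw.email,        // kept as-is
  source: "api-import",    // fixed value added for later steps
};
```

Because only the fields you list survive, the node doubles as a cleanup step: anything you don't explicitly keep is gone.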

Summarize

The Summarize node shortens long or repetitive input into concise highlights or summaries, usually using an LLM. It’s ideal for reviews, transcripts, or logs that would otherwise be too large to process efficiently.

Summarizing early reduces token costs and makes results easier to review, but remember that summaries are not factual sources, only compressed versions of the data.

Remove Duplicates

The Remove Duplicates node takes a list and deletes repeated items based on a chosen field, such as email, ID, or SKU. It prevents double entries in databases or repeated API calls.

Normalization should come first, so values like ASIN123 and asin123 don't slip past deduplication.
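The normalize-then-deduplicate idea can be sketched in code like this (the `sku` field name is illustrative): lowercase the key field first, so values that differ only in casing collapse into one entry.

```javascript
// Deduplicate a list by SKU, normalizing the key before comparing.
function dedupeBySku(items) {
  const seen = new Set();
  return items.filter((item) => {
    const key = item.sku.trim().toLowerCase(); // normalize first
    if (seen.has(key)) return false;           // drop repeats
    seen.add(key);
    return true;
  });
}

const unique = dedupeBySku([
  { sku: "ASIN123" },
  { sku: "asin123" }, // duplicate once normalized
  { sku: "B0999" },
]);
```

Without the `toLowerCase()` step, both ASIN entries would pass through as "unique" items.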

Filter

The Filter node acts as a rule-based gate, letting only specific items continue in the workflow. It's best to filter early, keeping only relevant data and preventing unnecessary operations downstream.

Think of it as a pre-check that protects expensive steps (AI, APIs, databases) from garbage data.
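As a sketch, the gate behaves like an array filter: only items that pass every condition continue downstream. The conditions here (a valid-looking email and a minimum score) are illustrative.

```javascript
const incoming = [
  { email: "a@example.com", score: 82 },
  { email: "", score: 95 },              // invalid: missing email
  { email: "b@example.com", score: 40 }, // below the threshold
];

// Only items satisfying every rule pass the gate.
const passed = incoming.filter(
  (item) => item.email.includes("@") && item.score >= 50
);
```

Items that fail the rules never reach the expensive steps that follow, which is exactly why filtering early pays off.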

Question: Which data manipulation node is specifically designed to prevent repeated entries in your workflow by removing duplicates based on a chosen field?


Section 2. Chapter 2

