Data Manipulation Nodes
Data manipulation nodes are the backbone of clean and reliable automation. They shape raw, inconsistent, or oversized data into a stable format that other nodes can handle safely. The main nodes used for this purpose are:
Code: use when you need maximum flexibility or custom logic that other nodes can't handle.
Edit Fields: quickly add, remove, or rename fields without writing any code.
Summarize: reduce long or noisy data into a shorter, structured version that's easier to work with.
Remove Duplicates: automatically eliminate repeated entries so the workflow processes only unique items.
Filter: let only valid, useful, or matching data pass through while blocking the rest.
Together, these tools transform messy API responses into a consistent, predictable structure that keeps workflows efficient, low-cost, and error-free.
Code Node
The Code node runs a small piece of JavaScript to directly edit workflow data. It's the most flexible option for shaping or fixing inconsistent inputs from APIs. While drag-and-drop tools cover most cases, sometimes only custom code can produce the exact format a following node expects.
Use it to add, remove, or normalize fields, or reshape objects and arrays. Just remember that n8n always expects the node to return an array of items, or the workflow will fail.
```js
return items;
```
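For example, here is a minimal sketch of a Code node (in "Run Once for All Items" mode) that normalizes an email field on every incoming item. The `email` field name is an illustrative assumption, not something fixed by n8n:

```js
// Minimal sketch for an n8n Code node, "Run Once for All Items" mode.
// The "email" field is an illustrative assumption.
for (const item of items) {
  if (typeof item.json.email === 'string') {
    // Normalize: trim whitespace and lowercase so later matching is consistent
    item.json.email = item.json.email.trim().toLowerCase();
  }
}
// n8n requires the Code node to return an array of items
return items;
```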
If you don't know JavaScript, you can describe the transformation to an AI model like ChatGPT or Gemini, show what comes in and what should go out, and it can generate the needed code. Use Code only when simpler nodes (Edit Fields, Filter, Remove Duplicates) can't achieve the goal.
Edit Fields
The Edit Fields node is a simple point-and-click tool for adding, removing, or renaming fields. It defines what your data looks like going forward and prevents unnecessary data from slowing the workflow.
It's best used right after getting raw API data to strip junk, rename cryptic keys, or add fixed values before the next steps.
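As a rough before/after sketch (the field names are invented for illustration), Edit Fields might turn a raw API item like the first object into the second:

```js
// Hypothetical raw API item: cryptic keys plus debug noise
const before = { usr_nm: "Ada", usr_eml: "ada@example.com", dbg_trace: "stack dump" };

// After Edit Fields: junk stripped, keys renamed, a fixed value added
const after = { name: "Ada", email: "ada@example.com", source: "api" };
```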
Summarize
The Summarize node shortens long or repetitive input into concise highlights or summaries, usually using an LLM. It's ideal for reviews, transcripts, or logs that would otherwise be too large to process efficiently.
Summarizing early reduces token costs and makes results easier to review, but remember that summaries are compressed versions of the data, not factual sources.
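As an illustration (the review texts here are invented), this is the kind of compression a summarization step performs:

```js
// Hypothetical input: several long reviews
const input = [
  "Battery easily lasts two days, though the bundled charger feels flimsy.",
  "Great battery life, but my charger stopped working after a week.",
];

// Kind of output a summarization step might return: shorter, structured,
// and cheaper to pass to later AI, API, or database steps
const summary = "Strong battery life; recurring complaints about charger quality.";
```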
Remove Duplicates
The Remove Duplicates node takes a list and deletes repeated items based on a chosen field, such as email, ID, or SKU. It prevents double entries in databases or repeated API calls.
Normalization should come first, so values like ASIN123 and asin123 don't slip past deduplication.
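To make the normalize-then-dedupe idea concrete, here is a Code-node-style sketch. The `sku` field name is an assumption, and the built-in Remove Duplicates node does the equivalent without any code:

```js
// Sketch: normalize keys first so "ASIN123" and "asin123" collapse to one entry.
// The "sku" field name is illustrative.
const seen = new Set();
const unique = [];
for (const item of items) {
  const key = String(item.json.sku ?? '').trim().toUpperCase();
  if (!seen.has(key)) {
    seen.add(key);
    item.json.sku = key; // keep the normalized value going forward
    unique.push(item);
  }
}
return unique;
```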
Filter
The Filter node acts as a rule-based gate, letting only specific items continue in the workflow. It's best to filter early, keeping only relevant data and preventing unnecessary operations downstream.
Think of it as a pre-check that protects expensive steps (AI, APIs, databases) from garbage data.
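Expressed as code (the `rating` field and the threshold are invented for illustration), a Filter condition is just a predicate applied to each item; the Filter node lets you build the same rule without writing it yourself:

```js
// Sketch of a Filter rule as a predicate: keep items with a numeric rating of 4+.
// "rating" and the threshold are illustrative assumptions.
return items.filter(
  (item) => typeof item.json.rating === 'number' && item.json.rating >= 4
);
```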