Turning Raw Data into AI Output
An n8n template can be extended to not just pull or clean data, but to actually produce meaningful AI-generated output. This part takes the imported workflow from the previous lesson and makes it generate text automatically by aggregating split items, pinning stable data, and feeding one structured payload into the AI Agent.
Up to now, the workflow could only pull and split data. Now you'll go further and learn how to:
- Recombine the split items into one AI-friendly payload;
- Pin data so Rainforest API isn't called again during testing;
- Feed that structured data into the AI Agent properly;
- Change the AI's tone or style with a single word.
This is the moment when the workflow stops being a demo diagram and starts producing real, client-ready results.
What Split Out Actually Produced
After the last lesson, the workflow already fetched products from a seller using the Rainforest API, then split them into multiple items (for example, 16 separate product entries).
A common mistake is connecting the Split Out node directly to the AI Agent, expecting it to summarize everything. That fails because the AI only receives one item at a time. It doesn't see the full picture, and can't write a meaningful overview.
Split Out is great for per-item logic, but not for writing one summary of everything.
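To make the problem concrete, here is a rough sketch of what Split Out hands downstream (the field names are illustrative, not the exact Rainforest API schema): many items, each carrying a single product, which is why an AI Agent wired straight to it never sees the whole catalogue.

```js
// Shape of Split Out's output (illustrative field names): one workflow item
// per product, so a node connected directly to it works on a single product
// at a time rather than the full list.
const splitItems = [
  { json: { title: "Wireless Mouse", asin: "B0EXAMPLE1", price: 19.99 } },
  { json: { title: "USB-C Hub", asin: "B0EXAMPLE2", price: 34.5 } },
  // ...14 more items, one product each
];
```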
Add an Aggregate Node
To let the AI see all the data at once, add an Aggregate node after Split Out. Set it to combine all items into a single list or array. This node takes the multiple entries and merges them into one structured item that holds every product's details.
Now, instead of sending 16 separate messages to the AI, you're sending one rich context block.
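The Aggregate node itself needs no code, but if you prefer to see the merge spelled out, a Code node can produce an equivalent result. This is only a minimal sketch using n8n's standard Code-node helpers; the output field name data is an assumption chosen to match the expression used later in this lesson.

```js
// Code-node equivalent of Aggregate in "all item data" mode:
// collect every incoming item's JSON into one array under a single field,
// so exactly one item leaves this node.
const products = $input.all().map(item => item.json);

return [
  {
    json: {
      data: products, // the full product list in one place
    },
  },
];
```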
Pin the Data
Before running more tests, pin the node output.
This stops n8n from calling the Rainforest API on every run, saving API credits and speeding up prompt tuning. Downstream nodes will reuse the pinned response until it's unpinned.
For any workflow that calls a paid API, pin early and unpin only when doing a full end-to-end run.
Confirm the Aggregated Output
After running the Aggregate node, n8n should show one item instead of many. Inside that single item, you'll see an array containing titles, ASINs, links, images, and other product fields.
This is the context blob, exactly what should be passed to the AI Agent.
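Assuming the aggregated values land in a field called data (adjust if you chose a different destination field), the single item looks roughly like this, with illustrative field names and values:

```js
// One aggregated item: the "data" array holds every product's fields.
const aggregatedItem = {
  json: {
    data: [
      { title: "Wireless Mouse", asin: "B0EXAMPLE1", price: 19.99, rating: 4.6 },
      { title: "USB-C Hub", asin: "B0EXAMPLE2", price: 34.5, rating: 4.3 },
      // ...the rest of the seller's products
    ],
  },
};
```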
Feed Data Into the AI Agent
Inside the AI Agent node, open the User message or prompt field and drag in the aggregated data field (for example: {{$json["data"]}}).
On the left, you'll see the expression; on the right, n8n shows a live preview of what the AI will actually receive. If this preview doesn't show the real product data, the AI won't produce a good summary.
Always check that the right-hand preview contains structured content.
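As an illustration, the User message could pair a short instruction with the dragged-in expression; the wording below is a placeholder, not the template's exact prompt. Swapping the single tone word, say professional for playful, changes the style of the write-up without touching anything else.

```
Write a professional overview of this seller's product range.
Mention product names, ASINs, prices, and ratings where available.

Products:
{{ $json["data"] }}
```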
Execute the AI Agent node. The AI should return a short write-up mentioning product names, ASINs, prices, ratings, and seller information.
This confirms that the workflow is now feeding live, structured data into the AI, not static examples.