Learn: Preparing RSS Data for LLMs | Converting ODT into a Visual Workflow
AI Automation Workflows with n8n

Preparing RSS Data for LLMs

RSS data often arrives inconsistent or overloaded, so you trim each article down to the essentials before it reaches the LLM. The goal is simple: each article should reach the LLM in a clean, compact form that the model can turn into a single tweet.

  • Aggregate a small feed and test whether the LLM can handle it;
  • If mapping looks clunky, normalize with a Code node;
  • Loop over items with a batch size of 1 so each article is processed into a single tweet.

Start by aggregating the feed in small batches. Use Aggregate to combine all items into a single list, creating one item that contains an array of about 25 articles in JSON format. This gives you a quick, low-complexity setup. Test this aggregated result with your LLM by mapping the array into the Context field. If the output looks unclear or inconsistent, move on to normalization.
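To make the aggregation step concrete, here is a minimal sketch of the shape the Aggregate node produces: one item whose `json` wraps every incoming article in a single array. The field names (`articles`, `title`, `link`) are assumptions for illustration, not n8n's exact output:

```javascript
// Sketch of the Aggregate step: many input items become one item
// whose json holds an array of all the article objects.
// Field names here are illustrative assumptions.
function aggregate(items) {
  return [{ json: { articles: items.map((item) => item.json) } }];
}

const feed = [
  { json: { title: "Article A", link: "https://example.com/a" } },
  { json: { title: "Article B", link: "https://example.com/b" } },
];

const combined = aggregate(feed);
console.log(combined.length);                  // 1 item out
console.log(combined[0].json.articles.length); // containing 2 articles
```

Mapping `combined[0].json.articles` into the Context field is what hands the LLM the whole batch at once, which is why the output can get noisy as the feed grows.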

To normalize, copy a sample of the RSS JSON and ask your LLM to produce a Code node that removes HTML, extracts the first image URL, standardizes fields such as title, text, url, guid, and publishedAt, removes near-duplicate titles, and returns one clean item per article as an array. Place this Code node immediately after the RSS or RSS Read node.
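A normalization Code node along those lines might look like the sketch below. It assumes typical RSS fields (`title`, `content`, `link`, `guid`, `pubDate`); adjust the field names to match your feed's actual JSON:

```javascript
// Sketch of a normalization Code node: strips HTML, extracts the first
// image URL, standardizes field names, and drops duplicate titles.
// Input field names (content, link, guid, pubDate) are assumptions.
function normalize(items) {
  const seen = new Set();
  const out = [];
  for (const item of items) {
    const raw = item.json;
    const html = raw.content || "";
    // Remove HTML tags and collapse whitespace into plain text.
    const text = html.replace(/<[^>]*>/g, " ").replace(/\s+/g, " ").trim();
    // Extract the first image URL, if the content embeds one.
    const imgMatch = html.match(/<img[^>]+src="([^"]+)"/i);
    const title = (raw.title || "").trim();
    // Skip near-duplicate titles (case-insensitive match here).
    const key = title.toLowerCase();
    if (seen.has(key)) continue;
    seen.add(key);
    out.push({
      json: {
        title,
        text,
        url: raw.link || "",
        guid: raw.guid || "",
        publishedAt: raw.pubDate || "",
        image: imgMatch ? imgMatch[1] : null,
      },
    });
  }
  return out;
}

const sample = [
  {
    json: {
      title: "AI News",
      content: '<p>Big <b>launch</b> today.</p><img src="https://example.com/pic.jpg">',
      link: "https://example.com/ai",
      guid: "1",
      pubDate: "2024-05-01",
    },
  },
  { json: { title: "AI News", content: "<p>Duplicate.</p>", link: "https://example.com/dup" } },
];

const clean = normalize(sample);
console.log(clean.length); // 1 — the duplicate title was dropped
```

Inside an actual n8n Code node, the last line would instead be `return normalize($input.all());` so the cleaned array flows on to the next node.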

Next, replace the aggregate path with a loop. Use Loop or Split in Batches with a batch size of one to emit one article at a time, which is ideal for generating a single tweet per pass. Finally, add your chat model inside the loop, map the normalized article text (and any hooks) into Context, and provide a short, clear system instruction for tweet tone and style.
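The loop stage can be sketched in plain JavaScript as follows. With a batch size of 1, each pass hands exactly one normalized article to the chat model; `buildPrompt` is a hypothetical helper showing how the Context and instruction might be filled:

```javascript
// Sketch of Split in Batches with size 1: each iteration yields
// exactly one article, so each pass produces exactly one tweet.
function* batches(items, size = 1) {
  for (let i = 0; i < items.length; i += size) {
    yield items.slice(i, i + size);
  }
}

// Hypothetical helper: builds the per-article prompt for the chat model.
function buildPrompt(article) {
  return (
    "Write one concise, engaging tweet about this article.\n" +
    `Title: ${article.title}\n` +
    `Text: ${article.text}\n` +
    `Link: ${article.url}`
  );
}

const articles = [
  { title: "A", text: "First story.", url: "https://example.com/a" },
  { title: "B", text: "Second story.", url: "https://example.com/b" },
];

const prompts = [];
for (const batch of batches(articles, 1)) {
  prompts.push(buildPrompt(batch[0])); // one article per pass
}
console.log(prompts.length); // 2 — one prompt (and one tweet) per article
```

In the real workflow this per-article prompt is what you map into Context, while the tone and style guidance lives in the system instruction.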


What is the correct sequence of steps to turn RSS articles into ready-to-post tweets in n8n using the approach described in this chapter?



Section 4. Chapter 2

