Manually enriching LinkedIn profiles – a process often called lead enrichment – is a bottleneck for most sales and recruiting teams. You have a spreadsheet of promising URLs, but turning them into actionable leads means hours of copying job titles, company info, contact details, and summaries.
This tutorial introduces a fully automated alternative. We'll build a system using Lindy and Bright Data that monitors a Google Sheet and automatically enriches any new LinkedIn URL with the data you need.
By the end of this guide, you'll have set up a complete system that automatically enriches your LinkedIn profiles.
Your new workflow will:
The best part? This entire setup is done in Lindy's visual workflow builder. You'll connect it all with Bright Data's API without writing a single line of code.
The LinkedIn Profile Enrichment Automation we'll build in this article works across multiple business scenarios:
Before we dive into building, let's understand the two platforms we'll be using and why they're perfect for this task.
Lindy is a no-code platform that lets you create AI-powered automations, or "AI agents". These agents can act as personal assistants or handle complex tasks without any programming. Lindy gives you a visual way to build workflows that can understand context, make decisions, and adapt, using natural language processing and machine learning.
Key capabilities for our project:
Bright Data provides the engine for our agent. It handles all the complex parts of scraping public LinkedIn data – including proxy management, CAPTCHA bypassing, and anti-blocking technology – and delivers it as clean, structured data through an API.
Why Bright Data is perfect for this task:
Lindy orchestrates the entire workflow while Bright Data handles the technical complexity of data extraction. Think of Lindy as the conductor and Bright Data as the orchestra.
This division of labor is powerful because:
Before we start building, make sure you have all the required accounts set up:
Your Google Sheet serves as both the input and output for this automation. Let's structure it properly.
A quick note on how this works:
Here's what your sheet should look like before processing:

And after processing:

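If you're setting the sheet up from scratch, here is one hypothetical column layout that matches the fields this workflow fills in. The exact names are up to you, as long as the URL and status columns line up with what your trigger and validation gate expect:

```python
# Hypothetical column layout for the enrichment sheet.
# You fill in the first column; the automation fills in the rest.
INPUT_COLUMNS = ["LinkedIn Profile URL", "Enrichment Status"]

ENRICHMENT_COLUMNS = [
    "Full Name",
    "Company Name",
    "Company LinkedIn URL",
    "Position",
    "Location",
    "About",
]

ALL_COLUMNS = INPUT_COLUMNS + ENRICHMENT_COLUMNS
```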
Now that your sheet is ready, let's move to setting up Bright Data.
To connect Lindy to Bright Data, you only need 2 pieces of information: your API token and the dataset ID for the LinkedIn profile scraper.
Bright Data's Web Scraper API uses a unique dataset ID to identify which pre-built scraper to run. The ID for the "LinkedIn People Profile" scraper is static.
While you can find this ID in the "API request builder" tab for the scraper in the Bright Data dashboard, we're providing it directly to save you time: gd_l1viktl72bvl7bjuj0
Just copy this ID. That's it.
You now have your 2 essential items:
Now that Bright Data is configured, let's build the Lindy workflow.
This is where everything comes together. We've made this easy for you, so you have 2 options to get your agent running.
If you're in a hurry, just use our pre-built template. Click the link below to clone the complete agent; Lindy will then ask you to connect your Google account and paste in your Bright Data API key. Clone the LinkedIn Enrichment Agent template.
We highly recommend this! We'll walk you through, node-by-node, how to build this agent from scratch. This is the best way to learn how this agent's logic works, so you can customize it for your own projects later.
Before we build, let's look at the map. This is what our final agent will look like in the Lindy workflow builder.

Logically, the workflow follows this 9-step process:
New Row Added (Trigger)
↓
Check Valid URL (Logic Gate)
↓
Trigger LinkedIn Data Collection (HTTP Request)
↓
Wait for Data Collection (Timer - 1 minute)
↓
Check Collection Status (HTTP Request)
↓
Verify Data Ready (Logic Gate with Retry Loop)
├─ If Running → Wait (Timer) → Check Status Again
└─ If Ready → Continue
↓
Fetch LinkedIn Profile Data (HTTP Request)
↓
Extract Profile Information (AI Agent)
↓
Update Sheet with Enriched Data (Google Sheets Action)

Now, let's build it, node by node.
{{templates}}
What it does: Monitors your Google Sheet and fires the workflow when a new row is added.
Configuration:
Settings to note (this is important):
What it does: Validates the URL and checks if the row has already been processed.
Why it matters: This gate is critical. It prevents wasting Bright Data credits on invalid URLs or re-processing profiles you've already completed.
Configuration:
the LinkedIn Profile URL starts with "https://" AND the Enrichment Status column is empty

What it does: Initiates the scraping job with Bright Data's API.
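If you ever need to replicate this gate outside Lindy, the condition boils down to two checks. A minimal Python sketch (the column names are illustrative):

```python
def should_process(row: dict) -> bool:
    """Mirror of the 'Check Valid URL' gate: process a row only if the
    URL looks valid and the row has not been enriched yet."""
    url = (row.get("LinkedIn Profile URL") or "").strip()
    status = (row.get("Enrichment Status") or "").strip()
    return url.startswith("https://") and status == ""
```

A row like `{"LinkedIn Profile URL": "linkedin.com/in/foo"}` is skipped because the URL lacks the `https://` prefix, and any row with a non-empty status is skipped as already processed.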
Configuration:
This tells Lindy's AI to extract the LinkedIn URL from the trigger data and format it as:
[{"url": "https://www.linkedin.com/in/actual-profile-url/"}]

Response handling: This API call will return a JSON response containing a snapshot_id. Lindy automatically captures this output, which we'll use in the next steps.
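For readers who want to see the request outside of Lindy, here is a rough sketch of how the call could be assembled. The endpoint path is an assumption based on Bright Data's Web Scraper API conventions; check the "API request builder" tab in your dashboard for the exact URL:

```python
API_TOKEN = "YOUR_BRIGHT_DATA_API_TOKEN"    # from your Bright Data account
DATASET_ID = "gd_l1viktl72bvl7bjuj0"        # LinkedIn People Profile scraper

def build_trigger_request(profile_url: str):
    """Assemble endpoint, headers, and payload for the trigger call.
    The payload is a one-element list, exactly as shown in the tutorial."""
    endpoint = f"https://api.brightdata.com/datasets/v3/trigger?dataset_id={DATASET_ID}"
    headers = {
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    }
    payload = [{"url": profile_url}]
    return endpoint, headers, payload

# To actually send it you could use e.g. the requests library:
#   response = requests.post(endpoint, headers=headers, json=payload)
#   snapshot_id = response.json()["snapshot_id"]
```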
What it does: Pauses the workflow for 1 minute to give Bright Data time to scrape the profile.
Why it's necessary: LinkedIn scraping isn't instantaneous. This 1-minute buffer is crucial because the scraper must:
While the actual scraping might only take a few seconds, the total time can vary. A 1-minute buffer is a safe and reliable choice to ensure the data is ready.
Configuration:
What it does: Polls Bright Data's API to see if the scraping job is complete.
Configuration:
Response structure (this is important for the next step):
This node will return a JSON response with a status. We only care about 2 possible responses:
What it does: This is the most critical decision point in the workflow. It reads the status from Node 5 and decides whether to fetch the data (if "ready") or to wait and check again (if "running").
This node is what creates our powerful polling loop.
Check Collection Status (Node 5)
↓
Verify Data Ready (Node 6)
├─ Path A (If "status" is "ready") → Go to Node 7
└─ Path B (If "status" is "running") → Go to a new Timer
↓
(Loop back to Node 5)

Configuration:
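The polling loop above is a classic pattern, and it helps to see its shape in plain code. A minimal sketch, where `fetch_status` stands in for the HTTP call to Bright Data's progress endpoint and returns either "running" or "ready" (the two statuses this workflow cares about):

```python
import time

def wait_until_ready(fetch_status, max_attempts=10, delay_seconds=60):
    """Poll until the status callback reports 'ready'.

    Mirrors Nodes 4-6: check status, and if the job is still
    running, wait and check again, up to max_attempts times."""
    for _ in range(max_attempts):
        if fetch_status() == "ready":
            return True
        time.sleep(delay_seconds)   # mirrors the 1-minute Timer node
    return False                    # gave up: treat as a failed run
```

Capping the attempts is the one thing the sketch adds over an unbounded loop: it guarantees a stuck scraping job can't keep the workflow running forever.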
What it does: Retrieves the complete JSON profile data now that the job is "ready".
Configuration:
Response data (this is important for the next step):
The output of this node will be a large, comprehensive JSON object. It will look something like this:
[
  {
    "name": "Jay Sheth",
    "location": "San Francisco, California",
    "position": "Senior Product Manager",
    "current_company": {
      "name": "Google Deepmind",
      "link": "https://www.linkedin.com/company/googledeepmind?trk=..."
    },
    "about": "Experienced product leader...",
    "experience": [...],
    "education": [...]
  }
]

Note the current_company.link field – this contains the company's LinkedIn URL, but often with tracking parameters that we'll need to clean.
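Cleaning those tracking parameters is a one-liner in most languages. A Python sketch of the cleanup the next node will ask the AI to perform:

```python
from urllib.parse import urlsplit

def strip_tracking(url: str) -> str:
    """Drop the query string (everything after '?') from a LinkedIn URL,
    keeping only scheme, host, and path."""
    parts = urlsplit(url)
    return f"{parts.scheme}://{parts.netloc}{parts.path}"
```

For example, `strip_tracking("https://www.linkedin.com/company/googledeepmind?trk=...")` returns the bare company URL with no `?trk=` suffix.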
What it does: Uses Lindy's AI capabilities to intelligently parse the massive JSON response from Node 7 and extract only the 6 clean fields we want.
Why AI is needed:
Configuration:
The previous step returned a JSON object with data from a single LinkedIn profile.
From this data, carefully extract the following fields:
- Full name
- Current company name
- Company LinkedIn URL (from current_company.link)
IMPORTANT: Remove any tracking parameters after "?"
Clean URL example: https://fr.linkedin.com/company/lindyai
- Position (job title)
- City/location
- Full text of the about section
If any field is not found, return 'N/A' for that field.
Provide the extracted data in a structured format.

Why these guidelines work:
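The AI handles messy, inconsistent profiles gracefully, but the happy path is easy to sketch deterministically. A rough Python equivalent of what the prompt asks for (field names follow the sample JSON shown earlier; real profiles vary in structure, which is exactly why the AI step exists):

```python
def extract_fields(profile: dict) -> dict:
    """Pull the six target fields, defaulting to 'N/A' when missing."""
    company = profile.get("current_company") or {}
    link = company.get("link") or ""
    return {
        "full_name": profile.get("name") or "N/A",
        "company_name": company.get("name") or "N/A",
        # Remove tracking parameters after "?", as the prompt instructs
        "company_linkedin_url": link.split("?")[0] if link else "N/A",
        "position": profile.get("position") or "N/A",
        "location": profile.get("location") or "N/A",
        "about": profile.get("about") or "N/A",
    }
```

Note how every field falls back to "N/A" rather than raising an error; that mirrors the prompt's instruction and keeps the sheet update from failing on sparse profiles.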
What it does: Writes the clean, extracted data from our AI Agent (Node 8) back to the original Google Sheet row.
Configuration:
Why are we using AI prompts for mapping?
This is the power of Lindy. Instead of a brittle, manual mapping, we're using Lindy's AI to understand the data from the previous step and find the correct value.
Once all nodes are created and configured, double-check your workflow connections. The most important is the polling loop (the connection from the "Verify Data Ready" node's "running" path back to the "Check Collection Status" node).
Before running this on a large dataset, let's test with a few profiles to ensure everything works correctly.
1. Add a test profile
2. Monitor workflow execution
3. Verify the results
After 2-4 minutes, check your Google Sheet:
If your workflow isn't behaving as expected, don't worry. Here is a checklist of the most common issues and how to solve them in seconds.
Issue: The workflow doesn't trigger at all.
Issue: The workflow stops at "Check Valid URL" (Node 2).
Issue: The workflow fails at an "HTTP Request" node (Node 3, 5, or 7).
Issue: The workflow is "stuck in a loop" (or "running" for a long time).
Issue: The spreadsheet updates, but all the new fields just say "N/A".
Issue: The spreadsheet never updates (the last step fails).
Once a single profile works, you're ready to scale.
A quick note on credits: Before you add hundreds of URLs, remember that each new row will trigger one workflow and use credits on both Lindy and Bright Data.
If you're ready to take it to the next level, here are 6 optional upgrades you can add in minutes.
{{cta}}
You now have a complete, working solution that combines Lindy's intelligent AI orchestration with Bright Data's reliable, structured web data.
This specific tutorial is a powerful example, but it's just one of many workflows you can automate with Lindy and Bright Data. This pattern – triggering, polling, and parsing – is the key. You can now re-use it to build even more powerful agents.

Lindy saves you two hours a day by proactively managing your inbox, meetings, and calendar, so you can focus on what actually matters.
