Development
Jul 14, 2025 · 15 min read

How I Used Claude Code & AI Tools to Boost My Website’s Performance Score

It’s no surprise nowadays that the performance of a website is a crucial component of its success. In this article, I’ll show how I used Claude Code and real-time diagnostics to identify and fix performance issues on my website using AI.


As my website began taking shape, I turned my attention to SEO. My goal was to make it a little more discoverable. I usually start by checking performance, so I ran my site through Google's PageSpeed Insights. The tool is quite straightforward: you simply enter the URL and run the analysis.

After some time, the first results were in and… 74 for mobile. Room for improvement. Google provides a list of insights to improve the score. In the past, fixing these issues would take a long time and risk breaking the site.

Tools used

AI
  • Claude Code
  • Puppeteer MCP
API
  • Google PageSpeed Insights API

Setting up Claude Code to help with the performance improvement

My first thought after seeing the result was: how can I use AI to improve this score without spending days reviewing each issue? I didn’t want to manually fix things or risk breaking other parts of the site.

I knew Claude Code would be perfect for this task. Since Claude Code has full project context and excels at coding tasks, I decided to create a plan to fix the issues.

The first blocker I had was that Claude didn’t have access to my PageSpeed Insights report. To fix that, I first set up the Puppeteer MCP so that Claude Code could access it.

Puppeteer MCP

The Puppeteer MCP server provides browser capabilities to our LLM (Claude in my context), enabling it to interact with web pages, take screenshots, and execute JavaScript in a real browser environment.

To set up the MCP, I had to create the file .mcp.json at the root of my project folder. Then, in this file I simply added this code:

```json
{
  "mcpServers": {
    "puppeteer": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
    }
  }
}
```

With this, Claude Code should be able to access the PageSpeed Insights report. However, if we don't want to spend time figuring out the exact XPath to each item's details, it's best to leverage the PageSpeed Insights API alongside the report.

To do so, I had to create a project to get an API key. Doing so is pretty straightforward but requires a Google account. I simply went to the PageSpeed Insights API page and clicked "Get a Key".

Then, I was prompted to create a project. I simply had to give it a name and was able to move forward. Once the project was created, I copied the key to my project documentation. With that, I was ready to start tackling the performance improvements.
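For reference, the API call itself is simple: the v5 `runPagespeed` endpoint takes the target URL, your key, and a strategy (mobile or desktop). Here is a minimal sketch of building that request in Node.js — the endpoint and parameter names come from Google's v5 documentation, while the placeholder values are mine:

```javascript
// Build a request URL for the PageSpeed Insights v5 API.
// Parameter names (url, key, strategy) follow Google's v5 documentation;
// the target URL and API key passed in are placeholders.
function buildPagespeedUrl(targetUrl, apiKey, strategy = "mobile") {
  const endpoint = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed";
  const params = new URLSearchParams({ url: targetUrl, key: apiKey, strategy });
  return `${endpoint}?${params.toString()}`;
}

// Usage sketch (performs a real network request):
// fetch(buildPagespeedUrl("https://example.com", "YOUR_API_KEY"))
//   .then((res) => res.json())
//   .then((report) => console.log(report.lighthouseResult.categories.performance.score));
```

This is the same data Claude Code will query later, so it's worth confirming the key works with a quick manual request first.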

Before moving forward, I strongly recommend that you commit your project in its current state in case something goes wrong.

Using Claude Code to plan the performance improvement implementation

To get started, I always like to run a prompt similar to this to make sure Claude has enough context about my project to implement what we’re trying to do:

```text
Read my @claude.md file to get familiar with my project, then review my codebase to fine-tune your understanding of how my project is structured and how it works. Give me a description of the project and the structure and ask me to confirm if your understanding is accurate.
```

This ensures Claude understands the project’s structure and goals, making its responses more efficient and accurate.

Now, the next prompt is where we actually start using Claude to build an implementation plan. To run it, you'll need the Puppeteer MCP and your PageSpeed Insights API key set up as covered in the last section, plus the URL of the PageSpeed report you ran. Replace [YOUR TEST URL] with the link to your PageSpeed report and [YOUR API KEY] with your API key.

```text
CORE KNOWLEDGE:
- PAGESPEED INSIGHT LIVE TEST URL: [YOUR TEST URL]
- PAGESPEED INSIGHT API DOCUMENTATION: https://developers.google.com/speed/docs/insights/v5/get-started
- PAGESPEED INSIGHT API KEY: [YOUR API KEY]

CRITICAL CHECK:
- Confirm if the Puppeteer MCP is configured. If not, ask the user to configure it.

IMMEDIATE TASK:
1. Create an empty markdown file 'pagespeed_improvements.md'
2. Use Puppeteer to go to the PAGESPEED INSIGHT LIVE TEST URL
3. Extract the details of each critical item (identified with a red triangle icon) under the INSIGHTS and DIAGNOSTICS section. Add these critical items to 'pagespeed_improvements.md'
4. Use the PageSpeed Insight API with the PAGESPEED INSIGHT API KEY to get the details associated with each critical item documented in 'pagespeed_improvements.md' AND ONLY THESE ITEMS.
5. Update 'pagespeed_improvements.md' with the details of each critical item identified AND ONLY THESE ITEMS.
6. Update 'pagespeed_improvements.md' with a plan to fix each critical item identified AND ONLY THESE ITEMS.

CRITICAL INSTRUCTION:
- Ignore any item not tagged as critical (red triangle icon).
- ONLY address the issues identified as critical (red triangle icon) in the INSIGHTS and DIAGNOSTICS section of the report.
- Do not add anything that is not directly related to one of the critical items identified with Puppeteer in the 'pagespeed_improvements' document.
- For each 'fix' priority, clearly document which of the critical items identified with Puppeteer it addresses.

SUCCESS CRITERIA:
- The critical items have been properly identified
- The details of the critical items have been properly identified
- The plan to fix these items has been created in 'pagespeed_improvements.md'
- The user confirmed the critical items were as identified
```

In essence, this prompt makes Claude generate a file called pagespeed_improvements.md, using Puppeteer to identify the critical items and the PageSpeed API to retrieve their full details. Even if Google updates the HTML or CSS of the report page, this method should remain functional as long as they continue to tag critical issues with a red triangle icon.
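If you want to sanity-check what Claude extracts, the API response shape is documented: the JSON payload nests a full Lighthouse report under `lighthouseResult`, and each entry in `lighthouseResult.audits` carries a score between 0 and 1 (or null when not applicable). Here is a hedged sketch of pulling out the failing audits yourself — the 0.5 cutoff mirrors Lighthouse's usual "failing" band, but treat it as an assumption:

```javascript
// Extract failing audits from a PageSpeed Insights v5 response.
// The lighthouseResult.audits structure is part of the documented response;
// the score < 0.5 "failing" threshold is an assumption borrowed from
// Lighthouse's usual scoring bands.
function failingAudits(report) {
  const audits = (report.lighthouseResult && report.lighthouseResult.audits) || {};
  return Object.values(audits)
    .filter((a) => typeof a.score === "number" && a.score < 0.5)
    .map((a) => ({ id: a.id, title: a.title, displayValue: a.displayValue }));
}
```

Comparing this list against the red-triangle items Claude found via Puppeteer is a quick way to confirm nothing was missed or invented.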

I found that starting by generating a plan first improves Claude's output quality while helping to avoid context limitations.

In my experience, Claude's output quality drops significantly when auto-compaction kicks in, likely due to loss of key context. As such, by having a plan that guides what needs to be done, we can easily keep Claude on the right path across multiple instances.

Implementing the performance improvements

At this stage, I simply built a prompt like this:

```text
Please read these files in order to understand the project context:

CORE CONTEXT (Read First):
- pagespeed_improvements.md - Guide to implement the performance improvements

IMMEDIATE TASK:
- Implement the performance improvements detailed in pagespeed_improvements.md

ADDITIONAL SAFEGUARDS:
1. Provide a detailed to-do list to the user and ask for confirmation before starting any implementation work
2. After each to-do item completion, update the pagespeed_improvements.md file to explain what was done

SUCCESS CRITERIA:
- The performance improvements were implemented
- The page loading speed has improved dramatically
- The site works as intended, none of the changes have had undesired effects
- No errors are present when launching the site with `npm run dev`
- ESLint passed
- The user has tested the implementation using `npm run dev`

CLOSING NOTES:
- Once the user confirms the implementation was a success, update @claude.md with the relevant updates that were completed
```

This prompt tells Claude which documents to read before starting. It then gives implementation tasks and asks for user confirmation before proceeding. Once the user approves the list, it starts working and consistently updates the planning document. In my experience, having Claude update the plan as it works is critical. If you ever restart the session due to context limits, the file gives it everything it needs to pick up where it left off.

Finally, we have success criteria, much like those of the user stories product managers are used to, and a final task where the user confirms the implementation was a success, which triggers the update of the claude.md file.

Once Claude is done, we can test the server in development to make sure everything is still working as expected. If that is the case, we can push the updates to git and to the production server.

I recommend waiting at least 30 minutes before running another PageSpeed Insights test to give the updates time to propagate.

Closing thoughts

Using Claude Code to improve my site's performance was an easy, straightforward process. Doing this kind of work manually could have taken a very long time and risked breaking things, triggering a never-ending game of whack-a-mole to figure out what worked and what didn't.

This approach lets us leverage LLMs to boost performance without playing detective.


About the author

Maxim St-Hilaire is a Staff Product Manager at Udemy leading MarTech product strategy. Programmer-turned-PM. Bilingual, based in Toronto.
