Bing Launches ‘AI Performance’ in Webmaster Tools
Bing Webmaster Tools has officially launched an “AI Performance” section in public preview as of February 2026, allowing publishers to see exactly how their content is being used to generate answers in Microsoft Copilot and Bing’s AI summaries.
This is the first time we have had access to granular data regarding AI citations. It moves us away from the black box era of generative search and offers concrete metrics like total citations and grounding queries to help site owners track their visibility in the new search environment.
Finally, some data we can actually use
I have been waiting for this one. For the last couple of years we have essentially been flying blind when it comes to AI search. We knew it was happening. We saw the traffic shifts. But we had no idea what was going on under the hood.

It was frustrating.
We could see our rankings in traditional search results but the AI answers were a total mystery. Now Bing Webmaster Tools has finally thrown us a bone. The new AI Performance dashboard gives us a look at how our content feeds into the machine. It is currently in public preview but the data is already more robust than I expected for a first release.
It feels like a pivotal moment. We are moving from guessing which pages are authoritative enough for a robot to quote to actually seeing the proof. Microsoft says this is an early step toward tools for Generative Engine Optimization or GEO.
I am not sure I love the acronym yet but the concept is sound. We need to know if we are part of the conversation or if we are being ignored.
What the dashboard actually shows us
The interface is clean. It reminds me a bit of the standard performance reports we are used to but the metrics are completely different. You don’t see clicks here. You don’t see click-through rates.
That might freak some people out.
Instead we get metrics that are specific to how Large Language Models function. The dashboard breaks down citation counts and shows us the top-performing pages that the AI references. It also provides a timeline chart that tracks visibility trends over time.
This is crucial because AI answers are not static. They change based on the user’s context and the model’s updates.
I spent some time digging through the data for a few client sites this morning. The “Page-level citation activity” is probably where I will be spending most of my time. It shows a breakdown of citation counts by specific URL.
You can see up to three months of historical data. This lets us correlate content updates with spikes in AI visibility.
If you optimized a page last month and suddenly it is getting cited ten times a day that is a win. Even if it doesn’t result in a direct click immediately it proves the content is trusted. Trust is the currency here.
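If you can export that page-level data, the before/after check is easy to script. A minimal sketch in plain Python (the CSV columns, the URL, and the numbers are all hypothetical, not Bing’s actual export schema):

```python
import csv
from datetime import date
from io import StringIO

# Hypothetical export: one row per URL per day with a citation count.
# Column names are illustrative, not Bing's real export format.
raw = """date,url,citations
2026-01-10,/guide,2
2026-01-20,/guide,3
2026-02-05,/guide,9
2026-02-15,/guide,11
"""

update_date = date(2026, 2, 1)  # when the page was rewritten
before, after = [], []
for row in csv.DictReader(StringIO(raw)):
    if row["url"] != "/guide":
        continue
    day = date.fromisoformat(row["date"])
    (before if day < update_date else after).append(int(row["citations"]))

avg_before = sum(before) / len(before)
avg_after = sum(after) / len(after)
print(f"avg citations/day: {avg_before} before vs {avg_after} after")
```

Crude, but it turns “I suspect the rewrite helped” into a number you can put in front of a client.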
The metrics that matter most

There are a few core metrics you need to get your head around. The first is Total citations. This is simply the number of times your site was cited as a source in an AI answer. It is a vanity metric in some ways but it gives you a high-level view of your authority.
Then you have Average cited pages. This is the daily average of unique URLs from your site that are referenced. If this number is low it means the AI only trusts a small section of your site.
If it is high it means you have broad topical authority. I prefer seeing a high number here.
The most interesting one is Grounding queries. These are sample key phrases that the AI systems use to retrieve and cite your content. They represent the intent behind the retrieval.
Are they informational? Are they navigational? It helps you figure out why the AI picked you.
I think this is going to change how we do keyword research. We aren’t just looking for what people type into a search bar anymore. We are looking for the concepts that trigger an AI to look for evidence. It is a subtle difference but an important one.
Bing is beating Google to the punch
I have to give credit where it is due. Microsoft is moving faster than Google on this. Google Search Console is great but their reporting on AI Overviews is still pretty vague.
They fold AI data into the overall performance reporting. You can’t easily separate it out.
Bing Webmaster Tools is giving us a dedicated report. They are treating AI citations as a distinct metric worth tracking on its own. That is a philosophical difference.
Google seems to want to blur the lines between search and AI. Bing is acknowledging that they are different experiences for the user and the publisher.
This transparency gives Bing an edge with the SEO community. We like data. We like granularity. If I can see exactly which URL is driving AI citations on Bing I can learn from that and apply those lessons elsewhere.
Google might have the market share but Bing has the better tooling right now.
Optimizing for the answer engine
So how do we actually improve these numbers? The guidance from Bing is fairly consistent with what we have known about SEO for years but with a twist. They emphasize clarity and structure. The AI needs to be able to parse your content easily.

You need to use clear headings. You need to back up your claims with evidence. The Prometheus model, which powers a lot of this, prioritizes “grounded” content.
That means content that is factual and supported by data. If you are just writing fluff the AI is going to ignore you.
I have noticed that pages with good entity consistency perform better. If you are writing about a specific topic make sure you are using the correct terminology and linking to related entities. It helps the AI understand the context.
Also freshness matters. If your content is outdated the AI won’t cite it for current queries.
I was looking at a client’s site yesterday that had huge traffic but zero citations. The content was unstructured walls of text. We broke it up with H2s and bullet points and I suspect we will see those citation numbers go up. It is about making the information accessible to the machine.
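If it helps to picture that fix, here is roughly the shape of the restructuring (the headings and content are invented, not the client’s actual page):

```html
<!-- Before: one undifferentiated block the parser has to digest whole -->
<p>Two thousand words about the topic in a single unbroken paragraph...</p>

<!-- After: each subtopic under its own heading, claims as scannable units -->
<h2>How the feature works</h2>
<p>A short, factual summary a model can lift as an answer.</p>
<h2>Key numbers</h2>
<ul>
  <li>Each bullet makes one verifiable claim.</li>
  <li>Supporting data sits next to the claim it backs.</li>
</ul>
```

Nothing exotic: just giving the machine (and the skimming human) obvious seams to extract along.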
The technical side of being cited
It is not just about the words on the page. The technical foundation plays a huge role here. Bing has mentioned that page speed is a factor for AI retrieval. This makes sense.
The AI has to generate an answer in milliseconds. It can’t wait for your slow server to respond.
They specifically mention low Largest Contentful Paint or LCP. If your page loads instantly via caching and has minimal redirects, it is more likely to be picked up. I think a lot of people overlook this.
They think AI is just about text analysis. It is also about performance engineering.
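For what it is worth, the performance side mostly comes down to boring HTTP hygiene. A minimal sketch of a cache-friendly response (the header values are my illustrative defaults, not anything Bing has published as thresholds):

```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Cache-Control: public, max-age=3600, stale-while-revalidate=86400
```

Served from a CDN edge with no redirect chain, a response like this keeps retrieval fast for crawlers and users alike; stale-while-revalidate (RFC 5861) lets the cache answer instantly while refreshing in the background.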
Another thing is the use of multimodal content. Images and tables help. Copilot often generates experiences that include visuals. If your content has relevant images with good alt text it increases the chances of being cited.
It helps the AI create a richer answer.
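The baseline here is unglamorous: real markup with descriptive alt text, and data in actual table elements rather than screenshots. A made-up example:

```html
<img src="citation-trend.png"
     alt="Line chart of daily AI citations for example.com, January through March 2026">

<table>
  <caption>Monthly AI citations (illustrative numbers)</caption>
  <tr><th>Month</th><th>Total citations</th></tr>
  <tr><td>January</td><td>120</td></tr>
  <tr><td>February</td><td>310</td></tr>
</table>
```

A chart baked into a PNG with no alt text is invisible to the retrieval layer; the same numbers in a table are quotable.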
There is a strange quirk in the reporting though. Sometimes the data can be a bit delayed. It is not real-time. You are looking at trends over weeks not minutes.
You have to be patient. I also noticed a spelling mistake in one of the tooltips the other day, so even Microsoft isn’t perfect. They might want to fix that as an accommodation for us perfectionists.
Why this changes the client conversation
This is the tricky part. Clients are used to seeing clicks. They want to know how much traffic they got. When you show them a report that says they were cited 500 times but it resulted in zero clicks they might get confused. Or angry.
You have to frame it differently. This is about brand awareness and influence. If you are the source of the answer you are the authority. That has value even if it doesn’t result in a visit right that second.
It builds trust with the user.
I tell my clients that this is top-of-funnel activity. The user is getting their answer. If they want to go deeper they will click. But if you aren’t even in the answer you don’t exist to them.
It is better to be cited and seen than to be invisible.
We are shifting from a traffic-only mindset to a visibility mindset. It is a hard sell sometimes. But it is the reality of where search is going. The “ten blue links” are not the only game in town anymore.
Final Thoughts
This launch is a big deal. It validates what many of us have been saying for a while. AI search is here and it is measurable. The Bing AI Performance dashboard gives us the tools we need to start optimizing for this new reality.
It is not perfect. I would love to see more data on click-throughs from these citations. But it is a start. It allows us to have informed conversations about Bing Webmaster Tools data rather than just speculating.
I am going to be spending a lot of time in this dashboard over the next few months.
We have to adapt. The industry changes fast and if you aren’t looking at these numbers you are going to get left behind. It is exciting in a way. It keeps us on our toes.
Just remember to check your robots.txt if you don’t want to play. But honestly I think you should play.
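For completeness, opting out at the robots.txt level is a blunt instrument: blocking bingbot removes you from regular Bing search as well as the AI answers, so treat this as an all-or-nothing sketch rather than a recommendation. Microsoft has also described finer-grained robots meta tags (nocache, noarchive) for limiting how content surfaces in chat answers, which are worth reading up on before going nuclear.

```txt
# Blocks Bing's crawler entirely: classic search results and AI citations both
User-agent: bingbot
Disallow: /
```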
