AI Adoption
Learn about AI adoption patterns for your team
Track how your team is adopting AI coding assistants and visualize usage patterns across different tools, team members, and time periods.
AI adoption doesn't happen uniformly—it varies by individual preference, seniority, delivery pressure, and organizational enablement efforts. Our research shows that simply purchasing AI tools doesn't guarantee adoption. Successful rollouts require deliberate enablement, peer learning, and creating space for experimentation outside of high-pressure delivery cycles.
Daily Active Users (DAUs)

What it is:
Daily Active Users (DAU) measures the percentage of your team actively using AI coding assistants each day. When viewing weekly charts, we show the average of daily percentages across that week. As a bonus, we've included DAU insights not just for your Multitudes contributors but for anyone across the organization who's using these AI tools (the non-contributors). That means you'll be able to see how leaders and people outside of engineering are using AI tools.
Why it matters:
DAU helps you understand adoption momentum and identify patterns over time. A high DAU indicates that AI tools have become part of your team's daily workflow. Sharp increases often follow enablement sessions or peer learning initiatives, while dips during high-pressure periods (like before a product launch) may indicate that developers don't have time to learn a new tool, or don't see the value of AI in making them faster.
How we calculate it:
For each group (team, level, geography, tool or non-contributors), we calculate the percentage of people who used AI tools on a given day out of everyone in the group, including active and inactive AI users, and those without AI accounts or integration with Multitudes. In most views, we count anyone who used any AI tool. In the AI Tool view, we calculate DAU separately for each specific tool—so if someone uses both Cursor and Claude Code on the same day, they count towards the DAU for both tools.
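Here's a minimal sketch of this calculation in Python (illustrative only – the function names and data shapes are hypothetical, not our production code):

```python
def daily_active_pct(active_users: set[str], group_members: set[str]) -> float:
    """Percentage of the group that used any AI tool on a given day.

    The denominator is everyone in the group, including inactive AI users
    and people without AI accounts or a Multitudes integration.
    """
    if not group_members:
        return 0.0
    return 100 * len(active_users & group_members) / len(group_members)


def weekly_dau(daily_pcts: list[float]) -> float:
    """Weekly charts show the average of the daily percentages for that week."""
    return sum(daily_pcts) / len(daily_pcts)


# Example: a 10-person team where 3, 5, 4, 4, 6, 1, and 2 people used AI each day.
week = [30.0, 50.0, 40.0, 40.0, 60.0, 10.0, 20.0]
print(round(weekly_dau(week), 1))  # 35.7
```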
Non-contributors are Multitudes users who aren't Multitudes contributors – people whose data isn't normally reflected in Multitudes (see more about how we define Contributors). We've included them in this chart and the Intensity of Usage chart (below) because we recognize the importance of understanding AI impact beyond traditional engineering roles.
What good looks like
In our research, we found that teams with strong AI adoption typically see 50%+ DAU among their high AI users on working days. Because the in-app chart measures usage across all seven days of the week, we've set the benchmark at 35% DAU to account for weekends (50% across 5 working days, averaged over 7 days: 50% × 5/7 ≈ 35.7%). This adjusted target provides a more appropriate comparison for typical workweek usage patterns.
Note that DAU is just the starting point for understanding whether AI could be having an impact in your organization; you'll need to combine it with AI Impact Measurement to understand outcomes. Look for sustained usage over time rather than initial spikes that fade.
Intensity of Usage

What it is:
Intensity of Usage complements the DAU metric – while DAU shows how often people are using AI tooling, this chart shows how deeply people are using these tools. We show intensity of usage based on cost, input tokens, and lines accepted. Like the DAU chart, we combine activity across all your AI tools. Also like the DAU chart, we include this data not only for Multitudes contributors but for anyone across the organization who's using these AI tools (the non-contributors).
Why it matters:
Intensity of Usage helps you understand how deeply your teams are engaging with AI tooling, not just how often people are logging in. This matters because two teams can have similar DAU, but very different patterns of usage: One may be using AI lightly for occasional assistance, while another may be relying on it heavily as part of everyday work. Looking at intensity alongside DAU helps you identify your AI superusers.
How we calculate it:
We offer three metrics to measure intensity.
Cost — The cost of AI tool usage, shown in USD. Note that this is different from spend on subscription-based plans; instead, it reflects the cost that the LLM reported for the queries the user sent.
Input tokens — The number of tokens sent to AI models. We show input tokens rather than output tokens because input tokens are more closely tied to user actions. In addition, reasoning mode can dramatically increase the number of output tokens, even when the user entered the same prompt as someone not using reasoning mode.
Lines accepted — The number of AI-suggested lines of code that were accepted. We use lines accepted rather than lines suggested for two reasons: lines suggested varies by model, making it harder to compare across models and tools, and lines accepted shows what a human decided was close enough to use as a base (even if they then made significant changes).
Not all metrics are available for every AI tool integration. Lines accepted is available for all of our AI tool integrations, while Input tokens and Cost are each available for all but one.
For each metric, intensity of usage can be shown either as a total for your selected teams or as an average per person. The average-per-person view is calculated by taking the total usage for the selected metric (e.g., total lines accepted by everyone in the group) and dividing it by the number of contributors in the group (e.g., the people on the selected team).
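For illustration, here's a minimal sketch of the total and average-per-person views (the record structure is hypothetical):

```python
# Hypothetical usage records: (person, tool, metric, value).
records = [
    ("alice", "cursor", "lines_accepted", 120),
    ("alice", "claude_code", "lines_accepted", 300),
    ("bob", "cursor", "lines_accepted", 80),
]
team = {"alice", "bob", "carol"}  # carol has no AI activity but still counts


def intensity(records, team, metric):
    """Return (total, average per person) for one metric, combined across tools."""
    total = sum(v for person, _tool, m, v in records if person in team and m == metric)
    # The per-person average divides by everyone in the group, not just active users.
    return total, total / len(team)


total, per_person = intensity(records, team, "lines_accepted")
print(total, round(per_person, 1))  # 500 total, ~166.7 per person
```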
As with the DAU chart above, one of the tabs on this chart lets you view it for non-contributors – people whose data isn't normally reflected in Multitudes (see more about how we define Contributors). We've included it here because we recognize the importance of understanding AI impact beyond traditional engineering roles.
Super-users

What it is
Super-users are team members who demonstrate consistently high AI tool usage. These individuals have often developed practices and workflows that work well on your specific codebase.
Why it matters
Super-users are a great source of insights on how to get more from your AI tooling.
Super-users are usually high AI users because they enjoy learning the latest AI practices, and they’ve already learned what works best on your organization’s codebase. (For example, AI still struggles with complex codebases – but your AI super-users may have developed unique ways to get around this.)
Our research shows that not only do these users have great principles for using AI well, but their peers really want to learn from them, so super-users are a great place to start with your peer-to-peer learning initiatives.
How we identify super-users
Super-users are identified per tool, based on two factors measured over the previous 28 days of activity: usage intensity and consistency.
A super-user must meet both of the following criteria:
Super-users must be in the top 10% of usage on that specific tool. We rank users in each tool by the cost of their queries, because this metric is the most consistent across tools and providers. If users are tied on cost, we break ties using input tokens (which reflect how much context was sent to the AI), followed by lines accepted. Lines accepted is the least-preferred tiebreaker because some AI queries won't generate any accepted lines, so it's not as consistent as cost and tokens.
Consistent activity over the measurement period. Users must have at least 10 days of active use on a tool to qualify as super-users.
This dual-criteria approach ensures we identify users who are both heavily engaged and consistently using AI tools, rather than those with occasional spikes in usage.
Someone can also be a super-user for multiple tools, in which case we show them as a super-user for each of those tools.
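To make the dual criteria concrete, here's a simplified sketch for a single tool (the data shape is hypothetical, and how small groups and ties are handled in production may differ):

```python
import math


def super_users(usage: dict[str, dict], min_active_days: int = 10) -> set[str]:
    """Identify super-users for one tool over a 28-day window.

    usage maps user -> {"cost", "input_tokens", "lines_accepted", "active_days"}.
    """
    # Rank by cost, breaking ties with input tokens, then lines accepted.
    ranked = sorted(
        usage.items(),
        key=lambda kv: (kv[1]["cost"], kv[1]["input_tokens"], kv[1]["lines_accepted"]),
        reverse=True,
    )
    top_n = max(1, math.ceil(len(ranked) * 0.10))  # top 10% of users on this tool
    top_users = {user for user, _ in ranked[:top_n]}
    # Both criteria must hold: top-10% usage AND at least 10 active days.
    return {u for u in top_users if usage[u]["active_days"] >= min_active_days}


users = {
    # Tied on cost; alice wins the tiebreak on input tokens.
    "alice": {"cost": 42.0, "input_tokens": 900_000, "lines_accepted": 1200, "active_days": 15},
    "bob":   {"cost": 42.0, "input_tokens": 400_000, "lines_accepted": 1500, "active_days": 8},
}
print(super_users(users))  # {'alice'}
```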
How to use information on super-users
Identify and interview your super-users to understand their workflows
Document specific prompts and practices that work on your codebase
Create peer-to-peer learning opportunities where super-users demonstrate real examples
Scale proven practices in playbooks and standards (e.g., in shared rules or markdown files that you recommend people put into LLM memory)
Software engineers trust their peers more than external hype – and sometimes even more than they trust leaders. Showing how AI works on your actual codebase, shared by team members they respect, is one of the most effective things you can do to accelerate adoption.
Run an AI Impact Survey

With the pace of change in AI, it’s extra important to understand how your team is feeling about these tools, and how they're using them. That's why we offer an AI survey in Multitudes.
What you can find out by running this survey
How satisfied your people feel with AI
What use cases people are using AI for
The AI skills your team feels most and least effective with
How people perceive that AI is impacting their productivity and hours worked; we'll then help you cross-compare this with our telemetry data
How to run an AI survey
Make a copy of the survey: Clicking this link will prompt you to copy our template Google Form. Running the survey yourself ensures all data collected belongs to you. If you want the survey template in a different format besides Google Form, reach out to [email protected].
Make edits to the form: You'll need to edit the description – we've bolded the areas you must complete (how you'll use the data and the due date). You're welcome to add additional questions, but note that we'll only show the questions from the original template in the Multitudes app. We recommend giving people a couple of weeks to complete the survey.
Share the survey: Share this survey with everyone at your org who writes code. You’ll likely need to remind people a couple times to get enough responses.
Getting results into Multitudes: After you close the survey, share a spreadsheet of responses with [email protected].