Most database performance conversations focus on missing indexes — queries running slow because PostgreSQL is doing sequential scans when it should be using an index. That's a real problem. But there's an equally common problem that gets far less attention: indexes that exist but are never used.
Unused indexes are not free. Every index on a table must be updated on every INSERT, UPDATE, and DELETE. PostgreSQL has to maintain it, vacuum it, and keep it in memory. An index that no query ever touches is paying a write-time tax with zero read-time benefit.
In a long-lived Rails application, the accumulation is gradual and invisible. A migration adds an index that made sense at the time. The query it supported gets rewritten, or the feature it served gets removed, or a composite index added later makes it redundant. The index stays. Three years later you have a table with 14 indexes, 4 of which are never touched, and your write throughput has quietly degraded.
Why unused indexes hurt
Write overhead. Every INSERT into a table updates all its indexes. Every UPDATE to an indexed column updates the relevant indexes. For a busy table with many indexes, the overhead is significant — and it compounds under load.
Autovacuum contention. PostgreSQL's autovacuum process cleans up dead tuples left behind by updates and deletes. It has to process every index on a table, not just the live ones. More indexes means more work per vacuum cycle, which means either more frequent interruptions or a larger backlog of dead tuples.
Shared buffer pressure. Indexes compete for space in PostgreSQL's shared buffer cache. An unused index sitting in cache is displacing table data or active indexes that could improve read performance.
Bloat. Indexes accumulate bloat over time as pages fill with dead index entries. An unused index bloats just as much as an active one, wasting storage and slowing vacuum further.
Before you start: check when stats were reset
PostgreSQL's index usage statistics come from the pg_stat_user_indexes view, which accumulates scan counts since the last statistics reset. Before acting on any of this data, check how long the stats have been collecting:
SELECT
  stats_reset,
  now() - stats_reset AS age
FROM pg_stat_database
WHERE datname = current_database();
If stats_reset was yesterday, an index with idx_scan = 0 tells you nothing useful — you need weeks or months of data to be confident. Stats are reset when someone runs SELECT pg_stat_reset(), and they are also lost after a crash or immediate shutdown. If you're working with a database that's been through a migration or recent maintenance, treat the results with appropriate scepticism.
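If you're checking this from a Rails console, the same sanity check can be expressed as a small helper. This is a sketch, not an established API: the method name and the 30-day threshold are my own choices.

```ruby
# Sketch: decide whether the stats window is long enough to trust
# idx_scan counts. The 30-day default is an illustrative threshold.
def stats_window_sufficient?(stats_reset, min_days: 30, now: Time.now)
  return false if stats_reset.nil? # stats just reset, or never collected

  (now - stats_reset) >= min_days * 24 * 60 * 60
end
```

Feed it the stats_reset timestamp from the query above before acting on any idx_scan numbers.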
Finding unused indexes
SELECT
  schemaname,
  relname AS table_name,
  indexrelname AS index_name,
  idx_scan AS scans,
  pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;
This returns every index that has never been used since stats were last reset, sorted by size so the most wasteful ones appear first. On a busy application with months of stats, this list is your shortlist for removal. One caveat: the counters are per-server, so if you run read replicas, check pg_stat_user_indexes on each replica as well; an index that's idle on the primary may be serving reads there.
Finding rarely used indexes
Zero scans is the clear case. But indexes with very low scan counts relative to their maintenance cost are also worth reviewing:
SELECT
  schemaname,
  relname AS table_name,
  indexrelname AS index_name,
  idx_scan AS scans,
  idx_tup_read AS tuples_read,
  idx_tup_fetch AS tuples_fetched,
  pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan < 50
ORDER BY pg_relation_size(indexrelid) DESC;
An index that's been scanned 12 times in six months on a table with millions of writes is almost certainly not earning its keep.
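To turn stats like these into a review list from Ruby, a small filter over the query results is enough. A sketch under stated assumptions: each row is a hash that also carries a raw size_bytes value (you'd add pg_relation_size(indexrelid) AS size_bytes to the query, since pg_size_pretty output isn't sortable), and both thresholds are illustrative, not canonical.

```ruby
# Sketch: pick review candidates from index stat rows. Assumes each row
# hash has "index_name", "scans", and a raw "size_bytes" column; the
# thresholds are illustrative defaults, not canonical values.
def drop_candidates(rows, max_scans: 50, min_size_bytes: 10 * 1024 * 1024)
  rows
    .select { |r| r["scans"].to_i <= max_scans && r["size_bytes"].to_i >= min_size_bytes }
    .sort_by { |r| -r["size_bytes"].to_i } # biggest potential savings first
end
```

Sorting by raw size keeps the ordering of the SQL query, so the output reads the same way as the psql result.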
Checking for duplicate indexes
Rails migrations make duplicate indexes easy to accumulate — especially if indexes are added by different developers or if a composite index makes a single-column one redundant:
SELECT
  indrelid::regclass AS table_name,
  array_agg(indexrelid::regclass) AS indexes,
  array_agg(indkey::text) AS column_positions
FROM pg_index
GROUP BY indrelid, indkey
HAVING count(*) > 1
ORDER BY indrelid::regclass::text;
Note that this groups purely on column positions: two indexes over the same columns but with different predicates (partial indexes) or operator classes will appear as duplicates even though they serve different queries, so verify each pair before dropping anything.
A more readable version that aggregates the duplicates per table and shows their combined size:
SELECT
  t.relname AS table_name,
  ix.indkey AS index_columns,
  array_agg(i.relname) AS duplicate_indexes,
  pg_size_pretty(sum(pg_relation_size(i.oid))) AS combined_size
FROM pg_index ix
JOIN pg_class t ON t.oid = ix.indrelid
JOIN pg_class i ON i.oid = ix.indexrelid
WHERE t.relkind = 'r'
GROUP BY t.relname, ix.indkey
HAVING count(*) > 1
ORDER BY sum(pg_relation_size(i.oid)) DESC;
What not to drop
Before removing anything, check whether the index is serving a constraint:
SELECT
  indexname,
  indexdef
FROM pg_indexes
WHERE tablename = 'your_table'
  AND (indexdef ILIKE '%unique%' OR indexname IN (
    SELECT conname FROM pg_constraint
    WHERE conrelid = 'your_table'::regclass
      AND contype IN ('p', 'u')
  ));
Never drop:
- Primary key indexes
- Unique indexes enforcing a constraint (even if idx_scan = 0 — they're not used for reads, but they prevent duplicate writes)
- Partial indexes used for constraint exclusion
- Indexes on foreign key columns if they support referential integrity checks
For unique indexes, idx_scan = 0 means no SELECT has used it — but it's still doing work on every INSERT and UPDATE to enforce uniqueness. Whether that's worth keeping is a business logic question, not a performance one.
Checking vacuum status
Unused indexes make vacuum more expensive. Here's how to see which tables have the most dead tuple accumulation and when they were last cleaned:
SELECT
  schemaname,
  relname AS table_name,
  n_live_tup AS live_tuples,
  n_dead_tup AS dead_tuples,
  round(
    n_dead_tup::numeric / NULLIF(n_live_tup + n_dead_tup, 0) * 100,
    2
  ) AS dead_pct,
  last_vacuum,
  last_autovacuum,
  vacuum_count,
  autovacuum_count
FROM pg_stat_user_tables
WHERE n_dead_tup > 0
ORDER BY n_dead_tup DESC;
A dead_pct consistently above 10–20% on a busy table suggests autovacuum isn't keeping up. The most common causes are: too many indexes slowing each vacuum cycle, autovacuum thresholds set too conservatively, or a long-running transaction blocking vacuum from advancing.
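The dead_pct expression is just dead tuples as a share of all tuples. If you want the same number when post-processing stats in Ruby, mirror the division-by-zero guard that the NULLIF provides:

```ruby
# Mirrors the SQL dead_pct expression, including the NULLIF guard:
# returns nil when the table has no tuples at all.
def dead_pct(live_tuples, dead_tuples)
  total = live_tuples + dead_tuples
  return nil if total.zero?

  (dead_tuples.to_f / total * 100).round(2)
end
```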
Checking whether autovacuum is running
SELECT
  pid,
  datname,
  now() - query_start AS running_for,
  query
FROM pg_stat_activity
WHERE query LIKE 'autovacuum:%'
ORDER BY query_start;
If this returns nothing while tables are showing high dead tuple counts, autovacuum may be blocked. Check for long-running transactions that are holding back the oldest transaction ID:
SELECT
  pid,
  usename,
  application_name,
  now() - xact_start AS transaction_age,
  now() - query_start AS query_age,
  state,
  left(query, 100) AS query_snippet
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 10;
A transaction that's been open for hours will prevent autovacuum from reclaiming dead tuples on any table it touched — regardless of how many indexes you remove.
Checking table and index bloat
A true bloat estimate needs the pgstattuple extension, but a quick look at how much of each table's footprint is taken up by indexes is a useful first pass:
SELECT
  tablename,
  pg_size_pretty(pg_total_relation_size(quote_ident(tablename))) AS total_size,
  pg_size_pretty(pg_relation_size(quote_ident(tablename))) AS table_size,
  pg_size_pretty(pg_indexes_size(quote_ident(tablename))) AS index_size,
  round(
    pg_indexes_size(quote_ident(tablename))::numeric
      / NULLIF(pg_total_relation_size(quote_ident(tablename)), 0) * 100,
    2
  ) AS index_pct
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(quote_ident(tablename)) DESC
LIMIT 20;
Tables where index_pct is above 70–80% are worth scrutinising. It doesn't necessarily mean the indexes are bad, but it's a signal to cross-reference with the unused index query above.
Removing indexes safely
Never drop an index without using CONCURRENTLY. Without it, PostgreSQL acquires an AccessExclusiveLock that blocks all reads and writes for the duration:
DROP INDEX CONCURRENTLY IF EXISTS public.index_name;
CONCURRENTLY allows normal table operations to proceed during the drop. It takes longer, and it cannot run inside a transaction block, but on a production database it's the only safe option.
For extra caution on a busy system, you can mark the index invalid first and monitor for a few days before dropping it. Be aware this is a direct catalog update: it's unsupported, requires superuser privileges, and should be rehearsed on a staging copy first. The index is still maintained on writes, so if a query regresses you can simply flip indisvalid back to true:
-- Mark the index as invalid (the planner will stop using it)
UPDATE pg_index SET indisvalid = false
WHERE indexrelid = 'public.index_name'::regclass;
-- After confirming no impact, drop it properly
DROP INDEX CONCURRENTLY IF EXISTS public.index_name;
In a Rails application, document the removal in the migration comment:
class RemoveUnusedIndexOnOrdersUserId < ActiveRecord::Migration[8.0]
  # Required: concurrent index operations can't run inside a transaction
  disable_ddl_transaction!

  # idx_scan: 0 over 6 months (stats since 2025-08-01)
  # Superseded by index_orders_on_user_id_and_status
  def change
    # Passing the column keeps the migration reversible
    remove_index :orders, :user_id, name: :index_orders_on_user_id, algorithm: :concurrently
  end
end
A regular maintenance habit
Run the unused index query quarterly on your production database. Set a bookmark. The stats accumulate quietly, and so does the waste. A 30-minute review once a quarter is cheap compared to the write throughput you recover from removing indexes that have been dead weight for years.
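If the quarterly review becomes routine, it helps to turn the query output into something you can paste into a ticket. A hypothetical formatter (the method name, row keys, and report shape are my own; the rows would come from running the unused index query via ActiveRecord::Base.connection.select_all):

```ruby
# Sketch: format unused-index rows into a short plain-text report.
# Row keys ("index_name", "table_name", etc.) are assumed aliases from
# the unused index query, not a fixed contract.
def unused_index_report(rows, stats_since:)
  header = "Unused indexes (stats since #{stats_since}):"
  return "#{header} none found" if rows.empty?

  lines = rows.map do |r|
    "  #{r['index_name']} on #{r['table_name']} (#{r['index_size']}, #{r['scans']} scans)"
  end
  ([header] + lines).join("\n")
end
```

Stamping the report with the stats_since date preserves the context you need later: a zero scan count only means something relative to how long the counters have been accumulating.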
If you'd like help auditing your database performance or reviewing your PostgreSQL setup as part of a broader infrastructure review, get in touch.