Memory leaks are the silent killers of Android apps. Your app works perfectly in testing, passes code review, and ships to production—only to start behaving strangely after users spend time navigating through your features. Screens load more slowly, animations stutter, and the app may eventually crash with cryptic error messages.
At Inellipse, we encountered this issue while developing our video player application, built with Jetpack Compose and ExoPlayer. We had been focused on optimizing the player screen for faster channel switching, but during testing we noticed something alarming: the app became progressively slower the more we switched between channels. A quick investigation revealed that ExoPlayer instances weren't being properly released and that event listeners were accumulating in memory with each navigation.
We quickly realized that finding and fixing memory leaks requires a methodical approach, the right tools, and patience to follow the data. Below, we'll share how we used Android Studio Memory Profiler to systematically track down and eliminate these leaks.
Android Studio's Memory Profiler is incredibly powerful, but when you first open it, the interface can be overwhelming. There are graphs, timelines, buttons, and tabs—and if you don't know what you're looking for, it's easy to get lost in the data.
When you open a heap dump, you're looking at a snapshot of every single object in memory. Production apps can have hundreds of thousands of objects allocated at any given time, and not every retained object is a leak—some are legitimately needed. With Compose and ExoPlayer together, there are multiple lifecycle layers to consider—leaked composables, player instances, or listeners can each keep large amounts of memory alive.
Our initial attempts were frustratingly unproductive. We watched the memory graph go up and down, but without forcing garbage collection or taking strategic snapshots, we couldn't tell what was normal behavior versus actual leaks. Taking random heap dumps gave us massive files with no clear starting point. Searching for "ExoPlayer" overwhelmed us with instances we couldn't evaluate properly.
We developed a step-by-step process that transformed Memory Profiler into our most valuable debugging asset.
Step 1: Record a Controlled Session
Instead of just running the app, we started a Memory Profiler recording with a specific test plan: navigate to the player screen, force garbage collection (using the trash can icon), note the baseline, switch channels, force GC again, and check if memory returned to baseline.
This immediately revealed the problem: after switching channels multiple times, memory kept climbing by approximately 40-50MB per switch, even after garbage collection.
Notice the dotted line steadily climbing upward in a staircase pattern—each step represents a channel switch where memory wasn't released. In the heap dump table below, you can see over 13,000 allocations and excessive ExoPlayer-related objects: LoadControlsParameters (2,861 instances), AnalyticsListener$Events (5,485 instances), and other player components with counts far exceeding what a single active player should need.
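As a rough cross-check outside the profiler UI, you can also log heap usage from inside the app around each switch. A minimal sketch, assuming a hypothetical logHeapUsage helper (and remembering that System.gc() is only a hint to the runtime, not a guaranteed collection—the profiler's trash can icon remains the more reliable option):

```kotlin
import android.util.Log

// Log approximate heap usage after requesting a GC. Handy for spotting
// a rising baseline across channel switches from logcat alone.
fun logHeapUsage(tag: String) {
    System.gc() // only a hint; the runtime may ignore it
    val rt = Runtime.getRuntime()
    val usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024)
    Log.d("MemCheck", "$tag: ~${usedMb}MB in use")
}
```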

Step 2: Capture and Compare Heap Dumps
We captured dumps at strategic moments: before navigating to the player (baseline), after playing one channel, after switching to a second channel and forcing GC, and after a third switch. By comparing these dumps, we could see exactly what objects were accumulating—player instances and listeners that should have been cleaned up.
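The dumps themselves came from the profiler's capture button, but for repeatable experiments a dump can also be triggered programmatically at a marked moment. A small sketch—the dumpHeap helper and file naming are illustrative, not from our codebase:

```kotlin
import android.content.Context
import android.os.Debug
import java.io.File

// Write an .hprof snapshot to app-private storage at a labeled moment,
// e.g. dumpHeap(context, "after-second-switch"). The file can then be
// pulled from the device and opened in Android Studio for comparison.
fun dumpHeap(context: Context, label: String) {
    val file = File(context.filesDir, "heap-$label.hprof")
    Debug.dumpHprofData(file.absolutePath)
}
```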
Step 3: Master the Heap Dump Interface
We switched from "Arrange by class" to "Arrange by package" to focus on our app's objects. The instance count column was crucial—seeing 14 ExoPlayer instances when only one should be active was an immediate red flag.
Most importantly, sorting by Retained Size instead of Shallow Size revealed which objects were actually holding onto massive amounts of memory.
Step 4: Follow the Reference Chain
We examined the "References" section in the bottom panel, which shows the exact path of references keeping an object alive. Memory Profiler displays this as an expandable tree that you read from bottom to top, tracing backward from the leaked object to whatever was holding onto it. The reference chains revealed issues in how we were managing the player lifecycle and event listeners in our composables.
Even with the right workflow, interpreting Memory Profiler's data required learning to distinguish between normal behavior and actual problems.
Shallow Size is just the memory the object itself occupies—usually tiny. A single ExoPlayer instance showed only about 800 bytes of shallow size, which seemed harmless.
Retained Size is the total memory that would be freed if this object were garbage collected, including everything it references. Each leaked ExoPlayer's retained size was over 45MB—a massive leak when multiplied by several instances.
Small objects can hold references to huge amounts of memory—video buffers, cached frames, media codec instances, surface textures, and event listener collections. All of this stayed in memory because the ExoPlayer instances weren't being properly released when switching channels.
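The distinction is easy to reproduce in miniature; a contrived example, unrelated to ExoPlayer's actual internals:

```kotlin
// The FrameCache object itself has a tiny shallow size (a few dozen
// bytes of fields and header), but it pins a 16 MB buffer. Its retained
// size is ~16 MB: collecting the holder is what frees the buffer.
class FrameCache {
    val cachedFrames = ByteArray(16 * 1024 * 1024)
}
```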
Memory Profiler's Allocation Tracker helped us understand when memory was being allocated. We recorded allocations while switching between channels several times, then reviewed the timeline. This showed that new player instances and listeners were being created with each switch—but critically, the old ones weren't being disposed of.
The real-time memory graph became our health check. Healthy patterns show memory rising during activity and dropping sharply after GC, creating a sawtooth pattern with a stable baseline. Our leak showed a steadily rising baseline that never fully dropped, creating a staircase effect.
Once we identified the root causes through Memory Profiler's heap dumps and reference chains, we implemented targeted fixes.
The primary issue was improper listener management—our player listeners were being added but never explicitly removed before calling player.release(). We refactored our cleanup logic to ensure all listeners are properly removed in the DisposableEffect's onDispose callback.
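In Compose terms, the pattern looks roughly like the sketch below. PlayerScreen and the listener body are simplified stand-ins for our real code, and the imports assume the Media3 ExoPlayer artifacts (the older com.google.android.exoplayer2 packages differ):

```kotlin
import androidx.compose.runtime.Composable
import androidx.compose.runtime.DisposableEffect
import androidx.media3.common.Player
import androidx.media3.exoplayer.ExoPlayer

@Composable
fun PlayerScreen(player: ExoPlayer) {
    DisposableEffect(player) {
        val listener = object : Player.Listener {
            override fun onPlaybackStateChanged(playbackState: Int) {
                // react to buffering / ready / ended states here
            }
        }
        player.addListener(listener)

        onDispose {
            // Remove the listener when this composable leaves the tree,
            // so it can't keep the screen's objects reachable.
            player.removeListener(listener)
        }
    }
    // ... UI hosting the player surface goes here ...
}
```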
We also restructured our approach to player lifecycle management. Instead of creating new ExoPlayer instances for each channel switch, we now reuse a single player instance throughout the session. When switching channels, we simply load new media into the existing player rather than creating and disposing of instances repeatedly. This change alone eliminated the accumulation of player instances we saw in the heap dumps.
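Switching channels then becomes a media swap on the long-lived player rather than a teardown and rebuild. A sketch, again with illustrative names (switchChannel, channelUrl):

```kotlin
import androidx.media3.common.MediaItem
import androidx.media3.exoplayer.ExoPlayer

// Load the new channel into the existing player instead of creating a
// fresh ExoPlayer per switch; release() is called once, at session end.
fun switchChannel(player: ExoPlayer, channelUrl: String) {
    player.setMediaItem(MediaItem.fromUri(channelUrl))
    player.prepare()
    player.playWhenReady = true
}
```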
After implementing these fixes, we ran Memory Profiler again to verify the improvements:

The difference is striking. The memory graph now shows a healthy sawtooth pattern with memory properly dropping after garbage collection. Allocations dropped dramatically from over 13,000 to 5,028, and object counts are now reasonable. Most importantly, the memory baseline remains stable even after multiple channel switches.
After systematically using Memory Profiler's features to identify and fix the leaks, the biggest lesson we took away is this: Android Studio Memory Profiler isn't just a monitoring tool; it's a complete investigation platform. Learning to use heap dumps, allocation tracking, and the reference chain viewer transforms memory debugging from guesswork into a systematic process. The tool gives you all the answers; you just need to know which buttons to press and what questions to ask.
Have you encountered similar ExoPlayer memory issues in your Android projects? Share your experiences in the comments below or connect with us at Inellipse!