Let’s consider a common problem: your 12-factor Rails app has an operation, like uploading a file to attach to an email, that pushes your Heroku memory usage past its allocation (the amount you pay for when you choose a dyno size). You can see this on the Metrics tab of your Heroku app. A healthy application will look nice & normal. Here, we can see the app uses about 256 MB out of the 512 MB memory footprint allowed by its dynos.
You may also observe a small memory leak that grows gradually. This is a natural effect of Ruby’s memory management, and it’s one reason Heroku automatically restarts each of your dynos once every 24 hours.
Beyond this, you generally don’t have to think about your memory usage, except to configure your concurrent workers.
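For a Rails app on Heroku, worker concurrency is typically set in Puma’s config file. A minimal sketch, assuming the conventional `WEB_CONCURRENCY` and `RAILS_MAX_THREADS` environment variables; keep in mind each additional worker process multiplies your memory footprint:

```ruby
# config/puma.rb
workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))        # processes per dyno
max_threads = Integer(ENV.fetch("RAILS_MAX_THREADS", 5))
threads max_threads, max_threads
preload_app!   # load the app before forking so workers share memory copy-on-write
```

If you’re hitting memory ceilings, dialing `WEB_CONCURRENCY` down is often the first lever to pull.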
In a problematic app, such as one with memory spikes, the thick purple line crossing the dotted red line shows when memory usage went over the dyno’s allowed threshold (just before 9 AM, in this example).
1/ Ruby is memory-managed, and it doesn’t release memory back to the operating system right away
If your app code claims a lot of memory, say, by loading a file into memory (or some other operation), the runtime will hold onto that memory for a while, because it doesn’t know whether you’re going to need that much memory again soon.
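You can watch this behavior from inside the process with `GC.stat` (available since Ruby 2.1); a quick sketch, not a profiler, but its counters show the heap slots Ruby keeps around even after the objects in them die:

```ruby
# Peek at the Ruby heap's internal counters.
stats = GC.stat
puts stats[:heap_live_slots]  # object slots currently occupied by live objects
puts stats[:heap_free_slots]  # emptied slots Ruby retains for reuse rather than
                              # returning them to the OS immediately
```

After a large operation finishes, `heap_free_slots` typically stays elevated, which is exactly why your dyno’s memory graph doesn’t drop back down right away.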
A quick & dirty solution is to set your reference to nil when you stop using a large object:
big_object = nil
Although you have no direct control over Ruby’s garbage collector (“GC”), this explicitly tells the GC that you’re no longer going to use that object reference, making the memory eligible to be freed on a future GC run.
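As a self-contained sketch of the pattern (the 50 MB string stands in for your file contents):

```ruby
big_object = "x" * 50_000_000   # ~50 MB string, standing in for loaded file data
puts big_object.bytesize        # => 50000000

big_object = nil                # the reference is dead; the string is now garbage
GC.start                        # force a collection so the effect is immediate
```

Note that `GC.start` is only here to make the reclamation observable; in production code, dropping the reference is usually enough, and Ruby will collect on its own schedule.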
On Heroku, you will see two levels of errors: R14 when your app exceeds 100% of its memory quota (it keeps running, but starts using much slower swap space) and R15 when it reaches 200% of its quota. Heroku will not switch off your dynos for R14, but it will kill them for R15. (That’s why the former is called “Memory quota exceeded” and the latter is called “Memory quota vastly exceeded.”)
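To spot these events in practice, you can filter your log stream for them (this assumes the Heroku CLI is installed and you’re in the app’s directory):

```shell
# Tail the app's logs and keep only R14/R15 memory errors
heroku logs --tail | grep -E "R1[45]"
```

Pairing this with an alerting add-on or a log drain means you hear about R14s before they escalate into dyno-killing R15s.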