I’ve used this “dynamic” approach before and have moved away from it in favor of pre-generated image sizes. It has several problems that are hard to solve:
- If you get hit with a bunch of identical requests at once (same image, same size), they will all be a cache miss and cause major CPU spikes unless you have some kind of queuing system.
- Getting hit with too many different requests at once (different images or sizes) can also overwhelm the system. Again, queuing would mitigate it.
- Even with queuing, too many requests arriving too close together will result in timeouts while images are being processed.
- You must carefully validate the request parameters to make sure only appropriate sizes can be requested.
- Someone could potentially mess up your cache by sending requests for every possible image size within bounds.
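The first two bullets are variants of the cache-stampede problem. If you do stay with on-demand resizing, one common mitigation is request coalescing: identical concurrent requests share a single in-flight job instead of each triggering its own resize. A minimal sketch, where the `resize` callback and its string return value are stand-ins for a real resizer:

```typescript
// Coalesce identical in-flight resize requests: every caller asking for the
// same (image, width) pair shares one promise, so the CPU-heavy resize runs
// once per distinct request. All names here are illustrative.
const inFlight = new Map<string, Promise<string>>();

function resizeOnce(
  image: string,
  width: number,
  resize: (image: string, width: number) => Promise<string>,
): Promise<string> {
  const key = `${image}@${width}`;
  const existing = inFlight.get(key);
  if (existing) return existing; // identical request already running: reuse it

  const job = resize(image, width).finally(() => inFlight.delete(key));
  inFlight.set(key, job);
  return job;
}
```

This removes duplicate work for identical requests, but does nothing for the “many different images or sizes at once” case, which still needs a bounded queue.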
The security issue could be solved with fixed, named sizes:
YOUR_DOMAIN/images/image.jpg?size=medium, but that will not solve the performance issues. In my opinion there is no reasonable way to solve the performance issues, because resizing images on demand is a fundamentally flawed approach. It means keeping connections open while waiting on a potentially long-running request, which is generally the opposite of how HTTP is supposed to work.
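The named-sizes idea amounts to a whitelist: map each name to fixed dimensions and reject everything else, so a client can never request an arbitrary width and pollute the cache. A sketch, with example size names and widths of my own choosing:

```typescript
// Whitelist of named sizes; the names and widths are illustrative.
const SIZES: Record<string, number> = {
  thumb: 150,
  medium: 800,
  large: 1600,
};

function widthForSize(size: string): number {
  const width = SIZES[size];
  if (width === undefined) {
    // Anything off the whitelist is rejected, never resized.
    throw new Error(`Unknown size "${size}"`);
  }
  return width;
}
```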
The solution I’ve favored in recent years (and the one I’m building in Adonis) is to pre-generate sizes when the image first comes into the system. Upload time is a better place to queue images for processing and size generation, and that approach scales because it forces you to handle the case where a size isn’t available yet (such as returning a placeholder).
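The shape of that approach can be sketched as follows. This is not the actual Adonis implementation; the queue, the ready-set, and the placeholder path are all hypothetical stand-ins for real storage and a real job runner:

```typescript
// Pre-generation sketch: uploading enqueues one resize job per known size;
// requests for a variant that hasn't been generated yet get a placeholder
// instead of triggering an on-demand resize.
type Job = { image: string; width: number };

const WIDTHS = [150, 800, 1600];    // fixed variants, decided at upload time
const queue: Job[] = [];            // stand-in for a real job queue
const ready = new Set<string>();    // variants that have been generated

function onUpload(image: string): void {
  for (const width of WIDTHS) {
    queue.push({ image, width });   // resizing happens in a worker, not per-request
  }
}

function processNextJob(): void {
  const job = queue.shift();
  if (!job) return;
  // ...the actual resize work would happen here...
  ready.add(`${job.image}@${job.width}`);
}

function urlFor(image: string, width: number): string {
  return ready.has(`${image}@${width}`)
    ? `/images/${width}/${image}`
    : `/images/placeholder.png`;    // not generated yet: serve a placeholder
}
```

The key property is that request handling never blocks on image processing: a request either gets a finished variant or a placeholder, and the expensive work stays in the background queue.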