Challenge your grasp of backend Python with practical scenarios inspired by large-scale systems and automation. Each question targets a core skill drawn from projects that build real-world engineering habits.
What is a reliable method to compress large, actively growing log files in a backend application without losing recent write operations?
Explanation: Renaming log files above a size threshold and then compressing the rotated copy lets new writes go to a fresh file, avoiding any loss of active data. Compressing all files daily may waste resources on very small or inactive files. Overwriting old files risks data loss, and deleting logs before compressing defeats the purpose of data retention.
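The rotate-then-compress pattern can be sketched as follows. This is a minimal illustration, not a production rotator; the function name and the `.1` suffix convention are assumptions (a real deployment would more likely use `logging.handlers.RotatingFileHandler` or logrotate):

```python
import gzip
import os
import shutil

def rotate_and_compress(path, max_bytes=1024 * 1024):
    """Rotate and gzip a log file once it exceeds max_bytes.

    Renaming happens first, so the application can immediately reopen a
    fresh file under the original name and no new writes are lost.
    Returns the path of the compressed archive, or None if nothing was done.
    """
    if not os.path.exists(path) or os.path.getsize(path) < max_bytes:
        return None
    rotated = path + ".1"
    os.rename(path, rotated)  # atomic on POSIX; writers reopen a new file
    compressed = rotated + ".gz"
    with open(rotated, "rb") as src, gzip.open(compressed, "wb") as dst:
        shutil.copyfileobj(src, dst)  # stream in chunks; low memory use
    os.remove(rotated)  # keep only the compressed archive
    return compressed
```

Note that the rename, not the compression, is the step that protects recent writes: compression then operates on a file nothing is writing to anymore.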
When processing user-supplied JSON data in a backend service, what approach helps both prevent system failures and deliver clear error messages?
Explanation: Strict data validation helps ensure that only appropriate data enters the system, and user-friendly messages guide users to correct issues. Trusting input is insecure, generic errors obscure the problem, and silent error handling leads to hidden bugs that are hard to trace.
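A minimal sketch of strict validation with user-friendly messages, using only the standard library. The field names and the `(data, error)` return convention are illustrative assumptions; larger services typically use a schema library such as pydantic or jsonschema:

```python
import json

# Hypothetical schema: each required field mapped to its expected type.
REQUIRED_FIELDS = {"username": str, "age": int}

def parse_user_payload(raw):
    """Return (data, None) on success or (None, message) with a clear error."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return None, f"Invalid JSON: {exc.msg} at position {exc.pos}"
    if not isinstance(data, dict):
        return None, "Expected a JSON object"
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in data:
            return None, f"Missing required field: '{field}'"
        if not isinstance(data[field], ftype):
            return None, f"Field '{field}' must be of type {ftype.__name__}"
    return data, None
```

Each rejection names the exact field and problem, so the caller can relay an actionable message instead of a generic "bad request".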
Which method effectively prevents overuse and abuse of a backend API by tracking user requests?
Explanation: Per-user rate limiting uses counters and timestamps to restrict the number of requests allowed within a given period, promoting fairness and system reliability. Allowing unlimited requests can lead to resource exhaustion, boosting memory doesn't solve abuse, and random rejection can frustrate legitimate users.
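The counters-and-timestamps idea can be sketched as a sliding-window limiter. This in-memory version is an assumption for illustration; a multi-process backend would keep the same state in something shared like Redis:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most max_requests per user within a sliding time window."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)  # user_id -> request timestamps

    def allow(self, user_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.history[user_id]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # window full: reject this request
        q.append(now)
        return True
```

Because each user has an independent deque, one abusive client exhausts only its own quota, which is the fairness property the explanation describes.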
How can you reliably run periodic tasks such as backups or cleanups within a Python backend system?
Explanation: Automated tools ensure timely, error-free execution of periodic tasks without developer intervention. Running on every request is wasteful, relying on manual reminders is unreliable, and waiting until server restarts delays essential operations.
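One way to express this in pure standard-library Python is a self-rescheduling task on the `sched` module. This is an in-process sketch under the assumption that the process stays alive; real deployments more often delegate to cron, systemd timers, or a library such as APScheduler or Celery beat:

```python
import sched
import time

def run_periodically(scheduler, interval, task):
    """Schedule `task` to run every `interval` seconds on `scheduler`."""
    def wrapper():
        task()
        # Re-register after each run so the task repeats indefinitely.
        scheduler.enter(interval, 1, wrapper)
    scheduler.enter(interval, 1, wrapper)

# Typical (blocking) usage -- do_backup is a hypothetical cleanup function:
#   s = sched.scheduler(time.monotonic, time.sleep)
#   run_periodically(s, 3600, do_backup)
#   s.run()
```

The scheduler, not request traffic or a developer's memory, decides when the task fires, which is exactly the property the explanation calls for.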
What is an effective way to manage unexpected or malformed input data in backend processing loops to keep systems reliable?
Explanation: Catching and logging errors allows the program to skip problematic data, improving reliability while aiding debugging. Aborting the loop on first error limits robustness, ignoring errors hides potential issues, and making assumptions about input correctness is unsafe.
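The catch-log-skip pattern can be sketched as below. The record schema (`name`/`age` dictionaries) is an illustrative assumption; the key point is catching only the expected, narrow exception types rather than a bare `except`:

```python
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger(__name__)

def process_records(records):
    """Process records, skipping and logging malformed entries.

    Returns (processed_list, skipped_count) so callers can also monitor
    how much input is being rejected.
    """
    processed, skipped = [], 0
    for i, record in enumerate(records):
        try:
            # Raises KeyError/TypeError/ValueError on malformed input.
            processed.append({"name": record["name"].strip(),
                              "age": int(record["age"])})
        except (KeyError, TypeError, ValueError) as exc:
            skipped += 1
            # Log index, offending record, and cause for later debugging.
            logger.warning("Skipping record %d (%r): %s", i, record, exc)
    return processed, skipped
```

One bad record costs a warning line instead of aborting the batch, and the log entry preserves enough context to reproduce and fix the issue later.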