Debugging slow CGIs

From genomewiki

See also Debugging cgi-scripts

This page is a collection of considerations and tips for troubleshooting slow CGI response, targeted toward engineers working on the Browser.

  • What does measureTiming say? (You can enable it by adding measureTiming=1 to the CGI URL.) If the slow code path produces no measureTiming output, add a timing call there once the problem is resolved so that future slowdowns show up.
  • Does the problem occur on mirrors? If so, the cause is unlikely to be hgnfs1.
  • Are the trash cleaners running? If not, disk slowness is expected: the trash/ directory quickly fills up with junk, and the file system struggles with that many files in a single directory.
  • Does the problem also occur on hgwdev? If so, debugging is usually easier.
    • Note that hgwdev behaves differently: e.g. trackDb caching is not active there, and bigDataUrls/tables are checked, which the RR does not do.
  • If you have a query string for the CGI (like hgTracks) that you know reproduces the problem, try running "gdb --args hgTracks <querystring>" a few times; press Ctrl-C while it is stuck and type "bt" to get a backtrace. That can help you pinpoint where the CGI is spending its time.
  • If the CGI is exceptionally long-running and you're having trouble reproducing the issue on the command line, you can attach gdb to the actively running CGI process (owned by apache). This requires sudo privileges for gdb, because normal users cannot attach to a process they do not own. First, find the process id of the problematic CGI with top, ps aux, or ps fax. Then run "sudo gdb -p <pid>" (or start "sudo gdb" and type "attach <pid>" at the gdb prompt). This interrupts the running CGI and drops you into its active call stack, where "bt" prints a backtrace showing where the CGI is stuck.
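The command-line reproduction can also be scripted non-interactively. A minimal sketch, assuming the CGI binary is at ./hgTracks and the query string is one known to trigger the slowdown (both are illustrative placeholders):

```shell
# Sketch: run a slow CGI under gdb, interrupt it mid-run, and dump the
# stack. The CGI path and query string below are illustrative only.
CGI=./hgTracks
QUERY='db=hg38&position=chr1:1-1000000'

if [ -x "$CGI" ]; then
    # Let the CGI run for 10 seconds, deliver SIGINT (as Ctrl-C would),
    # then print a backtrace of wherever it was stuck before exiting.
    timeout -s INT 10 gdb -batch -ex run -ex bt --args "$CGI" "$QUERY"
else
    echo "adjust CGI to point at the slow CGI binary"
fi
```

Repeating this a few times and comparing the backtraces usually narrows down the hot spot faster than a single sample.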
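The attach workflow can likewise be collapsed into one command. A minimal sketch, assuming the stuck CGI is named hgTracks and runs as the apache user (both assumptions; adjust to your setup):

```shell
# Sketch: attach gdb to a running CGI owned by apache and print its
# backtrace. Process name and owning user are assumptions; adjust them.
CGI_NAME=hgTracks

# -o picks the oldest (longest-running) matching process, which is
# usually the stuck one.
PID=$(pgrep -o -u apache "$CGI_NAME" 2>/dev/null)

if [ -n "$PID" ]; then
    # sudo is needed because the process belongs to apache. -batch runs
    # the gdb commands and exits; detach lets the CGI keep running.
    sudo gdb -p "$PID" -batch -ex bt -ex detach
else
    echo "no running $CGI_NAME process owned by apache"
fi
```

Because gdb detaches at the end, this samples the call stack without killing the request, so it can be run repeatedly against the same process.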

Examples: