How does GDB deal with large (> 1 GB) debugging files?

I have a problem debugging a C++ application over a remote GDB session. The code base is large, so when compiled with the -O2, -g, and -DNDEBUG flags it produces a large file with debugging information (1.1 GB).

Unfortunately, I cannot just rely on partial symbol tables during debugging: the debugger constantly skips over parts of the application, so I cannot set breakpoints there or see the source code while debugging.

As a solution to this problem, I execute the following command after connecting to the target:

symbol-file -readnow [path-to-file-with-debugging-info] 
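For context, a minimal sketch of such a session might look like this (the target address and the path to the .dbg file are placeholders; the same effect can also be achieved by starting GDB with the --readnow command-line option):

    $ gdb app
    (gdb) target remote 192.168.1.10:2345        # connect to the remote stub (address is hypothetical)
    (gdb) symbol-file -readnow /path/to/app.dbg  # load the debug file and expand full symbol tables immediately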

This expands the full symbol tables. But then GDB simply runs out of memory, consuming 13 GB or more of RAM (I only have 16 GB available on my machine). This issue is already known and documented in the GDB wiki.

My question is: how can I work with GDB in this situation, where I need the full symbol tables but GDB requires a huge amount of memory to expand them?

Thanks in advance!

2 answers

Since handling large debug files is a weakness of GDB, the best approach in my case was to reduce the size of the *.dbg file by generating debugging symbols not for all application modules, but only for those that will actually be debugged.

With the resulting ~150 MB *.dbg file and the DS-5 debugger, only about 2.5 GB of RAM was needed, which is acceptable.
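A minimal sketch of that workflow, assuming a hypothetical project where network.cpp is the module being debugged and other.cpp is not:

    # Only the module we actually debug gets debug info
    g++ -O2 -DNDEBUG -g -c network.cpp -o network.o
    g++ -O2 -DNDEBUG    -c other.cpp   -o other.o
    g++ network.o other.o -o app
    # Split the (now much smaller) debug info into a separate *.dbg file
    objcopy --only-keep-debug app app.dbg
    objcopy --strip-debug app
    objcopy --add-gnu-debuglink=app.dbg app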


You can try linking with the gold linker and the option --compress-debug-sections=zlib. This reduces the size of the debugging information. GDB has been able to read compressed debug sections since version 7.0.
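A sketch of such a build line (the compiler, file names, and the remaining flags are assumptions, not from the original answer):

    # Link with gold and compress the DWARF sections with zlib
    g++ -O2 -g -DNDEBUG -fuse-ld=gold -Wl,--compress-debug-sections=zlib main.cpp -o app
    # Verify: compressed sections appear as .zdebug_* or carry the SHF_COMPRESSED ('C') flag
    readelf -S app | grep debug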


Source: https://habr.com/ru/post/984782/
