This file describes GDB, the GNU symbolic debugger.
This is the Tenth Edition, for GDB Version 7.7.1.
Copyright (C) 1988-2014 Free Software Foundation, Inc.
This edition of the GDB manual is dedicated to the memory of Fred Fish. Fred was a long-standing contributor to GDB and to free software in general. We will miss him.
Summary of GDB

1. A Sample Session
2. Getting In and Out of GDB
3. GDB Commands
4. Running Programs Under GDB
5. Stopping and Continuing
6. Running programs backward
7. Recording Inferior's Execution and Replaying It
8. Examining the Stack
9. Examining Source Files
10. Examining Data
11. Debugging Optimized Code
12. C Preprocessor Macros
13. Tracepoints: debugging remote targets non-intrusively
14. Debugging Programs That Use Overlays
15. Using GDB with Different Languages
16. Examining the Symbol Table
17. Altering Execution
18. GDB Files
19. Specifying a Debugging Target
20. Debugging Remote Programs
21. Configuration-Specific Information
22. Controlling GDB
23. Extending GDB
24. Command Interpreters
25. GDB Text User Interface
26. Using GDB under GNU Emacs
27. The GDB/MI Interface: GDB's machine interface
28. Annotations: GDB's annotation interface
29. JIT Compilation Interface: using the JIT debugging interface
30. In-Process Agent
31. Reporting Bugs in GDB
A. In Memoriam
B. Formatting Documentation: how to format and print GDB documentation
C. Installing GDB
D. Maintenance Commands
E. GDB Remote Serial Protocol
F. The GDB Agent Expression Mechanism
G. Target Descriptions: how targets can describe themselves to GDB
H. Operating System Information: getting additional information from the operating system
I. Trace File Format: GDB trace file format
J. .gdb_index section format
K. GNU General Public License: says how you can copy and share GDB
L. GNU Free Documentation License: the license for this documentation
Concept Index: index of concepts
Command, Variable, and Function Index: index of commands, variables, functions, and Python data types
The purpose of a debugger such as GDB is to allow you to see what is going on "inside" another program while it executes--or what another program was doing at the moment it crashed.
GDB can do four main kinds of things (plus other things in support of these) to help you catch bugs in the act:

- Start your program, specifying anything that might affect its behavior.
- Make your program stop on specified conditions.
- Examine what has happened, when your program has stopped.
- Change things in your program, so you can experiment with correcting the effects of one bug and go on to learn about another.
You can use GDB to debug programs written in C and C++. For more information, see Supported Languages, and in particular C and C++.
Support for D is partial. For information on D, see D.
Support for Modula-2 is partial. For information on Modula-2, see Modula-2.
Support for OpenCL C is partial. For information on OpenCL C, see OpenCL C.
Debugging Pascal programs which use sets, subranges, file variables, or nested functions does not currently work. GDB does not support entering expressions, printing values, or similar features using Pascal syntax.
GDB can be used to debug programs written in Fortran, although it may be necessary to refer to some variables with a trailing underscore.
GDB can be used to debug programs written in Objective-C, using either the Apple/NeXT or the GNU Objective-C runtime.
Free Software: freely redistributable software
Free Software Needs Free Documentation
Contributors to GDB
GDB is free software, protected by the GNU General Public License (GPL). The GPL gives you the freedom to copy or adapt a licensed program--but every person getting a copy also gets with it the freedom to modify that copy (which means that they must get access to the source code), and the freedom to distribute further copies. Typical software companies use copyrights to limit your freedoms; the Free Software Foundation uses the GPL to preserve these freedoms.
Fundamentally, the General Public License is a license which says that you have these freedoms and that you cannot take these freedoms away from anyone else.
The biggest deficiency in the free software community today is not in the software--it is the lack of good free documentation that we can include with the free software. Many of our most important programs do not come with free reference manuals and free introductory texts. Documentation is an essential part of any software package; when an important free software package does not come with a free manual and a free tutorial, that is a major gap. We have many such gaps today.
Consider Perl, for instance. The tutorial manuals that people normally use are non-free. How did this come about? Because the authors of those manuals published them with restrictive terms--no copying, no modification, source files not available--which exclude them from the free software world.
That wasn't the first time this sort of thing happened, and it was far from the last. Many times we have heard a GNU user eagerly describe a manual that he is writing, his intended contribution to the community, only to learn that he had ruined everything by signing a publication contract to make it non-free.
Free documentation, like free software, is a matter of freedom, not price. The problem with the non-free manual is not that publishers charge a price for printed copies--that in itself is fine. (The Free Software Foundation sells printed copies of manuals, too.) The problem is the restrictions on the use of the manual. Free manuals are available in source code form, and give you permission to copy and modify. Non-free manuals do not allow this.
The criteria of freedom for a free manual are roughly the same as for free software. Redistribution (including the normal kinds of commercial redistribution) must be permitted, so that the manual can accompany every copy of the program, both on-line and on paper.
Permission for modification of the technical content is crucial too. When people modify the software, adding or changing features, if they are conscientious they will change the manual too--so they can provide accurate and clear documentation for the modified program. A manual that leaves you no choice but to write a new manual to document a changed version of the program is not really available to our community.
Some kinds of limits on the way modification is handled are acceptable. For example, requirements to preserve the original author's copyright notice, the distribution terms, or the list of authors, are ok. It is also no problem to require modified versions to include notice that they were modified. Even entire sections that may not be deleted or changed are acceptable, as long as they deal with nontechnical topics (like this one). These kinds of restrictions are acceptable because they don't obstruct the community's normal use of the manual.
However, it must be possible to modify all the technical content of the manual, and then distribute the result in all the usual media, through all the usual channels. Otherwise, the restrictions obstruct the use of the manual, it is not free, and we need another manual to replace it.
Please spread the word about this issue. Our community continues to lose manuals to proprietary publishing. If we spread the word that free software needs free reference manuals and free tutorials, perhaps the next person who wants to contribute by writing documentation will realize, before it is too late, that only free manuals contribute to the free software community.
If you are writing documentation, please insist on publishing it under the GNU Free Documentation License or another free documentation license. Remember that this decision requires your approval--you don't have to let the publisher decide. Some commercial publishers will use a free license if you insist, but they will not propose the option; it is up to you to raise the issue and say firmly that this is what you want. If the publisher you are dealing with refuses, please try other publishers. If you're not sure whether a proposed license is free, write to licensing@gnu.org.
You can encourage commercial publishers to sell more free, copylefted manuals and tutorials by buying them, and particularly by buying copies from the publishers that paid for their writing or for major improvements. Meanwhile, try to avoid buying non-free documentation at all. Check the distribution terms of a manual before you buy it, and insist that whoever seeks your business must respect your freedom. Check the history of the book, and try to reward the publishers that have paid or pay the authors to work on it.
The Free Software Foundation maintains a list of free documentation published by other publishers, at http://www.fsf.org/doc/other-free-books.html.
Richard Stallman was the original author of GDB, and of many other GNU programs. Many others have contributed to its development. This section attempts to credit major contributors. One of the virtues of free software is that everyone is free to contribute to it; with regret, we cannot actually acknowledge everyone here. The file `ChangeLog' in the GDB distribution approximates a blow-by-blow account.
Changes much prior to version 2.0 are lost in the mists of time.
Plea: Additions to this section are particularly welcome. If you or your friends (or enemies, to be evenhanded) have been unfairly omitted from this list, we would like to add your names!
So that they may not regard their many labors as thankless, we particularly thank those who shepherded through major releases: Andrew Cagney (releases 6.3, 6.2, 6.1, 6.0, 5.3, 5.2, 5.1 and 5.0); Jim Blandy (release 4.18); Jason Molenda (release 4.17); Stan Shebs (release 4.14); Fred Fish (releases 4.16, 4.15, 4.13, 4.12, 4.11, 4.10, and 4.9); Stu Grossman and John Gilmore (releases 4.8, 4.7, 4.6, 4.5, and 4.4); John Gilmore (releases 4.3, 4.2, 4.1, 4.0, and 3.9); Jim Kingdon (releases 3.5, 3.4, and 3.3); and Randy Smith (releases 3.2, 3.1, and 3.0).
Richard Stallman, assisted at various times by Peter TerMaat, Chris Hanson, and Richard Mlynarik, handled releases through 2.8.
Michael Tiemann is the author of most of the GNU C++ support in GDB, with significant additional contributions from Per Bothner and Daniel Berlin. James Clark wrote the GNU C++ demangler. Early work on C++ was by Peter TerMaat (who also did much general update work leading to release 3.0).
GDB uses the BFD subroutine library to examine multiple object-file formats; BFD was a joint project of David V. Henkel-Wallace, Rich Pixley, Steve Chamberlain, and John Gilmore.
David Johnson wrote the original COFF support; Pace Willison did the original support for encapsulated COFF.
Brent Benson of Harris Computer Systems contributed DWARF 2 support.
Adam de Boor and Bradley Davis contributed the ISI Optimum V support. Per Bothner, Noboyuki Hikichi, and Alessandro Forin contributed MIPS support. Jean-Daniel Fekete contributed Sun 386i support. Chris Hanson improved the HP9000 support. Noboyuki Hikichi and Tomoyuki Hasei contributed Sony/News OS 3 support. David Johnson contributed Encore Umax support. Jyrki Kuoppala contributed Altos 3068 support. Jeff Law contributed HP PA and SOM support. Keith Packard contributed NS32K support. Doug Rabson contributed Acorn Risc Machine support. Bob Rusk contributed Harris Nighthawk CX-UX support. Chris Smith contributed Convex support (and Fortran debugging). Jonathan Stone contributed Pyramid support. Michael Tiemann contributed SPARC support. Tim Tucker contributed support for the Gould NP1 and Gould Powernode. Pace Willison contributed Intel 386 support. Jay Vosburgh contributed Symmetry support. Marko Mlinar contributed OpenRISC 1000 support.
Andreas Schwab contributed M68K GNU/Linux support.
Rich Schaefer and Peter Schauer helped with support of SunOS shared libraries.
Jay Fenlason and Roland McGrath ensured that GDB and GAS agree about several machine instruction sets.
Patrick Duval, Ted Goldstein, Vikram Koka and Glenn Engel helped develop remote debugging. Intel Corporation, Wind River Systems, AMD, and ARM contributed remote debugging modules for the i960, VxWorks, A29K UDI, and RDI targets, respectively.
Brian Fox is the author of the readline libraries providing command-line editing and command history.
Andrew Beers of SUNY Buffalo wrote the language-switching code, the Modula-2 support, and contributed the Languages chapter of this manual.
Fred Fish wrote most of the support for Unix System Vr4. He also enhanced the command-completion support to cover C++ overloaded symbols.
Hitachi America (now Renesas America), Ltd. sponsored the support for H8/300, H8/500, and Super-H processors.
NEC sponsored the support for the v850, Vr4xxx, and Vr5xxx processors.
Mitsubishi (now Renesas) sponsored the support for D10V, D30V, and M32R/D processors.
Toshiba sponsored the support for the TX39 Mips processor.
Matsushita sponsored the support for the MN10200 and MN10300 processors.
Fujitsu sponsored the support for SPARClite and FR30 processors.
Kung Hsu, Jeff Law, and Rick Sladkey added support for hardware watchpoints.
Michael Snyder added support for tracepoints.
Stu Grossman wrote gdbserver.
Jim Kingdon, Peter Schauer, Ian Taylor, and Stu Grossman made nearly innumerable bug fixes and cleanups throughout GDB.
The following people at the Hewlett-Packard Company contributed support for the PA-RISC 2.0 architecture, HP-UX 10.20, 10.30, and 11.0 (narrow mode), HP's implementation of kernel threads, HP's aC++ compiler, and the Text User Interface (nee Terminal User Interface): Ben Krepp, Richard Title, John Bishop, Susan Macchia, Kathy Mann, Satish Pai, India Paul, Steve Rehrauer, and Elena Zannoni. Kim Haase provided HP-specific information in this manual.
DJ Delorie ported GDB to MS-DOS, for the DJGPP project. Robert Hoehne made significant contributions to the DJGPP port.
Cygnus Solutions has sponsored GDB maintenance and much of its development since 1991. Cygnus engineers who have worked on GDB full time include Mark Alexander, Jim Blandy, Per Bothner, Kevin Buettner, Edith Epstein, Chris Faylor, Fred Fish, Martin Hunt, Jim Ingham, John Gilmore, Stu Grossman, Kung Hsu, Jim Kingdon, John Metzler, Fernando Nasser, Geoffrey Noer, Dawn Perchik, Rich Pixley, Zdenek Radouch, Keith Seitz, Stan Shebs, David Taylor, and Elena Zannoni. In addition, Dave Brolley, Ian Carmichael, Steve Chamberlain, Nick Clifton, JT Conklin, Stan Cox, DJ Delorie, Ulrich Drepper, Frank Eigler, Doug Evans, Sean Fagan, David Henkel-Wallace, Richard Henderson, Jeff Holcomb, Jeff Law, Jim Lemke, Tom Lord, Bob Manson, Michael Meissner, Jason Merrill, Catherine Moore, Drew Moseley, Ken Raeburn, Gavin Romig-Koch, Rob Savoye, Jamie Smith, Mike Stump, Ian Taylor, Angela Thomas, Michael Tiemann, Tom Tromey, Ron Unrau, Jim Wilson, and David Zuhn have made contributions both large and small.
Andrew Cagney, Fernando Nasser, and Elena Zannoni, while working for Cygnus Solutions, implemented the original GDB/MI interface.
Jim Blandy added support for preprocessor macros, while working for Red Hat.
Andrew Cagney designed GDB's architecture vector. Many people including Andrew Cagney, Stephane Carrez, Randolph Chung, Nick Duffek, Richard Henderson, Mark Kettenis, Grace Sainsbury, Kei Sakamoto, Yoshinori Sato, Michael Snyder, Andreas Schwab, Jason Thorpe, Corinna Vinschen, Ulrich Weigand, and Elena Zannoni, helped with the migration of old architectures to this new framework.
Andrew Cagney completely re-designed and re-implemented GDB's unwinder framework, consisting of a fresh new design featuring frame IDs, independent frame sniffers, and the sentinel frame. Mark Kettenis implemented the DWARF 2 unwinder, Jeff Johnston the libunwind unwinder, and Andrew Cagney the dummy, sentinel, tramp, and trad unwinders. The architecture-specific changes, each involving a complete rewrite of the architecture's frame code, were carried out by Jim Blandy, Joel Brobecker, Kevin Buettner, Andrew Cagney, Stephane Carrez, Randolph Chung, Orjan Friberg, Richard Henderson, Daniel Jacobowitz, Jeff Johnston, Mark Kettenis, Theodore A. Roth, Kei Sakamoto, Yoshinori Sato, Michael Snyder, Corinna Vinschen, and Ulrich Weigand.
Christian Zankel, Ross Morley, Bob Wilson, and Maxim Grigoriev from Tensilica, Inc. contributed support for Xtensa processors. Others who have worked on the Xtensa port of GDB in the past include Steve Tjiang, John Newlin, and Scott Foehner.
Michael Eager and staff of Xilinx, Inc., contributed support for the Xilinx MicroBlaze architecture.
You can use this manual at your leisure to read all about GDB. However, a handful of commands are enough to get started using the debugger. This chapter illustrates those commands.
One of the preliminary versions of GNU m4 (a generic macro
processor) exhibits the following bug: sometimes, when we change its
quote strings from the default, the commands used to capture one macro
definition within another stop working. In the following short m4
session, we define a macro foo which expands to 0000; we
then use the m4 built-in defn to define bar as the
same thing. However, when we change the open quote string to
<QUOTE> and the close quote string to <UNQUOTE>, the same
procedure fails to define a new synonym baz:
$ cd gnu/m4
$ ./m4
define(foo,0000)
foo
0000
define(bar,defn(`foo'))
bar
0000
changequote(<QUOTE>,<UNQUOTE>)
define(baz,defn(<QUOTE>foo<UNQUOTE>))
baz
Ctrl-d
m4: End of input: 0: fatal error: EOF in string
Let us use GDB to try to see what is going on.
$ gdb m4
GDB is free software and you are welcome to distribute copies
of it under certain conditions; type "show copying" to see
the conditions.
There is absolutely no warranty for GDB; type "show warranty"
for details.
GDB, Copyright 1999 Free Software Foundation, Inc...
(gdb)
GDB reads only enough symbol data to know where to find the rest when needed; as a result, the first prompt comes up very quickly. We now tell GDB to use a narrower display width than usual, so that examples fit in this manual.
(gdb) set width 70
We need to see how the m4 built-in changequote works.
Having looked at the source, we know the relevant subroutine is
m4_changequote, so we set a breakpoint there with the
break command.
(gdb) break m4_changequote
Breakpoint 1 at 0x62f4: file builtin.c, line 879.
Using the run command, we start m4 running under
GDB control; as long as control does not reach the m4_changequote
subroutine, the program runs as usual:
(gdb) run
Starting program: /work/Editorial/gdb/gnu/m4/m4
define(foo,0000)
foo
0000
To trigger the breakpoint, we call changequote.
GDB suspends execution of m4, displaying information about the
context where it stops.
changequote(<QUOTE>,<UNQUOTE>)
Breakpoint 1, m4_changequote (argc=3, argv=0x33c70)
at builtin.c:879
879 if (bad_argc(TOKEN_DATA_TEXT(argv[0]),argc,1,3))
Now we use the command n (next) to advance execution to
the next line of the current function.
(gdb) n
882         set_quotes((argc >= 2) ? TOKEN_DATA_TEXT(argv[1])\
             : nil,
set_quotes looks like a promising subroutine. We can go into it
by using the command s (step) instead of next.
step goes to the next line to be executed in any
subroutine, so it steps into set_quotes.
(gdb) s
set_quotes (lq=0x34c78 "<QUOTE>", rq=0x34c88 "
The display that shows the subroutine where m4 is now
suspended (and its arguments) is called a stack frame display. It
shows a summary of the stack. We can use the backtrace
command (which can also be spelled bt), to see where we are
in the stack as a whole: the backtrace command displays a
stack frame for each active subroutine.
(gdb) bt
#0  set_quotes (lq=0x34c78 "<QUOTE>", rq=0x34c88 "
We step through a few more lines to see what happens. The first two
times, we can use `s'; the next two times we use n to avoid
falling into the xstrdup subroutine.
(gdb) s
0x3b5c  532         if (rquote != def_rquote)
(gdb) s
0x3b80  535         lquote = (lq == nil || *lq == '\0') ? \
def_lquote : xstrdup(lq);
(gdb) n
536         rquote = (rq == nil || *rq == '\0') ? def_rquote\
 : xstrdup(rq);
(gdb) n
538         len_lquote = strlen(rquote);
The last line displayed looks a little odd; we can examine the variables
lquote and rquote to see if they are in fact the new left
and right quotes we specified. We use the command p
(print) to see their values.
(gdb) p lquote
$1 = 0x35d40 "<QUOTE>"
(gdb) p rquote
$2 = 0x35d50 "<UNQUOTE>"
lquote and rquote are indeed the new left and right quotes.
To look at some context, we can display ten lines of source
surrounding the current line with the l (list) command.
(gdb) l
533             xfree(rquote);
534
535         lquote = (lq == nil || *lq == '\0') ? def_lquote\
 : xstrdup (lq);
536         rquote = (rq == nil || *rq == '\0') ? def_rquote\
 : xstrdup (rq);
537
538         len_lquote = strlen(rquote);
539         len_rquote = strlen(lquote);
540     }
541
542     void
Let us step past the two lines that set len_lquote and
len_rquote, and then examine the values of those variables.
(gdb) n
539         len_rquote = strlen(lquote);
(gdb) n
540     }
(gdb) p len_lquote
$3 = 9
(gdb) p len_rquote
$4 = 7
That certainly looks wrong, assuming len_lquote and
len_rquote are meant to be the lengths of lquote and
rquote respectively. We can set them to better values using
the p command, since it can print the value of
any expression--and that expression can include subroutine calls and
assignments.
(gdb) p len_lquote=strlen(lquote)
$5 = 7
(gdb) p len_rquote=strlen(rquote)
$6 = 9
Is that enough to fix the problem of using the new quotes with the
m4 built-in defn? We can allow m4 to continue
executing with the c (continue) command, and then try the
example that caused trouble initially:
(gdb) c
Continuing.
define(baz,defn(<QUOTE>foo<UNQUOTE>))
baz
0000
Success! The new quotes now work just as well as the default ones. The
problem seems to have been just the two typos defining the wrong
lengths. We allow m4 to exit by giving it an EOF as input:
Ctrl-d
Program exited normally.
The message `Program exited normally.' is from GDB; it
indicates m4 has finished executing. We can end our
session with the quit command.
(gdb) quit
This chapter discusses how to start GDB, and how to get out of it. The essentials are:
2.1 Invoking GDB: how to start GDB
2.2 Quitting GDB: how to quit GDB
2.3 Shell Commands: how to use shell commands inside GDB
2.4 Logging Output: how to log GDB's output to a file
Invoke GDB by running the program gdb. Once started,
GDB reads commands from the terminal until you tell it to exit.
You can also run gdb with a variety of arguments and options,
to specify more of your debugging environment at the outset.
The command-line options described here are designed to cover a variety of situations; in some environments, some of these options may effectively be unavailable.
The most usual way to start GDB is with one argument, specifying an executable program:
gdb program
You can also start GDB with both an executable program and a core file specified:
gdb program core
You can, instead, specify a process ID as a second argument, if you want to debug a running process:
gdb program 1234
GDB would attach to process 1234 (unless you also have a file
named `1234'; GDB does check for a core file first).
Taking advantage of the second command-line argument requires a fairly complete operating system; when you use GDB as a remote debugger attached to a bare board, there may not be any notion of "process", and there is often no way to get a core dump. GDB will warn you if it is unable to attach or to read core dumps.
You can optionally have GDB pass any arguments after the
executable file to the inferior using --args. This option stops
option processing.
gdb --args gcc -O2 -c foo.c
This will cause gdb to debug gcc, and to set
gcc's command-line arguments (see section 4.3 Your Program's Arguments) to `-O2 -c foo.c'.
You can run gdb without printing the front material, which describes
GDB's non-warranty, by specifying -silent:
gdb -silent
You can further control how GDB starts up by using command-line options. GDB itself can remind you of the options available.
Type
gdb -help
to display all available options and briefly describe their use (`gdb -h' is a shorter equivalent).
All options and command line arguments you give are processed in sequential order. The order makes a difference when the `-x' option is used.
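For example (the file and program names here are hypothetical), because options are processed in order, a breakpoint file supplied with `-x' before `-ex run' takes effect before the program starts running:

```
gdb -x breakpoints.gdb -ex 'run' ./myprog
```

Reversing the two options would start the program before the breakpoints were set.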
2.1.1 Choosing Files
2.1.2 Choosing Modes
2.1.3 What GDB Does During Startup
When GDB starts, it reads any arguments other than options as specifying an executable file and core file (or process ID). This is the same as if the arguments were specified by the `-se' and `-c' (or `-p') options respectively. (GDB reads the first argument that does not have an associated option flag as equivalent to the `-se' option followed by that argument; and the second argument that does not have an associated option flag, if any, as equivalent to the `-c'/`-p' option followed by that argument.) If the second argument begins with a decimal digit, GDB will first attempt to attach to it as a process, and if that fails, attempt to open it as a corefile. If you have a corefile whose name begins with a digit, you can prevent GDB from treating it as a pid by prefixing it with `./', e.g. `./12345'.
If GDB has not been configured to include core file support, such as for most embedded targets, then it will complain about a second argument and ignore it.
Many options have both long and short forms; both are shown in the following list. GDB also recognizes the long forms if you truncate them, so long as enough of the option is present to be unambiguous. (If you prefer, you can flag option arguments with `--' rather than `-', though we illustrate the more usual convention.)
-symbols file
-s file
Read symbol table from file file.
-exec file
-e file
Use file file as the executable file to execute when appropriate, and for examining pure data in conjunction with a core dump.
-se file
Read symbol table from file file and use it as the executable file.
-core file
-c file
Use file file as a core dump to examine.
-pid number
-p number
Connect to process ID number, as with the
attach command.
-command file
-x file
Execute commands from file file. The contents of this file are evaluated exactly as the
source command would.
See section Command files.
-eval-command command
-ex command
Execute a single GDB command.
This option may be used multiple times to call multiple commands. It may also be interleaved with `-command' as required.
gdb -ex 'target sim' -ex 'load' \
   -x setbreakpoints -ex 'run' a.out
-init-command file
-ix file
Execute commands from file file before loading the inferior (and before processing option `-x').
-init-eval-command command
-iex command
Execute a single GDB command before loading the inferior (and before processing option `-ex').
-directory directory
-d directory
Add directory to the path to search for source and script files.
-r
-readnow
Read each symbol file's entire symbol table immediately, rather than the default, which is to read it incrementally as it is needed.
You can run GDB in various alternative modes--for example, in batch mode or quiet mode.
-nx
-n
Do not execute commands found in any initialization file. There are three init files, loaded in the following order:
`system.gdbinit'
This is the system-wide init file. Its location is specified with the
--with-system-gdbinit
configure option (see section C.6 System-wide configuration and settings).
It is loaded first when GDB starts, before command line options
have been processed.
`~/.gdbinit'
This is the init file in your home directory. It is loaded next, after `system.gdbinit', and before command line options have been processed.
`./.gdbinit'
This is the init file in the current directory. It is loaded last, after command line options other than -x and
-ex have been processed. Command line options -x and
-ex are processed last, after `./.gdbinit' has been loaded.
For further documentation on startup processing, see section 2.1.3 What GDB Does During Startup. For documentation on how to write command files, see section Command Files.
-nh
Do not execute commands found in `~/.gdbinit', the init file in your home directory.
-quiet
-silent
-q
"Quiet." Do not print the introductory and copyright messages. These messages are also suppressed in batch mode.
-batch
Run in batch mode. Exit with status 0 after processing all the
command files specified with `-x' (and all commands from
initialization files, if not inhibited with `-n'). Exit with
nonzero status if an error occurs in executing the commands
in the command files. Batch mode also disables pagination, sets unlimited
terminal width and height (see section 22.4 Screen Size), and acts as if set confirm
off were in effect (see section 22.8 Optional Warnings and Messages).
Batch mode may be useful for running GDB as a filter, for example to download and run a program on another computer; in order to make this more useful, the message
Program exited normally.
(which is ordinarily issued whenever a program running under GDB control terminates) is not issued when running in batch mode.
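As a sketch (the file names are hypothetical), batch mode lets GDB run as a non-interactive, scripted tool; for example, to print a backtrace from a core dump and exit:

```
$ gdb -batch -ex 'bt' ./myprog core.1234
```

The exit status is 0 unless an error occurs while executing the commands.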
-batch-silent
Run in batch mode exactly like `-batch', but totally silently. All GDB output to
stdout is prevented (stderr is
unaffected). This is much quieter than `-silent' and would be useless
for an interactive session.
This is particularly useful when using targets that give `Loading section' messages, for example.
Note that targets that give their output via GDB, as opposed to
writing directly to stdout, will also be made silent.
-return-child-result
The return code from GDB exit is the return code of the child process (the process being debugged).
This option is useful in conjunction with `-batch' or `-batch-silent', when GDB is being used as a remote program loader or simulator interface.
-nowindows
-nw
"No windows." If GDB comes with a graphical user interface (GUI) built in, this option tells GDB to use only the command-line interface. If no GUI is available, this option has no effect.
-windows
-w
If GDB includes a GUI, this option requires it to be used if possible.
-cd directory
Run GDB using directory as its working directory, instead of the current directory.
-data-directory directory
Run GDB using directory as its data directory, where GDB searches for its auxiliary files.
-fullname
-f
GNU Emacs sets this option when it runs GDB as a subprocess. It tells GDB to output the full file name and line number in a standard, recognizable fashion each time a stack frame is displayed.
-annotate level
This option sets the annotation level inside GDB. The annotation level controls how much information GDB prints together with its prompt, values of expressions, source lines, and other types of output.
The annotation mechanism has largely been superseded by GDB/MI (see section 27. The GDB/MI Interface).
--args
Change interpretation of the command line so that arguments following the executable file are passed as command-line arguments to the inferior.
-baud bps
-b bps
Set the line speed (baud rate or bits per second) of any serial interface used by GDB for remote debugging.
-l timeout
Set the timeout (in seconds) of any communication used by GDB for remote debugging.
-tty device
-t device
Run using device for your program's standard input and output.
-tui
Activate the Text User Interface when starting.
-interpreter interp
Use the interpreter interp for interface with the controlling program or device.
`--interpreter=mi' (or `--interpreter=mi2') causes GDB to use the GDB/MI interface (see section The GDB/MI Interface) included since GDB version 6.0. The previous GDB/MI interface, included in GDB version 5.3 and selected with `--interpreter=mi1', is deprecated. Earlier GDB/MI interfaces are no longer supported.
-write
Open the executable and core files for both reading and writing.
-statistics
This option causes GDB to print statistics about time and memory usage after it completes each command and returns to the prompt.
-version
This option causes GDB to print its version number and no-warranty blurb, and exit.
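For example, a debugger front end would typically launch GDB in MI mode like this (the program name is hypothetical):

```
$ gdb --interpreter=mi2 ./myprog
```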
Here's the description of what GDB does during session startup:
If you wish to disable the auto-loading during startup, you must do something like the following:
$ gdb -iex "set auto-load python-scripts off" myprogram |
Option `-ex' does not work because the auto-loading is then turned off too late.
Init files use the same syntax as command files (see section 23.1.3 Command Files) and are processed by GDB in the same way. The init file in your home directory can set options (such as `set complaints') that affect subsequent processing of command line options and operands. Init files are not executed if you use the `-nx' option (see section Choosing Modes).
To display the list of init files loaded by gdb at startup, you can use gdb --help.
The init files are normally called `.gdbinit'. The DJGPP port of GDB uses the name `gdb.ini', due to the limitations of file names imposed by DOS filesystems. The Windows port of GDB uses the standard name, but if it finds a `gdb.ini' file in your home directory, it warns you about that and suggests renaming the file to the standard name.
quit [expression]
q
To exit GDB, use the quit command (abbreviated
q), or type an end-of-file character (usually Ctrl-d). If you
do not supply expression, GDB will terminate normally;
otherwise it will terminate using the result of expression as the
error code.
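For instance, in a scripted session you can make GDB exit with a chosen status, which the calling shell can then inspect:

```
(gdb) quit 2
$ echo $?
2
```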
An interrupt (often Ctrl-c) does not exit from GDB, but rather terminates the action of any GDB command that is in progress and returns to GDB command level. It is safe to type the interrupt character at any time because GDB does not allow it to take effect until a time when it is safe.
If you have been using GDB to control an attached process or
device, you can release it with the detach command
(see section Debugging an Already-running Process).
If you need to execute occasional shell commands during your
debugging session, there is no need to leave or suspend GDB; you can
just use the shell command.
shell command-string
!command-string
Invoke a standard shell to execute command-string. Note that no space is needed between
! and command-string.
If it exists, the environment variable SHELL determines which
shell to run. Otherwise GDB uses the default shell
(`/bin/sh' on Unix systems, `COMMAND.COM' on MS-DOS, etc.).
The utility make is often needed in development environments.
You do not have to use the shell command for this purpose in
GDB:
make make-args
Execute the make program with the specified
arguments. This is equivalent to `shell make make-args'.
2.4 Logging Output
You may want to save the output of GDB commands to a file. There are several commands to control GDB's logging.
set logging on
Enable logging.
set logging off
Disable logging.
set logging file file
Change the name of the current logfile. The default logfile is `gdb.txt'.
set logging overwrite [on|off]
By default, GDB will append to the logfile. Set overwrite if you want set logging on to overwrite the logfile instead.
set logging redirect [on|off]
By default, GDB output will go to both the terminal and the logfile. Set redirect if you want output to go only to the log file.
show logging
Show the current values of the logging settings.
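A typical logging session (the log file name is an arbitrary choice) might look like:

```
(gdb) set logging file session.log
(gdb) set logging overwrite on
(gdb) set logging on
Copying output to session.log.
(gdb) info breakpoints
No breakpoints or watchpoints.
(gdb) set logging off
Done logging to session.log.
```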
3. GDB Commands
You can abbreviate a GDB command to the first few letters of the command name, if that abbreviation is unambiguous; and you can repeat certain GDB commands by typing just RET. You can also use the TAB key to get GDB to fill out the rest of a word in a command (or to show you the alternatives available, if there is more than one possibility).
3.1 Command Syntax How to give commands to GDB 3.2 Command Completion Command completion 3.3 Getting Help How to ask GDB for help
3.1 Command Syntax
A GDB command is a single line of input. There is no limit on
how long it can be. It starts with a command name, which is followed by
arguments whose meaning depends on the command name. For example, the
command step accepts an argument which is the number of times to
step, as in `step 5'. You can also use the step command
with no arguments. Some commands do not allow any arguments.
GDB command names may always be truncated if that abbreviation is
unambiguous. Other possible command abbreviations are listed in the
documentation for individual commands. In some cases, even ambiguous
abbreviations are allowed; for example, s is specially defined as
equivalent to step even though there are other commands whose
names start with s. You can test abbreviations by using them as
arguments to the help command.
A blank line as input to GDB (typing just RET) means to
repeat the previous command. Certain commands (for example, run)
will not repeat this way; these are commands whose unintentional
repetition might cause trouble and which you are unlikely to want to
repeat. User-defined commands can disable this feature; see
dont-repeat.
The list and x commands, when you repeat them with
RET, construct new arguments rather than repeating
exactly as typed. This permits easy scanning of source or memory.
GDB can also use RET in another way: to partition lengthy
output, in a way similar to the common utility more
(see section Screen Size). Since it is easy to press one
RET too many in this situation, GDB disables command
repetition after any command that generates this sort of display.
Any text from a # to the end of the line is a comment; it does nothing. This is useful mainly in command files (see section Command Files).
The Ctrl-o binding is useful for repeating a complex sequence of commands. This command accepts the current line, like RET, and then fetches the next line relative to the current line from the history for editing.
3.2 Command Completion
GDB can fill in the rest of a word in a command for you, if there is only one possibility; it can also show you what the valid possibilities are for the next word in a command, at any time. This works for GDB commands, GDB subcommands, and the names of symbols in your program.
Press the TAB key whenever you want GDB to fill out the rest of a word. If there is only one possibility, GDB fills in the word, and waits for you to finish the command (or press RET to enter it). For example, if you type
(gdb) info bre TAB
GDB fills in the rest of the word `breakpoints', since that is
the only info subcommand beginning with `bre':
(gdb) info breakpoints
You can either press RET at this point, to run the info
breakpoints command, or backspace and enter something else, if
`breakpoints' does not look like the command you expected. (If you
were sure you wanted info breakpoints in the first place, you
might as well just type RET immediately after `info bre',
to exploit command abbreviations rather than command completion).
If there is more than one possibility for the next word when you press TAB, GDB sounds a bell. You can either supply more characters and try again, or just press TAB a second time; GDB displays all the possible completions for that word. For example, you might want to set a breakpoint on a subroutine whose name begins with `make_', but when you type b make_TAB GDB just sounds the bell. Typing TAB again displays all the function names in your program that begin with those characters, for example:
(gdb) b make_ TAB (GDB sounds bell; press TAB again, to see:)
make_a_section_from_file     make_environ
make_abs_section             make_function_type
make_blockvector             make_pointer_type
make_cleanup                 make_reference_type
make_command                 make_symbol_completion_list
(gdb) b make_
After displaying the available possibilities, GDB copies your partial input (`b make_' in the example) so you can finish the command.
If you just want to see the list of alternatives in the first place, you can press M-? rather than pressing TAB twice. M-? means META ?. You can type this either by holding down a key designated as the META shift on your keyboard (if there is one) while typing ?, or as ESC followed by ?.
Sometimes the string you need, while logically a "word", may contain
parentheses or other characters that GDB normally excludes from
its notion of a word. To permit word completion to work in this
situation, you may enclose words in ' (single quote marks) in
GDB commands.
The most likely situation where you might need this is in typing the
name of a C++ function. This is because C++ allows function
overloading (multiple definitions of the same function, distinguished
by argument type). For example, when you want to set a breakpoint you
may need to distinguish whether you mean the version of name
that takes an int parameter, name(int), or the version
that takes a float parameter, name(float). To use the
word-completion facilities in this situation, type a single quote
' at the beginning of the function name. This alerts
GDB that it may need to consider more information than usual
when you press TAB or M-? to request word completion:
(gdb) b 'bubble( M-?
bubble(double,double)    bubble(int,int)
(gdb) b 'bubble(
In some cases, GDB can tell that completing a name requires using quotes. When this happens, GDB inserts the quote for you (while completing as much as it can) if you do not type the quote in the first place:
(gdb) b bub TAB
GDB alters your input line to the following, and rings a bell:
(gdb) b 'bubble(
In general, GDB can tell that a quote is needed (and inserts it) if you have not yet started typing the argument list when you ask for completion on an overloaded symbol.
For more information about overloaded functions, see C++ Expressions. You can use the command set overload-resolution off to disable overload resolution; see GDB Features for C++.
When completing in an expression which looks up a field in a structure, GDB also tries to limit completions to the field names available in the type of the left-hand-side:
(gdb) p gdb_stdout.M-?
magic                to_fputs             to_rewind
to_data              to_isatty            to_write
to_delete            to_put               to_write_async_safe
to_flush             to_read
This is because the gdb_stdout is a variable of the type
struct ui_file that is defined in GDB sources as
follows:
struct ui_file
{
int *magic;
ui_file_flush_ftype *to_flush;
ui_file_write_ftype *to_write;
ui_file_write_async_safe_ftype *to_write_async_safe;
ui_file_fputs_ftype *to_fputs;
ui_file_read_ftype *to_read;
ui_file_delete_ftype *to_delete;
ui_file_isatty_ftype *to_isatty;
ui_file_rewind_ftype *to_rewind;
ui_file_put_ftype *to_put;
void *to_data;
};
3.3 Getting Help
You can always ask GDB itself for information on its commands, using the command help.
help
h
You can use help (abbreviated h) with no arguments to display a short list of named classes of commands:
(gdb) help
List of classes of commands:
aliases -- Aliases of other commands
breakpoints -- Making program stop at certain points
data -- Examining data
files -- Specifying and examining files
internals -- Maintenance commands
obscure -- Obscure features
running -- Running the program
stack -- Examining the stack
status -- Status inquiries
support -- Support facilities
tracepoints -- Tracing of program execution without
stopping the program
user-defined -- User-defined commands
Type "help" followed by a class name for a list of
commands in that class.
Type "help" followed by command name for full
documentation.
Command name abbreviations are allowed if unambiguous.
(gdb)
help class
Using one of the general help classes as an argument, you can get a list of the individual commands in that class. For example, here is the help display for the class status:
(gdb) help status
Status inquiries.
List of commands:
info -- Generic command for showing things
about the program being debugged
show -- Generic command for showing things
about the debugger
Type "help" followed by command name for full
documentation.
Command name abbreviations are allowed if unambiguous.
(gdb)
help command
With a command name as help argument, GDB displays a short paragraph on how to use that command.
apropos args
The apropos command searches through all of the GDB commands, and their documentation, for the regular expression specified in args. It prints out all matches found. For example:
apropos alias
results in:
alias -- Define a new command that is an alias of an existing command
aliases -- Aliases of other commands
d -- Delete some breakpoints or auto-display expressions
del -- Delete some breakpoints or auto-display expressions
delete -- Delete some breakpoints or auto-display expressions
complete args
The complete args command lists all the possible completions for the beginning of a command. Use args to specify the beginning of the command you want completed. For example:
complete i
results in:
if ignore info inspect
This is intended for use by GNU Emacs.
In addition to help, you can use the GDB commands info
and show to inquire about the state of your program, or the state
of GDB itself. Each command supports many topics of inquiry; this
manual introduces each of them in the appropriate context. The listings
under info and under show in the Command, Variable, and
Function Index point to all the sub-commands. See section Command, Variable, and Function Index.
info
This command (abbreviated i) is for describing the state of your
program. For example, you can show the arguments passed to a function
with info args, list the registers currently in use with info
registers, or list the breakpoints you have set with info breakpoints.
You can get a complete list of the info sub-commands with
help info.
set
You can assign the result of an expression to an environment variable with set. For example, you can set the GDB prompt to a $-sign with set prompt $.
show
In contrast to info, show is for describing the state of GDB itself.
You can change most of the things you can show, by using the
related command set; for example, you can control what number
system is used for displays with set radix, or simply inquire
which is currently in use with show radix.
To display all the settable parameters and their current
values, you can use show with no arguments; you may also use
info set. Both commands produce the same display.
Here are three miscellaneous show subcommands, all of which are
exceptional in lacking corresponding set commands:
show version
Show what version of GDB is running. You should include this information in GDB bug-reports.
show copying
info copying
Display information about permission for copying GDB.
show warranty
info warranty
Display the GNU "NO WARRANTY" statement, or a warranty, if your version of GDB comes with one.
4. Running Programs Under GDB
When you run a program under GDB, you must first generate debugging information when you compile it.
You may start GDB with its arguments, if any, in an environment of your choice. If you are doing native debugging, you may redirect your program's input and output, debug an already running process, or kill a child process.
4.1 Compiling for Debugging Compiling for debugging 4.2 Starting your Program Starting your program 4.3 Your Program's Arguments Your program's arguments 4.4 Your Program's Environment Your program's environment
4.5 Your Program's Working Directory Your program's working directory 4.6 Your Program's Input and Output Your program's input and output 4.7 Debugging an Already-running Process Debugging an already-running process 4.8 Killing the Child Process Killing the child process
4.9 Debugging Multiple Inferiors and Programs Debugging multiple inferiors and programs 4.10 Debugging Programs with Multiple Threads Debugging programs with multiple threads 4.11 Debugging Forks Debugging forks 4.12 Setting a Bookmark to Return to Later Setting a bookmark to return to later
4.1 Compiling for Debugging
In order to debug a program effectively, you need to generate debugging information when you compile it. This debugging information is stored in the object file; it describes the data type of each variable or function and the correspondence between source line numbers and addresses in the executable code.
To request debugging information, specify the `-g' option when you run the compiler.
Programs that are to be shipped to your customers are compiled with optimizations, using the `-O' compiler option. However, some compilers are unable to handle the `-g' and `-O' options together. Using those compilers, you cannot generate optimized executables containing debugging information.
GCC, the GNU C/C++ compiler, supports `-g' with or without `-O', making it possible to debug optimized code. We recommend that you always use `-g' whenever you compile a program. You may think your program is correct, but there is no sense in pushing your luck. For more information, see 11. Debugging Optimized Code.
Older versions of the GNU C compiler permitted a variant option `-gg' for debugging information. GDB no longer supports this format; if your GNU C compiler has this option, do not use it.
GDB knows about preprocessor macros and can show you their expansion (see section 12. C Preprocessor Macros). Most compilers do not include information about preprocessor macros in the debugging information if you specify the `-g' flag alone. Version 3.1 and later of GCC, the GNU C compiler, provides macro information if you are using the DWARF debugging format, and specify the option `-g3'.
See section `Options for Debugging Your Program or GCC' in Using the GNU Compiler Collection (GCC), for more information on options affecting debug information.
You will have the best debugging experience if you use the latest version of the DWARF debugging format that your compiler supports. DWARF is currently the most expressive and best supported debugging format in GDB.
4.2 Starting your Program
run
r
Use the run command to start your program under GDB. You must first specify the program name (except on VxWorks) with an argument to GDB (see section Getting In and Out of GDB), or by using the file or exec-file command (see section Commands to Specify Files).
If you are running your program in an execution environment that
supports processes, run creates an inferior process and makes
that process run your program. In some environments without processes,
run jumps to the start of your program. Other targets,
like `remote', are always running. If you get an error
message like this one:
The "remote" target does not support "run". Try "help target" or "continue".
then use continue to run your program. You may need load
first (see load).
The execution of a program is affected by certain information it receives from its superior. GDB provides ways to specify this information, which you must do before starting your program. (You can change it after starting your program, but such changes only affect your program the next time you start it.) This information may be divided into four categories:
The arguments. Specify the arguments to give your program as the arguments of the run command. If a shell is available on your target, the shell is used to pass the arguments, so that you may use normal conventions (such as wildcard expansion or variable substitution) in describing the arguments. In Unix systems, you can control which shell is used with the SHELL environment variable. See section Your Program's Arguments.
The environment. Your program normally inherits its environment from GDB, but you can use the GDB commands set environment and unset environment to change parts of the environment that affect your program. See section Your Program's Environment.
The working directory. Your program inherits its working directory from GDB. You can set the GDB working directory with the cd command in GDB. See section Your Program's Working Directory.
The standard input and output. Your program normally uses the same device for standard input and standard output as GDB is using. You can redirect input and output in the run command line, or you can use the tty command to set a different device for your program. See section Your Program's Input and Output.
Warning: While input and output redirection work, you cannot use pipes to pass the output of the program you are debugging to another program; if you attempt this, is likely to wind up debugging the wrong program.
When you issue the run command, your program begins to execute
immediately. See section Stopping and Continuing, for discussion
of how to arrange for your program to stop. Once your program has
stopped, you may call functions in your program, using the print
or call commands. See section Examining Data.
If the modification time of your symbol file has changed since the last time GDB read its symbols, GDB discards its symbol table, and reads it again. When it does this, GDB tries to retain your current breakpoints.
start
The name of the main procedure can vary from language to language. With C or C++, the main procedure name is always main, but other languages such as Ada do not require a specific name for their main procedure. The debugger provides a convenient way to start the execution of the program and to stop at the beginning of the main procedure, depending on the language used.
The `start' command does the equivalent of setting a temporary breakpoint at the beginning of the main procedure and then invoking the `run' command.
Some programs contain an elaboration phase where some startup code is
executed before the main procedure is called. This depends on the
languages used to write your program. In C++, for instance,
constructors for static and global objects are executed before
main is called. It is therefore possible that the debugger stops
before reaching the main procedure. However, the temporary breakpoint
will remain to halt execution.
Specify the arguments to give to your program as arguments to the `start' command. These arguments will be given verbatim to the underlying `run' command. Note that the same arguments will be reused if no argument is provided during subsequent calls to `start' or `run'.
It is sometimes necessary to debug the program during elaboration. In
these cases, using the start command would stop the execution of
your program too late, as the program would have already completed the
elaboration phase. Under these circumstances, insert breakpoints in your
elaboration code before running your program.
set exec-wrapper wrapper
show exec-wrapper
unset exec-wrapper
When `exec-wrapper' is set, the specified wrapper is used to launch programs for debugging. GDB starts your program with a shell command of the form exec wrapper program. The wrapper runs until it executes your program, and then GDB takes control.
You can use any program that eventually calls execve with
its arguments as a wrapper. Several standard Unix utilities do
this, e.g. env and nohup. Any Unix shell script ending
with exec "$@" will also work.
For example, you can use env to pass an environment variable to
the debugged program, without setting the variable in your shell's
environment:
(gdb) set exec-wrapper env 'LD_PRELOAD=libtest.so'
(gdb) run
This command is available when debugging locally on most targets, excluding DJGPP, Cygwin, MS Windows, and QNX Neutrino.
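A minimal wrapper script can be sketched as follows (the variable name is made up for illustration): any shell script that ends in `exec "$@"' qualifies, because GDB only requires that the wrapper eventually exec its arguments.

```shell
# Sketch of an exec-wrapper candidate.
cat > wrapper.sh <<'EOF'
#!/bin/sh
GREETING=hello       # environment the wrapped program will see
export GREETING
exec "$@"
EOF
chmod +x wrapper.sh
# Outside GDB the wrapper behaves the same way: the command it execs
# inherits the environment set above.
./wrapper.sh sh -c 'printf "%s\n" "$GREETING"'
```

Inside GDB you would then use `set exec-wrapper ./wrapper.sh' before `run'.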
set disable-randomization
set disable-randomization on
This option (enabled by default in GDB) will turn off the native randomization of the virtual address space of the started program.
This feature is implemented only on certain targets, including GNU/Linux. On GNU/Linux you can get the same behavior using
(gdb) set exec-wrapper setarch `uname -m` -R
set disable-randomization off
Leave the behavior of the started executable unchanged.
On targets where it is available, virtual address space randomization protects programs against certain kinds of security attacks. In these cases the attacker needs to know the exact location of specific executable code; randomizing its location makes it impossible to inject jumps that misuse code at its expected addresses.
Prelinking shared libraries provides a startup performance advantage, but it makes addresses in these libraries predictable for privileged processes even with only unprivileged access to the target system: reading the shared library binary gives enough information for assembling malicious code that misuses it. Still, even a prelinked shared library can get loaded at a new random address, requiring only the regular relocation process during startup. Shared libraries not already prelinked are always loaded at a randomly chosen address.
Position independent executables (PIE) contain position independent code similar to the shared libraries and therefore such executables get loaded at a randomly chosen address upon startup. PIE executables always load even already prelinked shared libraries at a random address. You can build such an executable using gcc -fPIE -pie.
Heap (malloc storage), stack and custom mmap areas are always placed randomly (as long as the randomization is enabled).
show disable-randomization
Show the current setting of virtual address space randomization disabling.
4.3 Your Program's Arguments
The arguments to your program can be specified by the arguments of the
run command.
They are passed to a shell, which expands wildcard characters and
performs redirection of I/O, and thence to your program. Your
SHELL environment variable (if it exists) specifies what shell
GDB uses. If you do not define SHELL, GDB uses
the default shell (`/bin/sh' on Unix).
On non-Unix systems, the program is usually invoked directly by GDB, which emulates I/O redirection via the appropriate system calls, and the wildcard characters are expanded by the startup code of the program, not by the shell.
run with no arguments uses the same arguments used by the previous
run, or those set by the set args command.
set args
Specify the arguments to be used the next time your program is run. If set args has no arguments, run executes your program with no arguments. Once you have run your program with arguments, using set args before the next run is the only way to run it again without arguments.
show args
Show the arguments to give your program when it is started.
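Putting these together, a session setting and inspecting arguments might look like this (argument values are hypothetical):

```
(gdb) set args --verbose input.txt
(gdb) show args
Argument list to give program being debugged when it is started is "--verbose input.txt".
(gdb) run
```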
4.4 Your Program's Environment
The environment consists of a set of environment variables and their values. Environment variables conventionally record such things as your user name, your home directory, your terminal type, and your search path for programs to run. Usually you set up environment variables with the shell and they are inherited by all the other programs you run. When debugging, it can be useful to try running your program with a modified environment without having to start over again.
path directory
Add directory to the front of the PATH environment variable (the search path for executables) that will be passed to your program. The value of PATH used by GDB does not change.
You may specify several directory names, separated by whitespace or by a
system-dependent separator character (`:' on Unix, `;' on
MS-DOS and MS-Windows). If directory is already in the path, it
is moved to the front, so it is searched sooner.
You can use the string `$cwd' to refer to whatever is the current
working directory at the time GDB searches the path. If you
use `.' instead, it refers to the directory where you executed the
path command. GDB replaces `.' in the
directory argument (with the current path) before adding
directory to the search path.
show paths
Display the list of search paths for executables (the PATH environment variable).
show environment [varname]
Print the value of environment variable varname to be given to your program when it starts. If you do not supply varname, print the names and values of all environment variables to be given to your program. You can abbreviate environment as env.
set environment varname [=value]
Set environment variable varname to value. The value changes for your program only, not for GDB itself. value may be any string; the values of environment variables are just strings, and any interpretation is supplied by your program itself.
For example, this command:
set env USER = foo
tells the debugged program, when subsequently run, that its user is named `foo'. (The spaces around `=' are used for clarity here; they are not actually required.)
unset environment varname
Remove variable varname from the environment to be passed to your program. This is different from `set env varname ='; unset environment removes the variable from the environment, rather than assigning it an empty value.
Warning: On Unix systems, GDB runs your program using
the shell indicated
by your SHELL environment variable if it exists (or
/bin/sh if not). If your SHELL variable names a shell
that runs an initialization file--such as `.cshrc' for C-shell, or
`.bashrc' for BASH--any variables you set in that file affect
your program. You may wish to move setting of environment variables to
files that are only run when you sign on, such as `.login' or
`.profile'.
4.5 Your Program's Working Directory
Each time you start your program with run, it inherits its working directory from the current working directory of GDB. The GDB working directory is initially whatever it inherited from its parent process (typically the shell), but you can specify a new working directory in GDB with the cd command.
The GDB working directory also serves as a default for the commands that specify files for GDB to operate on. See section Commands to Specify Files.
cd [directory]
Set the GDB working directory to directory. If not given, directory uses `~'.
pwd
Print the GDB working directory.
It is generally impossible to find the current working directory of
the process being debugged (since a program can change its directory
during its run). If you work on a system where GDB is
configured with the `/proc' support, you can use the info
proc command (see section 21.1.3 SVR4 Process Information) to find out the
current working directory of the debuggee.
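For example (the directories shown are hypothetical):

```
(gdb) pwd
Working directory /home/user.
(gdb) cd /tmp/build
Working directory /tmp/build.
```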
4.6 Your Program's Input and Output
By default, the program you run under GDB does input and output to the same terminal that GDB uses. GDB switches the terminal to its own terminal modes to interact with you, but it records the terminal modes your program was using and switches back to them when you continue running your program.
info terminal
Displays information recorded by GDB about the terminal modes your program is using.
You can redirect your program's input and/or output using shell
redirection with the run command. For example,
run > outfile
starts your program, diverting its output to the file `outfile'.
Another way to specify where your program should do input and output is
with the tty command. This command accepts a file name as
argument, and causes this file to be the default for future run
commands. It also resets the controlling terminal for the child
process, for future run commands. For example,
tty /dev/ttyb
directs that processes started with subsequent run commands
default to do input and output on the terminal `/dev/ttyb' and have
that as their controlling terminal.
An explicit redirection in run overrides the tty command's
effect on the input/output device, but not its effect on the controlling
terminal.
When you use the tty command or redirect input in the run
command, only the input for your program is affected. The input
for GDB still comes from your terminal. tty is an alias
for set inferior-tty.
You can use the show inferior-tty command to tell GDB to
display the name of the terminal that will be used for future runs of your
program.
set inferior-tty /dev/ttyb
show inferior-tty
4.7 Debugging an Already-running Process
attach process-id
This command attaches to a running process--one that was started outside GDB. (info files shows your active targets.) The command takes as argument a process ID. The usual way to find out the process-id of a Unix process is with the ps utility, or with the `jobs -l' shell command.
attach does not repeat if you press RET a second time after
executing the command.
To use attach, your program must be running in an environment
which supports processes; for example, attach does not work for
programs on bare-board targets that lack an operating system. You must
also have permission to send the process a signal.
When you use attach, the debugger finds the program running in
the process first by looking in the current working directory, then (if
the program is not found) by using the source file search path
(see section Specifying Source Directories). You can also use
the file command to load the program. See section Commands to Specify Files.
The first thing GDB does after arranging to debug the specified
process is to stop it. You can examine and modify an attached process
with all the GDB commands that are ordinarily available when
you start processes with run. You can insert breakpoints; you
can step and continue; you can modify storage. If you would rather the
process continue running, you may use the continue command after
attaching to the process.
detach
When you have finished debugging the attached process, you can use the detach command to release it from GDB control. Detaching the process continues its execution. After the detach command, that process and GDB become completely independent once more, and you are ready to attach another process or start one with run.
detach does not repeat if you press RET again after
executing the command.
If you exit GDB while you have an attached process, you detach
that process. If you use the run command, you kill that process.
By default, GDB asks for confirmation if you try to do either of these
things; you can control whether or not you need to confirm by using the
set confirm command (see section Optional Warnings and Messages).
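A complete attach-and-detach session might look like this (the process id and program name are made up):

```
$ gdb myprog
(gdb) attach 2307
Attaching to program: /home/user/myprog, process 2307
...
(gdb) continue
Continuing.
(gdb) detach
Detaching from program: /home/user/myprog, process 2307
```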
4.8 Killing the Child Process
kill
Kill the child process in which your program is running under GDB.
This command is useful if you wish to debug a core dump instead of a running process. GDB ignores any core dump file while your program is running.
On some operating systems, a program cannot be executed outside GDB
while you have breakpoints set on it inside GDB. You can use the
kill command in this situation to permit running your program
outside the debugger.
The kill command is also useful if you wish to recompile and
relink your program, since on many systems it is impossible to modify an
executable file while it is running in a process. In this case, when you
next type run, GDB notices that the file has changed, and
reads the symbol table again (while trying to preserve your current
breakpoint settings).
4.9 Debugging Multiple Inferiors and Programs
GDB lets you run and debug multiple programs in a single session. In addition, on some systems GDB may let you run several programs simultaneously (otherwise you have to exit from one before starting another). In the most general case, you can have multiple threads of execution in each of multiple processes, launched from multiple executables.
GDB represents the state of each program execution with an object called an inferior. An inferior typically corresponds to a process, but is more general and applies also to targets that do not have processes. Inferiors may be created before a process runs, and may be retained after a process exits. Inferiors have unique identifiers that are different from process ids. Usually each inferior will also have its own distinct address space, although some embedded targets may have several inferiors running in different parts of a single address space. Each inferior may in turn have multiple threads running in it.
To find out what inferiors exist at any moment, use info
inferiors:
info inferiors
Print a list of all inferiors currently being managed by GDB. GDB displays for each inferior (in this order):
1. the inferior number assigned by GDB
2. the target system's inferior identifier
3. the name of the executable the inferior is running
An asterisk `*' preceding the inferior number indicates the current inferior.
For example,
(gdb) info inferiors
  Num  Description       Executable
  2    process 2307      hello
* 1    process 3401      goodbye
To switch focus between inferiors, use the inferior command:
inferior infno
Make inferior number infno the current inferior. The argument infno is the inferior number assigned by GDB, as shown in the first field of the `info inferiors' display.
You can get multiple executables into a debugging session via the
add-inferior and clone-inferior commands. On some
systems GDB can add inferiors to the debug session
automatically by following calls to fork and exec. To
remove inferiors from the debugging session use the
remove-inferiors command.
add-inferior [ -copies n ] [ -exec executable ]
Adds n inferiors to be run using executable as the executable. n defaults to 1. If no executable is specified, the inferiors begin empty, with no program. You can still assign or change the program assigned to the inferior at any time by using the file command with the executable name as its argument.
clone-inferior [ -copies n ] [ infno ]
Adds n inferiors ready to execute the same program as inferior infno. n defaults to 1. infno defaults to the number of the current inferior. This is a convenient command when you want to run another instance of the inferior you are debugging.
(gdb) info inferiors
  Num  Description       Executable
* 1    process 29964     helloworld
(gdb) clone-inferior
Added inferior 2.
1 inferiors added.
(gdb) info inferiors
  Num  Description       Executable
  2    <null>            helloworld
* 1    process 29964     helloworld
You can now simply switch focus to inferior 2 and run it.
remove-inferiors infno...
Removes the inferior or inferiors infno.... It is not possible to remove an inferior that is running with this command. For those, use the kill or detach command first.
To quit debugging one of the running inferiors that is not the current
inferior, you can either detach from it by using the detach
inferior command (allowing it to run independently), or kill it
using the kill inferiors command:
detach inferior infno...
Detach from the inferior or inferiors identified by GDB inferior number(s) infno.... Note that the inferior's entry still stays on the list of inferiors shown by info inferiors, but its Description will show `<null>'.
kill inferiors infno...
Kill the inferior or inferiors identified by GDB inferior number(s) infno.... Note that the inferior's entry still stays on the list of inferiors shown by info inferiors, but its Description will show `<null>'.
After the successful completion of a command such as detach,
detach inferiors, kill or kill inferiors, or after
a normal process exit, the inferior is still valid and listed with
info inferiors, ready to be restarted.
To be notified when inferiors are started or exit under GDB's
control use set print inferior-events:
set print inferior-events
set print inferior-events on
set print inferior-events off
The set print inferior-events command allows you to enable or
disable printing of messages when GDB notices that new
inferiors have started or that inferiors have exited or have been
detached. By default, these messages will not be printed.
show print inferior-events
Many commands will work the same with multiple programs as with a
single program: e.g., print myglobal will simply display the
value of myglobal in the current inferior.
Occasionally, when debugging GDB itself, it may be useful to
get more info about the relationship of inferiors, programs, and address
spaces in a debug session. You can do that with the maint
info program-spaces command.
maint info program-spaces
Print a list of all program spaces currently being managed by GDB. For each program space, GDB displays (in this order): the program space number assigned by GDB, and the name of the executable loaded into the program space, set with e.g., the
file command.
An asterisk `*' preceding the program space number indicates the current program space.
In addition, below each program space line, GDB prints extra information that isn't suitable to display in tabular form. For example, the list of inferiors bound to the program space.
(gdb) maint info program-spaces
Id Executable
2 goodbye
Bound inferiors: ID 1 (process 21561)
* 1 hello
Here we can see that no inferior is running the program hello,
while process 21561 is running the program goodbye. On
some targets, it is possible that multiple inferiors are bound to the
same program space. The most common example is that of debugging both
the parent and child processes of a vfork call. For example,
(gdb) maint info program-spaces
Id Executable
* 1 vfork-test
Bound inferiors: ID 2 (process 18050), ID 1 (process 18045)
Here, both inferior 2 and inferior 1 are running in the same program
space as a result of inferior 1 having executed a vfork call.
In some operating systems, such as HP-UX and Solaris, a single program may have more than one thread of execution. The precise semantics of threads differ from one operating system to another, but in general the threads of a single program are akin to multiple processes--except that they share one address space (that is, they can all examine and modify the same variables). On the other hand, each thread has its own registers and execution stack, and perhaps private memory.
GDB provides these facilities for debugging multi-thread programs:
automatic notification of new threads
`thread threadno', a command to switch among threads
`info threads', a command to inquire about existing threads
`thread apply [threadno] [all] args', a command to apply a command to a list of threads
thread-specific breakpoints
`set print thread-events', which controls printing of messages on thread start and exit
`set libthread-db-search-path path', which lets the user specify
which libthread_db to use if the default choice
isn't compatible with the program.
Warning: These facilities are not yet available on every configuration where the operating system supports threads. If your GDB does not support threads, these commands have no effect. For example, a system without thread support shows no output from `info threads', and always rejects the thread command, like this:
(gdb) info threads
(gdb) thread 1
Thread ID 1 not known.  Use the "info threads" command to
see the IDs of currently known threads.
The thread debugging facility allows you to observe all threads while your program runs--but whenever GDB takes control, one thread in particular is always the focus of debugging. This thread is called the current thread. Debugging commands show program information from the perspective of the current thread.
Whenever GDB detects a new thread in your program, it displays the target system's identification for the thread with a message in the form `[New systag]'. systag is a thread identifier whose form varies depending on the particular system. For example, on GNU/Linux, you might see
[New Thread 0x41e02940 (LWP 25582)]
when GDB notices a new thread. In contrast, on an SGI system, the systag is simply something like `process 368', with no further qualifier.
For debugging purposes, GDB associates its own thread number--always a single integer--with each thread in your program.
info threads [id...]
Display a summary of all threads currently in your program, or of the threads with the given ids. The display includes the thread's name, if one is known; a thread can either be named by the user (see
thread name, below), or, in some cases, by the
program itself.
An asterisk `*' to the left of the thread number indicates the current thread.
For example,
(gdb) info threads
Id Target Id Frame
3 process 35 thread 27 0x34e5 in sigpause ()
2 process 35 thread 23 0x34e5 in sigpause ()
* 1 process 35 thread 13 main (argc=1, argv=0x7ffffff8)
at threadtest.c:68
On Solaris, you can display more information about user threads with a Solaris-specific command:
maint info sol-threads
thread threadno
Make thread number threadno the current thread. GDB responds by displaying the system identifier of the thread you selected, and its current stack frame summary:
(gdb) thread 2
[Switching to thread 2 (Thread 0xb7fdab70 (LWP 12747))]
#0 some_function (ignore=0x0) at example.c:8
8 printf ("hello\n");
As with the `[New ...]' message, the form of the text after `Switching to' depends on your system's conventions for identifying threads.
The debugger convenience variable `$_thread' contains the number of the current thread. You may find this useful in writing breakpoint conditional expressions, command scripts, and so forth. See section Convenience Variables, for general information on convenience variables.
thread apply [threadno | all] command
The thread apply command allows you to apply the named
command to one or more threads. Specify the numbers of the
threads that you want affected with the command argument
threadno. It can be a single thread number, one of the numbers
shown in the first field of the `info threads' display; or it
could be a range of thread numbers, as in 2-4. To apply a
command to all threads, type thread apply all command.
thread name [name]
On some systems, such as GNU/Linux, GDB is able to determine the name of the thread as given by the OS. On these systems, a name specified with `thread name' will override the system-given name, and removing the user-specified name will cause GDB to once again display the system-given name.
thread find [regexp]
As well as being the complement to the `thread name' command, this command also allows you to identify a thread by its target systag. For instance, on GNU/Linux, the target systag is the LWP id.
(gdb) thread find 26688
Thread 4 has target id 'Thread 0x41e02940 (LWP 26688)'
(gdb) info thread 4
  Id   Target Id                      Frame
  4    Thread 0x41e02940 (LWP 26688)  0x00000031ca6cd372 in select ()
set print thread-events
set print thread-events on
set print thread-events off
The set print thread-events command allows you to enable or
disable printing of messages when GDB notices that new threads have
started or that threads have exited. By default, these messages will
be printed if detection of these events is supported by the target.
Note that these messages cannot be disabled on all targets.
show print thread-events
See section Stopping and Starting Multi-thread Programs, for more information about how GDB behaves when you stop and start programs with multiple threads.
See section Setting Watchpoints, for information about watchpoints in programs with multiple threads.
set libthread-db-search-path [path]
If this variable is set, path is a colon-separated list of directories GDB will use to search for
libthread_db.
If you omit path, `libthread-db-search-path' will be reset to
its default value ($sdir:$pdir on GNU/Linux and Solaris systems).
Internally, the default value comes from the LIBTHREAD_DB_SEARCH_PATH
macro.
On GNU/Linux and Solaris systems, GDB uses a "helper"
libthread_db library to obtain information about threads in the
inferior process. GDB will use `libthread-db-search-path'
to find libthread_db. GDB also consults first if inferior
specific thread debugging library loading is enabled
by `set auto-load libthread-db' (see section 22.7.2 Automatically loading thread debugging library).
A special entry `$sdir' for `libthread-db-search-path' refers to the default system directories that are normally searched for loading shared libraries. The `$sdir' entry is the only kind not needing to be enabled by `set auto-load libthread-db' (see section 22.7.2 Automatically loading thread debugging library).
A special entry `$pdir' for `libthread-db-search-path'
refers to the directory from which libpthread
was loaded in the inferior process.
For any libthread_db library GDB finds in the above directories,
GDB attempts to initialize it with the current inferior process.
If this initialization fails (which could happen because of a version
mismatch between libthread_db and libpthread), GDB
will unload libthread_db, and continue with the next directory.
If none of the libthread_db libraries initialize successfully,
GDB will issue a warning and thread debugging will be disabled.
Setting libthread-db-search-path is currently implemented
only on some platforms.
show libthread-db-search-path
set debug libthread-db
show debug libthread-db
Turns on or off display of libthread_db-related events.
Use 1 to enable, 0 to disable.
On most systems, GDB has no special support for debugging
programs which create additional processes using the fork
function. When a program forks, GDB will continue to debug the
parent process and the child process will run unimpeded. If you have
set a breakpoint in any code which the child then executes, the child
will get a SIGTRAP signal which (unless it catches the signal)
will cause it to terminate.
However, if you want to debug the child process there is a workaround
which isn't too painful. Put a call to sleep in the code which
the child process executes after the fork. It may be useful to sleep
only if a certain environment variable is set, or a certain file exists,
so that the delay need not occur when you don't want to run GDB
on the child. While the child is sleeping, use the ps program to
get its process ID. Then tell GDB (a new invocation of GDB
if you are also debugging the parent process) to attach to
the child process (see section 4.7 Debugging an Already-running Process). From that point on you can debug
the child process just like any other process which you attached to.
On some systems, GDB provides support for debugging programs that
create additional processes using the fork or vfork functions.
Currently, the only platforms with this feature are HP-UX (11.x and later
only?) and GNU/Linux (kernel version 2.5.60 and later).
By default, when a program forks, GDB will continue to debug the parent process and the child process will run unimpeded.
If you want to follow the child process instead of the parent process,
use the command set follow-fork-mode.
set follow-fork-mode mode
Set the debugger response to a program call of fork or
vfork. A call to fork or vfork creates a new
process. The mode argument can be:
parent
The original process is debugged after a fork. The child process runs unimpeded. This is the default.
child
The new process is debugged after a fork. The parent process runs unimpeded.
show follow-fork-mode
Display the current debugger response to a
fork or vfork call.
On Linux, if you want to debug both the parent and child processes, use the
command set detach-on-fork.
set detach-on-fork mode
Tells GDB whether to detach one of the processes after a fork, or retain debugger control over them both. The mode argument can be:
on
The child process (or parent process, depending on the value of
follow-fork-mode) will be detached and allowed to run
independently. This is the default.
off
Both processes will be held under the control of GDB. One process (child or parent, depending on the value of
follow-fork-mode) is debugged as usual, while the other
is held suspended.
show detach-on-fork
Show whether detach-on-fork mode is on or off.
If you choose to set `detach-on-fork' mode off, then GDB
will retain control of all forked processes (including nested forks).
You can list the forked processes under the control of GDB by
using the info inferiors command, and switch from one fork
to another by using the inferior command (see section Debugging Multiple Inferiors and Programs).
To quit debugging one of the forked processes, you can either detach
from it by using the detach inferiors command (allowing it
to run independently), or kill it using the kill inferiors
command. See section Debugging Multiple Inferiors and Programs.
If you ask GDB to debug a child process and a vfork is followed by an
exec, GDB executes the new target up to the first
breakpoint in the new target. If you have a breakpoint set on
main in your original program, the breakpoint will also be set on
the child process's main.
On some systems, when a child process is spawned by vfork, you
cannot debug the child or parent until an exec call completes.
If you issue a run command to GDB after an exec
call executes, the new target restarts. To restart the parent
process, use the file command with the parent executable name
as its argument. By default, after an exec call executes,
GDB discards the symbols of the previous executable image.
You can change this behaviour with the set follow-exec-mode
command.
set follow-exec-mode mode
Set debugger response to a program call of exec. An
exec call replaces the program image of a process.
follow-exec-mode can be:
new
GDB creates a new inferior and rebinds the process to this new inferior. The program the process was running before the
exec call can be restarted afterwards by restarting the
original inferior.
For example:
(gdb) info inferiors
  Id   Description   Executable
* 1    <null>        prog1
(gdb) run
process 12020 is executing new program: prog2
Program exited normally.
(gdb) info inferiors
  Id   Description   Executable
* 2    <null>        prog2
  1    <null>        prog1
same
GDB keeps the process bound to the same inferior. The new executable image replaces the previous executable loaded in the inferior. Restarting the inferior after the
exec call, with
e.g., the run command, restarts the executable the process was
running after the exec call. This is the default mode.
For example:
(gdb) info inferiors
  Id   Description   Executable
* 1    <null>        prog1
(gdb) run
process 12020 is executing new program: prog2
Program exited normally.
(gdb) info inferiors
  Id   Description   Executable
* 1    <null>        prog2
You can use the catch command to make GDB stop whenever
a fork, vfork, or exec call is made. See section Setting Catchpoints.
On certain operating systems(3), GDB is able to save a snapshot of a program's state, called a checkpoint, and come back to it later.
Returning to a checkpoint effectively undoes everything that has
happened in the program since the checkpoint was saved. This
includes changes in memory, registers, and even (within some limits)
system state. Effectively, it is like going back in time to the
moment when the checkpoint was saved.
Thus, if you're stepping through a program and you think you're getting close to the point where things go wrong, you can save a checkpoint. Then, if you accidentally go too far and miss the critical statement, instead of having to restart your program from the beginning, you can just go back to the checkpoint and start again from there.
This can be especially useful if it takes a lot of time or steps to reach the point where you think the bug occurs.
To use the checkpoint/restart method of debugging:
checkpoint
Save a snapshot of the debugged program's current execution state. The checkpoint command takes no arguments, but each checkpoint
is assigned a small integer id, similar to a breakpoint id.
info checkpoints
List the checkpoints that have been saved in the current debugging session. For each checkpoint, the following information will be listed:
Checkpoint ID
Process ID
Code Address
Source line, or label
restart checkpoint-id
Restore the program state that was saved as checkpoint number checkpoint-id. All program variables, registers, stack frames, etc. will be returned to the values that they had when the checkpoint was saved.
Note that breakpoints, variables, command history etc. are not affected by restoring a checkpoint. In general, a checkpoint only restores things that reside in the program being debugged, not in the debugger.
delete checkpoint checkpoint-id
Delete the previously-saved checkpoint identified by checkpoint-id.
Returning to a previously saved checkpoint will restore the user state of the program being debugged, plus a significant subset of the system (OS) state, including file pointers. It won't "un-write" data from a file, but it will rewind the file pointer to the previous location, so that the previously written data can be overwritten. For files opened in read mode, the pointer will also be restored so that the previously read data can be read again.
Of course, characters that have been sent to a printer (or other external device) cannot be "snatched back", and characters received from, e.g., a serial device can be removed from internal program buffers, but they cannot be "pushed back" into the serial pipeline, ready to be received again. Similarly, the actual contents of files that have been changed cannot be restored (at this time).
However, within those constraints, you actually can "rewind" your program to a previously saved point in time, and begin debugging it again -- and you can change the course of events so as to debug a different execution path this time.
Finally, there is one bit of internal program state that will be different when you return to a checkpoint -- the program's process id. Each checkpoint will have a unique process id (or pid), and each will be different from the program's original pid. If your program has saved a local copy of its process id, this could potentially pose a problem.
On some systems such as GNU/Linux, address space randomization is performed on new processes for security reasons. This makes it difficult or impossible to set a breakpoint, or watchpoint, on an absolute address if you have to restart the program, since the absolute location of a symbol will change from one execution to the next.
A checkpoint, however, is an identical copy of a process. Therefore if you create a checkpoint at (e.g.) the start of main, and simply return to that checkpoint instead of restarting the process, you can avoid the effects of address randomization and your symbols will all stay in the same place.
The principal purposes of using a debugger are so that you can stop your program before it terminates; or so that, if your program runs into trouble, you can investigate and find out why.
Inside GDB, your program may stop for any of several reasons,
such as a signal, a breakpoint, or reaching a new line after a
command such as step. You may then examine and
change variables, set new breakpoints or remove old ones, and then
continue execution. Usually, the messages shown by GDB provide
ample explanation of the status of your program--but you can also
explicitly request this information at any time.
info program
Display information about the status of your program: whether it is running or not, what process it is, and why it stopped.
5.1 Breakpoints, Watchpoints, and Catchpoints
5.2 Continuing and Stepping
5.3 Skipping Over Functions and Files
5.4 Signals
5.5 Stopping and Starting Multi-thread Programs
A breakpoint makes your program stop whenever a certain point in
the program is reached. For each breakpoint, you can add conditions to
control in finer detail whether your program stops. You can set
breakpoints with the break command and its variants (see section Setting Breakpoints), to specify the place where your program
should stop by line number, function name or exact address in the
program.
On some systems, you can set breakpoints in shared libraries before
the executable is run. There is a minor limitation on HP-UX systems:
you must wait until the executable is run in order to set breakpoints
in shared library routines that are not called directly by the program
(for example, routines that are arguments in a pthread_create
call).
A watchpoint is a special breakpoint that stops your program when the value of an expression changes. The expression may be a value of a variable, or it could involve values of one or more variables combined by operators, such as `a + b'. These are sometimes called data breakpoints. You must use a different command to set watchpoints (see section Setting Watchpoints), but aside from that, you can manage a watchpoint like any other breakpoint: you enable, disable, and delete both breakpoints and watchpoints using the same commands.
You can arrange to have values from your program displayed automatically whenever GDB stops at a breakpoint. See section Automatic Display.
A catchpoint is another special breakpoint that stops your program
when a certain kind of event occurs, such as the throwing of a C++
exception or the loading of a library. As with watchpoints, you use a
different command to set a catchpoint (see section Setting Catchpoints), but aside from that, you can manage a catchpoint like any
other breakpoint. (To stop when your program receives a signal, use the
handle command; see Signals.)
GDB assigns a number to each breakpoint, watchpoint, or catchpoint when you create it; these numbers are successive integers starting with one. In many of the commands for controlling various features of breakpoints you use the breakpoint number to say which breakpoint you want to change. Each breakpoint may be enabled or disabled; if disabled, it has no effect on your program until you enable it again.
Some commands accept a range of breakpoints on which to operate. A breakpoint range is either a single breakpoint number, like `5', or two such numbers, in increasing order, separated by a hyphen, like `5-7'. When a breakpoint range is given to a command, all breakpoints in that range are operated on.
5.1.1 Setting Breakpoints
5.1.2 Setting Watchpoints
5.1.3 Setting Catchpoints
5.1.4 Deleting Breakpoints
5.1.5 Disabling Breakpoints
5.1.6 Break Conditions
5.1.7 Breakpoint Command Lists
5.1.8 Dynamic Printf
5.1.9 How to save breakpoints to a file
5.1.10 Static Probe Points
5.1.11 "Cannot insert breakpoints"
5.1.12 "Breakpoint address adjusted..."
Breakpoints are set with the break command (abbreviated
b). The debugger convenience variable `$bpnum' records the
number of the breakpoint you've set most recently; see Convenience Variables, for a discussion of what you can do with
convenience variables.
break location
When using source languages that permit overloading of symbols, such as C++, a function name may refer to more than one possible place to break. See section Ambiguous Expressions, for a discussion of that situation.
It is also possible to insert a breakpoint that will stop the program only if a specific thread (see section 5.5.4 Thread-Specific Breakpoints) or a specific task (see section 15.4.9.5 Extensions for Ada Tasks) hits that breakpoint.
break
When called without any arguments, break sets a breakpoint at
the next instruction to be executed in the selected stack frame
(see section Examining the Stack). In any selected frame but the
innermost, this makes your program stop as soon as control
returns to that frame. This is similar to the effect of a
finish command in the frame inside the selected frame--except
that finish does not leave an active breakpoint. If you use
break without an argument in the innermost frame, GDB stops
the next time it reaches the current location; this may be useful
inside loops.
GDB normally ignores breakpoints when it resumes execution, until at least one instruction has been executed. If it did not do this, you would be unable to proceed past a breakpoint without first disabling the breakpoint. This rule applies whether or not the breakpoint already existed when your program stopped.
break ... if cond
tbreak args
Set a breakpoint enabled only for one stop. The args are the same as for the
break command, and the breakpoint is set in the same
way, but the breakpoint is automatically deleted after the first time your
program stops there. See section Disabling Breakpoints.
hbreak args
Set a hardware-assisted breakpoint. The args are the same as for the
break command and the breakpoint is set in the same way, but the
breakpoint requires hardware support and some target hardware may not
have this support. The main purpose of this is EPROM/ROM code
debugging, so you can set a breakpoint at an instruction without
changing the instruction. This can be used with the new trap-generation
provided by SPARClite DSU and most x86-based targets. These targets
will generate traps when a program accesses some data or instruction
address that is assigned to the debug registers. However the hardware
breakpoint registers can take a limited number of breakpoints. For
example, on the DSU, only two data breakpoints can be set at a time, and
GDB will reject this command if more than two are used. Delete
or disable unused hardware breakpoints before setting new ones
(see section Disabling Breakpoints).
See section Break Conditions.
For remote targets, you can restrict the number of hardware
breakpoints GDB will use, see set remote hardware-breakpoint-limit.
thbreak args
Set a hardware-assisted breakpoint enabled only for one stop. The args are the same as for the
hbreak command and the breakpoint is set in
the same way. However, like the tbreak command,
the breakpoint is automatically deleted after the
first time your program stops there. Also, like the hbreak
command, the breakpoint requires hardware support and some target hardware
may not have this support. See section Disabling Breakpoints.
See also Break Conditions.
rbreak regex
Set breakpoints on all functions matching the regular expression regex. The breakpoints are treated just like the breakpoints set with the
break command. You can delete them, disable them, or make
them conditional the same way as any other breakpoint.
The syntax of the regular expression is the standard one used with tools
like `grep'. Note that this is different from the syntax used by
shells, so for instance foo* matches all functions that include
an fo followed by zero or more os. There is an implicit
.* leading and trailing the regular expression you supply, so to
match only functions that begin with foo, use ^foo.
When debugging C++ programs, rbreak is useful for setting
breakpoints on overloaded functions that are not members of any special
classes.
The rbreak command can be used to set breakpoints in
all the functions in a program, like this:
(gdb) rbreak .
rbreak file:regex
rbreak is called with a filename qualification, it limits
the search for functions matching the given regular expression to the
specified file. This can be used, for example, to set breakpoints on
every function in a given file:
(gdb) rbreak file.c:.
The colon separating the filename qualifier from the regex may optionally be surrounded by spaces.
info breakpoints [n...]
info break [n...]
If a breakpoint is conditional, there are two evaluation modes: "host" and
"target". If mode is "host", breakpoint condition evaluation is done by
GDB on the host's side. If it is "target", then the condition
is evaluated by the target. The info break command shows
the condition on the line following the affected breakpoint, together with
its condition evaluation mode in between parentheses.
Breakpoint commands, if any, are listed after that. A pending breakpoint is allowed to have a condition specified for it. The condition is not parsed for validity until a shared library is loaded that allows the pending breakpoint to resolve to a valid location.
info break with a breakpoint
number n as argument lists only that breakpoint. The
convenience variable $_ and the default examining-address for
the x command are set to the address of the last breakpoint
listed (see section Examining Memory).
info break displays a count of the number of times the breakpoint
has been hit. This is especially useful in conjunction with the
ignore command. You can ignore a large number of breakpoint
hits, look at the breakpoint info to see how many times the breakpoint
was hit, and then run again, ignoring one less than that number. This
will get you quickly to the last hit of that breakpoint.
For breakpoints with an enable count greater than 1,
info break also displays that count.
GDB allows you to set any number of breakpoints at the same place in your program. There is nothing silly or meaningless about this. When the breakpoints are conditional, this is even useful (see section Break Conditions).
It is possible that a breakpoint corresponds to several locations in your program. Examples of this situation are:
Multiple functions in the program may have the same name.
For a C++ constructor, the compiler may generate several instances of the function body, used in different cases.
For a C++ template function, a given line in the function can correspond to any number of instantiations.
For an inlined function, a given source line can correspond to several places where that function is inlined.
In all those cases, GDB will insert a breakpoint at all the relevant locations.
A breakpoint with multiple locations is displayed in the breakpoint table using several rows--one header row, followed by one row for each breakpoint location. The header row has `<MULTIPLE>' in the address column. The rows for individual locations contain the actual addresses for locations, and show the functions to which those locations belong. The number column for a location is of the form breakpoint-number.location-number.
For example:
Num Type Disp Enb Address What
1 breakpoint keep y <MULTIPLE>
stop only if i==1
breakpoint already hit 1 time
1.1 y 0x080486a2 in void foo<int>() at t.cc:8
1.2 y 0x080486ca in void foo<double>() at t.cc:8
Each location can be individually enabled or disabled by passing
breakpoint-number.location-number as argument to the
enable and disable commands. Note that you cannot
delete the individual locations from the list, you can only delete the
entire list of locations that belong to their parent breakpoint (with
the delete num command, where num is the number of
the parent breakpoint, 1 in the above example). Disabling or enabling
the parent breakpoint (see section 5.1.5 Disabling Breakpoints) affects all of the locations
that belong to that breakpoint.
It's quite common to have a breakpoint inside a shared library. Shared libraries can be loaded and unloaded explicitly, and possibly repeatedly, as the program is executed. To support this use case, GDB updates breakpoint locations whenever any shared library is loaded or unloaded. Typically, you would set a breakpoint in a shared library at the beginning of your debugging session, when the library is not loaded, and when the symbols from the library are not available. When you try to set a breakpoint, GDB will ask you if you want to set a so-called pending breakpoint, a breakpoint whose address is not yet resolved.
After the program is run, whenever a new shared library is loaded, GDB reevaluates all the breakpoints. When a newly loaded shared library contains the symbol or line referred to by some pending breakpoint, that breakpoint is resolved and becomes an ordinary breakpoint. When a library is unloaded, all breakpoints that refer to its symbols or source lines become pending again.
This logic works for breakpoints with multiple locations, too. For example, if you have a breakpoint in a C++ template function, and a newly loaded shared library has an instantiation of that template, a new location is added to the list of locations for the breakpoint.
Except for having an unresolved address, pending breakpoints do not differ from regular breakpoints. You can set conditions or commands, enable and disable them, and perform other breakpoint operations.
GDB provides some additional commands for controlling what happens when the `break' command cannot resolve breakpoint address specification to an address:
set breakpoint pending auto
This is the default behavior. When GDB cannot find the breakpoint location, it queries you whether a pending breakpoint should be created.
set breakpoint pending on
This indicates that an unrecognized breakpoint location should automatically result in a pending breakpoint being created.
set breakpoint pending off
This indicates that pending breakpoints are not to be created. Any unrecognized breakpoint location results in an error.
show breakpoint pending
The settings above only affect the break command and its
variants. Once a breakpoint is set, it will be automatically updated
as shared libraries are loaded and unloaded.
For some targets, GDB can automatically decide if hardware or
software breakpoints should be used, depending on whether the
breakpoint address is read-only or read-write. This applies to
breakpoints set with the break command as well as to internal
breakpoints set by commands like next and finish. For
breakpoints set with hbreak, GDB will always use hardware
breakpoints.
You can control this automatic behaviour with the following commands:
set breakpoint auto-hw on
This is the default behavior. When GDB sets a breakpoint, it will try to use the target memory map to decide if software or hardware breakpoint must be used.
set breakpoint auto-hw off
This indicates GDB should not automatically select breakpoint type. If the target provides a memory map, GDB will warn when trying to set software breakpoint at a read-only address.
GDB normally implements breakpoints by replacing the program code at the breakpoint address with a special instruction, which, when executed, gives control to the debugger. By default, the program code is so modified only when the program is resumed. As soon as the program stops, GDB restores the original instructions. This behaviour guards against leaving breakpoints inserted in the target should GDB abruptly disconnect. However, with slow remote targets, inserting and removing breakpoints can reduce the performance. This behavior can be controlled with the following commands:
set breakpoint always-inserted off
All breakpoints, including newly added by the user, are inserted in the target only when the target is resumed. All breakpoints are removed from the target when it stops. This is the default mode.
set breakpoint always-inserted on
Causes all breakpoints to be inserted in the target at all times. If the user adds a new breakpoint, or changes an existing breakpoint, the breakpoints in the target are updated immediately.
set breakpoint always-inserted auto
Have GDB decide the mode. If GDB is controlling the inferior in non-stop mode, GDB behaves as if
breakpoint always-inserted mode is on; if GDB is
controlling the inferior in all-stop mode, GDB behaves as if
breakpoint always-inserted mode is off.
GDB handles conditional breakpoints by evaluating these conditions when a breakpoint breaks. If the condition is true, then the process being debugged stops, otherwise the process is resumed.
If the target supports evaluating conditions on its end, GDB may download the breakpoint, together with its conditions, to it.
This feature can be controlled via the following commands:
set breakpoint condition-evaluation host
This option commands GDB to evaluate the breakpoint conditions on the host's side. Unconditional breakpoints are sent to the target, which reports the triggers back to GDB for condition evaluation. This is the standard evaluation mode.
set breakpoint condition-evaluation target
This option commands GDB to download breakpoint conditions to the target at the moment of their insertion. The target is responsible for evaluating the conditional expression and reporting breakpoint stop events back to GDB whenever the condition is true.
set breakpoint condition-evaluation auto
This is the default mode. If the target supports evaluating breakpoint conditions on its end, GDB will download breakpoint conditions to the target; otherwise, GDB will fall back to evaluating all these conditions on the host's side.
GDB itself sometimes sets breakpoints in your program for
special purposes, such as proper handling of longjmp (in C
programs). These internal breakpoints are assigned negative numbers,
starting with -1; `info breakpoints' does not display them.
You can see these breakpoints with the maintenance command
`maint info breakpoints' (see maint info breakpoints).
You can use a watchpoint to stop execution whenever the value of an expression changes, without having to predict a particular place where this may happen. (This is sometimes called a data breakpoint.) The expression may be as simple as the value of a single variable, or as complex as many variables combined by operators. Examples include:
An address cast to an appropriate data type; for example, `*(int *)0x12345678' watches a 4-byte region at the specified address (assuming an int occupies 4 bytes).
You can set a watchpoint on an expression even if the expression can
not be evaluated yet. For instance, you can set a watchpoint on
`*global_ptr' before `global_ptr' is initialized.
GDB will stop when your program sets `global_ptr' and
the expression produces a valid value. If the expression becomes
valid in some other way than changing a variable (e.g., if the memory
pointed to by `*global_ptr' becomes readable as the result of a
malloc call), GDB may not stop until the next time
the expression changes.
Depending on your system, watchpoints may be implemented in software or hardware. GDB does software watchpointing by single-stepping your program and testing the variable's value each time, which is hundreds of times slower than normal execution. (But this may still be worth it, to catch errors where you have no clue what part of your program is the culprit.)
On some systems, such as HP-UX, PowerPC, GNU/Linux and most other x86-based targets, GDB includes support for hardware watchpoints, which do not slow down the running of your program.
watch [-l|-location] expr [thread threadnum] [mask maskvalue]
(gdb) watch foo
If the command includes a [thread threadnum]
argument, GDB breaks only when the thread identified by
threadnum changes the value of expr. If any other threads
change the value of expr, GDB will not break. Note
that watchpoints restricted to a single thread in this way only work
with hardware watchpoints.
Ordinarily a watchpoint respects the scope of variables in expr
(see below). The -location argument tells GDB to
instead watch the memory referred to by expr. In this case,
GDB will evaluate expr, take the address of the result,
and watch the memory at that address. The type of the result is used
to determine the size of the watched memory. If the expression's
result does not have an address, then GDB will print an
error.
The [mask maskvalue] argument allows creation
of masked watchpoints, if the current architecture supports this
feature (e.g., PowerPC Embedded architecture, see 21.3.7 PowerPC Embedded.) A masked watchpoint specifies a mask in addition
to an address to watch. The mask specifies that some bits of an address
(the bits which are reset in the mask) should be ignored when matching
the address accessed by the inferior against the watchpoint address.
Thus, a masked watchpoint watches many addresses simultaneously--those
addresses whose unmasked bits are identical to the unmasked bits in the
watchpoint address. The mask argument implies -location.
Examples:
(gdb) watch foo mask 0xffff00ff
(gdb) watch *0xdeadbeef mask 0xffffff00
rwatch [-l|-location] expr [thread threadnum] [mask maskvalue]
awatch [-l|-location] expr [thread threadnum] [mask maskvalue]
info watchpoints [n...]
info break (see section 5.1.1 Setting Breakpoints).
If you watch for a change in a numerically entered address you need to dereference it, as the address itself is just a constant number which will never change. GDB refuses to create a watchpoint that watches a never-changing value:
(gdb) watch 0x600850
Cannot watch constant value 0x600850.
(gdb) watch *(int *) 0x600850
Watchpoint 1: *(int *) 6293584
GDB sets a hardware watchpoint if possible. Hardware watchpoints execute very quickly, and the debugger reports a change in value at the exact instruction where the change occurs. If GDB cannot set a hardware watchpoint, it sets a software watchpoint, which executes more slowly and reports the change in value at the next statement, not the instruction, after the change occurs.
You can force GDB to use only software watchpoints with the
set can-use-hw-watchpoints 0 command. With this variable set to
zero, GDB will never try to use hardware watchpoints, even if
the underlying system supports them. (Note that hardware-assisted
watchpoints that were set before setting
can-use-hw-watchpoints to zero will still use the hardware
mechanism of watching expression values.)
set can-use-hw-watchpoints
show can-use-hw-watchpoints
For remote targets, you can restrict the number of hardware watchpoints GDB will use; see set remote hardware-breakpoint-limit.
When you issue the watch command, GDB reports
Hardware watchpoint num: expr
if it was able to set a hardware watchpoint.
Currently, the awatch and rwatch commands can only set
hardware watchpoints, because accesses to data that don't change the
value of the watched expression cannot be detected without examining
every instruction as it is being executed, and GDB does not do
that currently. If GDB finds that it is unable to set a
hardware breakpoint with the awatch or rwatch command, it
will print a message like this:
Expression cannot be implemented with read/access watchpoint.
Sometimes, GDB cannot set a hardware watchpoint because the data type of the watched expression is wider than what a hardware watchpoint on the target machine can handle. For example, some systems can only watch regions that are up to 4 bytes wide; on such systems you cannot set hardware watchpoints for an expression that yields a double-precision floating-point number (which is typically 8 bytes wide). As a work-around, it might be possible to break the large region into a series of smaller ones and watch them with separate watchpoints.
If you set too many hardware watchpoints, GDB might be unable to insert all of them when you resume the execution of your program. Since the precise number of active watchpoints is unknown until such time as the program is about to be resumed, GDB might not be able to warn you about this when you set the watchpoints, and the warning will be printed only when the program is resumed:
Hardware watchpoint num: Could not insert watchpoint
If this happens, delete or disable some of the watchpoints.
Watching complex expressions that reference many variables can also exhaust the resources available for hardware-assisted watchpoints. That's because GDB needs to watch every variable in the expression with separately allocated resources.
If you call a function interactively using print or call,
any watchpoints you have set will be inactive until GDB reaches another
kind of breakpoint or the call completes.
GDB automatically deletes watchpoints that watch local
(automatic) variables, or expressions that involve such variables, when
they go out of scope, that is, when the execution leaves the block in
which these variables were defined. In particular, when the program
being debugged terminates, all local variables go out of scope,
and so only watchpoints that watch global variables remain set. If you
rerun the program, you will need to set all such watchpoints again. One
way of doing that would be to set a code breakpoint at the entry to the
main function and when it breaks, set all the watchpoints.
In multi-threaded programs, watchpoints will detect changes to the watched expression from every thread.
Warning: In multi-threaded programs, software watchpoints have only limited usefulness. If GDB creates a software watchpoint, it can only watch the value of an expression in a single thread. If you are confident that the expression can only change due to the current thread's activity (and if you are also confident that no other thread can become current), then you can use software watchpoints as usual. However, GDB may not notice when a non-current thread's activity changes the expression. (Hardware watchpoints, in contrast, watch an expression in all threads.)
See set remote hardware-watchpoint-limit.
You can use catchpoints to cause the debugger to stop for certain
kinds of program events, such as C++ exceptions or the loading of a
shared library. Use the catch command to set a catchpoint.
catch event
Stop when event occurs. The event can be any of the following:
throw
The throwing of a C++ exception.
catch
The catching of a C++ exception.
exception
An Ada exception being raised. If an exception name is specified at the end of the command (e.g., catch exception Program_Error),
the debugger will stop only when this specific exception is raised.
Otherwise, the debugger stops execution when any Ada exception is raised.
When inserting an exception catchpoint on a user-defined exception whose
name is identical to one of the exceptions defined by the language, the
fully qualified name must be used as the exception name. Otherwise,
GDB will assume that it should stop on the pre-defined exception
rather than the user-defined one. For instance, assuming an exception
called Constraint_Error is defined in package Pck, then
the command to use to catch such exceptions is catch exception
Pck.Constraint_Error.
exception unhandled
An exception that was raised but is not handled by the program.
assert
A failed Ada assertion.
exec
A call to exec. This is currently only available for HP-UX
and GNU/Linux.
syscall
syscall [name | number] ...
name can be any system call name that is valid for the underlying OS. Just which syscalls are valid depends on the OS. On GNU and Unix systems, you can find the full list of valid syscall names in `/usr/include/asm/unistd.h'.
Normally, GDB knows in advance which syscalls are valid for each OS, so you can use the command-line completion facilities (see section command completion) to list the available choices.
You may also specify the system call numerically. A syscall's number is the value passed to the OS's syscall dispatcher to identify the requested service. When you specify the syscall by its name, GDB uses its database of syscalls to convert the name into the corresponding numeric code, but using the number directly may be useful if GDB's database does not have the complete list of syscalls on your system (e.g., because GDB lags behind the OS upgrades).
The example below illustrates how this command works if you don't provide arguments to it:
(gdb) catch syscall
Catchpoint 1 (syscall)
(gdb) r
Starting program: /tmp/catch-syscall
Catchpoint 1 (call to syscall 'close'), \
   0xffffe424 in __kernel_vsyscall ()
(gdb) c
Continuing.
Catchpoint 1 (returned from syscall 'close'), \
   0xffffe424 in __kernel_vsyscall ()
(gdb)
Here is an example of catching a system call by name:
(gdb) catch syscall chroot
Catchpoint 1 (syscall 'chroot' [61])
(gdb) r
Starting program: /tmp/catch-syscall
Catchpoint 1 (call to syscall 'chroot'), \
   0xffffe424 in __kernel_vsyscall ()
(gdb) c
Continuing.
Catchpoint 1 (returned from syscall 'chroot'), \
   0xffffe424 in __kernel_vsyscall ()
(gdb)
An example of specifying a system call numerically. In the case below, the syscall number has a corresponding entry in the XML file, so GDB finds its name and prints it:
(gdb) catch syscall 252
Catchpoint 1 (syscall(s) 'exit_group')
(gdb) r
Starting program: /tmp/catch-syscall
Catchpoint 1 (call to syscall 'exit_group'), \
   0xffffe424 in __kernel_vsyscall ()
(gdb) c
Continuing.
Program exited normally.
(gdb)
However, there can be situations when there is no corresponding name in the XML file for that syscall number. In this case, GDB prints a warning message saying that it was not able to find the syscall name, but the catchpoint will be set anyway. See the example below:
(gdb) catch syscall 764
warning: The number '764' does not represent a known syscall.
Catchpoint 2 (syscall 764)
(gdb)
If you configure GDB using the `--without-expat' option, it will not be able to display syscall names. Also, if your architecture does not have an XML file describing its system calls, you will not be able to see the syscall names. It is important to notice that these two features are used for accessing the syscall name database. In either case, you will see a warning like this:
(gdb) catch syscall
warning: Could not open "syscalls/i386-linux.xml"
warning: Could not load the syscall XML file
'syscalls/i386-linux.xml'.
GDB will not be able to display syscall names.
Catchpoint 1 (syscall)
(gdb)
Of course, the file name will change depending on your architecture and system.
Still using the example above, you can also try to catch a syscall by its number. In this case, you would see something like:
(gdb) catch syscall 252
Catchpoint 1 (syscall(s) 252)
Again, in this case GDB would not be able to display the syscall's name.
fork
A call to fork. This is currently only available for HP-UX
and GNU/Linux.
vfork
A call to vfork. This is currently only available for HP-UX
and GNU/Linux.
load [regexp]
unload [regexp]
signal [signal... | `all']
With no arguments, this catchpoint will catch any signal that is not used internally by GDB; specifically, all signals except `SIGTRAP' and `SIGINT'.
With the argument `all', all signals, including those used by GDB, will be caught. This argument cannot be used with other signal names.
Otherwise, the arguments are a list of signal names as given to
handle (see section 5.4 Signals). Only signals specified in this list
will be caught.
One reason that catch signal can be more useful than
handle is that you can attach commands and conditions to the
catchpoint.
When a signal is caught by a catchpoint, the signal's stop and
print settings, as specified by handle, are ignored.
However, whether the signal is still delivered to the inferior depends
on the pass setting; this can be changed in the catchpoint's
commands.
tcatch event
Use the info break command to list the current catchpoints.
There are currently some limitations to C++ exception handling
(catch throw and catch catch) in GDB:
Sometimes catch is not the best way to debug exception handling:
if you need to know exactly where an exception is raised, it is better to
stop before the exception handler is called, since that way you
can see the stack before any unwinding takes place. If you set a
breakpoint in an exception handler instead, it may not be easy to find
out where the exception was raised.
To stop just before an exception handler is called, you need some
knowledge of the implementation. In the case of GNU C++, exceptions are
raised by calling a library function named __raise_exception
which has the following ANSI C interface:
/* addr is where the exception identifier is stored.
id is the exception identifier. */
void __raise_exception (void **addr, void *id);
To make the debugger catch all exceptions before any stack
unwinding takes place, set a breakpoint on __raise_exception
(see section Breakpoints; Watchpoints; and Exceptions).
With a conditional breakpoint (see section Break Conditions) that depends on the value of id, you can stop your program when a specific exception is raised. You can use multiple conditional breakpoints to stop your program when any of a number of exceptions are raised.
It is often necessary to eliminate a breakpoint, watchpoint, or catchpoint once it has done its job and you no longer want your program to stop there. This is called deleting the breakpoint. A breakpoint that has been deleted no longer exists; it is forgotten.
With the clear command you can delete breakpoints according to
where they are in your program. With the delete command you can
delete individual breakpoints, watchpoints, or catchpoints by specifying
their breakpoint numbers.
It is not necessary to delete a breakpoint to proceed past it. GDB automatically ignores breakpoints on the first instruction to be executed when you continue execution without changing the execution address.
clear
clear location
clear function
clear filename:function
clear linenum
clear filename:linenum
delete [breakpoints] [range...]
If no argument is specified, delete all breakpoints (GDB asks
confirmation, unless you have set confirm off). You can abbreviate this command as d.
Rather than deleting a breakpoint, watchpoint, or catchpoint, you might prefer to disable it. This makes the breakpoint inoperative as if it had been deleted, but GDB remembers the information on the breakpoint so that you can enable it again later.
You disable and enable breakpoints, watchpoints, and catchpoints with
the enable and disable commands, optionally specifying
one or more breakpoint numbers as arguments. Use info break to
print a list of all breakpoints, watchpoints, and catchpoints if you
do not know which numbers to use.
Disabling and enabling a breakpoint that has multiple locations affects all of its locations.
A breakpoint, watchpoint, or catchpoint can have any of several different states of enablement:
Enabled. The breakpoint stops your program. A breakpoint set with the break command starts out in this state.
Disabled. The breakpoint has no effect on your program.
Enabled once. The breakpoint stops your program, but then becomes disabled.
Enabled for a count. The breakpoint stops your program for the next N times, then becomes disabled.
Enabled for deletion. The breakpoint stops your program, but immediately after it does so it is deleted permanently. A breakpoint set with the tbreak command starts out in this state.
You can use the following commands to enable or disable breakpoints, watchpoints, and catchpoints:
disable [breakpoints] [range...]
You can abbreviate disable as dis.
enable [breakpoints] [range...]
enable [breakpoints] once range...
enable [breakpoints] count count range...
enable [breakpoints] delete range...
Enable the specified breakpoints to work once, then die; GDB deletes any of these breakpoints as soon as your program stops there. Breakpoints set by the tbreak command start out in this state.
Except for a breakpoint set with tbreak (see section Setting Breakpoints), breakpoints that you set are initially enabled;
subsequently, they become disabled or enabled only when you use one of
the commands above. (The command until can set and delete a
breakpoint of its own, but it does not change the state of your other
breakpoints; see Continuing and Stepping.)
The simplest sort of breakpoint breaks every time your program reaches a specified place. You can also specify a condition for a breakpoint. A condition is just a Boolean expression in your programming language (see section Expressions). A breakpoint with a condition evaluates the expression each time your program reaches it, and your program stops only if the condition is true.
This is the converse of using assertions for program validation; in that situation, you want to stop when the assertion is violated--that is, when the condition is false. In C, if you want to test an assertion expressed by the condition assert, you should set the condition `! assert' on the appropriate breakpoint.
Conditions are also accepted for watchpoints; you may not need them, since a watchpoint is inspecting the value of an expression anyhow--but it might be simpler, say, to just set a watchpoint on a variable name, and specify a condition that tests whether the new value is an interesting one.
Break conditions can have side effects, and may even call functions in your program. This can be useful, for example, to activate functions that log program progress, or to use your own print functions to format special data structures. The effects are completely predictable unless there is another enabled breakpoint at the same address. (In that case, GDB might see the other breakpoint first and stop your program without checking the condition of this one.) Note that breakpoint commands are usually more convenient and flexible than break conditions for the purpose of performing side effects when a breakpoint is reached (see section Breakpoint Command Lists).
Breakpoint conditions can also be evaluated on the target's side if the target supports it. Instead of evaluating the conditions locally, GDB encodes the expression into an agent expression (see section F. The GDB Agent Expression Mechanism) suitable for execution on the target, independently of GDB. Global variables become raw memory locations, locals become stack accesses, and so forth.
In this case, GDB will only be notified of a breakpoint trigger when its condition evaluates to true. This mechanism may provide faster response times depending on the performance characteristics of the target, since it does not need to keep GDB informed about every breakpoint trigger, even those with false conditions.
Break conditions can be specified when a breakpoint is set, by using
`if' in the arguments to the break command. See section Setting Breakpoints. They can also be changed at any time
with the condition command.
You can also use the if keyword with the watch command.
The catch command does not recognize the if keyword;
condition is the only way to impose a further condition on a
catchpoint.
condition bnum expression
Specify expression as the break condition for breakpoint,
watchpoint, or catchpoint number bnum. When you use
condition, GDB checks expression immediately for
syntactic correctness, and to determine whether symbols in it have
referents in the context of your breakpoint. If expression uses
symbols not referenced in the context of the breakpoint, GDB
prints an error message:
No symbol "foo" in current context.
GDB does
not actually evaluate expression at the time the condition
command (or a command that sets a breakpoint with a condition, like
break if ...) is given, however. See section Expressions.
condition bnum
Remove the condition from breakpoint number bnum. It becomes an ordinary unconditional breakpoint.
A special case of a breakpoint condition is to stop only when the breakpoint has been reached a certain number of times. This is so useful that there is a special way to do it, using the ignore count of the breakpoint. Every breakpoint has an ignore count, which is an integer. Most of the time, the ignore count is zero, and therefore has no effect. But if your program reaches a breakpoint whose ignore count is positive, then instead of stopping, it just decrements the ignore count by one and continues. As a result, if the ignore count value is n, the breakpoint does not stop the next n times your program reaches it.
ignore bnum count
To make the breakpoint stop the next time it is reached, specify a count of zero.
When you use continue to resume execution of your program from a
breakpoint, you can specify an ignore count directly as an argument to
continue, rather than using ignore. See section Continuing and Stepping.
If a breakpoint has a positive ignore count and a condition, the condition is not checked. Once the ignore count reaches zero, GDB resumes checking the condition.
You could achieve the effect of the ignore count with a condition such as `$foo-- <= 0' using a debugger convenience variable that is decremented each time. See section Convenience Variables.
Ignore counts apply to breakpoints, watchpoints, and catchpoints.
You can give any breakpoint (or watchpoint or catchpoint) a series of commands to execute when your program stops due to that breakpoint. For example, you might want to print the values of certain expressions, or enable other breakpoints.
commands [range...]
... command-list ...
end
end to terminate the commands.
To remove all commands from a breakpoint, type commands and
follow it immediately with end; that is, give no commands.
With no argument, commands refers to the last breakpoint,
watchpoint, or catchpoint set (not to the breakpoint most recently
encountered). If the most recent breakpoints were set with a single
command, then the commands will apply to all the breakpoints
set by that command. This applies to breakpoints set by
rbreak, and also applies when a single break command
creates multiple breakpoints (see section Ambiguous Expressions).
Pressing RET as a means of repeating the last command is disabled within a command-list.
You can use breakpoint commands to start your program up again. Simply
use the continue command, or step, or any other command
that resumes execution.
Any other commands in the command list, after a command that resumes
execution, are ignored. This is because any time you resume execution
(even with a simple next or step), you may encounter
another breakpoint--which could have its own command list, leading to
ambiguities about which list to execute.
If the first command you specify in a command list is silent, the
usual message about stopping at a breakpoint is not printed. This may
be desirable for breakpoints that are to print a specific message and
then continue. If none of the remaining commands print anything, you
see no sign that the breakpoint was reached. silent is
meaningful only at the beginning of a breakpoint command list.
The commands echo, output, and printf allow you to
print precisely controlled output, and are often useful in silent
breakpoints. See section Commands for Controlled Output.
For example, here is how you could use breakpoint commands to print the
value of x at entry to foo whenever x is positive.
break foo if x>0
commands
silent
printf "x is %d\n",x
cont
end
One application for breakpoint commands is to compensate for one bug so
you can test for another. Put a breakpoint just after the erroneous line
of code, give it a condition to detect the case in which something
erroneous has been done, and give it commands to assign correct values
to any variables that need them. End with the continue command
so that your program does not stop, and start with the silent
command so that no output is produced. Here is an example:
break 403
commands
silent
set x = y + 4
cont
end
The dynamic printf command dprintf combines a breakpoint with
formatted printing of your program's data to give you the effect of
inserting printf calls into your program on-the-fly, without
having to recompile it.
In its most basic form, the output goes to the GDB console. However,
you can set the variable dprintf-style for alternate handling.
For instance, you can ask GDB to format the output by calling your
program's printf function. This has the advantage that the
characters go to the program's output device, so they can be recorded in
redirects to files and so forth.
If you are doing remote debugging with a stub or agent, you can also ask GDB to have the printf handled by the remote agent. In addition to ensuring that the output goes to the remote program's device along with any other output the program might produce, you can also ask that the dprintf remain active even after disconnecting from the remote target. Using the stub/agent is also more efficient, as it can do everything without needing to communicate with GDB.
dprintf location,template,expression[,expression...]
set dprintf-style style
Set the dprintf output to be handled in one of several different styles:
gdb
Handle the output using the GDB printf command.
call
Handle the output by calling a function in your program (normally
printf).
agent
Have the remote debugging agent (such as gdbserver) handle
the output itself. This style is only available for agents that
support running commands on the target.
set dprintf-function function
Set the function to call if the dprintf style is call. By
default its value is printf. You may set it to any expression
that can evaluate to a function, as per the call
command.
set dprintf-channel channel
Set a "channel" for dprintf. If set to a non-empty value, GDB
will evaluate it as an expression and pass the result as a first
argument to the dprintf-function, in the manner of
fprintf and similar functions. Otherwise, the dprintf format
string will be the first argument, in the manner of printf.
As an example, if you wanted dprintf output to go to a logfile
that is a standard I/O stream assigned to the variable mylog,
you could do the following:
(gdb) set dprintf-style call
(gdb) set dprintf-function fprintf
(gdb) set dprintf-channel mylog
(gdb) dprintf 25,"at line 25, glob=%d\n",glob
Dprintf 1 at 0x123456: file main.c, line 25.
(gdb) info break
1 dprintf keep y 0x00123456 in main at main.c:25
call (void) fprintf (mylog,"at line 25, glob=%d\n",glob)
continue
(gdb)
Note that the info break displays the dynamic printf commands
as normal breakpoint commands; you can thus easily see the effect of
the variable settings.
set disconnected-dprintf on
set disconnected-dprintf off
Choose whether dprintf commands should continue to run if
GDB has disconnected from the target. This only applies
if the dprintf-style is agent.
show disconnected-dprintf
Show the current choice for disconnected dprintf.
GDB does not check the validity of function and channel, relying on you to supply values that are meaningful for the contexts in which they are being used. For instance, the function and channel may be the values of local variables, but if that is the case, then all enabled dynamic prints must be at locations within the scope of those locals. If evaluation fails, GDB will report an error.
To save breakpoint definitions to a file use the save
breakpoints command.
save breakpoints [filename]
To read the saved breakpoint definitions, use the
source command (see section 23.1.3 Command Files). Note that watchpoints
with expressions involving local variables may fail to be recreated
because it may no longer be possible to access the context where the
watchpoint is valid. Because the saved breakpoint definitions
are simply a sequence of GDB commands that recreate the
breakpoints, you can edit the file in your favorite editing program,
and remove the breakpoint definitions you're not interested in, or
that can no longer be recreated.
GDB supports SDT probes in the code. SDT stands for Statically Defined Tracing, and the probes are designed to have a tiny runtime code and data footprint, and no dynamic relocations. They are usable from assembly, C and C++ languages. See http://sourceware.org/systemtap/wiki/UserSpaceProbeImplementation for a good reference on how the SDT probes are implemented.
Currently, SystemTap (http://sourceware.org/systemtap/)
SDT probes are supported on ELF-compatible systems. See
http://sourceware.org/systemtap/wiki/AddingUserSpaceProbingToApps
for more information on how to add SystemTap SDT probes
in your applications.
Some probes have an associated semaphore variable; for instance, this
happens automatically if you defined your probe using a DTrace-style
`.d' file. If your probe has a semaphore, GDB will
automatically enable it when you specify a breakpoint using the
`-probe-stap' notation. But, if you put a breakpoint at a probe's
location by some other method (e.g., break file:line), then
GDB will not automatically set the semaphore.
You can examine the available static probes using info
probes, with optional arguments:
info probes stap [provider [name [objfile]]]
If given, name is a regular expression to match against probe names when selecting which probes to list. If omitted, probe names are not considered when deciding whether to display them.
If given, objfile is a regular expression used to select which object files (executable or shared libraries) to examine. If not given, all object files are considered.
info probes all
A probe may specify up to twelve arguments. These are available at the
point at which the probe is defined--that is, when the current PC is
at the probe's location. The arguments are available using the
convenience variables (see section 10.11 Convenience Variables)
$_probe_arg0...$_probe_arg11. Each probe argument is
an integer of the appropriate size; types are not preserved. The
convenience variable $_probe_argc holds the number of arguments
at the current probe point.
These variables are always available, but attempts to access them at any location other than a probe point will cause GDB to give an error message.
If you request too many active hardware-assisted breakpoints and watchpoints, you will see this error message:
Stopped; cannot insert breakpoints.
You may have requested too many hardware breakpoints and watchpoints.
This message is printed when you attempt to resume the program, since only then does GDB know exactly how many hardware breakpoints and watchpoints it needs to insert.
When this message is printed, you need to disable or remove some of the hardware-assisted breakpoints and watchpoints, and then continue.
Some processor architectures place constraints on the addresses at which breakpoints may be placed. For architectures thus constrained, GDB will attempt to adjust the breakpoint's address to comply with the constraints dictated by the architecture.
One example of such an architecture is the Fujitsu FR-V. The FR-V is a VLIW architecture in which a number of RISC-like instructions may be bundled together for parallel execution. The FR-V architecture constrains the location of a breakpoint instruction within such a bundle to the instruction with the lowest address. GDB honors this constraint by adjusting a breakpoint's address to the first in the bundle.
It is not uncommon for optimized code to have bundles which contain instructions from different source statements, thus it may happen that a breakpoint's address will be adjusted from one source statement to another. Since this adjustment may significantly alter GDB's breakpoint-related behavior from what the user expects, a warning is printed when the breakpoint is first set and also when the breakpoint is hit.
A warning like the one below is printed when setting a breakpoint that's been subject to address adjustment:
warning: Breakpoint address adjusted from 0x00010414 to 0x00010410.
Such warnings are printed both for user-settable breakpoints and GDB's internal breakpoints. If you see one of these warnings, you should verify that a breakpoint set at the adjusted address will have the desired effect. If not, the breakpoint in question may be removed and other breakpoints may be set which will have the desired behavior. E.g., it may be sufficient to place the breakpoint at a later instruction. A conditional breakpoint may also be useful in some cases to prevent the breakpoint from triggering too often.
GDB will also issue a warning when stopping at one of these adjusted breakpoints:
warning: Breakpoint 1 address previously adjusted from 0x00010414 to 0x00010410.
When this warning is encountered, it may be too late to take remedial action except in cases where the breakpoint is hit earlier or more frequently than expected.
Continuing means resuming program execution until your program
completes normally. In contrast, stepping means executing just
one more "step" of your program, where "step" may mean either one
line of source code, or one machine instruction (depending on what
particular command you use). Either when continuing or when stepping,
your program may stop even sooner, due to a breakpoint or a signal. (If
it stops due to a signal, you may want to use handle, or use
`signal 0' to resume execution. See section Signals.)
continue [ignore-count]
c [ignore-count]
fg [ignore-count]
Resume program execution, at the address where your program last stopped; any breakpoints set at that address are bypassed, as if they had been disabled. The optional argument ignore-count allows you to specify a further number of times to ignore a breakpoint at this location; its effect is like that of ignore (see section Break Conditions).
The argument ignore-count is meaningful only when your program
stopped due to a breakpoint. At other times, the argument to
continue is ignored.
The synonyms c and fg (for foreground, as the
debugged program is deemed to be the foreground program) are provided
purely for convenience, and have exactly the same behavior as
continue.
To resume execution at a different place, you can use return
(see section Returning from a Function) to go back to the
calling function; or jump (see section Continuing at a Different Address) to go to an arbitrary location in your program.
A typical technique for using stepping is to set a breakpoint (see section Breakpoints; Watchpoints; and Catchpoints) at the beginning of the function or the section of your program where a problem is believed to lie, run your program until it stops at that breakpoint, and then step through the suspect area, examining the variables that are interesting, until you see the problem happen.
step
Continue running your program until control reaches a different source line, then stop it and return control to GDB. This command is abbreviated s.
Warning: If you use the step command while control is within a function that was compiled without debugging information, execution proceeds until control reaches a function that does have debugging information. Likewise, it will not step into a function which is compiled without debugging information. To step through functions without debugging information, use the stepi command, described below.
The step command only stops at the first instruction of a source
line. This prevents the multiple stops that could otherwise occur in
switch statements, for loops, etc. step continues
to stop if a function that has debugging information is called within
the line. In other words, step steps inside any functions
called within the line.
Also, the step command only enters a function if there is line
number information for the function. Otherwise it acts like the
next command. This avoids problems when using cc -gl
on MIPS machines. Previously, step entered subroutines if there
was any debugging information about the routine.
step count
Continue running as in step, but do so count times. If a
breakpoint is reached, or a signal not related to stepping occurs before
count steps, stepping stops right away.
next [count]
Continue to the next source line in the current (innermost) stack frame. This is similar to step, but function calls that appear within
the line of code are executed without stopping. Execution stops when
control reaches a different line of code at the original stack level
that was executing when you gave the next command. This command
is abbreviated n.
An argument count is a repeat count, as for step.
The next command only stops at the first instruction of a
source line. This prevents multiple stops that could otherwise occur in
switch statements, for loops, etc.
set step-mode
set step-mode on
The set step-mode on command causes the step command to
stop at the first instruction of a function which contains no debug line
information rather than stepping over it.
This is useful in cases where you may be interested in inspecting the machine instructions of a function which has no symbolic info and do not want to automatically skip over this function.
set step-mode off
Causes the step command to step over any functions which contain no
debug information. This is the default.
show step-mode
finish
Continue running until just after function in the selected stack frame returns. Print the returned value (if any). This command can be abbreviated as fin.
Contrast this with the return command (see section Returning from a Function).
until
u
Continue running until a source line past the current line, in the current stack frame, is reached. This command is used to avoid single stepping through a loop more than once. It is like the next
command, except that when until encounters a jump, it
automatically continues execution until the program counter is greater
than the address of the jump.
This means that when you reach the end of a loop after single stepping
though it, until makes your program continue execution until it
exits the loop. In contrast, a next command at the end of a loop
simply steps back to the beginning of the loop, which forces you to step
through the next iteration.
until always stops your program if it attempts to exit the current
stack frame.
until may produce somewhat counterintuitive results if the order
of machine code does not match the order of the source lines. For
example, in the following excerpt from a debugging session, the f
(frame) command shows that execution is stopped at line
206; yet when we use until, we get to line 195:
(gdb) f
#0 main (argc=4, argv=0xf7fffae8) at m4.c:206
206 expand_input();
(gdb) until
195 for ( ; argc > 0; NEXTARG) {
This happened because, for execution efficiency, the compiler had
generated code for the loop closure test at the end, rather than the
start, of the loop--even though the test in a C for-loop is
written before the body of the loop. The until command appeared
to step back to the beginning of the loop when it advanced to this
expression; however, it has not really gone to an earlier
statement--not in terms of the actual machine code.
until with no argument works by means of single
instruction stepping, and hence is slower than until with an
argument.
until location
u location
Continue running your program until either the specified location is reached, or the current stack frame returns. This form of the command uses temporary breakpoints, and hence is quicker than until without an argument. The specified
location is actually reached only if it is in the current frame. This
implies that until can be used to skip over recursive function
invocations. For instance in the code below, if the current location is
line 96, issuing until 99 will execute the program up to
line 99 in the same invocation of factorial, i.e., after the inner
invocations have returned.
94 int factorial (int value)
95 {
96 if (value > 1) {
97 value *= factorial (value - 1);
98 }
99 return (value);
100 }
advance location
Continue running the program up to the given location. Execution will also stop upon exit from the current stack frame. This command is similar to until, but advance will
not skip over recursive function calls, and the target location doesn't
have to be in the same frame as the current one.
stepi
stepi arg
si
Execute one machine instruction, then stop and return to the debugger.
It is often useful to do `display/i $pc' when stepping by machine instructions. This makes GDB automatically display the next instruction to be executed, each time your program stops. See section Automatic Display.
An argument is a repeat count, as in step.
nexti
nexti arg
ni
Execute one machine instruction, but if it is a function call, proceed until the function returns.
An argument is a repeat count, as in next.
The program you are debugging may contain some functions which are
uninteresting to debug. The skip command lets you tell GDB to
skip a function or all functions in a file when stepping.
For example, consider the following C function:
101 int func()
102 {
103 foo(boring());
104 bar(boring());
105 }
Suppose you wish to step into the functions foo and bar, but you
are not interested in stepping through boring. If you run step
at line 103, you'll enter boring(), but if you run next, you'll
step over both foo and boring!
One solution is to step into boring and use the finish
command to immediately exit it. But this can become tedious if boring
is called from many places.
A more flexible solution is to execute skip boring. This instructs
GDB never to step into boring. Now when you execute
step at line 103, you'll step over boring and directly into
foo.
You can also instruct GDB to skip all functions in a file, with, for
example, skip file boring.c.
skip [linespec]
skip function [linespec]
If you do not specify linespec, the function you're currently debugging will be skipped.
(If you have a function called file that you want to skip, use
skip function file.)
skip file [filename]
If you do not specify filename, functions whose source lives in the file you're currently debugging will be skipped.
Skips can be listed, deleted, disabled, and enabled, much like breakpoints. These are the commands for managing your list of skips:
info skip [range]
info skip prints the following information about each skip:
info skip will show the function's
address here.
skip delete [range]
skip enable [range]
skip disable [range]
A signal is an asynchronous event that can happen in a program. The
operating system defines the possible kinds of signals, and gives each
kind a name and a number. For example, in Unix SIGINT is the
signal a program gets when you type an interrupt character (often Ctrl-c);
SIGSEGV is the signal a program gets from referencing a place in
memory far away from all the areas in use; SIGALRM occurs when
the alarm clock timer goes off (which happens only if your program has
requested an alarm).
Some signals, including SIGALRM, are a normal part of the
functioning of your program. Others, such as SIGSEGV, indicate
errors; these signals are fatal (they kill your program immediately) if the
program has not specified in advance some other way to handle the signal.
SIGINT does not indicate an error in your program, but it is normally
fatal so it can carry out the purpose of the interrupt: to kill the program.
GDB has the ability to detect any occurrence of a signal in your program. You can tell GDB in advance what to do for each kind of signal.
Normally, GDB is set up to let the non-erroneous signals like
SIGALRM be silently passed to your program
(so as not to interfere with their role in the program's functioning)
but to stop your program immediately whenever an error signal happens.
You can change these settings with the handle command.
info signals
info handle
info signals sig
info handle is an alias for info signals.
catch signal [signal... | `all']
handle signal [keywords...]
The keywords allowed by the handle command can be abbreviated.
Their full names are:
nostop
GDB should not stop your program when this signal happens. It may still print a message telling you that the signal has come in.
stop
GDB should stop your program when this signal happens. This implies the print keyword as well.
print
GDB should print a message when this signal happens.
noprint
GDB should not mention the occurrence of the signal at all. This implies the nostop keyword as well.
pass
noignore
GDB should allow your program to see this signal; your program can handle the signal, or else it may terminate if the signal is fatal and not handled. pass and noignore are synonyms.
nopass
ignore
GDB should not allow your program to see this signal. nopass and ignore are synonyms.
When a signal stops your program, the signal is not visible to the
program until you
continue. Your program sees the signal then, if pass is in
effect for the signal in question at that time. In other words,
after GDB reports a signal, you can use the handle
command with pass or nopass to control whether your
program sees that signal when you continue.
The default is set to nostop, noprint, pass for
non-erroneous signals such as SIGALRM, SIGWINCH and
SIGCHLD, and to stop, print, pass for the
erroneous signals.
You can also use the signal command to prevent your program from
seeing a signal, or cause it to see a signal it normally would not see,
or to give it any signal at any time. For example, if your program stopped
due to some sort of memory reference error, you might store correct
values into the erroneous variables and continue, hoping to see more
execution; but your program would probably terminate immediately as
a result of the fatal signal once it saw the signal. To prevent this,
you can continue with `signal 0'. See section Giving your Program a Signal.
On some targets, GDB can inspect extra signal information
associated with the intercepted signal, before it is actually
delivered to the program being debugged. This information is exported
by the convenience variable $_siginfo, and consists of data
that is passed by the kernel to the signal handler at the time of the
receipt of a signal. The data type of the information itself is
target dependent. You can see the data type using the ptype
$_siginfo command. On Unix systems, it typically corresponds to the
standard siginfo_t type, as defined in the `signal.h'
system header.
Here's an example, on a GNU/Linux system, printing the stray referenced address that raised a segmentation fault.
(gdb) continue
Program received signal SIGSEGV, Segmentation fault.
0x0000000000400766 in main ()
69 *(int *)p = 0;
(gdb) ptype $_siginfo
type = struct {
int si_signo;
int si_errno;
int si_code;
union {
int _pad[28];
struct {...} _kill;
struct {...} _timer;
struct {...} _rt;
struct {...} _sigchld;
struct {...} _sigfault;
struct {...} _sigpoll;
} _sifields;
}
(gdb) ptype $_siginfo._sifields._sigfault
type = struct {
void *si_addr;
}
(gdb) p $_siginfo._sifields._sigfault.si_addr
$1 = (void *) 0x7ffff7ff7000
Depending on target support, $_siginfo may also be writable.
GDB supports debugging programs with multiple threads (see section Debugging Programs with Multiple Threads). There are two modes of controlling execution of your program within the debugger. In the default mode, referred to as all-stop mode, when any thread in your program stops (for example, at a breakpoint or while being stepped), all other threads in the program are also stopped by GDB. On some targets, GDB also supports non-stop mode, in which other threads can continue to run freely while you examine the stopped thread in the debugger.
5.5.1 All-Stop Mode All threads stop when GDB takes control 5.5.2 Non-Stop Mode Other threads continue to execute 5.5.3 Background Execution Running your program asynchronously 5.5.4 Thread-Specific Breakpoints Controlling breakpoints 5.5.5 Interrupted System Calls GDB may interfere with system calls 5.5.6 Observer Mode GDB does not alter program behavior
In all-stop mode, whenever your program stops under GDB for any reason, all threads of execution stop, not just the current thread. This allows you to examine the overall state of the program, including switching between threads, without worrying that things may change underfoot.
Conversely, whenever you restart the program, all threads start
executing. This is true even when single-stepping with commands
like step or next.
In particular, GDB cannot single-step all threads in lockstep. Since thread scheduling is up to your debugging target's operating system (not controlled by GDB), other threads may execute more than one statement while the current thread completes a single step. Moreover, in general other threads stop in the middle of a statement, rather than at a clean statement boundary, when the program stops.
You might even find your program stopped in another thread after continuing or even single-stepping. This happens whenever some other thread runs into a breakpoint, a signal, or an exception before the first thread completes whatever you requested.
Whenever GDB stops your program, due to a breakpoint or a signal, it automatically selects the thread where that breakpoint or signal happened. GDB alerts you to the context switch with a message such as `[Switching to Thread n]' to identify the thread.
On some OSes, you can modify GDB's default behavior by locking the OS scheduler to allow only a single thread to run.
set scheduler-locking mode
Set the scheduler locking mode. If it is off, then there is no
locking and any thread may run at any time. If on, then only the
current thread may run when the inferior is resumed. The step
mode optimizes for single-stepping; it prevents other threads
from preempting the current thread while you are stepping, so that
the focus of debugging does not change unexpectedly.
Other threads only rarely (or never) get a chance to run
when you step. They are more likely to run when you `next' over a
function call, and they are completely free to run when you use commands
like `continue', `until', or `finish'. However, unless another
thread hits a breakpoint during its timeslice, GDB does not change
the current thread away from the thread that you are debugging.
show scheduler-locking
By default, when you issue one of the execution commands such as
continue, next or step, GDB allows only
threads of the current inferior to run. For example, if GDB
is attached to two inferiors, each with two threads, the
continue command resumes only the two threads of the current
inferior. This is useful, for example, when you debug a program that
forks and you want to hold the parent stopped (so that, for instance,
it doesn't run to exit), while you debug the child. In other
situations, you may not be interested in inspecting the current state
of any of the processes GDB is attached to, and you may want
to resume them all until some breakpoint is hit. In the latter case,
you can instruct GDB to allow all threads of all the
inferiors to run with the set schedule-multiple command.
set schedule-multiple
When set to on, all threads of
all processes are allowed to run. When off, only the threads
of the current process are resumed. The default is off. The
scheduler-locking mode takes precedence when set to on,
or while you are stepping and set to step.
show schedule-multiple
For some multi-threaded targets, GDB supports an optional mode of operation in which you can examine stopped program threads in the debugger while other threads continue to execute freely. This minimizes intrusion when debugging live systems, such as programs where some threads have real-time constraints or must continue to respond to external events. This is referred to as non-stop mode.
In non-stop mode, when a thread stops to report a debugging event,
only that thread is stopped; GDB does not stop other
threads as well, in contrast to the all-stop mode behavior. Additionally,
execution commands such as continue and step apply by default
only to the current thread in non-stop mode, rather than all threads as
in all-stop mode. This allows you to control threads explicitly in
ways that are not possible in all-stop mode -- for example, stepping
one thread while allowing others to run freely, stepping
one thread while holding all others stopped, or stepping several threads
independently and simultaneously.
To enter non-stop mode, use this sequence of commands before you run or attach to your program:
# Enable the async interface.
set target-async 1
# If using the CLI, pagination breaks non-stop.
set pagination off
# Finally, turn it on!
set non-stop on
You can use these commands to manipulate the non-stop mode setting:
set non-stop on
set non-stop off
show non-stop
Note these commands only reflect whether non-stop mode is enabled,
not whether the currently-executing program is being run in non-stop mode.
In particular, the set non-stop preference is only consulted when GDB
starts or connects to the target program, and it is generally
not possible to switch modes once debugging has started. Furthermore,
since not all targets support non-stop mode, even when you have enabled
non-stop mode, GDB may still fall back to all-stop operation by
default.
In non-stop mode, all execution commands apply only to the current thread
by default. That is, continue only continues one thread.
To continue all threads, issue continue -a or c -a.
You can use GDB's background execution commands (see section 5.5.3 Background Execution) to run some threads in the background while you continue to examine or step others from GDB. The MI execution commands (see section 27.13 GDB/MI Program Execution) are always executed asynchronously in non-stop mode.
Suspending execution is done with the interrupt command when
running in the background, or Ctrl-c during foreground execution.
In all-stop mode, this stops the whole process;
but in non-stop mode the interrupt applies only to the current thread.
To stop the whole program, use interrupt -a.
Other execution commands do not currently support the -a option.
In non-stop mode, when a thread stops, GDB doesn't automatically make that thread current, as it does in all-stop mode. This is because the thread stop notifications are asynchronous with respect to GDB's command interpreter, and it would be confusing if GDB unexpectedly changed to a different thread just as you entered a command to operate on the previously current thread.
GDB's execution commands have two variants: the normal foreground (synchronous) behavior, and a background (asynchronous) behavior. In foreground execution, GDB waits for the program to report that some thread has stopped before prompting for another command. In background execution, GDB immediately gives a command prompt so that you can issue other commands while your program runs.
You need to explicitly enable asynchronous mode before you can use background execution commands. You can use these commands to manipulate the asynchronous mode setting:
set target-async on
set target-async off
show target-async
If the target doesn't support async mode, GDB issues an error message if you attempt to use the background execution commands.
To specify background execution, add a & to the command. For example,
the background form of the continue command is continue&, or
just c&. The execution commands that accept background execution
are:
run
attach
step
stepi
next
nexti
continue
finish
until
Background execution is especially useful in conjunction with non-stop
mode for debugging programs with multiple threads; see 5.5.2 Non-Stop Mode.
However, you can also use these commands in the normal all-stop mode with
the restriction that you cannot issue another execution command until the
previous one finishes. Examples of commands that are valid in all-stop
mode while the program is running include help and info break.
You can interrupt your program while it is running in the background by
using the interrupt command.
interrupt
interrupt -a
Suspend execution of the running program. In all-stop mode,
interrupt stops the whole process, but in non-stop mode, it stops
only the current thread. To stop the whole program in non-stop mode,
use interrupt -a.
When your program has multiple threads (see section Debugging Programs with Multiple Threads), you can choose whether to set breakpoints on all threads, or on a particular thread.
break linespec thread threadno
break linespec thread threadno if ...
Use the qualifier `thread threadno' with a breakpoint command to specify that you only want to stop the program when a particular thread reaches this breakpoint. threadno is one of the numeric thread identifiers assigned by GDB, shown in the first column of the `info threads' display.
If you do not specify `thread threadno' when you set a breakpoint, the breakpoint applies to all threads of your program.
You can use the thread qualifier on conditional breakpoints as
well; in this case, place `thread threadno' before or
after the breakpoint condition, like this:
(gdb) break frik.c:13 thread 28 if bartab > lim
There is an unfortunate side effect when using GDB to debug multi-threaded programs. If one thread stops for a breakpoint, or for some other reason, and another thread is blocked in a system call, then the system call may return prematurely. This is a consequence of the interaction between multiple threads and the signals that GDB uses to implement breakpoints and other events that stop execution.
To handle this problem, your program should check the return value of each system call and react appropriately. This is good programming style anyway.
For example, do not write code like this:
sleep (10);
The call to sleep will return early if a different thread stops
at a breakpoint or for some other reason.
Instead, write this:
int unslept = 10;
while (unslept > 0)
unslept = sleep (unslept);
A system call is allowed to return early, so the system is still conforming to its specification. But GDB does cause your multi-threaded program to behave differently than it would without GDB.
Also, GDB uses internal breakpoints in the thread library to monitor certain events such as thread creation and thread destruction. When such an event happens, a system call in another thread may return prematurely, even though your program does not appear to stop.
If you want to build on non-stop mode and observe program behavior without any chance of disruption by GDB, you can set variables to disable all of the debugger's attempts to modify state, whether by writing memory, inserting breakpoints, etc. These operate at a low level, intercepting operations from all commands.
When all of these are set to off, then GDB is said to
be in observer mode. As a convenience, the variable
observer can be set to disable these, plus enable non-stop
mode.
Note that GDB will not prevent you from making nonsensical
combinations of these settings. For instance, if you have enabled
may-insert-breakpoints but disabled may-write-memory,
then breakpoints that work by writing trap instructions into the code
stream still cannot be inserted.
set observer on
set observer off
When set to on, this disables all the permission variables
below (except for insert-fast-tracepoints), plus enables
non-stop debugging. Setting this to off switches back to
normal debugging, though remaining in non-stop mode.
show observer
set may-write-registers on
set may-write-registers off
This controls whether GDB will attempt to alter the values of registers, such as with assignment expressions in print, or the
jump command. It defaults to on.
show may-write-registers
set may-write-memory on
set may-write-memory off
This controls whether GDB will attempt to alter the contents of memory, such as with assignment expressions in print. It
defaults to on.
show may-write-memory
set may-insert-breakpoints on
set may-insert-breakpoints off
This controls whether GDB will attempt to insert breakpoints, including internal breakpoints defined by GDB itself. It defaults to on.
show may-insert-breakpoints
set may-insert-tracepoints on
set may-insert-tracepoints off
This controls whether GDB will attempt to insert regular tracepoints; fast tracepoints are under the control of
may-insert-fast-tracepoints. It defaults to on.
show may-insert-tracepoints
set may-insert-fast-tracepoints on
set may-insert-fast-tracepoints off
This controls whether GDB will attempt to insert fast tracepoints; regular (non-fast) tracepoints are under the control of
may-insert-tracepoints. It defaults to on.
show may-insert-fast-tracepoints
set may-interrupt on
set may-interrupt off
This controls whether GDB will attempt to interrupt or stop program execution. When this variable is off, the
interrupt command will have no effect, nor will
Ctrl-c. It defaults to on.
show may-interrupt
When you are debugging a program, it is not unusual to realize that you have gone too far, and some event of interest has already happened. If the target environment supports it, GDB can allow you to "rewind" the program by running it backward.
A target environment that supports reverse execution should be able to "undo" the changes in machine state that have taken place as the program was executing normally. Variables, registers etc. should revert to their previous values. Obviously this requires a great deal of sophistication on the part of the target environment; not all target environments can support reverse execution.
When a program is executed in reverse, the instructions that have most recently been executed are "un-executed", in reverse order. The program counter runs backward, following the previous thread of execution in reverse. As each instruction is "un-executed", the values of memory and/or registers that were changed by that instruction are reverted to their previous states. After executing a piece of source code in reverse, all side effects of that code should be "undone", and all variables should be returned to their prior values(4).
If you are debugging in a target environment that supports reverse execution, GDB provides the following commands.
reverse-continue [ignore-count]
rc [ignore-count]
Beginning at the point where your program last stopped, start executing in reverse. Reverse execution will stop for breakpoints and synchronous exceptions (signals), just like normal execution.
reverse-step [count]
Run the program backward until control reaches the start of a different source line; then stop it, and return control to GDB.
Like the step command, reverse-step will only stop
at the beginning of a source line. It "un-executes" the previously
executed source line. If the previous source line included calls to
debuggable functions, reverse-step will step (backward) into
the called function, stopping at the beginning of the last
statement in the called function (typically a return statement).
Also, as with the step command, if non-debuggable functions are
called, reverse-step will run through them backward without stopping.
reverse-stepi [count]
Reverse-execute one machine instruction. Note that the instruction to be reverse-executed is not the one pointed to by the program counter, but the instruction executed prior to that one. For instance, if the last instruction was a jump, reverse-stepi will take you
back from the destination of the jump to the jump instruction itself.
reverse-next [count]
Run backward to the beginning of the previous line executed in the current (innermost) stack frame. If the line contains function calls, they will be "un-executed" without stopping. Starting from the first line of a function, reverse-next will take you back
to the caller of that function, before the function was called,
just as the normal next command would take you from the last
line of a function back to its return to its caller
(5).
reverse-nexti [count]
Like nexti, reverse-nexti executes a single instruction
in reverse, except that called functions are "un-executed" atomically.
That is, if the previously executed instruction was a return from
another function, reverse-nexti will continue to execute
in reverse until the call to that function (from the current stack
frame) is reached.
reverse-finish
Just as the finish command takes you to the point where the
current function returns, reverse-finish takes you to the point
where it was called. Instead of ending up at the end of the current
function invocation, you end up at the beginning.
set exec-direction
set exec-direction reverse
This command enables execution in reverse: commands that normally run the program forward, such as step, stepi, next, nexti, continue, and finish, instead run it backward. The return
command cannot be used in reverse mode.
set exec-direction forward
On some platforms, GDB provides a special process record and replay target that can record a log of the process execution, and replay it later with both forward and reverse execution commands.
When this target is in use, if the execution log includes the record for the next instruction, GDB will debug in replay mode. In replay mode, the inferior does not really execute code instructions. Instead, all the events that normally happen during code execution are taken from the execution log. While code is not really executed in replay mode, the values of registers (including the program counter register) and the memory of the inferior are still changed as they normally would. Their contents are taken from the execution log.
If the record for the next instruction is not in the execution log, GDB will debug in record mode. In this mode, the inferior executes normally, and GDB records the execution log for future replay.
The process record and replay target supports reverse execution (see section 6. Running programs backward), even if the platform on which the inferior runs does not. However, the reverse execution is limited in this case by the range of the instructions recorded in the execution log. In other words, reverse execution on platforms that don't support it directly can only be done in the replay mode.
When debugging in the reverse direction, GDB will work in replay mode as long as the execution log includes the record for the previous instruction; otherwise, it will work in record mode, if the platform supports reverse execution, or stop if not.
For architecture environments that support process record and replay, GDB provides the following commands:
record method
This command starts the process record and replay target. The recording method can be specified as a parameter; without a parameter the command uses the full recording method. The following
recording methods are available:
full
Full record/replay recording using GDB's software record and replay implementation. This method allows replaying and reverse execution.
btrace
Hardware-supported instruction recording. This method does not record data, and the recorded instructions are collected in a ring buffer, so older records are overwritten when the buffer is full. It allows limited replay and reverse execution.
This recording method may not be available on all processors.
The process record and replay target can only debug a process that is already running. Therefore, you first need to start the process with the run or start commands, and then start the recording with the record method command.
Both record method and rec method are
aliases of target record-method.
Displaced stepping (see section displaced stepping) will be automatically disabled when the process record and replay target is started. That's because the process record and replay target doesn't support displaced stepping.
If the inferior is in the non-stop mode (see section 5.5.2 Non-Stop Mode) or in
the asynchronous execution mode (see section 5.5.3 Background Execution), not
all recording methods are available. The full recording method
does not support these two modes.
record stop
When you stop the process record and replay target in record mode (at the end of the execution log), the inferior will be stopped at the next instruction that would have been recorded. In other words, if you record for a while and then stop recording, the inferior process will be left in the same state as if the recording never happened.
On the other hand, if the process record and replay target is stopped while in replay mode (that is, not at the end of the execution log, but at some earlier point), the inferior process will become "live" at that earlier state, and it will then be possible to continue the usual "live" debugging of the process from that state.
When the inferior process exits, or GDB detaches from it, the process record and replay target will automatically stop itself.
record save filename
Save the execution log to a file `filename'.
This command may not be available for all recording methods.
record restore filename
Restore the execution log from a file `filename'. The file must have been created with record save.
set record full insn-number-max limit
Set the limit of instructions to be recorded for the full recording method. The default value is 200000.
If limit is a positive number, then GDB will start deleting instructions from the log once the number of the recorded instructions becomes greater than limit. For every new recorded instruction, GDB will delete the earliest recorded instruction to keep the number of recorded instructions at the limit. (Since deleting recorded instructions loses information, GDB lets you control what happens when the limit is reached, by means of the stop-at-limit option, described below.)
If limit is zero, GDB will never delete recorded instructions from the execution log. The number of recorded instructions is unlimited in this case.
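For instance, to cap the in-memory log and have GDB silently discard the oldest records when the cap is reached, a session might look like this (an illustrative sketch; the limit value is arbitrary):

(gdb) set record full insn-number-max 1000000
(gdb) set record full stop-at-limit off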
show record full insn-number-max
Show the limit of instructions to be recorded with the full recording method.
set record full stop-at-limit
Control the behavior of the full recording method when the number of recorded instructions reaches the limit. If ON (the default), GDB will stop when the limit is reached for the first time and ask you whether you want to stop the inferior or continue running it and recording the execution log. If you decide to continue recording, each new recorded instruction will cause the oldest one to be deleted.
If this option is OFF, GDB will automatically delete the oldest record to make room for each new one, without asking.
show record full stop-at-limit
Show the current setting of stop-at-limit.
set record full memory-query
Control the behavior when GDB is unable to record the effect of an instruction on memory for the full recording method.
If ON, GDB will query whether to stop the inferior in that case.
If this option is OFF (the default), GDB will automatically ignore the effect of such instructions on memory. Later, when GDB replays this execution log, it will mark the log of this instruction as not accessible, and it will not affect the replay results.
show record full memory-query
Show the current setting of memory-query.
info record
Show various statistics about the recording, depending on the recording method:
full
For the full recording method, it shows the state of process record and its in-memory execution log buffer, including whether it is in record mode or replay mode, the lowest and highest recorded instruction numbers, and the number of instructions recorded.
btrace
For the btrace recording method, it shows the number of instructions that have been recorded and the number of blocks of sequential control flow that are formed by the recorded instructions.
record delete
When the record target runs in replay mode ("in the past"), delete the subsequent execution log and begin to record a new execution log starting from the current address. This means you will abandon the previously recorded "future" and begin recording a new "future".
record instruction-history
Disassembles instructions from the recorded execution log. By default, ten instructions are disassembled. This can be changed using the set record instruction-history-size command. Instructions are printed in execution order. There are several ways to specify what part of the execution log to disassemble:
record instruction-history insn
Disassembles instruction number insn.
record instruction-history insn, +/-n
Disassembles n instructions around instruction number insn. If n is preceded with +, disassembles n instructions after instruction number insn. If n is preceded with -, disassembles n instructions before instruction number insn.
record instruction-history
Disassembles ten more instructions after the last disassembly.
record instruction-history -
Disassembles ten more instructions before the last disassembly.
record instruction-history begin end
Disassembles instructions beginning with instruction number begin until instruction number end.
This command may not be available for all recording methods.
set record instruction-history-size size
Define how many instructions to disassemble in the record instruction-history command. The default value is 10.
show record instruction-history-size
Show how many instructions to disassemble in the record instruction-history command.
record function-call-history
Prints the execution history at function granularity. It prints one line for each sequence of instructions that belong to the same function, giving the name of that function, the source lines for this instruction sequence (if the /l modifier is specified), and the instruction numbers that form the sequence (if the /i modifier is specified).
(gdb) list 1, 10
1 void foo (void)
2 {
3 }
4
5 void bar (void)
6 {
7 ...
8 foo ();
9 ...
10 }
(gdb) record function-call-history /l
1 foo.c:6-8 bar
2 foo.c:2-3 foo
3 foo.c:9-10 bar
By default, ten lines are printed. This can be changed using the
set record function-call-history-size command. Functions are
printed in execution order. There are several ways to specify what
to print:
record function-call-history func
Prints the function with function number func.
record function-call-history func, +/-n
Prints n functions around function number func. If n is preceded with +, prints n functions after function number func. If n is preceded with -, prints n functions before function number func.
record function-call-history
Prints ten more functions after the last ten-line print.
record function-call-history -
Prints ten more functions before the last ten-line print.
record function-call-history begin end
Prints functions beginning with function number begin until function number end.
This command may not be available for all recording methods.
set record function-call-history-size size
Define how many functions to print in the record function-call-history command. The default value is 10.
show record function-call-history-size
Show how many functions to print in the record function-call-history command.
When your program has stopped, the first thing you need to know is where it stopped and how it got there.
Each time your program performs a function call, information about the call is generated. That information includes the location of the call in your program, the arguments of the call, and the local variables of the function being called. The information is saved in a block of data called a stack frame. The stack frames are allocated in a region of memory called the call stack.
When your program stops, the commands for examining the stack allow you to see all of this information.
One of the stack frames is selected by GDB and many commands refer implicitly to the selected frame. In particular, whenever you ask for the value of a variable in your program, the value is found in the selected frame. There are special commands to select whichever frame you are interested in. See section Selecting a Frame.
When your program stops, GDB automatically selects the currently executing frame and describes it briefly, similar to the frame command (see section Information about a Frame).
8.1 Stack Frames Stack frames 8.2 Backtraces 8.3 Selecting a Frame Selecting a frame 8.4 Information About a Frame Information on a frame
The call stack is divided up into contiguous pieces called stack frames, or frames for short; each frame is the data associated with one call to one function. The frame contains the arguments given to the function, the function's local variables, and the address at which the function is executing.
When your program is started, the stack has only one frame, that of the
function main. This is called the initial frame or the
outermost frame. Each time a function is called, a new frame is
made. Each time a function returns, the frame for that function invocation
is eliminated. If a function is recursive, there can be many frames for
the same function. The frame for the function in which execution is
actually occurring is called the innermost frame. This is the most
recently created of all the stack frames that still exist.
Inside your program, stack frames are identified by their addresses. A stack frame consists of many bytes, each of which has its own address; each kind of computer has a convention for choosing one byte whose address serves as the address of the frame. Usually this address is kept in a register called the frame pointer register (see section $fp) while execution is going on in that frame.
GDB assigns numbers to all existing stack frames, starting with zero for the innermost frame, one for the frame that called it, and so on upward. These numbers do not really exist in your program; they are assigned by GDB to give you a way of designating stack frames in commands.
Some compilers provide a way to compile functions so that they operate without stack frames. (For example, the gcc option `-fomit-frame-pointer' generates functions without a frame.)
frame args
The frame command allows you to move from one stack frame to another, and to print the stack frame you select. args may be either the address of the frame or the stack frame number. Without an argument, frame prints the current stack frame.
select-frame
The select-frame command allows you to move from one stack frame to another without printing the frame. This is the silent version of frame.
A backtrace is a summary of how your program got where it is. It shows one line per frame, for many frames, starting with the currently executing frame (frame zero), followed by its caller (frame one), and on up the stack.
backtrace
bt
Print a backtrace of the entire stack: one line per frame for all frames in the stack.
You can stop the backtrace at any time by typing the system interrupt character, normally Ctrl-c.
backtrace n
bt n
Similar, but print only the innermost n frames.
backtrace -n
bt -n
Similar, but print only the outermost n frames.
backtrace full
bt full
bt full n
bt full -n
Print the values of the local variables also. n specifies the number of frames to print, as described above.
The names where and info stack (abbreviated info s)
are additional aliases for backtrace.
In a multi-threaded program, GDB by default shows the backtrace only for the current thread. To display the backtrace for several or all of the threads, use the command thread apply (see section thread apply). For example, if you type thread apply all backtrace, GDB will display the backtrace for all the threads; this is handy when you debug a core dump of a multi-threaded program.
Each line in the backtrace shows the frame number and the function name.
The program counter value is also shown--unless you use set
print address off. The backtrace also shows the source file name and
line number, as well as the arguments to the function. The program
counter value is omitted if it is at the beginning of the code for that
line number.
Here is an example of a backtrace. It was made with the command `bt 3', so it shows the innermost three frames.
#0 m4_traceon (obs=0x24eb0, argc=1, argv=0x2b8c8)
at builtin.c:993
#1 0x6e38 in expand_macro (sym=0x2b600, data=...) at macro.c:242
#2 0x6840 in expand_token (obs=0x0, t=177664, td=0xf7fffb08)
at macro.c:71
(More stack frames follow...)
The display for frame zero does not begin with a program counter
value, indicating that your program has stopped at the beginning of the
code for line 993 of builtin.c.
The value of parameter data in frame 1 has been replaced by .... By default, GDB prints the value of a parameter only if it is a scalar (integer, pointer, enumeration, etc.). See command set print frame-arguments in 10.8 Print Settings for more details on how to configure the way function parameter values are printed.
If your program was compiled with optimizations, some compilers will optimize away arguments passed to functions if those arguments are never used after the call. Such optimizations generate code that passes arguments through registers, but doesn't store those arguments in the stack frame. GDB has no way of displaying such arguments in stack frames other than the innermost one. Here's what such a backtrace might look like:
#0 m4_traceon (obs=0x24eb0, argc=1, argv=0x2b8c8)
at builtin.c:993
#1 0x6e38 in expand_macro (sym=<optimized out>) at macro.c:242
#2 0x6840 in expand_token (obs=0x0, t=<optimized out>, td=0xf7fffb08)
at macro.c:71
(More stack frames follow...)
The values of arguments that were not saved in their stack frames are shown as `<optimized out>'.
If you need to display the values of such optimized-out arguments, either deduce that from other variables whose values depend on the one you are interested in, or recompile without optimizations.
Most programs have a standard user entry point--a place where system
libraries and startup code transition into user code. For C this is
main.
When GDB finds the entry function in a backtrace it will terminate the backtrace, to avoid tracing into highly system-specific (and generally uninteresting) code.
If you need to examine the startup code, or limit the number of levels in a backtrace, you can change this behavior:
set backtrace past-main
set backtrace past-main on
Backtraces will continue past the user entry point.
set backtrace past-main off
Backtraces will stop when they encounter the user entry point. This is the default.
show backtrace past-main
Display the current user entry point backtrace policy.
set backtrace past-entry
set backtrace past-entry on
Backtraces will continue past the internal entry point of an application. This entry point is encoded by the linker when the application is built, and is likely before the user entry point main (or equivalent) is called.
set backtrace past-entry off
Backtraces will stop when they encounter the internal entry point of an application. This is the default.
show backtrace past-entry
Display the current internal entry point backtrace policy.
set backtrace limit n
set backtrace limit 0
Limit the backtrace to n levels. A value of zero means unlimited.
show backtrace limit
Display the current limit on backtrace levels.
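Putting these settings together, a session that lets backtraces continue past main and caps their depth might look like this (an illustrative sketch):

(gdb) set backtrace past-main on
(gdb) set backtrace limit 20
(gdb) backtrace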
You can control how file names are displayed.
set filename-display
set filename-display relative
Display file names relative to the compilation directory. This is the default.
set filename-display basename
Display only the basename of a filename.
set filename-display absolute
Display an absolute filename.
show filename-display
Show what type of filename is displayed.
Most commands for examining the stack and other data in your program work on whichever stack frame is selected at the moment. Here are the commands for selecting a stack frame; all of them finish by printing a brief description of the stack frame just selected.
frame n
f n
Select frame number n. Recall that frame zero is the innermost (currently executing) frame, frame one is the frame that called the innermost one, and so on. The highest-numbered frame is the one for main.
frame addr
f addr
Select the frame at address addr. This is useful mainly if the chaining of stack frames has been damaged by a bug, making it impossible for GDB to assign numbers properly to all frames.
On the SPARC architecture, frame needs two addresses to
select an arbitrary frame: a frame pointer and a stack pointer.
On the MIPS and Alpha architecture, it needs two addresses: a stack pointer and a program counter.
On the 29k architecture, it needs three addresses: a register stack pointer, a program counter, and a memory stack pointer.
up n
Move n frames up the stack; n defaults to one. For positive numbers n, this advances toward the outermost frame, to higher frame numbers, to frames that have existed longer.
down n
Move n frames down the stack; n defaults to one. For positive numbers n, this advances toward the innermost frame, to lower frame numbers, to frames that were created more recently. You may abbreviate down as do.
All of these commands end by printing two lines of output describing the frame. The first line shows the frame number, the function name, the arguments, and the source file and line number of execution in that frame. The second line shows the text of that source line.
For example:
(gdb) up
#1 0x22f0 in main (argc=1, argv=0xf7fffbf4, env=0xf7fffbfc)
at env.c:10
10 read_input_file (argv[i]);
After such a printout, the list command with no arguments
prints ten lines centered on the point of execution in the frame.
You can also edit the program at the point of execution with your favorite
editing program by typing edit.
See section Printing Source Lines,
for details.
up-silently n
down-silently n
These two commands are variants of up and down, respectively; they differ in that they do their work silently, without causing display of the new frame. They are intended primarily for use in command scripts, where the output might be unnecessary and distracting.
There are several other commands to print information about the selected stack frame.
frame
f
When used without any argument, this command does not change which frame is selected, but prints a brief description of the currently selected stack frame. It can be abbreviated f. With an argument, this command is used to select a stack frame. See section Selecting a Frame.
info frame
info f
This command prints a verbose description of the selected stack frame. The verbose description is useful when something has gone wrong that has made the stack format fail to fit the usual conventions.
info frame addr
info f addr
Print a verbose description of the frame at address addr, without selecting that frame. The selected frame remains unchanged by this command. This requires the same kind of address (more than one for some architectures) that you specify in the frame command. See section Selecting a Frame.
info args
Print the arguments of the selected frame, each on a separate line.
info locals
Print the local variables of the selected frame, each on a separate line.
GDB can print parts of your program's source, since the debugging information recorded in the program tells GDB what source files were used to build it. When your program stops, GDB spontaneously prints the line where it stopped. Likewise, when you select a stack frame (see section Selecting a Frame), GDB prints the line where execution in that frame has stopped. You can print other portions of source files by explicit command.
If you use GDB through its GNU Emacs interface, you may prefer to use Emacs facilities to view source; see Using GDB under GNU Emacs.
9.1 Printing Source Lines Printing source lines 9.2 Specifying a Location How to specify code locations 9.3 Editing Source Files Editing source files 9.4 Searching Source Files Searching source files 9.5 Specifying Source Directories Specifying source directories 9.6 Source and Machine Code Source and machine code
To print lines from a source file, use the list command
(abbreviated l). By default, ten lines are printed.
There are several ways to specify what part of the file you want to
print; see 9.2 Specifying a Location, for the full list.
Here are the forms of the list command most commonly used:
list linenum
Print lines centered around line number linenum in the current source file.
list function
Print lines centered around the beginning of function function.
list
Print more lines. If the last lines printed were printed with a list command, this prints lines following the last lines printed; however, if the last line printed was a solitary line printed as part of displaying a stack frame (see section Examining the Stack), this prints lines centered around that line.
list -
Print lines just before the lines last printed.
By default, GDB prints ten source lines with any of these forms of the list command. You can change this using set listsize:
set listsize count
Make the list command display count source lines (unless the list argument explicitly specifies some other number). Setting count to 0 means there's no limit.
show listsize
Display the number of lines that list prints.
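For example, to make list show twenty lines at a time (an illustrative sketch):

(gdb) set listsize 20
(gdb) list main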
Repeating a list command with RET discards the argument,
so it is equivalent to typing just list. This is more useful
than listing the same lines again. An exception is made for an
argument of `-'; that argument is preserved in repetition so that
each repetition moves up in the source file.
In general, the list command expects you to supply zero, one or two
linespecs. Linespecs specify source lines; there are several ways
of writing them (see section 9.2 Specifying a Location), but the effect is always
to specify some source line.
Here is a complete description of the possible arguments for list:
list linespec
Print lines centered around the line specified by linespec.
list first,last
Print lines from first to last. Both arguments are linespecs. When a list command has two linespecs, and the source file of the second linespec is omitted, this refers to the same source file as the first linespec.
list ,last
Print lines ending with last.
list first,
Print lines starting with first.
list +
Print lines just after the lines last printed.
list -
Print lines just before the lines last printed.
list
As described in the preceding table.
Several commands accept arguments that specify a location of your program's code. Since GDB is a source-level debugger, a location usually specifies some line in the source code; for that reason, locations are also known as linespecs.
Here are all the different ways of specifying a code location that GDB understands:
linenum
Specifies the line number linenum of the current source file.
-offset
+offset
Specifies the line offset lines before or after the current line. For the list command, the current line is the last one printed; for the breakpoint commands, this is the line at which execution stopped in the currently selected stack frame (see section Frames, for a description of stack frames.) When used as the second of the two linespecs in a list command, this specifies the line offset lines up or down from the first linespec.
filename:linenum
Specifies the line linenum in the source file filename.
function
Specifies the line that begins the body of the function function.
function:label
Specifies the line where label appears in function.
filename:function
Specifies the line that begins the body of the function function in the file filename. You only need the file name with a function name to avoid ambiguity when there are identically named functions in different source files.
label
Specifies the line at which the label named label appears in the function corresponding to the currently selected stack frame.
*address
Specifies the program address address. For line-oriented commands, such as list and edit, this specifies a source line that contains address. For break and other breakpoint-oriented commands, this can be used to set breakpoints in parts of your program which do not have debugging information or source files.
Here address may be any expression valid in the current working language (see section working language) that specifies a code address. In addition, as a convenience, GDB extends the semantics of expressions used in locations to cover the situations that frequently happen during debugging. Here are the various forms of address:
expression
Any expression valid in the current working language that specifies a code address.
funcaddr
An address of a function or procedure derived from its name. In C, C++, Java, Objective-C, Fortran, minimal, and assembly, this is simply the function's name function (and actually a special case of a valid expression). In Pascal and Modula-2, this is &function. In Ada, this is function'Address (although the Pascal form also works).
This form specifies the address of the function's first instruction, before the stack frame and arguments have been set up.
'filename'::funcaddr
Like funcaddr above, but also specifies the name of the source file explicitly. This is useful if the name of the function does not specify the function unambiguously, e.g., if there are several functions with identical names in different source files.
-pstap|-probe-stap [objfile:[provider:]]name
The GNU/Linux tool SystemTap provides a way for applications to embed static probes. See section 5.1.10 Static Probe Points, for more information on finding and using static probes. This form of linespec specifies the location of such a static probe.
If objfile is given, only probes coming from that shared library or executable matching objfile as a regular expression are considered. If provider is given, then only probes from that provider are considered. If several probes match the spec, GDB will insert a breakpoint at each one of those probes.
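For example, a breakpoint could be placed on a static probe like this (an illustrative sketch; the probe, provider, and object file names are hypothetical):

(gdb) break -probe-stap my_probe
(gdb) break -probe-stap myapp:my_provider:my_probe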
To edit the lines in a source file, use the edit command.
The editing program of your choice
is invoked with the current line set to
the active line in the program.
Alternatively, there are several ways to specify what part of the file you
want to print if you want to see other parts of the program:
edit location
Edit the source file specified by location. Editing starts at that location, e.g., at the specified source line of the specified file. See section 9.2 Specifying a Location, for all the possible forms of the location argument; here are the forms of the edit command most commonly used:
edit number
Edit the current source file with number as the active line number.
edit function
Edit the file containing function at the beginning of its definition.
You can customize GDB to use any editor you want by setting the environment variable EDITOR before using GDB. For example, to configure GDB to use the vi editor, you could use these commands with the sh shell:
EDITOR=/usr/bin/vi
export EDITOR
gdb ...
or in the csh shell,
setenv EDITOR /usr/bin/vi
gdb ...
There are two commands for searching through the current source file for a regular expression.
forward-search regexp
search regexp
The command `forward-search regexp' checks each line, starting with the one following the last line listed, for a match for regexp. It lists the line that is found. You can use the synonym `search regexp' or abbreviate the command name as fo.
reverse-search regexp
The command `reverse-search regexp' checks each line, starting with the one before the last line listed and going backward, for a match for regexp. It lists the line that is found. You can abbreviate this command as rev.
Executable programs sometimes do not record the directories of the source files from which they were compiled, just the names. Even when they do, the directories could be moved between the compilation and your debugging session. GDB has a list of directories to search for source files; this is called the source path. Each time GDB wants a source file, it tries all the directories in the list, in the order they are present in the list, until it finds a file with the desired name.
For example, suppose an executable references the file `/usr/src/foo-1.0/lib/foo.c', and our source path is `/mnt/cross'. The file is first looked up literally; if this fails, `/mnt/cross/usr/src/foo-1.0/lib/foo.c' is tried; if this fails, `/mnt/cross/foo.c' is opened; if this fails, an error message is printed. GDB does not look up the parts of the source file name, such as `/mnt/cross/src/foo-1.0/lib/foo.c'. Likewise, the subdirectories of the source path are not searched: if the source path is `/mnt/cross', and the binary refers to `foo.c', GDB would not find it under `/mnt/cross/usr/src/foo-1.0/lib'.
Plain file names, relative file names with leading directories, file names containing dots, etc. are all treated as described above; for instance, if the source path is `/mnt/cross', and the source file is recorded as `../lib/foo.c', GDB would first try `../lib/foo.c', then `/mnt/cross/../lib/foo.c', and after that---`/mnt/cross/foo.c'.
Note that the executable search path is not used to locate the source files.
Whenever you reset or rearrange the source path, GDB clears out any information it has cached about where source files are found and where each line is in the file.
When you start GDB, its source path includes only `cdir' and `cwd', in that order.
To add other directories, use the directory command.
The search path is used to find both program source files and script files (read using the `-command' option and `source' command).
In addition to the source path, GDB provides a set of commands that manage a list of source path substitution rules. A substitution rule specifies how to rewrite source directories stored in the program's debug information in case the sources were moved to a different directory between compilation and debugging. A rule is made of two strings, the first specifying what needs to be rewritten in the path, and the second specifying how it should be rewritten. In set substitute-path, we name these two parts from and to respectively. GDB does a simple string replacement of from with to at the start of the directory part of the source file name, and uses that result instead of the original file name to look up the sources.
Using the previous example, suppose the `foo-1.0' tree has been moved from `/usr/src' to `/mnt/cross', then you can tell GDB to replace `/usr/src' in all source path names with `/mnt/cross'. The first lookup will then be `/mnt/cross/foo-1.0/lib/foo.c' in place of the original location of `/usr/src/foo-1.0/lib/foo.c'. To define a source path substitution rule, use the set substitute-path command (see set substitute-path).
To avoid unexpected substitution results, a rule is applied only if the from part of the directory name ends at a directory separator. For instance, a rule substituting `/usr/source' into `/mnt/cross' will be applied to `/usr/source/foo-1.0' but not to `/usr/sourceware/foo-2.0'. And because the substitution is applied only at the beginning of the directory name, this rule will not be applied to `/root/usr/source/baz.c' either.
In many cases, you can achieve the same result using the directory
command. However, set substitute-path can be more efficient in
the case where the sources are organized in a complex tree with multiple
subdirectories. With the directory command, you need to add each
subdirectory of your project. If you moved the entire tree while
preserving its internal organization, then set substitute-path
allows you to direct the debugger to all the sources with one single
command.
set substitute-path is also more than just a shortcut command.
The source path is only used if the file at the original location no
longer exists. On the other hand, set substitute-path modifies
the debugger behavior to look at the rewritten location instead. So, if
for any reason a source file that is not relevant to your executable is
located at the original location, a substitution rule is the only
method available to point at the new location.
You can configure a default source path substitution rule by configuring GDB with the `--with-relocated-sources=dir' option. The dir should be the name of a directory under GDB's configured prefix (set with `--prefix' or `--exec-prefix'), and directory names in debug information under dir will be adjusted automatically if the installed GDB is moved to a new location. This is useful if GDB, libraries or executables with debug information and corresponding source code are being moved together.
directory dirname ...
dir dirname ...
Add directory dirname to the front of the source path. Several directory names may be given to this command, separated by `:' or whitespace.
You can use the string `$cdir' to refer to the compilation directory (if one is recorded), and `$cwd' to refer to the current working directory. `$cwd' is not the same as `.'---the former tracks the current working directory as it changes during your session, while the latter is immediately expanded to the current directory at the time you add an entry to the source path.
directory
Reset the source path to its default value (`$cdir:$cwd' on Unix systems). This requires confirmation.
set directories path-list
Set the source path to path-list. `$cdir:$cwd' are added if missing.
show directories
Print the source path: show which directories it contains.
set substitute-path from to
Define a source path substitution rule, and add it at the end of the current list of existing substitution rules.
For example, if the file `/foo/bar/baz.c' was moved to `/mnt/cross/baz.c', then the command
(gdb) set substitute-path /usr/src /mnt/cross
will tell GDB to replace `/usr/src' with `/mnt/cross', which will allow GDB to find the file `baz.c' even though it was moved.
In the case when more than one substitution rule has been defined, the rules are evaluated one by one in the order in which they have been defined. The first one matching, if any, is selected to perform the substitution.
For instance, if we had entered the following commands:
(gdb) set substitute-path /usr/src/include /mnt/include
(gdb) set substitute-path /usr/src /mnt/src
GDB would then rewrite `/usr/src/include/defs.h' into `/mnt/include/defs.h' by using the first rule. However, it would use the second rule to rewrite `/usr/src/lib/foo.c' into `/mnt/src/lib/foo.c'.
unset substitute-path [path]
If a path is specified, search the current list of substitution rules for a rule that would rewrite that path, and delete it if found; a warning is emitted if no matching rule could be found.
If no path is specified, then all substitution rules are deleted.
show substitute-path [path]
If a path is specified, then print the source path substitution rule which would rewrite that path, if any.
If no path is specified, then print all existing source path substitution rules.
If your source path is cluttered with directories that are no longer of interest, GDB may sometimes cause confusion by finding the wrong versions of source. You can correct the situation as follows:
Use directory with no argument to reset the source path to its default value.
Use directory with suitable arguments to reinstall the directories you want in the source path. You can add all the directories in one command.
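For example (an illustrative sketch; the directory names are hypothetical):

(gdb) directory
(gdb) directory /home/user/myproject/src /home/user/myproject/lib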
You can use the command info line to map source lines to program
addresses (and vice versa), and the command disassemble to display
a range of addresses as machine instructions. You can use the command
set disassemble-next-line to set whether to disassemble the next
source line when execution stops. When run under GNU Emacs
mode, the info line command causes the arrow to point to the
line specified. Also, info line prints addresses in symbolic form as
well as hex.
info line linespec
Print the starting address of the code for a source line specified by linespec.
For example, we can use info line to discover the location of
the object code for the first line of function
m4_changequote:
(gdb) info line m4_changequote
Line 895 of "builtin.c" starts at pc 0x634c and ends at 0x6350.
We can also inquire (using *addr as the form for
linespec) what source line covers a particular address:
(gdb) info line *0x63ff
Line 926 of "builtin.c" starts at pc 0x63e4 and ends at 0x6404.
After info line, the default address for the x command
is changed to the starting address of the line, so that `x/i' is
sufficient to begin examining the machine code (see section Examining Memory). Also, this address is saved as the value of the
convenience variable $_ (see section Convenience Variables).
disassemble
disassemble /m
disassemble /r
This specialized command dumps a range of memory as machine instructions. You can also print mixed source+disassembly by specifying the /m modifier and print the raw instructions in hex as well as in symbolic form by specifying the /r modifier.
The default memory range is the function surrounding the program counter of the selected frame. A single argument to this command is a program counter value; GDB dumps the function surrounding this value. When two arguments are given, they should be separated by a comma, possibly surrounded by whitespace. The arguments specify a range of addresses to dump, in one of two forms:
start,end
the addresses from start (inclusive) to end (exclusive)
start,+length
the addresses from start (inclusive) to start+length (exclusive).
When two arguments are specified, the name of the function is also printed (since there could be several functions in the given range).
The argument(s) can be any expression yielding a numeric value, such as `0x32c4', `&main+10' or `$pc - 8'.
If the range of memory being disassembled contains the current program counter, the instruction at that location is shown with a => marker.
The following example shows the disassembly of a range of addresses of HP PA-RISC 2.0 code:
(gdb) disas 0x32c4, 0x32e4
Dump of assembler code from 0x32c4 to 0x32e4:
   0x32c4 <main+204>:      addil 0,dp
   0x32c8 <main+208>:      ldw 0x22c(sr0,r1),r26
   0x32cc <main+212>:      ldil 0x3000,r31
   0x32d0 <main+216>:      ble 0x3f8(sr4,r31)
   0x32d4 <main+220>:      ldo 0(r31),rp
   0x32d8 <main+224>:      addil -0x800,dp
   0x32dc <main+228>:      ldo 0x588(r1),r26
   0x32e0 <main+232>:      ldil 0x3000,r31
End of assembler dump.
Here is an example showing mixed source+assembly for Intel x86, when the program is stopped just after function prologue:
(gdb) disas /m main
Dump of assembler code for function main:
5 {
0x08048330 <+0>: push %ebp
0x08048331 <+1>: mov %esp,%ebp
0x08048333 <+3>: sub $0x8,%esp
0x08048336 <+6>: and $0xfffffff0,%esp
0x08048339 <+9>: sub $0x10,%esp
6 printf ("Hello.\n");
=> 0x0804833c <+12>: movl $0x8048440,(%esp)
0x08048343 <+19>: call 0x8048284 <puts@plt>
7 return 0;
8 }
0x08048348 <+24>: mov $0x0,%eax
0x0804834d <+29>: leave
0x0804834e <+30>: ret
End of assembler dump.
Here is another example showing raw instructions in hex for AMD x86-64:
(gdb) disas /r 0x400281,+10
Dump of assembler code from 0x400281 to 0x40028b:
   0x0000000000400281:  38 36                cmp    %dh,(%rsi)
   0x0000000000400283:  2d 36 34 2e 73       sub    $0x732e3436,%eax
   0x0000000000400288:  6f                   outsl  %ds:(%rsi),(%dx)
   0x0000000000400289:  2e 32 00             xor    %cs:(%rax),%al
End of assembler dump.
Addresses cannot be specified as a linespec (see section 9.2 Specifying a Location).
So, for example, if you want to disassemble function bar
in file `foo.c', you must type `disassemble 'foo.c'::bar'
and not `disassemble foo.c:bar'.
Some architectures have more than one commonly-used set of instruction mnemonics or other syntax.
For programs that were dynamically linked and use shared libraries, instructions that call functions or branch to locations in the shared libraries might show a seemingly bogus location--it's actually a location of the relocation table. On some architectures, GDB might be able to resolve these to actual function names.
set disassembly-flavor instruction-set
Select the instruction set to use when disassembling the program via the
disassemble or x/i commands. Currently this command is defined
only for the Intel x86 family. You can set instruction-set to
either intel or att. The default is att, the AT&T
flavor used by default by Unix assemblers for x86-based targets.
show disassembly-flavor
Show the current setting of the disassembly flavor.
set disassemble-next-line
show disassemble-next-line
Control and show whether GDB will disassemble the next source line or instruction when execution stops.
10. Examining Data
The usual way to examine data in your program is with the print
command (abbreviated p), or its synonym inspect. It
evaluates and prints the value of an expression of the language your
program is written in (see section Using GDB with Different Languages). It may also print the expression using a
Python-based pretty-printer (see section 10.9 Pretty Printing).
print expr
print /f expr
expr is an expression (in the source language). By default the
value of expr is printed in a format appropriate to its data type;
you can choose a different format by specifying `/f', where f
is a letter specifying the format; see section 10.5 Output Formats.
print
print /f
If you omit expr, GDB displays the last value again (from the
value history; see section Value History). This allows you to
conveniently inspect the same value in an alternative format.
A more low-level way of examining data is with the x command.
It examines data in memory at a specified address and prints it in a
specified format. See section Examining Memory.
If you are interested in information about types, or about how the
fields of a struct or a class are declared, use the ptype exp
command rather than print. See section Examining the Symbol Table.
Another way of examining values of expressions and type information is
through the Python extension command explore (available only if
the GDB build is configured with --with-python). It
offers an interactive way to start at the highest level (or, the most
abstract level) of the data type of an expression (or, the data type
itself) and explore all the way down to leaf scalar values/fields
embedded in the higher level data types.
explore arg
arg is either an expression (in the source language), or a type visible in the current context of the program being debugged.
The working of the explore command can be illustrated with an
example. If a data type struct ComplexStruct is defined in your
C program as
struct SimpleStruct
{
int i;
double d;
};
struct ComplexStruct
{
struct SimpleStruct *ss_p;
int arr[10];
};
followed by variable declarations as
struct SimpleStruct ss = { 10, 1.11 };
struct ComplexStruct cs = { &ss, { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 } };
then, the value of the variable cs can be explored using the
explore command as follows.
(gdb) explore cs
The value of `cs' is a struct/class of type `struct ComplexStruct' with
the following fields:

  ss_p = <Enter 0 to explore this field of type `struct SimpleStruct *'>
   arr = <Enter 1 to explore this field of type `int [10]'>

Enter the field number of choice:
Since the fields of cs are not scalar values, you are
prompted to choose the field you want to explore. Let's say you choose
the field ss_p by entering 0. Then, since this field is a
pointer, you will be asked if it is pointing to a single value. From
the declaration of cs above, it is indeed pointing to a single
value, hence you enter y. If you enter n, then you will
be asked if it is pointing to an array of values, in which case this
field will be explored as if it were an array.
`cs.ss_p' is a pointer to a value of type `struct SimpleStruct'
Continue exploring it as a pointer to a single value [y/n]: y
The value of `*(cs.ss_p)' is a struct/class of type `struct
SimpleStruct' with the following fields:

  i = 10 .. (Value of type `int')
  d = 1.1100000000000001 .. (Value of type `double')

Press enter to return to parent value:
If the field arr of cs was chosen for exploration by
entering 1 earlier, then since it is an array, you will be
prompted to enter the index of the element in the array that you want
to explore.
`cs.arr' is an array of `int'.
Enter the index of the element you want to explore in `cs.arr': 5

`(cs.arr)[5]' is a scalar value of type `int'.

(cs.arr)[5] = 4

Press enter to return to parent value:
In general, at any stage of exploration, you can go deeper towards the leaf values by responding to the prompts appropriately, or hit the return key to return to the enclosing data structure (the higher level data structure).
Similar to exploring values, you can use the explore command to
explore types. Instead of specifying a value (which is typically a
variable name or an expression valid in the current context of the
program being debugged), you specify a type name. If you consider the
same example as above, you can explore the type
struct ComplexStruct by passing the argument
struct ComplexStruct to the explore command.
(gdb) explore struct ComplexStruct
By responding to the prompts appropriately in the subsequent interactive
session, you can explore the type struct ComplexStruct in a
manner similar to how the value cs was explored in the above
example.
The explore command also has two sub-commands,
explore value and explore type. The former sub-command is
a way to explicitly specify that value exploration of the argument is
being invoked, while the latter is a way to explicitly specify that type
exploration of the argument is being invoked.
explore value expr
This sub-command of explore explores the value of the
expression expr (if expr is an expression valid in the
current context of the program being debugged). The behavior of this
command is identical to that of the explore command being
passed the argument expr.
explore type arg
This sub-command of explore explores the type of arg (if
arg is a type visible in the current context of program being
debugged), or the type of the value/expression arg (if arg
is an expression valid in the current context of the program being
debugged). If arg is a type, then the behavior of this command is
identical to that of the explore command being passed the
argument arg. If arg is an expression, then the behavior of
this command will be identical to that of the explore command
being passed the type of arg as the argument.
10.1 Expressions
print and many other commands accept an expression and
compute its value. Any kind of constant, variable or operator defined
by the programming language you are using is valid in an expression in
GDB. This includes conditional expressions, function calls,
casts, and string constants. It also includes preprocessor macros, if
you compiled your program to include this information; see
4.1 Compiling for Debugging.
GDB supports array constants in expressions input by
the user. The syntax is {element, element...}. For example,
you can use the command print {1, 2, 3} to create an array
of three integers. If you pass an array to a function or assign it
to a program variable, GDB copies the array to memory that
is malloced in the target program.
Because C is so widespread, most of the expressions shown in examples in this manual are in C. See section Using GDB with Different Languages, for information on how to use expressions in other languages.
In this section, we discuss operators that you can use in expressions regardless of your programming language.
Casts are supported in all languages, not just in C, because it is so useful to cast a number into a pointer in order to examine a structure at that address in memory.
GDB supports these operators, in addition to those common to programming languages:
@
`@' is a binary operator for treating parts of memory as arrays. See section Artificial Arrays, for more information.
::
`::' allows you to specify a variable in terms of the file or function where it is defined. See section Program Variables.
{type} addr
Refers to an object of type type stored at address addr in memory. The address addr may be any expression whose value is an integer or pointer (but parentheses are required around binary operators, just as in a cast). This construct is allowed regardless of what kind of data is normally supposed to reside at addr.
10.2 Ambiguous Expressions
Expressions can sometimes contain some ambiguous elements. For instance, some programming languages (notably Ada, C++ and Objective-C) permit a single function name to be defined several times, for application in different contexts. This is called overloading. Another example involving Ada is generics. A generic package is similar to C++ templates and is typically instantiated several times, resulting in the same function name being defined in different contexts.
In some cases and depending on the language, it is possible to adjust the expression to remove the ambiguity. For instance in C++, you can specify the signature of the function you want to break on, as in break function(types). In Ada, using the fully qualified name of your function often makes the expression unambiguous as well.
When an ambiguity that needs to be resolved is detected, the debugger has the capability to display a menu of numbered choices for each possibility, and then waits for the selection with the prompt `>'. The first option is always `[0] cancel', and typing 0 RET aborts the current command. If the command in which the expression was used allows more than one choice to be selected, the next option in the menu is `[1] all', and typing 1 RET selects all possible choices.
For example, the following session excerpt shows an attempt to set a
breakpoint at the overloaded symbol String::after.
We choose three particular definitions of that function name:
(gdb) b String::after
[0] cancel
[1] all
[2] file:String.cc; line number:867
[3] file:String.cc; line number:860
[4] file:String.cc; line number:875
[5] file:String.cc; line number:853
[6] file:String.cc; line number:846
[7] file:String.cc; line number:735
> 2 4 6
Breakpoint 1 at 0xb26c: file String.cc, line 867.
Breakpoint 2 at 0xb344: file String.cc, line 875.
Breakpoint 3 at 0xafcc: file String.cc, line 846.
Multiple breakpoints were set.
Use the "delete" command to delete unwanted breakpoints.
(gdb)
set multiple-symbols mode
This option allows you to adjust the debugger behavior when an expression is ambiguous.
By default, mode is set to all. If the command with which
the expression is used allows more than one choice, then
GDB automatically selects all possible choices. For instance, inserting
a breakpoint on a function using an ambiguous name results in a breakpoint
inserted on each possible match. However, if a unique choice must be made,
then GDB uses the menu to help you disambiguate the expression.
For instance, printing the address of an overloaded function will result
in the use of the menu.
When mode is set to ask, the debugger always uses the menu
when an ambiguity is detected.
Finally, when mode is set to cancel, the debugger reports
an error due to the ambiguity and the command is aborted.
show multiple-symbols
Show the current value of the multiple-symbols setting.
10.3 Program Variables
The most common kind of expression to use is the name of a variable in your program.
Variables in expressions are understood in the selected stack frame (see section Selecting a Frame); they must be either global (or file-static), or visible according to the scope rules of the programming language from the point of execution in that frame.
This means that in the function
foo (a)
int a;
{
bar (a);
{
int b = test ();
bar (b);
}
}
you can examine and use the variable a whenever your program is
executing within the function foo, but you can only use or
examine the variable b while your program is executing inside
the block where b is declared.
There is an exception: you can refer to a variable or function whose
scope is a single source file even if the current execution point is not
in this file. But it is possible to have more than one such variable or
function with the same name (in different source files). If that
happens, referring to that name has unpredictable effects. If you wish,
you can specify a static variable in a particular function or file by
using the colon-colon (::) notation:
file::variable
function::variable
Here file or function is the name of the context for the
static variable. In the case of file names, you can use quotes to
make sure GDB parses the file name as a single word--for example,
to print a global value of x defined in `f2.c':
(gdb) p 'f2.c'::x
The :: notation is normally used for referring to
static variables, since you typically disambiguate uses of local variables
in functions by selecting the appropriate frame and using the
simple name of the variable. However, you may also use this notation
to refer to local variables in frames enclosing the selected frame:
void
foo (int a)
{
if (a < 10)
bar (a);
else
process (a); /* Stop here */
}
int
bar (int a)
{
foo (a + 5);
}
For example, if there is a breakpoint at the commented line,
here is what you might see
when the program stops after executing the call bar(0):
(gdb) p a
$1 = 10
(gdb) p bar::a
$2 = 5
(gdb) up 2
#2  0x080483d0 in foo (a=5) at foobar.c:12
(gdb) p a
$3 = 5
(gdb) p bar::a
$4 = 0
These uses of `::' are very rarely in conflict with the very similar use of the same notation in C++. GDB also supports use of the C++ scope resolution operator in GDB expressions.
Warning: Occasionally, a local variable may appear to have the wrong value at certain points in a function--just after entry to a new scope, and just before exit. You may see this problem when you are stepping by machine instructions. This is because, on most machines, it takes more than one instruction to set up a stack frame (including local variable definitions); if you are stepping by machine instructions, variables may appear to have the wrong values until the stack frame is completely built. On exit, it usually also takes more than one machine instruction to destroy a stack frame; after you begin stepping through that group of instructions, local variable definitions may be gone.
This may also happen when the compiler does significant optimizations. To be sure of always seeing accurate values, turn off all optimization when compiling.
Another possible effect of compiler optimizations is to optimize unused variables out of existence, or assign variables to registers (as opposed to memory addresses). Depending on the support for such cases offered by the debug info format used by the compiler, GDB might not be able to display values for such local variables. If that happens, GDB will print a message like this:
No symbol "foo" in current context. |
To solve such problems, either recompile without optimizations, or use a different debug info format, if the compiler supports several such formats. See section 4.1 Compiling for Debugging, for more information on choosing compiler options. See section C and C++, for more information about debug info formats that are best suited to C++ programs.
If you ask to print an object whose contents are unknown to GDB, e.g., because its data type is not completely specified by the debug information, GDB will say `<incomplete type>'. See section incomplete type, for more about this.
If you append the @entry string to a function parameter name, you get its value at the time the function was called. If the value is not available, an error message is printed. Entry values are available only with some compilers. Entry values are normally also printed at the function parameter list according to set print entry-values.
Breakpoint 1, d (i=30) at gdb.base/entry-value.c:29
29        i++;
(gdb) next
30        e (i);
(gdb) print i
$1 = 31
(gdb) print i@entry
$2 = 30
Strings are identified as arrays of char values without specified
signedness. Arrays of either signed char or unsigned char get
printed as arrays of 1-byte integers. The -fsigned-char or
-funsigned-char compiler options have no effect, as GDB
defines the literal string type "char" as char without a sign.
For program code
char var0[] = "A";
signed char var1[] = "A";
You get during debugging
(gdb) print var0
$1 = "A"
(gdb) print var1
$2 = {65 'A', 0 '\0'}
10.4 Artificial Arrays
It is often useful to print out several successive objects of the same type in memory; a section of an array, or an array of dynamically determined size for which only a pointer exists in the program.
You can do this by referring to a contiguous span of memory as an artificial array, using the binary operator `@'. The left operand of `@' should be the first element of the desired array and be an individual object. The right operand should be the desired length of the array. The result is an array value whose elements are all of the type of the left argument. The first element is actually the left argument; the second element comes from bytes of memory immediately following those that hold the first element, and so on. Here is an example. If a program says
int *array = (int *) malloc (len * sizeof (int));
you can print the contents of array with
p *array@len
The left operand of `@' must reside in memory. Array values made with `@' in this way behave just like other arrays in terms of subscripting, and are coerced to pointers when used in expressions. Artificial arrays most often appear in expressions via the value history (see section Value History), after printing one out.
Another way to create an artificial array is to use a cast. This re-interprets a value as if it were an array. The value need not be in memory:
(gdb) p/x (short[2])0x12345678
$1 = {0x1234, 0x5678}
As a convenience, if you leave the array length out (as in `(type[])value'), GDB calculates the size to fill the value (as `sizeof(value)/sizeof(type)'):
(gdb) p/x (short[])0x12345678
$2 = {0x1234, 0x5678}
Sometimes the artificial array mechanism is not quite enough; in
moderately complex data structures, the elements of interest may not
actually be adjacent--for example, if you are interested in the values
of pointers in an array. One useful work-around in this situation is
to use a convenience variable (see section Convenience Variables) as a counter in an expression that prints the first
interesting value, and then repeat that expression via RET. For
instance, suppose you have an array dtab of pointers to
structures, and you are interested in the values of a field fv
in each structure. Here is an example of what you might type:
set $i = 0
p dtab[$i++]->fv
RET
RET
...
10.5 Output Formats
By default, GDB prints a value according to its data type. Sometimes this is not what you want. For example, you might want to print a number in hex, or a pointer in decimal. Or you might want to view data in memory at a certain address as a character string or as an instruction. To do these things, specify an output format when you print a value.
The simplest use of output formats is to say how to print a value
already computed. This is done by starting the arguments of the
print command with a slash and a format letter. The format
letters supported are:
x
Regard the bits of the value as an integer, and print the integer in hexadecimal.
d
Print as integer in signed decimal.
u
Print as integer in unsigned decimal.
o
Print as integer in octal.
t
Print as integer in binary. The letter `t' stands for "two".
a
Print as an address, both absolute in hexadecimal and as an offset from the nearest preceding symbol. You can use this format to discover where (in what function) an unknown address is located:
(gdb) p/a 0x54320
$3 = 0x54320 <_initialize_vx+396>
The command info symbol 0x54320 yields similar results.
See section info symbol.
c
Regard as an integer and print it as a character constant.
Without this format, GDB displays char,
unsigned char, and signed char data as character
constants. Single-byte members of vectors are displayed as integer
data.
f
Regard the bits of the value as a floating point number and print using typical floating point syntax.
s
Regard as a string, if possible.
Without this format, GDB displays pointers to and arrays of
char, unsigned char, and signed char as
strings. Single-byte members of a vector are displayed as an integer
array.
r
Print using the `raw' formatting, bypassing any Python-based pretty-printer that might apply.
For example, to print the program counter in hex (see section 10.13 Registers), type
p/x $pc
Note that no space is required before the slash; this is because command names in GDB cannot contain a slash.
To reprint the last value in the value history with a different format,
you can use the print command with just a format and no
expression. For example, `p/x' reprints the last value in hex.
10.6 Examining Memory
You can use the command x (for "examine") to examine memory in
any of several formats, independently of your program's data types.
x/nfu addr
x addr
x
Use the x command to examine memory.
n, f, and u are all optional parameters that specify how much memory to display and how to format it; addr is an expression giving the address where you want to start displaying memory. If you use defaults for nfu, you need not type the slash `/'. Several commands set convenient defaults for addr.
f, the display format
The display format is one of the formats used by print
(`x', `d', `u', `o', `t', `a', `c',
`f', `s'), and in addition `i' (for machine instructions).
The default is `x' (hexadecimal) initially. The default changes
each time you use either x or print.
u, the unit size
The unit size is any of:
b
Bytes.
h
Halfwords (two bytes).
w
Words (four bytes). This is the initial default.
g
Giant words (eight bytes).
Each time you specify a unit size with x, that size becomes the
default unit the next time you use x. For the `i' format,
the unit size is ignored and is normally not written. For the `s' format,
the unit size defaults to `b', unless it is explicitly given.
Use x /hs to display 16-bit char strings and x /ws to display
32-bit strings. The next use of x /s will again display 8-bit strings.
Note that the results depend on the programming language of the
current compilation unit. If the language is C, the `s'
modifier will use the UTF-16 encoding while `w' will use
UTF-32. The encoding is set by the programming language and cannot
be altered.
Other commands that set the default address for x include info breakpoints (to
the address of the last breakpoint listed), info line (to the
starting address of a line), and print (if you use it to display
a value from memory).
For example, `x/3uh 0x54320' is a request to display three halfwords
(h) of memory, formatted as unsigned decimal integers (`u'),
starting at address 0x54320. `x/4xw $sp' prints the four
words (`w') of memory above the stack pointer (here, `$sp';
see section Registers) in hexadecimal (`x').
Since the letters indicating unit sizes are all distinct from the letters specifying output formats, you do not have to remember whether unit size or format comes first; either order works. The output specifications `4xw' and `4wx' mean exactly the same thing. (However, the count n must come first; `wx4' does not work.)
Even though the unit size u is ignored for the formats `s'
and `i', you might still want to use a count n; for example,
`3i' specifies that you want to see three machine instructions,
including any operands. For convenience, especially when used with
the display command, the `i' format also prints branch delay
slot instructions, if any, beyond the count specified, which immediately
follow the last instruction that is within the count. The command
disassemble gives an alternative way of inspecting machine
instructions; see Source and Machine Code.
All the defaults for the arguments to x are designed to make it
easy to continue scanning memory with minimal specifications each time
you use x. For example, after you have inspected three machine
instructions with `x/3i addr', you can inspect the next seven
with just `x/7'. If you use RET to repeat the x command,
the repeat count n is used again; the other arguments default as
for successive uses of x.
When examining machine instructions, the instruction at the current program
counter is shown with a => marker. For example:
(gdb) x/5i $pc-6
   0x804837f <main+11>: mov    %esp,%ebp
   0x8048381 <main+13>: push   %ecx
   0x8048382 <main+14>: sub    $0x4,%esp
=> 0x8048385 <main+17>: movl   $0x8048460,(%esp)
   0x804838c <main+24>: call   0x80482d4 <puts@plt>
The addresses and contents printed by the x command are not saved
in the value history because there is often too much of them and they
would get in the way. Instead, GDB makes these values available for
subsequent use in expressions as values of the convenience variables
$_ and $__. After an x command, the last address
examined is available for use in expressions in the convenience variable
$_. The contents of that address, as examined, are available in
the convenience variable $__.
If the x command has a repeat count, the address and contents saved
are from the last memory unit printed; this is not the same as the last
address printed if several units were printed on the last line of output.
When you are debugging a program running on a remote target machine
(see section 20. Debugging Remote Programs), you may wish to verify the program's image in the
remote machine's memory against the executable file you downloaded to
the target. The compare-sections command is provided for such
situations.
compare-sections [section-name]
Compare the data of a loadable section section-name in the
executable file of the program being debugged with the same section in
the remote machine's memory. With no arguments, compare all loadable
sections. This command's availability depends on the target's support
for the "qCRC" remote request.
10.7 Automatic Display
If you find that you want to print the value of an expression frequently (to see how it changes), you might want to add it to the automatic display list so that GDB prints its value each time your program stops. Each expression added to the list is given a number to identify it; to remove an expression from the list, you specify that number. The automatic display looks like this:
2: foo = 38
3: bar[5] = (struct hack *) 0x3804
This display shows item numbers, expressions and their current values. As with
displays you request manually using x or print, you can
specify the output format you prefer; in fact, display decides
whether to use print or x depending on your format
specification--it uses x if you specify either the `i'
or `s' format, or a unit size; otherwise it uses print.
display expr
Add the expression expr to the list of expressions to display
each time your program stops.
display does not repeat if you press RET again after using it.
display/fmt expr
For fmt specifying only a display format and not a size or count,
add the expression expr to the auto-display list but arrange to
display it each time in the specified format fmt.
display/fmt addr
For fmt `i' or `s', or including a unit-size or a number of
units, add the expression addr as a memory address to be examined
each time your program stops.
For example, `display/i $pc' can be helpful, to see the machine instruction about to be executed each time execution stops (`$pc' is a common name for the program counter; see section Registers).
undisplay dnums...
delete display dnums...
Remove items from the list of expressions to display. Specify the
numbers of the displays that you want affected with the argument
dnums. It can be a single display number, one of the numbers shown
in the first field of the info display output, or it could be a range
of display numbers, such as 2-4.
undisplay does not repeat if you press RET after using it.
(Otherwise you would just get the error `No display number ...'.)
disable display dnums...
Disable the display of item numbers dnums. A disabled display
item is not printed automatically, but is not forgotten; it may be
enabled again later. Specify the numbers as individual display numbers
or ranges such as 2-4.
enable display dnums...
Enable the display of item numbers dnums. It becomes effective
once again in auto display of its expression, until you specify
otherwise. Specify the numbers as individual display numbers or ranges
such as 2-4.
display
Display the current values of the expressions on the list, just as is
done when your program stops.
info display
Print the list of expressions previously set up to display
automatically, each one with its item number, but without showing the
values. This includes disabled expressions, which are marked as such.
If a display expression refers to local variables, then it does not make
sense outside the lexical context for which it was set up. Such an
expression is disabled when execution enters a context where one of its
variables is not defined. For example, if you give the command
display last_char while inside a function with an argument
last_char, GDB displays this argument while your program
continues to stop inside that function. When it stops elsewhere--where
there is no variable last_char--the display is disabled
automatically. The next time your program stops where last_char
is meaningful, you can enable the display expression once again.
10.8 Print Settings
GDB provides the following ways to control how arrays, structures, and symbols are printed.
These settings are useful for debugging programs in any language:
set print address
set print address on
GDB prints memory addresses showing the location of stack
traces, structure values, pointer values, breakpoints, and so forth,
even when it also displays the contents of those addresses. The default
is on. For example, this is what a stack frame display looks like with
set print address on:
(gdb) f
#0 set_quotes (lq=0x34c78 "<<", rq=0x34c88 ">>")
at input.c:530
530 if (lquote != def_lquote)
set print address off
Do not print addresses when displaying their contents. For example,
this is the same stack frame displayed with
set print address off:
(gdb) set print addr off
(gdb) f
#0  set_quotes (lq="<<", rq=">>") at input.c:530
530         if (lquote != def_lquote)
You can use `set print address off' to eliminate all machine
dependent displays from the GDB interface. For example, with
print address off, you should get the same text for backtraces on
all machines--whether or not they involve pointer arguments.
show print address
Show whether or not addresses are to be printed.
When GDB prints a symbolic address, it normally prints the
closest earlier symbol plus an offset. If that symbol does not uniquely
identify the address (for example, it is a name whose scope is a single
source file), you may need to clarify. One way to do this is with
info line, for example `info line *0x4537'. Alternately,
you can set GDB to print the source file and line number when
it prints a symbolic address:
set print symbol-filename on
Tell GDB to print the source file name and line number of a
symbol in the symbolic form of an address.
set print symbol-filename off
Do not print source file name and line number of a symbol. This is the
default.
show print symbol-filename
Show whether or not GDB will print the source file name and
line number of a symbol in the symbolic form of an address.
Another situation where it is helpful to show symbol filenames and line numbers is when disassembling code; GDB shows you the line number and source file that corresponds to each instruction.
Also, you may wish to see the symbolic form only if the address being printed is reasonably close to the closest earlier symbol:
set print max-symbolic-offset max-offset
Tell GDB to only display the symbolic form of an address if the
offset between the closest earlier symbol and the address is less than
max-offset.
show print max-symbolic-offset
Ask how large the maximum offset is that GDB prints in a
symbolic address.
If you have a pointer and you are not sure where it points, try
`set print symbol-filename on'. Then you can determine the name
and source file location of the variable where it points, using
`p/a pointer'. This interprets the address in symbolic form.
For example, here GDB shows that a variable ptt points
at another variable t, defined in `hi2.c':
(gdb) set print symbol-filename on
(gdb) p/a ptt
$4 = 0xe008 <t in hi2.c>
Warning: For pointers that point to a local variable, `p/a'
does not show the symbol name and filename of the referent, even with
the appropriate set print options turned on.
You can also enable `/a'-like formatting all the time using `set print symbol on':
set print symbol on
Tell GDB to print the symbol corresponding to an address, if
one exists.
set print symbol off
Tell GDB not to print the symbol corresponding to an address.
This is the default.
show print symbol
Show whether GDB will display the symbol corresponding to an
address.
Other settings control how different kinds of objects are printed:
set print array
set print array on
Pretty print arrays. This format is more convenient to read, but uses
more space. The default is off.
set print array off
Return to compressed format for arrays.
show print array
Show whether compressed or pretty format is selected for displaying
arrays.
set print array-indexes
set print array-indexes on
Print the index of each element when displaying arrays. This may be
more convenient when you need to locate a given element in the array,
or quickly find the index of a given element. The default is off.
set print array-indexes off
Stop printing element indexes when displaying arrays.
show print array-indexes
Show whether the index of each element is printed when displaying
arrays.
set print elements number-of-elements
Set a limit on how many elements of an array GDB will print.
If GDB is printing a large array, it stops printing after it
has printed the number of elements set by the
set print elements command.
This limit also applies to the display of strings.
When GDB starts, this limit is set to 200.
Setting number-of-elements to zero means that the printing is unlimited.
show print elements
Display the number of elements of a large array that GDB will
print. If the number is 0, then the printing is unlimited.
set print frame-arguments value
This command allows you to control how the values of arguments are
printed when the debugger prints a frame (see section Frames). The
possible values are:
all
The values of all arguments are printed.
scalars
Print the value of an argument only if it is a scalar. The value of
more complex arguments such as arrays, structures, unions, etc, is
replaced by .... This is the default. Here is an example where
only scalar arguments are shown:
#1  0x08048361 in call_me (i=3, s=..., ss=0xbf8d508c, u=..., e=green)
    at frame-args.c:23
none
None of the argument values are printed. Instead, the value of each
argument is replaced by .... In this case, the example above now
becomes:
#1  0x08048361 in call_me (i=..., s=..., ss=..., u=..., e=...)
    at frame-args.c:23
By default, only scalar arguments are printed. This command can be used
to configure the debugger to print the value of all arguments, regardless
of their type. However, it is often advantageous to not print the value
of more complex parameters. For instance, it reduces the amount of
information printed in each frame, making the backtrace more readable.
Also, it improves performance when displaying Ada frames, because
the computation of large arguments can sometimes be CPU-intensive,
especially in large applications. Setting print frame-arguments
to scalars (the default) or none avoids this computation,
thus speeding up the display of each Ada frame.
show print frame-arguments
Show how the value of arguments should be displayed when printing a
frame.
set print entry-values value
Set printing of frame argument values at function entry. In some cases
GDB can determine the value of a function argument which was
passed by the function caller, even if the value was modified inside
the called function and therefore is different.
The default value is default (see below for its description). Older
versions of GDB behaved as with the setting no. Compilers not
supporting this feature will behave in the default setting the
same way as with the no setting.
This functionality is currently supported only by the DWARF 2 debugging format, and the compiler has to produce `DW_TAG_GNU_call_site' tags. With GCC, you need to specify `-O -g' during compilation, to get this information.
The value parameter can be one of the following:
no
Print only actual parameter values, never print values from function
entry point.
#0  equal (val=5)
#0  different (val=6)
#0  lost (val=<optimized out>)
#0  born (val=10)
#0  invalid (val=<optimized out>)
only
Print only parameter values from function entry point. The actual
parameter values are never printed.
#0  equal (val@entry=5)
#0  different (val@entry=5)
#0  lost (val@entry=5)
#0  born (val@entry=<optimized out>)
#0  invalid (val@entry=<optimized out>)
preferred
Print only parameter values from function entry point. If the value
from function entry point is not known while the actual value is known,
print the actual value for such parameter.
#0  equal (val@entry=5)
#0  different (val@entry=5)
#0  lost (val@entry=5)
#0  born (val=10)
#0  invalid (val@entry=<optimized out>)
if-needed
Print actual parameter values. If the actual value is not known while
the value from function entry point is known, print the entry point
value for such parameter.
#0  equal (val=5)
#0  different (val=6)
#0  lost (val@entry=5)
#0  born (val=10)
#0  invalid (val=<optimized out>)
both
Always print both the actual parameter value and its value from
function entry point, even if values of one or both are not available
due to compiler optimizations.
#0  equal (val=5, val@entry=5)
#0  different (val=6, val@entry=5)
#0  lost (val=<optimized out>, val@entry=5)
#0  born (val=10, val@entry=<optimized out>)
#0  invalid (val=<optimized out>, val@entry=<optimized out>)
compact
Print the actual parameter value if it is known and also its value from
function entry point if it is known. If neither is known, print
<optimized out>. If not in MI mode (see section 27. The GDB/MI Interface) and if both
values are known and identical, print the shortened
param=param@entry=VALUE notation.
#0  equal (val=val@entry=5)
#0  different (val=6, val@entry=5)
#0  lost (val@entry=5)
#0  born (val=10)
#0  invalid (val=<optimized out>)
default
Always print the actual parameter value. Print also its value from
function entry point, but only if it is known. If not in MI mode and if
both values are known and identical, print the shortened
param=param@entry=VALUE notation.
#0  equal (val=val@entry=5)
#0  different (val=6, val@entry=5)
#0  lost (val=<optimized out>, val@entry=5)
#0  born (val=10)
#0  invalid (val=<optimized out>)
For analysis messages on possible failures of frame argument values at function entry resolution see set debug entry-values.
show print entry-values
Show the method being used for printing of frame argument values at
function entry.
set print repeats
Set the threshold for suppressing display of repeated array elements.
When the number of consecutive identical elements of an array exceeds
the threshold, GDB prints the string
"<repeats n times>", where n is the number of
identical repetitions, instead of displaying the identical elements
themselves. Setting the threshold to zero will cause all elements to
be individually printed. The default threshold is 10.
show print repeats
Display the current threshold for printing repeated identical
elements.
set print null-stop
Cause GDB to stop printing the characters of an array when the
first NULL is encountered. This is useful when large arrays actually
contain only short strings. The default is off.
show print null-stop
Show whether GDB stops printing an array on the first NULL
character.
set print pretty on
Cause GDB to print structures in an indented format with one
member per line, like this:
$1 = {
next = 0x0,
flags = {
sweet = 1,
sour = 1
},
meat = 0x54 "Pork"
}
|
set print pretty off
$1 = {next = 0x0, flags = {sweet = 1, sour = 1}, \
meat = 0x54 "Pork"}
|
This is the default format.
show print pretty
set print sevenbit-strings on
Print using only seven-bit characters; if this option is set, GDB displays any eight-bit characters (in strings or character values) using the notation \nnn. This setting is
best if you are working in English (ASCII) and you use the
high-order bit of characters as a marker or "meta" bit.
set print sevenbit-strings off
Print full eight-bit characters. This allows the use of more international character sets, and is the default.
show print sevenbit-strings
set print union on
set print union off
Tell GDB not to print unions which are contained in structures and other unions; it prints "{...}"
instead.
show print union
For example, given the declarations
typedef enum {Tree, Bug} Species;
typedef enum {Big_tree, Acorn, Seedling} Tree_forms;
typedef enum {Caterpillar, Cocoon, Butterfly}
Bug_forms;
struct thing {
Species it;
union {
Tree_forms tree;
Bug_forms bug;
} form;
};
struct thing foo = {Tree, {Acorn}};
|
with set print union on in effect `p foo' would print
$1 = {it = Tree, form = {tree = Acorn, bug = Cocoon}}
|
and with set print union off in effect it would print
$1 = {it = Tree, form = {...}}
|
set print union affects programs written in C-like languages
and in Pascal.
These settings are of interest when debugging C++ programs:
set print demangle
set print demangle on
show print demangle
set print asm-demangle
set print asm-demangle on
show print asm-demangle
set demangle-style style
auto
gnu
Decode based on the GNU C++ compiler (g++) encoding algorithm. This is the default.
hp
Decode based on the HP ANSI C++ (aCC) encoding algorithm.
lucid
Decode based on the Lucid C++ compiler (lcc) encoding algorithm.
arm
Decode using the algorithm in the C++ Annotated Reference Manual. Warning: this setting alone is not sufficient to allow debugging
cfront-generated executables. GDB would
require further enhancement to permit that.
show demangle-style
set print object
set print object on
set print object off
show print object
set print static-members
set print static-members on
set print static-members off
show print static-members
set print pascal_static-members
set print pascal_static-members on
set print pascal_static-members off
show print pascal_static-members
set print vtbl
set print vtbl on
Cause GDB to display virtual function tables. (The vtbl commands do not work on programs compiled with the HP
ANSI C++ compiler (aCC).)
set print vtbl off
show print vtbl
GDB provides a mechanism to allow pretty-printing of values using Python code. This greatly simplifies the display of complex objects. The mechanism works for both MI and the CLI.
10.9.1 Pretty-Printer Introduction  Introduction to pretty-printers
10.9.2 Pretty-Printer Example  An example pretty-printer
10.9.3 Pretty-Printer Commands  Pretty-printer commands
When GDB prints a value, it first checks whether there is a pretty-printer registered for the value. If there is, GDB invokes the pretty-printer to print the value. Otherwise the value is printed normally.
Pretty-printers are normally named. This makes them easy to manage. The `info pretty-printer' command will list all the installed pretty-printers with their names. If a pretty-printer can handle multiple data types, then its subprinters are the printers for the individual data types. Each such subprinter has its own name. The format of the name is printer-name;subprinter-name.
Pretty-printers are installed by registering them with GDB. Typically they are automatically loaded and registered when the corresponding debug information is loaded, thus making them available without having to do anything special.
There are three places where a pretty-printer can be registered.
See section 23.2.2.6 Selecting Pretty-Printers, for further information on how pretty-printers are selected,
See section 23.2.2.7 Writing a Pretty-Printer, for implementing pretty printers for new types.
Here is how a C++ std::string looks without a pretty-printer:
(gdb) print s
$1 = {
static npos = 4294967295,
_M_dataplus = {
<std::allocator<char>> = {
<__gnu_cxx::new_allocator<char>> = {
<No data fields>}, <No data fields>
},
members of std::basic_string<char, std::char_traits<char>,
std::allocator<char> >::_Alloc_hider:
_M_p = 0x804a014 "abcd"
}
}
With a pretty-printer for std::string only the contents are printed:
(gdb) print s
$2 = "abcd"
info pretty-printer [object-regexp [name-regexp]]
object-regexp is a regular expression matching the objects
whose pretty-printers are to be listed.
Objects can be global, the program space's file
(see section 23.2.2.15 Program Spaces In Python),
and the object files within that program space (see section 23.2.2.16 Objfiles In Python).
See section 23.2.2.6 Selecting Pretty-Printers, for details on how
GDB looks up a printer from these three objects.
name-regexp is a regular expression matching the name of the printers to list.
disable pretty-printer [object-regexp [name-regexp]]
enable pretty-printer [object-regexp [name-regexp]]
Example:
Suppose we have three pretty-printers installed: one from library1.so
named foo that prints objects of type foo, and
another from library2.so named bar that prints two types of objects,
bar1 and bar2.
(gdb) info pretty-printer
library1.so:
foo
library2.so:
bar
bar1
bar2
(gdb) info pretty-printer library2
library2.so:
bar
bar1
bar2
(gdb) disable pretty-printer library1
1 printer disabled
2 of 3 printers enabled
(gdb) info pretty-printer
library1.so:
foo [disabled]
library2.so:
bar
bar1
bar2
(gdb) disable pretty-printer library2 bar;bar1
1 printer disabled
1 of 3 printers enabled
(gdb) info pretty-printer library2
library1.so:
foo [disabled]
library2.so:
bar
bar1 [disabled]
bar2
(gdb) disable pretty-printer library2 bar
1 printer disabled
0 of 3 printers enabled
(gdb) info pretty-printer library2
library1.so:
foo [disabled]
library2.so:
bar [disabled]
bar1 [disabled]
bar2
Note that for bar the entire printer can be disabled,
as can each individual subprinter.
Values printed by the print command are saved in the
value history. This allows you to refer to them in other expressions.
Values are kept until the symbol table is re-read or discarded
(for example with the file or symbol-file commands).
When the symbol table changes, the value history is discarded,
since the values may contain pointers back to the types defined in the
symbol table.
The values printed are given history numbers by which you can
refer to them. These are successive integers starting with one.
print shows you the history number assigned to a value by
printing `$num = ' before the value; here num is the
history number.
To refer to any previous value, use `$' followed by the value's
history number. The way print labels its output is designed to
remind you of this. Just $ refers to the most recent value in
the history, and $$ refers to the value before that.
$$n refers to the nth value from the end; $$2
is the value just prior to $$, $$1 is equivalent to
$$, and $$0 is equivalent to $.
For example, suppose you have just printed a pointer to a structure and want to see the contents of the structure. It suffices to type
p *$
If you have a chain of structures where the component next points
to the next one, you can print the contents of the next one with this:
p *$.next
You can print successive links in the chain by repeating this command--which you can do by just typing RET.
Note that the history records values, not expressions. If the value of
x is 4 and you type these commands:
print x
set x=5
then the value recorded in the value history by the print command
remains 4 even though the value of x has changed.
show values
Print the last ten values in the value history, with their item numbers. This is like `p $$9' repeated ten times, except that show
values does not change the history.
show values n
Print ten history values centered on history item number n.
show values +
Print ten history values just after the values last printed. If no more values are available,
show values + produces no display.
Pressing RET to repeat show values n has exactly the
same effect as `show values +'.
GDB provides convenience variables that you can use within GDB to hold on to a value and refer to it later. These variables exist entirely within GDB; they are not part of your program, and setting a convenience variable has no direct effect on further execution of your program. That is why you can use them freely.
Convenience variables are prefixed with `$'. Any name preceded by `$' can be used for a convenience variable, unless it is one of the predefined machine-specific register names (see section Registers). (Value history references, in contrast, are numbers preceded by `$'. See section Value History.)
You can save a value in a convenience variable with an assignment expression, just as you would set a variable in your program. For example:
set $foo = *object_ptr
would save in $foo the value contained in the object pointed to by
object_ptr.
Using a convenience variable for the first time creates it, but its
value is void until you assign a new value. You can alter the
value with another assignment at any time.
Convenience variables have no fixed types. You can assign a convenience variable any type of value, including structures and arrays, even if that variable already has a value of a different type. The convenience variable, when used as an expression, has the type of its current value.
show convenience
Print a list of convenience variables used so far, and their values, as well as a list of the convenience functions. Abbreviated show conv.
init-if-undefined $variable = expression
Set a convenience variable if it has not already been set. If the variable is already defined then the expression is not evaluated so any side-effects do not occur.
One of the ways to use a convenience variable is as a counter to be incremented or a pointer to be advanced. For example, to print a field from successive elements of an array of structures:
set $i = 0
print bar[$i++]->contents
Repeat that command by typing RET.
Some convenience variables are created automatically by GDB and given values likely to be useful.
$_
$_ is automatically set by the x command to
the last address examined (see section Examining Memory). Other
commands which provide a default address for x to examine also
set $_ to that address; these commands include info line
and info breakpoint. The type of $_ is void *
except when set by the x command, in which case it is a pointer
to the type of $__.
$__
$__ is automatically set by the x command
to the value found in the last address examined. Its type is chosen
to match the format in which the data was printed.
$_exitcode
$_exitcode is automatically set to the exit code when
the program being debugged terminates.
$_probe_argc
$_probe_arg0...$_probe_arg11
$_sdata
$_sdata contains extra collected static tracepoint
data. See section Tracepoint Action Lists. Note that
$_sdata could be empty, if not inspecting a trace buffer, or
if extra static tracepoint data has not been collected.
$_siginfo
$_siginfo contains extra signal information
(see extra signal information). Note that $_siginfo
could be empty, if the application has not yet received any signals.
For example, it will be empty before you execute the run command.
$_tlb
$_tlb is automatically set when debugging
applications running on MS-Windows in native mode or connected to
gdbserver that supports the qGetTIBAddr request.
See section E.4 General Query Packets.
This variable contains the address of the thread information block.
On HP-UX systems, if you refer to a function or variable name that begins with a dollar sign, GDB searches for a user or system name first, before it searches for a convenience variable.
GDB also supplies some convenience functions. These have a syntax similar to convenience variables. A convenience function can be used in an expression just like an ordinary function; however, a convenience function is implemented internally to GDB.
These functions require GDB to be configured with
Python support.
$_memeq(buf1, buf2, length)
$_regex(str, regex)
Returns one if the string str matches the regular expression regex; otherwise it returns zero. The syntax of the regular expression is that specified by Python's
regular expression support.
$_streq(str1, str2)
$_strlen(str)
GDB provides the ability to list and get help on convenience functions.
help function
You can refer to machine register contents, in expressions, as variables
with names starting with `$'. The names of registers are different
for each machine; use info registers to see the names used on
your machine.
info registers
info all-registers
info registers regname ...
GDB has four "standard" register names that are available (in
expressions) on most machines--whenever they do not conflict with an
architecture's canonical mnemonics for registers. The register names
$pc and $sp are used for the program counter register and
the stack pointer. $fp is used for a register that contains a
pointer to the current stack frame, and $ps is used for a
register that contains the processor status. For example,
you could print the program counter in hex with
p/x $pc
or print the instruction to be executed next with
x/i $pc
or add four to the stack pointer(9) with
set $sp += 4
Whenever possible, these four standard register names are available on
your machine even though the machine has different canonical mnemonics,
so long as there is no conflict. The info registers command
shows the canonical names. For example, on the SPARC, info
registers displays the processor status register as $psr but you
can also refer to it as $ps; and on x86-based machines $ps
is an alias for the EFLAGS register.
GDB always considers the contents of an ordinary register as an integer when the register is examined in this way. Some machines have special registers which can hold nothing but floating point; these registers are considered to have floating point values. There is no way to refer to the contents of an ordinary register as a floating point value (although you can print it as a floating point value with `print/f $regname').
Some registers have distinct "raw" and "virtual" data formats. This
means that the data format in which the register contents are saved by
the operating system is not the same one that your program normally
sees. For example, the registers of the 68881 floating point
coprocessor are always saved in "extended" (raw) format, but all C
programs expect to work with "double" (virtual) format. In such
cases, GDB normally works with the virtual format only (the format
that makes sense for your program), but the info registers command
prints the data in both formats.
Some machines have special registers whose contents can be interpreted
in several different ways. For example, modern x86-based machines
have SSE and MMX registers that can hold several values packed
together in several different formats. GDB refers to such
registers in struct notation:
(gdb) print $xmm1
$1 = {
v4_float = {0, 3.43859137e-038, 1.54142831e-044, 1.821688e-044},
v2_double = {9.92129282474342e-303, 2.7585945287983262e-313},
v16_int8 = "\000\000\000\000\3706;\001\v\000\000\000\r\000\000",
v8_int16 = {0, 0, 14072, 315, 11, 0, 13, 0},
v4_int32 = {0, 20657912, 11, 13},
v2_int64 = {88725056443645952, 55834574859},
uint128 = 0x0000000d0000000b013b36f800000000
}
To set values of such registers, you need to tell GDB which
view of the register you wish to change, as if you were assigning a
value to a struct member:
(gdb) set $xmm1.uint128 = 0x000000000000000000000000FFFFFFFF
Normally, register values are relative to the selected stack frame (see section Selecting a Frame). This means that you get the value that the register would contain if all stack frames farther in were exited and their saved registers restored. In order to see the true contents of hardware registers, you must select the innermost frame (with `frame 0').
However, GDB must deduce where registers are saved, from the machine code generated by your compiler. If some registers are not saved, or if GDB is unable to locate the saved registers, the selected stack frame makes no difference.
Depending on the configuration, GDB may be able to give you more information about the status of the floating point hardware.
info float
Depending on the configuration, GDB may be able to give you more information about the status of the vector unit.
info vector
GDB provides interfaces to useful OS facilities that can help you debug your program.
Some operating systems supply an auxiliary vector to programs at startup. This is akin to the arguments and environment that you specify for a program, but contains a system-dependent variety of binary values that tell system libraries important details about the hardware, operating system, and process. Each value's purpose is identified by an integer tag; the meanings are well-known but system-specific. Depending on the configuration and operating system facilities, GDB may be able to show you this information. For remote targets, this functionality may further depend on the remote stub's support of the `qXfer:auxv:read' packet, see qXfer auxiliary vector read.
info auxv
On some targets, GDB can access operating system-specific information and show it to you. The types of information available will differ depending on the type of operating system running on the target. The mechanism used to fetch the data is described in H. Operating System Information. For remote targets, this functionality depends on the remote stub's support of the `qXfer:osdata:read' packet, see qXfer osdata read.
info os infotype
Display OS information of the requested type.
On GNU/Linux, the following values of infotype are valid:
processes
procgroups
threads
files
sockets
shm
semaphores
msg
modules
info os
Memory region attributes allow you to describe special handling required by regions of your target's memory. GDB uses attributes to determine whether to allow certain types of memory accesses; whether to use specific width accesses; and whether to cache target memory. By default the description of memory regions is fetched from the target (if the current target supports this), but the user can override the fetched regions.
Defined memory regions can be individually enabled and disabled. When a memory region is disabled, GDB uses the default attributes when accessing memory in that region. Similarly, if no memory regions have been defined, GDB uses the default attributes when accessing all memory.
When a memory region is defined, it is given a number to identify it; to enable, disable, or remove a memory region, you specify that number.
mem lower upper attributes...
mem auto
delete mem nums...
disable mem nums...
enable mem nums...
info mem
While these attributes prevent GDB from performing invalid memory accesses, they do nothing to prevent the target system, I/O DMA, etc. from accessing memory.
ro
wo
rw
8
16
32
64
cache
nocache
set mem inaccessible-by-default [on|off]
If on is specified, make GDB treat memory not
explicitly described by the memory ranges as non-existent and refuse accesses
to such memory. The checks are only performed if there's at least one
memory range defined. If off is specified, make GDB
treat the memory not explicitly described by the memory ranges as RAM.
The default value is on.
show mem inaccessible-by-default
You can use the commands dump, append, and
restore to copy data between target memory and a file. The
dump and append commands write data to a file, and the
restore command reads data from a file back into the inferior's
memory. Files may be in binary, Motorola S-record, Intel hex, or
Tektronix Hex format; however, GDB can only append to binary
files.
dump [format] memory filename start_addr end_addr
dump [format] value filename expr
The format parameter may be any one of:
binary
ihex
srec
tekhex
GDB uses the same definitions of these formats as the GNU binary utilities, like `objdump' and `objcopy'. If format is omitted, GDB dumps the data in raw binary form.
append [binary] memory filename start_addr end_addr
append [binary] value filename expr
restore filename [binary] bias start end
The restore command can automatically recognize any known BFD
file format, except for raw binary. To restore a raw binary file you
must specify the optional keyword binary after the filename.
If bias is non-zero, its value will be added to the addresses contained in the file. Binary files always start at address zero, so they will be restored at address bias. Other bfd files have a built-in location; they will be restored at offset bias from that location.
If start and/or end are non-zero, then only data between file offset start and file offset end will be restored. These offsets are relative to the addresses in the file, before the bias argument is applied.
A core file or core dump is a file that records the memory image of a running process and its process status (register values etc.). Its primary use is post-mortem debugging of a program that crashed while it ran outside a debugger. A program that crashes automatically produces a core file, unless this feature is disabled by the user. See section 18.1 Commands to Specify Files, for information on invoking GDB in the post-mortem debugging mode.
Occasionally, you may wish to produce a core file of the program you are debugging in order to preserve a snapshot of its state. GDB has a special command for that.
generate-core-file [file]
gcore [file]
Note that this command is implemented only for some systems (as of this writing, GNU/Linux, FreeBSD, Solaris, and S390).
If the program you are debugging uses a different character set to represent characters and strings than the one GDB uses itself, GDB can automatically translate between the character sets for you. The character set GDB uses we call the host character set; the one the inferior program uses we call the target character set.
For example, if you are running on a GNU/Linux system, which
uses the ISO Latin 1 character set, but you are using GDB's
remote protocol (see section 20. Debugging Remote Programs) to debug a program
running on an IBM mainframe, which uses the EBCDIC character set,
then the host character set is Latin-1, and the target character set is
EBCDIC. If you give the command set
target-charset EBCDIC-US, then GDB translates between
EBCDIC and Latin 1 as you print character or string values, or use
character and string literals in expressions.
GDB has no way to automatically recognize which character set
the inferior program uses; you must tell it, using the set
target-charset command, described below.
Here are the commands for controlling GDB's character set support:
set target-charset charset
set host-charset charset
By default, GDB uses a host character set appropriate to the
system it is running on; you can override that default using the
set host-charset command. On some systems, GDB cannot
automatically determine the appropriate host character set. In this
case, GDB uses `UTF-8'.
GDB can only use certain character sets as its host character set. If you type set host-charset TAB TAB, GDB will list the host character sets it supports.
set charset charset
show charset
show host-charset
show target-charset
set target-wide-charset charset
Set the current target's wide character set to charset. This is the character set used by the target's wchar_t type. To
display the list of supported wide character sets, type
set target-wide-charset TAB TAB.
show target-wide-charset
Here is an example of GDB's character set support in action. Assume that the following source code has been placed in the file `charset-test.c':
#include <stdio.h>
char ascii_hello[]
= {72, 101, 108, 108, 111, 44, 32, 119,
111, 114, 108, 100, 33, 10, 0};
char ibm1047_hello[]
= {200, 133, 147, 147, 150, 107, 64, 166,
150, 153, 147, 132, 90, 37, 0};
main ()
{
printf ("Hello, world!\n");
}
In this program, ascii_hello and ibm1047_hello are arrays
containing the string `Hello, world!' followed by a newline,
encoded in the ASCII and IBM1047 character sets.
We compile the program, and invoke the debugger on it:
$ gcc -g charset-test.c -o charset-test
$ gdb -nw charset-test
GNU gdb 2001-12-19-cvs
Copyright 2001 Free Software Foundation, Inc.
...
(gdb)
We can use the show charset command to see what character sets
GDB is currently using to interpret and display characters and
strings:
(gdb) show charset
The current host and target character set is `ISO-8859-1'.
(gdb)
For the sake of printing this manual, let's use ASCII as our initial character set:
(gdb) set charset ASCII
(gdb) show charset
The current host and target character set is `ASCII'.
(gdb)
Let's assume that ASCII is indeed the correct character set for our
host system -- in other words, let's assume that if GDB prints
characters using the ASCII character set, our terminal will display
them properly. Since our current target character set is also
ASCII, the contents of ascii_hello print legibly:
(gdb) print ascii_hello
$1 = 0x401698 "Hello, world!\n"
(gdb) print ascii_hello[0]
$2 = 72 'H'
(gdb)
GDB uses the target character set for character and string literals you use in expressions:
(gdb) print '+'
$3 = 43 '+'
(gdb)
The ASCII character set uses the number 43 to encode the `+' character.
GDB relies on the user to tell it which character set the
target program uses. If we print ibm1047_hello while our target
character set is still ASCII, we get gibberish:
(gdb) print ibm1047_hello
$4 = 0x4016a8 "\310\205\223\223\226k@\246\226\231\223\204Z%"
(gdb) print ibm1047_hello[0]
$5 = 200 '\310'
(gdb)
If we invoke set target-charset followed by TAB TAB,
GDB tells us the character sets it supports:
(gdb) set target-charset
ASCII       EBCDIC-US       IBM1047       ISO-8859-1
(gdb) set target-charset
We can select IBM1047 as our target character set, and examine the
program's strings again. Now the ASCII string is wrong, but
GDB translates the contents of ibm1047_hello from the
target character set, IBM1047, to the host character set,
ASCII, and they display correctly:
(gdb) set target-charset IBM1047
(gdb) show charset
The current host character set is `ASCII'.
The current target character set is `IBM1047'.
(gdb) print ascii_hello
$6 = 0x401698 "\110\145%%?\054\040\167?\162%\144\041\012"
(gdb) print ascii_hello[0]
$7 = 72 '\110'
(gdb) print ibm1047_hello
$8 = 0x4016a8 "Hello, world!\n"
(gdb) print ibm1047_hello[0]
$9 = 200 'H'
(gdb)
As above, GDB uses the target character set for character and string literals you use in expressions:
(gdb) print '+'
$10 = 78 '+'
(gdb)
The IBM1047 character set uses the number 78 to encode the `+' character.
GDB caches data exchanged between the debugger and a remote target (see section 20. Debugging Remote Programs). Such caching generally improves performance, because it reduces the overhead of the remote protocol by bundling memory reads and writes into large chunks. Unfortunately, simply caching everything would lead to incorrect results, since GDB does not necessarily know anything about volatile values, memory-mapped I/O addresses, etc. Furthermore, in non-stop mode (see section 5.5.2 Non-Stop Mode) memory can be changed while a GDB command is executing. Therefore, by default, GDB only caches data known to be on the stack(10). Other regions of memory can be explicitly marked as cacheable; see section 10.17 Memory Region Attributes.
set remotecache on
set remotecache off
show remotecache
set stack-cache on
set stack-cache off
Enable or disable caching of stack accesses. When ON, GDB uses
caching. By default, this option is ON.
show stack-cache
info dcache [line]
If a line number is specified, the contents of that line will be printed in hex.
set dcache size size
set dcache line-size line-size
show dcache size
show dcache line-size
Memory can be searched for a particular sequence of bytes with the
find command.
find [/sn] start_addr, +len, val1 [, val2, ...]
find [/sn] start_addr, end_addr, val1 [, val2, ...]
s and n are optional parameters. They may be specified in either order, apart or together.
b
bytes
h
halfwords (two bytes)
w
words (four bytes)
g
giant words (eight bytes)
All values are interpreted in the current language. This means, for example, that if the current source language is C/C++ then searching for the string "hello" includes the trailing '\0'.
If the value size is not specified, it is taken from the value's type in the current language. This is useful when one wants to specify the search pattern as a mixture of types. Note that this means, for example, that in the case of C-like languages a search for an untyped 0x42 will search for `(int) 0x42' which is typically four bytes.
You can use strings as search values. Quote them with double-quotes
(").
The string value is copied into the search pattern byte by byte,
regardless of the endianness of the target and the size specification.
The address of each match found is printed as well as a count of the number of matches found.
The address of the last value found is stored in convenience variable `$_'. A count of the number of matches is stored in `$numfound'.
For example, if stopped at the printf in this function:
void
hello ()
{
static char hello[] = "hello-hello";
static struct { char c; short s; int i; }
__attribute__ ((packed)) mixed
= { 'c', 0x1234, 0x87654321 };
printf ("%s\n", hello);
}
|
you get during debugging:
(gdb) find &hello[0], +sizeof(hello), "hello"
0x804956d <hello.1620+6>
1 pattern found
(gdb) find &hello[0], +sizeof(hello), 'h', 'e', 'l', 'l', 'o'
0x8049567 <hello.1620>
0x804956d <hello.1620+6>
2 patterns found
(gdb) find /b1 &hello[0], +sizeof(hello), 'h', 0x65, 'l'
0x8049567 <hello.1620>
1 pattern found
(gdb) find &mixed, +sizeof(mixed), (char) 'c', (short) 0x1234, (int) 0x87654321
0x8049560 <mixed.1625>
1 pattern found
(gdb) print $numfound
$1 = 1
(gdb) print $_
$2 = (void *) 0x8049560
Almost all compilers support optimization. With optimization disabled, the compiler generates assembly code that corresponds directly to your source code, in a simplistic way. As the compiler applies more powerful optimizations, the generated assembly code diverges from your original source code. With help from debugging information generated by the compiler, GDB can map from the running program back to constructs from your original source.
GDB is more accurate with optimization disabled. If you can recompile without optimization, it is easier to follow the progress of your program during debugging. But there are many cases where you may need to debug an optimized version.
When you debug a program compiled with `-g -O', remember that the optimizer has rearranged your code; the debugger shows you what is really there. Do not be too surprised when the execution path does not exactly match your source file! An extreme example: if you define a variable, but never use it, GDB never sees that variable--because the compiler optimizes it out of existence.
Some things do not work as well with `-g -O' as with just `-g', particularly on machines with instruction scheduling. If in doubt, recompile with `-g' alone, and if this fixes the problem, please report it to us as a bug (including a test case!). See section 10.3 Program Variables, for more information about debugging optimized code.
11.1 Inline Functions How GDB presents inlining
11.2 Tail Call Frames GDB analysis of jumps to functions
Inlining is an optimization that inserts a copy of the function
body directly at each call site, instead of jumping to a shared
routine. GDB displays inlined functions just like
non-inlined functions. They appear in backtraces. You can view their
arguments and local variables, step into them with step, skip
them with next, and escape from them with finish.
You can check whether a function was inlined by using the
info frame command.
For GDB to support inlined functions, the compiler must record information about inlining in the debug information --- GCC using the DWARF 2 format does this, and several other compilers do also. GDB only supports inlined functions when using DWARF 2. Versions of GCC before 4.1 do not emit two required attributes (`DW_AT_call_file' and `DW_AT_call_line'); GDB does not display inlined function calls with earlier versions of GCC. It instead displays the arguments and local variables of inlined functions as local variables in the caller.
The body of an inlined function is directly included at its call site; unlike a non-inlined function, there are no instructions devoted to the call. GDB still pretends that the call site and the start of the inlined function are different instructions. Stepping to the call site shows the call site, and then stepping again shows the first line of the inlined function, even though no additional instructions are executed.
This makes source-level debugging much clearer; you can see both the
context of the call and then the effect of the call. Only stepping by
a single instruction using stepi or nexti does not do
this; single instruction steps always show the inlined body.
There are some ways that GDB does not pretend that inlined function calls are the same as normal calls:
GDB cannot locate the return value of inlined calls after using the finish command. This is a limitation of compiler-generated debugging information; after finish, you can step to the next line and print a variable where your program stored the return value.
Function B can call function C in its very last statement. In an
unoptimized compilation, the call of C is immediately followed by a
return instruction at the end of B's code. An optimizing compiler may
replace the call and return in function B with a single jump to
function C instead. Such use of a jump instruction is called a tail call.
During execution of function C, there will be no indication in the
function call stack frames that it was tail-called from B. If function
A regularly calls function B which tail-calls function C,
then GDB will see A as the caller of C. However, in
some cases GDB can determine that C was tail-called from
B, and it will then create a fictitious call frame for that, with the
return address set up as if B called C normally.
This functionality is currently supported only by the DWARF 2 debugging format, and the compiler has to produce `DW_TAG_GNU_call_site' tags. With GCC, you need to specify `-O -g' during compilation to get this information.
The info frame command (see section 8.4 Information About a Frame) will indicate the tail call frame
kind by the text tail call frame, as in this sample output:
(gdb) x/i $pc - 2
   0x40066b <b(int, double)+11>: jmp 0x400640 <c(int, double)>
(gdb) info frame
Stack level 1, frame at 0x7fffffffda30:
 rip = 0x40066d in b (amd64-entry-value.cc:59); saved rip 0x4004c5
 tail call frame, caller of frame at 0x7fffffffda30
 source language c++.
 Arglist at unknown address.
 Locals at unknown address, Previous frame's sp is 0x7fffffffda30
|
The detection of all possible code paths can be ambiguous. There is no execution history stored (the reverse-execution facilities of 6. Running programs backward are never used for this purpose) and the last known caller could have reached the known callee by multiple different jump sequences. In such cases, GDB still tries to show at least all the unambiguous top tail callers and all the unambiguous bottom tail callees, if any.
set debug entry-values
show debug entry-values
The analysis messages for tail calls can for example show why the virtual tail
call frame for function c has not been recognized (due to the indirect
reference by variable x):
static void __attribute__((noinline, noclone)) c (void);
void (*x) (void) = c;
static void __attribute__((noinline, noclone)) a (void) { x++; }
static void __attribute__((noinline, noclone)) c (void) { a (); }
int main (void) { x (); return 0; }
Breakpoint 1, DW_OP_GNU_entry_value resolving cannot find
DW_TAG_GNU_call_site 0x40039a in main
a () at t.c:3
3 static void __attribute__((noinline, noclone)) a (void) { x++; }
(gdb) bt
#0 a () at t.c:3
#1 0x000000000040039a in main () at t.c:5
|
Another possibility is ambiguous virtual tail call frame resolution:
int i;
static void __attribute__((noinline, noclone)) f (void) { i++; }
static void __attribute__((noinline, noclone)) e (void) { f (); }
static void __attribute__((noinline, noclone)) d (void) { f (); }
static void __attribute__((noinline, noclone)) c (void) { d (); }
static void __attribute__((noinline, noclone)) b (void)
{ if (i) c (); else e (); }
static void __attribute__((noinline, noclone)) a (void) { b (); }
int main (void) { a (); return 0; }
tailcall: initial: 0x4004d2(a) 0x4004ce(b) 0x4004b2(c) 0x4004a2(d)
tailcall: compare: 0x4004d2(a) 0x4004cc(b) 0x400492(e)
tailcall: reduced: 0x4004d2(a) |
(gdb) bt
#0 f () at t.c:2
#1 0x00000000004004d2 in a () at t.c:8
#2 0x0000000000400395 in main () at t.c:9
|
Frames #0 and #2 are real, #1 is a virtual tail call frame.
The code can have possible execution paths main → a → b → c → d → f or
main → a → b → e → f; GDB cannot determine which one was taken from the inferior state.
The initial: line shows one possible calling sequence GDB
has found. It then finds another possible calling sequence - that one is
prefixed by compare:. The non-ambiguous intersection of these two is
printed as the reduced: calling sequence. There could be many
further compare: and reduced: statements as long as there remain
any non-ambiguous sequence entries.
For the frame of function b in both cases there are different possible
$pc values (0x4004cc or 0x4004ce), therefore this frame is
also ambiguous. The only non-ambiguous frame is the one for function a,
therefore this one is displayed to the user while the ambiguous frames are
omitted.
There can be also reasons why printing of frame argument values at function entry may fail:
int v;
static void __attribute__((noinline, noclone)) c (int i) { v++; }
static void __attribute__((noinline, noclone)) a (int i);
static void __attribute__((noinline, noclone)) b (int i) { a (i); }
static void __attribute__((noinline, noclone)) a (int i)
{ if (i) b (i - 1); else c (0); }
int main (void) { a (5); return 0; }
(gdb) bt
#0 c (i=i@entry=0) at t.c:2
#1 0x0000000000400428 in a (DW_OP_GNU_entry_value resolving has found
function "a" at 0x400420 can call itself via tail calls
i=<optimized out>) at t.c:6
#2 0x000000000040036e in main () at t.c:7
|
GDB cannot find out from the inferior state whether and how many times
function a called itself (via function b), as these calls would be
tail calls. Such tail calls would modify the i variable, therefore
GDB cannot be sure the value it knows would be right - it
prints <optimized out> instead.
Some languages, such as C and C++, provide a way to define and invoke "preprocessor macros" which expand into strings of tokens. GDB can evaluate expressions containing macro invocations, show the result of macro expansion, and show a macro's definition, including where it was defined.
You may need to compile your program specially to provide GDB with information about preprocessor macros. Most compilers do not include macros in their debugging information, even when you compile with the `-g' flag. See section 4.1 Compiling for Debugging.
A program may define a macro at one point, remove that definition later, and then provide a different definition after that. Thus, at different points in the program, a macro may have different definitions, or have no definition at all. If there is a current stack frame, GDB uses the macros in scope at that frame's source code line. Otherwise, GDB uses the macros in scope at the current listing location; see 9.1 Printing Source Lines.
Whenever GDB evaluates an expression, it always expands any macro invocations present in the expression. GDB also provides the following commands for working with macros explicitly.
macro expand expression
macro exp expression
macro expand-once expression
macro exp1 expression
info macro [-a|-all] [--] macro
info macros linespec
macro define macro replacement-list
macro define macro(arglist) replacement-list
A definition introduced by this command is in scope in every
expression evaluated in , until it is removed with the
macro undef command, described below. The definition overrides
all definitions for macro present in the program being debugged,
as well as any previous user-supplied definition.
macro undef macro
macro
define command, described above; it cannot remove definitions present
in the program being debugged.
macro list
macro define command.
Here is a transcript showing the above commands in action. First, we show our source files:
$ cat sample.c
#include <stdio.h>
#include "sample.h"
#define M 42
#define ADD(x) (M + x)
main ()
{
#define N 28
printf ("Hello, world!\n");
#undef N
printf ("We're so creative.\n");
#define N 1729
printf ("Goodbye, world!\n");
}
$ cat sample.h
#define Q <
$
|
Now, we compile the program using the GNU C compiler, gcc. We pass the `-gdwarf-2' and `-g3' flags to ensure the compiler includes information about preprocessor macros in the debugging information.
$ gcc -gdwarf-2 -g3 sample.c -o sample
$
|
Now, we start GDB on our sample program:
$ gdb -nw sample
GNU gdb 2002-05-06-cvs
Copyright 2002 Free Software Foundation, Inc.
GDB is free software, ...
(gdb)
|
We can expand macros and examine their definitions, even when the program is not running. GDB uses the current listing position to decide which macro definitions are in scope:
(gdb) list main
3
4 #define M 42
5 #define ADD(x) (M + x)
6
7 main ()
8 {
9 #define N 28
10 printf ("Hello, world!\n");
11 #undef N
12 printf ("We're so creative.\n");
(gdb) info macro ADD
Defined at /home/jimb/gdb/macros/play/sample.c:5
#define ADD(x) (M + x)
(gdb) info macro Q
Defined at /home/jimb/gdb/macros/play/sample.h:1
included at /home/jimb/gdb/macros/play/sample.c:2
#define Q <
(gdb) macro expand ADD(1)
expands to: (42 + 1)
(gdb) macro expand-once ADD(1)
expands to: (M + 1)
(gdb)
|
In the example above, note that macro expand-once expands only
the macro invocation explicit in the original text -- the invocation of
ADD -- but does not expand the invocation of the macro M,
which was introduced by ADD.
Once the program is running, GDB uses the macro definitions in force at the source line of the current stack frame:
(gdb) break main
Breakpoint 1 at 0x8048370: file sample.c, line 10.
(gdb) run
Starting program: /home/jimb/gdb/macros/play/sample
Breakpoint 1, main () at sample.c:10
10 printf ("Hello, world!\n");
(gdb)
|
At line 10, the definition of the macro N at line 9 is in force:
(gdb) info macro N
Defined at /home/jimb/gdb/macros/play/sample.c:9
#define N 28
(gdb) macro expand N Q M
expands to: 28 < 42
(gdb) print N Q M
$1 = 1
(gdb)
|
As we step over directives that remove N's definition, and then
give it a new definition, GDB finds the definition (or lack
thereof) in force at each point:
(gdb) next
Hello, world!
12 printf ("We're so creative.\n");
(gdb) info macro N
The symbol `N' has no definition as a C/C++ preprocessor macro
at /home/jimb/gdb/macros/play/sample.c:12
(gdb) next
We're so creative.
14 printf ("Goodbye, world!\n");
(gdb) info macro N
Defined at /home/jimb/gdb/macros/play/sample.c:13
#define N 1729
(gdb) macro expand N Q M
expands to: 1729 < 42
(gdb) print N Q M
$2 = 0
(gdb)
|
In addition to source files, macros can be defined on the compilation command line using the `-Dname=value' syntax. For macros defined in such a way, GDB displays the location of their definition as line zero of the source file submitted to the compiler.
(gdb) info macro __STDC__
Defined at /home/jimb/gdb/macros/play/sample.c:0
-D__STDC__=1
(gdb)
|
In some applications, it is not feasible for the debugger to interrupt the program's execution long enough for the developer to learn anything helpful about its behavior. If the program's correctness depends on its real-time behavior, delays introduced by a debugger might cause the program to change its behavior drastically, or perhaps fail, even when the code itself is correct. It is useful to be able to observe the program's behavior without interrupting it.
Using GDB's trace and collect commands, you can
specify locations in the program, called tracepoints, and
arbitrary expressions to evaluate when those tracepoints are reached.
Later, using the tfind command, you can examine the values
those expressions had when the program hit the tracepoints. The
expressions may also denote objects in memory--structures or arrays,
for example--whose values GDB should record; while visiting
a particular tracepoint, you may inspect those objects as if they were
in memory at that moment. However, because GDB records these
values without interacting with you, it can do so quickly and
unobtrusively, hopefully not disturbing the program's behavior.
The tracepoint facility is currently available only for remote targets. See section 19. Specifying a Debugging Target. In addition, your remote target must know how to collect trace data. This functionality is implemented in the remote stub; however, none of the stubs distributed with GDB support tracepoints as of this writing. The format of the remote packets used to implement tracepoints is described in E.6 Tracepoint Packets.
It is also possible to get trace data from a file, in a manner reminiscent
of corefiles; you specify the filename, and use tfind to search
through the file. See section 13.4 Using Trace Files, for more details.
This chapter describes the tracepoint commands and features.
13.1 Commands to Set Tracepoints
13.2 Using the Collected Data
13.3 Convenience Variables for Tracepoints
13.4 Using Trace Files
Before running such a trace experiment, an arbitrary number of tracepoints can be set. A tracepoint is actually a special type of breakpoint (see section 5.1.1 Setting Breakpoints), so you can manipulate it using standard breakpoint commands. For instance, as with breakpoints, tracepoint numbers are successive integers starting from one, and many of the commands associated with tracepoints take the tracepoint number as their argument, to identify which tracepoint to work on.
For each tracepoint, you can specify, in advance, some arbitrary set of data that you want the target to collect in the trace buffer when it hits that tracepoint. The collected data can include registers, local variables, or global data. Later, you can use GDB commands to examine the values these data had at the time the tracepoint was hit.
Tracepoints do not support every breakpoint feature. Ignore counts on tracepoints have no effect, and tracepoints cannot run commands when they are hit. Tracepoints may not be thread-specific either.
Some targets may support fast tracepoints, which are inserted in a different way (such as with a jump instead of a trap), that is faster but possibly restricted in where they may be installed.
Regular and fast tracepoints are dynamic tracing facilities, meaning that they can be used to insert tracepoints at (almost) any location in the target. Some targets may also support controlling static tracepoints from GDB. With static tracing, a set of instrumentation points, also known as markers, are embedded in the target program, and can be activated or deactivated by name or address. These are usually placed at locations which facilitate investigating what the target is actually doing. GDB's support for static tracing includes being able to list instrumentation points and attach them to GDB-defined high-level tracepoints that expose the whole range of convenience of GDB's tracepoint support: namely, support for collecting register values and values of global or local (to the instrumentation point) variables, tracepoint conditions, and trace state variables. The act of installing a static tracepoint on an instrumentation point, or marker, is referred to as probing a static tracepoint marker.
gdbserver supports tracepoints on some target systems.
See section Tracepoints support in gdbserver.
This section describes commands to set tracepoints and associated conditions and actions.
trace location
The trace command is very similar to the break command.
Its argument location can be a source line, a function name, or
an address in the target program. See section 9.2 Specifying a Location. The
trace command defines a tracepoint, which is a point in the
target program where the debugger will briefly stop, collect some
data, and then allow the program to continue. Setting a tracepoint or
changing its actions takes effect immediately if the remote stub
supports the `InstallInTrace' feature (see install tracepoint in tracing).
If the remote stub doesn't support the `InstallInTrace' feature, all
these changes don't take effect until the next tstart
command, and once a trace experiment is running, further changes will
not have any effect until the next trace experiment starts. In addition,
GDB supports pending tracepoints---tracepoints whose
address is not yet resolved. (This is similar to pending breakpoints.)
Pending tracepoints are not downloaded to the target and not installed
until they are resolved. The resolution of pending tracepoints requires
GDB support--when debugging with the remote target, and
GDB disconnects from the remote stub (see disconnected tracing), pending tracepoints cannot be resolved (and downloaded to
the remote stub) while GDB is disconnected.
Here are some examples of using the trace command:
(gdb) trace foo.c:121    // a source file and line number
(gdb) trace +2           // 2 lines forward
(gdb) trace my_function  // first source line of function
(gdb) trace *my_function // EXACT start address of function
(gdb) trace *0x2117c4    // an address
|
You can abbreviate trace as tr.
trace location if cond
ftrace location [ if cond ]
The ftrace command sets a fast tracepoint. For targets that
support them, fast tracepoints will use a more efficient but possibly
less general technique to trigger data collection, such as a jump
instruction instead of a trap, or some sort of hardware support. It
may not be possible to create a fast tracepoint at the desired
location, in which case the command will exit with an explanatory
message.
GDB handles arguments to ftrace exactly as for
trace.
On 32-bit x86-architecture systems, fast tracepoints normally need to
be placed at an instruction that is 5 bytes or longer, but can be
placed at 4-byte instructions if the low 64K of memory of the target
program is available to install trampolines. Some Unix-type systems,
such as GNU/Linux, exclude low addresses from the program's
address space; but for instance with the Linux kernel it is possible
to let GDB use this area by doing a sysctl command
to set the mmap_min_addr kernel parameter, as in
sudo sysctl -w vm.mmap_min_addr=32768 |
which sets the low address to 32K, which leaves plenty of room for trampolines. The minimum address should be set to a page boundary.
strace location [ if cond ]
The strace command sets a static tracepoint. For targets that
support it, setting a static tracepoint probes a static
instrumentation point, or marker, found at location. It may not
be possible to set a static tracepoint at the desired location, in
which case the command will exit with an explanatory message.
GDB handles arguments to strace exactly as for
trace, with the addition that the user can also specify
-m marker as location. This probes the marker
identified by the marker string identifier. This identifier
depends on the static tracepoint backend library your program is
using. You can find all the marker identifiers in the `ID' field
of the info static-tracepoint-markers command output.
See section Listing Static Tracepoint Markers. For example, in the following small program using the UST
tracing engine:
main ()
{
trace_mark(ust, bar33, "str %s", "FOOBAZ");
}
|
the marker id is composed of joining the first two arguments to the
trace_mark call with a slash, which translates to:
(gdb) info static-tracepoint-markers
Cnt Enb ID Address What
1 n ust/bar33 0x0000000000400ddc in main at stexample.c:22
Data: "str %s"
[etc...]
|
so you may probe the marker above with:
(gdb) strace -m ust/bar33
|
Static tracepoints accept an extra collect action -- collect
$_sdata. This collects arbitrary user data passed in the probe point
call to the tracing library. In the UST example above, you'll see
that the third argument to trace_mark is a printf-like format
string. The user data is then the result of running that formatting
string against the following arguments. Note that info
static-tracepoint-markers command output lists that format string in
the `Data:' field.
You can inspect this data when analyzing the trace buffer, by printing the $_sdata variable like any other variable available to GDB. See section Tracepoint Action Lists.
The convenience variable $tpnum records the tracepoint number
of the most recently set tracepoint.
delete tracepoint [num]
Note that the regular delete command can remove tracepoints also.
Examples:
(gdb) delete trace 1 2 3 // remove three tracepoints
(gdb) delete trace       // remove all tracepoints
|
You can abbreviate this command as del tr.
These commands are deprecated; they are equivalent to plain disable and enable.
disable tracepoint [num]
You can re-enable a disabled tracepoint using the enable tracepoint command.
If the command is issued during a trace experiment and the debug target
has support for disabling tracepoints during a trace experiment, then the
change will be effective immediately. Otherwise, it will be applied to the
next trace experiment.
enable tracepoint [num]
passcount [n [num]]
The passcount command sets the
passcount of the most recently defined tracepoint. If no passcount is
given, the trace experiment will run until stopped explicitly by the
user.
Examples:
(gdb) passcount 5 2 // Stop on the 5th execution of tracepoint 2
|
The simplest sort of tracepoint collects data every time your program reaches a specified place. You can also specify a condition for a tracepoint. A condition is just a Boolean expression in your programming language (see section Expressions). A tracepoint with a condition evaluates the expression each time your program reaches it, and data collection happens only if the condition is true.
Tracepoint conditions can be specified when a tracepoint is set, by
using `if' in the arguments to the trace command.
See section Setting Tracepoints. They can
also be set or changed at any time with the condition command,
just as with breakpoints.
Unlike breakpoint conditions, GDB does not actually evaluate the conditional expression itself. Instead, GDB encodes the expression into an agent expression (see section F. The GDB Agent Expression Mechanism) suitable for execution on the target, independently of GDB. Global variables become raw memory locations, locals become stack accesses, and so forth.
For instance, suppose you have a function that is usually called frequently, but should not be called after an error has occurred. You could use the following tracepoint command to collect data about calls of that function that happen while the error code is propagating through the program; an unconditional tracepoint could end up collecting thousands of useless trace frames that you would have to search through.
(gdb) trace normal_operation if errcode > 0
|
A trace state variable is a special type of variable that is
created and managed by target-side code. The syntax is the same as
that for GDB's convenience variables (a string prefixed with "$"),
but they are stored on the target. They must be created explicitly,
using a tvariable command. They are always 64-bit signed
integers.
Trace state variables are remembered by GDB, and downloaded to the target along with tracepoint information when the trace experiment starts. There are no intrinsic limits on the number of trace state variables, beyond memory limitations of the target.
Although trace state variables are managed by the target, you can use
them in print commands and expressions as if they were convenience
variables; GDB will get the current value from the target
while the trace experiment is running. Trace state variables share
the same namespace as other "$" variables, which means that you
cannot have trace state variables with names like $23 or
$pc, nor can you have a trace state variable and a convenience
variable with the same name.
tvariable $name [ = expression ]
The tvariable command creates a new trace state variable named
$name, and optionally gives it an initial value of
expression. expression is evaluated when this command is
entered; the result will be converted to an integer if possible,
otherwise GDB will report an error. A subsequent
tvariable command specifying the same name does not create a
variable, but instead assigns the supplied initial value to the
existing variable of that name, overwriting any previous initial
value. The default initial value is 0.
info tvariables
delete tvariable [ $name ... ]
actions [num]
This command will prompt for a list of actions to be taken when the
tracepoint is hit. If the tracepoint number num is not specified,
this command sets the actions for the tracepoint most recently defined
(so that you can define a tracepoint and then say
actions without bothering about its number). You specify the
actions themselves on the following lines, one action at a time, and
terminate the actions list with a line containing just end. So
far, the only defined actions are collect, teval, and
while-stepping.
The actions command is actually equivalent to commands (see section Breakpoint Command Lists), except that only the defined
actions are allowed; any other command is rejected.
To remove all actions from a tracepoint, type `actions num' and follow it immediately with `end'.
(gdb) collect data     // collect some data
(gdb) while-stepping 5 // single-step 5 times, collect data
(gdb) end              // signals the end of actions.
|
In the following example, the action list begins with collect
commands indicating the things to be collected when the tracepoint is
hit. Then, in order to single-step and collect additional data
following the tracepoint, a while-stepping command is used,
followed by the list of things to be collected after each step in a
sequence of single steps. The while-stepping command is
terminated by its own separate end command. Lastly, the action
list is terminated by an end command.
(gdb) trace foo
(gdb) actions
Enter actions for tracepoint 1, one per line:
> collect bar,baz
> collect $regs
> while-stepping 12
  > collect $pc, arr[i]
  > end
end
|
collect[/mods] expr1, expr2, ...
$regs
$args
$locals
$_ret
$_probe_argc
$_probe_argn
$_sdata
printf function call. The
tracing library is able to collect user specified data formatted to a
character string using the format provided by the programmer that
instrumented the program. Other backends have similar mechanisms.
Here's an example of a UST marker call:
const char master_name[] = "$your_name";
trace_mark(channel1, marker1, "hello %s", master_name)
|
In this case, collecting $_sdata collects the string
`hello $your_name'. When analyzing the trace buffer, you can
inspect `$_sdata' like any other variable available to
GDB.
You can give several consecutive collect commands, each one
with a single argument, or one collect command with several
arguments separated by commas; the effect is the same.
The optional mods changes the usual handling of the arguments.
s requests that pointers to chars be handled as strings, in
particular collecting the contents of the memory being pointed at, up
to the first zero. The upper bound is by default the value of the
print elements variable; if s is followed by a decimal
number, that is the upper bound instead. So for instance
`collect/s25 mystr' collects as many as 25 characters at
`mystr'.
The command info scope (see section info scope) is
particularly useful for figuring out what data to collect.
teval expr1, expr2, ...
collect
action were used.
while-stepping n
The while-stepping
command is followed by the list of what to collect while stepping
(followed by its own end command):
> while-stepping 12
  > collect $regs, myglobal
  > end
>
|
Note that $pc is not automatically collected by
while-stepping; you need to explicitly collect that register if
you need it. You may abbreviate while-stepping as ws or
stepping.
set default-collect expr1, expr2, ...
This is effectively an additional collect action prepended
to every tracepoint action list. The expressions are parsed
individually for each tracepoint, so for instance a variable named
xyz may be interpreted as a global for one tracepoint, and a
local for another, as appropriate to the tracepoint's location.
show default-collect
info tracepoints [num...]
This command is similar to info breakpoints; in fact, info tracepoints is the same
command, simply restricting itself to tracepoints.
A tracepoint's listing may include additional information specific to tracing:
passcount n command
(gdb) info trace
Num Type Disp Enb Address What
1 tracepoint keep y 0x0804ab57 in foo() at main.cxx:7
while-stepping 20
collect globfoo, $regs
end
collect globfoo2
end
pass count 1200
2 tracepoint keep y <MULTIPLE>
collect $eip
2.1 y 0x0804859c in func4 at change-loc.h:35
installed on target
2.2 y 0xb7ffc480 in func4 at change-loc.h:35
installed on target
2.3 y <PENDING> set_tracepoint
3 tracepoint keep y 0x080485b1 in foo at change-loc.c:29
not installed on target
(gdb)
|
This command can be abbreviated info tp.
info static-tracepoint-markers
For each marker, the following columns are printed:
In addition, the following information may be printed for each marker:
(gdb) info static-tracepoint-markers
Cnt ID Enb Address What
1 ust/bar2 y 0x0000000000400e1a in main at stexample.c:25
Data: number1 %d number2 %d
Probed by static tracepoints: #2
2 ust/bar33 n 0x0000000000400c87 in main at stexample.c:24
Data: str %s
(gdb)
|
tstart
tstop
Note: a trace experiment and data collection may stop automatically if any tracepoint's passcount is reached (see section 13.1.3 Tracepoint Passcounts), or if the trace buffer becomes full.
tstatus
Here is an example of the commands we described so far:
(gdb) trace gdb_c_test
(gdb) actions
Enter actions for tracepoint #1, one per line.
> collect $regs,$locals,$args
> while-stepping 11
  > collect $regs
  > end
> end
(gdb) tstart
[time passes ...]
(gdb) tstop
|
You can choose to continue running the trace experiment even if
GDB disconnects from the target, voluntarily or
involuntarily. For commands such as detach, the debugger will
ask what you want to do with the trace. But for unexpected
terminations (GDB crash, network outage), it would be
unfortunate to lose hard-won trace data, so the variable
disconnected-tracing lets you decide whether the trace should
continue running without GDB.
set disconnected-tracing on
set disconnected-tracing off
detach or
quit will ask you directly what to do about a running trace no
matter what this variable's setting, so the variable is mainly useful
for handling unexpected situations, such as loss of the network.
show disconnected-tracing
When you reconnect to the target, the trace experiment may or may not still be running; it might have filled the trace buffer in the meantime, or stopped for one of the other reasons. If it is running, it will continue after reconnection.
Upon reconnection, the target will upload information about the tracepoints in effect. GDB will then compare that information to the set of tracepoints currently defined, and attempt to match them up, allowing for the possibility that the numbers may have changed due to creation and deletion in the meantime. If one of the target's tracepoints does not match any in GDB, the debugger will create a new tracepoint, so that you have a number with which to specify that tracepoint. This matching-up process is necessarily heuristic, and it may result in useless tracepoints being created; you may simply delete them if they are of no use.
If your target agent supports a circular trace buffer, then you can run a trace experiment indefinitely without filling the trace buffer; when space runs out, the agent deletes already-collected trace frames, oldest first, until there is enough room to continue collecting. This is especially useful if your tracepoints are being hit too often, and your trace gets terminated prematurely because the buffer is full. To ask for a circular trace buffer, simply set `circular-trace-buffer' to on. You can set this at any time, including during tracing; if the agent can do it, it will change buffer handling on the fly, otherwise it will not take effect until the next run.
set circular-trace-buffer on
set circular-trace-buffer off
show circular-trace-buffer
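For instance, a session using a circular buffer might look like the following sketch (the behavior shown assumes the target agent supports circular buffers):

```
(gdb) set circular-trace-buffer on
(gdb) tstart
[... tracing continues; when the buffer fills, the oldest
     trace frames are discarded to make room ...]
(gdb) tstop
```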
set trace-buffer-size n
Request that the target use a trace buffer of n bytes. A value of
-1 tells the target to use whatever size it likes. This is also
the default.
show trace-buffer-size
Use tstatus to get a report of the actual buffer size.
set trace-user text
Set the user name for the current trace run.
show trace-user
Show the user name for the current trace run.
set trace-notes text
Set notes describing the trace run.
show trace-notes
Show the trace run's notes.
set trace-stop-notes text
Set notes describing how the trace run was stopped. This is
equivalent to supplying the notes via tstop arguments; the set
command is a convenient way to fix a stop note that is mistaken or
incomplete.
show trace-stop-notes
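For example (the user name and note strings are purely illustrative):

```
(gdb) set trace-user "nightly-test"
(gdb) set trace-notes "re-running experiment with larger buffer"
(gdb) tstart
```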
| [ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
There are a number of restrictions on the use of tracepoints. As described above, tracepoint data gathering occurs on the target without interaction from GDB. Thus the full capabilities of the debugger are not available during data gathering, and then at data examination time, you will be limited by only having what was collected. The following items describe some common problems, but the list is not exhaustive, and you may run into additional difficulties not mentioned here.
Collection of local variables, either individually or in bulk with
$locals or $args, during while-stepping may
behave erratically. The stepping action may enter a new scope (for
instance by stepping into a function), or the location of the variable
may change (for instance it is loaded into a register). The
tracepoint data recorded uses the location information for the
variables that is correct for the tracepoint location. When the
tracepoint is created, it is not possible, in general, to determine
where the steps of a while-stepping sequence will advance the
program--particularly if a conditional branch is stepped.
When GDB displays a pointer to character it automatically dereferences
the pointer to also display characters of the string being pointed to.
However, collecting the pointer during tracing does not automatically
collect the string. You need to explicitly dereference the pointer
and provide size information if you want to collect not only the
pointer, but the memory pointed to. For example, *ptr@50 can
be used to collect the 50 element array pointed to by ptr.
It is not possible to collect a complete stack backtrace at a
tracepoint. Instead, you may collect the registers and a sufficient
amount of stack to be able to examine some of the stack frames, with
an expression such as *(unsigned char *)$esp@300
(adjust to use the name of the actual stack pointer register on your
target architecture, and the amount of stack you wish to capture).
Then the backtrace command will show a partial backtrace when
using a trace frame. The number of stack frames that can be examined
depends on the sizes of the frames in the collected stack. Note that
if you ask for a block so large that it goes past the bottom of the
stack, the target agent may report an error trying to read from an
invalid address.
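For instance, on an x86 target the stack-capture action might be written as follows (the register name $esp and the 300-byte count are illustrative; adjust both for your architecture):

```
(gdb) actions
Enter actions for tracepoint #1, one per line.
> collect $regs, *(unsigned char *)$esp@300
> end
```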
If you do not collect registers at a tracepoint, GDB can infer that
the value of $pc must be the same as the address of
the tracepoint and use that when you are looking at a trace frame
for that tracepoint. However, this cannot work if the tracepoint has
multiple locations (for instance if it was set in a function that was
inlined), or if it has a while-stepping loop. In those cases
GDB will warn you that it can't infer $pc, and default
it to zero.
After the tracepoint experiment ends, you use GDB commands
for examining the trace data. The basic idea is that each tracepoint
collects a trace snapshot every time it is hit and another
snapshot every time it single-steps. All these snapshots are
consecutively numbered from zero and go into a buffer, and you can
examine them later. The way you examine them is to focus on a
specific trace snapshot. When the remote stub is focused on a trace
snapshot, it will respond to all GDB requests for memory and
registers by reading from the buffer which belongs to that snapshot,
rather than from real memory or registers of the program being
debugged. This means that all GDB commands
(print, info registers, backtrace, etc.) will
behave as if we were currently debugging the program state as it was
when the tracepoint occurred. Any requests for data that are not in
the buffer will fail.
13.2.1 tfind n  How to select a trace snapshot
13.2.2 tdump  How to display all data for a snapshot
13.2.3 save tracepoints filename  How to save tracepoints for a future run
tfind n
The basic command for selecting a trace snapshot from the buffer is
tfind n, which finds trace snapshot number n,
counting from zero. If no argument n is given, the next
snapshot is selected.
Here are the various forms of using the tfind command.
tfind start
Find the first snapshot in the buffer. This is a synonym for
tfind 0 (since 0 is the number of the first snapshot).
tfind none
Stop debugging trace snapshots, resume live debugging.
tfind end
Same as tfind none.
tfind
No argument means find the next trace snapshot.
tfind -
Find the previous trace snapshot before the current one. This permits
retracing earlier steps.
tfind tracepoint num
Find the next snapshot associated with tracepoint num. Search
proceeds forward from the last examined trace snapshot. If no
argument num is given, it means find the next snapshot collected
for the same tracepoint as the current snapshot.
tfind pc addr
Find the next snapshot associated with the value addr of the
program counter. Search proceeds forward from the last examined trace
snapshot. If no argument addr is given, it means find the next
snapshot with the same value of PC as the current snapshot.
tfind outside addr1, addr2
Find the next snapshot whose PC is outside the given range of
addresses (exclusive).
tfind range addr1, addr2
Find the next snapshot whose PC is between addr1 and
addr2 (inclusive).
tfind line [file:]n
Find the next snapshot associated with the source line n. If
the optional argument file is given, refer to line n in
that source file. Search proceeds forward from the last examined
trace snapshot. If no argument n is given, it means find the
next line other than the one currently being examined; thus saying
tfind line repeatedly can appear to have the same effect as
stepping from line to line in a live debugging session.
The default arguments for the tfind commands are specifically
designed to make it easy to scan through the trace buffer. For
instance, tfind with no argument selects the next trace
snapshot, and tfind - with no argument selects the previous
trace snapshot. So, by giving one tfind command, and then
simply hitting RET repeatedly you can examine all the trace
snapshots in order. Or, by saying tfind - and then hitting
RET repeatedly you can examine the snapshots in reverse order.
The tfind line command with no argument selects the snapshot
for the next source line executed. The tfind pc command with
no argument selects the next snapshot with the same program counter
(PC) as the current frame. The tfind tracepoint command with
no argument selects the next trace snapshot collected by the same
tracepoint as the current one.
In addition to letting you scan through the trace buffer manually, these commands make it easy to construct scripts that scan through the trace buffer and print out whatever collected data you are interested in. Thus, if we want to examine the PC, FP, and SP registers from each trace frame in the buffer, we can say this:
(gdb) tfind start
(gdb) while ($trace_frame != -1)
> printf "Frame %d, PC = %08X, SP = %08X, FP = %08X\n", \
$trace_frame, $pc, $sp, $fp
> tfind
> end
Frame 0, PC = 0020DC64, SP = 0030BF3C, FP = 0030BF44
Frame 1, PC = 0020DC6C, SP = 0030BF38, FP = 0030BF44
Frame 2, PC = 0020DC70, SP = 0030BF34, FP = 0030BF44
Frame 3, PC = 0020DC74, SP = 0030BF30, FP = 0030BF44
Frame 4, PC = 0020DC78, SP = 0030BF2C, FP = 0030BF44
Frame 5, PC = 0020DC7C, SP = 0030BF28, FP = 0030BF44
Frame 6, PC = 0020DC80, SP = 0030BF24, FP = 0030BF44
Frame 7, PC = 0020DC84, SP = 0030BF20, FP = 0030BF44
Frame 8, PC = 0020DC88, SP = 0030BF1C, FP = 0030BF44
Frame 9, PC = 0020DC8E, SP = 0030BF18, FP = 0030BF44
Frame 10, PC = 00203F6C, SP = 0030BE3C, FP = 0030BF14
Or, if we want to examine the variable X at each source line in
the buffer:
(gdb) tfind start
(gdb) while ($trace_frame != -1)
> printf "Frame %d, X == %d\n", $trace_frame, X
> tfind line
> end
Frame 0, X == 1
Frame 7, X == 2
Frame 13, X == 255
tdump
This command takes no arguments. It prints all the data collected at the current trace snapshot.
(gdb) trace 444
(gdb) actions
Enter actions for tracepoint #2, one per line:
> collect $regs, $locals, $args, gdb_long_test
> end
(gdb) tstart
(gdb) tfind line 444
#0  gdb_test (p1=0x11, p2=0x22, p3=0x33, p4=0x44, p5=0x55, p6=0x66)
    at gdb_test.c:444
444        printp( "%s: arguments = 0x%X 0x%X 0x%X 0x%X 0x%X 0x%X\n", )
(gdb) tdump
Data collected at tracepoint 2, trace frame 1:
d0             0xc4aa0085       -995491707
d1             0x18             24
d2             0x80             128
d3             0x33             51
d4             0x71aea3d        119204413
d5             0x22             34
d6             0xe0             224
d7             0x380035         3670069
a0             0x19e24a         1696330
a1             0x3000668        50333288
a2             0x100            256
a3             0x322000         3284992
a4             0x3000698        50333336
a5             0x1ad3cc         1758156
fp             0x30bf3c         0x30bf3c
sp             0x30bf34         0x30bf34
ps             0x0              0
pc             0x20b2c8         0x20b2c8
fpcontrol      0x0              0
fpstatus       0x0              0
fpiaddr        0x0              0
p = 0x20e5b4 "gdb-test"
p1 = (void *) 0x11
p2 = (void *) 0x22
p3 = (void *) 0x33
p4 = (void *) 0x44
p5 = (void *) 0x55
p6 = (void *) 0x66
gdb_long_test = 17 '\021'
(gdb)
tdump works by scanning the tracepoint's current collection
actions and printing the value of each expression listed. So
tdump can fail, if after a run, you change the tracepoint's
actions to mention variables that were not collected during the run.
Also, for tracepoints with while-stepping loops, tdump
uses the collected value of $pc to distinguish between trace
frames that were collected at the tracepoint hit, and frames that were
collected while stepping. This allows it to correctly choose whether
to display the basic list of collections, or the collections from the
body of the while-stepping loop. However, if $pc was not collected,
then tdump will always attempt to dump using the basic collection
list, and may fail if a while-stepping frame does not include all the
same data that is collected at the tracepoint hit.
save tracepoints filename
This command saves all current tracepoint definitions together with
their actions and passcounts, into a file `filename'
suitable for use in a later debugging session. To read the saved
tracepoint definitions, use the source command (see section 23.1.3 Command Files). The save-tracepoints command is a deprecated
alias for save tracepoints.
(int) $trace_frame
The current trace snapshot (a.k.a. frame) number, or -1 if no
snapshot is selected.
(int) $tracepoint
The number of the tracepoint for the current trace snapshot.
(int) $trace_line
The line number for the current trace snapshot.
(char []) $trace_file
The source file for the current trace snapshot.
(char []) $trace_func
The name of the function containing $tracepoint.
Note: $trace_file is not suitable for use in printf,
use output instead.
Here's a simple example of using these convenience variables for stepping through all the trace snapshots and printing some of their data. Note that these are not the same as trace state variables, which are managed by the target.
(gdb) tfind start
(gdb) while $trace_frame != -1
> output $trace_file
> printf ", line %d (tracepoint #%d)\n", $trace_line, $tracepoint
> tfind
> end
In some situations, the target running a trace experiment may no
longer be available; perhaps it crashed, or the hardware was needed
for a different activity. To handle these cases, you can arrange to
dump the trace data into a file, and later use that file as a source
of trace data, via the target tfile command.
tsave [ -r ] filename
Save the trace data to filename. By default, this command assumes
that filename refers to the host filesystem, so if necessary GDB
will copy raw trace data up from the target and then save it. If the
target supports it, you can also supply the optional argument
-r ("remote") to direct the target to save
the data directly into filename in its own filesystem, which may be
more efficient if the trace buffer is very large. (Note, however, that
target tfile can only read from files accessible to the host.)
target tfile filename
Use the file named filename as a source of trace data. Commands
that examine data work as they do with a live target, but it is not
possible to run any new trace experiments. tstatus will report
the state of the trace run at the moment the data was saved, as well
as the current trace frame you are examining. filename must be
on a filesystem accessible to the host.
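A typical save-and-reload sequence might look like this sketch (the file name is illustrative):

```
(gdb) tsave my-trace.tf
[... later, possibly in a new GDB session ...]
(gdb) target tfile my-trace.tf
(gdb) tstatus
(gdb) tfind start
```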
If your program is too large to fit completely in your target system's memory, you can sometimes use overlays to work around this problem. GDB provides some support for debugging programs that use overlays.
14.1 How Overlays Work A general explanation of overlays. 14.2 Overlay Commands Managing overlays in GDB. 14.3 Automatic Overlay Debugging How GDB can find out which overlays are mapped by asking the inferior. 14.4 Overlay Sample Program A sample program using overlays.
Suppose you have a computer whose instruction address space is only 64 kilobytes long, but which has much more memory which can be accessed by other means: special instructions, segment registers, or memory management hardware, for example. Suppose further that you want to adapt a program which is larger than 64 kilobytes to run on this system.
One solution is to identify modules of your program which are relatively independent, and need not call each other directly; call these modules overlays. Separate the overlays from the main program, and place their machine code in the larger memory. Place your main program in instruction memory, but leave at least enough space there to hold the largest overlay as well.
Now, to call a function located in an overlay, you must first copy that overlay's machine code from the large memory into the space set aside for it in the instruction memory, and then jump to its entry point there.
The diagram (see A code overlay) shows a system with separate data and instruction address spaces. To map an overlay, the program copies its code from the larger address space to the instruction address space. Since the overlays shown here all use the same mapped address, only one may be mapped at a time. For a system with a single address space for data and instructions, the diagram would be similar, except that the program variables and heap would share an address space with the main program and the overlay area.
An overlay loaded into instruction memory and ready for use is called a mapped overlay; its mapped address is its address in the instruction memory. An overlay not present (or only partially present) in instruction memory is called unmapped; its load address is its address in the larger memory. The mapped address is also called the virtual memory address, or VMA; the load address is also called the load memory address, or LMA.
Unfortunately, overlays are not a completely transparent way to adapt a program to limited instruction memory. They introduce a new set of global constraints you must keep in mind as you design your program:
The overlay system described above is rather simple, and could be improved in many ways:
To use GDB's overlay support, each overlay in your program must correspond to a separate section of the executable file. The section's virtual memory address and load memory address must be the overlay's mapped and load addresses. Identifying overlays with sections allows GDB to determine the appropriate address of a function or variable, depending on whether the overlay is mapped or not.
GDB's overlay commands all start with the word overlay;
you can abbreviate this as ov or ovly. The commands are:
overlay off
Disable GDB's overlay support. When overlay support is disabled, GDB
assumes all functions and variables are always present at their mapped
addresses. By default, GDB's overlay support is disabled.
overlay manual
Enable manual overlay debugging. In this mode, GDB relies on you to
tell it which overlays are mapped, and which are not, using the
overlay map-overlay and overlay unmap-overlay
commands described below.
overlay map-overlay overlay
overlay map overlay
Tell GDB that overlay is now mapped; overlay must be the
name of the object file section containing the overlay. When an
overlay is mapped, GDB assumes it can find the overlay's functions and
variables at their mapped addresses.
overlay unmap-overlay overlay
overlay unmap overlay
Tell GDB that overlay is no longer mapped; overlay must
be the name of the object file section containing the overlay. When
an overlay is unmapped, GDB assumes it can find the overlay's
functions and variables at their load addresses.
overlay auto
Enable automatic overlay debugging. In this mode, GDB consults a data
structure the overlay manager maintains in the inferior to see which
overlays are mapped.
overlay load-target
overlay load
Re-read the overlay table from the inferior. Normally, GDB re-reads
the table each time the inferior stops, so this command should only be
needed in unusual situations.
overlay list-overlays
overlay list
Display a list of the overlays currently mapped, along with their
mapped addresses, load addresses, and sizes.
Normally, when GDB prints a code address, it includes the name of the function the address falls in:
(gdb) print main
$3 = {int ()} 0x11a0 <main>
When overlay debugging is enabled, however, if foo is a function in an
unmapped overlay, GDB prints it this way:
(gdb) overlay list
No sections are mapped.
(gdb) print foo
$5 = {int (int)} 0x100000 <*foo*>
When foo's overlay is mapped, GDB prints the function's
name normally:
(gdb) overlay list
Section .ov.foo.text, loaded at 0x100000 - 0x100034,
        mapped at 0x1016 - 0x104a
(gdb) print foo
$6 = {int (int)} 0x1016 <foo>
When overlay debugging is enabled, GDB can find the correct
address for functions and variables in an overlay, whether or not the
overlay is mapped. This allows most GDB commands, like
break and disassemble, to work normally, even on unmapped
code. However, GDB's breakpoint support has some limitations:
GDB can automatically track which overlays are mapped and which
are not, given some simple co-operation from the overlay manager in the
inferior. If you enable automatic overlay debugging with the
overlay auto command (see section 14.2 Overlay Commands), GDB
looks in the inferior's memory for certain variables describing the
current state of the overlays.
Here are the variables your overlay manager must define to support GDB's automatic overlay debugging:
_ovly_table:
This variable must be an array of the following structures:
struct
{
/* The overlay's mapped address. */
unsigned long vma;
/* The size of the overlay, in bytes. */
unsigned long size;
/* The overlay's load address. */
unsigned long lma;
/* Non-zero if the overlay is currently mapped;
zero otherwise. */
unsigned long mapped;
}
_novlys:
This variable must be a four-byte signed integer, holding the total
number of elements in _ovly_table.
To decide whether a particular overlay is mapped or not, GDB
looks for an entry in _ovly_table whose vma and
lma members equal the VMA and LMA of the overlay's section in the
executable file. When GDB finds a matching entry, it consults
the entry's mapped member to determine whether the overlay is
currently mapped.
In addition, your overlay manager may define a function called
_ovly_debug_event. If this function is defined, GDB
will silently set a breakpoint there. If the overlay manager then
calls this function whenever it has changed the overlay table, this
will enable GDB to accurately keep track of which overlays
are in program memory, and update any breakpoints that may be set
in overlays. This will allow breakpoints to work even if the
overlays are kept in ROM or other non-writable memory while they
are not being executed.
When linking a program which uses overlays, you must place the overlays at their load addresses, while relocating them to run at their mapped addresses. To do this, you must write a linker script (see section `Overlay Description' in Using ld: the GNU linker). Unfortunately, since linker scripts are specific to a particular host system, target architecture, and target memory layout, this manual cannot provide portable sample code demonstrating GDB's overlay support.
However, the GDB source distribution does contain an overlaid program, with linker scripts for a few systems, as part of its test suite. The program consists of the following files from `gdb/testsuite/gdb.base':
d10v.ld, m32r.ld
Linker scripts for linking the test program on the d10v-elf
and m32r-elf targets.
You can build the test program using the d10v-elf GCC
cross-compiler like this:
$ d10v-elf-gcc -g -c overlays.c
$ d10v-elf-gcc -g -c ovlymgr.c
$ d10v-elf-gcc -g -c foo.c
$ d10v-elf-gcc -g -c bar.c
$ d10v-elf-gcc -g -c baz.c
$ d10v-elf-gcc -g -c grbx.c
$ d10v-elf-gcc -g overlays.o ovlymgr.o foo.o bar.o \
baz.o grbx.o -Wl,-Td10v.ld -o overlays
The build process is identical for any other architecture, except that
you must substitute the appropriate compiler and linker script for the
target system for d10v-elf-gcc and d10v.ld.
Although programming languages generally have common aspects, they are
rarely expressed in the same manner. For instance, in ANSI C,
dereferencing a pointer p is accomplished by *p, but in
Modula-2, it is accomplished by p^. Values can also be
represented (and displayed) differently. Hex numbers in C appear as
`0x1ae', while in Modula-2 they appear as `1AEH'.
Language-specific information is built into GDB for some languages, allowing you to express operations like the above in your program's native language, and allowing GDB to output values in a manner consistent with the syntax of your program's native language. The language you use to build expressions is called the working language.
15.1 Switching Between Source Languages Switching between source languages 15.2 Displaying the Language Displaying the language 15.3 Type and Range Checking Type and range checks 15.4 Supported Languages Supported languages 15.5 Unsupported Languages Unsupported languages
There are two ways to control the working language--either have GDB
set it automatically, or select it manually yourself. You can use the
set language command for either purpose. On startup, GDB
defaults to setting the language automatically. The working language is
used to determine how expressions you type are interpreted, how values
are printed, etc.
In addition to the working language, every source file that GDB
knows about has its own working language. For some object
file formats, the compiler might indicate which language a particular
source file is in. However, most of the time GDB infers the
language from the name of the file. The language of a source file
controls whether C++ names are demangled--this way backtrace can
show each frame appropriately for its own language. There is no way to
set the language of a source file from within GDB, but you can
set the language associated with a filename extension. See section Displaying the Language.
This is most commonly a problem when you use a program, such
as cfront or f2c, that generates C but is written in
another language. In that case, make the
program use #line directives in its C output; that way GDB
will know the correct language of the source code of the original
program, and will display that source code, not the generated C code.
15.1.1 List of Filename Extensions and Languages Filename extensions and languages. 15.1.2 Setting the Working Language Setting the working language manually 15.1.3 Having GDB Infer the Source Language Having GDB infer the source language
If a source file name ends in one of the following extensions, then GDB infers that its language is the one indicated.
In addition, you may set the language associated with a filename extension. See section Displaying the Language.
If you allow GDB to set the language automatically, expressions are interpreted the same way in your debugging session and your program.
If you wish, you may set the language manually. To do this, issue the
command `set language lang', where lang is the name of
a language, such as
c or modula-2.
For a list of the supported languages, type `set language'.
Setting the language manually prevents GDB from updating the working language automatically. This can lead to confusion if you try to debug a program when the working language is not the same as the source language, when an expression is acceptable to both languages--but means different things. For instance, if the current source file were written in C, and GDB was parsing Modula-2, a command such as:
print a = b + c
might not have the effect you intended. In C, this means to add
b and c and place the result in a. The result
printed would be the value of a. In Modula-2, this means to compare
a to the result of b+c, yielding a BOOLEAN value.
To have GDB set the working language automatically, use `set language local' or `set language auto'. GDB then infers the working language. That is, when your program stops in a frame (usually by encountering a breakpoint), GDB sets the working language to the language recorded for the function in that frame. If the language for a frame is unknown (that is, if the function or block corresponding to the frame was defined in a source file that does not have a recognized extension), the current working language is not changed, and GDB issues a warning.
This may not seem necessary for most programs, which are written entirely in one source language. However, program modules and libraries written in one source language can be used by a main program written in a different source language. Using `set language auto' in this case frees you from having to set the working language manually.
The following commands help you find out which language is the working language, and also what language source files were written in.
show language
Display the current working language. This is the language you can
use with commands such as print to
build and compute expressions that may involve variables in your program.
info frame
Display the source language for this frame. This language becomes the
working language if you use an identifier from this frame.
info source
Display the source language of this source file.
In unusual circumstances, you may have source files with extensions not in the standard list. You can then set the extension associated with a language explicitly:
set extension-language ext language
Tell GDB that source files with extension ext are to be assumed
to be written in the source language language.
info extensions
List all the filename extensions and the associated languages.
Some languages are designed to guard you against making seemingly common errors through a series of compile- and run-time checks. These include checking the type of arguments to functions and operators and making sure mathematical overflows are caught at run time. Checks such as these help to ensure a program's correctness once it has been compiled by eliminating type mismatches and providing active checks for range errors when your program is running.
By default GDB checks for these errors according to the
rules of the current source language. Although GDB does not check
the statements in your program, it can check expressions entered directly
into GDB for evaluation via the print command, for example.
15.3.1 An Overview of Type Checking An overview of type checking 15.3.2 An Overview of Range Checking An overview of range checking
Some languages, such as C and C++, are strongly typed, meaning that the arguments to operators and functions have to be of the correct type, otherwise an error occurs. These checks prevent type mismatch errors from ever causing any run-time problems. For example,
int klass::my_method(char *b) { return  b ? 1 : 2; }

(gdb) print obj.my_method (0)
$1 = 2
but
(gdb) print obj.my_method (0x1234)
Cannot resolve method klass::my_method to any overloaded instance
The second example fails because in C++ the integer constant `0x1234' is not type-compatible with the pointer parameter type.
For the expressions you use in GDB commands, you can tell GDB to not enforce strict type checking or to treat any mismatches as errors and abandon the expression. When type checking is disabled, GDB successfully evaluates expressions like the second example above.
Even if type checking is off, there may be other reasons
related to type that prevent GDB from evaluating an expression.
For instance, GDB does not know how to add an int and
a struct foo. These particular type errors have nothing to do
with the language in use and usually arise from expressions which make
little sense to evaluate anyway.
GDB provides some additional commands for controlling type checking:
set check type on
set check type off
Set strict type checking on or off, overriding the default setting for
the current working language. A warning is issued if the setting does
not match the language default.
show check type
Show the current setting of type checking and whether GDB
is setting it automatically.
In some languages (such as Modula-2), it is an error to exceed the bounds of a type; this is enforced with run-time checks. Such range checking is meant to ensure program correctness by making sure computations do not overflow, or indices on an array element access do not exceed the bounds of the array.
For expressions you use in GDB commands, you can tell GDB to treat range errors in one of three ways: ignore them, always treat them as errors and abandon the expression, or issue warnings but evaluate the expression anyway.
A range error can result from numerical overflow, from exceeding an array index bound, or when you type a constant that is not a member of any type. Some languages, however, do not treat overflows as an error. In many implementations of C, mathematical overflow causes the result to "wrap around" to lower values--for example, if m is the largest integer value, and s is the smallest, then
m + 1 => s
This, too, is specific to individual languages, and in some cases specific to individual compilers or machines. See section Supported Languages, for further details on specific languages.
GDB provides some additional commands for controlling the range checker:
set check range auto
Set range checking on or off based on the current working language.
set check range on
set check range off
Set range checking on or off, overriding the default setting for the
current working language. A warning is issued if the setting does not
match the language default. If a range error occurs and range
checking is on, then a message is printed and evaluation of the
expression is aborted.
set check range warn
Output messages when the GDB range checker detects a range error, but
attempt to evaluate the expression anyway. Evaluating the expression
may still be impossible for other reasons, such as accessing memory
that the process does not own.
show range
Show the current setting of the range checker, and whether or not it
is being set automatically by GDB.
GDB supports C, C++, D, Go, Objective-C, Fortran, Java,
OpenCL C, Pascal, assembly, Modula-2, and Ada.
Some GDB features may be used in expressions regardless of the
language you use: the GDB @ and :: operators,
and the `{type}addr' construct (see section Expressions) can be used with the constructs of any supported
language.
The following sections detail to what degree each source language is supported by GDB. These sections are not meant to be language tutorials or references, but serve only as a reference guide to what the GDB expression parser accepts, and what input and output formats should look like for different languages. There are many good books written on each of these languages; please look to these for a language reference or tutorial.
15.4.1 C and C++ 15.4.2 D 15.4.3 Go 15.4.4 Objective-C 15.4.5 OpenCL C 15.4.6 Fortran 15.4.7 Pascal 15.4.8 Modula-2 15.4.9 Ada
Since C and C++ are so closely related, many features of GDB apply to both languages. Whenever this is the case, we discuss those languages together.
The C++ debugging facilities are jointly implemented by the C++
compiler and GDB. Therefore, to debug your C++ code
effectively, you must compile your C++ programs with a supported
C++ compiler, such as GNU g++, or the HP ANSI C++
compiler (aCC).
15.4.1.1 C and C++ Operators C and C++ operators 15.4.1.2 C and C++ Constants C and C++ constants 15.4.1.3 C++ Expressions C++ expressions 15.4.1.4 C and C++ Defaults Default settings for C and C++ 15.4.1.5 C and C++ Type and Range Checks C and C++ type and range checks 15.4.1.6 GDB and C 15.4.1.7 GDB Features for C++ GDB features for C++ 15.4.1.8 Decimal Floating Point format Numbers in Decimal Floating Point format
Operators must be defined on values of specific types. For instance,
+ is defined on numbers, but not on structures. Operators are
often defined on groups of types.
For the purposes of C and C++, the following definitions hold:
Integral types include int with any of its storage-class
specifiers; char; enum; and, for C++, bool.
Floating-point types include float, double, and
long double (if supported by the target platform).
Pointer types include all types defined as (type *).
Scalar types include all of the above.
The following operators are supported. They are listed here in order of increasing precedence:
,
The comma or sequencing operator. Expressions in a comma-separated
list are evaluated from left to right, with the result of the entire
expression being the last expression evaluated.
=
Assignment. The value of an assignment expression is the value
assigned. Defined on scalar types.
op=
Used in an expression of the form a op= b,
and translated to a = a op b.
op= and = have the same precedence.
op is any one of the operators |, ^, &,
<<, >>, +, -, *, /, %.
?:
The ternary operator. a ? b : c can be thought
of as: if a then b else c. a should be of an
integral type.
||
Logical OR. Defined on integral types.
&&
Logical AND. Defined on integral types.
|
Bitwise OR. Defined on integral types.
^
Bitwise exclusive-OR. Defined on integral types.
&
Bitwise AND. Defined on integral types.
==, !=
Equality and inequality. Defined on scalar types. The value of these
expressions is 0 for false and non-zero for true.
<, >, <=, >=
Less than, greater than, less than or equal, greater than or equal.
Defined on scalar types. The value of these expressions is 0 for
false and non-zero for true.
<<, >>
left shift, and right shift. Defined on integral types.
@
The GDB "artificial array" operator (see section Expressions).
+, -
Addition and subtraction. Defined on integral types, floating-point
types and pointer types.
*, /, %
Multiplication, division, and modulus. Multiplication and division
are defined on integral and floating-point types. Modulus is defined
on integral types.
++, --
Increment and decrement. When appearing before a variable, the
operation is performed before the variable is used in an expression;
when appearing after it, the variable's value is used before the
operation takes place.
*
Pointer dereferencing. Defined on pointer types. Same precedence as
++.
&
Address operator. Defined on variables. Same precedence as
++.
For debugging C++, GDB implements a use of `&' beyond what is allowed in the C++ language itself: you can use `&(&ref)' to examine the address where a C++ reference variable (declared with `&ref') is stored.
-
Negative. Defined on integral and floating-point types. Same
precedence as ++.
!
Logical negation. Defined on integral types. Same precedence as
++.
~
Bitwise complement operator. Defined on integral types. Same
precedence as ++.
., ->
Structure member, and pointer-to-structure member. For convenience,
GDB regards the two as equivalent, choosing whether to dereference a
pointer based on the stored type information. Defined on
struct and union data.
.*, ->*
Dereferences of pointers to members.
[]
Array indexing. a[i] is defined as
*(a+i). Same precedence as ->.
()
Function parameter list. Same precedence as ->.
::
C++ scope resolution operator. Defined on struct, union,
and class types.
::
Doubled colons also represent the GDB scope operator (see section Expressions). Same precedence as ::,
above.
If an operator is redefined in the user code, GDB usually attempts to invoke the redefined version instead of using the operator's predefined meaning.
GDB allows you to express the constants of C and C++ in the following ways:
Integer constants are a sequence of the decimal digits, preceded by
`0x' or `0X' when specifying hexadecimal constants, or by
`0' when specifying octal constants. Constants may also end
with a letter `l', specifying that the constant should be
treated as a long value.
Floating point constants are a sequence of decimal digits, followed by
a decimal point, followed by a sequence of digits, and optionally
followed by an exponent. Floating point constants may also end with a
letter `f' or `F', specifying that the constant should be
treated as being of the
float (as opposed to the default double) type; or with
a letter `l' or `L', which specifies a long double
constant.
Enumerated constants consist of enumerated identifiers, or their
integral equivalents.
Character constants are a single character surrounded by single quotes
('), or a number--the ordinal value of the corresponding character
(usually its ASCII value). Within quotes, the single character may
be represented by a letter or by escape sequences, which are of
the form `\nnn', where nnn is the octal representation
of the character's ordinal value; or of the form `\x', where
`x' is a predefined special character--for example,
`\n' for newline.
Wide character constants can be written by prefixing a character constant with `L', as in C. For example, `L'x'' is the wide form of `x'. The target wide character set is used when computing the value of this constant (see section 10.20 Character Sets).
"). Any valid character constant (as described
above) may appear. Double quotes within the string must be preceded by
a backslash, so for instance `"a\"b'c"' is a string of five
characters.
Wide string constants can be written by prefixing a string constant with `L', as in C. The target wide character set is used when computing the value of this constant (see section 10.20 Character Sets).
GDB expression handling can interpret most C++ expressions.
Warning: GDB can only debug C++ code if you use the proper compiler and the proper debug format. Currently, GDB works best when debugging C++ code that is compiled with the most recent version of GCC possible. The DWARF debugging format is preferred; GCC defaults to this on most popular platforms. Other compilers and/or debug formats are likely to work badly or not at all when using GDB to debug C++ code. See section 4.1 Compiling for Debugging.
Member function calls are allowed; you can use expressions like
count = aml->GetOriginal(x, y) |
GDB allows implicit references to the class instance pointer this following the same rules as C++. using declarations in the current scope are also respected by GDB.
GDB does perform integral conversions and promotions, floating-point promotions, arithmetic conversions, pointer conversions, conversions of class objects to base classes, and standard conversions such as those of functions or arrays to pointers; it requires an exact match on the number of function arguments.
Overload resolution is always performed, unless you have specified
set overload-resolution off. See section GDB Features for C++.
You must specify set overload-resolution off in order to use an
explicit function signature to call an overloaded function, as in
p 'foo(char,int)'('x', 13)
|
The command-completion facility can simplify this; see Command Completion.
In the parameter list shown when GDB displays a frame, the values of reference variables are not displayed (unlike other variables); this avoids clutter, since references are often used for large structures. The address of a reference variable is always shown, unless you have specified `set print address off'.
GDB supports the C++ name resolution operator ::---your expressions can use it just as expressions in your program do. Since one scope may be defined in another, you can use :: repeatedly if necessary, for example in an expression like `scope1::scope2::name'. GDB also allows resolving name scope by reference to source files, in both C and C++ debugging (see section Program Variables).
If you allow GDB to set range checking automatically, it
defaults to off whenever the working language changes to
C or C++. This happens regardless of whether you or GDB
selects the working language.
If you allow GDB to set the language automatically, it recognizes source files whose names end with `.c', `.C', or `.cc', etc., and when GDB enters code compiled from one of these files, it sets the working language to C or C++. See section Having GDB Infer the Source Language, for further details.
By default, when GDB parses C or C++ expressions, strict type checking is used. However, if you turn type checking off, GDB will allow certain non-standard conversions, such as promoting integer constants to pointers.
Range checking, if turned on, is done on mathematical operations. Array indices are not checked, since they are often used to index a pointer that is not itself an array.
The set print union and show print union commands apply to
the union type. When set to `on', any union that is
inside a struct or class is also printed. Otherwise, it
appears as `{...}'.
The @ operator aids in the debugging of dynamic arrays, formed
with pointers and a memory allocation function. See section Expressions.
Some GDB commands are particularly useful with C++, and some are designed specifically for use with C++. Here is a summary:
breakpoint menus
rbreak regex
catch throw
catch catch
ptype typename
info vtbl expression
The info vtbl command can be used to display the virtual
method tables of the object computed by expression. This shows
one entry per virtual table; there may be multiple virtual tables when
multiple inheritance is in use.
set print demangle
show print demangle
set print asm-demangle
show print asm-demangle
set print object
show print object
set print vtbl
show print vtbl
(The vtbl commands do not work on programs compiled with the HP
ANSI C++ compiler (aCC).)
set overload-resolution on
set overload-resolution off
show overload-resolution
Overloaded symbol names
You can specify a particular definition of an overloaded symbol, using the same notation that is used to declare such symbols in C++: type symbol(types) rather than just symbol. You can also use the GDB command-line word completion facilities to list the available choices, or to finish the type list for you. See section Command Completion, for details on how to do this.
GDB can examine, set and perform computations with numbers in
decimal floating point format, which in the C language correspond to the
_Decimal32, _Decimal64 and _Decimal128 types as
specified by the extension to support decimal floating-point arithmetic.
There are two encodings in use, depending on the architecture: BID (Binary Integer Decimal) for x86 and x86-64, and DPD (Densely Packed Decimal) for PowerPC. GDB will use the appropriate encoding for the configured target.
Because of a limitation in `libdecnumber', the library used by GDB to manipulate decimal floating point numbers, it is not possible to convert (using a cast, for example) integers wider than 32-bit to decimal float.
In addition, in order to imitate GDB's behaviour with binary floating point computations, error checking in decimal float operations ignores underflow, overflow and divide by zero exceptions.
In the PowerPC architecture, GDB provides a set of pseudo-registers
to inspect _Decimal128 values stored in floating point registers.
See PowerPC for more details.
GDB can be used to debug programs written in D and compiled with GDC, LDC or DMD compilers. Currently GDB supports only one D-specific feature -- dynamic arrays.
GDB can be used to debug programs written in Go and compiled with `gccgo' or `6g' compilers.
Here is a summary of the Go-specific features and restrictions:
The current Go package
The name of the current package does not need to be specified when specifying global variables. For example, given the program:
package main
var myglob = "Shall we?"
func main () {
// ...
}
|
When stopped inside main either of these work:
(gdb) p myglob (gdb) p main.myglob |
Builtin Go types
The string type is recognized by GDB and is printed
as a string.
Builtin Go functions
GDB recognizes the unsafe.Sizeof
function and handles it internally.
Restrictions on Go expressions
All Go operators are supported except &^.
The Go _ "blank identifier" is not supported.
Automatic dereferencing of pointers is not supported.
This section provides information about some commands and command options that are useful for debugging Objective-C code. See also info classes, and info selectors, for a few more commands specific to Objective-C support.
15.4.4.1 Method Names in Commands 15.4.4.2 The Print Command With Objective-C
The following commands have been extended to accept Objective-C method names as line specifications:
clear
break
info line
jump
list
A fully qualified Objective-C method name is specified as
-[Class methodName] |
where the minus sign is used to indicate an instance method and a
plus sign (not shown) is used to indicate a class method. The class
name Class and method name methodName are enclosed in
brackets, similar to the way messages are specified in Objective-C
source code. For example, to set a breakpoint at the create
instance method of class Fruit in the program currently being
debugged, enter:
break -[Fruit create] |
To list ten program lines around the initialize class method,
enter:
list +[NSText initialize] |
In the current version of GDB, the plus or minus sign is required. In future versions of GDB, the plus or minus sign will be optional, but you can use it to narrow the search. It is also possible to specify just a method name:
break create |
You must specify the complete method name, including any colons. If
your program's source files contain more than one create method,
you'll be presented with a numbered list of classes that implement that
method. Indicate your choice by number, or type `0' to exit if
none apply.
As another example, to clear a breakpoint established at the
makeKeyAndOrderFront: method of the NSWindow class, enter:
clear -[NSWindow makeKeyAndOrderFront:] |
The print command has also been extended to accept methods. For example:
print -[object hash] |
will tell GDB to send the hash message to object
and print the result. Also, an additional command has been added,
print-object or po for short, which is meant to print
the description of an object. However, this command may only work
with certain Objective-C libraries that have a particular hook
function, _NSPrintForDebugger, defined.
This section provides information about GDB's OpenCL C support.
15.4.5.1 OpenCL C Datatypes 15.4.5.2 OpenCL C Expressions 15.4.5.3 OpenCL C Operators
GDB supports the builtin scalar and vector datatypes specified
by OpenCL 1.1. In addition the half- and double-precision floating point
data types of the cl_khr_fp16 and cl_khr_fp64 OpenCL
extensions are also known to GDB.
GDB supports accesses to vector components including the access as lvalue where possible. Since OpenCL C is based on C99 most C expressions supported by GDB can be used as well.
GDB supports the operators specified by OpenCL 1.1 for scalar and vector data types.
GDB can be used to debug programs written in Fortran, but it currently supports only the features of the Fortran 77 language.
Some Fortran compilers (GNU Fortran 77 and Fortran 95 compilers among them) append an underscore to the names of variables and functions. When you debug programs compiled by those compilers, you will need to refer to variables and functions with a trailing underscore.
15.4.6.1 Fortran Operators and Expressions Fortran operators and expressions 15.4.6.2 Fortran Defaults Default settings for Fortran 15.4.6.3 Special Fortran Commands Special commands for Fortran
Operators must be defined on values of specific types. For instance,
+ is defined on numbers, but not on characters or other non-arithmetic
types. Operators are often defined on groups of types.
**
The exponentiation operator. It raises the first operand to the power of the second one.
:
The range operator. Normally used in the form of array(low:high) to represent a section of an array.
%
The access component operator. Normally used to access elements in derived types.
Fortran symbols are usually case-insensitive, so by default GDB uses case-insensitive matches for Fortran symbols. You can change that with the `set case-sensitive' command, see 16. Examining the Symbol Table, for the details.
GDB has some commands to support Fortran-specific features, such as displaying common blocks.
info common [common-name]
This command prints the values contained in the Fortran COMMON
block whose name is common-name. With no argument, the names of
all COMMON blocks visible at the current program location are
printed.
Debugging Pascal programs which use sets, subranges, file variables, or nested functions does not currently work. GDB does not support entering expressions, printing values, or similar features using Pascal syntax.
The Pascal-specific command set print pascal_static-members
controls whether static members of Pascal objects are displayed.
See section pascal_static-members.
The extensions made to GDB to support Modula-2 only support output from the GNU Modula-2 compiler (which is currently being developed). Other Modula-2 compilers are not currently supported, and attempting to debug executables produced by them is most likely to give an error as GDB reads in the executable's symbol table.
15.4.8.1 Operators Built-in operators 15.4.8.2 Built-in Functions and Procedures Built-in functions and procedures 15.4.8.3 Constants Modula-2 constants 15.4.8.4 Modula-2 Types Modula-2 types 15.4.8.5 Modula-2 Defaults Default settings for Modula-2 15.4.8.6 Deviations from Standard Modula-2 Deviations from standard Modula-2 15.4.8.7 Modula-2 Type and Range Checks Modula-2 type and range checks 15.4.8.8 The Scope Operators :: and . The scope operators :: and . 15.4.8.9 GDB and Modula-2
Operators must be defined on values of specific types. For instance,
+ is defined on numbers, but not on structures. Operators are
often defined on groups of types. For the purposes of Modula-2, the
following definitions hold:
Integral types consist of INTEGER, CARDINAL, and
their subranges.
Character types consist of CHAR and its subranges.
Floating-point types consist of REAL.
Pointer types consist of anything declared as POINTER TO
type.
Scalar types consist of all of the above.
Set types consist of SET and BITSET types.
Boolean types consist of BOOLEAN.
The following operators are supported, and appear in order of increasing precedence:
,
Function argument or array index separator.
:=
Assignment. The value of var := value is
value.
<, >
Less than, greater than on integral, floating-point, or enumerated types.
<=, >=
Less than or equal to, greater than or equal to on integral, floating-point and enumerated types, or set inclusion on set types. Same precedence as <.
=, <>, #
Equality and two ways of expressing inequality, valid on scalar types. Same precedence as <. In GDB scripts, only <> is
available for inequality, since # conflicts with the script
comment character.
IN
Set membership. Defined on set types and the types of their members. Same precedence as <.
OR
Boolean disjunction. Defined on boolean types.
AND, &
Boolean conjunction. Defined on boolean types.
@
The GDB "artificial array" operator (see section Expressions).
+, -
Addition and subtraction on integral and floating-point types, or union and difference on set types.
*
Multiplication on integral and floating-point types, or set intersection on set types.
/
Division on floating-point types, or symmetric set difference on set types. Same precedence as *.
DIV, MOD
Integer division and remainder. Defined on integral types. Same precedence as *.
-
Negative. Defined on INTEGER and REAL data.
^
Pointer dereferencing. Defined on pointer types.
NOT
Boolean negation. Defined on boolean types. Same precedence as ^.
.
RECORD field selector. Defined on RECORD data. Same
precedence as ^.
[]
Array indexing. Defined on ARRAY data. Same precedence as ^.
()
Procedure argument list. Defined on PROCEDURE objects. Same precedence
as ^.
::, .
GDB and Modula-2 scope operators.
Warning: Set expressions and their operations are not yet supported, so GDB treats the use of the operator IN, or the use of operators +, -, *, /, =, <>, #, <=, and >= on sets as an error.
Modula-2 also makes available several built-in procedures and functions. In describing these, the following metavariables are used:
a represents an ARRAY variable.
c represents a CHAR constant or variable.
i represents a variable or constant of integral type.
m represents an identifier that belongs to a set. Generally used in the same function with the metavariable s. The type of s should be SET OF mtype (where mtype is the type of m).
n represents a variable or constant of integral or floating-point type.
r represents a variable or constant of floating-point type.
t represents a type.
v represents a variable.
x represents a variable or constant of one of many types. See the explanation of the function for details.
All Modula-2 built-in procedures also return a result, described below.
ABS(n)
Returns the absolute value of n.
CAP(c)
If c is a lower case letter, it returns its upper case equivalent, otherwise it returns its argument.
CHR(i)
Returns the character whose ordinal value is i.
DEC(v)
Decrements the value in the variable v by one. Returns the new value.
DEC(v,i)
Decrements the value in the variable v by i. Returns the new value.
EXCL(m,s)
Removes the element m from the set s. Returns the new set.
FLOAT(i)
Returns the floating point equivalent of the integer i.
HIGH(a)
Returns the index of the last member of a.
INC(v)
Increments the value in the variable v by one. Returns the new value.
INC(v,i)
Increments the value in the variable v by i. Returns the new value.
INCL(m,s)
Adds the element m to the set s if it is not already there. Returns the new set.
MAX(t)
Returns the maximum value of the type t.
MIN(t)
Returns the minimum value of the type t.
ODD(i)
Returns boolean TRUE if i is an odd number.
ORD(x)
Returns the ordinal value of its argument. For example, the ordinal value of a character is its ASCII value (on machines supporting the ASCII character set). The argument x must be of an ordered type.
SIZE(x)
Returns the size of its argument. The argument x can be a variable or a type.
TRUNC(r)
Returns the integral part of r.
TSIZE(x)
Returns the size of its argument. The argument x can be a variable or a type.
VAL(t,i)
Returns the member of the type t whose ordinal value is i.
Warning: Sets and their operations are not yet supported, so GDB treats the use of procedures INCL and EXCL as an error.
GDB allows you to express the constants of Modula-2 in the following ways:
Integer constants are simply a sequence of digits. When used in an expression, a constant is interpreted to be type-compatible with the rest of the expression. Hexadecimal integers are specified by a trailing `H', and octal integers by a trailing `B'.
Floating point constants appear as a sequence of digits, followed by a decimal point and another sequence of digits. An optional exponent can then be specified, in the form `E[+|-]nnn', where `[+|-]nnn' is the desired exponent.
Character constants consist of a single character enclosed by a pair of like quotes, either single (') or double ("). They may
also be expressed by their ordinal value (their ASCII value, usually)
followed by a `C'.
String constants consist of a sequence of characters enclosed by a pair of like quotes, either single (') or double (").
Escape sequences in the style of C are also allowed. See section C and C++ Constants, for a brief explanation of escape
sequences.
Boolean constants consist of the identifiers TRUE and
FALSE.
Currently GDB can print the following data types in Modula-2 syntax: array types, record types, set types, pointer types, procedure types, enumerated types, subrange types and base types. You can also print the contents of variables declared using these types. This section gives a number of simple source code examples together with sample GDB sessions.
The first example contains the following section of code:
VAR s: SET OF CHAR ; r: [20..40] ; |
and you can request GDB to interrogate the type and value of
r and s.
(gdb) print s
{'A'..'C', 'Z'}
(gdb) ptype s
type = SET OF CHAR
(gdb) print r
21
(gdb) ptype r
type = [20..40]
|
Likewise if your source code declares s as:
VAR s: SET ['A'..'Z'] ; |
then you may query the type of s by:
(gdb) ptype s type = SET ['A'..'Z'] |
Note that at present you cannot interactively manipulate set expressions using the debugger.
The following example shows how you might declare an array in Modula-2 and how you can interact with GDB to print its type and contents:
VAR s: ARRAY [-10..10] OF CHAR ; |
(gdb) ptype s type = ARRAY [-10..10] OF CHAR |
Note that the array handling is not yet complete and although the type
is printed correctly, expression handling still assumes that all
arrays have a lower bound of zero and not -10 as in the example
above.
Here are some more type related Modula-2 examples:
TYPE colour = (blue, red, yellow, green) ; t = [blue..yellow] ; VAR s: t ; BEGIN s := blue ; |
The interaction shows how you can query the data type and value of a variable.
(gdb) print s $1 = blue (gdb) ptype t type = [blue..yellow] |
In this example a Modula-2 array is declared and its contents
displayed. Observe that the contents are written in the same way as
their C counterparts.
VAR s: ARRAY [1..5] OF CARDINAL ; BEGIN s[1] := 1 ; |
(gdb) print s
$1 = {1, 0, 0, 0, 0}
(gdb) ptype s
type = ARRAY [1..5] OF CARDINAL
|
The Modula-2 language interface to GDB also understands pointer types as shown in this example:
VAR s: POINTER TO ARRAY [1..5] OF CARDINAL ; BEGIN NEW(s) ; s^[1] := 1 ; |
and you can request that GDB describes the type of s.
(gdb) ptype s type = POINTER TO ARRAY [1..5] OF CARDINAL |
GDB handles compound types as we can see in this example. Here we combine array types, record types, pointer types and subrange types:
TYPE
foo = RECORD
f1: CARDINAL ;
f2: CHAR ;
f3: myarray ;
END ;
myarray = ARRAY myrange OF CARDINAL ;
myrange = [-2..2] ;
VAR
s: POINTER TO ARRAY myrange OF foo ;
|
and you can ask GDB to describe the type of s as shown
below.
(gdb) ptype s
type = POINTER TO ARRAY [-2..2] OF foo = RECORD
f1 : CARDINAL;
f2 : CHAR;
f3 : ARRAY [-2..2] OF CARDINAL;
END
|
If type and range checking are set automatically by GDB, they
both default to on whenever the working language changes to
Modula-2. This happens regardless of whether you or GDB
selected the working language.
If you allow GDB to set the language automatically, then entering code compiled from a file whose name ends with `.mod' sets the working language to Modula-2. See section Having GDB Infer the Source Language, for further details.
A few changes have been made to GDB to make Modula-2 programs easier to debug. This is done primarily via loosening its type strictness:
Unlike in standard Modula-2, pointer constants can be formed by integers. This allows you to modify pointer variables during debugging.
C escape sequences can be used in strings and characters to represent non-printable characters. GDB prints out strings with these escape sequences embedded.
The assignment operator (:=) returns the value of its right-hand
argument.
All built-in procedures both modify and return their argument.
Warning: in this release, GDB does not yet perform type or range checking.
GDB considers two Modula-2 variables type equivalent if:
They are of types that have been declared equivalent via a TYPE
t1 = t2 statement
They have been declared on the same line. (Note: This is true of the GNU Modula-2 compiler, but it may not be true of other compilers.)
As long as type checking is enabled, any attempt to combine variables whose types are not equivalent is an error.
Range checking is done on all mathematical operations, assignment, array index bounds, and all built-in functions and procedures.
:: and .
There are a few subtle differences between the Modula-2 scope operator
(.) and the GDB scope operator (::). The two have
similar syntax:
module . id scope :: id |
where scope is the name of a module or a procedure, module the name of a module, and id is any declared identifier within your program, except another module.
Using the :: operator makes GDB search the scope
specified by scope for the identifier id. If it is not
found in the specified scope, then GDB searches all scopes
enclosing the one specified by scope.
Using the . operator makes GDB search the current scope for
the identifier specified by id that was imported from the
definition module specified by module. With this operator, it is
an error if the identifier id was not imported from definition
module module, or if id is not an identifier in
module.
Some GDB commands have little use when debugging Modula-2 programs.
Five subcommands of set print and show print apply
specifically to C and C++: `vtbl', `demangle',
`asm-demangle', `object', and `union'. The first four
apply to C++, and the last to the C union type, which has no direct
analogue in Modula-2.
The @ operator (see section Expressions), while available
with any language, is not useful with Modula-2. Its
intent is to aid the debugging of dynamic arrays, which cannot be
created in Modula-2 as they can in C or C++. However, because an
address can be specified by an integral constant, the construct
`{type}adrexp' is still useful.
In GDB scripts, the Modula-2 inequality operator # is
interpreted as the beginning of a comment. Use <> instead.
The extensions made to GDB for Ada only support output from the GNU Ada (GNAT) compiler. Other Ada compilers are not currently supported, and attempting to debug executables produced by them is most likely to be difficult.
15.4.9.1 Introduction General remarks on the Ada syntax and semantics supported by Ada mode in GDB. 15.4.9.2 Omissions from Ada Restrictions on the Ada expression syntax. 15.4.9.3 Additions to Ada Extensions of the Ada expression syntax. 15.4.9.4 Stopping at the Very Beginning Debugging the program during elaboration. 15.4.9.5 Extensions for Ada Tasks Listing and setting breakpoints in tasks. 15.4.9.6 Tasking Support when Debugging Core Files 15.4.9.7 Tasking Support when using the Ravenscar Profile 15.4.9.8 Known Peculiarities of Ada Mode Known peculiarities of Ada mode.
The Ada mode of GDB supports a fairly large subset of Ada expression syntax, with some extensions. The philosophy behind the design of this subset is that convenience when debugging outweighs strict adherence to Ada visibility rules.
Thus, for brevity, the debugger acts as if all names declared in user-written packages are directly visible, even if they are not visible according to Ada rules, thus making it unnecessary to fully qualify most names with their packages, regardless of context. Where this causes ambiguity, GDB asks the user's intent.
The debugger will start in Ada mode if it detects an Ada main program. As for other languages, it will enter Ada mode when stopped in a program that was translated from an Ada source file.
While in Ada mode, you may use `--' for comments. This is useful mostly for documenting command files. The standard comment (`#') still works at the beginning of a line in Ada mode, but not in the middle (to allow based literals).
The debugger supports limited overloading. Given a subprogram call in which
the function symbol has multiple definitions, it will use the number of
actual parameters and some information about their types to attempt to narrow
the set of definitions. It also makes very limited use of context, preferring
procedures to functions in the context of the call command, and
functions to procedures elsewhere.
Here are the notable omissions from the subset:
The membership (in) operator is not implemented.
The names in Characters.Latin_1 are not available and
concatenation is not implemented. Thus, escape characters in strings are
not currently available.
The component-wise operators (and, or,
xor, not, and relational tests other than equality)
are not implemented.
(gdb) set An_Array := (1, 2, 3, 4, 5, 6) (gdb) set An_Array := (1, others => 0) (gdb) set An_Array := (0|4 => 1, 1..3 => 2, 5 => 6) (gdb) set A_2D_Array := ((1, 2, 3), (4, 5, 6), (7, 8, 9)) (gdb) set A_Record := (1, "Peter", True); (gdb) set A_Record := (Name => "Peter", Id => 1, Alive => True) |
Changing a
discriminant's value by assigning an aggregate has an
undefined effect if that discriminant is used within the record.
However, you can first modify discriminants by directly assigning to
them (which normally would not be allowed in Ada), and then performing an
aggregate assignment. For example, given a variable A_Rec
declared to have a type such as:
type Rec (Len : Small_Integer := 0) is record
Id : Integer;
Vals : IntArray (1 .. Len);
end record;
|
you can assign a value with a different size of Vals with two
assignments:
(gdb) set A_Rec.Len := 4 (gdb) set A_Rec := (Id => 42, Vals => (1, 2, 3, 4)) |
As this example also illustrates, GDB is very loose about the usual
rules concerning aggregates. You may leave out some of the
components of an array or record aggregate (such as the Len
component in the assignment to A_Rec above); they will retain their
original values upon assignment. You may freely use dynamic values as
indices in component associations. You may even use overlapping or
redundant component associations, although which component values are
assigned in such cases is not defined.
The new operator is not implemented.
The names True and False, when not part of a qualified name,
are interpreted as if implicitly prefixed by Standard, regardless of
context.
Should your program
redefine these names in a package or procedure (at best a dubious practice),
you will have to use fully qualified names to access their new definitions.
As it does for other languages, GDB makes certain generic extensions to Ada (see section 10.1 Expressions):
E@N displays the values of E and the
N-1 adjacent variables following it in memory as an array. In
Ada, this operator is generally not necessary, since its prime use is
in displaying parts of an array, and slicing will usually do this in
Ada. However, there are occasional uses when debugging programs in
which certain debugging information has been optimized away.
B::var means "the variable named var that
appears in function or file B." When B is a file name,
you must typically surround it in single quotes.
{type} addr means "the variable of type
type that appears at address addr."
In addition, GDB provides a few other shortcuts and outright additions specific to Ada:
(gdb) set x := y + 3 (gdb) print A(tmp := y + 1) |
(gdb) break f (gdb) condition 1 (report(i); k += 1; A(k) > 100) |
"One line.["0a"]Next line.["0a"]" |
Ada.Characters.Latin_1.LF)
after each period.
(gdb) print 'max(x, y) |
(3 => 10, 17, 1) |
That is, in contrast to valid Ada, only the first component has a =>
clause.
(gdb) print <JMPBUF_SAVE>[0] |
It is sometimes necessary to debug the program during elaboration, and
before reaching the main procedure.
As defined in the Ada Reference
Manual, the elaboration code is invoked from a procedure called
adainit. To run your program up to the beginning of
elaboration, simply use the following two commands:
tbreak adainit and run.
Support for Ada tasks is analogous to that for threads (see section 4.10 Debugging Programs with Multiple Threads). GDB provides the following task-related commands:
info tasks
(gdb) info tasks ID TID P-ID Pri State Name 1 8088000 0 15 Child Activation Wait main_task 2 80a4000 1 15 Accept Statement b 3 809a800 1 15 Child Activation Wait a * 4 80ae800 3 15 Runnable c |
In this listing, the asterisk before the last task indicates it to be the task currently being inspected.
Unactivated
Runnable
Terminated
Child Activation Wait
Accept Statement
Waiting on entry call
Async Select Wait
Delay Sleep
Child Termination Wait
Wait Child in Term Alt
Accepting RV with taskno
info task taskno
(gdb) info tasks ID TID P-ID Pri State Name 1 8077880 0 15 Child Activation Wait main_task * 2 807c468 1 15 Runnable task_1 (gdb) info task 2 Ada Task: 0x807c468 Name: task_1 Thread: 0x807f378 Parent: 1 (main_task) Base Priority: 15 State: Runnable |
task
(gdb) info tasks ID TID P-ID Pri State Name 1 8077870 0 15 Child Activation Wait main_task * 2 807c458 1 15 Runnable t (gdb) task [Current task is 2] |
task taskno
This command is like the thread threadno
command (see section 4.10 Debugging Programs with Multiple Threads). It switches the context of debugging
from the current task to the given task.
(gdb) info tasks ID TID P-ID Pri State Name 1 8077870 0 15 Child Activation Wait main_task * 2 807c458 1 15 Runnable t (gdb) task 1 [Switching to task 1] #0 0x8067726 in pthread_cond_wait () (gdb) bt #0 0x8067726 in pthread_cond_wait () #1 0x8056714 in system.os_interface.pthread_cond_wait () #2 0x805cb63 in system.task_primitives.operations.sleep () #3 0x806153e in system.tasking.stages.activate_tasks () #4 0x804aacc in un () at un.adb:5 |
break linespec task taskno
break linespec task taskno if ...
These commands are like the break ... thread ...
command (see section 5.5 Stopping and Starting Multi-thread Programs).
linespec specifies source lines, as described
in 9.2 Specifying a Location.
Use the qualifier `task taskno' with a breakpoint command to specify that you only want to stop the program when a particular Ada task reaches this breakpoint. taskno is one of the numeric task identifiers assigned by GDB, shown in the first column of the `info tasks' display.
If you do not specify `task taskno' when you set a breakpoint, the breakpoint applies to all tasks of your program.
You can use the task qualifier on conditional breakpoints as
well; in this case, place `task taskno' before the
breakpoint condition (before the if).
For example,
(gdb) info tasks ID TID P-ID Pri State Name 1 140022020 0 15 Child Activation Wait main_task 2 140045060 1 15 Accept/Select Wait t2 3 140044840 1 15 Runnable t1 * 4 140056040 1 15 Runnable t3 (gdb) b 15 task 2 Breakpoint 5 at 0x120044cb0: file test_task_debug.adb, line 15. (gdb) cont Continuing. task # 1 running task # 2 running Breakpoint 5, test_task_debug () at test_task_debug.adb:15 15 flush; (gdb) info tasks ID TID P-ID Pri State Name 1 140022020 0 15 Child Activation Wait main_task * 2 140045060 1 15 Runnable t2 3 140044840 1 15 Runnable t1 4 140056040 1 15 Delay Sleep t3 |
When inspecting a core file, as opposed to debugging a live program, tasking support may be limited or even unavailable, depending on the platform being used. For instance, on x86-linux, the list of tasks is available, but task switching is not supported. On Tru64, however, task switching will work as usual.
On certain platforms, including Tru64, the debugger needs to perform some memory writes in order to provide Ada tasking support. When inspecting a core file, this means that the core file must be opened with read-write privileges, using the command `set write on' (see section 17.6 Patching Programs). Under these circumstances, you should make a backup copy of the core file before inspecting it with GDB.
The Ravenscar Profile is a subset of the Ada tasking features, specifically designed for systems with safety-critical real-time requirements.
set ravenscar task-switching on
set ravenscar task-switching off
show ravenscar task-switching
Besides the omissions listed previously (see section 15.4.9.2 Omissions from Ada), we know of several problems with and limitations of Ada mode in GDB, some of which will be fixed with planned future releases of the debugger and the GNU Ada compiler.
The GNAT compiler never generates the prefix Standard for any of
the standard symbols defined by the Ada language. GDB knows about
this: it will strip the prefix from names when you use it, and will never
look for a name you have so qualified among local symbols, nor match against
symbols in other packages or subprograms. If you have
defined entities anywhere in your program other than parameters and
local variables whose simple names match names in Standard,
GNAT's lack of qualification here can cause confusion. When this happens,
you can usually resolve the confusion
by qualifying the problematic names with package
Standard explicitly.
Older versions of the compiler sometimes generate erroneous debugging information, resulting in the debugger incorrectly printing the value of affected entities. In some cases, the debugger is able to work around an issue automatically. In other cases, the debugger is able to work around the issue, but the work-around has to be specifically enabled.
set ada trust-PAD-over-XVS on
Configure GDB to strictly follow the GNAT encoding when computing the
value of Ada entities, particularly when PAD and PAD___XVS
types are involved (see ada/exp_dbug.ads in the GCC sources for
a complete description of the encoding used by the GNAT compiler).
This is the default.
set ada trust-PAD-over-XVS off
If GDB sometimes prints the wrong value for certain entities, setting
ada trust-PAD-over-XVS to off activates a work-around which may fix
the issue. It is always safe to set ada trust-PAD-over-XVS to
off, but this incurs a slight performance penalty, so it is
recommended to leave this setting to on unless necessary.
In addition to the other fully-supported programming languages,
GDB also provides a pseudo-language, called minimal.
It does not represent a real programming language, but provides a set
of capabilities close to what the C or assembly languages provide.
This should allow most simple operations to be performed while debugging
an application that uses a language currently not supported by GDB.
If the language is set to auto, GDB will automatically
select this language if the current frame corresponds to an unsupported
language.
The commands described in this chapter allow you to inquire about the symbols (names of variables, functions and types) defined in your program. This information is inherent in the text of your program and does not change as your program executes. GDB finds it in your program's symbol table, in the file indicated when you started GDB (see section Choosing Files), or by one of the file-management commands (see section Commands to Specify Files).
Occasionally, you may need to refer to symbols that contain unusual characters, which GDB ordinarily treats as word delimiters. The most frequent case is in referring to static variables in other source files (see section Program Variables). File names are recorded in object files as debugging symbols, but GDB would ordinarily parse a typical file name, like `foo.c', as the three words `foo', `.', and `c'. To allow GDB to recognize `foo.c' as a single symbol, enclose it in single quotes; for example,
p 'foo.c'::x
looks up the value of x in the scope of the file `foo.c'.
set case-sensitive on
set case-sensitive off
set case-sensitive auto
Normally, when GDB looks up symbols, it matches their names with case
sensitivity determined by the current source language. Occasionally,
you may wish to control that. The command set
case-sensitive lets you do that by specifying on for
case-sensitive matches or off for case-insensitive ones. If
you specify auto, case sensitivity is reset to the default
suitable for the source language. The default is case-sensitive
matches for all languages except for Fortran, for which the default is
case-insensitive matches.
show case-sensitive
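For example, a hypothetical session (the variable name is invented) might look like this:

```
(gdb) show case-sensitive
Case sensitivity in name search is "auto; currently on".
(gdb) set case-sensitive off
(gdb) print somevar
```

With case sensitivity off, `print SomeVar' would match the same symbol as `print somevar'.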
set print type methods
set print type methods on
set print type methods off
Normally, when GDB prints a class, it displays any methods declared
in that class. You can control this behavior either by
passing the appropriate flag to ptype, or using set
print type methods. Specifying on will cause GDB to
display the methods; this is the default. Specifying off will
cause GDB to omit the methods.
show print type methods
set print type typedefs
set print type typedefs on
set print type typedefs off
Normally, when GDB prints a class, it displays any typedefs
defined in that class. You can control this behavior either by
passing the appropriate flag to ptype, or using set
print type typedefs. Specifying on will cause GDB to
display the typedef definitions; this is the default. Specifying
off will cause GDB to omit the typedef definitions.
Note that this controls whether the typedef definition itself is
printed, not whether typedef names are substituted when printing other
types.
show print type typedefs
info address symbol
Note the contrast with `print &symbol', which does not work at all for a register variable, and for a stack local variable prints the exact address of the current instantiation of the variable.
info symbol addr
(gdb) info symbol 0x54320
_initialize_vx + 396 in section .text
This is the opposite of the info address command. You can use
it to find out the name of a variable or a function given its address.
For dynamically linked executables, the name of the executable or shared library containing the symbol is also printed:
(gdb) info symbol 0x400225
_start + 5 in section .text of /tmp/a.out
(gdb) info symbol 0x2aaaac2811cf
__read_nocancel + 6 in section .text of /usr/lib64/libc.so.6
whatis[/flags] [arg]
Print the data type of arg, which can be either an expression
or a name of a data type. With no argument, print the data type of
$, the last value in the value history.
If arg is an expression (see section Expressions), it is not actually evaluated, and any side-effecting operations (such as assignments or function calls) inside it do not take place.
If arg is a variable or an expression, whatis prints its
literal type as it is used in the source code. If the type was
defined using a typedef, whatis will not print
the data type underlying the typedef. If the type of the
variable or the expression is a compound data type, such as
struct or class, whatis never prints their
fields or methods. It just prints the struct/class
name (a.k.a. its tag). If you want to see the members of
such a compound data type, use ptype.
If arg is a type name that was defined using typedef,
whatis unrolls only one level of that typedef.
Unrolling means that whatis will show the underlying type used
in the typedef declaration of arg. However, if that
underlying type is also a typedef, whatis will not
unroll it.
For C code, the type names may also have the form `class class-name', `struct struct-tag', `union union-tag' or `enum enum-tag'.
flags can be used to modify how the type is displayed. Available flags are:
r
Display in "raw" form. Normally, GDB substitutes template
parameters and typedefs defined in a class when printing the class'
name. The /r flag disables this.
m
Do not print methods defined in the class.
M
Print methods defined in the class. This is the default, unless
overridden by set print type methods.
t
Do not print typedefs defined in the class.
T
Print typedefs defined in the class. This is the default, unless
overridden by set print type typedefs.
ptype[/flags] [arg]
ptype accepts the same arguments as whatis, but prints a
detailed description of the type, instead of just the name of the type.
See section Expressions.
Contrary to whatis, ptype always unrolls any
typedefs in its argument declaration, whether the argument is
a variable, an expression, or a data type. This means that ptype
of a variable or an expression will not print its type literally as it
appears in the source code--use whatis for that. Typedefs at
the pointer or reference targets are also unrolled. Only typedefs of
fields, methods and inner class typedefs of structs,
classes and unions are not unrolled even with ptype.
For example, for this variable declaration:
typedef double real_t;
struct complex { real_t real; double imag; };
typedef struct complex complex_t;
complex_t var;
real_t *real_pointer_var;
the two commands give this output:
(gdb) whatis var
type = complex_t
(gdb) ptype var
type = struct complex {
    real_t real;
    double imag;
}
(gdb) whatis complex_t
type = struct complex
(gdb) whatis struct complex
type = struct complex
(gdb) ptype struct complex
type = struct complex {
    real_t real;
    double imag;
}
(gdb) whatis real_pointer_var
type = real_t *
(gdb) ptype real_pointer_var
type = double *
As with whatis, using ptype without an argument refers to
the type of $, the last value in the value history.
Sometimes, programs use opaque data types or incomplete specifications of complex data structures. If the debug information included in the program does not allow GDB to display a full declaration of the data type, it will say `<incomplete type>'. For example, given these declarations:
struct foo;
struct foo *fooptr;
but no definition for struct foo itself, GDB will say:
(gdb) ptype foo
$1 = <incomplete type>
"Incomplete type" is C terminology for data types that are not completely specified.
info types regexp
info types
Print a brief description of all types whose names match the regular
expression regexp (or all types in your program, if you supply
no argument). Thus, `i type value' gives information on all
types in your program whose names include the string value, but
`i type ^value$' gives information only on types whose complete
name is value.
This command differs from ptype in two ways: first, like
whatis, it does not print a detailed description; second, it
lists all source files where a type is defined.
info type-printers
Versions of GDB that ship with Python scripting enabled may
have "type printers" available. When using ptype or
whatis, these printers are consulted when the name of a type
is needed. See section 23.2.2.8 Type Printing API, for more information on writing
type printers.
info type-printers displays all the available type printers.
enable type-printer name...
disable type-printer name...
These commands can be used to enable or disable type printers.
info scope location
(gdb) info scope command_line_handler
Scope for command_line_handler:
Symbol rl is an argument at stack/frame offset 8, length 4.
Symbol linebuffer is in static storage at address 0x150a18, length 4.
Symbol linelength is in static storage at address 0x150a1c, length 4.
Symbol p is a local variable in register $esi, length 4.
Symbol p1 is a local variable in register $ebx, length 4.
Symbol nline is a local variable in register $edx, length 4.
Symbol repeat is a local variable at frame offset -8, length 4.
This command is especially useful for determining what data to collect during a trace experiment, see collect.
info source
info sources
info functions
info functions regexp
Print the names and data types of all defined functions
whose names contain a match for regular expression regexp.
Thus, `info fun step' finds all functions whose names include
step; `info fun ^step' finds those whose names
start with step. If a function name contains characters
that conflict with the regular expression language (e.g.
`operator*()'), they may be quoted with a backslash.
info variables
info variables regexp
info classes
info classes regexp
info selectors
info selectors regexp
set opaque-type-resolution on
Tell GDB to resolve opaque types. An opaque type is a type
declared as a pointer to a struct, class, or
union---for example, struct MyType *---that is used in one
source file although the full declaration of struct MyType is in
another source file. The default is on.
A change in the setting of this subcommand will not take effect until the next time symbols for a file are loaded.
set opaque-type-resolution off
Tell GDB not to resolve opaque types. In this case, the type is printed
as follows:
{<no data fields>}
show opaque-type-resolution
maint print symbols filename
maint print psymbols filename
maint print msymbols filename
You can use the command info sources to find out which files these are. If you
use `maint print psymbols' instead, the dump shows information about
symbols that GDB only knows partially--that is, symbols defined in
files that GDB has skimmed, but not yet read completely. Finally,
`maint print msymbols' dumps just the minimal symbol information
required for each object file from which GDB has read some symbols.
See section Commands to Specify Files, for a discussion of how
GDB reads symbols (in the description of symbol-file).
maint info symtabs [ regexp ]
maint info psymtabs [ regexp ]
List the struct symtab or struct partial_symtab
structures whose names match regexp. If regexp is not
given, list them all. The output includes expressions which you can
copy into a GDB debugging this one to examine a particular
structure in more detail. For example:
(gdb) maint info psymtabs dwarf2read
{ objfile /home/gnu/build/gdb/gdb
((struct objfile *) 0x82e69d0)
{ psymtab /home/gnu/src/gdb/dwarf2read.c
((struct partial_symtab *) 0x8474b10)
readin no
fullname (null)
text addresses 0x814d3c8 -- 0x8158074
globals (* (struct partial_symbol **) 0x8507a08 @ 9)
statics (* (struct partial_symbol **) 0x40e95b78 @ 2882)
dependencies (none)
}
}
(gdb) maint info symtabs
(gdb)
(gdb) break dwarf2_psymtab_to_symtab
Breakpoint 1 at 0x814e5da: file /home/gnu/src/gdb/dwarf2read.c,
line 1574.
(gdb) maint info symtabs
{ objfile /home/gnu/build/gdb/gdb
((struct objfile *) 0x82e69d0)
{ symtab /home/gnu/src/gdb/dwarf2read.c
((struct symtab *) 0x86c1f38)
dirname (null)
fullname (null)
blockvector ((struct blockvector *) 0x86c1bd0) (primary)
linetable ((struct linetable *) 0x8370fa0)
debugformat DWARF 2
}
}
(gdb)
Once you think you have found an error in your program, you might want to find out for certain whether correcting the apparent error would lead to correct results in the rest of the run. You can find the answer by experiment, using the GDB features for altering execution of the program.
For example, you can store new values into variables or memory locations, give your program a signal, restart it at a different address, or even return prematurely from a function.
17.1 Assignment to Variables  Assignment to variables
17.2 Continuing at a Different Address  Continuing at a different address
17.3 Giving your Program a Signal  Giving your program a signal
17.4 Returning from a Function  Returning from a function
17.5 Calling Program Functions  Calling your program's functions
17.6 Patching Programs  Patching your program
To alter the value of a variable, evaluate an assignment expression. See section Expressions. For example,
print x=4
stores the value 4 into the variable x, and then prints the
value of the assignment expression (which is 4).
See section Using GDB with Different Languages, for more
information on operators in supported languages.
If you are not interested in seeing the value of the assignment, use the
set command instead of the print command. set is
really the same as print except that the expression's value is
not printed and is not put in the value history (see section Value History). The expression is evaluated only for its effects.
If the beginning of the argument string of the set command
appears identical to a set subcommand, use the set
variable command instead of just set. This command is identical
to set except for its lack of subcommands. For example, if your
program has a variable width, you get an error if you try to set
a new value with just `set width=13', because GDB has the
command set width:
(gdb) whatis width
type = double
(gdb) p width
$4 = 13
(gdb) set width=47
Invalid syntax in expression.
The invalid expression, of course, is `=47'. In
order to actually set the program's variable width, use
(gdb) set var width=47
Because the set command has many subcommands that can conflict
with the names of program variables, it is a good idea to use the
set variable command instead of just set. For example, if
your program has a variable g, you run into problems if you try
to set a new value with just `set g=4', because GDB has
the command set gnutarget, abbreviated set g:
(gdb) whatis g
type = double
(gdb) p g
$1 = 1
(gdb) set g=4
(gdb) p g
$2 = 1
(gdb) r
The program being debugged has been started already.
Start it from the beginning? (y or n) y
Starting program: /home/smith/cc_progs/a.out
"/home/smith/cc_progs/a.out": can't open to read symbols:
Invalid bfd target.
(gdb) show g
The current BFD target is "=4".
The program variable g did not change, and you silently set the
gnutarget to an invalid value. In order to set the variable
g, use
(gdb) set var g=4
GDB allows more implicit conversions in assignments than C; you can freely store an integer value into a pointer variable or vice versa, and you can convert any structure to any other structure that is the same length or shorter.
To store values into arbitrary places in memory, use the `{...}'
construct to generate a value of specified type at a specified address
(see section Expressions). For example, {int}0x83040 refers
to memory location 0x83040 as an integer (which implies a certain size
and representation in memory), and
set {int}0x83040 = 4
stores the value 4 into that memory location.
Ordinarily, when you continue your program, you do so at the place where
it stopped, with the continue command. You can instead continue at
an address of your own choosing, with the following commands:
jump linespec
j linespec
jump location
j location
Resume execution at line linespec or at the address given by
location. Execution stops again immediately if there is a
breakpoint there. It is common practice to use the
tbreak command in conjunction with
jump. See section Setting Breakpoints.
The jump command does not change the current stack frame, or
the stack pointer, or the contents of any memory location or any
register other than the program counter. If line linespec is in
a different function from the one currently executing, the results may
be bizarre if the two functions expect different patterns of arguments or
of local variables. For this reason, the jump command requests
confirmation if the specified line is not in the function currently
executing. However, even bizarre results are predictable if you are
well acquainted with the machine-language code of your program.
On many systems, you can get much the same effect as the jump
command by storing a new value into the register $pc. The
difference is that this does not start your program running; it only
changes the address of where it will run when you continue. For
example,
set $pc = 0x485
makes the next continue command or stepping command execute at
address 0x485, rather than at the address where your program stopped.
See section Continuing and Stepping.
The most common occasion to use the jump command is to back
up--perhaps with more breakpoints set--over a portion of a program
that has already executed, in order to examine its execution in more
detail.
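A typical back-up sequence, sketched as a hypothetical session (the line numbers and address are invented):

```
(gdb) tbreak 42
Temporary breakpoint 2 at 0x8048456: file example.c, line 42.
(gdb) jump 37
Continuing at 0x8048432.
```

The temporary breakpoint guarantees execution stops again at the point of interest, after re-running the lines in between.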
signal signal
Resume execution where your program is stopped, but immediately give it the
signal signal. The signal can be the name or the number of a
signal. For example, on many systems signal 2 and signal
SIGINT are both ways of sending an interrupt signal.
Alternatively, if signal is zero, continue execution without
giving a signal. This is useful when your program stopped on account of
a signal and would ordinarily see the signal when resumed with the
continue command; `signal 0' causes it to resume without a
signal.
signal does not repeat when you press RET a second time
after executing the command.
Invoking the signal command is not the same as invoking the
kill utility from the shell. Sending a signal with kill
causes GDB to decide what to do with the signal depending on
the signal handling tables (see section 5.4 Signals). The signal command
passes the signal directly to your program.
return
return expression
You can cancel execution of a function call with the return
command. If you give an
expression argument, its value is used as the function's return
value.
When you use return, GDB discards the selected stack frame
(and all frames within it). You can think of this as making the
discarded frame return prematurely. If you wish to specify a value to
be returned, give that value as the argument to return.
This pops the selected stack frame (see section Selecting a Frame), and any other frames inside of it, leaving its caller as the innermost remaining frame. That frame becomes selected. The specified value is stored in the registers used for returning values of functions.
The return command does not resume execution; it leaves the
program stopped in the state that would exist if the function had just
returned. In contrast, the finish command (see section Continuing and Stepping) resumes execution until the
selected stack frame returns naturally.
GDB needs to know how the expression argument should be set for
the inferior. The concrete register assignment depends on the OS ABI and the
type being returned by the selected stack frame. For example, it is common for
an OS ABI to return floating-point values in FPU registers and integer values
in CPU registers; still, some ABIs return even floating-point values in CPU
registers. Larger integer widths (such as long long int) also have
specific placement rules. GDB already knows the OS ABI from its
current target, so it also needs to find out the type being returned to make the
assignment into the right register(s).
Normally, the selected stack frame has debug info. GDB will always
use the debug info instead of the implicit type of expression when the
debug info is available. For example, if you type return -1, and the
function in the current stack frame is declared to return a long long
int, GDB transparently converts the implicit int value of -1
into a long long int:
Breakpoint 1, func () at gdb.base/return-nodebug.c:29
29 return 31;
(gdb) return -1
Make func return now? (y or n) y
#0 0x004004f6 in main () at gdb.base/return-nodebug.c:43
43 printf ("result=%lld\n", func ());
(gdb)
However, if the selected stack frame does not have debug info, e.g., if the
function was compiled without debug info, GDB has to find out the type
to return from the user. Specifying a different type by mistake may set the value
in different inferior registers than the caller code expects. For example,
typing return -1 with its implicit type int would set only a part
of a long long int result for a function without debug info (on 32-bit
architectures). Therefore the user is required to specify the return type by
an appropriate cast explicitly:
Breakpoint 2, 0x0040050b in func ()
(gdb) return -1
Return value type not available for selected stack frame.
Please use an explicit cast of the value to return.
(gdb) return (long long int) -1
Make selected stack frame return now? (y or n) y
#0  0x00400526 in main ()
(gdb)
print expr
call expr
Evaluate the expression expr without displaying void
returned values.
You can use this variant of the print command if you want to
execute a function from your program that does not return anything
(a.k.a. a void function), but without cluttering the output
with void returned values that GDB will otherwise
print. If the result is not void, it is printed and saved in the
value history.
It is possible for the function you call via the print or
call command to generate a signal (e.g., if there's a bug in
the function, or if you passed it incorrect arguments). What happens
in that case is controlled by the set unwindonsignal command.
Similarly, with a C++ program it is possible for the function you
call via the print or call command to generate an
exception that is not handled due to the constraints of the dummy
frame. In this case, any exception that is raised in the frame, but has
an out-of-frame exception handler will not be found. GDB builds a
dummy-frame for the inferior function call, and the unwinder cannot
seek for exception handlers outside of this dummy-frame. What happens
in that case is controlled by the
set unwind-on-terminating-exception command.
set unwindonsignal
show unwindonsignal
set unwind-on-terminating-exception
show unwind-on-terminating-exception
Sometimes, a function you wish to call is actually a weak alias for another function. In such a case, GDB might not pick up the type information, including the types of the function arguments, which causes GDB to call the inferior function incorrectly. As a result, the called function will function erroneously and may even crash. A solution to that is to use the name of the aliased function instead.
By default, GDB opens the file containing your program's executable code (or the corefile) read-only. This prevents accidental alterations to machine code; but it also prevents you from intentionally patching your program's binary.
If you'd like to be able to patch the binary, you can specify that
explicitly with the set write command. For example, you might
want to turn on internal debugging flags, or even to make emergency
repairs.
set write on
set write off
If you have already loaded a file, you must load it again (using the
exec-file or core-file command) after changing set
write, for your new setting to take effect.
show write
GDB needs to know the file name of the program to be debugged, both in order to read its symbol table and in order to start your program. To debug a core dump of a previous run, you must also tell GDB the name of the core dump file.
18.1 Commands to Specify Files  Commands to specify files
18.2 Debugging Information in Separate Files  Debugging information in separate files
18.3 Debugging information in a special section
18.4 Index Files Speed Up GDB  Index files speed up GDB
18.5 Errors Reading Symbol Files  Errors reading symbol files
18.6 GDB Data Files  GDB data files
You may want to specify executable and core dump file names. The usual way to do this is at start-up time, using the arguments to GDB's start-up commands (see section Getting In and Out of GDB).
Occasionally it is necessary to change to a different file during a
GDB session. Or you may run GDB and forget to
specify a file you want to use. Or you are debugging a remote target
via gdbserver (see section Using the gdbserver Program). In these situations the commands to specify
new files are useful.
file filename
Use filename as the program to be debugged. It is read for its
symbols and for the contents of pure memory. It is also the program
executed when you use the run command. If you do not specify a
directory and the file is not found in the GDB working directory,
GDB uses the environment variable PATH as a list of
directories to search, just as the shell does when looking for a program
to run. You can change the value of this variable, for both GDB
and your program, using the path command.
You can load unlinked object `.o' files into GDB using
the file command. You will not be able to "run" an object
file, but you can disassemble functions and inspect variables. Also,
if the underlying BFD functionality supports it, you could use
gdb -write to patch object files using this technique. Note
that GDB can neither interpret nor modify relocations in this
case, so branches and some initialized variables will appear to go to
the wrong place. But this feature is still handy from time to time.
file
file with no argument makes GDB discard any information it
has on both the executable file and the symbol table.
exec-file [ filename ]
Specify that the program to be run (but not the symbol table) is found
in filename. GDB searches the environment variable PATH
if necessary to locate your program. Omitting filename means to
discard information on the executable file.
symbol-file [ filename ]
Read symbol table information from file filename. The
environment variable PATH is
searched when necessary. Use the file command to get both symbol
table and program to run from the same file.
symbol-file with no argument clears out information on your
program's symbol table.
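The division of labor among these three commands can be sketched as a hypothetical session (file name invented):

```
(gdb) exec-file ./a.out
(gdb) symbol-file ./a.out
Reading symbols from ./a.out...done.
(gdb) file ./a.out
Reading symbols from ./a.out...done.
```

exec-file sets only the program image to run, symbol-file reads only the symbol table, and file does both at once.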
The symbol-file command causes GDB to forget the contents of
some breakpoints and auto-display expressions. This is because they may
contain pointers to the internal data recording symbols and data types,
which are part of the old symbol table data being discarded inside
GDB.
symbol-file does not repeat if you press RET again after
executing it once.
When GDB is configured for a particular environment, it
understands debugging information in whatever format is the standard
generated for that environment; you may use either a GNU compiler, or
other compilers that adhere to the local conventions.
Best results are usually obtained from GNU compilers; for example,
using GCC you can generate debugging information for
optimized code.
For most kinds of object files, with the exception of old SVR3 systems
using COFF, the symbol-file command does not normally read the
symbol table in full right away. Instead, it scans the symbol table
quickly to find which source files and which symbols are present. The
details are read later, one source file at a time, as they are needed.
The purpose of this two-stage reading strategy is to make GDB
start up faster. For the most part, it is invisible except for
occasional pauses while the symbol table details for a particular source
file are being read. (The set verbose command can turn these
pauses into messages if desired. See section Optional Warnings and Messages.)
We have not implemented the two-stage strategy for COFF yet. When the
symbol table is stored in COFF format, symbol-file reads the
symbol table data in full right away. Note that "stabs-in-COFF"
still does the two-stage strategy, since the debug info is actually
in stabs format.
symbol-file [ -readnow ] filename
file [ -readnow ] filename
You can override the GDB two-stage strategy for reading symbol
tables by using the `-readnow' option with any of the commands that
load symbol table information, if you want to be sure GDB has the
entire symbol table available.
core-file [filename]
core
core-file with no argument specifies that no core file is
to be used.
Note that the core file is ignored when your program is actually running
under GDB. So, if you have been running your program and you
wish to debug a core file instead, you must kill the subprocess in which
the program is running. To do this, use the kill command
(see section Killing the Child Process).
add-symbol-file filename address
add-symbol-file filename address [ -readnow ]
add-symbol-file filename address -s section address ...
The add-symbol-file command reads additional symbol table
information from the file filename. You would use this command
when filename has been dynamically loaded (by some other means)
into the program that is running. The address should be the memory
address at which the file has been loaded; GDB cannot figure
this out for itself. You can additionally specify an arbitrary number
of `-s section address' pairs, to give an explicit
section name and base address for that section. You can specify any
address as an expression.
The symbol table of the file filename is added to the symbol table
originally read with the symbol-file command. You can use the
add-symbol-file command any number of times; the new symbol data
thus read keeps adding to the old. To discard all old symbol data
instead, use the symbol-file command without any arguments.
Although filename is typically a shared library file, an executable file, or some other object file which has been fully relocated for loading into a process, you can also load symbolic information from relocatable `.o' files, as long as:
- the file's symbolic information refers only to linker symbols defined in that file, not to symbols defined by other object files,
- every section the file's symbolic information refers to has actually been loaded into the inferior, as it appears in the file, and
- you can determine the address at which every section was loaded, and provide these to the add-symbol-file command.
Some embedded operating systems, like Sun Chorus and VxWorks, can load
relocatable files into an already running program; such systems
typically make the requirements above easy to meet. However, it's
important to recognize that many native systems use complex link
procedures (.linkonce section factoring and C++ constructor table
assembly, for example) that make the requirements difficult to meet. In
general, one cannot assume that using add-symbol-file to read a
relocatable object file's symbolic information will have the same effect
as linking the relocatable object file into the program in the normal
way.
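A hypothetical session loading symbols for a dynamically loaded module might look like this (file name, section names, and addresses are all invented):

```
(gdb) add-symbol-file ./module.o 0x10010000 -s .data 0x10020000
add symbol table from file "./module.o" at
        .text_addr = 0x10010000
        .data_addr = 0x10020000
(y or n) y
Reading symbols from ./module.o...done.
```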
add-symbol-file does not repeat if you press RET after using it.
add-symbol-file-from-memory address
Load symbols from the given address in a dynamically loaded
object file whose image is mapped directly into the inferior's memory.
For example, the Linux kernel maps a syscall DSO into each
process's address space; this DSO provides kernel-specific code for
some system calls. The argument can be any expression whose
evaluation yields the address of the file's shared object file header.
For this command to work, you must have used symbol-file or
exec-file commands in advance.
add-shared-symbol-files library-file
assf library-file
The add-shared-symbol-files command can currently be used only
in the Cygwin build of GDB on MS-Windows OS, where it is an
alias for the dll-symbols command (see section 21.1.5 Features for Debugging MS Windows PE Executables).
GDB automatically looks for shared libraries; however, if
GDB does not find yours, you can invoke
add-shared-symbol-files. It takes one argument: the shared
library's file name. assf is a shorthand alias for
add-shared-symbol-files.
section section addr
The section command changes the base address of the named
section of the exec file to addr. This can be used if the
exec file does not contain section addresses (such as in the
a.out format), or when the addresses specified in the file
itself are wrong. Each section must be changed separately. The
info files command, described below, lists all the sections and
their addresses.
info files
info target
info files and info target are synonymous; both print the
current target (see section Specifying a Debugging Target),
including the names of the executable and core dump files currently in
use by GDB, and the files from which symbols were loaded. The
command help target lists all possible targets rather than
current ones.
maint info sections
Another command that can give you extra information about program sections
is maint info sections. In addition to the section information
displayed by info files, this command displays the flags and file
offset of each section in the executable and core dump files. In addition,
maint info sections provides the following command options (which
may be arbitrarily combined):
ALLOBJ
sections
section-flags
ALLOC
LOAD
.bss sections.
RELOC
READONLY
CODE
DATA
ROM
CONSTRUCTOR
HAS_CONTENTS
NEVER_LOAD
COFF_SHARED_LIBRARY
IS_COMMON
set trust-readonly-sections on
Tell GDB that readonly sections in your object file
really are read-only (i.e. that their contents will not change).
In that case, GDB can fetch values from these sections
out of the object file, rather than from the target program.
The default is off.
set trust-readonly-sections off
show trust-readonly-sections
All file-specifying commands allow both absolute and relative file names as arguments. GDB always converts the file name to an absolute file name and remembers it that way.
GDB supports GNU/Linux, MS-Windows, HP-UX, SunOS, SVr4, Irix, and IBM RS/6000 AIX shared libraries.
On MS-Windows GDB must be linked with the Expat library to support shared libraries. See Expat.
GDB automatically loads symbol definitions from shared libraries
when you use the run command, or when you examine a core file.
(Before you issue the run command, GDB does not understand
references to a function in a shared library, however--unless you are
debugging a core file).
On HP-UX, if the program loads a library explicitly, GDB
automatically loads the symbols at the time of the shl_load call.
There are times, however, when you may wish to not automatically load symbol definitions from shared libraries, such as when they are particularly large or there are many of them.
To control the automatic loading of shared library symbols, use the commands:
set auto-solib-add mode
If mode is on, symbols from all shared object libraries
will be loaded automatically when the inferior begins execution, you
attach to an independently started inferior, or when the dynamic linker
informs GDB that a new library has been loaded. If mode
is off, symbols must be loaded manually, using the
sharedlibrary command. The default value is on.
If your program uses lots of shared libraries with debug info that takes large amounts of memory, you can decrease the memory footprint by preventing GDB from automatically loading the symbols from shared libraries. To that end, type set auto-solib-add off before running the inferior, then load each library whose debug symbols you do need with sharedlibrary regexp, where regexp is a regular expression that matches the libraries whose symbols you want to be loaded.
show auto-solib-add
To explicitly load shared library symbols, use the sharedlibrary
command:
info share regex
info sharedlibrary regex
sharedlibrary regex
share regex
Load shared object library symbols for files matching the Unix
regular expression regex. If regex is omitted, all shared
libraries required by your program are loaded.
nosharedlibrary
Sometimes you may wish that GDB stops and gives you control
when any shared library event happens. The best way to do this is
to use catch load and catch unload (see section 5.1.3 Setting Catchpoints).
GDB also supports the set stop-on-solib-events
command for this. This command exists for historical reasons. It is
less useful than setting a catchpoint, because it does not allow for
conditions or commands as a catchpoint does.
set stop-on-solib-events
show stop-on-solib-events
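As a sketch, catchpoints on load and unload events for one library (the library name is hypothetical; both commands accept a regular expression):

```
(gdb) catch load libfoo
(gdb) catch unload libfoo
```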
Shared libraries are also supported in many cross or remote debugging configurations. GDB needs to have access to the target's libraries; this can be accomplished either by providing copies of the libraries on the host system, or by asking GDB to automatically retrieve the libraries from the target. If copies of the target libraries are provided, they need to be the same as the target libraries, although the copies on the target can be stripped as long as the copies on the host are not.
For remote debugging, you need to tell GDB where the target libraries are, so that it can load the correct copies--otherwise, it may try to load the host's libraries. GDB has two variables to specify the search directories for target libraries.
set sysroot path
If you use set sysroot to find shared
libraries, they need to be laid out in the same way that they are on
the target, with e.g. a `/lib' and `/usr/lib' hierarchy
under path.
If path starts with the sequence `remote:', GDB will
retrieve the target libraries from the remote system. This is only
supported when using a remote target that supports the remote get
command (see section Sending files to a remote system).
The part of path following the initial `remote:'
(if present) is used as system root prefix on the remote file system.
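For example, assuming local copies of the target libraries live under a hypothetical `/path/to/sysroot':

```
(gdb) set sysroot /path/to/sysroot
```

Alternatively, set sysroot remote: asks GDB to fetch the libraries from the target system itself, when the remote stub supports the remote get command.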
For targets with an MS-DOS based filesystem, such as MS-Windows and SymbianOS, GDB tries prefixing a few variants of the target absolute file name with path. But first, on Unix hosts, GDB converts all backslash directory separators into forward slashes, because the backslash is not a directory separator on Unix:
c:\foo\bar.dll => c:/foo/bar.dll |
Then, GDB attempts prefixing the target file name with path, and looks for the resulting file name in the host file system:
c:/foo/bar.dll => /path/to/sysroot/c:/foo/bar.dll |
If that does not find the shared library, GDB tries removing the `:' character from the drive spec, both for convenience and for the case of the host file system not supporting file names with colons:
c:/foo/bar.dll => /path/to/sysroot/c/foo/bar.dll |
This makes it possible to have a system root that mirrors a target with more than one drive. E.g., you may want to set up your local copies of the target system shared libraries like so (note `c' vs `z'):
`/path/to/sysroot/c/sys/bin/foo.dll' `/path/to/sysroot/c/sys/bin/bar.dll' `/path/to/sysroot/z/sys/bin/bar.dll' |
and point the system root at `/path/to/sysroot', so that GDB can find the correct copies of both `c:\sys\bin\foo.dll' and `z:\sys\bin\bar.dll'.
If that still does not find the shared library, GDB tries removing the whole drive spec from the target file name:
c:/foo/bar.dll => /path/to/sysroot/foo/bar.dll |
This last lookup makes it possible to not care about the drive name, if you don't want or need to.
The set solib-absolute-prefix command is an alias for set
sysroot.
You can set the default system root by using the configure-time `--with-sysroot' option. If the system root is inside GDB's configured binary prefix (set with `--prefix' or `--exec-prefix'), then the default system root will be updated automatically if the installed GDB is moved to a new location.
show sysroot
set solib-search-path path
show solib-search-path
set target-file-system-kind kind
Shared library file names as reported by the target system may not
make sense as is on the system GDB is running on. For
example, when remote debugging a target that has MS-DOS based file
system semantics, from a Unix host, the target may be reporting to GDB
a list of loaded shared libraries with file names such as
`c:\Windows\kernel32.dll'. On Unix hosts, there's no concept of
drive letters, so the `c:\' prefix is not normally understood as
indicating an absolute file name, and neither is the backslash
normally considered a directory separator character. In that case,
the native file system would interpret this whole absolute file name
as a relative file name with no directory components. This would make
it impossible to point GDB at a copy of the remote target's
shared libraries on the host using set sysroot, and impractical
with set solib-search-path. Setting
target-file-system-kind to dos-based tells GDB
to interpret such file names similarly to how the target would, and to
map them to file names valid on GDB's native file system
semantics. The value of kind can be "auto", in addition
to one of the supported file system kinds. In that case, GDB
tries to determine the appropriate file system variant based on the
current target's operating system (see section Configuring the Current ABI). The supported file system settings are:
unix
dos-based
auto
When processing file names provided by the user, GDB
frequently needs to compare them to the file names recorded in the
program's debug info. Normally, GDB compares just the
base names of the files as strings, which is reasonably fast
even for very large programs. (The base name of a file is the last
portion of its name, after stripping all the leading directories.)
This shortcut in comparison is based upon the assumption that files
cannot have more than one base name. This is usually true, but
references to files that use symlinks or similar filesystem
facilities violate that assumption. If your program records files
using such facilities, or if you provide file names to GDB
using symlinks etc., you can set basenames-may-differ to
true to instruct GDB to completely canonicalize each
pair of file names it needs to compare. This will make file-name
comparisons accurate, but at the price of a significant slowdown.
set basenames-may-differ
show basenames-may-differ
GDB allows you to put a program's debugging information in a file separate from the executable itself, in a way that allows GDB to find and load the debugging information automatically. Since debugging information can be very large--sometimes larger than the executable code itself--some systems distribute debugging information for their executables in separate files, which users can install only when they need to debug a problem.
GDB supports two ways of specifying the separate debug info file: a debug link, which names the separate file, and a build ID, a unique bit string that is also present in the corresponding debug info file.
Depending on the way the debug info file is specified, GDB uses two different methods of looking for the debug file: for the debug link method, it looks for the named file in the directory of the executable, then in a `.debug' subdirectory of that directory, and finally under each global debug directory; for the build ID method, it looks in the `.build-id' subdirectory of each global debug directory.
So, for example, suppose you ask GDB to debug
`/usr/bin/ls', which has a debug link that specifies the
file `ls.debug', and a build ID whose value in hex is
abcdef1234. If the list of the global debug directories includes
`/usr/lib/debug', then GDB will look for the following
debug information files, in the indicated order:
`/usr/lib/debug/.build-id/ab/cdef1234.debug' `/usr/bin/ls.debug' `/usr/bin/.debug/ls.debug' `/usr/lib/debug/usr/bin/ls.debug' |
Global debugging info directories default to what is set by the configure option `--with-separate-debug-dir'. At run time you can also set the global debugging info directories, and view the list GDB is currently using.
set debug-file-directory directories
show debug-file-directory
A debug link is a special section of the executable file named
.gnu_debuglink. The section must contain: a filename, with any
leading directory components removed, followed by a zero byte; zero to
three bytes of padding, as needed to reach the next four-byte boundary
within the section; and a four-byte CRC checksum, stored in the same
endianness used for the executable file itself.
Any executable file format can carry a debug link, as long as it can
contain a section named .gnu_debuglink with the contents
described above.
The build ID is a special section in the executable file (and in other
ELF binary files that GDB may consider). This section is
often named .note.gnu.build-id, but that name is not mandatory.
It contains unique identification for the built files--the ID remains
the same across multiple builds of the same build tree. The default
algorithm SHA1 produces 160 bits (40 hexadecimal characters) of the
content for the build ID string. The same section with an identical
value is present in the original built binary with symbols, in its
stripped variant, and in the separate debugging information file.
The debugging information file itself should be an ordinary
executable, containing a full set of linker symbols, sections, and
debugging information. The sections of the debugging information file
should have the same names, addresses, and sizes as the original file,
but they need not contain any data--much like a .bss section
in an ordinary executable.
The GNU binary utilities (Binutils) package includes the `objcopy' utility that can produce the separated executable / debugging information file pairs using the following commands:
objcopy --only-keep-debug foo foo.debug
strip -g foo
|
These commands remove the debugging information from the executable file `foo' and place it in the file `foo.debug'. You can use the first, second or both methods to link the two files:
objcopy --add-gnu-debuglink=foo.debug foo |
Ulrich Drepper's `elfutils' package, starting with version 0.53, contains
a version of the strip command such that the command strip foo -f
foo.debug has the same functionality as the two objcopy commands and
the ln -s command above, together.
The build ID is embedded into the binary by passing ld --build-id or
the counterpart gcc -Wl,--build-id at link time. Build ID support plus
compatibility fixes for debug files separation are present in GNU binary
utilities (Binutils) package since version 2.18.
The CRC used in .gnu_debuglink is the CRC-32 defined in
IEEE 802.3 using the polynomial:
x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1 |
The function is computed one byte at a time, taking the least
significant bit of each byte first. The initial pattern
0xffffffff is used to ensure leading zeros affect the CRC, and
the final result is inverted to ensure trailing zeros also affect the
CRC.
Note: This is the same CRC polynomial as used in handling the
Remote Serial Protocol qCRC packet (see section Remote Serial Protocol). However, in the
case of the Remote Serial Protocol, the CRC is computed most
significant bit first, and the result is not inverted, so trailing
zeros have no effect on the CRC value.
To complete the description, we show below the code of the function
which produces the CRC used in .gnu_debuglink. Inverting the
initially supplied crc argument means that an initial call to
this function passing in zero will start computing the CRC using
0xffffffff.
unsigned long
gnu_debuglink_crc32 (unsigned long crc,
unsigned char *buf, size_t len)
{
static const unsigned long crc32_table[256] =
{
0x00000000, 0x77073096, 0xee0e612c, 0x990951ba, 0x076dc419,
0x706af48f, 0xe963a535, 0x9e6495a3, 0x0edb8832, 0x79dcb8a4,
0xe0d5e91e, 0x97d2d988, 0x09b64c2b, 0x7eb17cbd, 0xe7b82d07,
0x90bf1d91, 0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de,
0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7, 0x136c9856,
0x646ba8c0, 0xfd62f97a, 0x8a65c9ec, 0x14015c4f, 0x63066cd9,
0xfa0f3d63, 0x8d080df5, 0x3b6e20c8, 0x4c69105e, 0xd56041e4,
0xa2677172, 0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b,
0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6, 0xacbcf940, 0x32d86ce3,
0x45df5c75, 0xdcd60dcf, 0xabd13d59, 0x26d930ac, 0x51de003a,
0xc8d75180, 0xbfd06116, 0x21b4f4b5, 0x56b3c423, 0xcfba9599,
0xb8bda50f, 0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924,
0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d, 0x76dc4190,
0x01db7106, 0x98d220bc, 0xefd5102a, 0x71b18589, 0x06b6b51f,
0x9fbfe4a5, 0xe8b8d433, 0x7807c9a2, 0x0f00f934, 0x9609a88e,
0xe10e9818, 0x7f6a0dbb, 0x086d3d2d, 0x91646c97, 0xe6635c01,
0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e, 0x6c0695ed,
0x1b01a57b, 0x8208f4c1, 0xf50fc457, 0x65b0d9c6, 0x12b7e950,
0x8bbeb8ea, 0xfcb9887c, 0x62dd1ddf, 0x15da2d49, 0x8cd37cf3,
0xfbd44c65, 0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2,
0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb, 0x4369e96a,
0x346ed9fc, 0xad678846, 0xda60b8d0, 0x44042d73, 0x33031de5,
0xaa0a4c5f, 0xdd0d7cc9, 0x5005713c, 0x270241aa, 0xbe0b1010,
0xc90c2086, 0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f,
0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4, 0x59b33d17,
0x2eb40d81, 0xb7bd5c3b, 0xc0ba6cad, 0xedb88320, 0x9abfb3b6,
0x03b6e20c, 0x74b1d29a, 0xead54739, 0x9dd277af, 0x04db2615,
0x73dc1683, 0xe3630b12, 0x94643b84, 0x0d6d6a3e, 0x7a6a5aa8,
0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1, 0xf00f9344,
0x8708a3d2, 0x1e01f268, 0x6906c2fe, 0xf762575d, 0x806567cb,
0x196c3671, 0x6e6b06e7, 0xfed41b76, 0x89d32be0, 0x10da7a5a,
0x67dd4acc, 0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5,
0xd6d6a3e8, 0xa1d1937e, 0x38d8c2c4, 0x4fdff252, 0xd1bb67f1,
0xa6bc5767, 0x3fb506dd, 0x48b2364b, 0xd80d2bda, 0xaf0a1b4c,
0x36034af6, 0x41047a60, 0xdf60efc3, 0xa867df55, 0x316e8eef,
0x4669be79, 0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236,
0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f, 0xc5ba3bbe,
0xb2bd0b28, 0x2bb45a92, 0x5cb36a04, 0xc2d7ffa7, 0xb5d0cf31,
0x2cd99e8b, 0x5bdeae1d, 0x9b64c2b0, 0xec63f226, 0x756aa39c,
0x026d930a, 0x9c0906a9, 0xeb0e363f, 0x72076785, 0x05005713,
0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38, 0x92d28e9b,
0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21, 0x86d3d2d4, 0xf1d4e242,
0x68ddb3f8, 0x1fda836e, 0x81be16cd, 0xf6b9265b, 0x6fb077e1,
0x18b74777, 0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c,
0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45, 0xa00ae278,
0xd70dd2ee, 0x4e048354, 0x3903b3c2, 0xa7672661, 0xd06016f7,
0x4969474d, 0x3e6e77db, 0xaed16a4a, 0xd9d65adc, 0x40df0b66,
0x37d83bf0, 0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9,
0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6, 0xbad03605,
0xcdd70693, 0x54de5729, 0x23d967bf, 0xb3667a2e, 0xc4614ab8,
0x5d681b02, 0x2a6f2b94, 0xb40bbe37, 0xc30c8ea1, 0x5a05df1b,
0x2d02ef8d
};
unsigned char *end;
crc = ~crc & 0xffffffff;
for (end = buf + len; buf < end; ++buf)
crc = crc32_table[(crc ^ *buf) & 0xff] ^ (crc >> 8);
return ~crc & 0xffffffff;
}
|
This computation does not apply to the "build ID" method.
Some systems ship pre-built executables and libraries that have a special `.gnu_debugdata' section. This feature is called MiniDebugInfo. This section holds an LZMA-compressed object and is used to supply extra symbols for backtraces.
The intent of this section is to provide extra minimal debugging information for use in simple backtraces. It is not intended to be a replacement for full separate debugging information (see section 18.2 Debugging Information in Separate Files). The example below shows the intended use; however, GDB does not currently put restrictions on what sort of debugging information might be included in the section.
GDB has support for this extension. If the section exists, then it is used provided that no other source of debugging information can be found, and that GDB was configured with LZMA support.
This section can be easily created using objcopy and other
standard utilities:
# Extract the dynamic symbols from the main binary, there is no need
# to also have these in the normal symbol table
nm -D binary --format=posix --defined-only \
| awk '{ print $1 }' | sort > dynsyms
# Extract all the text (i.e. function) symbols from the debuginfo.
nm binary --format=posix --defined-only \
| awk '{ if ($2 == "T" || $2 == "t") print $1 }' \
| sort > funcsyms
# Keep all the function symbols not already in the dynamic symbol
# table.
comm -13 dynsyms funcsyms > keep_symbols
# Copy the full debuginfo, keeping only a minimal set of symbols and
# removing some unnecessary sections.
objcopy -S --remove-section .gdb_index --remove-section .comment \
--keep-symbols=keep_symbols binary mini_debuginfo
# Inject the compressed data into the .gnu_debugdata section of the
# original binary.
xz mini_debuginfo
objcopy --add-section .gnu_debugdata=mini_debuginfo.xz binary
|
When GDB finds a symbol file, it scans the symbols in the file in order to construct an internal symbol table. This lets most operations work quickly--at the cost of a delay early on. For large programs, this delay can be quite lengthy, so GDB provides a way to build an index, which speeds up startup.
The index is stored as a section in the symbol file. GDB can
write the index to a file, then you can put it into the symbol file
using objcopy.
To create an index file, use the save gdb-index command:
save gdb-index directory
Once you have created an index file you can merge it into your symbol
file, here named `symfile', using objcopy:
$ objcopy --add-section .gdb_index=symfile.gdb-index \
--set-section-flags .gdb_index=readonly symfile symfile
|
GDB will normally ignore older versions of `.gdb_index'
sections that have been deprecated. Usually they are deprecated because
they are missing a new feature or have performance issues.
To tell GDB to use a deprecated index section anyway,
specify set use-deprecated-index-sections on.
The default is off.
This can speed up startup, but may result in some functionality being lost.
See section J. .gdb_index section format.
Warning: Setting use-deprecated-index-sections to on
must be done before gdb reads the file. The following will not work:
$ gdb -ex "set use-deprecated-index-sections on" <program> |
Instead you must do, for example,
$ gdb -iex "set use-deprecated-index-sections on" <program> |
There are currently some limitations on indices. They only work for DWARF debugging information, not stabs. And, they do not currently work for programs using Ada.
While reading a symbol file, GDB occasionally encounters problems,
such as symbol types it does not recognize, or known bugs in compiler
output. By default, GDB does not notify you of such problems, since
they are relatively common and primarily of interest to people
debugging compilers. If you are interested in seeing information
about ill-constructed symbol tables, you can either ask GDB to print
only one message about each such type of problem, no matter how many
times the problem occurs; or you can ask GDB to print more messages,
to see how many times the problems occur, with the set
complaints command (see section Optional Warnings and Messages).
The messages currently printed, and their meanings, include:
inner block not inside outer block in symbol
The symbol information shows where symbol scopes begin and end (such as at the start of a function or a block of statements). This error indicates that an inner scope block is not fully contained in its outer scope blocks.
GDB circumvents the problem by treating the inner block as if it had
the same scope as the outer block. In the error message, symbol
may be shown as "(don't know)" if the outer block is not a
function.
block at address out of order
The symbol information for symbol scope blocks should occur in order of increasing addresses. This error indicates that it does not do so.
GDB does not circumvent this problem, and has trouble
locating symbols in the source file whose symbols it is reading. (You
can often determine what source file is affected by specifying
set verbose on. See section Optional Warnings and Messages.)
bad block start address patched
The symbol information for a symbol scope block has a start address smaller than the address of the preceding source line. This is known to occur in the SunOS 4.1.1 (and earlier) C compiler.
GDB circumvents the problem by treating the symbol scope block as starting on the previous source line.
bad string table offset in symbol n
Symbol number n contains a pointer into the string table which is larger than the size of the string table.
GDB circumvents the problem by considering the symbol to have the
name foo, which may cause other problems if many symbols end up
with this name.
unknown symbol type 0xnn
The symbol information contains new data types that GDB does
not yet know how to read. 0xnn is the symbol type of the
uncomprehended information, in hexadecimal.
GDB circumvents the error by ignoring this symbol information.
This usually allows you to debug your program, though certain symbols
are not accessible. If you encounter such a problem and feel like
debugging it, you can debug GDB with itself, breakpoint
on complain, then go up to the function read_dbx_symtab
and examine *bufp to see the symbol.
stub type has NULL name
GDB could not find the full definition for a struct or class.
const/volatile indicator missing (ok if using g++ v1.x), got...
info mismatch between compiler and debugger
GDB could not parse a type specification output by the compiler.
GDB will sometimes read an auxiliary data file. These files are kept in a directory known as the data directory.
You can set the data directory's name, and view the name GDB is currently using.
set data-directory directory
show data-directory
You can set the default data directory by using the configure-time `--with-gdb-datadir' option. If the data directory is inside GDB's configured binary prefix (set with `--prefix' or `--exec-prefix'), then the default data directory will be updated automatically if the installed GDB is moved to a new location.
The data directory may also be specified with the
--data-directory command line option.
See section 2.1.2 Choosing Modes.
A target is the execution environment occupied by your program.
Often, GDB runs in the same host environment as your program;
in that case, the debugging target is specified as a side effect when
you use the file or core commands. When you need more
flexibility--for example, running GDB on a physically separate
host, or controlling a standalone system over a serial port or a
realtime system over a TCP/IP connection--you can use the target
command to specify one of the target types configured for GDB
(see section Commands for Managing Targets).
It is possible to build GDB for several different target architectures. When GDB is built like that, you can choose one of the available architectures with the set architecture command.
set architecture arch
"auto", in addition to one of the
supported architectures.
show architecture
set processor
processor
These are alias commands for, respectively, set architecture
and show architecture.
19.1 Active Targets Active targets 19.2 Commands for Managing Targets Commands for managing targets 19.3 Choosing Target Byte Order Choosing target byte order
There are multiple classes of targets such as: processes, executable files or
recording sessions. Core files belong to the process class, making core file
and process mutually exclusive. Otherwise, GDB can work concurrently
on multiple active targets, one in each class. This allows you to (for
example) start a process and inspect its activity, while still having access to
the executable file after the process finishes. Or if you start process
recording (see section 6. Running programs backward) and reverse-step there, you are
presented a virtual layer of the recording target, while the process target
remains stopped at the chronologically last point of the process execution.
Use the core-file and exec-file commands to select a new core
file or executable target (see section Commands to Specify Files). To
specify as a target a process that is already running, use the attach
command (see section Debugging an Already-running Process).
target type parameters
Further parameters are interpreted by the target protocol, but typically include things like device names or host names to connect with, process numbers, and baud rates.
The target command does not repeat if you press RET again
after executing the command.
help target
info target or info files
(see section Commands to Specify Files).
help target name
set gnutarget args
You can specify the file format with the set gnutarget command. Unlike most target commands,
with gnutarget the target refers to a program, not a machine.
Warning: To specify a file format with set gnutarget,
you must know the actual BFD name.
See section Commands to Specify Files.
show gnutarget
Use the show gnutarget command to display what file format
gnutarget is set to read. If you have not set gnutarget, GDB
will determine the file format for each file automatically,
and show gnutarget displays `The current BFD target is "auto"'.
Here are some common targets (available, or not, depending on the GDB configuration):
target exec program
target core filename
target remote medium
For example, if you have a board connected to `/dev/ttya' on the machine running GDB, you could say:
target remote /dev/ttya |
target remote supports the load command. This is only
useful if you have some other way of getting the stub to the target
system, and you can put it somewhere in memory where it won't get
clobbered by the download.
target sim [simargs] ...
target sim
load
run
|
Some configurations may include these targets as well:
target nrom dev
Different targets are available on different configurations of GDB; your configuration may have more or fewer targets.
Many remote targets require you to download the executable's code once you've successfully established a connection. You may wish to control various aspects of this process.
set hash
show hash
set debug monitor
show debug monitor
load filename
Depending on what remote debugging facilities are configured into GDB,
the load command may be available. Where it exists, it
is meant to make filename (an executable) available for debugging
on the remote system--by downloading, or dynamic linking, for example.
load also records the filename symbol table in GDB, like
the add-symbol-file command.
If your GDB does not have a load command, attempting to
execute it gets the error message "You can't do that when your
target is ..."
The file is loaded at whatever address is specified in the executable. For some object file formats, you can specify the load address when you link the program; for other formats, like a.out, the object file format specifies a fixed address.
Depending on the remote side capabilities, GDB may be able to load programs into flash memory.
load does not repeat if you press RET again after using it.
Some types of processors, such as the MIPS, PowerPC, and Renesas SH, offer the ability to run either big-endian or little-endian byte orders. Usually the executable or symbol file will include a bit to designate the endian-ness, and you will not need to worry about which to use. However, you may still find it useful to adjust GDB's idea of processor endian-ness manually.
set endian big
set endian little
set endian auto
show endian
Note that these commands merely adjust interpretation of symbolic data on the host, and that they have absolutely no effect on the target system.
If you are trying to debug a program running on a machine that cannot run GDB in the usual way, it is often useful to use remote debugging. For example, you might use remote debugging on an operating system kernel, or on a small system which does not have a general purpose operating system powerful enough to run a full-featured debugger.
Some configurations of GDB have special serial or TCP/IP interfaces to make this work with particular debugging targets. In addition, GDB comes with a generic serial protocol (specific to GDB, but not specific to any particular target system) which you can use if you write the remote stubs--the code that runs on the remote system to communicate with GDB.
Other remote targets may be available in your
configuration of GDB; use help target to list them.
20.1 Connecting to a Remote Target Connecting to a remote target 20.2 Sending files to a remote system 20.3 Using the gdbserver Program Using the gdbserver program 20.4 Remote Configuration Remote configuration 20.5 Implementing a Remote Stub Implementing a remote stub
On the host machine, you will need an unstripped copy of your program, since GDB needs symbol and debugging information. Start up GDB as usual, using the name of the local copy of your program as the first argument.
GDB can communicate with the target over a serial line, or
over an IP network using TCP or UDP. In
each case, GDB uses the same protocol for debugging your
program; only the medium carrying the debugging packets varies. The
target remote command establishes a connection to the target.
Its arguments indicate which medium to use:
target remote serial-device
target remote /dev/ttyb |
If you're using a serial line, you may want to give GDB the
`--baud' option, or use the set remotebaud command
(see section set remotebaud) before the
target command.
target remote host:port
target remote tcp:host:port
For example, to connect to port 2828 on a terminal server named
manyfarms:
target remote manyfarms:2828 |
If your remote target is actually running on the same machine as your debugger session (e.g. a simulator for your target running on the same host), you can omit the hostname. For example, to connect to port 1234 on your local machine:
target remote :1234 |
Note that the colon is still required here.
target remote udp:host:port
For example, to connect to UDP port 2828 on a terminal server named
manyfarms:
target remote udp:manyfarms:2828 |
When using a UDP connection for remote debugging, you should keep in mind that the `U' stands for "Unreliable". UDP can silently drop packets on busy or unreliable networks, which will cause havoc with your debugging session.
target remote | command
Run command in the background and communicate with it using a
pipe. The command is a shell command, to be parsed and expanded
by the system's command shell, /bin/sh; it should expect remote
protocol packets on its standard input, and send replies on its
standard output. You could use this to run a stand-alone simulator
that speaks the remote debugging protocol, to make net connections
using programs like ssh, or for other similar tricks.
If command closes its standard output (perhaps by exiting),
GDB will try to send it a SIGTERM signal. (If the
program has already exited, this will have no effect.)
Once the connection has been established, you can use all the usual commands to examine and change data. The remote program is already running; you can use step and continue, and you do not need to use run.
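For example, a typical session against a locally running stub might begin like this (the port number and breakpoint location are hypothetical):

```
(gdb) target remote :1234
(gdb) break main
(gdb) continue
```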
Whenever GDB is waiting for the remote program, if you type the interrupt character (often Ctrl-c), GDB attempts to stop the program. This may or may not succeed, depending in part on the hardware and the serial drivers the remote system uses. If you type the interrupt character once again, GDB displays this prompt:
Interrupted while waiting for the program. Give up (and stop debugging it)? (y or n) |
If you type y, GDB abandons the remote debugging session. (If you decide you want to try again later, you can use `target remote' again to connect once more.) If you type n, GDB goes back to waiting.
detach
When you have finished debugging the remote program, you can use the
detach command to release it from GDB control.
Detaching from the target normally resumes its execution, but the results
will depend on your particular remote stub. After the detach
command, GDB is free to connect to another target.
disconnect
disconnect command behaves like detach, except that
the target is generally not resumed. It will wait for GDB
(this instance or another one) to connect and continue debugging. After
the disconnect command, GDB is again free to connect to
another target.
monitor cmd
This command allows you to send arbitrary commands directly to the
remote monitor. Since GDB doesn't care about the commands it
sends like this, this command is the way to extend GDB---you
can add new commands that only the external monitor will understand
and implement.
| [ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
20.2 Sending files to a remote system
Some remote targets offer the ability to transfer files over the same
connection used to communicate with GDB. This is convenient
for targets accessible through other means, e.g. GNU/Linux systems
running gdbserver over a network interface. For other targets,
e.g. embedded devices with only a single serial port, this may be
the only way to upload or download files.
Not all remote targets support these commands.
remote put hostfile targetfile
Copy file hostfile from the host system (the machine running GDB) to targetfile on the target system.
remote get targetfile hostfile
Copy file targetfile from the target system to hostfile on the host system.
remote delete targetfile
Delete targetfile from the target system.
20.3 Using the gdbserver Program
gdbserver is a control program for Unix-like systems, which
allows you to connect your program with a remote GDB via
target remote---but without linking in the usual debugging stub.
gdbserver is not a complete replacement for the debugging stubs,
because it requires essentially the same operating-system facilities
that GDB itself does. In fact, a system that can run
gdbserver to connect to a remote GDB could also run
GDB locally! gdbserver is sometimes useful nevertheless,
because it is a much smaller program than GDB itself. It is
also easier to port than all of GDB, so you may be able to get
started more quickly on a new system by using gdbserver.
Finally, if you develop code for real-time systems, you may find that
the tradeoffs involved in real-time operation make it more convenient to
do as much development work as possible on another system, for example
by cross-compiling. You can use gdbserver to make a similar
choice for debugging.
GDB and gdbserver communicate via either a serial line
or a TCP connection, using the standard remote serial
protocol.
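The protocol's framing is simple enough to sketch. As a rough illustration (not GDB or gdbserver source code), each packet is the payload bracketed by `$' and `#', followed by the modulo-256 sum of the payload characters written as two hex digits (see E. Remote Serial Protocol):

```c
#include <stdio.h>

/* Illustrative sketch (not GDB source): frame a remote serial protocol
   packet as $payload#cc, where cc is the modulo-256 sum of the payload
   bytes, written as two hex digits. */
static void frame_packet(const char *payload, char *out, size_t outsize)
{
    unsigned char csum = 0;
    const char *p;

    for (p = payload; *p != '\0'; p++)
        csum += (unsigned char)*p;   /* checksum covers the payload only */
    snprintf(out, outsize, "$%s#%02x", payload, csum);
}
```

For example, the single-character `g' (read registers) request frames as `$g#67', since the ASCII code of g is 0x67.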
Warning: gdbserver does not have any built-in security. Do not run gdbserver connected to any public network; a GDB connection to gdbserver provides access to the target system with the same privileges as the user running gdbserver.
20.3.1 Running gdbserver
Run gdbserver on the target system. You need a copy of the
program you want to debug, including any libraries it requires.
gdbserver does not need your program's symbol table, so you can
strip the program if necessary to save space. GDB on the host
system does all the symbol handling.
To use the server, you must tell it how to communicate with GDB; the name of your program; and the arguments for your program. The usual syntax is:
target> gdbserver comm program [ args ... ]
comm is either a device name (to use a serial line), or a TCP
hostname and portnumber, or - or stdio to use
stdin/stdout of gdbserver.
For example, to debug Emacs with the argument
`foo.txt' and communicate with GDB over the serial port
`/dev/com1':
target> gdbserver /dev/com1 emacs foo.txt
gdbserver waits passively for the host GDB to communicate
with it.
To use a TCP connection instead of a serial line:
target> gdbserver host:2345 emacs foo.txt
The only difference from the previous example is the first argument,
specifying that you are communicating with the host GDB via
TCP. The `host:2345' argument means that gdbserver is to
expect a TCP connection from machine `host' to local TCP port 2345.
(Currently, the `host' part is ignored.) You can choose any number
you want for the port number as long as it does not conflict with any
TCP ports already in use on the target system (for example, 23 is
reserved for telnet).(13) You must use the same port number with the host
GDB target remote command.
The stdio connection is useful when starting gdbserver
with ssh:
(gdb) target remote | ssh -T hostname gdbserver - hello
The `-T' option to ssh is provided because we don't need a remote pty and don't want escape-character handling. ssh does this by default when a command is provided; the flag just makes it explicit, so you could elide it if you wanted to.
Programs started with stdio-connected gdbserver have `/dev/null' for
stdin, and stdout and stderr are sent back to GDB for
display through a pipe connected to gdbserver.
Both stdout and stderr use the same pipe.
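That stream merging can be pictured with a small sketch (illustrative only, not gdbserver's actual code): a launcher that wants a child's stdout and stderr to share one pipe duplicates one descriptor onto the other before handing over control.

```c
#include <unistd.h>

/* Illustrative sketch (not gdbserver source): make stderr an alias of
   stdout, so everything written to either descriptor travels down the
   same pipe.  Returns the new descriptor (STDERR_FILENO) on success or
   -1 on error, exactly as dup2 does. */
static int merge_output_streams(void)
{
    return dup2(STDOUT_FILENO, STDERR_FILENO);
}
```

After this call, a child process inheriting descriptors 1 and 2 cannot tell the two streams apart; both land in the single pipe the parent is reading.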
On some targets, gdbserver can also attach to running programs.
This is accomplished via the --attach argument. The syntax is:
target> gdbserver --attach comm pid
pid is the process ID of a currently running process. It isn't necessary
to point gdbserver at a binary for the running process.
You can debug processes by name instead of process ID if your target has the
pidof utility:
target> gdbserver --attach comm `pidof program`
In case more than one copy of program is running, or program
has multiple threads, most versions of pidof support the
-s option to only return the first process ID.
20.3.2 Multi-Process Mode for gdbserver
When you connect to gdbserver using target remote,
gdbserver debugs the specified program only once. When the
program exits, or you detach from it, GDB closes the connection
and gdbserver exits.
If you connect using target extended-remote, gdbserver
enters multi-process mode. When the debugged program exits, or you
detach from it, GDB stays connected to gdbserver even
though no program is running. The run and attach
commands instruct gdbserver to run or attach to a new program.
The run command uses set remote exec-file (see set remote exec-file) to select the program to run. Command line
arguments are supported, except for wildcard expansion and I/O
redirection (see section 4.3 Your Program's Arguments).
To start gdbserver without supplying an initial command to run
or process ID to attach, use the `--multi' command line option.
Then you can connect using target extended-remote and start
the program you want to debug.
In multi-process mode gdbserver does not automatically exit unless you
use the option `--once'. You can terminate it by using
monitor exit (see Monitor Commands for gdbserver). Note that the
conditions under which gdbserver terminates depend on how GDB
connects to it (target remote or target extended-remote). The
`--multi' option to gdbserver has no influence on that.
20.3.3 TCP port allocation lifecycle of gdbserver
This section applies only when gdbserver is run to listen on a TCP port.
gdbserver normally terminates after all of its debugged processes have
terminated in target remote mode. On the other hand, for target
extended-remote, gdbserver stays running even with no processes left.
GDB normally terminates the spawned debugged process on its exit,
which normally also terminates gdbserver in the target remote
mode. Therefore, when the connection drops unexpectedly, and GDB
cannot ask gdbserver to kill its debugged processes, gdbserver
stays running even in the target remote mode.
When gdbserver stays running, GDB can connect to it again later.
Such reconnecting is useful for features like disconnected tracing. For
completeness, at most one GDB can be connected at a time.
By default, gdbserver keeps the listening TCP port open, so that
additional connections are possible. However, if you start gdbserver
with the `--once' option, it will stop listening for any further
connection attempts after connecting to the first GDB session. This
means no further connections to gdbserver will be possible after the
first one. It also means gdbserver will terminate after the first
connection with remote GDB has closed, even for unexpectedly closed
connections and even in the target extended-remote mode. The
`--once' option allows reusing the same port number for connecting to
multiple instances of gdbserver running on the same host, since each
instance closes its port after the first connection.
20.3.4 Other Command-Line Arguments for gdbserver
The `--debug' option tells gdbserver to display extra
status information about the debugging process.
The `--remote-debug' option tells gdbserver to display
remote protocol debug output. These options are intended for
gdbserver development and for bug reports to the developers.
The `--wrapper' option specifies a wrapper to launch programs for debugging. The option should be followed by the name of the wrapper, then any command-line arguments to pass to the wrapper, then -- indicating the end of the wrapper arguments.
gdbserver runs the specified wrapper program with a combined
command line including the wrapper arguments, then the name of the
program to debug, then any arguments to the program. The wrapper
runs until it executes your program, and then GDB gains control.
You can use any program that eventually calls execve with
its arguments as a wrapper. Several standard Unix utilities do
this, e.g. env and nohup. Any Unix shell script ending
with exec "$@" will also work.
For example, you can use env to pass an environment variable to
the debugged program, without setting the variable in gdbserver's
environment:
$ gdbserver --wrapper env LD_PRELOAD=libtest.so -- :2222 ./testprog
20.3.5 Connecting to gdbserver
Run GDB on the host system.
First make sure you have the necessary symbol files. Load symbols for
your application using the file command before you connect. Use
set sysroot to locate target libraries (unless your GDB
was compiled with the correct sysroot using --with-sysroot).
The symbol file and target libraries must exactly match the executable
and libraries on the target, with one exception: the files on the host
system should not be stripped, even if the files on the target system
are. Mismatched or missing files will lead to confusing results
during debugging. On GNU/Linux targets, mismatched or missing
files may also prevent gdbserver from debugging multi-threaded
programs.
Connect to your target (see section Connecting to a Remote Target).
For TCP connections, you must start up gdbserver prior to using
the target remote command. Otherwise you may get an error whose
text depends on the host system, but which usually looks something like
`Connection refused'. Don't use the load
command in GDB when using gdbserver, since the program is
already on the target.
20.3.6 Monitor Commands for gdbserver
During a GDB session using gdbserver, you can use the
monitor command to send special requests to gdbserver.
Here are the available commands.
monitor help
List the available monitor commands.
monitor set debug 0
monitor set debug 1
Disable or enable general debugging messages.
monitor set remote-debug 0
monitor set remote-debug 1
Disable or enable specific debugging messages associated with the remote
protocol (see section E. Remote Serial Protocol).
monitor set libthread-db-search-path [PATH]
When this command is issued, path is a colon-separated list of
directories to search for libthread_db (see section set libthread-db-search-path). If you omit path,
`libthread-db-search-path' will be reset to its default value.
The special entry `$pdir' for `libthread-db-search-path' is
not supported in gdbserver.
monitor exit
Tell gdbserver to exit immediately. This command should be followed by
disconnect to close the debugging session. gdbserver will
detach from any attached processes and kill any processes it created.
Use monitor exit to terminate gdbserver at the end
of a multi-process mode debug session.
20.3.7 Tracepoints support in gdbserver
On some targets, gdbserver supports tracepoints, fast
tracepoints and static tracepoints.
For fast or static tracepoints to work, a special library called the
in-process agent (IPA) must be loaded in the inferior process.
This library is built and distributed as an integral part of
gdbserver. In addition, support for static tracepoints
requires building the in-process agent library with static tracepoints
support. At present, the UST (LTTng Userspace Tracer,
http://lttng.org/ust) tracing engine is supported. This support
is automatically available if UST development headers are found in the
standard include path when gdbserver is built, or if
gdbserver was explicitly configured using `--with-ust'
to point at such headers. You can explicitly disable the support
using `--with-ust=no'.
There are several ways to load the in-process agent in your program:
Specifying it as dependency at link time
You can link your program dynamically with the in-process agent
library. On most systems, this is accomplished by adding
-linproctrace to the link command.
Using the system's preloading mechanisms
You can force loading the in-process agent at startup time by using
your system's support for preloading shared libraries. Many Unixes
support the concept of preloading user defined libraries. In most
cases, you do that by specifying LD_PRELOAD=libinproctrace.so
in the environment. See also the description of gdbserver's
`--wrapper' command line option.
Using GDB to force loading the agent at run time
On some systems, you can force the inferior to load a shared library,
by calling a dynamic loader function in the inferior that takes care
of dynamically looking up and loading a shared library. On most Unix
systems, the function is dlopen. You'll use the call
command for that. For example:
(gdb) call dlopen ("libinproctrace.so", ...)
Note that on most Unix systems, for the dlopen function to be
available, the program needs to be linked with -ldl.
On systems that have a userspace dynamic loader, like most Unix
systems, when you connect to gdbserver using target
remote, you'll find that the program is stopped at the dynamic
loader's entry point, and no shared library has been loaded in the
program's address space yet, including the in-process agent. In that
case, before being able to use any of the fast or static tracepoints
features, you need to let the loader run and load the shared
libraries. The simplest way to do that is to run the program to the
main procedure. E.g., if debugging a C or C++ program, start
gdbserver like so:
$ gdbserver :9999 myprogram
Then start GDB, connect to gdbserver, and run to main:
$ gdb myprogram
(gdb) target remote myhost:9999
0x00007f215893ba60 in ?? () from /lib64/ld-linux-x86-64.so.2
(gdb) b main
(gdb) continue
The in-process tracing agent library should now be loaded into the
process; you can confirm it with the info sharedlibrary
command, which will list `libinproctrace.so' as loaded in the
process. You are now ready to install fast tracepoints, list static
tracepoint markers, probe static tracepoints markers, and start
tracing.
20.4 Remote Configuration
This section documents the configuration options available when debugging remote programs. For the options related to the File I/O extensions of the remote protocol, see system-call-allowed.
set remoteaddresssize bits
Set the maximum size of address in a memory packet to the specified
number of bits. GDB will mask off the address bits above
that number, when it passes addresses to the remote target.
show remoteaddresssize
Show the current value of remote address size in bits.
set remotebaud n
Set the baud rate for the remote serial I/O to n baud.
show remotebaud
Show the current speed of the remote connection.
set remotebreak
If set to on, GDB sends a BREAK signal to the remote
when you type Ctrl-c to interrupt the program running
on the remote. If set to off, GDB sends the `Ctrl-C'
character instead. The default is off, since most remote systems
expect to see `Ctrl-C' as the interrupt signal.
show remotebreak
Show whether GDB sends BREAK or `Ctrl-C' to
interrupt the remote program.
set remoteflow on
set remoteflow off
Enable or disable hardware flow control (RTS/CTS)
on the serial port used to communicate to the remote target.
show remoteflow
Show the current setting of hardware flow control.
set remotelogbase base
Set the base (a.k.a. radix) of logging serial protocol
communications to base. Supported values of base are:
ascii, octal, and hex. The default is
ascii.
show remotelogbase
Show the current setting of the radix for logging remote serial
protocol.
set remotelogfile file
show remotelogfile
set remotetimeout num
Set the timeout limit to wait for the remote target to respond to
num seconds. The default is 2 seconds.
show remotetimeout
Show the current number of seconds to wait for the remote target
responses.
set remote hardware-watchpoint-limit limit
set remote hardware-breakpoint-limit limit
Restrict GDB to using limit remote hardware breakpoints or
watchpoints. A limit of -1, the default, is treated as unlimited.
set remote hardware-watchpoint-length-limit limit
Restrict GDB to using limit bytes for the maximum length of
a remote hardware watchpoint. A limit of -1, the default, is treated
as unlimited.
show remote hardware-watchpoint-length-limit
Show the current limit for the length of a remote hardware watchpoint.
set remote exec-file filename
show remote exec-file
Select the file used for run with target
extended-remote. This should be set to a filename valid on the
target system. If it is not set, the target will use a default
filename (e.g. the last program run).
set remote interrupt-sequence
Allow the user to select one of `Ctrl-C', BREAK or
`BREAK-g' as the
sequence sent to the remote target in order to interrupt the execution.
`Ctrl-C' is the default. Some systems prefer BREAK, which
holds the serial line at a high level for a certain time; the
Linux kernel prefers `BREAK-g', a.k.a. Magic SysRq g,
which is a BREAK signal followed by the character g.
show interrupt-sequence
Show whether `Ctrl-C', BREAK or BREAK-g
is sent by GDB to interrupt the remote program.
BREAK-g is a BREAK signal followed by g, and is
also known as Magic SysRq g.
set remote interrupt-on-connect
Specify whether the interrupt sequence is sent to the remote target
when GDB connects to it. This is mostly needed when you
debug the Linux kernel, which expects BREAK followed by g
(known as Magic SysRq g) in order to connect GDB.
show interrupt-on-connect
Show whether the interrupt sequence is sent to the remote target when
GDB connects to it.
set tcp auto-retry on
Enable auto-retry for remote TCP connections. If the initial attempt
to connect fails, GDB reattempts to establish the connection
using the timeout specified by set tcp connect-timeout. This is
the default.
set tcp auto-retry off
Do not auto-retry failed TCP connections.
show tcp auto-retry
Show the current auto-retry setting.
set tcp connect-timeout seconds
Set the timeout for establishing a TCP connection with the remote target
to seconds. The timeout affects both polling to retry failed connections
(enabled by set tcp auto-retry on) and waiting for connections
that are merely slow to complete, and represents an approximate cumulative
value.
show tcp connect-timeout
The remote protocol autodetects the packets supported by your debugging stub. If you need to override the autodetection, you can use these commands to enable or disable individual packets. Each packet can be set to `on' (the remote target supports this packet), `off' (the remote target does not support this packet), or `auto' (detect remote target support for this packet). They all default to `auto'. For more information about each packet, see E. Remote Serial Protocol.
During normal use, you should not have to use any of these commands. If you do, that may be a bug in your remote debugging stub, or a bug in GDB. You may want to report the problem to the GDB developers.
For each packet name, the command to enable or disable the
packet is set remote name-packet. The available settings
are:
| Command Name | Remote Packet | Related Features |
| fetch-register | p | info registers |
| set-register | P | set |
| binary-download | X | load, set |
| read-aux-vector | qXfer:auxv:read | info auxv |
| symbol-lookup | qSymbol | Detecting multiple threads |
| attach | vAttach | attach |
| verbose-resume | vCont | Stepping or resuming multiple threads |
| run | vRun | run |
| software-breakpoint | Z0 | break |
| hardware-breakpoint | Z1 | hbreak |
| write-watchpoint | Z2 | watch |
| read-watchpoint | Z3 | rwatch |
| access-watchpoint | Z4 | awatch |
| target-features | qXfer:features:read | set architecture |
| library-info | qXfer:libraries:read | info sharedlibrary |
| memory-map | qXfer:memory-map:read | info mem |
| read-sdata-object | qXfer:sdata:read | print $_sdata |
| read-spu-object | qXfer:spu:read | info spu |
| write-spu-object | qXfer:spu:write | info spu |
| read-siginfo-object | qXfer:siginfo:read | print $_siginfo |
| write-siginfo-object | qXfer:siginfo:write | set $_siginfo |
| threads | qXfer:threads:read | info threads |
| get-thread-local-storage-address | qGetTLSAddr | Displaying __thread variables |
| get-thread-information-block-address | qGetTIBAddr | Display MS-Windows Thread Information Block |
| search-memory | qSearch:memory | find |
| supported-packets | qSupported | Remote communications parameters |
| pass-signals | QPassSignals | handle signal |
| program-signals | QProgramSignals | handle signal |
| hostio-close-packet | vFile:close | remote get, remote put |
| hostio-open-packet | vFile:open | remote get, remote put |
| hostio-pread-packet | vFile:pread | remote get, remote put |
| hostio-pwrite-packet | vFile:pwrite | remote get, remote put |
| hostio-unlink-packet | vFile:unlink | remote delete |
| hostio-readlink-packet | vFile:readlink | Host I/O |
| noack-packet | QStartNoAckMode | Packet acknowledgment |
| osdata | qXfer:osdata:read | info os |
| query-attached | qAttached | Querying remote process attach state |
| trace-buffer-size | QTBuffer:size | set trace-buffer-size |
| traceframe-info | qXfer:traceframe-info:read | Traceframe info |
| install-in-trace | InstallInTrace | Install tracepoint in tracing |
| disable-randomization | QDisableRandomization | set disable-randomization |
| conditional-breakpoints-packet | Z0 and Z1 | Support for target-side breakpoint condition evaluation |
20.5 Implementing a Remote Stub
The stub files provided with GDB implement the target side of the communication protocol, and the GDB side is implemented in the GDB source file `remote.c'. Normally, you can simply allow these subroutines to communicate, and ignore the details. (If you're implementing your own stub file, you can still ignore the details: start with one of the existing stub files. `sparc-stub.c' is the best organized, and therefore the easiest to read.)
To debug a program running on another machine (the debugging target machine), you must first arrange for all the usual prerequisites for the program to run by itself. For example, for a C program, you need:
1. A startup routine to set up the C runtime environment; these usually have a name like `crt0'. The startup routine may be supplied by your hardware supplier, or you may have to write your own.
2. A C subroutine library to support your program's subroutine calls, notably managing input and output.
3. A way of getting your program to the other machine---for example, a download program. These are often supplied by the hardware manufacturer, but you may have to write your own from hardware documentation.
The next step is to arrange for your program to use a serial port to communicate with the machine where GDB is running (the host machine). In general terms, the scheme looks like this: on the host, GDB already understands how to use this protocol; on the target, you must link with your program a few special-purpose subroutines that implement the protocol. The file containing these subroutines is called a debugging stub.
On certain remote targets, you can use an auxiliary program
gdbserver instead of linking a stub into your program.
See section Using the gdbserver Program, for details.
The debugging stub is specific to the architecture of the remote machine; for example, use `sparc-stub.c' to debug programs on SPARC boards.
These working remote stubs are distributed with GDB:
i386-stub.c
For Intel 386 and compatible architectures.
m68k-stub.c
For Motorola 680x0 architectures.
sh-stub.c
For Renesas SH architectures.
sparc-stub.c
For SPARC architectures.
sparcl-stub.c
For Fujitsu SPARCLITE architectures.
The `README' file in the GDB distribution may list other recently added stubs.
20.5.1 What the Stub Can Do for You What the stub can do for you 20.5.2 What You Must Do for the Stub What you must do for the stub 20.5.3 Putting it All Together Putting it all together
20.5.1 What the Stub Can Do for You
The debugging stub for your architecture supplies these three subroutines:
set_debug_traps
This routine arranges for handle_exception to run when your
program stops. You must call this subroutine explicitly in your
program's startup code.
handle_exception
This is the central workhorse, but your program never calls it
explicitly---the setup code arranges for handle_exception to
run when a trap is triggered.
handle_exception takes control when your program stops during
execution (for example, on a breakpoint), and mediates communications
with GDB on the host machine. This is where the communications
protocol is implemented; handle_exception acts as the GDB
representative on the target machine. It begins by sending summary
information on the state of your program, then continues to execute,
retrieving and transmitting any information GDB needs, until you
execute a command that makes your program resume; at that point,
handle_exception returns control to your own code on the target
machine.
breakpoint
Use this auxiliary subroutine to make your program contain a
breakpoint. Depending on the particular situation, this may be the
only way for GDB to get control. For instance, if your
target machine has some sort of interrupt button, you won't need to
call this; pressing the interrupt button transfers control to
handle_exception---in effect, to GDB. On some machines,
simply receiving characters on the serial port may also trigger a trap;
again, in that situation, you don't need to call breakpoint from
your own program--simply running `target remote' from the host GDB
session gets control.
Call breakpoint if none of these is true, or if you simply want
to make certain your program stops at a predetermined point for the
start of your debugging session.
20.5.2 What You Must Do for the Stub
The debugging stubs that come with GDB are set up for a particular chip architecture, but they have no information about the rest of your debugging target machine.
First of all, you need to tell the stub how to communicate with the serial port.
int getDebugChar()
Write this subroutine to read a single character from the serial port.
It may be identical to getchar for your target system; a
different name is used to allow you to distinguish the two if you wish.
void putDebugChar(int)
Write this subroutine to write a single character to the serial port.
It may be identical to putchar for your target system; a
different name is used to allow you to distinguish the two if you wish.
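To make the required shape concrete, here is a sketch of the two hooks backed by an in-memory buffer standing in for the serial line. The buffer and its index variables are inventions of this example; a real implementation polls the UART's status and data registers instead.

```c
/* Sketch: the two serial-port hooks the stub requires.  An in-memory
   buffer stands in for the serial line so the shape can be exercised
   on a host; real code reads and writes UART registers. */

#define PORT_SIZE 256
static unsigned char port_buf[PORT_SIZE];  /* stand-in for the line */
static unsigned int port_head, port_tail;  /* read and write positions */

/* Write one character to the serial port; compare putchar. */
void putDebugChar(int c)
{
    /* real hardware: wait for the UART transmit-ready bit first */
    port_buf[port_tail++ % PORT_SIZE] = (unsigned char)c;
}

/* Read one character from the serial port; compare getchar. */
int getDebugChar(void)
{
    /* real hardware: wait for the UART receive-ready bit */
    while (port_head == port_tail)
        ;  /* busy-wait until a character arrives */
    return port_buf[port_head++ % PORT_SIZE];
}
```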
If you want GDB to be able to stop your program while it is
running, you need to use an interrupt-driven serial driver, and arrange
for it to stop when it receives a ^C (`\003', the control-C
character). That is the character which GDB uses to tell the
remote system to stop.
Getting the debugging target to return the proper status to GDB
probably requires changes to the standard stub; one quick and dirty way
is to just execute a breakpoint instruction (the "dirty" part is that
GDB reports a SIGTRAP instead of a SIGINT).
Other routines you need to supply are:
void exceptionHandler (int exception_number, void *exception_address)
Write this function to install exception_address in the exception
handling tables, so that the stub's handler runs when the corresponding
trap occurs. The details are architecture-dependent.
For the 386, exception_address should be installed as an interrupt
gate so that interrupts are masked while the handler runs. The gate
should be at privilege level 0 (the most privileged level). The
SPARC and 68k stubs are able to mask interrupts themselves without
help from exceptionHandler.
void flush_i_cache()
On target machines that have instruction caches, GDB requires this function to make certain that the state of your program is stable.
You must also make sure this library routine is available:
void *memset(void *, int, int)
This is the standard library function memset, which sets an area of
memory to a known value. If you have one of the free versions of
libc.a, memset can be found there; otherwise, you must
either obtain it from your hardware manufacturer, or write your own.
If you do not use the GNU C compiler, you may need other standard
library subroutines as well; this varies from one stub to another,
but in general the stubs are likely to use any of the common library
subroutines which GCC generates as inline code.
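If you must supply memset yourself, a byte-at-a-time version is enough for the stub's needs. This is a sketch; production C libraries typically fill a word at a time for speed.

```c
#include <stddef.h>

/* Minimal memset for targets without a C library: fill len bytes
   starting at dest with value, returning dest as the standard requires. */
void *memset(void *dest, int value, size_t len)
{
    unsigned char *p = dest;

    while (len-- > 0)
        *p++ = (unsigned char)value;   /* store the low byte of value */
    return dest;
}
```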
20.5.3 Putting it All Together
In summary, when your program is ready to debug, you must follow these steps.
1. Make sure you have defined the supporting low-level routines
(see section 20.5.2 What You Must Do for the Stub):
getDebugChar, putDebugChar, flush_i_cache, memset, exceptionHandler.
2. Insert these lines in your program's startup code, before the main
executable loop:
set_debug_traps();
breakpoint();
On some machines, when a breakpoint trap is raised, the hardware
automatically makes the PC point to the instruction after the
breakpoint. If your machine doesn't do that, you may need to adjust
handle_exception to arrange for it to return to the instruction
after the breakpoint on this first invocation, so that your program
doesn't keep hitting the initial breakpoint instead of making
progress.
For the 680x0 stub only, you need to provide a variable called
exceptionHook. Normally you just use:
void (*exceptionHook)() = 0;
but if before calling set_debug_traps, you set it to point to a
function in your program, that function is called when
GDB continues after stopping on a trap (for example, bus
error). The function indicated by exceptionHook is called with
one parameter: an int which is the exception number.
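For instance, you could point exceptionHook at your own function before calling set_debug_traps. The recording function and what it stores below are inventions of this example (the pointer is declared here with its int parameter made explicit), not stub code:

```c
/* exceptionHook as described above: zero by default, or a function the
   stub calls with the exception number after a trap.  The recording
   function below is a hypothetical example of such a hook. */

void (*exceptionHook)(int) = 0;

static int last_exception = -1;  /* most recent exception number seen */

static void record_exception(int exception_number)
{
    last_exception = exception_number;  /* e.g. count or log bus errors */
}

/* Call this in startup code, before set_debug_traps(). */
void install_exception_hook(void)
{
    exceptionHook = record_exception;
}
```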
21. Configuration-Specific Information
While nearly all GDB commands are available for all native and cross versions of the debugger, there are some exceptions. This chapter describes things that are only available in certain configurations.
There are three major categories of configurations: native configurations, where the host and target are the same, embedded operating system configurations, which are usually the same for several different processor architectures, and bare embedded processors, which are quite different from each other.
21.1 Native 21.2 Embedded Operating Systems 21.3 Embedded Processors 21.4 Architectures
21.1 Native
This section describes details specific to particular native configurations.
21.1.1 HP-UX 21.1.2 BSD libkvm Interface Debugging BSD kernel memory images 21.1.3 SVR4 Process Information SVR4 process information 21.1.4 Features for Debugging DJGPP Programs Features specific to the DJGPP port 21.1.5 Features for Debugging MS Windows PE Executables Features specific to the Cygwin port 21.1.6 Commands Specific to GNU Hurd Systems Features specific to GNU Hurd 21.1.7 Darwin Features specific to Darwin
21.1.1 HP-UX
On HP-UX systems, if you refer to a function or variable name that begins with a dollar sign, GDB searches for a user or system name first, before it searches for a convenience variable.
21.1.2 BSD libkvm Interface
BSD-derived systems (FreeBSD/NetBSD/OpenBSD) have a kernel memory
interface that provides a uniform interface for accessing kernel virtual
memory images, including live systems and crash dumps. GDB
uses this interface to allow you to debug live kernels and kernel crash
dumps on many native BSD configurations. This is implemented as a
special kvm debugging target. For debugging a live system, load
the currently running kernel into GDB and connect to the
kvm target:
(gdb) target kvm
For debugging crash dumps, provide the file name of the crash dump as an argument:
(gdb) target kvm /var/crash/bsd.0
Once connected to the kvm target, the following commands are
available:
kvm pcb
Set current context from the Process Control Block (PCB) address.
kvm proc
Set current context from proc address. This command isn't available on
modern FreeBSD systems.
21.1.3 SVR4 Process Information
Many versions of SVR4 and compatible systems provide a facility called `/proc' that can be used to examine the image of a running process using file-system subroutines.
If GDB is configured for an operating system with this
facility, the command info proc is available to report
information about the process running your program, or about any
process running on your system. This includes, as of this writing,
GNU/Linux, OSF/1 (Digital Unix), Solaris, and Irix, but
not HP-UX, for example.
This command may also work on core files that were created on a system that has the `/proc' facility.
info proc
info proc process-id
On some systems, process-id can be of the form `[pid]/tid' which specifies a certain thread ID within a process. If the optional pid part is missing, it means a thread from the process being debugged (the leading `/' still needs to be present, or else GDB will interpret the number as a process ID rather than a thread ID).
info proc cmdline
info proc cwd
info proc exe
info proc mappings
info proc stat
info proc status
info proc all
Show all the information about the process described under all of the
above info proc subcommands.
set procfs-trace
This command enables and disables tracing of procfs API calls.
show procfs-trace
Show the current state of procfs API call tracing.
set procfs-file file
Tell GDB to write procfs API trace to the named
file. GDB appends the trace info to the previous
contents of the file. The default is to display the trace on the
standard output.
show procfs-file
Show the file to which procfs API trace is written.
proc-trace-entry
proc-trace-exit
proc-untrace-entry
proc-untrace-exit
These commands enable and disable tracing of entries into and exits
from the syscall interface.
info pidlist
For QNX Neutrino only, this command displays the list of all the
processes and all the threads within each process.
info meminfo
For QNX Neutrino only, this command displays the list of all mapinfos.
21.1.4 Features for Debugging DJGPP Programs
DJGPP is a port of the GNU development tools to MS-DOS and MS-Windows. DJGPP programs are 32-bit protected-mode programs that use the DPMI (DOS Protected-Mode Interface) API to run on top of real-mode DOS systems and their emulations.
GDB supports native debugging of DJGPP programs, and defines a few commands specific to the DJGPP port. This subsection describes those commands.
info dos
info dos sysinfo
info dos gdt
info dos ldt
info dos idt
A typical DJGPP program uses 3 segments: a code segment, a data segment (used for both data and the stack), and a DOS segment (which allows access to DOS/BIOS data structures and absolute addresses in conventional memory). However, the DPMI host will usually define additional segments in order to support the DPMI environment.
These commands allow you to display entries from the descriptor tables. Without an argument, all entries from the specified table are displayed. An argument, which should be an integer expression, means display a single entry whose index is given by the argument. For example, here's a convenient way to display information about the debugged program's data segment:
This comes in handy when you want to see whether a pointer is outside the data segment's limit (i.e. garbled).
info dos pde
info dos pte
Without an argument, info dos pde displays the entire Page Directory, and info dos pte displays all the entries in all of the Page Tables. An argument, an integer expression, given to the info dos pde command means display only that entry from the Page Directory table. An argument given to the info dos pte command means display entries from a single Page Table, the one pointed to by the specified entry in the Page Directory.
These commands are useful when your program uses DMA (Direct Memory Access), which needs physical addresses to program the DMA controller.
These commands are supported only with some DPMI servers.
info dos address-pte addr
For example, here is how to display the Page Table entry for the page where the variable i is stored:
This says that i is stored at offset 0xd30 from the page
whose physical base address is 0x02698000, and shows all the
attributes of that page.
Note that you must cast the addresses of variables to a char *,
since otherwise the value of __djgpp_base_address, the base
address of all variables and functions in a DJGPP program, will
be added using the rules of C pointer arithmetic: if i is
declared an int, GDB will add 4 times the value of
__djgpp_base_address to the address of i.
Here's another example; it displays the Page Table entry for the transfer buffer:
(The + 3 offset is because the transfer buffer's address is the
3rd member of the _go32_info_block structure.) The output
clearly shows that this DPMI server maps the addresses in conventional
memory 1:1, i.e. the physical (0x00029000 + 0x110) and
linear (0x29110) addresses are identical.
This command is supported only with some DPMI servers.
In addition to native debugging, the DJGPP port supports remote debugging via a serial data link. The following commands are specific to remote serial debugging in the DJGPP port of GDB.
set com1base addr
set com1irq irq
This command sets the Interrupt Request (IRQ) line to use
for the `COM1' serial port.
There are similar commands `set com2base', `set com3irq',
etc. for setting the port address and the IRQ lines for the
other 3 COM ports.
The related commands `show com1base', `show com1irq' etc.
display the current settings of the base address and the IRQ
lines used by the COM ports.
info serial
GDB supports native debugging of MS Windows programs, including DLLs with and without symbolic debugging information.
MS-Windows programs that call SetConsoleMode to switch off the
special meaning of the `Ctrl-C' keystroke cannot be interrupted
by typing C-c. For this reason, on MS-Windows GDB
supports C-BREAK as an alternative interrupt key
sequence, which can be used to interrupt the debuggee even if it
ignores C-c.
There are various additional Cygwin-specific commands, described in this section. Working with DLLs that have no debugging symbols is described in 21.1.5.1 Support for DLLs without Debugging Symbols.
info w32
info w32 selector
GetThreadSelectorEntry function.
It takes an optional argument that is evaluated as
a long value specifying the selector to describe.
Without an argument, this command displays information
about the six segment registers.
info w32 thread-information-block
This command displays thread-specific information stored in the
Thread Information Block (pointed to by the $fs
selector for 32-bit programs and by $gs for 64-bit programs).
info dll
This is a Cygwin-specific alias of info shared.
dll-symbols
set cygwin-exceptions mode
If mode is on, GDB will break on exceptions that
happen inside the Cygwin DLL. If mode is off, GDB
will delay recognition of exceptions, and may ignore some
exceptions which seem to be caused by internal Cygwin DLL
"bookkeeping". This option is meant primarily for debugging the
Cygwin DLL itself; the default value is off to avoid annoying
users with false SIGSEGV signals.
show cygwin-exceptions
set new-console mode
If mode is on, the debuggee will
be started in a new console on next start.
If mode is off, the debuggee will
be started in the same console as the debugger.
show new-console
set new-group mode
show new-group
set debugevents
This boolean value adds debug output concerning kernel events related to the debuggee seen by the debugger, including output of the OutputDebugString API call.
set debugexec
set debugexceptions
set debugmemory
set shell
show shell
21.1.5.1 Support for DLLs without Debugging Symbols
Very often on Windows, some of the DLLs that your program relies on do not include symbolic debugging information (for example, `kernel32.dll'). When GDB doesn't recognize any debugging symbols in a DLL, it relies on the minimal amount of symbolic information contained in the DLL's export table. This section describes working with such symbols, known internally to GDB as "minimal symbols".
Note that before the debugged program has started execution, no DLLs
will have been loaded. The easiest way around this problem is simply to
start the program -- either by setting a breakpoint or letting the
program run once to completion. It is also possible to force GDB
to load a particular DLL before starting the executable --
see the shared library information in 18.1 Commands to Specify Files, or the
dll-symbols command in 21.1.5 Features for Debugging MS Windows PE Executables. Currently,
explicitly loading symbols from a DLL with no debugging information will
cause the symbol names to be duplicated in GDB's lookup table,
which may adversely affect symbol lookup performance.
In keeping with the naming conventions used by the Microsoft debugging
tools, DLL export symbols are made available with a prefix based on the
DLL name, for instance KERNEL32!CreateFileA. The plain name is
also entered into the symbol table, so CreateFileA is often
sufficient. In some cases there will be name clashes within a program
(particularly if the executable itself includes full debugging symbols)
necessitating the use of the fully qualified name when referring to the
contents of the DLL. Use single-quotes around the name to avoid the
exclamation mark ("!") being interpreted as a language operator.
Note that the internal name of the DLL may be all upper-case, even
though the file name of the DLL is lower-case, or vice-versa. Since
symbols within GDB are case-sensitive this may cause
some confusion. If in doubt, try the info functions and
info variables commands or even maint print msymbols
(see section 16. Examining the Symbol Table). Here's an example:
(gdb) info function CreateFileA
All functions matching regular expression "CreateFileA":
Non-debugging symbols:
0x77e885f4  CreateFileA
0x77e885f4  KERNEL32!CreateFileA
(gdb) info function !
All functions matching regular expression "!":
Non-debugging symbols:
0x6100114c  cygwin1!__assert
0x61004034  cygwin1!_dll_crt0@0
0x61004240  cygwin1!dll_crt0(per_process *)
[etc...]
Symbols extracted from a DLL's export table do not contain very much type information. All that GDB can do is guess whether a symbol refers to a function or variable depending on the linker section that contains the symbol. Also note that the actual contents of the memory contained in a DLL are not available unless the program is running. This means that you cannot examine the contents of a variable or disassemble a function within a DLL without a running program.
Variables are generally treated as pointers and dereferenced automatically. For this reason, it is often necessary to prefix a variable name with the address-of operator ("&") and provide explicit type information in the command. Here's an example of the type of problem:
(gdb) print 'cygwin1!__argv'
$1 = 268572168
(gdb) x 'cygwin1!__argv'
0x10021610:     "\230y\""
And two possible solutions:
(gdb) print ((char **)'cygwin1!__argv')[0]
$2 = 0x22fd98 "/cygdrive/c/mydirectory/myprogram"
(gdb) x/2x &'cygwin1!__argv'
0x610c0aa8 <cygwin1!__argv>:    0x10021608      0x00000000
(gdb) x/x 0x10021608
0x10021608:     0x0022fd98
(gdb) x/s 0x0022fd98
0x22fd98:       "/cygdrive/c/mydirectory/myprogram"
Setting a breakpoint within a DLL is possible even before the program starts execution. However, under these circumstances, GDB can't examine the initial instructions of the function in order to skip the function's frame set-up code. You can work around this by using "*&" to set the breakpoint at a raw memory address:
(gdb) break *&'python22!PyOS_Readline'
Breakpoint 1 at 0x1e04eff0
The author of these extensions is not entirely convinced that setting a breakpoint within a shared DLL like `kernel32.dll' is completely safe.
This subsection describes GDB commands specific to GNU Hurd native debugging.
set signals
set sigs
set sigs is a shorthand alias for set
signals.
show signals
show sigs
set signal-thread
set sigthread
These commands specify which thread is the libc signal
thread. That thread is run when a signal is delivered to a running
process. set sigthread is the shorthand alias of set
signal-thread.
show signal-thread
show sigthread
set stopped
This command tells GDB that the debuggee is stopped, as with the
SIGSTOP signal. The stopped process can be
continued by delivering a signal to it.
show stopped
set exceptions
show exceptions
set task pause
Use set thread default pause on or set
thread pause on (see below) to pause individual threads.
show task pause
set task detach-suspend-count
show task detach-suspend-count
set task exception-port
set task excp
set task excp is a shorthand alias.
set noninvasive
This command switches GDB to a mode that is the least invasive as far as
interfering with the inferior is concerned. It is the same as using
set task pause, set exceptions, and
set signals with values opposite to the defaults.
info send-rights
info receive-rights
info port-rights
info port-sets
info dead-names
info ports
info psets
info ports is a shorthand alias for info
port-rights, and info psets for info port-sets.
set thread pause
When used together with set
task pause off (see above), this command comes in handy to suspend
only the current thread.
show thread pause
set thread run
show thread run
set thread detach-suspend-count
Use set thread
takeover-suspend-count to force it to an absolute value.
show thread detach-suspend-count
set thread exception-port
set thread excp
This command is similar to set task exception-port (see above),
but applies to the current thread. set thread excp is the shorthand alias.
set thread takeover-suspend-count
set thread default
show thread default
Each of the set thread commands has a set thread
default counterpart (e.g., set thread default pause, set
thread default exception-port, etc.). The thread default
variety of commands sets the default thread properties for all
threads; you can then change the properties of individual threads with
the non-default commands.
GDB provides the following commands specific to the Darwin target:
set debug darwin num
show debug darwin
set debug mach-o num
show debug mach-o
set mach-exceptions on
set mach-exceptions off
show mach-exceptions
This section describes configurations involving the debugging of embedded operating systems that are available for several different architectures.
21.2.1 Using GDB with VxWorks
GDB includes the ability to debug programs running on various real-time operating systems.
target vxworks machinename
On VxWorks, load links filename dynamically on the
current target system as well as adding its symbols in GDB.
GDB enables developers to spawn and debug tasks running on networked
VxWorks targets from a Unix host. Already-running tasks spawned from
the VxWorks shell can also be debugged. GDB uses code that runs on
both the Unix host and on the VxWorks target. The program
gdb is installed and executed on the Unix host. (It may be
installed with the name vxgdb, to distinguish it from a
GDB for debugging programs on the host itself.)
VxWorks-timeout args
All VxWorks-based targets support the option vxworks-timeout.
This option is set by the user, and args represents the number of
seconds GDB waits for responses to rpc's. You might use this if
your VxWorks target is a slow software simulator or is on the far side
of a thin network line.
The following information on connecting to VxWorks was current when this manual was produced; newer releases of VxWorks may use revised procedures.
To use with VxWorks, you must rebuild your VxWorks kernel
to include the remote debugging interface routines in the VxWorks
library `rdb.a'. To do this, define INCLUDE_RDB in the
VxWorks configuration file `configAll.h' and rebuild your VxWorks
kernel. The resulting kernel contains `rdb.a', and spawns the
source debugging task tRdbTask when VxWorks is booted. For more
information on configuring and remaking VxWorks, see the manufacturer's
manual.
Once you have included `rdb.a' in your VxWorks system image and set
your Unix execution search path to find GDB, you are ready to
run GDB. From your Unix host, run gdb (or
vxgdb, depending on your installation).
GDB comes up showing the prompt:
(vxgdb)
21.2.1.1 Connecting to VxWorks
21.2.1.2 VxWorks Download
21.2.1.3 Running Tasks
The command target lets you connect to a VxWorks target on the
network. To connect to a target whose host name is "tt", type:
(vxgdb) target vxworks tt
GDB displays messages like these:
Attaching remote machine across net...
Connected to tt.
GDB then attempts to read the symbol tables of any object modules loaded into the VxWorks target since it was last booted. GDB locates these files by searching the directories listed in the command search path (see section Your Program's Environment); if it fails to find an object file, it displays a message such as:
prog.o: No such file or directory.
When this happens, add the appropriate directory to the search path with
the command path, and execute the target
command again.
If you have connected to the VxWorks target and you want to debug an
object that has not yet been loaded, you can use the
load command to download a file from Unix to VxWorks
incrementally. The object file given as an argument to the load
command is actually opened twice: first by the VxWorks target in order
to download the code, then by GDB in order to read the symbol
table. This can lead to problems if the current working directories on
the two systems differ. If both systems have NFS mounted the same
filesystems, you can avoid these problems by using absolute paths.
Otherwise, it is simplest to set the working directory on both systems
to the directory in which the object file resides, and then to reference
the file by its name, without any path. For instance, a program
`prog.o' may reside in `vxpath/vw/demo/rdb' in VxWorks
and in `hostpath/vw/demo/rdb' on the host. To load this
program, type this on VxWorks:
-> cd "vxpath/vw/demo/rdb"
Then, in GDB, type:
(vxgdb) cd hostpath/vw/demo/rdb
(vxgdb) load prog.o
GDB displays a response similar to this:
Reading symbol data from wherever/vw/demo/rdb/prog.o... done.
You can also use the load command to reload an object module
after editing and recompiling the corresponding source file. Note that
this makes GDB delete all currently-defined breakpoints,
auto-displays, and convenience variables, and clear the value
history. (This is necessary in order to preserve the integrity of
debugger data structures that reference the target system's symbol
table.)
You can also attach to an existing task using the attach command as
follows:
(vxgdb) attach task
where task is the VxWorks hexadecimal task ID. The task can be running or suspended when you attach to it. Running tasks are suspended at the time of attachment.
This section goes into details specific to particular embedded configurations.
Whenever a specific embedded processor has a simulator, GDB allows you to send an arbitrary command to the simulator.
sim command
21.3.1 ARM
21.3.2 Renesas M32R/D and M32R/SDI
21.3.3 M68k
21.3.4 MicroBlaze
21.3.5 MIPS Embedded
21.3.6 OpenRISC 1000
21.3.7 PowerPC Embedded
21.3.8 HP PA Embedded
21.3.9 Tsqware Sparclet
21.3.10 Fujitsu Sparclite
21.3.11 Zilog Z8000
21.3.12 Atmel AVR
21.3.13 CRIS
21.3.14 Renesas Super-H
target rdi dev
target rdp dev
GDB provides the following ARM-specific commands:
set arm disassembler
This command selects the disassembly style; the "std" style is the standard style.
show arm disassembler
set arm apcs32
show arm apcs32
set arm fpu fputype
auto
softfpa
fpa
softvfp
vfp
show arm fpu
set arm abi
show arm abi
set arm fallback-mode (arm|thumb|auto)
T bit in the CPSR
register).
show arm fallback-mode
set arm force-mode (arm|thumb|auto)
show arm force-mode
set debug arm
show debug arm
The following commands are available when an ARM target is debugged using the RDI interface:
rdilogfile [file]
rdilogenable [arg]
With an argument of 1 or "yes"
enables logging, with an argument 0 or "no" disables it. With
no arguments displays the current setting. When logging is enabled,
ADP packets exchanged between GDB and the RDI target device
are logged to a file.
set rdiromatzero
target rdi command.
show rdiromatzero
set rdiheartbeat
show rdiheartbeat
target sim [simargs] ...
--swi-support=type
Tell the simulator which SWI interfaces to support. The default is all.
none
demon
angel
redboot
all
target m32r dev
target m32rsdi dev
The following commands are specific to the M32R monitor:
set download-path path
show download-path
set board-address addr
show board-address
set server-address addr
show server-address
upload [file]
tload [file]
Test the upload command.
The following commands are available for M32R/SDI:
sdireset
sdistatus
debug_chaos
use_debug_dma
use_mon_code
use_ib_break
use_dbt_break
The Motorola m68k configuration includes ColdFire support, and a target command for the following ROM monitor.
target dbug dev
The MicroBlaze is a soft-core processor supported on various Xilinx
FPGAs, such as Spartan or Virtex series. Boards with these processors
usually have JTAG ports which connect to a host system running the Xilinx
Embedded Development Kit (EDK) or Software Development Kit (SDK).
This host system is used to download the configuration bitstream to
the target FPGA. The Xilinx Microprocessor Debugger (XMD) program
communicates with the target board using the JTAG interface and
presents a gdbserver interface to the board. By default
xmd uses port 1234. (While it is possible to change
this default port, it requires the use of undocumented xmd
commands. Contact Xilinx support if you need to do this.)
Use these GDB commands to connect to the MicroBlaze target processor.
target remote :1234
Use this command to connect to the target if GDB is running on the same system as xmd.
target remote xmd-host:1234
Use this command to connect to the target if it is hosted by xmd
running on a different system named xmd-host.
load
set debug microblaze n
show debug microblaze n
GDB can use the MIPS remote debugging protocol to talk to a MIPS board attached to a serial line. This is available when you configure GDB with `--target=mips-elf'.
Use these commands to specify the connection to your target board:
target mips port
To debug a program, start up GDB with the
name of your program as the argument. To connect to the board, use the
command `target mips port', where port is the name of
the serial port connected to the board. If the program has not already
been downloaded to the board, you may use the load command to
download it. You can then use all the usual GDB commands.
For example, this sequence connects to the target board through a serial port, and loads and runs a program called prog through the debugger:
host$ gdb prog
GDB is free software and ...
(gdb) target mips /dev/ttyb
(gdb) load prog
(gdb) run
target mips hostname:portnumber
target pmon port
target ddb port
target lsi port
target r3900 dev
target array dev
GDB also supports these special commands for MIPS targets:
set mipsfpu double
set mipsfpu single
set mipsfpu none
set mipsfpu auto
show mipsfpu
In previous versions the only choices were double precision or no floating point, so `set mipsfpu on' will select double precision and `set mipsfpu off' will select no floating point.
As usual, you can inquire about the mipsfpu variable with
`show mipsfpu'.
set timeout seconds
set retransmit-timeout seconds
show timeout
show retransmit-timeout
You can control the timeout used while waiting for a packet, in the MIPS
remote protocol, with the set timeout seconds command. The
default is 5 seconds. Similarly, you can control the timeout used while
waiting for an acknowledgment of a packet with the set
retransmit-timeout seconds command. The default is 3 seconds.
You can inspect both values with show timeout and show
retransmit-timeout. (These commands are only available when GDB
is configured for `--target=mips-elf'.)
The timeout set by set timeout does not apply when GDB
is waiting for your program to stop. In that case, GDB waits
forever because it has no way of knowing how long the program is going
to run before stopping.
set syn-garbage-limit num
show syn-garbage-limit
set monitor-prompt prompt
show monitor-prompt
set monitor-warnings
This command controls warning messages from the lsi target. When on, GDB will
display warning messages whose codes are returned by the lsi
PMON monitor for breakpoint commands.
show monitor-warnings
pmon command
See the OR1k Architecture document (www.opencores.org) for more information about the platform and commands.
target jtag jtag://host:port
Connects to a remote JTAG server. The JTAG remote server can be either an or1ksim or a JTAG server connected via parallel port to the board.
Example: target jtag jtag://localhost:9999
or1ksim command
If connected to the or1ksim OpenRISC 1000 Architectural
Simulator, proprietary commands can be executed.
info or1k spr
info or1k spr group
info or1k spr groupno
info or1k spr group register
info or1k spr register
info or1k spr groupno registerno
info or1k spr registerno
spr group register value
spr register value
spr groupno registerno value
spr registerno value
Some implementations of the OpenRISC 1000 Architecture also have hardware trace. It is very similar to GDB trace, except it does not interfere with normal program execution and is thus much faster. Hardware breakpoint/watchpoint triggers can be set using:
$LEA/$LDATA
$SEA/$SDATA
$AEA/$ADATA
$FETCH
When triggered, it can capture low level data, like: PC, LSEA,
LDATA, SDATA, READSPR, WRITESPR, INSTR.
hwatch conditional
hwatch ($LEA == my_var) && ($LDATA < 50) || ($SEA == my_var) && ($SDATA >= 50)
htrace info
htrace trigger conditional
htrace qualifier conditional
htrace stop conditional
htrace record [data]*
htrace enable
htrace disable
htrace rewind [filename]
If filename is specified, a new trace file is made and any newly collected data will be written there.
htrace print [start [len]]
htrace mode continuous
htrace mode suspend
GDB supports using the DVC (Data Value Compare) register to implement in hardware simple watchpoint conditions of the form:
(gdb) watch ADDRESS|VARIABLE if ADDRESS|VARIABLE == CONSTANT EXPRESSION
The DVC register will be automatically used when GDB detects
such a pattern in a condition expression, and the created watchpoint uses one
debug register (either the exact-watchpoints option is on and the
variable is scalar, or the variable has a length of one byte). This feature
is available in native GDB running on a Linux kernel version 2.6.34
or newer.
When running on PowerPC embedded processors, GDB automatically uses
ranged hardware watchpoints, unless the exact-watchpoints option is on,
in which case watchpoints using only one debug register are created when
watching variables of scalar types.
You can create an artificial array to watch an arbitrary memory region using one of the following commands (see section 10.1 Expressions):
(gdb) watch *((char *) address)@length
(gdb) watch {char[length]} address
PowerPC embedded processors support masked watchpoints. See the discussion
about the mask argument in 5.1.2 Setting Watchpoints.
PowerPC embedded processors support hardware accelerated
ranged breakpoints. A ranged breakpoint stops execution of
the inferior whenever it executes an instruction at any address within
the range it specifies. To set a ranged breakpoint in GDB,
use the break-range command.
GDB provides the following PowerPC-specific commands:
break-range start-location, end-location
set powerpc soft-float
show powerpc soft-float
set powerpc vector-abi
show powerpc vector-abi
set powerpc exact-watchpoints
show powerpc exact-watchpoints
target dink32 dev
target ppcbug dev
target ppcbug1 dev
target sds dev
The following commands specific to the SDS protocol are supported by GDB:
set sdstimeout nsec
show sdstimeout
sds command
target op50n dev
target w89k dev
GDB enables developers to debug tasks running on
Sparclet targets from a Unix host.
GDB uses code that runs on
both the Unix host and on the Sparclet target. The program
gdb is installed and executed on the Unix host.
remotetimeout args
GDB supports the option remotetimeout.
This option is set by the user, and args represents the number of
seconds GDB waits for responses.
When compiling for debugging, include the options `-g' to get debug information and `-Ttext' to relocate the program to where you wish to load it on the target. You may also want to add the options `-n' or `-N' in order to reduce the size of the sections. Example:
sparclet-aout-gcc prog.c -Ttext 0x12010000 -g -o prog -N |
You can use objdump to verify that the addresses are what you intended:
sparclet-aout-objdump --headers --syms prog |
Once you have set
your Unix execution search path to find GDB, you are ready to
run GDB. From your Unix host, run gdb
(or sparclet-aout-gdb, depending on your installation).
GDB comes up showing the prompt:
(gdbslet)
21.3.9.1 Setting File to Debug
21.3.9.2 Connecting to Sparclet
21.3.9.3 Sparclet Download
21.3.9.4 Running and Debugging
The command file lets you choose which program to debug.
(gdbslet) file prog
GDB then attempts to read the symbol table of `prog'. GDB locates the file by searching the directories listed in the command search path. If the file was compiled with debug information (option `-g'), source files will be searched as well. GDB locates the source files by searching the directories listed in the directory search path (see section Your Program's Environment). If it fails to find a file, it displays a message such as:
prog: No such file or directory.
When this happens, add the appropriate directories to the search paths with
the commands path and dir, and execute the
target command again.
The command target lets you connect to a Sparclet target.
To connect to a target on serial port "ttya", type:
(gdbslet) target sparclet /dev/ttya
Remote target sparclet connected to /dev/ttya
main () at ../prog.c:3
GDB displays messages like these:
Connected to ttya.
Once connected to the Sparclet target,
you can use the
load command to download the file from the host to the target.
The file name and load offset should be given as arguments to the load
command.
Since the file format is aout, the program must be loaded to the starting
address. You can use objdump to find out what this value is. The load
offset is an offset which is added to the VMA (virtual memory address)
of each of the file's sections.
For instance, if the program
`prog' was linked to text address 0x1201000, with data at 0x12010160
and bss at 0x12010170, in , type:
(gdbslet) load prog 0x12010000
Loading section .text, size 0xdb0 vma 0x12010000
If the code is loaded at a different address than what the program was linked
to, you may need to use the section and add-symbol-file commands
to tell GDB where to map the symbol table.
You can now begin debugging the task using GDB's execution control
commands, b, step, run, etc. See the GDB
manual for the list of commands.
(gdbslet) b main
Breakpoint 1 at 0x12010000: file prog.c, line 3.
(gdbslet) run
Starting program: prog
Breakpoint 1, main (argc=1, argv=0xeffff21c) at prog.c:3
3        char *symarg = 0;
(gdbslet) step
4        char *execarg = "hello!";
(gdbslet)
target sparclite dev
When configured for debugging Zilog Z8000 targets, GDB includes a Z8000 simulator.
For the Z8000 family, `target sim' simulates either the Z8002 (the unsegmented variant of the Z8000 architecture) or the Z8001 (the segmented variant). The simulator recognizes which architecture is appropriate by inspecting the object code.
target sim args
After specifying this target, you can debug programs for the simulated
CPU in the same style as programs for your host computer; use the
file command to load a new program image, the run command
to run your program, and so on.
As well as making available all the usual machine registers (see section Registers), the Z8000 simulator provides three additional items of information as specially named registers:
cycles
insts
time
You can refer to these values in expressions with the usual conventions; for example, `b fputc if $cycles>5000' sets a conditional breakpoint that suspends only after at least 5000 simulated clock ticks.
When configured for debugging the Atmel AVR, GDB supports the following AVR-specific commands:
info io_registers
When configured for debugging CRIS, GDB provides the following CRIS-specific commands:
set cris-version ver
show cris-version
set cris-dwarf2-cfi
Set this to off when using gcc-cris whose version is below
R59.
show cris-dwarf2-cfi
set cris-mode mode
show cris-mode
For the Renesas Super-H processor, GDB provides these commands:
set sh calling-convention convention
show sh calling-convention
This section describes characteristics of architectures that affect all uses of GDB with the architecture, both native and cross.
21.4.1 AArch64
21.4.2 x86 Architecture-specific Issues
21.4.3 Alpha
21.4.4 MIPS
21.4.5 HPPA
21.4.6 Cell Broadband Engine SPU architecture
21.4.7 PowerPC
When GDB is debugging the AArch64 architecture, it provides the following special commands:
set debug aarch64
show debug aarch64
set struct-convention mode
Set the convention used by the inferior to return structs and
unions from functions to mode. Possible values of
mode are "pcc", "reg", and "default" (the
default). "default" or "pcc" means that structs
are returned on the stack, while "reg" means that a
struct or a union whose size is 1, 2, 4, or 8 bytes will
be returned in a register.
show struct-convention
Show the current setting of the convention to return structs
from functions.
See the following section.
Alpha- and MIPS-based computers use an unusual stack frame, which sometimes requires GDB to search backward in the object code to find the beginning of a function.
To improve response time (especially for embedded applications, where GDB may be restricted to a slow serial line for this search) you may want to limit the size of this search, using one of these commands:
set heuristic-fence-post limit
Restrict GDB to examining at most limit bytes in its search for
the beginning of a function. The larger the limit, the more bytes
heuristic-fence-post must search,
and therefore the longer it takes to run. You should only need to use
this command when debugging a stripped executable.
show heuristic-fence-post
These commands are available only when GDB is configured for debugging programs on Alpha or MIPS processors.
Several MIPS-specific commands are available when debugging MIPS programs:
set mips abi arg
show mips abi
set mips compression arg
Possible values of arg are `mips16' and `micromips'. The default compressed ISA encoding is `mips16', as executables containing MIPS16 code frequently are not identified as such.
This setting is "sticky"; that is, it retains its value across debugging sessions until reset either explicitly with this command or implicitly from an executable.
The compiler and/or assembler typically add symbol table annotations to identify functions compiled for the MIPS16 or microMIPS ISAs. If these function-scope annotations are present, GDB uses them in preference to the global compressed ISA encoding setting.
show mips compression
set mipsfpu
show mipsfpu
set mips mask-address arg
show mips mask-address
set remote-mips64-transfers-32bit-regs
show remote-mips64-transfers-32bit-regs
set debug mips
show debug mips
When GDB is debugging the HP PA architecture, it provides the following special commands:
set debug hppa
show debug hppa
maint print unwind address
When GDB is debugging the Cell Broadband Engine SPU architecture, it provides the following special commands:
info spu event
info spu signal
info spu mailbox
info spu dma
info spu proxydma
set spu stop-on-load arg
Set whether to stop for new SPE threads. When set to on, GDB
will give control to the user when a new SPE thread enters its main
function. The default is off.
show spu stop-on-load
set spu auto-flush-cache arg
Set whether to automatically flush the software-managed cache. When set to
on, GDB will automatically cause the SPE software-managed
cache to be flushed whenever SPE execution stops. This provides a consistent
view of PowerPC memory that is accessed via the cache. If an application
does not use the software-managed cache, this option has no effect.
show spu auto-flush-cache
When GDB is debugging the PowerPC architecture, it provides a set of
pseudo-registers to enable inspection of 128-bit wide Decimal Floating Point
numbers stored in the floating point registers. These values must be stored
in two consecutive registers, always starting at an even register like
f0 or f2.
The pseudo-registers go from $dl0 through $dl15, and are formed
by joining the even/odd register pairs f0 and f1 for $dl0,
f2 and f3 for $dl1 and so on.
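For example, a 128-bit decimal float held in the pair f4 and f5 is visible through the pseudo-register $dl2 (this session fragment is illustrative):

```
(gdb) print $dl2
(gdb) info registers dl2
```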
For POWER7 processors, GDB provides a set of pseudo-registers, the 64-bit wide Extended Floating Point Registers (`f32' through `f63').
You can alter the way GDB interacts with you by using the
set command. For commands controlling how GDB displays
data, see Print Settings. Other settings are
described here.
22.1 Prompt 22.2 Command Editing Command editing 22.3 Command History Command history 22.4 Screen Size Screen size 22.5 Numbers 22.6 Configuring the Current ABI Configuring the current ABI 22.7 Automatically loading associated files 22.8 Optional Warnings and Messages Optional warnings and messages 22.9 Optional Messages about Internal Happenings Optional messages about internal happenings 22.10 Other Miscellaneous Settings
GDB indicates its readiness to read a command by printing a string
called the prompt. This string is normally `(gdb)'. You
can change the prompt string with the set prompt command. For
instance, when debugging GDB with GDB, it is useful to change
the prompt in one of the GDB sessions so that you can always tell
which one you are talking to.
Note: set prompt does not add a space for you after the
prompt you set. This allows you to set a prompt which ends in a space
or a prompt that does not.
set prompt newprompt
show prompt
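For instance, when running two debuggers side by side you might mark one session like this (the prompt text is an arbitrary example; note that the trailing space must be given explicitly):

```
(gdb) set prompt (target-gdb) 
(target-gdb) 
```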
Versions of GDB that ship with Python scripting enabled have prompt extensions. The commands for interacting with these extensions are:
set extended-prompt prompt
For example:
set extended-prompt Current working directory: \w (gdb) |
Note that when an extended-prompt is set, it takes control of the prompt_hook hook. See prompt_hook, for further information.
show extended-prompt
Substitutions defined with set extended-prompt are replaced with the
corresponding strings each time the prompt is displayed.
GDB reads its input commands via the Readline interface. This
GNU library provides consistent behavior for programs which provide a
command line interface to the user. Advantages are GNU Emacs-style
or vi-style inline editing of commands, csh-like history
substitution, and a storage and recall of command history across
debugging sessions.
You may control the behavior of command line editing in GDB with the
command set.
set editing
set editing on
set editing off
show editing
See Command Line Editing,
for more details about the Readline
interface. Users unfamiliar with GNU Emacs or vi are
encouraged to read that chapter.
GDB can keep track of the commands you type during your debugging sessions, so that you can be certain of precisely what happened. Use these commands to manage the command history facility.
GDB uses the GNU History library, a part of the Readline package, to provide the history facility. See Using History Interactively, for the detailed description of the History library.
To issue a command to GDB without affecting certain aspects of the state which is seen by users, prefix it with `server ' (see section 28.2 The Server Prefix). This means that this command will not affect the command history, nor will it affect GDB's notion of which command to repeat if RET is pressed on a line by itself.
The server prefix does not affect the recording of values into the value
history; to print a value without recording it into the value history,
use the output command instead of the print command.
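Combining both mechanisms, a front end could evaluate a variable without touching either history (the variable name is hypothetical):

```
(gdb) server output mycounter
```

The `server ' prefix keeps the line out of the command history, and output, unlike print, records nothing in the value history.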
Here is the description of commands related to command history.
set history filename fname
Set the name of the GDB command history file to fname. This
defaults to the value of the environment variable GDBHISTFILE, or to
`./.gdb_history' (`./_gdb_history' on MS-DOS) if this variable
is not set.
set history save
set history save on
Record command history in a file, whose name may be specified with the
set history filename command. By default, this option is disabled.
set history save off
set history size size
Set the number of commands which GDB keeps in its history list. This
defaults to the value of the environment variable
HISTSIZE, or to 256 if this variable is not set.
History expansion assigns special meaning to the character !. See Event Designators, for more details.
Since ! is also the logical not operator in C, history expansion
is off by default. If you decide to enable history expansion with the
set history expansion on command, you may sometimes need to
follow ! (when it is used as logical not, in an expression) with
a space or a tab to prevent it from being expanded. The readline
history facilities do not attempt substitution on the strings
!= and !(, even when history expansion is enabled.
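For example, with expansion enabled, a space after ! keeps it a logical not (flag is a hypothetical variable):

```
(gdb) set history expansion on
(gdb) print ! flag
```

Without the space, `!flag' could be taken as a history event designator, although `!=' and `!(' are always left alone.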
The commands to control history expansion are:
set history expansion on
set history expansion
set history expansion off
show history
show history filename
show history save
show history size
show history expansion
show history by itself displays all four states.
show commands
show commands n
show commands +
Certain commands to GDB may produce large amounts of information output to the screen. To help you read all of it, GDB pauses and asks you for input at the end of each page of output. Type RET when you want to continue the output, or q to discard the remaining output. Also, the screen width setting determines when to wrap lines of output. Depending on what is being printed, GDB tries to break the line at a readable place, rather than simply letting it overflow onto the following line.
Normally GDB knows the size of the screen from the terminal
driver software. For example, on Unix GDB uses the termcap data base
together with the value of the TERM environment variable and the
stty rows and stty cols settings. If this is not correct,
you can override it with the set height and set
width commands:
set height lpp
show height
set width cpl
show width
These set commands specify a screen height of lpp lines and
a screen width of cpl characters. The associated show
commands display the current settings.
If you specify a height of zero lines, GDB does not pause during output no matter how long the output is. This is useful if output is to a file or to an editor buffer.
Likewise, you can specify `set width 0' to prevent GDB from wrapping its output.
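For example, when redirecting a long backtrace to a file or an editor buffer, you can disable both pagination and wrapping:

```
(gdb) set height 0
(gdb) set width 0
```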
set pagination on
set pagination off
Turning pagination off is equivalent to set height 0. Note that
running GDB with the `--batch' option (see section -batch) also automatically disables pagination.
show pagination
You can always enter numbers in octal, decimal, or hexadecimal in GDB by the usual conventions: octal numbers begin with `0', decimal numbers end with `.', and hexadecimal numbers begin with `0x'. Numbers that neither begin with `0' or `0x', nor end with a `.' are, by default, entered in base 10; likewise, the default display for numbers--when no particular format is specified--is base 10. You can change the default base for both input and output with the commands described below.
set input-radix base
For example, any of
set input-radix 012
set input-radix 10.
set input-radix 0xa
sets the input base to decimal. On the other hand, `set input-radix 10' leaves the input radix unchanged, no matter what it was, since `10', being without any leading or trailing signs of its base, is interpreted in the current radix. Thus, if the current radix is 16, `10' is interpreted in hex, i.e. as 16 decimal, which doesn't change the radix.
set output-radix base
show input-radix
show output-radix
set radix [base]
show radix
set radix sets the radix of input and output to
the same base; without an argument, it resets the radix back to its
default value of 10.
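For example, after switching to base 16 both input and output are hexadecimal (the value-history number is illustrative):

```
(gdb) set radix 16
(gdb) print 20
$1 = 0x20
```

Here the input `20' is read as hexadecimal (32 decimal) and printed back in base 16.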
GDB can determine the ABI (Application Binary Interface) of your application automatically. However, sometimes you need to override its conclusions. Use these commands to manage GDB's view of the current ABI.
One GDB configuration can debug binaries for multiple operating
system targets, either via remote debugging or native emulation.
GDB will autodetect the OS ABI (Operating System ABI) in use,
but you can override its conclusion using the set osabi command.
One example where this is useful is in debugging of binaries which use
an alternate C library (e.g. UCLIBC for GNU/Linux) which does
not have the same identifying marks that the standard C library for your
platform provides.
When GDB is debugging the AArch64 architecture, it provides a
"Newlib" OS ABI. This is useful for handling setjmp and
longjmp when debugging binaries that use the NEWLIB C library.
The "Newlib" OS ABI can be selected by set osabi Newlib.
show osabi
set osabi
set osabi abi
Generally, the way that an argument of type float is passed to a
function depends on whether the function is prototyped. For a prototyped
(i.e. ANSI/ISO style) function, float arguments are passed unchanged,
according to the architecture's convention for float. For unprototyped
(i.e. K&R style) functions, float arguments are first promoted to type
double and then passed.
Unfortunately, some forms of debug information do not reliably indicate whether a function is prototyped. If GDB calls a function that is not marked as prototyped, it consults set coerce-float-to-double.
set coerce-float-to-double
set coerce-float-to-double on
Arguments of type float will be promoted to double when passed
to an unprototyped function. This is the default setting.
set coerce-float-to-double off
Arguments of type float will be passed directly to unprototyped
functions.
show coerce-float-to-double
Show the current setting of promoting float to double.
GDB needs to know the ABI used for your program's C++
objects. The correct C++ ABI depends on which C++ compiler was
used to build your application. GDB only fully supports
programs with a single C++ ABI; if your program contains code using
multiple C++ ABI's or if GDB can not identify your
program's ABI correctly, you can tell GDB which ABI to use.
Currently supported ABI's include "gnu-v2", for g++ versions
before 3.0, "gnu-v3", for g++ versions 3.0 and later, and
"hpaCC" for the HP ANSI C++ compiler. Other C++ compilers may
use the "gnu-v2" or "gnu-v3" ABI's as well. The default setting is
"auto".
show cp-abi
set cp-abi
set cp-abi abi
set cp-abi auto
GDB sometimes reads files with commands and settings automatically, without being explicitly told so by the user. We call this feature auto-loading. While auto-loading is useful for automatically adapting GDB to the needs of your project, it can sometimes produce unexpected results or introduce security risks (e.g., if the file comes from untrusted sources).
Note that loading of these associated files (including the local `.gdbinit'
file) requires accordingly configured auto-load safe-path
(see section 22.7.4 Security restriction for auto-loading).
For these reasons, GDB includes commands and options to let you control when to auto-load files and which files should be auto-loaded.
set auto-load off
$ gdb -iex "set auto-load off" untrusted-executable corefile |
Be aware that system init file (see section C.6 System-wide configuration and settings)
and init files from your home directory (see Home Directory Init File)
still get read (as they come from generally trusted directories).
To prevent GDB from auto-loading even those init files, use the
`-nx' option (see section 2.1.2 Choosing Modes), in addition to
set auto-load no.
show auto-load
(gdb) show auto-load
gdb-scripts: Auto-loading of canned sequences of commands scripts is on.
libthread-db: Auto-loading of inferior specific libthread_db is on.
local-gdbinit: Auto-loading of .gdbinit script from current directory
is on.
python-scripts: Auto-loading of Python scripts is on.
safe-path: List of directories from which it is safe to auto-load files
is $debugdir:$datadir/auto-load.
scripts-directory: List of directories from which to load auto-loaded scripts
is $debugdir:$datadir/auto-load.
|
info auto-load
(gdb) info auto-load
gdb-scripts:
Loaded Script
Yes /home/user/gdb/gdb-gdb.gdb
libthread-db: No auto-loaded libthread-db.
local-gdbinit: Local .gdbinit file "/home/user/gdb/.gdbinit" has been
loaded.
python-scripts:
Loaded Script
Yes /home/user/gdb/gdb-gdb.py
|
These are the various kinds of files GDB can automatically load:
Python scripts, including those named in the .debug_gdb_scripts section,
controlled by set auto-load python-scripts.
These are control commands for the auto-loading:
| See set auto-load off. | Disable auto-loading globally. |
| See show auto-load. | Show setting of all kinds of files. |
| See info auto-load. | Show state of all kinds of files. |
| See set auto-load gdb-scripts. | Control for command scripts. |
| See show auto-load gdb-scripts. | Show setting of command scripts. |
| See info auto-load gdb-scripts. | Show state of command scripts. |
| See set auto-load python-scripts. | Control for Python scripts. |
| See show auto-load python-scripts. | Show setting of Python scripts. |
| See info auto-load python-scripts. | Show state of Python scripts. |
| See set auto-load scripts-directory. | Control for auto-loaded scripts location. |
| See show auto-load scripts-directory. | Show auto-loaded scripts location. |
| See set auto-load local-gdbinit. | Control for init file in the current directory. |
| See show auto-load local-gdbinit. | Show setting of init file in the current directory. |
| See info auto-load local-gdbinit. | Show state of init file in the current directory. |
| See set auto-load libthread-db. | Control for thread debugging library. |
| See show auto-load libthread-db. | Show setting of thread debugging library. |
| See info auto-load libthread-db. | Show state of thread debugging library. |
| See set auto-load safe-path. | Control directories trusted for automatic loading. |
| See show auto-load safe-path. | Show directories trusted for automatic loading. |
| See add-auto-load-safe-path. | Add directory trusted for automatic loading. |
See Python Auto-loading.
22.7.1 Automatically loading init file in the current directory `set/show/info auto-load local-gdbinit' 22.7.2 Automatically loading thread debugging library `set/show/info auto-load libthread-db' 22.7.3 The `objfile-gdb.gdb' file `set/show/info auto-load gdb-script' 22.7.4 Security restriction for auto-loading `set/show/info auto-load safe-path' 22.7.5 Displaying files tried for auto-load `set/show debug auto-load'
By default, GDB reads and executes the canned sequences of commands from the init file (if any) in the current working directory; see Init File in the Current Directory during Startup.
Note that loading of this local `.gdbinit' file also requires accordingly
configured auto-load safe-path (see section 22.7.4 Security restriction for auto-loading).
set auto-load local-gdbinit [on|off]
show auto-load local-gdbinit
info auto-load local-gdbinit
This feature is currently present only on GNU/Linux native hosts.
GDB reads in some cases the thread debugging library from places specific to the inferior (see set libthread-db-search-path).
The special `libthread-db-search-path' entry `$sdir' is processed without checking this `set auto-load libthread-db' switch as system libraries have to be trusted in general. In all other cases of `libthread-db-search-path' entries, GDB checks first if `set auto-load libthread-db' is enabled before trying to open such a thread debugging library.
Note that loading of this debugging library also requires accordingly configured
auto-load safe-path (see section 22.7.4 Security restriction for auto-loading).
set auto-load libthread-db [on|off]
show auto-load libthread-db
info auto-load libthread-db
GDB tries to load an `objfile-gdb.gdb' file containing canned sequences of commands (see section 23.1 Canned Sequences of Commands), as long as `set auto-load gdb-scripts' is set to `on'.
Note that loading of this script file also requires accordingly configured
auto-load safe-path (see section 22.7.4 Security restriction for auto-loading).
For more background refer to the similar Python scripts auto-loading description (see section 23.2.3.1 The `objfile-gdb.py' file).
set auto-load gdb-scripts [on|off]
show auto-load gdb-scripts
info auto-load gdb-scripts [regexp]
If regexp is supplied, only canned-sequences-of-commands scripts with matching names are printed.
As the files of the inferior can come from an untrusted source (such as one submitted by an application user), GDB does not always load any files automatically. GDB provides the `set auto-load safe-path' setting to list directories trusted for loading files not explicitly requested by the user. Each directory can also be a shell wildcard pattern.
If the path is not set properly you will see a warning and the file will not get loaded:
$ ./gdb -q ./gdb
Reading symbols from /home/user/gdb/gdb...done.
warning: File "/home/user/gdb/gdb-gdb.gdb" auto-loading has been
declined by your `auto-load safe-path' set
to "$debugdir:$datadir/auto-load".
warning: File "/home/user/gdb/gdb-gdb.py" auto-loading has been
declined by your `auto-load safe-path' set
to "$debugdir:$datadir/auto-load".
|
To instruct GDB to go ahead and use the init files anyway, invoke GDB like this:
$ gdb -q -iex "set auto-load safe-path /home/user/gdb" ./gdb |
The list of trusted directories is controlled by the following commands:
set auto-load safe-path [directories]
Set the list of directories (and their subdirectories) trusted for automatic
loading of files. Wildcards in directory entries do not match the directory
separator; see the FNM_PATHNAME flag for the system function fnmatch
(see section `Wildcard Matching' in GNU C Library Reference Manual).
If you omit directories, `auto-load safe-path' will be reset to
its default value as specified during compilation.
The list of directories uses path separator (`:' on GNU and Unix
systems, `;' on MS-Windows and MS-DOS) to separate directories, similarly
to the PATH environment variable.
show auto-load safe-path
add-auto-load-safe-path
This variable defaults to what --with-auto-load-dir has been configured
to (see with-auto-load-dir). `$debugdir' and `$datadir'
substitution applies the same as for set auto-load scripts-directory.
The default set auto-load safe-path value can also be overridden by the
configuration option `--with-auto-load-safe-path'.
Setting this variable to `/' disables this security protection; the corresponding configuration option is `--without-auto-load-safe-path'. This variable is supposed to be set to the system directories writable by the system superuser only. Users can add their source directories in init files in their home directories (see Home Directory Init File). See also the deprecated init file in the current directory (see Init File in the Current Directory during Startup).
To force GDB to load the files it declined to load in the previous example, you could use one of the following ways:
On the other hand you can also explicitly forbid automatic files loading which also suppresses any such warning messages:
This setting applies to the file names as entered by the user. If no entry matches, GDB tries as a last resort to also resolve all the file names into their canonical form (typically resolving symbolic links) and compare the entries again. GDB already canonicalizes most of the file names on its own before starting the comparison, so entering directories in canonical form is recommended.
For better visibility of all the file locations where you can place scripts to be auto-loaded with the inferior -- or to protect yourself against accidental execution of untrusted scripts -- GDB provides a feature for printing all the files it attempted to load. Both existing and non-existing files may be printed.
For example, the list of directories from which it is safe to auto-load files (see section 22.7.4 Security restriction for auto-loading) applies also to canonicalized file names, which may not be obvious while setting it up.
(gdb) set debug auto-load on
(gdb) file ~/src/t/true
auto-load: Loading canned sequences of commands script "/tmp/true-gdb.gdb"
for objfile "/tmp/true".
auto-load: Updating directories of "/usr:/opt".
auto-load: Using directory "/usr".
auto-load: Using directory "/opt".
warning: File "/tmp/true-gdb.gdb" auto-loading has been declined
by your `auto-load safe-path' set to "/usr:/opt".
|
set debug auto-load [on|off]
show debug auto-load
By default, GDB is silent about its inner workings. If you are
running on a slow machine, you may want to use the set verbose
command. This makes GDB tell you when it does a lengthy
internal operation, so you will not think it has crashed.
Currently, the messages controlled by set verbose are those
which announce that the symbol table for a source file is being read;
see symbol-file in Commands to Specify Files.
set verbose on
set verbose off
show verbose
set verbose is on or off.
By default, if GDB encounters bugs in the symbol table of an object file, it is silent; but if you are debugging a compiler, you may find this information useful (see section Errors Reading Symbol Files).
set complaints limit
show complaints
By default, GDB is cautious, and asks what sometimes seems to be a lot of stupid questions to confirm certain commands. For example, if you try to run a program which is already running:
(gdb) run
The program being debugged has been started already.
Start it from the beginning? (y or n)
If you are willing to unflinchingly face the consequences of your own commands, you can disable this "feature":
set confirm off
set confirm on
show confirm
If you need to debug user-defined commands or sourced files you may find it useful to enable command tracing. In this mode each command will be printed as it is executed, prefixed with one or more `+' symbols, the quantity denoting the call depth of each command.
set trace-commands on
set trace-commands off
show trace-commands
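For example, sourcing a script that sets a breakpoint produces `+'-prefixed lines, one `+' per call depth (the file name and its contents are hypothetical):

```
(gdb) set trace-commands on
(gdb) source setup.gdb
+break main
+run
```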
GDB has commands that enable optional debugging messages from various subsystems; normally these commands are of interest to maintainers, or when reporting a bug. This section documents those commands.
set exec-done-display
show exec-done-display
set debug aarch64
show debug aarch64
set debug arch
show debug arch
set debug aix-thread
show debug aix-thread
set debug check-physname
show debug check-physname
set debug coff-pe-read
show debug coff-pe-read
set debug dwarf2-die
show debug dwarf2-die
set debug dwarf2-read
show debug dwarf2-read
set debug displaced
show debug displaced
set debug event
show debug event
set debug expression
show debug expression
set debug frame
show debug frame
set debug gnu-nat
show debug gnu-nat
set debug infrun
show debug infrun
set debug jit
show debug jit
set debug lin-lwp
show debug lin-lwp
set debug mach-o
show debug mach-o
set debug notification
show debug notification
set debug observer
show debug observer
set debug overload
show debug overload
set debug parser
Turn on or off the display of expression parser debugging output.
Internally, this sets the yydebug variable in the expression
parser. See section `Tracing Your Parser' in Bison, for
details. The default is off.
show debug parser
set debug remote
show debug remote
set debug serial
show debug serial
set debug solib-frv
show debug solib-frv
set debug symtab-create
show debug symtab-create
set debug target
Changes to this flag do not take effect until the next run command.
show debug target
set debug timestamp
show debug timestamp
set debug varobj
show debug varobj
set debug xml
show debug xml
set interactive-mode
If on, this forces GDB to assume that it was started
in a terminal. In practice, this means that GDB should wait
for the user to answer queries generated by commands entered at
the command prompt. If off, this forces GDB to operate
in the opposite mode, and it uses the default answers to all queries.
If auto (the default), GDB tries to determine whether
its standard input is a terminal, and works in interactive-mode if it
is, non-interactively otherwise.
In the vast majority of cases, the debugger should be able to guess correctly which mode should be used. But this setting can be useful in certain specific cases, such as running a MinGW GDB inside a cygwin window.
show interactive-mode
GDB provides three mechanisms for extension. The first is based on composition of GDB commands, the second is based on the Python scripting language, and the third is for defining new aliases of existing commands.
To facilitate the use of the first two extensions, GDB is capable of evaluating the contents of a file. When doing so, GDB can recognize which scripting language is being used by looking at the filename extension. Files with an unrecognized filename extension are always treated as GDB command files. See section Command files.
You can control how GDB evaluates these files with the following setting:
set script-extension off
set script-extension soft
set script-extension strict
show script-extension
Display the current value of the script-extension option.
23.1 Canned Sequences of Commands 23.2 Scripting using Python 23.3 Creating new spellings of existing commands
Aside from breakpoint commands (see section Breakpoint Command Lists), GDB provides two ways to store sequences of commands for execution as a unit: user-defined commands and command files.
23.1.1 User-defined Commands How to define your own commands 23.1.2 User-defined Command Hooks Hooks for user-defined commands 23.1.3 Command Files How to write scripts of commands to be stored in a file 23.1.4 Commands for Controlled Output Commands for controlled output
A user-defined command is a sequence of GDB commands to
which you assign a new name as a command. This is done with the
define command. User commands may accept up to 10 arguments
separated by whitespace. Arguments are accessed within the user command
via $arg0...$arg9. A trivial example:
define adder
print $arg0 + $arg1 + $arg2
end
To execute the command use:
adder 1 2 3 |
This defines the command adder, which prints the sum of
its three arguments. Note the arguments are text substitutions, so they may
reference variables, use complex expressions, or even perform inferior
function calls.
In addition, $argc may be used to find out how many arguments have
been passed. This expands to a number in the range 0...10.
define adder
if $argc == 2
print $arg0 + $arg1
end
if $argc == 3
print $arg0 + $arg1 + $arg2
end
end
|
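Invoking the adder command defined above then prints the appropriate sum (the value-history numbers are illustrative):

```
(gdb) adder 1 2
$1 = 3
(gdb) adder 1 2 3
$2 = 6
```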
define commandname
The definition of the command is made up of other command lines,
which are given following the define command. The end of these
commands is marked by a line containing end.
document commandname
Document the user-defined command commandname, so that it can be
accessed by help. The command commandname must already be
defined. This command reads lines of documentation just as define
reads the lines of the command definition, ending with end.
After the document command is finished, help on command
commandname displays the documentation you have written.
You may use the document command again to change the
documentation of a command. Redefining the command with define
does not change the documentation.
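For example, documentation for the adder command shown earlier could be added like this (the exact interactive prompts may vary between GDB versions):

```
(gdb) document adder
Type documentation for "adder".
End with a line saying just "end".
>Print the sum of three integer arguments.
>end
(gdb) help adder
Print the sum of three integer arguments.
```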
dont-repeat
help user-defined
show user
show user commandname
show max-user-call-depth
set max-user-call-depth
The value of max-user-call-depth controls how many recursion
levels are allowed in user-defined commands before GDB suspects an
infinite recursion and aborts the command.
This does not apply to user-defined Python commands.
In addition to the above commands, user-defined commands frequently use control flow commands, described in 23.1.3 Command Files.
When user-defined commands are executed, the commands of the definition are not printed. An error in any command stops execution of the user-defined command.
Commands that would ask for confirmation if used interactively proceed without asking when used inside a user-defined command. Many GDB commands that normally print messages to say what they are doing omit the messages when used in a user-defined command.
You may define hooks, which are a special kind of user-defined command. Whenever you run the command `foo', if the user-defined command `hook-foo' exists, it is executed (with no arguments) before that command.
A hook may also be defined which is run after the command you executed. Whenever you run the command `foo', if the user-defined command `hookpost-foo' exists, it is executed (with no arguments) after that command. Post-execution hooks may exist simultaneously with pre-execution hooks, for the same command.
It is valid for a hook to call the command which it hooks. If this occurs, the hook is not re-executed, thereby avoiding infinite recursion.
In addition, a pseudo-command, `stop', exists. Defining a hook for it (`hook-stop') makes the associated commands execute every time execution stops in your program: before breakpoint commands are run, displays are printed, or the stack frame is printed.
For example, to ignore SIGALRM signals while
single-stepping, but treat them normally during normal execution,
you could define:
define hook-stop
handle SIGALRM nopass
end
define hook-run
handle SIGALRM pass
end
define hook-continue
handle SIGALRM pass
end
As a further example, to hook at the beginning and end of the echo
command, and to add extra text to the beginning and end of the message,
you could define:
define hook-echo
echo <<<---
end
define hookpost-echo
echo --->>>\n
end
(gdb) echo Hello World
<<<---Hello World--->>>
(gdb)
You can define a hook for any single-word command in GDB, but
not for command aliases; you should define a hook for the basic command
name, e.g. backtrace rather than bt.
You can hook a multi-word command by adding hook- or
hookpost- to the last word of the command, e.g.
`define target hook-remote' to add a hook to `target remote'.
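Following that pattern, a minimal sketch of a pre-connection hook for target remote (the echoed text is purely illustrative):

define target hook-remote
echo About to connect to a remote target...\n
end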
If an error occurs during the execution of your hook, execution of GDB commands stops and GDB issues a prompt (before the command that you actually typed had a chance to run).
If you try to define a hook which does not match any known command, you
get a warning from the define command.
A command file for GDB is a text file made of lines that are GDB commands. Comments (lines starting with #) may also be included. An empty line in a command file does nothing; it does not mean to repeat the last command, as it would from the terminal.
You can request the execution of a command file with the source
command. Note that the source command is also used to evaluate
scripts that are not Command Files. The exact behavior can be configured
using the script-extension setting.
See section Extending GDB.
source [-s] [-v] filename
The lines in a command file are generally executed sequentially, unless the order of execution is changed by one of the flow-control commands described below. The commands are not printed as they are executed. An error in any command terminates execution of the command file and control is returned to the console.
GDB first searches for filename in the current directory. If the file is not found there, and filename does not specify a directory, then GDB also looks for the file on the source search path (specified with the `directory' command); except that `$cdir' is not searched because the compilation directory is not relevant to scripts.
If -s is specified, then GDB searches for filename
on the search path even if filename specifies a directory.
The search is done by appending filename to each element of the
search path. So, for example, if filename is `mylib/myscript'
and the search path contains `/home/user' then GDB will
look for the script `/home/user/mylib/myscript'.
The search is also done if filename is an absolute path.
For example, if filename is `/tmp/myscript' and
the search path contains `/home/user' then GDB will
look for the script `/home/user/tmp/myscript'.
For DOS-like systems, if filename contains a drive specification,
it is stripped before concatenation. For example, if filename is
`d:myscript' and the search path contains `c:/tmp' then
GDB will look for the script `c:/tmp/myscript'.
If -v, for verbose mode, is given then GDB displays
each command as it is executed. The option must be given before
filename, and is interpreted as part of the filename anywhere else.
Commands that would ask for confirmation if used interactively proceed without asking when used in a command file. Many commands that normally print messages to say what they are doing omit the messages when called from command files.
GDB also accepts command input from standard input. In this mode, normal output goes to standard output and error output goes to standard error. Errors in a command file supplied on standard input do not terminate execution of the command file--execution continues with the next command.
gdb < cmds > log 2>&1 |
(The syntax above will vary depending on the shell used.) This example will execute commands from the file `cmds'. All output and errors would be directed to `log'.
Since commands stored in command files tend to be more general than commands typed interactively, they frequently need to deal with complicated situations, such as different or unexpected values of variables and symbols, changes in how the program being debugged is built, etc. GDB provides a set of flow-control commands to deal with these complexities. Using these commands, you can write complex scripts that loop over data structures, execute commands conditionally, etc.
if
else
The if command takes a single argument, which is an
expression to evaluate. It is followed by a series of commands that
are executed only if the expression is true (its value is nonzero).
There can then optionally be an else line, followed by a series
of commands that are only executed if the expression was false. The
end of the list is marked by a line containing end.
while
This command's syntax is similar to if: it takes a single argument, which is an expression
to evaluate, and must be followed by the commands to execute, one per
line, terminated by an end. These commands are called the
body of the loop. The commands in the body of while are
executed repeatedly as long as the expression evaluates to true.
loop_break
This command exits the while loop in whose body it is included.
Execution of the script continues after that while's end
line.
loop_continue
This command skips the execution of the rest of the body of commands in the while loop in whose body it is included. Execution
branches to the beginning of the while loop, where it evaluates
the controlling expression.
end
Terminate the block of commands that are the body of if,
else, or while flow-control commands.
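Putting these flow-control commands together, here is a sketch of a user-defined command that loops; the command name and the convenience variable $i are illustrative:

define count-to-five
set $i = 0
while $i < 5
  if $i == 3
    echo three\n
  else
    output $i
    echo \n
  end
  set $i = $i + 1
end
end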
During the execution of a command file or a user-defined command, normal output is suppressed; the only output that appears is what is explicitly printed by the commands in the definition. This section describes three commands useful for generating exactly the output you want.
echo text
Print text. Nonprinting characters can be included in text using C escape sequences, such as `\n' to print a newline. No newline is printed unless you specify one.
A backslash at the end of text can be used, as in C, to continue the command onto subsequent lines. For example,
echo This is some text\n\
which is continued\n\
onto several lines.\n
produces the same output as
echo This is some text\n
echo which is continued\n
echo onto several lines.\n
output expression
Print the value of expression and nothing but that value: no newlines, no `$nn = '. The value is not entered in the value history either.
output/fmt expression
Print the value of expression in format fmt. You can use the same formats as for print. See section Output Formats, for more information.
printf template, expressions...
Print the values of one or more expressions under the control of the string template. The effect is similar to the C function call:
printf (template, expressions...);
As in C printf, ordinary characters in template
are printed verbatim, while conversion specifications introduced
by the `%' character cause subsequent expressions to be
evaluated, their values converted and formatted according to type and
style information encoded in the conversion specifications, and then
printed.
For example, you can print two values in hex like this:
printf "foo, bar-foo = 0x%x, 0x%x\n", foo, bar-foo |
printf supports all the standard C conversion
specifications, including the flags and modifiers between the `%'
character and the conversion letter, with the following exceptions:
The argument-ordering modifiers, such as `2$', are not supported.
The modifier `*' is not supported.
The `'' flag (for separation of digits into groups according to LC_NUMERIC) is not supported.
Note that the `ll' type modifier is supported only if the
underlying C implementation used to build GDB supports
the long long int type, and the `L' type modifier is
supported only if the long double type is available.
As in C, printf supports simple backslash-escape
sequences, such as \n, `\t', `\\', `\"',
`\a', and `\f', that consist of backslash followed by a
single character. Octal and hexadecimal escape sequences are not
supported.
Additionally, printf supports conversion specifications for DFP
(Decimal Floating Point) types using the following length modifiers
together with a floating point specifier.
H
for printing Decimal32 types.
D
for printing Decimal64 types.
DD
for printing Decimal128 types.
If the underlying C implementation used to build GDB has
support for the three length modifiers for DFP types, other modifiers
such as width and precision will also be available for GDB to use.
In case there is no such C support, no additional modifiers will be
available and the value will be printed in the standard way.
Here's an example of printing DFP types using the above conversion letters:
printf "D32: %Hf - D64: %Df - D128: %DDf\n",1.2345df,1.2E10dd,1.2E1dl |
eval template, expressions...
Convert the values of one or more expressions under the control of the string template to a command line, and call it.
You can script GDB using the Python programming language. This feature is available only if GDB was configured using `--with-python'.
Python scripts used by GDB should be installed in `data-directory/python', where data-directory is the data directory as determined at GDB startup (see section 18.6 GDB Data Files). This directory, known as the python directory, is automatically added to the Python Search Path in order to allow the Python interpreter to locate all scripts installed at this location.
Additionally, commands and convenience functions which are written in Python and are located in the `data-directory/python/gdb/command' or `data-directory/python/gdb/function' directories are automatically imported when GDB starts.
23.2.1 Python Commands    Accessing Python from GDB.
23.2.2 Python API    Accessing GDB from Python.
23.2.3 Python Auto-loading    Automatically loading Python code.
23.2.4 Python modules    Python modules provided by GDB.
GDB provides two commands for accessing the Python interpreter, and one related setting:
python-interactive [command]
pi [command]
Without an argument, the python-interactive command can be used
to start an interactive Python prompt. To return to GDB,
type the EOF character (e.g., Ctrl-D on an empty prompt).
Alternatively, a single-line Python command can be given as an argument and evaluated. If the command is an expression, the result will be printed; otherwise, nothing will be printed. For example:
(gdb) python-interactive 2 + 3
5
python [command]
py [command]
The python command can be used to evaluate Python code.
If given an argument, the python command will evaluate the
argument as a Python command. For example:
(gdb) python print 23
23
If you do not provide an argument to python, it will act as a
multi-line command, like define. In this case, the Python
script is made up of subsequent command lines, given after the
python command. This command list is terminated using a line
containing end. For example:
(gdb) python
Type python script
End with a line saying just "end".
>print 23
>end
23
set python print-stack
By default, GDB will print only the message component of a Python exception when an error occurs in a Python script. This can be controlled using set python print-stack: if full, then
full Python stack printing is enabled; if none, then Python stack
and message printing is disabled; if message, the default, only
the message component of the error is printed.
It is also possible to execute a Python script from the GDB interpreter:
source `script-name'
The script name must end with `.py' and GDB must be configured to recognize the script language based on filename extension using the script-extension setting. See section Extending GDB.
python execfile ("script-name")
This method is based on the execfile Python built-in function,
and thus is always available.
At startup, GDB overrides Python's sys.stdout and
sys.stderr to print using GDB's output-paging streams.
A Python program which outputs to one of these streams may have its
output interrupted by the user (see section 22.4 Screen Size). In this
situation, a Python KeyboardInterrupt exception is thrown.
GDB introduces a new Python module, named gdb. All
methods and classes added by GDB are placed in this module.
GDB automatically imports the gdb module for
use in all scripts evaluated by the python command.
from_tty specifies whether GDB ought to consider this
command as having originated from the user invoking it interactively.
It must be a boolean value. If omitted, it defaults to False.
By default, any output produced by command is sent to
GDB's standard output. If the to_string parameter is
True, then output will be collected by gdb.execute and
returned as a string. The default is False, in which case the
return value is None. If to_string is True, the
GDB virtual terminal will be temporarily set to unlimited width
and height, and its pagination will be disabled; see section 22.4 Screen Size.
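As an illustrative sketch of the to_string behavior (the command and variable names are only examples), inside a GDB session one might capture a command's output as a Python string:

(gdb) python bps = gdb.execute("info breakpoints", to_string=True)
(gdb) python print bps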
gdb.parameter (parameter)
Return the value of a GDB parameter given its name. If the named parameter does not exist, this function throws a
gdb.error (see section 23.2.2.2 Exception Handling). Otherwise, the
parameter's value is converted to a Python value of the appropriate
type, and returned.
gdb.history (number)
Return a value from GDB's value history (see section 10.10 Value History). The number argument indicates which history element to return. If number is out of range, a gdb.error exception will be
raised.
If no exception is raised, the return value is always an instance of
gdb.Value (see section 23.2.2.3 Values From Inferior).
gdb.parse_and_eval (expression)
Parse expression as an expression in the current language, evaluate it, and return the result as a gdb.Value.
The expression argument must be a string.
This function can be useful when implementing a new command
(see section 23.2.2.12 Commands In Python), as it provides a way to parse the
command's argument as an expression. It is also useful simply to
compute values, for example, it is the only way to get the value of a
convenience variable (see section 10.11 Convenience Variables) as a gdb.Value.
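For instance, a sketch of reading a convenience variable as a gdb.Value inside a session (the variable name $limit is illustrative):

(gdb) set $limit = 42
(gdb) python v = gdb.parse_and_eval("$limit")
(gdb) python print v + 1
43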
gdb.find_pc_line (pc)
Return the gdb.Symtab_and_line object corresponding to the
pc value. See section 23.2.2.20 Symbol table representation in Python. If an invalid
value of pc is passed as an argument, then the symtab and
line attributes of the returned gdb.Symtab_and_line object
will be None and 0 respectively.
gdb.post_event (event)
Put event, a callable object taking no arguments, into GDB's internal event queue. This callable will be invoked at some later point, during GDB's event processing. Events posted using post_event will be run in the order in which they
were posted; however, there is no way to know when they will be
processed relative to other events inside GDB.
GDB is not thread-safe. If your Python program uses multiple
threads, you must be careful to only call GDB-specific
functions in the GDB main thread. post_event ensures
this. For example:
(gdb) python
>import threading
>
>class Writer():
> def __init__(self, message):
> self.message = message;
> def __call__(self):
> gdb.write(self.message)
>
>class MyThread1 (threading.Thread):
> def run (self):
> gdb.post_event(Writer("Hello "))
>
>class MyThread2 (threading.Thread):
> def run (self):
> gdb.post_event(Writer("World\n"))
>
>MyThread1().start()
>MyThread2().start()
>end
(gdb) Hello World
|
gdb.STDOUT
GDB's standard output stream.
gdb.STDERR
GDB's standard error stream.
gdb.STDLOG
GDB's log stream.
Writing to sys.stdout or sys.stderr will automatically
call this function and will automatically direct the output to the
relevant stream.
gdb.STDOUT
gdb.STDERR
gdb.STDLOG
Flushing sys.stdout or sys.stderr will automatically
call this function for the relevant stream.
gdb.target_charset ()
Return the name of the current target character set (see section 10.20 Character Sets). This differs from gdb.parameter('target-charset') in
that `auto' is never returned.
gdb.target_wide_charset ()
Return the name of the current target wide character set (see section 10.20 Character Sets). This differs from
gdb.parameter('target-wide-charset') in that `auto' is
never returned.
None.
This function returns a Python tuple containing two elements. The first element contains a string holding any unparsed section of expression (or None if
the expression has been fully parsed). The second element contains
either None or another tuple that contains all the locations
that match the expression represented as gdb.Symtab_and_line
objects (see section 23.2.2.20 Symbol table representation in Python.). If expression is
provided, it is decoded the way that GDB's inbuilt
break or edit commands do (see section 9.2 Specifying a Location).
If prompt_hook is callable, GDB will call the method assigned to this operation before a prompt is displayed by GDB.
The parameter current_prompt contains the current
prompt. This method must return a Python string, or None. If
a string is returned, the prompt will be set to that
string. If None is returned, GDB will continue to use
the current prompt.
Some prompts cannot be substituted in GDB. Secondary prompts, such as those used by readline for command input, and annotation-related prompts, are prohibited from being changed.
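A minimal prompt_hook sketch; the prompt text is illustrative, and the commented-out assignment is what one would run inside GDB. The hook itself is a plain Python callable, so its logic can be shown standalone:

```python
# A prompt hook receives the current prompt string and returns either a
# replacement string or None (meaning: keep the current prompt).
def annotated_prompt(current_prompt):
    # Hypothetical: prefix the existing prompt with a tag.
    return "(py) " + current_prompt

# Inside GDB one would install it with:
# gdb.prompt_hook = annotated_prompt
```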
When executing the python command, Python exceptions
uncaught within the Python code are translated to calls to
GDB's error-reporting mechanism. If the command that called
python does not handle the error, GDB will
terminate it and print an error message containing the Python
exception name, the associated value, and the Python call stack
backtrace at the point where the exception was raised. Example:
(gdb) python print foo
Traceback (most recent call last):
  File "<string>", line 1, in <module>
NameError: name 'foo' is not defined
GDB errors that happen in GDB commands invoked by Python code are converted to Python exceptions. The type of the Python exception depends on the error.
gdb.error
This is the base class for most other GDB Python errors. It is derived from RuntimeError, for compatibility with earlier
versions of GDB.
If an error occurring in GDB does not fit into some more specific category, then the generated exception will have this type.
gdb.MemoryError
This is a subclass of gdb.error which is thrown when an
operation tried to access invalid memory in the inferior.
KeyboardInterrupt
When a user interrupt (typically C-c) is handled while in Python code, a Python KeyboardInterrupt exception is thrown.
In all cases, your exception handler will see the GDB error message as its value and the Python call stack backtrace at the Python statement closest to where the GDB error occurred as the traceback.
When implementing commands in Python via gdb.Command,
it is useful to be able to throw an exception that doesn't cause a
traceback to be printed. For example, the user may have invoked the
command incorrectly. Use the gdb.GdbError exception
to handle this case. Example:
(gdb) python
>class HelloWorld (gdb.Command):
> """Greet the whole world."""
> def __init__ (self):
> super (HelloWorld, self).__init__ ("hello-world", gdb.COMMAND_USER)
> def invoke (self, args, from_tty):
> argv = gdb.string_to_argv (args)
> if len (argv) != 0:
> raise gdb.GdbError ("hello-world takes no arguments")
> print "Hello, World!"
>HelloWorld ()
>end
(gdb) hello-world 42
hello-world takes no arguments
|
GDB provides values it obtains from the inferior program in
an object of type gdb.Value. GDB uses this object
for its internal bookkeeping of the inferior's values, and for
fetching values when necessary.
Inferior values that are simple scalars can be used directly in
Python expressions that are valid for the value's data type. Here's
an example for an integer or floating-point value some_val:
bar = some_val + 2 |
As a result, bar will also be a gdb.Value object
whose value is of the same type as that of some_val.
Inferior values that are structures or instances of some class can
be accessed using the Python dictionary syntax. For example, if
some_val is a gdb.Value instance holding a structure, you
can access its foo element with:
bar = some_val['foo'] |
Again, bar will also be a gdb.Value object.
A gdb.Value that represents a function can be executed via
inferior function call. Any arguments provided to the call must match
the function's prototype, and must be provided in the order specified
by that prototype.
For example, suppose some_val is a gdb.Value instance
representing a function that takes two integers as arguments. To
execute this function, call it like so:
result = some_val (10,20) |
Any values returned from a function call will be stored as a
gdb.Value.
The following attributes are provided:
address
If this object is addressable, this attribute holds a gdb.Value object representing the address. Otherwise,
this attribute holds None.
type
The type of this gdb.Value. The value of this attribute is a
gdb.Type object (see section 23.2.2.4 Types In Python).
dynamic_type
The dynamic type of this gdb.Value. This uses C++ run-time
type information (RTTI) to determine the dynamic type of the
value. If this value is of class type, it will return the class in
which the value is embedded, if any. If this value is of pointer or
reference to a class type, it will compute the dynamic type of the
referenced object, and return a pointer or reference to that type,
respectively. In all other cases, it will return the value's static
type.
Note that this feature will only work when debugging a C++ program that includes RTTI for the object in question. Otherwise, it will just return the static type of the value as in ptype foo (see section ptype).
is_lazy
The value of this read-only boolean attribute is True if this
gdb.Value has not yet been fetched from the inferior.
GDB does not fetch values until necessary, for efficiency.
For example:
myval = gdb.parse_and_eval ('somevar')
|
The value of somevar is not fetched at this time. It will be
fetched when the value is needed, or when the fetch_lazy
method is invoked.
The following methods are provided:
Value.__init__ (val)
Many Python values can be converted directly to a gdb.Value via
this object initializer. Specifically:
Python integer
A Python integer is converted to the C long type for the
current architecture.
Python long
A Python long is converted to the C long long type for the
current architecture.
Python float
A Python float is converted to the C double type for the
current architecture.
gdb.Value
If val is a gdb.Value, then a copy of the value is made.
gdb.LazyString
If val is a gdb.LazyString (see section 23.2.2.23 Python representation of lazy strings.), then the lazy string's value method is called, and
its result is used.
Value.cast (type)
Return a new instance of gdb.Value that is the result of
casting this instance to the type described by type, which must
be a gdb.Type object. If the cast cannot be performed for some
reason, this method throws an exception.
Value.dereference ()
For pointer data types, this method returns a new gdb.Value object
whose contents is the object pointed to by the pointer. For example, if
foo is a C pointer to an int, declared in your C program as
int *foo; |
then you can use the corresponding gdb.Value to access what
foo points to like this:
bar = foo.dereference () |
The result bar will be a gdb.Value object holding the
value pointed to by foo.
A similar function Value.referenced_value exists which also
returns gdb.Value objects corresponding to the values pointed to
by pointer values (and additionally, values referenced by reference
values). However, the behavior of Value.dereference
differs from Value.referenced_value by the fact that the
behavior of Value.dereference is identical to applying the C
unary operator * on a given value. For example, consider a
reference to a pointer ptrref, declared in your C++ program
as
typedef int *intptr;
...
int val = 10;
intptr ptr = &val;
intptr &ptrref = ptr;
Though ptrref is a reference value, one can apply the method
Value.dereference to the gdb.Value object corresponding
to it and obtain a gdb.Value which is identical to that
corresponding to val. However, if you apply the method
Value.referenced_value, the result would be a gdb.Value
object identical to that corresponding to ptr.
py_ptrref = gdb.parse_and_eval ("ptrref")
py_val = py_ptrref.dereference ()
py_ptr = py_ptrref.referenced_value ()
|
The gdb.Value object py_val is identical to that
corresponding to val, and py_ptr is identical to that
corresponding to ptr. In general, Value.dereference can
be applied whenever the C unary operator * can be applied
to the corresponding C value. For those cases where applying both
Value.dereference and Value.referenced_value is allowed,
the results obtained need not be identical (as we have seen in the above
example). The results are however identical when applied on
gdb.Value objects corresponding to pointers (gdb.Value
objects with type code TYPE_CODE_PTR) in a C/C++ program.
Value.referenced_value ()
For pointer or reference data types, this method returns a new gdb.Value object corresponding to the value referenced by the
pointer/reference value. For pointer data types,
Value.dereference and Value.referenced_value produce
identical results. The difference between these methods is that
Value.dereference cannot get the values referenced by reference
values. For example, consider a reference to an int, declared
in your C++ program as
int val = 10;
int &ref = val;
then applying Value.dereference to the gdb.Value object
corresponding to ref will result in an error, while applying
Value.referenced_value will result in a gdb.Value object
identical to that corresponding to val.
py_ref = gdb.parse_and_eval ("ref")
er_ref = py_ref.dereference () # Results in error
py_val = py_ref.referenced_value () # Returns the referenced value
|
The gdb.Value object py_val is identical to that
corresponding to val.
Value.dynamic_cast (type)
Like Value.cast, but works as if the C++ dynamic_cast
operator were used. Consult a C++ reference for details.
Value.reinterpret_cast (type)
Like Value.cast, but works as if the C++ reinterpret_cast
operator were used. Consult a C++ reference for details.
Value.string ([encoding[, errors[, length]]])
If this gdb.Value represents a string, then this method
converts the contents to a Python string. Otherwise, this method will
throw an exception.
Strings are recognized in a language-specific way; whether a given
gdb.Value represents a string is determined by the current
language.
For C-like languages, a value is a string if it is a pointer to or an array of characters or ints. The string is assumed to be terminated by a zero of the appropriate width. However if the optional length argument is given, the string will be converted to that given length, ignoring any embedded zeros that the string may contain.
If the optional encoding argument is given, it must be a string
naming the encoding of the string in the gdb.Value, such as
"ascii", "iso-8859-6" or "utf-8". It accepts
the same encodings as the corresponding argument to Python's
string.decode method, and the Python codec machinery will be used
to convert the string. If encoding is not given, or if
encoding is the empty string, then either the target-charset
(see section 10.20 Character Sets) will be used, or a language-specific encoding
will be used, if the current language is able to supply one.
The optional errors argument is the same as the corresponding
argument to Python's string.decode method.
If the optional length argument is given, the string will be fetched and converted to the given length.
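For instance, a sketch of converting a C string to a Python string inside a session; the variable name msg is hypothetical, for a C variable declared as char *msg = "hello";:

(gdb) python print gdb.parse_and_eval("msg").string()
hello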
Value.lazy_string ([encoding [, length]])
If this gdb.Value represents a string, then this method
converts the contents to a gdb.LazyString (see section 23.2.2.23 Python representation of lazy strings.). Otherwise, this method will throw an exception.
If the optional encoding argument is given, it must be a string
naming the encoding of the gdb.LazyString. Some examples are:
`ascii', `iso-8859-6' or `utf-8'. If the
encoding argument is an encoding that GDB does not
recognize, GDB will raise an error.
When a lazy string is printed, the GDB encoding machinery is used to convert the string during printing. If the optional encoding argument is not provided, or is an empty string, GDB will automatically select the encoding most suitable for the string type. For further information on encoding in GDB please see section 10.20 Character Sets.
If the optional length argument is given, the string will be fetched and encoded to the length of characters specified. If the length argument is not provided, the string will be fetched and encoded until a null of appropriate width is found.
Value.fetch_lazy ()
If the gdb.Value object is currently a lazy value
(gdb.Value.is_lazy is True), then the value is
fetched from the inferior. Any errors that occur in the process
will produce a Python exception.
If the gdb.Value object is not a lazy value, this method
has no effect.
This method does not return a value.
GDB represents types from the inferior using the class
gdb.Type.
The following type-related functions are available in the gdb
module:
gdb.lookup_type (name [, block])
This function looks up a type by name; name must be a string. If block is given, then name is looked up in that scope. Otherwise, it is searched for globally.
Ordinarily, this function will return an instance of gdb.Type.
If the named type cannot be found, it will throw an exception.
If the type is a structure or class type, or an enum type, the fields
of that type can be accessed using the Python dictionary syntax.
For example, if some_type is a gdb.Type instance holding
a structure type, you can access its foo field with:
bar = some_type['foo'] |
bar will be a gdb.Field object; see below under the
description of the Type.fields method for a description of the
gdb.Field class.
An instance of Type has the following attributes:
code
The type code for this type. The type code will be one of the TYPE_CODE_ constants defined below.
sizeof
The size of this type, in target char units. Usually, a
target's char type will be an 8-bit byte. However, on some
unusual platforms, this type may have a different size.
tag
The tag name for this type. The tag name is the name after struct, union, or enum in C and C++; not all
languages have this concept. If this type has no tag name, then
None is returned.
The following methods are provided:
Type.fields ()
For structure and union types, this method returns the fields. Range types have two fields, the minimum and maximum values. Enum types have one field per enum constant. Function and method types have one field per parameter. The base types of C++ classes are also represented as fields.
Each field is a gdb.Field object, with some pre-defined attributes:
bitpos
This attribute is not available for static fields (as in
C++ or Java). For non-static fields, the value is the bit
position of the field. For enum fields, the value is the
enumeration member's integer representation.
name
The name of the field, or None for anonymous fields.
artificial
True if the field is artificial, usually meaning that
it was provided by the compiler and not the user. This attribute is
always provided, and is False if the field is not artificial.
is_base_class
True if the field represents a base class of a C++
structure. This attribute is always provided, and is False
if the field is not a base class of the type that is the argument of
fields, or if that type was not a C++ class.
bitsize
If the field is packed, or is a bitfield, then this will have a non-zero value, which is the size of the field in bits. Otherwise, this will be zero; in this case the field's size is given by its type.
type
The type of the field. This is usually an instance of Type,
but it can be None in some situations.
Type.array (n1 [, n2])
Return a new gdb.Type object which represents an array of this
type. If one argument is given, it is the inclusive upper bound of
the array; in this case the lower bound is zero. If two arguments are
given, the first argument is the lower bound of the array, and the
second argument is the upper bound of the array. An array's length
must not be negative, but the bounds can be.
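For instance, a sketch of building an int[10] array type inside a session (the variable name is illustrative):

(gdb) python arr_t = gdb.lookup_type("int").array(9)
(gdb) python print arr_t
int [10]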
Type.vector (n1 [, n2])
Return a new gdb.Type object which represents a vector of this
type. If one argument is given, it is the inclusive upper bound of
the vector; in this case the lower bound is zero. If two arguments are
given, the first argument is the lower bound of the vector, and the
second argument is the upper bound of the vector. A vector's length
must not be negative, but the bounds can be.
The difference between an array and a vector is that
arrays behave like in C: when used in expressions they decay to a pointer
to the first element whereas vectors are treated as first class values.
Type.const ()
Return a new gdb.Type object which represents a
const-qualified variant of this type.
Type.volatile ()
Return a new gdb.Type object which represents a
volatile-qualified variant of this type.
Type.unqualified ()
Return a new gdb.Type object which represents an unqualified
variant of this type. That is, the result is neither const nor
volatile.
Type.range ()
Return a Python Tuple object that contains two elements: the
low bound of the argument type and the high bound of that type. If
the type does not have a range, GDB will raise a
gdb.error exception (see section 23.2.2.2 Exception Handling).
Type.reference ()
Return a new gdb.Type object which represents a reference to this
type.
Type.pointer ()
Return a new gdb.Type object which represents a pointer to this
type.
Type.strip_typedefs ()
Return a new gdb.Type that represents the real type,
after removing all layers of typedefs.
Type.target ()
Return a new gdb.Type object which represents the target type
of this type.
For a pointer type, the target type is the type of the pointed-to object. For an array type (meaning C-like arrays), the target type is the type of the elements of the array. For a function or method type, the target type is the type of the return value. For a complex type, the target type is the type of the elements. For a typedef, the target type is the aliased type.
If the type does not have a target, this method will throw an exception.
Type.template_argument (n [, block])
If this gdb.Type is an instantiation of a template, this will
return a new gdb.Type which represents the type of the
nth template argument.
If this gdb.Type is not a template type, this will throw an
exception. Ordinarily, only C++ code will have template types.
If block is given, then name is looked up in that scope. Otherwise, it is searched for globally.
Each type has a code, which indicates what category this type falls
into. The available type categories are represented by constants
defined in the gdb module:
gdb.TYPE_CODE_PTR
The type is a pointer.
gdb.TYPE_CODE_ARRAY
The type is an array.
gdb.TYPE_CODE_STRUCT
The type is a structure.
gdb.TYPE_CODE_UNION
The type is a union.
gdb.TYPE_CODE_ENUM
The type is an enum.
gdb.TYPE_CODE_FLAGS
A bit flags type, used for things such as status registers.
gdb.TYPE_CODE_FUNC
The type is a function.
gdb.TYPE_CODE_INT
The type is an integer type.
gdb.TYPE_CODE_FLT
A floating point type.
gdb.TYPE_CODE_VOID
The special type void.
gdb.TYPE_CODE_SET
A Pascal set type.
gdb.TYPE_CODE_RANGE
A range type, that is, an integer type with bounds.
gdb.TYPE_CODE_STRING
A string type. Note that this is only used for certain languages with language-defined string types; C strings are not represented this way.
gdb.TYPE_CODE_BITSTRING
A string of bits. It is deprecated.
gdb.TYPE_CODE_ERROR
An unknown or erroneous type.
gdb.TYPE_CODE_METHOD
A method type, as found in C++ or Java.
gdb.TYPE_CODE_METHODPTR
A pointer-to-member-function.
gdb.TYPE_CODE_MEMBERPTR
A pointer-to-member.
gdb.TYPE_CODE_REF
A reference type.
gdb.TYPE_CODE_CHAR
A character type.
gdb.TYPE_CODE_BOOL
A boolean type.
gdb.TYPE_CODE_COMPLEX
A complex float type.
gdb.TYPE_CODE_TYPEDEF
A typedef to some other type.
gdb.TYPE_CODE_NAMESPACE
A C++ namespace.
gdb.TYPE_CODE_DECFLOAT
A decimal floating point type.
gdb.TYPE_CODE_INTERNAL_FUNCTION
A function internal to GDB. This is the type used to represent convenience functions.
Further support for types is provided in the gdb.types
Python module (see section 23.2.4.2 gdb.types).
An example output is provided (see section 10.9 Pretty Printing).
A pretty-printer is just an object that holds a value and implements a specific interface, defined here.
pretty_printer.children (self)
GDB will call this method on a pretty-printer to compute the children of the pretty-printer's value. This method must return an object conforming to the Python iterator protocol. Each item returned by the iterator must be a tuple holding two elements. The first element is the "name" of the child; the second element is the child's value. The value can be any Python object which is convertible to a value.
This method is optional. If it does not exist, GDB will act as though the value has no children.
pretty_printer.display_hint (self)
The CLI may call this method and use its result to change the formatting of a value. This method is optional. If it does exist, this method must return a string.
Some display hints are predefined by GDB:
`array'
Indicate that the object being printed is "array-like". The CLI uses this to respect parameters such as set print elements and
set print array.
pretty_printer.to_string (self)
GDB will call this method to display the string representation of the value passed to the object's constructor.
If the to_string method returns a Python string of some
kind, then GDB will call its internal language-specific
string-printing function to format the string. For the CLI this means
adding quotation marks, possibly escaping some characters, respecting
set print elements, and the like.
When printing from the CLI, if the to_string method exists,
then GDB will prepend its result to the values returned by
children. Exactly how this formatting is done is dependent on
the display hint, and may change as more hints are added. Also,
depending on the print settings (see section 10.8 Print Settings), the CLI may
print just the result of to_string in a stack trace, omitting
the result of children.
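The interaction of to_string, children, and display_hint can be sketched in plain Python. The "point" structure and its field names below are hypothetical, and a plain dict stands in for the gdb.Value that GDB would hand to the printer, so the sketch runs outside GDB:

```python
class PointPrinter(object):
    """Sketch of a printer for a hypothetical two-field structure."""

    def __init__(self, val):
        self.val = val

    def to_string(self):
        # On the CLI this string is printed before the children.
        return "Point"

    def children(self):
        # Each item must be a (name, value) tuple.
        yield "x", self.val["x"]
        yield "y", self.val["y"]

    def display_hint(self):
        # No special hint: display as an ordinary structure.
        return None

p = PointPrinter({"x": 1, "y": 2})
print(p.to_string(), list(p.children()))
```

Inside GDB, the value returned for each child may itself be another value with its own pretty-printer, which is how nested structures are formatted.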
If this method returns a string, it is printed verbatim.
Otherwise, if this method returns an instance of gdb.Value,
then GDB prints this value. This may result in a call to
another pretty-printer.
If instead the method returns a Python value which is convertible to a
gdb.Value, then GDB performs the conversion and prints
the resulting value. Again, this may result in a call to another
pretty-printer. Python scalars (integers, floats, and booleans) and
strings are convertible to gdb.Value; other types are not.
Finally, if this method returns None then no further operations
are performed in this method and nothing is printed.
If the result is not one of these types, an exception is raised.
GDB provides a function which can be used to look up the
default pretty-printer for a gdb.Value:
gdb.default_visualizer (value)
This function takes a gdb.Value object as an argument. If a
pretty-printer for this value exists, then it is returned. If no such
printer exists, then this returns None.
The Python list gdb.pretty_printers contains an array of
functions or callable objects that have been registered via addition
as a pretty-printer. Printers in this list are called global
printers; they are available when debugging all inferiors.
Each gdb.Progspace contains a pretty_printers attribute.
Each gdb.Objfile also contains a pretty_printers
attribute.
Each function on these lists is passed a single gdb.Value
argument and should return a pretty-printer object conforming to the
interface definition above (see section 23.2.2.5 Pretty Printing API). If a function
cannot create a pretty-printer for the value, it should return
None.
GDB first checks the pretty_printers attribute of each
gdb.Objfile in the current program space and iteratively calls
each enabled lookup routine in the list for that gdb.Objfile
until it receives a pretty-printer object.
If no pretty-printer is found in the objfile lists, then GDB
searches the pretty-printer list of the current program space,
calling each enabled function until an object is returned.
After these lists have been exhausted, it tries the global
gdb.pretty_printers list, again calling each enabled function until an
object is returned.
The order in which the objfiles are searched is not specified. For a given list, functions are always invoked from the head of the list, and iterated over sequentially until the end of the list, or a printer object is returned.
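The search order described above can be sketched in plain Python. Plain lists stand in for the real pretty_printers attributes, and each lookup function returns a printer object or None, as in the API above (the int_lookup function is purely illustrative):

```python
def find_pretty_printer(val, objfile_lists, progspace_list, global_list):
    """Sketch of the lookup: objfile lists first, then the program
    space list, then the global gdb.pretty_printers list."""
    for printers in list(objfile_lists) + [progspace_list, global_list]:
        for lookup in printers:
            # Disabled lookup functions are skipped.
            if not getattr(lookup, "enabled", True):
                continue
            printer = lookup(val)
            if printer is not None:
                return printer
    return None

# A lookup function that only recognizes Python integers, for demonstration.
def int_lookup(val):
    return "int-printer" if isinstance(val, int) else None

print(find_pretty_printer(5, [[]], [], [int_lookup]))
```

Note how attaching enabled = False to a lookup function (as described below) removes it from consideration without unregistering it.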
For various reasons a pretty-printer may not work. For example, the underlying data structure may have changed and the pretty-printer is out of date.
The consequences of a broken pretty-printer are severe enough that
GDB provides support for enabling and disabling individual
printers. For example, if print frame-arguments is on,
a backtrace can become highly illegible if any argument is printed
with a broken printer.
Pretty-printers are enabled and disabled by attaching an enabled
attribute to the registered function or callable object. If this attribute
is present and its value is False, the printer is disabled, otherwise
the printer is enabled.
A pretty-printer consists of two parts: a lookup function to detect if the type is supported, and the printer itself.
Here is an example showing how a std::string printer might be
written. See section 23.2.2.5 Pretty Printing API, for details on the API this class
must provide.
class StdStringPrinter(object):
"Print a std::string"
def __init__(self, val):
self.val = val
def to_string(self):
return self.val['_M_dataplus']['_M_p']
def display_hint(self):
return 'string'
And here is an example showing how a lookup function for the printer example above might be written.
import re

def str_lookup_function(val):
    lookup_tag = val.type.tag
    if lookup_tag is None:
        return None
    regex = re.compile("^std::basic_string<char,.*>$")
    if regex.match(lookup_tag):
        return StdStringPrinter(val)
    return None
The example lookup function extracts the value's type, and attempts to
match it to a type that it can pretty-print. If it is a type the
printer can pretty-print, it will return a printer object. If not, it
returns None.
We recommend that you put your core pretty-printers into a Python package. If your pretty-printers are for use with a library, we further recommend embedding a version number into the package name. This practice will enable GDB to load multiple versions of your pretty-printers at the same time, because they will have different names.
You should write auto-loaded code (see section 23.2.3 Python Auto-loading) such that it
can be evaluated multiple times without changing its meaning. An
ideal auto-load file will consist solely of imports of your
printer modules, followed by a call to register the pretty-printers with
the current objfile.
Taken as a whole, this approach will scale nicely to multiple inferiors, each potentially using a different library version. Embedding a version number in the Python package name will ensure that GDB is able to load both sets of printers simultaneously. Then, because the search for pretty-printers is done by objfile, and because your auto-loaded code took care to register your library's printers with a specific objfile, GDB will find the correct printers for the specific version of the library used by each inferior.
To continue the std::string example (see section 23.2.2.5 Pretty Printing API),
this code might appear in gdb.libstdcxx.v6:
def register_printers(objfile):
objfile.pretty_printers.append(str_lookup_function)
And then the corresponding contents of the auto-load file would be:
import gdb.libstdcxx.v6
gdb.libstdcxx.v6.register_printers(gdb.current_objfile())
The previous example illustrates a basic pretty-printer. There are a few things that can be improved. First, the printer doesn't have a name, making it hard to identify in a list of installed printers. The lookup function has a name, but lookup functions can have arbitrary, even identical, names.
Second, the printer only handles one type, whereas a library typically has several types. One could install a lookup function for each desired type in the library, but one could also have a single lookup function recognize several types. The latter is the conventional way this is handled. If a pretty-printer can handle multiple data types, then its subprinters are the printers for the individual data types.
The gdb.printing module provides a formal way of solving these
problems (see section 23.2.4.1 gdb.printing).
Here is another example that handles multiple types.
These are the types we are going to pretty-print:
struct foo { int a, b; };
struct bar { struct foo x, y; };
Here are the printers:
class fooPrinter:
"""Print a foo object."""
def __init__(self, val):
self.val = val
def to_string(self):
return ("a=<" + str(self.val["a"]) +
"> b=<" + str(self.val["b"]) + ">")
class barPrinter:
"""Print a bar object."""
def __init__(self, val):
self.val = val
def to_string(self):
return ("x=<" + str(self.val["x"]) +
"> y=<" + str(self.val["y"]) + ">")
This example doesn't need a lookup function; that is handled by the
gdb.printing module. Instead, a function is provided to build up
the object that handles the lookup.
import gdb.printing
def build_pretty_printer():
pp = gdb.printing.RegexpCollectionPrettyPrinter(
"my_library")
pp.add_printer('foo', '^foo$', fooPrinter)
pp.add_printer('bar', '^bar$', barPrinter)
return pp
And here is the autoload support:
import gdb.printing
import my_library
gdb.printing.register_pretty_printer(
gdb.current_objfile(),
my_library.build_pretty_printer())
Finally, when this printer is loaded into GDB, here is the corresponding output of `info pretty-printer':
(gdb) info pretty-printer
my_library.so:
my_library
foo
bar
GDB provides a way for Python code to customize type display. This is mainly useful for substituting canonical typedef names for types.
A type printer is just a Python object conforming to a certain protocol. A simple base class implementing the protocol is provided; see section 23.2.4.2 gdb.types. A type printer must supply at least:
name
The name of the type printer. This must be a string. It is listed in the output of the info type-printers command.
enabled
A boolean which is True if the printer is enabled, and False otherwise. This is manipulated by the
enable type-printer and disable type-printer
commands.
instantiate
A method that returns either None or an object with a
recognize method, as described below.
When displaying a type, say via the ptype command, GDB
will compute a list of type recognizers. This is done by iterating
first over the per-objfile type printers (see section 23.2.2.16 Objfiles In Python),
followed by the per-progspace type printers (see section 23.2.2.15 Program Spaces In Python), and finally the global type printers.
GDB will call the instantiate method of each enabled
type printer. If this method returns None, then the result is
ignored; otherwise, it is appended to the list of recognizers.
Then, when GDB is going to display a type name, it iterates
over the list of recognizers. For each one, it calls the recognition
function, stopping if the function returns a non-None value.
The recognition function is defined as:
recognize (self, type)
If type is not recognized, return None. Otherwise,
return a string which is to be printed as the name of type.
type will be an instance of gdb.Type (see section 23.2.2.4 Types In Python).
GDB uses this two-pass approach so that type printers can efficiently cache information without holding on to it too long. For example, it can be convenient to look up type information in a type printer and hold it for a recognizer's lifetime; if a single pass were done then type printers would have to make use of the event system in order to avoid holding information that could become stale as the inferior changed.
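The two-pass protocol can be sketched in plain Python. The typedef substitution shown (printing "size_t" for "unsigned long") and the tiny stand-in for gdb.Type are illustrative assumptions so the sketch runs outside GDB; a real recognizer would inspect an actual gdb.Type:

```python
class FakeType(object):
    """Minimal stand-in for gdb.Type, for illustration only."""
    def __init__(self, name):
        self.name = name

class SizeTypeRecognizer(object):
    def recognize(self, type_obj):
        # Return the substitute name, or None if not recognized.
        if type_obj.name == "unsigned long":
            return "size_t"
        return None

class SizeTypePrinter(object):
    name = "size_t"     # listed by info type-printers
    enabled = True      # honored by enable/disable type-printer

    def instantiate(self):
        # Called once per type-display pass; expensive lookups can be
        # done here and cached on the recognizer for its lifetime.
        return SizeTypeRecognizer()

printer = SizeTypePrinter()
recognizer = printer.instantiate()
print(recognizer.recognize(FakeType("unsigned long")))
```

The split between instantiate and recognize is exactly the caching opportunity described above: state built in instantiate lives only as long as the recognizer.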
Programs which are being run under GDB are called inferiors
(see section 4.9 Debugging Multiple Inferiors and Programs). Python scripts can access
information about and manipulate inferiors controlled by GDB
via objects of the gdb.Inferior class.
The following inferior-related functions are available in the gdb
module:
gdb.inferiors ()
Return a tuple containing all inferior objects.
gdb.selected_inferior ()
Return an object representing the current inferior.
A gdb.Inferior object has the following attributes:
num
ID of inferior, as assigned by GDB.
pid
Process ID of the inferior, as assigned by the underlying operating system.
was_attached
Boolean signaling whether the inferior was created using `attach', or started by GDB itself.
A gdb.Inferior object has the following methods:
Inferior.is_valid ()
Returns True if the gdb.Inferior object is valid,
False if not. A gdb.Inferior object will become invalid
if the inferior no longer exists within GDB. All other
gdb.Inferior methods will throw an exception if it is invalid
at the time the method is called.
Inferior.read_memory (address, length)
Read length bytes of memory from the inferior, starting at address. Returns a buffer object, which behaves much like an array or a string. It can be modified and given to the
Inferior.write_memory function. In Python 3, the return
value is a memoryview object.
Inferior.write_memory (address, buffer [, length])
Write the contents of buffer to the inferior, starting at address. The buffer parameter must be a Python object which supports the buffer protocol, i.e., a string, an array or the object returned from
Inferior.read_memory. If given, length
determines the number of bytes from buffer to be written.
Inferior.search_memory (address, length, pattern)
Search a region of the inferior memory starting at address with the given length using the search pattern supplied in pattern. The pattern parameter must be a Python object which supports the buffer protocol, i.e., a string, an array or the object returned from
gdb.read_memory. Returns a Python Long
containing the address where the pattern was found, or None if
the pattern could not be found.
GDB provides a general event facility so that Python code can be notified of various state changes, particularly changes that occur in the inferior.
An event is just an object that describes some state change. The type of the object and its attributes will vary depending on the details of the change. All the existing events are described below.
In order to be notified of an event, you must register an event handler
with an event registry. An event registry is an object in the
gdb.events module which dispatches particular events. A registry
provides methods to register and unregister event handlers:
EventRegistry.connect (object)
Add the given callable object to the registry. This object will be called when an event corresponding to this registry occurs.
EventRegistry.disconnect (object)
Remove the given object from the registry. Once removed, the object will no longer receive notifications of events.
Here is an example:
def exit_handler (event):
print "event type: exit"
print "exit code: %d" % (event.exit_code)
gdb.events.exited.connect (exit_handler)
In the above example we connect our handler exit_handler to the
registry events.exited. Once connected, exit_handler gets
called when the inferior exits. The argument event in this example is
of type gdb.ExitedEvent. As you can see in the example the
ExitedEvent object has an attribute which indicates the exit code of
the inferior.
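The connect/disconnect protocol can be sketched in plain Python. The _emit method below is our stand-in for GDB firing the event; real registries only expose connect and disconnect:

```python
class Registry(object):
    """Sketch of an event registry's handler bookkeeping."""

    def __init__(self):
        self._handlers = []

    def connect(self, handler):
        # Register handler to be called when the event fires.
        self._handlers.append(handler)

    def disconnect(self, handler):
        # Remove a previously registered handler.
        self._handlers.remove(handler)

    def _emit(self, event):
        # Stand-in for GDB dispatching an event to every handler.
        for handler in self._handlers:
            handler(event)

log = []
registry = Registry()
registry.connect(log.append)
registry._emit("exited")
print(log)
```

A handler connected after an event has fired does not see it retroactively; only subsequent events are dispatched to it.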
The following is a listing of the event registries that are available and details of the events they emit:
events.cont
Emit gdb.ThreadEvent.
Some events can be thread specific when GDB is running in non-stop
mode. When represented in Python, these events all extend
gdb.ThreadEvent. Note, this event is not emitted directly; instead,
events which are emitted by this or other modules might extend this event.
Examples of these events are gdb.BreakpointEvent and
gdb.ContinueEvent.
inferior_thread
In non-stop mode this attribute will be set to the specific thread which was involved in the emitted event. Otherwise, it will be set to None.
Emits gdb.ContinueEvent which extends gdb.ThreadEvent.
This event indicates that the inferior has been continued after a stop. For
inherited attribute refer to gdb.ThreadEvent above.
events.exited
Emit events.ExitedEvent, which indicates that the inferior has exited.
events.ExitedEvent has two attributes:
exit_code
An integer representing the exit code, if available, which the inferior has returned. (The exit code could be unavailable if, for example, GDB detaches from the inferior.) If the exit code is unavailable, the attribute does not exist.
inferior
A reference to the inferior which triggered the exited event.
events.stop
Emit gdb.StopEvent, which extends gdb.ThreadEvent.
Indicates that the inferior has stopped. All events emitted by this registry
extend StopEvent. As a child of gdb.ThreadEvent, gdb.StopEvent
will indicate the stopped thread when GDB is running in non-stop
mode. Refer to gdb.ThreadEvent above for more details.
Emits gdb.SignalEvent which extends gdb.StopEvent.
This event indicates that the inferior or one of its threads has received a
signal. gdb.SignalEvent has the following attributes:
stop_signal
A string representing the signal received by the inferior. A list of possible signal values can be obtained by running the command
info signals in the GDB command prompt.
Also emits gdb.BreakpointEvent which extends gdb.StopEvent.
gdb.BreakpointEvent event indicates that one or more breakpoints have
been hit, and has the following attributes:
breakpoints
A sequence containing references to all the breakpoints (type
gdb.Breakpoint) that were hit.
See section 23.2.2.21 Manipulating breakpoints using Python, for details of the gdb.Breakpoint object.
breakpoint
A reference to the first breakpoint that was hit. This attribute is maintained for backward compatibility and is now deprecated in favor of the
gdb.BreakpointEvent.breakpoints attribute.
events.new_objfile
Emit gdb.NewObjFileEvent, which indicates that a new object file has
been loaded by GDB. gdb.NewObjFileEvent has one attribute:
new_objfile
A reference to the object file (gdb.Objfile) which has been loaded.
See section 23.2.2.16 Objfiles In Python, for details of the gdb.Objfile object.
Python scripts can access information about, and manipulate, inferior threads
controlled by GDB, via objects of the gdb.InferiorThread class.
The following thread-related functions are available in the gdb
module:
gdb.selected_thread ()
This function returns the thread object for the selected thread. If there is no selected thread, this will return None.
A gdb.InferiorThread object has the following attributes:
name
The name of the thread. If the user specified a name using
thread name, then this returns that name. Otherwise, if an
OS-supplied name is available, then it is returned. Otherwise, this
returns None.
This attribute can be assigned to. The new value must be a string
object, which sets the new name, or None, which removes any
user-specified thread name.
A gdb.InferiorThread object has the following methods:
InferiorThread.is_valid ()
Returns True if the gdb.InferiorThread object is valid,
False if not. A gdb.InferiorThread object will become
invalid if the thread exits, or the inferior that the thread belongs
to is deleted. All other gdb.InferiorThread methods will throw an
exception if it is invalid at the time the method is called.
You can implement new CLI commands in Python. A CLI
command is implemented using an instance of the gdb.Command
class, most commonly using a subclass.
The object initializer for Command registers the new command
with GDB. This initializer is normally invoked from the
subclass' own __init__ method.
name is the name of the command. If name consists of multiple words, then the initial words are looked for as prefix commands. In this case, if one of the prefix commands does not exist, an exception is raised.
There is no support for multi-line commands.
command_class should be one of the `COMMAND_' constants defined below. This argument tells GDB how to categorize the new command in the help system.
completer_class is an optional argument. If given, it should be
one of the `COMPLETE_' constants defined below. This argument
tells GDB how to perform completion for this command. If not
given, GDB will attempt to complete using the object's
complete method (see below); if no such method is found, an
error will occur when completion is attempted.
prefix is an optional argument. If True, then the new
command is a prefix command; sub-commands of this command may be
registered.
The help text for the new command is taken from the Python documentation string for the command's class, if there is one. If no documentation string is provided, the default value "This command is not documented." is used.
Command.dont_repeat ()
By default, a GDB command is repeated when the user enters a blank line at the command prompt. A command can suppress this behavior by invoking the dont_repeat method. This is similar
to the user command dont-repeat; see dont-repeat.
Command.invoke (argument, from_tty)
This method is called by GDB when this command is invoked. argument is a string. It is the argument to the command, after leading and trailing whitespace has been stripped.
from_tty is a boolean argument. When true, this means that the command was entered by the user at the terminal; when false it means that the command came from elsewhere.
If this method throws an exception, it is turned into a
GDB error call. Otherwise, the return value is ignored.
To break argument up into an argv-like list, use
gdb.string_to_argv. This function behaves identically to
GDB's internal argument lexer buildargv.
It is recommended to use this for consistency.
Arguments are separated by spaces and may be quoted.
Example:
print gdb.string_to_argv ("1 2\ \\\"3 '4 \"5' \"6 '7\"")
['1', '2 "3', '4 "5', "6 '7"]
Command.complete (text, word)
This method is called by GDB when the user attempts completion on this command. All forms of completion are handled by this method, including the TAB and M-? key bindings and the complete command (see section complete).
The arguments text and word are both strings. text holds the complete command line up to the cursor's location. word holds the last word of the command line; this is computed using a word-breaking heuristic.
The complete method can return several values:
If the return value is a sequence, the contents of the sequence are used as the completions. It is up to complete to ensure that the
contents actually do complete the word. A zero-length sequence is
allowed; it means that there were no completions available. Only
string elements of the sequence are used; other elements in the
sequence are ignored.
If the return value is one of the `COMPLETE_' constants defined below, then the corresponding GDB-internal completion function is invoked, and its result is used.
If the return value is anything else, no completion occurs.
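A complete method that returns a sequence can be sketched as a standalone function. The candidate subcommand names here are hypothetical; a real implementation would derive them from the command's own vocabulary:

```python
def complete(text, word):
    """Sketch of a complete() body: return the candidates that
    extend the word being completed.  text is the whole command
    line up to the cursor; word is the last (partial) word."""
    candidates = ["start", "stop", "status"]
    return [c for c in candidates if c.startswith(word)]

print(complete("mycmd st", "st"))
```

Returning the full filtered list is enough; GDB itself discards any entries that do not actually complete the word.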
When a new command is registered, it must be declared as a member of
some general class of commands. This is used to classify top-level
commands in the on-line help system; note that prefix commands are not
listed under their own category but rather that of their top-level
command. The available classifications are represented by constants
defined in the gdb module:
gdb.COMMAND_NONE
gdb.COMMAND_RUNNING
start, step, and continue are in this category.
Type help running at the prompt to see a list of
commands in this category.
gdb.COMMAND_DATA
call, find, and print are in this category. Type
help data at the prompt to see a list of commands
in this category.
gdb.COMMAND_STACK
backtrace, frame, and return are in this
category. Type help stack at the prompt to see a
list of commands in this category.
gdb.COMMAND_FILES
file, list and section are in this category.
Type help files at the prompt to see a list of
commands in this category.
gdb.COMMAND_SUPPORT
help, make, and shell are in this category. Type
help support at the prompt to see a list of
commands in this category.
gdb.COMMAND_STATUS
info, macro,
and show are in this category. Type help status at the
prompt to see a list of commands in this category.
gdb.COMMAND_BREAKPOINTS
break,
clear, and delete are in this category. Type help
breakpoints at the prompt to see a list of commands in
this category.
gdb.COMMAND_TRACEPOINTS
trace,
actions, and tfind are in this category. Type
help tracepoints at the prompt to see a list of
commands in this category.
gdb.COMMAND_USER
gdb.COMMAND_OBSCURE
checkpoint,
fork, and stop are in this category. Type help
obscure at the prompt to see a list of commands in this
category.
gdb.COMMAND_MAINTENANCE
maintenance and flushregs commands are in this category.
Type help internals at the prompt to see a list of
commands in this category.
A new command can use a predefined completion function, either by
specifying it via an argument at initialization, or by returning it
from the complete method. These predefined completion
constants are all defined in the gdb module:
gdb.COMPLETE_NONE
gdb.COMPLETE_FILENAME
gdb.COMPLETE_LOCATION
gdb.COMPLETE_COMMAND
gdb.COMPLETE_SYMBOL
The following code snippet shows how a trivial CLI command can be implemented in Python:
class HelloWorld (gdb.Command):
"""Greet the whole world."""
def __init__ (self):
super (HelloWorld, self).__init__ ("hello-world", gdb.COMMAND_USER)
def invoke (self, arg, from_tty):
print "Hello, World!"
HelloWorld ()
The last line instantiates the class, and is necessary to trigger the
registration of the command with GDB. Depending on how the
Python code is read into GDB, you may need to import the
gdb module explicitly.
You can implement new GDB parameters using Python. A new
parameter is implemented as an instance of the gdb.Parameter
class.
Parameters are exposed to the user via the set and
show commands. See section 3.3 Getting Help.
There are many parameters that already exist and can be set in
GDB. Two examples are: set follow fork and
set charset. Setting these parameters influences certain
behavior in GDB. Similarly, you can define parameters that
can be used to influence behavior in custom Python scripts and commands.
The object initializer for Parameter registers the new
parameter with GDB. This initializer is normally invoked
from the subclass' own __init__ method.
name is the name of the new parameter. If name consists
of multiple words, then the initial words are looked for as prefix
parameters. An example of this can be illustrated with the
set print set of parameters. If name is
print foo, then print will be searched as the prefix
parameter. In this case the parameter can subsequently be accessed in
GDB as set print foo.
If name consists of multiple words, and no prefix parameter group can be found, an exception is raised.
command-class should be one of the `COMMAND_' constants (see section 23.2.2.12 Commands In Python). This argument tells GDB how to categorize the new parameter in the help system.
parameter-class should be one of the `PARAM_' constants defined below. This argument tells the type of the new parameter; this information is used for input validation and completion.
If parameter-class is PARAM_ENUM, then
enum-sequence must be a sequence of strings. These strings
represent the possible values for the parameter.
If parameter-class is not PARAM_ENUM, then the presence
of a fourth argument will cause an exception to be thrown.
The help text for the new parameter is taken from the Python documentation string for the parameter's class, if there is one. If there is no documentation string, a default value is used.
set_doc
If this attribute exists, and is a string, then its value is used as the help text for this parameter's set command. The value is
examined when Parameter.__init__ is invoked; subsequent changes
have no effect.
show_doc
If this attribute exists, and is a string, then its value is used as the help text for this parameter's show command. The value is
examined when Parameter.__init__ is invoked; subsequent changes
have no effect.
The value attribute holds the underlying value of the
parameter. It can be read and assigned to just as any other
attribute. GDB does validation when assignments are made.
There are two methods that should be implemented in any
Parameter class. These are:
get_set_string (self)
GDB will call this method when a parameter's value has been changed via the set API (for example, set foo off).
The value attribute has already been populated with the new
value and may be used in output. This method must return a string.
get_show_string (self, svalue)
GDB will call this method when a parameter's show API has been invoked (for example, show foo). The
argument svalue receives the string representation of the
current value. This method must return a string.
When a new parameter is defined, its type must be specified. The
available types are represented by constants defined in the gdb
module:
gdb.PARAM_BOOLEAN
True
and False are the only valid values.
gdb.PARAM_AUTO_BOOLEAN
The value has three possible states: true, false, and `auto'. In Python, the `auto' state is represented by None.
gdb.PARAM_UINTEGER
gdb.PARAM_INTEGER
gdb.PARAM_STRING
gdb.PARAM_STRING_NOESCAPE
gdb.PARAM_OPTIONAL_FILENAME
The value is either a filename (a string), or None.
gdb.PARAM_FILENAME
The value is a filename. This is just like
PARAM_STRING_NOESCAPE, but uses file names for completion.
gdb.PARAM_ZINTEGER
The value is an integer. This is like
PARAM_INTEGER, except 0
is interpreted as itself.
gdb.PARAM_ENUM
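A boolean parameter can be sketched as follows. To keep the sketch runnable outside GDB, a minimal stand-in for the gdb module is defined first; inside GDB you would simply import gdb and subclass the real gdb.Parameter. The parameter name "my-trace" and its semantics are illustrative assumptions:

```python
class _StubParameter(object):
    """Stand-in for gdb.Parameter so the sketch runs outside GDB."""
    def __init__(self, name, command_class, parameter_class):
        self.value = None

class gdb(object):                      # stand-in, not the real module
    COMMAND_NONE = 0
    PARAM_BOOLEAN = 1
    Parameter = _StubParameter

class MyTraceParam(gdb.Parameter):
    """Whether my scripts print trace output."""
    set_doc = "Set whether my scripts print trace output."
    show_doc = "Show whether my scripts print trace output."

    def __init__(self):
        super(MyTraceParam, self).__init__("my-trace",
                                           gdb.COMMAND_NONE,
                                           gdb.PARAM_BOOLEAN)
        self.value = False

    def get_set_string(self):
        # Called after `set my-trace on|off`; value is already updated.
        return "My-trace is %s." % ("on" if self.value else "off")

    def get_show_string(self, svalue):
        # Called for `show my-trace`; svalue is the current value string.
        return "My-trace is currently: %s" % svalue

param = MyTraceParam()
print(param.get_set_string())
```

Other scripts would then consult param.value (or, inside GDB, look the parameter up) to decide whether to emit trace output.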
You can implement new convenience functions (see section 10.11 Convenience Variables)
in Python. A convenience function is an instance of a subclass of the
class gdb.Function.
The object initializer for Function registers the new function with
GDB. The argument name is the name of the function,
a string. The function will be visible to the user as a convenience
variable of type internal function, whose name is the same as
the given name.
The documentation for the new function is taken from the documentation string for the new class.
Function.invoke (*args)
When a convenience function is evaluated, its arguments are converted to
instances of gdb.Value, and then the function's
invoke method is called. Note that GDB does not
predetermine the arity of convenience functions. Instead, all
available arguments are passed to invoke, following the
standard Python calling convention. In particular, a convenience
function can have default values for parameters without ill effect.
The return value of this method is used as its value in the enclosing
expression. If an ordinary Python value is returned, it is converted
to a gdb.Value following the usual rules.
The following code snippet shows how a trivial convenience function can be implemented in Python:
class Greet (gdb.Function):
"""Return string to greet someone.
Takes a name as argument."""
def __init__ (self):
super (Greet, self).__init__ ("greet")
def invoke (self, name):
return "Hello, %s!" % name.string ()
Greet ()
The last line instantiates the class, and is necessary to trigger the
registration of the function with GDB. Depending on how the
Python code is read into GDB, you may need to import the
gdb module explicitly.
Now you can use the function in an expression:
(gdb) print $greet("Bob")
$1 = "Hello, Bob!"
A program space, or progspace, represents a symbolic view of an address space. It consists of all of the objfiles of the program. See section 23.2.2.16 Objfiles In Python. See section program spaces, for more details about program spaces.
The following progspace-related functions are available in the
gdb module:
gdb.current_progspace ()
This function returns the program space of the currently selected inferior.
gdb.progspaces ()
Return a sequence of all the progspaces currently known to GDB.
Each progspace is represented by an instance of the gdb.Progspace
class.
The pretty_printers attribute is a list of functions. It is
used to look up pretty-printers. A Value is passed to each
function in order; if the function returns None, then the
search continues. Otherwise, the return value should be an object
which is used to format the value. See section 23.2.2.5 Pretty Printing API, for more
information.
The type_printers attribute is a list of type printer objects.
See section 23.2.2.8 Type Printing API, for more information.
GDB loads symbols for an inferior from various symbol-containing files (see section 18.1 Commands to Specify Files). These include the primary executable file, any shared libraries used by the inferior, and any separate debug info files (see section 18.2 Debugging Information in Separate Files). GDB calls these symbol-containing files objfiles.
The following objfile-related functions are available in the
gdb module:
gdb.current_objfile ()
When auto-loading a Python script (see section 23.2.3 Python Auto-loading), GDB sets the "current objfile" to the corresponding objfile. This function returns the current objfile. If there is no current objfile, this function returns None.
gdb.objfiles ()
Return a sequence of all the objfiles currently known to GDB.
Each objfile is represented by an instance of the gdb.Objfile
class.
The pretty_printers attribute is a list of functions. It is
used to look up pretty-printers. A Value is passed to each
function in order; if the function returns None, then the
search continues. Otherwise, the return value should be an object
which is used to format the value. See section 23.2.2.5 Pretty Printing API, for more
information.
The type_printers attribute is a list of type printer objects.
See section 23.2.2.8 Type Printing API, for more information.
A gdb.Objfile object has the following methods:
Objfile.is_valid ()
Returns True if the gdb.Objfile object is valid,
False if not. A gdb.Objfile object can become invalid
if the object file it refers to is no longer loaded in GDB.
All other gdb.Objfile methods will throw an exception
if it is invalid at the time the method is called.
When the debugged program stops, GDB is able to analyze its call
stack (see section Stack frames). The gdb.Frame class
represents a frame in the stack. A gdb.Frame object is only valid
while its corresponding frame exists in the inferior's stack. If you try
to use an invalid frame object, GDB will throw a gdb.error
exception (see section 23.2.2.2 Exception Handling).
Two gdb.Frame objects can be compared for equality with the ==
operator, like:
(gdb) python print gdb.newest_frame() == gdb.selected_frame ()
True
The following frame-related functions are available in the gdb module:
gdb.selected_frame ()
Return the selected frame object.
gdb.newest_frame ()
Return the newest frame object for the currently selected thread.
gdb.frame_stop_reason_string (reason)
Return a string explaining the reason why GDB stopped unwinding frames, as expressed by the given reason code (an integer, see the
unwind_stop_reason method further down in this section).
A gdb.Frame object has the following methods:
Frame.is_valid ()
Returns true if the gdb.Frame object is valid, false if not.
A frame object can become invalid if the frame it refers to doesn't
exist anymore in the inferior. All gdb.Frame methods will throw
an exception if it is invalid at the time the method is called.
Frame.name ()
Returns the function name of the frame, or None if it can't be
obtained.
Frame.architecture ()
Returns the gdb.Architecture object corresponding to the frame's
architecture. See section 23.2.2.24 Python representation of architectures.
Frame.type ()
Returns the type of the frame. The value can be one of:
gdb.NORMAL_FRAME
gdb.DUMMY_FRAME
gdb.INLINE_FRAME
A frame representing an inlined function. The function was inlined into a
gdb.NORMAL_FRAME that is older than this one.
gdb.TAILCALL_FRAME
gdb.SIGTRAMP_FRAME
gdb.ARCH_FRAME
gdb.SENTINEL_FRAME
A fake stack frame; it is like
gdb.NORMAL_FRAME, but it is only used for the
newest frame.
Frame.unwind_stop_reason ()
Return an integer representing the reason why it's not possible to find more frames toward the outermost frame. Use
gdb.frame_stop_reason_string to convert the value returned by this
function to a string. The value can be one of:
gdb.FRAME_UNWIND_NO_REASON
gdb.FRAME_UNWIND_NULL_ID
gdb.FRAME_UNWIND_OUTERMOST
gdb.FRAME_UNWIND_UNAVAILABLE
gdb.FRAME_UNWIND_INNER_ID
gdb.FRAME_UNWIND_SAME_ID
gdb.FRAME_UNWIND_NO_SAVED_PC
gdb.FRAME_UNWIND_FIRST_ERROR
reason = gdb.selected_frame().unwind_stop_reason ()
reason_str = gdb.frame_stop_reason_string (reason)
if reason >= gdb.FRAME_UNWIND_FIRST_ERROR:
    print "An error occurred: %s" % reason_str
Frame.read_var (variable [, block])
Return the value of variable in this frame. If the optional argument block is provided, search for the variable from that block; otherwise start at the frame's current block. variable must be a string or a
gdb.Symbol object. block must be a
gdb.Block object.
Within each frame, GDB maintains information on each block
stored in that frame. These blocks are organized hierarchically, and
are represented individually in Python as a gdb.Block.
Please see 23.2.2.17 Accessing inferior stack frames from Python., for a more in-depth discussion on
frames. Furthermore, see Examining the Stack, for more
detailed technical information on GDB's book-keeping of the
stack.
A gdb.Block is iterable. The iterator returns the symbols
(see section 23.2.2.19 Python representation of Symbols.) local to the block. Python programs
should not assume that a specific block object will always contain a
given symbol, since changes in GDB features and
infrastructure may cause symbols to move across blocks in a symbol
table.
The following block-related functions are available in the gdb
module:
gdb.block_for_pc (pc)
Return the innermost gdb.Block containing the given pc value. If the
block cannot be found for the pc value specified, the function
will return None.
A gdb.Block object has the following methods:
Block.is_valid ()
Returns True if the gdb.Block object is valid,
False if not. A block object can become invalid if the block it
refers to doesn't exist anymore in the inferior. All other
gdb.Block methods will throw an exception if it is invalid at
the time the method is called. The block's validity is also checked
during iteration over symbols of the block.
A gdb.Block object has the following attributes:
function
The name of the block represented as a gdb.Symbol. If the
block is not named, then this attribute holds None. This
attribute is not writable.
superblock
The block containing this block. If this parent block does not exist, this attribute holds None. This attribute is not writable.
is_global
True if the gdb.Block object is a global block,
False if not. This attribute is not
writable.
is_static
True if the gdb.Block object is a static block,
False if not. This attribute is not writable.
GDB represents every variable, function and type as an
entry in a symbol table. See section Examining the Symbol Table.
Similarly, Python represents these symbols in GDB with the
gdb.Symbol object.
The following symbol-related functions are available in the gdb
module:
gdb.lookup_symbol (name [, block [, domain]])
This function searches for a symbol by name. name is the name of the symbol. It must be a string. The
optional block argument restricts the search to symbols visible
in that block. The block argument must be a
gdb.Block object. If omitted, the block for the current frame
is used. The optional domain argument restricts
the search to the domain type. The domain argument must be a
domain constant defined in the gdb module and described later
in this chapter.
The result is a tuple of two elements.
The first element is a gdb.Symbol object or None if the symbol
is not found.
If the symbol is found, the second element is True if the symbol
is a field of a method's object (e.g., this in C++),
otherwise it is False.
If the symbol is not found, the second element is False.
gdb.lookup_global_symbol (name [, domain]): name is the name of the symbol. It must be a string.
The optional domain argument restricts the search to the domain type.
The domain argument must be a domain constant defined in the gdb
module and described later in this chapter.
The result is a gdb.Symbol object or None if the symbol
is not found.
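A short sketch of both lookup functions, run inside a GDB session (the symbol names here are only examples and depend on the program being debugged):

```python
# Run inside GDB with the inferior stopped in some frame.
# Look up "argc" among the symbols visible from the current frame;
# the second tuple element reports whether it is a field of "this".
sym, is_field_of_this = gdb.lookup_symbol("argc")
if sym is not None:
    print sym.name, sym.type

# Look up a global symbol by name; returns None if not found.
main_sym = gdb.lookup_global_symbol("main")
if main_sym is not None:
    print main_sym.is_function
```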
A gdb.Symbol object has the following attributes:
None if no type is recorded.
This attribute is represented as a gdb.Type object.
See section 23.2.2.4 Types In Python. This attribute is not writable.
gdb.Symtab object. See section 23.2.2.20 Symbol table representation in Python. This attribute is not writable.
name or linkage_name, depending on whether the user
asked to display demangled or mangled names.
gdb module and described later in this chapter.
True if evaluating this symbol's value requires a frame
(see section 23.2.2.17 Accessing inferior stack frames from Python) and False otherwise. Typically,
local variables will require a frame, but other symbols will not.
True if the symbol is an argument of a function.
True if the symbol is a constant.
True if the symbol is a function or a method.
True if the symbol is a variable.
A gdb.Symbol object has the following methods:
is_valid (): Returns True if the gdb.Symbol object is valid,
False if not. A gdb.Symbol object can become invalid if
the symbol it refers to does not exist in GDB any longer.
All other gdb.Symbol methods will throw an exception if it is
invalid at the time the method is called.
value ([frame]): Compute the value of the symbol, as a gdb.Value. For
functions, this computes the address of the function, cast to the
appropriate type. If the symbol requires a frame in order to compute
its value, then frame must be given. If frame is not
given, or if frame is invalid, then this method will throw an
exception.
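For example, a local variable can be read through its symbol like this (a sketch for use inside a GDB session; "counter" is a hypothetical local variable):

```python
# Run inside GDB with the inferior stopped in a frame
# where "counter" is in scope.
frame = gdb.selected_frame()
sym, _ = gdb.lookup_symbol("counter")
if sym is not None and sym.needs_frame:
    # Local variables need a frame to be evaluated.
    val = sym.value(frame)
    print val
```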
The available domain categories in gdb.Symbol are represented
as constants in the gdb module:
gdb.SYMBOL_UNDEF_DOMAIN
gdb.SYMBOL_VAR_DOMAIN
gdb.SYMBOL_STRUCT_DOMAIN
gdb.SYMBOL_LABEL_DOMAIN
gdb.SYMBOL_VARIABLES_DOMAIN
SYMBOL_VAR_DOMAIN; it
contains everything minus functions and types.
gdb.SYMBOL_FUNCTION_DOMAIN
gdb.SYMBOL_TYPES_DOMAIN
The available address class categories in gdb.Symbol are represented
as constants in the gdb module:
gdb.SYMBOL_LOC_UNDEF
gdb.SYMBOL_LOC_CONST
gdb.SYMBOL_LOC_STATIC
gdb.SYMBOL_LOC_REGISTER
gdb.SYMBOL_LOC_ARG
gdb.SYMBOL_LOC_REF_ARG
LOC_ARG except that the value's address is stored at the
offset, not the value itself.
gdb.SYMBOL_LOC_REGPARM_ADDR
LOC_REGISTER except
the register holds the address of the argument instead of the argument
itself.
gdb.SYMBOL_LOC_LOCAL
gdb.SYMBOL_LOC_TYPEDEF
SYMBOL_STRUCT_DOMAIN all
have this class.
gdb.SYMBOL_LOC_BLOCK
gdb.SYMBOL_LOC_CONST_BYTES
gdb.SYMBOL_LOC_UNRESOLVED
gdb.SYMBOL_LOC_OPTIMIZED_OUT
gdb.SYMBOL_LOC_COMPUTED
Access to symbol table data maintained by GDB on the inferior
is exposed to Python via two objects: gdb.Symtab_and_line and
gdb.Symtab. Symbol table and line data for a frame is returned
from the find_sal method in the gdb.Frame object.
See section 23.2.2.17 Accessing inferior stack frames from Python.
For more information on GDB's symbol table management, see Examining the Symbol Table.
A gdb.Symtab_and_line object has the following attributes:
symtab: The symbol table object (gdb.Symtab) for this frame.
This attribute is not writable.
A gdb.Symtab_and_line object has the following methods:
is_valid (): Returns True if the gdb.Symtab_and_line object is valid,
False if not. A gdb.Symtab_and_line object can become
invalid if the symbol table and line object it refers to does not
exist in GDB any longer. All other
gdb.Symtab_and_line methods will throw an exception if it is
invalid at the time the method is called.
A gdb.Symtab object has the following attributes:
A gdb.Symtab object has the following methods:
is_valid (): Returns True if the gdb.Symtab object is valid,
False if not. A gdb.Symtab object can become invalid if
the symbol table it refers to does not exist in GDB any
longer. All other gdb.Symtab methods will throw an exception
if it is invalid at the time the method is called.
Python code can manipulate breakpoints via the gdb.Breakpoint
class.
spec is a string naming the location of the breakpoint, or an
expression that defines a watchpoint. The contents can be any location
recognized by the break command, or in the case of a watchpoint, by the watch
command. The optional type denotes the breakpoint to create
from the types defined later in this chapter. This argument can be
either: gdb.BP_BREAKPOINT or gdb.BP_WATCHPOINT. type
defaults to gdb.BP_BREAKPOINT. The optional internal argument
allows the breakpoint to become invisible to the user. The breakpoint
will neither be reported when created, nor will it be listed in the
output from info breakpoints (but will be listed with the
maint info breakpoints command). The optional wp_class
argument defines the class of watchpoint to create, if type is
gdb.BP_WATCHPOINT. If a watchpoint class is not provided, it is
assumed to be a gdb.WP_WRITE class.
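As a sketch (run inside a GDB session; "global_counter" is a hypothetical variable, and this assumes the constructor accepts these arguments as keywords):

```python
# Run inside GDB. Create an ordinary breakpoint and a write watchpoint.
bp = gdb.Breakpoint("main")                 # same effect as "break main"

wp = gdb.Breakpoint("global_counter",       # hypothetical variable name
                    type=gdb.BP_WATCHPOINT,
                    wp_class=gdb.WP_WRITE,
                    internal=True)          # hidden from "info breakpoints"
```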
The gdb.Breakpoint class can be sub-classed and, in
particular, you may choose to implement the stop method.
If this method is defined as a sub-class of gdb.Breakpoint,
it will be called when the inferior reaches any location of a
breakpoint which instantiates that sub-class. If the method returns
True, the inferior will be stopped at the location of the
breakpoint, otherwise the inferior will continue.
If there are multiple breakpoints at the same location with a
stop method, each one will be called regardless of the
return status of the previous. This ensures that all stop
methods have a chance to execute at that location. In this scenario
if one of the methods returns True but the others return
False, the inferior will still be stopped.
You should not alter the execution state of the inferior (i.e., step, next, etc.), alter the current frame context (i.e., change the current active frame), or alter, add or delete any breakpoint. As a general rule, you should not alter any data within or the inferior at this time.
Example stop implementation:
class MyBreakpoint (gdb.Breakpoint):
    def stop (self):
        inf_val = gdb.parse_and_eval("foo")
        if inf_val == 3:
            return True
        return False
The available watchpoint types represented by constants are defined in the
gdb module:
gdb.WP_READ
gdb.WP_WRITE
gdb.WP_ACCESS
is_valid (): Returns True if this Breakpoint object is valid,
False otherwise. A Breakpoint object can become invalid
if the user deletes the breakpoint. In this case, the object still
exists, but the underlying breakpoint does not. In cases of
watchpoint scope, the watchpoint remains valid even if execution of the
inferior leaves the scope of that watchpoint.
delete (): Permanently deletes the underlying GDB breakpoint represented by this
Breakpoint object. Any further access
to this object's attributes or methods will raise an error.
True if the breakpoint is enabled, and
False otherwise. This attribute is writable.
True if the breakpoint is silent, and
False otherwise. This attribute is writable.
Note that a breakpoint can also be silent if it has commands and the
first command is silent. This is not reported by the
silent attribute.
None. This attribute is writable.
None. This attribute
is writable.
The available types are represented by constants defined in the gdb
module:
gdb.BP_BREAKPOINT
gdb.BP_WATCHPOINT
gdb.BP_HARDWARE_WATCHPOINT
gdb.BP_READ_WATCHPOINT
gdb.BP_ACCESS_WATCHPOINT
None. This
attribute is not writable.
None. This attribute is not writable.
None. This attribute is writable.
None. This attribute is not writable.
A finish breakpoint is a temporary breakpoint set at the return address of
a frame, based on the finish command. gdb.FinishBreakpoint
extends gdb.Breakpoint. The underlying breakpoint will be disabled
and deleted when execution leaves the breakpoint's scope (i.e. when
Breakpoint.stop or FinishBreakpoint.out_of_scope is triggered).
Finish breakpoints are thread specific and must be created with the right
thread selected.
gdb.Frame
object frame. If frame is not provided, this defaults to the
newest frame. The optional internal argument allows the breakpoint to
become invisible to the user. See section 23.2.2.21 Manipulating breakpoints using Python, for further
details about this argument.
In some circumstances (e.g. longjmp, C++ exceptions, the
return command, ...), a function may not properly terminate, and
thus never hit the finish breakpoint. When GDB notices such a
situation, the out_of_scope callback will be triggered.
You may want to sub-class gdb.FinishBreakpoint and override this
method:
class MyFinishBreakpoint (gdb.FinishBreakpoint):
    def stop (self):
        print "normal finish"
        return True

    def out_of_scope (self):
        print "abnormal finish"
return_value: When GDB is stopped at a finish breakpoint and the frame used to
build the gdb.FinishBreakpoint object had debug symbols, this
attribute will contain a gdb.Value object corresponding to the return
value of the function. The value will be None if the function return
type is void or if the return value was not computable. This attribute
is not writable.
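A minimal sketch combining the two, run inside a GDB session: a finish breakpoint that prints the function's return value without stopping the inferior.

```python
# Run inside GDB, with the inferior stopped inside the function
# whose result you want to observe.
class ShowResult (gdb.FinishBreakpoint):
    def stop (self):
        # return_value is None for void functions or when
        # the return value is not computable.
        print "returned: %s" % self.return_value
        return False   # do not stop the inferior

ShowResult(internal=True)
```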
A lazy string is a string whose contents are not retrieved or encoded until it is needed.
A gdb.LazyString is represented in GDB as an
address that points to a region of memory, an encoding
that will be used to encode that region of memory, and a length
to delimit the region of memory that represents the string. The
difference between a gdb.LazyString and a string wrapped within
a gdb.Value is that a gdb.LazyString will be treated
differently by GDB when printing. A gdb.LazyString is
retrieved and encoded during printing, while a gdb.Value
wrapping a string is immediately retrieved and encoded on creation.
A gdb.LazyString object has the following functions:
value (): Convert the gdb.LazyString to a gdb.Value. This value
will point to the string in memory, but will lose all the delayed
retrieval, encoding and handling that applies to a
gdb.LazyString.
target method. See section 23.2.2.4 Types In Python. This attribute is not
writable.
GDB uses architecture-specific parameters and artifacts in a
number of its various computations. An architecture is represented
by an instance of the gdb.Architecture class.
A gdb.Architecture class has the following methods:
disassemble (start_pc [, end_pc [, count]]): Return a list of disassembled instructions, each a Python dict with the following string keys:
addr
asm
disassembly-flavor. See section 9.6 Source and Machine Code.
length
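A sketch of the disassemble method, run inside a GDB session (this assumes count is accepted as a keyword argument):

```python
# Run inside GDB with the inferior stopped.
# Disassemble a few instructions at the current pc.
frame = gdb.selected_frame()
arch = frame.architecture()
for insn in arch.disassemble(frame.pc(), count=3):
    # Each entry is a dict with "addr", "asm" and "length" keys.
    print "%#x: %s" % (insn["addr"], insn["asm"])
```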
When a new object file is read (for example, due to the file
command, or because the inferior has loaded a shared library),
GDB will look for Python support scripts in several ways:
`objfile-gdb.py' (see section 23.2.3.1 The `objfile-gdb.py' file)
and .debug_gdb_scripts section
(see section 23.2.3.2 The .debug_gdb_scripts section).
The auto-loading feature is useful for supplying application-specific debugging commands and scripts.
Auto-loading can be enabled or disabled, and the list of auto-loaded scripts can be printed.
set auto-load python-scripts [on|off]
show auto-load python-scripts
info auto-load python-scripts [regexp]
Also printed is the list of Python scripts that were mentioned in
the .debug_gdb_scripts section and were not found
(see section 23.2.3.2 The .debug_gdb_scripts section).
This is useful because their names are not printed when GDB
tries to load them and fails. There may be many of them, and printing
an error message for each one is problematic.
If regexp is supplied only Python scripts with matching names are printed.
Example:
(gdb) info auto-load python-scripts
Loaded Script
Yes py-section-script.py
full name: /tmp/py-section-script.py
No my-foo-pretty-printers.py
When reading an auto-loaded file, GDB sets the
current objfile. This is available via the gdb.current_objfile
function (see section 23.2.2.16 Objfiles In Python). This can be useful for
registering objfile-specific pretty-printers.
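For instance, an auto-loaded script might register its pretty-printers with the objfile that triggered the load. This is a sketch of a hypothetical `libfoo.so-gdb.py' script:

```python
# Sketch of a hypothetical auto-loaded libfoo.so-gdb.py script.
import gdb.printing

pp = gdb.printing.RegexpCollectionPrettyPrinter("libfoo")
# ... pp.add_printer calls for libfoo's types would go here ...

# gdb.current_objfile() is non-None only while an auto-loaded script
# is being evaluated, so the printers attach to the objfile that
# triggered the load rather than globally.
gdb.printing.register_pretty_printer(gdb.current_objfile(), pp)
```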
23.2.3.1 The `objfile-gdb.py' file
23.2.3.2 The .debug_gdb_scripts section
23.2.3.3 Which flavor to choose?
When a new object file is read, GDB looks for
a file named `objfile-gdb.py' (we call it script-name below),
where objfile is the object file's real name, formed by ensuring
that the file name is absolute, following all symlinks, and resolving
. and .. components. If this file exists and is
readable, GDB will evaluate it as a Python script.
If this file does not exist, then GDB will look for the script-name file in all of the directories as specified below.
Note that loading of this script file also requires a suitably configured
auto-load safe-path (see section 22.7.4 Security restriction for auto-loading).
For object files with a `.exe' suffix, GDB first tries to load the scripts normally, according to the `.exe' filename. But if no scripts are found, GDB also tries script filenames matching the object file without its `.exe' suffix. This `.exe' stripping is case insensitive and is attempted on any platform. This makes the script filenames compatible between Unix and MS-Windows hosts.
set auto-load scripts-directory [directories]
Each entry here needs to be covered also by the security setting
set auto-load safe-path (see set auto-load safe-path).
This variable defaults to `$debugdir:$datadir/auto-load'. The default
set auto-load scripts-directory value can also be overridden by the GDB
configuration option `--with-auto-load-dir'.
Any reference to `$debugdir' will get replaced by debug-file-directory value (see section 18.2 Debugging Information in Separate Files) and any reference to `$datadir' will get replaced by data-directory which is determined at startup (see section 18.6 GDB Data Files). `$debugdir' and `$datadir' must be placed as a directory component -- either alone or delimited by `/' or `\' directory separators, depending on the host platform.
The list of directories uses path separator (`:' on GNU and Unix
systems, `;' on MS-Windows and MS-DOS) to separate directories, similarly
to the PATH environment variable.
show auto-load scripts-directory
GDB does not track which files it has already auto-loaded this way. GDB will load the associated script every time the corresponding objfile is opened. So your `-gdb.py' file should be careful to avoid errors if it is evaluated more than once.
For systems using file formats like ELF and COFF, when GDB loads a new object file it will look for a special section named `.debug_gdb_scripts'. If this section exists, its contents are a list of names of scripts to load.
GDB will look for each specified script file first in the current directory and then along the source search path (see section Specifying Source Directories), except that `$cdir' is not searched, since the compilation directory is not relevant to scripts.
Entries can be placed in section .debug_gdb_scripts with,
for example, this GCC macro:
/* Note: The "MS" section flags are to remove duplicates. */
#define DEFINE_GDB_SCRIPT(script_name) \
asm("\
.pushsection \".debug_gdb_scripts\", \"MS\",@progbits,1\n\
.byte 1\n\
.asciz \"" script_name "\"\n\
.popsection \n\
");
Then one can reference the macro in a header or source file like this:
DEFINE_GDB_SCRIPT ("my-app-scripts.py")
The script name may include directories if desired.
Note that loading of this script file also requires a suitably configured
auto-load safe-path (see section 22.7.4 Security restriction for auto-loading).
If the macro is put in a header, any application or library using this header will get a reference to the specified script.
Given the multiple ways of auto-loading Python scripts, it might not always be clear which one to choose. This section provides some guidance.
Benefits of the `-gdb.py' way:
Scripts specified in the .debug_gdb_scripts section are searched for
in the source search path.
For publicly installed libraries, e.g., `libstdc++', there typically
isn't a source directory in which to find the script.
Benefits of the .debug_gdb_scripts way:
Scripts for libraries done the `-gdb.py' way require an objfile to trigger their loading. When an application is statically linked the only objfile available is the executable, and it is cumbersome to attach all the scripts from all the input libraries to the executable's `-gdb.py' script.
Some classes can be entirely inlined, and thus there may not be an associated shared library to attach a `-gdb.py' script to.
In some circumstances, apps can be built out of large collections of internal
libraries, and the build infrastructure necessary to install the
`-gdb.py' scripts in a place where GDB can find them is
cumbersome. It may be easier to specify the scripts in the
.debug_gdb_scripts section as relative paths, and add a path to the
top of the source tree to the source search path.
GDB comes with several modules to assist writing Python code.
23.2.4.1 gdb.printing Building and registering pretty-printers.
23.2.4.2 gdb.types Utilities for working with types.
23.2.4.3 gdb.prompt Utilities for prompt value substitution.
This module provides a collection of utilities for working with pretty-printers.
PrettyPrinter (name, subprinters=None)
SubPrettyPrinter (name)
RegexpCollectionPrettyPrinter (name)
FlagEnumerationPrinter (name)
enum values. Unlike
GDB's built-in enum printing, this printer attempts to
work properly when there is some overlap between the enumeration
constants. name is the name of the printer and also the name of
the enum type to look up.
register_pretty_printer (obj, printer, replace=False)
True then any existing copy of the printer
is replaced. Otherwise a RuntimeError exception is raised
if a printer with the same name already exists.
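A sketch tying these classes together, for use inside a GDB session: a printer for a hypothetical `struct point' type, registered globally through RegexpCollectionPrettyPrinter.

```python
# Sketch: a pretty-printer for a hypothetical "struct point" type.
import gdb.printing

class PointPrinter (object):
    def __init__ (self, val):
        self.val = val

    def to_string (self):
        # val["x"] / val["y"] are hypothetical struct members.
        return "(%s, %s)" % (self.val["x"], self.val["y"])

pp = gdb.printing.RegexpCollectionPrettyPrinter("my_library")
pp.add_printer("point", "^point$", PointPrinter)

# Register globally (locus None); replace=True would overwrite an
# existing printer with the same name instead of raising RuntimeError.
gdb.printing.register_pretty_printer(None, pp)
```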
This module provides a collection of utilities for working with
gdb.Type objects.
get_basic_type (type)
C++ example:
typedef const int const_int;
const_int foo (3);
const_int& foo_ref (foo);
int main () { return 0; }
Then in gdb:
(gdb) start
(gdb) python import gdb.types
(gdb) python foo_ref = gdb.parse_and_eval("foo_ref")
(gdb) python print gdb.types.get_basic_type(foo_ref.type)
int
has_field (type, field)
True if type, assumed to be a type with fields
(e.g., a structure or union), has field field.
make_enum_dict (enum_type)
dictionary type produced from enum_type.
deep_items (type)
gdb.Type.iteritems method, except that the iterator returned
by deep_items will recursively traverse anonymous struct or
union fields. For example:
struct A
{
int a;
union {
int b0;
int b1;
};
};
Then in GDB:
(gdb) python import gdb.types
(gdb) python struct_a = gdb.lookup_type("struct A")
(gdb) python print struct_a.keys ()
{['a', '']}
(gdb) python print [k for k,v in gdb.types.deep_items(struct_a)]
{['a', 'b0', 'b1']}
get_type_recognizers ()
apply_type_recognizers (recognizers, type_obj)
None. This is called by GDB
during the type-printing process (see section 23.2.2.8 Type Printing API).
register_type_printer (locus, printer)
gdb.Objfile, in
which case the printer is registered with that objfile; a
gdb.Progspace, in which case the printer is registered with
that progspace; or None, in which case the printer is
registered globally.
TypePrinter
This module provides a method for prompt value-substitution.
substitute_prompt (string)
The escape sequences you can pass to this function are:
\\
\e
\f
\n
\p
\r
\t
\v
\w
\[
\]
For example:
substitute_prompt ("frame: \f,
print arguments: \p{print frame-arguments}")
will return the string:
"frame: main, print arguments: scalars"
It is often useful to define alternate spellings of existing commands. For example, if a new command defined in Python has a long name to type, it is handy to have an abbreviated version of it that involves less typing.
GDB itself uses aliases. For example `s' is an alias of the `step' command even though it is otherwise an ambiguous abbreviation of other commands like `set' and `show'.
Aliases are also used to provide shortened or more common versions of multi-word commands. For example, GDB provides the `tty' alias of the `set inferior-tty' command.
You can define a new alias with the `alias' command.
alias [-a] [--] ALIAS = COMMAND
ALIAS specifies the name of the new alias. Each word of ALIAS must consist of letters, numbers, dashes and underscores.
COMMAND specifies the name of an existing command that is being aliased.
The `-a' option specifies that the new alias is an abbreviation of the command. Abbreviations are not shown in command lists displayed by the `help' command.
The `--' option specifies the end of options, and is useful when ALIAS begins with a dash.
Here is a simple example showing how to make an abbreviation of a command so that there is less to type. Suppose you were tired of typing `disas', the current shortest unambiguous abbreviation of the `disassemble' command and you wanted an even shorter version named `di'. The following will accomplish this.
(gdb) alias -a di = disas
Note that aliases are different from user-defined commands. With a user-defined command, you also need to write documentation for it with the `document' command. An alias automatically picks up the documentation of the existing command.
Here is an example where we make `elms' an abbreviation of `elements' in the `set print elements' command. This is to show that you can make an abbreviation of any part of a command.
(gdb) alias -a set print elms = set print elements
(gdb) alias -a show print elms = show print elements
(gdb) set p elms 20
(gdb) show p elms
Limit on string chars or array elements to print is 20.
Note that if you are defining an alias of a `set' command, and you want to have an alias for the corresponding `show' command, then you need to define the latter separately.
Unambiguously abbreviated commands are allowed in COMMAND and ALIAS, just as they are normally.
(gdb) alias -a set pr elms = set p ele
Finally, here is an example showing the creation of a one word alias for a more complex command. This creates alias `spe' of the command `set print elements'.
(gdb) alias spe = set print elements
(gdb) spe 20
GDB supports multiple command interpreters, and some command infrastructure to allow users or user interface writers to switch between interpreters or run commands in other interpreters.
GDB currently supports two command interpreters, the console interpreter (sometimes called the command-line interpreter or CLI) and the machine interface interpreter (or GDB/MI). This manual describes both of these interfaces in great detail.
By default, GDB will start with the console interpreter. However, the user may choose to start GDB with another interpreter by specifying the `-i' or `--interpreter' startup options. Defined interpreters include:
console
mi
The newest GDB/MI interface (currently mi2). Used primarily
by programs wishing to use GDB as a backend for a debugger GUI
or an IDE. For more information, see The GDB/MI Interface.
mi2
mi1
The interpreter being used by GDB may not be dynamically switched at runtime. Although possible, this could lead to a very precarious situation. Consider an IDE using GDB/MI. If a user enters the command "interpreter-set console" in a console view, GDB would switch to using the console interpreter, rendering the IDE inoperable!
Although you may only choose a single interpreter at startup, you may execute
commands in any interpreter from the current interpreter using the appropriate
command. If you are running the console interpreter, simply use the
interpreter-exec command:
interpreter-exec mi "-data-list-register-names" |
GDB/MI has a similar command, although it is only available in versions of GDB which support GDB/MI version 2 (or greater).
25.1 TUI Overview
25.2 TUI Key Bindings
25.3 TUI Single Key Mode
25.4 TUI-specific Commands
25.5 TUI Configuration Variables
The Text User Interface (TUI) is a terminal
interface which uses the curses library to show the source
file, the assembly output, the program registers and GDB
commands in separate text windows. The TUI mode is supported only
on platforms where a suitable version of the curses library
is available.
The TUI mode is enabled by default when you invoke GDB as `gdb -tui'. You can also switch in and out of TUI mode while GDB runs by using various TUI commands and key bindings, such as C-x C-a. See section TUI Key Bindings.
In TUI mode, GDB can display several text windows:
The source and assembly windows show the current program position by highlighting the current line and marking it with a `>' marker. Breakpoints are indicated with two markers. The first marker indicates the breakpoint type:
B
b
H
h
The second marker indicates whether the breakpoint is enabled or not:
+
-
The source, assembly and register windows are updated when the current thread changes, when the frame changes, or when the program counter changes.
These windows are not all visible at the same time. The command window is always visible. The others can be arranged in several layouts:
A status line above the command window shows the following information:
No process.
?? is displayed.
?? is displayed.
The TUI installs several key bindings in the readline keymaps (see Command Line Editing). The following key bindings are installed for both TUI mode and the standard mode.
Think of this key binding as the Emacs C-x 1 binding.
Think of it as the Emacs C-x 2 binding.
Think of it as the Emacs C-x o binding.
The following key bindings only work in the TUI mode:
Because the arrow keys scroll the active window in the TUI mode, they are not available for their normal use by readline unless the command window has the focus. When another window is active, you must use other readline key bindings such as C-p, C-n, C-b and C-f to control the command window.
The TUI also provides a SingleKey mode, which binds several frequently used commands to single keys. Type C-x s to switch into this mode, where the following key bindings are used:
Other keys temporarily switch to the command prompt. The key that was pressed is inserted in the editing buffer so that it is possible to type most commands without interaction with the TUI SingleKey mode. Once the command is entered the TUI SingleKey mode is restored. The only way to permanently leave this mode is by typing q or C-x s.
The TUI has specific commands to control the text windows. These commands are always available, even when GDB is not in the TUI mode. When GDB is in the standard mode, most of these commands will automatically switch to the TUI mode.
Note that if GDB's stdout is not connected to a
terminal, or GDB has been started with the machine interface
interpreter (see section The GDB/MI Interface), most of
these commands will fail with an error, because it would not be
possible or desirable to enable curses window management.
info win
layout next
layout prev
layout src
layout asm
layout split
layout regs
focus next
focus prev
focus src
focus asm
focus regs
focus cmd
refresh
tui reg float
tui reg general
tui reg next
general, float, system, vector,
all, save, restore.
tui reg system
update
winheight name +count
winheight name -count
tabset nchars
Several configuration variables control the appearance of TUI windows.
set tui border-kind kind
space
ascii
acs
set tui border-mode mode
set tui active-border-mode mode
normal
standout
reverse
half
half-standout
bold
bold-standout
A special interface allows you to use GNU Emacs to view (and edit) the source files for the program you are debugging with GDB.
To use this interface, use the command M-x gdb in Emacs. Give the executable file you want to debug as an argument. This command starts GDB as a subprocess of Emacs, with input and output through a newly created Emacs buffer.
Running GDB under Emacs can be just like running GDB normally except for two things:
This applies both to commands and their output, and to the input and output done by the program you are debugging.
This is useful because it means that you can copy the text of previous commands and input them again; you can even use parts of the output in this way.
All the facilities of Emacs' Shell mode are available for interacting with your program. In particular, you can send signals the usual way--for example, C-c C-c for an interrupt, C-c C-z for a stop.
Each time GDB displays a stack frame, Emacs automatically finds the source file for that frame and puts an arrow (`=>') at the left margin of the current line. Emacs uses a separate buffer for source display, and splits the screen to show both your GDB session and the source.
Explicit GDB list or search commands still produce output as
usual, but you probably have no reason to use them from Emacs.
We call this text command mode. Emacs 22.1, and later, also uses a graphical mode, enabled by default, which provides further buffers that can control the execution and describe the state of your program. See section `GDB Graphical Interface' in The GNU Emacs Manual.
If you specify an absolute file name when prompted for the M-x
gdb argument, then Emacs sets your current working directory to where
your program resides. If you only specify the file name, then Emacs
sets your current working directory to the directory associated
with the previous buffer. In this case, GDB may find your
program by searching your environment's PATH variable, but on
some operating systems it might not find the source. So, although the
GDB input and output session proceeds normally, the auxiliary
buffer does not display the current source and line of execution.
The initial working directory of GDB is printed on the top line of the GUD buffer and this serves as a default for the commands that specify files for GDB to operate on. See section Commands to Specify Files.
By default, M-x gdb calls the program called `gdb'. If you
need to call GDB by a different name (for example, if you
keep several configurations around, with different names) you can
customize the Emacs variable gud-gdb-command-name to run the
one you want.
In the GUD buffer, you can use these special Emacs commands in addition to the standard Shell mode commands:
step command; also
update the display window to show the current file and location.
next command. Then update the display window
to show the current file and location.
stepi command; update
display window accordingly.
finish command.
continue
command.
up command.
down command.
In any source file, the Emacs command C-x SPC (gud-break)
tells GDB to set a breakpoint on the source line point is on.
In text command mode, if you type M-x speedbar, Emacs displays a separate frame which shows a backtrace when the GUD buffer is current. Move point to any frame in the stack and type RET to make it become the current frame and display the associated source in the source buffer. Alternatively, click Mouse-2 to make the selected frame become the current one. In graphical mode, the speedbar displays watch expressions.
If you accidentally delete the source-display buffer, an easy way to get
it back is to type the command f in the GDB buffer, to
request a frame display; when you run GDB under Emacs, this recreates
the source buffer if necessary to show you the context of the current
frame.
The source files displayed in Emacs are in ordinary Emacs buffers which are visiting the source files in the usual way. You can edit the files with these buffers if you wish; but keep in mind that GDB communicates with Emacs in terms of line numbers. If you add or delete lines from the text, the line numbers that GDB knows cease to correspond properly with the code.
A more detailed description of Emacs' interaction with GDB is given in the Emacs manual (see section `Debuggers' in The GNU Emacs Manual).
GDB/MI is a line based machine oriented text interface to GDB and is activated using the `--interpreter' command line option (see section 2.1.2 Choosing Modes). It is specifically intended to support the development of systems which use the debugger as just one small component of a larger system.
This chapter is a specification of the GDB/MI interface. It is written in the form of a reference manual.
Note that GDB/MI is still under construction, so some of the features described below are incomplete and subject to change (see section GDB/MI Development and Front Ends).
This chapter uses the following notation:
| separates two alternatives.
[ something ] indicates that something is optional:
it may or may not be given.
( group )* means that group inside the parentheses
may repeat zero or more times.
( group )+ means that group inside the parentheses
may repeat one or more times.
"string" means a literal string.
Interaction of a GDB/MI frontend with GDB involves three parts: commands sent to GDB, responses to those commands, and notifications. Each command results in exactly one response, indicating either successful completion of the command, or an error. For commands that do not resume the target, the response contains the requested information. For commands that resume the target, the response only indicates whether the target was successfully resumed. Notifications are the mechanism for reporting changes in the state of the target, or in GDB's state, that cannot conveniently be associated with a command and reported as part of that command's response.
The important examples of notifications are:
There's no guarantee that whenever an MI command reports an error, GDB or the target is in any specific state; in particular, the state is not reverted to the state before the MI command was processed. Therefore, whenever an MI command results in an error, we recommend that the frontend refresh all the information shown in the user interface.
27.1.1 Context management
27.1.2 Asynchronous command execution and non-stop mode
27.1.3 Thread groups
In most cases when GDB accesses the target, this access is done in the context of a specific thread and frame (see section 8.1 Stack Frames). Often, even when accessing global data, the target requires that a thread be specified. The CLI interface maintains the selected thread and frame, and supplies them to the target on each command. This is convenient, because a command-line user would not want to specify that information explicitly on each command, and because the user interacts with GDB via a single terminal, so no confusion is possible as to which thread and frame are the current ones.
In the case of MI, the concept of selected thread and frame is less useful. First, a frontend can easily remember this information itself. Second, a graphical frontend can have more than one window, each one used for debugging a different thread, and the frontend might want to access additional threads for internal purposes. This increases the risk that, by relying on the implicitly selected thread, the frontend may be operating on the wrong one. Therefore, each MI command should explicitly specify which thread and frame to operate on. To make this possible, each MI command accepts the `--thread' and `--frame' options, whose values are the identifiers of the thread and frame to operate on.
Usually, each top-level window in a frontend allows the user to select a thread and a frame, and remembers the user selection for further operations. However, in some cases GDB may suggest that the current thread be changed. For example, when stopping on a breakpoint it is reasonable to switch to the thread where the breakpoint is hit. For another example, if the user issues the CLI `thread' command via the frontend, it is desirable to change the frontend's selected thread to the one specified by the user. GDB communicates the suggestion to change the current thread using the `=thread-selected' notification. No such notification is available for the selected frame at the moment.
Note that historically, MI shares the selected thread with CLI, so
frontends used the -thread-select command to execute commands in the
right context. However, getting this to work right is cumbersome. The
simplest way is for the frontend to emit a -thread-select command
before every command. This doubles the number of commands that need
to be sent. The alternative approach is to suppress -thread-select
if the selected thread in GDB is supposed to be identical to the
thread the frontend wants to operate on. However, getting this
optimization right can be tricky. In particular, if the frontend
sends several commands to GDB, and one of the commands changes the
selected thread, then the behaviour of subsequent commands will
change. So, a frontend should either wait for the response from such
problematic commands, or explicitly add -thread-select for
all subsequent commands. No frontend is known to do this exactly
right, so it is suggested to just always pass the `--thread' and
`--frame' options.
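Following that recommendation, a frontend can centralize the context options in a small formatting helper. The sketch below is purely illustrative (the helper name and structure are ours, not part of MI); it only shows the `token -operation --thread N --frame M parameters` shape that explicit-context commands take:

```python
def mi_command(operation, *parameters, token=None, thread=None, frame=None):
    """Format one GDB/MI command line with explicit context.

    operation is the MI operation name without the leading "-".
    thread/frame become the --thread/--frame options recommended in
    the text, so the command never relies on GDB's implicitly
    selected thread.  (Illustrative helper, not part of the MI
    specification.)
    """
    parts = []
    if token is not None:
        parts.append(str(token))          # optional numeric token
    parts.append("-" + operation)
    if thread is not None:
        parts.append("--thread %d" % thread)
    if frame is not None:
        parts.append("--frame %d" % frame)
    parts.extend(parameters)
    return " ".join(parts)
```

A frontend that always routes command construction through such a helper cannot forget the context options on an individual command.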
On some targets, GDB is capable of processing MI commands
even while the target is running. This is called asynchronous
command execution (see section 5.5.3 Background Execution). The frontend may
specify a preference for asynchronous execution using the
-gdb-set target-async 1 command, which should be emitted before
either running the executable or attaching to the target. After the
frontend has started the executable or attached to the target, it can
find out whether asynchronous execution is enabled using the
-list-target-features command.
Even if GDB can accept a command while the target is running, many commands that access the target do not work when the target is running. Therefore, asynchronous command execution is most useful when combined with non-stop mode (see section 5.5.2 Non-Stop Mode). Then, it is possible to examine the state of one thread while other threads are running.
When a given thread is running, MI commands that try to access the
target in the context of that thread may not work, or may work only on
some targets. In particular, commands that try to operate on the thread's
stack will not work, on any target. Commands that read memory, or
modify breakpoints, may or may not work, depending on the target. Note
that even commands that operate on global state, such as print,
set, and breakpoint commands, still access the target in the
context of a specific thread, so the frontend should try to find a
stopped thread and perform the operation on that thread (using the
`--thread' option).
Which commands will work in the context of a running thread is
highly target dependent. However, the two commands
-exec-interrupt, to stop a thread, and -thread-info,
to find the state of a thread, will always work.
The key observation is that regardless of the structure of the target, MI can have a global list of threads, because most commands that accept the `--thread' option do not need to know what process that thread belongs to. Therefore, it is not necessary to introduce either an additional `--process' option or a notion of the current process in the MI interface. The only strictly new feature that is required is the ability to find how the threads are grouped into processes.
To allow the user to discover such grouping, and to support an arbitrary
hierarchy of machines/cores/processes, MI introduces the concept of a
thread group. A thread group is a collection of threads and other
thread groups. A thread group always has a string identifier and a type,
and may have additional attributes specific to the type. A new
command, -list-thread-groups, returns the list of top-level
thread groups, which correspond to the processes that GDB is
debugging at the moment. By passing the identifier of a thread group
to the -list-thread-groups command, it is possible to obtain
the members of that specific thread group.
To allow the user to easily discover the processes and other objects he
wishes to debug, the concept of an available thread group is
introduced. An available thread group is a thread group that GDB
is not debugging, but that can be attached to, using the
-target-attach command. The list of available top-level thread
groups can be obtained using `-list-thread-groups --available'.
In general, the content of a thread group may only be retrieved
after attaching to that thread group.
Thread groups are related to inferiors (see section 4.9 Debugging Multiple Inferiors and Programs). Each inferior corresponds to a thread group of a special type `process', and some additional operations are permitted on such thread groups.
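A frontend-side model of this hierarchy can be sketched as follows. This is an illustration of the concept only; the class and function names are ours, and MI itself only transports thread-group ids and types as strings:

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class ThreadGroup:
    """One node in the machines/cores/processes hierarchy described
    in the text: a string id, a type (e.g. "process"), and members
    that are threads (ints here) or nested groups."""
    id: str
    type: str
    members: List[Union["ThreadGroup", int]] = field(default_factory=list)

def all_threads(group):
    """Flatten a group hierarchy into the single global thread list
    that MI commands taking --thread operate on."""
    out = []
    for m in group.members:
        if isinstance(m, ThreadGroup):
            out.extend(all_threads(m))
        else:
            out.append(m)
    return out
```

Because the thread list is global, the frontend only needs the grouping when presenting threads to the user, not when issuing `--thread` commands.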
27.2.1 GDB/MI Input Syntax
27.2.2 GDB/MI Output Syntax
command ==>
cli-command | mi-command
cli-command ==>
[ token ] cli-command nl, where
cli-command is any existing CLI command.
mi-command ==>
[ token ] "-" operation ( " " option )*
[ " --" ] ( " " parameter )* nl
token ==>
any sequence of digits.
option ==>
"-" parameter [ " " parameter ]
parameter ==>
non-blank-sequence | c-string
operation ==>
any of the operations described in this chapter.
non-blank-sequence ==>
anything, provided it doesn't contain special characters such as "-", nl, """ and of course " ".
c-string ==>
""" seven-bit-iso-c-string-content """
nl ==>
CR | CR-LF
Notes:
token, when present, is passed back when the command
finishes.
Pragmatics:
The output from GDB/MI consists of zero or more out-of-band records followed, optionally, by a single result record. This result record is for the most recent command. The sequence of output records is terminated by `(gdb)'.
If an input command was prefixed with a token then the
corresponding output for that command will also be prefixed by that same
token.
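This token correlation can be sketched in code. Assuming only that a token is a run of digits and that a result record starts with `^` (both per the grammar in this chapter), a frontend might split responses like this; a real frontend needs the full output grammar, and this regex only separates the token, result class, and raw results:

```python
import re

# [token] "^" result-class ("," result)*  -- token and results optional.
RESULT_RECORD = re.compile(r'^(?P<token>\d*)\^(?P<rclass>\w+)(?:,(?P<results>.*))?$')

def match_response(line):
    """Return (token, result_class, raw_results) for a result record,
    or None if the line is not a result record.  The token, when
    present, identifies which earlier request this response answers."""
    m = RESULT_RECORD.match(line)
    if not m:
        return None
    token = int(m.group('token')) if m.group('token') else None
    return token, m.group('rclass'), m.group('results')
```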
output ==>
( out-of-band-record )* [ result-record ] "(gdb)" nl
result-record ==>
[ token ] "^" result-class ( "," result )* nl
out-of-band-record ==>
async-record | stream-record
async-record ==>
exec-async-output | status-async-output | notify-async-output
exec-async-output ==>
[ token ] "*" async-output
status-async-output ==>
[ token ] "+" async-output
notify-async-output ==>
[ token ] "=" async-output
async-output ==>
async-class ( "," result )* nl
result-class ==>
"done" | "running" | "connected" | "error" | "exit"
async-class ==>
"stopped" | others (where others will be added
depending on the needs--this is still in development).
result ==>
variable "=" value
variable ==>
string
value ==>
const | tuple | list
const ==>
c-string
tuple ==>
"{}" | "{" result ( "," result )* "}"
list ==>
"[]" | "[" value ( "," value )* "]" | "["
result ( "," result )* "]"
stream-record ==>
console-stream-output | target-stream-output | log-stream-output
console-stream-output ==>
"~" c-string
target-stream-output ==>
"@" c-string
log-stream-output ==>
"&" c-string
nl ==>
CR | CR-LF
token ==>
any sequence of digits.
Notes:
token is from the corresponding request. Note that
for all async output, while the token is allowed by the grammar and
may be output by future versions of GDB for select async
output messages, it is generally omitted. Frontends should treat
all async output as reporting general changes in the state of the
target and there should be no need to associate async output to any
prior command.
See section GDB/MI Stream Records, for more details about the various output records.
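As a rough illustration of the grammar above, a frontend might dispatch on each record's prefix character as sketched below. This is only a sketch: token handling is simplistic and c-string unquoting is omitted.

```python
# Record kinds by prefix character, per the output grammar:
# '^' result, '*' exec-async, '+' status-async, '=' notify-async,
# '~' console stream, '@' target stream, '&' log stream.
RECORD_KINDS = {
    '^': 'result',
    '*': 'exec-async',
    '+': 'status-async',
    '=': 'notify-async',
    '~': 'console-stream',
    '@': 'target-stream',
    '&': 'log-stream',
}

def classify_record(line):
    """Return (kind, payload) for one MI output line, or
    ('prompt', None) for the terminating "(gdb)" line."""
    if line.strip() == '(gdb)':
        return ('prompt', None)
    body = line.lstrip('0123456789')   # drop an optional numeric token
    kind = RECORD_KINDS.get(body[:1])
    if kind is None:
        return ('unknown', line)
    return (kind, body[1:])
```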
For the developer's convenience, CLI commands can be entered directly,
but there may be some unexpected behaviour. For example, commands
that query the user will behave as if the user replied yes, breakpoint
command lists are not executed, and some CLI commands, such as
if, when and define, prompt for further input with
`>', which is not valid MI output.
This feature may be removed at some stage in the future and it is
recommended that front ends use the -interpreter-exec command
(see -interpreter-exec).
The application which takes the MI output and presents the state of the program being debugged to the user is called a front end.
Although GDB/MI is still incomplete, it is currently being used by a variety of front ends to GDB. This makes it difficult to introduce new functionality without breaking existing usage. This section tries to minimize the problems by describing how the protocol might change.
Some changes in MI need not break a carefully designed front end, and for these the MI version will remain unchanged. The following is a list of changes that may occur within one level, so front ends should parse MI output in a way that can handle them:
New fields may be added to the output of any MI command.
The range of values for fields with specified values, e.g.,
in_scope (see -var-update) may be extended.
If the changes are likely to break front ends, the MI version level will be increased by one. This will allow the front end to parse the output according to the MI version. Apart from mi0, new versions of GDB will not support old versions of MI and it will be the responsibility of the front end to work with the new one.
The best way to avoid unexpected changes in MI that might break your front end is to make your project known to GDB developers and follow development on gdb@sourceware.org and gdb-patches@sourceware.org.
In addition to a number of out-of-band notifications, the response to a GDB/MI command includes one of the following result indications:
"^done" [ "," results ]
results are the return
values.
"^running"
"^connected"
"^error" "," c-string
c-string contains the corresponding
error message.
"^exit"
GDB internally maintains a number of output streams: the console, the target, and the log. The output intended for each of these streams is funneled through the GDB/MI interface using stream records.
Each stream record begins with a unique prefix character which
identifies its stream (see section GDB/MI Output Syntax). In addition to the prefix, each stream record contains a
string-output. This is either raw text (with an implicit new
line) or a quoted C string (which does not contain an implicit newline).
"~" string-output
"@" string-output
"&" string-output
Async records are used to notify the GDB/MI client of additional changes that have occurred. Those changes can either be a consequence of GDB/MI commands (e.g., a breakpoint modified) or a result of target activity (e.g., target stopped).
The following is the list of possible async records:
*running,thread-id="thread"
*stopped,reason="reason",thread-id="id",stopped-threads="stopped",core="core"
breakpoint-hit
watchpoint-trigger
read-watchpoint-trigger
access-watchpoint-trigger
function-finished
location-reached
watchpoint-scope
end-stepping-range
exited-signalled
exited
exited-normally
signal-received
solib-event
stop-on-solib-events (see section 18.1 Commands to Specify Files) is
set or when a catch load or catch unload catchpoint is
in use (see section 5.1.3 Setting Catchpoints).
fork
catch fork
(see section 5.1.3 Setting Catchpoints) has been used.
vfork
catch vfork
(see section 5.1.3 Setting Catchpoints) has been used.
syscall-entry
catch
syscall (see section 5.1.3 Setting Catchpoints) has been used.
syscall-return
catch syscall (see section 5.1.3 Setting Catchpoints) has been used.
exec
exec. This is reported when catch exec
(see section 5.1.3 Setting Catchpoints) has been used.
The id field identifies the thread that directly caused the stop,
for example by hitting a breakpoint. Depending on whether all-stop
mode is in effect (see section 5.5.1 All-Stop Mode), GDB may either
stop all threads, or only the thread that directly triggered the stop.
If all threads are stopped, the stopped field will have the
value of "all". Otherwise, the value of the stopped
field will be a list of thread identifiers. Presently, this list will
always include a single thread, but the frontend should be prepared to see
several threads in the list. The core field reports the
processor core on which the stop event has happened. This field may be absent
if such information is not available.
=thread-group-added,id="id"
=thread-group-removed,id="id"
=thread-group-started,id="id",pid="pid"
=thread-group-exited,id="id"[,exit-code="code"]
=thread-created,id="id",group-id="gid"
=thread-exited,id="id",group-id="gid"
=thread-selected,id="id"
Informs that the selected thread was changed as a result of the last
command. This notification is not emitted as a result of the
-thread-select command but is emitted whenever an MI command
that is not documented to change the selected thread actually changes
it. In particular, invoking, directly or indirectly (via a user-defined
command), the CLI thread command, will generate this notification.
We suggest that in response to this notification, front ends highlight the selected thread and cause subsequent commands to apply to that thread.
=library-loaded,...
Reports that a new library file was loaded by the program.
=library-unloaded,...
Reports that a library was unloaded by the program. Its fields have the
same meaning as for the =library-loaded notification.
The thread-group field, if present, specifies the id of the
thread group in whose context the library was unloaded. If the field is
absent, it means the library was unloaded in the context of all present
thread groups.
=traceframe-changed,num=tfnum,tracepoint=tpnum
=traceframe-changed,end
=tsv-created,name=name,initial=initial
=tsv-deleted,name=name
=tsv-deleted
=tsv-modified,name=name,initial=initial[,current=current]
=breakpoint-created,bkpt={...}
=breakpoint-modified,bkpt={...}
=breakpoint-deleted,id=number
The bkpt argument is of the same form as returned by the various breakpoint commands; See section 27.8 GDB/MI Breakpoint Commands. The number is the ordinal number of the breakpoint.
Note that if a breakpoint is emitted in the result record of a command, then it will not also be emitted in an async record.
=record-started,thread-group="id"
=record-stopped,thread-group="id"
=cmd-param-changed,param=param,value=value
Reports that a parameter of the command set param is
changed to value. In the multi-word set command,
the param is the whole parameter list to the set command.
For example, in the command set check type on, param
is check type and value is on.
=memory-changed,thread-group=id,addr=addr,len=len[,type="code"]
Reports that bytes from addr to addr + len were
written in an inferior. The type="code" part is reported only if
the memory written to holds executable code.
When GDB reports information about a breakpoint, a tracepoint, a watchpoint, or a catchpoint, it uses a tuple with the following fields:
number
type
catch-type
disp
enabled
enable.
addr
func
filename
fullname
line
at
pending
evaluated-by
thread
task
cond
ignore
enable
traceframe-usage
static-tracepoint-marker-string-id
mask
pass
original-location
times
installed
what
For example, here is what the output of -break-insert
(see section 27.8 GDB/MI Breakpoint Commands) might be:
-> -break-insert main
<- ^done,bkpt={number="1",type="breakpoint",disp="keep",
enabled="y",addr="0x08048564",func="main",file="myprog.c",
fullname="/home/nickrob/myprog.c",line="68",thread-groups=["i1"],
times="0"}
<- (gdb)
The response from many MI commands includes information about a stack frame. This information is a tuple that may have the following fields:
level
func
addr
file
line
from
Whenever GDB has to report information about a thread, it uses a tuple with the following fields:
id
target-id
details
state
core
Whenever a *stopped record is emitted because the program
stopped after hitting an exception catchpoint (see section 5.1.3 Setting Catchpoints),
GDB provides the name of the exception that was raised via
the exception-name field.
This subsection presents several simple examples of interaction using the GDB/MI interface. In these examples, `->' means that the following line is passed to GDB/MI as input, while `<-' means the output received from GDB/MI.
Note the line breaks shown in the examples are here only for readability, they don't appear in the real output.
Setting a breakpoint generates synchronous output which contains detailed information of the breakpoint.
-> -break-insert main
<- ^done,bkpt={number="1",type="breakpoint",disp="keep",
enabled="y",addr="0x08048564",func="main",file="myprog.c",
fullname="/home/nickrob/myprog.c",line="68",thread-groups=["i1"],
times="0"}
<- (gdb)
Program execution generates asynchronous records and MI gives the reason that execution stopped.
-> -exec-run
<- ^running
<- (gdb)
<- *stopped,reason="breakpoint-hit",disp="keep",bkptno="1",thread-id="0",
frame={addr="0x08048564",func="main",
args=[{name="argc",value="1"},{name="argv",value="0xbfc4d4d4"}],
file="myprog.c",fullname="/home/nickrob/myprog.c",line="68"}
<- (gdb)
-> -exec-continue
<- ^running
<- (gdb)
<- *stopped,reason="exited-normally"
<- (gdb)
Quitting GDB just prints the result class `^exit'.
-> -gdb-exit
<- ^exit
Please note that `^exit' is printed immediately, but it might take some time for GDB to actually exit. During that time, GDB performs necessary cleanups, including killing programs being debugged or disconnecting from debug hardware, so the frontend should wait till GDB exits and should only forcibly kill GDB if it fails to exit in a reasonable time.
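The recommended shutdown sequence can be sketched as follows. The 5-second timeout is our choice for illustration; the manual only says "reasonable time":

```python
import subprocess
import sys  # used only to spawn a stand-in child process in the example

def shutdown_gdb(proc, timeout=5.0):
    """After sending -gdb-exit and reading '^exit', wait for the GDB
    process to finish its cleanups; only kill it if it has not exited
    within `timeout` seconds."""
    try:
        return proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()
        return proc.wait()

# Example (a real frontend would pass its GDB subprocess here):
# proc = subprocess.Popen([sys.executable, "-c", "pass"])
# shutdown_gdb(proc)
```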
Here's what happens if you pass a non-existent command:
-> -rubbish
<- ^error,msg="Undefined MI command: rubbish"
<- (gdb)
The remaining sections describe blocks of commands. Each block of commands is laid out in a fashion similar to this section.
The motivation for this collection of commands.
A brief introduction to this collection of commands as a whole.
For each command in the block, the following is described:
-command args...
The corresponding CLI command(s), if any.
Example(s) formatted for readability. Some of the described commands have not been implemented yet and these are labeled N.A. (not available).
This section documents GDB/MI commands for manipulating breakpoints.
-break-after Command
-break-after number count
The breakpoint number number is not in effect until it has been hit count times. To see how this is reflected in the output of the `-break-list' command, see the description of the `-break-list' command below.
The corresponding command is `ignore'.
(gdb)
-break-insert main
^done,bkpt={number="1",type="breakpoint",disp="keep",
enabled="y",addr="0x000100d0",func="main",file="hello.c",
fullname="/home/foo/hello.c",line="5",thread-groups=["i1"],
times="0"}
(gdb)
-break-after 1 3
~
^done
(gdb)
-break-list
^done,BreakpointTable={nr_rows="1",nr_cols="6",
hdr=[{width="3",alignment="-1",col_name="number",colhdr="Num"},
{width="14",alignment="-1",col_name="type",colhdr="Type"},
{width="4",alignment="-1",col_name="disp",colhdr="Disp"},
{width="3",alignment="-1",col_name="enabled",colhdr="Enb"},
{width="10",alignment="-1",col_name="addr",colhdr="Address"},
{width="40",alignment="2",col_name="what",colhdr="What"}],
body=[bkpt={number="1",type="breakpoint",disp="keep",enabled="y",
addr="0x000100d0",func="main",file="hello.c",fullname="/home/foo/hello.c",
line="5",thread-groups=["i1"],times="0",ignore="3"}]}
(gdb)
-break-commands Command
-break-commands number [ command1 ... commandN ]
Specifies the CLI commands that should be executed when breakpoint number is hit. The parameters command1 to commandN are the commands. If no command is specified, any previously-set commands are cleared. See section 5.1.7 Breakpoint Command Lists. Typical use of this functionality is tracing a program, that is, printing of values of some variables whenever breakpoint is hit and then continuing.
The corresponding command is `commands'.
(gdb)
-break-insert main
^done,bkpt={number="1",type="breakpoint",disp="keep",
enabled="y",addr="0x000100d0",func="main",file="hello.c",
fullname="/home/foo/hello.c",line="5",thread-groups=["i1"],
times="0"}
(gdb)
-break-commands 1 "print v" "continue"
^done
(gdb)
-break-condition Command
-break-condition number expr
Breakpoint number will stop the program only if the condition in expr is true. The condition becomes part of the `-break-list' output (see the description of the `-break-list' command below).
The corresponding command is `condition'.
(gdb)
-break-condition 1 1
^done
(gdb)
-break-list
^done,BreakpointTable={nr_rows="1",nr_cols="6",
hdr=[{width="3",alignment="-1",col_name="number",colhdr="Num"},
{width="14",alignment="-1",col_name="type",colhdr="Type"},
{width="4",alignment="-1",col_name="disp",colhdr="Disp"},
{width="3",alignment="-1",col_name="enabled",colhdr="Enb"},
{width="10",alignment="-1",col_name="addr",colhdr="Address"},
{width="40",alignment="2",col_name="what",colhdr="What"}],
body=[bkpt={number="1",type="breakpoint",disp="keep",enabled="y",
addr="0x000100d0",func="main",file="hello.c",fullname="/home/foo/hello.c",
line="5",cond="1",thread-groups=["i1"],times="0",ignore="3"}]}
(gdb)
-break-delete Command
-break-delete ( breakpoint )+
Delete the breakpoint(s) whose number(s) are specified in the argument list. This is obviously reflected in the breakpoint list.
The corresponding command is `delete'.
(gdb)
-break-delete 1
^done
(gdb)
-break-list
^done,BreakpointTable={nr_rows="0",nr_cols="6",
hdr=[{width="3",alignment="-1",col_name="number",colhdr="Num"},
{width="14",alignment="-1",col_name="type",colhdr="Type"},
{width="4",alignment="-1",col_name="disp",colhdr="Disp"},
{width="3",alignment="-1",col_name="enabled",colhdr="Enb"},
{width="10",alignment="-1",col_name="addr",colhdr="Address"},
{width="40",alignment="2",col_name="what",colhdr="What"}],
body=[]}
(gdb)
-break-disable Command
-break-disable ( breakpoint )+
Disable the named breakpoint(s). The field `enabled' in the break list is now set to `n' for the named breakpoint(s).
The corresponding command is `disable'.
(gdb)
-break-disable 2
^done
(gdb)
-break-list
^done,BreakpointTable={nr_rows="1",nr_cols="6",
hdr=[{width="3",alignment="-1",col_name="number",colhdr="Num"},
{width="14",alignment="-1",col_name="type",colhdr="Type"},
{width="4",alignment="-1",col_name="disp",colhdr="Disp"},
{width="3",alignment="-1",col_name="enabled",colhdr="Enb"},
{width="10",alignment="-1",col_name="addr",colhdr="Address"},
{width="40",alignment="2",col_name="what",colhdr="What"}],
body=[bkpt={number="2",type="breakpoint",disp="keep",enabled="n",
addr="0x000100d0",func="main",file="hello.c",fullname="/home/foo/hello.c",
line="5",thread-groups=["i1"],times="0"}]}
(gdb)
-break-enable Command
-break-enable ( breakpoint )+
Enable (previously disabled) breakpoint(s).
The corresponding command is `enable'.
(gdb)
-break-enable 2
^done
(gdb)
-break-list
^done,BreakpointTable={nr_rows="1",nr_cols="6",
hdr=[{width="3",alignment="-1",col_name="number",colhdr="Num"},
{width="14",alignment="-1",col_name="type",colhdr="Type"},
{width="4",alignment="-1",col_name="disp",colhdr="Disp"},
{width="3",alignment="-1",col_name="enabled",colhdr="Enb"},
{width="10",alignment="-1",col_name="addr",colhdr="Address"},
{width="40",alignment="2",col_name="what",colhdr="What"}],
body=[bkpt={number="2",type="breakpoint",disp="keep",enabled="y",
addr="0x000100d0",func="main",file="hello.c",fullname="/home/foo/hello.c",
line="5",thread-groups=["i1"],times="0"}]}
(gdb)
-break-info Command
-break-info breakpoint
Get information about a single breakpoint.
The result is a table of breakpoints. See section 27.5.4 GDB/MI Breakpoint Information, for details on the format of each breakpoint in the table.
The corresponding command is `info break breakpoint'.
-break-insert Command
-break-insert [ -t ] [ -h ] [ -f ] [ -d ] [ -a ]
[ -c condition ] [ -i ignore-count ]
[ -p thread-id ] [ location ]
If specified, location can be one of:
The possible optional parameters of this command are:
See section 27.5.4 GDB/MI Breakpoint Information, for details on the format of the resulting breakpoint.
Note: this format is open to change.
The corresponding commands are `break', `tbreak', `hbreak', and `thbreak'.
(gdb)
-break-insert main
^done,bkpt={number="1",addr="0x0001072c",file="recursive2.c",
fullname="/home/foo/recursive2.c,line="4",thread-groups=["i1"],
times="0"}
(gdb)
-break-insert -t foo
^done,bkpt={number="2",addr="0x00010774",file="recursive2.c",
fullname="/home/foo/recursive2.c,line="11",thread-groups=["i1"],
times="0"}
(gdb)
-break-list
^done,BreakpointTable={nr_rows="2",nr_cols="6",
hdr=[{width="3",alignment="-1",col_name="number",colhdr="Num"},
{width="14",alignment="-1",col_name="type",colhdr="Type"},
{width="4",alignment="-1",col_name="disp",colhdr="Disp"},
{width="3",alignment="-1",col_name="enabled",colhdr="Enb"},
{width="10",alignment="-1",col_name="addr",colhdr="Address"},
{width="40",alignment="2",col_name="what",colhdr="What"}],
body=[bkpt={number="1",type="breakpoint",disp="keep",enabled="y",
addr="0x0001072c", func="main",file="recursive2.c",
fullname="/home/foo/recursive2.c,"line="4",thread-groups=["i1"],
times="0"},
bkpt={number="2",type="breakpoint",disp="del",enabled="y",
addr="0x00010774",func="foo",file="recursive2.c",
fullname="/home/foo/recursive2.c",line="11",thread-groups=["i1"],
times="0"}]}
(gdb)
-break-list Command
-break-list
Displays the list of inserted breakpoints, showing the following fields:
If there are no breakpoints or watchpoints, the BreakpointTable
body field is an empty list.
The corresponding command is `info break'.
(gdb)
-break-list
^done,BreakpointTable={nr_rows="2",nr_cols="6",
hdr=[{width="3",alignment="-1",col_name="number",colhdr="Num"},
{width="14",alignment="-1",col_name="type",colhdr="Type"},
{width="4",alignment="-1",col_name="disp",colhdr="Disp"},
{width="3",alignment="-1",col_name="enabled",colhdr="Enb"},
{width="10",alignment="-1",col_name="addr",colhdr="Address"},
{width="40",alignment="2",col_name="what",colhdr="What"}],
body=[bkpt={number="1",type="breakpoint",disp="keep",enabled="y",
addr="0x000100d0",func="main",file="hello.c",line="5",thread-groups=["i1"],
times="0"},
bkpt={number="2",type="breakpoint",disp="keep",enabled="y",
addr="0x00010114",func="foo",file="hello.c",fullname="/home/foo/hello.c",
line="13",thread-groups=["i1"],times="0"}]}
(gdb)
Here's an example of the result when there are no breakpoints:
(gdb)
-break-list
^done,BreakpointTable={nr_rows="0",nr_cols="6",
hdr=[{width="3",alignment="-1",col_name="number",colhdr="Num"},
{width="14",alignment="-1",col_name="type",colhdr="Type"},
{width="4",alignment="-1",col_name="disp",colhdr="Disp"},
{width="3",alignment="-1",col_name="enabled",colhdr="Enb"},
{width="10",alignment="-1",col_name="addr",colhdr="Address"},
{width="40",alignment="2",col_name="what",colhdr="What"}],
body=[]}
(gdb)
-break-passcount Command
-break-passcount tracepoint-number passcount
Set the passcount for tracepoint tracepoint-number to passcount. If the breakpoint referred to by tracepoint-number is not a tracepoint, an error is emitted. This corresponds to the CLI command `passcount'.
-break-watch Command
-break-watch [ -a | -r ]
Create a watchpoint. With the `-a' option it will create an access watchpoint, i.e., a watchpoint that triggers either on a read from or on a write to the memory location. With the `-r' option, the watchpoint created is a read watchpoint, i.e., it will trigger only when the memory location is accessed for reading. Without either of the options, the watchpoint created is a regular watchpoint, i.e., it will trigger when the memory location is accessed for writing. See section Setting Watchpoints.
Note that `-break-list' will report a single list of watchpoints and breakpoints inserted.
The corresponding commands are `watch', `awatch', and `rwatch'.
Setting a watchpoint on a variable in the main function:
(gdb)
-break-watch x
^done,wpt={number="2",exp="x"}
(gdb)
-exec-continue
^running
(gdb)
*stopped,reason="watchpoint-trigger",wpt={number="2",exp="x"},
value={old="-268439212",new="55"},
frame={func="main",args=[],file="recursive2.c",
fullname="/home/foo/bar/recursive2.c",line="5"}
(gdb)
Setting a watchpoint on a variable local to a function. GDB will stop the program execution twice: first for the variable changing value, then for the watchpoint going out of scope.
(gdb)
-break-watch C
^done,wpt={number="5",exp="C"}
(gdb)
-exec-continue
^running
(gdb)
*stopped,reason="watchpoint-trigger",
wpt={number="5",exp="C"},value={old="-276895068",new="3"},
frame={func="callee4",args=[],
file="../../../devo/gdb/testsuite/gdb.mi/basics.c",
fullname="/home/foo/bar/devo/gdb/testsuite/gdb.mi/basics.c",line="13"}
(gdb)
-exec-continue
^running
(gdb)
*stopped,reason="watchpoint-scope",wpnum="5",
frame={func="callee3",args=[{name="strarg",
value="0x11940 \"A string argument.\""}],
file="../../../devo/gdb/testsuite/gdb.mi/basics.c",
fullname="/home/foo/bar/devo/gdb/testsuite/gdb.mi/basics.c",line="18"}
(gdb)
Listing breakpoints and watchpoints, at different points in the program execution. Note that once the watchpoint goes out of scope, it is deleted.
(gdb)
-break-watch C
^done,wpt={number="2",exp="C"}
(gdb)
-break-list
^done,BreakpointTable={nr_rows="2",nr_cols="6",
hdr=[{width="3",alignment="-1",col_name="number",colhdr="Num"},
{width="14",alignment="-1",col_name="type",colhdr="Type"},
{width="4",alignment="-1",col_name="disp",colhdr="Disp"},
{width="3",alignment="-1",col_name="enabled",colhdr="Enb"},
{width="10",alignment="-1",col_name="addr",colhdr="Address"},
{width="40",alignment="2",col_name="what",colhdr="What"}],
body=[bkpt={number="1",type="breakpoint",disp="keep",enabled="y",
addr="0x00010734",func="callee4",
file="../../../devo/gdb/testsuite/gdb.mi/basics.c",
fullname="/home/foo/devo/gdb/testsuite/gdb.mi/basics.c",line="8",thread-groups=["i1"],
times="1"},
bkpt={number="2",type="watchpoint",disp="keep",
enabled="y",addr="",what="C",thread-groups=["i1"],times="0"}]}
(gdb)
-exec-continue
^running
(gdb)
*stopped,reason="watchpoint-trigger",wpt={number="2",exp="C"},
value={old="-276895068",new="3"},
frame={func="callee4",args=[],
file="../../../devo/gdb/testsuite/gdb.mi/basics.c",
fullname="/home/foo/bar/devo/gdb/testsuite/gdb.mi/basics.c",line="13"}
(gdb)
-break-list
^done,BreakpointTable={nr_rows="2",nr_cols="6",
hdr=[{width="3",alignment="-1",col_name="number",colhdr="Num"},
{width="14",alignment="-1",col_name="type",colhdr="Type"},
{width="4",alignment="-1",col_name="disp",colhdr="Disp"},
{width="3",alignment="-1",col_name="enabled",colhdr="Enb"},
{width="10",alignment="-1",col_name="addr",colhdr="Address"},
{width="40",alignment="2",col_name="what",colhdr="What"}],
body=[bkpt={number="1",type="breakpoint",disp="keep",enabled="y",
addr="0x00010734",func="callee4",
file="../../../devo/gdb/testsuite/gdb.mi/basics.c",
fullname="/home/foo/devo/gdb/testsuite/gdb.mi/basics.c",line="8",thread-groups=["i1"],
times="1"},
bkpt={number="2",type="watchpoint",disp="keep",
enabled="y",addr="",what="C",thread-groups=["i1"],times="1"}]}
(gdb)
-exec-continue
^running
(gdb)
*stopped,reason="watchpoint-scope",wpnum="2",
frame={func="callee3",args=[{name="strarg",
value="0x11940 \"A string argument.\""}],
file="../../../devo/gdb/testsuite/gdb.mi/basics.c",
fullname="/home/foo/bar/devo/gdb/testsuite/gdb.mi/basics.c",line="18"}
(gdb)
-break-list
^done,BreakpointTable={nr_rows="1",nr_cols="6",
hdr=[{width="3",alignment="-1",col_name="number",colhdr="Num"},
{width="14",alignment="-1",col_name="type",colhdr="Type"},
{width="4",alignment="-1",col_name="disp",colhdr="Disp"},
{width="3",alignment="-1",col_name="enabled",colhdr="Enb"},
{width="10",alignment="-1",col_name="addr",colhdr="Address"},
{width="40",alignment="2",col_name="what",colhdr="What"}],
body=[bkpt={number="1",type="breakpoint",disp="keep",enabled="y",
addr="0x00010734",func="callee4",
file="../../../devo/gdb/testsuite/gdb.mi/basics.c",
fullname="/home/foo/devo/gdb/testsuite/gdb.mi/basics.c",line="8",
thread-groups=["i1"],times="1"}]}
(gdb)
This section documents GDB/MI commands for manipulating catchpoints.
-catch-load Command
-catch-load [ -t ] [ -d ] regexp
Add a catchpoint for library load events. If the `-t' option is used, the catchpoint is a temporary one (see section Setting Breakpoints). If the `-d' option is used, the catchpoint is created in a disabled state. The `regexp' argument is a regular expression used to match the name of the loaded library.
The corresponding command is `catch load'.
-catch-load -t foo.so
^done,bkpt={number="1",type="catchpoint",disp="del",enabled="y",
what="load of library matching foo.so",catch-type="load",times="0"}
(gdb)
-catch-unload Command
-catch-unload [ -t ] [ -d ] regexp
Add a catchpoint for library unload events. If the `-t' option is used, the catchpoint is a temporary one (see section Setting Breakpoints). If the `-d' option is used, the catchpoint is created in a disabled state. The `regexp' argument is a regular expression used to match the name of the unloaded library.
The corresponding command is `catch unload'.
-catch-unload -d bar.so
^done,bkpt={number="2",type="catchpoint",disp="keep",enabled="n",
what="load of library matching bar.so",catch-type="unload",times="0"}
(gdb)
-exec-arguments Command
-exec-arguments args
Set the inferior program arguments, to be used in the next `-exec-run'.
The corresponding command is `set args'.
(gdb) -exec-arguments -v word ^done (gdb)
-environment-cd Command
-environment-cd pathdir
Set GDB's working directory.
The corresponding command is `cd'.
(gdb) -environment-cd /kwikemart/marge/ezannoni/flathead-dev/devo/gdb ^done (gdb)
-environment-directory Command
-environment-directory [ -r ] [ pathdir ]+
Add directories pathdir to the beginning of the search path for source files. If the `-r' option is used, the search path is reset to the default search path. If directories pathdir are supplied in addition to the `-r' option, the search path is first reset and then addition occurs as normal. Multiple directories may be specified, separated by blanks. Specifying multiple directories in a single command results in the directories being added to the beginning of the search path in the same order they were presented in the command. If blanks are needed as part of a directory name, double-quotes should be used around the name. In the command output, the path will show up separated by the system directory-separator character. The directory-separator character must not be used in any directory name. If no directories are specified, the current search path is displayed.
The corresponding command is `dir'.
(gdb) -environment-directory /kwikemart/marge/ezannoni/flathead-dev/devo/gdb ^done,source-path="/kwikemart/marge/ezannoni/flathead-dev/devo/gdb:$cdir:$cwd" (gdb) -environment-directory "" ^done,source-path="/kwikemart/marge/ezannoni/flathead-dev/devo/gdb:$cdir:$cwd" (gdb) -environment-directory -r /home/jjohnstn/src/gdb /usr/src ^done,source-path="/home/jjohnstn/src/gdb:/usr/src:$cdir:$cwd" (gdb) -environment-directory -r ^done,source-path="$cdir:$cwd" (gdb)
-environment-path Command
-environment-path [ -r ] [ pathdir ]+
Add directories pathdir to the beginning of the search path for object files. If the `-r' option is used, the search path is reset to the original search path that existed at GDB start-up. If directories pathdir are supplied in addition to the `-r' option, the search path is first reset and then addition occurs as normal. Multiple directories may be specified, separated by blanks. Specifying multiple directories in a single command results in the directories being added to the beginning of the search path in the same order they were presented in the command. If blanks are needed as part of a directory name, double-quotes should be used around the name. In the command output, the path will show up separated by the system directory-separator character. The directory-separator character must not be used in any directory name. If no directories are specified, the current path is displayed.
The corresponding command is `path'.
(gdb) -environment-path ^done,path="/usr/bin" (gdb) -environment-path /kwikemart/marge/ezannoni/flathead-dev/ppc-eabi/gdb /bin ^done,path="/kwikemart/marge/ezannoni/flathead-dev/ppc-eabi/gdb:/bin:/usr/bin" (gdb) -environment-path -r /usr/local/bin ^done,path="/usr/local/bin:/usr/bin" (gdb)
-environment-pwd Command
-environment-pwd
Show the current working directory.
The corresponding command is `pwd'.
(gdb) -environment-pwd ^done,cwd="/kwikemart/marge/ezannoni/flathead-dev/devo/gdb" (gdb)
-thread-info Command
-thread-info [ thread-id ]
Reports information about either a specific thread, if the thread-id parameter is present, or about all threads. When printing information about all threads, it also reports the current thread.
The `info thread' command prints the same information about all threads.
The result is a list of threads. The following attributes are defined for a given thread:
id
The global numeric id assigned to the thread by GDB.
target-id
The target-specific string identifying the thread.
details
Additional information about the thread provided by the target. It is supposed to be human-readable and not interpreted by the frontend. This field is optional.
name
The name of the thread. If the user specified a name using the `thread name' command, then this name is given. Otherwise, if GDB can extract the thread name from the target, then that name is given. If GDB cannot find the thread name, then this field is omitted.
frame
The stack frame currently executing in the thread.
state
The thread's state. The state field may have the following values:
stopped
The thread is stopped. Frame information is available for stopped threads.
running
The thread is running. There's no frame information for running threads.
core
If GDB can find the CPU core on which this thread is running, then this field is the core identifier. This field is optional.
-thread-info
^done,threads=[
{id="2",target-id="Thread 0xb7e14b90 (LWP 21257)",
frame={level="0",addr="0xffffe410",func="__kernel_vsyscall",
args=[]},state="running"},
{id="1",target-id="Thread 0xb7e156b0 (LWP 21254)",
frame={level="0",addr="0x0804891f",func="foo",
args=[{name="i",value="10"}],
file="/tmp/a.c",fullname="/tmp/a.c",line="158"},
state="running"}],
current-thread-id="1"
(gdb)
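A frontend has to pull the thread attributes back out of a `^done,threads=[...]' result record like the one above. The following Python sketch is illustrative only (it is not part of GDB, and a real frontend would use a full GDB/MI output-record parser rather than regular expressions); it extracts each thread's id and state:

```python
import re

def parse_thread_states(result_record):
    """Naive extraction of (id, state) pairs from a ^done,threads=[...]
    MI result record. Each thread tuple opens with id="...", and its
    state="..." field appears before the tuple closes."""
    threads = []
    for m in re.finditer(r'\{id="(\d+)",.*?state="(\w+)"', result_record):
        threads.append({"id": m.group(1), "state": m.group(2)})
    return threads

# The record from the -thread-info example above, as one string:
record = ('^done,threads=[{id="2",target-id="Thread 0xb7e14b90 (LWP 21257)",'
          'frame={level="0",addr="0xffffe410",func="__kernel_vsyscall",'
          'args=[]},state="running"},'
          '{id="1",target-id="Thread 0xb7e156b0 (LWP 21254)",'
          'frame={level="0",addr="0x0804891f",func="foo",'
          'args=[{name="i",value="10"}],'
          'file="/tmp/a.c",fullname="/tmp/a.c",line="158"},'
          'state="running"}],current-thread-id="1"')

print(parse_thread_states(record))
```

Nested tuples such as `frame={...}' do not confuse the pattern here because only thread tuples begin with `{id="'; a production frontend should still parse the record grammar properly.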
-thread-list-ids Command
-thread-list-ids
Produces a list of the currently known thread ids. At the end of the list it also prints the total number of such threads.
This command is retained for historical reasons; the `-thread-info' command should be used instead.
Part of `info threads' supplies the same information.
(gdb)
-thread-list-ids
^done,thread-ids={thread-id="3",thread-id="2",thread-id="1"},
current-thread-id="1",number-of-threads="3"
(gdb)
-thread-select Command
-thread-select threadnum
Make threadnum the current thread. It prints the number of the new current thread, and the topmost frame for that thread.
This command is deprecated in favor of explicitly using the `--thread' option to each command.
The corresponding command is `thread'.
(gdb)
-exec-next
^running
(gdb)
*stopped,reason="end-stepping-range",thread-id="2",line="187",
file="../../../devo/gdb/testsuite/gdb.threads/linux-dp.c"
(gdb)
-thread-list-ids
^done,
thread-ids={thread-id="3",thread-id="2",thread-id="1"},
number-of-threads="3"
(gdb)
-thread-select 3
^done,new-thread-id="3",
frame={level="0",func="vprintf",
args=[{name="format",value="0x8048e9c \"%*s%c %d %c\\n\""},
{name="arg",value="0x2"}],file="vprintf.c",line="31"}
(gdb)
-ada-task-info Command
-ada-task-info [ task-id ]
Reports information about either a specific Ada task, if the task-id parameter is present, or about all Ada tasks.
The `info tasks' command prints the same information about all Ada tasks (see section 15.4.9.5 Extensions for Ada Tasks).
The result is a table of Ada tasks. The following columns are defined for each Ada task:
This field should always exist, as Ada tasks are always implemented on top of a thread. But if GDB cannot find the corresponding thread for any reason, the field is omitted.
-ada-task-info
^done,tasks={nr_rows="3",nr_cols="8",
hdr=[{width="1",alignment="-1",col_name="current",colhdr=""},
{width="3",alignment="1",col_name="id",colhdr="ID"},
{width="9",alignment="1",col_name="task-id",colhdr="TID"},
{width="4",alignment="1",col_name="thread-id",colhdr=""},
{width="4",alignment="1",col_name="parent-id",colhdr="P-ID"},
{width="3",alignment="1",col_name="priority",colhdr="Pri"},
{width="22",alignment="-1",col_name="state",colhdr="State"},
{width="1",alignment="2",col_name="name",colhdr="Name"}],
body=[{current="*",id="1",task-id=" 644010",thread-id="1",priority="48",
state="Child Termination Wait",name="main_task"}]}
(gdb)
These are the asynchronous commands which generate the out-of-band record `*stopped'. Currently GDB only really executes asynchronously with remote targets, and this interaction is mimicked in other cases.
-exec-continue Command
-exec-continue [--reverse] [--all|--thread-group N]
Resumes the execution of the inferior program, which will continue to execute until it reaches a debugger stop event. If the `--reverse' option is specified, execution resumes in reverse until it reaches a stop event. Stop events may include breakpoints or watchpoints.
The corresponding command is `continue'.
-exec-continue
^running
(gdb)
@Hello world
*stopped,reason="breakpoint-hit",disp="keep",bkptno="2",frame={
func="foo",args=[],file="hello.c",fullname="/home/foo/bar/hello.c",
line="13"}
(gdb)
-exec-finish Command
-exec-finish [--reverse]
Resumes the execution of the inferior program until the current function is exited. Displays the results returned by the function. If the `--reverse' option is specified, resumes the reverse execution of the inferior program until the point where current function was called.
The corresponding command is `finish'.
Function returning void.
-exec-finish
^running
(gdb)
@hello from foo
*stopped,reason="function-finished",frame={func="main",args=[],
file="hello.c",fullname="/home/foo/bar/hello.c",line="7"}
(gdb)
Function returning other than void. The name of the internal
variable storing the result is printed, together with the
value itself.
-exec-finish
^running
(gdb)
*stopped,reason="function-finished",frame={addr="0x000107b0",func="foo",
args=[{name="a",value="1"},{name="b",value="9"}],
file="recursive2.c",fullname="/home/foo/bar/recursive2.c",line="14"},
gdb-result-var="$1",return-value="0"
(gdb)
-exec-interrupt Command
-exec-interrupt [--all|--thread-group N]
Interrupts the background execution of the target. Note how the token associated with the stop message is the one for the execution command that has been interrupted. The token for the interrupt itself only appears in the `^done' output. If the user is trying to interrupt a non-running program, an error message will be printed.
Note that when asynchronous execution is enabled, this command is asynchronous just like other execution commands. That is, first the `^done' response will be printed, and the target stop will be reported after that using the `*stopped' notification.
In non-stop mode, only the context thread is interrupted by default. All threads (in all inferiors) will be interrupted if the `--all' option is specified. If the `--thread-group' option is specified, all threads in that group will be interrupted.
The corresponding command is `interrupt'.
(gdb)
111-exec-continue
111^running
(gdb)
222-exec-interrupt
222^done
(gdb)
111*stopped,signal-name="SIGINT",signal-meaning="Interrupt",
frame={addr="0x00010140",func="foo",args=[],file="try.c",
fullname="/home/foo/bar/try.c",line="13"}
(gdb)
(gdb)
-exec-interrupt
^error,msg="mi_cmd_exec_interrupt: Inferior not executing."
(gdb)
-exec-jump Command
-exec-jump location
Resumes execution of the inferior program at the location specified by the parameter. See section 9.2 Specifying a Location, for a description of the different forms of location.
The corresponding command is `jump'.
-exec-jump foo.c:10 *running,thread-id="all" ^running
-exec-next Command
-exec-next [--reverse]
Resumes execution of the inferior program, stopping when the beginning of the next source line is reached.
If the `--reverse' option is specified, resumes reverse execution of the inferior program, stopping at the beginning of the previous source line. If you issue this command on the first line of a function, it will take you back to the caller of that function, to the source line where the function was called.
The corresponding command is `next'.
-exec-next ^running (gdb) *stopped,reason="end-stepping-range",line="8",file="hello.c" (gdb)
-exec-next-instruction Command
-exec-next-instruction [--reverse]
Executes one machine instruction. If the instruction is a function call, continues until the function returns. If the program stops at an instruction in the middle of a source line, the address will be printed as well.
If the `--reverse' option is specified, resumes reverse execution of the inferior program, stopping at the previous instruction. If the previously executed instruction was a return from another function, it will continue to execute in reverse until the call to that function (from the current stack frame) is reached.
The corresponding command is `nexti'.
(gdb) -exec-next-instruction ^running (gdb) *stopped,reason="end-stepping-range", addr="0x000100d4",line="5",file="hello.c" (gdb)
-exec-return Command
-exec-return
Makes the current function return immediately. Doesn't execute the inferior. Displays the new current frame.
The corresponding command is `return'.
(gdb)
200-break-insert callee4
200^done,bkpt={number="1",addr="0x00010734",
file="../../../devo/gdb/testsuite/gdb.mi/basics.c",line="8"}
(gdb)
000-exec-run
000^running
(gdb)
000*stopped,reason="breakpoint-hit",disp="keep",bkptno="1",
frame={func="callee4",args=[],
file="../../../devo/gdb/testsuite/gdb.mi/basics.c",
fullname="/home/foo/bar/devo/gdb/testsuite/gdb.mi/basics.c",line="8"}
(gdb)
205-break-delete
205^done
(gdb)
111-exec-return
111^done,frame={level="0",func="callee3",
args=[{name="strarg",
value="0x11940 \"A string argument.\""}],
file="../../../devo/gdb/testsuite/gdb.mi/basics.c",
fullname="/home/foo/bar/devo/gdb/testsuite/gdb.mi/basics.c",line="18"}
(gdb)
-exec-run Command
-exec-run [--all | --thread-group N]
Starts execution of the inferior from the beginning. The inferior executes until either a breakpoint is encountered or the program exits. In the latter case the output will include an exit code, if the program has exited exceptionally.
When no option is specified, the current inferior is started. If the `--thread-group' option is specified, it should refer to a thread group of type `process', and that thread group will be started. If the `--all' option is specified, then all inferiors will be started.
The corresponding command is `run'.
(gdb)
-break-insert main
^done,bkpt={number="1",addr="0x0001072c",file="recursive2.c",line="4"}
(gdb)
-exec-run
^running
(gdb)
*stopped,reason="breakpoint-hit",disp="keep",bkptno="1",
frame={func="main",args=[],file="recursive2.c",
fullname="/home/foo/bar/recursive2.c",line="4"}
(gdb)
Program exited normally:
(gdb) -exec-run ^running (gdb) x = 55 *stopped,reason="exited-normally" (gdb)
Program exited exceptionally:
(gdb) -exec-run ^running (gdb) x = 55 *stopped,reason="exited",exit-code="01" (gdb)
Another way the program can terminate is if it receives a signal such as
SIGINT. In this case, GDB/MI displays this:
(gdb) *stopped,reason="exited-signalled",signal-name="SIGINT", signal-meaning="Interrupt"
-exec-step Command
-exec-step [--reverse]
Resumes execution of the inferior program, stopping when the beginning of the next source line is reached, if the next source line is not a function call. If it is, stop at the first instruction of the called function. If the `--reverse' option is specified, resumes reverse execution of the inferior program, stopping at the beginning of the previously executed source line.
The corresponding command is `step'.
Stepping into a function:
-exec-step
^running
(gdb)
*stopped,reason="end-stepping-range",
frame={func="foo",args=[{name="a",value="10"},
{name="b",value="0"}],file="recursive2.c",
fullname="/home/foo/bar/recursive2.c",line="11"}
(gdb)
Regular stepping:
-exec-step ^running (gdb) *stopped,reason="end-stepping-range",line="14",file="recursive2.c" (gdb)
-exec-step-instruction Command
-exec-step-instruction [--reverse]
Resumes the inferior, which executes one machine instruction. If the `--reverse' option is specified, resumes reverse execution of the inferior program, stopping at the previously executed instruction. The output, once GDB has stopped, will vary depending on whether we have stopped in the middle of a source line or not. In the former case, the address at which the program stopped will be printed as well.
The corresponding command is `stepi'.
(gdb)
-exec-step-instruction
^running
(gdb)
*stopped,reason="end-stepping-range",
frame={func="foo",args=[],file="try.c",
fullname="/home/foo/bar/try.c",line="10"}
(gdb)
-exec-step-instruction
^running
(gdb)
*stopped,reason="end-stepping-range",
frame={addr="0x000100f4",func="foo",args=[],file="try.c",
fullname="/home/foo/bar/try.c",line="10"}
(gdb)
-exec-until Command
-exec-until [ location ]
Executes the inferior until the location specified in the argument is reached. If there is no argument, the inferior executes until a source line greater than the current one is reached. The reason for stopping in this case will be `location-reached'.
The corresponding command is `until'.
(gdb)
-exec-until recursive2.c:6
^running
(gdb)
x = 55
*stopped,reason="location-reached",frame={func="main",args=[],
file="recursive2.c",fullname="/home/foo/bar/recursive2.c",line="6"}
(gdb)
-stack-info-frame Command
-stack-info-frame
Get info on the selected frame.
The corresponding command is `info frame' or `frame' (without arguments).
(gdb)
-stack-info-frame
^done,frame={level="1",addr="0x0001076c",func="callee3",
file="../../../devo/gdb/testsuite/gdb.mi/basics.c",
fullname="/home/foo/bar/devo/gdb/testsuite/gdb.mi/basics.c",line="17"}
(gdb)
-stack-info-depth Command
-stack-info-depth [ max-depth ]
Return the depth of the stack. If the integer argument max-depth is specified, do not count beyond max-depth frames.
There's no equivalent GDB command.
For a stack with frame levels 0 through 11:
(gdb) -stack-info-depth ^done,depth="12" (gdb) -stack-info-depth 4 ^done,depth="4" (gdb) -stack-info-depth 12 ^done,depth="12" (gdb) -stack-info-depth 11 ^done,depth="11" (gdb) -stack-info-depth 13 ^done,depth="12" (gdb)
-stack-list-arguments Command
-stack-list-arguments print-values [ low-frame high-frame ]
Display a list of the arguments for the frames between low-frame and high-frame (inclusive). If low-frame and high-frame are not provided, list the arguments for the whole call stack. If the two arguments are equal, show the single frame at the corresponding level. It is an error if low-frame is larger than the actual number of frames. On the other hand, high-frame may be larger than the actual number of frames, in which case only existing frames will be returned.
If print-values is 0 or --no-values, print only the names of
the variables; if it is 1 or --all-values, print also their
values; and if it is 2 or --simple-values, print the name,
type and value for simple data types, and the name and type for arrays,
structures and unions.
Use of this command to obtain arguments in a single frame is deprecated in favor of the `-stack-list-variables' command.
GDB does not have an equivalent command. gdbtk has a `gdb_get_args' command which partially overlaps with the functionality of `-stack-list-arguments'.
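The low-frame/high-frame rules described above (inclusive bounds, an error when low-frame exceeds the stack, silent clamping of high-frame to the existing frames) can be mirrored on the frontend side. This is a hypothetical Python helper, not GDB code; the exact boundary condition is one reasonable reading of "larger than the actual number of frames":

```python
def select_frames(frames, low=None, high=None):
    # Mimic -stack-list-arguments / -stack-list-frames range semantics:
    # no bounds -> the whole call stack; bounds are inclusive frame levels.
    if low is None and high is None:
        return list(frames)
    if low >= len(frames):
        # "It is an error if low-frame is larger than the actual number
        # of frames" -- interpreted here as a level past the last frame.
        raise ValueError("low-frame larger than the actual number of frames")
    # high-frame may exceed the depth; only existing frames are returned.
    return frames[low:high + 1]

levels = list(range(12))             # a stack with frame levels 0..11
print(select_frames(levels, 3, 5))   # [3, 4, 5]
print(select_frames(levels, 9, 100)) # [9, 10, 11]
```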
(gdb)
-stack-list-frames
^done,
stack=[
frame={level="0",addr="0x00010734",func="callee4",
file="../../../devo/gdb/testsuite/gdb.mi/basics.c",
fullname="/home/foo/bar/devo/gdb/testsuite/gdb.mi/basics.c",line="8"},
frame={level="1",addr="0x0001076c",func="callee3",
file="../../../devo/gdb/testsuite/gdb.mi/basics.c",
fullname="/home/foo/bar/devo/gdb/testsuite/gdb.mi/basics.c",line="17"},
frame={level="2",addr="0x0001078c",func="callee2",
file="../../../devo/gdb/testsuite/gdb.mi/basics.c",
fullname="/home/foo/bar/devo/gdb/testsuite/gdb.mi/basics.c",line="22"},
frame={level="3",addr="0x000107b4",func="callee1",
file="../../../devo/gdb/testsuite/gdb.mi/basics.c",
fullname="/home/foo/bar/devo/gdb/testsuite/gdb.mi/basics.c",line="27"},
frame={level="4",addr="0x000107e0",func="main",
file="../../../devo/gdb/testsuite/gdb.mi/basics.c",
fullname="/home/foo/bar/devo/gdb/testsuite/gdb.mi/basics.c",line="32"}]
(gdb)
-stack-list-arguments 0
^done,
stack-args=[
frame={level="0",args=[]},
frame={level="1",args=[name="strarg"]},
frame={level="2",args=[name="intarg",name="strarg"]},
frame={level="3",args=[name="intarg",name="strarg",name="fltarg"]},
frame={level="4",args=[]}]
(gdb)
-stack-list-arguments 1
^done,
stack-args=[
frame={level="0",args=[]},
frame={level="1",
args=[{name="strarg",value="0x11940 \"A string argument.\""}]},
frame={level="2",args=[
{name="intarg",value="2"},
{name="strarg",value="0x11940 \"A string argument.\""}]},
frame={level="3",args=[
{name="intarg",value="2"},
{name="strarg",value="0x11940 \"A string argument.\""},
{name="fltarg",value="3.5"}]},
frame={level="4",args=[]}]
(gdb)
-stack-list-arguments 0 2 2
^done,stack-args=[frame={level="2",args=[name="intarg",name="strarg"]}]
(gdb)
-stack-list-arguments 1 2 2
^done,stack-args=[frame={level="2",
args=[{name="intarg",value="2"},
{name="strarg",value="0x11940 \"A string argument.\""}]}]
(gdb)
-stack-list-frames Command
-stack-list-frames [ low-frame high-frame ]
List the frames currently on the stack. For each frame it displays the following info:
`level'
The frame number, 0 being the topmost frame, i.e., the innermost function.
`addr'
The $pc value for that frame.
`func'
Function name.
`file'
File name of the source file where the function lives.
`fullname'
The full file name of the source file where the function lives.
`line'
Line number corresponding to the $pc.
If invoked without arguments, this command prints a backtrace for the whole stack. If given two integer arguments, it shows the frames whose levels are between the two arguments (inclusive). If the two arguments are equal, it shows the single frame at the corresponding level. It is an error if low-frame is larger than the actual number of frames. On the other hand, high-frame may be larger than the actual number of frames, in which case only existing frames will be returned.
The corresponding commands are `backtrace' and `where'.
Full stack backtrace:
(gdb)
-stack-list-frames
^done,stack=
[frame={level="0",addr="0x0001076c",func="foo",
file="recursive2.c",fullname="/home/foo/bar/recursive2.c",line="11"},
frame={level="1",addr="0x000107a4",func="foo",
file="recursive2.c",fullname="/home/foo/bar/recursive2.c",line="14"},
frame={level="2",addr="0x000107a4",func="foo",
file="recursive2.c",fullname="/home/foo/bar/recursive2.c",line="14"},
frame={level="3",addr="0x000107a4",func="foo",
file="recursive2.c",fullname="/home/foo/bar/recursive2.c",line="14"},
frame={level="4",addr="0x000107a4",func="foo",
file="recursive2.c",fullname="/home/foo/bar/recursive2.c",line="14"},
frame={level="5",addr="0x000107a4",func="foo",
file="recursive2.c",fullname="/home/foo/bar/recursive2.c",line="14"},
frame={level="6",addr="0x000107a4",func="foo",
file="recursive2.c",fullname="/home/foo/bar/recursive2.c",line="14"},
frame={level="7",addr="0x000107a4",func="foo",
file="recursive2.c",fullname="/home/foo/bar/recursive2.c",line="14"},
frame={level="8",addr="0x000107a4",func="foo",
file="recursive2.c",fullname="/home/foo/bar/recursive2.c",line="14"},
frame={level="9",addr="0x000107a4",func="foo",
file="recursive2.c",fullname="/home/foo/bar/recursive2.c",line="14"},
frame={level="10",addr="0x000107a4",func="foo",
file="recursive2.c",fullname="/home/foo/bar/recursive2.c",line="14"},
frame={level="11",addr="0x00010738",func="main",
file="recursive2.c",fullname="/home/foo/bar/recursive2.c",line="4"}]
(gdb)
Show frames between low_frame and high_frame:
(gdb)
-stack-list-frames 3 5
^done,stack=
[frame={level="3",addr="0x000107a4",func="foo",
file="recursive2.c",fullname="/home/foo/bar/recursive2.c",line="14"},
frame={level="4",addr="0x000107a4",func="foo",
file="recursive2.c",fullname="/home/foo/bar/recursive2.c",line="14"},
frame={level="5",addr="0x000107a4",func="foo",
file="recursive2.c",fullname="/home/foo/bar/recursive2.c",line="14"}]
(gdb)
Show a single frame:
(gdb)
-stack-list-frames 3 3
^done,stack=
[frame={level="3",addr="0x000107a4",func="foo",
file="recursive2.c",fullname="/home/foo/bar/recursive2.c",line="14"}]
(gdb)
-stack-list-locals Command
-stack-list-locals print-values
Display the local variable names for the selected frame. If
print-values is 0 or --no-values, print only the names of
the variables; if it is 1 or --all-values, print also their
values; and if it is 2 or --simple-values, print the name,
type and value for simple data types, and the name and type for arrays,
structures and unions. In this last case, a frontend can immediately
display the value of simple data types and create variable objects for
other data types when the user wishes to explore their values in
more detail.
This command is deprecated in favor of the `-stack-list-variables' command.
`info locals' in GDB, `gdb_get_locals' in gdbtk.
(gdb)
-stack-list-locals 0
^done,locals=[name="A",name="B",name="C"]
(gdb)
-stack-list-locals --all-values
^done,locals=[{name="A",value="1"},{name="B",value="2"},
{name="C",value="{1, 2, 3}"}]
-stack-list-locals --simple-values
^done,locals=[{name="A",type="int",value="1"},
{name="B",type="int",value="2"},{name="C",type="int [3]"}]
(gdb)
-stack-list-variables Command
-stack-list-variables print-values
Display the names of local variables and function arguments for the selected frame. If
print-values is 0 or --no-values, print only the names of
the variables; if it is 1 or --all-values, print also their
values; and if it is 2 or --simple-values, print the name,
type and value for simple data types, and the name and type for arrays,
structures and unions.
(gdb)
-stack-list-variables --thread 1 --frame 0 --all-values
^done,variables=[{name="x",value="11"},{name="s",value="{a = 1, b = 2}"}]
(gdb)
-stack-select-frame Command
-stack-select-frame framenum
Change the selected frame. Select a different frame framenum on the stack.
This command is deprecated in favor of passing the `--frame' option to every command.
The corresponding commands are `frame', `up', `down', `select-frame', `up-silent', and `down-silent'.
(gdb) -stack-select-frame 2 ^done (gdb)
Variable objects are an "object-oriented" MI interface for examining and changing values of expressions. Unlike some other MI interfaces that work with expressions, variable objects are specifically designed for simple and efficient presentation in the frontend. A variable object is identified by a string name. When a variable object is created, the frontend specifies the expression for that variable object. The expression can be a simple variable, or it can be an arbitrarily complex expression, and can even involve CPU registers. After creating a variable object, the frontend can invoke other variable object operations--for example to obtain or change the value of a variable object, or to change the display format.
Variable objects have a hierarchical tree structure. Any variable object that corresponds to a composite type, such as a structure in C, has a number of child variable objects, for example corresponding to each element of the structure. A child variable object can itself have children, recursively. Recursion ends when we reach leaf variable objects, which always have built-in types. Child variable objects are created only by explicit request, so if a frontend is not interested in the children of a particular variable object, no child will be created.
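The on-demand model above maps naturally onto a lazy tree on the frontend side. A sketch, where `fetch_children` is a hypothetical callback that would issue `-var-list-children` and return the child names:

```python
class VarObjNode:
    """Frontend-side mirror of one variable object."""
    def __init__(self, name, fetch_children):
        self.name = name
        self._fetch = fetch_children
        self._children = None            # None: children never requested yet

    def children(self):
        # Child variable objects are created only on explicit request,
        # e.g. when the user expands this node in a variable view.
        if self._children is None:
            self._children = [VarObjNode(n, self._fetch)
                              for n in self._fetch(self.name)]
        return self._children
```

A node whose view is never expanded never triggers `-var-list-children`, so no child variable objects are created for it.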
For a leaf variable object it is possible to obtain its value as a string, or set the value from a string. A string value can also be obtained for a non-leaf variable object, but it is generally a string that only indicates the type of the object, and does not list its contents. Assignment to a non-leaf variable object is not allowed. A frontend does not need to read the values of all variable objects each time the program stops. Instead, MI provides an update command that lists all variable objects whose values have changed since the last update operation. This considerably reduces the amount of data that must be transferred to the frontend. As noted above, child variable objects are created on demand, and only leaf variable objects have a real value. As a result, GDB will read target memory only for leaf variables that the frontend has created.
The automatic update is not always desirable. For example, a frontend might want to keep the value of some expression for future reference, and never update it. For another example, fetching memory is relatively slow for embedded targets, so a frontend might want to disable automatic update for the variables that are either not visible on the screen, or "closed". This is possible using so-called "frozen variable objects". Such variable objects are never implicitly updated.
Variable objects can be either fixed or floating. For a fixed variable object, the expression is parsed when the variable object is created, including associating identifiers with specific variables. The meaning of the expression never changes. For a floating variable object, the values of variables whose names appear in the expression are re-evaluated every time in the context of the current frame. Consider this example:
void do_work(...)
{
struct work_state state;
if (...)
do_work(...);
}
|
If a fixed variable object for the state variable is created in
this function, and we enter the recursive call, the variable
object will report the value of state in the top-level
do_work invocation. On the other hand, a floating variable
object will report the value of state in the current frame.
If an expression specified when creating a fixed variable object refers to a local variable, the variable object becomes bound to the thread and frame in which the variable object is created. When such a variable object is updated, GDB makes sure that the thread/frame combination the variable object is bound to still exists, and re-evaluates the variable object in the context of that thread/frame.
The following is the complete set of GDB/MI operations defined to access this functionality:
| Operation | Description |
-enable-pretty-printing |
enable Python-based pretty-printing |
-var-create |
create a variable object |
-var-delete |
delete the variable object and/or its children |
-var-set-format |
set the display format of this variable |
-var-show-format |
show the display format of this variable |
-var-info-num-children |
tells how many children this object has |
-var-list-children |
return a list of the object's children |
-var-info-type |
show the type of this variable object |
-var-info-expression |
print parent-relative expression that this variable object represents |
-var-info-path-expression |
print full expression that this variable object represents |
-var-show-attributes |
is this variable editable? does it exist here? |
-var-evaluate-expression |
get the value of this variable |
-var-assign |
set the value of this variable |
-var-update |
update the variable and its children |
-var-set-frozen |
set frozenness attribute |
-var-set-update-range |
set range of children to display on update |
In the next subsection we describe each operation in detail and suggest how it can be used.
-enable-pretty-printing Command
-enable-pretty-printing |
GDB allows Python-based visualizers to affect the output of the MI variable object commands. However, because there was no way to implement this in a fully backward-compatible way, a front end must request that this functionality be enabled.
Once enabled, this feature cannot be disabled.
Note that if Python support has not been compiled into GDB, this command will still succeed (and do nothing).
This feature is currently (as of GDB 7.0) experimental, and may work differently in future versions of GDB.
-var-create Command
-var-create {name | "-"}
{frame-addr | "*" | "@"} expression
|
This operation creates a variable object, which allows the monitoring of a variable, the result of an expression, a memory cell or a CPU register.
The name parameter is the string by which the object can be referenced. It must be unique. If `-' is specified, the varobj system will generate a string "varNNNNNN" automatically. It will be unique provided that one does not specify a name of that format. The command fails if a duplicate name is found.
The frame under which the expression should be evaluated can be specified by frame-addr. A `*' indicates that the current frame should be used. A `@' indicates that a floating variable object must be created.
expression is any expression valid in the current language (must not begin with a `*'), or one of the following:
A varobj's contents may be provided by a Python-based pretty-printer. In this
case the varobj is known as a dynamic varobj. Dynamic varobjs
have slightly different semantics in some cases. If the
-enable-pretty-printing command is not sent, then GDB
will never create a dynamic varobj. This ensures backward
compatibility for existing clients.
This operation returns attributes of the newly-created varobj. These are:
struct), or for a dynamic varobj, this value
will not be interesting.
on, the
actual (derived) type of the object is shown rather than the
declared one.
display_hint method. See section 23.2.2.5 Pretty Printing API.
Typical output will look like this:
name="name",numchild="N",type="type",thread-id="M", has_more="has_more" |
-var-delete Command
-var-delete [ -c ] name |
Deletes a previously created variable object and all of its children. With the `-c' option, just deletes the children.
Returns an error if the object name is not found.
-var-set-format Command
-var-set-format name format-spec |
Sets the output format for the value of the object name to be format-spec.
The syntax for the format-spec is as follows:
format-spec ==>
{binary | decimal | hexadecimal | octal | natural}
|
The natural format is the default format chosen automatically
based on the variable type (like decimal for an int, hex
for pointers, etc.).
For a variable with children, the format is set only on the variable itself, and the children are not affected.
-var-show-format Command
-var-show-format name |
Returns the format used to display the value of the object name.
format ==> format-spec |
-var-info-num-children Command
-var-info-num-children name |
Returns the number of children of a variable object name:
numchild=n |
Note that this number is not completely reliable for a dynamic varobj. It will return the current number of children, but more children may be available.
-var-list-children Command
-var-list-children [print-values] name [from to] |
Return a list of the children of the specified variable object and
create variable objects for them, if they do not already exist. With
a single argument or if print-values has a value of 0 or
--no-values, print only the names of the variables; if
print-values is 1 or --all-values, also print their
values; and if it is 2 or --simple-values print the name and
value for simple data types and just the name for arrays, structures
and unions.
from and to, if specified, indicate the range of children to report. If from or to is less than zero, the range is reset and all children will be reported. Otherwise, children starting at from (zero-based) and up to and excluding to will be reported.
If a child range is requested, it will only affect the current call to
-var-list-children, but not future calls to -var-update.
For this, you must instead use -var-set-update-range. The
intent of this approach is to enable a front end to implement any
update approach it likes; for example, scrolling a view may cause the
front end to request more children with -var-list-children, and
then the front end could call -var-set-update-range with a
different range to ensure that future updates are restricted to just
the visible items.
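The range rules above (a negative bound resets the range; from is zero-based and inclusive; to is exclusive) reduce to a small helper that a frontend might apply before issuing the command. Clamping to the known child count is an assumption here; for a dynamic varobj the reported count may grow:

```python
def normalize_child_range(from_, to, num_children):
    """Apply the -var-list-children / -var-set-update-range range rules."""
    if from_ < 0 or to < 0:            # a negative bound resets the range...
        return 0, num_children         # ...so all children are reported
    # children are zero-based; 'to' is exclusive
    return from_, min(to, num_children)
```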
For each child the following results are returned:
For a dynamic varobj, this value cannot be used to form an expression. There is no way to do this at all with a dynamic varobj.
For C/C++ structures there are several pseudo children returned to designate access qualifiers. For these pseudo children exp is `public', `private', or `protected'. In this case the type and value are not present.
A dynamic varobj will not report the access qualifying pseudo-children, regardless of the language. This information is not available at all with a dynamic varobj.
on, the
actual (derived) type of the object is shown rather than the
declared one.
The result may have its own attributes:
display_hint method. See section 23.2.2.5 Pretty Printing API.
(gdb)
-var-list-children n
^done,numchild=n,children=[child={name=name,exp=exp,
numchild=n,type=type},(repeats N times)]
(gdb)
-var-list-children --all-values n
^done,numchild=n,children=[child={name=name,exp=exp,
numchild=n,value=value,type=type},(repeats N times)]
|
-var-info-type Command
-var-info-type name |
Returns the type of the specified variable name. The type is returned as a string in the same format as it is output by the CLI:
type=typename |
-var-info-expression Command
-var-info-expression name |
Returns a string that is suitable for presenting this variable object in a user interface. The string is generally not a valid expression in the current language, and cannot be evaluated.
For example, if a is an array, and variable object
A was created for a, then we'll get this output:
(gdb) -var-info-expression A.1 ^done,lang="C",exp="1" |
Here, the values of lang can be {"C" | "C++" | "Java"}.
Note that the output of the -var-list-children command also
includes those expressions, so the -var-info-expression command
is of limited use.
-var-info-path-expression Command
-var-info-path-expression name |
Returns an expression that can be evaluated in the current
context and will yield the same value that a variable object has.
Compare this with the -var-info-expression command, whose
result can be used only for UI presentation. A typical use of
the -var-info-path-expression command is creating a
watchpoint from a variable object.
This command is currently not valid for children of a dynamic varobj, and will give an error when invoked on one.
For example, suppose C is a C++ class, derived from class
Base, and that the Base class has a member called
m_size. Assume a variable c has the type of
C and a variable object C was created for variable
c. Then, we'll get this output:
(gdb) -var-info-path-expression C.Base.public.m_size ^done,path_expr=((Base)c).m_size |
-var-show-attributes Command
-var-show-attributes name |
List attributes of the specified variable object name:
status=attr [ ( ,attr )* ] |
where attr is { { editable | noneditable } | TBD }.
-var-evaluate-expression Command
-var-evaluate-expression [-f format-spec] name |
Evaluates the expression that is represented by the specified variable
object and returns its value as a string. The format of the string
can be specified with the `-f' option. The possible values of
this option are the same as for -var-set-format
(see -var-set-format). If the `-f' option is not specified,
the current display format will be used. The current display format
can be changed using the -var-set-format command.
value=value |
Note that one must invoke -var-list-children for a variable
before the value of a child variable can be evaluated.
-var-assign Command
-var-assign name expression |
Assigns the value of expression to the variable object specified
by name. The object must be `editable'. If the variable's
value is altered by the assign, the variable will show up in any
subsequent -var-update list.
(gdb)
-var-assign var1 3
^done,value="3"
(gdb)
-var-update *
^done,changelist=[{name="var1",in_scope="true",type_changed="false"}]
(gdb)
|
-var-update Command
-var-update [print-values] {name | "*"}
|
Reevaluate the expressions corresponding to the variable object
name and all its direct and indirect children, and return the
list of variable objects whose values have changed; name must
be a root variable object. Here, "changed" means that the result of
-var-evaluate-expression before and after the
-var-update is different. If `*' is used as the variable
object name, all existing variable objects are updated, except
for frozen ones (see -var-set-frozen). The option
print-values determines whether both names and values, or just
names are printed. The possible values of this option are the same
as for -var-list-children (see -var-list-children). It is
recommended to use the `--all-values' option, to reduce the
number of MI commands needed on each program stop.
With the `*' parameter, if a variable object is bound to a currently running thread, it will not be updated, without any diagnostic.
If -var-set-update-range was previously used on a varobj, then
only the selected range of children will be reported.
-var-update reports all the changed varobjs in a tuple named
`changelist'.
Each item in the change list is itself a tuple holding:
"true"
"false"
"invalid"
file
command. The front end should normally choose to delete these variable
objects.
In the future new values may be added to this list so the front end should be prepared for this possibility. See section GDB/MI Development and Front Ends.
When a varobj's type changes, its children are also likely to have
become incorrect. Therefore, the varobj's children are automatically
deleted when this attribute is `true'. Also, the varobj's update
range, when set using the -var-set-update-range command, is
unset.
The `numchild' field in other varobj responses is generally not valid for a dynamic varobj -- it will show the number of children that GDB knows about, but because dynamic varobjs lazily instantiate their children, this will not reflect the number of children which may be available.
The `new_num_children' attribute only reports changes to the number of children known by GDB. This is the only way to detect whether an update has removed children (which necessarily can only happen at the end of the update range).
-var-set-update-range), then they will
be listed in this attribute.
(gdb)
-var-assign var1 3
^done,value="3"
(gdb)
-var-update --all-values var1
^done,changelist=[{name="var1",value="3",in_scope="true",
type_changed="false"}]
(gdb)
|
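A frontend consuming `changelist' tuples typically dispatches on the `in_scope' and `type_changed' fields. A hedged sketch of that dispatch, with illustrative action names (the text above says a varobj reported as `invalid' should normally be deleted, and that a type change deletes the varobj's children):

```python
def changelist_action(entry):
    """Decide what to do with one -var-update changelist entry."""
    scope = entry.get("in_scope")
    if scope == "invalid":
        return "delete"            # varobj will never become valid again
    if scope == "false":
        return "gray-out"          # keep it: it may come back into scope
    if entry.get("type_changed") == "true":
        return "rebuild-children"  # gdb deleted the children automatically
    return "show-new-value"
```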
-var-set-frozen Command
-var-set-frozen name flag |
Set the frozenness flag on the variable object name. The
flag parameter should be either `1' to make the variable
frozen or `0' to make it unfrozen. If a variable object is
frozen, then neither itself, nor any of its children, are
implicitly updated by -var-update of
a parent variable or by -var-update *. Only
-var-update of the variable itself will update its value and
values of its children. After a variable object is unfrozen, it is
implicitly updated by all subsequent -var-update operations.
Unfreezing a variable does not update it, only subsequent
-var-update does.
(gdb) -var-set-frozen V 1 ^done (gdb) |
-var-set-update-range command
-var-set-update-range name from to |
Set the range of children to be returned by future invocations of
-var-update.
from and to indicate the range of children to report. If from or to is less than zero, the range is reset and all children will be reported. Otherwise, children starting at from (zero-based) and up to and excluding to will be reported.
(gdb) -var-set-update-range V 1 2 ^done |
-var-set-visualizer command
-var-set-visualizer name visualizer |
Set a visualizer for the variable object name.
visualizer is the visualizer to use. The special value `None' means to disable any visualizer in use.
If not `None', visualizer must be a Python expression. This expression must evaluate to a callable object which accepts a single argument. GDB will call this object with the value of the varobj name as an argument (this is done so that the same Python pretty-printing code can be used for both the CLI and MI). When called, this object must return an object which conforms to the pretty-printing interface (see section 23.2.2.5 Pretty Printing API).
The pre-defined function gdb.default_visualizer may be used to
select a visualizer by following the built-in process
(see section 23.2.2.6 Selecting Pretty-Printers). This is done automatically when
a varobj is created, and so ordinarily is not needed.
This feature is only available if Python support is enabled. The MI
command -list-features (see section 27.22 Miscellaneous GDB/MI Commands)
can be used to check this.
Resetting the visualizer:
(gdb) -var-set-visualizer V None ^done |
Reselecting the default (type-based) visualizer:
(gdb) -var-set-visualizer V gdb.default_visualizer ^done |
Suppose SomeClass is a visualizer class. A lambda expression
can be used to instantiate this class for a varobj:
(gdb) -var-set-visualizer V "lambda val: SomeClass()" ^done |
This section describes the GDB/MI commands that manipulate data: examine memory and registers, evaluate expressions, etc.
-data-disassemble Command
-data-disassemble
[ -s start-addr -e end-addr ]
| [ -f filename -l linenum [ -n lines ] ]
-- mode
|
Where:
$pc)
The result of the -data-disassemble command will be a list named
`asm_insns', the contents of this list depend on the mode
used with the -data-disassemble command.
For modes 0 and 2 the `asm_insns' list contains tuples with the following fields:
address
func-name
offset
inst
opcodes
For modes 1 and 3 the `asm_insns' list contains tuples named `src_and_asm_line', each of which has the following fields:
line
file
fullname
If the source file is not found this field will contain the path as present in the debug information.
line_asm_insn
-data-disassemble in modes 0 and 2, so `address',
`func-name', `offset', `inst', and optionally
`opcodes'.
Note that whatever is included in the `inst' field is not manipulated directly by GDB/MI, i.e., it is not possible to adjust its format.
The corresponding command is `disassemble'.
Disassemble from the current value of $pc to $pc + 20:
(gdb)
-data-disassemble -s $pc -e "$pc + 20" -- 0
^done,
asm_insns=[
{address="0x000107c0",func-name="main",offset="4",
inst="mov 2, %o0"},
{address="0x000107c4",func-name="main",offset="8",
inst="sethi %hi(0x11800), %o2"},
{address="0x000107c8",func-name="main",offset="12",
inst="or %o2, 0x140, %o1\t! 0x11940 <_lib_version+8>"},
{address="0x000107cc",func-name="main",offset="16",
inst="sethi %hi(0x11800), %o2"},
{address="0x000107d0",func-name="main",offset="20",
inst="or %o2, 0x168, %o4\t! 0x11968 <_lib_version+48>"}]
(gdb)
|
Disassemble the whole main function. Line 32 is part of
main.
-data-disassemble -f basics.c -l 32 -- 0
^done,asm_insns=[
{address="0x000107bc",func-name="main",offset="0",
inst="save %sp, -112, %sp"},
{address="0x000107c0",func-name="main",offset="4",
inst="mov 2, %o0"},
{address="0x000107c4",func-name="main",offset="8",
inst="sethi %hi(0x11800), %o2"},
[...]
{address="0x0001081c",func-name="main",offset="96",inst="ret "},
{address="0x00010820",func-name="main",offset="100",inst="restore "}]
(gdb)
|
Disassemble 3 instructions from the start of main:
(gdb)
-data-disassemble -f basics.c -l 32 -n 3 -- 0
^done,asm_insns=[
{address="0x000107bc",func-name="main",offset="0",
inst="save %sp, -112, %sp"},
{address="0x000107c0",func-name="main",offset="4",
inst="mov 2, %o0"},
{address="0x000107c4",func-name="main",offset="8",
inst="sethi %hi(0x11800), %o2"}]
(gdb)
|
Disassemble 3 instructions from the start of main in mixed mode:
(gdb)
-data-disassemble -f basics.c -l 32 -n 3 -- 1
^done,asm_insns=[
src_and_asm_line={line="31",
file="../../../src/gdb/testsuite/gdb.mi/basics.c",
fullname="/absolute/path/to/src/gdb/testsuite/gdb.mi/basics.c",
line_asm_insn=[{address="0x000107bc",
func-name="main",offset="0",inst="save %sp, -112, %sp"}]},
src_and_asm_line={line="32",
file="../../../src/gdb/testsuite/gdb.mi/basics.c",
fullname="/absolute/path/to/src/gdb/testsuite/gdb.mi/basics.c",
line_asm_insn=[{address="0x000107c0",
func-name="main",offset="4",inst="mov 2, %o0"},
{address="0x000107c4",func-name="main",offset="8",
inst="sethi %hi(0x11800), %o2"}]}]
(gdb)
|
-data-evaluate-expression Command
-data-evaluate-expression expr |
Evaluate expr as an expression. The expression could contain an inferior function call. The function call will execute synchronously. If the expression contains spaces, it must be enclosed in double quotes.
The corresponding commands are `print', `output', and
`call'. In gdbtk only, there's a corresponding
`gdb_eval' command.
In the following example, the numbers that precede the commands are the tokens described in GDB/MI Command Syntax. Notice how GDB/MI returns the same tokens in its output.
211-data-evaluate-expression A 211^done,value="1" (gdb) 311-data-evaluate-expression &A 311^done,value="0xefffeb7c" (gdb) 411-data-evaluate-expression A+3 411^done,value="4" (gdb) 511-data-evaluate-expression "A + 3" 511^done,value="4" (gdb) |
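The quoting rule above (expressions containing spaces must be enclosed in double quotes) can be applied mechanically by a frontend. The escaping of embedded quotes and backslashes below is a cautious assumption, not something this section specifies:

```python
def mi_quote(expr):
    """Quote an expression argument for an MI command line."""
    if any(c in expr for c in ' "\\'):
        # escape backslashes first, then embedded double quotes
        escaped = expr.replace('\\', '\\\\').replace('"', '\\"')
        return '"' + escaped + '"'
    return expr
```

With this helper, `A+3` is passed through unchanged while `A + 3` becomes `"A + 3"`, matching the last two transcript lines above.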
-data-list-changed-registers Command
-data-list-changed-registers |
Display a list of the registers that have changed.
GDB doesn't have a direct analog for this command; gdbtk
has the corresponding command `gdb_changed_register_list'.
On a PPC MBX board:
(gdb)
-exec-continue
^running
(gdb)
*stopped,reason="breakpoint-hit",disp="keep",bkptno="1",frame={
func="main",args=[],file="try.c",fullname="/home/foo/bar/try.c",
line="5"}
(gdb)
-data-list-changed-registers
^done,changed-registers=["0","1","2","4","5","6","7","8","9",
"10","11","13","14","15","16","17","18","19","20","21","22","23",
"24","25","26","27","28","30","31","64","65","66","67","69"]
(gdb)
|
-data-list-register-names Command
-data-list-register-names [ ( regno )+ ] |
Show a list of register names for the current target. If no arguments are given, it shows a list of the names of all the registers. If integer numbers are given as arguments, it will print a list of the names of the registers corresponding to the arguments. To ensure consistency between a register name and its number, the output list may include empty register names.
GDB does not have a command which corresponds to
`-data-list-register-names'. In gdbtk there is a
corresponding command `gdb_regnames'.
For the PPC MBX board:
(gdb) -data-list-register-names ^done,register-names=["r0","r1","r2","r3","r4","r5","r6","r7", "r8","r9","r10","r11","r12","r13","r14","r15","r16","r17","r18", "r19","r20","r21","r22","r23","r24","r25","r26","r27","r28","r29", "r30","r31","f0","f1","f2","f3","f4","f5","f6","f7","f8","f9", "f10","f11","f12","f13","f14","f15","f16","f17","f18","f19","f20", "f21","f22","f23","f24","f25","f26","f27","f28","f29","f30","f31", "", "pc","ps","cr","lr","ctr","xer"] (gdb) -data-list-register-names 1 2 3 ^done,register-names=["r1","r2","r3"] (gdb) |
-data-list-register-values Command
-data-list-register-values fmt [ ( regno )*] |
Display the registers' contents. fmt is the format according to which the registers' contents are to be returned, followed by an optional list of numbers specifying the registers to display. A missing list of numbers indicates that the contents of all the registers must be returned.
Allowed formats for fmt are:
x
o
t
d
r
N
The corresponding commands are `info reg', `info
all-reg', and (in gdbtk) `gdb_fetch_registers'.
For a PPC MBX board (note: line breaks are for readability only, they don't appear in the actual output):
(gdb)
-data-list-register-values r 64 65
^done,register-values=[{number="64",value="0xfe00a300"},
{number="65",value="0x00029002"}]
(gdb)
-data-list-register-values x
^done,register-values=[{number="0",value="0xfe0043c8"},
{number="1",value="0x3fff88"},{number="2",value="0xfffffffe"},
{number="3",value="0x0"},{number="4",value="0xa"},
{number="5",value="0x3fff68"},{number="6",value="0x3fff58"},
{number="7",value="0xfe011e98"},{number="8",value="0x2"},
{number="9",value="0xfa202820"},{number="10",value="0xfa202808"},
{number="11",value="0x1"},{number="12",value="0x0"},
{number="13",value="0x4544"},{number="14",value="0xffdfffff"},
{number="15",value="0xffffffff"},{number="16",value="0xfffffeff"},
{number="17",value="0xefffffed"},{number="18",value="0xfffffffe"},
{number="19",value="0xffffffff"},{number="20",value="0xffffffff"},
{number="21",value="0xffffffff"},{number="22",value="0xfffffff7"},
{number="23",value="0xffffffff"},{number="24",value="0xffffffff"},
{number="25",value="0xffffffff"},{number="26",value="0xfffffffb"},
{number="27",value="0xffffffff"},{number="28",value="0xf7bfffff"},
{number="29",value="0x0"},{number="30",value="0xfe010000"},
{number="31",value="0x0"},{number="32",value="0x0"},
{number="33",value="0x0"},{number="34",value="0x0"},
{number="35",value="0x0"},{number="36",value="0x0"},
{number="37",value="0x0"},{number="38",value="0x0"},
{number="39",value="0x0"},{number="40",value="0x0"},
{number="41",value="0x0"},{number="42",value="0x0"},
{number="43",value="0x0"},{number="44",value="0x0"},
{number="45",value="0x0"},{number="46",value="0x0"},
{number="47",value="0x0"},{number="48",value="0x0"},
{number="49",value="0x0"},{number="50",value="0x0"},
{number="51",value="0x0"},{number="52",value="0x0"},
{number="53",value="0x0"},{number="54",value="0x0"},
{number="55",value="0x0"},{number="56",value="0x0"},
{number="57",value="0x0"},{number="58",value="0x0"},
{number="59",value="0x0"},{number="60",value="0x0"},
{number="61",value="0x0"},{number="62",value="0x0"},
{number="63",value="0x0"},{number="64",value="0xfe00a300"},
{number="65",value="0x29002"},{number="66",value="0x202f04b5"},
{number="67",value="0xfe0043b0"},{number="68",value="0xfe00b3e4"},
{number="69",value="0x20002b03"}]
(gdb)
|
-data-read-memory Command
This command is deprecated, use -data-read-memory-bytes instead.
-data-read-memory [ -o byte-offset ] address word-format word-size nr-rows nr-cols [ aschar ] |
where:
print command (see section Output Formats).
This command displays memory contents as a table of nr-rows by
nr-cols words, each word being word-size bytes. In total,
nr-rows * nr-cols * word-size bytes are read
(returned as `total-bytes'). Should less than the requested number
of bytes be returned by the target, the missing words are identified
using `N/A'. The number of bytes read from the target is returned
in `nr-bytes' and the starting address used to read memory in
`addr'.
The address of the next/previous row or page is available in `next-row' and `prev-row', `next-page' and `prev-page'.
The corresponding command is `x'. gdbtk has
`gdb_get_mem' memory read command.
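The navigation fields follow from the table geometry: with row = nr-cols * word-size bytes per displayed row and page = nr-rows * row bytes in the whole table, the examples below are consistent with next-row and next-page pointing just past the table, and prev-row/prev-page pointing one row/page before it. A sketch of this reading, which is inferred from the examples rather than stated normatively:

```python
def memory_table_nav(addr, word_size, nr_rows, nr_cols):
    """Recompute the navigation fields of -data-read-memory output."""
    row = nr_cols * word_size          # bytes per displayed row
    page = nr_rows * row               # bytes in the whole table
    return {
        "next-row":  addr + page,      # first row after the table
        "prev-row":  addr - row,       # row just before the table
        "next-page": addr + page,
        "prev-page": addr - page,
    }
```

For the first example below (addr 0x1390, one-byte words, 3 rows of 2 columns) this reproduces next-row 0x1396, prev-row 0x138e, and prev-page 0x138a.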
Read six bytes of memory starting at bytes+6 but then offset by
-6 bytes. Format as three rows of two columns. One byte per
word. Display each word in hex.
(gdb)
9-data-read-memory -o -6 -- bytes+6 x 1 3 2
9^done,addr="0x00001390",nr-bytes="6",total-bytes="6",
next-row="0x00001396",prev-row="0x0000138e",next-page="0x00001396",
prev-page="0x0000138a",memory=[
{addr="0x00001390",data=["0x00","0x01"]},
{addr="0x00001392",data=["0x02","0x03"]},
{addr="0x00001394",data=["0x04","0x05"]}]
(gdb)
|
Read two bytes of memory starting at address shorts + 64 and
display as a single word formatted in decimal.
(gdb)
5-data-read-memory shorts+64 d 2 1 1
5^done,addr="0x00001510",nr-bytes="2",total-bytes="2",
next-row="0x00001512",prev-row="0x0000150e",
next-page="0x00001512",prev-page="0x0000150e",memory=[
{addr="0x00001510",data=["128"]}]
(gdb)
|
Read thirty two bytes of memory starting at bytes+16 and format
as eight rows of four columns. Include a string encoding with `x'
used as the non-printable character.
(gdb)
4-data-read-memory bytes+16 x 1 8 4 x
4^done,addr="0x000013a0",nr-bytes="32",total-bytes="32",
next-row="0x000013c0",prev-row="0x0000139c",
next-page="0x000013c0",prev-page="0x00001380",memory=[
{addr="0x000013a0",data=["0x10","0x11","0x12","0x13"],ascii="xxxx"},
{addr="0x000013a4",data=["0x14","0x15","0x16","0x17"],ascii="xxxx"},
{addr="0x000013a8",data=["0x18","0x19","0x1a","0x1b"],ascii="xxxx"},
{addr="0x000013ac",data=["0x1c","0x1d","0x1e","0x1f"],ascii="xxxx"},
{addr="0x000013b0",data=["0x20","0x21","0x22","0x23"],ascii=" !\"#"},
{addr="0x000013b4",data=["0x24","0x25","0x26","0x27"],ascii="$%&'"},
{addr="0x000013b8",data=["0x28","0x29","0x2a","0x2b"],ascii="()*+"},
{addr="0x000013bc",data=["0x2c","0x2d","0x2e","0x2f"],ascii=",-./"}]
(gdb)
|
-data-read-memory-bytes Command
-data-read-memory-bytes [ -o byte-offset ] address count |
where:
This command attempts to read all accessible memory regions in the specified range. First, all regions marked as unreadable in the memory map (if one is defined) will be skipped. See section 10.17 Memory Region Attributes. Second, GDB will attempt to read the remaining regions. For each one, if reading the full region results in an error, GDB will try to read a subset of the region.
In general, every single byte in the region may be readable or not, and the only way to read every readable byte is to try a read at every address, which is not practical. Therefore, GDB will attempt to read all accessible bytes at either the beginning or the end of the region, using a binary division scheme. This heuristic works well for reading across a memory map boundary. Note that if a region has a readable range that is neither at the beginning nor the end, GDB will not read it.
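The prefix behaviour described above can be modelled with a binary search: since a read of [begin, begin+n) fails as soon as any byte in it is unreadable, the longest readable prefix length is monotone and can be found in O(log n) probes. This is a simplified sketch, not GDB's exact algorithm; `can_read` stands in for a target read attempt:

```python
def longest_readable_prefix(can_read, begin, end):
    """Largest n such that can_read(begin, begin + n) succeeds."""
    lo, hi = 0, end - begin
    while lo < hi:
        mid = (lo + hi + 1) // 2       # bias upward so the loop terminates
        if can_read(begin, begin + mid):
            lo = mid                   # prefix of length mid is readable
        else:
            hi = mid - 1               # some byte below begin+mid failed
    return lo
```

The readable suffix can be found by running the same search on the tail of the region; a readable range strictly in the middle is never discovered, matching the caveat above.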
The result record (see section 27.5.1 GDB/MI Result Records) that is output of the command includes a field named `memory' whose content is a list of tuples. Each tuple represents a successfully read memory block and has the following fields:
begin
end
offset
-data-read-memory-bytes.
contents
The corresponding command is `x'.
(gdb)
-data-read-memory-bytes &a 10
^done,memory=[{begin="0xbffff154",offset="0x00000000",
end="0xbffff15e",
contents="01000000020000000300"}]
(gdb)
|
-data-write-memory-bytes Command
-data-write-memory-bytes address contents
-data-write-memory-bytes address contents [count] |
where:
There's no corresponding command.
(gdb) -data-write-memory-bytes &a "aabbccdd" ^done (gdb) |
(gdb) -data-write-memory-bytes &a "aabbccdd" 16e ^done (gdb) |
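The second example above passes a count argument alongside a short pattern. A plausible frontend-side model, assuming (as the examples suggest, though this section does not spell it out) that when count exceeds the pattern length the pattern is repeated to fill count bytes:

```python
def expand_write_pattern(contents_hex, count=None):
    """Expand the contents argument of -data-write-memory-bytes."""
    data = bytes.fromhex(contents_hex)
    if count is None or count <= len(data):
        return data if count is None else data[:count]
    reps = -(-count // len(data))      # ceiling division
    return (data * reps)[:count]       # repeat the pattern to count bytes
```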
The commands defined in this section implement MI support for tracepoints. For a detailed introduction, see 13. Tracepoints.
-trace-find Command
-trace-find mode [parameters...] |
Find a trace frame using criteria defined by mode and
parameters. The following table lists permissible
modes and their parameters. For details of operation, see 13.2.1 tfind n.
If `none' was passed as mode, the response does not have fields. Otherwise, the response may have the following fields:
The corresponding command is `tfind'.
-trace-define-variable Command
-trace-define-variable name [ value ] |
Create trace variable name if it does not exist. If value is specified, sets the initial value of the specified trace variable to that value. Note that the name should start with the `$' character.
The corresponding command is `tvariable'.
-trace-list-variables Command
-trace-list-variables |
Return a table of all defined trace variables. Each element of the table has the following fields:
The corresponding command is `tvariables'.
(gdb)
-trace-list-variables
^done,trace-variables={nr_rows="1",nr_cols="3",
hdr=[{width="15",alignment="-1",col_name="name",colhdr="Name"},
{width="11",alignment="-1",col_name="initial",colhdr="Initial"},
{width="11",alignment="-1",col_name="current",colhdr="Current"}],
body=[variable={name="$trace_timestamp",initial="0"}
variable={name="$foo",initial="10",current="15"}]}
(gdb)
-trace-save [-r ] filename |
Saves the collected trace data to filename. Without the `-r' option, the data is downloaded from the target and saved in a local file. With the `-r' option the target is asked to perform the save.
The corresponding command is `tsave'.
-trace-start |
Starts a tracing experiment. The result of this command does not have any fields.
The corresponding command is `tstart'.
-trace-status |
Obtains the status of a tracing experiment. The result may include the following fields:
supported
May have a value of either `0', when no tracing operations are supported, `1', when all tracing operations are supported, or `file' when examining a trace file. In the latter case, examining of trace frames is possible but new tracing experiments cannot be started. This field is always present.
running
May have a value of either `0' or `1' depending on whether a tracing experiment is in progress on the target. This field is present if the `supported' field is not `0'.
stop-reason
Reports the reason why the tracing was stopped last time. The value of `request' means the tracing was stopped via the -trace-stop command. The value of `overflow' means the tracing buffer is full. The value of `disconnection' means tracing was automatically stopped when GDB has disconnected. The value of `passcount' means tracing was stopped when a tracepoint was passed a maximal number of times for that tracepoint. This field is present if the `supported' field is not `0'.
circular
1 means that the trace buffer is circular and old trace frames will be discarded if necessary to make room, 0 means that the trace buffer is linear and may fill up.
disconnected
1 means that tracing will continue after GDB disconnects, 0 means that the trace run will stop.
The corresponding command is `tstatus'.
-trace-stop |
Stops a tracing experiment. The result of this command has the same
fields as -trace-status, except that the `supported' and
`running' fields are not output.
The corresponding command is `tstop'.
-symbol-list-lines Command
-symbol-list-lines filename |
Print the list of lines that contain code and their associated program addresses for the given source filename. The entries are sorted in ascending PC order.
There is no corresponding command.
(gdb)
-symbol-list-lines basics.c
^done,lines=[{pc="0x08048554",line="7"},{pc="0x0804855a",line="8"}]
(gdb)
This section describes the GDB/MI commands to specify executable file names and to read in and obtain symbol table information.
-file-exec-and-symbols Command
-file-exec-and-symbols file |
Specify the executable file to be debugged. This file is the one from which the symbol table is also read. If no file is specified, the command clears the executable and symbol information. If breakpoints are set when using this command with no arguments, GDB will produce error messages. Otherwise, no output is produced, except a completion notification.
The corresponding command is `file'.
(gdb) -file-exec-and-symbols /kwikemart/marge/ezannoni/TRUNK/mbx/hello.mbx ^done (gdb) |
-file-exec-file Command
-file-exec-file file |
Specify the executable file to be debugged. Unlike `-file-exec-and-symbols', the symbol table is not read from this file. If used without argument, clears the information about the executable file. No output is produced, except a completion notification.
The corresponding command is `exec-file'.
(gdb) -file-exec-file /kwikemart/marge/ezannoni/TRUNK/mbx/hello.mbx ^done (gdb) |
-file-list-exec-source-file Command
-file-list-exec-source-file |
List the line number, the current source file, and the absolute path to the current source file for the current executable. The macro information field has a value of `1' or `0' depending on whether or not the file includes preprocessor macro information.
The GDB equivalent is `info source'.
(gdb) 123-file-list-exec-source-file 123^done,line="1",file="foo.c",fullname="/home/bar/foo.c",macro-info="1" (gdb) |
-file-list-exec-source-files Command
-file-list-exec-source-files |
List the source files for the current executable.
It will always output both the filename and fullname (absolute file name) of a source file.
The equivalent is `info sources'.
gdbtk has an analogous command `gdb_listfiles'.
(gdb)
-file-list-exec-source-files
^done,files=[
{file=foo.c,fullname=/home/foo.c},
{file=/home/bar.c,fullname=/home/bar.c},
{file=gdb_could_not_find_fullpath.c}]
(gdb)
-file-symbol-file Command
-file-symbol-file file |
Read symbol table info from the specified file argument. When used without arguments, clears GDB's symbol table info. No output is produced, except for a completion notification.
The corresponding command is `symbol-file'.
(gdb) -file-symbol-file /kwikemart/marge/ezannoni/TRUNK/mbx/hello.mbx ^done (gdb) |
-target-attach Command
-target-attach pid | gid | file |
Attach to a process pid, a file file outside of GDB, or a thread group gid. If attaching to a thread group, the id previously returned by `-list-thread-groups --available' must be used.
The corresponding command is `attach'.
(gdb)
-target-attach 34
=thread-created,id="1"
*stopped,thread-id="1",frame={addr="0xb7f7e410",func="bar",args=[]}
^done
(gdb)
-target-detach Command
-target-detach [ pid | gid ] |
Detach from the remote target, which normally resumes its execution. If either pid or gid is specified, detaches from either the specified process or the specified thread group. There's no output.
The corresponding command is `detach'.
(gdb) -target-detach ^done (gdb) |
-target-disconnect Command
-target-disconnect |
Disconnect from the remote target. There's no output and the target is generally not resumed.
The corresponding command is `disconnect'.
(gdb) -target-disconnect ^done (gdb) |
-target-download Command
-target-download |
Loads the executable onto the remote target. It prints out an update message every half second, which includes the fields:
section
The name of the section.
section-sent
The size of what has been sent so far for that section.
section-size
The size of the section.
total-sent
The total size of what was sent so far (the current and the previous sections).
total-size
The size of all the sections that will be downloaded.
Each message is sent as a status record (see section GDB/MI Output Syntax).
In addition, it prints the name and size of the sections, as they are downloaded. These messages include the following fields:
section
The name of the section.
section-size
The size of the section.
total-size
The size of all the sections that will be downloaded.
At the end, a summary is printed.
The corresponding command is `load'.
Note: each status message appears on a single line. Here the messages have been broken down so that they can fit onto a page.
(gdb)
-target-download
+download,{section=".text",section-size="6668",total-size="9880"}
+download,{section=".text",section-sent="512",section-size="6668",
total-sent="512",total-size="9880"}
+download,{section=".text",section-sent="1024",section-size="6668",
total-sent="1024",total-size="9880"}
+download,{section=".text",section-sent="1536",section-size="6668",
total-sent="1536",total-size="9880"}
+download,{section=".text",section-sent="2048",section-size="6668",
total-sent="2048",total-size="9880"}
+download,{section=".text",section-sent="2560",section-size="6668",
total-sent="2560",total-size="9880"}
+download,{section=".text",section-sent="3072",section-size="6668",
total-sent="3072",total-size="9880"}
+download,{section=".text",section-sent="3584",section-size="6668",
total-sent="3584",total-size="9880"}
+download,{section=".text",section-sent="4096",section-size="6668",
total-sent="4096",total-size="9880"}
+download,{section=".text",section-sent="4608",section-size="6668",
total-sent="4608",total-size="9880"}
+download,{section=".text",section-sent="5120",section-size="6668",
total-sent="5120",total-size="9880"}
+download,{section=".text",section-sent="5632",section-size="6668",
total-sent="5632",total-size="9880"}
+download,{section=".text",section-sent="6144",section-size="6668",
total-sent="6144",total-size="9880"}
+download,{section=".text",section-sent="6656",section-size="6668",
total-sent="6656",total-size="9880"}
+download,{section=".init",section-size="28",total-size="9880"}
+download,{section=".fini",section-size="28",total-size="9880"}
+download,{section=".data",section-size="3156",total-size="9880"}
+download,{section=".data",section-sent="512",section-size="3156",
total-sent="7236",total-size="9880"}
+download,{section=".data",section-sent="1024",section-size="3156",
total-sent="7748",total-size="9880"}
+download,{section=".data",section-sent="1536",section-size="3156",
total-sent="8260",total-size="9880"}
+download,{section=".data",section-sent="2048",section-size="3156",
total-sent="8772",total-size="9880"}
+download,{section=".data",section-sent="2560",section-size="3156",
total-sent="9284",total-size="9880"}
+download,{section=".data",section-sent="3072",section-size="3156",
total-sent="9796",total-size="9880"}
^done,address="0x10004",load-size="9880",transfer-rate="6586",
write-rate="429"
(gdb)
-target-select Command
-target-select type parameters ... |
Connect GDB to the remote target. This command takes two args:
type
The type of target, for instance `remote', etc.
parameters
Device names, host names and the like. See section 19.2 Commands for Managing Targets, for more details.
The output is a connection notification, followed by the address at which the target program is, in the following form:
^connected,addr="address",func="function name", args=[arg list] |
The corresponding command is `target'.
(gdb) -target-select remote /dev/ttya ^connected,addr="0xfe00a300",func="??",args=[] (gdb) |
-target-file-put Command
-target-file-put hostfile targetfile |
Copy file hostfile from the host system (the machine running GDB) to targetfile on the target system.
The corresponding command is `remote put'.
(gdb) -target-file-put localfile remotefile ^done (gdb) |
-target-file-get Command
-target-file-get targetfile hostfile |
Copy file targetfile from the target system to hostfile on the host system.
The corresponding command is `remote get'.
(gdb) -target-file-get remotefile localfile ^done (gdb) |
-target-file-delete Command
-target-file-delete targetfile |
Delete targetfile from the target system.
The corresponding command is `remote delete'.
(gdb) -target-file-delete remotefile ^done (gdb) |
-gdb-exit Command
-gdb-exit |
Exit GDB immediately.
Approximately corresponds to `quit'.
(gdb) -gdb-exit ^exit |
-gdb-set Command
-gdb-set |
Set an internal GDB variable.
The corresponding command is `set'.
(gdb) -gdb-set $foo=3 ^done (gdb) |
-gdb-show Command
-gdb-show |
Show the current value of a GDB variable.
The corresponding command is `show'.
(gdb) -gdb-show annotate ^done,value="0" (gdb) |
-gdb-version Command
-gdb-version |
Show version information for GDB. Used mostly in testing.
The GDB equivalent is `show version'. GDB by default shows this information when you start an interactive session.
(gdb)
-gdb-version
~GNU gdb 5.2.1
~Copyright 2000 Free Software Foundation, Inc.
~GDB is free software, covered by the GNU General Public License, and
~you are welcome to change it and/or distribute copies of it under
~ certain conditions.
~Type "show copying" to see the conditions.
~There is absolutely no warranty for GDB.  Type "show warranty" for
~ details.
~This GDB was configured as "--host=sparc-sun-solaris2.5.1 --target=ppc-eabi".
^done
(gdb)
-list-features Command
Returns a list of particular features of the MI protocol that this version of gdb implements. A feature can be a command, or a new field in an output of some command, or even an important bugfix. While a frontend can sometimes detect presence of a feature at runtime, it is easier to perform detection at debugger startup.
The command returns a list of strings, with each string naming an available feature. Each returned string is just a name, it does not have any internal structure. The list of possible feature names is given below.
Example output:
(gdb) -list-features ^done,result=["feature1","feature2"] |
The current list of features is:
`frozen-varobjs'
Indicates support of the -var-set-frozen command, as well as possible presence of the `frozen' field in the output of -var-create.
`pending-breakpoints'
Indicates support of the -f option to the -break-insert command.
`python'
Indicates Python scripting support, Python-based pretty-printing commands, and possible presence of the `display_hint' field in the output of -var-list-children.
`thread-info'
Indicates support of the -thread-info command.
`data-read-memory-bytes'
Indicates support of the -data-read-memory-bytes and the -data-write-memory-bytes commands.
`ada-task-info'
Indicates support of the -ada-task-info command.
-list-target-features Command
Returns a list of particular features that are supported by the
target. Those features affect the permitted MI commands, but
unlike the features reported by the -list-features command, the
features depend on which target GDB is using at the moment. Whenever
a target can change, due to commands such as -target-select,
-target-attach or -exec-run, the list of target features
may change, and the frontend should obtain it again.
Example output:
(gdb) -list-target-features ^done,result=["async"] |
The current list of features is:
`async'
Indicates that the target is capable of asynchronous command execution, which means that GDB will accept further commands while the target is running.
-list-thread-groups Command
-list-thread-groups [ --available ] [ --recurse 1 ] [ group ... ] |
Lists thread groups (see section 27.1.3 Thread groups). When a single thread group is passed as the argument, lists the children of that group. When several thread groups are passed, lists information about those thread groups. Without any parameters, lists information about all top-level thread groups.
Normally, thread groups that are being debugged are reported. With the `--available' option, GDB reports thread groups available on the target.
The output of this command may have either a `threads' result or a `groups' result. The `threads' result has a list of tuples as value, with each tuple describing a thread (see section 27.5.6 GDB/MI Thread Information). The `groups' result has a list of tuples as value, each tuple describing a thread group. If top-level groups are requested (that is, no parameter is passed), or when several groups are passed, the output always has a `groups' result. The format of the `group' result is described below.
To reduce the number of roundtrips it's possible to list thread groups together with their children, by passing the `--recurse' option and the recursion depth. Presently, only recursion depth of 1 is permitted. If this option is present, then every reported thread group will also include its children, either as `group' or `threads' field.
In general, any combination of option and parameters is permitted, with the following caveats:
The `groups' result is a list of tuples, where each tuple may have the following fields:
id
Identifier of the thread group. This field is always present.
type
The type of the thread group. At present, only `process' is a valid type.
pid
The target-specific process identifier. This field is only present for thread groups of type `process' and only if the process exists.
num_children
The number of children this thread group has. This field may be absent for an available thread group.
threads
This field has a list of tuples as value, each tuple describing a thread. It may be present only if the `--recurse' option is specified, and it is actually possible to obtain the threads.
cores
This field is a list of integers, each identifying a core that one thread of the group is running on. This field may be absent if such information is not available.
executable
The name of the executable file that corresponds to this thread group. The field is only present for thread groups of type `process', and only if there is a corresponding executable file.
-list-thread-groups
^done,groups=[{id="17",type="process",pid="yyy",num_children="2"}]
-list-thread-groups 17
^done,threads=[{id="2",target-id="Thread 0xb7e14b90 (LWP 21257)",
frame={level="0",addr="0xffffe410",func="__kernel_vsyscall",args=[]},state="running"},
{id="1",target-id="Thread 0xb7e156b0 (LWP 21254)",
frame={level="0",addr="0x0804891f",func="foo",args=[{name="i",value="10"}],
file="/tmp/a.c",fullname="/tmp/a.c",line="158"},state="running"}]
-list-thread-groups --available
^done,groups=[{id="17",type="process",pid="yyy",num_children="2",cores=[1,2]}]
-list-thread-groups --available --recurse 1
^done,groups=[{id="17",type="process",pid="yyy",num_children="2",cores=[1,2],
threads=[{id="1",target-id="Thread 0xb7e14b90",cores=[1]},
{id="2",target-id="Thread 0xb7e14b90",cores=[2]}]},...]
-list-thread-groups --available --recurse 1 17 18
^done,groups=[{id="17",type="process",pid="yyy",num_children="2",cores=[1,2],
threads=[{id="1",target-id="Thread 0xb7e14b90",cores=[1]},
{id="2",target-id="Thread 0xb7e14b90",cores=[2]}]},...]
-info-os Command
-info-os [ type ] |
If no argument is supplied, the command returns a table of available operating-system-specific information types. If one of these types is supplied as an argument type, then the command returns a table of data of that type.
The types of information available depend on the target operating system.
The corresponding command is `info os'.
When run on a GNU/Linux system, the output will look something like this:
-info-os
^done,OSDataTable={nr_rows="9",nr_cols="3",
hdr=[{width="10",alignment="-1",col_name="col0",colhdr="Type"},
{width="10",alignment="-1",col_name="col1",colhdr="Description"},
{width="10",alignment="-1",col_name="col2",colhdr="Title"}],
body=[item={col0="processes",col1="Listing of all processes",
col2="Processes"},
item={col0="procgroups",col1="Listing of all process groups",
col2="Process groups"},
item={col0="threads",col1="Listing of all threads",
col2="Threads"},
item={col0="files",col1="Listing of all file descriptors",
col2="File descriptors"},
item={col0="sockets",col1="Listing of all internet-domain sockets",
col2="Sockets"},
item={col0="shm",col1="Listing of all shared-memory regions",
col2="Shared-memory regions"},
item={col0="semaphores",col1="Listing of all semaphores",
col2="Semaphores"},
item={col0="msg",col1="Listing of all message queues",
col2="Message queues"},
item={col0="modules",col1="Listing of all loaded kernel modules",
col2="Kernel modules"}]}
-info-os processes
^done,OSDataTable={nr_rows="190",nr_cols="4",
hdr=[{width="10",alignment="-1",col_name="col0",colhdr="pid"},
{width="10",alignment="-1",col_name="col1",colhdr="user"},
{width="10",alignment="-1",col_name="col2",colhdr="command"},
{width="10",alignment="-1",col_name="col3",colhdr="cores"}],
body=[item={col0="1",col1="root",col2="/sbin/init",col3="0"},
item={col0="2",col1="root",col2="[kthreadd]",col3="1"},
item={col0="3",col1="root",col2="[ksoftirqd/0]",col3="0"},
...
item={col0="26446",col1="stan",col2="bash",col3="0"},
item={col0="28152",col1="stan",col2="bash",col3="1"}]}
(gdb)
(Note that the MI output here includes a "Title" column that
does not appear in command-line info os; this column is useful
for MI clients that want to enumerate the types of data, such as in a
popup menu, but is needless clutter on the command line, and
info os omits it.)
-add-inferior Command
-add-inferior |
Creates a new inferior (see section 4.9 Debugging Multiple Inferiors and Programs). The created inferior is not associated with any executable. Such association may be established with the `-file-exec-and-symbols' command (see section 27.19 GDB/MI File Commands). The command response has a single field, `thread-group', whose value is the identifier of the thread group corresponding to the new inferior.
-add-inferior ^done,thread-group="i3" |
-interpreter-exec Command
-interpreter-exec interpreter command |
Execute the specified command in the given interpreter.
The corresponding command is `interpreter-exec'.
(gdb) -interpreter-exec console "break main" &"During symbol reading, couldn't parse type; debugger out of date?.\n" &"During symbol reading, bad structure-type format.\n" ~"Breakpoint 1 at 0x8074fc6: file ../../src/gdb/main.c, line 743.\n" ^done (gdb) |
-inferior-tty-set Command
-inferior-tty-set /dev/pts/1 |
Set the terminal for future runs of the program being debugged.
The corresponding command is `set inferior-tty /dev/pts/1'.
(gdb) -inferior-tty-set /dev/pts/1 ^done (gdb) |
-inferior-tty-show Command
-inferior-tty-show |
Show the terminal for future runs of the program being debugged.
The corresponding command is `show inferior-tty'.
(gdb) -inferior-tty-set /dev/pts/1 ^done (gdb) -inferior-tty-show ^done,inferior_tty_terminal="/dev/pts/1" (gdb) |
-enable-timings Command
-enable-timings [yes | no] |
Toggle the printing of the wallclock, user and system times for an MI command as a field in its output. This command is to help frontend developers optimize the performance of their code. No argument is equivalent to `yes'.
No equivalent.
(gdb)
-enable-timings
^done
(gdb)
-break-insert main
^done,bkpt={number="1",type="breakpoint",disp="keep",enabled="y",
addr="0x080484ed",func="main",file="myprog.c",
fullname="/home/nickrob/myprog.c",line="73",thread-groups=["i1"],
times="0"},
time={wallclock="0.05185",user="0.00800",system="0.00000"}
(gdb)
-enable-timings no
^done
(gdb)
-exec-run
^running
(gdb)
*stopped,reason="breakpoint-hit",disp="keep",bkptno="1",thread-id="0",
frame={addr="0x080484ed",func="main",args=[{name="argc",value="1"},
{name="argv",value="0xbfb60364"}],file="myprog.c",
fullname="/home/nickrob/myprog.c",line="73"}
(gdb)
This chapter describes annotations in GDB. Annotations were designed to interface GDB to graphical user interfaces or other similar programs which want to interact with GDB at a relatively high level.
The annotation mechanism has largely been superseded by GDB/MI (see section 27. The GDB/MI Interface).
28.1 What is an Annotation? What annotations are; the general syntax.
28.2 The Server Prefix Issuing a command without affecting user state.
28.3 Annotation for Input Annotations marking GDB's need for input.
28.4 Errors Annotations for error messages.
28.5 Invalidation Notices Some annotations describe things now invalid.
28.6 Running the Program Whether the program is running, how it stopped, etc.
28.7 Displaying Source Annotations describing source code.
Annotations start with a newline character, two `control-z' characters, and the name of the annotation. If there is no additional information associated with this annotation, the name of the annotation is followed immediately by a newline. If there is additional information, the name of the annotation is followed by a space, the additional information, and a newline. The additional information cannot contain newline characters.
Any output not beginning with a newline and two `control-z' characters denotes literal output from GDB. Currently there is no need for GDB to output a newline followed by two `control-z' characters, but if there were such a need, the annotations could be extended with an `escape' annotation which means those three characters as output.
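As an illustration of this framing, a frontend can classify a chunk of GDB output by checking for the newline and two `control-z' characters. The helper below is a hypothetical sketch, not part of any GDB API:

```c
#include <string.h>
#include <stddef.h>

/* If chunk begins with an annotation marker (a newline, two
   control-z (0x1a) characters, then the annotation name), return a
   pointer to the name; otherwise return NULL, meaning the chunk is
   literal GDB output. */
static const char *annotation_name (const char *chunk)
{
  if (strncmp (chunk, "\n\032\032", 3) != 0)
    return NULL;
  return chunk + 3;   /* name extends to the next space or newline */
}
```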
The annotation level, which is specified using the `--annotate' command line option (see section 2.1.2 Choosing Modes), controls how much information GDB prints together with its prompt, values of expressions, source lines, and other types of output. Level 0 is for no annotations, level 1 is for use when GDB is run as a subprocess of GNU Emacs, level 3 is the maximum annotation suitable for programs that control GDB, and level 2 annotations have been made obsolete (see section `Limitations of the Annotation Interface' in GDB's Obsolete Annotations).
set annotate level
The GDB command set annotate sets the level of annotations to the specified level.
show annotate
Show the current annotation level.
This chapter describes level 3 annotations.
A simple example of starting up GDB with annotations is:
$ gdb --annotate=3
GNU gdb 6.0
Copyright 2003 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and
you are welcome to change it and/or distribute copies of it under
certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for
details.
This GDB was configured as "i386-pc-linux-gnu"

^Z^Zpre-prompt
(gdb)
^Z^Zprompt
quit
^Z^Zpost-prompt
$
Here `quit' is input to GDB; the rest is output from GDB. The three lines beginning `^Z^Z' (where `^Z' denotes a `control-z' character) are annotations; the rest is output from GDB.
If you prefix a command with `server ' then it will not affect the command history, nor will it affect GDB's notion of which command to repeat if RET is pressed on a line by itself. This means that commands can be run behind a user's back by a front-end in a transparent manner.
The server prefix does not affect the recording of values into
the value history; to print a value without recording it into the
value history, use the output command instead of the
print command.
Using this prefix also disables confirmation requests (see confirmation requests).
When GDB prompts for input, it annotates this fact so it is possible to know when to send output, when the output from a given command is over, etc.
Different kinds of input each have a different input type. Each
input type has three annotations: a pre- annotation, which
denotes the beginning of any prompt which is being output, a plain
annotation, which denotes the end of the prompt, and then a post-
annotation which denotes the end of any echo which may (or may not) be
associated with the input. For example, the prompt input type
features the following annotations:
^Z^Zpre-prompt ^Z^Zprompt ^Z^Zpost-prompt |
The input types are
prompt
When GDB is prompting for a command (the main GDB prompt).
commands
When GDB prompts for a set of commands, like in the commands command. The annotations are repeated for each command which is input.
overload-choice
When GDB wants the user to select between various overloaded functions.
query
When GDB wants the user to confirm a potentially dangerous operation.
prompt-for-continue
When GDB is asking the user to press return to continue. Note: Don't expect this to work well; instead use set height 0 to disable prompting. This is because the counting of lines is buggy in the presence of annotations.
^Z^Zquit |
This annotation occurs right before GDB responds to an interrupt.
^Z^Zerror |
This annotation occurs right before GDB responds to an error.
Quit and error annotations indicate that any annotations which GDB was in the middle of may end abruptly. For example, if a value-history-begin annotation is followed by an error, one cannot expect to receive the matching value-history-end. One cannot expect not to receive it either, however; an error annotation does not necessarily mean that GDB is immediately returning all the way to the top level.
A quit or error annotation may be preceded by
^Z^Zerror-begin |
Any output between that and the quit or error annotation is the error message.
Warning messages are not yet annotated.
The following annotations say that certain pieces of state may have changed.
^Z^Zframes-invalid
The frames (for example, output from the backtrace command) may
have changed.
^Z^Zbreakpoints-invalid
The breakpoints may have changed. For example, the user just added or deleted a breakpoint.
When the program starts executing due to a command such as
step or continue,
^Z^Zstarting |
is output. When the program stops,
^Z^Zstopped |
is output. Before the stopped annotation, a variety of
annotations describe how the program stopped.
^Z^Zexited exit-status
The program exited, and exit-status is the exit status (zero for successful exit, otherwise nonzero).
^Z^Zsignalled
The program exited with a signal. After the ^Z^Zsignalled, the annotation continues:
intro-text ^Z^Zsignal-name name ^Z^Zsignal-name-end middle-text ^Z^Zsignal-string string ^Z^Zsignal-string-end end-text |
where name is the name of the signal, such as SIGILL or
SIGSEGV, and string is the explanation of the signal, such
as Illegal Instruction or Segmentation fault.
intro-text, middle-text, and end-text are for the
user's benefit and have no particular format.
^Z^Zsignal
The syntax of this annotation is just like signalled, but GDB is just saying that the program received the signal, not that it was terminated with it.
^Z^Zbreakpoint number
The program hit breakpoint number number.
^Z^Zwatchpoint number
The program hit watchpoint number number.
The following annotation is used instead of displaying source code:
^Z^Zsource filename:line:character:middle:addr |
where filename is an absolute file name indicating which source file, line is the line number within that file (where 1 is the first line in the file), character is the character position within the file (where 0 is the first character in the file) (for most debug formats this will necessarily point to the beginning of a line), middle is `middle' if addr is in the middle of the line, or `beg' if addr is at the beginning of the line, and addr is the address in the target program associated with the source which is being displayed. addr is in the form `0x' followed by one or more lowercase hex digits (note that this does not depend on the language).
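For illustration, the fields of such a payload can be split with `sscanf`. This is a sketch, not part of GDB: the function name is hypothetical and, since the fields are colon-separated, it assumes a file name without embedded colons.

```c
#include <stdio.h>

/* Parse `filename:line:character:middle:addr' from a source
   annotation payload.  Returns 1 on success.  file must hold at least
   256 bytes, middle and addr at least 32 bytes each. */
static int parse_source_annotation (const char *payload, char *file,
                                    int *line, int *character,
                                    char *middle, char *addr)
{
  return sscanf (payload, "%255[^:]:%d:%d:%31[^:]:%31s",
                 file, line, character, middle, addr) == 5;
}
```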
This chapter documents 's just-in-time (JIT) compilation interface. A JIT compiler is a program or library that generates native executable code at runtime and executes it, usually in order to achieve good performance while maintaining platform independence.
Programs that use JIT compilation are normally difficult to debug because portions of their code are generated at runtime, instead of being loaded from object files, which is where GDB normally finds the program's symbols and debug information. In order to debug programs that use JIT compilation, GDB has an interface that allows the program to register in-memory symbol files with GDB at runtime.
If you are using GDB to debug a program that uses this interface, then it should work transparently so long as you have not stripped the binary. If you are developing a JIT compiler, then the interface is documented in the rest of this chapter. At this time, the only known client of this interface is the LLVM JIT.
Broadly speaking, the JIT interface mirrors the dynamic loader interface. The JIT compiler communicates with GDB by writing data into a global variable and calling a function at a well-known symbol. When GDB attaches, it reads a linked list of symbol files from the global variable to find existing code, and puts a breakpoint in the function so that it can find out about additional code.
29.1 JIT Declarations Relevant C struct declarations
29.2 Registering Code Steps to register code
29.3 Unregistering Code Steps to unregister code
29.4 Custom Debug Info Emit debug information in a custom format
These are the relevant struct declarations that a C program should include to implement the interface:
typedef enum
{
JIT_NOACTION = 0,
JIT_REGISTER_FN,
JIT_UNREGISTER_FN
} jit_actions_t;
struct jit_code_entry
{
struct jit_code_entry *next_entry;
struct jit_code_entry *prev_entry;
const char *symfile_addr;
uint64_t symfile_size;
};
struct jit_descriptor
{
uint32_t version;
/* This type should be jit_actions_t, but we use uint32_t
to be explicit about the bitwidth. */
uint32_t action_flag;
struct jit_code_entry *relevant_entry;
struct jit_code_entry *first_entry;
};
/* GDB puts a breakpoint in this function. */
void __attribute__((noinline)) __jit_debug_register_code() { };
/* Make sure to specify the version statically, because the
debugger may check the version before we can set it. */
struct jit_descriptor __jit_debug_descriptor = { 1, 0, 0, 0 };
If the JIT is multi-threaded, then it is important that the JIT synchronize any modifications to this global data properly, which can easily be done by putting a global mutex around modifications to these structures.
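A minimal locking sketch, assuming POSIX threads. The mutex and the helper names (`jit_lock`, `jit_update_begin`, `jit_update_end`) are illustrative choices of this example, not part of the JIT interface; the descriptor declaration from the previous section is repeated so the sketch stands alone.

```c
#include <pthread.h>
#include <stdint.h>
#include <stddef.h>

/* Descriptor as declared in section 29.1, repeated here so the
   sketch compiles on its own. */
struct jit_code_entry;
struct jit_descriptor
{
  uint32_t version;
  uint32_t action_flag;
  struct jit_code_entry *relevant_entry;
  struct jit_code_entry *first_entry;
};
struct jit_descriptor __jit_debug_descriptor = { 1, 0, 0, 0 };

/* One global mutex serializing every modification of the descriptor
   and the code entry list, as the text recommends. */
static pthread_mutex_t jit_lock = PTHREAD_MUTEX_INITIALIZER;

/* Wrap any update of the global JIT data in these calls. */
void jit_update_begin (void) { pthread_mutex_lock (&jit_lock); }
void jit_update_end (void)   { pthread_mutex_unlock (&jit_lock); }
```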
To register code with GDB, the JIT should follow this protocol:
1. Generate an object file in memory with symbols and other desired debug info. The file must include the virtual addresses of the sections.
2. Add a code entry for the file to the linked list.
3. Point the relevant_entry field of the descriptor at the entry.
4. Set action_flag to JIT_REGISTER_FN and call __jit_debug_register_code.
When GDB is attached and the breakpoint fires, GDB uses the relevant_entry pointer so it doesn't have to walk the list looking for new code. However, the linked list must still be maintained in order to allow GDB to attach to a running process and still find the symbol files.
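The registration steps can be sketched in C as follows, single-threaded for brevity. The declarations from section 29.1 are repeated so the block compiles alone; `jit_register` and its `symfile` argument are illustrative, and the actual generation of the in-memory symbol file is omitted.

```c
#include <stdint.h>
#include <stddef.h>

typedef enum
{
  JIT_NOACTION = 0,
  JIT_REGISTER_FN,
  JIT_UNREGISTER_FN
} jit_actions_t;

struct jit_code_entry
{
  struct jit_code_entry *next_entry;
  struct jit_code_entry *prev_entry;
  const char *symfile_addr;
  uint64_t symfile_size;
};

struct jit_descriptor
{
  uint32_t version;
  uint32_t action_flag;
  struct jit_code_entry *relevant_entry;
  struct jit_code_entry *first_entry;
};

void __attribute__((noinline)) __jit_debug_register_code (void) {}
struct jit_descriptor __jit_debug_descriptor = { 1, 0, 0, 0 };

/* Register an in-memory symbol file: link a new entry at the head of
   the list, point relevant_entry at it, set the action flag and call
   the well-known function on which GDB puts a breakpoint. */
void jit_register (struct jit_code_entry *entry,
                   const char *symfile, uint64_t size)
{
  entry->symfile_addr = symfile;
  entry->symfile_size = size;
  entry->prev_entry = NULL;
  entry->next_entry = __jit_debug_descriptor.first_entry;
  if (entry->next_entry != NULL)
    entry->next_entry->prev_entry = entry;
  __jit_debug_descriptor.first_entry = entry;
  __jit_debug_descriptor.relevant_entry = entry;
  __jit_debug_descriptor.action_flag = JIT_REGISTER_FN;
  __jit_debug_register_code ();
}
```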
If code is freed, then the JIT should use the following protocol:
1. Remove the code entry corresponding to the code from the linked list.
2. Point the relevant_entry field of the descriptor at the code entry.
3. Set action_flag to JIT_UNREGISTER_FN and call __jit_debug_register_code.
If the JIT frees or recompiles code without unregistering it, then GDB and the JIT will leak the memory used for the associated symbol files.
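The unregistration steps look like this in C. As before, the interface declarations are repeated so the sketch compiles alone, and `jit_unregister` is an illustrative name, not part of the interface.

```c
#include <stdint.h>
#include <stddef.h>

typedef enum
{
  JIT_NOACTION = 0,
  JIT_REGISTER_FN,
  JIT_UNREGISTER_FN
} jit_actions_t;

struct jit_code_entry
{
  struct jit_code_entry *next_entry;
  struct jit_code_entry *prev_entry;
  const char *symfile_addr;
  uint64_t symfile_size;
};

struct jit_descriptor
{
  uint32_t version;
  uint32_t action_flag;
  struct jit_code_entry *relevant_entry;
  struct jit_code_entry *first_entry;
};

void __attribute__((noinline)) __jit_debug_register_code (void) {}
struct jit_descriptor __jit_debug_descriptor = { 1, 0, 0, 0 };

/* Unlink the entry whose code was freed, point relevant_entry at it,
   set the action flag and notify GDB via the breakpointed function. */
void jit_unregister (struct jit_code_entry *entry)
{
  if (entry->prev_entry != NULL)
    entry->prev_entry->next_entry = entry->next_entry;
  else
    __jit_debug_descriptor.first_entry = entry->next_entry;
  if (entry->next_entry != NULL)
    entry->next_entry->prev_entry = entry->prev_entry;
  __jit_debug_descriptor.relevant_entry = entry;
  __jit_debug_descriptor.action_flag = JIT_UNREGISTER_FN;
  __jit_debug_register_code ();
}
```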
Generating debug information in platform-native file formats (like ELF or COFF) may be overkill for JIT compilers, especially if all the debug info is used for is displaying a meaningful backtrace. The issue can be resolved by having the JIT writers decide on a debug info format and also provide a reader that parses the debug info generated by the JIT compiler. This section gives a brief overview of writing such a parser. More specific details can be found in the source file `gdb/jit-reader.in', which is also installed as a header at `includedir/gdb/jit-reader.h' for easy inclusion.
The reader is implemented as a shared object (so this functionality is not available on platforms which don't allow loading shared objects at runtime). Two commands, jit-reader-load and jit-reader-unload, are provided, to be used to load and unload the readers from a preconfigured directory. Once loaded, the shared object is used to parse the debug information emitted by the JIT compiler.
29.4.1 Using JIT Debug Info Readers How to use supplied readers correctly 29.4.2 Writing JIT Debug Info Readers Creating a debug-info reader
Readers can be loaded and unloaded using the jit-reader-load
and jit-reader-unload commands.
jit-reader-load reader
Only one reader can be active at a time; trying to load a second
reader when one is already loaded will result in GDB
reporting an error. A new JIT reader can be loaded by first unloading
the current one using jit-reader-unload and then invoking
jit-reader-load.
jit-reader-unload
As mentioned, a reader is essentially a shared object conforming to a certain ABI. This ABI is described in `jit-reader.h'.
`jit-reader.h' defines the structures, macros and functions required to write a reader. It is installed (along with GDB) in `includedir/gdb', where includedir is the system include directory.
Readers need to be released under a GPL-compatible license. A reader can be declared as released under such a license by placing the macro GDB_DECLARE_GPL_COMPATIBLE_READER in a source file.
The entry point for readers is the symbol gdb_init_reader,
which is expected to be a function with the prototype
extern struct gdb_reader_funcs *gdb_init_reader (void); |
struct gdb_reader_funcs contains a set of pointers to callback
functions. These functions are executed to read the debug info
generated by the JIT compiler (read), to unwind stack frames
(unwind) and to create canonical frame IDs
(get_frame_id). It also has a callback that is called when the
reader is being unloaded (destroy). The struct looks like this:
struct gdb_reader_funcs
{
/* Must be set to GDB_READER_INTERFACE_VERSION. */
int reader_version;
/* For use by the reader. */
void *priv_data;
gdb_read_debug_info *read;
gdb_unwind_frame *unwind;
gdb_get_frame_id *get_frame_id;
gdb_destroy_reader *destroy;
};
The callbacks are provided with another set of callbacks by GDB
to do their job. For read, these callbacks are
passed in a struct gdb_symbol_callbacks and for unwind
and get_frame_id, in a struct gdb_unwind_callbacks.
struct gdb_symbol_callbacks has callbacks to create new object
files and new symbol tables inside those object files. struct
gdb_unwind_callbacks has callbacks to read registers off the current
frame and to write out the values of the registers in the previous
frame. Both have a callback (target_read) to read bytes off the
target's address space.
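The pieces above can be put together into a skeleton reader. The callback signatures below are simplified stand-ins for the real prototypes in `jit-reader.h' (hypothetical, for illustration only); a real reader would include that header, use its callback-structure types instead of void *, and place GDB_DECLARE_GPL_COMPATIBLE_READER in one source file:

```c
#include <stddef.h>

/* Simplified stand-ins for the declarations in jit-reader.h; the real
   callback prototypes take struct gdb_symbol_callbacks / struct
   gdb_unwind_callbacks arguments rather than void *.  */
#define GDB_READER_INTERFACE_VERSION 1

struct gdb_reader_funcs;
typedef int gdb_read_debug_info (struct gdb_reader_funcs *self, void *cb,
                                 void *memory, long memory_sz);
typedef int gdb_unwind_frame (struct gdb_reader_funcs *self, void *cb);
typedef void *gdb_get_frame_id (struct gdb_reader_funcs *self, void *cb);
typedef void gdb_destroy_reader (struct gdb_reader_funcs *self);

struct gdb_reader_funcs
{
  int reader_version;           /* Must be GDB_READER_INTERFACE_VERSION.  */
  void *priv_data;              /* For use by the reader.  */
  gdb_read_debug_info *read;
  gdb_unwind_frame *unwind;
  gdb_get_frame_id *get_frame_id;
  gdb_destroy_reader *destroy;
};

/* Callback bodies; a real reader parses its custom debug format here,
   using the callbacks GDB passes in.  */
static int
my_read (struct gdb_reader_funcs *self, void *cb, void *memory, long memory_sz)
{
  return 0;
}

static int
my_unwind (struct gdb_reader_funcs *self, void *cb)
{
  return 0;
}

static void *
my_get_frame_id (struct gdb_reader_funcs *self, void *cb)
{
  return NULL;
}

static void
my_destroy (struct gdb_reader_funcs *self)
{
}

static struct gdb_reader_funcs my_funcs =
{
  GDB_READER_INTERFACE_VERSION,
  NULL,
  my_read, my_unwind, my_get_frame_id, my_destroy
};

/* GDB looks up and calls this entry point after jit-reader-load.  */
struct gdb_reader_funcs *
gdb_init_reader (void)
{
  return &my_funcs;
}
```

Compiled as a shared object and placed in the preconfigured reader directory, a reader structured like this is what jit-reader-load loads.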
The traditional debugging model can be too intrusive to reproduce some bugs. In order to reduce the interference with the program, we can reduce the number of operations performed by the debugger. The In-Process Agent, a shared library, runs within the same process as the inferior, and is able to perform some debugging operations itself. As a result, the debugger is only involved when necessary, and performance of debugging can be improved accordingly. Note that interference with the program can be reduced but can't be removed completely, because the in-process agent will still stop or slow down the program.
The in-process agent can interpret and execute Agent Expressions (see section F. The GDB Agent Expression Mechanism) while performing debugging operations. The agent expressions can be used for different purposes, such as collecting data in tracepoints, and condition evaluation in breakpoints.
You can control whether the in-process agent is used as an aid for debugging with the following commands:
set agent on
set agent off
show agent
30.1 In-Process Agent Protocol
The in-process agent is able to communicate with both GDB and GDBserver (see section 30. In-Process Agent). This section documents the protocol used for communications between GDB or GDBserver and the IPA. In general, GDB or GDBserver sends commands (see section 30.1.2 IPA Protocol Commands) and data to the in-process agent, and then the in-process agent replies back with the return result of the command, or some other information. The data sent to the in-process agent is composed of primitive data types, such as 4-byte or 8-byte types, and composite types, which are called objects (see section 30.1.1 IPA Protocol Objects).
30.1.1 IPA Protocol Objects 30.1.2 IPA Protocol Commands
The commands sent to and results received from the agent may contain some complex data types called objects.
The in-process agent is running on the same machine as GDB or GDBserver, so it doesn't have to handle as many differences between the two ends as the remote protocol (see section E. Remote Serial Protocol) tries to handle. However, there are still some differences between the two ends, which live in two different processes:
Here are the IPA Protocol Objects:
The following table describes important attributes of each IPA protocol object:
| Name | Size | Description |
| agent expression object | ||
| length | 4 | length of byte code |
| byte code | length | contents of byte code |
| tracepoint action for collecting memory | ||
| 'M' | 1 | type of tracepoint action |
| addr | 8 | if basereg is `-1', addr is the address of the lowest byte to collect; otherwise addr is the offset of basereg for memory collecting |
| len | 8 | length of memory for collecting |
| basereg | 4 | the register number containing the starting memory address for collecting |
| tracepoint action for collecting registers | ||
| 'R' | 1 | type of tracepoint action |
| tracepoint action for collecting static trace data | ||
| 'L' | 1 | type of tracepoint action |
| tracepoint action for expression evaluation | ||
| 'X' | 1 | type of tracepoint action |
| agent expression | length of agent expression object | agent expression object |
| tracepoint object | ||
| number | 4 | number of tracepoint |
| address | 8 | address of tracepoint inserted on |
| type | 4 | type of tracepoint |
| enabled | 1 | enabled/disabled state of tracepoint |
| step_count | 8 | step |
| pass_count | 8 | pass |
| numactions | 4 | number of tracepoint actions |
| hit count | 8 | hit count |
| trace frame usage | 8 | trace frame usage |
| compiled_cond | 8 | compiled condition |
| orig_size | 8 | orig size |
| condition | 4 if condition is NULL, otherwise length of agent expression object | zero if condition is NULL, otherwise an agent expression object |
| actions | variable | numactions tracepoint action objects |
The spaces in each command are delimiters to ease reading of the command specifications. They don't exist in real commands.
Replies:
Replies:
Your bug reports play an essential role in making GDB reliable.
Reporting a bug may help you by bringing a solution to your problem, or it may not. But in any case the principal function of a bug report is to help the entire community by making the next version of GDB work better. Bug reports are your contribution to the maintenance of GDB.
In order for a bug report to serve its purpose, you must include the information that enables us to fix the bug.
31.1 Have You Found a Bug? Have you found a bug? 31.2 How to Report Bugs How to report bugs
If you are not sure whether you have found a bug, here are some guidelines:
A number of companies and individuals offer support for GNU products. If you obtained GDB from a support organization, we recommend you contact that organization first.
You can find contact information for many support companies and individuals in the file `etc/SERVICE' in the GNU Emacs distribution.
In any event, we also recommend that you submit bug reports for GDB to .
The fundamental principle of reporting bugs usefully is this: report all the facts. If you are not sure whether to state a fact or leave it out, state it!
Often people omit facts because they think they know what causes the problem and assume that some details do not matter. Thus, you might assume that the name of the variable you use in an example does not matter. Well, probably it does not, but one cannot be sure. Perhaps the bug is a stray memory reference which happens to fetch from the location where that name is stored in memory; perhaps, if the name were different, the contents of that location would fool the debugger into doing the right thing despite the bug. Play it safe and give a specific, complete example. That is the easiest thing for you to do, and the most helpful.
Keep in mind that the purpose of a bug report is to enable us to fix the bug. It may be that the bug has been reported previously, but neither you nor we can know that unless your bug report is complete and self-contained.
Sometimes people give a few sketchy facts and ask, "Does this ring a bell?" Those bug reports are useless, and we urge everyone to refuse to respond to them except to chide the sender to report bugs properly.
To enable us to fix the bug, you should include all these things:
The version of GDB. GDB announces it if you start with no arguments; you can also print it at any time using show version.
Without this, we will not know whether there is any point in looking for the bug in the current version of GDB.
If we were to try to guess the arguments, we would probably guess wrong and then we might not encounter the bug.
Of course, if the bug is that GDB gets a fatal signal, then we will certainly notice it. But if the bug is incorrect output, we might not notice unless it is glaringly wrong. You might as well not give us a chance to make a mistake.
Even if the problem you experience is a fatal signal, you should still say so explicitly. Suppose something strange is going on, such as, your copy of GDB is out of synch, or you have encountered a bug in the C library on your system. (This has happened!) Your copy might crash and ours would not. If you told us to expect a crash, then when ours fails to crash, we would know that the bug was not happening for us. If you had not told us to expect a crash, then we would not be able to draw any conclusion from our observations.
To collect all this information, you can use a session recording program
such as script, which is available on many Unix systems.
Just run your GDB session inside script and then
include the `typescript' file with your bug report.
Another way to record a session is to run GDB inside Emacs and then save the entire buffer to a file.
The line numbers in our development sources will not match those in your sources. Your line numbers would convey no useful information to us.
Here are some things that are not necessary:
Often people who encounter a bug spend a lot of time investigating which changes to the input file will make the bug go away and which changes will not affect it.
This is often time consuming and not very useful, because the way we will find the bug is by running a single example under the debugger with breakpoints, not by pure deduction from a series of examples. We recommend that you save your time for something else.
Of course, if you can find a simpler example to report instead of the original one, that is a convenience for us. Errors in the output will be easier to spot, running under the debugger will take less time, and so on.
However, simplification is not vital; if you do not want to do this, report the bug anyway and send us the entire test case you used.
A patch for the bug does help us if it is a good one. But do not omit the necessary information, such as the test case, on the assumption that a patch is all we need. We might see problems with your patch and decide to fix the problem another way, or we might not understand it at all.
Sometimes with a program as complicated as GDB, it is very hard to construct an example that will make the program follow a certain path through the code. If you do not send us the example, we will not be able to construct one, so we will not be able to verify that the bug is fixed.
And if we cannot understand what bug you are trying to fix, or why your patch should be an improvement, we will not install it. A test case will help us to understand.
Such guesses are usually wrong. Even we cannot guess right about such things without first using the debugger to find the facts.
The GDB project mourns the loss of the following long-time contributors:
Fred Fish
Michael Snyder
Beyond their technical contributions to the project, they were also enjoyable members of the Free Software Community. We will miss them.
The GDB 4 release includes an already-formatted reference card, ready for printing with PostScript or Ghostscript, in the `gdb' subdirectory of the main source directory(14). If you can use PostScript or Ghostscript with your printer, you can print the reference card immediately with `refcard.ps'.
The release also includes the source for the reference card. You can format it, using TeX, by typing:
make refcard.dvi |
The reference card is designed to print in landscape mode on US "letter" size paper; that is, on a sheet 11 inches wide by 8.5 inches high. You will need to specify this form of printing as an option to your DVI output program.
All the documentation for GDB comes as part of the machine-readable distribution. The documentation is written in Texinfo format, which is a documentation system that uses a single source file to produce both on-line information and a printed manual. You can use one of the Info formatting commands to create the on-line version of the documentation and TeX (or texi2roff) to typeset the printed version.
GDB includes an already formatted copy of the on-line Info version of this manual in the `gdb' subdirectory. The main Info file is `gdb-/gdb/gdb.info', and it refers to subordinate files matching `gdb.info*' in the same directory. If necessary, you can print out these files, or read them with any editor; but they are easier to read using the info subsystem in GNU Emacs or the standalone info program, available as part of the GNU Texinfo distribution.
If you want to format these Info files yourself, you need one of the
Info formatting programs, such as texinfo-format-buffer or
makeinfo.
If you have makeinfo installed, and are in the top level
source directory (`gdb-', in the case of
version ), you can make the Info file by typing:
cd gdb
make gdb.info |
If you want to typeset and print copies of this manual, you need TeX, a program to print its DVI output files, and `texinfo.tex', the Texinfo definitions file.
TeX is a typesetting program; it does not print files directly, but produces output files called DVI files. To print a typeset document, you need a program to print DVI files. If your system has TeX installed, chances are it has such a program. The precise command to use depends on your system; lpr -d is common; another (for PostScript devices) is dvips. The DVI print command may require a file name without any extension or a `.dvi' extension.
TeX also requires a macro definitions file called `texinfo.tex'. This file tells TeX how to typeset a document written in Texinfo format. On its own, TeX cannot either read or typeset a Texinfo file. `texinfo.tex' is distributed with GDB and is located in the `gdb-version-number/texinfo' directory.
If you have TeX and a DVI printer program installed, you can typeset and print this manual. First switch to the `gdb' subdirectory of the main source directory (for example, to `gdb-/gdb') and type:
make gdb.dvi |
Then give `gdb.dvi' to your DVI printing program.
C.1 Requirements for Building Requirements for building C.2 Invoking the `configure' Script Invoking the `configure' script C.3 Compiling in Another Directory Compiling in another directory C.4 Specifying Names for Hosts and Targets Specifying names for hosts and targets C.5 `configure' Options Summary of options for configure C.6 System-wide configuration and settings Having a system-wide init file
Building GDB requires various tools and packages to be available. Other packages will be used only if they are found.
Expat is used for:
The `zlib' library is likely included with your operating system distribution; if it is not, you can get the latest version from http://zlib.net.
GDB's features related to character sets require a functioning
iconv implementation. If you are
on a GNU system, then this is provided by the GNU C Library. Some
other systems also provide a working iconv.
If GDB is using the iconv program which is installed
in a non-standard place, you will need to tell GDB where to find it.
This is done with `--with-iconv-bin' which specifies the
directory that contains the iconv program.
On systems without iconv, you can install GNU Libiconv. If you
have previously installed Libiconv, you can use the
`--with-libiconv-prefix' option to configure.
GDB's top-level `configure' and `Makefile' will
arrange to build Libiconv if a directory named `libiconv' appears
in the top-most source directory. If Libiconv is built this way, and
if the operating system does not provide a suitable iconv
implementation, then the just-built library will automatically be used
by GDB. One easy way to set this up is to download GNU
Libiconv, unpack it, and then rename the directory holding the
Libiconv source code to `libiconv'.
GDB comes with a `configure' script that automates the process of
preparing GDB for installation; you can then use make to
build the gdb program.
The GDB distribution includes all the source code you need for GDB in a single directory, whose name is usually composed by appending the version number to `gdb'.
For example, the version distribution is in the `gdb-' directory. That directory contains:
gdb-/configure (and supporting files)
gdb-/gdb
gdb-/bfd
gdb-/include
gdb-/libiberty
gdb-/opcodes
gdb-/readline
gdb-/glob
gdb-/mmalloc
The simplest way to configure and build is to run `configure' from the `gdb-version-number' source directory, which in this example is the `gdb-' directory.
First switch to the `gdb-version-number' source directory if you are not already in it; then run `configure'. Pass the identifier for the platform on which GDB will run as an argument.
For example:
cd gdb-
./configure host
make |
where host is an identifier such as `sun4' or `decstation', that identifies the platform where GDB will run. (You can often leave off host; `configure' tries to guess the correct value by examining your system.)
Running `configure host' and then running make builds the
`bfd', `readline', `mmalloc', and `libiberty'
libraries, then gdb itself. The configured source files, and the
binaries, are left in the corresponding source directories.
`configure' is a Bourne-shell (/bin/sh) script; if your
system does not recognize this automatically when you run a different
shell, you may need to run sh on it explicitly:
sh configure host |
If you run `configure' from a directory that contains source directories for multiple libraries or programs, such as the `gdb-' source directory for version , `configure' creates configuration files for every directory level underneath (unless you tell it not to, with the `--norecursion' option).
You should run the `configure' script from the top directory in the source tree, the `gdb-version-number' directory. If you run `configure' from one of the subdirectories, you will configure only that subdirectory. That is usually not what you want. In particular, if you run the first `configure' from the `gdb' subdirectory of the `gdb-version-number' directory, you will omit the configuration of `bfd', `readline', and other sibling directories of the `gdb' subdirectory. This leads to build errors about missing include files such as `bfd/bfd.h'.
You can install GDB anywhere; it has no hardwired paths.
However, you should make sure that the shell on your path (named by
the `SHELL' environment variable) is publicly readable. Remember
that GDB uses the shell to start your program--some systems refuse to
let GDB debug child processes whose programs are not readable.
If you want to run GDB versions for several host or target machines,
you need a different gdb compiled for each combination of
host and target. `configure' is designed to make this easy by
allowing you to generate each configuration in a separate subdirectory,
rather than in the source directory. If your make program
handles the `VPATH' feature (GNU make does), running
make in each of these directories builds the gdb
program specified there.
To build gdb in a separate directory, run `configure'
with the `--srcdir' option to specify where to find the source.
(You also need to specify a path to find `configure'
itself from your working directory. If the path to `configure'
would be the same as the argument to `--srcdir', you can leave out
the `--srcdir' option; it is assumed.)
For example, with version , you can build in a separate directory for a Sun 4 like this:
cd gdb-
mkdir ../gdb-sun4
cd ../gdb-sun4
../gdb-/configure sun4
make |
When `configure' builds a configuration using a remote source directory, it creates a tree for the binaries with the same structure (and using the same names) as the tree under the source directory. In the example, you'd find the Sun 4 library `libiberty.a' in the directory `gdb-sun4/libiberty', and GDB itself in `gdb-sun4/gdb'.
Make sure that your path to the `configure' script has just one instance of `gdb' in it. If your path to `configure' looks like `../gdb-/gdb/configure', you are configuring only one subdirectory of GDB, not the whole package. This leads to build errors about missing include files such as `bfd/bfd.h'.
One popular reason to build several configurations in separate directories is to configure for cross-compiling (where GDB runs on one machine--the host--while debugging programs that run on another machine--the target). You specify a cross-debugging target by giving the `--target=target' option to `configure'.
When you run make to build a program or library, you must run
it in a configured directory--whatever directory you were in when you
called `configure' (or one of its subdirectories).
The Makefile that `configure' generates in each source
directory also runs recursively. If you type make in a source
directory such as `gdb-' (or in a separate configured
directory configured with `--srcdir=dirname/gdb-'), you
will build all the required libraries, and then build GDB.
When you have multiple hosts or targets configured in separate
directories, you can run make on them in parallel (for example,
if they are NFS-mounted on each of the hosts); they will not interfere
with each other.
The specifications used for hosts and targets in the `configure' script are based on a three-part naming scheme, but some short predefined aliases are also supported. The full naming scheme encodes three pieces of information in the following pattern:
architecture-vendor-os |
For example, you can use the alias sun4 as a host argument,
or as the value for target in a --target=target
option. The equivalent full name is `sparc-sun-sunos4'.
The `configure' script accompanying GDB does not provide
any query facility to list all supported host and target names or
aliases. `configure' calls the Bourne shell script
config.sub to map abbreviations to full names; you can read the
script, if you wish, or you can use it to test your guesses on
abbreviations--for example:
% sh config.sub i386-linux
i386-pc-linux-gnu
% sh config.sub alpha-linux
alpha-unknown-linux-gnu
% sh config.sub hp9k700
hppa1.1-hp-hpux
% sh config.sub sun4
sparc-sun-sunos4.1.1
% sh config.sub sun3
m68k-sun-sunos4.1.1
% sh config.sub i986v
Invalid configuration `i986v': machine `i986v' not recognized |
config.sub is also distributed in the GDB source
directory (`gdb-', for version ).
Here is a summary of the `configure' options and arguments that are most often useful for building GDB. `configure' also has several other options not listed here. See Info file `configure.info', node `What Configure Does', for a full explanation of `configure'.
configure [--help]
[--prefix=dir]
[--exec-prefix=dir]
[--srcdir=dirname]
[--norecursion] [--rm]
[--target=target]
host
|
You may introduce options with a single `-' rather than `--' if you prefer; but you may abbreviate option names if you use `--'.
--help
--prefix=dir
--exec-prefix=dir
--srcdir=dirname
make, or another
make that implements the VPATH feature.
--norecursion
--target=target
There is no convenient way to generate a list of all available targets.
host ...
There is no convenient way to generate a list of all available hosts.
There are many other options available as well, but they are generally needed for special purposes only.
GDB can be configured to have a system-wide init file; this file will be read and executed at startup (see section What GDB does during startup).
Here is the corresponding configure option:
--with-system-gdbinit=file
If GDB has been configured with the option `--prefix=$prefix', it may be subject to relocation. Two possible cases:
If the configured location of the system-wide init file (as given by the
`--with-system-gdbinit' option at configure time) is in the
data-directory (as specified by `--with-gdb-datadir' at configure
time) or in one of its subdirectories, then GDB will look for the
system-wide init file in the directory specified by the
`--data-directory' command-line option.
Note that the system-wide init file is only read once, during
initialization. If the data-directory is changed after GDB has
started with the set data-directory command, the file will not be
reread.
In addition to commands intended for users, GDB includes a number of commands intended for GDB developers that are not documented elsewhere in this manual. These commands are provided here for reference. (For commands that turn on debugging messages, see 22.9 Optional Messages about Internal Happenings.)
maint agent [-at location,] expression
maint agent-eval [-at location,] expression
Translate the given expression into remote agent bytecodes. This
command is useful for debugging the Agent Expression mechanism (see
section F. The GDB Agent Expression Mechanism). The `agent' version produces an
expression useful for data collection, such as by tracepoints, while
`maint agent-eval' produces an expression that evaluates directly
to a result. For instance, a collection expression for globa +
globb will include bytecodes to record four bytes of memory at each
of the addresses of globa and globb, while discarding
the result of the addition, while an evaluation expression will do the
addition and return the sum.
If -at is given, generate remote agent bytecode for location.
If not, generate remote agent bytecode for the current frame PC address.
maint agent-printf format,expr,...
maint info breakpoints
breakpoint
Normal, explicitly set breakpoint.
watchpoint
Normal, explicitly set watchpoint.
longjmp
Internal breakpoint, used to handle correctly stepping through
longjmp calls.
longjmp resume
Internal breakpoint at the target of a longjmp.
until
Temporary internal breakpoint used by the GDB until command.
finish
Temporary internal breakpoint used by the GDB finish command.
shlib events
Shared library events.
maint info bfds
This prints information about each bfd object that is known to
GDB. See section `BFD' in The Binary File Descriptor Library.
set displaced-stepping
show displaced-stepping
set displaced-stepping on
set displaced-stepping off
set displaced-stepping auto
maint check-symtabs
maint cplus first_component name
maint cplus namespace
maint demangle name
maint deprecate command [replacement]
maint undeprecate command
maint dump-me
Cause a fatal signal in the debugger and force it to dump its core.
This is supported only on systems which support aborting a program
with the SIGQUIT signal.
maint internal-error [message-text]
maint internal-warning [message-text]
Cause GDB to call the internal function internal_error
or internal_warning and hence behave as though an internal error
or internal warning has been detected. In addition to reporting the
internal problem, these functions give the user the opportunity to
either quit GDB or create a core file of the current GDB
session.
These commands take an optional parameter message-text that is used as the text of the error or warning message.
Here's an example of using internal-error:
(gdb) maint internal-error testing, 1, 2
.../maint.c:121: internal-error: testing, 1, 2
A problem internal to GDB has been detected. Further
debugging may prove unreliable.
Quit this debugging session? (y or n) n
Create a core file? (y or n) n
(gdb) |
maint set internal-error action [ask|yes|no]
maint show internal-error action
maint set internal-warning action [ask|yes|no]
maint show internal-warning action
maint packet text
maint print architecture [file]
maint print c-tdesc
maint print dummy-frames
(gdb) b add
...
(gdb) print add(2,3)
Breakpoint 2, add (a=2, b=3) at ...
58 return (a + b);
The program being debugged stopped while in a function called from GDB.
...
(gdb) maint print dummy-frames
0x1a57c80: pc=0x01014068 fp=0x0200bddc sp=0x0200bdd6
top=0x0200bdd4 id={stack=0x200bddc,code=0x101405c}
call_lo=0x01014000 call_hi=0x01014001
(gdb)
|
Takes an optional file parameter.
maint print registers [file]
maint print raw-registers [file]
maint print cooked-registers [file]
maint print register-groups [file]
maint print remote-registers [file]
The command maint print raw-registers includes the contents of
the raw register cache; the command maint print
cooked-registers includes the (cooked) value of all registers,
including registers which aren't available on the target or visible
to the user; the command maint print register-groups includes the
groups that each register is a member of; and the command maint
print remote-registers includes the remote target's register numbers
and offsets in the `G' packets. See section `Registers' in GDB Internals.
These commands take an optional parameter, a file name to which to write the information.
maint print reggroups [file]
The register groups info looks like this:
(gdb) maint print reggroups
 Group      Type
 general    user
 float      user
 all        user
 vector     user
 system     user
 save       internal
 restore    internal |
flushregs
maint print objfiles
maint print section-scripts [regexp]
Print the list of scripts loaded from the .debug_gdb_scripts section.
If regexp is specified, only print scripts loaded by object files
matching regexp.
For each script, this command prints its name as specified in the objfile,
and the full path if known.
See section 23.2.3.2 The .debug_gdb_scripts section.
maint print statistics
maint print target-stack
This command prints a short description of each layer that was pushed on the target stack, starting from the top layer down to the bottom one.
maint print type expr
maint set dwarf2 always-disassemble
maint show dwarf2 always-disassemble
Control the behavior of info address when using DWARF debugging
information.
The default is off, which means that GDB should try to
describe a variable's location in an easily readable format. When
on, GDB will instead display the DWARF location
expression in an assembly-like format. Note that some locations are
too complex for GDB to describe simply; in this case you will
always see the disassembly form.
Here is an example of the resulting disassembly:
(gdb) info addr argc
Symbol "argc" is a complex DWARF expression:
1: DW_OP_fbreg 0
|
For more information on these expressions, see the DWARF standard.
maint set dwarf2 max-cache-age
maint show dwarf2 max-cache-age
In object files with inter-compilation-unit references, such as those produced by the GCC option `-feliminate-dwarf2-dups', the DWARF 2 reader needs to frequently refer to previously read compilation units. This setting controls how long a compilation unit will remain in the cache if it is not referenced. A higher limit means that cached compilation units will be stored in memory longer, and more total memory will be used. Setting it to zero disables caching, which will slow down startup, but reduce memory consumption.
maint set profile
maint show profile
Profiling will be disabled until you use the `maint set profile' command to enable it. When you enable profiling, the system will begin collecting timing and execution count data; when you disable profiling or exit GDB, the results will be written to a log file. Remember that if you use profiling, GDB will overwrite the profiling log file (often called `gmon.out'). If you have a record of important profiling data in a `gmon.out' file, be sure to move it to a safe location.
Configuring GDB with `--enable-profiling' arranges for it to be compiled with the `-pg' compiler option.
maint set show-debug-regs
maint show show-debug-regs
ON to enable, OFF to disable. If
enabled, the debug registers' values are shown when GDB inserts or
removes a hardware breakpoint or watchpoint, and when the inferior
triggers a hardware-assisted breakpoint or watchpoint.
maint set show-all-tib
maint show show-all-tib
maint space
maint time
maint translate-address [section] addr
info address command (see section 16. Examining the Symbol Table), except that this
command also allows GDB to find symbols in other sections.
If section was not specified, the section in which the symbol was found is also printed. For dynamically linked executables, the name of executable or shared library containing the symbol is printed as well.
The following command is useful for non-interactive invocations of GDB, such as in the test suite.
set watchdog nsec
show watchdog
There may be occasions when you need to know something about the protocol--for example, if there is only one serial port to your target machine, you might want your program to do something special if it recognizes a packet meant for GDB.
In the examples below, `->' and `<-' are used to indicate transmitted and received data, respectively.
All commands and responses (other than acknowledgments and notifications, see E.9 Notification Packets) are sent as a packet. A packet is introduced with the character `$', the actual packet-data, and the terminating character `#' followed by a two-digit checksum:
$packet-data#checksum
The two-digit checksum is computed as the modulo 256 sum of all characters between the leading `$' and the trailing `#' (an eight bit unsigned checksum).
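The framing and checksum rules above can be sketched in Python; the `m' request body below is purely illustrative:

```python
def make_packet(data: str) -> str:
    """Frame packet-data as $packet-data#checksum, where the checksum is
    the modulo-256 sum of all characters between `$' and `#'."""
    checksum = sum(data.encode("ascii")) % 256
    return "$%s#%02x" % (data, checksum)

# Framing a hypothetical memory-read request:
print(make_packet("m4015bc,2"))   # -> $m4015bc,2#5a
```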
Implementors should note that prior to GDB 5.0 the protocol specification also included an optional two-digit sequence-id:
$sequence-id:packet-data#checksum
That sequence-id was appended to the acknowledgment. GDB has never output sequence-ids. Stubs that handle packets added since GDB 5.0 must not accept sequence-id.
When either the host or the target machine receives a packet, the first response expected is an acknowledgment: either `+' (to indicate the packet was received correctly) or `-' (to request retransmission):
-> $packet-data#checksum
<- +
The `+'/`-' acknowledgments can be disabled once a connection is established. See section E.11 Packet Acknowledgment, for details.
The host (GDB) sends commands, and the target (the debugging stub incorporated in your program) sends a response. In the case of step and continue commands, the response is only sent when the operation has completed, and the target has again stopped all threads in all attached processes. This is the default all-stop mode behavior, but the remote protocol also supports GDB's non-stop execution mode; see E.10 Remote Protocol Support for Non-Stop Mode, for details.
packet-data consists of a sequence of characters with the exception of `#' and `$' (see `X' packet for additional exceptions).
Fields within the packet should be separated using `,' `;' or `:'. Except where otherwise noted all numbers are represented in HEX with leading zeros suppressed.
Implementors should note that prior to GDB 5.0, the character `:' could not appear as the third character in a packet (as it would potentially conflict with the sequence-id).
Binary data in most packets is encoded as two hexadecimal digits per byte of binary data. This allowed the traditional remote protocol to work over connections which were only seven-bit clean. Some packets designed more recently assume an eight-bit clean connection, and use a more efficient encoding to send and receive binary data.
The binary data representation uses 7d (ASCII `}')
as an escape character. Any escaped byte is transmitted as the escape
character followed by the original character XORed with 0x20.
For example, the byte 0x7d would be transmitted as the two
bytes 0x7d 0x5d. The bytes 0x23 (ASCII `#'),
0x24 (ASCII `$'), and 0x7d (ASCII
`}') must always be escaped. Responses sent by the stub
must also escape 0x2a (ASCII `*'), so that it
is not interpreted as the start of a run-length encoded sequence
(described next).
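A non-authoritative sketch of the escaping just described, covering both directions:

```python
ESCAPE = 0x7D                              # '}'
MUST_ESCAPE = {0x23, 0x24, 0x7D}           # '#', '$', '}'
STUB_MUST_ESCAPE = MUST_ESCAPE | {0x2A}    # stubs additionally escape '*'

def escape_binary(data: bytes, from_stub: bool = False) -> bytes:
    """Escape a byte as 0x7d followed by the byte XORed with 0x20."""
    special = STUB_MUST_ESCAPE if from_stub else MUST_ESCAPE
    out = bytearray()
    for b in data:
        if b in special:
            out += bytes([ESCAPE, b ^ 0x20])
        else:
            out.append(b)
    return bytes(out)

def unescape_binary(data: bytes) -> bytes:
    """Reverse the escaping on receipt."""
    out = bytearray()
    it = iter(data)
    for b in it:
        out.append(next(it) ^ 0x20 if b == ESCAPE else b)
    return bytes(out)
```

For example, the byte 0x7d escapes to the two bytes 0x7d 0x5d, and unescaping restores the original.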
Response data can be run-length encoded to save space.
Run-length encoding replaces runs of identical characters with one
instance of the repeated character, followed by a `*' and a
repeat count. The repeat count is itself sent encoded, to avoid
binary characters in data: a value of n is sent as
n+29. For a repeat count greater or equal to 3, this
produces a printable ASCII character, e.g. a space (ASCII
code 32) for a repeat count of 3. (This is because run-length
encoding starts to win for counts 3 or more.) Thus, for example,
`0* ' is a run-length encoding of "0000": the space character
after `*' means repeat the leading 0 32 - 29 =
3 more times.
The printable characters `#' and `$', or characters with a numeric value greater than 126, must not be used as repeat counts. Runs of six repeats (`#') or seven repeats (`$') can be expanded using a repeat count of only five (`"'). For example, `00000000' can be encoded as `0*"00'.
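A decoder for this run-length encoding might look like the following sketch; it assumes well-formed input in which a `*' is always preceded by the character to repeat:

```python
def rle_decode(data: str) -> str:
    """Expand runs encoded as <char>*<chr(count + 29)>."""
    out = []
    i = 0
    while i < len(data):
        if data[i] == "*":
            repeat = ord(data[i + 1]) - 29      # printable count char -> run length
            out.append(out[-1][-1] * repeat)    # repeat the preceding character
            i += 2
        else:
            out.append(data[i])
            i += 1
    return "".join(out)

print(rle_decode("0* "))    # space is chr(32): repeat '0' 32-29 = 3 more times -> 0000
```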
The error response returned for some packets includes a two character error number. That number is not well defined.
For any command not supported by the stub, an empty response (`$#00') should be returned. That way it is possible to extend the protocol. A newer GDB can tell if a packet is supported based on that response.
At a minimum, a stub is required to support the `g' and `G' commands for register access, and the `m' and `M' commands for memory access. Stubs that only control single-threaded targets can implement run control with the `c' (continue), and `s' (step) commands. Stubs that support multi-threading targets should support the `vCont' command. All other commands are optional.
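Because unsupported packets get the empty response, a stub can be built incrementally from a small handler table. The sketch below is illustrative; the handler names and placeholder replies are hypothetical, not part of the protocol:

```python
def handle_packet(packet_data: str, handlers: dict) -> str:
    """Dispatch one unframed packet body to a handler; unknown packets
    get the empty response, which the transport frames as $#00."""
    handler = handlers.get(packet_data[:1])
    if handler is None:
        return ""
    return handler(packet_data[1:])

# Hypothetical minimal handler table for 'm' (read memory) and 'g' (read registers):
handlers = {
    "m": lambda args: "00" * 4,     # placeholder: four bytes of zeroed memory
    "g": lambda args: "00" * 16,    # placeholder register dump
}
print(handle_packet("qUnknown", handlers))   # -> "" (packet unsupported)
```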
The following table provides a complete list of all currently defined commands and their corresponding response data. See section E.13 File-I/O Remote Protocol Extension, for details about the File I/O extension of the remote protocol.
Each packet's description has a template showing the packet's overall syntax, followed by an explanation of the packet's meaning. We include spaces in some of the templates for clarity; these are not part of the packet's syntax. No packet uses spaces to separate its components. For example, a template like `foo bar baz' describes a packet beginning with the three ASCII bytes `foo', followed by a bar, followed directly by a baz. GDB does not transmit a space character between the `foo' and the bar, or between the bar and the baz.
Several packets and replies include a thread-id field to identify a thread. Normally these are positive numbers with a target-specific interpretation, formatted as big-endian hex strings. A thread-id can also be a literal `-1' to indicate all threads, or `0' to pick any thread.
In addition, the remote protocol supports a multiprocess feature in which the thread-id syntax is extended to optionally include both process and thread ID fields, as `ppid.tid'. The pid (process) and tid (thread) components each have the format described above: a positive number with target-specific interpretation formatted as a big-endian hex string, literal `-1' to indicate all processes or threads (respectively), or `0' to indicate an arbitrary process or thread. Specifying just a process, as `ppid', is equivalent to `ppid.-1'. It is an error to specify all processes but a specific thread, such as `p-1.tid'. Note that the `p' prefix is not used for those packets and replies explicitly documented to include a process ID, rather than a thread-id.
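The thread-id forms above can be parsed as in this sketch, where a pid of None means the plain, non-multiprocess form was used:

```python
def parse_thread_id(s: str):
    """Parse a thread-id, optionally in the multiprocess 'p<pid>.<tid>' form.
    Both components are big-endian hex; -1 means 'all', 0 means 'any'."""
    pid = None
    if s.startswith("p"):
        body = s[1:]
        if "." in body:
            pid_str, s = body.split(".", 1)
        else:
            pid_str, s = body, "-1"   # 'p<pid>' is equivalent to 'p<pid>.-1'
        pid = int(pid_str, 16)
    tid = int(s, 16)
    return pid, tid
```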
The multiprocess thread-id syntax extensions are only used if both GDB and the stub report support for the `multiprocess' feature using `qSupported'. See multiprocess extensions, for more information.
Note that all packet forms beginning with an upper- or lower-case letter, other than those described here, are reserved for future use.
Here are the packet descriptions.
Reply:
Reply: See section E.3 Stop Reply Packets, for the reply specifications.
Initialized argv[] array passed into the program. arglen
specifies the number of bytes in the hex encoded byte stream
arg. See gdbserver for more details.
Reply:
JTC: When does the transport layer state change? When it's received, or after the ACK is transmitted. In either case, there are problems if the command or the acknowledgment packet is dropped.
Stan: If people really wanted to add something like this, and get it working for the first time, they ought to modify ser-unix.c to send some kind of out-of-band message to a specially-setup stub and have the switch happen "in between" packets, so that from remote protocol's point of view, nothing actually happened.
Don't use this packet. Use the `Z' and `z' packets instead (see insert breakpoint or watchpoint packet).
Reply: See section E.3 Stop Reply Packets, for the reply specifications.
Reply: See section E.3 Stop Reply Packets, for the reply specifications.
This packet is deprecated for multi-threading support. See vCont packet.
Reply: See section E.3 Stop Reply Packets, for the reply specifications.
This packet is deprecated for multi-threading support. See vCont packet.
Reply: See section E.3 Stop Reply Packets, for the reply specifications.
Don't use this packet; instead, define a general set packet (see section E.4 General Query Packets).
Detach GDB from the remote system; the packet is sent to the remote target before GDB disconnects via the detach command.
The second form, including a process ID, is used when multiprocess protocol extensions are enabled (see multiprocess extensions), to detach only a specific process. The pid is specified as a big-endian hex string.
Reply:
Reply:
DEPRECATED_REGISTER_RAW_SIZE and gdbarch_register_name. Several
standard `g' packets are specified below.
When reading registers from a trace frame (see section Using the Collected Data), the stub may also return a string of literal `x''s in place of the register data digits, to indicate that the corresponding register has not been collected, thus its value is unavailable. For example, for an architecture with 4 registers of 4 bytes each, the following reply indicates to GDB that registers 0 and 2 have not been collected, while registers 1 and 3 have been collected, and both have zero value:
-> g
<- xxxxxxxx00000000xxxxxxxx00000000
Reply:
Reply:
FIXME: There is no description of how to operate when a specific thread context has been selected (i.e. does 'k' kill only that thread?).
The stub need not use any particular size or alignment when gathering data from memory for the response; even if addr is word-aligned and length is a multiple of the word size, the stub is free to use byte accesses, or not. For this reason, this packet may not be suitable for accessing memory-mapped I/O devices.
Reply:
Reply:
Reply:
Reply:
Don't use this packet; use the `R' packet instead.
The `R' packet has no reply.
This packet is deprecated for multi-threading support. See vCont packet.
Reply: See section E.3 Stop Reply Packets, for the reply specifications.
This packet is deprecated for multi-threading support. See vCont packet.
Reply: See section E.3 Stop Reply Packets, for the reply specifications.
Reply:
This packet is only available in extended mode (see extended mode).
Reply:
Currently supported actions are:
The optional argument addr normally associated with the `c', `C', `s', and `S' packets is not supported in `vCont'.
The `t' action is only relevant in non-stop mode (see section E.10 Remote Protocol Support for Non-Stop Mode) and may be ignored by the stub otherwise. A stop reply should be generated for any affected thread not already stopped. When a thread is stopped by means of a `t' action, the corresponding stop reply should indicate that the thread has stopped with signal `0', regardless of whether the target uses some other signal as an implementation detail.
The stub must support `vCont' if it reports support for multiprocess extensions (see multiprocess extensions). Note that in this case `vCont' actions can be specified to apply to all threads in a process by using the `ppid.-1' form of the thread-id.
Reply: See section E.3 Stop Reply Packets, for the reply specifications.
Reply:
Reply:
Reply:
Reply:
This packet is only available in extended mode (see extended mode).
Reply:
Reply:
Each breakpoint and watchpoint packet type is documented separately.
Implementation notes: A remote target shall return an empty string for an unrecognized breakpoint or watchpoint packet type. A remote target shall support either both or neither of a given `Ztype...' and `ztype...' packet pair. To avoid potential problems with duplicate packets, the operations should be implemented in an idempotent way.
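One way a stub can meet the idempotency requirement is to key its breakpoint bookkeeping by the identifying fields, so a duplicated `Z' or `z' packet is harmless. A sketch (the table layout and reply strings here are illustrative, not mandated by the protocol):

```python
# Breakpoints keyed by (type, addr, kind): re-inserting or re-removing
# the same breakpoint is a no-op, so duplicate packets cause no harm.
breakpoints = set()

def insert_bp(btype: int, addr: int, kind: int) -> str:
    breakpoints.add((btype, addr, kind))      # idempotent insert
    return "OK"

def remove_bp(btype: int, addr: int, kind: int) -> str:
    breakpoints.discard((btype, addr, kind))  # idempotent remove
    return "OK"
```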
A memory breakpoint is implemented by replacing the instruction at addr with a software breakpoint or trap instruction. The kind is target-specific and typically indicates the size of the breakpoint in bytes that should be inserted. E.g., the ARM and MIPS can insert either a 2 or 4 byte breakpoint. Some architectures have additional meanings for kind. cond_list is an optional list of conditional expressions in bytecode form that should be evaluated on the target's side. These are the conditions that should be taken into consideration when deciding if the breakpoint trigger should be reported back to GDB.
The cond_list parameter is comprised of a series of expressions, concatenated without separators. Each expression has the following form:
The optional cmd_list parameter introduces commands that may be run on the target, rather than being reported back to GDB. The parameter starts with a numeric flag persist; if the flag is nonzero, then the breakpoint may remain active and the commands continue to be run even when GDB disconnects from the target. Following this flag is a series of expressions concatenated with no separators. Each expression has the following form:
see E.5 Architecture-Specific Protocol Details.
Implementation note: It is possible for a target to copy or move code that contains memory breakpoints (e.g., when implementing overlays). The behavior of this packet, in the presence of such a target, is not defined.
Reply:
A hardware breakpoint is implemented using a mechanism that is not dependent on being able to modify the target's memory. kind and cond_list have the same meaning as in `Z0' packets.
Implementation note: A hardware breakpoint is not affected by code movement.
Reply:
Reply:
Reply:
Reply:
The `C', `c', `S', `s', `vCont', `vAttach', `vRun', `vStopped', and `?' packets can receive any of the below as a reply. Except for `?' and `vStopped', that reply is only returned when the target halts. In the below the exact meaning of signal number is defined by the header `include/gdb/signals.h' in the source code.
As in the description of request packets, we include spaces in the reply templates for clarity; these are not part of the reply packet's syntax. No stop reply packet uses spaces to separate its components.
The currently defined stop reasons are:
The second form of the response, including the process ID of the exited process, can be used only when GDB has reported support for multiprocess protocol extensions; see multiprocess extensions. The pid is formatted as a big-endian hex string.
The second form of the response, including the process ID of the terminated process, can be used only when GDB has reported support for multiprocess protocol extensions; see multiprocess extensions. The pid is formatted as a big-endian hex string.
`parameter...' is a list of parameters as defined for this very system call.
The target replies with this packet when it expects GDB to call a host system call on behalf of the target. GDB replies with an appropriate `F' packet and keeps waiting for the next reply packet from the target. The latest `C', `c', `S' or `s' action is expected to be continued. See section E.13 File-I/O Remote Protocol Extension, for more details.
Packets starting with `q' are general query packets; packets starting with `Q' are general set packets. General query and set packets are a semi-unified form for retrieving and sending information to and from the stub.
The initial letter of a query or set packet is followed by a name indicating what sort of thing the packet applies to. For example, GDB may use a `qSymbol' packet to exchange symbol definitions with the stub. These packet names follow some conventions:
The name of a query or set packet should be separated from any parameters by a `:'; the parameters themselves should be separated by `,' or `;'. Stubs must be careful to match the full packet name, and check for a separator or the end of the packet, in case two packet names share a common prefix. New packets should not begin with `qC', `qP', or `qL'(15).
Like the descriptions of the other packets, each description here has a template showing the packet's overall syntax, followed by an explanation of the packet's meaning. We include spaces in some of the templates for clarity; these are not part of the packet's syntax. No packet uses spaces to separate its components.
Here are the currently defined query and set packets:
Reply:
The initial value 0xffffffff is used to ensure leading zeros affect the CRC.
Note: This is the same CRC used in validating separate debug files (see section Debugging Information in Separate Files). However the algorithm is slightly different. When validating separate debug files, the CRC is computed taking the least significant bit of each byte first, and the final result is inverted to detect trailing zeros.
Reply:
This packet is only available in extended mode (see extended mode).
Reply:
This packet is not probed by default; the remote stub must request it, by supplying an appropriate `qSupported' response (see qSupported). This should only be done on targets that actually support disabling address space randomization.
NOTE: This packet replaces the `qL' query (see below).
Reply:
In response to each query, the target will reply with a list of one or more thread IDs, separated by commas. GDB will respond to each reply with a request for more thread IDs (using the `qs' form of the query), until the target responds with `l' (lower-case ell, for last). Refer to thread-id syntax, for the format of the thread-id fields.
thread-id is the thread ID associated with the thread for which to fetch the TLS address. See thread-id syntax.
offset is the (big endian, hex encoded) offset associated with the thread local variable. (This offset is obtained from the debug information associated with the variable.)
lm is the (big endian, hex encoded) OS/ABI-specific encoding of the load module associated with the thread local storage. For example, a GNU/Linux system will pass the link map address of the shared object associated with the thread local storage under consideration. Other operating environments may choose to represent the load module differently, so the precise meaning of this parameter will vary.
Reply:
thread-id is the thread ID associated with the thread.
Reply:
Don't use this packet; use the `qfThreadInfo' query instead (see above).
Reply:
remote.c:parse_threadlist_response().
Reply:
Relocate the Text section by xxx from its original address.
Relocate the Data section by yyy from its original address.
If the object file format provides segment information (e.g. ELF
`PT_LOAD' program headers), GDB will relocate entire
segments by the supplied offsets.
Note: while a Bss offset may be included in the response,
GDB ignores this and instead applies the Data offset
to the Bss section.
Don't use this packet; use the `qThreadExtraInfo' query instead (see below).
Reply: see remote.c:remote_unpack_thread_info_response().
Reply:
This packet is not probed by default; the remote stub must request it,
by supplying an appropriate `qSupported' response (see qSupported).
Use of this packet is controlled by the set non-stop command;
see section 5.5.2 Non-Stop Mode.
Reply:
Use of this packet is controlled by the set remote pass-signals
command (see section set remote pass-signals).
This packet is not probed by default; the remote stub must request it,
by supplying an appropriate `qSupported' response (see qSupported).
In some cases, the remote stub may need to decide whether to deliver a signal to the program or not without GDB involvement. One example of that is while detaching -- the program's threads may have stopped for signals that haven't yet had a chance of being reported to GDB, and so the remote stub can use the signal list specified by this packet to know whether to deliver or ignore those pending signals.
This does not influence whether to deliver a signal as requested by a resumption packet (see vCont packet).
Signals are numbered identically to continue packets and stop replies (see section E.3 Stop Reply Packets). Each signal list item should be strictly greater than the previous item. Multiple `QProgramSignals' packets do not combine; any earlier `QProgramSignals' list is completely replaced by the new list.
Reply:
Use of this packet is controlled by the set remote program-signals
command (see section set remote program-signals).
This packet is not probed by default; the remote stub must request it,
by supplying an appropriate `qSupported' response (see qSupported).
Reply:
(Note that the qRcmd packet's name is separated from the
command by a `,', not a `:', contrary to the naming
conventions above. Please don't use this packet as a model for new
packets.)
Reply:
Reply:
Reply:
The allowed forms for each feature (either a gdbfeature in the `qSupported' packet, or a stubfeature in the response) are:
Whenever the stub receives a `qSupported' request, the supplied set of features should override any previous request. This allows GDB to put the stub in a known state, even if the stub had previously been communicating with a different version of GDB.
The following values of gdbfeature (for the packet sent by GDB) are defined:
Stubs should ignore any unknown values for gdbfeature. Any GDB which sends a `qSupported' packet supports receiving packets of unlimited length (earlier versions of GDB may reject overly long responses). Additional values for gdbfeature may be defined in the future to let the stub take advantage of new features in GDB, e.g. incompatible improvements in the remote protocol--the `multiprocess' feature is an example of such a feature. The stub's reply should be independent of the gdbfeature entries sent by GDB; first GDB describes all the features it supports, and then the stub replies with all the features it supports.
Similarly, GDB will silently ignore unrecognized stub feature responses, as long as each response uses one of the standard forms.
Some features are flags. A stub which supports a flag feature should respond with a `+' form response. Other features require values, and the stub should respond with an `=' form response.
Each feature has a default value, which GDB will use if `qSupported' is not available or if the feature is not mentioned in the `qSupported' response. The default values are fixed; a stub is free to omit any feature responses that match the defaults.
Not all features can be probed, but for those which can, the probing mechanism is useful: in some cases, a stub's internal architecture may not allow the protocol layer to know some information about the underlying target in advance. This is especially common in stubs which may be configured for multiple targets.
These are the currently defined stub features and their properties:
| Feature Name | Value Required | Default | Probe Allowed |
| `PacketSize' | Yes | `-' | No |
| `qXfer:auxv:read' | No | `-' | Yes |
| `qXfer:btrace:read' | No | `-' | Yes |
| `qXfer:features:read' | No | `-' | Yes |
| `qXfer:libraries:read' | No | `-' | Yes |
| `qXfer:memory-map:read' | No | `-' | Yes |
| `qXfer:sdata:read' | No | `-' | Yes |
| `qXfer:spu:read' | No | `-' | Yes |
| `qXfer:spu:write' | No | `-' | Yes |
| `qXfer:siginfo:read' | No | `-' | Yes |
| `qXfer:siginfo:write' | No | `-' | Yes |
| `qXfer:threads:read' | No | `-' | Yes |
| `qXfer:traceframe-info:read' | No | `-' | Yes |
| `qXfer:uib:read' | No | `-' | Yes |
| `qXfer:fdpic:read' | No | `-' | Yes |
| `Qbtrace:off' | Yes | `-' | Yes |
| `Qbtrace:bts' | Yes | `-' | Yes |
| `QNonStop' | No | `-' | Yes |
| `QPassSignals' | No | `-' | Yes |
| `QStartNoAckMode' | No | `-' | Yes |
| `multiprocess' | No | `-' | No |
| `ConditionalBreakpoints' | No | `-' | No |
| `ConditionalTracepoints' | No | `-' | No |
| `ReverseContinue' | No | `-' | No |
| `ReverseStep' | No | `-' | No |
| `TracepointSource' | No | `-' | No |
| `QAgent' | No | `-' | No |
| `QAllow' | No | `-' | No |
| `QDisableRandomization' | No | `-' | No |
| `EnableDisableTracepoints' | No | `-' | No |
| `QTBuffer:size' | No | `-' | No |
| `tracenz' | No | `-' | No |
| `BreakpointCommands' | No | `-' | No |
These are the currently defined stub features, in more detail:
Reply:
sym_name (hex encoded) is the name of a symbol whose value the target has previously requested.
sym_value (hex) is the value for symbol sym_name. If GDB cannot supply a value for sym_name, then this field will be empty.
Reply:
See section E.6 Tracepoint Packets.
This string is displayed in the info threads display. Some
examples of possible thread extra info strings are `Runnable', or
`Blocked on Mutex'.
Reply:
(Note that the qThreadExtraInfo packet's name is separated from
the command by a `,', not a `:', contrary to the naming
conventions above. Please don't use this packet as a model for new
packets.)
Here are the specific requests of this form defined so far. All `qXfer:object:read:...' requests use the same reply formats, listed below.
This packet is not probed by default; the remote stub must request it, by supplying an appropriate `qSupported' response (see qSupported).
Return a description of the current branch trace. See section E.19 Branch Trace Format. The annex part of the generic `qXfer' packet may have one of the following values:
all
new
This packet is not probed by default; the remote stub must request it by supplying an appropriate `qSupported' response (see qSupported).
This packet is not probed by default; the remote stub must request it, by supplying an appropriate `qSupported' response (see qSupported).
Targets which maintain a list of libraries in the program's memory do not need to implement this packet; it is designed for platforms where the operating system manages the list of loaded libraries.
This packet is not probed by default; the remote stub must request it, by supplying an appropriate `qSupported' response (see qSupported).
This packet is optional for better performance on SVR4 targets. GDB uses memory read packets to read the SVR4 library list otherwise.
This packet is not probed by default; the remote stub must request it, by supplying an appropriate `qSupported' response (see qSupported).
This packet is not probed by default; the remote stub must request it, by supplying an appropriate `qSupported' response (see qSupported).
Read contents of the extra collected static tracepoint marker information. The annex part of the generic `qXfer' packet must be empty (see qXfer read). See section Tracepoint Action Lists.
This packet is not probed by default; the remote stub must request it, by supplying an appropriate `qSupported' response (see qSupported).
This packet is not probed by default; the remote stub must request it, by supplying an appropriate `qSupported' response (see qSupported).
Read the contents of an spufs file on the target system. The
annex specifies which file to read; it must be of the form
`id/name', where id specifies an SPU context ID
in the target process, and name identifies the spufs file
in that context to be accessed.
This packet is not probed by default; the remote stub must request it, by supplying an appropriate `qSupported' response (see qSupported).
This packet is not probed by default; the remote stub must request it, by supplying an appropriate `qSupported' response (see qSupported).
Return a description of the current traceframe's contents. See section E.18 Traceframe Info Format. The annex part of the generic `qXfer' packet must be empty (see qXfer read).
This packet is not probed by default; the remote stub must request it, by supplying an appropriate `qSupported' response (see qSupported).
Return the unwind information block for pc. This packet is used on OpenVMS/ia64 to ask the kernel for unwind information.
This packet is not probed by default.
Read the contents of loadmaps on the target system. The
annex, either `exec' or `interp', specifies which loadmap,
executable loadmap or interpreter loadmap, to read.
This packet is not probed by default; the remote stub must request it, by supplying an appropriate `qSupported' response (see qSupported).
Reply:
errno value.
Here are the specific requests of this form defined so far. All `qXfer:object:write:...' requests use the same reply formats, listed below.
This packet is not probed by default; the remote stub must request it, by supplying an appropriate `qSupported' response (see qSupported).
Write data to an spufs file on the target system. The
annex specifies which file to write; it must be of the form
`id/name', where id specifies an SPU context ID
in the target process, and name identifies the spufs file
in that context to be accessed.
This packet is not probed by default; the remote stub must request it, by supplying an appropriate `qSupported' response (see qSupported).
Reply:
errno value.
This query is used, for example, to know whether the remote process
should be detached or killed when a GDB session is ended with
the quit command.
Reply:
Reply:
Reply:
This section describes how the remote protocol is applied to specific target architectures. Also see G.4 Standard Target Features, for details of XML target descriptions for each architecture.
E.5.1 ARM-specific Protocol Details E.5.2 MIPS-specific Protocol Details
E.5.1.1 ARM Breakpoint Kinds
These breakpoint kinds are defined for the `Z0' and `Z1' packets.
E.5.2.1 MIPS Register Packet Format E.5.2.2 MIPS Breakpoint Kinds
The following g/G packets have previously been defined.
In the below, some thirty-two bit registers are transferred as
sixty-four bits. Those registers should be zero/sign extended (which?)
to fill the space allocated. Register bytes are transferred in target
byte order. The two nibbles within a register byte are transferred
most-significant -- least-significant.
All registers are transferred as sixty-four bit quantities (including
thirty-two bit registers such as sr). The ordering is the same
as MIPS32.
These breakpoint kinds are defined for the `Z0' and `Z1' packets.
Here we describe the packets GDB uses to implement tracepoints (see section 13. Tracepoints).
Replies:
In the series of action packets for a given tracepoint, at most one can have an `S' before its first action. If such a packet is sent, it and the following packets define "while-stepping" actions. Any prior packets define ordinary actions -- that is, those taken when the tracepoint is first hit. If no action packet has an `S', then all the packets in the series specify ordinary tracepoint actions.
The `action...' portion of the packet is a series of actions, concatenated without separators. Each action has one of the following forms:
Any number of actions may be packed together in a single `QTDP' packet, as long as the packet does not exceed the maximum packet length (400 bytes, for many stubs). There may be only one `R' action per tracepoint, and it must precede any `M' or `X' actions. Any registers referred to by `M' and `X' actions must be collected by a preceding `R' action. (The "while-stepping" actions are treated as if they were attached to a separate tracepoint, as far as these restrictions are concerned.)
Replies:
start is the offset of the bytes within the overall source string, while slen is the total length of the source string. This is intended for handling source strings that are longer than will fit in a single packet.
The available string types are `at' for the location, `cond' for the conditional, and `cmd' for an action command. GDB sends a separate packet for each command in the action list, in the same order in which the commands are stored in the list.
The target does not need to do anything with source strings except report them back as part of the replies to the `qTfP'/`qTsP' query packets.
Although this packet is optional, and GDB will only send it if the target replies with `TracepointSource' (see section E.4 General Query Packets), it makes both disconnected tracing and trace files much easier to use. Otherwise the user must be careful that the tracepoints in effect while looking at trace frames are identical to the ones in effect during the trace run; even a small discrepancy could cause `tdump' not to work, or a particular trace frame not to be found.
A successful reply from the stub indicates that the stub has found the requested frame. The response is a series of parts, concatenated without separators, describing the frame we selected. Each part has one of the following forms:
Replies:
GDB uses this to mark read-only regions of memory, like those containing program code. Since these areas never change, they should still have the same contents they did when the tracepoint was hit, so there's no reason for the stub to refuse to provide their contents.
The reply has the form:
running is 1 if the trace is presently
running, or 0 if not. It is followed by semicolon-separated
optional fields that an agent may use to report additional status.
If the trace is not running, the agent may report any of several explanations as one of the optional fields:
Additional optional fields supply statistical and other information. Although not required, they are extremely useful for users monitoring the progress of a trace run. If a trace has stopped, and these numbers are reported, they must reflect the state of the just-stopped trace.
`circular:1' means that the
trace buffer is circular and old trace frames will be discarded if
necessary to make room; `circular:0' means that the trace buffer is linear
and may fill up.
`disconn:1' means that
tracing will continue after GDB disconnects; `disconn:0' means
that the trace run will stop.
Replies:
while-stepping steps are not counted as separate hits, but the
steps' space consumption is added into the usage number.
Replies:
GDB sends qTfP to get the first piece
of data, and multiple qTsP to get additional pieces. Replies
to these packets generally take the form of the QTDP packets
that define tracepoints. (FIXME add detailed syntax)
GDB sends qTfV to get the first variable definition,
and multiple qTsV to get additional variables. Replies to
these packets follow the syntax of the QTDV packets that define
trace state variables.
GDB sends qTfSTM to get the
first piece of data, and multiple qTsSTM to get additional
pieces. Replies to these packets take the following form:
Reply:
address is encoded in hex. id and extra are strings encoded in hex.
In response to each query, the target will reply with a list of one or more markers, separated by commas. GDB will respond to each reply with a request for more markers (using the `qs' form of the query), until the target responds with `l' (lower-case ell, for last).
Replies take the same form as the qTfSTM and qTsSTM packets that list static
tracepoint markers.
A reply of l indicates that no bytes are
available.
A value of -1 tells the target to
use whatever size it prefers.
For user, notes, and tstop, the
text fields are arbitrary strings, hex-encoded.
In response to several of the tracepoint packets, the target may also respond with a number of intermediate `qRelocInsn' request packets before the final result packet, to have GDB handle this relocation operation. If a packet supports this mechanism, its documentation will explicitly say so. See for example the above descriptions for the `QTStart' and `QTDP' packets. The format of the request is:
This requests GDB to copy the instruction at address from to address to, possibly adjusted so that executing the instruction at to has the same effect as executing it at from. GDB writes the adjusted instruction to target memory starting at to.
Replies:
The Host I/O packets allow GDB to perform I/O operations on the far side of a remote link. For example, Host I/O is used to upload and download files to a remote target with its own filesystem. Host I/O uses the same constant values and data structure layout as the target-initiated File-I/O protocol. However, the Host I/O packets are structured differently. The target-initiated protocol relies on target memory to store parameters and buffers. Host I/O requests are initiated by GDB, and the target's memory is not involved. See section E.13 File-I/O Remote Protocol Extension, for more details on the target-initiated protocol.
The Host I/O request packets all encode a single operation along with its arguments. They have this format:
The valid responses to Host I/O packets are:
These are the supported Host I/O operations:
The data read should be returned as a binary attachment on success. If zero bytes were read, the response should include an empty binary attachment (i.e. a trailing semicolon). The return value is the number of target bytes read; the binary attachment may be longer if some characters were escaped.
Unlike many write system calls, there is no
separate count argument; the length of data in the
packet is used. `vFile:write' returns the number of bytes written,
which may be shorter than the length of data, or -1 if an
error occurred.
The data read should be returned as a binary attachment on success. If zero bytes were read, the response should include an empty binary attachment (i.e. a trailing semicolon). The return value is the number of target bytes read; the binary attachment may be longer if some characters were escaped.
When a program on the remote target is running, GDB may
attempt to interrupt it by sending a `Ctrl-C', BREAK or
a BREAK followed by g,
control of which is specified via GDB's `interrupt-sequence'.
The precise meaning of BREAK is defined by the transport
mechanism and may, in fact, be undefined. GDB does not
currently define a BREAK mechanism for any of the network
interfaces except for TCP, in which case GDB sends the
telnet BREAK sequence.
`Ctrl-C', on the other hand, is defined and implemented for all
transport mechanisms. It is represented by sending the single byte
0x03 without any of the usual packet overhead described in
the Overview section (see section E.1 Overview). When a 0x03 byte is
transmitted as part of a packet, it is considered to be packet data
and does not represent an interrupt. E.g., an `X' packet
(see X packet), used for binary downloads, may include an unescaped
0x03 as part of its packet.
BREAK followed by g is also known as Magic SysRq g.
When the Linux kernel receives this sequence from the serial port,
it stops execution and connects to GDB.
Stubs are not required to recognize these interrupt mechanisms, and the precise meaning associated with receipt of the interrupt is implementation defined. If the target supports debugging of multiple threads and/or processes, it should attempt to interrupt all currently-executing threads and processes. If the stub is successful at interrupting the running program, it should send one of the stop reply packets (see section E.3 Stop Reply Packets) to GDB as a result of successfully stopping the program in all-stop mode, and a stop reply for each stopped thread in non-stop mode. Interrupts received while the program is stopped are discarded.
The remote serial protocol includes notifications, packets that require no acknowledgment. Both GDB and the stub may send notifications (although the only notifications defined at present are sent by the stub). Notifications carry information without incurring the round-trip latency of an acknowledgment, and so are useful for low-impact communications where occasional packet loss is not a problem.
A notification packet has the form `% data # checksum', where data is the content of the notification, and checksum is a checksum of data, computed and formatted as for ordinary packets. A notification's data never contains `$', `%' or `#' characters. Upon receiving a notification, the recipient sends no `+' or `-' to acknowledge the notification's receipt or to report its corruption.
Every notification's data begins with a name, which contains no colon characters, followed by a colon character.
Recipients should silently ignore corrupted notifications and notifications they do not understand. Recipients should restart timeout periods on receipt of a well-formed notification, whether or not they understand it.
Senders should only send the notifications described here when this protocol description specifies that they are permitted. In the future, we may extend the protocol to permit existing notifications in new contexts; this rule helps older senders avoid confusing newer recipients.
(Older versions of GDB ignore bytes received until they see the `$' byte that begins an ordinary packet, so new stubs may transmit notifications without fear of confusing older clients. There are no notifications defined for GDB to send at the moment, but we assume that most older stubs would ignore them, as well.)
Each notification consists of three parts:
The purpose of an asynchronous notification mechanism is to report to GDB that something interesting happened in the remote stub.
The remote stub may send notification name:event at any time, but GDB acknowledges the notification when appropriate. The notification event is pending before GDB acknowledges it. Only one notification at a time may be pending; if additional events occur before GDB has acknowledged the previous notification, they must be queued by the stub for later synchronous transmission in response to ack packets from GDB. Because the notification mechanism is unreliable, the stub is permitted to resend a notification if it believes GDB may not have received it.
Specifically, notifications may appear when GDB is not otherwise reading input from the stub, or when GDB is expecting to read a normal synchronous response or a `+'/`-' acknowledgment to a packet it has sent. Notification packets are distinct from any other communication from the stub so there is no ambiguity.
After receiving a notification, GDB shall acknowledge it by sending an ack packet as a regular, synchronous request to the stub. Such acknowledgment is not required to happen immediately, as GDB is permitted to send other, unrelated packets to the stub first, which the stub should process normally.
Upon receiving an ack packet, if the stub has other queued events to report to GDB, it shall respond by sending a normal event. GDB shall then send another ack packet to solicit further responses; again, GDB is permitted to send other, unrelated packets as well which the stub should process normally.
If the stub receives an ack packet and there are no additional events to report, the stub shall return an `OK' response. At this point, GDB has finished processing a notification and the stub has completed sending any queued events. GDB won't accept any new notifications until the final `OK' is received. If further notification events occur, the stub shall send a new notification, GDB shall accept the notification, and the process shall be repeated.
The process of asynchronous notification can be illustrated by the following example:
<- |
The following notifications are defined:
| Notification | Ack | Event | Description |
| Stop | vStopped | reply. The reply has the form of a stop reply, as described in E.3 Stop Reply Packets. Refer to E.10 Remote Protocol Support for Non-Stop Mode, for information on how these notifications are acknowledged by GDB. | Report an asynchronous stop event in non-stop mode. |
GDB's remote protocol supports non-stop debugging of multi-threaded programs, as described in 5.5.2 Non-Stop Mode. If the stub supports non-stop mode, it should report that to GDB by including `QNonStop+' in its `qSupported' response (see qSupported).
GDB typically sends a `QNonStop' packet only when establishing a new connection with the stub. Entering non-stop mode does not alter the state of any currently-running threads, but targets must stop all threads in any already-attached processes when entering all-stop mode. GDB uses the `?' packet as necessary to probe the target state after a mode change.
In non-stop mode, when an attached process encounters an event that would otherwise be reported with a stop reply, it uses the asynchronous notification mechanism (see section E.9 Notification Packets) to inform GDB. In contrast to all-stop mode, where all threads in all processes are stopped when a stop reply is sent, in non-stop mode only the thread reporting the stop event is stopped. That is, when reporting an `S' or `T' response to indicate completion of a step operation, hitting a breakpoint, or a fault, only the affected thread is stopped; any other still-running threads continue to run. When reporting a `W' or `X' response, all running threads belonging to other attached processes continue to run.
In non-stop mode, the target shall respond to the `?' packet as follows. First, any incomplete stop reply notification/`vStopped' sequence in progress is abandoned. The target must begin a new sequence reporting stop events for all stopped threads, whether or not it has previously reported those events to GDB. The first stop reply is sent as a synchronous reply to the `?' packet, and subsequent stop replies are sent as responses to `vStopped' packets using the mechanism described above. The target must not send asynchronous stop reply notifications until the sequence is complete. If all threads are running when the target receives the `?' packet, or if the target is not attached to any process, it shall respond `OK'.
By default, when either the host or the target machine receives a packet, the first response expected is an acknowledgment: either `+' (to indicate the packet was received correctly) or `-' (to request retransmission). This mechanism allows the remote protocol to operate over unreliable transport mechanisms, such as a serial line.
In cases where the transport mechanism is itself reliable (such as a pipe or TCP connection), the `+'/`-' acknowledgments are redundant. It may be desirable to disable them in that case to reduce communication overhead, or for other reasons. This can be accomplished by means of the `QStartNoAckMode' packet; see QStartNoAckMode.
When in no-acknowledgment mode, neither the stub nor GDB shall send or expect `+'/`-' protocol acknowledgments. The packet and response format still includes the normal checksum, as described in E.1 Overview, but the checksum may be ignored by the receiver.
If the stub supports `QStartNoAckMode' and prefers to operate in
no-acknowledgment mode, it should report that to GDB
by including `QStartNoAckMode+' in its response to `qSupported';
see qSupported.
If GDB also supports `QStartNoAckMode' and it has not been
disabled via the set remote noack-packet off command
(see section 20.4 Remote Configuration),
GDB may then send a `QStartNoAckMode' packet to the stub.
Only then may the stub actually turn off packet acknowledgments.
GDB sends a final `+' acknowledgment of the stub's `OK'
response, which can be safely ignored by the stub.
Note that the set remote noack-packet command only affects negotiation
between GDB and the stub when subsequent connections are made;
it does not affect the protocol acknowledgment state for any current
connection.
Since `+'/`-' acknowledgments are enabled by default when a
new connection is established,
there is also no protocol request to re-enable the acknowledgments
for the current connection, once disabled.
Example sequence of a target being re-started. Notice how the restart does not get any direct output:
-> |
Example sequence of a target being stepped by a single instruction:
-> |
The File I/O remote protocol extension (short: File-I/O) allows the target to use the host's file system and console I/O to perform various system calls. System calls on the target system are translated into a remote protocol packet to the host system, which then performs the needed actions and returns a response packet to the target system. This simulates file system operations even on targets that lack file systems.
The protocol is defined to be independent of both the host and target systems. It uses its own internal representation of datatypes and values. Both GDB and the target's stub are responsible for translating the system-dependent value representations into the internal protocol representations when data is transmitted.
The communication is synchronous. A system call is possible only when GDB is waiting for a response from the `C', `c', `S' or `s' packets. While GDB handles the request for a system call, the target is stopped to allow deterministic access to the target's memory. Therefore File-I/O is not interruptible by target signals. On the other hand, it is possible to interrupt File-I/O by a user interrupt (`Ctrl-C') within GDB.
The target's request to perform a host system call does not finish the latest `C', `c', `S' or `s' action. That means, after finishing the system call, the target returns to continuing the previous activity (continue, step). No additional continue or step request from GDB is required.
(gdb) continue
  <- target requests 'system call X'
  target is stopped, GDB executes system call
  -> GDB returns result
  ... target continues, GDB returns to wait for the target
  <- target hits breakpoint and sends a Txx packet |
The protocol only supports I/O on the console and to regular files on the host file system. Character or block special devices, pipes, named pipes, sockets or any other communication method on the host system are not supported by this protocol.
File I/O is not supported in non-stop mode.
The File-I/O protocol uses the F packet as the request as well
as reply packet. Since a File-I/O system call can only occur when
GDB is waiting for a response from the continuing or stepping target,
the File-I/O request is a reply that GDB has to expect as a result
of a previous `C', `c', `S' or `s' packet.
This F packet contains all information needed to allow
GDB to call the appropriate host system call:
At this point, GDB has to perform the following actions.
If the parameters include pointer values to data needed as input to a
system call, GDB requests this data from the target with a standard
m packet request. This additional communication has to be
expected by the target implementation and is handled as any other m
packet.
If the system call is expected to return data in buffer space specified
by pointer parameters to the call, the data is transmitted to the target
using an M or X packet. This packet has to be expected
by the target implementation and is handled as any other M or X
packet.
Eventually GDB replies with another F packet which contains all
necessary information for the target to continue. This at least contains
errno, if it has been changed by the system call.
After having done the needed type and value coercion, the target continues the latest continue or step action.
F Request Packet
The F request packet has the following format:
call-id is the identifier to indicate the host system call to be called. This is just the name of the function.
parameter... are the parameters to the system call. Parameters are hexadecimal integer values, either the actual values in case of scalar datatypes, pointers to target buffer space in case of compound datatypes and unspecified memory areas, or pointer/length pairs in case of string parameters. These are appended to the call-id as a comma-delimited list. All values are transmitted in ASCII string representation, pointer/length pairs separated by a slash.
F Reply Packet
The F reply packet has the following format:
retcode is the return code of the system call as a hexadecimal value.
errno is the errno set by the call, in protocol-specific
representation.
This parameter can be omitted if the call was successful.
Ctrl-C flag is only sent if the user requested a break. In this case, errno must be sent as well, even if the call was successful. The Ctrl-C flag itself consists of the character `C':
F0,0,C |
or, if the call was interrupted before the host call has been performed:
F-1,4,C |
assuming 4 is the protocol-specific representation of EINTR.
If the `Ctrl-C' flag is set in the
reply packet (see section E.13.4 The F Reply Packet),
the target should behave as if it had
gotten a break message. The meaning for the target is "system call
interrupted by SIGINT". Consequently, the target should actually stop
(as with a break message) and return to GDB with a T02
packet.
It's important for the target to know in which state the system call was interrupted. There are two possible cases:
These two states can be distinguished by the target by the value of the
returned errno. If it's the protocol representation of EINTR, the system
call hasn't been performed. This is equivalent to the EINTR handling
on POSIX systems. In any other case, the target may presume that the
system call has been finished -- successfully or not -- and should behave
as if the break message arrived right after the system call.
GDB must behave reliably. If the system call has not been called
yet, GDB may send the F reply immediately, setting EINTR as
errno in the packet. If the system call on the host has been finished
before the user requests a break, the full action must be finished by
GDB. This requires sending M or X packets as necessary.
The F packet may only be sent when either nothing has happened
or the full action has been completed.
By default and if not explicitly closed by the target system, the file
descriptors 0, 1 and 2 are connected to the console. Output
on the console is handled as any other file output operation
(write(1, ...) or write(2, ...)). Console input is handled
by GDB so that after the target's read request from file descriptor
0 all following typing is buffered until one of the following
conditions is met:
The user types Ctrl-c. The behaviour is as explained above, and the read
system call is treated as finished.
If the user has typed more characters than fit in the buffer given to
the read call, the trailing characters are buffered in GDB until
either another read(0, ...) is requested by the target, or debugging
is stopped at the user's request.
open close read write lseek rename unlink stat/fstat gettimeofday isatty system
int open(const char *pathname, int flags); int open(const char *pathname, int flags, mode_t mode); |
flags is the bitwise OR of the following values:
O_CREAT
O_EXCL
When used with O_CREAT, if the file already exists it is
an error and open() fails.
O_TRUNC
If the file already exists and the open mode allows writing
(O_RDWR or O_WRONLY is given) it will be
truncated to zero length.
O_APPEND
O_RDONLY
O_WRONLY
O_RDWR
Other bits are silently ignored.
mode is the bitwise OR of the following values:
S_IRUSR
S_IWUSR
S_IRGRP
S_IWGRP
S_IROTH
S_IWOTH
Other bits are silently ignored.
open returns the new file descriptor or -1 if an error
occurred.
EEXIST
O_CREAT and O_EXCL were used.
EISDIR
EACCES
ENAMETOOLONG
ENOENT
ENODEV
EROFS
EFAULT
ENOSPC
EMFILE
ENFILE
EINTR
int close(int fd); |
close returns zero on success, or -1 if an error occurred.
EBADF
EINTR
int read(int fd, void *buf, unsigned int count); |
EBADF
EFAULT
EINTR
int write(int fd, const void *buf, unsigned int count); |
EBADF
EFAULT
EFBIG
ENOSPC
EINTR
long lseek (int fd, long offset, int flag); |
flag is one of:
SEEK_SET
SEEK_CUR
SEEK_END
EBADF
ESPIPE
EINVAL
EINTR
int rename(const char *oldpath, const char *newpath); |
EISDIR
EEXIST
EBUSY
EINVAL
ENOTDIR
EFAULT
EACCES
ENAMETOOLONG
oldpath or newpath was too long.
ENOENT
EROFS
ENOSPC
EINTR
int unlink(const char *pathname); |
EACCES
EPERM
EBUSY
EFAULT
ENAMETOOLONG
ENOENT
ENOTDIR
EROFS
EINTR
int stat(const char *pathname, struct stat *buf); int fstat(int fd, struct stat *buf); |
EBADF
ENOENT
ENOTDIR
EFAULT
EACCES
ENAMETOOLONG
EINTR
int gettimeofday(struct timeval *tv, void *tz); |
EINVAL
EFAULT
int isatty(int fd); |
EINTR
Note that the isatty call is treated as a special case: it returns
1 to the target if the file descriptor is attached
to the console, 0 otherwise. Implementing it through system calls
would require implementing ioctl and would be more complex than
needed.
int system(const char *command); |
system
Only the exit status of the command is returned, which is extracted
from the host's system
return value by calling WEXITSTATUS(retval). In case
`/bin/sh' could not be executed, 127 is returned.
EINTR
GDB takes over the full task of calling the necessary host calls
to perform the system call. The return value of system on
the host is simplified before it's returned
to the target. Any termination signal information from the child process
is discarded, and the return value consists
entirely of the exit status of the called command.
Due to security concerns, the system call is by default refused
by GDB. The user has to allow this call explicitly with the
set remote system-call-allowed 1 command.
set remote system-call-allowed
Control whether to allow the system calls in the File I/O
protocol for the remote target. The default is zero (disabled).
show remote system-call-allowed
Show whether the system calls are allowed in the File I/O
protocol.
Integral Datatypes Pointer Values Memory Transfer struct stat struct timeval
The integral datatypes used in the system calls are int,
unsigned int, long, unsigned long,
mode_t, and time_t.
int, unsigned int, mode_t and time_t are
implemented as 32-bit values in this protocol.
long and unsigned long are implemented as 64-bit types.
See section Limits, for corresponding MIN and MAX values (similar to those in `limits.h') to allow range checking on host and target.
time_t datatypes are defined as seconds since the Epoch.
All integral datatypes transferred as part of a memory read or write of a
structured datatype (e.g. a struct stat) have to be given in big-endian
byte order.
Pointers to target data are transmitted as they are. An exception is made for pointers to buffers for which the length isn't transmitted as part of the function call, namely strings. Strings are transmitted as a pointer/length pair, both as hex values, e.g.
1aaf/12 |
which is a pointer to data of length 18 bytes at position 0x1aaf.
The length is defined as the full string length in bytes, including
the trailing null byte. For example, the string "hello world"
at address 0x123456 is transmitted as
123456/c |
Structured data which is transferred using a memory read or write (for
example, a struct stat) is expected to be in a protocol-specific format
with all scalar multibyte datatypes being big-endian. Translation to
this representation needs to be done both by the target before the F
packet is sent, and by GDB before
it transfers memory to the target. Transferred pointers to structured
data should point to the already-coerced data at any time.
The buffer of type struct stat used by the target and
GDB is defined as follows:
struct stat {
unsigned int st_dev; /* device */
unsigned int st_ino; /* inode */
mode_t st_mode; /* protection */
unsigned int st_nlink; /* number of hard links */
unsigned int st_uid; /* user ID of owner */
unsigned int st_gid; /* group ID of owner */
unsigned int st_rdev; /* device type (if inode device) */
unsigned long st_size; /* total size, in bytes */
unsigned long st_blksize; /* blocksize for filesystem I/O */
unsigned long st_blocks; /* number of blocks allocated */
time_t st_atime; /* time of last access */
time_t st_mtime; /* time of last modification */
time_t st_ctime; /* time of last change */
};
|
The integral datatypes conform to the definitions given in the appropriate section (see Integral Datatypes, for details) so this structure is of size 64 bytes.
The values of several fields have a restricted meaning and/or range of values.
st_dev
st_ino
st_mode
st_uid
st_gid
st_rdev
st_atime
st_mtime
st_ctime
The target gets a struct stat of the above representation and is
responsible for coercing it to the target representation before
continuing.
Note that due to size differences between the host, target, and protocol
representations of struct stat members, these members could eventually
get truncated on the target.
The buffer of type struct timeval used by the File-I/O protocol
is defined as follows:
struct timeval {
time_t tv_sec; /* second */
long tv_usec; /* microsecond */
};
|
The integral datatypes conform to the definitions given in the appropriate section (see Integral Datatypes, for details) so this structure is of size 8 bytes.
The following values are used for the constants inside of the protocol. GDB and the target are responsible for translating these values before and after the call as needed.
Open Flags mode_t Values Errno Values Lseek Flags Limits
All values are given in hexadecimal representation.
O_RDONLY 0x0 O_WRONLY 0x1 O_RDWR 0x2 O_APPEND 0x8 O_CREAT 0x200 O_TRUNC 0x400 O_EXCL 0x800 |
All values are given in octal representation.
S_IFREG 0100000 S_IFDIR 040000 S_IRUSR 0400 S_IWUSR 0200 S_IXUSR 0100 S_IRGRP 040 S_IWGRP 020 S_IXGRP 010 S_IROTH 04 S_IWOTH 02 S_IXOTH 01 |
All values are given in decimal representation.
EPERM 1 ENOENT 2 EINTR 4 EBADF 9 EACCES 13 EFAULT 14 EBUSY 16 EEXIST 17 ENODEV 19 ENOTDIR 20 EISDIR 21 EINVAL 22 ENFILE 23 EMFILE 24 EFBIG 27 ENOSPC 28 ESPIPE 29 EROFS 30 ENAMETOOLONG 91 EUNKNOWN 9999 |
EUNKNOWN is used as a fallback error value if a host system returns
any error value not in the list of supported error numbers.
SEEK_SET        0
SEEK_CUR        1
SEEK_END        2
All values are given in decimal representation.
INT_MIN         -2147483648
INT_MAX         2147483647
UINT_MAX        4294967295
LONG_MIN        -9223372036854775808
LONG_MAX        9223372036854775807
ULONG_MAX       18446744073709551615
Example sequence of a write call, file descriptor 3, buffer is at target address 0x1234, 6 bytes should be written:
<- |
Example sequence of a read call, file descriptor 3, buffer is at target address 0x1234, 6 bytes should be read:
<- |
Example sequence of a read call, call fails on the host due to invalid
file descriptor (EBADF):
<- |
Example sequence of a read call, user presses Ctrl-c before syscall on host is called:
<- |
Example sequence of a read call, user presses Ctrl-c after syscall on host is called:
<- |
On some platforms, a dynamic loader (e.g. `ld.so') runs in the same process as your application to manage libraries. In this case, GDB can use the loader's symbol table and normal memory operations to maintain a list of shared libraries. On other platforms, the operating system manages loaded libraries. GDB cannot retrieve the list of currently loaded libraries through memory operations, so it uses the `qXfer:libraries:read' packet (see qXfer library list read) instead. The remote stub queries the target's operating system and reports which libraries are loaded.
The `qXfer:libraries:read' packet returns an XML document which lists loaded libraries and their offsets. Each library has an associated name and one or more segment or section base addresses, which report where the library was loaded in memory.
For the common case of libraries that are fully linked binaries, the library should have a list of segments. If the target supports dynamic linking of a relocatable object file, its library XML element should instead include a list of allocated sections. The segment or section bases are start addresses, not relocation offsets; they do not depend on the library's link-time base addresses.
GDB must be linked with the Expat library to support XML library lists. See Expat.
A simple memory map, with one loaded library relocated by a single offset, looks like this:
<library-list>
<library name="/lib/libc.so.6">
<segment address="0x10000000"/>
</library>
</library-list>
Another simple memory map, with one loaded library with three allocated sections (.text, .data, .bss), looks like this:
<library-list>
<library name="sharedlib.o">
<section address="0x10000000"/>
<section address="0x20000000"/>
<section address="0x30000000"/>
</library>
</library-list>
The format of a library list is described by this DTD:
<!-- library-list: Root element with versioning -->
<!ELEMENT library-list (library)*>
<!ATTLIST library-list version CDATA #FIXED "1.0">
<!ELEMENT library (segment*, section*)>
<!ATTLIST library name CDATA #REQUIRED>
<!ELEMENT segment EMPTY>
<!ATTLIST segment address CDATA #REQUIRED>
<!ELEMENT section EMPTY>
<!ATTLIST section address CDATA #REQUIRED>
In addition, segment and section descriptors cannot be mixed within a single library element, and you must supply at least one segment or section for each library.
On SVR4 platforms GDB can use the symbol table of a dynamic loader (e.g. `ld.so') and normal memory operations to maintain a list of shared libraries. Still, a special library list provided by this packet is more efficient for the remote protocol.
The `qXfer:libraries-svr4:read' packet returns an XML document which lists loaded libraries and their SVR4 linker parameters. For each library on an SVR4 target, the following parameters are reported:
name, the absolute file name from the l_name field of
struct link_map.
lm, the address of struct link_map used for TLS
(Thread Local Storage) access.
l_addr, the displacement as read from the field l_addr of
struct link_map. For prelinked libraries this is not an absolute
memory address. It is the displacement of the absolute memory address
from the address the file was prelinked to during the library load.
l_ld, which is the memory address of the PT_DYNAMIC segment.
Additionally, the single main-lm attribute specifies the address of
struct link_map used for the main executable. This parameter is used
for TLS access and its presence is optional.
GDB must be linked with the Expat library to support XML SVR4 library lists. See Expat.
A simple memory map, with two loaded libraries (which do not use prelink), looks like this:
<library-list-svr4 version="1.0" main-lm="0xe4f8f8">
<library name="/lib/ld-linux.so.2" lm="0xe4f51c" l_addr="0xe2d000"
l_ld="0xe4eefc"/>
<library name="/lib/libc.so.6" lm="0xe4fbe8" l_addr="0x154000"
l_ld="0x152350"/>
</library-list-svr4>
The format of an SVR4 library list is described by this DTD:
<!-- library-list-svr4: Root element with versioning -->
<!ELEMENT library-list-svr4 (library)*>
<!ATTLIST library-list-svr4 version CDATA #FIXED "1.0">
<!ATTLIST library-list-svr4 main-lm CDATA #IMPLIED>
<!ELEMENT library EMPTY>
<!ATTLIST library name CDATA #REQUIRED>
<!ATTLIST library lm CDATA #REQUIRED>
<!ATTLIST library l_addr CDATA #REQUIRED>
<!ATTLIST library l_ld CDATA #REQUIRED>
To be able to write into flash memory, GDB needs to obtain a memory map from the target. This section describes the format of the memory map.
The memory map is obtained using the `qXfer:memory-map:read' (see qXfer memory map read) packet and is an XML document that lists memory regions.
GDB must be linked with the Expat library to support XML memory maps. See Expat.
The top-level structure of the document is shown below:
<?xml version="1.0"?>
<!DOCTYPE memory-map
PUBLIC "+//IDN gnu.org//DTD GDB Memory Map V1.0//EN"
"http://sourceware.org/gdb/gdb-memory-map.dtd">
<memory-map>
region...
</memory-map>
Each region can be either:
<memory type="ram" start="addr" length="length"/>
<memory type="rom" start="addr" length="length"/>
<memory type="flash" start="addr" length="length">
  <property name="blocksize">blocksize</property>
</memory>
Regions must not overlap. GDB assumes that areas of memory not covered by the memory map are RAM, and uses the ordinary `M' and `X' packets to write to addresses in such ranges.
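The `blocksize' property matters because flash is erased a block at a time. A sketch of how a client might locate the erase block containing a given address, using a hypothetical struct for one parsed <memory> element:

```c
#include <stdint.h>

/* Hypothetical representation of one parsed <memory> element. */
struct mem_region {
    uint64_t start;       /* "start" attribute */
    uint64_t length;      /* "length" attribute */
    uint64_t blocksize;   /* <property name="blocksize">, flash only */
};

/* Return the start address of the erase block that contains addr.
   Assumes addr lies inside the region and blocksize is nonzero. */
static uint64_t flash_block_start(const struct mem_region *r, uint64_t addr)
{
    return r->start + ((addr - r->start) / r->blocksize) * r->blocksize;
}
```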
The formal DTD for the memory map format is given below:
<!-- ................................................... -->
<!-- Memory Map XML DTD ................................ -->
<!-- File: memory-map.dtd .............................. -->
<!-- ................................................... -->
<!-- memory-map.dtd -->
<!-- memory-map: Root element with versioning -->
<!ELEMENT memory-map (memory | property)*>
<!ATTLIST memory-map version CDATA #FIXED "1.0.0">
<!ELEMENT memory (property)*>
<!-- memory: Specifies a memory region,
and its type, or device. -->
<!ATTLIST memory type CDATA #REQUIRED
start CDATA #REQUIRED
length CDATA #REQUIRED
device CDATA #IMPLIED>
<!-- property: Generic attribute tag -->
<!ELEMENT property (#PCDATA | property)*>
<!ATTLIST property name CDATA #REQUIRED>
To efficiently update the list of threads and their attributes, GDB issues the `qXfer:threads:read' packet (see qXfer threads read) and obtains the XML document with the following structure:
<?xml version="1.0"?>
<threads>
<thread id="id" core="0">
... description ...
</thread>
</threads>
Each `thread' element must have the `id' attribute that identifies the thread (see thread-id syntax). The `core' attribute, if present, specifies which processor core the thread was last executing on. The content of the `thread' element is interpreted as human-readable auxiliary information.
To be able to know which objects in the inferior can be examined when inspecting a tracepoint hit, GDB needs to obtain the list of memory ranges, registers and trace state variables that have been collected in a traceframe.
This list is obtained using the `qXfer:traceframe-info:read' (see qXfer traceframe info read) packet and is an XML document.
GDB must be linked with the Expat library to support XML traceframe info discovery. See Expat.
The top-level structure of the document is shown below:
<?xml version="1.0"?>
<!DOCTYPE traceframe-info
PUBLIC "+//IDN gnu.org//DTD GDB Memory Map V1.0//EN"
"http://sourceware.org/gdb/gdb-traceframe-info.dtd">
<traceframe-info>
block...
</traceframe-info>
Each traceframe block can be either:
<memory start="addr" length="length"/>
The formal DTD for the traceframe info format is given below:
<!ELEMENT traceframe-info (memory)* >
<!ATTLIST traceframe-info version CDATA #FIXED "1.0">
<!ELEMENT memory EMPTY>
<!ATTLIST memory start CDATA #REQUIRED
length CDATA #REQUIRED>
In order to display the branch trace of an inferior thread, GDB needs to obtain the list of branches. This list is represented as a list of sequential code blocks that are connected via branches. The code in each block has been executed sequentially.
This list is obtained using the `qXfer:btrace:read' (see qXfer btrace read) packet and is an XML document.
GDB must be linked with the Expat library to support XML branch traces. See Expat.
The top-level structure of the document is shown below:
<?xml version="1.0"?>
<!DOCTYPE btrace
PUBLIC "+//IDN gnu.org//DTD GDB Branch Trace V1.0//EN"
"http://sourceware.org/gdb/gdb-btrace.dtd">
<btrace>
block...
</btrace>
Each branch trace block has the form:
<block begin="begin" end="end"/>
The formal DTD for the branch trace format is given below:
<!ELEMENT btrace (block)* >
<!ATTLIST btrace version CDATA #FIXED "1.0">
<!ELEMENT block EMPTY>
<!ATTLIST block begin CDATA #REQUIRED
end CDATA #REQUIRED>
In some applications, it is not feasible for the debugger to interrupt the program's execution long enough for the developer to learn anything helpful about its behavior. If the program's correctness depends on its real-time behavior, delays introduced by a debugger might cause the program to fail, even when the code itself is correct. It is useful to be able to observe the program's behavior without interrupting it.
Using GDB's trace and collect commands, the user can
specify locations in the program, and arbitrary expressions to evaluate
when those locations are reached. Later, using the tfind
command, she can examine the values those expressions had when the
program hit the trace points. The expressions may also denote objects
in memory -- structures or arrays, for example -- whose values GDB
should record; while visiting a particular tracepoint, the user may
inspect those objects as if they were in memory at that moment.
However, because GDB records these values without interacting with the
user, it can do so quickly and unobtrusively, hopefully not disturbing
the program's behavior.
When GDB is debugging a remote target, the GDB agent code running on the target computes the values of the expressions itself. To avoid having a full symbolic expression evaluator on the agent, GDB translates expressions in the source language into a simpler bytecode language, and then sends the bytecode to the agent; the agent then executes the bytecode, and records the values for GDB to retrieve later.
The bytecode language is simple; there are forty-odd opcodes, the bulk of which are the usual vocabulary of C operators (addition, subtraction, shifts, and so on) and various sizes of literals and memory reference operations. The bytecode interpreter operates strictly on machine-level values -- various sizes of integers and floating point numbers -- and requires no information about types or symbols; thus, the interpreter's internal data structures are simple, and each bytecode requires only a few native machine instructions to implement it. The interpreter is small, and strict limits on the memory and time required to evaluate an expression are easy to determine, making it suitable for use by the debugging agent in real-time applications.
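As an illustration of how small such an interpreter can be, here is a minimal sketch covering just a few arithmetic opcodes (the opcode values are those given later in this appendix). A real agent also bounds the stack and reports errors to its caller rather than assuming well-formed input:

```c
#include <stdint.h>
#include <stddef.h>

/* Opcode values from the bytecode descriptions in this appendix. */
enum { OP_ADD = 0x02, OP_MUL = 0x04, OP_CONST8 = 0x22, OP_END = 0x27 };

/* Minimal sketch of the stack-based evaluation loop. */
static int64_t eval(const uint8_t *code)
{
    int64_t stack[32];
    size_t sp = 0;       /* number of elements currently on the stack */
    size_t pc = 0;       /* index of the next bytecode to execute */

    for (;;) {
        switch (code[pc++]) {
        case OP_CONST8:
            stack[sp++] = code[pc++];          /* pushed without sign extension */
            break;
        case OP_ADD:
            sp--; stack[sp - 1] += stack[sp];  /* pop two, push sum */
            break;
        case OP_MUL:
            sp--; stack[sp - 1] *= stack[sp];  /* pop two, push product */
            break;
        case OP_END:
            return stack[sp - 1];              /* result is the stack top */
        }
    }
}
```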
F.1 General Bytecode Design Overview of the interpreter. F.2 Bytecode Descriptions What each one does. F.3 Using Agent Expressions How agent expressions fit into the big picture. F.4 Varying Target Capabilities How to discover what the target can do. F.5 Rationale Why we did it this way.
The agent represents bytecode expressions as an array of bytes. Each
instruction is one byte long (thus the term bytecode). Some
instructions are followed by operand bytes; for example, the goto
instruction is followed by a destination for the jump.
The bytecode interpreter is a stack-based machine; most instructions pop their operands off the stack, perform some operation, and push the result back on the stack for the next instruction to consume. Each element of the stack may contain either an integer or a floating point value; these values are as many bits wide as the largest integer that can be directly manipulated in the source language. Stack elements carry no record of their type; bytecode could push a value as an integer, then pop it as a floating point value. However, GDB will not generate code which does this. In C, one might define the type of a stack element as follows:
union agent_val {
LONGEST l;
DOUBLEST d;
};
LONGEST and DOUBLEST are typedef names for
the largest integer and floating point types on the machine.
By the time the bytecode interpreter reaches the end of the expression,
the value of the expression should be the only value left on the stack.
For tracing applications, trace bytecodes in the expression will
have recorded the necessary data, and the value on the stack may be
discarded. For other applications, like conditional breakpoints, the
value may be useful.
Separate from the stack, the interpreter has two registers:
pc
The address of the next bytecode to execute.
start
The address of the start of the bytecode expression, necessary for
interpreting the goto and if_goto instructions.
There are no instructions to perform side effects on the running program, or call the program's functions; we assume that these expressions are only used for unobtrusive debugging, not for patching the running code.
Most bytecode instructions do not distinguish between the various sizes of values, and operate on full-width values; the upper bits of the values are simply ignored, since they do not usually make a difference to the value computed. The exceptions to this rule are:
memory reference instructions (refn)
There are distinct instructions to fetch different word sizes from
memory. Once on the stack, however, the values are treated as
full-size integers. They may need to be sign-extended; the ext
instruction exists for this purpose.
the sign-extension instruction (ext n)
These clearly need to know which portion of their operand is
significant, and may be used with values of various widths.
If the interpreter is unable to evaluate an expression completely for some reason (a memory location is inaccessible, or a divisor is zero, for example), we say that interpretation "terminates with an error". This means that the problem is reported back to the interpreter's caller in some helpful way. In general, code using agent expressions should assume that they may attempt to divide by zero, fetch arbitrary memory locations, and misbehave in other ways.
Even complicated C expressions compile to a few bytecode instructions;
for example, the expression x + y * z would typically produce
code like the following, assuming that x and y live in
registers, and z is a global variable holding a 32-bit
int:
reg 1
reg 2
const32 address of z
ref32
ext 32
mul
add
end
In detail, these mean:
reg 1
Push the value of register 1 (presumably holding x) onto the
stack.
reg 2
Push the value of register 2 (holding y).
const32 address of z
Push the address of z onto the stack.
ref32
Fetch a 32-bit word from the address at the top of the stack; replace
the address on the stack with the value. Thus, we replace the address
of z with z's value.
ext 32
Sign-extend the value on the top of the stack from 32 bits to full
length. This is necessary because z is a signed integer.
mul
Pop the top two numbers on the stack, multiply them, and push their
product. Now the top of the stack contains the value of the expression
y * z.
add
Pop the top two numbers, add them, and push the sum. Now the top of
the stack contains the value of x + y * z.
end
Stop executing; the value left on the stack top is the value to be
recorded.
Each bytecode description has the following form:
add (0x02): a b => a+b
Pop the top two stack items, a and b, as integers; push their sum, as an integer.
In this example, add is the name of the bytecode, and
(0x02) is the one-byte value used to encode the bytecode, in
hexadecimal. The phrase "a b => a+b" shows
the stack before and after the bytecode executes. Beforehand, the stack
must contain at least two values, a and b; since the top of
the stack is to the right, b is on the top of the stack, and
a is underneath it. After execution, the bytecode will have
popped a and b from the stack, and replaced them with a
single value, a+b. There may be other values on the stack below
those shown, but the bytecode affects only those shown.
Here is another example:
const8 (0x22) n: => n
In this example, the bytecode const8 takes an operand n
directly from the bytecode stream; the operand follows the const8
bytecode itself. We write any such operands immediately after the name
of the bytecode, before the colon, and describe the exact encoding of
the operand in the bytecode stream in the body of the bytecode
description.
For the const8 bytecode, there are no stack items given before
the =>; this simply means that the bytecode consumes no values
from the stack. If a bytecode consumes no values, or produces no
values, the list on either side of the => may be empty.
If a value is written as a, b, or n, then the bytecode treats it as an integer. If a value is written as addr, then the bytecode treats it as an address.
We do not fully describe the floating point operations here; although this design can be extended in a clean way to handle floating point values, they are not of immediate interest to the customer, so we avoid describing them, to save time.
float (0x01): =>
Prefix for floating-point bytecodes. Not implemented yet.
add (0x02): a b => a+b
sub (0x03): a b => a-b
mul (0x04): a b => a*b
div_signed (0x05): a b => a/b
div_unsigned (0x06): a b => a/b
rem_signed (0x07): a b => a modulo b
rem_unsigned (0x08): a b => a modulo b
lsh (0x09): a b => a<<b
rsh_signed (0x0a): a b => (signed)a>>b
rsh_unsigned (0x0b): a b => a>>b
log_not (0x0e): a => !a
bit_and (0x0f): a b => a&b
Pop the top two stack items as integers; push their bitwise
and.
bit_or (0x10): a b => a|b
Pop the top two stack items as integers; push their bitwise
or.
bit_xor (0x11): a b => a^b
Pop the top two stack items as integers; push their bitwise
exclusive-or.
bit_not (0x12): a => ~a
equal (0x13): a b => a=b
less_signed (0x14): a b => a<b
less_unsigned (0x15): a b => a<b
ext (0x16) n: a => a, sign-extended from n bits
The number of source bits to preserve, n, is encoded as a single
byte unsigned integer following the ext bytecode.
zero_ext (0x2a) n: a => a, zero-extended from n bits
The number of source bits to preserve, n, is encoded as a single
byte unsigned integer following the zero_ext bytecode.
ref8 (0x17): addr => a
ref16 (0x18): addr => a
ref32 (0x19): addr => a
ref64 (0x1a): addr => a
Pop an address addr from the stack. For bytecode
refn, fetch an n-bit value from addr, using the
natural target endianness. Push the fetched value as an unsigned
integer.
Note that addr may not be aligned in any particular way; the
refn bytecodes should operate correctly for any address.
If attempting to access memory at addr would cause a processor exception of some sort, terminate with an error.
ref_float (0x1b): addr => d
ref_double (0x1c): addr => d
ref_long_double (0x1d): addr => d
l_to_d (0x1e): a => d
d_to_l (0x1f): d => a
dup (0x28): a => a a
swap (0x2b): a b => b a
pop (0x29): a =>
pick (0x32) n: a ... b => a ... b a
Duplicate an item from deeper in the stack and push the copy on top.
If n is zero, this is the same as dup; if n is one, it copies
the item under the top item, etc. If n exceeds the number of
items on the stack, terminate with an error.
rot (0x33): a b c => c b a
if_goto (0x20) offset: a =>
Pop an integer off the stack; if it is non-zero, set the
pc register to start + offset.
Thus, an offset of zero denotes the beginning of the expression.
The offset is stored as a sixteen-bit unsigned value, stored
immediately following the if_goto bytecode. It is always stored
most significant byte first, regardless of the target's normal
endianness. The offset is not guaranteed to fall at any particular
alignment within the bytecode stream; thus, on machines where fetching a
16-bit on an unaligned address raises an exception, you should fetch the
offset one byte at a time.
goto (0x21) offset: =>
Branch unconditionally: set the pc register to start + offset.
The offset is stored in the same way as for the if_goto bytecode.
const8 (0x22) n: => n
const16 (0x23) n: => n
const32 (0x24) n: => n
const64 (0x25) n: => n
Push the integer constant n on the stack, without sign extension.
To produce a small negative value, push a small twos-complement value,
and then sign-extend it using the ext bytecode.
The constant n is stored in the appropriate number of bytes
following the constb bytecode. The constant n is
always stored most significant byte first, regardless of the target's
normal endianness. The constant is not guaranteed to fall at any
particular alignment within the bytecode stream; thus, on machines where
fetching a 16-bit on an unaligned address raises an exception, you
should fetch n one byte at a time.
reg (0x26) n: => a
Push the value of register number n, without sign extension.
The register number n is encoded as a 16-bit unsigned integer
immediately following the reg bytecode. It is always stored most
significant byte first, regardless of the target's normal endianness.
The register number is not guaranteed to fall at any particular
alignment within the bytecode stream; thus, on machines where fetching a
16-bit on an unaligned address raises an exception, you should fetch the
register number one byte at a time.
getv (0x2c) n: => v
Push the value of trace state variable number n, without sign
extension.
The variable number n is encoded as a 16-bit unsigned integer
immediately following the getv bytecode. It is always stored most
significant byte first, regardless of the target's normal endianness.
The variable number is not guaranteed to fall at any particular
alignment within the bytecode stream; thus, on machines where fetching a
16-bit on an unaligned address raises an exception, you should fetch the
variable number one byte at a time.
setv (0x2d) n: => v
Set trace state variable number n to the value found on the top
of the stack; the stack is unchanged, so that the value is readily
available if the assignment is part of a larger expression. The
handling of n is as described for getv.
trace (0x0c): addr size =>
Record the contents of the size bytes at addr in a trace
buffer, for later retrieval by GDB.
trace_quick (0x0d) size: addr => addr
Record the contents of the size bytes at addr in a trace
buffer, for later retrieval by GDB. size is a single-byte
unsigned integer following the trace opcode.
This bytecode is equivalent to the sequence dup const8 size
trace, but we provide it anyway to save space in bytecode strings.
trace16 (0x30) size: addr => addr
Identical to trace_quick, except that size is a 16-bit
big-endian unsigned integer, not a single byte. This should probably
have been named trace_quick16, for consistency.
tracev (0x2e) n: => a
Record the value of trace state variable number n in the trace
buffer. The handling of n is as described for getv.
tracenz (0x2f): addr size =>
Record the bytes at addr in a trace buffer, for later retrieval
by GDB. Stop at either the first zero byte, or when size bytes
have been recorded, whichever occurs first.
printf (0x34) numargs string: =>
Do a formatted print, in the style of the C function printf.
The value of numargs is the number of arguments to expect on the
stack, while string is the format string, prefixed with a
two-byte length. The last byte of the string must be zero, and is
included in the length. The format string includes escaped sequences
just as it appears in C source, so for instance the format string
"\t%d\n" is six characters long, and the output will consist of
a tab character, a decimal number, and a newline. At the top of the
stack, above the values to be printed, this bytecode will pop a
"function" and "channel". If the function is nonzero, then the
target may treat it as a function and call it, passing the channel as
a first argument, as with the C function fprintf. If the
function is zero, then the target may simply call a standard formatted
print function of its choice. In all, this bytecode pops 2 +
numargs stack elements, and pushes nothing.
end (0x27): =>
Stop executing bytecode; the result should be the top element of the
stack.
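Several of the bytecodes above (if_goto, goto, const16, reg, getv, setv) carry a 16-bit operand stored most significant byte first, at no guaranteed alignment. A sketch of the byte-at-a-time fetch the descriptions recommend:

```c
#include <stdint.h>

/* Fetch a 16-bit, MSB-first operand from the bytecode stream one byte
   at a time, so no unaligned 16-bit load is ever issued; p points at
   the first operand byte, immediately after the opcode. */
static uint16_t fetch_be16(const uint8_t *p)
{
    return (uint16_t)(((uint16_t)p[0] << 8) | p[1]);
}
```

The same pattern extends to the 32- and 64-bit constants, reading one more byte per eight bits of operand.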
Agent expressions can be used in several different ways by GDB, and the debugger can generate different bytecode sequences as appropriate.
One possibility is to do expression evaluation on the target rather than the host, such as for the conditional of a conditional tracepoint. In such a case, GDB compiles the source expression into a bytecode sequence that simply gets values from registers or memory, does arithmetic, and returns a result.
Another way to use agent expressions is for tracepoint data
collection. GDB generates a different bytecode sequence for
collection; in addition to bytecodes that do the calculation,
GDB adds trace bytecodes to save the pieces of
memory that were used.
Some targets don't support floating-point, and some would rather not
have to deal with long long operations. Also, different targets
will have different stack sizes, and different bytecode buffer lengths.
Thus, GDB needs a way to ask the target about itself. We haven't worked out the details yet, but in general, GDB should be able to send the target a packet asking it to describe itself. The reply should be a packet whose length is explicit, so we can add new information to the packet in future revisions of the agent, without confusing old versions of GDB, and it should contain a version number. It should contain at least the following information:
whether floating point is supported
whether long long is supported
maximum acceptable size of bytecode stack
maximum acceptable length of bytecode expressions
which registers are actually available for collection
whether the target supports disabled tracepoints
Some of the design decisions apparent above are arguable.
Why are you doing everything in LONGEST?
Speed isn't important, but agent code size is; using LONGEST brings in a bunch of support code to do things like division, etc. So this is a serious concern.
First, note that you don't need different bytecodes for different operand sizes. You can generate code without knowing how big the stack elements actually are on the target. If the target only supports 32-bit ints, and you don't send any 64-bit bytecodes, everything just works. The observation here is that the MIPS and the Alpha have only fixed-size registers, and you can still get C's semantics even though most instructions only operate on full-sized words. You just need to make sure everything is properly sign-extended at the right times. So there is no need for 32- and 64-bit variants of the bytecodes. Just implement everything using the largest size you support.
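The "properly sign-extended at the right times" step is exactly what the ext bytecode provides, and (as noted below in this Rationale) it can be synthesized from a shift left followed by an arithmetic shift right. A sketch for 64-bit stack elements; note the arithmetic right shift on a signed type is implementation-defined in C, though universal in practice:

```c
#include <stdint.h>

#define STACK_BITS 64   /* s: width of a stack element, in bits */

/* ext n: sign-extend the low n bits of a full-width stack element,
   implemented as the equivalent const8 s-n lsh; const8 s-n rsh_signed.
   Assumes the compiler implements >> on int64_t as an arithmetic
   shift (true of common compilers, but not guaranteed by C). */
static int64_t ext_n(int64_t a, unsigned n)
{
    unsigned s = STACK_BITS - n;
    return (int64_t)((uint64_t)a << s) >> s;
}
```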
GDB should certainly check to see what sizes the target supports, so the user can get an error earlier, rather than later. But this information is not necessary for correctness.
Why don't you have > or <= operators?
We don't need them. We can combine the less_ opcodes with
log_not, and swap the order of the operands, yielding all four
asymmetrical comparison operators. For example, (x <= y) is
! (x > y), which is ! (y < x).
Why do you have log_not?
Why do you have ext?
Why do you have zero_ext?
These are all easily synthesized from other instructions, but we
expect them to be used frequently, and they are simple, so we include
them to keep bytecode strings short.
log_not is equivalent to const8 0 equal; it's used in half
the relational operators.
ext n is equivalent to const8 s-n lsh const8
s-n rsh_signed, where s is the size of the stack elements;
it follows refm and reg bytecodes when the value
should be signed. See the next bulleted item.
zero_ext n is equivalent to constm mask
log_and; it's used whenever we push the value of a register, because we
can't assume the upper bits of the register aren't garbage.
Why not have sign-extending variants of the ref operators?
Because that would double the number of ref operators, and we
need the ext bytecode anyway for accessing bitfields.
Why not have constant-address variants of the ref operators?
Because that would double the number of ref operators again, and
const32 address ref32 is only one byte longer.
Why do the refn operators have to support unaligned fetches?
GDB will generate bytecode that fetches multi-byte values at unaligned addresses whenever the executable's debugging information tells it to.
In particular, structure bitfields may be several bytes long, but follow no alignment rules; members of packed structures are not necessarily aligned either.
In general, there are many cases where unaligned references occur in correct C code, either at the programmer's explicit request, or at the compiler's discretion. Thus, it is simpler to make the GDB agent bytecodes work correctly in all circumstances than to make GDB guess in each case whether the compiler did the usual thing.
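One hedged way to satisfy this requirement on machines where a misaligned load raises an exception is to assemble the value with memcpy rather than dereferencing a possibly misaligned pointer; this sketch is one portable implementation strategy, not the agent's actual code:

```c
#include <stdint.h>
#include <string.h>

/* ref32 for arbitrary addresses: copy the bytes instead of issuing a
   direct 32-bit load, so packed struct members and multi-byte
   bitfields at any address are fetched without trapping. */
static uint32_t ref32_unaligned(const void *addr)
{
    uint32_t v;
    memcpy(&v, addr, sizeof v);   /* compiles to safe byte accesses
                                     where the target requires them */
    return v;
}
```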
Why aren't the goto ops PC-relative?
The interpreter has the base address around anyway, so using offsets
from the start of the expression seemed simpler.
Why is there only one offset size for the goto ops?
Offsets are currently sixteen bits. I'm not happy with this situation
either:
Suppose we have multiple branch ops with different offset sizes. As I generate code left-to-right, all my jumps are forward jumps (there are no loops in expressions), so I never know the target when I emit the jump opcode. Thus, I have to either always assume the largest offset size, or do jump relaxation on the code after I generate it, which seems like a big waste of time.
I can imagine a reasonable expression being longer than 256 bytes. I can't imagine one being longer than 64k. Thus, we need 16-bit offsets. This kind of reasoning is so bogus, but relaxation is pathetic.
The other approach would be to generate code right-to-left. Then I'd always know my offset size. That might be fun.
Where is the function call bytecode?
When we add side-effects, we should add this.
Why does the reg bytecode take a 16-bit register number?
Intel's IA-64 architecture has 128 general-purpose registers, and 128 floating-point registers, and I'm sure it has some random control registers.
Why do we need trace and trace_quick?
Because GDB needs to record all the memory contents and registers an
expression touches. If the user wants the value of an expression
x->y->z, the agent must record the values of x and
x->y as well as the value of x->y->z.
Don't the trace bytecodes make the interpreter less general?
They do mean the interpreter contains tracing-specific code, but they
don't limit its usefulness for other purposes; if an application
doesn't need the trace bytecodes, they don't get in
its way.
Why doesn't trace_quick consume its arguments the way everything else does?
In general, you do want your operators to consume their arguments; it's
consistent. However, trace_quick is a kludge to save space; it
only exists so we needn't write dup const8 SIZE trace
before every memory reference. Therefore, it's okay for it not to
consume its arguments; it's meant for a specific context in which we
know exactly what it should do with the stack. If we're going to have a
kludge, it should be an effective kludge.
Why does trace16 exist?
That opcode was added by the customer that contracted Cygnus for the
data tracing work. I personally think it is unnecessary; objects that
large will be quite rare, so it is okay to use dup const16
size trace in those cases.
Whatever we decide to do with trace16, we should at least leave
opcode 0x30 reserved, to remain compatible with the customer who added
it.
One of the challenges of using GDB to debug embedded systems is that there are so many minor variants of each processor architecture in use. It is common practice for vendors to start with a standard processor core -- ARM, PowerPC, or MIPS, for example -- and then make changes to adapt it to a particular market niche. Some architectures have hundreds of variants, available from dozens of vendors. This leads to a number of problems:
With so many different customized processors, it is difficult for
the GDB maintainers to keep up with the changes.
Since individual variants may have short lifetimes or limited
audiences, it may not be worthwhile to carry information about every
variant in the GDB source tree.
When GDB does support the architecture of the embedded system at hand,
the task of finding the correct architecture name to give the
set architecture command can be error-prone.
To address these problems, the GDB remote protocol allows a target system to not only identify itself to GDB, but to actually describe its own features. This lets GDB support processor variants it has never seen before -- to the extent that the descriptions are accurate, and that GDB understands them.
GDB must be linked with the Expat library to support XML target descriptions. See Expat.
G.1 Retrieving Descriptions How descriptions are fetched from a target. G.2 Target Description Format The contents of a target description. G.3 Predefined Target Types Standard types available for target descriptions. G.4 Standard Target Features Features GDB knows about.
Target descriptions can be read from the target automatically, or specified by the user manually. The default behavior is to read the description from the target. GDB retrieves it via the remote protocol using `qXfer' requests (see section qXfer). The annex in the `qXfer' packet will be `target.xml'. The contents of the `target.xml' annex are an XML document, of the form described in G.2 Target Description Format.
Alternatively, you can specify a file to read for the target description. If a file is set, the target will not be queried. The commands to specify a file are:
set tdesc filename path
Read the target description from path.
unset tdesc filename
Do not read the XML target description from a file. GDB will use the
description supplied by the current target.
show tdesc filename
Show the filename to read for a target description, if any.
A target description annex is an XML
document which complies with the Document Type Definition provided in
the GDB sources in `gdb/features/gdb-target.dtd'. This
means you can use generally available tools like xmllint to
check that your feature descriptions are well-formed and valid.
However, to help people unfamiliar with XML write descriptions for
their targets, we also describe the grammar here.
Target descriptions can identify the architecture of the remote target and (for some architectures) provide information about custom register sets. They can also identify the OS ABI of the remote target. GDB can use this information to autoconfigure for your target, or to warn you if you connect to an unsupported target.
Here is a simple target description:
<target version="1.0">
  <architecture>i386:x86-64</architecture>
</target>
This minimal description only says that the target uses the x86-64 architecture.
A target description has the following overall form, with [ ] marking optional elements and ... marking repeatable elements. The elements are explained further below.
<?xml version="1.0"?>
<!DOCTYPE target SYSTEM "gdb-target.dtd">
<target version="1.0">
  [architecture]
  [osabi]
  [compatible]
  [feature...]
</target>
The description is generally insensitive to whitespace and line breaks, under the usual common-sense rules. The XML version declaration and document type declaration can generally be omitted (GDB does not require them), but specifying them may be useful for XML validation tools. The `version' attribute for `<target>' may also be omitted, but we recommend including it; if future versions of GDB use an incompatible revision of `gdb-target.dtd', they will detect and report the version mismatch.
It can sometimes be valuable to split a target description up into several different annexes, either for organizational purposes, or to share files between different possible target descriptions. You can divide a description into multiple files by replacing any element of the target description with an inclusion directive of the form:
<xi:include href="document"/>
When GDB encounters an element of this form, it will retrieve the named XML document and replace the inclusion directive with the contents of that document. If the current description was read using `qXfer', then so will be the included document; document will be interpreted as the name of an annex. If the current description was read from a file, GDB will look for document as a file in the same directory where it found the original description.
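The file-based inclusion behaviour described above can be sketched as follows. This is illustrative code under the assumption of file-based (not `qXfer'-based) reading, and is not GDB's implementation; Python's xml.etree.ElementInclude offers similar standard-library support.

```python
import os
import xml.etree.ElementTree as ET

# Tag name of <xi:include> after namespace expansion.
XI = "{http://www.w3.org/2001/XInclude}include"

def expand_includes(path):
    """Read an XML document and splice in any <xi:include href="..."/>
    elements, resolving hrefs relative to the including file's
    directory, recursively."""
    root = ET.parse(path).getroot()
    base = os.path.dirname(os.path.abspath(path))
    _expand(root, base)
    return root

def _expand(elem, base):
    for i, child in enumerate(list(elem)):
        if child.tag == XI:
            # Replace the directive, in place, with the included document.
            included = expand_includes(os.path.join(base, child.get("href")))
            elem.remove(child)
            elem.insert(i, included)
        else:
            _expand(child, base)
```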
An `<architecture>' element has this form:
<architecture>arch</architecture>
arch is one of the architectures from the set accepted by
set architecture (see section Specifying a Debugging Target).
This optional field was introduced in GDB version 7.0. Previous versions of GDB ignore it.
An `<osabi>' element has this form:
<osabi>abi-name</osabi>
abi-name is an OS ABI name from the same selection accepted by
set osabi (see section Configuring the Current ABI).
This optional field was introduced in GDB version 7.0. Previous versions of GDB ignore it.
A `<compatible>' element has this form:
<compatible>arch</compatible>
arch is one of the architectures from the set accepted by
set architecture (see section Specifying a Debugging Target).
A `<compatible>' element is used to specify that the target
is able to run binaries in some architecture other than the main target
architecture given by the `<architecture>' element. For example, on the
Cell Broadband Engine, the main architecture is powerpc:common
or powerpc:common64, but the system is able to run binaries
in the spu architecture as well. The way to describe this
capability with `<compatible>' is as follows:
<architecture>powerpc:common</architecture>
<compatible>spu</compatible>
Each `<feature>' describes some logical portion of the target system. Features are currently used to describe available CPU registers and the types of their contents. A `<feature>' element has this form:
<feature name="name">
  [type...]
  reg...
</feature>
Each feature's name should be unique within the description. The name of a feature does not matter unless GDB has some special knowledge of the contents of that feature; if it does, the feature should have its standard name. See section G.4 Standard Target Features.
Any register's value is a collection of bits which GDB must interpret. The default interpretation is a two's complement integer, but other types can be requested by name in the register description. Some predefined types are provided by GDB (see section G.3 Predefined Target Types), and the description can define additional composite types.
Each type element must have an `id' attribute, which gives a unique (within the containing `<feature>') name to the type. Types must be defined before they are used.
Some targets offer vector registers, which can be treated as arrays of scalar elements. These types are written as `<vector>' elements, specifying the array element type, type, and the number of elements, count:
<vector id="id" type="type" count="count"/>
If a register's value is usefully viewed in multiple ways, define it with a union type containing the useful representations. The `<union>' element contains one or more `<field>' elements, each of which has a name and a type:
<union id="id">
  <field name="name" type="type"/>
  ...
</union>
If a register's value is composed from several separate values, define it with a structure type. There are two forms of the `<struct>' element; a `<struct>' element must either contain only bitfields or contain no bitfields. If the structure contains only bitfields, its total size in bytes must be specified, each bitfield must have an explicit start and end, and bitfields are automatically assigned an integer type. The field's start should be less than or equal to its end, and zero represents the least significant bit.
<struct id="id" size="size">
  <field name="name" start="start" end="end"/>
  ...
</struct>
If the structure contains no bitfields, then each field has an explicit type, and no implicit padding is added.
<struct id="id">
  <field name="name" type="type"/>
  ...
</struct>
If a register's value is a series of single-bit flags, define it with a flags type. The `<flags>' element has an explicit size and contains one or more `<field>' elements. Each field has a name, a start, and an end. Only single-bit flags are supported.
<flags id="id" size="size">
  <field name="name" start="start" end="end"/>
  ...
</flags>
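To make the bitfield convention concrete (bit 0 is the least significant bit, and a field occupies bits start through end inclusive), here is a small extraction helper; the example register value and field positions are invented for illustration:

```python
def extract_field(value, start, end):
    """Extract bits start..end (inclusive) from an integer register
    value, where bit 0 is the least significant bit."""
    width = end - start + 1
    return (value >> start) & ((1 << width) - 1)

# An invented 8-bit flags register: bit 0 = CF, bit 6 = ZF, bit 7 = SF.
flags = 0b11000001
print(extract_field(flags, 0, 0))  # CF -> 1
print(extract_field(flags, 6, 6))  # ZF -> 1
print(extract_field(flags, 7, 7))  # SF -> 1
```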
Each register is represented as an element with this form:
<reg name="name"
bitsize="size"
[regnum="num"]
[save-restore="save-restore"]
[type="type"]
[group="group"]/>
The components are as follows:
name
The register's name; it must be unique within the target description.
bitsize
The register's size, in bits.
regnum
The register's number. If omitted, a register's number is one greater than that of the previous register (either in the current feature or in a preceding feature); the first register in the target description defaults to zero. This register number is used to read or write the register; e.g. it is used in the remote p and P
packets, and registers appear in the g and G packets
in order of increasing register number.
save-restore
Whether the register should be preserved across inferior function calls; this must be either yes or no. The default is
yes, which is appropriate for most registers except for
some system control registers; this is not related to the target's
ABI.
type
The type of the register. It may be a predefined type, a type defined in the current feature, or one of the special types int
and float. int is an integer type of the correct size
for bitsize, and float is a floating point type (in the
architecture's normal floating point format) of the correct size for
bitsize. The default is int.
group
The register group to which this register belongs. It must be either
general, float, or vector. If no
group is specified, GDB will not display the register
in info registers.
Type definitions in the self-description can build up composite types from basic building blocks, but cannot define fundamental types. Instead, standard identifiers are provided by GDB for the fundamental types. The currently supported types are:
int8
int16
int32
int64
int128
uint8
uint16
uint32
uint64
uint128
code_ptr
data_ptr
ieee_single
ieee_double
arm_fpa_ext
i387_ext
i386_eflags
i386_mxcsr
A target description must contain either no registers or all the target's registers. If the description contains no registers, then GDB will assume a default register layout, selected based on the architecture. If the description contains any registers, the default layout will not be used; the standard registers must be described in the target description, in such a way that GDB can recognize them.
This is accomplished by giving specific names to feature elements which contain standard registers. GDB will look for features with those names and verify that they contain the expected registers; if any known feature is missing required registers, or if any required feature is missing, GDB will reject the target description. You can add additional registers to any of the standard features -- GDB will display them just as if they were added to an unrecognized feature.
This section lists the known features and their expected contents. Sample XML documents for these features are included in the source tree, in the directory `gdb/features'.
Names recognized by GDB should include the name of the company or organization which selected the name, and the overall architecture to which the feature applies; so e.g. the feature containing ARM core registers is named `org.gnu.gdb.arm.core'.
The names of registers are not case sensitive for the purpose of recognizing standard features, but GDB will only display registers using the capitalization used in the description.
G.4.1 AArch64 Features
G.4.2 ARM Features
G.4.3 i386 Features
G.4.4 MIPS Features
G.4.5 M68K Features
G.4.6 PowerPC Features
G.4.7 TMS320C6x Features
G.4.1 AArch64 Features
The `org.gnu.gdb.aarch64.core' feature is required for AArch64 targets. It should contain registers `x0' through `x30', `sp', `pc', and `cpsr'.
The `org.gnu.gdb.aarch64.fpu' feature is optional. If present, it should contain registers `v0' through `v31', `fpsr', and `fpcr'.
G.4.2 ARM Features
The `org.gnu.gdb.arm.core' feature is required for non-M-profile ARM targets. It should contain registers `r0' through `r13', `sp', `lr', `pc', and `cpsr'.
For M-profile targets (e.g. Cortex-M3), the `org.gnu.gdb.arm.core' feature is replaced by `org.gnu.gdb.arm.m-profile'. It should contain registers `r0' through `r13', `sp', `lr', `pc', and `xpsr'.
The `org.gnu.gdb.arm.fpa' feature is optional. If present, it should contain registers `f0' through `f7' and `fps'.
The `org.gnu.gdb.xscale.iwmmxt' feature is optional. If present, it should contain at least registers `wR0' through `wR15' and `wCGR0' through `wCGR3'. The `wCID', `wCon', `wCSSF', and `wCASF' registers are optional.
The `org.gnu.gdb.arm.vfp' feature is optional. If present, it should contain at least registers `d0' through `d15'. If they are present, `d16' through `d31' should also be included. GDB will synthesize the single-precision registers from halves of the double-precision registers.
The `org.gnu.gdb.arm.neon' feature is optional. It does not need to contain registers; it instructs GDB to display the VFP double-precision registers as vectors and to synthesize the quad-precision registers from pairs of double-precision registers. If this feature is present, `org.gnu.gdb.arm.vfp' must also be present and include 32 double-precision registers.
G.4.3 i386 Features
The `org.gnu.gdb.i386.core' feature is required for i386/amd64 targets. It should describe the following registers:
The register sets may be different, depending on the target.
The `org.gnu.gdb.i386.sse' feature is optional. It should describe registers:
The `org.gnu.gdb.i386.avx' feature is optional and requires the `org.gnu.gdb.i386.sse' feature. It should describe the upper 128 bits of YMM registers:
The `org.gnu.gdb.i386.linux' feature is optional. It should describe a single register, `orig_eax'.
G.4.4 MIPS Features
The `org.gnu.gdb.mips.cpu' feature is required for MIPS targets. It should contain registers `r0' through `r31', `lo', `hi', and `pc'. They may be 32-bit or 64-bit depending on the target.
The `org.gnu.gdb.mips.cp0' feature is also required. It should contain at least the `status', `badvaddr', and `cause' registers. They may be 32-bit or 64-bit depending on the target.
The `org.gnu.gdb.mips.fpu' feature is currently required, though it may be optional in a future version of GDB. It should contain registers `f0' through `f31', `fcsr', and `fir'. They may be 32-bit or 64-bit depending on the target.
The `org.gnu.gdb.mips.dsp' feature is optional. It should contain registers `hi1' through `hi3', `lo1' through `lo3', and `dspctl'. The `dspctl' register should be 32-bit and the rest may be 32-bit or 64-bit depending on the target.
The `org.gnu.gdb.mips.linux' feature is optional. It should contain a single register, `restart', which is used by the Linux kernel to control restartable syscalls.
G.4.5 M68K Features
`org.gnu.gdb.m68k.core'
`org.gnu.gdb.coldfire.core'
`org.gnu.gdb.fido.core'
`org.gnu.gdb.coldfire.fp'
G.4.6 PowerPC Features
The `org.gnu.gdb.power.core' feature is required for PowerPC targets. It should contain registers `r0' through `r31', `pc', `msr', `cr', `lr', `ctr', and `xer'. They may be 32-bit or 64-bit depending on the target.
The `org.gnu.gdb.power.fpu' feature is optional. It should contain registers `f0' through `f31' and `fpscr'.
The `org.gnu.gdb.power.altivec' feature is optional. It should contain registers `vr0' through `vr31', `vscr', and `vrsave'.
The `org.gnu.gdb.power.vsx' feature is optional. It should contain registers `vs0h' through `vs31h'. GDB will combine these registers with the floating point registers (`f0' through `f31') and the altivec registers (`vr0' through `vr31') to present the 128-bit wide registers `vs0' through `vs63', the set of vector registers for POWER7.
The `org.gnu.gdb.power.spe' feature is optional. It should contain registers `ev0h' through `ev31h', `acc', and `spefscr'. SPE targets should provide 32-bit registers in `org.gnu.gdb.power.core' and provide the upper halves in `ev0h' through `ev31h'. GDB will combine these to present registers `ev0' through `ev31' to the user.
G.4.7 TMS320C6x Features
The `org.gnu.gdb.tic6x.gp' feature is optional. It should contain registers `A16' through `A31' and `B16' through `B31'.
The `org.gnu.gdb.tic6x.c6xp' feature is optional. It should contain registers `TSR', `ILC' and `RILC'.
H.1 Process list
Users of GDB often wish to obtain information about the state of the operating system running on the target--for example the list of processes, or the list of open files. This section describes the mechanism that makes it possible. This mechanism is similar to the target features mechanism (see section G. Target Descriptions), but focuses on a different aspect of the target.
Operating system information is retrieved from the target via the remote protocol, using `qXfer' requests (see qXfer osdata read). The object name in the request should be `osdata', and the annex identifies the data to be fetched.
When requesting the process list, the annex field in the `qXfer' request should be `processes'. The returned data is an XML document. The formal syntax of this document is defined in `gdb/features/osdata.dtd'.
An example document is:
<?xml version="1.0"?>
<!DOCTYPE target SYSTEM "osdata.dtd">
<osdata type="processes">
<item>
<column name="pid">1</column>
<column name="user">root</column>
<column name="command">/sbin/init</column>
<column name="cores">1,2,3</column>
</item>
</osdata>
Each item should include a column whose name is `pid'. The value of that column should identify the process on the target. The `user' and `command' columns are optional, and will be displayed by GDB. The `cores' column, if present, should contain a comma-separated list of cores that this process is running on. Targets may provide additional columns, which GDB currently ignores.
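A consumer of this format might flatten each `<item>' into a dictionary keyed by column name; the following sketch (hypothetical code, using the example document above) shows one way:

```python
import xml.etree.ElementTree as ET

OSDATA = """<?xml version="1.0"?>
<osdata type="processes">
  <item>
    <column name="pid">1</column>
    <column name="user">root</column>
    <column name="command">/sbin/init</column>
    <column name="cores">1,2,3</column>
  </item>
</osdata>
"""

def parse_processes(xml_text):
    """Turn an osdata 'processes' document into a list of dicts,
    one per <item>, keyed by column name."""
    root = ET.fromstring(xml_text)
    assert root.get("type") == "processes"
    return [
        {col.get("name"): col.text for col in item.findall("column")}
        for item in root.findall("item")
    ]

procs = parse_processes(OSDATA)
print(procs[0]["pid"], procs[0]["command"])   # 1 /sbin/init
```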
The trace file comes in three parts: a header, a textual description section, and a trace frame section with binary data.
The header has the form \x7fTRACE0\n. The first byte is
0x7f so as to indicate that the file contains binary data,
while the 0 is a version number that may have different values
in the future.
The description section consists of multiple lines of ASCII text
separated by newline characters (0xa). The lines may include a
variety of optional descriptive or context-setting information, such
as tracepoint definitions or register set size. GDB will
ignore any line that it does not recognize. An empty line marks the end
of this section.
The trace frame section consists of a number of consecutive frames. Each frame begins with a two-byte tracepoint number, followed by a four-byte size giving the amount of data in the frame. The data in the frame consists of a number of blocks, each introduced by a character indicating its type (at least register, memory, and trace state variable). The data in this section is raw binary, not a hexadecimal or other encoding; its endianness matches the target's endianness.
R bytes
Register block. The number and ordering of bytes matches that in a
g packet in the remote protocol. Note that these are the
actual bytes, in target order and register order, not a
hexadecimal encoding.
M address length bytes...
Memory block. This is a contiguous block of memory, at the 8-byte address address, with a 2-byte length length, followed by length bytes of memory contents.
V number value
Trace state variable block. This records the 8-byte signed value value of trace state variable numbered number.
Future enhancements of the trace file format may include additional types of blocks.
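The fixed-size frame header described above (a two-byte tracepoint number, then a four-byte data size) can be decoded with ordinary binary unpacking. This sketch assumes a little-endian target; the example bytes are invented:

```python
import struct

def read_frame_header(buf, offset, byte_order="<"):
    """Decode a trace frame header: a 2-byte tracepoint number
    followed by a 4-byte data size, in the target's byte order
    ("<" little-endian, ">" big-endian)."""
    tpnum, size = struct.unpack_from(byte_order + "HI", buf, offset)
    return tpnum, size

# Invented example: tracepoint 3, with 16 bytes of block data following.
frame = struct.pack("<HI", 3, 16) + b"\x00" * 16
tpnum, size = read_frame_header(frame, 0)
print(tpnum, size)  # 3 16
```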
.gdb_index section format
This section documents the index section that is created by save
gdb-index (see section 18.4 Index Files Speed Up GDB). The index section is
DWARF-specific; some knowledge of DWARF is assumed in this
description.
The mapped index file format is designed to be directly
mmapable on any architecture. In most cases, a datum is
represented using a little-endian 32-bit integer value, called an
offset_type. Big endian machines must byte-swap the values
before using them. Exceptions to this rule are noted. The data is
laid out such that alignment is always respected.
A mapped index consists of several areas, laid out in order.
The file header. This is a sequence of values, each of type offset_type
unless otherwise noted. The first value is the version number of the
index format. GDB will only read version 4, 5, or 6 indices
by specifying set use-deprecated-index-sections on.
GDB has a workaround for potentially broken version 7 indices so it is
currently not flagged as deprecated.
The CU list. This is a sequence of pairs; the first element in each
pair is the offset of a CU in the .debug_info section, and the second
element in each pair is the length of that CU. References to a CU
elsewhere in the map are done using a CU index, which is just the
0-based index into this table. Note that if there are type CUs, then
conceptually CUs and type CUs form a single list for the purposes of
CU indices.
The address area. This is a sequence of address entries; each entry
gives the low address and high address of a range and the index of its
CU. As with DW_AT_high_pc, the high address value is one byte beyond
the end. The CU index is an offset_type value.
The symbol table. This is an open-addressed hash table.
Each slot in the hash table consists of a pair of offset_type
values. The first value is the offset of the symbol's name in the
constant pool. The second value is the offset of the CU vector in the
constant pool.
If both values are 0, then this slot in the hash table is empty. This is ok because while 0 is a valid constant pool index, it cannot be a valid index for both a string and a CU vector.
The hash value for a table entry is computed by applying an
iterative hash function to the symbol's name. Starting with an
initial value of r = 0, each (unsigned) character `c' in
the string is incorporated into the hash using a formula that depends on the
index version:
For version 4: r = r * 67 + c - 113.
For versions 5 to 7: r = r * 67 + tolower (c) - 113.
The terminating `\0' is not incorporated into the hash.
The step size used in the hash table is computed via
((hash * 17) & (size - 1)) | 1, where `hash' is the hash
value, and `size' is the size of the hash table. The step size
is used to find the next candidate slot when handling a hash
collision.
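The hash and step-size computations above transcribe directly; this sketch implements the versions-5-and-later formula, using 32-bit unsigned (offset_type) wraparound arithmetic:

```python
MASK32 = 0xFFFFFFFF

def gdb_index_hash(name):
    """Iterative hash for .gdb_index symbol names (versions 5 to 7):
    r = r * 67 + tolower(c) - 113, computed in 32-bit unsigned
    arithmetic; the terminating NUL is not incorporated."""
    r = 0
    for c in name.encode("ascii"):
        if 0x41 <= c <= 0x5A:          # ASCII tolower for A-Z
            c += 0x20
        r = (r * 67 + c - 113) & MASK32
    return r

def step_size(h, table_size):
    """Probe step for collision handling; table_size is a power of 2,
    and the '| 1' keeps the step odd so every slot is reachable."""
    return ((h * 17) & (table_size - 1)) | 1

h = gdb_index_hash("main")
print(h % 1024, step_size(h, 1024))
```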
The names of C++ symbols in the hash table are canonicalized. We don't currently have a simple description of the canonicalization algorithm; if you intend to create new index sections, you must read the code.
A CU vector in the constant pool is a sequence of offset_type
values. The first value is the number of CU indices in the vector.
Each subsequent value is the index and symbol attributes of a CU in
the CU list. This element in the hash table is used to indicate which
CUs define the symbol and how the symbol is used.
See below for the format of each CU index+attributes entry.
A string in the constant pool is zero-terminated.
Attributes were added to CU index values in .gdb_index version 7.
If a symbol has multiple uses within a CU then there is one
CU index+attributes value for each use.
The format of each CU index+attributes entry is as follows (bit 0 = LSB):
offset_type value is backwards compatible
with previous versions of the index.
The determination of whether a symbol is global or static is complicated. The authoritative reference is the file `dwarf2read.c' in the GDB sources.
This pseudo-code describes the computation of a symbol's kind and global/static attributes in the index.
is_external = get_attribute (die, DW_AT_external);
language = get_attribute (cu_die, DW_AT_language);
switch (die->tag)
{
case DW_TAG_typedef:
case DW_TAG_base_type:
case DW_TAG_subrange_type:
kind = TYPE;
is_static = 1;
break;
case DW_TAG_enumerator:
kind = VARIABLE;
is_static = (language != CPLUS && language != JAVA);
break;
case DW_TAG_subprogram:
kind = FUNCTION;
is_static = ! (is_external || language == ADA);
break;
case DW_TAG_constant:
kind = VARIABLE;
is_static = ! is_external;
break;
case DW_TAG_variable:
kind = VARIABLE;
is_static = ! is_external;
break;
case DW_TAG_namespace:
kind = TYPE;
is_static = 0;
break;
case DW_TAG_class_type:
case DW_TAG_interface_type:
case DW_TAG_structure_type:
case DW_TAG_union_type:
case DW_TAG_enumeration_type:
kind = TYPE;
is_static = (language != CPLUS && language != JAVA);
break;
default:
assert (0);
}
Copyright (C) 2007 Free Software Foundation, Inc. http://fsf.org/ Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
The GNU General Public License is a free, copyleft license for software and other kinds of works.
The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.
Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions.
Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and modification follow.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based on the Program.
To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.
The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work.
A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.
The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.
The Corresponding Source for a work in source code form is that same work.
All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.
No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.
When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures.
You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.
You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:
A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.
You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:
A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.
"Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.
If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).
The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.
"Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:
All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.
You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).
However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.
Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.
You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.
Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.
A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.
If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.
A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.
If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.
Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such.
The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation.
If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program.
Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.
If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.
one line to give the program's name and a brief idea of what it does.
Copyright (C) year name of author

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program. If not, see http://www.gnu.org/licenses/.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode:
program Copyright (C) year name of author
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see http://www.gnu.org/licenses/.
The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read http://www.gnu.org/philosophy/why-not-lgpl.html.
Copyright (C) 2000, 2001, 2002, 2007, 2008 Free Software Foundation, Inc. http://fsf.org/

Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
The purpose of this License is to make a manual, textbook, or other functional and useful document free in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.
This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.
We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.
This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.
A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.
A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.
The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.
The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.
A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque".
Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.
The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.
The "publisher" means any person or entity that distributes copies of the Document to the public.
A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.
You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly display copies.
If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.
You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.
You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties--for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.
You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements."
You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.
A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.
If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.
Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.
If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.
You may not copy, modify, sublicense, or distribute the Document except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, or distribute it is void, and will automatically terminate your rights under this License.
However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.
Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, receipt of a copy of some or all of the same material does not give you any rights to use it.
The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.
Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation. If the Document specifies that a proxy can decide which future versions of this License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Document.
"Massive Multiauthor Collaboration Site" (or "MMC Site") means any World Wide Web server that publishes copyrightable works and also provides prominent facilities for anybody to edit those works. A public wiki that anybody can edit is an example of such a server. A "Massive Multiauthor Collaboration" (or "MMC") contained in the site means any set of copyrightable works thus published on the MMC site.
"CC-BY-SA" means the Creative Commons Attribution-Share Alike 3.0 license published by Creative Commons Corporation, a not-for-profit corporation with a principal place of business in San Francisco, California, as well as future copyleft versions of that license published by that same organization.
"Incorporate" means to publish or republish a Document, in whole or in part, as part of another Document.
An MMC is "eligible for relicensing" if it is licensed under this License, and if all works that were first published under this License somewhere other than this MMC, and subsequently incorporated in whole or in part into the MMC, (1) had no cover texts or invariant sections, and (2) were thus incorporated prior to November 1, 2008.
The operator of an MMC Site may republish an MMC contained in the site under CC-BY-SA on the same site at any time before August 1, 2009, provided the MMC is eligible for relicensing.
To use this License in a document you have written, include a copy of the License in the document and put the following copyright and license notices just after the title page:
Copyright (C) year your name. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled ``GNU Free Documentation License''.
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the "with...Texts." line with this:
with the Invariant Sections being list their titles, with
the Front-Cover Texts being list, and with the Back-Cover Texts
being list.
If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.
On DOS/Windows systems, the home directory is the one pointed to by the HOME environment variable.
The completer can be confused by certain kinds of invalid expressions. Also, it only examines the static type of the expression, not the dynamic type.
Currently, only GNU/Linux.
Note that some side effects are easier to undo than others. For instance, memory and registers are relatively easy, but device I/O is hard. Some targets may be able to undo things like device I/O, and some may not.
The contract between GDB and the reverse executing target requires only that the target do something reasonable when GDB tells it to execute backwards, and then report the results back to GDB. Whatever the target reports back to GDB, GDB will report back to the user. GDB assumes that the memory and registers that the target reports are in a consistent state, but GDB accepts whatever it is given.
Unless the code is too heavily optimized.
Note that embedded programs (the so-called "free-standing"
environment) are not required to have a main function as the
entry point. They could even have multiple entry points.
The only restriction is that your editor (say ex) recognizes the
following command-line syntax:
ex +number file
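GDB picks up the editor for its edit command from the EDITOR environment variable, so any editor honoring that syntax can be plugged in. A minimal sketch (the editor path and program name are illustrative):

```
$ EDITOR=/usr/bin/ex     # ex accepts `ex +number file'
$ export EDITOR
$ gdb ./hello
(gdb) edit 10            # opens the current source file at line 10 in ex
```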
`b' cannot be used because these format letters are also
used with the x command, where `b' stands for "byte";
see Examining Memory.
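By way of contrast, a sketch of the x command, where `b' does mean "byte" (the symbol name is illustrative):

```
(gdb) x/4xb &flags       # display four bytes, in hexadecimal, starting at flags
```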
This is a way of removing
one word from the stack, on machines where stacks grow downward in
memory (most machines, nowadays). This assumes that the innermost
stack frame is selected; setting $sp is not allowed when other
stack frames are selected. To pop entire frames off the stack,
regardless of machine architecture, use return;
see Returning from a Function.
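As a sketch, on a machine whose stack grows downward and whose words are 4 bytes (both assumptions here), discarding one word from the innermost frame might look like this:

```
(gdb) set $sp += 4       # pop one 4-byte word from a downward-growing stack
```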
In non-stop mode, it is moderately rare for a running thread to modify the stack of a stopped thread in a way that would interfere with a backtrace, and caching of stack reads provides a significant speed up of remote backtraces.
This is the minimum. Recent versions of GCC support `-gdwarf-3' and `-gdwarf-4'; we recommend always choosing the most recent version of DWARF.
If you want to specify a local system root using a directory that happens to be named `remote:', you need to use some equivalent variant of the name like `./remote:'.
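For instance, a sketch of both spellings (the directory name is illustrative):

```
(gdb) set sysroot remote:      # read files from the remote target itself
(gdb) set sysroot ./remote:    # read files from the local directory named `remote:'
```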
If you choose a port number that
conflicts with another service, gdbserver prints an error message
and exits.
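A sketch of picking a free port and connecting to it (the port number, host name, and program name are illustrative):

```
$ gdbserver :2345 ./a.out          # listen for GDB on TCP port 2345
(gdb) target remote myhost:2345    # connect from the GDB side
```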
In `gdb-7.7.1/gdb/refcard.ps' of the version 7.7.1 release.
The `qP' and `qL' packets predate these conventions, and have arguments without any terminator for the packet name; we suspect they are in widespread use in places that are difficult to upgrade. The `qC' packet has no arguments, but some existing stubs (e.g. RedBoot) are known to not check for the end of the packet.
Summary of GDB
1. A Sample Session
2. Getting In and Out of GDB
3. GDB Commands
4. Running Programs Under GDB
5. Stopping and Continuing
6. Running programs backward
7. Recording Inferior's Execution and Replaying It
8. Examining the Stack
9. Examining Source Files
10. Examining Data
11. Debugging Optimized Code
12. C Preprocessor Macros
13. Tracepoints
14. Debugging Programs That Use Overlays
15. Using GDB with Different Languages
16. Examining the Symbol Table
17. Altering Execution
18. GDB Files
19. Specifying a Debugging Target
20. Debugging Remote Programs
21. Configuration-Specific Information
22. Controlling GDB
23. Extending GDB
24. Command Interpreters
25. GDB Text User Interface
26. Using GDB under GNU Emacs
27. The GDB/MI Interface
28. Annotations
29. JIT Compilation Interface
30. In-Process Agent
31. Reporting Bugs in GDB
A. In Memoriam
B. Formatting Documentation
C. Installing GDB
D. Maintenance Commands
E. GDB Remote Serial Protocol
F. The GDB Agent Expression Mechanism
G. Target Descriptions
H. Operating System Information
I. Trace File Format
J. .gdb_index section format
K. GNU GENERAL PUBLIC LICENSE
L. GNU Free Documentation License
Concept Index
Command, Variable, and Function Index