Tuesday, September 8, 2015

Number of Processors, Linux, and Make

[Much of this post is from here, http://www.binarytides.com/linux-cpu-information/]

For those who use Windows, there is an environment variable that comes pre-loaded, NUMBER_OF_PROCESSORS.

This variable contains the number of processors, which a program may want to know to allow for parallel operations.

But in this day and age, the number of physical cores and the number of processors reported may differ by a factor of two. The technology that allows this is called hyperthreading, and it has been around for more than ten years.

A CPU chip may have two physical cores, but with hyperthreading, the operating system is presented with four processors.

Under Windows, NUMBER_OF_PROCESSORS is the hyperthreaded (logical) count.

Under Linux, we can run a set of commands and get both.

# Get the number of physical cores, not logical (hyperthreaded) processors.
# A 'core id' line appears once per logical CPU, so the duplicates must be
# removed with 'sort -u' to count physical cores (on a single-socket machine).
NUMBEROFPROCESSORS=`grep 'core id' /proc/cpuinfo | sort -u | wc -l`
echo "Number of Processors="$NUMBEROFPROCESSORS

# Get the number of logical (hyperthreaded) processors
NUMBEROFHTTPROCESSORS=`grep processor /proc/cpuinfo | wc -l`
echo "Number of Hyperthreaded Processors="$(($NUMBEROFHTTPROCESSORS-$NUMBEROFPROCESSORS))

The variable NUMBEROFPROCESSORS is created by running grep over /proc/cpuinfo to select the core id lines; sort -u removes the duplicates (one core id line appears per logical CPU), and wc -l counts the lines that remain. The hyperthreaded count simply counts the processor lines.

My results are as follows. I'm running on an Intel(R) Core(TM)2 Duo T9400, which does not support hyperthreading.

Number of Processors=2
Number of Hyperthreaded Processors=0
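
As an aside, if your distribution ships nproc (coreutils) and lscpu (util-linux), you do not need to parse /proc/cpuinfo by hand:

# Logical CPU count, the analogue of NUMBER_OF_PROCESSORS on Windows
nproc

# Topology summary: sockets, cores per socket, threads per core
lscpu | grep -E '^(Socket|Core|Thread|CPU\(s\))'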

We can use this information with the -j argument for make.

# Get the number of logical processors, including hyperthreading
NUMBEROFPROCESSORS=`grep processor /proc/cpuinfo | wc -l`
make -j$NUMBEROFPROCESSORS

Now make will use all the logical processors (including hyperthreading). We don't have to change the make script each time we get more cores on our VM or laptop.

It is mentioned somewhere (I can't find the URL) that -j has no effect when running under MS-DOS. I don't know if this has changed, or which versions of Windows are affected. nmake, supplied with Visual Studio, is said to support multi-core (parallel) operation. StackOverflow discussion here.

Using -j, in general, assumes that the build is not highly serialized by dependencies; there should be lots of independent leaves in the build tree.

More details on cpuinfo can be found here.

Nice discussion on the whys and wherefores of processors, physical ids, and cores here.

Busy - Sluggish

Now that you have enabled make to use all the available processors, be prepared for your system to get sluggish.

On a build machine you want to build fast. On a development machine you want to do something else as well, but you just told make to use all the available CPUs. Ctrl-C might not even be effective for a while.

Your mileage may vary. Forewarned is forearmed.
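
One mitigation on a development machine is GNU make's -l (--load-average) switch, which stops make from starting new jobs while the system load average is above a limit; nice lowers the build's priority further. A sketch:

# Use all logical CPUs, but stop spawning new jobs when the load
# average exceeds that count, and run at reduced priority.
nice make -j"$(nproc)" -l"$(nproc)"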

Number of Processors + 1

There are many discussions that state the value for the make argument -j should be the number of processors +1, even +2. The idea is that make (and compiling) is an I/O intensive activity. While waiting for one compile to finish, another can be prepared.

At this URL, Agostino Sarubbo has shown that one should use the number of processors and no more.

The reason for this would be the subject of a separate blog. Exercise for the reader. :)

References

All the in-line links listed in one place.






Thursday, August 27, 2015

IAR, Vybrid, Low Power Timer, and Getting Started Example

This is a short blog on a tiny issue I found working with the Freescale Vybrid Tower evaluation board and the IAR IDE 'Getting Started' example.

The Getting Started example has two interrupts: one for a periodic timer using the Low Power Timer (0), and a hardware interrupt for a button, SW1.

There are two IRQs, one for the timer, and one for the button.

The button IRQ outputs the number of times the button has been pushed each time the button is pushed. The timer IRQ blinks a blue LED, off for 1/2 second and on for 1/2 second.

I decided to tweak the example. I want to report the value of the timer counter when the button is pushed. I should see a value from 0 to 500, as the periodic timer is set to 1kHz.

I used the existing IAR symbol, LPTMR0->CNR, to read the timer counter.

Each time I pushed the button the value for CNR was 0.

The Low Power Timer is documented in the Freescale Vybrid reference manual, pp. 1910-1913.

On page 1913, 41.4.5 LPTMR Counter, it states

The CNR cannot be initialized, but can be read at any time. On each read of the CNR, software must first write to the CNR with any value. This will synchronize and register the current value of the CNR into a temporary register. The contents of the temporary register are returned on each read of the CNR.

Thus one must first write to CNR, then read. The write does not change the counter; it just latches the current count into a temporary register that the next read returns.

LPTMR0->CNR = 1;
printf("Timer: %u\n", (unsigned)LPTMR0->CNR);

Unfortunately, the first line of code gives a compiler error. The error states that CNR cannot be modified.

The structure in the Getting Started example for the Vybrid tower that defines the LPTMR registers (MVF50GS10MK50.h) declares CSR, PSR, CMR, and CNR. The registers CSR, PSR, and CMR are defined __IO uint32_t; __IO means read/write.

CNR is defined __I uint32_t, read only. Wrong.

The CNR definition must be changed to __IO uint32_t so that CNR can be written, and therefore read.

/** LPTMR - Register Layout Typedef */
typedef struct {
  __IO uint32_t CSR;  /**< Low Power Timer Control Status Register, offset: 0x0 */
  __IO uint32_t PSR;  /**< Low Power Timer Prescale Register, offset: 0x4 */
  __IO uint32_t CMR;  /**< Low Power Timer Compare Register, offset: 0x8 */
  __IO uint32_t CNR;  /* was __I uint32_t */  /**< Low Power Timer Counter Register, offset: 0xC */
} LPTMR_Type;



Oops.

As I said, a tiny issue.
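
For reference, the latch-then-read sequence can be wrapped in a small helper (a minimal sketch; lptmr_read_counter is my name for it, and it assumes the corrected MVF50GS10MK50.h header from the example):

#include <stdint.h>
#include "MVF50GS10MK50.h"  /* device header from the Getting Started example */

/* Read the LPTMR counter per RM 41.4.5: writing any value to CNR
   latches the current count into a temporary register, and the
   following read returns that latched value. */
static uint32_t lptmr_read_counter(void)
{
    LPTMR0->CNR = 0;     /* written value is ignored; the write only latches */
    return LPTMR0->CNR;  /* read back the latched count */
}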

Thursday, August 20, 2015

Yet Another Make Tutorial - VI

Last post we created a framework to build a library, a unit test program, and a production program. Let's make a few changes to introduce the building of a library.

[This is a continuation of posts I, II, III, IV, and V. Make files are found here.]

Building a library requires a different program than clang, clang++, gcc, or g++. The program is called ar, for archive.

The standard arguments are:

ar rvf <libraryname>.a <object files>

Note: when using the -l switch to build a program, gcc (and clang) requires that the library file name start with the three letters lib, and those three letters are not supplied with the -l switch. The .a extension is also assumed.

For example:

# Create libmylibrary.
ar rvf libmylibrary.a iseven.o isnumber.o

# Use libmylibrary.a with gcc (main.o stands in for your objects)
gcc -o aprogram main.o -lmylibrary

The ar program will be used for our $(TARGET): $(OBJS_C) $(OBJS_CXX) command in our make file in the lib directory.

The $(TARGET): $(OBJS_C) $(OBJS_CXX) command in the target and unitest directories will have -lmylibrary added.

In addition, gcc (or clang) needs to know the path to the library. That is supplied with the -L switch (uppercase L): -L../lib/target
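
Putting the two switches together, the link line for this tutorial's layout would look like this (main.o standing in for the program's objects):

gcc -o aprogram main.o -L../lib/target -lmylibrary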

Hard coding the library name in two places is not good coding practice.

In addition, the dependency list has to have the library added.

$(TARGET): $(OBJS_C) $(OBJS_CXX) ../lib/target/libmylibrary.a

Now there are three hard coded locations for the library.

Let's solve this problem using make variables.

One variable can hold the -L path and one variable can hold the -l library name.

But we have a problem: if the variables are defined in the lib directory make file, they are not seen by the unitest and target make files, because the lib make file is a child of the parent make file. Variables don't flow up from child make files.

Instead, we can define the pieces of the -L and -l values in the parent and pass them as variables to the child make files. Then the only change ever needed in the parent make file is the name of the library.

# makefile17

LIB := lib
UNITEST := unitest
TARGET := target
MYLIBRARY := mylibrary

.PHONY: all clean

all: build_unitest build_target

clean:
  $(MAKE) -C $(TARGET) -f makefile17target clean
  $(MAKE) -C $(UNITEST) -f makefile17unitest clean
  $(MAKE) -C $(LIB) -f makefile17lib clean

build_target: build_lib 
  $(MAKE) -C $(TARGET) -f makefile17target LIBRARY=$(MYLIBRARY) LIBDIR=$(LIB)

build_unitest: build_lib
  $(MAKE) -C $(UNITEST) -f makefile17unitest LIBRARY=$(MYLIBRARY) LIBDIR=$(LIB)

build_lib:
  $(MAKE) -C $(LIB) -f makefile17lib LIBRARY=$(MYLIBRARY)


We've added the variable MYLIBRARY, and we have added arguments to the make commands.

The argument of the form

<name>=<string>

defines <name> as a make variable containing <string>. The variable is then available when that make file runs. (A variable set on the command line also overrides an ordinary assignment of the same name inside the make file.)

For the build_lib target, makefile17lib will have LIBRARY defined as mylibrary.
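
The mechanism can be seen in isolation with a throwaway make file (demo.mk is a hypothetical name; the echo recipe line must begin with a tab):

# demo.mk -- run with: make -f demo.mk NAME=world
all:
	@echo Hello $(NAME)

Running make -f demo.mk NAME=world prints Hello world.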

We will adopt a convention for directory names. The directory under lib where libmylibrary.a will be built is target. Yes, it is hard coded. A variable could be created, LIB_DIR_TARGET, but target will be fine for this tutorial.

Below are the other Make16 make files updated to use the new arguments. [These are not yet complete makefile17 files.]


# Makefile17lib

# Build the library

LIB_NAME := lib$(LIBRARY).a
TARGET_DIR := target

.PHONY: all clean

all: $(TARGET_DIR)/$(LIB_NAME)

clean:

$(TARGET_DIR)/$(LIB_NAME) :
  mkdir -p $(TARGET_DIR)
  touch $(TARGET_DIR)/$(LIB_NAME)



# makefile17unitest

LIB_NAME := lib$(LIBRARY).a


.PHONY: all clean

all: unitest

clean:

unitest: ../$(LIBDIR)/target/$(LIB_NAME)
  echo Build unitest



# makefile17target

# Build the target program

LIB_NAME := lib$(LIBRARY).a

.PHONY: all clean

all: aprogram

clean:

aprogram: ../$(LIBDIR)/target/$(LIB_NAME)
  echo Build target


The make files still don't do much, but the framework is taking shape. We have parameterized the library name.

The next post adds some code, bringing in the make file commands from makefile15 to build a library, a unit test, and a program.

Next blog  VII.

Saturday, August 15, 2015

Yet Another Make File Tutorial - V

The past four tutorials, I, II, III, and IV, created a good make file for a sub-directory of sources, kept the build directories clean and neat, and configured make to run a bit faster.

[Again the source for these tutorials is found here.]

Note: These make files are NOT POSIX compliant.

These next tutorials will build a production quality set of make files that handle libraries, unit tests, and the production program, setting up a framework for production and unit test programs. There will be more discussion about compilers, linkers, unit tests, and libraries, as well as make files. Using make files also satisfies the requirement of one-button builds for build tools such as Jenkins.

Unit tests are the sanity checkers for programmers. They make you feel good because they prove that you haven't messed up with your last set of changes. But building unit tests and that program you want to ship for $$s with the same code takes some planning.

A unit test program has a main(). Your program has a main(). You can't have two main() functions in the same program.

Solution: A library or libraries for your code that isn't main(). Each library gets a directory. Each unit test gets a directory, and the production program gets a directory. Thus three directories, lib, unitest, and target.

Each directory will need a make file. A master make file is required to 'run' each of the other make files.

This tutorial will introduce new make syntax and features.

Our first make file is not a make file that compiles code. It calls other make files. Some of the more experienced readers will see we are starting down the path to make file recursion, where make calls make.

There is an article detailing this topic here. The argument is to create a single make file instead of a series of recursively called make files.

Let's start.

The new directory Make16 is where we start.

# makefile16

LIB := lib
UNITEST := unitest
TARGET := target

.PHONY: all clean

all: build_unitest build_target

clean:
  $(MAKE) -C $(TARGET) -f makefile16target clean
  $(MAKE) -C $(UNITEST) -f makefile16unitest clean
  $(MAKE) -C $(LIB) -f makefile16lib clean

build_target: build_lib
  $(MAKE) -C $(TARGET) -f makefile16target

build_unitest: build_lib
  $(MAKE) -C $(UNITEST) -f makefile16unitest

build_lib:
  $(MAKE) -C $(LIB) -f makefile16lib

[Links for more details here.]

The -C switch changes the working directory to the next argument. When using -C the option -w is automatically turned on. The -w option outputs Entering <directory> and Leaving <directory>. This helps with debugging.
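
The messages look something like this (the directory is from my machine; yours will differ):

make[1]: Entering directory `/home/user/Make16/lib'
make[1]: Leaving directory `/home/user/Make16/lib'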

The $(MAKE) variable is a special variable of GNU make. As has been said before, this tutorial is NOT writing POSIX make files.

The make files for lib, unitest, and target are below. These are just shells; they contain no source code. make still outputs its messages, there are no errors, and the framework is shown to be correct.

# Makefile16lib

# Build the library

.PHONY: all clean

all: alibrary

clean:


alibrary:

================

# makefile16unitest

.PHONY: all clean

all: unitest

clean:

unitest:

===============

# makefile16target

# Build the target program

.PHONY: all clean

all: aprogram

clean:

aprogram:


To run, enter

make -f makefile16

or

make -f makefile16 clean


If you don't want all of the Entering and Leaving messages, add the -s switch.

make -s -f makefile16

No output appears.

Next blog we'll add more details about the library. VI.


Thursday, August 13, 2015

Yet Another Makefile Tutorial - IV

The three previous tutorials on make files, I, II, and III, discussed the contents of a make file.

This tutorial will discuss the program make itself, some switches, and internals to help with performance.

make has a lot of built-in defaults. The defaults allow one to create a quick make file: no dependencies have to be written, because the simple dependency patterns are already defined. For example, consider a directory containing *.c files.

SRC_LIST := $(wildcard *.c)
OBJ_LIST := $(SRC_LIST:.c=.o)

aprogram : $(OBJ_LIST)
    gcc $^ -o $@

That's it. make has a built-in pattern rule for %.o: %.c that runs the C compiler.
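
You can see the rule itself with make -p. Paraphrasing what GNU make prints for the built-in C rule (check your own make -p output, as versions differ):

%.o: %.c
#  recipe to execute (built-in):
	$(COMPILE.c) $(OUTPUT_OPTION) $<

# where the rules database also defines:
# COMPILE.c = $(CC) $(CFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c
# OUTPUT_OPTION = -o $@
# CC = cc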

make --help

will show all the arguments supported by make.

-d  Print a lot of debugging information.

The debugging referred to is about patterns and rules. It is not about variables. [You have to use $(info ...) for variables as discussed in II.]

Create an empty make file and debug it.

touch makefile
make -d

A partial output from make is below.

GNU Make 3.82
Built for x86_64-redhat-linux-gnu
Copyright (C) 2010  Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Reading makefiles...
Reading makefile `makefile'...
Updating makefiles....
 Considering target file `makefile'.
  Looking for an implicit rule for `makefile'.
  Trying pattern rule with stem `makefile'.
  Trying implicit prerequisite `makefile.o'.
  Trying pattern rule with stem `makefile'.
  Trying implicit prerequisite `makefile.c'.
  Trying pattern rule with stem `makefile'.
  Trying implicit prerequisite `makefile.cc'.
  Trying pattern rule with stem `makefile'.
  Trying implicit prerequisite `makefile.C'.
  Trying pattern rule with stem `makefile'.
. . . . . . . .

Examining the complete listing the following file extensions are found.

c, cc, C, o, cpp, p, f, F, m, r, s, S, mod, sh, v, y, l, and w.

There are many more implicit patterns and rules than one needs for a simple set of C or C++ files.

Processing all the implicit patterns does slow make down.

-r --no-builtin-rules  Disable built-in implicit rules.

make -d -r

The entire debug output is below.

GNU Make 3.82
Built for x86_64-redhat-linux-gnu
Copyright (C) 2010  Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Reading makefiles...
Reading makefile `makefile'...
Updating makefiles....
 Considering target file `makefile'.
  Looking for an implicit rule for `makefile'.
  No implicit rule found for `makefile'.
  Finished prerequisites of target file `makefile'.
 No need to remake target `makefile'.

Let's go back to the Make15 directory and use -d and -r.

make -d -f makefile15

2689 lines of output for 4 files!

Now

make -d -r -f makefile15

Only 152 lines of output.

For a project with just 4 files, the difference between using -r and not using it probably isn't noticeable, but in a large project that uses only one or two patterns, -r can save time.

There is a special target that has a similar effect to -r: .SUFFIXES:

Create a make file with just .SUFFIXES:

echo .SUFFIXES: > makefile
make -d

Similar short output, but not as short as -r.

Two methods of saving time with make, -r or .SUFFIXES:

Saving time is what make is all about.
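
In a make file, the .SUFFIXES: approach looks like this (a minimal sketch; the pattern rule re-adds the one rule the project actually needs):

# Clear the suffix list, disabling the old-style suffix rules.
.SUFFIXES:

# Re-add only the pattern rule this project needs.
%.o: %.c
  $(CC) -c $< -o $@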

A page discussing many of the implicit rules.
https://www.gnu.org/software/make/manual/html_node/Catalogue-of-Rules.html

Next post on make files is here, V.

-----------------------------------

P.S. Is there a way to find out ALL the implicit rules and patterns?

Try

make -p

http://stackoverflow.com/questions/16842930/why-does-gnu-make-define-implicit-pattern-and-implicit-suffix-rules

More details on .SUFFIXES:

https://www.gnu.org/software/make/manual/html_node/Suffix-Rules.html



Tuesday, August 11, 2015

Yet Another Make Tutorial - III

In the first Yet Another Make Tutorial a simple make file, makefile10a, was created to handle the organization of a parent directory, a source directory and an include directory. Yet Another Make Tutorial - II updated makefile10a to include sub-directories in the source directory, makefile13, and makefile14.

All files are found in a BitBucket archive here.

Both make files 13 and 14 dealt with only one source file extension *.c and only one compiler gcc.

Here we are going to expand to add C++ source files and clang.

clang is an up-and-coming compiler, part of the LLVM project, that aims to replace gcc. clang.llvm.org.

Pre-built images are available for Windows and Linux.

In makefile13, the source files were selected by the statement


SRC_LIST := $(shell find $(SRC_DIR) -name "*.c" -type f)

If we want to use this make file for both C and C++ files changes need to be made.

Typically the compiler options used for building C files are not the same as those used for C++ files; in fact, gcc is used for C files and linking, whereas g++ is used for C++ files.

We will need a second set of variables to handle C++ files.

We will also create a set of mixed source files, main.cpp, suba.cpp, subc.c, and subd.c.

These new files are found in the folder Make15 of the BitBucket repository.

Something that has been missing from the tutorials is the standard variable names used for the compiler, CC and CXX.

Let's add the CC variable, used to define the C compiler and CXX to define the C++ compiler. The default for make is cc for the C compiler and g++ for the C++ compiler. We will define both just to be clear.

CC = gcc
CXX = g++

Now add the new set of variables for the C++ source files and objects. The C and C++ source files will be mixed together in the same source tree.

SRC_LIST_CXX := $(shell find $(SRC_DIR) -name "*.cpp" -type f)
SRC_DIR_LIST_CXX := $(patsubst %/,%,$(sort $(dir $(SRC_LIST_CXX))))
OBJS_CXX := $(patsubst %.cpp,$(OBJ_DIR)/%.o,$(notdir $(SRC_LIST_CXX)))
DEPS_CXX := $(OBJS_CXX:.o=.d)

Remember to tell make where to find the *.cpp files via vpath.

vpath %.cpp $(SRC_DIR_LIST_CXX)


These updates result in makefile15.

#makefile15
SRC_DIR := src
OBJ_DIR := obj
INC_DIR := inc
SRC_LIST_C := $(shell find $(SRC_DIR) -name "*.c" -type f)
SRC_DIR_LIST_C := $(patsubst %/,%,$(sort $(dir $(SRC_LIST_C))))
OBJS_C := $(patsubst %.c,$(OBJ_DIR)/%.o,$(notdir $(SRC_LIST_C)))
DEPS_C := $(OBJS_C:.o=.d)
SRC_LIST_CXX := $(shell find $(SRC_DIR) -name "*.cpp" -type f)
SRC_DIR_LIST_CXX := $(patsubst %/,%,$(sort $(dir $(SRC_LIST_CXX))))
OBJS_CXX := $(patsubst %.cpp,$(OBJ_DIR)/%.o,$(notdir $(SRC_LIST_CXX)))
DEPS_CXX := $(OBJS_CXX:.o=.d)
TARGET_DIR := target
TARGET := $(TARGET_DIR)/aprogram
vpath %.c $(SRC_DIR_LIST_C)
vpath %.cpp $(SRC_DIR_LIST_CXX)
.PHONY: all clean

CC = gcc
CXX = g++

all: $(TARGET)

clean:
  @rm -f $(OBJ_DIR)/*.o $(OBJ_DIR)/*.d
  @rm -f $(TARGET)
  @rmdir $(OBJ_DIR)
  @rmdir $(TARGET_DIR)

$(TARGET): $(OBJS_C) $(OBJS_CXX)
  gcc $^ -lstdc++ -o $@

$(OBJ_DIR)/%.o: %.c
  $(CC) -c -MMD -I $(INC_DIR) $< -o $@

$(OBJ_DIR)/%.o: %.cpp
  $(CXX) -c -MMD -I $(INC_DIR) $< -o $@

REQUIRED_DIRS := $(OBJ_DIR) $(TARGET_DIR)
_MKDIRS := $(shell for d in $(REQUIRED_DIRS); \
 do              \
 mkdir -p $$d;   \
 done)

-include $(DEPS_C)
-include $(DEPS_CXX)

Let's review each of the updates.

The new C++ variables have been discussed above.

The new vpath statement has been discussed above.

The new CC and CXX variables have been discussed above.

$(TARGET): $(OBJS_C) $(OBJS_CXX)
  gcc $^ -lstdc++ -o $@

The linkage statement adds $(OBJS_CXX), which makes $(TARGET) dependent on the C++ object files as well as the C object files. Because plain gcc is used for the link, -lstdc++ is included so the C++ runtime library is pulled in.

$(OBJ_DIR)/%.o: %.cpp
  $(CXX) -c -MMD -I $(INC_DIR) $< -o $@

The new dependent statement builds the C++ object files from the C++ sources, specifically *.cpp extensions.

-include $(DEPS_CXX)

Finally, the second -include statement to add the *.d dependencies for C++ files.

[Note: To use *.cc or *.cxx extensions instead, one can change the 'find' statement for SRC_LIST_CXX and the OBJ_DIR dependency.]

If more than one extension is used, two sets of code mixed together, one can add an additional -name argument using the -o, the or operator.

SRC_LIST_CXX := $(shell find $(SRC_DIR) -type f \( -name "*.cpp" -o -name "*.cc" \))

However, just adding -o -name "*.cc" is not all that is required to handle multiple extensions: a new pattern rule, another vpath, and the OBJS variables also need updating, as the sketch below shows. Another post will go into the details.
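
A rough sketch of those extra pieces, untested and ahead of that post:

# Collect the *.cc sources separately so they can drive their own rule.
SRC_LIST_CC := $(shell find $(SRC_DIR) -type f -name "*.cc")
OBJS_CC := $(patsubst %.cc,$(OBJ_DIR)/%.o,$(notdir $(SRC_LIST_CC)))

vpath %.cc $(SRC_DIR_LIST_CXX)

$(OBJ_DIR)/%.o: %.cc
  $(CXX) -c -MMD -I $(INC_DIR) $< -o $@

# $(OBJS_CC) must also be added to the $(TARGET) dependency list.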

Now let's change to use clang. Simple: update CC and CXX.


CC = clang
CXX = clang++

Both C and C++ files are handled by clang.

Another step forward to more make know-how.

Next blog on make performance, IV.

Monday, July 27, 2015

Using GDB with a target processor that does not have hardware breakpoints.

--This is an expansion of a previous blog GDB and Symbol Tables.--

GNU Debugger, GDB, is one of the most popular debuggers today.

It has a lot of features. Among the features that don't get much discussion are convenience variables and user-defined commands.

I work with firmware. GDB can be used with processors that are not the host processor where you write and compile your code. However, additional GDB commands must be used to provide this feature.

I found a very good use for both of these neglected features when working on firmware on legacy (old) hardware.

Motorola 68000

Microprocessors today support hardware breakpoints. The microprocessor provides these breakpoints as part of its silicon; no additional support or tools required, JTAG enabled and all that rot.

But what happens when you have to work with legacy devices? The Motorola 68000 series microprocessor is such an example.

When the M68K was first introduced, the tool to use was an ICE, In-Circuit Emulator. A large connector with the same footprint as the CPU was inserted into the circuit board in place of the M68K, and an M68K was placed on top of this connector. The ICE device then controlled ALL the pins of the M68K. The ICE also had RAM that mirrored the memory space on the circuit board. In essence, a complete clone.

ICEs were expensive. One could still use an ICE today, if you can find one, and it is still working. :)

A simpler method was developed for the M68K, and later families of Motorola processors (today Freescale), BDM, Background Debug Mode.

BDM is a serial bus dedicated to debugging supported by a simple header.

Using BDM, one can read and write registers, read and write RAM, read from ROM and reflash programs, but it has very limited instruction debugging. This makes debugging complex programs difficult. BDM was a predecessor to JTAG.

Enter the convenience variables and user-defined commands.

GDB Server

To use GDB with any target hardware, one needs a GDB server to translate the GDB commands to those supported by the target hardware, along with sending the commands over a wire from the host to the target and back.

This is done in two ways: with a software GDB server or with a hardware GDB server. With a software GDB server, a GDB server program runs on the host and talks to the target via dedicated hardware. With a hardware GDB server, a hardware device communicates with the host, via Ethernet or RS-232, and has a built-in GDB server as well as its own set of commands; no software runs on the host.

With either choice, you must inform GDB where the GDB server is located.

.GDBINIT

.gdbinit is the default file that GDB reads before it starts debugging your program. If the file doesn't exist, GDB doesn't complain about the missing file, but GDB won't talk to the hardware where you want to debug your program.

.gdbinit is a file that establishes the environment you want GDB to use with the target. You specify the DDR memory register values. You specify valid memory address ranges. You specify chip selects, etc. Anything that needs to be specified to be able to read and write to registers and memory of the target hardware.

Your program does all this, but GDB needs to talk to the hardware before your program starts.

In addition to configuring the remote target hardware, GDB needs to know where the hardware is.

target remote 192.168.0.10

This would be a typical command line added to .gdbinit, one of the first lines of the file.

One does not have to use the file name .gdbinit. The command switch -x <filename> can be used to specify a file other than .gdbinit. On Linux systems, .gdbinit will be a hidden file. On Windows systems it is sometimes difficult to create a file whose name starts with a period. [Notepad++ helps with this.]
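
For example (bdi.gdb and myprogram.elf are hypothetical names):

gdb -x bdi.gdb myprogram.elf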


Abatron BDI2000

Abatron, of Switzerland, produced a device, BDI2000, that supports the Motorola BDM and the CPU32 family of Motorola processors.

The Abatron BDI2000 is an example of a hardware GDB server.

Abatron BDI2000 User's Manual

[BTW, this device is no longer manufactured. Abatron announced in 2015 that they will be closing their doors. There are several listings on eBay.]

 The BDI2000 supports two native M68000 commands, ti and tc.

The ti command steps to the next machine instruction.

The tc command runs until there is a change in the flow of instructions; essentially, it stops when there is a jump instruction.

The BDI2000 does not disassemble machine instructions. We use GDB to perform this task.

The Motorola 68332 starts at location 0. The first 32-bit value is the initial top of the stack. The second 32-bit value is the address of the first instruction.

Upon 'reset' using the BDI2000, the 68332 program counter is at location 0x4.

How does one debug code written for this hardware?

How does one use GDB with such hardware?

You say, "Can't you just use software breakpoints?" Yes, if the program is running out of RAM. If the program runs out of flash or EEPROM, software breakpoints are not possible: a software breakpoint works by replacing the instruction at the break location with a BREAK interrupt instruction, and flash does not allow this substitution.

An alternative is to substitute a RAM chip for the Flash chip and download the program with each power on.

But let's say we don't want to change the hardware. We need to test and debug it the way it is.

Today, with C/C++ compilers, debugging in assembly is only for the most hard core programmers or the most difficult hardware bugs.

Convenience Variables and User-Defined Commands

Using convenience variables and user-defined commands, we are going to create break point commands, break on memory change, and other commands.

The GDB command that allows access to the native BDI2000 commands is monitor or mon for short.

To list the M68K registers, enter

 mon rd

GDB has the equivalent

  info reg

But mon rd has a very compact presentation. GDB presents each register on a separate line.

mon reset will perform a soft reset of the CPU.

The BDI2000 converts the GDB command si into ti, and ni into ti as well. The normal n and s commands do not work, as they expect native hardware breakpoints or software breakpoints.

We will simulate s and n with user-defined commands. The user-defined commands are added to .gdbinit.

User-Defined Commands

The format of a user-defined command is

define <command>
<body of command>
end
document <command>
<documentation>
end

The documentation written with the command is echoed when you type the GDB command help <command>.
All the user-defined commands are listed with the GDB command help user-defined.


Convenience Variables

A convenience variable is defined as

set <variable> = <expression>
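
For example, at the GDB prompt:

(gdb) set $count = 5
(gdb) print $count
$1 = 5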

Reset, Read, Info

Let's define three user-defined commands, mreset, mrd, and minfo.

define mreset
mon reset
flushregs
end
document mreset
Send the BDI2000 command 'reset'
'flushregs' is required as GDB does not
know what 'mon reset' did to the target
end

define mrd
mon rd
end
document mrd
Send the BDI2000 command 'rd'. Reads better than 'info reg'
end

define minfo
mon info
end
document minfo
Send the BDI2000 command 'info'
end

mreset is a short cut for the two commands mon reset and flushregs.

User-defined commands are very useful to define repetitive commands.

mrd is a repetitive command to use BDI2000 to output registers instead of GDB, as the output is more compact.

minfo is again another short cut command.

A more useful command is beefcore.

define beefcore
  mon mm 0xe00000 0xdeadbeef 0x10000
  mon mm 0xe10000 0xdeadbeef 0x10000
  mon mm 0xe20000 0xdeadbeef 0x10000
  mon mm 0xe30000 0xdeadbeef 0x10000
  mon mm 0xe40000 0xdeadbeef 0x10000
  mon mm 0xe50000 0xdeadbeef 0x10000
  mon mm 0xe60000 0xdeadbeef 0x10000
  mon mm 0xe70000 0xdeadbeef 0x10000
end
document beefcore
Fill RAM (0xe00000), 1MB, with 0xdeadbeef
end

Fills RAM with the same data, 0xdeadbeef.

Now that we have done the easy commands, let's build more complicated ones.

dxi

First let's make a short cut dxi.

# dxi
# dxi <count>
# dxi <count> <startaddress>
define dxi
if $argc == 2
  x /$arg0i $arg1
end
if $argc == 1
  x /$arg0i $pc
end
if $argc == 0
  x /20i $pc
end
end
document dxi
Output $arg0 instructions from current PC
If $arg0 is not supplied, output 20 instructions
end

# starts a comment in .gdbinit.

User-defined commands support if/then/else and while constructs.

Just like a function, arguments can be passed to a user-defined command. $argc is the variable with the number of arguments. Each argument is $argx where x is the argument number starting with zero.

The command dxi has three forms,

dxi

dxi <number>

dxi <number> <address>

$pc is the built-in GDB variable for the current program counter.

With no arguments, dxi executes the GDB command x /20i $pc, outputting 20 assembly instructions starting at the program counter.

dxi <number> changes the 20 to <number>. Notice this is a string substitution. <number> is not a number in the 'int' sense. It is a string that GDB interprets as a number.

Finally, dxi <number> <address> outputs <number> of assembly instructions starting at <address>.

dxi is a short cut command, but with some variability.

BMEM - Break on Change of Contents of An Address

Below are three commands: bmemc, bmemw, and bmeml. bmemloop is used by all three. This shows that GDB user-defined commands support subroutines.

Each command executes an si command until the value at the specified address changes a specified number of times.

bmemc examines 8-bit values, bmemw examines 16-bit values, and bmeml examines 32-bit values.

The logic of each command is the same.

Next, the specific syntax of user-defined commands that is not well documented is shown.

#bmemloop ptr count
#internal function, called by bmemc, bmemw, and bmeml
define bmemloop
  if $argc == 2
     set $loop = $arg1
     set $bmemlooprd = *($arg0)
     set $bmemlooporg = *($arg0)
     while ($loop > 0)
       while ($bmemlooprd==$bmemlooporg)
         si
         set $bmemlooprd=*($arg0)
       end
       set $loop = $loop - 1
       if $loop == 0
         loop_break
       end
       set $bmemlooporg = *($arg0)
     end
  end
end
document bmemloop
First argument specified address to watch
Second argument specifies number of times to loop
  for each change of the specified address
bmemloop 10000 5 - Loop until address 10000 changes 5 times
end

define bmemc
  if $argc>=1
    disable disp 1
    set $bmemcx=(char*)($arg0)
    set $bmemccount = 1
    if $argc == 2
      set $bmemccount = $arg1
    end
    bmemloop $bmemcx $bmemccount
    enable disp 1
  end
end
document bmemc
bmemc runs until the byte at the address specified changes
end

define bmemw
  if $argc>=1
    disable disp 1
    set $bmemwx=(short*)($arg0)
    set $bmemwcount = 1
    if $argc == 2
      set $bmemwcount = $arg1
    end
    bmemloop $bmemwx $bmemwcount
    enable disp 1
  end
end
document bmemw
bmemw runs until the short (16-bit) at the address specified changes
end

define bmeml
  if $argc>=1
    disable disp 1
    set $bmemlx=(long*)($arg0)
    set $bmemlcount = 1
    if $argc == 2
      set $bmemlcount = $arg1
    end
    bmemloop $bmemlx $bmemlcount
    enable disp 1
  end
end
document bmeml
bmeml runs until the long (32-bit) at the address specified changes
end

Let's discuss some of the syntax.

set $bmemcx=(char*)($arg0)
set $bmemwx=(short*)($arg0)
set $bmemlx=(long*)($arg0)

These three lines each declare a convenience variable: start with a dollar sign, then a name beginning with a letter. The syntax that is not obvious is that the address given as the first argument, $arg0, has to be cast to the appropriate type, a pointer of a specific width.

The syntax is the same as that of C.

The three convenience variables are now pointers.

We can pass the pointer to bmemloop, a subroutine, along with an optional count.

The enable and disable commands are used to suppress GDB output while the bmemloop is running. The si command outputs data each time it is executed.

To compare the value of two addresses we need pointers. bmemloop creates two convenience variables, bmemlooporg, the original value pointed to by arg0 and bmemlooprd, the value pointed to by arg0 after each si command.

bmemw 20000

This command will execute the si command until the 16-bit value at 20000 changes.

bmemc 0x30000 10

This command will execute the si command until the 8-bit value at 0x30000 changes 10 times.

Note that the address is hexadecimal; GDB expressions accept hexadecimal notation.

Performance

The commands bmemc, bmemw, and bmeml work, but they are slow. We are using the GDB interpreter to do something that is, today, done in hardware; but it does work.

Break on an Address - brk

Now let's create a break point user-defined command, brk.

# brk
# brk <dest_address>
# brk <dest_address> <count>
# brk <dest_address> <count> <offset>
define brk
  if $argc >= 1
    disable disp 1
    set $brkcount = 1
    if $argc >=2
      set $brkcount = $arg1
    end
    set $targ = (long)($arg0)
    if $argc == 3
      set $targ = $targ + $arg2
    end
    set $brkx = (unsigned char *)($targ)
    print $brkx
    if $brkcount > 0
      set $loop = 0
      while $brkx == $pc
        si
      end
      while ($loop < $brkcount)
        while ($brkx != $pc)
          si
        end
        set $loop = $loop +1
        if $loop == $brkcount
          loop_break
        end
        si
      end
    else
      print "Loop count cannot be negative"
    end
    enable disp 1
  end
  if $argc == 0
    si
  end
end
document brk
brk <dest_address> [count offset]
Send 'si' commands until <dest_address> reached
If count specified, send 'si' commands until <dest_address> is reached <count> times.
If offset specified, add <offset> to <dest_address> to make break address.
   When using <offset>, must specify count.
   brk $ReadSwitches 1 0x20 - break at $ReadSwitches+0x20 once.
end

The user-defined command brk accepts one to three arguments. The first argument, mandatory, is the address we want to break on. The second, optional, is how many times to reach the address before stopping. The third, optional, is an offset added to the address to form the break point address.

while ($brkx != $pc)

Execute si commands until the program counter is the same as the specified address.

Again the enable and disable commands are used.

brk is no speed demon, but it works.

The use of convenience variables and user-defined commands creates GDB commands that emulate break point commands.

See this bitbucket repository for all of the user-defined commands in one file.