Friday, January 26, 2018

Speculations of an Autonomous Vehicle World

Autonomous vehicles are just around the corner. Tesla has semi-autonomous vehicles for sale today. Waymo, Google, and Uber are all working on fully autonomous vehicles. Delphi, a supplier of auto parts, is selling the sensors necessary for autonomous vehicles to know about their environment.

What will happen to society, jobs, work, travel, and the environment as the autonomous vehicle becomes the norm? This is a speculation on a new world with autonomous vehicles.

In the beginning of the autonomous vehicle world, only early adopters will experience the new lifestyle. Sitting in an autonomous vehicle will be like being in an airplane on the ground. The passenger doesn't care where the vehicle is going, just that the vehicle gets to the destination safely and on time.

The world will not have 100% autonomous vehicles soon. It will start slow and build. As it builds, changes will occur in transportation, jobs, and leisure.

The first adoption of autonomous vehicles will be where the most savings of money and labor can be realized. There will also be adoption by those who can afford new technology, but that will be as a status symbol, a toy. The initial commercial impact will be in long haul trucks, street sweepers, garbage trucks, snowplows, delivery vehicles, pizza delivery, and package delivery.

Long haul trucks have already been driven coast to coast autonomously. [ref] Volvo is advertising a self-driving garbage truck, and a self-driving street sweeper is a direct spin-off. Snowplows may be more difficult given the size of the blade and parked cars, but on limited-access highways they are a distinct possibility. Pizza delivery can be by drone, but it can also be done by small, go-cart-like vehicles: the customer punches in a code to unlock the door when the vehicle arrives. UPS is looking at deploying autonomous carts from a vehicle to deliver more than one package at a time in a limited area. The UPS truck could drive itself to the deployment location.

Those who drive trucks, short haul and long haul, delivery trucks, garbage trucks, street sweepers, snowplows, or other municipal vehicles will be the first to feel the job stress from autonomous vehicles. Drivers of taxis and buses will be affected as well.

Cities and towns will certainly do a cost analysis of employing autonomous vehicles to reduce costs. When current vehicles have reached their useful life, autonomous vehicles will be available for purchase. Unions will push back; it may take decades to change the union contracts or to wait for existing employees to retire. Removing the toll booth operators around NYC was technically feasible when the first FastPass was installed, yet it wasn't until 2017 that all human-operated toll booths were removed.

When an autonomous delivery vehicle arrives with goods for a customer, does the customer unload the vehicle? Does the vehicle unload itself? Is there still a person with the delivery vehicle to unload goods?

Vacation travel will change. RVs will now be a living room on wheels. One will get to a destination in half the time. The vehicle drives 24 hours a day. Fueling a vehicle will need some thought. A robot gas station perhaps? How will humans tell the autonomous vehicle to pull over for a bathroom break or stop for lunch? Yes, RVs and buses have toilets, but not sedans.

How many fewer cars will be on the road? Will there be less congestion? Will autonomous vehicles drive better, resulting in fewer rush hour delays?

Who will benefit? Fleet owners of autonomous vehicles, body shops that customize interiors, software programmers, advanced auto mechanics, and training schools.

Who will lose? Truck drivers, taxi cab drivers, delivery drivers, public works employees, parking lot owners, parking lot attendants, automobile assembly line workers, car dealerships, auto mechanics, the Teamsters Union. Municipalities will have less revenue because there will be fewer cars to tax and less parking, so fewer parking meter fees. A law was proposed in Massachusetts that would tax autonomous vehicles for 'cruising' around without parking. The law essentially taxes the vehicle at different rates, with passengers, without passengers, etc. This is an attempt to recapture the taxes lost from fewer cars.

Ford, GM, and others are looking into subscription services for vehicles, not ownership. Cadillac already offers conventional cars on a subscription basis. Subscription services are going to be required by the auto manufacturers to make up for lost revenue as the number of cars required drops, by some estimates as much as 75%. [ref]

Everyone will pay a monthly fee to be able to 'hail' an autonomous vehicle. What sort of premium services will be available? Will there be 'congestion' pricing, 'time of day' pricing?

Enter into your 'hail a ride' app how far you want to walk from your home, how far you want to walk to work, and when you want to leave and arrive at your destination. A vehicle shows up as requested. It may have no passengers, it may be a van pool, it may be a bus. It all depends on your choices and level of service.

Will there be a bus as we know it? Buses run on fixed routes at fixed times to be predictable. Today they are used mainly by those who don't have vehicles. Buses won't have to run on fixed routes if they are autonomous. There may be fixed pick up and drop off points, but the buses won't have to follow a fixed route to get to each point. Either a kiosk or a phone app will be used to say which destination you want to go to and from which location you want to be picked up. Routes will be created dynamically based on real time demand. Fewer total buses will be needed, and they can run 24 hours a day.

The displacement of workers due to work sent overseas took decades and affected mainly manufacturing. Autonomous vehicles will affect the entire transportation infrastructure and all the people who work in it, and they will be rolled out in less than a decade.

Houses will be built without garages. Housing density will go up as off street parking will not be required. What other planning and zoning changes will occur? Will there be a surge in customization of vehicles? Boutique fleet owners will provide unusual, exotic rides for special occasions. The living room sofa ride will now be a reality. Will traveling in luxury take on a new meaning? A TV, Internet, lounger, mini-bar, microwave, and refrigerator are all items that can be found in a vehicle today. What will be found in the luxury autonomous car of tomorrow? Multi-day rides in an autonomous vehicle will include a bed. Autonomous vehicles may be fueled or charged while they roll, just like midair refueling of military aircraft. Definitely a time saver.

Will there be an entire generation that doesn't know how to drive? The department of motor vehicles will have fewer cars to register. You will go to the DMV to get a state identification card, not a driver's license.

The autonomous vehicle will be a boon to those who are blind and handicapped, or those who cannot drive for some reason. The mobility of seniors will greatly increase.

The challenges today for autonomous vehicles are that snow obscures visibility and covers the painted lines. Heavy rain and glare are also an issue. Northern climate residents may see a slower rollout of the autonomous vehicle.

The autonomous vehicle will be assisted by other vehicles and by roadside devices to help with navigation. Auto manufacturers are designing the communications and devices for a smart highway. The smart highway will help to define the autonomous experience. States may have to do a better job with road line painting for the autonomous vehicle fleet. What happens after a road is paved? There are no lines on the road immediately after paving. Will line painting be autonomous too?

What about computer hacking? Vehicles have already been shown to be hackable. Will a terrorist be able to cause death and mayhem on the road?

What about the classic car owners? How will older cars fit into the world of autonomous vehicles? Will they have to be fitted with special sensors?

The decrease in the number of cars will free up resources for other purposes. Commodity prices will fall as demand for iron and copper to build cars goes down. Yes, there will be fewer cars on the road, but will there be more or fewer miles traveled? Ride sharing would decrease traveled miles, but those who didn't drive before will be more mobile. Does oil demand go down or up?

How will criminals use autonomous vehicles?

The Chinese word for crisis, 危機, combines characters from danger, 危險, and opportunity, 機會. The world of autonomous vehicles will be different from today: danger for those supporting the non-autonomous world and opportunity for those who embrace the autonomous world. Will there be room for everyone?

Saturday, September 3, 2016

Loss of the Night Sky

This is not my usual computer geek post; this one is on the theme of

"It should be dark at night"

Earth at Night


NASA has published this famous picture of what the dark side of the Earth looks like from space. This picture shows how bright the cities of the world are at night, and how much of the developed world is lit up 24/7. (A really large version of this map, 10MB, is here; the biggest, 40MB, is here.)

How many features of the Earth can you see without sunlight? Find the Nile River, the Hawaiian Islands, the border between North and South Korea.

A lot of places 'left the lights on'.

 Light Pollution

All of these lights cause light pollution, light that obscures the night sky.

Here is an interactive map of world light pollution. Light pollution has been put on a color scale: red is like Times Square, NYC; black is Point Nemo. Where do you live? Orange, yellow, maybe green. [Map from Falchi et al.]

[Good background, nice tweet, good references for mapping light pollution on the ground.]

Years ago a scale was developed for the apparent magnitude of stars in the night sky. The scale is logarithmic (read the link for all the maths :) ). The limit of human seeing is about magnitude 6.5 on this scale. Just over 9,000 stars can be seen by the naked eye if the sky is dark enough (the article has a lot more detail on the exact count and where this count is valid).
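As a rough aside (standard astronomy, not from the linked article): a difference of 5 magnitudes is defined as a factor of 100 in brightness, so the brightness ratio between two stars is 100^((m2 - m1)/5), about 2.512 per magnitude. A magnitude 6.5 star is therefore roughly 10 times dimmer than a magnitude 4 star, which is why even modest light pollution erases most of those 9,000 stars.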

Add light pollution to obscure the dim stars, and with a limiting magnitude of 4 the number of stars you can see drops to only hundreds.
Here's a great set of charts showing how the constellation Cygnus the Swan changes from magnitude 0 to magnitude 6 viewing.

Here is an excellent demonstration of the difference between two areas with different levels of light pollution.

Milky Way

Have you ever seen the Milky Way, horizon to horizon? Seeing it makes you understand why the Ancient Greeks called it  gala, 'milk'.

All ancient civilizations have stories about how the Milky Way was created or what it represents. In the modern world, with our light-polluted night skies, that connection is lost.

Citizen Science, Apps, No Telescope Needed

There is a great mobile phone app, "Loss of the Night" (iTunes here). You become a citizen scientist. The program guides you through locating dimmer and dimmer stars until it determines how dark your sky is. It takes 15 or so 'sightings' to get enough data for a reasonably accurate measurement. The authors have a blog that shows the results; good stuff. They are developing a map of how the night sky is changing, literally from the ground up. See this for more details.

Another, more manual, citizen science project that has been running longer than 'Loss of the Night' is Globe at Night. Simple 5 step directions here.

Dark Sky Reserves and Parks

The International Dark Sky Association has started declaring areas a Dark Sky Reserve. It is a rigorous process to get an area designated such a site. Only 11 such sites are listed.

 A lower designation is a Dark Sky Park. The US has many of these, associated with its National Park system.

Finally, a Dark Sky Sanctuary is like a Dark Sky Reserve, but remote, with limited access.

It is a fact of modern times, one must create a park to see the night sky as it was ONLY 100 or so years ago, just outside living memory.

Man Made Lights and Lighting Science

The culprit in the loss of the night sky is the electric light. IDSA does have an excellent page on outdoor lighting and what communities can do to "bring back the night".

One result of light pollution is the brightening of the night sky, Sky Glow.

Another excellent resource on lighting is The Lighting Institute at Rensselaer Polytechnic Institute. The Lighting Institute conducts a two day course on outdoor lighting, and much more. It is a great resource for municipal public works departments, planning and zoning committees, as well as departments of transportation at the state level (yes, USA centric terminology).

Look Up, Measure Your Night Sky, Report It, Get Involved


Lots of links in this article. Lots of people to talk to.

Learn about outdoor lighting.

Learn where you can educate local government officials about the loss of a NATURAL RESOURCE, the night sky. They can control lighting issues with local zoning.

Many US states are passing laws regarding the lighting of highways and roads, state buildings, etc.

Bring the Milky Way back, for good!



Friday, July 8, 2016

Software Archaeology

This post discusses what a software engineer, me, has to do when a program has suffered years of neglect.

I work in the embedded systems space, so this blog will talk about embedded programs, not Windows, not Unix, but embedded programs. Some written exclusively in assembly, some in C. Most with no threads or other OS assistance.

Definitions: Software Archaeology - The investigation, research, documentation, and rewriting to gain meaningful understanding of long ago abandoned or neglected software programs.

What causes a program to be abandoned or neglected? Why is the archaeology required in the first place?

The programs I have worked with were written in the early '90s. Standard software practices are better now than they were back then. Many projects are run better than they were, but there are many that are still built the same way it was done 20 years ago.

Software consultant Joe, talking to his friend and fellow consultant John:

"Joe, how's the new assignment going?" asked John. "Oh, they're writing legacy code," replied Joe.

When software is written it combines 1) the author's domain knowledge, and 2) the author's understanding of the underlying hardware.

The software is constrained by how well the underlying hardware can accomplish the task to be completed. The software is also constrained by the author's knowledge of the domain problem that is the source of information for the task. The author then brings their personality, experience, drive, and insight to the writing of software.

Software is the art and science of translating human goals into a language where a computer can perform the task expounded in the goal.

What do I find when I read code from another era? I find the remnants of enough of the domain and language knowledge to do the job, but no more.

I took a computer languages course in 1976. I was introduced to Algol, PL/1, Snobol, and APL. I do not use any of these languages today, and I don't know who does. I learned and used FORTRAN in other courses; FORTRAN is still widely used in numerical computing applications. C was just starting to be used in research labs.

If I had to resurrect a program from that era, I would have to learn, to a certain extent, the actual computer language, its syntax and nuance, to understand how the program functioned.

Sometimes the source code itself is not necessary. Sometimes all that is required is a complete definition of the inputs and outputs of the program. This is probably a simple program, but if you know that a certain list of numbers goes into a program, a set of operations is performed on that list, and a new set of numbers is created, then any programming language that handles the inputs and outputs could be used to satisfy the task of converting the input to the output.

The constraint on recovering an old program is that the inputs and outputs are still in place. They cannot be changed. What is missing is the exact definition of the inputs and outputs. The program knows what those inputs and outputs are, but that knowledge is buried in the code.

Unfortunately, the program can't expound on its nuances or give background on what the author was thinking. Comments help, when they are present.

The techniques of refactoring are those that will provide the most insight, with the best chance of documenting the inputs and outputs.

The other difficulty with working with old programs is the tools. With each generation of processors comes a new generation of tools.

In the '90s the method by which one debugged an embedded program was very primitive or very sophisticated, but not much in between. The primitive method was to output RS-232 messages to display the current state of the code. Each output would reveal the changing state. Analysis would then determine what might be wrong. The very sophisticated, and thus very expensive method, was to use an In-Circuit Emulator or ICE.

Memory was expensive in the '90s. Embedded processors did not have cache. Programs ran from Read Only Memory, which may have been PROMs, EPROMs or Flash. The processor would have break point capability, but only if the memory location could be changed to an 'illegal instruction' to cause a jump to the interrupt handler that would provide the debugging support. This only worked if the program was running in RAM. Inserting an illegal instruction into ROM is impossible. This is the same mechanism used today for software breakpoints. Hardware breakpoints were nowhere to be seen.

The ICE provided a way for a host processor, a PC, to have RAM memory substitute for ROM memory as well as take over the clock operations of the processor, allowing the user to watch the processor step through each instruction in as much detail as desired.

Breakpoints are essential.

RS-232 output would disturb the timing of the program and use up precious memory. The ICE was an emulator and thus provided the debugging functions without rewriting code or using any additional memory.

If the program was neglected, then the tools have been neglected. The ICE unit may no longer power on, if it can be found at all.

The history of how the author came to write the code is lost. The author learned, most likely by trial and error, the nuances of the language and the hardware. This history is not documented.

All in all a puzzle.

That's another good definition of software archaeology, the study of puzzles created by time and neglect.







Tuesday, September 8, 2015

Number of Processors, Linux, and Make

[Much of this post is from here, http://www.binarytides.com/linux-cpu-information/]

For those who use Windows, there is an environment variable that comes pre-loaded, NUMBER_OF_PROCESSORS.

This variable contains the number of processors that a program might want to know to allow for parallel operations.

But in this day and age, the number of actual processors and the number reported may differ by a factor of two. The technology that allows this is called hyperthreading. Hyperthreading has been around for more than ten years.

A CPU chip may have two physical cores, but with hyperthreading, the operating system is presented with four processors.

Under Windows,  NUMBER_OF_PROCESSORS is the hyperthreading number.

Under Linux, we can run a set of commands and get both.

# Get the number of physical cores: count unique 'core id' values
# (assumes a single CPU package; on multi-socket machines combine
# 'physical id' and 'core id')
NUMBEROFPROCESSORS=`cat /proc/cpuinfo | grep 'core id' | sort -u | wc -l`
echo "Number of Processors="$NUMBEROFPROCESSORS

# Get the number of extra hyperthreaded (logical) processors
NUMBEROFHTTPROCESSORS=`cat /proc/cpuinfo | grep processor | wc -l`
echo "Number of Hyperthreaded Processors="$(($NUMBEROFHTTPROCESSORS-$NUMBEROFPROCESSORS))

The shell variable NUMBEROFPROCESSORS is created by piping the contents of /proc/cpuinfo to grep, which looks for the string core id (or processor for the logical count). sort -u drops the duplicate core id lines, and the result is piped to wc, which counts the number of lines that remain.

My results are as follows. I'm running on an Intel(c) Core(tm)2 T9400. It does not support hyperthreading.

Number of Processors=2
Number of Hyperthreaded Processors=0
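As a quick cross-check (an aside; assumes the util-linux lscpu command is installed):

# Show threads per core, cores per socket, and socket count
lscpu | grep -E 'Thread|Core|Socket'

On a hyperthreaded part, Thread(s) per core reports 2.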

We can use this information with the -j argument for make.

# Get the number of actual processors, not hyperthreaded processors
NUMBEROFPROCESSORS=`cat /proc/cpuinfo | grep processor | wc -l`
make -j$NUMBEROFPROCESSORS

Now make will use the number of processors (with hyperthreading). We don't have to change the make script each time we get more cores on our VM or laptop.

It is mentioned (can't find the URL) that -j has no effect when running under MS-DOS; I don't know if this has changed, and the version of Windows is not mentioned. nmake, supplied with Visual Studio, is said to support multi-core (parallel) operation. StackOverflow discussion here.

Using -j, in general, assumes that the build is not a highly dependent build; there should be lots of leaves in the build tree that can be built in parallel.
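As a toy illustration (the target names are made up): a chain of dependencies serializes the build no matter what -j says, while independent leaves parallelize well.

# Serial: step3 needs step2, which needs step1, so -j cannot overlap them
step3: step2
step2: step1
step1:

# Parallel friendly: the three leaves are independent, so -j3 can build all at once
all: leaf1 leaf2 leaf3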

More details on cpuinfo can be found here.

Nice discussion on the why's and wherefore's of processors, physical ids, and cores here.

Busy - Sluggish

Now that you have enabled make to use all the available processors, be prepared for your system to get sluggish.

On a build machine you want to build fast. On a development machine you want to do something else, but you just told make to use all the available CPUs. Ctrl-C might not even be effective, for a while.

Your mileage may vary. Forewarned is forearmed.
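One mitigation (a sketch using GNU make's load-average option; check that your make supports -l) is to let make use every processor but stop launching new jobs once the machine is already busy:

# Use all logical processors, but don't start new jobs when the
# load average is already at or above that count
NUMBEROFPROCESSORS=`cat /proc/cpuinfo | grep processor | wc -l`
make -j$NUMBEROFPROCESSORS -l $NUMBEROFPROCESSORS

The build still saturates an idle machine, but it backs off when something else needs the CPU.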

Number of Processors + 1

There are many discussions that state the value for -j should be the number of processors +1, even +2. The idea is that make (and compiling) is an I/O intensive activity. While waiting for one compile to finish, another can be prepared.

At this URL, Agostino Sarubbo has shown that one should use the number of processors and no more.

The reason for this would be a subject of a separate blog. Exercise for the reader. :)

References

All the in-line links listed in one place.






Thursday, August 27, 2015

IAR, Vybrid, Low Power Timer, and Getting Started Example

This is a short blog on a tiny issue I found working with the FreeScale Vybrid Tower evaluation board with the IAR IDE 'Getting Started' example.

The Getting Started example has two interrupts, one for a periodic timer using the Low Power Timer (0) and a hardware interrupt for a button, SW1.

There are two IRQs, one for the timer, and one for the button.

The button IRQ outputs the number of times the button has been pushed each time the button is pushed. The timer IRQ blinks a blue LED, off for 1/2 second and on for 1/2 second.

I decided to tweak the example. I want to report the value of the timer counter when the button is pushed. I should see a value from 0 to 500, as the periodic timer is set to 1kHz.

I used the existing IAR symbols, LPTMR0->CNR, to read the timer counter.

Each time I pushed the button the value for CNR was 0.

The Low Power Timer documentation is in the FreeScale Vybrid reference manual, pp. 1910-1913.

On page 1913, 41.4.5 LPTMR Counter, it states

The CNR cannot be initialized, but can be read at any time. On each read of the CNR, software must first write to the CNR with any value. This will synchronize and register the current value of the CNR into a temporary register. The contents of the temporary register are returned on each read of the CNR.

Thus one must first write to CNR, then read. The write does not change CNR, it just enables the timer counter to be latched into the CNR register when read.

LPTMR0->CNR = 1;
printf("Timer: %u\n", (unsigned int)LPTMR0->CNR);

Unfortunately, the first line of code gives a compiler error. The error states that CNR cannot be modified.

The structure in the Getting Started example for the Vybrid tower that defines the LPTMR registers (MVF50GS10MK50.h) contains CSR, PSR, CMR, and CNR. The registers CSR, PSR, and CMR are defined __IO uint32_t. __IO means read/write.

CNR is defined __I uint32_t, read only. Wrong.

The CNR definition must be changed to __IO uint32_t in order to write to, and thus read, CNR.

/** LPTMR - Register Layout Typedef */
typedef struct {
  __IO uint32_t CSR;   /**< Low Power Timer Control Status Register, offset: 0x0 */
  __IO uint32_t PSR;   /**< Low Power Timer Prescale Register, offset: 0x4 */
  __IO uint32_t CMR;   /**< Low Power Timer Compare Register, offset: 0x8 */
  __IO uint32_t CNR;   /* was: __I uint32_t */  /**< Low Power Timer Counter Register, offset: 0xC */
} LPTMR_Type;



Oops.

As I said, a tiny issue.

Thursday, August 20, 2015

Yet Another Make Tutorial - VI

Last post we created a framework to build a library, a unit test program, and a production program. Let's make a few changes to introduce the building of a library.

[This is a continuation of posts I, II, III, IV, and V. Make files are found here.]

Building a library requires a different program than clang, clang++, gcc, or g++. The program is called ar, for archive.

The standard arguments are:

ar rvf <libraryname>.a <object files>

Note: gcc requires, when using the -l switch to build a program, that the library file name start with the three letters lib; the three letters lib are not supplied with the -l switch. The .a extension is also assumed.

For example:

# Create libmylibrary.a
ar rvf libmylibrary.a iseven.o isnumber.o

# Use libmylibrary.a with gcc
gcc -o aprogram -lmylibrary

The ar program will be used for the $(TARGET): $(OBJS_C) $(OBJS_CXX) rule in our make file in the lib directory.
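A rough sketch of that rule, reusing the variable names from the earlier posts ($(TARGET) here would be the library file, $(OBJS_C) and $(OBJS_CXX) the object lists; remember the recipe line must start with a tab):

# Archive rule in the lib make file (sketch)
$(TARGET): $(OBJS_C) $(OBJS_CXX)
	ar rvf $(TARGET) $(OBJS_C) $(OBJS_CXX)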

The $(TARGET): $(OBJS_C) $(OBJS_CXX) rules in the target and unitest directories will have -lmylibrary added.

In addition, gcc (or clang) needs to know the path to the library. That is supplied with the -L switch (uppercase L): -L../lib/target
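Putting -L and -l together, a minimal sketch of the link rule in the target (or unitest) make file, again assuming the variable names used so far:

# Link rule in the target make file (sketch)
$(TARGET): $(OBJS_C) $(OBJS_CXX)
	gcc -o $(TARGET) $(OBJS_C) $(OBJS_CXX) -L../lib/target -lmylibrary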

Hard coding the library name in two places is not good coding practice.

In addition, the dependency list has to have the library added.

$(TARGET): $(OBJS_C) $(OBJS_CXX) ../lib/target/libmylibrary.a

Now there are three hard coded locations for the library.

Let's solve this problem using make variables.

One variable can hold the -L path and one variable can hold the -l library name.

But we have a problem: if the variables are defined in the lib directory make file, they are not seen by the unitest and target make files, because the lib make file is a child of the parent make file. Variables don't flow up from the child make files.

If we define the -L path and the -l library name in the parent and pass them as variables to the child make files, then we only have to change the parent make file if the name of the library changes.

# makefile17

LIB := lib
UNITEST := unitest
TARGET := target
MYLIBRARY := mylibrary

.PHONY: all clean

all: build_unitest build_target

clean:
  $(MAKE) -C $(TARGET) -f makefile17target clean
  $(MAKE) -C $(UNITEST) -f makefile17unitest clean
  $(MAKE) -C $(LIB) -f makefile17lib clean

build_target: build_lib 
  $(MAKE) -C $(TARGET) -f makefile17target LIBRARY=$(MYLIBRARY) LIBDIR=$(LIB)

build_unitest: build_lib
  $(MAKE) -C $(UNITEST) -f makefile17unitest LIBRARY=$(MYLIBRARY) LIBDIR=$(LIB)

build_lib:
  $(MAKE) -C $(LIB) -f makefile17lib LIBRARY=$(MYLIBRARY)


We've added the variable MYLIBRARY, and we have added arguments to the make commands.

The argument of the form

<name>=<string>

defines <name> as a variable with the contents of <string>. <name> is a variable that is then available when make is run.

For the build_lib target, makefile17lib will have LIBRARY defined as mylibrary.
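For example (a sketch, run from the top-level directory), the same definition can be given by hand when invoking the child make file directly:

make -C lib -f makefile17lib LIBRARY=mylibrary
# inside makefile17lib, LIB_NAME := lib$(LIBRARY).a expands to libmylibrary.a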

We will have a convention in directory names. The directory under lib where libmylibrary.a will be built is target. Yes, it is hard coded. A variable could be created, LIB_DIR_TARGET, but target will be fine for this tutorial.

Below are the other Make16 make files updated to use the new arguments.


# Makefile17lib

# Build the library

LIB_NAME := lib$(LIBRARY).a
TARGET_DIR := target

.PHONY: all clean

all: $(TARGET_DIR)/$(LIB_NAME)

clean:

$(TARGET_DIR)/$(LIB_NAME) :
  mkdir -p $(TARGET_DIR)
  cd $(TARGET_DIR)
  touch $(TARGET_DIR)/$(LIB_NAME)



# makefile17unitest

LIB_NAME := lib$(LIBRARY).a


.PHONY: all clean

all: unitest

clean:

unitest: ../$(LIBDIR)/target/$(LIB_NAME)
  echo Build unitest



# makefile17target

# Build the target program

LIB_NAME := lib$(LIBRARY).a

.PHONY: all clean

all: aprogram

clean:

aprogram: ../$(LIBDIR)/target/$(LIB_NAME)
  echo Build target


The make files still don't do much, but the framework is coming into shape. We parameterized the library name.

The next post will add some code: the make file commands from makefile13 to build a library, a unit test, and a program.

Saturday, August 15, 2015

Yet Another Make File Tutorial - V

The past four tutorials, I, II, III, and IV, have created a good make file for a sub-directory of sources, kept the build directories clean and neat, and configured make to run a bit faster.

[Again the source for these tutorials is found here.]

Note: These make files are NOT POSIX compliant.

These next tutorial(s) will build a production quality set of make files that handle libraries, unit tests, and the production program. There will be more discussion about compilers, linkers, unit tests, and libraries, as well as make files. Using make files also satisfies the requirement of having one-button builds for build tools such as Jenkins. These next tutorials will set up a framework for production and unit test programs.

Unit tests are the sanity checkers for programmers. They make you feel good because they prove that you haven't messed up with your last set of changes. But building unit tests and that program you want to ship for $$s with the same code takes some planning.

A unit test program has a main(). Your program has a main(). You can't have two main() functions in the same program.

Solution: A library or libraries for your code that isn't main(). Each library gets a directory. Each unit test gets a directory, and the production program gets a directory. Thus three directories, lib, unitest, and target.

Each directory will need a make file. A master make file is required to 'run' each of the other make files.

This tutorial will introduce new make syntax and features.

Our first make file is not a make file that compiles code. It calls other make files. Some of the more experienced readers will see we are starting down the path to make file recursion, where make calls make.

There is an article detailing this topic here. Its argument is to create a single make file instead of a series of recursively called make files.

Let's start.

The new directory Make16 is where we start.

# makefile16

LIB := lib
UNITEST := unitest
TARGET := target

.PHONY: all clean

all: build_unitest build_target

clean:
  $(MAKE) -C $(TARGET) -f makefile16target clean
  $(MAKE) -C $(UNITEST) -f makefile16unitest clean
  $(MAKE) -C $(LIB) -f makefile16lib clean

build_target: build_lib
  $(MAKE) -C $(TARGET) -f makefile16target

build_unitest: build_lib
  $(MAKE) -C $(UNITEST) -f makefile16unitest

build_lib:
  $(MAKE) -C $(LIB) -f makefile16lib

[Links for more details here.]

The -C switch changes the working directory to the following argument. When using -C, the option -w is automatically turned on. The -w option outputs Entering <directory> and Leaving <directory> messages, which helps with debugging.
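As a rough equivalence (ignoring the automatic -w messages), the recursive call behaves like changing into the directory first:

make -C lib -f makefile16lib
# roughly the same as
( cd lib && make -f makefile16lib )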

The $(MAKE) variable is a special variable of GNU Make. As has been said before, this tutorial is NOT writing POSIX make files.

The make files for lib, unitest, and target are below. These are just shells; the make files have no source code. make still outputs its messages, there are no errors, and the framework is shown to be correct.

# Makefile16lib

# Build the library

.PHONY: all clean

all: alibrary

clean:


alibrary:

================

# makefile16unitest

.PHONY: all clean

all: unitest

clean:

unitest:

===============

# makefile16target

# Build the target program

.PHONY: all clean

all: aprogram

clean:

aprogram:


To run enter

make -f makefile16

or

make -f makefile16 clean


If you don't want all of the Entering and Leaving messages, add the -s switch.

make -s -f makefile16

No output appears.

Next blog we'll add more details about the library. VI.