Saturday, September 3, 2016

Loss of the Night Sky

This is not my usual computer geek blog, this blog is on the theme of

"It should be dark at night"

Earth at Night


NASA has published this famous picture of what the dark side of the Earth looks like from space. This picture shows how bright the cities of the world are at night, and how much of the developed world is lit up 24/7. (A really large version of this map, 10MB, is here; the biggest, 40MB, is here.)

How many features of the Earth can you see without sunlight? Find the Nile River, the Hawaiian Islands, and the border between North and South Korea.

A lot of places 'left the lights on'.

 Light Pollution

All of these lights cause light pollution, light that obscures the night sky.

Here is an interactive map of world light pollution. Light pollution has been put on a color scale. Red is like Times Square, NYC. Black is Point Nemo. Where do you live? Orange, Yellow, maybe Green. [Map from Falchi et al.]

[Good background, nice tweet, good references for mapping light pollution on the ground.]

Years ago a scale was developed for the apparent magnitude of stars in the night sky. The scale is logarithmic (read the link for all the maths :) ). The limit of human vision is about magnitude 6.5 on this scale. Just over 9000 stars can be seen by the human eye if the sky is dark enough (the article has a lot more detail on the exact count and where this count is valid).

Add light pollution, which obscures the dim stars, and under a 4th-magnitude sky the number of stars you can see drops to only HUNDREDS.
Here's a great set of charts showing how the constellation Cygnus the Swan changes from 0-magnitude to 6-magnitude viewing.

An excellent demonstration of the difference between two areas with different light pollution.

Milky Way

Have you ever seen the Milky Way, horizon to horizon? Seeing it makes you understand why the Ancient Greeks called it  gala, 'milk'.

All ancient civilizations have stories on how the Milky Way was created or what it represents. In the modern world, with our night skies, that connection is lost.

Citizen Science, Apps, No Telescope Needed

There is a great mobile phone app, "Loss of the Night" (iTunes here). You become a citizen scientist. The program guides you through locating dimmer and dimmer stars until it determines how dark your sky is. It takes 15 or so 'sightings' to get enough data for a reasonably accurate measurement. The authors have a blog that shows the results; good stuff. They are developing a map of how the night sky is changing, literally from the ground up. See this for more details.

Another, more manual, citizen science project, one that has been running longer than 'Loss of the Night', is Globe at Night. Simple 5-step directions are here.

Dark Sky Reserves and Parks

The International Dark Sky Association has started declaring areas Dark Sky Reserves. The process to get an area designated as such a site is rigorous. Only 11 such sites are listed.

 A lower designation is a Dark Sky Park. The US has many of these, associated with its National Park system.

Finally, a Dark Sky Sanctuary is like a Dark Sky Reserve, but remote, with limited access.

It is a fact of modern times that one must create a park to see the night sky as it was ONLY 100 or so years ago, just outside living memory.

Man Made Lights and Lighting Science

The culprit in the loss of the night sky is the electric light. IDSA does have an excellent page on outdoor lighting and what communities can do to "bring back the night".

One result of light pollution is the brightening of the night sky, Sky Glow.

Another excellent website on lighting is The Lighting Institute at Rensselaer Polytechnic Institute. The Lighting Institute conducts a two-day course on outdoor lighting, and much more. It is a great resource for municipal public works departments, planning and zoning committees, and state departments of transportation (yes, USA-centric terminology).

Look Up, Measure Your Night Sky, Report It, Get Involved


Lots of links in this article. Lots of people to talk to.

Learn about outdoor lighting.

Learn where you can educate local government officials on the loss of a NATURAL RESOURCE, the night sky. They can control lighting issues with local zoning.

Many US states are passing laws regarding the lighting of highways and roads, state buildings, etc.

Bring the Milky Way back, for good!



Friday, July 8, 2016

Software Archaeology

This blog discusses what a software engineer, me, has to do when a program has suffered years of neglect.

I work in the embedded systems space, so this blog will talk about embedded programs, not Windows, not Unix, but embedded programs. Some written exclusively in assembly, some in C. Most with no threads or other OS assistance.

Definition: Software Archaeology - The investigation, research, documentation, and rewriting required to gain a meaningful understanding of long-abandoned or neglected software programs.

What causes a program to be abandoned or neglected? Why is the archaeology required in the first place?

The programs I have worked with were written in the early '90s. Standard software practices are better now than they were back then. Many projects are run better than back then, but many are still built the same way it was done 20 years ago.

Software consultant Joe, talking to his friend and fellow consultant John.

"How's the new assignment going?" asked John. "Oh, they're writing legacy code," replied Joe.

When software is written it combines 1) the author's domain knowledge, and 2) the author's understanding of the underlying hardware.

The software is constrained by how well the underlying hardware can accomplish the task to be completed. The software is also constrained by the author's knowledge of the domain problem that is the source of information for the task. The author then brings their personality, experience, drive, and insight to the writing of software.

Software is the art and science of translating human goals into a language where a computer can perform the task expounded in the goal.

What do I find when I read code from another era? I find the remnants of enough of the domain and language knowledge to do the job, but no more.

I took a computer languages course in 1976. I was introduced to Algol, PL/1, Snobol, and APL. I do not use any of these languages today. I don't know who does. I learned and used FORTRAN in other courses, which is still widely used in numerical computing applications. C was just starting to be used in research labs.

If I had to resurrect a program from that era, I would have to learn, to a certain extent, the actual computer language, its syntax and nuance, to understand how the program functioned.

Sometimes the source code itself is not necessary. Sometimes all that is required is a complete definition of the inputs and outputs of the program. This is probably a simple program, but if you know that a certain list of numbers goes into the program, a set of operations is performed on that list, and a new set of numbers is created, then any programming language that handles the inputs and outputs could be used to convert the input to the output.

The constraint on recovering an old program is that the inputs and outputs are still in place. They cannot be changed. What is missing is the exact definition of the inputs and outputs. The program knows what those inputs and outputs are, but that knowledge is buried in the code.

Unfortunately, the program can't expound on its nuances or give background on what the author was thinking. Comments help, when they are present.

The techniques of refactoring provide the most insight and the best chance of documenting the inputs and outputs.

The other difficulty in working with old programs is the tools. With each generation of processors comes a new generation of tools.

In the '90s the methods by which one debugged an embedded program were either very primitive or very sophisticated, with not much in between. The primitive method was to output RS-232 messages to display the current state of the code. Each output would reveal the changing state. Analysis would then determine what might be wrong. The very sophisticated, and thus very expensive, method was to use an In-Circuit Emulator, or ICE.

Memory was expensive in the '90s. Embedded processors did not have cache. Programs ran from Read Only Memory, which may have been PROMs, EPROMs or Flash. The processor would have break point capability, but only if the memory location could be changed to an 'illegal instruction' to cause a jump to the interrupt handler that would provide the debugging support. This only worked if the program was running in RAM. Inserting an illegal instruction into ROM is impossible. This is the same mechanism used today for software breakpoints. Hardware breakpoints were nowhere to be seen.

The ICE provided a way for a host processor, a PC, to have RAM memory substitute for ROM memory as well as take over the clock operations of the processor, allowing the user to watch the processor step through each instruction in as much detail as desired.

Breakpoints are essential.

RS-232 output would disturb the timing of the program and use up precious memory. The ICE was an emulator and thus provided the debugging functions without rewriting code or using additional memory.

If the program was neglected, then the tools have been neglected. The ICE unit may no longer power on, if it can be found at all.

The history of how the author came to writing the code is lost. The author learned, most likely by trial and error, the nuances of the language and the hardware. This history is not documented.

All in all a puzzle.

That's another good definition of software archaeology, the study of puzzles created by time and neglect.







Tuesday, September 8, 2015

Number of Processors, Linux, and Make

[Much of this post is from here, http://www.binarytides.com/linux-cpu-information/]

For those who use Windows, there is an environment variable that comes pre-loaded, NUMBER_OF_PROCESSORS.

This variable contains the number of processors that a program might want to know to allow for parallel operations.

But in this day and age, the number of actual processors and the number reported may differ by a factor of two. The technology that allows this is called hyperthreading. Hyperthreading has been around for more than ten years.

A CPU chip may have two physical cores, but with hyperthreading, the operating system is presented with four processors.

Under Windows, NUMBER_OF_PROCESSORS is the hyperthreaded (logical) count.

Under Linux, we can run a set of commands and get both.

# Get the number of physical cores, not hyperthreaded (logical) processors.
# 'sort -u' removes the duplicate 'core id' lines that appear once per
# logical processor (this assumes a single CPU socket).
NUMBEROFPROCESSORS=`cat /proc/cpuinfo | grep 'core id' | sort -u | wc -l`
echo "Number of Processors="$NUMBEROFPROCESSORS

# Get the number of logical (hyperthreaded) processors
NUMBEROFHTTPROCESSORS=`cat /proc/cpuinfo | grep processor | wc -l`
echo "Number of Hyperthreaded Processors="$(($NUMBEROFHTTPROCESSORS-$NUMBEROFPROCESSORS))

The variable NUMBEROFPROCESSORS is created by piping the contents of /proc/cpuinfo to grep, which looks for the string 'core id' (or 'processor' for the logical count). sort -u removes the duplicate 'core id' lines, and wc -l counts the lines that remain.

My results are as follows. I'm running on an Intel(c) Core(tm)2 T9400. It does not support hyperthreading.

Number of Processors=2
Number of Hyperthreaded Processors=0
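If your distribution provides them, a few standard utilities report the same counts without parsing /proc/cpuinfo; a quick sketch (output formats vary by system):

nproc                        # logical processors available to this process
getconf _NPROCESSORS_ONLN    # logical processors currently online
lscpu                        # sockets, cores per socket, threads per core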

We can use this information with the -j argument for make.

# Get the number of logical processors (includes hyperthreading)
NUMBEROFPROCESSORS=`cat /proc/cpuinfo | grep processor | wc -l`
make -j$NUMBEROFPROCESSORS

Now make will use the number of processors (with hyperthreading). We don't have to change the make script each time we get more cores on our VM or laptop.

It is mentioned (I can't find the URL) that -j has no effect when running under MS-DOS. I don't know if this has changed; the version of Windows is not mentioned. nmake, supplied with Visual Studio, is said to support multi-core (parallel) operation. A StackOverflow discussion is here.

Using -j, in general, assumes that the build is not a highly interdependent build: lots of leaves in the build tree.

More details on cpuinfo can be found here.

A nice discussion on the whys and wherefores of processors, physical ids, and cores is here.

Busy - Sluggish

Now that you have enabled make to use all the available processors, be prepared for your system to get sluggish.

On a build machine you want to build fast. On a development machine you want to do something else, but you just told make to use all the available CPUs. Ctrl-C might not even be effective, for a while.

Your mileage may vary. Forewarned is forearmed.

Number of Processors + 1

There are many discussions that state the value for -j should be the number of processors +1, even +2. The idea is that make (and compiling) is an I/O intensive activity. While waiting for one compile to finish, another can be prepared.
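If you want to experiment with the +1 rule, the invocation would look something like this (a sketch; it assumes the nproc utility is available):

make -j$(($(nproc)+1))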

At this URL, Agostino Sarubbo has shown that one should use the number of processors and no more.

The reason for this would be a subject of a separate blog. Exercise for the reader. :)

References

All the in-line links listed in one place.






Thursday, August 27, 2015

IAR, Vybrid, Low Power Timer, and Getting Started Example

This is a short blog on a tiny issue I found working with the FreeScale Vybrid Tower evaluation board with the IAR IDE 'Getting Started' example.

The Getting Started example has two interrupts, one for a periodic timer using the Low Power Timer (0) and a hardware interrupt for a button, SW1.

There are two IRQs, one for the timer, and one for the button.

The button IRQ outputs the number of times the button has been pushed each time the button is pushed. The timer IRQ blinks a blue LED, off for 1/2 second and on for 1/2 second.

I decided to tweak the example. I want to report the value of the timer counter when the button is pushed. I should see a value from 0 to 500, as the periodic timer is set to 1kHz.

I used the existing IAR symbol, LPTMR0->CNR, to read the timer counter.

Each time I pushed the button the value for CNR was 0.

The Low Power Timer documentation is in the FreeScale Vybrid reference manual, pp. 1910-1913.

On page 1913, 41.4.5 LPTMR Counter, it states

The CNR cannot be initialized, but can be read at any time. On each read of the CNR, software must first write to the CNR with any value. This will synchronize and register the current value of the CNR into a temporary register. The contents of the temporary register are returned on each read of the CNR.

Thus one must first write to CNR, then read it. The write does not change the counter; it latches the current count into the temporary register that is returned on the read.

LPTMR0->CNR = 1;
printf("Timer: %u\n", (unsigned int)LPTMR0->CNR);

Unfortunately, the first line of code gives a compiler error. The error states that CNR cannot be modified.

The header in the Getting Started example for the Vybrid tower that defines the LPTMR registers (MVF50GS10MK50.h) declares CSR, PSR, CMR, and CNR. The registers CSR, PSR, and CMR are defined as __IO uint32_t. __IO means read/write.

CNR is defined as __I uint32_t, read only. Wrong.

The CNR definition must be changed to __IO uint32_t so that CNR can be written, and therefore read.

/** LPTMR - Register Layout Typedef */
typedef struct {
  __IO uint32_t CSR;   /**< Low Power Timer Control Status Register, offset: 0x0 */
  __IO uint32_t PSR;   /**< Low Power Timer Prescale Register, offset: 0x4 */
  __IO uint32_t CMR;   /**< Low Power Timer Compare Register, offset: 0x8 */
  __IO uint32_t CNR;   /* was: __I uint32_t */
                       /**< Low Power Timer Counter Register, offset: 0xC */
} LPTMR_Type;



Oops.

As I said, a tiny issue.
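For completeness, here is a minimal sketch of the write-then-read sequence with the corrected header. The helper function name is my own, not part of the Getting Started example; LPTMR0 and LPTMR_Type come from MVF50GS10MK50.h.

#include <stdint.h>
#include "MVF50GS10MK50.h"   /* provides LPTMR0 and LPTMR_Type */

/* Latch and return the current Low Power Timer count.
 * Per the reference manual, any write to CNR latches the running
 * counter into a temporary register that the next read returns. */
static inline uint32_t lptmr_read_counter(void)
{
    LPTMR0->CNR = 0u;        /* any value works; the write only latches */
    return LPTMR0->CNR;      /* read back the latched count             */
}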

Thursday, August 20, 2015

Yet Another Make Tutorial - VI

Last post we created a framework to build a library, a unit test program, and a production program. Let's make a few changes to introduce the building of a library.

[This is a continuation of posts I, II, III, IV, and V. Make files are found here.]

Building a library requires a different program than clang, clang++, gcc, or g++. The program is called ar, for archive.

The standard arguments are:

ar rvf <libraryname>.a <object files>

Note: gcc requires, when using the -l switch to link a program, that the library name start with the three letters lib; the lib prefix is not supplied with the -l switch, and the .a extension is also assumed.

For example:

# Create libmylibrary.a
ar rvf libmylibrary.a iseven.o isnumber.o

# Use libmylibrary.a when linking aprogram (main.o stands in for your other objects)
gcc -o aprogram main.o -lmylibrary

The ar program will be used in the recipe for the $(TARGET): $(OBJS_C) $(OBJS_CXX) rule in our make file in the lib directory.

The $(TARGET): $(OBJS_C) $(OBJS_CXX) rules in the target and unitest directories will have -lmylibrary added.

In addition, gcc (or clang) needs to know the path to the library. That is supplied with the -L switch (uppercase L): -L../lib/target

Duplicating two hard coded library names is not good coding practice.

In addition, the dependency list has to have the library added.

$(TARGET): $(OBJS_C) $(OBJS_CXX) ../lib/target/libmylibrary.a

Now there are three hard coded locations for the library.
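Put together, the hard coded version of the rule would look something like this (a sketch; the object lists and the gcc link step follow the earlier tutorials):

$(TARGET): $(OBJS_C) $(OBJS_CXX) ../lib/target/libmylibrary.a
	gcc -o $@ $(OBJS_C) $(OBJS_CXX) -L../lib/target -lmylibrary

The library path appears twice and the library name a third time, which is exactly the duplication the variables below remove.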

Let's solve this problem using make variables.

One variable can hold the -L path and one variable can hold the -l library name.

But we have a problem: if the variables are defined in the lib directory make file, they are not seen by the unitest and target make files, because the lib make file is a child of the parent make file. Variables don't flow up from child make files.

Instead, we define the pieces of the -L path and the -l library name in the parent and pass them as variables to the child make files. Then only the parent make file changes if the name of the library changes.

# makefile17

LIB := lib
UNITEST := unitest
TARGET := target
MYLIBRARY := mylibrary

.PHONY: all clean

all: build_unitest build_target

clean:
  $(MAKE) -C $(TARGET) -f makefile17target clean
  $(MAKE) -C $(UNITEST) -f makefile17unitest clean
  $(MAKE) -C $(LIB) -f makefile17lib clean

build_target: build_lib 
  $(MAKE) -C $(TARGET) -f makefile17target LIBRARY=$(MYLIBRARY) LIBDIR=$(LIB)

build_unitest: build_lib
  $(MAKE) -C $(UNITEST) -f makefile17unitest LIBRARY=$(MYLIBRARY) LIBDIR=$(LIB)

build_lib:
  $(MAKE) -C $(LIB) -f makefile17lib LIBRARY=$(MYLIBRARY)


We've added the variable MYLIBRARY, and we have added arguments to the make commands.

The argument of the form

<name>=<string>

defines <name> as a variable with the contents of <string>. <name> is a variable that is then available when make is run.

For the build_lib target, makefile17lib will have LIBRARY defined as mylibrary.

We will have a convention in directory names. The directory under lib where libmylibrary.a will be built is target. Yes, it is hard coded. A variable could be created, LIB_DIR_TARGET, but target will be fine for this tutorial.

Below are the other Make16 make files updated to use the new arguments.


# Makefile17lib

# Build the library

LIB_NAME := lib$(LIBRARY).a
TARGET_DIR := target

.PHONY: all clean

all: $(TARGET_DIR)/$(LIB_NAME)

clean:

# Each recipe line runs in its own shell, so a bare 'cd' would have no
# effect; use full paths instead.
$(TARGET_DIR)/$(LIB_NAME) :
	mkdir -p $(TARGET_DIR)
	touch $(TARGET_DIR)/$(LIB_NAME)



# makefile17unitest

LIB_NAME := lib$(LIBRARY).a


.PHONY: all clean

all: unitest

clean:

unitest: ../$(LIBDIR)/target/$(LIB_NAME)
	echo Build unitest



# makefile17target

# Build the target program

LIB_NAME := lib$(LIBRARY).a

.PHONY: all clean

all: aprogram

clean:

aprogram: ../$(LIBDIR)/target/$(LIB_NAME)
	echo Build target


The make files still don't do much, but the framework is taking shape. We have parameterized the library name.

The next post will add some code: the make file commands from makefile13 to build a library, a unit test, and a program.

Saturday, August 15, 2015

Yet Another Make File Tutorial - V

The past four tutorials, I, II, III, and IV, have created a good make file for a sub-directory of sources, kept the build directories clean and neat, and configured make to run a bit faster.

[Again the source for these tutorials is found here.]

Note: These make files are NOT POSIX compliant.

These next tutorials will build a production quality set of make files that handle libraries, unit tests, and the production program. There will be more discussion of compilers, linkers, unit tests, and libraries, as well as make files. Using make files also satisfies the requirement of one-button builds for build tools such as Jenkins. These next tutorials will set up a framework for the production and unit test programs.

Unit tests are the sanity checkers for programmers. They make you feel good because they prove that you haven't messed up with your last set of changes. But building unit tests and that program you want to ship for $$s with the same code takes some planning.

A unit test program has a main(). Your program has a main(). You can't have two main() functions in the same program.

Solution: a library or libraries for your code that isn't main(). Each library gets a directory, each unit test gets a directory, and the production program gets a directory. Thus three directories: lib, unitest, and target.

Each directory will need a make file. A master make file is required to 'run' each of the other make files.

This tutorial will introduce new make syntax and features.

Our first make file is not a make file that compiles code. It calls other make files. Some of the more experienced readers will see we are starting down the path to make file recursion, where make calls make.

There is an article detailing this topic here. Its argument is to create a single make file instead of a series of recursively called make files.

Let's start.

The new directory Make16 is where we start.
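The layout looks something like this, with each sub-directory holding its own make file (all listed below):

Make16/
    makefile16
    lib/        makefile16lib
    unitest/    makefile16unitest
    target/     makefile16target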

# makefile16

LIB := lib
UNITEST := unitest
TARGET := target

.PHONY: all clean

all: build_unitest build_target

clean:
	$(MAKE) -C $(TARGET) -f makefile16target clean
	$(MAKE) -C $(UNITEST) -f makefile16unitest clean
	$(MAKE) -C $(LIB) -f makefile16lib clean

build_target: build_lib
	$(MAKE) -C $(TARGET) -f makefile16target

build_unitest: build_lib
	$(MAKE) -C $(UNITEST) -f makefile16unitest

build_lib:
	$(MAKE) -C $(LIB) -f makefile16lib

[Links for more details here.]

The -C switch changes the working directory to the next argument. When using -C the option -w is automatically turned on. The -w option outputs Entering <directory> and Leaving <directory>. This helps with debugging.

The $(MAKE) variable is a special variable of GNU Make. As has been said before, this tutorial is NOT writing POSIX make files.

The make files for lib, unitest, and target are below. These are just shells; the make files build no source code. make still outputs its messages. There are no errors, and the framework is shown to be correct.

# Makefile16lib

# Build the library

.PHONY: all clean

all: alibrary

clean:


alibrary:

================

# makefile16unitest

.PHONY: all clean

all: unitest

clean:

unitest:

===============

# makefile16target

# Build the target program

.PHONY: all clean

all: aprogram

clean:

aprogram:


To run enter

make -f makefile16

or

make -f makefile16 clean


If you don't want all of the Entering and Leaving messages, add the -s switch.

make -s -f makefile16

No output appears.

Next blog we'll add more details about the library. VI.


Thursday, August 13, 2015

Yet Another Makefile Tutorial - IV

The three previous tutorials on make files, I, II, and III, discussed the contents of a make file.

This tutorial will discuss the program make itself, some switches, and internals to help with performance.

make has a lot of built-in defaults. The defaults allow one to create a quick make file. No dependencies have to be created; the simple dependency patterns are already defined. For example:

A directory containing *.c files.

SRC_LIST := $(wildcard *.c)
OBJ_LIST := $(SRC_LIST:.c=.o)

aprogram : $(OBJ_LIST)
	gcc $^ -o $@

That's it. make has a built-in rule for %.o: %.c, and its default recipe invokes the C compiler.

make --help

will show all the arguments supported by make.

-d  Print a lot of debugging information.

The debugging referred to is about patterns and rules. It is not about variables. [You have to use $(info ...) for variables as discussed in II.]
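As a reminder from part II, printing a variable looks something like this (SRC_LIST is just an example name):

$(info SRC_LIST is $(SRC_LIST))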

Create an empty make file and debug it.

touch makefile
make -d

A partial output from make is below.

GNU Make 3.82
Built for x86_64-redhat-linux-gnu
Copyright (C) 2010  Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Reading makefiles...
Reading makefile `makefile'...
Updating makefiles....
 Considering target file `makefile'.
  Looking for an implicit rule for `makefile'.
  Trying pattern rule with stem `makefile'.
  Trying implicit prerequisite `makefile.o'.
  Trying pattern rule with stem `makefile'.
  Trying implicit prerequisite `makefile.c'.
  Trying pattern rule with stem `makefile'.
  Trying implicit prerequisite `makefile.cc'.
  Trying pattern rule with stem `makefile'.
  Trying implicit prerequisite `makefile.C'.
  Trying pattern rule with stem `makefile'.
. . . . . . . .

Examining the complete listing, the following file extensions are found.

c, cc, C, o, cpp, p, f, F, m, r, s, S, mod, sh, v, y, l, and w.

There are many more implicit patterns and rules than one needs for a simple set of C or C++ files.

Processing all the implicit patterns does slow down make.

-r --no-builtin-rules  Disable built-in implicit rules.

make -d -r

The entire debug output is below.

GNU Make 3.82
Built for x86_64-redhat-linux-gnu
Copyright (C) 2010  Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Reading makefiles...
Reading makefile `makefile'...
Updating makefiles....
 Considering target file `makefile'.
  Looking for an implicit rule for `makefile'.
  No implicit rule found for `makefile'.
  Finished prerequisites of target file `makefile'.
 No need to remake target `makefile'.

Let's go back to the Make15 directory and use -d and -r.

make -d -f makefile15

2689 lines of output for 4 files!

Now

make -d -r -f makefile15

Only 152 lines of output.

For a project with just 4 files, the difference between using -r and not using it probably isn't noticeable, but for a large project using only one or two patterns, -r can save time.

There is a pseudo target that has a similar effect to -r: .SUFFIXES:

Create a make file with just .SUFFIXES:

echo .SUFFIXES: > makefile
make -d

Similar short output, but not as short as -r.

Two methods of saving time with make: -r or .SUFFIXES:
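In a make file, the .SUFFIXES: approach looks something like this (a sketch; keep only the pattern rules your project actually needs):

# Disable all built-in suffix rules
.SUFFIXES:

# Supply only the pattern rule this project uses
%.o: %.c
	gcc -c $< -o $@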

Saving time is what make is all about.

A page discussing many of the implicit rules.
https://www.gnu.org/software/make/manual/html_node/Catalogue-of-Rules.html

Next post on make files is here, V.

-----------------------------------

P.S. Is there a way to find out ALL the implicit rules and patterns?

Try

make -p

http://stackoverflow.com/questions/16842930/why-does-gnu-make-define-implicit-pattern-and-implicit-suffix-rules

More details on .SUFFIXES:

https://www.gnu.org/software/make/manual/html_node/Suffix-Rules.html