Category Archives: JRT-Dev

This is a blog where I can present my ideas, concerns and thoughts on all things related to Software Development, from the tools to the talent. This is a place for professionals, not those that expect things to be PC and nicey-nice.

The Future of Computer Memory Architecture

So on Quora there is an author named Franklin Veaux.  He writes answers and comments on a lot of different topics and generally seems to have a good head on his shoulders and knows what he is talking about. One such topic that came up was the future of computer architectures with regard to system memory and offline storage.  Today’s modern computer systems utilize smaller amounts of primary RAM for system storage, which is fast but volatile; and larger amounts of slow but persistent storage via magnetic rotational or solid state drives (i.e. hard drives).

As we make advancements in memory technologies, it is conceivable that we will eventually end up with a memory device that is dense (as in density, not intelligence), fast, robust, and persistent.   When this happens, the differentiation between system RAM and persistent storage may start to blur, or go away entirely.

Continue reading The Future of Computer Memory Architecture

Old School, But New Log Format

So I was thinking about a logging system I implemented almost 15 years ago to replace an existing one that lacked a standard way of presenting context and information.  I drew from how the VAX/VMS DCL interface returned status messages. Some examples:

%DECnet-W-ZEROLEN, length of file is zero -- SYS$SYSROOT:[SYSEXE]NET$CONFIG.DAT

%SMP-I-CPUTRN, CPU #1 has joined the active set.

%STARTUP-E-NOPAGFIL, No page files have been successfully installed.

In my format, the first token was replaced with the app/service name, each line was prefixed with the current date and time (e.g. 2024-05-28 02:09:42), and the last token was a facility code.

Looking back, I think I should have followed the DCL format a little more closely, using a format more like: SUBSYSTEM-e-FACILITY, where “e” is a single character severity code (in increasing order of severity): Debug, Info, Warning, Error, and Fatal, with the addition of an Always level.  This allows for filtering and configuring the verbosity level of the logging system, with the exception that the Always level is always written to the log/console regardless of the configured logging level.  The time format would also be changed slightly: logging in UTC makes it consistent with standard (ISO 8601) formats: 2024-05-28T02:09:42Z

This yields a format that can easily be scanned visually or parsed by a tool for offline analysis.
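
As a sketch of what emitting that format could look like (the function name is mine, and `gmtime_r` is assumed to be available, as on POSIX systems; Windows would use `gmtime_s`):

```cpp
#include <cassert>
#include <ctime>
#include <string>

// Sketch of the proposed log line format:
//   2024-05-28T02:09:42Z SUBSYSTEM-e-FACILITY, message text
// where 'e' is one of D, I, W, E, F (increasing severity) or A (Always).
std::string FormatLogLine(std::time_t when, const std::string& subsystem,
                          char severity, const std::string& facility,
                          const std::string& message)
{
    std::tm utc{};
    gmtime_r(&when, &utc);                       // timestamps are kept in UTC
    char stamp[32];
    std::strftime(stamp, sizeof(stamp), "%Y-%m-%dT%H:%M:%SZ", &utc);
    return std::string(stamp) + " " + subsystem + "-" + severity + "-"
         + facility + ", " + message;
}
```

A line built this way can be split on the first two spaces and the first comma, which is what makes it easy for both eyeballs and tools.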

Outlook Taking Hours to Download Email

JIC it helps anyone else out there.  For no apparent reason, Outlook 2010 started taking hours to download emails that usually came down in minutes.  And when I say hours, I mean 3-5 hours!  The app would remain responsive for the most part, but would consume 100% of a CPU/core.  It would download a number of messages and then just pause for 45 minutes or more.

Turns out that the problem was my configured %TEMP% directory was filled up with almost 65,000(!) entries.  An automatic update of my Antivirus software failed, and started looping retry attempts.  This filled up the temp directory with little 1KB  files until it crashed.  It also filled up my event log.

Seems that Outlook gets really cranky when it has trouble with its temp directory.

So if you suddenly start having strange issues like this with Outlook, make sure your temp directory is not too full.

Agile Development – Scrum Pitfalls

I have been subjected to a few so-called agile development methodologies over the years.  Most recently, Scrum was the one in question.  Well, it was kinda-sorta-almost Scrummish, which means it was not really Scrum at all, but we will ignore that for now.

All in all, the use of stories and a Wiki to keep track of development, testing, etc. was a really good thing.  It kept developers (er, me) focused on what needed to be done, kept straight how it was gonna get done, and identified any problems that would prevent it from getting done.

Add to that the daily 10 minute standup meetings, which were the place to tell everyone else on the team what you were going to be working on today, and what, if anything, was standing in your way.

However…

There were also some things that got in the way of using all of these tools, and those things tended to be the immediate (and sometimes upper) management, and the business side people.  Here are the three biggest problems we (I) had with things.

Problem 1 – everyone needs to use the Wiki!

More often than not, stories were written by our immediate supervisor/manager after getting verbal(!) information from our business side person.  The resulting stories were usually spot-on for what the business side had in mind, but not always.  When they did not jibe with the expectations of the business side person, the developers were to blame, even though we were just following the story!  The supervisor rarely took direct responsibility for any miscommunication.  This would not have been a problem if the business side (and any other interested parties) used the Wiki for its intended purpose.  At a minimum, they should have looked at the stories written by the supervisor to make sure that everything was kosher.

Problem 2 – everyone needs to use the Wiki!

The Wiki was used for just about everything development-related.  What needed to be done, how it was going to get done, how it was going to be tested (the QA team members would also use the same Wiki), and, most importantly, show the progress of a story and its tasks.  This was good.

What was bad, is that developers often would get contacted directly by either our supervisor or our business side person to check on the status of things, or get details on what was being done or how it was being done.  We just spent 10 minutes updating our stories so that anyone could find out what is going on just by clicking a few links.  Now we get to spend more time updating this person or that person.  It is not that the time taken is a big deal.  It is the act of being distracted away from whatever we were working on.  Having to switch gears throughout the day to answer questions that have already been answered on the Wiki is not just a waste of our time, it shows a lack of respect for our time and what we do.

Problem 3 – Everyone…  Needs…  To…  Use…  The…  Wiki!!!

Meetings further exacerbated this problem.  Sometimes, questions would be asked during the standup meeting that were already answered on the Wiki.  Sometimes, hours prior to the standup.  Not only does this show a lack of respect for the developers, it shows a lack of respect for other people in the meeting as well — their time is being wasted here, too.  It also shows a lack of respect for the development process — we have the Wiki there for a reason.  Use It!  If our examples (anyone above us in the food chain) do not respect the system, then why should we?

Standups were not the only problem.  If any meetings were called that involved other people (like the business side), we would often get stuck answering questions that had already been answered on the Wiki, not just recently, but sometimes at the start of the iteration, which may have been more than a week ago!  This is not just disrespectful, but it is a waste of money as well.

Assume that we have a 7-member team that has 5 developers, one supervisor and one business side person.  Assume an average salary of $100K for each developer, $175K for the supervisor and $200K for the business side person.  If the business side person takes up 15 minutes of a meeting asking a question (and getting answers) for something that was already on the Wiki, that costs the company $$$ (remember: time is money).  How much money?  Well, going by contractor’s math, a salaried person’s corresponding hourly rate is ~salary * 0.0005.  Example:

$100,000/year ~=  $50/hour.   Rationale: assume ~250 working days in a year (after subtracting 2 weeks of vacation), at 8 hours a day, that comes out to 2000 hours.  So, 2000 hours * $50/hr = $100,000/yr.

If the business person spent 15 minutes reading the Wiki, that would only be 15 minutes of their time, so that would cost the company $25.  By wasting 15 minutes of time in a meeting with the developers and the supervisor, this costs the company $109.37.  ($84.37 of everyone else’s time plus the $25 for the business person’s time.)  Check the math yourself if you do not believe me – more than 4 times as much time and money is wasted!  Big difference, eh?  Even by calling a single developer, and chatting for 15 minutes, this costs the company more than necessary – it costs the business person’s time, the developer’s time, and the time for them to switch gears back to whatever they were doing before the unnecessary interruption.  At a minimum, this is $37.50, and 15 minutes of developer time where they were doing something other than development.
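
The contractor’s math above is easy enough to check in code (the salary figures are the same hypothetical ones used in the example; the function names are mine):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Contractor's math: hourly rate ~= salary * 0.0005 (about 2000 work hours/year).
double HourlyRate(double salary) { return salary * 0.0005; }

// Dollar cost of tying up a set of salaried people for some number of minutes.
double MeetingCost(const std::vector<double>& salaries, double minutes)
{
    double cost = 0.0;
    for (double salary : salaries)
        cost += HourlyRate(salary) * (minutes / 60.0);
    return cost;
}
```

With five $100K developers, one $175K supervisor and one $200K business person, 15 minutes of everyone’s time comes out to $109.375 – the ~$109.37 figure above – versus $25 for the business person reading the Wiki alone.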

A simple way to avoid this problem – everyone in a meeting should assume that everyone else’s time is at least as valuable/important as their own.  Once you start thinking like this, you may stop wasting other people’s time as easily as you once did.

Conclusion: using any agile development methodology and its tools is probably a good thing.  But you must get everyone involved to buy-in and use the system.  Having anyone (including those higher up on the food chain) refuse to use the system can make others wonder why.  If anyone is too good for the system, then why are we saddled with the burden of using it?  If there is a senior-level manager that thinks that they are too important to click a few links on a web page, then fine.  But this should be handled by a lower manager, not by the developers themselves.

The developers have a job to do – let them do it.

Guide and other nice XNA features…

So I have started playing around with the Guide, signed-in user preferences and information, and the UI features in XNA.  Gotta say, I am impressed so far…

I am currently punting and using the text input feature to get a player’s high score name, defaulting it to the configured player’s name.  When the game is done, I want to provide a custom-drawn GUI that allows the user to use the gamepad to select and enter letters, just like an old-school game does.

I am thinking of showing a visual keyboard and allowing the user to select characters from there.  I believe it will be quicker than just using up/down to change the character at the cursor’s position.  Either approach allows me to control the characters used, but I think the former will be more fun and a better learning experience.

XNA Audio Troubles…

So as I am adding sound effects to the game, I noticed that over time it would consume more and more memory and start to slow down. Another thing that would happen is that the audio would drag out and get sloooooooooooooooooow.

Now, I consider myself a pretty decent developer and I took steps to ensure that objects are reused where possible, instead of being new-ed up over and over again (.NET may be an environment that takes care of a lot of the dirty work of managing memory, but you still need to remember that it is a limited resource and code accordingly).

I should have taken the slowed sounds as a hint, but I went down the path of tracking objects and Garbage Collections to make sure that the object caches were healthy and the few objects that are destroyed and recreated are cleaned up in an orderly fashion as well.

After going through all that, I realized that the XACT-related classes are the culprits. If I turn off the sound, memory usage remains stable, even after running the game (in its attract mode) for more than 12 hours. It normally started to get slow after 45 minutes or so.

Looking up memory-related issues with audio in XNA, I read that other people have a similar problem, but only if they are not calling Update(…) in the Audio Engine during each frame. However, I AM calling Update(…) and am still having the problem. I even added additional calls to Update(…) from other locations in the code, but to no avail.

I even tried manually managing each Cue instance by storing them in a collection when they are played and then Dispose()-ing them when they are done playing. No difference.

I have temporarily resolved the issue by manually cleaning up and reallocating the Audio Engine at certain times during the game — between the end of a level and the start of a new one, and whenever the Title Screen transitions to the next screen when the game is in attract mode. Kludgy? Yep! But it works, and until an updated version of the XNA Framework is available that possibly fixes this bug, it will have to do.

Oh, and I used The GIMP to create a starburst graphic and am briefly displaying it as a “superluminal flash” (the warp equivalent of a sonic boom) when the player’s ship goes to warp. It looks sweet, even if I do say so myself! 🙂

XNA Wonderfulness

So having some extra time on my hands, I have started playing around with XNA Game Studio again (version 3.0 this time). I picked up a shooter tutorial created by PHStudios which gives you a functional, but very simple game.

I started on the shooter that I have always wanted to write. It was going to look and behave much like an old arcade shooter. I wanted the details of the game to be just like an arcade game. For example, when it first starts, it will show a garbage screen quickly followed by an alphanumeric test (with some sprites in there), grids and colors, and a ROM and RAM test (with successful results, of course), and then jump into attract mode waiting for someone to coin it up.

I have started this dream by taking the shooter tutorial and making extensive changes to it. I have since extended it by adding the following features:

  • Background Stars with multiple behaviors and effects (stopped, moving, warp, etc.)
  • Improved collision detection (only objects “near” each other are checked, and multiple collision rectangles are used for better checks)
  • Multiple enemies, including mini-bosses and bosses
  • Bonus items that “fall” that give you health, weapons, points, etc.
  • Additional weapon types
  • “Buddy” ships that dock to your ship for additional firepower
  • Auto-pilot/behavior for the enemy ships (i.e. what they do when they come on screen)
  • A “story” manager that manages each game level (i.e. what ships appear when and what their behavior is, also handles “waves” of attacks)
  • Explosions
  • Sound
  • A console that you can enter commands on to change/adjust the game’s behavior
  • And more!
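
The “near” broad-phase idea from the collision bullet above can be sketched generically (plain C++ here rather than the game’s actual XNA/C# code, with made-up types):

```cpp
#include <cassert>

struct Rect { int x, y, w, h; };    // axis-aligned rectangle

// Broad phase: are two rectangles within `margin` pixels of overlapping?
// Only pairs that pass this cheap test get the detailed per-part checks.
bool Near(const Rect& a, const Rect& b, int margin)
{
    return a.x < b.x + b.w + margin && b.x < a.x + a.w + margin &&
           a.y < b.y + b.h + margin && b.y < a.y + a.h + margin;
}

// Narrow phase: exact overlap test, run against each of an object's
// multiple collision rectangles for a tighter fit than one big box.
bool Intersects(const Rect& a, const Rect& b) { return Near(a, b, 0); }
```

Running `Near` first means the per-part rectangle checks only happen for objects that could plausibly be touching.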

Some of the features could be useful to others (like the background star, console, input manager) and I will release them free-for-use once I am done.

I am starting to think that I may be able to actually sell it once it is complete. I think that the retro arcade behavior of the game may appeal to some, and I hope that I can make it fun and challenging enough to get people interested.

Oh, and before I forget, there is a really nice free tool called sfxr that can create sound effects. I am using it for the retro-like sounds in the game. I have also created a bunch of different types of sounds that I will likely also release free just as a time-saver to others.

Another Downside of Browser-Based Apps

I once again find myself having to use a web-based application. This is often just a fancy name for a bloated set of code that provides a little more functionality than what a set of CGI scripts could provide.

The beauty of CGI apps was that they were often very succinct – they were used to process relatively small amounts of data that were entered in a small form.

Today’s applications give you multiple form fields and expect you to enter larger amounts of data. Some even play fancy DHTML tricks to allow you to dynamically add more fields so you can enter even larger sets of data. Nice, huh?

But what happens if that server goes down while you are entering all that data? Or if the people operating the site do not take into consideration just how long you can be entering data into one of their pages? You usually do not know about this as you are entering the data, and usually are not aware that you are about to lose all of that data you just entered until you press [Submit] and get back an error — too late!

Now, I do not script web pages, so I could be wrong about this… But we are in a world where we can play all kinds of fancy AJAX tricks, so why the HELL do web scripters (not developers, that term is reserved for people that do more than just write fancy client-side script) not just put a little AJAX code that keeps hitting the server to do things like (1) make sure it is still alive, (2) check for impending session timeouts, (3) and other stuff that make web apps appear more robust and professional?

Having a warning that the server has gone down before I submit some data would be great – I could copy my data to Notepad and then get it back when the server comes back. Now, this would be harder for pages that contain too many fields, but that is another indication that your app needs a better platform.

IIRC, even 3270-based form-style applications could handle server disconnections better than today’s equivalent browser-form based applications — at least they had an icon on the status line to indicate session state! That is the ironic part about this… not only did we take a giant step backward in UI evolution, but we completely missed the robustness that those older applications had.

Hurray for progress!

Remember kids – while lots of applications can work in a (D)HTML/AJAX browser-based interface, not all of them can work WELL in that interface. Read up on what happened when someone tried to port Lotus 1-2-3 to a 3270-style interface… Wanna guess how well that went?

Assumption is the mother of all Fuckups…

So I find myself in the middle of a posting frenzy regarding a story on The Daily WTF: http://thedailywtf.com/Comments/A-Problem-at-the-Personal-Level–More.aspx.

The point of my posts was that by withholding the assumptions made by the interviewer with his “one right answer”-kind of question, you put the interviewee in a bad position. (The link above explains the scenario.) IME, in the absence of specific details, one is likely to draw upon their experience to formulate a solution.

Then I read that, according to the interviewer, the one right answer was to use a move operation to relocate the complete data file to where the watcher was looking for it. Of course, my first question was whether the move was atomic or not. Far too many posters claimed that it always(!) was; other, more intelligent ones indicated that it should be.

So my first post there was asking about different filesystems. For example, the average Linux system can support many different filesystem types: ext2, ext3, ffs, UFS, ReiserFS, FAT32, NTFS, etc., and can have filesystem locations on different partitions, drives, and even network locations… So what if the source and destination locations are not on the same device/partition? Are the moves still atomic? My experience with both Linux and Win32 filesystem driver code tells me no, so that is what I posted, indicating that the assumption that everything is on one filesystem/partition must be known.

This post started to draw out lots of interesting people… One started talking about how the POSIX specification states that renames (and moves?) must be atomic, but did not know enough to realize that some systems may play fast-and-loose with the specification (Hello, Windows!). Another started talking about how the rename(…) syscall (the syscall!) is atomic. Well duh – most C-style functions are… it may return to the caller only after the rename (or move) is complete, but that does not mean that the behind-the-scenes action is atomic to an outside (filesystem) observer.
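
For reference, the safe pattern everyone was dancing around is to write the complete data to a temporary file on the same filesystem as the destination and then rename it into place. A sketch using C++17’s std::filesystem (function name is mine; the atomicity claim holds for same-filesystem renames on POSIX systems):

```cpp
#include <cassert>
#include <filesystem>
#include <fstream>
#include <string>

namespace fs = std::filesystem;

// Write the complete contents to a sibling temp file, then rename it into
// place. A same-filesystem rename is atomic on POSIX, so a watcher never
// sees a partially-written file; a cross-device rename raises an error
// (EXDEV under the hood) instead of silently degrading to copy + delete.
void PublishFile(const fs::path& dest, const std::string& contents)
{
    fs::path tmp = dest;
    tmp += ".tmp";                       // sibling path => same filesystem
    {
        std::ofstream out(tmp, std::ios::binary);
        out << contents;
    }                                    // flushed and closed before the rename
    fs::rename(tmp, dest);
}
```

Note that putting the temp file somewhere convenient like /tmp breaks the whole scheme the moment /tmp lives on a different partition, which was exactly my point.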

It amazes me how so many people just do not “get it.” Maybe I am just not a good communicator…

Or maybe these people really should stay away from a keyboard as much as possible… 🙂

Software – Robust vs. Works

I got myself involved in a thread on The Code Project Forums that led to the idea of Robust software, and one of the posters said:

I call an application robust if it’s in use in the real world for a lengthy period of time and no data has been lost and no one has experienced downtime as a result of a bug in that app.
That’s the bottom line in the real world.

(Obviously, this developer never accidentally opened a binary file in Notepad — which it will let you do, and is a very quick way to end up in the “data has been lost” scenario.) To which I responded why I was glad that he did not live in my real world, because the criteria he provided was a very low target in my opinion.

However, upon further reflection, while I still may not want developers like that in my world, I would just love to live in his! Imagine – not having to worry about things like data consistency across distributed systems, ordering, synchronization, and distribution of asynchronous events! Not having to think about the amounts of money that could be affected just by one little bug in the code somewhere because hey! – it did not lose any data, and the trader kept on using the system, right?! That would be heaven!

It would be fan-effing-tastic to be able to use Notepad as the model of what “robust” means! When was the last time you used Notepad? How long has it been around? Ever lost data or experienced downtime due to a real bug in it?

Imagine how much simpler developers’ lives would be if that were the case…! Hell, at least I know that I would sleep better at night!

Learn your history before you present yourself as an authority

When you read a book about something, you should be able to safely presume that the author is somewhat of an authority on the subject matter of the book. So when reading an e-book on C#, I was surprised to come across the following Author’s Note:

Author’s Note: struct is not a new concept. In C++, the struct is another way of defining a class (why?). Even though most of the time C++ developers use struct for lightweight objects holding just data, the C++ compiler does not impose this.

I find it interesting that the author of a C# programming book has to ask why “struct is another way of defining a class.” Tip: some things were done for backward compatibility and to make it easier to work with older code. Not to mention that in C++ there actually is a (small) difference between struct and class.

CString misuse #2

Here is another one:

/////////////////////
LPCTSTR cpString = _T( "This was some string message..." );
//
// Later On In Code...
//
CString str( cpString );
char cChar = str.GetAt( 0 );
/////////////////////

If you write code like this, stop now and back slowly away from the keyboard – You’re Doing It Wrong!

The developer here is using a string object for a very simple operation. This is the kind of thing people talk about when they say something like “using a shotgun to kill a fly”.

Extracting characters from a string (an array!) is a very basic operation – it is something we learn in our first C/C++ class or read about in our first C/C++ book. This is not something that you need a heavyweight class to help you out with.

Extracting the first character from cpString is as easy as doing one of the following:

/////////////////////
char cChar = *cpString;
//
// -Or-
//
char cChar = cpString[ 0 ];
/////////////////////

Remember – constructing and initializing an object always takes longer (i.e. has more overhead) than not constructing and initializing one. Think about whether or not you really need an object before you create one. If you can get along without it, see if doing so improves things.

For reasons mentioned in a previous post, in this case, the code is better without the CString.

CString misuse #1

This is the first of many examples of ways to misuse and/or abuse MFC’s CString class. While this example (and following ones) are specific to MFC, they likely apply to all string classes (mutable or not). Here is the offending code:

/////////////////////
CString str( "First Part Of Message\n" );
str = str + "Second Part Of Message\n";
str = str + "Third Part Of Message";

MessageBox( str );
/////////////////////

If you write code like this, stop now and back slowly away from the keyboard – You’re Doing It Wrong!

First, the developer is adding (concatenating) strings together, but these are static/constant strings! They always add up to the same string, and as such can be made into a single constant string:

/////////////////////
"First Part Of Message\nSecond Part Of Message\nThird Part Of Message"
/////////////////////

So at a minimum, the start of the code should read:

/////////////////////
CString str( "First Part Of Message\nSecond Part Of Message\nThird Part Of Message" );
/////////////////////

Why not add up the strings separately like the original code did? Two reasons – overhead and exception opportunity. Each use of CString::operator+(…) can result in dynamic memory operations (allocation and deallocation). So you are looking at six potential heap operations (three potential allocations and deallocations including destruction, although in release builds of CString, the number of operations is less). Each operation has the potential to raise an exception and in the absence of per-thread heaps, can effectively bottleneck a multi-threaded application to the performance of a single-threaded one because the heap operations have to be serialized.

So by manually putting the strings together we have reduced heap operations from 6 to 2 – one allocation and one deallocation. That is a pretty good improvement, but we can do better!

The MessageBox(…) function does not take CStrings, it takes pointers to constant strings (LPCTSTR). So why is a CString needed here at all?

/////////////////////
MessageBox( "First Part Of Message\nSecond Part Of Message\nThird Part Of Message" );
/////////////////////

This final version of the code is simpler, will execute faster, and is more robust. Sounds like a winner to me!

Note: Some of you may be thinking about the compiler’s ability to automatically concatenate adjacent string literals. Yes, it does that, but it cannot automatically coalesce the above strings because they are separate – they are being passed (separately) as parameters to a function. If the + operator was not present in-between the parts of the string, they would be coalesced to a single string, but the unnecessary CString would still be there.

You’re Doing it Wrong!

Having been inspired by the amount of photos on the internet showing various forms of spectacular failures (http://www.doingitwrong.com), ranging from failed bunny-hops to the most graceful faceplants, I thought that a coding equivalent of it might be worth trying out.

To that end, this section will cover little snippets of “wrong” code found in the wild. Unlike the photos, where the failure is usually fairly obvious, the failures present in the code snippets are not always as obvious, so a small discussion explaining the failure will always be present.

OK – enough of the BS…! Let’s get started!

Amazing thing with Test Driven Design (TDD)

At my current place of employment, we had someone come in to talk about some Agile development processes.  One of them was Test Driven Development.  As an example, the presenter explained how scoring works in bowling and then asked the audience to create the code used to implement the scoring.

To make a long story short, both he and the audience started with a brief design phase(!) for something as simple as keeping score.  I thought this was very interesting, even when hearing some of the other developers as they started on a multi-level hierarchical design for KEEPING SCORE!

I was reminded of an older idea/post that I had regarding the problems when you combine too much formal education with too little practical experience and throw the resulting person into a production-level software project.  Just like when you give a small child a hammer, everything looks like a nail.  When you give a lesser-experienced developer the task of designing something, you are more likely to get an over-design of something that does not reflect reality and tries to attain perfection instead of practicality.

(That post can be found here:
http://www.jrtwine.com/blog/?m=200407.)

What scares me is that some of the developers that were helping along this heavyweight, over-engineered design may now be responsible for new development efforts. I have always thought that you need to be a coder before you are a developer.  That you need to understand how and why things work in order to make better use of them.  As such, I shudder at the thought of having a group of lesser-experienced developers hitting everything they see with the same design hammer.  Especially having seen first hand what they are capable of with something as simple as a scoring system!

And people wonder why I believe that managed environments contribute to the dumbing-down of the modern software developer…

What is “good software”?

As I am reading a thread on WorseThanFailure (Our Dirty Little Secret), I see a post by “VGR” that states that companies do not recognize “good software”, but rather “finished” or “not finished.”  This is an interesting point.

But a question I would like answered is this – what exactly is good software?  How does one decide that this bit of software is good and that one is bad?  More to the point, since software starts with source code, how do you decide that this code is good and that code is bad?

I have grappled with this question myself, as I am sure others have as well.  I believe I know what good code is.  My education, experience and wisdom are my guides.  But what I believe to be good code is different from what another developer believes to be good code.  Another less seasoned developer may think something else, just as a more seasoned developer might.  So who is correct?

I believe that part of the problem with software today is that there are no common (or otherwise shared) standards for what constitutes good software (or good code which is where good software starts), excluding obvious things like “does not crash” or “does not corrupt data.”

So what exactly makes good code?  Is it…

  • Code that just works, or code that works well?  And what is the difference, if any?
  • Code that is Declarative, Imperative/Procedural, or just well commented?  Is it a combination of two, or all three?  And to that point, what exactly does “well commented” mean, anyway?
  • Code that uses encapsulation as much as possible, because (of course) encapsulation is “a good thing”, or is it code that selectively decides when it is advantageous to do so?
  • ??? What else?  I am sure that many other developers have other criteria…

For each of the items above, you can find developers that will argue for one thing or the other.  Worse, you can also find academics with relatively little to no real-world development experience doing the same, cultivating other future developers with the same thoughts!  This is good or bad depending on your point of view.

So how do we solve the problem?  I am not sure that it can be solved.  Software development is both art and science, and that art part is the killer.  Art is very subjective, and one person’s Picasso is another person’s misaligned jigsaw puzzle.  We may have to learn to all just get along here.

Why Performance is Important

When discussing topics like optimization and performance, there are far too many developers that either believe that performance is not important(!) or that the steps taken to optimize the performance of something somehow magically result in making that system less robust.

For the first point, I cannot imagine any developer who has ever uttered the words “Damn, this thing is slow” about their computer or a particular software application running on it ever thinking that performance is not important. The very fact that you are complaining about something’s performance means that performance is important. Or at least, important enough to complain about.

For the second, there are lots of ways to optimize something, and none of them have to directly result in reduced robustness. One of my favorite examples, which is to prefer stack memory over heap memory, can actually improve the robustness of software – it reduces the possible places where exceptions can be raised and thus lessens the chance for exception mis-management to cause problems.

One of the things to remember before opening your mouth to say that performance is not important is that your compiler still optimizes things to the best of its ability. Newer generations of compilers often offer more and better optimization capabilities as well. Why is this? Is it because performance is NOT important, and the compiler writers just wanted to waste time?

When a new processor architecture is made available, manuals that detail that architecture are produced that often specify the best way to utilize it. Cache utilization, multiple execution engines, out-of-order execution, register allocation, store/load stall scenarios, etc. are usually covered in great detail so that all of the capability of that new architecture can be used to its fullest potential.

Again, was that material written just to waste time, or does someone out there know something that you do not – namely, that performance is important?

One of the things that today’s developers may often forget is that while their software is running on better hardware, it is also running alongside other software applications. For you Windows users, have a quick look at your Task Bar and notification area (often mistakenly called the tray). How many applications are you running? Have a look at the process list in Task Manager and see how many processes are really running.

Now compare that value to how many applications you were running simultaneously on previous versions of Windows – 2000, NT, 9x, or even Windows 3.1. As our hardware gets better, we expect to be able to do more with it. But when that many applications are competing for shared resources (CPU, memory, etc.), the specter of performance once again rears its ugly head.

Just like writing device drivers takes a different discipline than writing desktop applications, writing software that has to execute in a shared environment is different from writing software that runs in a dedicated environment. The average desktop developer must not forget that their software will not be running in an ideal environment; just because it works great on the clean demo system, or on the developer’s multi-CPU box with 4GB of memory, does not mean that its performance is good enough when it hits the target user’s system.

Premature Optimization may not be premature…

There is an interesting thing I am noticing with younger developers – anytime someone mentions optimization, the first words out of their mouths are something about how the optimization is premature, and is only going to cause more harm than good.

These developers lack a certain wisdom that comes with years of varied experience – once you have experienced that kind of inefficiency firsthand, you know how to spot it in the future.

Optimization is about simplicity. Think about it – whenever something is considered optimal, it is usually simplified somewhat from its original incarnation. An optimized interface is usually a simplified one. Optimized code usually takes fewer steps to do something, and is thus usually less complex; hence – simplified.

From the first Computer Science (or programming) class, the KISS philosophy is hammered in. Keep It Simple, Stupid. The art of optimization is the ultimate application of the KISS philosophy.

Never underestimate or disregard the benefit of simplification, which is nothing but a better word for optimization. Simple is easier to use, understand and modify in the future. What could possibly be wrong with that?

I want to meet the developer(s) of Sybase SQL Advantage…

Version: 12.5.0.3/EBF 10752 IR/P/NT (IX86)/OS 4.0/Wed Jan 15 12:59:30 2003

Why? Because I want to ask them why it takes SQL Advantage ~16 seconds to process a query that returns 674 rows of ~70 columns each and display them in Grid or Text output, when I have written an application that goes through two additional API layers above the CT libraries, and can do the same thing (even in a Grid) in less than 2 seconds?

You actually have to go out of your way and TRY to write code that slow – that kind of lousy performance does not happen automatically. That is the kind of stupidity that has to be cultivated through bad practices.

I just want to know how someone could write such an app and consider it suitable for release to the public. I need to know the mindset behind that, so I know what questions to ask potential employees during an interview so that they can be weeded out. I do not even want to sit next to someone like this, for fear of them making me dumber via osmosis.

I can only hope that the GUI developers are not the same ones that implement the underlying libraries or the RDBMS itself…

Best WTF Moments – Correcting the Test’s Answers

In talking with a friend I was reminded of one of my favorite WTF moments – correcting the answers on an interviewer’s tech questions. Once, while I was interviewing for a position, the interviewer was going over my answers to the tech questions and happened to mention that one (or two?) of them were incorrect.

When I asked about them, it turned out they were questions regarding the size of C++ objects that have no data members. For example:

class CEmpty { };
class CEmpty2 { public:  void MyFunc( void ) { return; } }; 
class CEmpty3 { public: virtual void MyFunc( void ) { return; } };

So what is the size of CEmpty, CEmpty2 and CEmpty3? My answer was that it was basically implementation-defined. The interviewer’s answers said that CEmpty and CEmpty2 had a size of zero, and that CEmpty3 had a non-zero size.

I had answered that CEmpty and CEmpty2 will have an implementation-specific/defined size (in my experience, a size of 1) and that CEmpty3 will have a size due to the vtbl pointer that will be added to the class, and added that the size of the vtbl pointer will not be added on top of the implementation-specific size given to otherwise empty objects. (In other words, if the size of the vtbl pointer is 4 bytes, the object size will be 4, not 5.)

The interviewer, being a/the senior developer that I would have been working with or for, did not agree and we ended up with some code snippets being compiled and executed in the VC++ 6.0 IDE. Wanna take a guess who was correct?

It turns out that the company’s CTO decided not to accept me for the position. I was never told why (I otherwise aced the interview, of course), but I was told that the CTO created the interview questions (and answers). Go figure…!

More Stupid Code…

Here is another example of code that demonstrates a complete misunderstanding of how things work, or at least of MFC and/or the RTL…

char value[256];
::GetPrivateProfileString("section","ValueName", "OFF", value, 256, INI_PATH);
CString temp(value);
temp.MakeUpper();
if (temp != "OFF")
{ ... }

Now, I can understand the need to do a case-insensitive compare of an INI file value.  But we have functions designed to do that!  Never heard of stricmp(…) and its variants?  OK – even if you do not know about the available RTL functions and all you know is CString, never heard of CString::CompareNoCase(…)?

Code like this just demonstrates ignorance, plain and simple.  Oh, and how goes that exception handling for situations where CString fails to allocate memory?  Oh, yeah…  THERE IS NONE!

Yet another real-world example of useless allocation.

Worst… Spam… Ever…

So while browsing through my spam-basket I came across an interesting message that was caught by SpamAssassin.  The headers from that message follow (edited slightly to remove addresses and to emphasise details):

Subject: §Ú¬O¤@­Ó·Å¬Xªº¨Ä¨Ä¤k«Ä.·Q´M§ä¨ë¿Eªº­ô­ô±z³ßÅw¶Ü?¡ð20·³
X-Spam-Status: Yes, score=63.4 required=5.0 tests=BAYES_99, DATE_IN_FUTURE_96_XX, FORGED_MUA_EUDORA, FORGED_QUALCOMM_TAGS, FROM_ILLEGAL_CHARS, HEAD_ILLEGAL_CHARS, HTML_30_40, HTML_IMAGE_ONLY_08, HTML_MESSAGE, HTML_MIME_NO_HTML_TAG, HTML_SHORT_LINK_IMG_1, MIME_BOUND_DD_DIGITS, MIME_HTML_ONLY, MIME_HTML_ONLY_MULTI, MISSING_MIMEOLE, MSGID_SPAM_CAPS, NORMAL_HTTP_TO_IP, RCVD_DOUBLE_IP_SPAM, RCVD_IN_BL_SPAMCOP_NET, RCVD_NUMERIC_HELO, REPTO_QUOTE_QUALCOMM, REPTO_QUOTE_YAHOO, SUBJ_ILLEGAL_CHARS, UNPARSEABLE_RELAY, URIBL_JP_SURBL, URIBL_OB_SURBL, URIBL_SBL, URIBL_SC_SURBL, URIBL_WS_SURBL, X_IP, X_PRIORITY_HIGH autolearn=spam version=3.1.7
Date: Tue, 19 Jan 2038 11:14:07 +0800
X-Spam-Flag: YES
X-Spam-Level: **************************************************
From xxxx.xxxxxxxx@xxxxxx.com.br Tue Oct 24 16: 43:51 2006
X-Spam-Checker-Version: SpamAssassin 3.1.7 (2006-10-05) on xxxxxx.xxxxxx.com
Message-Id: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Content-Type: multipart/mixed; boundary="----------=_453E7B09.452041D8"
To: support@xxxxxxx.com.tw
From: mailto:¼ÐÃD¡G¡m¥¨¨Å°Ï¡n·s¼W¡m»a¤«ªÅ¡n±j¥´·s¤ù¡A¤ù¤¤¦³¦o¸g¨åªº¼é§jÃèÀY¡I³ôºÙ¥L§@«~¤ºªº¤W¤W¤§¿ï%20(¡m¥¨¨Å°Ï¡n·s¼W¥i·Rµ£ÃCÄ_¨©¡m»a¤«)

I can honestly say – that has got to be the worst message I ever received.  Most of my spam emails never get above a spam-score of 20!  What gets me is that the sender of the message somehow managed to completely mess it up.  So this is an example of the fourth (I think) rule of software development – always test what you are doing (or trying to do)!

I mean, come on now… If you are stupid enough to construct a message that sets off that many spam-traps, you really are an idiot!  It is things like this that give me hope that we will eventually win the war on spam.  Hell, look at the kinds of people we are fighting! 🙂

Take the Advice you are Paying for

After you have been doing something well for a number of years, you begin to gain experience and wisdom regarding it. Generally, this translates to a higher salary and/or rate, as it should, of course.

Companies pay this higher salary/rate because of that experience and wisdom. But it makes no sense for the people you work for to then ignore that wisdom. When that happens, it is nothing but a waste of your time and their money, and is demonstrative of complete ignorance of your experience.

And as is usually the case, when other people come from a position of ignorance, they tend to inflict the problems it causes on others, instead of correcting their own problem (i.e. their ignorance) first.

The moral – if you are paying someone $167K (or more) a year, you are paying for their knowledge, experience, wisdom, and advice. Time to start getting your money’s worth – take the advice you are paying for; do not unnecessarily question it, and realize that despite your age and position, this person might know something that you do not.


Oh, and to clear it up, this is my point of view on the differences between knowledge, experience and wisdom:

  • Knowledge is what you get from schools and books, magazines, articles, training, self-study, etc. (e.g. learning C++, VB, COM and Java)
  • Experience is what you gain by applying that knowledge in real-life situations (e.g. using C++, VB and Java to solve particular problems)
  • Wisdom is what is learned from the results of the experience (e.g. learning when to use C++ over Java, Java over VB, and what things should and should not be a COM object, etc.)

NULL != NUL

I continue to find it rather amusing that even (so-called) experienced developers will use fundamentally different concepts interchangeably, even after doing this for so long.

For example, I have seen documentation by developers that mention nul values in a database table, or worse yet, NULL-terminated strings, and NUL pointers.

Now, some that simply miss the point will be saying something like: “In C++, NULL is zero, and the NUL ASCII code has a value of zero, so they are the same thing!”

Wrong.
Continue reading NULL != NUL