Localization Testing: Part 1

When I first joined Microsoft 15 years ago I was on the Windows 95 International team. Our team was responsible for reducing the delta between the release of the English version and the Japanese version to 90 days, and I am very proud to say that we achieved that goal and Windows 95 took Japan by storm. It was so amazing that even people without computers lined up outside sales outlets waiting to purchase a copy of Windows 95, and the growth of personal computers in Japan shot through the roof over the next few years. Today the Chinese market is exploding, and eastern European nations are experiencing unprecedented growth as well. While demand for the English-language versions of our software remains high, many of our customers are demanding software that is 'localized' to accommodate their national conventions, language, and even locally available hardware. Although much of the Internet's content is in English, non-English sites on the web are growing, and even ICANN is considering allowing internationalized domain names that contain non-ASCII characters this week in Seoul, Korea.

But a lot has changed in how we develop software to support international markets. International versions of Windows 95 were developed on a forked code base. Basically, this means the source code contained #ifdefs instructing the compiler to compile different parts of the source code depending on the language family. From a testing perspective this is a nightmare: if the underlying code base of a localized version is fundamentally different from the base (US English) version, the testing problem is magnified because a great deal of functionality must be retested. Fortunately, much of the software produced today is based on a single worldwide binary model. (I briefly explained the single worldwide binary concept at a talk in 1991, and Michael Kaplan talks about the advantages here.) In a nutshell, a single worldwide binary model is a development approach in which any functionality any user anywhere in the world might need is codified in the core source code, so the core code never needs to be modified after it is compiled in order to add language- or locale-specific functionality. For example, it was impossible to input Japanese text into Notepad on an English version of Windows 95 using an Input Method Editor (IME); I needed the localized Japanese version. But on the English version of Windows XP, Vista, or Windows 7, all I have to do is install the appropriate keyboard drivers and font files and expose the IME functionality. In fact, these days I can map my keyboard to over 150 different layouts and install fonts for all defined Unicode characters on any language version of the Windows operating system.
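
To make the contrast concrete, here is a minimal C++ sketch of the two models. The FORKED_JAPANESE_BUILD macro and the details are invented for illustration, not taken from the Windows 95 sources:

    #include <windows.h>
    #include <cwchar>

    int main()
    {
    #ifdef FORKED_JAPANESE_BUILD
        // Forked code base (hypothetical macro): the Japanese behavior exists
        // only in a separately compiled binary, so each language build must
        // be retested in full.
        const wchar_t* uiLocale = L"ja-JP";
    #else
        // Single worldwide binary: one binary, compiled once, queries the
        // user's locale at run time (requires Windows Vista or later).
        wchar_t uiLocale[LOCALE_NAME_MAX_LENGTH];
        GetUserDefaultLocaleName(uiLocale, LOCALE_NAME_MAX_LENGTH);
    #endif
        std::wprintf(L"Active locale: %ls\n", uiLocale);
        return 0;
    }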

The big advantage of the single worldwide binary development model is that it allows us to differentiate between globalization testing and localization testing. At Microsoft we define globalization as “the process of designing and implementing a product and/or content (including text and non-text elements) so that it can accommodate any market (locale).” And we define localization as “the process of adapting a product and/or content to meet the language, cultural and political expectations and/or requirements of a specific target market.” This means we can better focus on the specific types of issues that each testing approach is most effective at identifying. For localization testing, it means we can focus on the specific things that change in the software during the “adaptation processes” used to localize a product for each target market.

The most obvious adaptation process is 'localization' proper, or more precisely the translation of user interface textual elements such as menu labels, text in label controls, and other string resources commonly exposed to the end user. However, translating string resources is not the only thing that occurs during the localization process. Adapting a software product to a specific locale may also include other changes: different font files and keyboard drivers installed by default, registry keys set to different values, drivers to support locale-specific hardware devices, and so on.
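
To illustrate why translated strings normally do not require code changes, here is a minimal Win32 sketch. IDS_FILE_MENU is a hypothetical resource identifier; in a real project its text would live in a per-language string table that the localizers replace:

    #include <windows.h>

    // Hypothetical resource identifier; normally defined in resource.h with
    // its text stored in a per-language string table (.rc or .mui file).
    #define IDS_FILE_MENU 101

    // The loading code is identical in every localized build; only the
    // resource data attached to the binary differs per language.
    void ShowFileMenuLabel(HINSTANCE hInstance)
    {
        wchar_t label[128];
        if (LoadStringW(hInstance, IDS_FILE_MENU, label, 128) > 0)
        {
            MessageBoxW(NULL, label, L"Localized label", MB_OK);
        }
    }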

3 Categories of Localization Class Bugs

I am a big fan of developing a taxonomic hierarchy of bugs as part of my defect tracking database; it is a best practice because it enables me to analyze bug data more efficiently. If a particular category of bug, or a type of bug within a category, is reported frequently, then perhaps we should find ways to prevent the problem, or at least minimize it, later in the development lifecycle. After years of analyzing different bugs, I classified all localization class bugs into three categories: functionality, behavioral/usability, and linguistic quality.
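
As a rough sketch of what such a taxonomy might look like in a defect-tracking schema (the type and field names below are invented for illustration, not our actual database):

    // Hypothetical localization-bug taxonomy for a defect-tracking schema.
    enum class LocalizationBugCategory
    {
        Functionality,        // requires a fix in the core source code
        BehavioralUsability,  // layout, truncation, mnemonics, UI resources
        LinguisticQuality     // mistranslations, terminology, grammar
    };

    struct BugRecord
    {
        int id;
        LocalizationBugCategory category;
        const wchar_t* locale;   // e.g. L"ja-JP"
        const wchar_t* summary;
    };

Grouping bug records by category and locale then makes trend analysis a simple counting query.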

Functionality bugs exposed in localized software affect the product's functionality and require a fix in the core source code. Fortunately, with a single worldwide binary development model, where core functionality is separated from the user interface, the number of bugs in this category is relatively small. Checking that the appropriate registry keys are set and that the expected files are installed in a new build is reasonably straightforward and should be built into the build verification test (BVT) suite, as the sketch below illustrates. Other things that should be checked include application and hardware compatibility. It is important to identify these types of problems early because they are quite costly to correct and can have a large ripple effect on the testing effort.
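
Here is a minimal sketch of what such BVT-style checks might look like. The registry key and font path are examples for a hypothetical Japanese build; a real suite would read its expected keys and files from per-locale test data:

    #include <windows.h>
    #include <cwchar>

    // Returns true if the named value exists under HKEY_LOCAL_MACHINE.
    static bool RegistryValueExists(const wchar_t* subKey, const wchar_t* valueName)
    {
        HKEY key;
        if (RegOpenKeyExW(HKEY_LOCAL_MACHINE, subKey, 0, KEY_READ, &key) != ERROR_SUCCESS)
            return false;
        LONG result = RegQueryValueExW(key, valueName, NULL, NULL, NULL, NULL);
        RegCloseKey(key);
        return result == ERROR_SUCCESS;
    }

    static bool FileExists(const wchar_t* path)
    {
        return GetFileAttributesW(path) != INVALID_FILE_ATTRIBUTES;
    }

    int main()
    {
        bool ok = true;
        // Example checks for a hypothetical Japanese build verification pass.
        ok = RegistryValueExists(L"SYSTEM\\CurrentControlSet\\Control\\Nls\\Language",
                                 L"InstallLanguage") && ok;
        ok = FileExists(L"C:\\Windows\\Fonts\\msgothic.ttc") && ok; // default Japanese font
        std::wprintf(ok ? L"Locale BVT checks passed\n" : L"Locale BVT checks FAILED\n");
        return ok ? 0 : 1;
    }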

Behavioral and usability issues primarily impact the usefulness or aesthetic quality of user interface elements. Many problems in this category do not require a fix in the core functional layer of the source code. The types of bugs in this category include layout issues, untranslated text, keyboard mnemonic conflicts, and other problems that are often fixed in the user interface form design, form class, or form element properties. This category often accounts for more than 90% of all localization class bugs. Fortunately, the majority of these problems do not require any special linguistic skills; a careful eye for detail is all that is required to expose even the most subtle bugs in this category.
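
Many of these checks lend themselves to simple automation. For example, localized menu labels sometimes end up assigning the same keyboard mnemonic (the character following '&') to two items. Here is a minimal hypothetical sketch of such a check, with invented German labels as input:

    #include <cwctype>
    #include <cwchar>
    #include <set>
    #include <string>
    #include <vector>

    // Reports access-key mnemonics (the character after '&') that appear on
    // more than one label within a single menu or dialog.
    std::vector<wchar_t> FindDuplicateMnemonics(const std::vector<std::wstring>& labels)
    {
        std::set<wchar_t> seen;
        std::vector<wchar_t> duplicates;
        for (const std::wstring& label : labels)
        {
            size_t pos = label.find(L'&');
            if (pos == std::wstring::npos || pos + 1 >= label.size())
                continue; // item has no mnemonic assigned
            wchar_t key = (wchar_t)std::towlower(label[pos + 1]);
            if (!seen.insert(key).second)
                duplicates.push_back(key);
        }
        return duplicates;
    }

    int main()
    {
        // Invented German menu labels: "Bearbeiten" and "Beenden" collide on 'b'.
        std::vector<std::wstring> labels = { L"&Bearbeiten", L"&Beenden", L"&Ansicht" };
        for (wchar_t key : FindDuplicateMnemonics(labels))
            std::wprintf(L"Duplicate mnemonic: %lc\n", (wint_t)key);
        return 0;
    }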

The final category of localization class bug is linguistic quality. These bugs are primarily mistranslations. Obviously, the ability to read the language being tested is required to identify most problems in this category. We found testers spent a lot of time looking for this type of bug, but the majority of linguistic quality issues reported were later resolved as won't fix. There are many reasons for this, but here is my position on this type of testing: stop wasting your test team's time validating the linguistic quality of the product. If linguistic quality is a problem, then hire a new localizer, hire a team of 'editors' to review the localizer's work, or hire a professional linguistic specialist from the target locale as an auditor. Certainly, if testers see an obvious gaffe in a translation, they must report it. However, testers are generally not linguistic experts (even in their native tongue). I would never advocate hiring testers simply based on linguistic skills, nor as a manager would I want to dedicate my valuable testing resources to validating linguistic quality; that is usually not where their expertise lies, and it probably should not be.

What’s Next

Since behavioral/usability issues are the most prevalent type of localization class bug, this series of posts will focus on localization testing designed to evaluate user interface elements and resources. The next post will expose what is often the single most reported bug in this category.

Comments

  • Anonymous
    November 11, 2009
    The comment has been removed
  • Anonymous
    November 12, 2009
    Hi Rikard, Spot on! Over-localization of strings that should not have been localized is indeed something that can lead to functionality problems. Fortunately, the maturation of our pseudo-localization processes has largely reduced this and other problems. But it is still something we must test for.
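
    For readers not familiar with the technique, pseudo-localization mechanically transforms every localizable string (accented look-alike characters, padding to simulate text growth, and bracketing markers) so that hard-coded or over-localized strings stand out immediately. A minimal sketch of the idea, not our actual pipeline:

        #include <cwchar>
        #include <map>
        #include <string>

        // Illustrative pseudo-localization transform: accented substitutions,
        // padding to simulate translation growth, and [ ] markers so that
        // truncated or untransformed (hard-coded) strings are easy to spot.
        std::wstring PseudoLocalize(const std::wstring& source)
        {
            static const std::map<wchar_t, wchar_t> lookalikes = {
                { L'a', L'å' }, { L'e', L'é' }, { L'i', L'î' },
                { L'o', L'ö' }, { L'u', L'ü' }, { L'A', L'Â' }
            };
            std::wstring result = L"[";
            for (wchar_t ch : source)
            {
                auto it = lookalikes.find(ch);
                result += (it != lookalikes.end()) ? it->second : ch;
            }
            result += L"!!!]"; // padding simulates longer translated text
            return result;
        }

        int main()
        {
            std::wprintf(L"%ls\n", PseudoLocalize(L"Save As").c_str()); // prints [Såvé Âs!!!]
            return 0;
        }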