Mobile Application Design Success

Mobilized Software writes that “one-click simplicity and cultural relevancy are the keys to user interface and design work.”

There are at least two good examples at the other end of the design simplicity spectrum. “What do Google and SMS have in common?” asked mobile blogger Russell Beattie. “In terms of interactivity, users just have to contend with one field.”

Beattie advises developers to think carefully about adding buttons, fields or any ounce of complexity to their mobile applications. Given how the masses use mobile devices and cell phones, these additional efforts might be wasted.

SugarCRM writes:

SugarCRM has a business model similar to that of Red Hat. It offers a free open-source version of its customer relationship management (CRM) application, Sugar Sales, and sells licences for its enterprise version, Sugar Sales Professional, which includes additional features and services.

Smith claims her company is unique in that it gains revenue from an open-source business application, rather than middleware or an operating system. “We are one of the pioneers in (the) open-source vendor space–we provide a business tool which interacts with users, rather than just a back-end product,” Smith said.

The on-demand product will bring CRM to companies that could not use it before because of restricted IT and financial resources, Smith said. The company is hoping that the product, which will cost $39.99 per user per year, will rival the on-demand offerings of companies such as Siebel. One competitor’s entire offering is on-demand, and a basic package costs $995 a year for five users, according to that company’s Web site–five times the cost of SugarCRM’s proposed offering.

Source Code Analysis

InfoWorld has an article by Jon Udell:

TDD (test-driven development) is one increasingly popular approach to finding bugs. The overhead can be substantial, however, because the test framework that ensures a program’s correctness may require as many lines of code as the program itself. Run-time checking is another popular approach. By injecting special instrumentation into programs or by intercepting API calls, tools such as IBM’s Rational Purify and Compuware’s BoundsChecker can find problems such as memory corruption, resource leakage, and incorrect use of operating system services. TDD and run-time checking are both useful techniques and are complementary. But ultimately, all errors reside in the program’s source code. Although it’s always important for programmers to review their own code (and one another’s), comprehensive analysis demands automation.
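The test-first style Udell mentions can be shown in a few lines. The `slugify` function and its tests below are a made-up illustration, not code from the article; in practice the tests would live in a framework such as unittest or pytest. Note how even this tiny example bears out his point about overhead: the tests are roughly as long as the code they check.

```python
# Hypothetical TDD example: the tests below were written first and
# describe the desired behavior; slugify() is the minimal code that
# makes them pass. Names are illustrative, not from the article.

def slugify(title: str) -> str:
    """Lower-case a title and join its words with hyphens."""
    return "-".join(title.lower().split())

def test_slugify():
    assert slugify("Source Code Analysis") == "source-code-analysis"
    assert slugify("  Grid   Computing ") == "grid-computing"

test_slugify()  # in practice a test runner would collect and run this
```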

One compelling demonstration of the power of automated source code analysis is Coverity’s Linux bugs database. Viewable online, this April 2004 snapshot pinpointed hundreds of bugs in the Linux 2.6 source code. Coverity’s analyzer, called SWAT (Software Analysis Toolset), grew out of research by Stanford professor Dawson Engler, now on leave as Coverity’s chief scientist.

In the Windows world, a static source code analyzer called PREfast, which has been used internally at Microsoft for years, will be included in Microsoft Visual Studio 2005 Team System. PREfast is a streamlined version of a powerful analyzer called PREfix, a commercial product sold in the late 1990s by a company called Intrinsa. Microsoft acquired Intrinsa in 1999 and brought the technology into its Programmer Productivity Research Center.

Intersection of Media and Technology

Steve Neiderhauser writes:

About a week ago, I attended a LinkedIn-related networking event where a group of executives ate lunch and listened to a presentation. The topic? The convergence of home entertainment and technology.

Michael Greeson, President of The Diffusion Group, talked for 30 minutes about trends in digital entertainment. What’s worked? What hasn’t? Where is digital entertainment heading?

Here are some of the key points Michael made:

  • Home entertainment and technology are converging. TVs and other home entertainment products are using CPUs, disk drives, and memory.

  • The PC is no longer in replacement mode — today, when you buy a PC you’re able to keep it for three or four years. So, how will PC companies grow revenues? New business models are needed.

  • Dell’s supply chain is its business model–a model that will no longer produce additional revenues by itself.

  • HP is a company that creates products (it spends millions on R&D), and yet HP made a deal with Apple to resell the iPod. It takes a unique set of skills to make digital media products.

    What does this mean for companies that make PCs? They need to move to the intersection of media and technology if they wish to grow revenues.

    To me, it appears the stars are starting to align in Apple’s favor.

Addressing Security

Jakob Nielsen writes that “user education is not the answer to security problems.” His recommendations:

  • Encrypt all information at all times, except when it’s displayed on the screen. In particular, never send plaintext email or other information across the Internet: anything that leaves your machine should be encrypted.

  • Digitally sign all information to prevent tampering and develop a simple way to inform users whether something is from a trusted source. This might, say, replace current stupid security warnings that people don’t understand because they expose the guts of the technology. (“The security certificate has expired or is not yet valid.” Aha. And what does that mean to a normal person?)

  • Turn on all security settings by default since most people don’t mess with defaults. Then, make it easy to modify settings so that users can get trusted things done without having to open a wide hole for everybody.

  • Automate all updates. Most virus software downloads new virus definitions in the background, which is a good first step. The automated patching introduced with Windows XP’s SP2 is also an improvement.

  • Polish security features’ usability to a level far beyond anything we’ve seen so far. Security is inherently complicated, and it’s something users don’t care about (until it’s too late). The user interface requires the ultimate in simplicity. Heavy user testing and detailed field research are a must.
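Nielsen’s “digitally sign all information” recommendation boils down to attaching a keyed checksum that the receiver can verify without needing to understand certificates. A minimal sketch using Python’s standard library follows; the shared key and messages are made-up placeholders, and a real system would use TLS or public-key signatures rather than a bare shared-secret HMAC:

```python
import hashlib
import hmac

# Sketch of tamper detection with a keyed hash (HMAC).
# SECRET_KEY is a placeholder; in practice keys come from a key store,
# and public-key signatures are used when the sender and receiver
# cannot share a secret.
SECRET_KEY = b"demo-key-not-for-production"

def sign(message: bytes) -> str:
    """Return a hex signature binding the message to the key."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str) -> bool:
    """Constant-time check that the message was not tampered with."""
    return hmac.compare_digest(sign(message), signature)

msg = b"pay $39.99 per user per year"
sig = sign(msg)
print(verify(msg, sig))                     # intact message verifies
print(verify(b"pay $3999 per user", sig))   # tampered message fails
```

The user-facing half of Nielsen’s point is that the result of `verify` should surface as a simple trusted/untrusted indicator, not as a certificate dialog.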

TECH TALK: CommPuting Grid: Grid Computing (Part 3)

Anurag Shankar compares the computing grid with a power grid, and then discusses it in the context of the Web that we are so familiar with:

    Grid computing is a way to use many computers connected via a network simultaneously to solve a single scientific or technical problem. Most commonly, these problems require substantial amounts of CPU cycles (i.e., compute power) and/or produce or access massive amounts of data.

    The word grid is borrowed from the power grid context. Just so we are clear and in sync, a power grid is a system that encompasses:
    1. a physical hardware layer consisting of
       • a network of wires that run across the country and carry electricity, and
       • a large number of power generation stations;
    2. a power distribution system that detects overloads and underloads and diverts electricity accordingly to different parts of the country, or shuts it off entirely in case of problems; and
    3. a large number of users around the country that use electricity.

    A computing grid is relatively similar, except electric power is replaced by compute power. When fully realized, the computing grid will consist of:
    1. a physical hardware layer consisting of
       • a network of optical fiber that runs across the world and carries data bits, and
       • a large number of computers, data storage systems, communication systems, global positioning systems, live instruments, etc.;
    2. a computing power distribution system that knows where compute power is available and diverts work accordingly to different compute resources around the world, or shuts a user or a resource off entirely if it finds security or other problems; and
    3. a VERY LARGE number of users around the world that use computing and communications.

    Let me thus define grid as it pertains to computing very precisely as follows:

    A computing grid is a collection of some or all of the following resources: computer networks (optical fiber, routers, switches), CPUs (PCs/Macs, other servers/computers), data storage systems, scientific/medical instruments (X-ray machines, CAT scanners, etc.) feeding live or accumulated data, sensor networks (for example, a thousand RFID tags placed in a rainforest to measure temperature, humidity, light exposure, etc. in very fine detail), visualization systems (PCs or fancier viz gear like virtual reality), data collections (scientific, demographic, medical, etc.) housed either in or out of databases, communication systems (cell phones, Blackberry-like devices, etc.), global positioning systems, and the like. All the resources are connected at high speeds by a computer network and mostly used in parallel to solve a single problem. In addition, the resources are either
    a) made visible to a user via some sort of software that presents the resources holistically as a single, coherent entity, or
    b) accessed via a web portal that uses the grid at the back end but hides its complexity from the end user.

    As you can see immediately, grid computing is really a superset of today’s World Wide Web (WWW). The similarity is relatively obvious, especially in scenario b) above. While all of the computers involved in the WWW are connected by a computer network, your web browser in a majority of cases ultimately connects to a single web server. Clearly, two computers connected over the network is not a terribly exciting grid. On the other hand, a WWW address (URL) you enter in your web browser may well involve many, many WWW servers, located either in the same room or across the continent or world from each other, working simultaneously and in parallel via the magic of software.
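The core pattern Shankar describes, one problem split up and farmed out to many compute resources whose results are then combined, can be emulated on a single machine with Python’s standard library. This is a toy sketch, not grid middleware: the task (summing squares over ranges) and the worker count are illustrative assumptions, and a real grid adds resource discovery, security, and wide-area data movement on top of this basic pattern.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy stand-in for a grid: one problem (sum of squares below n) is
# split into independent chunks, "distributed" to a pool of workers,
# and the partial results are combined into a single answer.

def sum_squares(chunk):
    start, stop = chunk
    return sum(i * i for i in range(start, stop))

def grid_sum_squares(n, workers=4):
    # Split [0, n) into roughly equal ranges, one per worker.
    step = -(-n // workers)  # ceiling division
    chunks = [(lo, min(lo + step, n)) for lo in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum_squares, chunks))

print(grid_sum_squares(10_000))
```

A real grid scheduler plays the role of the pool here: it knows which resources are free, routes each chunk to one of them, and cuts off a resource that misbehaves.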

    Anurag discusses computing grids that we are in contact with almost daily as we browse the Web: Google and Akamai. “In a sense, the grid is the future Internet.” He added: “I am assuming that CPU power and network bandwidth will soon be completely commodity and infinite in extent (in particular, computers will become essentially throw-away items every eighteen months or sooner, when a new generation of CPUs twice as fast as the one you are currently using comes out). The future is completely data-centric; i.e., it’s ALL ABOUT DATA–moving it, mining it, and delivering it on demand (as Google and Akamai are showing already).”

    Tomorrow: Recent Developments