State of CAD and Engineering Workstation Technologies

Abbreviations

  • CAD is Computer-Aided Design
  • CAE is Computer-Aided Engineering
  • CEW is Computer-Aided Design and Engineering Workstation
  • CPU is Central Processing Unit
  • GPU is Graphics Processing Unit

Hardware for CPU-Intensive Applications

Computer hardware is designed to support software applications, and it is a common but simplistic view that higher-spec hardware will make every software application perform better. Until recently, the CPU was indeed the only device that performed computation for software applications. Other processors embedded in a PC or workstation were dedicated to their parent devices: a graphics adapter card for display, a TCP-offload card for network interfacing, a RAID controller chip for hard disk redundancy or capacity extension. However, the CPU is no longer the only processor available for software computation, as the next section explains.

Legacy software applications still depend on the CPU for computation. That is, the common view remains valid for software applications that have not taken advantage of other types of processors. Our own benchmarking leads us to believe that applications like Maya 03 are CPU-intensive.

For CPU-intensive applications to perform faster, the general rule is to have the highest CPU frequency, more CPU cores, more main memory, and perhaps ECC memory (see below).

Legacy software was not designed for parallel processing, so we should check carefully with the software vendor before expecting multiple-core CPUs to produce higher performance. Regardless, we can achieve higher total throughput by executing multiple instances of the same application, but this is not the same as multi-threading within a single application.
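
To make the distinction concrete, here is a minimal POSIX C sketch of our own; the busy_work routine and the instance count are invented for illustration. Running the same single-threaded routine as four independent processes raises total throughput, but no single run finishes any sooner:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    /* Stand-in for a single-threaded legacy workload. */
    static double busy_work(long iters) {
        double x = 0.0;
        for (long i = 0; i < iters; ++i)
            x += (double)i * 1e-9;
        return x;
    }

    int main(void) {
        const int instances = 4;            /* like launching the app 4 times */
        for (int n = 0; n < instances; ++n) {
            if (fork() == 0) {              /* each child runs independently */
                double x = busy_work(200000000L);
                printf("instance %d done (x=%.3f)\n", n, x);
                _exit(0);
            }
        }
        /* Total throughput scales with cores; per-instance latency does not. */
        for (int n = 0; n < instances; ++n)
            wait(NULL);
        return 0;
    }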

ECC stands for Error-Correcting Code. A memory module transmits data in 64-bit words. ECC memory modules incorporate extra circuitry to detect a single-bit error in a word and correct it, but they are not able to rectify two bit errors occurring in the same word. Non-ECC memory modules do not check at all; the system simply continues to work unless a bit error violates pre-defined processing rules. How often do single-bit errors occur nowadays? How damaging would a single-bit error be? Consider this quotation from Wikipedia in May 2011: “Recent tests give widely varying error rates with over 7 orders of magnitude difference, ranging from 10⁻¹⁰ to 10⁻¹⁷ errors/bit-hour, roughly one bit error per hour per gigabyte of memory to one bit error per century per gigabyte of memory.”
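
To make the correction mechanism concrete, here is a toy single-error-correcting Hamming(7,4) encoder and decoder; this is our own illustrative C sketch, not vendor circuitry, and real ECC DIMMs apply the same syndrome idea with a wider SECDED code across each 64-bit word:

    #include <stdio.h>

    /* Encode 4 data bits into a 7-bit Hamming codeword.
       Bit i-1 of the result holds codeword position i; positions 1, 2
       and 4 carry parity, positions 3, 5, 6 and 7 carry data. */
    static unsigned encode(unsigned d) {
        unsigned b3 = (d >> 3) & 1, b2 = (d >> 2) & 1,
                 b1 = (d >> 1) & 1, b0 = d & 1;
        unsigned p1 = b3 ^ b2 ^ b0;   /* checks positions 1,3,5,7 */
        unsigned p2 = b3 ^ b1 ^ b0;   /* checks positions 2,3,6,7 */
        unsigned p4 = b2 ^ b1 ^ b0;   /* checks positions 4,5,6,7 */
        return p1 | (p2 << 1) | (b3 << 2) | (p4 << 3)
                  | (b2 << 4) | (b1 << 5) | (b0 << 6);
    }

    /* The syndrome is the XOR of the positions of all set bits:
       0 for a clean word, otherwise the position of the flipped bit. */
    static unsigned syndrome(unsigned cw) {
        unsigned s = 0;
        for (unsigned pos = 1; pos <= 7; ++pos)
            if ((cw >> (pos - 1)) & 1)
                s ^= pos;
        return s;
    }

    int main(void) {
        unsigned cw = encode(0xB);     /* data bits 1011 */
        cw ^= 1u << 4;                 /* a stray flip at position 5 */
        unsigned s = syndrome(cw);
        if (s)
            cw ^= 1u << (s - 1);       /* locate and correct the bit */
        printf("flip at position %u corrected; syndrome now %u\n",
               s, syndrome(cw));
        return 0;
    }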

Hardware for GPU-Intensive Applications

The GPU has now been developed to the point of gaining the prefix GP, for General Purpose: GPGPU stands for General-Purpose computation on Graphics Processing Units. A GPU has many cores that can be used to accelerate a wide range of applications. According to GPGPU.org, a central resource for GPGPU news and information, developers who port their applications to the GPU often achieve speedups of orders of magnitude over optimized CPU implementations.

Many software applications have been updated to capitalize on this new potential of the GPU; CATIA 03, Ensight 04 and Solidworks 02 are examples. As a result, these applications are far more sensitive to GPU resources than to CPU resources. That is, to run them optimally, we should invest in the GPU rather than the CPU of a CEW. According to its own website, the new Abaqus product suite from SIMULIA, a Dassault Systèmes brand, leverages the GPU to run CAE simulations twice as fast as a traditional CPU.

Nvidia had released six cards in the new Quadro Fermi family by April 2011, in ascending order of power and cost: 400, 600, 2000, 4000, 5000 and 6000. According to Nvidia, Fermi delivers up to six times the tessellation performance of the previous family, Quadro FX. We shall equip our CEW with Fermi cards to achieve the optimum price/performance combination.

The potential contribution of the GPU to performance depends on another issue: CUDA compliance.

State of CUDA Developments

According to Wikipedia, CUDA (Compute Unified Device Architecture) is a parallel computing architecture developed by Nvidia. CUDA is the computing engine in Nvidia GPUs, accessible to software developers through variants of industry-standard programming languages. For example, programmers use C for CUDA (C with Nvidia extensions and certain restrictions), compiled through a PathScale Open64 C compiler, to code algorithms for execution on the GPU. (The latest stable version, 3.2, was released to software developers in September 2010.)
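
For a flavour of what C for CUDA looks like, here is a minimal vector-addition sketch in the canonical introductory style; the kernel-launch syntax and memory-copy calls are standard CUDA, while the program itself is our own illustration rather than production code:

    #include <cstdio>
    #include <cuda_runtime.h>

    /* Each GPU thread adds one pair of elements. */
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        float *ha = new float[n], *hb = new float[n], *hc = new float[n];
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        float *da, *db, *dc;
        cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);  /* 256-thread blocks */
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

        printf("c[0] = %f\n", hc[0]);   /* expect 3.0 */
        cudaFree(da); cudaFree(db); cudaFree(dc);
        delete[] ha; delete[] hb; delete[] hc;
        return 0;
    }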

The GPGPU website has a preview of an interview with John Humphrey of EM Photonics, a pioneer in GPU computing and developer of a CUDA-accelerated linear algebra library. Here is an extract from the preview: “CUDA allows for very direct expression of exactly how you want the GPU to perform a given unit of work. Ten years ago I was doing FPGA work, where the great promise was the automatic conversion of high level languages to hardware logic. Needless to say, the huge abstraction meant the result wasn’t good.”

The Quadro Fermi family implements CUDA compute capability 2.1, whereas Quadro FX implemented compute capability 1.3. The newer version provides significantly richer features. For example, Quadro FX did not support “floating point atomic additions on 32-bit words in shared memory”, whereas Fermi does (see the sketch after this list). Other notable improvements are:

  • Up to 512 CUDA cores and 3.0 billion transistors
  • Nvidia Parallel DataCache technology
  • Nvidia GigaThread engine
  • ECC memory support
  • Native support for Visual Studio
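
To show what that shared-memory atomic feature buys, here is an illustrative reduction kernel of our own. It compiles only for compute capability 2.0 or later (for example, nvcc -arch=sm_20), because atomicAdd on a 32-bit float in shared memory does not exist on the older 1.3 hardware:

    #include <cstdio>
    #include <cuda_runtime.h>

    /* Sums an array: each block accumulates into a shared float with
       atomicAdd (a compute capability 2.x feature for shared memory),
       then one thread folds the block total into the global result. */
    __global__ void blockSum(const float *in, float *out, int n) {
        __shared__ float acc;
        if (threadIdx.x == 0) acc = 0.0f;
        __syncthreads();

        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            atomicAdd(&acc, in[i]);         /* shared-memory float atomic */
        __syncthreads();

        if (threadIdx.x == 0)
            atomicAdd(out, acc);            /* global float atomic */
    }

    int main() {
        const int n = 4096;
        float *h = new float[n];
        for (int i = 0; i < n; ++i) h[i] = 1.0f;

        float *in, *out;
        cudaMalloc(&in, n * sizeof(float));
        cudaMalloc(&out, sizeof(float));
        cudaMemcpy(in, h, n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemset(out, 0, sizeof(float));

        blockSum<<<n / 256, 256>>>(in, out, n);

        float result;
        cudaMemcpy(&result, out, sizeof(float), cudaMemcpyDeviceToHost);
        printf("sum = %.0f\n", result);     /* expect 4096 */
        cudaFree(in); cudaFree(out); delete[] h;
        return 0;
    }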

State of Computer Hardware Developments

Abbreviations

  • HDD is Hard Disk Drive
  • SATA is Serial AT Attachment
  • SAS is Serial Attached SCSI
  • SSD is Solid State Disk
  • RAID is Redundant Array of Inexpensive Disks
  • NAND is flash memory built from “Not AND” logic gates

Bulk storage is an essential part of a CEW, both for real-time processing and for archiving and later retrieval. Hard disks with SATA interfaces keep growing in capacity and falling in price over time, but they are not getting faster or physically smaller. To get faster and smaller, we have to select hard disks with SAS interfaces, at a major compromise in storage capacity and hardware price.

RAID has been around for decades, providing redundancy, expanding a volume well beyond the confines of one physical hard disk, and expediting sequential reads and writes, in particular random writes. We can deploy SAS RAID to address the storage-capacity issue, but the hardware price climbs further.
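
The redundancy part is easy to illustrate. In a RAID 5 layout the parity block is simply the XOR of the corresponding data blocks, so any single lost block can be rebuilt by XOR-ing the survivors. The toy sketch below is our own illustration with made-up block contents, not a real controller implementation:

    #include <stdio.h>
    #include <string.h>

    #define NDATA 3                    /* data disks */
    #define BLOCK 8                    /* bytes per block (tiny, for show) */

    int main(void) {
        char disk[NDATA][BLOCK] = { "alpha..", "bravo..", "charlie" };
        char parity[BLOCK] = {0};

        /* Parity block = XOR of all data blocks (what RAID 5 stores). */
        for (int d = 0; d < NDATA; ++d)
            for (int i = 0; i < BLOCK; ++i)
                parity[i] ^= disk[d][i];

        /* Simulate losing disk 1, then rebuild it from the survivors. */
        char rebuilt[BLOCK];
        memcpy(rebuilt, parity, BLOCK);
        for (int d = 0; d < NDATA; ++d)
            if (d != 1)
                for (int i = 0; i < BLOCK; ++i)
                    rebuilt[i] ^= disk[d][i];

        printf("rebuilt disk 1: %.7s\n", rebuilt);   /* prints "bravo.." */
        return 0;
    }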

SSD has turned up recently as a bright star on the horizon. It has not replaced the HDD, because of its high price, the longevity limitations of NAND memory, and the immaturity of controller technology. However, it has recently found a place as a RAID cache, offering two benefits not achievable by other means. The first is a higher speed of random reads. The second is a low cost point when used in conjunction with SATA HDDs.
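
Why a small, fast cache in front of a big, slow array pays off is simple arithmetic. The sketch below uses purely illustrative figures of our own choosing, a 0.1 ms SSD read, a 10 ms HDD random read, and two assumed hit rates, to estimate the average random-read time:

    #include <stdio.h>

    int main(void) {
        /* Illustrative figures only, not measurements. */
        double hdd_ms = 10.0;   /* random read from a SATA HDD array */
        double ssd_ms = 0.1;    /* random read served by the SSD cache */
        double rates[] = { 0.10, 0.90 };   /* cold vs. warm cache hit rate */

        for (int k = 0; k < 2; ++k) {
            double avg = rates[k] * ssd_ms + (1.0 - rates[k]) * hdd_ms;
            printf("hit rate %.0f%%: average read %.2f ms (uncached %.1f ms)\n",
                   rates[k] * 100.0, avg, hdd_ms);
        }
        return 0;
    }

With the access locality typical of real workloads, hit rates sit near the warm end, which is why the cache approaches SSD speed at close to SATA cost.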

Intel has released Sandy Bridge CPUs and chipsets that have been stable and bug-free since March 2011. System computation performance is over 20% higher than that of the previous generation, Westmere. The top CPU model comes in four editions officially capable of over-clocking beyond 4 GHz, as long as CPU power consumption stays within the designed thermal limit, the TDP (Thermal Design Power). A six-core edition with official over-clocking support is due in the June 2011 timeframe.

Current State & Foreseeable Future

Semiconductor manufacturing technology has improved to 22 nanometres (22 × 10⁻⁹ m) this year, 2011, and is heading towards 18 nanometres in 2012. Smaller means more: we will get more cores and more power from a new CPU or GPU made on advancing nanotechnology. The current laboratory probe limit is 10⁻¹⁸ m, and this sets the headroom for semiconductor technologists.

While the GPU and CUDA are having a big impact on performance computing, the dominant CPU manufacturers are not resting on their laurels. They have started to integrate their own GPUs into the CPU. However, the level of integration is a far cry from the CUDA world, and integrated GPUs will not displace CUDA for design and engineering computing in the foreseeable future. This means the practice described above will remain the prevailing approach to accelerating CAD, CAE and the CEW.


Factors Propelling Growth of Indian Construction Equipment Segment

The long-term potential of India’s construction equipment market appears to be very significant. Let’s look at the factors that will make a difference…

If the numbers hold any truth, the Earthmoving and Construction Equipment market in India is expected to grow by 20 to 25 percent annually over the coming years, to reach 330,000 to 450,000 units sold in 2020, from current levels of about 76,000 units. That would mean a $16 billion to $21 billion market, up from today’s $3 billion. The sector is expected to remain dominated by backhoe loaders, but broad-based growth is expected across products, with every segment seeing double-digit growth. Thus, construction equipment companies in India have every reason to say cheers!
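
(A rough sanity check, assuming the growth compounds over the eight years to 2020: 76,000 × 1.20⁸ ≈ 327,000 units and 76,000 × 1.25⁸ ≈ 453,000 units, which is broadly the range quoted above.)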

Factors favouring the growth of this equipment segment

Here is a list of six factors that will propel the industry forward:

· Demand from end-user industries: Demand from end users will continue to rise as a result of increased use of this equipment in traditional end-user industries, including construction and mining.

· Higher adoption in traditional applications: Increased use of this equipment in traditional applications such as digging and soil loading, in order to speed up the work, will propel the segment’s growth.

· Demand from new applications: Demand for this equipment is also expected to grow in new segments, such as agriculture, which traditionally faced issues like lack of access.

· Growing urbanisation: Urbanisation will also drive demand for construction activity, and in turn for this equipment segment, to meet residential, commercial and infrastructure development needs.

· Increased affordability: New players entering the market will make competition stiffer, thereby making these machines more affordable. This trend should deepen the market to reach users who need the machines but previously had little access to them.

· Better availability of financing: Wider financing of this equipment will encourage users to opt for these machines, broadening their use.

Future challenges on the way

· Availability of skilled manpower: As the construction equipment industry grows, the need for trained operators and mechanics will increase proportionately, and meeting it is likely to be a challenge for construction equipment companies in India.

· Stiff competition: The emergence of new construction equipment players in the coming years will make the competition stiff.

· Need for innovative solutions: With growing awareness, end-users will demand world-class technology for better fuel efficiency, higher productivity and profitability; equipment manufacturers will thus have to come up with innovative solutions to meet customer expectations.

· Resale of used equipment will be difficult: Since the secondary market for used equipment is not fully developed in India, resale will be a challenge for construction equipment manufacturers in the country.

Despite these few challenges, the long-term potential of India’s construction equipment market appears very significant, thanks to the factors propelling the segment’s growth. With large construction activities underway, the industry is set for a boom in the coming years. Construction of highways, railways, bridges, ports, and residential and commercial buildings is on the cards for both the government and private players, and hence the construction equipment industry can remain bullish about its long-term prospects.

WiFi Vs. WiMax

Wi Fi Fo Fum, I think I smell the blood…oops wrong tale. This story doesn’t involve giants, but it does involve giant leaps forward in technology that will affect us all.

The other day I was watching two kids play. Each had a tin can up to their ear and they were speaking to each other on the ‘phone’. Talk about technological leaps. Yes, the string that I used as a kid to hook up this intricate communication system had disappeared, and they were now wireless!

When I was Batman back then, the string always kept me close enough to Robin so we could hear each other, even around the corner of a cinder block wall. Unrestricted by ‘the magic string’ these kids tended to drift out of range from time to time. Showing true genius, they engaged Billy’s little brother to position himself on middle ground, and he relayed wireless messages back and forth. They called him ‘tower’. I laughed.

It really is a reflection of a changing world. We’ve gone from HiFi to Wi-Fi, and next on the endless chain is WiMax. The transition from ‘High Fidelity’, which simply related to sound quality, to ‘Wireless Fidelity’ or Wi-Fi, took about thirty-eleven years. The transition to WiMax is already in play, yet most of us haven’t figured out what Wi-Fi is really all about.

According to the ‘Webopedia’, the term is promulgated by the Wi-Fi Alliance and is short for Wireless Fidelity, as I indicated above. What it means is that you can access the Internet from a laptop computer with the right stuff (a wireless card) in various locations, without the burden of a physical wire.

Hold it – Webopedia? Yikes! Yes, it’s real, and it defines and explains web ‘stuff’. I guess Babe Ruth probably thought that encyclopedias were on the bleeding edge, yet I wrote my 7th grade essay all about him using that standard, great source of knowledge. Makes you wonder what ‘pedia’ is next, doesn’t it?

It goes on to say that any products tested and approved as Wi-Fi certified (a registered trademark) by the Wi-Fi Alliance are certified as interoperable with each other, even if from different manufacturers.

That’s kind of like how Fords & Toyotas use the same gas to make them go, and their owners use the same ramps and highways to pick up milk or go to the cottage. Even Hudson Hornets used a leaded version of the same fuel.

An example where this wasn’t so well planned is access to the electricity grids in Europe as opposed to North America: the same plugs don’t work in both places.

Rather than making that mistake, the Alliance has created an accepted standard so that manufacturers create equipment, and the like, that can be used in a similar fashion to access the web. That means that your laptop, regardless of brand, will use the same ‘hot-spots’ to get access. Hot-spots are areas where the facility, like Starbucks or the hotel that owns the lobby, has put in the proper equipment to provide access from your wireless card to the great big cloud called the Internet. The wireless card is the gas for the Fords & Toyotas, and the hot-spot is the on ramp.

And therein lie both the beauty and the problem. The beauty is that I can access the web from Starbucks in Atlanta, as well as a hotel lobby in Vancouver. If you’ve ever seen someone doing the hippy-hippy shake with their computer in their hands, you’re probably witnessing the problem. Wi-Fi access is limited in both speed and distance. The twisting person was probably trying to get a more consistent signal in the ‘hot-spot’.

Enter WiMax. That’s not Max Smart and his wireless shoe communications, but it is the next generation of Wi-Fi. According to WiMaxxed.com it “will connect you to the Internet at faster speeds and from much longer ranges than current wireless technology allows.” They go on to say “WiMax promises up to a ten mile range without wires, and broadband speeds without cable or T1.”

The result – we are absolved from the penance of viewing way too many hippy-hippy shakes. Well, not so fast; don’t throw out your dancin’ shoes quite yet. It’s not on the Wal-Mart shelves for next Christmas, but there are a lot of indicators that it’s real, and it’s just around the corner.

First of all, it is an acronym for Worldwide Interoperability for Microwave Access, and it has actually been in the works for quite a while now. An article titled ‘FCC Move Could Boost WiMax’ states, “A number of vendors and carriers have announced products, testing, or support for the standard in the last month, including Intel, Nokia, AT&T, BellSouth, Sprint, and Motorola.” These companies aren’t akin to Duke’s Pool Room – these are the big boys.

The article continues to say, “Congress has been lobbied for months now to free more frequencies for wireless broadband.”

Alcatel states that WiMax will “bridge the digital divide by delivering broadband in low-density areas.” If you really study that statement, you can see where we are in the world today. Where governments once ensured that all residents were able to receive phone service in the Ma Bell days, that lingo is now being used in relation to broadband access to the Internet. May everybody have equal access is the refrain, but only if it’s high speed!

So instead of hot-spot hopping, WiMax will provide true wireless mobility. And there’s more. In an article by Al Senia of America’s Network, he states that “phone manufacturers such as Samsung and LG are expected to introduce Wi-Fi handsets compatible with this service by year’s end.”

O.K., so that’s VoIP, except it’s wireless VoIP in hot-spots. Next is WiMax, with wide-area wireless VoIP.

To be sure, there are quality and security issues to be resolved, whether that’s for surfing, voice applications, or a gazillion other Internet applications, before wider market acceptance is achieved. However, I attended a recent presentation by the Gartner Group, where the presenter stated emphatically that security is not an ‘if’ but rather ‘how much’. His meaning was clearly that the level of security required for business applications will be achieved, and that commercial providers will find the economic model that works. Ditto for quality.

We used to trade information at the speed of the Pony Express, when the air was just filled with farm smells. Now that the air is filled with zeros and ones, information is transferred at speeds faster than Clark Kent. If we’re to remain on even competitive ground, we had better pay attention to these applications on the horizon. We have to assume that our competitors are paying attention.

It took a century to get from Alexander Bell’s basic invention to wireless phones. In the last decade alone, however, the Internet has met with wide acceptance by business, VoIP has become more common, Wi-Fi and Wi-Fi VoIP are now a reality, and WiMax with wide-area wireless VoIP is very nearly on the market.

In the past, I’ve often used an example of future possibilities by alluding to a chip in our eyebrows that can transmit holographic images around the globe. That’s not even that far-fetched anymore, so I guess I’ll have to come up with a better example. I’m going to have to track down the Jetsons and Star Trek reruns.

“Grandpa, why is the sky blue?” That’s always been a puzzler. What on earth are you going to say when the question is “Grandpa, why is the sky zeros and ones?” That’s when you ask yourself, “Wi me?”

That begs another question. Where do all the zeros and ones go when they’re used up? Is there a big Z&O dump somewhere? Or should that be backwards – OZ. Oh, that Wizard, I knew he was up to something.