Thursday, June 30, 2011

Debug a microcontroller-to-FPGA interface from the FPGA side

Microcontrollers and FPGAs often work together in embedded systems. As more functions move into the FPGA, however, debugging the interface between the two devices becomes more difficult. The traditional approach debugs from the microcontroller side and relies on serial-port printouts. It adds overhead and may cause timing problems. Furthermore, because of operating-system multitasking, it cannot guarantee uninterrupted and exclusive access to certain addresses. Thus, a serial-port printout doesn't accurately describe the activity on the microcontroller/FPGA interface.

Instead, you can approach the problem from the FPGA side, using a JTAG (Joint Test Action Group) interface as a communication port. This approach uses the FPGA's internal logic to capture the read/write transactions on the microcontroller/FPGA interface. The method is nonintrusive because the capture circuit sits between the microcontroller and the FPGA's functional logic and monitors the data without interfering with it. It stores the captured transactions in the FPGA's RAM resources in real time, and you can transfer the data to a PC through the JTAG port's download cable.

The debugging tool comprises the data-capture circuit, the JTAG communication circuit, and the GUI (graphical user interface). The data-capture circuit is written in standard HDL (hardware-description language) and instantiates a FIFO (first-in/first-out) buffer in the FPGA. Whenever you read or write a register, the debugging tool records the corresponding address and data values on the bus and stores them in the FIFO buffer. You can retrieve the data on the PC through the JTAG download cable (Listing 1).

Because the FPGA has limited on-chip RAM resources, you must keep the FIFO buffer shallow. To use it efficiently, the design includes filter and trigger circuits. With inclusive address filtering, the circuit monitors only several discontinuous spans of addresses instead of the whole address space. Exclusive address filters can then remove several smaller spans from the inclusive spans, enabling finer control of the filter settings (Listing 2).

With transaction triggering, capture starts when you read from or write to a certain address, and you can add specific data values to the triggering condition (Listing 3). You can dynamically reconfigure the address-filter and transaction-trigger settings through the vendor-supplied, customizable JTAG communication circuit without recompiling the FPGA design (Figure 1). That circuit has two interfaces. One is written in HDL, forms a customized JTAG chain, and communicates with the user logic (Listings 1, 2, and 3). The other is accessible through programming interfaces on the PC and communicates with the user program or GUI (Listing 4).
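To make the filter-and-trigger behavior concrete, here is a rough C model of the capture decision. It is only a software sketch: the span boundaries, trigger address, and trigger data value are invented for illustration, and the article's actual HDL listings are not reproduced here.

```c
/* Minimal C model of the capture decision described above. An access is
 * recorded only if capture has been armed by the trigger, its address lies
 * inside an inclusive span, and it lies outside every exclusive span.
 * All span boundaries, the trigger address, and the trigger data value are
 * invented for this sketch. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef struct { uint32_t lo, hi; } span_t;

static const span_t incl[] = { {0x0000, 0x00FF}, {0x1000, 0x10FF} };  /* monitor these  */
static const span_t excl[] = { {0x0040, 0x004F} };                    /* but skip these */

static bool in_spans(uint32_t addr, const span_t *s, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (addr >= s[i].lo && addr <= s[i].hi)
            return true;
    return false;
}

static bool triggered = false;   /* set once the trigger condition is seen */

static bool capture_this_access(uint32_t addr, uint32_t data, bool is_write)
{
    /* Trigger: start capturing on a write of 0xDEAD to address 0x1000. */
    if (!triggered && is_write && addr == 0x1000 && data == 0xDEAD)
        triggered = true;

    return triggered &&
           in_spans(addr, incl, sizeof incl / sizeof incl[0]) &&
           !in_spans(addr, excl, sizeof excl / sizeof excl[0]);
}

int main(void)
{
    printf("%d\n", capture_this_access(0x0050, 0x0001, true));  /* 0: not yet triggered */
    printf("%d\n", capture_this_access(0x1000, 0xDEAD, true));  /* 1: trigger hit, in span */
    printf("%d\n", capture_this_access(0x0045, 0x0002, false)); /* 0: excluded span     */
    return 0;
}
```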



The FPGA-based circuit facilitates write and read access from the PC to the FPGA logic, and it promotes the JTAG interface to a general-purpose communication port on the FPGA. FPGA manufacturers call this circuit by different names: Actel calls it UJTAG (user JTAG), Altera calls it Virtual JTAG, Lattice Semiconductor calls it ORCAstra, and Xilinx calls it BScan (references 1 through 4).

The GUI for this circuit uses Tcl/Tk (tool-command-language toolkit). FPGA manufacturers provide vendor-specific APIs (application-programming interfaces) in Tcl for the PC side of the JTAG-communication circuit. The APIs include basic functions, such as JTAG-chain initialization, selection, and data reading and writing. With the data-read function, you can check the capture status and retrieve the transaction data from the FIFO buffer. With the data-write function, you can send the filter and trigger configuration data to the capture circuit in the FPGA (Listing 4). The JTAG-based debugging method provides dynamic visibility into, and control of, the microcontroller-to-FPGA interface and the FPGA's internal logic without the need to recompile and download FPGA code.


Tuesday, June 28, 2011

Use best practices to deploy IPv6 over broadband access

After more than a decade of forewarning, the IPv4-to-IPv6 transition has finally reached critical mass. On Feb 1, 2011, the IANA allocated the last freely available blocks of IPv4 addresses. At the same time, the number of users and endpoints requiring Internet access, and thus a unique IP address, continues to explode. With exponential growth in global broadband deployments, next-generation wireless rollouts on the horizon, and a fast-growing base of smartphones in the field, the industry is predicting an increase of 5 billion unique endpoints by 2015. In the meantime, service providers are struggling to prepare their networks for the influx of IPv6 addresses.

While the Internet is increasingly rich with IPv6 content and services (Google already supports IPv6 on its search, news, docs, maps, and YouTube), IPv4 won't just "go away" as IPv6 comes on board. This creates a challenging situation for service providers, which must upgrade their network infrastructure to handle IPv4 and IPv6 co-existence.

Network cores are well equipped to handle both IPv4 and IPv6, but broadband access networks are not. IPv4 and IPv6 co-existence puts tremendous stress on the underlying network systems, which can introduce latency, degrade network responsiveness, and compromise service-level agreements. The biggest transition concern is the impact on customers: will introducing IPv6 endpoints, forwarding tables, and services affect connectivity speed, service quality, and network reliability?

IPv6 Solutions for Broadband Access

An abrupt transition of the legacy IPv4 infrastructure to IPv6 is not practical because most Internet services are still based on IPv4 and many customers are still running operating systems that do not fully support IPv6. Service providers must support both IPv4 and IPv6 endpoints and services in order to guarantee the quality of service (QoS) defined in their service level agreements (SLAs).

There are different methods that can be used to achieve this goal across broadband access networks including:
* Translation
* Tunneling
* Dual-Stack Network

Translation

The easiest way to conserve the depleting IPv4 address space is to use translation, so that the outward-facing interface uses a public address while the private network uses IP addresses that are not routed on the Internet. However, translation's known performance and scalability issues compel most service providers to deploy either tunneling or dual-stack transition mechanisms in broadband access networks.


Wednesday, June 22, 2011

Bring big system features to small RTOS devices with downloadable app modules

The embedded systems used in consumer, medical and industrial applications often require real-time response to provide an effective user experience.

Whether it is a smartphone's baseband radio communications, ultrasound image processing, or production-line video inspection, these and many other such systems need to process inputs quickly and return some information or action to the user, whether that user is a human or another machine.

These systems run on low-power processors and often do all of their processing with relatively small amounts of memory—a combination of requirements that often leads developers to use a real-time operating system (RTOS). The RTOS manages application tasks or threads, handles interrupts, and provides a means of interthread communication and synchronization.

RTOSes come in all sizes and flavors, from the large, like Wind River's VxWorks, to the super-compact, like Express Logic's ThreadX. Robust RTOSes offer many features adapted from desktop systems that are typically not available in compact RTOSes, because those features require more code, which enlarges the memory footprint and slows real-time response.

In contrast, the compact RTOS generally operates as a library of services, called by the application through direct function calls. Underlying these RTOS services is an infrastructure (Figure 1, below) of scheduling and communications facilities that support these functions.



Figure 1. The compact RTOS generally operates as a library of services, called by the application through direct function calls.

Most "small footprint" RTOSes employ an architecture in which the application code is directly linked with the RTOS services it uses, forming a single executable image (Figure 2 below).

The application explicitly references the services it needs, using function calls with an API defined by the RTOS. These service functions are linked with the application from the RTOS library. The result is a single executable image, usually in the .elf format.

For development, this image then is downloaded to target memory and run or, in production, it is flashed into ROM and run at device power-on.

This "monolithic" approach is efficient in both time and space, but it lacks flexibility. Any changes to the application or RTOS require re-linking and a new download/flash of the entire image. While this is routine during development, after production it can present some limitations.



Figure 2. Most "small footprint" RTOSes employ an architecture in which the application code is directly linked with the RTOS services it uses, forming a single executable image.

In contrast, desktop operating systems such as Windows and Linux, and larger RTOSes, such as VxWorks and QNX, have a two-piece "OS/Application" architecture. In this architecture, a resident kernel, containing all the OS services available to applications or needed by other services within the kernel, is linked into a distinct executable.

This kernel executable boots the system and runs continuously, providing a foundation for applications which dynamically load and run. Usually, virtual memory provides demand paging to and from mass storage on desktop systems or multi-user separation in embedded systems.

This approach is used in mobile devices such as Apple’s iPhone or iPad, where new "Apps" can be downloaded over the wireless network. The OS runs in the device and handles the user interface, which enables selection of any of the downloaded “Apps.”

The selected App then runs along with the OS on the CPU. Similarly, large RTOS-based systems segregate applications from the RTOS kernel, usually in distinct memory spaces, within a virtual memory environment.

A nice feature of the large RTOSes, shared by their desktop OS cousins, is the ability to dynamically download applications onto a running system. In such partitioned architectures, the kernel runs as one entity and the applications run independently, but they make use of OS services to access and use hardware resources.

Even within embedded systems, such downloading and the dynamic addition or change of applications is found where big RTOSes operate in telecommunications infrastructure and defense/aerospace applications. This capability enables a high degree of modularity and field update of running systems.


Monday, June 20, 2011

Software and hardware challenges due to the dynamic raw NAND market

NAND flash is the dominant type of non-volatile memory technology used today. Developers commonly face difficulties developing and maintaining firmware, middleware and hardware IP for interfacing with raw NAND devices. After reviewing the history and differentiated features of various memory devices, we’ll take a detailed look at common obstacles to NAND device development and maintenance, particularly for embedded and system-on-chip (SoC) developers, and provide some recommendations for handling these challenges.

Background on non-volatile memory technologies
There are many different non-volatile memory technologies in use today.

EEPROM
Electrically erasable programmable read-only memory, or EEPROM, is one of the oldest forms of technology still in use for user-modifiable, non-volatile memories. In modern usage, EEPROM means any non-volatile memory where individual bytes can be read, erased, or written independently of all other bytes in the memory device. This capability requires more chip area, as each memory cell requires its own read, write, and erase transistor. As a result, the size of EEPROM devices is small (64 Kbytes or less).

EEPROM devices are typically wrapped in a low-pin-count serial interface, such as I2C or SPI. Parallel interfaces are now uncommon because of their larger pin count, footprint, and layout costs. Like almost all available non-volatile memory types, EEPROMs use floating-gate technology in a complementary metal-oxide-semiconductor (CMOS) process.

Flash
Flash memory is a modified form of EEPROM memory in which some operations happen on blocks of memory instead of on individual bytes. This allows higher densities to be achieved, as much of the circuitry surrounding each memory cell is removed and placed around entire blocks of memory cells.

There are two types of flash memory arrays in the marketplace — NOR flash and NAND flash. Though these names are derived from the internal organization and connection of the memory cells, the types have come to signify a particular external interface as well. Both types of memory use floating gates as the storage mechanism, though the operations used to erase and write the cells may be different.

NOR Flash
NOR flash was the first version of flash memory. Until about 2005, it was the most popular flash memory type (measured by market revenue).[1] In NOR flash, bytes of memory can be read or written individually, but erasures happen over a large block of bytes. Because of their ability to read and write individual bytes, NOR flash devices aren’t suitable for use with block error correction. Therefore, NOR memory must be robust to errors.

The capability to read individual bytes also means it can act as a random access memory (RAM), and NOR flash devices will typically employ an asynchronous parallel memory interface with separate address and data buses. This allows NOR flash devices to be used for storing code that can be directly executed by a processor. NOR flash can also be found wrapped in serial interfaces, where they act similar to SPI EEPROM implementations for reading and writing.

NAND Flash
The organization and interface of NOR flash devices place limitations on how they can scale with process shrinks. With the goal of replacing spinning hard disk drives, the inventor of NOR flash later created NAND flash, aiming to sacrifice some of the speed offered by NOR flash to gain compactness and a lower cost per byte [2]. This goal has largely been met in recent years, with NAND sizes increasing to multiple gigabytes per die while NOR sizes have stagnated at around 128 MB. This has come at a cost, as will be discussed later.

Raw NAND memory is organized into blocks, where each block is further divided into pages.




Figure 1: MT29F2G08AACWP NAND memory organization (courtesy Micron Inc.)


In NAND memories, read and write operations happen on a per-page basis, but erase operations happen per block. Because reads and writes operate on whole pages, block error-correction algorithms can be applied to the data. As a result, NAND manufacturers have built in spare bytes of memory for each page to hold this and other metadata. NOR flash doesn't have such spare bytes.

Also in contrast to NOR flash, the NAND flash interface isn’t directly addressable, and code cannot be executed from it. The NAND flash has a single bus for sending command and address information as well as for sending and receiving memory contents. Therefore, reading a NAND device requires a software device driver.

NAND flash is the underlying memory type for USB memory sticks, memory cards (e.g. SD cards and compact flash cards) and solid state hard drives. In all cases, the raw NAND flash devices are coupled with a controller that translates between the defined interface (e.g. USB, SD and SATA) and the NAND’s own interface. In addition, these controllers are responsible for handling a number of important tasks for maintaining the reliability of the NAND memory array.

Raw NAND issues and requirements
Let’s take a detailed look at the issues and challenges presented by incorporating raw NAND devices into an embedded system or SoC.

Errors and error correction
Despite being based on the same underlying floating gate technology, NAND flash has scaled in size quickly since overtaking NOR flash. This has come at a cost of introducing errors into the memory array.

To increase density, NAND producers have resorted to two main techniques. One is the standard process node and lithography shrinks, making each memory cell and the associated circuitry smaller. The other has been to store more than one bit per cell. Early NAND devices could store one of two states in a memory cell, depending on the amount of charge stored on the floating gate. Now, raw NAND comes in three flavors: single-level cell (SLC), multi-level cell (MLC) and tri-level cell (TLC). These differ in the number of charge levels possibly used in each cell, which corresponds to the number of bits stored in each cell. SLC, the original 2 levels per cell, stores 1 bit of information per cell. MLC uses 4 levels and stores 2 bits, and TLC uses 8 levels and stores 3 bits.

While reducing silicon feature sizes and storing more bits per cell lowers the cost of NAND flash and allows higher density, it increases the bit error rate (BER). Overcoming the increasing noisiness of this storage medium requires larger and larger error-correcting codes (ECCs). An ECC is redundant data added to the original data; in the event of errors, the combined data allows the original data to be recovered, and the number of errors that can be corrected depends on the algorithm used. For example, the latest SLC NANDs on the market require 4 or 8 bits of ECC per 512 bytes, while MLC NAND requires more than 16 bits of ECC per 512 bytes. Just four years ago, SLC NANDs required only 1 bit of ECC, and the first MLC NANDs required only 4 bits.




Figure 2: Device issues versus process node shrinks (courtesy Micron)


In principle, any ECC algorithm can be used, as long as the encoder and decoder match. The popular algorithms used for NAND ECC are:

* Hamming Code: For 1-bit correction [3]
* Reed Solomon: For up to 4 bits of correction. This is less common [4].
* BCH: For 4 or more bits of correction [5].


Extra memory (called the "spare memory area" or "spare bytes region") is provided at the end of each page in NAND to store ECC. This area is similar to the main page and is susceptible to the same errors. For the present explanation, assume that the page size is 2,048 bytes, the ECC requirements are 4 bits per 512 bytes and the ECC algorithm generates 16 bytes of redundant data per 512 bytes. For a 2,048-byte page, 64 bytes of redundant data will be generated. For example, in current Texas Instruments (TI) embedded processors, the ECC data is generated for every 512 bytes, and the spare bytes area will be filled with the ECC redundant data. As ECC requirements have gone up, the size of the spare regions provided by the NAND manufacturers have increased as well. 
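A quick way to sanity-check a spare-area budget is to run the same arithmetic in code. The sketch below uses the example numbers from the paragraph above; the sector size and ECC bytes per sector are assumptions that, in practice, come from the NAND datasheet and the ECC engine actually used.

```c
/* Spare-area budget under the example assumptions above: a 2,048-byte page
 * split into 512-byte sectors, with 16 bytes of ECC generated per sector.
 * Real values come from the NAND datasheet and the controller's ECC engine. */
#include <stdio.h>

int main(void)
{
    unsigned page_size      = 2048;  /* main-area bytes per page            */
    unsigned sector_size    = 512;   /* bytes covered by one ECC codeword   */
    unsigned ecc_per_sector = 16;    /* redundant bytes per codeword        */

    unsigned sectors   = page_size / sector_size;   /* 4                    */
    unsigned ecc_bytes = sectors * ecc_per_sector;  /* 64                   */

    printf("%u sectors per page, %u ECC bytes needed in the spare area\n",
           sectors, ecc_bytes);
    return 0;
}
```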


The manufacturers of NAND devices specify the data retention and the write/erase endurance cycles under the assumption of the specified ECC requirements being met. When insufficient ECC is used, the device’s usable lifetime is likely to be severely reduced. If more errors are detected than can be corrected, data will be unrecoverable.

Geometry detection
Before raw NAND operations can begin, the first step is to determine the NAND geometry and parameters. The following list is the minimum set of NAND parameters needed by a bootloader or other software layer to determine NAND geometry:

* 8-bit or 16-bit data width
* Page size
* Number of pages per block (block size)
* Number of address cycles (usually five in current NANDs)


Raw NAND devices provide various methods for determining their geometry at run time:
4th byte ID: All raw NANDs support a READ ID operation (command 0x90 at address 0x00) that returns 5 bytes of identifier code. The first and second bytes (counting from one) are the manufacturer and device IDs, respectively. The fourth byte (again counting from one) encodes the NAND parameters discussed above, which can be used by the ROM bootloader.

This 4th byte can be used to determine raw NAND geometry, yet its interpretation changes from manufacturer to manufacturer and between generations of raw NANDs. There are two noteworthy interpretations. The first is a format used by Toshiba, Fujitsu, Renesas, Numonyx, STMicroelectronics, National, and Hynix, with certain bits representing the page size, data-bus width, spare-bytes size, and number of pages per block. The second is a format particular to the latest Samsung NANDs, holding similar information to the first but with different bit combinations representing different values. Because the 4th ID byte format isn't standardized in any way, its use for parameter detection isn't reliable.
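When the format for a specific part is known, a bootloader can decode the 4th ID byte along these lines. The bit layout below follows one common legacy interpretation (page size, spare size, block size, and bus width) and is purely illustrative; as noted above, the encoding is not standardized, so the fields must be verified against the device's datasheet.

```c
/* Hedged sketch of decoding the 4th READ ID byte. The bit layout shown is one
 * common legacy interpretation and is NOT universal; check the datasheet of
 * the specific NAND part before relying on any of these fields. */
#include <stdint.h>
#include <stdio.h>

struct nand_geometry {
    unsigned page_size;      /* bytes                         */
    unsigned spare_per_512;  /* spare bytes per 512 B of page */
    unsigned block_size;     /* bytes                         */
    unsigned bus_width;      /* 8 or 16                       */
};

static struct nand_geometry decode_id_byte4(uint8_t id4)
{
    struct nand_geometry g;
    g.page_size     = 1024u << (id4 & 0x03);                 /* bits 1:0 */
    g.spare_per_512 = 8u << ((id4 >> 2) & 0x01);             /* bit  2   */
    g.block_size    = (64u * 1024u) << ((id4 >> 4) & 0x03);  /* bits 5:4 */
    g.bus_width     = (id4 & 0x40) ? 16 : 8;                 /* bit  6   */
    return g;
}

int main(void)
{
    /* Example value only; a real driver issues READ ID (0x90, address 0x00)
       and passes in the 4th returned byte. */
    struct nand_geometry g = decode_id_byte4(0x15);
    printf("page %u B, block %u B, %u spare/512 B, x%u bus\n",
           g.page_size, g.block_size, g.spare_per_512, g.bus_width);
    return 0;
}
```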

ONFI: Many NAND manufacturers, including Hynix, Micron, STMicroelectronics, Spansion and Intel, have joined hands to simplify NAND flash integration and offer Open NAND Flash Interface (ONFI)-compliant NANDs. ONFI offers a standard approach to reading NAND parameters.

Other concerns
The physical connection between the embedded processor and the raw NAND device is also of concern. NAND devices can operate at either 3.3V or 1.8V, so it’s important to purchase NANDs with compatible voltage levels. It should be pointed out that 1.8V NAND devices are often specified with worse performance than 3.3V equivalent parts.

Another aspect that must be considered is whether asynchronous or synchronous NAND will be used. Historically, NAND interfaces were asynchronous; the synchronous interface was introduced with the ONFI 2.0 specification to reach higher data-transfer rates by using clock-synchronized data movement with DDR signaling. This type of interface may be common in SSD drives but isn't common in the typical embedded processor or SoC.


Tuesday, June 14, 2011

The 'internet of things' is driving demand for mixed signal MCUs

There is increased activity around the 'internet of things': the ability to create networks of small devices that monitor and control our surroundings to create a sort of 'augmented reality'.

A recent development is NXP's intention to open-source its JenNET IP protocol stack, which uses the IEEE 802.15.4 MAC and PHY as its underlying platform and employs the IPv6 protocol to extend the network's addressable nodes to what is often termed 'effectively limitless'. It is this potential to give any electronic device its own IP-addressable profile that will turn the 'internet of things' concept into reality.

However, in addition to uniquely identifying these ‘things’, it follows that the ‘things’ should do something useful and, increasingly, the application that is most often cited is monitoring and control. Consequently, data gathering using some form of ‘smart’ sensor is expected to constitute a large part of activity for the ‘internet of things’.

A market report by analyst IDC and cited by Intel states that by 2015 there could be 15 billion devices with an embedded internet connection, more than two for every single person on the planet today. In reality, as smart sensor applications flourish, the number of connections could grow beyond this figure rapidly, and that will be enabled in large part by the falling cost of developing and deploying connected devices. A major element of that cost will be the embedded intelligence and it is here that many IDMs are focusing their attention, in developing low power, low cost MCUs that meet the commercial and technical requirements of this emerging application space.

Mixed-signal MCUs that also integrate wireless connectivity are already available, and they will likely become more prolific in the future. However, for many applications, integrating the wireless connectivity may be less appealing than a two-chip solution, at least while the battle over which wireless protocol will prevail still rages. For this reason, perhaps, there is more activity around developing ultra-low-power MCUs that focus on interfacing to ever-smarter sensors.

Marking its entry into the MCU market, ON Semiconductor recently introduced its first mixed-signal MCU, which focuses on applications that demand precision as well as low power. ON Semiconductor recently acquired Sanyo Semiconductor and, with it, a portfolio of 8- and 16-bit MCUs. For its first in-house development, however, ON Semi chose the ARM Cortex-M3 32-bit core, which it has married with mixed-signal elements to create the Q32M210. The company claims the device has been developed to target portable sensing applications that require high accuracy, predictable operation, and the ever-present power efficiency.

ON Semi is more accustomed to developing custom ASICs than its own standard products; however, through a number of other acquisitions it believes it has accrued the expertise necessary to address the needs of 'niche' applications, where precision is valued. It is the company's experience in developing niche mixed-signal products that forms its credentials, not least in the development of hearing aids that use highly accurate ADCs and a bespoke DSP technology.

The analogue front-end (AFE) used in the Q32M210 features dual 16-bit ADCs and configurable op-amps, which result in a true ENOB of 16 bits across the power-supply range. This is enabled, in part, by an uncommitted charge pump that can be used to extend the operational lifetime of the battery supply. ON Semi claims the charge pump can deliver a consistent 3.6V to the AFE even when the battery supply has dropped to 1.8V, which could significantly extend the usable lifetime of any device powered by the Q32M210.

The additional peripherals provide a USB interface, as well as LED/LCD drivers and push-button interfaces. Coupled with the programmable sensor interface, this positions the device as a system-on-chip solution for a range of applications and specifically portable medical devices, where its accuracy will be valued.

The AFE used in the Q32M210 is clearly intended to differentiate it from the competition, in terms of both accuracy and power consumption. However, ON Semi isn't the only device manufacturer to acknowledge the importance of mixed-signal performance.


Wednesday, June 8, 2011

Piezoelectric fans and their application in electronics cooling

Piezoelectric fans seem to represent an example of research and development that has culminated in a product that is deceptively simple. Although piezoelectric technology is capable of producing rotary motion, the fans operate quite differently from rotary fans, as they generate airflow with vibrating cantilevers instead of spinning blades.

Piezoelectric, as derived from Greek root words, means pressure and electricity. There are certain substances, both naturally occurring and man-made, that will produce an electric charge from a change in dimension and vice-versa. Such a device is known as a piezoelectric transducer (PZT), which is the prime mover of a piezoelectric fan. When electric power, such as an AC voltage at 60 Hz, is applied, it causes the PZT to flex back and forth, also at 60 Hz.



The magnitude of this motion is very tiny, so to amplify it, a flexible shim or cantilever, such as a sheet of Mylar, is attached and tuned to resonate at the electrical input frequency. Since piezoelectric fans must vibrate, they must use a pulsating or alternating current (AC) power source. Standard 120 V, 60 Hz electricity, just as it is delivered from the power company, is ideal for this application, since it requires no conversion.

[If direct current (DC), such as in battery-operated devices, is the power source, then an inverter circuit must be employed to produce an AC output. An inverter may be embodied in a small circuit board and is commercially available with frequency ranges from 50 to 450 Hz.]

Driving the fan at resonance minimizes the power consumption of the fan while providing maximum tip deflection. The cantilever is tuned to resonate at a particular frequency by adjusting its length or thickness. The PZT itself also has a resonant frequency, so the simplistic concept of adjusting only the cantilever dimensions to suit any frequency may still not yield optimum performance. (Conceivably, tuning the electrical input frequency to match existing cantilever dimensions may work, though with the same caveat that the resonant frequencies of all the components must match, within reason.)

Applications for piezoelectric fans are just in their infancy and could really thrive through the imagination of designers. This article, which originally appeared in the April 2011 issue of Qpedia (published by Advanced Thermal Solutions, Inc. and used with permission here), explores the principles, construction, implementation, and installation of piezoelectric fans.


Tuesday, June 7, 2011

Benchmark automotive network applications

After the first hardware for MOST25 technology was established, release recommendations were introduced. This method was used to investigate and validate new products against the requirements of automotive applications. To provide a common basis for this work, an application recommendation was written and released. This approach also realizes some of the principles of robustness validation. Within the MOST community the approach is well proven, and it is mandatory not only for the German car manufacturers but for others as well.

Benchmarking
After gaining experience with different products from diverse manufacturers, the concept of benchmarking was introduced. The goal of this method is to compare products of similar functionality against the best performance, where best performance is defined by the requirements of the automotive application. The challenge of this approach is to ensure that all relevant aspects are taken into account. The relationship between these different limits is shown below.




Saturday, June 4, 2011

Combining FPGAs & Atom x86 CPUs for SBC design flexibility

Field Programmable Gate Array (FPGA) technology has been a useful design resource for quite some time and continues to be a mainstay because it delivers many of the same benefits as x86 processor architectures.

These common advantages include multifunctionality, a healthy and broad-based ecosystem, and a proven installed base of supported applications. Combining x86 processor boards with FPGA-controlled I/Os extends these benefits even further, allowing dedicated I/Os to support a wider range of application requirements.

Employing next-generation x86 processors with FPGAs on a single hardware platform makes it possible to eliminate chipsets, so that different application areas can be served by the same platform simply by exchanging IP cores.

New x86-based embedded computing platforms combined with FPGAs enable a new realm of applications – providing highly adaptable feature options for designs that have previously been restricted due to lack of interface or I/O support.

By understanding the collective advantages of this approach, designers can reduce Bill of Material (BOM) costs and maintain long-term availability with legacy interfaces and dedicated hardware-based I/O. Further, legacy systems now have a bridge to tap into the latest processor enhancements such as graphics media acceleration, hyperthreading and virtualization for greater success in matching exacting requirements.

This is a significant advancement in bridging newer technologies with older systems implemented in military, industrial automation and manufacturing environments.

Blending x86 and FPGAs for adaptable design options


Most x86 architecture designs are paired with a chipset, usually a two-piece component with a specific set of integrated features. Ethernet, memory control, audio, graphics, and a defined set of input/output interfaces such as PCI, PCI Express, SATA, and LPC are routinely integrated options.

However, many of these chipsets are moving away from the legacy interconnects (e.g., IDE and PCI) commonly found in deeply established environments such as military, industrial automation, or manufacturing systems for safety and operations.

As a result, these industries have not been able to take advantage of processor advancements and subsequent improvements in power and performance.

The availability of new x86 processors in combination with an FPGA presents an entirely new design approach that virtually removes the limitations imposed by a predetermined feature set. These capabilities distinguish the Intel Atom E6x5C processor series as a milestone in x86 technologies and a departure from using a chipset with a fixed definition.

Instead the Intel Atom E6x5 processor is combined with a flexible and programmable Altera FPGA on a single compact multi-chip module. By incorporating PCI Express rather than the Intel-specific Front Side Bus, the FPGA is connected directly to the processor rather than to a dedicated chipset, resulting in maximum flexibility and long-term design security.

Designers further have ready access to increased performance integrated into smaller form factors that offer very low thermal design power (TDP).

Because the FPGA can be programmed to support modern as well as legacy interfaces, OEMs now have a workable migration path from non-x86 to x86 architectures – enabling slower moving technology-based market applications to progress to next-generation processing technologies.

Cementing this approach as an appealing long-term design solution, Loring Wirbel of FPGA Gurus estimates that the CAGR for FPGAs will continue at a strong 8.6 percent, which would put the FPGA market at US$7.5 billion worldwide by 2015.


Wednesday, June 1, 2011

Choosing the right OS for your medical device

Medical-device manufacturers understand the importance of the operating system (OS). In fact, contrary to common practice in the world of embedded systems, they often select the OS even before they choose the board. According to VDC Research, for example, in 2010, 36.4% of medical device projects chose the OS first, compared to 20.8% of telecommunications projects, and just 9.3% of transportation projects.

This anomaly underlines just how much medical devices depend on their OSs. It does not, however, help with OS selection, which is made more difficult thanks to constant innovation and development that combine to present a bewildering line-up of possibilities: Android, QNX Neutrino RTOS, myriad Linux flavors, Windows CE, and roll-your-own, to name just a few.

Of course, no serious engineer would formulate the question of OS selection as “which OS?” but rather, “what does the project need from its OS?” The answers to this question lead to a short list of viable candidates.

Though these answers will be unique to every project, we can make a few assumptions. The OS must support the project’s business requirements; it must support the device’s regulatory compliance requirements; and it must possess whatever characteristics the device requires of it, starting with, in most cases, dependability.

This article looks at OS requirements and characteristics; the choice between a general-purpose OS and a real-time OS (RTOS); RTOS architectures; key RTOS characteristics; protection against priority inversion; partitions; and monitoring, stopping, and restarting processes. (For example, in Figure 1, we can see how the knotty problem of priority inversion is prevented by priority inheritance.)


Figure 1: Priority inheritance prevents priority inversion
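Figure 1 illustrates priority inheritance inside the RTOS itself; the same idea can be sketched with POSIX threads, which several of the OSs mentioned above expose. In the minimal example below, a mutex created with the PTHREAD_PRIO_INHERIT protocol temporarily boosts whichever thread holds it to the priority of the highest-priority waiter. (Real thread priorities would also need to be set through the scheduling attributes, which is omitted here for brevity.)

```c
/* Minimal POSIX sketch of the idea in Figure 1: a mutex configured with
 * PTHREAD_PRIO_INHERIT boosts the priority of the thread holding it to that
 * of the highest-priority waiter, so a medium-priority thread cannot starve
 * the lock holder. Error handling and real priority setup are omitted. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t shared_lock;

static void *worker(void *name)
{
    pthread_mutex_lock(&shared_lock);
    printf("%s holds the resource\n", (const char *)name);
    pthread_mutex_unlock(&shared_lock);
    return NULL;
}

int main(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    /* Request priority inheritance on this mutex. */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&shared_lock, &attr);

    pthread_t low, high;
    pthread_create(&low,  NULL, worker, "low-priority thread");
    pthread_create(&high, NULL, worker, "high-priority thread");
    pthread_join(low,  NULL);
    pthread_join(high, NULL);

    pthread_mutex_destroy(&shared_lock);
    pthread_mutexattr_destroy(&attr);
    return 0;
}
```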
