Embedded Projects & Embedded Ideas
Monday, August 8, 2011
Spread-spectrum clocking reduces EMI in embedded systems
For years now, government institutions have been regulating the amount of EMI (electromagnetic interference) an electronic device or system can emit. Their efforts primarily target lowering dissipated power and eliminating any interference to the function of other surrounding devices as a result of EMI. Spread-spectrum clocking is a popular implementation for reducing EMI in synchronous systems.
Spread-spectrum-clocking benefits
EMI is energy resulting from a periodic source in which most of the energy is concentrated at a single fundamental frequency. These unwanted signals can degrade the operation of other devices and systems; in some cases, the EMI-generated disturbance can make it impossible for those devices or systems to operate at all. Because an electromagnetic signal must have a source, synchronous systems are ideal candidates for generating excessive EMI. Within a system, the coupling paths in PCBs (printed-circuit boards) conduct the generated EMI to other system components. However, EMI can also occur in the absence of a conductive medium, such as an electric conductor or dielectric. In most cases, EMI results from a combination of conduction and radiation.
The primary PCIe (PCI Express) model implements a synchronous-clocking scheme. That is, the same 100-MHz clock source generates the reference clock for PCIe devices. Furthermore, in the case of a motherboard, the traces on the PCB can act as coupling paths to facilitate the transmission of EMI to the surrounding devices. The disturbance that occurs can affect not only the system but also other surrounding systems when EMI travels through the atmosphere in the form of radiation.
One method of minimizing the EMI that a device generates is to keep the disturbing signals below a certain level. You accomplish this goal by modulating the disturbing signals across a wider frequency range, thus spreading the energy across a range of frequencies rather than concentrating it at one frequency. In PCIe systems, this modulation of the reference clock is called spread-spectrum clocking.
The most common modulation techniques are center-spread and down-spread. The center-spread approach applies the modulated signal in such a way that the nominal frequency sits in the center of the modulated frequency range. That is, half of the modulated signals deviate above the nominal frequency, and the other half deviate below it. A down-spread approach also results in a range of deviated frequencies. However, in the down-spread approach, the modulated signals deviate below the nominal frequency.
Many PCIe systems implement EMI-minimizing spread-spectrum clocking by spreading the spectral energy of the clock signal over a wide frequency band. In spread-spectrum-clocking systems, PCIe components generally must use a reference clock from the same source. This approach allows a transmitter PLL (phase-locked-loop) and a receiver-clock-recovery function, or clock-data-recovery circuit, to track the modulation frequency and remain synchronous with each other. If only one side of the link uses a spread-spectrum-clocking reference clock, the transmitter and receiver circuits cannot properly track one another. For example, if a PCIe add-in card interfaces to a spread-spectrum-clocking system and also implements a cable connection to a downstream card that is using a constant-frequency-clock source, the downstream interface will be unable to connect.
The PCIe base specification provides guidelines for modulating the reference-clock input to PCIe devices. At a high level, the specification calls for the down-spread approach, using a 30- to 33-kHz signal to modulate the 100-MHz reference clock and resulting in a frequency range of 99.5 to 100 MHz (Figure 1).
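To make the numbers concrete, here is a minimal Python sketch (not from the article) that computes the frequency excursion for center-spread and down-spread modulation; the 0.5 percent down-spread parameter is an example value chosen so that the result matches the 99.5-to-100-MHz range quoted above:

# Hypothetical illustration of spread-spectrum clocking ranges.
# The 0.5% down-spread figure reproduces the 99.5-100 MHz range cited
# for PCIe; it is used here only as an example parameter.

def spread_range(f_nominal_hz, spread_fraction, mode="down"):
    """Return (f_min, f_max) for a given spread amount and mode."""
    delta = f_nominal_hz * spread_fraction
    if mode == "down":          # all deviation below the nominal frequency
        return f_nominal_hz - delta, f_nominal_hz
    elif mode == "center":      # deviation split evenly around nominal
        return f_nominal_hz - delta / 2, f_nominal_hz + delta / 2
    raise ValueError("mode must be 'down' or 'center'")

if __name__ == "__main__":
    f_ref = 100e6                       # 100-MHz PCIe reference clock
    for mode in ("down", "center"):
        lo, hi = spread_range(f_ref, 0.005, mode)
        print(f"{mode:>6}-spread: {lo/1e6:.2f} MHz to {hi/1e6:.2f} MHz")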
Friday, August 5, 2011
Open Embedded: An alternative way to build embedded Linux distributions
As embedded processors have grown more powerful and feature-rich, the popularity of the Linux operating system in embedded applications has grown in leaps and bounds. Although the fact that Linux is open source and free of licensing fees is one major driver of its popularity, another key driver is the wealth of application software and drivers available as a result of Linux’s widespread usage in the desktop and server arenas.
However, embedded developers cannot simply pick up desktop Linux distributions or applications for use in their systems. Broadly speaking, embedded Linux developers face three main challenges:
1. assembling a compatible combination of bootloader, kernel, library, application, and development tool components for the processor and peripherals used in their hardware;
2. correctly cross-building a multi-megabyte image; and
3. optimizing the various kernel and user-space components to reduce the footprint and associated memory cost.
Solutions to these challenges are far from straightforward, and achieving them requires significant effort and experience from a development team. Commercial embedded Linux vendors offer pre-tested solutions for particular embedded processors, but for developers who prefer a ‘roll your own’ approach to Linux, the Open Embedded (OE) build environment provides a methodology for reliably building customized Linux distributions for many embedded devices. A number of companies have been using OE to build embedded Linux kernel ports along with complete distributions, resulting in an increasing level of support to maintain and enhance key elements of the OE infrastructure.
In addition, a growing number of embedded Linux distributions (such as Angstrom) utilize OE. Although these distributions are not formally part of OE, they add to the attraction of using OE by providing ready-to-run starting points for developers. A final attraction is that some of the newer commercial distributions from companies such as MontaVista and Mentor Graphics are now based on OE. These provide additional tooling and commercially supported distributions.
In this article we present an overview of the key elements of the OE build environment and illustrate how these elements can be applied to build and customize Linux distributions. The Texas Instruments Arago distribution, which is based on the Angstrom distribution, will be used as an example of how to create a new distribution based on OE and the distributions that already use it.
Most of the Arago- or Angstrom-based example scripts shown here have been modified slightly to more concisely demonstrate key concepts of OE. Developers should access the original scripts at the websites listed at the end of the article to view complete real-world implementations.
An Overview of Open Embedded
OE is based on BitBake, a cross-compilation and build tool developed for embedded applications. Developers use BitBake by creating various configuration and recipe files that instruct BitBake on which sources to build and how to build them. OE is essentially a database of these recipe (.bb) and configuration (.conf) files that developers can draw on to cross-compile combinations of components for a variety of embedded platforms.
OE has thousands of recipes to build both individual packages and complete images. A package can be anything from a bootloader through a kernel to a user-space application or set of development tools. The recipe knows where to access the source for a package, how to build it for a particular target, and ensures that a package’s dependencies are all built as well, relieving developers of the need to understand every piece of software required to add a particular capability to their application. OE can create packages in a variety of package formats (tar, rpm, deb, ipk) and can create package feeds for a distribution.
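The ordering guarantee a recipe relies on can be pictured as a dependency-ordered build. The following Python sketch is purely illustrative (the package names and the DEPENDS-style mapping are invented) and is not how BitBake is implemented; it only shows how dependencies end up being built before the packages that need them:

# Illustrative only: a toy dependency-ordered "build", in the spirit of
# what a recipe's DEPENDS entries imply. Package names are hypothetical.

def build_order(target, depends):
    """Return a build order in which dependencies precede dependents."""
    order, seen = [], set()

    def visit(pkg):
        if pkg in seen:
            return
        seen.add(pkg)
        for dep in depends.get(pkg, []):
            visit(dep)
        order.append(pkg)

    visit(target)
    return order

if __name__ == "__main__":
    # Hypothetical dependency graph for a user-space application.
    depends = {
        "my-app": ["libjpeg", "zlib"],
        "libjpeg": ["zlib"],
        "zlib": [],
    }
    print(build_order("my-app", depends))   # ['zlib', 'libjpeg', 'my-app']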
Most OE users will typically begin by selecting a particular distribution rather than building individual packages. The advantage of using an existing distribution is that specific package versions often must be selected to get a working combination, and distributions handle exactly this function. They often provide a ‘stable’ build in addition to a ‘latest’ build to avoid the inherent instabilities that come from trying to combine the latest versions of everything.
A key benefit of OE is that it allows software development kit (SDK) generation. While some development teams may prefer to build their complete applications in OE, others may have a dedicated team that builds Linux platforms for application development teams to use. In these circumstances, the Linux platform team can generate a Linux distribution as an SDK that is easily incorporated into the build flow preferred by an application team. As a result, there is no need for application development teams to be OE experts.
A critical aspect of the OE database is that much of it is maintained by developers employed by parties with an interest in ensuring successful Linux distribution builds on embedded devices. This maintenance effort is critical given the amount of change occurring in the Linux kernel and application space.
A Quick Look at BitBake
The build tool developers are typically most familiar with is ‘make’, which is designed to build a single project based on a set of interdependent makefiles. This approach does not scale well to the task of creating a variety of Linux distributions each containing an arbitrary collection of packages (often hundreds of them), many of which are largely independent of each other, for an arbitrary set of platforms.
These limitations have led to the creation of a number of build tools for Linux distributions, such as Buildroot and BitBake. BitBake’s hierarchical recipes enable individual package build instructions to be maintained independently, but the packages themselves are easily aggregated and built in proper order to create large images. Thus it can build an individual package for later incorporation in a binary feed as well as complete images.
One important aspect of BitBake is that it does not dictate the way an individual package is built. The recipe (or associated class) for a package can specify any build tool, such as the GNU autotools, making it relatively straightforward to import packages into OE.
To address the need to select specific versions of packages that are known to work together and to specify the different embedded targets, BitBake uses configuration files.
BitBake fetches the package sources from the internet via wget, or from a software configuration management tool such as Git or Subversion, using the location information in the recipe (Figure 1 below). It then applies any patches specified in the package description, a common requirement when cross-compiling packages for an embedded processor. The package collection is specified in the higher-level recipes such as those for images and tasks.
Since many developers will want to use an existing distribution, BitBake enables a developer to override distribution defaults by placing customized recipes or configuration files earlier in the BBPATH search path. This enables developers to tweak a distribution for their own needs without having to directly modify (and then subsequently maintain custom copies of) the existing distribution files. This approach in OE is called ‘layering’ and each collection of recipes is called an ‘overlay’.
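Conceptually, layering is just an ordered search: the first copy of a recipe or configuration file found along BBPATH wins. The sketch below is an assumed, simplified model of that lookup (the directory and file names are invented), not BitBake's actual implementation:

# Simplified model of BBPATH-style layering: earlier directories override
# later ones. Directory and file names here are invented for illustration.
import os

def resolve(filename, search_path):
    """Return the first match for `filename` along an ordered search path."""
    for layer in search_path:
        candidate = os.path.join(layer, filename)
        if os.path.exists(candidate):
            return candidate          # the overlay "wins" over the base layer
    return None

if __name__ == "__main__":
    bbpath = ["my-overlay", "openembedded"]   # custom overlay listed first
    print(resolve("conf/distro/mydistro.conf", bbpath))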
We’ll now examine some of the different recipe and configuration files to shed a more detailed light on how OE and BitBake work. We will start by looking at the recipe types.
Thursday, August 4, 2011
Fleet data loggers tackle real-world testing
In order to simulate real situations for the communication networks in a vehicle it is necessary to perform extensive test drives in a real environment. Large amounts of data need to be acquired, recorded, and, afterwards, accessed.
Shortly before production maturity, in-depth testing in vehicles is typically conducted in the context of test drives. To achieve the greatest possible test coverage, some of these tests are performed under extreme environmental conditions. Whether they are winter tests in Finland at -30°C, hot-weather tests in Death Valley at over 50°C, or week-long drives through the Brazilian rainforest at high humidity and on rough roads, in the end the vehicle and all of its components must operate smoothly. Installed data loggers must be able to withstand these harsh conditions as well. This means that they must be mechanically rugged and operate reliably over a broad range of temperatures.
At first glance, it would seem reasonable to use a notebook-based solution for in-vehicle data logging. Together with a suitable network interface the notebook should be able to offer all required capabilities, because functionality can be implemented in software. However, commercially available notebooks cannot handle the required temperature range. Furthermore, the system must first be booted, which takes some time—even with fast notebooks. This implies another requirement for data loggers: Short startup times. Data must be acquired quickly enough for the first message on the bus to be logged.
Friday, July 29, 2011
Making your application code multicore ready
Many silicon vendors rely on multicore architectures to improve performance. However, some engineers might say that those vendors have not been able to deliver compilation tools that have kept pace with the architecture improvements. The tools that are available require both a good understanding of the application and a deep understanding of the target platform.
However, an alternative approach is possible. This article will highlight a way to parallelize complex applications in a short time span, without requiring deep understanding of either the application or the target platform.
This can be achieved with an interactive mapping and partitioning design flow. The flow visualizes the application behavior and allows the user to interactively explore feasible multithreaded versions. The program semantics are guaranteed to be preserved under the proposed transformations.
In many cases the design of an embedded system starts with a software collection that has not yet been partitioned to match the multicore structure of the target hardware.
As a result, the software does not meet its performance requirements and hardware resources are left idle. To resolve this, an expert (or a team of experts) comes in to change the software so that it fits the target multicore structure.
Current multicore design flow practice
Figure 1 Traditional multicore design practice involves an iterative process of analysis, partitioning, loop parallelization, incorporation of semaphores, analysis, testing and retuning of code.
A typical approach, illustrated in Figure 1 above, would include the following steps:
1. Analyze the application. Find the bottlenecks.
2. Partition the software over the available cores. This requires a good understanding of data access patterns and data communication inside the application to match this with the cache architecture and available bandwidth of buses and channels on the target platform. Optimize some of the software kernels for the target instruction set (e.g. Intel SSE, ARM Neon).
3. Identify the loops that can be parallelized. This requires a good understanding of the application: find the data dependencies, find the anti- and output dependencies, and find the shared variables. The dependencies can be hidden very deeply, and to find them often requires complex pointer analysis.
4. Predict the speedup. Predict the overhead of synchronizing, the cost of creating and joining threads. Predict the impact of additional cache overhead introduced by distributing workload over multiple CPUs. If parallelizing a loop still seems worth it, go to the next step.
5. Change the software to introduce semaphores, FIFOs, and other communication and synchronization mechanisms. Add thread calls to create and join threads (see the sketch after this list). This requires a good understanding of the APIs available on the target platform. At this stage, subtle bugs related to data races, deadlock, or livelock are often introduced; they may only manifest themselves much later, e.g. after the product has been shipped to the customer.
6. Test. Does it seem to function correctly? Measure. Does the system achieve the required performance level? If not, observe and probe the system. Tooling exists to observe the system, but experts must interpret these low-level observations in the context of their knowledge of the system and then draw conclusions.
7. Try again to improve performance or handle data races and deadlocks. This involves repeating the above from Step 1.
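As a toy illustration of steps 3 through 5, the Python sketch below splits an independent loop across worker threads and joins them. The workload and partition sizes are invented, and it assumes the iterations are truly independent, which is exactly the property the dependency analysis in step 3 has to establish:

# Toy illustration of partitioning an independent loop across threads and
# joining them. In CPython the GIL limits real speedup for CPU-bound work,
# so treat this as a sketch of the partition/create/join mechanics only.
import threading

def work(chunk, results, index):
    results[index] = sum(x * x for x in chunk)   # independent per-chunk work

def parallel_sum_of_squares(data, num_threads=4):
    chunk_size = (len(data) + num_threads - 1) // num_threads
    results = [0] * num_threads
    threads = []
    for i in range(num_threads):
        chunk = data[i * chunk_size:(i + 1) * chunk_size]
        t = threading.Thread(target=work, args=(chunk, results, i))
        threads.append(t)
        t.start()
    for t in threads:              # join: wait for all workers to finish
        t.join()
    return sum(results)

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1_000_000))))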
Close analysis of Figure 1 clearly shows there are many problems with this design flow. Experts who can successfully complete this flow are a rare breed. Even if you can find them, at the start of a project it is hard to predict how many improvement and bug-fix iterations the experts will need to go through until the system stabilizes.
Multicore platforms are quickly becoming a very attractive option in terms of their cost-performance ratio. But they also become more complex every year, making it harder for developers to exploit their benefits. Therefore we need a new design flow that enables any software developer to program multicore platforms. This flow is depicted in Figure 2 below.
Figure 2 Multicore design flow that enables any software developer
In this alternative flow, a tool analyzes the program before it is partitioned. It finds the loops that can be parallelized and devises a synchronization strategy for these loops. The tool also has detailed knowledge of the target platform and it can estimate the cost of different partitioning and synchronization strategies.
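Such a cost estimate boils down to a simple model evaluation, along the lines of the hedged sketch below: an Amdahl's-law-style speedup estimate with an assumed fixed synchronization overhead per thread. All numbers are invented; the point is only to show why a loop with too little work may not be worth parallelizing:

# Hedged cost-model sketch: Amdahl's-law speedup with an assumed fixed
# synchronization overhead per thread. All numbers are invented examples.

def estimated_speedup(parallel_fraction, cores, loop_time_s, sync_overhead_s):
    serial = loop_time_s * (1.0 - parallel_fraction)
    parallel = loop_time_s * parallel_fraction / cores
    overhead = sync_overhead_s * cores          # thread create/join + sync cost
    return loop_time_s / (serial + parallel + overhead)

if __name__ == "__main__":
    # A large loop is worth parallelizing...
    print(round(estimated_speedup(0.95, 4, loop_time_s=1.0, sync_overhead_s=1e-4), 2))
    # ...a tiny one is not, because the overhead dominates.
    print(round(estimated_speedup(0.95, 4, loop_time_s=1e-3, sync_overhead_s=1e-4), 2))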
Wednesday, July 27, 2011
Touch-screen technologies enable greater user/device interaction
Introduction
Touch screens are not a new concept. Since the arrival of the iPhone® multimedia device, touch technology has become extremely popular and has benefited from a flood of innovations. Where previously touch screens were merely designed to replace keyboards and mice, today they convey a completely new operating experience. Featuring greater interaction between the user and device, this new “touch” experience has been achieved by a combination of different technologies. This article provides an overview of recent advances in touch-screen technology.
Resistive technology
Touch screens with resistive technology have been in common use for many years. They are inexpensive to manufacture and easy to interface.
In resistive-technology sensing, electrodes are connected to two opposite sides of a glass plate with a transparent, conductive coating. When a voltage is applied to the electrodes, the plate acts like a potentiometer, enabling the measurement of a voltage that depends on the position of the contact point (Figure 1).
Once a similar, second plate is arranged in close proximity to the first with electrodes offset by 90°, and with the coatings facing each other, a 4-wire touch screen is achieved. If the top plate, which can also consist of transparent plastic, is deflected by a finger or pen so that the plates touch at the contact point, the x- and y-position of the pressure mark can be determined by comparing the voltages.
Figure 1: Functionality of a resistive touch screen.
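For the 4-wire scheme just described, the measured divider voltage maps almost linearly onto position along the driven axis. A minimal, hedged Python sketch of that conversion follows; the ADC resolution and screen dimensions are assumed example values, and real designs add calibration and debouncing:

# Minimal sketch of converting 4-wire resistive touch ADC readings into
# x/y coordinates. ADC resolution and screen size are assumed example values.

ADC_MAX = 4095                    # e.g. a 12-bit ADC
WIDTH_PX, HEIGHT_PX = 320, 240

def touch_to_pixels(adc_x, adc_y):
    """Linear mapping of the divider voltages onto screen coordinates."""
    x = adc_x * (WIDTH_PX - 1) // ADC_MAX
    y = adc_y * (HEIGHT_PX - 1) // ADC_MAX
    return x, y

if __name__ == "__main__":
    print(touch_to_pixels(2048, 1024))   # roughly mid-screen in x, upper part in y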
Because resistive touch screens are pressure sensitive, they can be operated using any pen or a finger. In contrast to most capacitive technologies, they can also be operated if the user is wearing gloves. Due to the mechanical contact system and the DC operation, they have a high electromagnetic interference (EMI) tolerance. However, only one contact pressure point can be registered at a time, so no multitouch operation is possible.
Modern controllers like the Maxim MAX11800 autonomously calculate the X/Y coordinates, thus relieving the host CPU of this task. This facilitates rapid scanning which, for example, enables a clear and legible signature to be captured.
Capacitive styles
Another technology, termed “projected capacitive,” is gaining in popularity. The premise for this technology is straightforward: transparent conductor paths made of indium tin oxide (ITO) are applied to a transparent carrier plate mounted behind a glass plate. A finger touching the glass plate affects the electrical capacitor field and enables the contact to be detected and measured. Two fundamentally different approaches are used to implement a projected capacitive design.
Self-capacitance
With this design, a clearly defined charge is applied to a conductor path. An approaching finger conducts part of the charge to ground. The changing voltage at the conductor path is analyzed by the touch controller. The ITO conductor paths are arranged in X and Y directions in a diamond pattern (Figure 2).
Figure 2: Functionality and design of a self-capacitance touch screen.
The exact position of the finger on the glass plate is determined by the controller chip through interpolation. Only one finger position can be recorded at a time, so real multitouch functionality is not possible.
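The interpolation step can be pictured as a weighted average (centroid) of the per-electrode signal changes around the strongest channel. The following sketch uses invented channel readings and electrode pitch; real controllers apply calibrated, noise-filtered variants of the same idea:

# Sketch of position interpolation from per-electrode self-capacitance
# deltas. Readings and electrode pitch are invented example values.

def interpolate_position(deltas, pitch_mm=5.0):
    """Weighted-average (centroid) position along one axis, in millimetres."""
    total = sum(deltas)
    if total == 0:
        return None                      # no touch detected
    centroid_index = sum(i * d for i, d in enumerate(deltas)) / total
    return centroid_index * pitch_mm

if __name__ == "__main__":
    # Signal change per X electrode; the finger sits between electrodes 3 and 4.
    x_deltas = [0, 1, 4, 30, 26, 3, 0, 0]
    print(round(interpolate_position(x_deltas), 2), "mm from the first electrode")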
The MAX11855 is a controller for a self-capacitive touch screen. It can control up to 31 ITO paths. Its superior sensitivity enables touch detection from gloved hands, and its noise immunity allows for thin and simple ITO constructions without the need for special shielding or a safety gap between it and the LCD display.
Intelligent control also makes pseudo-multitouch possible, e.g., zooming through the spread of two fingers. The integrated microcontroller enables the device to be used flexibly for touch screens of different dimensions as well as for buttons or slider controls.
Mutual capacitance
With this technology, an activating finger changes the coupling capacity of two crossing conductor paths. The controller monitors each line, one after another, and analyzes the voltage curves at the columns (Figure 3). This technology is capable of multitouch; several finger contacts can be detected simultaneously.
Figure 3: Functionality of a mutual-capacitance touch screen.
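Because every row/column intersection is measured individually, the scan produces a full matrix of touch strengths, which is what makes true multitouch possible. A hedged sketch of that scan loop follows; the measure_node function stands in for the analog front end and is invented here purely for illustration:

# Sketch of a mutual-capacitance scan: drive each row, read every column,
# and threshold the resulting matrix. measure_node() is a stand-in for the
# analog measurement and simply fakes two touches for illustration.

ROWS, COLS = 8, 12
FAKE_TOUCHES = {(2, 3): 40, (6, 9): 35}     # invented (row, col) -> signal

def measure_node(row, col):
    return FAKE_TOUCHES.get((row, col), 0)  # real hardware returns a capacitance delta

def scan(threshold=10):
    touches = []
    for r in range(ROWS):                   # drive one row at a time
        for c in range(COLS):               # read every column for that row
            if measure_node(r, c) > threshold:
                touches.append((r, c))
    return touches                          # several touches can be reported at once

if __name__ == "__main__":
    print(scan())    # [(2, 3), (6, 9)] -> two simultaneous contacts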
The geometry of the conductor paths is optimized for several applications. The MAX11871 touch-screen controller is designed for this technology and is the first controller on the market to allow a capacitive touch screen to be operated with gloved hands or pens. Another advantage is that the screen can be mounted behind a thick glass plate, so very robust systems can be built.
Haptics rising quickly as a driving force
A user’s operating experience is changing significantly because of the emerging use of haptics. Haptic feedback enables the operator to “feel” a device execute a function.
Currently, cell phones use a very rudimentary method for this tactile feedback. An integrated vibration motor, which is primarily used to signal an incoming call when the device is in silent mode, is activated for a short moment with every touch contact, and the whole cell phone briefly vibrates.
Future haptic systems will further refine this experience. In the newest designs, the touch-screen glass plate is moved by special actuators that operate magnetically, e.g., linear resonant actuators (LRAs), or are piezo-based. Piezo devices are available in single-layer versions, which require a high voltage (up to 300V), or in a more elaborate multilayer technology, which reduces the required voltage.
The MAX11835 haptic driver has been designed for single-layer, high-voltage piezo actuation. It is controlled directly by a touch-screen controller like any of the devices mentioned above. It enables a nearly latency-free operation, which is very important for the touch experience. Its waveform memory allows for individual movement patterns, which can further optimize the operating experience.
As an example, buttons can be sensed by fingertips moving over the display. Pressing the button then executes a function and creates a different waveform, which is detected differently by the finger. In this way touch-screen devices can be operated safely without the operator having to constantly look at the display to verify an action. This significantly increases personal safety for navigational systems or medical devices.
Figure 4: The function generator built into the MAX11835 reduces the delay for a fast and individual haptic experience.
Monday, July 25, 2011
Ultra-Low-Power RF
Even though the chips ship in their tens of millions each week, the market for short-range, low power RF technologies operating in the globally popular 2.4GHz ISM band - such as Wi-Fi, Bluetooth, ZigBee and a slew of proprietary solutions - is far from maturity. In the next few years, many impressive developments will emerge and wireless connectivity will pervade every aspect of our lives.
In particular, ultra low power (ULP) wireless applications – using tiny RF transceivers powered by coin cell batteries, waking up to send rapid “bursts” of data and then returning to nanoamp “sleep” states – are set to increase dramatically. For example, according to analysts ABI Research, the wireless sensor network (WSN) chips market grew by 300 percent in 2010. And the same company forecasts that no less than 467 million healthcare and personal fitness devices using Bluetooth low energy chips will ship in 2016.
ULP wireless connectivity can be added to any portable electronic product or equipment featuring embedded electronics, from tiny medical and fitness sensors, to cell phones, PCs, machine tools, cars and virtually everything in between. Tiny ULP transceivers can bestow the ability to communicate with thousands of other devices directly or as part of a network – dramatically increasing a product’s usefulness.
Yet, for the majority of engineers, RF design remains a black art. But while RF design is not trivial - with some assistance from the chip supplier and a decent development kit - it’s not beyond the design skills of a competent engineer. So, in this article I’ll lift the veil from ULP wireless technology, describe the chips, and take a look at how and where they’re used.
Inside ULP wireless
ULP wireless technology differs from so-called low power, short-range radios such as Bluetooth technology (now called Classic Bluetooth to differentiate it from the recently released Bluetooth v4.0 which includes ultra low power Bluetooth low energy technology) in that it requires significantly less power to operate. This dramatically increases the opportunity to add a wireless link to even the most compact portable electronic device.
The relatively high power demand of Classic Bluetooth - even for transmission of modest volumes of user data – dictates an almost exclusive use of rechargeable batteries. This power requirement means that Classic Bluetooth is not a good wireless solution for ‘low bandwidth - long lifetime’ applications and it’s typically used for periods of intense activity when frequent battery charging is not too inconvenient.
Classic Bluetooth technology, for example, finds use for wirelessly connecting a mobile phone to a headset or the transfer of stored digital images from a camera to a Bluetooth-enabled printer. Battery life in a Classic Bluetooth-powered wireless device is therefore typically measured in days, or weeks at most. (Note: There are some highly specialized Classic Bluetooth applications that can run on lower capacity primary batteries.)
In comparison, ULP RF transceivers can run from coin cell batteries (such as a CR2032 or CR2025) for periods of months or even years (depending on application duty cycle). These coin cell batteries are compact and inexpensive, but have limited energy capacity, typically in the range of 90 to 240mAh (compared to, for example, an AA cell which has 10 to 12x that capacity) - assuming a nominal average current drain of just 200µA.
This modest capacity significantly restricts the active duty cycle of a ULP wireless link. For example, a 220mAh CR2032 coin cell can sustain a maximum nominal current (or discharge rate) of just 25µA if it’s to last for at least a year (220mAh/(24hr x 365days)).
ULP silicon radios feature peak currents of tens of milliamps - for example, the current consumption of Nordic Semiconductor’s nRF24LE1 2.4GHz transceiver is 11.1mA (at 0dBm output power) when transmitting and 13.3mA (at 2Mbps) when receiving. If the average current over an extended period is to be restricted to tens of microamps, the duty cycle has to be very low (around 0.25 percent), with the chip quickly reverting to a sleep mode, drawing just nanoamps, for most of the time.
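The battery-life arithmetic above is easy to reproduce. The short Python sketch below uses the figures quoted in the text (a 220mAh cell lasting one year and the 11.1mA transmit current) to derive the average current budget and the resulting active duty cycle; sleep current is assumed negligible for simplicity:

# Reproducing the coin-cell budget from the text: a 220 mAh CR2032 lasting
# one year allows roughly 25 uA average drain; with ~11 mA active current the
# resulting duty cycle is a fraction of a percent. Sleep current is assumed
# negligible here for simplicity.

CELL_CAPACITY_MAH = 220.0
HOURS_PER_YEAR = 24 * 365
ACTIVE_CURRENT_MA = 11.1          # nRF24LE1 transmit current quoted in the text

average_budget_ma = CELL_CAPACITY_MAH / HOURS_PER_YEAR
duty_cycle = average_budget_ma / ACTIVE_CURRENT_MA

print(f"average current budget: {average_budget_ma * 1000:.1f} uA")
print(f"maximum active duty cycle: {duty_cycle * 100:.2f} %")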
Thursday, July 21, 2011
Scalable SoC drives 'hybrid' cluster displays
With increased electronics content, increasingly connected cars, and computers taking more and more control in vehicles, it is only logical that instrument clusters massively change their appearance and functions.
Traditional instrument clusters, a key element of the car, are undergoing substantial changes. The time has arrived for an evolution in traditional main vehicle instrument cluster units. Between mechanical instrument clusters and the growing group of freely programmable clusters lies the huge area of hybrid dashboards, which combine traditional meters with at least one graphical display for driver information.
With the increasing number of electronic systems in cars, such as driver assistance systems, the number of information and status signals offered to the driver is increasing in parallel. Undoubtedly pictures and graphics can be grasped more easily and quickly by humans than written or digital information. The consequence is a strong trend towards displays within easy view of the driver, mostly as part of a hybrid cluster, but also—as a logical step—implemented as head-up displays (HUDs).
For the automotive industry the design of the driver's environment is a major differentiator from competitors, especially considering the difficult conditions for implementing advanced electronic systems in the car. Quality, robustness, functional safety, data security, low power consumption, etc, are the main criteria. From the cost perspective this means that display and semiconductor technologies have to be available at reasonable prices and have to offer the right amount of scalability in several key areas, such as LCD and TFT, graphics processors and controller units, sensors and LED modules.
New features and applications, with obvious possibilities for integration into instrument clusters, are being introduced into cars via entertainment, navigation, advanced driver assist systems (ADAS), and diagnostic systems. Although multi-purpose head units will still have the main display capability, clusters will be able to offer an auxiliary screen to the driver—especially for multimedia content, even if it were only to access main vehicle information and safety data from ADAS.