Friday, July 29, 2011

Making your application code multicore ready

Many silicon vendors rely on multicore architectures to improve performance. However, some engineers might say that those vendors have not been able to deliver compilation tools that have kept pace with the architecture improvements. The tools that are available require both a good understanding of the application and a deep understanding of the target platform.

However, an alternative approach is possible. This article will highlight a way to parallelize complex applications in a short time span, without requiring a deep understanding of either the application or the target platform.

This can be achieved with an interactive mapping and partitioning design flow. The flow visualizes the application's behavior and allows the user to interactively explore feasible multithreaded versions. The program's semantics are guaranteed to be preserved under the proposed transformations.

In many cases the design of an embedded system starts with a software collection that has not yet been partitioned to match the multicore structure of the target hardware.

As a result, the software does not meet its performance requirements and hardware resources are left idle. To resolve this, an expert (or a team of experts) comes in to change the software so that it fits the target multicore structure.

Current multicore design flow practice



Figure 1 Traditional multicore design practice involves an iterative process of analysis, partitioning, loop parallelization, incorporation of semaphores, analysis, testing and retuning of code.


A typical approach, illustrated in Figure 1 above, would include the following steps:

1. Analyze the application. Find the bottlenecks.

2. Partition the software over the available cores. This requires a good understanding of data access patterns and data communication inside the application to match this with the cache architecture and available bandwidth of buses and channels on the target platform. Optimize some of the software kernels for the target instruction set (e.g. Intel SSE, ARM Neon).

3. Identify the loops that can be parallelized. This requires a good understanding of the application: find the data dependencies, find the anti- and output dependencies, and find the shared variables. The dependencies can be hidden very deeply, and to find them often requires complex pointer analysis.

4. Predict the speedup. Predict the overhead of synchronizing, the cost of creating and joining threads. Predict the impact of additional cache overhead introduced by distributing workload over multiple CPUs. If parallelizing a loop still seems worth it, go to the next step.

5. Change the software to introduce semaphores, FIFOs and other communication and synchronization means. Add thread calls to create and join threads. This requires a good understanding of the API’s available on the target platform. In this stage subtle bugs are often introduced, related to data races, deadlock or livelock that may only manifest themselves much later, e.g. after the product has been shipped to the customer.

6. Test. Does it seem to function correctly? Measure. Does the system achieve the required performance level? If not, observe and probe the system. Tooling exists to observe the system, but the experts must interpret these low-level observations in the context of their system knowledge and then draw conclusions.

7. Try again to improve performance or handle data races and deadlocks. This involves repeating the above from Step 1.
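As an illustration of steps 3 and 5 above, a loop whose iterations have been shown to be free of data dependencies can be handed to a thread pool without any semaphores at all. This sketch (mine, not from the article) uses Python's standard library; `kernel` is a hypothetical stand-in for a compute kernel:

```python
# Illustrative sketch: parallelizing a loop whose iterations are independent,
# the property that steps 3-5 of the flow above must first establish.
from concurrent.futures import ThreadPoolExecutor

def kernel(x):
    # Stand-in for a compute kernel; iterations share no mutable state,
    # so no semaphores or locks are needed.
    return x * x

def run_serial(data):
    return [kernel(x) for x in data]

def run_parallel(data, workers=4):
    # map() partitions the iteration space over the worker threads and
    # joins them, preserving the original ordering of results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(kernel, data))

# The parallel version must be observationally equivalent to the serial one.
assert run_parallel(range(100)) == run_serial(range(100))
```

The equivalence assertion at the end is exactly what step 6 ("Test") checks by hand; when a dependency has been missed, it is this property that silently breaks.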

Close analysis of Figure 1 shows there are many problems with this design flow. Experts who can successfully complete it are a rare breed. Even if you can find them, at the start of a project it is hard to predict how many improvement and bug-fix iterations they will need before the system stabilizes.

Multicore platforms are quickly becoming a very attractive option in terms of their cost-performance ratio. But they also become more complex every year, making it harder for developers to exploit their benefits. Therefore we need a new design flow that enables any software developer to program multicore platforms. This flow is depicted in Figure 2 below.



Figure 2 Multicore design flow that enables any software developer

In this alternative flow, a tool analyzes the program before it is partitioned. It finds the loops that can be parallelized and devises a synchronization strategy for these loops. The tool also has detailed knowledge of the target platform and it can estimate the cost of different partitioning and synchronization strategies.

Information is shared by www.irvs.info

Wednesday, July 27, 2011

Touch-screen technologies enable greater user/device interaction

Introduction

Touch screens are not a new concept. Since the arrival of the iPhone® multimedia device, touch technology has become extremely popular and has benefited from a flood of innovations. Where previously touch screens were merely designed to replace keyboards and mice, today they convey a completely new operating experience. Featuring greater interaction between the user and device, this new “touch” experience has been achieved by a combination of different technologies. This article provides an overview of recent advances in touch-screen technology.

Resistive technology

Touch screens with resistive technology have been in common use for many years. They are inexpensive to manufacture and easy to interface.

In resistive-technology sensing, electrodes are connected to two opposite sides of a glass plate with a transparent, conductive coating. When a voltage is applied to the electrodes, the plate acts like a potentiometer, enabling the measurement of a voltage that depends on the position of the contact point (Figure 1).

When a similar second plate is arranged in close proximity to the first, with its electrodes offset by 90° and the coatings facing each other, a 4-wire touch screen results. If the top plate, which can also be made of transparent plastic, is deflected by a finger or pen so that the plates touch, the x- and y-position of the contact point can be determined by comparing the voltages.



Figure 1: Functionality of a resistive touch screen.
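The voltage comparison just described amounts to a simple ratiometric calculation. The following sketch uses hypothetical names and a 12-bit ADC for illustration; it is not the API of any particular controller:

```python
# Ratiometric conversion for one axis of a 4-wire resistive touch screen:
# one plate is driven as a voltage divider, the other plate reads the wiper.
def adc_to_position(adc_code, adc_full_scale, screen_extent):
    # The contact position is proportional to the divider ratio.
    return screen_extent * adc_code / adc_full_scale

# A mid-scale reading (2048 of 4096) on a 320-pixel-wide axis lands
# at mid-screen; the Y axis is measured the same way with the drive
# voltage applied to the other plate.
x = adc_to_position(2048, 4096, 320)   # -> 160.0
```

Controllers such as the MAX11800 mentioned below perform this conversion autonomously, which is what relieves the host CPU of the task.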

Because resistive touch screens are pressure sensitive, they can be operated with any pen or a finger. In contrast to most capacitive technologies, they also work if the user is wearing gloves. Thanks to the mechanical contact system and DC operation, they have a high electromagnetic interference (EMI) tolerance. However, only one contact point can be registered at a time, so multitouch systems are not possible.

Modern controllers like the Maxim MAX11800 autonomously calculate the X/Y coordinates, thus relieving the host CPU of this task. This facilitates rapid scanning which, for example, enables a clear and legible signature to be captured.

Capacitive styles

Another technology, termed “projected capacitive,” is gaining in popularity. The premise is straightforward: transparent conductor paths made of indium tin oxide (ITO) are applied to a transparent carrier plate mounted behind a glass plate. A finger touching the glass plate alters the electric field of the capacitor formed by these paths, enabling the contact to be detected and measured. Two fundamentally different approaches are used to implement a projected capacitive design.

Self-capacitance

With this design, a clearly defined charge is applied to a conductor path. An approaching finger conducts part of the charge to ground. The changing voltage at the conductor path is analyzed by the touch controller. The ITO conductor paths are arranged in X and Y directions in a diamond pattern (Figure 2).



Figure 2: Functionality and design of a self-capacitance touch screen.

The exact position of the finger on the glass plate is determined by the controller chip through interpolation. Only one finger position can be recorded at a time, so real multitouch functionality is not possible.
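The interpolation mentioned above is typically a weighted centroid of the capacitance changes seen on neighboring electrodes. This sketch is an assumption about how such a controller works, not the MAX11855's actual algorithm; all names are mine:

```python
# Centroid interpolation over self-capacitance electrodes. 'deltas' are
# baseline-subtracted capacitance changes, one per ITO path along an axis.
def interpolate_position(deltas, pitch_mm):
    total = sum(deltas)
    if total == 0:
        return None  # no touch detected on this axis
    # Weighted centroid of electrode indices, scaled by the electrode pitch,
    # gives a position much finer than the electrode spacing itself.
    centroid = sum(i * d for i, d in enumerate(deltas)) / total
    return centroid * pitch_mm

# A touch centered between electrodes 2 and 3 on a 5 mm pitch:
pos = interpolate_position([0, 0, 40, 40, 0], 5.0)   # -> 12.5 mm
```

This is also why only one finger can be resolved per axis: two simultaneous touches merge into a single centroid, which is the limitation the mutual-capacitance approach below removes.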

The MAX11855 is a controller for a self-capacitive touch screen. It can control up to 31 ITO paths. Its superior sensitivity enables touch detection from gloved hands, and its noise immunity allows for thin and simple ITO constructions without the need for special shielding or a safety gap between it and the LCD display.

Intelligent control also makes pseudo-multitouch possible, e.g., zooming through the spread of two fingers. The integrated microcontroller enables the device to be used flexibly for touch screens of different dimensions as well as for buttons or slider controls.

Mutual capacitance

With this technology, an activating finger changes the coupling capacity of two crossing conductor paths. The controller monitors each line, one after another, and analyzes the voltage curves at the columns (Figure 3). This technology is capable of multitouch; several finger contacts can be detected simultaneously.



Figure 3. Functionality of a mutual-capacitance touch screen.

The geometry of the conductor paths is optimized for several applications. The MAX11871 touch-screen controller is designed for this technology and is the first controller on the market to allow a capacitive touch screen to be operated with gloved hands or pens. Another advantage is that the screen can be mounted behind a thick glass plate, so very robust systems can be built.

Haptics rising quickly as a driving force

A user’s operating experience is changing significantly because of the emerging use of haptics. Haptic feedback enables the operator to “feel” a device execute a function.

Currently, cell phones use a very rudimentary method for this tactile feedback. An integrated vibration motor, which is primarily used to signal an incoming call when the device is in silent mode, is activated for a short moment with every touch contact, and the whole cell phone briefly vibrates.

Future haptic systems will further refine this experience. In the newest designs, the touch-screen glass plate is moved by special actuators that operate magnetically, e.g., the new linear resonant actuators (LRAs), or that are piezo-based. Piezo devices are available in single-layer versions, which require a high voltage (up to 300V), or in a more elaborate multilayer technology, which reduces the required voltage.

The MAX11835 haptic driver has been designed for single-layer, high-voltage piezo actuation. It is controlled directly by a touch-screen controller like any of the devices mentioned above. It enables a nearly latency-free operation, which is very important for the touch experience. Its waveform memory allows for individual movement patterns, which can further optimize the operating experience.

As an example, buttons can be sensed by fingertips moving over the display. Pressing the button then executes a function and creates a different waveform, which is detected differently by the finger. In this way touch-screen devices can be operated safely without the operator having to constantly look at the display to verify an action. This significantly increases personal safety for navigational systems or medical devices.



Figure 4: The function generator built into the MAX11835 reduces the delay for a fast and individual haptic experience.


Monday, July 25, 2011

Ultra-Low-Power RF

Even though the chips ship in their tens of millions each week, the market for short-range, low power RF technologies operating in the globally popular 2.4GHz ISM band - such as Wi-Fi, Bluetooth, ZigBee and a slew of proprietary solutions - is far from maturity. In the next few years, many impressive developments will emerge and wireless connectivity will pervade every aspect of our lives.

In particular, ultra low power (ULP) wireless applications – using tiny RF transceivers powered by coin cell batteries, waking up to send rapid “bursts” of data and then returning to nanoamp “sleep” states – are set to increase dramatically. For example, according to analysts ABI Research, the wireless sensor network (WSN) chips market grew by 300 percent in 2010. And the same company forecasts that no less than 467 million healthcare and personal fitness devices using Bluetooth low energy chips will ship in 2016.

ULP wireless connectivity can be added to any portable electronic product or equipment featuring embedded electronics, from tiny medical and fitness sensors, to cell phones, PCs, machine tools, cars and virtually everything in between. Tiny ULP transceivers can bestow the ability to communicate with thousands of other devices directly or as part of a network – dramatically increasing a product’s usefulness.

Yet, for the majority of engineers, RF design remains a black art. But while RF design is not trivial - with some assistance from the chip supplier and a decent development kit - it’s not beyond the design skills of a competent engineer. So, in this article I’ll lift the veil from ULP wireless technology, describe the chips, and take a look at how and where they’re used.

Inside ULP wireless
ULP wireless technology differs from so-called low power, short-range radios such as Bluetooth technology (now called Classic Bluetooth to differentiate it from the recently released Bluetooth v4.0 which includes ultra low power Bluetooth low energy technology) in that it requires significantly less power to operate. This dramatically increases the opportunity to add a wireless link to even the most compact portable electronic device.

The relatively high power demand of Classic Bluetooth - even for transmission of modest volumes of user data – dictates an almost exclusive use of rechargeable batteries. This power requirement means that Classic Bluetooth is not a good wireless solution for ‘low bandwidth - long lifetime’ applications and it’s typically used for periods of intense activity when frequent battery charging is not too inconvenient.

Classic Bluetooth technology, for example, finds use for wirelessly connecting a mobile phone to a headset or the transfer of stored digital images from a camera to a Bluetooth-enabled printer. Battery life in a Classic Bluetooth-powered wireless device is therefore typically measured in days, or weeks at most. (Note: There are some highly specialized Classic Bluetooth applications that can run on lower capacity primary batteries.)

In comparison, ULP RF transceivers can run from coin cell batteries (such as a CR2032 or CR2025) for months or even years, depending on the application duty cycle. These coin cells are compact and inexpensive but have limited energy capacity, typically in the range of 90 to 240mAh (an AA cell, for comparison, has 10 to 12 times that capacity). Even a nominal average current drain of just 200µA would exhaust a 220mAh coin cell in about six weeks.

This modest capacity significantly restricts the active duty cycle of a ULP wireless link. For example, a 220mAh CR2032 coin cell can sustain a nominal average current (or discharge rate) of just 25µA if it is to last at least a year (220mAh / 8760h ≈ 25µA).

ULP silicon radios feature peak currents of tens of milliamps; for example, the current consumption of Nordic Semiconductor’s nRF24LE1 2.4GHz transceiver is 11.1mA when transmitting (at 0dBm output power) and 13.3mA when receiving (at 2Mbps). If the average current over an extended period is to stay in the tens of microamps, the duty cycle has to be very low (around 0.25 percent), with the chip quickly reverting to a sleep mode, drawing just nanoamps, for most of the time.
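Sticking with the figures above, the duty-cycle budget works out in a few lines; treating the sleep current as negligible is my simplification:

```python
# Back-of-envelope power budget from the figures in the text
# (nRF24LE1-class radio, 220mAh CR2032 cell). A sketch, not a
# datasheet calculation.
CAPACITY_MAH = 220.0
HOURS_PER_YEAR = 24 * 365          # 8760 h

# Maximum sustainable average drain for a one-year lifetime:
avg_budget_ma = CAPACITY_MAH / HOURS_PER_YEAR   # ~0.025 mA, i.e. ~25 uA

# With an 11.1 mA transmit current and (assumed) negligible sleep
# current, the active duty cycle that keeps the average within budget:
active_ma = 11.1
duty = avg_budget_ma / active_ma                # ~0.0023, i.e. ~0.23%
```

The result, roughly a quarter of one percent, is where the "around 0.25 percent" figure above comes from: the radio must spend more than 99.7% of its life asleep.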


Thursday, July 21, 2011

Scalable SoC drives 'hybrid' cluster displays

With increased electronics content, increasingly connected cars, and computers taking more and more control in vehicles, it is only logical that instrument clusters massively change their appearance and functions.

Traditional instrument clusters are a key element of the car, and they are undergoing substantial change; the time has arrived for an evolution of the main vehicle instrument cluster unit. Between the group of mechanical instrument clusters and the growing group of freely programmable clusters lies the huge area of hybrid dashboards, which combine traditional meters with at least one graphical display for driver information.

With the increasing number of electronic systems in cars, such as driver assistance systems, the number of information and status signals offered to the driver is increasing in parallel. Undoubtedly pictures and graphics can be grasped more easily and quickly by humans than written or digital information. The consequence is a strong trend towards displays within easy view of the driver, mostly as part of a hybrid cluster, but also—as a logical step—implemented as head-up displays (HUDs).

For the automotive industry the design of the driver's environment is a major differentiator from competitors, especially considering the difficult conditions for implementing advanced electronic systems in the car. Quality, robustness, functional safety, data security, low power consumption, etc, are the main criteria. From the cost perspective this means that display and semiconductor technologies have to be available at reasonable prices and have to offer the right amount of scalability in several key areas, such as LCD and TFT, graphics processors and controller units, sensors and LED modules.

New features and applications, with obvious possibilities for integration into instrument clusters, are being introduced into cars via entertainment, navigation, advanced driver assist systems (ADAS), and diagnostic systems. Although multi-purpose head units will still have the main display capability, clusters will be able to offer an auxiliary screen to the driver—especially for multimedia content, even if it were only to access main vehicle information and safety data from ADAS.


Monday, July 18, 2011

Exposing the hidden costs of using off-the-shelf analog ICs

An ASIC does not necessarily have to be a fully custom integrated circuit. There are many standard analog chips on the market that are simply priced too high. It may make good economic sense to have an analog ASIC company develop a chip that mimics a standard product.

Analog ASICs are not for everyone. Like any component choice, they must offer the best economic value for the application. Any associated up-front NRE (non-recurring engineering) costs must be factored into the equation, along with hard tooling (wafer-fabrication masks, test hardware and software, and more). In addition, there is the issue of time: analog ASICs can take from six months to a year or more to be ready for a production environment. And of course, a minimum quantity must be consumed to ensure the value is realized. These factors must all align to justify development of an analog ASIC.

Why do Standard Analog ICs Cost So Much?

No one designs and tools production ICs for free; the OEM pays for this one way or another. When you buy a standard analog IC, some portion of the price you pay covers the development cost of that chip. The real question becomes: what portion of the price you pay is actually the cost to make the chip? A simplified analysis can be derived from a chip company’s financial statement. The critical metric is Gross Profit Margin (GPM).

Gross Profit = Company Annual Sales – Actual Cost to Build the Products Sold

As reported in an annual report, reflecting sales over a 12-month period, GPM is a sales-weighted average: some of the chip company’s products sold that year achieved more than the reported GPM and others less.
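Rearranging the gross-profit definition above gives a quick way to estimate what a standard part actually costs to build; the 60% margin below is a hypothetical figure for illustration:

```python
# Estimating a standard part's build cost from its price and the vendor's
# reported gross profit margin. Since gross profit = sales - cost of goods,
# cost = price * (1 - GPM).
def estimated_build_cost(unit_price, gpm_percent):
    return unit_price * (1.0 - gpm_percent / 100.0)

# A $1.00 analog IC from a vendor reporting a (hypothetical) 60% GPM
# costs roughly $0.40 to make; the remaining $0.60 funds development,
# overhead and profit.
cost = estimated_build_cost(1.00, 60.0)
```

Because GPM is an average, a mature commodity part may sit well above it, which is exactly the situation where an ASIC replacement looks attractive.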



Depending on the GPM of the products you selected for your new design, it may be cost-advantageous to consider replacing them with an analog ASIC. For example, one circuit used several off-the-shelf analog ICs, including a Linear Tech gain-programmable precision instrumentation amplifier, a National micropower ultra-low-dropout regulator, an Analog Devices 40 µA micropower instrumentation amplifier, and more. The combined high-volume bill-of-materials cost was $3.56, and the functions were easy to integrate into an analog ASIC. By integrating the equivalent functions, it was possible to reduce that $3.56 cost to well under one dollar. The product lifetime is expected to be ten years, with monthly volumes averaging 15K units.

After amortizing the NRE and tooling costs associated with the development of the ASIC, the following sensitivity analysis was developed. Some erosion of the standard analog ICs’ prices is expected during the lifetime of the ASIC. The analysis therefore projects lifetime savings based not only on under- and over-achievement of the chip’s lifetime volumes, but also on the fact that future cost savings may be smaller than today’s as standard-product prices change.
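The shape of such a sensitivity analysis can be sketched as follows. The $3.56 BOM cost, 15K-unit monthly volume and ten-year life are from the example above; the $0.90 ASIC unit cost and $500K NRE-plus-tooling figure are hypothetical placeholders, not the article's numbers:

```python
# Lifetime-savings model for the ASIC business case: per-unit savings
# times lifetime volume, minus the one-time NRE and tooling outlay.
def lifetime_savings(std_bom_cost, asic_unit_cost, nre, monthly_volume, years):
    units = monthly_volume * 12 * years
    return units * (std_bom_cost - asic_unit_cost) - nre

# Nominal case: $3.56 BOM vs. an assumed $0.90 ASIC, $500K assumed NRE,
# 15K units/month over 10 years (1.8M units).
savings = lifetime_savings(3.56, 0.90, 500_000, 15_000, 10)

# Volume sensitivity: under- and over-achievement of lifetime volume.
sensitivity = {v: lifetime_savings(3.56, 0.90, 500_000, v, 10)
               for v in (10_000, 15_000, 20_000)}
```

Price erosion of the standard parts would shrink `std_bom_cost` over time, which is why the article's analysis also varies the future savings per unit, not just the volume.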



While cost is a compelling reason to move to an analog ASIC because it is an easily measured metric, do not underestimate the value of IP protection and unique differentiation. Many times these critical aspects of an analog ASIC’s economic value are overlooked.


Thursday, July 14, 2011

3D modeling integrates flexible PCB design

Over the last decade, electronic products have become increasingly complex and dense as they pack more functions into dramatically reduced footprints. The need for flexible circuits has grown exponentially, since they are often the preferred solution for reducing package weight compared to rigid planar boards.

They are also easier to manufacture, reducing total assembly time while driving down cost and errors. Through their proven suitability for handling more than 25 point-to-point wire connections, flexible PCBs also provide greater system reliability.

Additionally, their main advantage is their ability to bend in order to accommodate the most cramped environments, enabling denser component layouts within the specified mechanical constraints of consumer products.

This makes flexible PCBs suitable for use in almost all electronics-based equipment, from consumer products such as digital cameras, computers and hard drives, to internal medical devices and military equipment.

Several generations of notebooks, tablet computers and other devices have been able to slim down while increasing their functionalities thanks to flexible layouts and interconnects.

Reducing the design cycle

Looking at how some flexible PCBs are designed today, and considering their development cycles, it is clear that there is considerable room for improvement. When Dassault Systèmes started to work on this subject with a leading Japanese worldwide consumer electronics company, we soon realized that their design process was slow, extremely complex and time consuming.

The first steps of the development process were purely manual and involved placing the flexible PCB assembly within the product. Even today, some companies still make paper PCB mock-ups by hand and check component positions manually throughout the product’s physical mock-up stages.

Following this procedure, 2D drawings were generated and shared with the ECAD designer for component placement and routing.

Within this outdated methodology, mechanical and electronic design processes were conducted separately. Only late in the development cycle was it possible to exchange critical design data between MCAD and ECAD systems. The limitations in data exchange and the lack of co-design functionality resulted in the need for additional design iterations, driving up development times and costs.


Tuesday, July 12, 2011

Microcontroller provides an alternative to DDS

Audio and low-frequency circuit systems often require a signal source with a pure spectrum. Specialized DDS (direct-digital-synthesis) ICs often perform this signal generation. A DDS device uses a DAC, but often with no more than 16-bit resolution, limiting the SNR (signal-to-noise ratio).

You can perform the same task with a microcontroller programmed as a DDS and an external high-resolution DAC. Achieving 18 to 24 bits of resolution, however, requires a large memory table containing the cosine function for every value of the phase accumulator.

An alternative approach lets you use a standard microcontroller with a small memory and still implement an effective synthesizer. You can design a circuit to produce a sine wave using a scalable digital oscillator built with adder and multiplier block functions in a simple structure.



Figure 1 shows a microcontroller driving an audio DAC. To generate a sine wave, your code models the circuit in Figure 2, which comprises two integrators in a feedback loop equivalent to that of an ideal resonator.



Parameter F defines the frequency and ranges from 0 to –0.2, and Parameter A sets the amplitude of the output signal via a single initial pulse at start-up. The following equation gives the frequency of the generated signal:


where T denotes the time for computing an entire sequence to obtain output data.

The firmware for implementing this system is relatively straightforward. It requires just a few additions and one multiplication, so you can use a slow microcontroller. Remember, though, that the precision of every operation must be adequate to guarantee complete signal reconstruction. Processing data with 8 or 16 bits isn’t sufficient; you must write your firmware to emulate a greater number of bits, which requires careful code implementation.
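A minimal sketch of such a two-integrator resonator follows, in Python for clarity (real firmware would use extended-precision fixed-point arithmetic, as the paragraph above requires). The update order, and the resulting frequency relation cos(2πfT) = 1 + F/2, are my assumptions about the structure in Figure 2, not taken from the article:

```python
# Two-integrator ("magic circle") resonator: two additions and one
# multiplication per output sample. The state-update matrix has
# determinant 1, so the oscillation neither grows nor decays.
def resonator(F, amplitude, n_samples):
    # A single initial pulse sets the amplitude; thereafter the loop
    # runs free with no further input.
    y1, y2 = amplitude, 0.0
    out = []
    for _ in range(n_samples):
        y1 = y1 + F * y2    # first integrator (one multiply, one add)
        y2 = y2 + y1        # second integrator (one add)
        out.append(y2)
    return out

samples = resonator(F=-0.1, amplitude=1.0, n_samples=2000)
# Bounded output: no amplitude drift over thousands of samples.
assert max(abs(s) for s in samples) < 100.0
```

Under this assumed structure the frequency grows roughly as the square root of |F| for small |F|, which is consistent with the note below about squaring the control value to get a linear frequency setting.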

If you develop your code properly, you should generate DAC output codes that produce a sine wave (Figure 3). Remember that Parameter F is nonlinear with respect to the output frequency. If you need a directly proportional control, you can square the value of F before applying it to the input, which is useful when you need a simple frequency setting.



You can use just about any microcontroller to implement the oscillator, together with a high-performance DAC. You can achieve an output SNR greater than 110 dB. Many audio DACs operating in monophonic mode have 20- to 24-bit resolution at a 192-kHz sampling rate. They also offer a dynamic range of 120 dB or more.
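As a sanity check on those numbers, the ideal quantization-limited SNR of an N-bit converter follows the standard rule of thumb 6.02·N + 1.76 dB:

```python
# Ideal quantization SNR for a full-scale sine into an N-bit converter.
def ideal_snr_db(bits):
    return 6.02 * bits + 1.76

snr_20bit = ideal_snr_db(20)   # ~122 dB
snr_24bit = ideal_snr_db(24)   # ~146 dB
```

A 20-bit DAC's ~122 dB theoretical ceiling is consistent with the ">110 dB achievable" and "120 dB dynamic range" figures above, once real-world noise is accounted for, while 8 or 16 bits (~50 dB and ~98 dB) fall short, as the firmware-precision warning earlier explains.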


Saturday, July 9, 2011

Creating video that mimics human visual perception

Recent breakthroughs in core video processing techniques have brought video technology to a point where it can plausibly contest the capabilities of the human visual system. For one, the last couple of decades have witnessed a phenomenal increase in the number of pixels accommodated by display systems, enabling the transition from standard-definition (SD) video to high-definition (HD) video.

Another noteworthy evolution is the stark enhancement in pixel quality, characterized by high dynamic range (HDR) systems as they elegantly displace their low dynamic range (LDR) equivalents.

Moreover, the intuitive approaches developed in the understanding of images to replicate the perceptual abilities of the human brain have met with encouraging successes, as have 3D video systems in their drive toward a total eclipse of their 2D counterparts.

These advanced techniques converge toward a common purpose: to dissolve the boundaries between the real and digital worlds through the capture of videos that mimic the various aspects of human visual perception. These aspects fundamentally relate to video processing research in the fields of video capture, display technologies, data compression and the understanding of video content.

Video capture in 3D, HD and HDR
The two distinct technologies used in the capture of digital videos are the charge-coupled devices (CCD) and complementary metal-oxide-semiconductor (CMOS) image sensors, both of which convert light intensities into appropriate values of electric charges to be later processed as electronic signals.

Leveraging a remarkable half-century of continued development, these technologies enable the capture of HD videos of exceptional quality. Nevertheless, in terms of HDR video, they pale in comparison to the capabilities of a typical human eye, which boasts a dynamic range (the ratio of the brightest to darkest visible parts) of about 10,000:1.

Existing digital camcorders can capture either only the brighter portions of a scene, using short exposure durations, or only the darker portions, using longer exposure durations.

Practically, this shortcoming can be circumvented with the use of multiple camcorders with one or two beam splitters, in which several video sequences are captured concurrently under different exposure settings.

Beam splitters allow for the simultaneous capture of identical LDR scenes, the best portions of which are then used to synthesize HDR videos. From a research perspective, the challenge is to achieve this higher dynamic range with a single camcorder, albeit with an unavoidable but reasonable reduction in quality that is barely perceivable.
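The "best portions" selection can be sketched simply: take each pixel from the longer exposure unless it has clipped, then normalize by exposure time so both sources report the same radiance scale. This is my illustrative simplification of exposure fusion, with normalized pixel values in [0, 1] and hypothetical names throughout:

```python
# Merging two simultaneously captured LDR exposures (as from a beam-splitter
# rig) into one HDR radiance estimate, up to a common scale factor.
def merge_exposures(short_px, long_px, short_t, long_t, clip=0.98):
    merged = []
    for s, l in zip(short_px, long_px):
        if l < clip:
            # Well exposed in the long exposure: best SNR there,
            # so use it, normalized by its exposure time.
            merged.append(l / long_t)
        else:
            # Saturated in the long exposure: fall back to the short one.
            merged.append(s / short_t)
    return merged

# First pixel clips at 1.0 in the 8x longer exposure but not in the
# short one; second pixel is well exposed in the long exposure.
radiance = merge_exposures([0.5, 0.1], [1.0, 0.8], short_t=1.0, long_t=8.0)
```

Production HDR pipelines blend the sources with smooth per-pixel weights rather than a hard clip threshold, but the normalization-by-exposure-time idea is the same.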

Moreover, it is envisioned that HDR camcorders equipped with advanced image sensors may serve this purpose in the near future.

3D capture technologies widely employ stereoscopic techniques of obtaining stereo pairs using a two-view setup. Cameras are mounted side by side, with a separation typically equal to the distance between a person's pupils.

Exploiting the idea that views from distant objects arrive at each eye along the same line of sight, while those from closer objects arrive at different angles, realistic 3D images can be obtained from the stereoscopic image pair.
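The angular difference described above is the disparity between the two views, and under the standard pinhole-stereo model it converts directly to depth. The numbers below are illustrative, chosen only to mimic an eye-like baseline:

```python
# Pinhole stereo: depth = focal_length * baseline / disparity.
# Nearer objects produce larger disparity between the two views;
# as disparity approaches zero the object is effectively at infinity.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

# 800 px focal length, 65 mm pupil-like camera separation,
# 20 px measured disparity:
z = depth_from_disparity(800, 0.065, 20)   # -> 2.6 m
```

This inverse relationship is why stereo depth resolution degrades with distance: at large ranges a one-pixel disparity error corresponds to a much larger depth error than it does close up.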

Multi-view technology, an alternative to stereoscopy, captures 3D scenes by recording several independent video streams using an array of cameras. Additionally, plenoptic cameras, which capture the light field of a scene, can also be used for multiview capture with a single main lens. The resulting views can then either be shown on multiview displays or stored for further processing.


Friday, July 8, 2011

Meters evolve to bring the smart grid home

The arrival of the truly connected digital home is one of the most anticipated events of the coming decade, promising tremendous opportunities across multiple markets. With stakeholders as diverse as consumer electronics manufacturers, connectivity solution providers, governmental agencies, utility companies and semiconductor suppliers, it is not surprising that there is a diversity of views on which direction the digital home should take.

Current stakeholders are looking to maintain existing business models, while new entrants see the connected home as an opportunity to create new revenue streams with products and services. Expect the digital home to be a battleground for years to come.

Many essential, elemental building blocks are required to achieve the connected digital home. In the smart energy realm, the shift from mechanical meters to electronic meters is well under way.

Adding remote communications and automated service applications is the popular view of smart energy. The next envisioned frontier is the implementation of time-of-use plans, based on existing infrastructure and generation facilities. That limited vision, however, misses the opportunity to revolutionize the entire system, from power generation and distribution to effective energy consumption management.

"Smart meters will allow you to actually monitor how much energy your family is using by the month, by the week, by the day, or even by the hour," President Obama said in October 2009 during a speech in support of Recovery Act funding for smart grid technology. "Coupled with other technologies, this is going to help you manage your electricity use and your budget at the same time, allowing you to conserve electricity during times when prices are highest."

That's a good starting point, but it alone will have no real impact on efficiency or consumption rates. Without fundamental innovation in energy generation and distribution, consumers' behavior is unlikely to change in any significant way. Standardization and dynamic pricing are required to make it worthwhile for energy providers and consumers to monitor usage at so granular a level.

The current, monopoly-driven infrastructure, however, is so inefficient that energy measurement and monitoring provide little opportunity for savings.

To achieve the benefits envisioned for the smart grid, full standards-based deployment is required.

A fully deployed smart grid will create a competitive energy service environment, with multiple providers and dynamic pricing, much as the telecom revolution has broadened customers' options over the past 25 years. A standards-based approach to supplying consumers' energy needs will drive investment in next-generation technologies and business models oriented to demographic profiles that match the efficiency generation profiles.

For example, people who work from home and consume a majority of their energy during the day may be offered an attractive package from an energy distributor partnered with a solar power generation company.


Tuesday, July 5, 2011

Technological advances simplify personal healthcare and peak-performance training

Over the past few decades, medical electronics has played a key role in supporting personal disease management and simple and advanced diagnostics. Examples range from blood-glucose and blood-pressure monitoring devices to fever management with an electronic thermometer. Several innovations that focus on increasing quality of life for users are being made in this space. The considerable progress made in this area has prompted developers to look beyond personal healthcare for medical electronics applications.

A number of applications are emerging that use both conventional and new medical electronics in conjunction with advanced software intelligence known as biofeedback. Devices that incorporate this technology enable users to maintain health or to train for peak performance. Biofeedback already encompasses a diverse range of applications. Simple to complex biofeedback systems and modern semiconductor devices such as ultra-low-power microcontrollers (MCUs), high-end embedded processors, and high-performance analog front ends (AFEs) can contribute to unlimited innovations in the field of biofeedback.



Personal biofeedback device categories include neurofeedback with electroencephalogram (EEG) and hemoencephalogram (HEG), heart rate variability (HRV), stress and relaxation, electromyogram (EMG) muscle-activity feedback, skin temperature and core temperature measurement, and pulse oximetry. Notice that these are reuse and new-use versions of time-tested diagnostic technologies known to the healthcare industry. An increasing number of emerging fitness products are now geared toward enhancing performance as opposed to general-purpose fitness applications.
