Most modern electronics require some form of dynamic random access memory (RAM). By far the most common today is Stub Series Terminated Logic (SSTL)-driven Double Data Rate (DDR) memory. While DDR memory is popular because it meets modern electronics' demand for large amounts of high-speed memory in a small form factor, providing power to DDR memory can pose some difficulties.
Conventional logic and I/Os run from standard system bus voltages; DDR memory devices, however, need the precision that can only be obtained with local point-of-load (POL) regulators. What's more, two of the five supply voltages must reference other voltages to provide sufficient noise margin. The VDD core voltage, the VDDQ I/O and VDDL logic voltages, a precision tracking reference (VTTREF), and a high-current-capable, mid-rail termination voltage (VTT) make up the power requirements for a DDR memory solution.
VDDQ
Most DDR memory devices use a common supply for the core (VDD), I/O (VDDQ) and logic (VDDL) voltages, commonly combined and referred to simply as VDDQ. Current standards are 2.5V (DDR1, or just DDR), 1.8V (DDR2) and 1.5V (DDR3). DDR4, currently slated for release in 2014, is expected to use a voltage between 1.05V and 1.2V, depending on how far the technology advances before the standard is released.
DDR memory's VDDQ rail is the simplest supply. With most DDR memory devices allowing a three to five percent tolerance, it can be supplied by a variety of POL power solutions. Single-chip, "on-board" memory solutions for smaller embedded applications might only require a linear regulator providing an amp or two of current. Larger multi-chip solutions or small banks of DDR modules typically require several amperes and demand a small switch-mode regulator to meet efficiency and power-dissipation needs. Larger multi-module banks, such as high-performance processing systems, large data-logging applications and testers, may demand 60 or more amperes of VDDQ current, driving designers to develop processor-core-like, multi-phase power solutions just to meet memory needs.
While VDDQ can typically be supported by a conventional converter, it generally requires pre-bias support and the ability to regulate through high-speed transients as the memory switches states.
There is no defined standard for “pre-bias support”, but it implies that the POL converter providing the VDDQ voltage is designed to prevent sinking current out of the VDDQ supply if there is any voltage already stored on the VDDQ bypass and output capacitors during VDDQ power-up. This is critical, because SSTL logic devices commonly contain parasitic and protection diodes between VDDQ and other supply voltages, which can be damaged if the VDDQ supply sinks current through them during start-up.
Additionally, high-speed memory cells switch states rapidly. A memory chip or module may transition from a low-intensity sleep, stand-by or self-refresh state to a highly demanding read-write cycle in just a few clock cycles. This places another strong demand on the POL supply providing the VDDQ voltage. Generally, VDDQ supplies are expected to transition from only 10 percent of their maximum load current to 90 percent in a microsecond or two. Faster, cycle-by-cycle transitions are supplied by an array of small, local bypass capacitors (near each VDD, VDDQ and VDDL input of the memory device), while a combination of large output capacitors and high-speed control loops handles sustained mode transitions while meeting the tight accuracy requirements of DDR memory devices.
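To get a feel for what such a load step implies, the short sketch below estimates the bulk capacitance needed to hold VDDQ within tolerance before the regulator's loop responds, using C = I x dt / dV. The 20 A rail, 1 us response delay and 3 percent droop budget are assumptions for illustration, not figures from any datasheet.

```c
#include <stdio.h>

/* Minimal sketch: bulk output capacitance needed to hold VDDQ within a
 * droop budget during a 10%-to-90% load step, before the regulator's
 * control loop responds (C = I * dt / dV). All values are assumptions. */
int main(void)
{
    double i_max  = 20.0;            /* assumed full-load VDDQ current, A */
    double i_step = 0.8 * i_max;     /* 10% -> 90% load step, A */
    double dt     = 1.0e-6;          /* assumed loop response delay, s */
    double vddq   = 1.5;             /* DDR3 rail, V */
    double dv     = 0.03 * vddq;     /* droop budget: 3% of VDDQ, V */

    double c_min = i_step * dt / dv; /* farads */
    printf("minimum bulk capacitance: about %.0f uF\n", c_min * 1e6);
    return 0;
}
```

With these numbers the answer is roughly 360 uF, which is why VDDQ rails pair large bulk capacitors with a fast control loop.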
VTTREF
Where VDDQ is a high-current supply that powers the core, I/O and logic of the memory, VTTREF is a very low-current, precision reference voltage that sets the threshold between a logic high (1) and a logic low (0) and adapts to changes in the I/O supply voltage. By providing a precision threshold that tracks the supply voltage, wider noise margins are realized than would be possible with a fixed threshold, given normal variations in termination and drive impedance. Specifications vary from manufacturer to manufacturer, but the most common is 0.49x VDDQ to 0.51x VDDQ, and VTTREF draws only tens to hundreds of microamperes.
Smaller memory systems using one to a few ICs will typically use a simple resistor divider, counting on the low leakage current of the reference inputs to minimize any variation in the threshold voltage and achieve the two percent tolerance necessary to realize the best possible noise margins.
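To see how resistor tolerance consumes that two percent budget, here is a minimal worst-case calculation for an equal-value divider; the 1 kOhm values and 1 percent tolerance are assumptions for illustration.

```c
#include <stdio.h>

/* Worst-case VTTREF/VDDQ ratio from an equal-value resistor divider,
 * showing how resistor tolerance narrows the margin to the common
 * 0.49x-0.51x VDDQ specification. Values are illustrative assumptions. */
int main(void)
{
    double r   = 1000.0; /* nominal value of each resistor, ohms */
    double tol = 0.01;   /* assumed 1% resistor tolerance */

    /* The ratio is lowest when the top resistor is high and the bottom
     * resistor is low, and highest in the opposite corner case. */
    double lo = r * (1.0 - tol) / (r * (1.0 - tol) + r * (1.0 + tol));
    double hi = r * (1.0 + tol) / (r * (1.0 + tol) + r * (1.0 - tol));

    printf("VTTREF/VDDQ worst-case range: %.4f to %.4f\n", lo, hi);
    return 0;
}
```

One percent resistors keep the ratio between about 0.495 and 0.505, inside the 0.49 to 0.51 window, with a little margin left over for leakage into the reference input.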
Larger systems using multiple memory modules, such as standard DIMMs, will typically opt for a less sensitive, active VTTREF solution: an operational amplifier (op amp) buffer after the resistor divider, or a voltage supplied by a dedicated DDR power solution (see Figure 1), such as TI's TPS51116, TPS51100 or TPS51200.
VTTREF should always be locally generated, referencing the VDDQ voltage at the source device to provide the most accurate threshold voltage and widest possible noise margin. That requires the memory’s VTTREF voltage to reference the processor’s VDDQ voltage and the processor’s VTTREF voltage to reference the memory’s VDDQ voltage.
Monday, May 30, 2011
Friday, May 27, 2011
HW/SW co-verification basics: Determining what & how to verify
The process of embedded system design generally starts with a set of requirements for what the product must do and ends with a working product that meets all of the requirements. Figure 6.1 below lists the steps in the process, with a short summary of what happens at each stage of the design.
The requirements and product specification phase documents and defines the required features and functionality of the product. Marketing, sales, engineering, or any other individuals who are experts in the field and understand what customers need and will buy to solve a specific problem, can document product requirements.
Capturing the correct requirements gets the project off to a good start, minimizes the chances of future product modifications, and ensures there is a market for the product if it is designed and built. Good products solve real needs, have tangible benefits, and are easy to use.
Figure 6.1: Embedded System Design Process

System Architecture. System architecture defines the major blocks and functions of the system. Interfaces, bus structure, hardware functionality, and software functionality are determined. System designers use simulation tools, software models, and spreadsheets to determine the architecture that best meets the system requirements. System architects provide answers to questions such as, "How many packets/sec can this router design handle?" or "What is the memory bandwidth required to support two simultaneous MPEG streams?"
Microprocessor Selection. One of the most difficult steps in embedded system design can be the choice of the microprocessor. There are an endless number of ways to compare microprocessors, both technical and nontechnical. Important factors include performance, cost, power, software development tools, legacy software, RTOS choices, and available simulation models.
Benchmark data is generally available, though apples-to-apples comparisons are often difficult to obtain. Creating a feature matrix is a good way to sift through the data and make comparisons. Software investment is a major consideration when switching processors. Embedded guru Jack Ganssle says the rule of thumb is to decide whether 70% of the software can be reused; if so, don't change the processor.
Most companies will not change processors unless there is something seriously deficient with the current architecture. When in doubt, the best practice is to stick with the current architecture.
Hardware Design. Once the architecture is set and the processor(s) have been selected, the next step is hardware design: component selection, Verilog and VHDL coding, synthesis, timing analysis, and physical design of chips and boards.
The hardware design team will generate some important data for the software team, such as the CPU address map(s) and the register definitions for all software-programmable registers. As we will see, the accuracy of this information is crucial to the success of the entire project.
Software Design. Once the memory map is defined and the hardware registers are documented, work begins to develop many different kinds of software. Examples include boot code to start up the CPU and initialize the system, hardware diagnostics, real-time operating system (RTOS), device drivers, and application software. During this phase, tools for compilation and debugging are selected and coding is done.
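As a hypothetical illustration of how the address map and register definitions flow from the hardware team into boot code and drivers, a shared C header might look like the sketch below; the peripheral, addresses and bit fields are invented for this example.

```c
/* Invented register definitions for an example UART; in a real project
 * this header would be generated from the hardware team's address map. */
#include <stdint.h>

#define UART0_BASE 0x40001000u

typedef struct {
    volatile uint32_t DATA;   /* 0x00: transmit/receive data       */
    volatile uint32_t STATUS; /* 0x04: line and FIFO status        */
    volatile uint32_t CTRL;   /* 0x08: enable bits, interrupt mask */
    volatile uint32_t BAUD;   /* 0x0C: baud-rate divisor           */
} uart_regs_t;

#define UART0 ((uart_regs_t *)UART0_BASE)

#define UART_STATUS_TX_READY (1u << 0)

/* Boot-code style usage: poll until the transmitter can accept a byte. */
static inline void uart0_putc(char c)
{
    while (!(UART0->STATUS & UART_STATUS_TX_READY))
        ; /* spin until ready */
    UART0->DATA = (uint32_t)c;
}
```

If an address or bit position in such a header disagrees with the actual hardware, the boot code and diagnostics built on it fail in ways that are hard to trace, which is why the accuracy of this hand-off matters so much.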
Hardware and Software Integration. The most crucial step in embedded system design is the integration of hardware and software. Somewhere during the project, the newly coded software meets the newly designed hardware. How and when hardware and software will meet for the first time to resolve bugs should be decided early in the project. There are numerous ways to perform this integration. Doing it sooner is better than later, though it must be done smartly to avoid wasted time debugging good software on broken hardware or debugging good hardware running broken software.
Two important concepts in integrating hardware and software are verification and validation. These are the final steps to ensure that a working system meets the design requirements.
Verification: Does It Work?
Embedded system verification refers to the tools and techniques used to verify that a system does not have hardware or software bugs. Software verification aims to execute the software and observe its behavior, while hardware verification involves making sure the hardware performs correctly in response to outside stimuli and the executing software.
The oldest form of embedded system verification is to build the system, run the software, and hope for the best. If by chance it does not work, do what you can to modify the software and hardware to get the system to work.
This practice is called testing and it is not as comprehensive as verification. Unfortunately, finding out what is not working while the system is running is not always easy. Controlling and observing the system while it is running may not even be possible.
To cope with the difficulties of debugging the embedded system, many tools and techniques have been introduced to help engineers get embedded systems working sooner and in a more systematic way. Ideally, all of this verification is done before the hardware is built. The earlier in the process problems are discovered, the easier and cheaper they are to correct. Verification answers the question, "Does the thing we built work?"
Validation: Did We Build the Right Thing?
Embedded system validation refers to the tools and techniques used to validate that the system meets or exceeds the requirements. Validation aims to confirm that the requirements in areas such as functionality, performance, and power are satisfied. It answers the question, "Did we build the right thing?" Validation confirms that the architecture is correct and the system is performing optimally.
I once worked with an embedded project that used a common MIPS processor and a real-time operating system (RTOS) for system software. For various reasons it was decided to change the RTOS for the next release of the product. The new RTOS was well suited for the hardware platform and the engineers were able to bring it up without much difficulty.
All application tests appeared to function properly and everything looked positive for an on-schedule delivery of the new release. Just before the product was ready to ship, it was discovered that the applications were running about 10 times slower than with the previous RTOS.
Suddenly, panic set in and the project schedule was in danger. The software engineers who wrote the application software struggled to figure out why performance was so much lower, since not much had changed in the application code. Hardware engineers tried to study the hardware behavior, but with logic analyzers, which are better suited to triggering on errors than to providing wide visibility over a long span of time, it was difficult even to decide where to look.
The RTOS vendor provided most of the system software, so there was little source code to study. Finally, one of the engineers had a hunch that the cache of the MIPS processor was not being properly enabled. This indeed turned out to be the case, and after the problem was corrected, system performance was confirmed. This example demonstrates the importance of validation. Like verification, it is best done before the hardware is built. Tools that provide good visibility make validation easier.
Thursday, May 26, 2011
NATO experiences of modeling military embedded systems
Introduction
This paper describes a lightweight but rigorous system and software development process that has been successfully deployed for the specification and realization of a “plug-and-play”, data-driven architecture for manned and unmanned military vehicles. The examples shown here are drawn from two NATO Industrial Advisory Group (NIAG) studies, the results of which have been adopted by a number of global defense suppliers and incorporated into defense standards.
The objectives of these studies were to:
• Develop a common architecture in the form of a domain model, for weaponized aircraft, both manned and unmanned;
• Construct a set of generic models in which aircraft platform-specific capabilities could be specified in the form of data configurations, reducing the need to make code changes as requirements evolve, or when porting to a new platform;
• Identify all Information Exchange Requirements (IERs) between the Unmanned Air Systems (UAS) nodes and between the UAS and external systems, taking into account the range of UAV capabilities and allocation decisions that might be made;
• Minimize development and through-life costs by automatically generating code and documentation from the models, so that the models become the primary maintained artefacts.
The NIAG model outlined in this paper has now been incorporated into the Office of the Secretary of Defense Unmanned Aerial Systems Control Segment (UCS) Architecture (www.ucs.architecture.org) as a mission effects service. The UCS Architecture is intended to apply to all DoD Unmanned Aircraft Systems where the vehicle weighs more than 20 lbs.
An Association for Unmanned Vehicle Systems International Announcement in August 2010 stated:
• “The UAS Control Segment (UCS) Architecture was developed with broad and open participation from industry, academia and Government and is seen as a major achievement in technology and innovative thought in the employment of unmanned systems.”, and
• “The UCS Architecture is a software architecture that is agile to evolving Service requirements and is supportive of affordable safety/airworthiness certification and affordable Information Assurance (IA) certification.”
The studies employed the principles of the Object Management Group (OMG) Model Driven Architecture® (MDA) to satisfy the objectives of the study. This methodology is based on the Unified Modeling Language (UML).
Wednesday, May 25, 2011
Advances in integration for base station receivers
The increasing demand for data services on mobile phones puts continuous pressure on base station designs for more bandwidth and lower cost. Many factors influence the overall cost to install and operate additional base stations to serve the increased demand. Smaller, lower power electronics within a macrocell base station help to lower the initial costs as well as the ongoing cost of real estate rental and electrical power consumption for the tower. New architectures such as remote radio heads (RRH) promise to decrease costs even further. Tiny picocell and femtocell base stations extend the services to areas not covered by the larger macrocells. To realize these gains, base station designers need new components with very high levels of integration and yet they cannot compromise performance.
Integration in the RF portion of the radio is especially challenging because of the performance requirements. Over a decade ago, the typical base station architecture required several stages of low-noise amplification, down-conversion to an intermediate frequency (IF), filtering and further amplification. Higher-performance mixers and amplifiers, and higher-dynamic-range analog-to-digital converters (ADCs) with higher sampling rates, have enabled designers to reduce the down-conversion chain to a single IF stage today. However, component integration remains somewhat limited. Mixers are available with buffered IF outputs, integrated balun transformers, LO switches and dividers. A device combining a mixer with a PLL for the LO represents a recent advance in integration. Dual mixers and dual amplifiers are available. As yet, no device integrates any portion of the RF chain with the ADC on the same silicon, primarily because each component requires a unique semiconductor process. The performance trade-off associated with choosing a common process has been unacceptable for the application.
In parallel, the handset radio has evolved to highly integrated baseband and transceiver ICs and integrated RF front-end modules (FEM). RF functional blocks between the transceiver and antenna include filtering, amplification and switching (with impedance matching incorporated between components where needed). The transceiver integrates the receiver ADC, the transmit DAC and the associated RF blocks. Here the performance requirement is at a level such that a common process is viable. The FEM utilizes a system-in-package (SiP) technology to integrate various ICs and passives, including multi-mode filters and the RF switches for transmit and receive. Here, a common process was not viable but integration was still required.
The performance requirements for the RF/IF, ADC and DAC components in picocell and femtocell base stations tend to be much lower than for macrocell base stations because their range, power output and number of users per sector are lower. In some cases, modified versions of handset components can be used for picocell or femtocell base stations, providing the necessary integration, low power and low cost. Here, a common semiconductor process provides a sufficient level of performance for all of the functional blocks in the signal chain.
Thursday, May 19, 2011
Wireless Ethernet: Tired of installation and configuration
When the goal is to link two devices or networks which may be remote, there is an important decision to be made up front: Do you go with a wired solution that guarantees speed but is expensive and potentially damage-prone, or do you opt for a wireless solution, which may cost less, but could slow down your network?
Suppose you choose wireless, and purchase two 802.11n routers and two high-gain antennas plus specialty RF cables to create a wireless Ethernet bridge. What might you encounter when you begin to implement the solution?
#1: Is there power at the site?
Installing outlets at one or both sites could be a fun several hours of do-it-yourself work, or expensive (approximately $100 per hour) if you hire a union electrician. Don't forget to add the roughly $100 of materials (enclosure, outlet, wire, conduit, etc.) on each end.
#2: Configuring is easy, right?
Configuring your bridge means wading through access point/router network configuration fields, assigning a static IP, creating an SSID, and establishing security settings and naming conventions. If you're a glutton for punishment and want to take this on, it'll probably take 45 minutes to 2 hours, depending on your technical expertise. If not, paying someone else to do it will be in the $100-per-hour range.
#3: Hmmm. This antenna should work...
You've gone through all the settings to turn your access point into a bridge. The next challenge is to make the antennas work. Your access point may not be outdoor-rated, so you'll have to weatherproof it to keep it close to the antenna. You'll also need to identify and purchase the right mounting bracket and accessories.
Many designers, however, are either light on network expertise, or simply don't want to spend time struggling through configuring and installing a wireless solution. What if you could wirelessly link remote sites, support mobile devices, overcome physical obstacles and distances, and still maintain the speed of your network without these pains?
A viable solution is an out-of-the-box, point-and-play wireless Ethernet bridge called GhostBridge, which leverages 802.11n chipsets, multiple-input multiple-output (MIMO) radios, and Power over Ethernet (PoE), all pre-configured, with a built-in sector antenna and a pole mount included.
Here's how to take care of the basic configuration pains with GhostBridge:
Having 24V PoE adapters for both AC and DC power gives users the freedom to choose installation sites, and no power outlets are required. The Cat5 cable carries both data and power: GhostBridge units inject 24VDC onto the unused pairs. A LAN2 port bridges the 24V from the main port so that the secondary port can also output 24VDC on its unused pairs.
The GhostBridge units are pre-configured as a secure bridge. There is an available web server, but it's really not necessary. Users simply mount the units, plug them into the network using the PoE adapters, and point them at each other; in less than 20 seconds they automatically pair up into a transparent, secure, high-speed (up to 150 Mbps) Ethernet link. The bridge is also pre-configured with 128-bit WPA2 security.
An 80-degree antenna with 15-dBi gain is fully integrated into the housing and functions as a sector antenna. With 80 degrees of coverage, it's easy to line up the units even without a clear line of sight. A pole mount is also part of the molded, outdoor-rated (IP54 for water and dust), UV-stabilized plastic enclosure. Users can easily mount it onto a pole or use an optional wall-mount kit.
A 5 GHz radio built on 802.11n 2x2 MIMO technology provides long-range (up to 15 km, about 9 miles) connectivity between remote stations and a central office. Alternatively, users can easily bridge two networks or hard-to-reach nodes together. A PoE pass-through Ethernet port allows remote devices (an IP camera, for example) to be connected to the LAN.
Case-in-Point
When Walter Horigan, president of Vortechs Automation, a Huntingdon Valley, PA-based system integrator, needed high-speed Ethernet/Internet connectivity in a remote building across state park land, he looked to GhostBridge as a solution.
Horigan installed one GhostBridge unit locally and the other on his remote building, and then pointed them at each other through medium-dense foliage.
"I went from Dixie cups and strings to super-fast Ethernet with the GhostBridge punching through 700 ft. of trees to beam the signal to my remote site," said Horigan. "The GhostBridge was wicked fast, pumping the full 60 Mbps available from the ISP on my first test out of the box. The cat5 cable carrying both data and power saved me from having to install power, which would've easily taken 5 to 6 hours, and cost about $100 of materials, for each site," said Horigan.
Horigan estimates that the solution cost approximately a dollar per foot, far cheaper and easier than laying cable, not to mention that laying a cable across a state park would have potentially invited some trouble. "Installing GhostBridge was as easy as hanging a picture," said Horigan.
The GhostBridge includes a pre-configured base station and node, two AC PoE adapters with power cords, two DC PoE adapters with barrel plug adapters, and four heavy-duty cable ties.
Tuesday, May 17, 2011
MOST and Ethernet: Payload efficiency and network considerations
MOST technology dominates today's upper-class infotainment systems thanks to its support for high-bandwidth data. Fostered by the integration of consumer devices and the worldwide success of the Internet Protocol (IP), research into using IP as the common network layer in an automotive environment has already started. The results presented in this paper were prepared within the publicly funded project SEIS (1). In combination with IP, Ethernet is the most commonly used physical layer.
In fact, the use of a cost-efficient, automotive-qualified Ethernet solution is already scheduled for series production (2), so the competition between MOST and Ethernet is fully under way. This paper focuses on one specific part of that competition: the payload efficiency of MOST is compared with that of certain Ethernet AVB transport protocols (3), since Ethernet AVB defines provisions for achieving Quality of Service (QoS) within an Ethernet network.
However, looking at payload efficiency alone is not sufficient: MOST is a bus system, while today's Ethernet is a switched network, which multiplies the system-wide available bandwidth by the number of point-to-point links in the system. Hence, network utilization is also discussed in this paper.
Description of problem/challenge
Automotive infotainment networks are becoming more open to non-automotive devices like mobile phones and are supporting IP/Web based applications. High-definition video and camera based applications create higher data rates that already need to be handled today.
The bandwidth requirements of certain applications, compared to the bandwidth offered by networks, are shown in the figure below. The continuously increasing bandwidth requirement is a clearly visible trend.
Figure 1: Bandwidth requirements and network capabilities over time.
One of the core requirements for a network is that it will deliver application data reliably and will provide reasonable response times between any nodes. In order to support QoS in asynchronous Ethernet networks, AVB extends the standard with three additional sub-standards.
IEEE 802.1Qav (4) uses methods described in IEEE 802.1Q to separate time critical and non-time critical traffic into different traffic classes. Output port buffers are separated into different queues, each allocated to a specific class. This ensures a separation of low priority traffic from high priority traffic. Moreover, all output ports have a credit-based shaping mechanism to prevent bursty behavior.
IEEE 802.1Qat (5) defines a protocol to signal reservation requests and reserve resources for media streams. This is actually implemented by allocating buffers within switches.
IEEE 802.1AS (6) is responsible for the precise time synchronization of the network nodes to a reference time. IEEE 802.1AS synchronizes distributed local clocks, referred to as slave clocks, with a reference, achieving an accuracy of better than one microsecond. Additionally, transport protocols such as IEEE P1722 (7) and IEEE P1733 (8) are used for the actual transfer of the media streams.
Payload efficiency PE is defined here as the ratio between the payload P and the effectively sent data D:

PE = P / D (Eq. 1)
The available data rate of a network for media streams is defined as B.
For MOST150, the effectively sent data is the content itself, without additional headers, so P_MOST150 is generally equal to D_MOST150. MOST150 has an actual line speed of 147.5 Mbit/s at a sampling rate of 48 kHz. MOST frames also carry administrative data, including the control channel, which is not related to the streaming data; the maximum data rate available for streaming is therefore reduced to B_MOST150 = 142.9 Mbit/s.
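For comparison, the sketch below applies Eq. 1 to a stream carried in IEEE 1722 frames over Ethernet. The per-frame framing overheads are the usual 802.3 figures; the 24-byte AVTP stream header is an assumption for illustration, since the exact header size depends on the stream format.

```c
#include <stdio.h>

/* Payload efficiency PE = P / D (Eq. 1) for an IEEE 1722 stream over
 * Ethernet, counting all per-frame overhead on the wire. */
#define PREAMBLE_SFD  8  /* preamble + start-of-frame delimiter */
#define MAC_HEADER   14  /* destination MAC, source MAC, EtherType */
#define VLAN_TAG      4  /* 802.1Q tag carrying the AVB priority */
#define AVTP_HEADER  24  /* assumed IEEE 1722 stream header */
#define FCS           4  /* frame check sequence */
#define IFG          12  /* inter-frame gap, in byte times */

static double payload_efficiency(double payload_bytes)
{
    double d = payload_bytes + PREAMBLE_SFD + MAC_HEADER + VLAN_TAG
             + AVTP_HEADER + FCS + IFG;
    return payload_bytes / d; /* Eq. 1 */
}

int main(void)
{
    /* Short, low-latency frames pay a heavy per-frame penalty; frames
     * near the full MTU approach the MOST150 case, where PE is ~1. */
    double sizes[] = { 64.0, 256.0, 1024.0, 1476.0 };
    for (int i = 0; i < 4; i++)
        printf("payload %4.0f bytes -> PE = %.3f\n",
               sizes[i], payload_efficiency(sizes[i]));
    return 0;
}
```

With 64-byte payloads the efficiency drops below 0.5, while 1476-byte payloads reach about 0.96, which is why the frame size chosen by the transport protocol dominates this comparison.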
Tuesday, May 10, 2011
Mobile RF Design: Pros and cons of SMPS using a DC/DC converter
Whether you are sending e-mails with your smart phone, playing a game with a friend on your wireless tablet, or simply calling home, each wireless device represents a critical link in our connected world. With wireless connectivity being extended into mobile broadband by 3G and 4G technologies, the challenges that RF engineers face are rapidly evolving.
These challenges are overcome by device designers and engineers who are able to craft the RF section of each device to fit specific requirements and help create an optimal user experience. Before discussing battery life and power techniques for the power amplifier (PA), it is important to review the foundation of a working RF section. Therefore, the first step in any design is to ensure that the device is able to establish and maintain a high-quality wireless connection.
Key Signal Performance Requirements
The three main power amplifier criteria associated with connection quality are linearity, gain, and antenna power. Linearity is the ability of the power amplifier to accurately reproduce the frequency and amplitude variation in the RF input. This is an important specification, since it helps determine the ability of the device to maintain a robust wireless connection. The adjacent channel leakage ratio (ACLR) is often used as a measure of power amplifier linearity for modern wireless systems (Figure 1), such as WCDMA, HSPA and HSPA+. ACLR is defined as the ratio of the power in the adjacent channel to the power in the user channel.
Figure 1
Linearity specifies the power amplifier's ability to drive the output in proportion to the input, and gain is the slope of this proportional curve. Fundamentally, gain represents the ratio of output power to input power. The power amplifier must take the output of the transceiver, typically less than +3 dBm, and amplify the signal without distortion.
Having selected a power amplifier that provides a linear signal with sufficient gain, RF designers must ensure that the end result is sufficient antenna power. The signal from the transceiver must pass through the power amplifier, switches, and filters, before reaching the antenna. Therefore, the power amplifier gain should provide sufficient RF power to overcome loss in other selected components and ensure antenna power of +24 dBm.
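The arithmetic behind that statement is a simple dB budget. The sketch below checks the antenna power for invented gain and loss figures and also evaluates the ACLR definition given above; only the +3 dBm transceiver output and the +24 dBm target come from the text.

```c
#include <stdio.h>
#include <math.h>

/* Back-of-envelope transmit chain budget in dB. The PA gain and the
 * switch/filter losses are assumed values for illustration. */
int main(void)
{
    double p_tx   = 3.0;  /* transceiver output, dBm (upper bound cited) */
    double g_pa   = 26.0; /* assumed PA gain, dB */
    double l_sw   = 0.8;  /* assumed switch insertion loss, dB */
    double l_filt = 1.5;  /* assumed filter/duplexer loss, dB */

    double p_ant = p_tx + g_pa - l_sw - l_filt;
    printf("antenna power: %.1f dBm (target: +24 dBm)\n", p_ant);

    /* ACLR: power in the adjacent channel relative to the user channel. */
    double p_ch_mw  = 250.0;  /* user channel power, mW (about +24 dBm) */
    double p_adj_mw = 0.005;  /* illustrative adjacent-channel power, mW */
    printf("ACLR: %.1f dB\n", 10.0 * log10(p_adj_mw / p_ch_mw));
    return 0;
}
```

Here 26.7 dBm of antenna power leaves a couple of dB of margin over the +24 dBm target; if the budget came out below target, either the PA gain or the post-PA losses would have to change.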
Current Consumption is Important
Achieving these key RF performance specifications is necessary, but not sufficient, to meet the overall performance demands of the device. Users want a stable connection that delivers the maximum throughput available to their devices; however, they also want battery life that matches their usage patterns. Therefore, once a properly designed RF section achieves the required signal performance, the next challenge is meeting efficiency targets.
Mobile phones and other wireless devices operate at RF power levels that are determined both by the signal-to-noise ratio in their environment and by the requirements of the network to which they are attached. Typical operating power levels range from +24 dBm while the phone is searching for a network connection to -20 dBm or less while operating in regions of a cell with excellent SNR. A power distribution profile like the one published by the GSM Association in its DG09 procedure for measuring battery life is a good guide to the relative amounts of time that a WCDMA device spends at different power levels on a typical network [1]. The DG09 profile shows that mobile devices typically operate across a wide range of output power levels, with the greatest probability at low-range to mid-range power. While multi-function devices, such as smart phones, tend to operate at mid-range power levels, data devices tend to operate at higher output power levels.
Figure 2
With this power distribution defined, it is clear that optimizing the power amplifier’s current consumption at multiple power levels is important to prolonging battery life.
Power amplifiers used in early CDMA and WCDMA mobile phone designs had only one or two power modes. They were designed to be efficient at high output levels, but suffered from reduced efficiency at lower (backed-off) power levels.
Using a DC/DC converter, or switched-mode power supply (SMPS), is a proven approach to reducing battery current in this kind of simple PA. The concept is to use a power supply that incorporates a switching regulator to adjust the supply feeding the power amplifier. The most common SMPS method is the step-down DC/DC converter, which converts the direct-current (DC) voltage from the battery of the mobile device to the lower DC voltage required by the power amplifier. The voltage supplied to the power amplifier is adjusted to the power requirements of the device in each environment. Therefore, if the power amplifier does not need to operate at full output power, the switched-mode DC/DC converter reduces its supply voltage.
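Here is a minimal sketch of the saving, modeling the PA as drawing a fixed current from its supply rail at a given backed-off output; the rail voltages, PA current and converter efficiency are all illustrative assumptions.

```c
#include <stdio.h>

/* Battery current with the PA tied directly to the battery versus fed
 * from a buck (step-down) DC/DC converter at backed-off output power. */
int main(void)
{
    double v_batt = 3.7;  /* Li-ion battery voltage, V */
    double v_pa   = 1.2;  /* reduced PA rail at low output power, V */
    double i_pa   = 0.10; /* assumed PA supply current, A */
    double eff    = 0.92; /* assumed converter efficiency */

    /* Direct connection: the battery sources the PA current as-is. */
    double i_direct = i_pa;

    /* Buck converter: battery power = PA power / efficiency, so the
     * battery current scales by v_pa / (eff * v_batt). */
    double i_buck = (v_pa * i_pa) / (eff * v_batt);

    printf("battery current: %.0f mA direct, %.0f mA via DC/DC\n",
           i_direct * 1e3, i_buck * 1e3);
    return 0;
}
```

Under these assumptions the battery current drops from 100 mA to about 35 mA, which is the essence of the SMPS approach: at backed-off power the PA does not need the full battery voltage, and the converter trades the excess voltage for reduced battery current.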
Information is shared by www.irvs.info
These challenges are overcome by device designers and engineers who are able to craft the RF section of each device to fit specific requirements and help create an optimal user experience. Before discussing battery life and power techniques for the power amplifier (PA), it is important to review the foundation of a working RF section. Therefore, the first step in any design is to ensure that the device is able to establish and maintain a high-quality wireless connection.
Key Signal Performance Requirements
The three main power amplifier criteria associated with connection quality are linearity, gain, and antenna power. Linearity is the ability of the power amplifier to accurately reproduce the frequency and amplitude variation in the RF input. This is an important specification, since it helps determine the ability of the device to maintain a robust wireless connection. The adjacent channel leakage ratio (ACLR) is often used as a measure of power amplifier linearity for modern wireless systems (Figure 1), such as WCDMA, HSPA and HSPA+. ACLR is defined as the ratio of the power in the adjacent channel to the power in the user channel.
Figure 1
Linearity specifies the behavior of the power amplifier to drive the output in a proportional manner to the input, and gain is the slope of this proportional curve. Fundamentally, gain represents the ratio of output to input power. The power amplifier must take the output of the transceiver, typically less than +3 dBm and amplify the signal without distortion.
Having selected a power amplifier that provides a linear signal with sufficient gain, RF designers must ensure that the end result is sufficient antenna power. The signal from the transceiver must pass through the power amplifier, switches, and filters, before reaching the antenna. Therefore, the power amplifier gain should provide sufficient RF power to overcome loss in other selected components and ensure antenna power of +24 dBm.
Current Consumption is Important
Achieving these key RF performance specifications is necessary, but it’s not sufficient in meeting the overall performance demands of the device. Users want to have a stable connection that allows for the maximum throughput available for their devices, however, they also want to ensure that battery life is able to meet their usage patterns. Therefore, with a properly designed RF section that is able to achieve optimal signal performance, the next challenge is meeting efficiency targets.
Mobile phones and other wireless devices operate at RF power levels that are determined both by the signal-to-noise ratio in their environment and the requirements of the network where they are attached. Typical operating power levels range from +24dBm while the phone is searching for a network connection to -20dBm or less while operating in regions of a cell with excellent SNR. A power distribution profile like the one published by the GSM Association in their DG09 procedure for measuring battery life is a good guide to the relative amounts of time that a WCDMA device spends at different power levels on a typical network [1]. The DG09 profile shows that mobile devices typically operate across a wide range of output power levels, with the greatest probability at low-range to mid-range power. While multi-function devices, such as smart phones, tend to operate at mid-range power levels, data devices tend to operate at higher output power levels.
Figure 2
With this power distribution defined, it is clear that optimizing the power amplifier’s current consumption at multiple power levels is important to prolonging battery life.
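One way to see why multi-level optimization matters is to weight the PA's battery current at each output power level by the fraction of time spent there, DG09-style. The sketch below does exactly that; both the time weights and the current figures are placeholders, not DG09 data or data-sheet values:

#include <stdio.h>

struct operating_point {
    double p_out_dbm;   /* PA output power level            */
    double weight;      /* fraction of time at this level   */
    double i_batt_ma;   /* battery current at this level    */
};

int main(void)
{
    /* Placeholder profile: real weights come from DG09 and real
     * currents from the PA data sheet.                           */
    const struct operating_point profile[] = {
        { 24.0, 0.05, 450.0 },
        { 16.0, 0.25, 180.0 },
        {  8.0, 0.40,  90.0 },
        {  0.0, 0.30,  60.0 },
    };
    const size_t n = sizeof profile / sizeof profile[0];
    double i_avg_ma = 0.0;

    for (size_t i = 0; i < n; i++)
        i_avg_ma += profile[i].weight * profile[i].i_batt_ma;

    printf("weighted average battery current = %.1f mA\n", i_avg_ma);
    return 0;
}

With these placeholder numbers, the low- and mid-power points dominate the average, which is why improving backed-off efficiency pays off more than optimizing full power alone.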
Power amplifiers used in early CDMA and WCDMA mobile phone designs had only one or two power modes. They were designed to be efficient at high output levels, but suffered from reduced efficiency at lower (backed-off) power levels.
Using a DC-DC converter or switched-mode power supply (SMPS) is a proven approach to reducing battery current in this kind of simple PA. The idea is to place a switching regulator between the battery and the power amplifier so that the amplifier's supply rail can be adjusted. The most common form is the step-down DC/DC converter, which converts the battery voltage of the mobile device to the lower DC voltage the power amplifier requires. The voltage supplied to the power amplifier is then matched to its output-power requirement: when the amplifier does not need to operate at full output power, the converter reduces the rail, and with it the power drawn from the battery.
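A minimal sketch of the idea, assuming the control firmware maps the requested output power to a supply rail that the DC/DC converter then tracks; the breakpoints and voltages are hypothetical, since a real table comes from characterizing the PA across power, frequency and temperature:

#include <stdio.h>

/* Map requested PA output power to a supply voltage (hypothetical). */
static double pa_supply_v(double p_out_dbm)
{
    if (p_out_dbm > 16.0) return 3.4;  /* near full power: full rail */
    if (p_out_dbm >  8.0) return 2.0;  /* mid power: reduced rail    */
    return 1.2;                        /* backed off: minimum rail   */
}

int main(void)
{
    printf("Vcc at +24 dBm: %.1f V\n", pa_supply_v(24.0));
    printf("Vcc at  +0 dBm: %.1f V\n", pa_supply_v(0.0));
    return 0;
}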
Friday, May 6, 2011
2G module with integrated A-GPS receiver claims to be world's smallest
The GE864-GPS from Telit Wireless Solutions is a quad-band module that claims to be the smallest, most efficient GSM/GPRS M2M module on the market with an embedded GPS receiver.
In a compact BGA form factor, the module is especially suited for highly integrated positioning solutions in automotive, tracking or security applications requiring 2G network connectivity in a very small footprint.
The GE864-GPS shares an identical form factor and is pin-to-pin compatible with Telit's successful GE864 family, making it the smallest GSM/GPRS module on the market with full 48-channel A-GPS functionality. It combines the high performance of the company's proven GSM/GPRS core technology with the latest SiRFstarIV™ high-sensitivity single-chip A-GPS receiver.
The assisted-GPS receiver features an optimized power-management function, which allows it to maintain hot-start capability at minimal energy consumption while offering position accuracy of better than 2.5 m. Moreover, the GE864-GPS supports Satellite-Based Augmentation Systems such as WAAS, EGNOS, MSAS and GAGAN.
With a dedicated power-supply circuit, the GPS chipset can work independently of the GSM chipset and continues to operate when the cellular part is in power-saving mode or switched off. This is very helpful for battery-operated solutions that activate the communication function only upon triggering events, such as location changes.
The GPS receiver is equipped with flash-based memory, so its firmware can be upgraded. The ultra-small Ball-Grid-Array package (30 x 30 x 2.9 mm) allows the end application to have a very low profile and small overall dimensions, facilitating the design of extremely compact location-based-services solutions. Since connectors are eliminated, cost is significantly reduced compared to conventional mounting technologies.
These features, combined with the embedded Python™ script interpreter, result in a very cost-effective and well-equipped platform, quite capable of becoming an integrated solution for the complete customer application. Additional features, including jamming detection, an integrated TCP/IP protocol stack, and Easy Scan®, offer valuable benefits to the application developer without adding cost.
All the company's modules support over-the-air firmware updates via Premium FOTA Management. By embedding RedBend's vCurrent® agent, a proven, battle-tested technology powering hundreds of millions of cellular handsets worldwide, Telit can update its products by transmitting only a delta file representing the difference between one firmware version and another. The FOTA service is available for the GSM firmware and will be available for the GPS firmware in the future.
Wednesday, May 4, 2011
Adopting C programming conventions
Today's competitive world forces us to introduce products at an increasingly faster rate due to one simple fact of business life: having a product out first may mean acquiring a major share of the market. One way to help make this possible is to ensure that the mechanics of writing code become second nature. All project members should clearly understand where each file resides on the company's file server, what each file should be named, what style to use, and how to name variables and functions.
The topic of coding conventions is controversial because we all have our own ways of doing things, and one way is not necessarily better than another. However, it's important that all team members adopt a single set of rules and that these rules are followed religiously and consistently by all participants. The worst thing you can do is to leave each programmer to do his or her own thing; such an undisciplined approach will certainly lead to chaos. When you consider that close to half of the development effort of a software-based system comes after its release, why not make the sometimes unpleasant task of supporting code less painful?
In this paper, I'll share some of the conventions I've been using for years and I hope that you'll find some of them useful for your own organization. I urge you to document your own conventions because it makes life easier for everyone, especially when it comes to supporting someone else's code.
Directory structures
One of the first rules to establish is how files are organized. Do you place all the source files in a single directory or do you create different directories for different pieces? I like to use a structure similar to that shown in Table 1.
Each product (such as ProdName) has its own directory under PRODUCTS\. If a product requires more than one microprocessor, then each has its own directory under ProdName\. All products that contain a microprocessor have a SOFTWARE\ directory. The SOURCE\ directory contains all the source files that are specific to the product. If you have highly modular code and strive to reuse as much code as possible from product to product, the SOURCE\ directory should contain only the 10% to 20% of the software that makes the product unique; the remaining 80% to 90% of the code should be located in the \SOFTWARE directory.
The DOC\ directory contains documentation files specific to the software aspects of the product (such as specifications, state diagrams, flow diagrams, and software descriptions). The TEST\ directory contains product build files (such as batch files, make files, and IDE projects) for creating a test version of the product. A test version builds the product from the source files located in the product's SOURCE\ directory, any reusable code (building blocks), and any test-specific source code you may want to include to verify the proper operation of the application.
The latter files generally reside in the TEST\ directory because they don't belong in the final product. The PROD\ directory contains build files for retrieving any released version of your product. The other directories under ProdName\ show where other disciplines within your organization can store their product-related files. In fact, this scheme makes it easy to back up or archive all the files related to a given product, whether or not they're related to software.
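Since Table 1 itself is not reproduced here, the layout described above might be sketched as follows; ProdName and the non-software directory names are placeholders:

\SOFTWARE\                  reusable building blocks (80% to 90% of the code)
\PRODUCTS\
    ProdName\
        SOFTWARE\
            SOURCE\         the 10% to 20% of code unique to the product
            DOC\            specifications, state and flow diagrams
            TEST\           build files for test versions of the product
            PROD\           build files for released versions
        HARDWARE\           example of another discipline's product files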
Tuesday, May 3, 2011
Android, Linux & Real-time Development for Embedded Systems
At first sight, Android appears to be (yet) another operating system for smart phones, joining all of the others that are vying for supremacy, such as Symbian, Windows Mobile, WebOS and various flavors of Linux.
However, it would be better to think of Android as being a software platform for the construction of smart phones, as it is freely available and highly configurable. To be more precise, it is a software platform for building connected devices.
Android is an application framework on top of Linux. We will look at the details of the layers of the framework shortly. It is supplied as open source code, but does not bind the user with the constraints of the GPL: there is no requirement for developers to make public any code developed using Android.
Another way to look at Android is to take a historical perspective. In the early days of the PC, the operating system was DOS. A programmer writing applications for the PC had some significant challenges, as the services provided by the operating system were quite limited.
For example, the developer of a spreadsheet application would need to provide drivers for every possible printer that users might wish to deploy. This was a significant development and support overhead.
In due course, the release and wide acceptance of Windows (version 3 onwards) addressed this problem very effectively. In many ways, Android does for Linux what Windows did for DOS: it provides an intermediary layer between the application program and the operating system.
Android History
As Android seems to be a hot topic of discussion at this time, it is hard to remember that it is quite new. It really started when Google acquired Android Inc. in 2005. They established the Open Handset Alliance and announced Android in 2007, with the first handset appearing the following year. The source code was released at that time.
Android has now reached version 2.1 and enjoys widespread support as more devices, mainly handsets, have been announced. The latest, and certainly the most talked about, is Google's own Nexus One device.
Android Architecture
An Android system consists essentially of five software layers: 1) Linux; 2) Libraries; 3) Runtime; 4) Application Framework; 5) Applications
Linux. The bottom layer is the Linux OS itself: version 2.6.3x with 115 patches, to be precise. This provides process and memory management, security, networking and an array of relevant device drivers.
Libraries. A set of libraries resides on top of the OS. This includes Google's version of libc, called bionic, along with media and graphics libraries and a lightweight database, SQLite.
Runtime. Alongside the libraries, on top of the OS, is the Android runtime: the Dalvik VM. This is not strictly a Java virtual machine, though it serves that purpose. It was designed specifically for Android and is register-based to conserve memory and maximize performance. A separate instance of the Dalvik VM is used to execute each Android application. The underlying OS is used for memory management and multi-threading.
Application Framework. This layer provides a number of services to applications: views, content providers and resource, notification and activity managers. These are all implemented as Java classes. Any application can "publish" its capabilities for use by other applications.
Applications. A number of applications are routinely distributed with Android, which may include email, SMS, calendar, contacts, and a Web browser. All applications have the same status; the supplied ones are not "special".
Applications are generally written in Java and processed with the standard Java tools; a converter is then used to translate the compiled output to Dalvik VM bytecodes.