Tuesday, April 26, 2011

Architecting the smart grid for security

The smart grid is an important, emerging source of embedded systems with critical security requirements. One obvious concern is financial: for example, attackers could manipulate metering information and subvert control commands to redirect consumer power rebates to false accounts. However, smart grids also imply the addition of remote connectivity from millions of homes to the back-end systems that control power generation and distribution. The ability to affect power distribution has obvious safety ramifications, and the potential to impact a large population increases the attractiveness of the target.

These back-end systems are protected by the same security technologies (firewalls, network access authentication, intrusion detection and prevention systems) that today defend banks and governments against Internet-borne attacks. Yet successful intrusions into such systems are a daily occurrence. The smart grid, if not architected properly for security, may provide hostile nation states and cyber terrorists with an attack path from the comfort of their living rooms. Every embedded system on this path – from the smart appliance to the smart meter to the network concentrators – must be secure.

The good news is that utilities and their suppliers are still early in the development of security strategy and network architectures for smart grids; a golden opportunity now exists to build security in from the start.

Sophisticated attackers

The increasing reliance on embedded systems in commerce, critical infrastructure, and life-critical functions makes them attractive to attackers. Embedded industrial control systems managing nuclear reactors, oil refineries, and other critical infrastructure present an opportunity for widespread damage. To get an idea of the kinds of sophisticated attacks we can expect on the smart grid, look no further than the recent Stuxnet attack on nuclear power infrastructure.

Stuxnet infiltrated Siemens process control systems at nuclear power plants by first subverting the Microsoft Windows workstations operators use to configure and monitor the embedded control electronics (Figure 1). The Stuxnet worm is likely the first malware to directly target embedded process control systems and illustrates the incredible damage potential in modern smart grid security attacks.



Figure 1 - Stuxnet infiltration of critical power control system via operator PC

information is shared by www.irvs.info

Monday, April 25, 2011

Facilitating at-speed test at RTL

Production testing for complex chips usually involves multiple test methods. Scan-based automatic test pattern generation (ATPG) for the stuck-at defect model has been the standard for many years, but experience as well as a number of theoretical analyses have shown that the stuck-at fault model is incomplete. Many devices pass high coverage stuck-at tests and still fail to operate in system mode.

Analysis of the defective chips often reveals that speed or timing problems are the culprits. At 90nm and smaller process nodes, the percentage of timing-related defects is so high that static testing is no longer considered sufficient. Functional tests have been used to check for at-speed operation, but generating functional at-speed test patterns is difficult, and running this volume of tests on the automatic test equipment (ATE) is expensive. As an alternative, scan test has been adapted to detect timing-related defects. Like standard stuck-at scan tests, high-coverage at-speed scan test vectors can be automatically generated by ATPG tools. Manufacturing testing of deep-submicron designs now routinely includes "at-speed" tests along with stuck-at tests.

Little has been done so far to make front-end designers aware of at-speed test solutions at the register transfer level (RTL) of abstraction. This document presents basic concepts and issues for at-speed testing, and demonstrates the at-speed coverage estimation and diagnosis capability built into the SpyGlass-DFT DSM product for RTL designers and test engineers.


Tuesday, April 19, 2011

Mobile WiMAX system operation

This chapter provides a detailed description of the operation of IEEE 802.16m entities (i.e., mobile station, base station, femto base station, and relay station) through use of state diagrams and call flows. An attempt has been made to characterize the behavior of IEEE 802.16m systems in various operating conditions such as system entry/re-entry, cell selection/reselection, intra/inter-radio access network handover, power management, and inactivity intervals.

This chapter describes how the IEEE 802.16m system entities operate and which procedures and protocols are involved, without going into the implementation details of each function or protocol. The detailed algorithmic description of each function and protocol will be provided in the following chapters. Several scattered call flows and state diagrams were used in reference [1] to demonstrate the behavior of the legacy mobile and base stations, making it difficult to understand the system behavior coherently.

The IEEE 802.16 standards have not generally been developed with a system-level view; rather, they specify components and building blocks that must be integrated to build a working, performant system. An example is the mobile WiMAX profiles, where a specific set of IEEE 802.16-2009 standard features (one out of many possible configurations) was selected to form a mobile broadband wireless access system.

Detailed state transition diagrams for the IEEE 802.16m entities, comprising the states, the constituent functions and protocols within each state, and the inter-state transition paths conditioned on certain events, would aid the understanding and implementation of the standards specification [2–5]. They further help in understanding the behavior of the system without struggling with the distracting details of each constituent function.

State diagrams are used to describe the behavior of a system. They describe the possible states of a system and the transitions between them as certain events occur. Strictly speaking, the system described by a state diagram must be composed of a finite number of states; even when it is not, the state diagram may still represent a reasonable abstraction of the system.

There are many forms of state diagrams, which differ slightly and have different semantics. State diagrams can be used to graphically represent finite state machines (i.e., models of behavior composed of a finite number of states, transitions between those states, and actions). A state is defined as a finite set of procedures or functions that are executed in a unique order. In a state diagram, each state may have inputs and outputs, and deterministic transitions to other states (or back to the same state) happen based on certain conditions.
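The finite state machine concept above can be sketched in a few lines of code. This is a minimal illustrative model only; the state and event names below are hypothetical and are not the normative IEEE 802.16m state names.

```python
# Minimal finite state machine sketch: a finite set of states and
# deterministic (state, event) -> state transitions. All names here
# are illustrative assumptions, not from the 802.16m specification.

TRANSITIONS = {
    ("INITIALIZATION", "cell_selected"):  "ACCESS",
    ("ACCESS",         "entry_complete"): "CONNECTED",
    ("ACCESS",         "entry_failed"):   "INITIALIZATION",
    ("CONNECTED",      "inactivity"):     "IDLE",
    ("IDLE",           "traffic"):        "ACCESS",
}

class StateMachine:
    def __init__(self, initial="INITIALIZATION"):
        self.state = initial

    def on_event(self, event):
        """Apply an event; an unknown (state, event) pair leaves the state unchanged."""
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

ms = StateMachine()
ms.on_event("cell_selected")    # INITIALIZATION -> ACCESS
ms.on_event("entry_complete")   # ACCESS -> CONNECTED
ms.on_event("inactivity")       # CONNECTED -> IDLE
print(ms.state)                 # IDLE
```

Note how the table makes every transition explicit and deterministic, which is exactly the property the text identifies as imperative for unambiguous system behavior.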

In this chapter, the notion of mode is used to describe a sub-state or a collection of procedures/protocols that are associated with a certain state. The unique definition of states and their corresponding modes and protocols, and internal and external transitions, is imperative to the unambiguous behavior of the system. Also, it is important to show the reaction of the system to an unsuccessful execution of a certain procedure. The state diagrams described in the succeeding sections are used to characterize the behavior of IEEE 802.16m system entities.

This chapter provides a top-down systematic description of IEEE 802.16m entities' state transition models and corresponding procedures, starting at the most general level and working toward the details or specifics of the protocols and transition paths. An overview of 3GPP LTE/LTE-Advanced states and user equipment state transitions is further provided to enable readers to contrast the corresponding terminal and base station behaviors, protocols, and functionalities. Such contrast is crucial in the design of inter-system interworking functions.


Friday, October 8, 2010

Digital Electronics

Introduction

The first single-chip microprocessor was introduced in 1971 by Intel Corporation. Called the Intel 4004, it was the first single-chip CPU ever built, and we can say it was the first general purpose processor. Today the terms microprocessor and processor are synonymous. The 4004 was a 4-bit processor, capable of addressing 1K of data memory and 4K of program memory, and was meant to be used in a simple calculator. The 4004 had 46 instructions, using only 2,300 transistors in a 16-pin DIP, and ran at a clock rate of 740 kHz (eight clock cycles per CPU cycle of 10.8 microseconds). In 1975, Motorola introduced the 6800, a chip with 78 instructions and probably the first microprocessor with an index register. In 1979, Motorola introduced the 68000; although it had internal 32-bit registers and a 32-bit address space, its external bus was still 16 bits wide to keep hardware costs down. Meanwhile, in 1976, Intel had designed the 8085, with added instructions to enable and disable three added interrupt pins (and the serial I/O pins).

They also simplified the hardware so that it used only a +5V supply, and added clock-generator and bus-controller circuits on the chip. In 1978, Intel introduced the 8086, a 16-bit processor that gave rise to the x86 architecture. It did not contain floating-point instructions; in 1980 the company released the 8087, the first math co-processor it had developed. Next came the 8088, the processor for the first IBM PC. Even though IBM engineers at the time wanted to use the Motorola 68000 in the PC, the company already had the rights to produce the 8086 line (by trading bubble-memory rights with Intel), it could reuse modified 8085-type components, and 68000-style components were much scarcer.



The development history of the Intel family of processors is shown in Table 1. Very Large Scale Integration (VLSI) technology has been the main driving force behind this development.


Thursday, October 7, 2010

FIR filter

General Purpose Processor

loop:
    lw   x0, (r0)    ; load filter coefficient
    lw   y0, (r1)    ; load input sample
    mul  a, x0, y0   ; a = x0 * y0
    add  b, a, b     ; accumulate: b = b + a
    inc  r0          ; point to next coefficient
    inc  r1          ; point to next sample
    dec  ctr         ; one tap done
    tst  ctr         ; test the counter for zero
    jnz  loop        ; repeat until all taps are processed
    sw   b, (r2)     ; store the output sample
    inc  r2          ; advance the output pointer


This program assumes that a finite window of the input signal is stored in memory starting at the address in r1, and that an equal number of filter coefficients are stored in memory starting at the address in r0. The result is stored in memory starting at the address in r2. The program assumes that register b contains 0 before the start of the loop.

lw x0, (r0)
lw y0, (r1)


These two instructions load the registers x0 and y0 with values from the memory locations pointed to by r0 and r1, respectively.

mul a, x0,y0
This instruction multiplies x0 with y0 and stores the result in a.

add b,a,b
This instruction adds a to b (which holds the result accumulated over the previous iterations) and stores the result in b.

inc r0
inc r1
dec ctr
tst ctr
jnz loop


This portion of the program increments the pointer registers to the next memory locations, decrements the counter to track whether the filter order has been reached, and tests the counter for zero; if it is nonzero, execution jumps back to the start of the loop.

sw b,(r2)
inc r2


This stores the final result and increments the register r2 to point to the next location.
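The assembly loop above computes one output sample of the filter. For reference, here is a minimal Python sketch of the same computation (the names fir_output, h, and x are illustrative, not taken from the original program):

```python
def fir_output(h, x):
    """Compute one FIR output sample: the dot product of the coefficient
    window h with the input window x, mirroring the load/multiply/
    accumulate loop of the assembly program above."""
    assert len(h) == len(x)
    b = 0                      # accumulator, cleared before the loop (register b)
    for x0, y0 in zip(h, x):   # lw x0,(r0) / lw y0,(r1)
        b += x0 * y0           # mul a,x0,y0 / add b,a,b
    return b                   # sw b,(r2)

print(fir_output([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32
```

Each trip through the loop costs a load, a multiply, an add, and pointer/counter bookkeeping on a general purpose processor, which is precisely the overhead the DSP instructions below collapse.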

Let us see the program for an early DSP, the TMS32010, developed by Texas Instruments in the 1980s. It has the following features:
• 16-bit fixed-point arithmetic
• Harvard architecture (separate instruction and data memories)
• Accumulator
• Specialized instruction set (Load and Accumulate)
• 390 ns Multiply-Accumulate (MAC)

TI TMS32010 (first DSP), 1982



The program for the FIR filter (for a 3rd-order filter) is given as follows:

Here X4, H4, ... are direct (absolute) memory addresses:
LT X4 ;Load T with x(n-4)
MPY H4 ;P = H4*X4
;Acc = Acc + P
LTD X3 ;Load T with x(n-3); x(n-4) = x(n-3);
MPY H3 ; P = H3*X3
; Acc = Acc + P
LTD X2
MPY H2
...


• Two instructions per tap, but the loop must be unrolled.
• Lines beginning with ; are comments.
• LT X4 — load the T register from direct address X4.
• MPY H4 — multiply, and accumulate the previous product.
• LTD X3 — load T, accumulate, and shift the data point in memory (implementing x(n-4) = x(n-3)).

The advantage of the DSP over the general purpose processor is that the multiplication and accumulation take place in a single instruction, so the architecture directly supports filtering tasks. The loading and the subsequent shifting of the data also take place in a single instruction.


Tuesday, October 5, 2010

Comparison of DSP with General Purpose Processor

Take the example of FIR filtering performed both by a general purpose processor and by a DSP.




An FIR (Finite Impulse Response) filter is represented as shown in the following figure.

The output of the filter is a linear combination of the present and past values of the input. It has several advantages, such as:
- Linear phase.
- Stability.
- Improved computational time.
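The linear combination above can be written explicitly. For an N-tap filter with coefficients h(k) (a standard textbook form of the FIR difference equation, stated here for reference):

```latex
y(n) = \sum_{k=0}^{N-1} h(k)\, x(n-k)
```

Each output sample therefore requires N multiply-accumulate operations, which is why the MAC instruction discussed elsewhere in these posts is central to DSP architectures.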





Friday, October 1, 2010

What is Digital Signal Processing?

Application of mathematical operations to digitally represented signals:
- Signals are represented digitally as sequences of samples.
- Digital signals are obtained from physical signals via transducers (e.g., microphones) and analog-to-digital converters (ADCs).
- Digital signals are converted back to physical signals via digital-to-analog converters (DACs).
- A Digital Signal Processor (DSP) is an electronic system that processes digital signals.



The above figure represents a real-time digital signal processing system. The measurand can be temperature, pressure, or a speech signal, which is picked up by a sensor (a thermocouple, a microphone, a load cell, etc.). The signal conditioner is required to filter, demodulate, and amplify the signal. The analog processor is generally a low-pass filter used for anti-aliasing.

The ADC block converts the analog signal into digital form. The DSP block represents the signal processor. The DAC (Digital-to-Analog Converter) converts the digital signal back into analog form. The analog low-pass filter eliminates noise introduced by interpolation in the DAC.



The performance of the signal processing system depends to a large extent on the ADC. The ADC is specified by its number of bits, which defines the resolution, and its conversion time, which limits the sampling rate. The errors in the ADC are due to the finite number of bits and the finite conversion time. Sometimes noise may also be introduced by the switching circuits.
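The relation between the number of bits and the resolution can be made concrete. A short sketch follows; the 10-bit and 5 V figures are illustrative assumptions, not values from the text:

```python
# Quantization step (resolution) of an N-bit ADC over a full-scale range,
# and the worst-case quantization error (half a step). The 10-bit / 5 V
# values below are illustrative assumptions, not from the text.

def adc_resolution(n_bits, full_scale_volts):
    """Smallest voltage step the converter can distinguish (1 LSB)."""
    return full_scale_volts / (2 ** n_bits)

lsb = adc_resolution(10, 5.0)   # 10-bit converter over a 5 V span
print(lsb)        # 0.0048828125 V per step
print(lsb / 2)    # worst-case quantization error: ~2.44 mV
```

Each additional bit halves the step size, which is why the number of bits is the headline specification of an ADC.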

Similarly, the DAC is specified by its number of bits and the settling time at its output.

A DSP task requires:
- Repetitive numeric computations.
- Attention to numeric fidelity.
- High memory bandwidth, mostly via array accesses.
- Real-time processing.

And the DSP design should minimize:
- Cost.
- Power.
- Memory use.
- Development time.
