
Books in the series The Springer International Series in Engineering and Computer Science

  • Save 14%
    by Klaas-Jan de Langen
    185,00 €

    Compact Low-Voltage and High-Speed CMOS, BiCMOS and Bipolar Operational Amplifiers discusses the design of integrated operational amplifiers that approach the limits of low supply voltage or very high bandwidth. The resulting realizations span the whole field of applications, from micro-power CMOS VLSI amplifiers to 1-GHz bipolar amplifiers. The book presents efficient circuit topologies in order to combine high performance with simple solutions. In total, twelve amplifier realizations are discussed. Two bipolar amplifiers are presented: a 1-GHz operational amplifier and an amplifier with a high ratio between the maximum output current and the quiescent current. Five amplifiers have been designed in CMOS technology: extremely compact circuits that can operate on supply voltages down to one gate-source voltage plus two saturation voltages, which equals about 1.4 V, and ultimate-low-voltage amplifiers that can operate on supply voltages down to one gate-source voltage plus one saturation voltage, which amounts to about 1.2 V. In BiCMOS technology, five amplifiers have been designed. The first two are based on a compact topology. Two others are designed to operate on low supply voltages down to 1.3 V. The final amplifier has a unity-gain frequency of 200 MHz and can operate down to 2.5 V. Compact Low-Voltage and High-Speed CMOS, BiCMOS and Bipolar Operational Amplifiers is intended for the professional analog designer. It is also suitable as a textbook for advanced courses in amplifier design.
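    The two quoted supply-voltage limits follow directly from summing the device voltages in the signal path. A minimal arithmetic check, assuming illustrative (not book-specified) typical values of V_GS ≈ 1.0 V and V_DSAT ≈ 0.2 V:

    ```python
    # Minimum supply of the two CMOS topologies described above.
    # V_GS and V_DSAT values are assumed typical figures, for illustration only.
    V_GS = 1.0    # gate-source voltage (V), assumed
    V_DSAT = 0.2  # saturation voltage (V), assumed

    compact = V_GS + 2 * V_DSAT   # one gate-source + two saturation voltages
    ultimate = V_GS + 1 * V_DSAT  # one gate-source + one saturation voltage

    print(f"compact topology minimum supply:      {compact:.1f} V")   # ≈ 1.4 V
    print(f"ultimate-low-voltage minimum supply:  {ultimate:.1f} V")  # ≈ 1.2 V
    ```

    With these assumed device values the sums reproduce the 1.4 V and 1.2 V figures quoted in the description.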

  • Save 13%
    by J. Craninckx
    139,00 - 146,00 €

    The recent boom in the mobile telecommunication market has captured the interest of almost all electronics and communication companies worldwide. New applications arise every day, more and more countries are covered by digital cellular systems, and competition between the several providers has caused prices to drop rapidly. The creation of this essentially new market would not have been possible without the appearance of small, low-power, high-performance and certainly low-cost mobile terminals. The evolution in microelectronics has played a dominant role in this by creating digital signal processing (DSP) chips with more and more computing power and by combining the discrete components of the RF front-end on a few ICs. This work is situated in this last area, i.e. the study of the full integration of the RF transceiver on a single die. Furthermore, in order to be compatible with the digital processing technology, a standard CMOS process without tuning, trimming or post-processing steps must be used. This should flatten the road towards the ultimate goal: the single-chip mobile phone. The local oscillator (LO) frequency synthesizer poses some major problems for integration and is the subject of this work. The first, and also the largest, part of this text discusses the design of the Voltage-Controlled Oscillator (VCO). The general phase noise theory of LC-oscillators is presented, and the concept of effective resistance and capacitance is introduced to characterize and compare the performance of different LC-tanks.

  • by Robert J. Hilderman
    49,00 €

    Knowledge Discovery and Measures of Interest is a reference book for knowledge discovery researchers, practitioners, and students. The knowledge discovery researcher will find that the material provides a theoretical foundation for measures of interest in data mining applications where diversity measures are used to rank summaries generated from databases. The knowledge discovery practitioner will find solid empirical evidence on which to base decisions regarding the choice of measures in data mining applications. The knowledge discovery student in a senior undergraduate or graduate course in databases and data mining will find that the book is a good introduction to the concepts and techniques of measures of interest. In Knowledge Discovery and Measures of Interest, we study two closely related steps in any knowledge discovery system: the generation of discovered knowledge, and the interpretation and evaluation of discovered knowledge. In the generation step, we study data summarization, where a single dataset can be generalized in many different ways and to many different levels of granularity according to domain generalization graphs. In the interpretation and evaluation step, we study diversity measures as heuristics for ranking the interestingness of the summaries generated. The objective of this work is to introduce and evaluate a technique for ranking the interestingness of discovered patterns in data. It consists of four primary goals: to introduce domain generalization graphs for describing and guiding the generation of summaries from databases; to introduce and evaluate serial and parallel algorithms that traverse the domain generalization space described by the domain generalization graphs; to introduce and evaluate diversity measures as heuristic measures of interestingness for ranking summaries generated from databases; and to develop the preliminary foundation for a theory of interestingness within the context of ranking summaries generated from databases. Knowledge Discovery and Measures of Interest is suitable as a secondary text in a graduate-level course and as a reference for researchers and practitioners in industry.
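    The idea of ranking summaries by a diversity measure can be sketched in a few lines. The summaries and counts below are invented for illustration, and Shannon entropy stands in for the family of diversity measures the book evaluates:

    ```python
    import math

    # Hypothetical summaries: each is a list of aggregated tuple counts at some
    # level of generalization. Higher entropy = mass spread more evenly.
    summaries = {
        "by-country": [500, 480, 520],    # fairly uniform counts
        "by-city":    [900, 50, 30, 20],  # highly concentrated counts
    }

    def shannon_entropy(counts):
        total = sum(counts)
        probs = [c / total for c in counts]
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Rank summaries from most to least diverse.
    ranking = sorted(summaries, key=lambda s: shannon_entropy(summaries[s]), reverse=True)
    for name in ranking:
        print(name, round(shannon_entropy(summaries[name]), 3))
    ```

    Whether high or low diversity counts as "interesting" is exactly the kind of question the book's empirical comparison of measures addresses.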

  • Save 13%
    by Yossi Rubner
    139,00 €

    The increasing amount of information available in today's world raises the need to retrieve relevant data efficiently. Unlike text-based retrieval, where keywords are successfully used to index into documents, content-based image retrieval poses up front the fundamental questions of how to extract useful image features and how to use them for intuitive retrieval. We present a novel approach to the problem of navigating through a collection of images for the purpose of image retrieval, which leads to a new paradigm for image database search. We summarize the appearance of images by distributions of color or texture features, and we define a metric between any two such distributions. This metric, which we call the "Earth Mover's Distance" (EMD), represents the least amount of work that is needed to rearrange the mass in one distribution in order to obtain the other. We show that the EMD matches perceptual dissimilarity better than other dissimilarity measures, and argue that it has many desirable properties for image retrieval. Using this metric, we employ Multi-Dimensional Scaling techniques to embed a group of images as points in a two- or three-dimensional Euclidean space so that their distances reflect image dissimilarities as well as possible. Such geometric embeddings exhibit the structure in the image set at hand, allowing the user to better understand the result of a database query and to refine the query in a perceptually intuitive way.
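    In general the EMD is computed by solving a transportation problem, but for the special case of two 1-D histograms on the same equally spaced bins with equal total mass it has a closed form: the work equals the L1 distance between the cumulative distributions. A minimal sketch of that special case (not the multi-dimensional solver used for image signatures):

    ```python
    def emd_1d(p, q):
        """EMD between two 1-D histograms on the same equally spaced bins,
        assuming equal total mass; moving one unit of mass one bin costs 1."""
        assert len(p) == len(q)
        assert abs(sum(p) - sum(q)) < 1e-9, "this special case needs equal mass"
        work, carried = 0.0, 0.0
        for pi, qi in zip(p, q):
            carried += pi - qi   # earth carried over to the next bin
            work += abs(carried)
        return work

    # Shifting all mass one bin over costs one unit of work per unit of mass:
    print(emd_1d([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # → 1.0
    print(emd_1d([0.5, 0.5, 0.0], [0.0, 0.5, 0.5]))  # → 1.0
    ```

    This "carried earth" formulation makes concrete why the EMD tracks perceptual shifts of a distribution better than bin-by-bin comparisons, which would score both examples as maximally different.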

  • by Piet Wambacq
    321,00 €

    The analysis and prediction of nonlinear behavior in electronic circuits has long been a topic of concern for analog circuit designers. The recent explosion of interest in portable electronics such as cellular telephones, cordless telephones and other applications has served to reinforce the importance of these issues. The need now often arises to predict and optimize the distortion performance of diverse electronic circuit configurations operating in the gigahertz frequency range, where nonlinear reactive effects often dominate. However, there have historically been few sources available from which design engineers could obtain information on analysis techniques suitable for tackling these important problems. I am sure that the analog circuit design community will thus welcome this work by Dr. Wambacq and Professor Sansen as a major contribution to the analog circuit design literature in the area of distortion analysis of electronic circuits. I am personally looking forward to having a copy readily available for reference when designing integrated circuits for communication systems.

  • Save 12%
    by Koen Lampaert
    94,00 €

    Analog integrated circuits are very important as interfaces between the digital parts of integrated electronic systems and the outside world. A large portion of the effort involved in designing these circuits is spent in the layout phase. Whereas the physical design of digital circuits is automated to a large extent, the layout of analog circuits is still a manual, time-consuming and error-prone task. This is mainly due to the continuous nature of analog signals, which causes analog circuit performance to be very sensitive to layout parasitics. The parasitic elements associated with interconnect wires cause loading and coupling effects that degrade the frequency behaviour and the noise performance of analog circuits. Device mismatch and thermal effects put a fundamental limit on the achievable accuracy of circuits. For successful automation of analog layout, advanced place and route tools that can handle these critical parasitics are required. In the past, automatic analog layout tools tried to optimize the layout without quantifying the performance degradation introduced by layout parasitics. Therefore, it was not guaranteed that the resulting layout met the specifications and one or more layout iterations could be needed. In Analog Layout Generation for Performance and Manufacturability, the authors propose a performance driven layout strategy to overcome this problem. In this methodology, the layout tools are driven by performance constraints, such that the final layout, with parasitic effects, still satisfies the specifications of the circuit. The performance degradation associated with an intermediate layout solution is evaluated at runtime using predetermined sensitivities. In contrast with other performance driven layout methodologies, the tools proposed in this book operate directly on the performance constraints, without an intermediate parasitic constraint generation step. 
This approach makes a complete and sensible trade-off between the different layout alternatives possible at runtime and therefore eliminates the possible feedback route between constraint derivation, placement and layout extraction. Besides its influence on the performance, layout also has a profound impact on the yield and testability of an analog circuit. In Analog Layout Generation for Performance and Manufacturability, the authors outline a new criterion to quantify the detectability of a fault and combine this with a yield model to evaluate the testability of an integrated circuit layout. They then integrate this technique with their performance driven routing algorithm to produce layouts that have optimal manufacturability while still meeting their performance specifications. Analog Layout Generation for Performance and Manufacturability will be of interest to analog engineers, researchers and students.

  • Save 13%
    by Bengt E. Jonsson
    139,00 €

    Switched-Current Signal Processing and A/D Conversion Circuits: Design and Implementation describes the design and implementation of switched-current (SI) circuits with emphasis on signal processing and data-conversion applications. The work includes theoretical analysis, high-level and circuit-level simulation results, as well as measurement results from a few of the author's circuit implementations. The book also contains an extensive overview of the switched-current field of research, and can therefore be used as a quick reference to the field. The description of each design example has been organized to describe the entire design flow, from system-level design and simulation to circuit simulation, layout and measurement, as accurately as possible. Thus it is possible to follow each step in the design process. Switched-Current Signal Processing and A/D Conversion Circuits: Design and Implementation is an invaluable reference for researchers and circuit designers working with one-chip mixed-signal system solutions and low-voltage analog CMOS design. It will also be appreciated by anyone requiring a quick overview of what has been done in the SI field.

  • Save 13%
    by Anne van den Bosch
    139,00 €

    Static and Dynamic Performance Limitations for High Speed D/A Converters discusses the design and implementation of high speed current-steering CMOS digital-to-analog converters. Starting from the definition of the basic specifications for a D/A converter, the elements determining the static and dynamic performance are identified. Different guidelines based on scientific derivations are suggested to optimize this performance. Furthermore, a new closed formula has been derived to account for the influence of the transistor mismatch on the achievable resolution of the current-steering D/A converter. To allow a thorough understanding of the dynamic behavior, a new factor has been introduced. Moreover, the frequency dependency of the output impedance introduces harmonic distortion components which can limit the maximum attainable spurious free dynamic range. Finally, the last part of the book gives an overview on different existing transistor mismatch models and the link with the static performance of the D/A converter.
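    The link between transistor mismatch and achievable resolution can also be explored numerically. The Monte-Carlo sketch below is our own illustration of that relationship (the bit count, mismatch sigma, and yield criterion are assumed values), not the closed formula derived in the book:

    ```python
    import random

    # Estimate the worst-case INL of an N-bit unary current-steering DAC whose
    # unit current sources have relative mismatch sigma. Illustrative only.
    def max_inl(n_bits, sigma, rng):
        n_units = 2 ** n_bits - 1
        units = [rng.gauss(1.0, sigma) for _ in range(n_units)]
        lsb = sum(units) / n_units          # actual average step size
        worst, acc = 0.0, 0.0
        for k, u in enumerate(units, start=1):
            acc += u
            worst = max(worst, abs(acc - k * lsb) / lsb)  # deviation in LSB
        return worst

    rng = random.Random(42)
    trials = [max_inl(8, 0.01, rng) for _ in range(200)]
    yield_ok = sum(t < 0.5 for t in trials) / len(trials)
    print(f"median max-INL = {sorted(trials)[len(trials) // 2]:.3f} LSB, "
          f"yield(|INL| < 0.5 LSB) = {yield_ok:.2f}")
    ```

    Sweeping sigma or n_bits in such a simulation shows the same trade-off the closed formula captures analytically: resolution achievable at a given yield falls as mismatch grows.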

  • Save 12%
    by Wouter A. Serdijn
    94,00 - 98,00 €

    The area of analog integrated circuits is facing some serious challenges due to the ongoing trends towards low supply voltages, low power consumption and high-frequency operation. The situation is becoming even more complicated by the fact that many transfer functions have to be tunable or controllable. A promising approach to facing these challenges is given by the class of dynamic translinear circuits, which are, as a consequence, receiving increasing interest. Several different names are used in the literature: log-domain, exponential state-space, current-mode companding, instantaneous companding, tanh-domain, sinh-domain, polynomial state-space, square-root domain and translinear filters. In fact, all these groups are (overlapping) subclasses of the overall class of dynamic translinear circuits. Research Perspectives on Dynamic Translinear and Log-Domain Circuits is a compilation of research findings in this growing field. It comprises ten contributions, coming from recognized `dynamic-translinear' researchers in Europe and North America. Research Perspectives on Dynamic Translinear and Log-Domain Circuits is an edited volume of original research.

  • Save 14%
    by James A. Cherry
    185,00 €

    Among analog-to-digital converters, the delta-sigma modulator has cornered the market on high to very high resolution converters at moderate speeds, with typical applications such as digital audio and instrumentation. Interest has recently increased in delta-sigma circuits built with a continuous-time loop filter rather than the more common switched-capacitor approach. Continuous-time delta-sigma modulators offer less noisy virtual ground nodes at the input, inherent protection against signal aliasing, and the potential to use a physical rather than an electrical integrator in the first stage for novel applications like accelerometers and magnetic flux sensors. More significantly, they relax settling time restrictions so that modulator clock rates can be raised. This opens the possibility of wideband (1 MHz or more) converters, possibly for use in radio applications at an intermediate frequency so that one or more stages of mixing might be done in the digital domain. Continuous-Time Delta-Sigma Modulators for High-Speed A/D Conversion: Theory, Practice and Fundamental Performance Limits covers all aspects of continuous-time delta-sigma modulator design, with particular emphasis on design for high clock speeds. The authors explain the ideal design of such modulators in terms of the well-understood discrete-time modulator design problem and provide design examples in Matlab. They also cover commonly-encountered non-idealities in continuous-time modulators and how they degrade performance, plus a wealth of material on the main problems (feedback path delays, clock jitter, and quantizer metastability) in very high-speed designs and how to avoid them. They also give a concrete design procedure for a real high-speed circuit which illustrates the tradeoffs in the selection of key parameters. 
Detailed circuit diagrams, simulation results and test results for an integrated continuous-time 4 GHz band-pass modulator for A/D conversion of 1 GHz analog signals are also presented. Continuous-Time Delta-Sigma Modulators for High-Speed A/D Conversion: Theory, Practice and Fundamental Performance Limits concludes with some promising modulator architectures and a list of the challenges that remain in this exciting field.
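    The accumulate-and-quantize loop at the heart of any delta-sigma modulator is easy to demonstrate. The sketch below is a generic first-order discrete-time modulator written in Python (the book's own design examples are in Matlab, and its focus is the continuous-time case); it is a textbook structure, not a circuit from the book:

    ```python
    def dsm_first_order(x):
        """First-order delta-sigma: integrate the difference between the input
        and the fed-back 1-bit output, then quantize the integrator to ±1."""
        v, y, bits = 0.0, 0.0, []
        for sample in x:
            v += sample - y              # integrator accumulates the error
            y = 1.0 if v >= 0 else -1.0  # 1-bit quantizer
            bits.append(y)
        return bits

    # For a DC input of 0.5, the bitstream's average converges to the input:
    bits = dsm_first_order([0.5] * 1000)
    print(sum(bits) / len(bits))
    ```

    The same feedback principle carries over to the continuous-time modulators the book treats; the continuous-time loop filter is what relaxes the settling-time restrictions mentioned above.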

  • Save 10%
    by Wah Chun Chan
    48,00 €

    Performance Analysis of Telecommunications and Local Area Networks presents information on teletraffic engineering, with emphasis on modeling techniques, queuing theory, and performance analysis for the public-switched telephone network and computer communication networks. Coverage includes twisted pair cables and coaxial cables, subscriber loops, multistage network switching, modeling techniques for traffic flow and service time, random access networks, and much more. End-of-chapter problems with solutions are also included. Performance Analysis of Telecommunications and Local Area Networks is also a useful reference for practicing engineers but is intended as a textbook in advanced-level courses.
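    A staple calculation in the teletraffic engineering this book covers is the Erlang B formula: the blocking probability of a group of n trunks offered a erlangs of traffic under the lost-calls-cleared model. A minimal sketch using the standard recursion on the number of trunks:

    ```python
    def erlang_b(n, a):
        """Erlang B blocking probability for n trunks offered a erlangs."""
        b = 1.0                      # B(0, a) = 1: with no trunks, every call blocks
        for k in range(1, n + 1):
            b = a * b / (k + a * b)  # recursion B(k) = a·B(k-1) / (k + a·B(k-1))
        return b

    # 2 erlangs of offered traffic on a group of 3 trunks:
    print(round(erlang_b(3, 2.0), 4))  # → 0.2105
    ```

    The recursion avoids the overflow-prone factorials of the direct formula, which matters once trunk groups get large.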

  • Save 13%
    by Kimmo Koli
    139,00 €

    CMOS Current Amplifiers: Speed versus Nonlinearity is intended as a current-amplifier cookbook containing an extensive review of different current amplifier topologies realisable with modern CMOS integration technologies. The seldom-discussed issue of high-frequency distortion performance is derived for all reviewed amplifier topologies using as simple and intuitive mathematical methods as possible. The topologies discussed are also useful as building blocks for high-performance voltage-mode amplifiers, so the reader can apply the discussed techniques to both voltage- and current-mode analogue integrated circuit design. For the most popular open-loop current-mode amplifier, the second-generation current conveyor (CCII), a macromodel is derived that, unlike other reported macromodels, can accurately predict the common-mode behaviour in differential applications. Similarly, this model is used to describe the nonidealities of several other current-mode amplifiers. With modern low-voltage CMOS technologies, the current-mode operational amplifier and the high-gain current conveyor (CCII∞) perform better than various open-loop current amplifiers. Similarly, unlike with conventional voltage-mode operational amplifiers, the large-signal settling behaviour of these two amplifier types does not degrade as CMOS processes are scaled down. This book contains application examples with experimental results in three different fields: instrumentation amplifiers, continuous-time analogue filters and logarithmic amplifiers. The instrumentation amplifier example shows that, using unmatched off-the-shelf components, very high CMRR can be reached even at relatively high frequencies. As a filter application, two 1-MHz third-order low-pass continuous-time filters are realised in a 1.2-µm CMOS process. These filters use a differential CCII∞ with linearised, dynamically biased output stages, resulting in outstanding performance when compared to most OTA-C filter realisations reported. As an application example of nonlinear circuits, two logarithmic amplifier chips are designed and fabricated. The first circuit, implemented in a 1.2-µm BiCMOS process, again uses a CCII∞ and a pn-junction as a logarithmic feedback element. With a CCII∞, the constant gain-bandwidth product typical of voltage-mode operational amplifiers is avoided, resulting in a constant 1-MHz bandwidth within a 60-dB signal amplitude range. The second current-mode logarithmic amplifier, realised in a 1.2-µm CMOS process, is based on piece-wise linear approximation of the logarithmic function. In this logarithmic amplifier, using limiting current amplifiers instead of limiting voltage amplifiers results in exceptionally low temperature dependency of the logarithmic output signal. Additionally, along with this logarithmic amplifier, a new current peak detector is developed.
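    The piece-wise linear approximation of a logarithm by cascaded limiting stages can be sketched behaviourally. The model below (stage gain, stage count and limit level are all assumed values, not taken from the chip described above) sums the clipped outputs of a gain cascade; each decade of input drives roughly one more stage into limiting, so the sum grows roughly linearly with log(x):

    ```python
    def log_amp(x, stages=6, gain=4.0, limit=1.0):
        """Behavioural model of a successive-limiting logarithmic amplifier:
        sum the clipped outputs of a cascade of limiting gain stages."""
        total, v = 0.0, x
        for _ in range(stages):
            v = min(v * gain, limit)  # each stage amplifies, then limits
            total += v
        return total

    # The output rises by a roughly constant amount per decade of input:
    for x in (1e-4, 1e-3, 1e-2, 1e-1):
        print(f"{x:g}  ->  {log_amp(x):.3f}")
    ```

    In the chip itself the limiting elements are current amplifiers rather than voltage amplifiers, which is what yields the low temperature dependency noted above; this sketch only illustrates the approximation principle.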

  • Save 12%
    by Philip Robinson
    94,00 €

    Privacy, Security and Trust within the Context of Pervasive Computing is an edited volume based on a workshop at the Second International Conference on Pervasive Computing. The workshop was held April 18-23, 2004, in Vienna, Austria. The goal of the workshop was not to focus on specific, even novel mechanisms, but rather on the interfaces between mechanisms in different technical and social problem spaces. An investigation of the interfaces between the notions of context, privacy, security, and trust will result in a deeper understanding of the "atomic" problems, leading to a more complete understanding of the social and technical issues in pervasive computing.

  • - Engineering and Applications
    by Michael Schiebe
    204,00 €

    Real-Time Systems Engineering and Applications is a well-structured collection of chapters pertaining to present and future developments in real-time systems engineering. After an overview of real-time processing, theoretical foundations are presented. The book then introduces useful modeling concepts and tools. This is followed by concentration on the more practical aspects of real-time engineering with a thorough overview of the present state of the art, both in hardware and software, including related concepts in robotics. Examples are given of novel real-time applications which illustrate the present state of the art. The book concludes with a focus on future developments, giving direction for new research activities and an educational curriculum covering the subject. This book can be used as a source for academic and industrial researchers as well as a textbook for computing and engineering courses covering the topic of real-time systems engineering.

  • Save 13%
    by Toru Ishida
    139,00 €

    Autonomous agents or multiagent systems are computational systems in which several computational agents interact or work together to perform some set of tasks. These systems may involve computational agents having common goals or distinct goals. Real-Time Search for Learning Autonomous Agents focuses on extending real-time search algorithms for autonomous agents and for a multiagent world. Although real-time search provides an attractive framework for resource-bounded problem solving, the behavior of the problem solver is not rational enough for autonomous agents: the problem solver always keeps the record of its moves, and it cannot utilize and improve previous experiments. Another problem is that although the algorithms interleave planning and execution, they cannot be directly applied to a multiagent world: the problem solver cannot adapt to dynamically changing goals, and it cannot cooperatively solve problems with other problem solvers. This book deals with all these issues. Real-Time Search for Learning Autonomous Agents serves as an excellent resource for researchers and engineers interested in both practical references and some theoretical basis for agent/multiagent systems. The book can also be used as a text for advanced courses on the subject.
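    The real-time search framework the book extends is exemplified by LRTA* (Learning Real-Time A*): the agent interleaves planning and execution, moving to the apparently best neighbor while updating a learned heuristic table. The toy corridor world below is our own illustration, not an example from the book:

    ```python
    def lrta_star(start, goal, neighbors, h, max_steps=100):
        """LRTA* with unit edge costs: move greedily on 1 + learned heuristic,
        raising the current state's heuristic estimate before each move."""
        H = dict(h)                  # learned heuristic, initialized from h
        state, path = start, [start]
        for _ in range(max_steps):
            if state == goal:
                return path
            # Pick the neighbor minimizing edge cost (1) + learned heuristic.
            best = min(neighbors[state], key=lambda s: 1 + H.get(s, 0))
            H[state] = max(H.get(state, 0), 1 + H.get(best, 0))  # learning step
            state = best
            path.append(state)
        return path

    # A five-state corridor 0-1-2-3-4 with an uninformed (all-zero) heuristic:
    corridor = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
    zero_h = {s: 0 for s in corridor}
    print(lrta_star(0, 4, corridor, zero_h))  # reaches 4, learning H along the way
    ```

    The limitations the book addresses are visible even here: the agent commits to moves before their value is known, and the basic algorithm has no provision for other agents or shifting goals.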

  • Save 13%
    by Allen M. Dewey
    139,00 €

    This book describes a new type of computer-aided VLSI design tool, called VLSI System Planning, that is meant to aid designers during the early, or conceptual, stage of design. During this stage of design, the objective is to define a general design plan, or approach, that is likely to result in an efficient implementation satisfying the initial specifications, or to determine that the initial specifications are not realizable. A design plan is a collection of high-level design decisions. As an example, the conceptual design of digital filters involves choosing the type of algorithm to implement (e.g., finite impulse response or infinite impulse response), the type of polynomial approximation (e.g., Equiripple or Chebyshev), the fabrication technology (e.g., CMOS or BiCMOS), and so on. Once a particular design plan is chosen, the detailed design phase can begin. It is during this phase that various synthesis, simulation, layout, and test activities occur to refine the conceptual design, gradually filling in more detail until the design is finally realized. The principal advantage of VLSI System Planning is that the increasingly expensive resources of the detailed design process are more efficiently managed. Costly redesigns are minimized because the detailed design process is guided by a more credible, consistent, and correct design plan.

  • Save 13%
    by N. Bouden-Romdhane
    139,00 €

    From the Foreword: Modern digital signal processing applications provide a large challenge to the system designer. Algorithms are becoming increasingly complex, and yet they must be realized with tight performance constraints. Nevertheless, these DSP algorithms are often built from many constituent canonical subtasks (e.g., IIR and FIR filters, FFTs) that can be reused in other subtasks. Design is then a problem of composing these core entities into a cohesive whole to provide both the intended functionality and the required performance. In order to organize the design process, there have been two major approaches. The top-down approach starts with an abstract, concise, functional description which can be quickly generated. On the other hand, the bottom-up approach starts from a detailed low-level design where performance can be directly assessed, but where the requisite design and interface detail take a long time to generate. In this book, the authors show a way to effectively resolve this tension by retaining the high-level conciseness of VHDL while parameterizing it to get a good fit to specific applications through reuse of core library components. Since they build on a pre-designed set of core elements, accurate area, speed and power estimates can be percolated to high-level design routines which explore the design space. Results are impressive, and the cost model provided will prove to be very useful. Overall, the authors have provided an up-to-date approach, doing a good job at getting performance out of high-level design. The methodology provided makes good use of extant design tools, and is realistic in terms of the industrial design process. The approach is interesting in its own right, but is also of direct utility, and it will give the existing DSP CAD tools a highly competitive alternative. The techniques described have been developed within ARPA's RASSP (Rapid Prototyping of Application Specific Signal Processors) project, and should be of great interest there, as well as to many industrial designers. Professor Jonathan Allen, Massachusetts Institute of Technology

  • Save 13%
    by Thaddeus J. Kowalski
    140,00 €

    Rule-Based Programming is a broad presentation of the rule-based programming method with many example programs showing the strengths of the rule-based approach. The rule-based approach has been used extensively in the development of artificial intelligence systems, such as expert systems and machine learning. This rule-based programming technique has been applied in such diverse fields as medical diagnostic systems, insurance and banking systems, as well as automated design and configuration systems. Rule-based programming is also helpful in bridging the semantic gap between an application and a program, allowing domain specialists to understand programs and participate more closely in their development. Over sixty programs are presented and all programs are available from an ftp site. Many of these programs are presented in several versions allowing the reader to see how realistic programs are elaborated from `back of envelope' models. Metaprogramming is also presented as a technique for bridging the `semantic gap'. Rule-Based Programming will be of interest to programmers, systems analysts and other developers of expert systems as well as to researchers and practitioners in artificial intelligence, computer science professionals and educators.
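    The essence of the rule-based approach is a working memory of facts and rules that fire when their conditions match, as in the forward-chaining sketch below. The rules and facts are our own "back of envelope" invention (echoing the medical-diagnosis application area mentioned above), not one of the book's sixty programs:

    ```python
    # A toy forward-chaining rule engine: fire rules until no new facts appear.
    facts = {"has_fever", "has_rash"}
    rules = [
        ({"has_fever", "has_rash"}, "suspect_measles"),
        ({"suspect_measles"}, "recommend_lab_test"),
    ]

    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires: assert its conclusion
                changed = True

    print(sorted(facts))
    ```

    Keeping the rules as data, separate from the matching loop, is what lets domain specialists read and extend the knowledge base without touching the engine, the "semantic gap" benefit the description refers to.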

  • Save 12%
    by Debashis Bhattacharya
    94,00 €

    Test generation is one of the most difficult tasks facing the designer of complex VLSI-based digital systems. Much of this difficulty is attributable to the almost universal use in testing of low, gate-level circuit and fault models that predate integrated circuit technology. It has long been recognized that the testing problem can be alleviated by the use of higher-level methods in which multigate modules or cells are the primitive components in test generation; however, the development of such methods has proceeded very slowly. To be acceptable, high-level approaches should be applicable to most types of digital circuits, and should provide fault coverage comparable to that of traditional, low-level methods. The fault coverage problem has, perhaps, been the most intractable, due to continued reliance in the testing industry on the single stuck-line (SSL) fault model, which is tightly bound to the gate level of abstraction. This monograph presents a novel approach to solving the foregoing problem. It is based on the systematic use of multibit vectors rather than single bits to represent logic signals, including fault signals. A circuit is viewed as a collection of high-level components such as adders, multiplexers, and registers, interconnected by n-bit buses. To match this high-level circuit model, we introduce a high-level bus fault that, in effect, replaces a large number of SSL faults and allows them to be tested in parallel. However, by reducing the bus size from n to one, we can obtain the traditional gate-level circuit and fault models.
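    The multibit-vector idea can be illustrated with parallel fault simulation, a related classical technique: pack several fault cases into one machine word so a single bitwise operation evaluates all of them at once. The gate, faults, and encoding below are our own minimal illustration, not the monograph's bus-fault model:

    ```python
    # Bit i of each signal word is that signal's value in "machine" i:
    # machine 0 is fault-free, machine 1 has input a stuck at 0,
    # machine 2 has input b stuck at 0.
    def and_gate(a, b):
        return a & b            # one bitwise AND evaluates every machine at once

    # Test pattern a=1, b=1: replicate the value into all three machines' bits,
    # then force the stuck line's bit to 0 in its own machine.
    a = 0b111 & ~0b010          # a = 1 everywhere except machine 1
    b = 0b111 & ~0b100          # b = 1 everywhere except machine 2
    out = and_gate(a, b)
    good = out & 1              # fault-free response (machine 0)
    detected = [m for m in (1, 2) if ((out >> m) & 1) != good]
    print(f"output bits = {out:03b}, faults detected by pattern (1,1): {detected}")
    ```

    Both stuck-at-0 faults flip the output in their machine, so one pattern evaluation detects both; widening the word is what lets a single high-level bus fault stand in for many SSL faults.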

  • Save 13%
    by Robert A. Iannucci
    139,00 - 140,00 €

    It is universally accepted today that parallel processing is here to stay but that software for parallel machines is still difficult to develop. However, there is little recognition of the fact that changes in processor architecture can significantly ease the development of software. In the seventies, the availability of processors that could address a large name space directly eliminated the problem of name management at one level and paved the way for the routine development of large programs. Similarly, today, processor architectures that can facilitate cheap synchronization and provide a global address space can simplify compiler development for parallel machines. If the cost of synchronization remains high, the programming of parallel machines will remain significantly less abstract than programming sequential machines. In this monograph Bob Iannucci presents the design and analysis of an architecture that can be a better building block for parallel machines than any von Neumann processor. There is another very interesting motivation behind this work. It is rooted in the long and venerable history of dataflow graphs as a formalism for expressing parallel computation. The field has bloomed since 1974, when Dennis and Misunas proposed a truly novel architecture using dataflow graphs as the parallel machine language. The novelty and elegance of dataflow architectures has, however, also kept us from asking the real question: "What can dataflow architectures buy us that von Neumann architectures can't?" In the following I explain in a roundabout way how Bob and I arrived at this question.

  • 13% sparen
    von Stewart G. Smith
    139,00 - 140,00 €

    This book is concerned with advances in serial-data computational architectures, and the CAD tools for their implementation in silicon. The bit-serial tradition at Edinburgh University (EU) stretches back some 6 years to the conception of the FIRST silicon compiler. FIRST owes much of its inspiration to Dick Lyon, then at Xerox PARC, who proposed a 'structured-design' methodology for construction of signal processing systems from bit-serial building blocks. Based on an nMOS cell-library, FIRST automates much of Lyon's physical design process. More recently, we began to feel that FIRST should be able to exploit more modern technologies. Before this could be achieved, we were faced with a massive manual re-design task, i.e. the porting of the FIRST cell-library to a new technology. As it was to avoid such tasks that FIRST was conceived in the first place, we decided to move the level of user-specification much nearer to the silicon level (while still hiding details of transistor circuit design, place and route etc., from the user), and by so doing, enable the specification of more functionally powerful libraries in technology-free form. The results of this work are in evidence as advances in serial-data design techniques, and the SECOND silicon compiler, introduced later in this book. These achievements could not have been accomplished without help from various sources. We take this opportunity to thank Profs.
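    The flavour of bit-serial computation can be sketched as follows (an assumed illustration, not output of FIRST or SECOND): operands arrive least-significant-bit first, one bit per clock, and a single full adder with a one-bit carry register does the work of an entire parallel adder.

```python
# Bit-serial adder sketch: one full adder plus a carry "flip-flop",
# consuming one bit of each operand per simulated clock cycle.

def serial_add(a_bits, b_bits):
    carry = 0
    out = []
    for a, b in zip(a_bits, b_bits):
        s = a ^ b ^ carry                        # full-adder sum bit
        carry = (a & b) | (carry & (a ^ b))      # full-adder carry bit
        out.append(s)
    out.append(carry)                            # final carry-out
    return out

def to_bits(n, width):
    """Serialize an integer as an LSB-first bit stream."""
    return [(n >> i) & 1 for i in range(width)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))

print(from_bits(serial_add(to_bits(13, 8), to_bits(29, 8))))
```

The trade-off visible even in this toy model is the classic bit-serial one: minimal hardware (one full adder regardless of word length) at the cost of one clock cycle per bit.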

  • 13% sparen
    - For Authentication in an E-World
    von David D. Zhang
    140,00 €

    In today's complex, geographically mobile, electronically wired information society, the problem of identifying a person continues to pose a great challenge. Since the conventional technology of using a Personal Identification Number (PIN) or password hardly meets the requirements of an authentication system, biometric-based authentication is emerging as the most reliable method. Biometric Solutions for Authentication in an E-World provides a collection of sixteen chapters containing tutorial articles and new material in a unified manner. This includes the basic concepts, theories, and characteristic features of integrating/formulating different facets of biometric solutions for authentication, with recent developments and significant applications in an E-world. This book provides the reader with a basic concept of biometrics and an in-depth discussion exploring biometric technologies in various applications in an E-world. It also includes a detailed description of typical biometric-based security systems and up-to-date coverage of how these systems are being developed. Experts from all over the world demonstrate the various ways this integration can be made to efficiently design methodologies, algorithms, architectures, and implementations for biometric-based applications in an E-world. Biometric Solutions for Authentication in an E-World meets the needs of a professional audience composed of researchers and practitioners in industry and graduate-level students in computer science and engineering. Researchers and practitioners in research and development laboratories working in fields of security systems design, biometrics, immigration, law enforcement, control, pattern recognition, and the Internet will benefit from this book.

  • von Alistair Moffat
    53,00 - 71,00 €

    An authoritative reference to the whole area of source coding algorithms, Compression and Coding Algorithms will be a primary resource for both researchers and software engineers. The book will also be of interest to people in the broader area of the design and analysis of algorithms and data structures. Practitioners, especially those who work in the software development and independent consulting industries creating compression software or other application systems in which compression plays a part, will benefit from the techniques that are described. Compression and Coding Algorithms describes in detail the coding mechanisms that are available for use in data compression systems. The well-known Huffman coding technique is one mechanism, but there have been many others developed over the past few decades, and this book describes, explains and assesses them. People undertaking research or software development in the areas of compression and coding algorithms will find this book an indispensable reference. In particular, the careful and detailed description of algorithms and their implementation, plus accompanying pseudo-code that can be readily implemented on a computer, make this book a definitive reference in an area currently without one. The detailed pseudo-code presentation of over thirty algorithms, and careful explanation of examples, make this book approachable and authoritative. Compression and throughput results are presented where appropriate, and serve as a validation of the assessments and recommendations made in the text. The combination of implementation detail, thoughtful discussions, and careful presentation means that this book will occupy a pivotal role in this area for many years. In-depth coverage of the crucial areas of minimum-redundancy coding, arithmetic coding, and adaptive coding makes Compression and Coding Algorithms unique in its field.
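    As a taste of the subject matter, here is a minimal Huffman-coding sketch (an illustration only, not the book's pseudo-code): build a tree by repeatedly merging the two lowest-weight nodes, then read each symbol's code off the root-to-leaf path.

```python
# Minimal Huffman coder: frequent symbols get short, prefix-free codes.
import heapq
from collections import Counter

def huffman_codes(text):
    freq = Counter(text)
    # Heap entries are (weight, tiebreak, tree); a tree is either a symbol
    # or a (left, right) pair. The tiebreak keeps tuple comparison total.
    heap = [(w, i, s) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, i, (t1, t2)))
        i += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"   # degenerate one-symbol alphabet
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("abracadabra")
encoded = "".join(codes[c] for c in "abracadabra")
```

For "abracadabra" the most frequent symbol, 'a', receives a one-bit code, and the resulting codes are prefix-free by construction of the tree.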

  • 12% sparen
    von Thorsten Joachims
    94,00 €

    Text Classification, or the task of automatically assigning semantic categories to natural language text, has become one of the key methods for organizing online information. Since hand-coding classification rules is costly or even impractical, most modern approaches employ machine learning techniques to automatically learn text classifiers from examples. However, none of these conventional approaches combines good prediction performance, theoretical understanding, and efficient training algorithms. Based on ideas from Support Vector Machines (SVMs), Learning To Classify Text Using Support Vector Machines presents a new approach to generating text classifiers from examples. The approach combines high performance and efficiency with theoretical understanding and improved robustness. In particular, it is highly effective without greedy heuristic components. The SVM approach is computationally efficient in training and classification, and it comes with a learning theory that can guide real-world applications. Learning To Classify Text Using Support Vector Machines gives a complete and detailed description of the SVM approach to learning text classifiers, including training algorithms, transductive text classification, efficient performance estimation, and a statistical learning model of text classification. In addition, it includes an overview of the field of text classification, making it self-contained even for newcomers to the field. This book gives a concise introduction to SVMs for pattern recognition, and it includes a detailed description of how to formulate text-classification tasks for machine learning. Learning To Classify Text Using Support Vector Machines is designed as a reference for researchers and practitioners, and is suitable as a secondary text for graduate-level students in Computer Science within Machine Learning and Language Technology.
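    The core formulation can be sketched compactly: represent each document as a bag-of-words vector and learn a linear separator by minimizing hinge loss. The toy example below uses made-up data and a simple Pegasos-style subgradient update; it illustrates the idea only and is far simpler than the training algorithms the book actually develops.

```python
# Tiny linear SVM on bag-of-words vectors (hinge loss, subgradient descent).
# All documents and labels below are invented for illustration.
import random

docs = [("win cash prize now", 1), ("meeting agenda attached", -1),
        ("cash prize claim now", 1), ("project meeting notes", -1)]
vocab = sorted({w for text, _ in docs for w in text.split()})

def vec(text):
    """Bag-of-words feature vector over the fixed vocabulary."""
    words = text.split()
    return [words.count(w) for w in vocab]

data = [(vec(t), y) for t, y in docs]
w = [0.0] * len(vocab)
lam, rng = 0.01, random.Random(0)

for t in range(1, 501):                  # Pegasos-style SGD on the SVM objective
    x, y = rng.choice(data)
    eta = 1.0 / (lam * t)
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    w = [(1 - eta * lam) * wi + (eta * y * xi if margin < 1 else 0.0)
         for wi, xi in zip(w, x)]

def predict(text):
    return 1 if sum(wi * xi for wi, xi in zip(w, vec(text))) > 0 else -1
```

Because every word in this toy corpus occurs in only one class, the learned weights separate the two classes cleanly; real text classification, as the book discusses, must cope with tens of thousands of overlapping features.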

  • 10% sparen
    von Daniele Micciancio
    213,00 €

    Lattices are geometric objects that can be pictorially described as the set of intersection points of an infinite, regular n-dimensional grid. Despite their apparent simplicity, lattices hide a rich combinatorial structure, which has attracted the attention of great mathematicians over the last two centuries. Not surprisingly, lattices have found numerous applications in mathematics and computer science, ranging from number theory and Diophantine approximation to combinatorial optimization and cryptography. The study of lattices, specifically from a computational point of view, was marked by two major breakthroughs: the development of the LLL lattice reduction algorithm by Lenstra, Lenstra and Lovasz in the early 80's, and Ajtai's discovery of a connection between the worst-case and average-case hardness of certain lattice problems in the late 90's. The LLL algorithm, despite the relatively poor quality of the solution it gives in the worst case, made it possible to devise polynomial-time solutions to many classical problems in computer science. These include solving integer programs in a fixed number of variables, factoring polynomials over the rationals, breaking knapsack-based cryptosystems, and finding solutions to many other Diophantine and cryptanalysis problems.
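    The two-dimensional ancestor of LLL, Lagrange-Gauss reduction, can be sketched in a few lines (an illustration of lattice basis reduction, not the LLL algorithm itself; the starting basis is made up). It alternates between ordering the basis vectors by length and subtracting the nearest-integer multiple of the shorter from the longer, until no further shortening is possible.

```python
# Lagrange-Gauss reduction of a 2-D lattice basis.
# On termination, b1 is a shortest nonzero vector of the lattice.

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def gauss_reduce(b1, b2):
    while True:
        if dot(b1, b1) > dot(b2, b2):
            b1, b2 = b2, b1                       # keep b1 the shorter vector
        m = round(dot(b1, b2) / dot(b1, b1))      # nearest-integer projection
        if m == 0:
            return b1, b2                         # basis is reduced
        b2 = (b2[0] - m * b1[0], b2[1] - m * b1[1])

short, other = gauss_reduce((12, 2), (13, 4))
print(short, other)
```

The unimodular updates preserve the lattice (and hence the absolute value of the basis determinant) while strictly shrinking vector lengths, which is why the loop terminates with a shortest vector.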

  • 12% sparen
    von Roy Ladner
    94,00 €

    We are facing a rapidly growing capability to collect more and more data regarding our environment. With that, we must have the ability to extract more insightful knowledge about the environmental processes at work on the earth. Spatio-Temporal Information Systems (STIS) will prove especially beneficial in producing useful knowledge about changes in our world from these ever-burgeoning collections of environmental data. STIS provide the ability to store, analyze and represent the dynamic properties of the environment, that is, geographic information in space and time. An STIS, for example, can produce a weather map, but more importantly, it can present a user with information in map or report form indicating how precipitation progresses in space over time to affect a watershed. Other uses include forestry and even electrical systems management. Forestry experts using an STIS are able to examine the rates of movement of forest fires, how they evolve over time, and their impact on forest growth over long periods of time. A large electrical network system manager uses an STIS to track the failures and repairs of electrical transformers. Use of an STIS in this case allows the reconstruction of the status of the network at any given past time. Mining Spatio-Temporal Information Systems, an edited volume, is composed of chapters from leading experts in the field of Spatio-Temporal Information Systems and addresses the many issues in support of modeling, creation, querying, visualizing and mining. Mining Spatio-Temporal Information Systems is intended to bring together a coherent body of recent knowledge relating to STIS data modeling, design, implementation and STIS in knowledge discovery. In particular, the reader is exposed to the latest techniques for the practical design of STIS, essential for complex query processing. Mining Spatio-Temporal Information Systems is structured to meet the needs of practitioners and researchers in industry and graduate-level students in Computer Science.

  • 14% sparen
    von Lizy Kurian John
    185,00 €

    The formal study of program behavior has become an essential ingredient in guiding the design of new computer architectures. Accurate characterization of applications leads to efficient design of high-performing architectures. Quantitative and analytical characterization of workloads is important to understand and exploit the interesting features of workloads. This book includes ten chapters on various aspects of workload characterization. File caching characteristics of the industry-standard web-serving benchmark SPECweb99 are presented by Keller et al. in Chapter 1, while the value locality of the SPECJVM98 benchmarks is characterized by Rychlik et al. in Chapter 2. The SPECJVM98 benchmarks are visited again in Chapter 3, where Tao et al. study the operating system activity in Java programs. In Chapter 4, KleinOsowski et al. describe how the SPEC2000 CPU benchmark suite may be adapted for computer architecture research and present the small, representative input data sets they created to reduce simulation time without compromising accuracy. Their research has been recognized by the Standard Performance Evaluation Corporation (SPEC) and is listed on the official SPEC website, http://www.spec.org/osg/cpu2000/research/umnl. The main contribution of Chapter 5 is the proposal of a new measure called the locality surface to characterize locality of reference in programs. Sorenson et al. describe how a three-dimensional surface can be used to represent both the temporal and spatial locality of programs. In Chapter 6, Thornock et al.

  • von Robert M. Gray
    113,00 €

    The Fourier transform is one of the most important mathematical tools in a wide variety of fields in science and engineering. In the abstract it can be viewed as the transformation of a signal in one domain (typically time or space) into another domain, the frequency domain. Applications of Fourier transforms, often called Fourier analysis or harmonic analysis, provide useful decompositions of signals into fundamental or "primitive" components, provide shortcuts to the computation of complicated sums and integrals, and often reveal hidden structure in data. Fourier analysis lies at the base of many theories of science and plays a fundamental role in practical engineering design. The origins of Fourier analysis in science can be found in Ptolemy's decomposing celestial orbits into cycles and epicycles and Pythagoras' decomposing music into consonances. Its modern history began with the eighteenth-century work of Bernoulli, Euler, and Gauss on what later came to be known as Fourier series. J. Fourier in his 1822 Theorie analytique de la Chaleur [16] (still available as a Dover reprint) was the first to claim that arbitrary periodic functions could be expanded in a trigonometric (later called a Fourier) series, a claim that was eventually shown to be incorrect, although not too far from the truth. It is an amusing historical sidelight that this work won a prize from the French Academy, in spite of serious concerns expressed by the judges (Laplace, Lagrange, and Legendre) regarding Fourier's lack of rigor.
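    The decomposition described above can be computed directly. The short sketch below (a plain O(N²) discrete Fourier transform, for illustration only) recovers the frequency content of a pure tone: a cosine with 2 cycles over 8 samples concentrates its energy in frequency bins 2 and 6.

```python
# Direct discrete Fourier transform: project the signal onto complex
# exponentials, one per frequency bin.
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

# A pure tone: 2 cycles over 8 samples.
N = 8
signal = [math.cos(2 * math.pi * 2 * n / N) for n in range(N)]
spectrum = dft(signal)
```

Fast Fourier transform algorithms compute the same result in O(N log N), which is what makes Fourier analysis so pervasive in practical engineering.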

  • 14% sparen
    von Osamu Wada
    185,00 €

    As we approach the end of the present century, the elementary particles of light (photons) are seen to be competing increasingly with the elementary particles of charge (electrons/holes) in the task of transmitting and processing the insatiable amounts of information needed by society. The massive enhancements in electronic signal processing that have taken place since the discovery of the transistor elegantly demonstrate how we have learned to make use of the strong interactions that exist between assemblages of electrons and holes, disposed in suitably designed geometries, and replicated on an increasingly fine scale. On the other hand, photons interact extremely weakly amongst themselves, and all-photonic active circuit elements, where photons control photons, are presently very difficult to realise, particularly in small volumes. Fortunately, rapid developments in the design and understanding of semiconductor injection lasers, coupled with newly recognized quantum phenomena that arise when device dimensions become comparable with electronic wavelengths, have clearly demonstrated how efficient and fast the interaction between electrons and photons can be. This latter situation has therefore provided a strong incentive to devise and study monolithic integrated circuits which involve both electrons and photons in their operation. As Chapter 1 notes, it is barely fifteen years since the first demonstrations of simple optoelectronic integrated circuits were realised using III-V compound semiconductors; these combined either a laser/driver or photodetector/preamplifier combination.

  • 13% sparen
    von Carlos H. Diaz
    139,00 €

    Electrical overstress (EOS) and Electrostatic discharge (ESD) pose one of the most dominant threats to integrated circuits (ICs). These reliability concerns are becoming more serious with the downward scaling of device feature sizes. Modeling of Electrical Overstress in Integrated Circuits presents a comprehensive analysis of EOS/ESD-related failures in I/O protection devices in integrated circuits. The design of I/O protection circuits has been done in a hit-or-miss way due to the lack of systematic analysis tools and concrete design guidelines. In general, the development of on-chip protection structures is a lengthy expensive iterative process that involves tester design, fabrication, testing and redesign. When the technology is changed, the same process has to be repeated almost entirely. This can be attributed to the lack of efficient CAD tools capable of simulating the device behavior up to the onset of failure which is a 3-D electrothermal problem. For these reasons, it is important to develop and use an adequate measure of the EOS robustness of integrated circuits in order to address the on-chip EOS protection issue. Fundamental understanding of the physical phenomena leading to device failures under ESD/EOS events is needed for the development of device models and CAD tools that can efficiently describe the device behavior up to the onset of thermal failure. Modeling of Electrical Overstress in Integrated Circuits is for VLSI designers and reliability engineers, particularly those who are working on the development of EOS/ESD analysis tools. CAD engineers working on development of circuit level and device level electrothermal simulators will also benefit from the material covered. This book will also be of interest to researchers and first and second year graduate students working in semiconductor devices and IC reliability fields.
