
Books in the series The Springer International Series in Engineering and Computer Science

  • Save 13%
    by Sorin Alexander Huss
    139,00 €

    Model engineering is an important activity within the design flow of integrated circuits and signal processing systems. This activity is by no means new in computer engineering, however, and takes a central role in practice. Model engineering of digital systems is based on agreed concepts of abstraction hierarchies for design object representations as well as the expressive power of hardware description languages (HDLs). Since their gradual introduction, HDLs have proved to form the foundation of design methodologies and related design flows. Design automation tools for simulation, synthesis, test generation, and, last but not least, for formal proof purposes rely heavily on standardized digital HDLs such as Verilog and VHDL. In contrast to purely digital systems, there is an increasing need to design and implement integrated systems which exploit more and more mixed-signal functional blocks such as A/D and D/A converters or phase-locked loops. Even purely analog blocks celebrate their resurrection in integrated systems design because of their unique efficiency when it comes to power consumption requirements, for example, or complexity limitations. Examples of such analog signal processing functions are filtering or sensor signal conditioning. In general, analog and mixed-signal processing is indispensable when interfacing the real world (i.e., analog signals) to computers (i.e., digital data processing).

  • Save 13%
    by João Goes
    139,00 €

    Systematic Design for Optimisation of Pipelined ADCs proposes and develops new strategies, methodologies and tools for designing low-power and low-area CMOS pipelined A/D converters. The task is tackled by following a scientifically consistent approach. First of all, the state of the art in pipeline A/D converters is analysed with a double purpose: a) to identify the strategies reported in the literature best suited to the objectives pursued; b) to identify the drawbacks of these strategies as a basic first step to improving them. Then, the book proposes a top-down design approach for implementing high-performance, low-power and low-area CMOS pipelined A/D converters through: the conception, development and implementation of self-calibration techniques to extend the linearity of some critical stages in the architecture of pipelined ADCs; and the detailed analysis and modelling of some major non-idealities that limit the physical realisation of pipelined ADCs, together with the proposal, development and implementation of design methodologies to support systematic design of optimised instances of these converters which combine maximum performance with minimum power dissipation and minimum area occupation. Several implementations together with consistent measured results are presented. In particular, a practical realisation of a low-power 14-bit 5 MS/s CMOS pipelined ADC with background analogue self-calibration is fully described. The proposed approach is fully in line with best practice in the design of mixed-signal integrated circuits. On the one hand, drawbacks of currently existing solutions are overcome through innovative strategies and, on the other hand, expert knowledge is packaged and made available for re-use by the community of circuit designers. Finally, feasibility of the strategies and the associated encapsulated knowledge is demonstrated through experimental validation of working silicon.
Systematic Design for Optimisation of Pipelined ADCs serves as an excellent reference for analogue design engineers, especially designers of low-power CMOS A/D converters. The book may also be used as a text for advanced reading on the subject.
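The stage-by-stage conversion a pipelined ADC performs can be illustrated with a small idealized model. This is a hypothetical 1-bit-per-stage sketch, not a design from the book; real converters must cope with the gain errors, offsets, and mismatch that motivate the book's self-calibration techniques.

```python
def pipeline_adc(vin, stages=4, vref=1.0):
    """Ideal 1-bit-per-stage pipelined A/D converter model.

    Each stage compares its input against mid-scale, emits one bit, and
    passes an amplified residue to the next stage. Purely illustrative:
    it omits every non-ideality a real pipelined ADC must handle.
    """
    code = 0
    residue = vin
    for _ in range(stages):
        bit = 1 if residue >= vref / 2 else 0
        code = (code << 1) | bit
        # subtract the resolved portion and amplify by 2 for the next stage
        residue = 2 * (residue - bit * (vref / 2))
    return code
```

For example, `pipeline_adc(0.6, stages=4)` resolves 0.6 V against a 1 V reference to the 4-bit code 9, i.e. the bin covering 0.5625 V to 0.625 V.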

  • Save 12%
    by Swaminathan Natarajan
    94,00 €

    Real-time systems are now used in a wide variety of applications. Conventionally, they were configured at design time to perform a given set of tasks and could not readily adapt to dynamic situations. The concept of imprecise and approximate computation has emerged as a promising approach to providing scheduling flexibility and enhanced dependability in dynamic real-time systems. The concept can be utilized in a wide variety of applications, including signal processing, machine vision, databases, networking, etc. In dynamic real-time systems that must deal safely with resource unavailability while continuing to operate, computations may not always be carried through to completion; the techniques of imprecise and approximate computation facilitate the generation of partial results that may enable the system to operate safely and avert catastrophe. Audience: Of special interest to researchers. May be used as a supplementary text in courses on real-time systems.
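The core idea of imprecise computation, a mandatory part that guarantees a usable result plus optional refinement steps that can be shed under time pressure, can be sketched as follows (a hypothetical illustration; the function and its structure are not taken from the book):

```python
import time

def imprecise_sqrt(x, deadline):
    """Square root with a mandatory part and optional refinements.

    The mandatory part yields a crude but usable estimate; each optional
    Newton step improves precision and is performed only while the
    deadline allows. If time runs out early, the caller still receives a
    partial result rather than nothing. (Illustrative sketch of the
    imprecise-computation idea, not an algorithm from the book.)
    """
    estimate = x / 2 if x >= 1 else 1.0    # mandatory part: rough first guess
    while time.monotonic() < deadline:     # optional part: refine while time remains
        estimate = 0.5 * (estimate + x / estimate)
    return estimate
```

With a generous deadline the result converges to full precision; with a zero time budget only the mandatory estimate is returned, degrading precision gracefully instead of failing.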

  • by Chenzhong Xu
    203,00 €

    Load Balancing in Parallel Computers: Theory and Practice is about the essential software technique of load balancing in distributed-memory message-passing parallel computers, also called multicomputers. Each processor has its own address space and has to communicate with other processors by message passing. In general, a direct, point-to-point interconnection network is used for the communications. Many commercial parallel computers are of this class, including the Intel Paragon, the Thinking Machines CM-5, and the IBM SP2. Load Balancing in Parallel Computers: Theory and Practice presents a comprehensive treatment of the subject using rigorous mathematical analyses and practical implementations. The focus is on nearest-neighbor load balancing methods, in which every processor at every step is restricted to balancing its workload with its direct neighbours only. Nearest-neighbor methods are iterative in nature, because a global balanced state is reached through processors' successive local operations. Since nearest-neighbor methods have a relatively relaxed requirement for the spread of local load information across the system, they are flexible in allowing one to control the balancing quality, effective at preserving communication locality, and easily scalable in parallel computers with a direct communication network. Load Balancing in Parallel Computers: Theory and Practice serves as an excellent reference source and may be used as a text for advanced courses on the subject.
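The flavor of an iterative nearest-neighbor scheme can be conveyed by a small diffusion sketch on a ring of processors. This is a generic illustration of the class of methods the book analyzes; the ring topology and the parameter `alpha` are assumptions of this sketch, not the book's exact formulation.

```python
def diffuse(load, alpha=0.5, steps=100):
    """Nearest-neighbor diffusion load balancing on a ring of processors.

    At every step, each processor exchanges a fixed fraction of its load
    imbalance with its two direct neighbors only; the repeated local
    exchanges drive all loads toward the global average while the total
    load is conserved.
    """
    n = len(load)
    for _ in range(steps):
        nxt = load[:]
        for i in range(n):
            left, right = load[(i - 1) % n], load[(i + 1) % n]
            # move alpha/2 of each pairwise imbalance toward this processor
            nxt[i] = load[i] + (alpha / 2) * ((left - load[i]) + (right - load[i]))
        load = nxt
    return load
```

Starting from `[10, 0, 0, 0]`, repeated steps converge toward the balanced state `[2.5, 2.5, 2.5, 2.5]` using only local information at each processor.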

  • Save 14%
    - A Perspective on the State of the Art
    by Ron Sun
    185,00 €

    Computational Architectures Integrating Neural and Symbolic Processes: A Perspective on the State of the Art focuses on a currently emerging body of research. With the reemergence of neural networks in the 1980s and their emphasis on overcoming some of the limitations of symbolic AI, there is clearly a need to support some form of high-level symbolic processing in connectionist networks. As argued by many researchers, on both the symbolic AI and connectionist sides, many cognitive tasks, e.g. language understanding and common sense reasoning, seem to require high-level symbolic capabilities. How these capabilities are realized in connectionist networks is a difficult question, and it constitutes the focus of this book. Computational Architectures Integrating Neural and Symbolic Processes addresses the underlying architectural aspects of the integration of neural and symbolic processes. In order to provide a basis for a deeper understanding of existing divergent approaches and to provide insight for further developments in this field, this book presents: (1) an examination of specific architectures (grouped together according to their approaches), their strengths and weaknesses, why they work, and what they predict, and (2) a critique and comparison of these approaches. Computational Architectures Integrating Neural and Symbolic Processes is of interest to researchers, graduate students, and interested laymen in areas such as cognitive science, artificial intelligence, computer science, cognitive psychology, and neurocomputing who wish to keep up to date with the newest research trends. It is a comprehensive, in-depth introduction to this newly emerging field.

  • Save 14%
    by Tomasz Imielinski
    277,00 €

    The rapid development of wireless digital communication technology has created capabilities that software systems are only beginning to exploit. The falling cost of both communication and of mobile computing devices (laptop computers, hand-held computers, etc.) is making wireless computing affordable not only to business users but also to consumers. Mobile computing is not a "scaled-down" version of the established and well-studied field of distributed computing. The nature of wireless communication media and the mobility of computers combine to create fundamentally new problems in networking, operating systems, and information systems. Furthermore, many of the applications envisioned for mobile computing place novel demands on software systems. Although mobile computing is still in its infancy, some basic concepts have been identified and several seminal experimental systems developed. This book includes a set of contributed papers that describe these concepts and systems. Other papers describe applications that are currently being deployed and tested. The first chapter offers an introduction to the field of mobile computing, a survey of technical issues, and a summary of the papers that comprise subsequent chapters. We have chosen to reprint several key papers that appeared previously in conference proceedings. Many of the papers in this book are being published here for the first time. Of these new papers, some are expanded versions of papers first presented at the NSF-sponsored Mobidata Workshop on Mobile and Wireless Information Systems, held at Rutgers University on Oct 31 and Nov 1, 1994.

  • Save 13%
    by Federico Bruccoleri
    130,00 €

    Low Noise Amplifiers (LNAs) are commonly used to amplify signals that are too weak for direct processing, for example in radio or cable receivers. Traditionally, low noise amplifiers are implemented via tuned amplifiers, exploiting inductors and capacitors in resonating LC-circuits. This can render very low noise, but only in a relatively narrow frequency band close to resonance. There is a clear trend to use more bandwidth for communication, both via cables (e.g. cable TV, internet) and wireless links (e.g. satellite links and Ultra-Wideband). Hence wideband low-noise amplifier techniques are very much needed. Wideband Low Noise Amplifiers Exploiting Thermal Noise Cancellation explores techniques to realize wideband amplifiers capable of impedance matching while still achieving a low noise figure well below 3 dB. This can be achieved with a new noise-cancelling technique as described in this book. Using this technique, the thermal noise of the input transistor of the LNA can be cancelled while the wanted signal is amplified! The book gives a detailed analysis of this technique and presents several new amplifier circuits. This book is directly relevant for IC designers and researchers working on integrated transceivers. Although the focus is on CMOS circuits, the techniques can just as well be applied to other IC technologies, e.g. bipolar and GaAs, and even to discrete component technologies.
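The arithmetic behind noise cancellation can be demonstrated numerically: the input device's noise reaches two amplifier paths with opposite sign while the signal keeps its sign, so a properly weighted sum cancels the noise and adds the signal. The path gains below are illustrative placeholders, not values from the book's circuits.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
signal = np.sin(2 * np.pi * 5 * t)        # wanted input signal
noise = rng.normal(0.0, 0.5, t.size)      # thermal noise of the input device

# In a noise-cancelling LNA the input device's noise appears at two nodes
# with opposite polarity, while the signal appears with the same polarity.
path_a = signal + noise                   # node 1: signal plus noise
path_b = 2.0 * signal - 2.0 * noise       # node 2: amplified signal, inverted noise

# Weighting path_a so its noise term matches path_b's and summing cancels
# the noise term exactly while the signal contributions add up.
output = 2.0 * path_a + path_b            # noise terms: 2*noise - 2*noise = 0
```

In this idealized arithmetic the output equals an amplified copy of the signal alone; the book's analysis covers how closely a real circuit can approach this cancellation.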

  • Save 13%
    - Machine Learning
    by Judy A. Franklin
    139,00 €

    Recent Advances in Robot Learning contains seven papers on robot learning written by leading researchers in the field. As the selection of papers illustrates, the field of robot learning is both active and diverse. A variety of machine learning methods, ranging from inductive logic programming to reinforcement learning, is being applied to many subproblems in robot perception and control, often with objectives as diverse as parameter calibration and concept formulation. While no unified robot learning framework has yet emerged to cover the variety of problems and approaches described in these papers and other publications, a clear set of shared issues underlies many robot learning problems. Machine learning, when applied to robotics, is situated: it is embedded into a real-world system that tightly integrates perception, decision making and execution. Since robot learning involves decision making, there is an inherent active learning issue. Robotic domains are usually complex, yet the expense of using actual robotic hardware often prohibits the collection of large amounts of training data. Most robotic systems are real-time systems. Decisions must be made within critical or practical time constraints. These characteristics present challenges and constraints to the learning system. Since these characteristics are shared by other important real-world application domains, robotics is a highly attractive area for research on machine learning. On the other hand, machine learning is also highly attractive to robotics. There is a great variety of open problems in robotics that defy a static, hand-coded solution. Recent Advances in Robot Learning is an edited volume of peer-reviewed original research comprising seven invited contributions by leading researchers. This research work has also been published as a special issue of Machine Learning (Volume 23, Numbers 2 and 3).

  • Save 14%
    - Design and Applications
    by Ali N. Akansu
    185,00 €

    The scientists and engineers of today are relentless in their continuing study and analysis of the world about us, from the microcosm to the macrocosm. A central purpose of this study is to gain sufficient scientific information and insight to enable the development of both representative and useful models of the superabundance of physical processes that surround us. Engineers need these models and the associated insight in order to build the information processing systems and control systems that comprise these new and emerging technologies. Much of the early modeling work that has been done on these systems has been based on linear time-invariant system theory and its extensive use of Fourier transform theory for both continuous and discrete systems and signals. However, many of the signals arising in nature and real systems are neither stationary nor linear but tend to be concentrated in both time and frequency. Hence a new methodology is needed to take these factors properly into account.

  • Save 13%
    by Kathleen Dahlgren
    139,00 €

    This book introduces a theory, Naive Semantics (NS), a theory of the knowledge underlying natural language understanding. The basic assumption of NS is that knowing what a word means is not very different from knowing anything else, so that there is no difference in form of cognitive representation between lexical semantics and encyclopedic knowledge. NS represents word meanings as commonsense knowledge, and builds no special representation language (other than elements of first-order logic). The idea of teaching computers commonsense knowledge originated with McCarthy and Hayes (1969), and has been extended by a number of researchers (Hobbs and Moore, 1985; Lenat et al., 1986). Commonsense knowledge is a set of naive beliefs, at times vague and inaccurate, about the way the world is structured. Traditionally, word meanings have been viewed as criterial, as giving truth conditions for membership in the classes words name. The theory of NS, in identifying word meanings with commonsense knowledge, sees word meanings as typical descriptions of classes of objects, rather than as criterial descriptions. Therefore, reasoning with NS representations is probabilistic rather than monotonic. This book is divided into two parts. Part I elaborates the theory of Naive Semantics. Chapter 1 illustrates and justifies the theory. Chapter 2 details the representation of nouns in the theory, and Chapter 4 the verbs, originally published as "Commonsense Reasoning with Verbs" (McDowell and Dahlgren, 1987). Chapter 3 describes kind types, which are naive constraints on noun representations.

  • Save 13%
    by Borko Furht
    140,00 €

    Multimedia computing has emerged in the last few years as a major area of research. Multimedia computer systems have opened a wide range of applications by combining a variety of information sources, such as voice, graphics, animation, images, audio, and full-motion video. Looking at the big picture, multimedia can be viewed as the merging of three industries: the computer, communications, and broadcasting industries. Research and development efforts can be divided into two areas. As the first area of research, much effort has been centered on the stand-alone multimedia workstation and associated software systems and tools, such as music composition, computer-aided education and training, and interactive video. However, the combination of multimedia computing with distributed systems offers even greater potential. New applications based on distributed multimedia systems include multimedia information systems, collaborative and video conferencing systems, on-demand multimedia services, and distance learning. Multimedia Systems and Techniques is one of two volumes published by Kluwer, both of which provide a broad introduction to this fast-moving area. The book covers fundamental concepts and techniques used in multimedia systems. The topics include multimedia objects and related models, multimedia compression techniques and standards, multimedia interfaces, multimedia storage techniques, multimedia communication and networking, multimedia synchronization techniques, multimedia information systems, scheduling in multimedia systems, and video indexing and retrieval techniques. Multimedia Systems and Techniques, together with its companion volume, Multimedia Tools and Applications, is intended for anyone involved in multimedia system design and applications and can be used as a textbook for advanced courses on multimedia.

  • Save 13%
    by Michael J. Franklin
    139,00 €

    Despite the significant ongoing work in the development of new database systems, many of the basic architectural and performance tradeoffs involved in their design have not previously been explored in a systematic manner. The designers of the various systems have adopted a wide range of strategies in areas such as process structure, client-server interaction, concurrency control, transaction management, and memory management. This monograph investigates several fundamental aspects of the emerging generation of database systems. It describes and investigates implementation techniques to provide high performance and scalability while maintaining the transaction semantics, reliability, and availability associated with more traditional database architectures. The common theme of the techniques developed here is the exploitation of client resources through caching-based data replication. Client Data Caching: A Foundation for High Performance Object Database Systems should be of value to anyone interested in the performance and architecture of distributed information systems in general and Object-based Database Management Systems in particular. It provides useful information for designers of such systems, as well as for practitioners who need to understand the inherent tradeoffs among the architectural alternatives in order to evaluate existing systems. Furthermore, many of the issues addressed in this book are relevant to other systems beyond the ODBMS domain. Such systems include shared-disk parallel database systems, distributed file systems, and distributed virtual memory systems. The presentation is suitable for practitioners and advanced students in all of these areas, although a basic understanding of database transaction semantics and techniques is assumed.

  • Save 13%
    by Sebastian Thrun
    139,00 - 140,00 €

    Lifelong learning addresses situations in which a learner faces a series of different learning tasks providing the opportunity for synergy among them. Explanation-based neural network learning (EBNN) is a machine learning algorithm that transfers knowledge across multiple learning tasks. When faced with a new learning task, EBNN exploits domain knowledge accumulated in previous learning tasks to guide generalization in the new one. As a result, EBNN generalizes more accurately from less data than comparable methods. Explanation-Based Neural Network Learning: A Lifelong Learning Approach describes the basic EBNN paradigm and investigates it in the context of supervised learning, reinforcement learning, robotics, and chess. 'The paradigm of lifelong learning - using earlier learned knowledge to improve subsequent learning - is a promising direction for a new generation of machine learning algorithms. Given the need for more accurate learning methods, it is difficult to imagine a future for machine learning that does not include this paradigm.' From the Foreword by Tom M. Mitchell.

  • Save 13%
    by Borko Furht
    140,00 €

    Multimedia computing has emerged in the last few years as a major area of research. Multimedia computer systems have opened a wide range of applications by combining a variety of information sources, such as voice, graphics, animation, images, audio, and full-motion video. Looking at the big picture, multimedia can be viewed as the merging of three industries: the computer, communications, and broadcasting industries. Research and development efforts in multimedia computing can be divided into two areas. As the first area of research, much effort has been centered on the stand-alone multimedia workstation and associated software systems and tools, such as music composition, computer-aided education and training, and interactive video. However, the combination of multimedia computing with distributed systems offers even greater potential. New applications based on distributed multimedia systems include multimedia information systems, collaborative and videoconferencing systems, on-demand multimedia services, and distance learning. Multimedia Tools and Applications is one of two volumes published by Kluwer, both of which provide a broad introduction to this fast-moving area. This book covers selected tools applied in multimedia systems and key multimedia applications. Topics presented include multimedia application development techniques, techniques for content-based manipulation of image databases, techniques for selection and dissemination of digital video, and tools for digital video segmentation. Selected key applications described in the book include multimedia news services, multimedia courseware and training, interactive television systems, digital video libraries, multimedia messaging systems, and interactive multimedia publishing systems. The second book, Multimedia Systems and Techniques, covers fundamental concepts and techniques used in multimedia systems.
The topics include multimedia objects and related models, multimedia compression techniques and standards, multimedia interfaces, multimedia storage techniques, multimedia communication and networking, multimedia synchronization techniques, multimedia information systems, scheduling in multimedia systems, and video indexing and retrieval techniques. Multimedia Tools and Applications, along with its companion volume, is intended for anyone involved in multimedia system design and applications and can be used as a textbook for advanced courses on multimedia.

  • Save 14%
    by Ravi Jain
    185,00 €

    Input/Output in Parallel and Distributed Computer Systems has attracted increasing attention over the last few years, as it has become apparent that input/output performance, rather than CPU performance, may be the key limiting factor in the performance of future systems. This I/O bottleneck is caused by the increasing speed mismatch between processing units and storage devices, the use of multiple processors operating simultaneously in parallel and distributed systems, and by the increasing I/O demands of new classes of applications, like multimedia. It is also important to note that, to varying degrees, the I/O bottleneck exists at multiple levels of the memory hierarchy. All indications are that the I/O bottleneck will be with us for some time to come, and is likely to increase in importance. Input/Output in Parallel and Distributed Computer Systems is based on papers presented at the 1994 and 1995 IOPADS workshops held in conjunction with the International Parallel Processing Symposium. This book is divided into three parts. Part I, the Introduction, contains four invited chapters which provide a tutorial survey of I/O issues in parallel and distributed systems. The chapters in Parts II and III contain selected research papers from the 1994 and 1995 IOPADS workshops; many of these papers have been substantially revised and updated for inclusion in this volume. Part II collects the papers from both years which deal with various aspects of system software, and Part III addresses architectural issues. Input/Output in Parallel and Distributed Computer Systems is suitable as a secondary text for graduate level courses in computer architecture, software engineering, and multimedia systems, and as a reference for researchers and practitioners in industry.

  • by Kai-Yuan Cai
    149,00 €

    Introduction to Fuzzy Reliability treats fuzzy methodology in hardware reliability and software reliability in a relatively systematic manner. The contents of this book are organized as follows. Chapter 1 places reliability engineering in the scope of a broader area, i.e. system failure engineering. Readers will find that although this book is confined to hardware and software reliability, it may be useful for other aspects of system failure engineering, like maintenance and quality control. Chapter 2 contains the elementary knowledge of fuzzy sets and possibility spaces which are required reading for the rest of this book. This chapter is included for the overall completeness of the book, but a few points (e.g. definition of conditional possibility and existence theorem of possibility space) may be new. Chapter 3 discusses how to calculate probist system reliability when the component reliabilities are represented by fuzzy numbers, and how to analyze fault trees when probabilities of basic events are fuzzy. Chapter 4 presents the basic theory of profust reliability, whereas Chapter 5 analyzes the profust reliability behavior of a number of engineering systems. Chapters 6 and 7 are devoted to probist reliability theory from two different perspectives. Chapter 8 discusses how to model software reliability behavior by using fuzzy methodology. Chapter 9 includes a number of mathematical problems which are raised by applications of fuzzy methodology in hardware and software reliability, but may be important for fuzzy set and possibility theories.
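As a taste of the Chapter 3 material, consider component reliabilities given as triangular fuzzy numbers (lo, mode, hi). Because the series-system reliability R = r1 * r2 * ... * rn is increasing in each argument on [0, 1], the support endpoints and modal value of the system reliability follow by component-wise multiplication. This is a simplified sketch under that monotonicity assumption; the book develops the general alpha-cut machinery.

```python
def fuzzy_series_reliability(components):
    """System reliability of a series system with triangular fuzzy inputs.

    Each component reliability is a triangular fuzzy number (lo, mode, hi)
    on [0, 1]. Since the series formula increases in every component
    reliability, the lower bound, modal value, and upper bound of the
    system reliability come from multiplying the corresponding values
    component-wise. (The exact membership function between these points
    requires alpha-cut arithmetic, as treated in the book.)
    """
    lo = mode = hi = 1.0
    for a, b, c in components:
        lo, mode, hi = lo * a, mode * b, hi * c
    return lo, mode, hi
```

For two components with fuzzy reliabilities (0.9, 0.95, 0.99) and (0.8, 0.9, 0.95), the series system's reliability is roughly (0.72, 0.855, 0.9405): certainly at least 0.72, most plausibly about 0.855.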

  • Save 14%
    by D. Paul Benjamin
    185,00 €

    Change of Representation and Inductive Bias addresses one of the most important emerging concerns of machine learning researchers: the dependence of their learning programs on the underlying representations, especially on the languages used to describe hypotheses. The effectiveness of learning algorithms is very sensitive to this choice of language; choosing too large a language permits too many possible hypotheses for a program to consider, precluding effective learning, but choosing too small a language can prohibit a program from being able to find acceptable hypotheses. This dependence is not just a pitfall, however; it is also an opportunity. The work of Saul Amarel over the past two decades has demonstrated the effectiveness of representational shift as a problem-solving technique. An increasing number of machine learning researchers are building programs that learn to alter their language to improve their effectiveness. At the Fourth Machine Learning Workshop held in June 1987 at the University of California at Irvine, it became clear that both the machine learning community and the number of topics it addresses had grown so large that the representation issue could not be discussed in sufficient depth. A number of attendees were particularly interested in the related topics of constructive induction, problem reformulation, representation selection, and multiple levels of abstraction. Rob Holte, Larry Rendell, and I decided to hold a workshop in 1988 to discuss these topics. To keep this workshop small, we decided that participation would be by invitation only.

  • Save 12%
    - A Special Issue of Machine Learning on Knowledge Acquisition
    by Sandra Marcus
    94,00 €

    What follows is a sampler of work in knowledge acquisition. It comprises three technical papers and six guest editorials. The technical papers give an in-depth look at some of the important issues and current approaches in knowledge acquisition. The editorials were produced by authors who were basically invited to sound off. I've tried to group and order the contributions somewhat coherently. The following annotations emphasize the connections among the separate pieces. Buchanan's editorial starts on the theme of "Can machine learning offer anything to expert systems?" He emphasizes the practical goals of knowledge acquisition and the challenge of aiming for them. Lenat's editorial briefly describes experience in the development of CYC that straddles both fields. He outlines a two-phase development that relies on an engineering approach early on and aims for a crossover to more automated techniques as the size of the knowledge base increases. Bareiss, Porter, and Murray give the first technical paper. It comes from a laboratory of machine learning researchers who have taken an interest in supporting the development of knowledge bases, with an emphasis on how development changes with the growth of the knowledge base. The paper describes two systems. The first, Protos, adjusts the training it expects and the assistance it provides as its knowledge grows. The second, KI, is a system that helps integrate knowledge into an already very large knowledge base.

  • Save 12%
    by Mohammed Ismail
    94,00 €

    Very large scale integration (VLSI) technologies are now maturing, with a current emphasis toward submicron structures and sophisticated applications combining digital as well as analog circuits on a single chip. Abundant examples are found in today's advanced systems for telecommunications, robotics, automotive electronics, image processing, intelligent sensors, etc. Exciting new applications are being unveiled in the field of neural computing, where the massive use of analog/digital VLSI technologies will have a significant impact. To match such a fast technological trend towards single-chip analog/digital VLSI systems, researchers worldwide have long realized the vital need of producing advanced computer-aided tools for designing both digital and analog circuits and systems for silicon integration. Architecture and circuit compilation, device sizing and layout generation are but a few familiar tasks in the world of digital integrated circuit design which can be efficiently accomplished by mature computer-aided tools. In contrast, the art of tools for designing and producing analog or even analog/digital integrated circuits is quite primitive and still lacking the industrial penetration and acceptance already achieved by digital counterparts. In fact, analog design is commonly perceived to be one of the most knowledge-intensive design tasks, and analog circuits are still designed, largely by hand, by experts intimately familiar with nuances of the target application and integrated circuit fabrication process. The techniques needed to build good analog circuits seem to exist solely as expertise invested in individual designers.

  • Save 12%
    by Steven L. Salzberg
    94,00 €

    Machine Learning is one of the oldest and most intriguing areas of Artificial Intelligence. From the moment that computer visionaries first began to conceive the potential for general-purpose symbolic computation, the concept of a machine that could learn by itself has been an ever-present goal. Today, although there have been many implemented computer programs that can be said to learn, we are still far from achieving the lofty visions of self-organizing automata that spring to mind when we think of machine learning. We have established some base camps and scaled some of the foothills of this epic intellectual adventure, but we are still far from the lofty peaks that the imagination conjures up. Nevertheless, a solid foundation of theory and technique has begun to develop around a variety of specialized learning tasks. Such tasks include discovery of optimal or effective parameter settings for controlling processes, automatic acquisition or refinement of rules for controlling behavior in rule-driven systems, and automatic classification and diagnosis of items on the basis of their features. Contributions include algorithms for optimal parameter estimation, feedback and adaptation algorithms, strategies for credit/blame assignment, techniques for rule and category acquisition, theoretical results dealing with learnability of various classes by formal automata, and empirical investigations of the abilities of many different learning algorithms in a diversity of application areas.

  • Save 13%
    by Kit Man Cham, Paul Vandevoorde, Keunmyung Lee, et al.
    140,00 €

    examples are presented. These chapters are intended to introduce the reader to the programs. The program structure and models used will be described only briefly. Since these programs are in the public domain (with the exception of the parasitic simulation programs), the reader is referred to the manuals for more details. In this second edition, the process program SUPREM III has been added to Chapter 2. The device simulation program PISCES has replaced the program SIFCOD in Chapter 3. A three-dimensional parasitics simulator, FCAP3, has been added to Chapter 4. It is clear that these programs, or other programs with similar capabilities, will be indispensable for VLSI/ULSI device development. Part B of the book presents case studies, where the application of simulation tools to solve VLSI device design problems is described in detail. The physics of the problems is illustrated with the aid of numerical simulations. Solutions to these problems are presented. Issues in state-of-the-art device development such as drain-induced barrier lowering, trench isolation, hot electron effects, device scaling and interconnect parasitics are discussed. In this second edition, two new chapters are added. Chapter 6 presents the methodology and significance of benchmarking simulation programs, in this case the SUPREM III program. Chapter 13 describes a systematic approach to investigating the sensitivity of device characteristics to process variations, as well as the trade-offs between different device designs.

  • Save 13%
    by Edmund H. Durfee
    139,00 - 140,00 €

    As artificial intelligence (AI) is applied to more complex problems and a wider set of applications, the ability to take advantage of the computational power of distributed and parallel hardware architectures, and to match these architectures with the inherent distributed aspects of applications (spatial, functional, or temporal), has become an important research issue. Out of these research concerns, an AI subdiscipline called distributed problem solving has emerged. Distributed problem-solving systems are broadly defined as loosely coupled, distributed networks of semi-autonomous problem-solving agents that perform sophisticated problem solving and cooperatively interact to solve problems. Nodes operate asynchronously and in parallel with limited internode communication. Limited internode communication stems either from inherent bandwidth limitations of the communication medium or from the high computational cost of packaging and assimilating information to be sent and received among agents. Structuring network problem solving to deal with the consequences of limited communication - the lack of a global view and the possibility that the individual agents may not have all the information necessary to accurately and completely solve their subproblems - is one of the major focuses of distributed problem-solving research. It is this focus that is also one of the important distinguishing characteristics of distributed problem-solving research, setting it apart from previous research in AI.

  • Save 13%
    by Evan Tick
    139,00 €

    One suspects that the people who use computers for their livelihood are growing more "sophisticated" as the field of computer science evolves. This view might be defended by the expanding use of languages such as C and Lisp, in contrast to languages such as FORTRAN and COBOL. This hypothesis is false, however - computer languages are not like natural languages, where successive generations stick with the language of their ancestors. Computer programmers do not grow more sophisticated - programmers simply take the time to muddle through the increasingly complex language semantics in an attempt to write useful programs. Of course, these programmers are "sophisticated" in the same sense as are hackers of MockLisp, PostScript, and TeX - highly specialized and tedious languages. It is quite frustrating how this myth of sophistication is propagated by some industries, universities, and government agencies. When I was an undergraduate at MIT, I distinctly remember the convoluted questions on exams concerning dynamic scoping in Lisp - the emphasis was placed solely on a "hacker's" view of computation, i.e., the control and manipulation of storage cells. No consideration was given to the logical structure of programs. Within the past five years, Ada and Common Lisp have become programming language standards, despite their complexity (note that dynamic scoping was dropped even from Common Lisp). Of course, most industries' selection of programming languages is primarily driven by the requirement for compatibility (with previous software) and performance.

  • Save 13%
    by William J. McCalla
    139,00 €

    From little more than a circuit-theoretical concept in 1965, computer-aided circuit simulation developed into an essential and routinely used design tool in less than ten years. In 1965 it was costly and time consuming to analyze circuits consisting of a half-dozen transistors. By 1975 circuits composed of hundreds of transistors were analyzed routinely. Today, simulation capabilities easily extend to thousands of transistors. Circuit designers use simulation as routinely as they used to use a slide rule and almost as easily as they now use hand-held calculators. However, just as with the slide rule or hand-held calculator, some designers are found to use circuit simulation more effectively than others. They ask better questions, do fewer analyses, and get better answers. In general, they are more effective in using circuit simulation as a design tool. Why? Certainly, design experience, skill, intuition, and even luck contribute to a designer's effectiveness. At the same time those who design and develop circuit simulation programs would like to believe that their programs are so easy and straightforward to use, so well debugged and so efficient that even their own grandmother could design effectively using their program.

  • Save 12%
    by Paul E. Utgoff
    94,00 €

    This book is based on the author's Ph.D. dissertation [56]. The thesis research was conducted while the author was a graduate student in the Department of Computer Science at Rutgers University. The book was prepared at the University of Massachusetts at Amherst, where the author is currently an Assistant Professor in the Department of Computer and Information Science. Programs that learn concepts from examples are guided not only by the examples (and counterexamples) that they observe, but also by bias that determines which concept is to be considered as following best from the observations. Selection of a concept represents an inductive leap, because the concept then indicates the classification of instances that have not yet been observed by the learning program. Learning programs that make undesirable inductive leaps do so due to undesirable bias. The research problem addressed here is to show how a learning program can learn a desirable inductive bias.

  • Save 12%
    by Ken Carlberg
    94,00 €

    With the tragic airline disaster in New York City on September 11th, 2001, the subject of emergency communications has become very important. Preferential Emergency Communications: From Telecommunications to the Internet is intended to provide an in-depth exposure to authorized emergency communications. These communications generally involve preferential treatment of signaling and/or data to help ensure forwarding of information through a network. This book covers examples ranging from private networks to current investigations using Next Generation Networks (i.e., IP-based communications). The information acts as a reference for network designers, network vendors, and users of authorized emergency communications services. Preferential Emergency Communications: From Telecommunications to the Internet, a professional monograph, is divided into three sections. The first describes systems and protocols that have been deployed as private networks for use by government agencies like the U.S. Department of Defense. This section also presents an in-depth discussion of MLPP. We then present current work in the area of Land Mobile Radio, commonly used by local emergency personnel such as police and firemen. This second section also describes systems that have been deployed over the public switched telephone network. Finally, the third section presents insights on trying to support emergency communications over TCP/IP networks and the Internet. Here we look into which IETF protocols can be considered candidates for change, as well as those protocols and applications that should not be altered. Preferential Emergency Communications: From Telecommunications to the Internet is designed to meet the needs of a professional audience composed of practitioners and researchers in industry. This book is also suitable for senior undergraduate and graduate-level students in computer science and electrical engineering.

  • Save 13%
    by Soha Hassoun
    140,00 €

    Research and development of logic synthesis and verification have matured considerably over the past two decades. Many commercial products are available, and they have been critical in harnessing advances in fabrication technology to produce today's plethora of electronic components. While this maturity is reassuring, the advances in fabrication continue to present seemingly unwieldy challenges. Logic Synthesis and Verification provides a state-of-the-art view of logic synthesis and verification. It consists of fifteen chapters, each focusing on a distinct aspect. Each chapter presents key developments, outlines future challenges, and lists essential references. Two unique features of this book are technical strength and comprehensiveness. The book chapters are written by twenty-eight recognized leaders in the field and reviewed by equally qualified experts. The topics collectively span the field. Logic Synthesis and Verification fills a current gap in the existing CAD literature. Each chapter contains essential information to study a topic in great depth and to understand further developments in the field. The book is intended for seniors, graduate students, researchers, and developers of related Computer-Aided Design (CAD) tools. From the foreword: "The commercial success of logic synthesis and verification is due in large part to the ideas of many of the authors of this book. Their innovative work contributed to design automation tools that permanently changed the course of electronic design." by Aart J. de Geus, Chairman and CEO, Synopsys, Inc.

  • Save 13%
    by Christoforos N. Hadjicostis
    139,00 - 140,00 €

    Coding Approaches to Fault Tolerance in Combinational and Dynamic Systems describes coding approaches for designing fault-tolerant systems, i.e., systems that exhibit structured redundancy that enables them to distinguish between correct and incorrect results or between valid and invalid states. Since redundancy is expensive and counter-intuitive to the traditional notion of system design, the book focuses on resource-efficient methodologies that avoid excessive use of redundancy by exploiting the algorithmic/dynamic structure of a particular combinational or dynamic system. The first part of the book focuses on fault-tolerant combinational systems, providing a review of von Neumann's classical work on Probabilistic Logics (including some more recent work on noisy gates) and describing the use of arithmetic coding and algorithm-based fault-tolerant schemes in algebraic settings. The second part of the book focuses on fault tolerance in dynamic systems. It also discusses how, in a dynamic system setting, one can relax the traditional assumption that the error-correcting mechanism is fault-free by using distributed error-correcting mechanisms. The final chapter presents a methodology for fault diagnosis in discrete event systems that are described by Petri net models; coding techniques are used to quickly detect and identify failures. From the Foreword: "Hadjicostis has significantly expanded the setting to processes occurring in more general algebraic and dynamic systems...
The book responds to the growing need to handle faults in complex digital chips and complex networked systems, and to consider the effects of faults at the design stage rather than afterwards." George Verghese, Massachusetts Institute of Technology. Coding Approaches to Fault Tolerance in Combinational and Dynamic Systems will be of interest to both researchers and practitioners in the area of fault tolerance, systems design and control.

  • Save 15%
    - From Cluster to Grid Computing
    by Péter Kacsuk
    100,00 - 185,00 €

    Distributed and Parallel Systems: From Cluster to Grid Computing is an edited volume based on DAPSYS 2006, the 6th Austrian-Hungarian Workshop on Distributed and Parallel Systems, which is dedicated to all aspects of distributed and parallel computing. The workshop was held in conjunction with the 2nd Austrian Grid Symposium in Innsbruck, Austria, in September 2006. Distributed and Parallel Systems: From Cluster to Grid Computing is designed for a professional audience composed of practitioners and researchers in industry. This book is also suitable for advanced-level students in computer science.

  • Save 14%
    by Louis C. Westphal
    277,00 €

    This book is a revision and extension of my 1995 Sourcebook of Control Systems Engineering. Because of the extensions and other modifications, it has been retitled Handbook of Control Systems Engineering, which it is intended to be for its prime audience: advanced undergraduate students, beginning graduate students, and practising engineers needing an understandable review of the field or of recent developments which may prove useful. There are several differences between this edition and the first.
    * Two new chapters on aspects of nonlinear systems have been incorporated. In the first of these, selected material for nonlinear systems is concentrated on four aspects: showing the value of certain linear controllers, arguing the suitability of algebraic linearization, reviewing the semi-classical methods of harmonic balance, and introducing the nonlinear change of variable technique known as feedback linearization. In the second chapter, the topic of variable structure control, often with sliding mode, is introduced.
    * Another new chapter introduces discrete event systems, including several approaches to their analysis.
    * The chapters on robust control and intelligent control have been extensively revised.
    * Modest revisions and extensions have also been made to other chapters, often to incorporate extensions to nonlinear systems.
