
Books published by Springer US

  • Save 14%
    by Masaru Kitsuregawa & Hidehiko Tanaka
    277,00 - 279,00 €

  • Save 14%
    by V. E. Andreucci
    185,00 €

  • Save 14%
    by M. A. Pai
    257,00 - 258,00 €

  • Save 13%
    by R. H. J. M. Otten & L. P. P. P. van Ginneken
    139,00 €

  • Save 14%
    by Gholamreza Darai
    185,00 - 208,00 €

  • Save 14%
    by R. Hoff-Jørgensen & G. Pétursson
    185,00 €

  • Save 14%
    by Adrian Spitzer
    185,00 €

    Genetic disorders have emerged as a prominent cause of morbidity and mortality among infants and adults. As many as 10% to 20% of hospital admissions and at least 10% of the mortality in this age group are due to inherited diseases. There are at least two factors that have brought genetic disorders into the forefront of pediatrics. One is a great reduction in childhood mortality due to infections and nutritional deficiency states, and the other is the rapid progress made in the identification of genetic defects. Amniocentesis, chorionic villus sampling, and recombinant DNA technology have already had a tremendous impact on the practice of medicine. This is why the first two chapters of this volume are dedicated to general principles of molecular genetics and to a description of the techniques used to diagnose genetic disorders at the DNA level. The relevance of this new area of science to the study of inherited renal diseases is reflected in the large body of knowledge that has been generated regarding the association between various glomerular nephritides and genetic markers such as the HLA system, and even more impressively in the direct or indirect identification of abnormal genes or gene products in Alport's syndrome, autosomal dominant polycystic kidney disease, and Lowe's syndrome. These discoveries figure prominently in the pages of this book. Yet, the progress we have made has barely scratched the surface of the problem.

  • Save 13%
    by Robert F. Ozols
    139,00 €

  • Save 14%
    by V. E. Andreucci
    185,00 €

  • Save 14%
    by S. Abraham
    276,00 €

  • Save 14%
    by Anne-Lise Christensen & David W. Ellis
    276,00 €

  • Save 13%
    by Robert A. Iannucci
    139,00 - 140,00 €

    It is universally accepted today that parallel processing is here to stay but that software for parallel machines is still difficult to develop. However, there is little recognition of the fact that changes in processor architecture can significantly ease the development of software. In the seventies, the availability of processors that could address a large name space directly eliminated the problem of name management at one level and paved the way for the routine development of large programs. Similarly, today, processor architectures that can facilitate cheap synchronization and provide a global address space can simplify compiler development for parallel machines. If the cost of synchronization remains high, the programming of parallel machines will remain significantly less abstract than programming sequential machines. In this monograph Bob Iannucci presents the design and analysis of an architecture that can be a better building block for parallel machines than any von Neumann processor. There is another very interesting motivation behind this work. It is rooted in the long and venerable history of dataflow graphs as a formalism for expressing parallel computation. The field has bloomed since 1974, when Dennis and Misunas proposed a truly novel architecture using dataflow graphs as the parallel machine language. The novelty and elegance of dataflow architectures has, however, also kept us from asking the real question: "What can dataflow architectures buy us that von Neumann architectures can't?" In the following I explain in a roundabout way how Bob and I arrived at this question.
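
    The foreword's central notion, a graph whose nodes fire as soon as their operands arrive rather than when a program counter reaches them, is easy to make concrete. The Python sketch below is a toy illustration of that firing rule under assumptions of my own (two-operand nodes, invented node names); it is not the hybrid architecture the monograph itself analyses.

    # Minimal sketch of the dataflow firing rule the foreword alludes to:
    # a node may execute ("fire") as soon as all of its input tokens have
    # arrived, with no program counter imposing an order.  Illustrative
    # toy only, not Iannucci's architecture.
    from collections import deque

    # Graph for (a + b) * (c - d): each node lists its operator and the
    # nodes that consume its result.  Names are invented for the example.
    GRAPH = {
        "add": {"op": lambda x, y: x + y, "consumers": ["mul"]},
        "sub": {"op": lambda x, y: x - y, "consumers": ["mul"]},
        "mul": {"op": lambda x, y: x * y, "consumers": []},
    }

    def run(initial_tokens):
        """Drive the graph purely by token availability."""
        tokens = {node: [] for node in GRAPH}          # waiting operands per node
        for node, value in initial_tokens:
            tokens[node].append(value)
        ready = deque(n for n in GRAPH if len(tokens[n]) == 2)
        results = {}
        while ready:
            node = ready.popleft()
            x, y = tokens[node]
            results[node] = GRAPH[node]["op"](x, y)
            for consumer in GRAPH[node]["consumers"]:
                tokens[consumer].append(results[node])
                if len(tokens[consumer]) == 2:         # firing rule: all inputs present
                    ready.append(consumer)
        return results

    # (a, b, c, d) = (2, 3, 10, 4): "add" and "sub" could fire in parallel.
    print(run([("add", 2), ("add", 3), ("sub", 10), ("sub", 4)]))  # {'add': 5, 'sub': 6, 'mul': 30}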

  • Save 12%
    by Steven L. Salzberg
    94,00 €

    Machine Learning is one of the oldest and most intriguing areas of Artificial Intelligence. From the moment that computer visionaries first began to conceive the potential for general-purpose symbolic computation, the concept of a machine that could learn by itself has been an ever present goal. Today, although there have been many implemented computer programs that can be said to learn, we are still far from achieving the lofty visions of self-organizing automata that spring to mind when we think of machine learning. We have established some base camps and scaled some of the foothills of this epic intellectual adventure, but we are still far from the lofty peaks that the imagination conjures up. Nevertheless, a solid foundation of theory and technique has begun to develop around a variety of specialized learning tasks. Such tasks include discovery of optimal or effective parameter settings for controlling processes, automatic acquisition or refinement of rules for controlling behavior in rule-driven systems, and automatic classification and diagnosis of items on the basis of their features. Contributions include algorithms for optimal parameter estimation, feedback and adaptation algorithms, strategies for credit/blame assignment, techniques for rule and category acquisition, theoretical results dealing with learnability of various classes by formal automata, and empirical investigations of the abilities of many different learning algorithms in a diversity of application areas.
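
    One of the tasks the foreword names, classifying items on the basis of their features, can be shown in a few lines. The nearest-neighbour sketch below is my own minimal illustration of that task, not the learning method the book itself develops; the feature vectors and labels are invented.

    # Toy illustration of "classification of items on the basis of their
    # features": a 1-nearest-neighbour rule over invented feature vectors.

    def classify(item, examples):
        """Return the label of the stored example closest to `item`."""
        def sq_dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        _, label = min(examples, key=lambda ex: sq_dist(ex[0], item))
        return label

    # (feature vector, label) pairs standing in for previously seen items.
    training = [
        ((0.9, 0.1), "class A"),
        ((0.8, 0.3), "class A"),
        ((0.2, 0.9), "class B"),
        ((0.1, 0.7), "class B"),
    ]

    print(classify((0.85, 0.2), training))  # -> "class A"
    print(classify((0.15, 0.8), training))  # -> "class B"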

  • Save 12%
    by Michel DuBois & Shreekant S. Thakkar
    94,00 €

  • Save 12%
    by Debashis Bhattacharya
    94,00 €

    Test generation is one of the most difficult tasks facing the designer of complex VLSI-based digital systems. Much of this difficulty is attributable to the almost universal use in testing of low, gate-level circuit and fault models that predate integrated circuit technology. It has long been recognized that the testing problem can be alleviated by the use of higher-level methods in which multigate modules or cells are the primitive components in test generation; however, the development of such methods has proceeded very slowly. To be acceptable, high-level approaches should be applicable to most types of digital circuits, and should provide fault coverage comparable to that of traditional, low-level methods. The fault coverage problem has, perhaps, been the most intractable, due to continued reliance in the testing industry on the single stuck-line (SSL) fault model, which is tightly bound to the gate level of abstraction. This monograph presents a novel approach to solving the foregoing problem. It is based on the systematic use of multibit vectors rather than single bits to represent logic signals, including fault signals. A circuit is viewed as a collection of high-level components such as adders, multiplexers, and registers, interconnected by n-bit buses. To match this high-level circuit model, we introduce a high-level bus fault that, in effect, replaces a large number of SSL faults and allows them to be tested in parallel. However, by reducing the bus size from n to one, we can obtain the traditional gate-level circuit and fault models.
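
    The key idea the blurb describes, packing many single stuck-line faults into one multibit word so they are exercised in parallel, can be sketched with ordinary integer bitwise operations. The Python example below is my own simplified illustration of bit-parallel stuck-at simulation; the tiny circuit, fault list, and helper names are assumptions, not the monograph's actual bus-fault model.

    # Bit-parallel stuck-at fault simulation in the spirit of the blurb:
    # bit position k of every word carries the signal value in "machine" k
    # (k = 0 is the fault-free circuit, k >= 1 has one injected stuck-at
    # fault).  The circuit y = (a AND b) OR c and the fault list are
    # invented for illustration.

    WIDTH = 4                      # copy 0: fault-free, copies 1-3: faulty
    ALL = (1 << WIDTH) - 1

    def replicate(bit):
        """Spread one logic value across all circuit copies."""
        return ALL if bit else 0

    def inject(word, site_copy, stuck_value):
        """Force the signal in one copy to its stuck value."""
        mask = 1 << site_copy
        return (word | mask) if stuck_value else (word & ~mask)

    def simulate(a, b, c):
        A = inject(replicate(a), 1, 0)      # copy 1: input a stuck-at-0
        B = inject(replicate(b), 2, 1)      # copy 2: input b stuck-at-1
        AND = A & B
        AND = inject(AND, 3, 0)             # copy 3: AND output stuck-at-0
        return AND | replicate(c)           # y for all copies at once

    def detected(a, b, c):
        y = simulate(a, b, c)
        good = y & 1                        # fault-free response in copy 0
        return [k for k in range(1, WIDTH) if ((y >> k) & 1) != good]

    print(detected(1, 1, 0))   # pattern 110 detects faults 1 and 3 -> [1, 3]
    print(detected(1, 0, 0))   # pattern 100 detects fault 2 -> [2]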

  • Save 14%
    by T. A. Depner
    185,00 - 257,00 €

  • Save 14%
    by David E. Tupper & Keith D. Cicerone
    185,00 €

  • Save 14%
    by Bruce S. Schoenberg & Roger J. Porter
    276,00 €

  • Save 13%
    by R. Gordon Booth
    140,00 €

  • Save 13%
    by Frank A. Paine
    186,00 €

  • Save 14%
    by Naranjan S. Dhalla
    276,00 - 304,00 €

    Heart Hypertrophy and Failure brings together leading basic scientists and clinicians, presenting improved knowledge of the pathophysiology and treatment of the condition. The result is a synthesis of state-of-the-art information on molecular biology, cellular physiology and structure-function relationships in the cardiovascular system in health and disease. The papers presented describe fundamental mechanisms underlying changes in the cellular machinery during the development of cardiac hypertrophy and heart failure. Audience: Students, scientists, clinical and experimental cardiologists who seek to understand and manage the perplexing problems of hypertrophy and heart failure.

  • Save 13%
    by S. T. Beckett
    140,00 - 146,00 €

  • Save 13%
    by Kathleen Dahlgren
    139,00 €

    This book introduces Naive Semantics (NS), a theory of the knowledge underlying natural language understanding. The basic assumption of NS is that knowing what a word means is not very different from knowing anything else, so that there is no difference in form of cognitive representation between lexical semantics and encyclopedic knowledge. NS represents word meanings as commonsense knowledge, and builds no special representation language (other than elements of first-order logic). The idea of teaching computers commonsense knowledge originated with McCarthy and Hayes (1969), and has been extended by a number of researchers (Hobbs and Moore, 1985; Lenat et al., 1986). Commonsense knowledge is a set of naive beliefs, at times vague and inaccurate, about the way the world is structured. Traditionally, word meanings have been viewed as criterial, as giving truth conditions for membership in the classes words name. The theory of NS, in identifying word meanings with commonsense knowledge, sees word meanings as typical descriptions of classes of objects, rather than as criterial descriptions. Therefore, reasoning with NS representations is probabilistic rather than monotonic. This book is divided into two parts. Part I elaborates the theory of Naive Semantics. Chapter 1 illustrates and justifies the theory. Chapter 2 details the representation of nouns in the theory, and Chapter 4 the verbs, originally published as "Commonsense Reasoning with Verbs" (McDowell and Dahlgren, 1987). Chapter 3 describes kind types, which are naive constraints on noun representations.
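
    The contrast the blurb draws, word meanings as typical rather than criterial descriptions, can be made concrete with a small sketch. The representation and scoring rule below are my own illustration of that idea, not the formalism the book defines; the example features for "bird" are invented.

    # Illustration of "typical" versus "criterial" word meanings.
    # A criterial definition demands every condition; a typical (naive,
    # defeasible) description only scores how well an item matches the
    # features usually associated with the word.

    CRITERIAL_BIRD = {"has_feathers", "lays_eggs", "flies"}      # hard conditions
    TYPICAL_BIRD   = {"has_feathers", "lays_eggs", "flies",      # commonsense,
                      "sings", "small"}                          # usually-true traits

    def criterial_member(item_features):
        """Classical membership: all conditions must hold."""
        return CRITERIAL_BIRD <= item_features

    def typicality(item_features):
        """Graded membership: fraction of typical features the item shows."""
        return len(TYPICAL_BIRD & item_features) / len(TYPICAL_BIRD)

    penguin = {"has_feathers", "lays_eggs", "swims"}
    robin   = {"has_feathers", "lays_eggs", "flies", "sings", "small"}

    print(criterial_member(penguin), typicality(penguin))  # False 0.4
    print(criterial_member(robin),   typicality(robin))    # True  1.0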

  • Save 13%
    by Allen M. Dewey
    139,00 €

    This book describes a new type of computer-aided VLSI design tool, called VLSI System Planning, that is meant to aid designers during the early, or conceptual, stage of design. During this stage of design, the objective is to define a general design plan, or approach, that is likely to result in an efficient implementation satisfying the initial specifications, or to determine that the initial specifications are not realizable. A design plan is a collection of high-level design decisions. As an example, the conceptual design of digital filters involves choosing the type of algorithm to implement (e.g., finite impulse response or infinite impulse response), the type of polynomial approximation (e.g., equiripple or Chebyshev), the fabrication technology (e.g., CMOS or BiCMOS), and so on. Once a particular design plan is chosen, the detailed design phase can begin. It is during this phase that various synthesis, simulation, layout, and test activities occur to refine the conceptual design, gradually filling in more detail until the design is finally realized. The principal advantage of VLSI System Planning is that the increasingly expensive resources of the detailed design process are more efficiently managed. Costly redesigns are minimized because the detailed design process is guided by a more credible, consistent, and correct design plan.
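
    The blurb's notion of a design plan as a collection of high-level design decisions translates naturally into a small data structure. The sketch below is a hypothetical encoding of the digital-filter example it gives (the decision dimensions and option names come from the blurb; the class and the feasibility rule are invented), not the tool's actual representation.

    # A design plan as "a collection of high-level design decisions",
    # using the digital-filter dimensions named in the blurb.
    from dataclasses import dataclass
    from itertools import product

    DECISIONS = {
        "algorithm":     ["finite impulse response", "infinite impulse response"],
        "approximation": ["equiripple", "Chebyshev"],
        "technology":    ["CMOS", "BiCMOS"],
    }

    @dataclass(frozen=True)
    class DesignPlan:
        algorithm: str
        approximation: str
        technology: str

    def candidate_plans():
        """Enumerate every combination of the high-level decisions."""
        for algo, approx, tech in product(*DECISIONS.values()):
            yield DesignPlan(algo, approx, tech)

    # A planner would prune plans unlikely to meet the specification;
    # this placeholder rule (invented) just illustrates the idea.
    def meets_spec(plan, low_power_required):
        return not (low_power_required and plan.technology == "BiCMOS")

    viable = [p for p in candidate_plans() if meets_spec(p, low_power_required=True)]
    for plan in viable:
        print(plan)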

  • Save 13%
    by R. A. van Santen & Hans (J. W.) Niemantsverdriet
    140,00 - 148,00 €

  • Save 11%
    by Arnold Perlmutter, Stephan L. Mintz & Behram N. Kursunoğlu
    95,00 €

  • Save 12%
    by Xiaopeng Li & Mohammed Ismail
    94,00 €

  • Save 13%
    by Klaus Atzwanger
    139,00 - 140,00 €

    Rough-and-tumble play provided one of the paradigmatic examples of the application of ethological methods, back in the 1970s. Since then, a modest number of researchers have developed our knowledge of this kind of activity, using a variety of methods, and addressing some quite fundamental questions about age changes, sex differences, nature and function of behaviour. In this chapter I will review work on this topic, mentioning particularly the interest in comparing results from different informants and different methods of investigation. Briefly, rough-and-tumble play (or R&T for short) refers to a cluster of behaviours whose core is rough but playful wrestling and tumbling on the ground; and whose general characteristic is that the behaviours seem to be agonistic but in a non-serious, playful context. The varieties of R&T, and the detailed differences between rough-and-tumble play and real fighting, will be discussed later.

    2. A BRIEF HISTORY OF RESEARCH ON R&T

    In his pioneering work on human play, Groos (1901) described many kinds of rough-and-tumble play. However, R&T was virtually an ignored topic from then until the late 1960s. There was, of course, a flowering of observational research on children in the 1920s and 1930s, especially in North America; but this research had a strong practical orientation, and lacked the cross-species perspective and evolutionary orientation present in Groos' work.
