
Books published by MOHAMMED ABDUL SATTAR

  • by Divya B.
    35,00 €

    Breast cancer is the most commonly diagnosed cancer in women and a major cause of morbidity and mortality among women worldwide. Cancer statistics from developed countries such as the United States report that about 12% of women in the general population will develop breast cancer at some point in their lives, and that it accounts for nearly 23% of all cancers in American women. Tumors without distant metastasis that are more than 4 cm in lateral dimension, or tumors of any size with direct extension to the chest wall or skin, are classified as locally advanced breast cancer (LABC). The 5-year survival rate of LABC patients is less than 30%, and the median survival is about 2-2.5 years [14]. Compared with patients diagnosed with early-stage disease, LABC patients have a higher risk of locoregional recurrence and distant metastasis, reduced quality of life and overall survival, and they require systemic therapy for improved outcomes. The standard of care for LABC patients is neoadjuvant chemotherapy (NACT) followed by surgery and postoperative whole-breast radiotherapy. Neoadjuvant therapy comprises treatments administered before surgical removal of the breast tumor; it shrinks inoperable large tumors so that they can be removed with less extensive surgery. NACT avoids mastectomy (removal of all breast tissue) in 25% of LABC patients and also improves overall survival after breast-conserving surgery. Combining NACT with preoperative radiation therapy (RT) results in an even higher rate of pathological complete response (pCR) and breast conservation.

  • by Mohammad Arshad
    40,00 €

    In the present era, emerging technologies need current-carrying electrical components that combine good functionality and durability with low power dissipation. The growing demand for non-volatile, faster, multifunctional memory and logic devices of small size has inspired the scientific community worldwide. Recently, much attention has turned to spintronics, and in particular to micro-electromechanical systems that exploit the control of the magnetic (spin) state via electric fields and/or vice versa. Such phenomena utilize the intrinsic spin of electrons, rather than their charge, for data storage. In light of this, next-generation electronic devices, including solid-state transformers, highly sensitive dc and ac magnetic field sensors, electrically tunable microwave filters, and electromagnetic-optic actuators, need to be smaller in size while supporting the coexistence of several order parameters, i.e., magnetization, polarization, and strain. A special class of materials that unite these ferroic orders is termed multiferroics. Multiferroic materials are those in which more than one ferroic order coexists and is coupled. The subset with coexisting ferroelectric and magnetic order, the magnetoelectrics, is the most interesting. A single-phase multiferroic material is one that possesses two or all three of the so-called "ferroic" properties, i.e., ferromagnetism, ferroelectricity, and ferroelasticity. The term "multiferroic" was first used by Schmid. In current studies and literature, however, it is primarily the coexistence of magnetism and ferroelectricity that is described as multiferroic. On the other hand, magnetoelectric (ME) coupling is possible regardless of the character of the magnetic and electric order; for instance, ME coupling can occur in paramagnetic ferroelectrics. ME coupling may also develop between the two order parameters directly, or indirectly through strain. In strain-mediated indirect ME coupling, the magnetic and electric order parameters emerge in different but closely connected phases. Owing to their potential industrial applications, magnetoelectric multiferroics have attracted increasing attention from the scientific community.

  • by Rafaut Noor
    38,00 €

    Education encompasses a wide spectrum of knowledge and information that is difficult to define. Whatever broadens our horizons, deepens our insight, enhances our understanding, refines our emotions, piques our curiosity, and stimulates our thoughts and feelings educates us in some way. Education draws out the innate potential of students and unfolds their natural abilities and interests before society. The process of education starts at birth, and it includes all forms of influence, direct or indirect, formal or informal, deliberate or incidental, planned or unplanned. The academic achievement of students is correlated with the potential and capabilities developed as outcomes of education. Thus, educationists try to concentrate fully on developing the potential of students, recognizing and channelling it for the benefit of the individual and society. Adolescence is an age of many twists and turns, involving multiple transitions in education, vocation, social interactions, upcoming responsibilities and future life. A wide and deep understanding of adolescence draws on information from different perspectives, including philosophy, psychology, biology, politics, sociology, sports, and the multimedia industries. As this technetronic world progresses by leaps and bounds, media has taken over the globe. Along with various psychological factors, celebrity culture has emerged as one of the most influential factors affecting the self-identity and academic achievement of students, especially at the senior secondary level. Because youngsters in this group tend to be influenced and fascinated very quickly by the glamorous world, they develop their self-identity accordingly, wanting to be the best among their peers. Humans have a natural tendency to identify with people they admire and to turn to them for inspiration, fantasy, romance or gossip. These admired people are known as "celebrities", who typically embody the strongest human desires: love, romance, passion, courage, imitation, inspiration, life goals and so on. Celebrities seem charming and fascinating to ordinary people because they appear to inhabit a parallel universe, one that is just like ours yet light-years beyond reach, and in comparison ordinary lives seem boring and dull. Academic achievement is the basis on which the entire educational outcome of students is judged, and it is one of the major aims of education. Apart from serving as a criterion for promotion to the next class, academic achievement is an indicator of success, and it in turn shapes the future pattern of one's life. All educators try to understand the strengths and weaknesses of the students they teach, and they explore the factors affecting their students' educational outcomes. These factors play an important role in understanding the relationship between students' self-identity and academic achievement.

  • by Veeraankalu Vuyyuru
    35,00 €

    Rainfall is essential to the survival of all living things; it matters not only for humans but also for animals, plants, and every other organism. Water is one of the most vital natural resources on the planet and plays an important part in agriculture and farming. Changes in climatic conditions, together with rising greenhouse gas emissions, have made it harder for humans and the planet to receive the appropriate quantity of rainfall to meet human requirements and daily use. As a result, it has become critical to analyze shifting rainfall patterns and to attempt to forecast rain, not only for human needs but also for environmental purposes, such as anticipating natural disasters caused by unexpected heavy rains. Weather forecasting is the process of predicting the future state of the atmosphere at a specific location by applying current technology and methods to observations of the atmosphere's present state, such as humidity, temperature, and wind. Meteorology is the study of the earth's atmosphere and of variations in moisture and temperature patterns; data collected about present atmospheric conditions are used to produce weather forecasts. The major challenges in weather forecasting lie in the atmosphere's chaotic nature, incomplete knowledge of the underlying processes, and the forecast range to be predicted. With automatic weather stations or trained observers, the process starts with observing the earth's surface and atmosphere and collecting information such as wind speed and direction, temperature, precipitation, and humidity. Rainfall plays a substantial role in the climate system, as it directly influences other major factors such as agriculture, ecosystems, and water resource management. If not properly managed, heavy rainfall can also lead to natural disasters such as floods, mudslides, and landslides, and every year these disasters severely affect human life and infrastructure. In recent times, machine learning has gained importance because of its ability to solve many conventional engineering problems. Machine learning algorithms train computers on data and use statistical analysis to automatically predict outputs for new inputs.
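As a toy illustration of that last point (my own sketch, not material from the book), a regression model can be trained on synthetic weather observations and then asked to predict rainfall for a new input; the feature names and data here are invented for the example.

```python
# Hypothetical sketch: predicting rainfall from weather features with a
# machine-learning regressor, using synthetic (not real) observations.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
humidity = rng.uniform(20, 100, n)        # %
temperature = rng.uniform(10, 40, n)      # deg C
wind_speed = rng.uniform(0, 15, n)        # m/s
# Synthetic "ground truth": rainfall loosely increases with humidity.
rainfall = np.clip(0.3 * humidity - 0.2 * temperature + rng.normal(0, 3, n), 0, None)

X = np.column_stack([humidity, temperature, wind_speed])
X_train, X_test, y_train, y_test = train_test_split(X, rainfall, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                      # train the model on data
print("R^2 on held-out data:", model.score(X_test, y_test))
print("Predicted rainfall (mm):", model.predict([[85.0, 24.0, 3.5]]))
```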

  • by Attaur Rahman Azami
    36,00 €

    "According to Friedman, a nominal zero interest rate is a requirement for the optimal allocation of resources; by examining general equilibrium models, it was found that zero interest rates are both necessary and sufficient for allocative efficiency." Interest-free banking (IFB) is based on the Islamic Shariah doctrine of "Fiqh al-Muamalat". There are two main tenets: first, the parties should share profits and losses; second, lenders and investors are prohibited from collecting and paying interest. The collection of interest, "Riba", is expressly forbidden under Shariah. Additionally, "investments in pork, gambling, entertainment, and other banned goods are strictly prohibited. In 51 nations, including the United States and the United Kingdom, there were more than 500 Islamic banks." IFB was thus developed to meet customer demand for banking services that comply with Shariah regulations. Islamic thought views "the interest-free bank as a halal alternative that protects the interests of the servants from the harms associated with haram, which is prohibited by the Almighty's saying (God has permitted the sale and prohibited usury)" (Quran 2:275). The fundamental justification for the ban on interest is that it is oppressive and unfair: the rich who can lend can also charge interest on those loans, allowing them to amass wealth at the expense of those in need. The Islamic financial system was reported to be standing on solid ground, particularly in the aftermath of the 2008 financial crisis, owing to the presence of Islamic financial institutions in numerous nations. Currently, the UAE, Saudi Arabia, Bahrain, Britain, Malaysia, and Singapore are vying with one another to establish themselves as the world's leading centres for IFB. The Islamic finance sector has now begun to grow beyond the major Muslim nations, transcending national boundaries and religious restrictions, and has experienced tremendous expansion in Europe, the "United Kingdom", "Australia", "Hong Kong", "Korea", and the "United States".

  • by Shashikumar G S
    32,00 €

    Emotions are critical to human behaviour: they are significant in daily interactions with others and in decision making. In physiological terms, an emotion is a short-lived event, and emotion recognition is closely related to human wellness. Because the environment presents constant demands, the phenomenon of emotion represents an efficient mode of adaptation. Human interaction can be divided into two channels: one transmits explicit messages, which may be about anything or nothing; the other transmits implicit messages about the speakers themselves. Enormous effort has gone into the linguistic analysis of the explicit channel, but the second channel is not as well understood. Understanding the other person's emotion is one of the key challenges associated with these implicit messages. Emotion can be expressed through actions such as words, sounds, facial expressions, and body language; however, emotions expressed in such actions are sometimes deliberately manipulated, and real feelings are not conveyed clearly. Researchers describe emotion as a mental state driven by strong or deep neural impulses; these impulses arise subjectively rather than through conscious effort. To move an organism into action, emotion induces either a positive or a negative psychological response. Affective computing, a cutting-edge area of computing, works to enhance human-computer interaction (HCI). Owing to a lack of emotional intelligence, today's HCI systems are deficient and unable to distinguish different emotional states in people. By recognizing the emotional cues that emerge during HCI and evoking emotional reactions, affective computing seeks to close this gap. If computers can identify emotions, this will improve the design and development of systems that can respond to them. The healthcare industry would benefit greatly from wearable technology that can distinguish different human emotions in real time and respond accordingly.

  • by Priyanka Singh Tomar
    36,00 €

    The current evolution of West Nile virus (WNV) disease, with increases in the frequency, number and severity of cases in many countries, reveals that new epidemiological scenarios are appearing across the world. The first United States (US) outbreak of WNV occurred in New York City in 1999; 68 human infections, mostly encephalitis and meningitis, were reported, resulting in 7 deaths. Since 1999, WNV epidemics have repeatedly re-emerged, with about 30,000 confirmed human cases and more than 1,000 deaths through 2006. Analyses have revealed the spread of a new dominant WNV genotype, named WN02, which has been rising in the US since 2002. Phylogenetic comparison of complete and partial nucleotide sequences from US isolates collected between the 1999 and 2000 outbreaks with the WN-NY99 isolate revealed a high level of genetic similarity, while a study of 22 WNV isolates from 2001-2002 revealed genetic diversity relative to the WN-NY99 genotype. The genus Flavivirus includes more than 70 viruses with a wide global range. Geographically important flaviviruses that infect humans include JEV, Zika virus (ZIKAV), Dengue virus (DENV), Murray Valley encephalitis virus (MVEV), WNV and Yellow fever virus (YFV). Most of these viruses are responsible for hemorrhagic disease or encephalitic clinical manifestations, such as capillary leakage and fever, which can ultimately lead to death. According to some analyses, more than half of the world's population is at risk of viral infection by WNV, DENV, ZIKAV, JEV and YFV. WNV transmission usually occurs through the bite of infected Culex species mosquitoes on healthy humans and horses; Culex mosquitoes are recognized as the most important vector of WNV infection. Humans are most commonly infected by mosquitoes that have acquired the virus while feeding on WNV-infected birds. About 20% of WNV-infected individuals develop symptoms after an incubation period of 3-14 days. The common symptoms of WNV are headache, fever, rashes, nausea and vomiting, but the situation becomes critical when neurological complications such as encephalitis, meningitis and acute flaccid paralysis occur in young or elderly individuals.

  • by Apurva S. Kittur
    36,00 €

    The term Internet of Things (IoT) was coined in 1999 by Kevin Ashton. IoT can be defined in many ways: 'Internet' refers to the interconnectivity of devices to create a network, and 'Things' refers to the embedded objects or devices that can connect to the Internet. One definition is 'a network of sensors and smart devices which sense data that is further processed and analysed in a ubiquitous network.' IoT has seen rapid development in recent years because of its 'smartness.' Its applications include smart cities, smart homes, smart health, transport and logistics, weather monitoring and forecasting, etc. These applications have millions of devices generating large volumes of data. The rapid development of IoT has also attracted the curiosity of attackers. The devices (nodes) in an IoT network have weak security protocols because of their limited computational ability and energy, so the nodes are vulnerable to various kinds of attacks. An IoT network contains sensors, actuators, controllers, gateway heads, sinks, etc. For our purposes, we broadly classify these devices into sensor nodes and gateway nodes: sensor nodes include low-computation end devices such as sensors and actuators, and gateway nodes include gateways, sinks, cluster heads, etc. Gateway nodes have better computational power than sensor nodes, but the overall network is not capable of handling complex cryptographic operations. A signature is a unique way to identify a signer: the authenticity of a person, organization or entity is verified through its signature. A signature can be handwritten or digital. Digital signatures are used to verify the content of a received message and the signer's identity in digital communication, and the exponential growth of Internet technology has led to growth in their usage. Every signer generates a unique signature using his or her private key; the verifier possesses the signer's public key and uses it to verify the signature. The signature generated by the signer can be verified by anyone who has access to the public key. The concept of batch verification was introduced to reduce verification time and complexity: batch verification schemes verify multiple signatures together, whether they were signed by a single signer or by multiple signers.
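To make the batch-verification idea concrete, here is a toy textbook-RSA sketch (my own illustration, not the scheme studied in the book): each signature is sigma_i = H(m_i)^d mod N, and instead of checking sigma_i^e = H(m_i) mod N once per signature, the verifier checks a single product equation for the whole batch. Real deployments use padded, standardized signatures and randomized batch tests to resist forgery; this only shows the interface and the cost saving.

```python
# Toy textbook-RSA batch verification (illustrative only, not secure).
import hashlib
from math import prod

def next_prime(n: int) -> int:
    """Smallest prime >= n (trial division; fine for tiny demo primes)."""
    def is_prime(k):
        if k < 2:
            return False
        i = 2
        while i * i <= k:
            if k % i == 0:
                return False
            i += 1
        return True
    while not is_prime(n):
        n += 1
    return n

# Tiny RSA key for demonstration (real keys are 2048+ bits).
p = next_prime(10**6)
q = next_prime(p + 1)
N, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def H(msg: bytes) -> int:
    """Hash a message into Z_N (toy 'full-domain hash')."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

def sign(msg: bytes) -> int:
    return pow(H(msg), d, N)

def verify_one(msg: bytes, sig: int) -> bool:
    return pow(sig, e, N) == H(msg)

def verify_batch(msgs, sigs) -> bool:
    # One modular exponentiation for the whole batch instead of one per signature.
    return pow(prod(sigs) % N, e, N) == prod(H(m) for m in msgs) % N

msgs = [b"reading-1", b"reading-2", b"reading-3"]
sigs = [sign(m) for m in msgs]
print(all(verify_one(m, s) for m, s in zip(msgs, sigs)))  # one exponentiation each
print(verify_batch(msgs, sigs))                           # single exponentiation for all
```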

  • by Amit Kumar Trivedi
    35,00 €

    Developments in electronic circuit manufacturing technology have made it feasible to build very compact yet powerful mobile devices, and society is increasingly connected through these devices and fast, advanced communication technologies. In this electronically advanced society, card- or token-based person identification or authentication is no longer trusted: the information on a card can be stolen, and a duplicate card exactly like the original can be made very easily. A PIN or password can be guessed from information that the person shares on social networks. Even a secret password or card can be shared, so these credentials cannot provide security against repudiation. The use of idiosyncratic anatomical or behavioural characteristics, called biometric identifiers, attributes, properties or traits, to establish the individuality of a person for either identification or verification is called biometric recognition, or simply biometrics. Biometric traits cannot be shared, stolen or duplicated; they are intrinsic properties of an individual's body or behaviour and are inseparable from the individual. Establishing a person's identity, both positively and negatively, from his or her body or behaviour is a very powerful tool for identity management and has immense potential. All of this makes biometrics a very appealing pattern-recognition research problem, but it should be used carefully, for the benefit of society and to reduce identity fraud. A person's identity needs to be verified or established in many situations: is he or she authorised to enter a privileged facility or to access privileged information, or is the person wanted for some crime? Sometimes it must be checked whether a person has already availed of a social service, or whether he or she is eligible for it. Unreliable answers to these questions may lead to financial or non-financial loss. Biometric identifiers provide a reliable answer to such questions, as they cannot be easily forged, shared or lost compared with traditional token- or knowledge-based identifiers. Biometric recognition is more convenient, secure and efficient, and hence offers better accountability.

  • by Deepika T
    35,00 €

    The relentless ascent of the cloud computing paradigm has attracted focused attention in the framework of Industry 4.0. Today, cloud computing services are used by about 70% of business organizations, and a further 10% plan to adopt them. As a result, an estimated 4,000 additional data centers, housing some 400 million servers, will be needed over the next decade. In 2013, the projected energy use of United States data centers was 91 billion kWh of electricity, equivalent to the yearly output of 34 large (500-megawatt) coal-fired power plants and sufficient to supply all households in New York City for two years. In the next few years this is expected to escalate to approximately 140 billion kWh, emitting almost 150 million metric tons of carbon annually. Amazon, specifically, spends nearly half of its operating budget powering and cooling its server farms. Excessive power consumption also raises system temperature, and every 10 °C increase tends to double the failure rate of electronic devices. Data-center power use is forecast to reach 3-13% of worldwide electricity consumption by 2030. Hyper-Scale Data Centers (HSDCs) account for 5% of this worldwide power use, while Small and Medium-Scale Data Centers (SMSDCs) consume the remaining 95%. The U.S. hosts nearly 5.17 million servers (40%) in SMSDCs, and in recent years SMSDCs equipped with high-performance computing utilities have tended to drive up server power consumption. This calls for monitoring and control measures to curtail power use and minimize the carbon footprint of SMSDCs. A cloud data center is a group of connected Physical Machines (PMs), or hosts, used by organizations for network processing, remote storage, and access to enormous volumes of data; data centers are the backbone of cloud environments. Virtualization plays a significant role in data centers, facilitating resource sharing among customers through Virtual Machines (VMs). The IaaS layer uses virtualization technology to create VMs, consolidate workloads, and deliver computational resources to end users. The Industry 4.0 environment encompasses the extensive growth of big-data applications and pervasive Internet of Things technology, and data centers are central to the modern industrial business world. Consequently, almost 80% of business organizations are planning to move to cloud computing technology, which promises to enhance business functionality. Extensive enhancements in SMSDC infrastructure comprise a diverse set of connected devices that disseminate resources to end users.
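As a back-of-the-envelope illustration of the temperature rule quoted above (taking the doubling-per-10 °C heuristic exactly as stated in the text, not as a precise reliability model), the relative failure rate can be sketched as follows.

```python
# Rough sketch of the "failure rate doubles per 10 degC rise" heuristic.
def relative_failure_rate(delta_t_celsius: float) -> float:
    """Failure-rate multiplier for a temperature rise of delta_t_celsius."""
    return 2 ** (delta_t_celsius / 10.0)

for dt in (0, 10, 20, 30):
    print(f"+{dt:>2} degC above baseline -> {relative_failure_rate(dt):.0f}x failure rate")
```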

  • by Mahak
    33,00 €

    The development of the World Wide Web has produced a huge and growing amount of data and information, and this data has become an important information resource for all internet users. Furthermore, the low cost of web data makes it attractive to researchers. Users can retrieve web data by browsing and keyword searching, but these techniques have many limitations: a single search typically returns many web links, so it is hard for users to retrieve data efficiently. Most web pages contain both text and hyperlinks to other documents, and mailing lists, newsgroups, and forums are further data sources. Web mining can support the design and implementation of web information systems, but extracting web information remains a challenge. Web mining should be able to assess web information sources according to user needs, including the availability, importance and relevance of web systems; it should be able to select the extracted data, because web sites contain both relevant and irrelevant information; and it should make it easy to collect and analyze data, build models and establish their validity. The number of internet users has grown significantly over the last decade and continues to grow, and with it the data available on the web continues to increase exponentially. E-commerce is one of the application areas related to information mining, and the fast growth of internet users has boosted e-business applications. Many attempts have been made at "breaking the syntax barrier" on the web, and many of them depend on text corpora whose semantic information is exploited purely by statistical techniques. The ontology framework plays a key role in the semantic web and in artificial intelligence, building on the Resource Description Framework (RDF) and XML. Ontology has developed into an essential modelling tool for building intelligent systems, representing domain knowledge in a form that is understandable by both humans and machines. Ontologies play an important role in making interoperability among heterogeneous systems efficient and smooth. An ontology essentially provides the links between the concepts of a particular domain. Its aim is to capture knowledge about a domain that can be shared between people and applications, specifying the semantics precisely in a generic way and offering a basis for agreement within the area. In general, an ontology comprises four key components: instances, concepts, axioms and relations. The concept is the key element: it denotes a basic domain category, a collection or group of objects or an abstract set, and normally represents common knowledge shared among a group of members.
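As a small, self-contained illustration of the idea that an ontology links domain concepts, instances and relations (my own example using the rdflib library and an invented example.org namespace, not material from the book), a few RDF triples can be built and serialized like this:

```python
# Minimal RDF sketch: a domain concept, an instance of it, and a relation between instances.
from rdflib import Graph, Namespace, RDF, RDFS, Literal

EX = Namespace("http://example.org/shop#")   # invented namespace for the example
g = Graph()
g.bind("ex", EX)

# Concept (class) and an instance of it.
g.add((EX.Product, RDF.type, RDFS.Class))
g.add((EX.laptop42, RDF.type, EX.Product))
g.add((EX.laptop42, RDFS.label, Literal("14-inch laptop")))

# Relation between two instances.
g.add((EX.alice, EX.purchased, EX.laptop42))

print(g.serialize(format="turtle"))
```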

  • by Prachi Telang
    36,00 €

    Phase transitions and new phases of matter continue to challenge our understanding of quantum condensed matter systems. For decades, phase transitions were thought to be driven by the breaking of some symmetry: magnetic ordering, for example, emerges from broken time-reversal symmetry, while the crystallization of water into ice involves broken translational symmetry. This paradigm was challenged by the discovery of topological order in certain condensed matter systems in 1982. Since then, many electronic phases with novel topologies have been predicted theoretically, and a few have also been realized experimentally. Examples of these electronic phases, across the research frontier, span from graphene to topological insulators and beyond. Still, several predicted topological phases are yet to be realized experimentally, which is why significant effort is concentrated on materials with strong spin-orbit interaction, a fertile ground for the experimental realization of novel topological phases. Much of condensed matter physics is about how different kinds of order emerge from interactions between many simple constituents. An ideal example is the family of transition metal oxides (TMOs), where a plethora of novel phenomena, such as high-Tc superconductivity, colossal magnetoresistance, and metal-insulator transitions, arise from the complex interplay of electronic interactions such as electron correlations, crystal-field splitting and spin-orbit coupling. The magnitude of these interactions depends on factors such as the atomic number of the transition metal, the underlying crystal geometry, and the ligand environment around the transition metal. Understanding the interplay of these interactions, which ultimately decides the ground state, is one of the fundamental problems in condensed matter physics. The failure of band theory to explain the insulating ground state of some TMOs highlighted the importance of electron-electron interactions. In TMOs, conduction is carried by d-orbitals, which are spatially compact compared with the s- or p-orbitals of simple metals; this results in strong Coulomb repulsion between the d electrons. The transition from a metallic to an insulating state driven by strong electron correlations was successfully modelled theoretically via the Hubbard model, introduced by John Hubbard.
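For reference (the standard textbook form, not notation specific to this book), the single-band Hubbard Hamiltonian mentioned above is

```latex
H \;=\; -t \sum_{\langle i,j \rangle,\sigma}
      \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
      \;+\; U \sum_{i} n_{i\uparrow}\, n_{i\downarrow},
```

where t is the hopping amplitude between neighbouring sites, U the on-site Coulomb repulsion, c† and c the electron creation and annihilation operators, and n the number operator; the competition captured by the ratio U/t is what drives the correlation-induced metal-insulator transition described in the text.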

  • by Anil Kumar
    33,00 €

    Energy sources can be classified as primary and secondary. Primary energy sources are normally categorized as renewable or non-renewable on the basis of their depleting characteristics. Renewable energy is derived from natural resources that are automatically replenished and is also known as clean energy. Secondary energy sources, such as heat and electricity, are derived from the transformation of primary energy sources. Renewable energy sources are inexhaustible, but under Indian climatic conditions solar energy is the most suitable. Both solar photovoltaic (PV) and solar thermal technologies are efficient and gaining popularity across the globe; the choice between them depends on utility and suitability. For energy storage and round-the-clock (24x7) supply, solar thermal technology is gaining more popularity than solar PV. Solar thermal technology is also known as Concentrating Solar Power (CSP). CSP systems can be economically feasible at a minimum Direct Normal Irradiance (DNI) of 1600-1800 kWh/m2/year by utilizing novel technologies and materials, economies of scale, supportive renewable-energy policies, etc. CSP systems have a range of prime objectives: to operate in an environmentally safe manner, to reduce initial cost and ground area, to increase long-term system reliability, and to facilitate ease of service and maintenance. Sequestration of one ton of carbon dioxide equivalent (CO2-e) corresponds to one CER unit. CSP technologies can be categorized as line-focus and point-focus systems.

  • by Vinayak N Kulkarni
    35,00 €

    Bone is a functional organ associated with other elements such as the vascular system, cartilage, connective tissue and nervous components; body movement is possible when these functional organs act together with the skeletal muscles. In the skeletal system, large bones are more prone to injury than small and medium bones, and injured bones are treated through internal and external fixation of implants. Apart from long-bone injury, joint replacement is another major intervention in which bone is expected to host a biomaterial, and the healing of a patient's injury depends on how the bone responds to the implanted biomaterial. The demand for biomaterials is increasing year by year. The field of biomaterials has been booming for the last few decades because of the ageing population and the growth in the average weight of people across the globe, in addition to everyday accidents and sports injuries. The heart, blood vessels, shoulders, knees, spine, elbows, ears, hips and dental structures are a few parts of the body in which biomaterials are used, as artificial valves, stents, replacement implants and bone plates. Most replacements are registered for hip, knee, spine and dental problems. It has been estimated that by 2030 the number of total knee arthroplasty surgeries will grow by 673%, and total hip replacements by 174%, compared with present replacement rates. The mechanical qualities of bone deteriorate and degrade as a result of excessive loading on bones and joints and the absence of normal biological self-healing, which causes degenerative illnesses. One of the best solutions to such problems is to use an artificial biomaterial as an implant of suitable size and shape, which helps restore body movement in the affected joints and compromised structures. Replacement surgeries have undoubtedly matured, but revision surgeries of knee, hip and spinal implants have been increasing rapidly, which is a matter of concern: these revision surgeries are expensive, have a low success rate and, most importantly, are painful for patients. The aims of present-day researchers are therefore not only to develop appropriate bio-implants but also to select suitable machining techniques and procedures for producing bio-implant alloys, improving the quality of the implant material so as to avoid revision surgeries.

  • by Charu Sharma
    35,00 €

    In recent years, Wireless Sensor Networks (WSNs) have gained global attention because they provide cost-effective solutions to a variety of real-world problems. However, because of wireless communication and the lack of a physical medium, these networks are vulnerable to many cyber-attacks, and false information can easily be injected to degrade network performance. The communication of sensor nodes (SNs) is exposed to different types of security threats, and because of the limited storage and low power of SNs, many conventional security solutions are unachievable; extensive research is therefore required to achieve security in WSNs. Different issues related to security in WSNs are analyzed and discussed in this chapter, along with the various research objectives addressed in this thesis. WSNs offer immense benefits due to their low cost, flexibility to work in different environments, ease of deployment (no cables necessary), easy troubleshooting, minimal installation cost and high performance. These networks consist of autonomous, small, portable smart SNs, or motes, which are randomly deployed in an unattended fashion and connected via wireless communication. WSNs are a rapidly developing technology that can be utilized in a variety of dangerous and unusual conditions, including forest-fire detection, battlefields, agriculture, industry, health care, oceanography and wildlife movement monitoring, as well as to automate mundane tasks. WSNs are highly scalable, versatile and, because of their low maintenance, ideally suited for real-time monitoring. The sensor network is made up of SNs that sense data and relay it to a cluster head (CH); the role of a CH is to gather data, process it and communicate with other CHs in order to pass the information to the base station (BS). The SNs have restricted resources in terms of power, bandwidth,
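A minimal sketch (my own illustration, not from the book) of the data flow just described, sensor nodes reporting to a cluster head that aggregates and forwards to the base station, might look like this:

```python
# Toy model of the WSN data flow described above: SNs -> CH (aggregate) -> BS.
import random
from statistics import mean

class SensorNode:
    def __init__(self, node_id: int):
        self.node_id = node_id
    def sense(self) -> float:
        return round(random.uniform(20.0, 30.0), 2)   # e.g. a temperature reading

class ClusterHead:
    def __init__(self, members):
        self.members = members
    def collect_and_aggregate(self) -> dict:
        readings = [sn.sense() for sn in self.members]  # gather data from member SNs
        return {"n_readings": len(readings), "avg": round(mean(readings), 2)}

class BaseStation:
    def receive(self, report: dict):
        print("BS received aggregated report:", report)

sns = [SensorNode(i) for i in range(5)]
ch = ClusterHead(sns)
BaseStation().receive(ch.collect_and_aggregate())       # CH forwards the aggregate to the BS
```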

  • by Samraj Mollick
    36,00 €

    Porosity is an important feature of a material. In the conventional sense, porous materials must have permanent, interconnected voids that are permeable to gases or liquids. Porosity is observed in rocks, ceramics, soils, biological tissue, charcoal and dried plant husks, and porous materials have long been used for filtration, purification, petrochemistry, cooling, etc. [1-4] In modern times, there is a growing demand for porous solids not only in adsorption and catalysis but also in the energy, environmental and health sectors. Porous solids have a ubiquitous influence on our society and have long been indispensable in daily life. However, target-specific applications are almost impossible with traditional porous solids because their structure-property relationships remain unknown. Therefore, the search for a new class of porous materials, in which order and functionality can be fine-tuned at the atomic level, is essential. Such exquisite control over the entire structure of a porous material permits tailoring it for challenging and appealing applications that traditional porous solids cannot address. This newly developed class of porous materials has been termed advanced porous materials (APMs). The discovery of APMs at the end of the 1980s brought a revolution in chemistry and materials engineering, and the ever-growing number of researchers entering the field, drawn by the essentially unlimited chemical and structural possibilities of these materials, presents ample scope for imaginative chemists. The unique arrangement of organic moieties, inorganic moieties or combinations of both affords a new class of porous materials that is potentially complementary to, and even more attractive than, traditional porous solids. APMs encompass a wide range of materials, i.e., metal-organic frameworks (MOFs), porous organic materials (POMs), metal-organic polyhedra (MOPs), metal-organic gels (MOGs), etc. Research interest in APMs has skyrocketed owing to several key features. Firstly, APMs can be designed to feature high surface areas and well-defined functional pores. Secondly, some of them can be readily moulded into monolithic forms or thin films, which provides substantial advantages in many practical applications. Additionally, a few of them can even be dissolved in a common solvent and processed into workable forms without compromising their inherent porosity, which is almost impossible to imagine with conventional porous solids. Furthermore, stimuli-responsive APMs can be designed that are capable of reversibly switching between open and closed porous states upon application of an external stimulus.

  • by Harshad Paithankar
    34,00 €

    RNA performs a broad range of vital cellular functions: RNAs serve as information carriers, form part of the ribosome, exhibit catalytic activities, participate in gene regulation, and more. Most of these functions involve the interaction of RNAs with proteins. For example, the genetic information contained in DNA is transcribed to RNA by RNA polymerase, and in the ribosome tRNA molecules charged by aminoacyl-tRNA synthetases carry out the translation of the information encoded in mRNA. Another example of protein-RNA interaction involves the processing of RNAs by ribonucleases that form ribonucleoprotein complexes (RNPs); these complexes are involved in gene regulation, antiviral defense, immune response, etc. The formation of the correct complex is governed by different types of interactions between the amino acids of the protein and the nucleobases, ribose sugars or phosphodiester backbone of the RNA, including hydrogen bonding, electrostatic and stacking interactions. Any irregularity in these interactions can lead to dysfunction of the complex and to diseases such as cancer, cardiovascular dysfunction and neurodegenerative disease, to name a few. Despite the large number of functionally significant protein-RNA interactions known, the molecular mechanisms of interaction between RNA and protein are poorly understood, and an important aspect of understanding these mechanisms is the characterization of these interactions at the structural, kinetic and thermodynamic levels. RNA molecules can fold into a variety of secondary and tertiary structures formed by various types of base-pairing interactions between the nucleobases; base-pairing in RNA ranges from the most commonly observed Watson-Crick pairs to others such as Hoogsteen, wobble, reverse Watson-Crick and reverse Hoogsteen pairs [1]. Proteins can recognize RNAs by interacting with single-stranded RNA and/or secondary and tertiary structures such as an RNA duplex, a duplex with an internal bulge or mismatch, stem-loop structures and quadruplexes.

  • by Bhanu Priya
    39,00 €

    Semiconducting materials have been known since the early 19th century, when Michael Faraday discovered that, unlike in pure metals, the electrical resistance of silver sulphide decreased as the temperature of the material was raised. From a more practical perspective, a semiconductor is, as the name suggests, a substance whose electrical conductivity lies between that of an insulator and a metal. Semiconductors possess a plethora of distinctive features that have led to their use in a wide variety of applications. Since the development of the transistor, silicon (Si) has become the most well-known semiconductor in the world; indeed, the advent of the transistor in the 20th century was arguably the single most important scientific event of the last two centuries, paving the way for the rapid development of technology in our modern world. Over the last few decades, Transition Metal Oxide Semiconductors (TMOS) have emerged as a class of materials attracting significant attention in electronics and optoelectronics due to their unique properties. These materials are composed of transition metal cations and oxygen anions and can exhibit a wide range of electronic and optical behaviours, including band-gap tuning, carrier-density modulation, and photoresponse enhancement. TMOS are especially promising for applications such as solar cells, gas sensors, and electronic devices because of their high carrier mobility, chemical stability, and the abundance of their raw materials. The diverse range of properties exhibited by TMOS makes them a promising avenue for developing new and advanced technologies in materials science.

  • by Suryaprabha M
    36,00 €

    The internet is a publicly collaborative, self-sustaining facility accessible to billions of people around the world, and an encyclopaedic information resource. It is undeniable that computers and the Internet are two of modern society's most important achievements: they brought a revolution in science, education, entertainment, healthcare and more. The internet, which literally means a network of networks, has had a great impact on day-to-day human life. It has become a routine part of people's information lives; the majority of people have access to it and are confident in their ability to use it for their information needs. The Internet has made it possible for users to share and search for ideas, opinions, and recommendations. According to Google, one out of every twenty searches is for health information. Additionally, 80 percent of internet users have looked up healthcare-related topics such as specific diseases, symptoms, or medical treatments; 34% have read other patients' opinions or experiences on blogs, social networks, or health communities; and 24% have looked up online reviews of specific drugs and treatments. As a result of the rapid proliferation of healthcare information on the internet, more patients are turning to it as their first source of healthcare information, learning about their conditions before seeking a medical professional's opinion. Most of us rely on the Internet and on health apps on our smartphones to get first-hand information about what might be causing a disease and what the possible treatments could be. Reputable medical websites offer in-depth, easy-to-understand information on symptoms, treatment options, and common diseases.

  • by Rahul Kumar Verma
    33,00 €

    The identity of an individual entity lies in the wholeness of the system in which it is embedded. We observe numerous complex phenomena around us, and to study them we define them as systems of particular entities that give rise to those phenomena. Modelling these complex systems gives rise to complex networks, which represent the meaningful connections between the entities of the complex system. "I think the next century will be the century of complexity," Stephen Hawking once said, in light of the omnipresence of complex systems around us. The past two decades have demonstrated the immense potential of network science owing to its holistic approach, flexibility, and applicability to vast fields of scientific research. Network science has provided various models and algorithms, under the umbrella of statistical physics, for analyzing the natural and social sciences, including complex biological systems. As in any other physical system, one must identify and characterize the individual building blocks of complex biological systems and establish insights into their interactions. Biological complex systems can be defined by multiple types of entities, such as biomolecules (proteins and genes), pathways (metabolic, anabolic, and disease), cells (neurons), tissues (brain regions), and organs (the human complexome), along with their defined interactions. In biological systems, interactions among cellular entities are not always as straightforward as in social and physical networks; their interpretation becomes much more complicated because of their immense size, temporal dynamics, and non-linear behaviour. However, the vast diversity of biological systems allows us to define them at various levels as network models.
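As a small illustration of representing a biological system as a network model (an invented toy example built with the networkx library, not data from the book), a few protein-protein interactions can be encoded as a graph and its most connected entities identified:

```python
# Toy protein-interaction network: nodes are proteins, edges are (invented) interactions.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("P53", "MDM2"), ("P53", "BAX"), ("P53", "ATM"),
    ("MDM2", "AKT1"), ("BAX", "BCL2"),
])

# Degree centrality highlights highly connected "hub" entities in the system.
hubs = sorted(nx.degree_centrality(G).items(), key=lambda kv: kv[1], reverse=True)
for protein, centrality in hubs:
    print(f"{protein:5s} degree centrality = {centrality:.2f}")
```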

  • by Bibhu Prasad Nayak
    35,00 €

    Electronic devices generally emit electromagnetic (EM) noise into their surroundings and are also susceptible to the surrounding fields. Electromagnetic compatibility (EMC) is the ability of a system to function in its electromagnetic environment by limiting the unintentional generation and reception of electromagnetic energy, which may otherwise cause malfunctions such as electromagnetic interference (EMI). EMC focuses on two aspects of system behaviour: emission and immunity. The main challenge on the emission side is to identify the noise source in the system and to find countermeasures that reduce emissions without affecting functionality. Immunity, on the other hand, is the ability of equipment to function correctly in the presence of electromagnetic noise. On this basis, systems can be classified as emitters and receptors. Noise can propagate from the emitter to the receptor through a coupling medium: coupling can happen either through conduction, via power supply cables shared between devices, or through direct radiation between them. In board-level design, active components such as microcontrollers and communication ICs are prone to external noise, and PCB traces in the layout provide coupling paths for internal or external EM noise. Using a good immunity EMC model for the ICs, an optimized layout can be designed to reduce the level of coupling and thereby improve overall system EMC performance. Generally, EMC testing is done once a prototype is available; any failure in the test requires a design iteration, such as adding extra filter components or changing the layout, which delays time to market and eventually causes loss of revenue. Nowadays, EMC simulation-based design is practiced at the concept level to avoid such losses, and it is well known that project cost is reduced when EMC simulation and design are brought into the early design phase.

  • by Narinder Singh
    35,00 €

    The word 'composite' refers to a macroscopic combination of two or more different materials that results in a new material with much improved or new characteristics. Nanocomposites are composites consisting of multiphase materials in which at least one of the phases lies in the nanometric size range (1 nm-100 nm). Hybrid Nanocomposites (HNCs) are multi-component compounds in which at least one of the constituents (organic or inorganic) has dimensions in the nanometre range, with some interaction between them. The diverse and advanced technological applications of organic and inorganic materials are restricted by the poor conductivity, lower stability and solubility of organic materials and by the complicated processability and high-temperature operation of inorganic materials. In HNCs, however, the guest-host chemistry can help overcome these individual limitations through synergetic effects, which has led to an emerging field of research in advanced functional materials. In HNCs, interactions at the molecular and supramolecular level modulate the mechanical, optical, electrical, catalytic and electrochemical properties at the interface. Owing to their diverse morphologies and interfacial interactions, HNCs show excellent and unique properties that are absent in their individual constituents.

  • by Samik Datta
    36,00 €

    Sentiment analysis is widely used to recognize the disposition of end users and plays an essential role in monitoring user reviews. In sentiment analysis, opinion mining is used to understand the opinion expressed in written language or text. Reviews of different household objects add complexity for e-commerce applications and service providers: the object may be presented as movie text, special symbols, and emoticons, and dealing with such unstructured data is highly complicated. In Aspect-Based Sentiment Analysis, two kinds of tasks are executed: the procedure of detecting the attributes of the object on which people are commenting is called aspect category detection, where the object attributes are termed aspects; identifying the aspect value, or sentiment, for those aspects is performed as the next task. Sentiment analysis lets customer reactions be understood quickly, but analyzing human languages raises many complications. NLP is the field concerned with using computers to process human languages such as French, English and German. It has become essential to design new models for applications that interact closely with humans. Assigning different tasks to the machine is complex, and the dependencies involved must be addressed. Processing human textual data is the essential area in which machines are trained to observe and process the knowledge contained in data. This kind of processing needs a multi-disciplinary approach: handling naturally occurring text draws on logic, search, machine learning, knowledge representation, planning and statistical techniques. In the present internet era, large volumes of text exist in the form of PowerPoint presentations, word-processed pages and PDF pages; programs are needed to make sense of these textual documents, and they require various NLP approaches. Search identifies suitable optimization techniques for the computer: in some cases a selection must be made when processing the data, and search techniques find good candidate solutions on the way to the optimal one. Moreover, logic is essential for effective inference and reasoning, and the textual data are converted into logical forms for machine processing. In knowledge representation, the embedded knowledge is collected in a form the machine can use. In NLP, the communication procedure is improved with respect to sentences, meaning, phrases, words and syntactic processing, all of which are essential for NLP.
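As a toy illustration of the two aspect-based sentiment analysis tasks described above, detecting which aspect a comment is about and then assigning a sentiment to it, a minimal lexicon-based sketch (with invented keyword lists, not the method used in the book) could look like this:

```python
# Minimal lexicon-based sketch of aspect detection plus aspect sentiment scoring.
ASPECT_KEYWORDS = {                    # invented aspect categories and cue words
    "battery": {"battery", "charge", "charging"},
    "display": {"screen", "display", "resolution"},
    "price":   {"price", "cost", "expensive", "cheap"},
}
POSITIVE = {"great", "good", "excellent", "amazing", "cheap"}
NEGATIVE = {"bad", "poor", "terrible", "slow", "expensive"}

def analyse(review: str):
    tokens = review.lower().replace(",", " ").replace(".", " ").split()
    results = {}
    for aspect, cues in ASPECT_KEYWORDS.items():
        if cues & set(tokens):                          # task 1: aspect category detection
            score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
            results[aspect] = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return results                                      # task 2: sentiment per detected aspect

print(analyse("The display resolution is excellent and amazing."))
print(analyse("The battery is terrible and charging is slow."))
```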

  • by Prasanna B. P.
    38,00 €

    Polymers are giant, extended-chain organic molecules consisting of many monomer units interlinked repeatedly in a long chain; hence the name, from the Greek poly, meaning 'many', and mer, meaning 'part'. A polymer is like a necklace made from numerous tiny beads, the monomers, joined together. The non-conducting nature of most polymers is a substantial advantage for many practical applications of plastics. However, organic polymers with good electrical conductivity have been developed over the last two decades. Polymeric materials offer good processability, low specific weight, corrosion resistance and exciting prospects for plastics fabricated into films, electronic devices and electrical wires. Because of these properties, they have in recent years attracted the attention of both academic researchers and industry, in domains ranging from solid-state physics to chemistry and electrochemistry. Conducting polymers are the class of polymers that can conduct electricity due to their π-electron system; they are also called organic polymeric conductors, conjugated polymers or simply conductive polymers. The existence of alternating single and double bonds between the carbon atoms, leading to the formation of sigma (σ) and pi (π) bonds, is known as conjugation. Owing to the covalent bonds between the carbon atoms, the σ-electrons are fixed and immobile, while the π-electrons are easily delocalized upon doping. Thus an extended π system along the conducting-polymer backbone confers electronic conduction through the movement of π-electrons along the chain. Ever since the discovery of iodine-doped conductive polyacetylene, a new field of conducting polymers, also called "synthetic metals", encompassing a number of different conducting polymers and their derivatives, has been established.

  • by S. Manigandan
    36,00 €

    A free jet is defined as a rapid, pressure-driven stream issuing from a nozzle exit, whose flow-field characteristics are described along the axial distance normalized by the nozzle diameter (X/D), where X is the axial distance and D is the diameter of the nozzle exit. An extensive understanding of the flow and mixing characteristics of supersonic jets has been gaining importance in recent years owing to their vast applications in areas such as rockets, aircraft engines, and nozzles for various purposes. Over the past few decades, the fundamentals of jets and the factors affecting jet spread have been studied extensively using analytical, experimental and numerical techniques. The effectiveness and efficiency of a jet are determined using parameters such as jet spread rate, potential core length and jet decay; in addition, the shear layer also defines the efficiency of a free jet. A free jet is a pressure-driven rapid stream issuing from a nozzle or orifice into a quiescent ambience without any obstruction; it can also be characterized by its shear layer. The shear layer is the boundary layer where the interaction between the jet and the atmosphere takes place through the exchange of momentum. The extent of the shear layer depends not only on the shape of the exit but also on the Mach number of the jet, and it increases with jet Mach number; hence, understanding the physics of the shear layer during the evolution of a jet is important. The characteristics of a jet are also influenced by the nozzle shape, exit profile, lip thickness, velocity profile and orientation of the jet. Owing to the velocity difference between the entraining flow and the ambient fluid, a boundary layer forms at the jet periphery; this boundary layer is called the shear layer. The shear layer is usually highly unstable as the jet spreads in the downstream direction, which leads to the formation of vortices; these vortices, in turn, enhance the mixing characteristics of the free jet. Jets are classified as free, confined, isothermal and non-isothermal: a free jet is a rapid stream that discharges into an ambient fluid; a jet influenced by reverse flow is called a confined jet; a jet attached to a surface is called an attached jet; and based on temperature effects, jets are categorized as isothermal or non-isothermal.

  • by Mrinal Kanti Sen
    35,00 €

    Natural disasters cannot be fully anticipated by humankind; they may occur in the form of floods, earthquakes, volcanoes, or hurricanes. The menace of these events cannot be stopped at will, but a robust framework can be created by applying the concept of resilience. Resilience is the strength of a community to resist the effect of a hazard and to bounce back to its desired level of performance after the event; more formally, it is defined as a system's ability to withstand and recover from the effects of natural or human-made hazards. For a community, resilience can be taken as the time the socio-physical infrastructure needs to bounce back to its original or functional state. The concept of resilience is well established in various domains, such as ecology, finance, engineering, and medical science. Infrastructure resilience mainly depends on four key factors: robustness is the system's ability to resist the effects of a disaster; redundancy is the availability of alternative resources to meet operational requirements during and after a disaster; rapidity is the time taken to bounce back to the original or desirable state; and resourcefulness is the availability of resources for recovery. Any system's performance loss and recovery profile are typically uncertain, primarily due to the inherent uncertainty in natural hazards (related to system damage) and the time variation of the restoration process caused by resource availability. Performance loss mainly depends on robustness and redundancy. In most cases, recovery of critical infrastructure systems is modelled as a linear or stepped profile rather than a nonlinear one; the recovery profile mainly depends on the type of infrastructure system under consideration and on resource availability. For instance, a stepped recovery pattern is typically followed in restoring road and bridge systems.
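To make the resilience notion above concrete, here is a small sketch (my own illustration, with invented numbers) that computes a resilience index as the normalized area under a stepped performance-recovery curve, the kind of profile the text says is typical for road and bridge systems:

```python
# Resilience index = area under the performance curve Q(t), normalized by full performance,
# for a stepped (piecewise-constant) recovery profile. Numbers are illustrative only.
def resilience_index(times, performance, full_performance=100.0):
    """times: breakpoints [t0..tn]; performance: Q(t) on each interval [t_k, t_{k+1})."""
    area = sum(q * (t1 - t0) for q, t0, t1 in zip(performance, times, times[1:]))
    return area / (full_performance * (times[-1] - times[0]))

# Hazard hits at day 0 (functionality drops to 40%), partial repairs at day 10 (70%),
# full restoration at day 25, assessed over a 40-day horizon.
times = [0, 10, 25, 40]            # days
performance = [40.0, 70.0, 100.0]  # % functionality on each interval
print(f"Resilience index over the horizon: {resilience_index(times, performance):.2f}")
```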

  • von Suhas Krishna Diwase
    36,00 €

    A disaster is a serious disruption of the functioning of a community or a society involving widespread human, material, economic or environmental losses and impacts, which exceeds the ability of the affected community or society to cope using its own resources [1]. Global data show that natural disaster events have increased over the past 100 years from fewer than 10 events per year to about 400 events per year. From 2005 to 2014, the Asia-Pacific region witnessed 1,625 reported disaster events in which approximately 500,000 people lost their lives, around 1.4 billion people were affected and economic damage amounted to $523 billion [3]. The population at risk in the Asia-Pacific region is very high, with reportedly around 740 million city dwellers living in multi-hazard hotspots that are vulnerable to floods, earthquakes, cyclones and landslides. According to the Global Assessment Report on Disaster Risk Reduction 2015, the cost of disasters worldwide has reached an average of $250 billion to $300 billion every year. Climate change is expected to affect societies and increase their vulnerability to various hydro-meteorological disasters, with a disastrous impact on developing countries. Although it is impossible to eliminate disaster risk, its impact can be minimized by planning, preparing and building capacities for mitigation, coupled with prompt action. Traditionally, disaster management consisted primarily of reactive mechanisms in which response was the main focus, instead of a more comprehensive approach involving regular participation of communities and all other stakeholders. However, the past few years have witnessed a gradual shift towards a more proactive, mitigation-based approach in which the damage caused by any disaster can be minimized largely through early warning systems, careful planning and prompt action.

  • von Jaya Mabel Rani A
    35,00 €

    Today, data is growing tremendously all around the world. According to Statista, the total amount of data created globally was forecast to reach 79 zettabytes by 2021. Using this data, data analysts can analyse, visualize and construct patterns based on end users' requirements. Analysing and visualizing the data calls for fundamental techniques for understanding the type, size and frequency of the data set in order to take proper decisions. There are different types of data, such as relational databases, data warehouse databases, transactional data, multimedia data, spatial data, WWW data, time series data, heterogeneous data and text data. There are a growing number of data mining techniques, including pattern recognition and machine learning algorithms. This book focuses on data clustering, which is a sub-part of machine learning. Clustering is an unsupervised machine learning technique used for statistical data analysis in many fields and is a sub-branch of data mining. Data mining has two main sub-branches: supervised machine learning and unsupervised machine learning. Classification methods, including rule-based classification, decision tree (DT) classification, random forest classification and support vector machines, together with linear regression based learning, come under supervised learning. Clustering algorithms such as K-Means (KM), K-Harmonic Means (KHM), fuzzy clustering, hybrid clustering, optimization-based clustering and association-based mining come under unsupervised learning. Clustering algorithms can also be categorized into different types, such as traditional hierarchical clustering, grid-based clustering, partitioning-based clustering and density-based clustering. A wide variety of clustering algorithms exist to group data points into a set of disjoint classes: after clustering, all related data objects fall into one group while dissimilar data objects fall into another cluster. Clustering algorithms can be applied in most fields, such as medicine, engineering, financial forecasting, education, business and commerce. They can also be used in data science to analyse more complicated problems and to obtain more valuable insights from the data.
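
    A minimal sketch of the K-Means (Lloyd's algorithm) clustering mentioned above, written in plain NumPy so the assignment and update steps are visible. The synthetic two-dimensional data and the choice of k = 3 are illustrative assumptions, not an example from this book.

    # Hedged sketch: K-Means clustering (Lloyd's algorithm) on synthetic 2-D data.
    import numpy as np

    def kmeans(points: np.ndarray, k: int, iters: int = 100, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Initialise centroids by picking k distinct data points at random.
        centroids = points[rng.choice(len(points), size=k, replace=False)]
        for _ in range(iters):
            # Assignment step: label each point with its nearest centroid.
            dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Update step: move each centroid to the mean of its assigned points.
            new_centroids = np.array([
                points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                for j in range(k)
            ])
            if np.allclose(new_centroids, centroids):
                break
            centroids = new_centroids
        return labels, centroids

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        # Three loose blobs of points around (0, 0), (5, 5) and (0, 5).
        data = np.vstack([rng.normal(c, 0.6, size=(50, 2)) for c in [(0, 0), (5, 5), (0, 5)]])
        labels, centroids = kmeans(data, k=3)
        print("Estimated centroids:\n", np.round(centroids, 2))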

  • von Roshan Jose
    35,00 €

    Dielectric materials are electrically insulating materials which do not contain any free charge carriers for electric conduction. These materials can store charge, and the molecules of a dielectric become polarized in the presence of an electric field. The displacement of the positive and negative charges with respect to their equilibrium positions under the applied electric field is called polarization, and its extent can be quantified by the electric dipole moment. There are two types of dielectric materials: polar and non-polar. Polar dielectric materials possess a permanent dipole moment and an asymmetric crystal structure, whereas non-polar dielectric materials have no permanent dipole moment and a symmetric crystal structure. Based on the symmetry elements (centre of symmetry, axes of rotation, mirror planes and their combinations), crystals are divided into 32 point groups. Of these 32 crystal classes, 21 lack a centre of symmetry, i.e. the centres of positive and negative charge do not coincide; the remaining 11 classes possess a centre of symmetry and hence cannot exhibit polar behaviour. Of the 21 non-centrosymmetric point groups, one class (432) has additional symmetry elements that prevent polar behaviour. The remaining 20 point groups possess one or more polar axes and exhibit polar effects such as piezoelectricity, pyroelectricity and ferroelectricity. The term pyroelectricity refers to the temperature dependence of the magnitude of the polarization. Some dielectric materials exhibit spontaneous electric polarization without an applied electric field; such materials are known as ferroelectric materials, and the phenomenon is referred to as ferroelectricity. It was first observed in Rochelle salt by J. Valasek in 1921. A ferroelectric material has a non-centrosymmetric crystal structure and contains a unique polar axis. It contains electric dipoles that are spontaneously polarized, and this polarization can be reversed by applying a field in the opposite direction. The variation of polarization with electric field is not linear for such materials but forms a closed loop called a hysteresis loop. The reversible spontaneous polarization of these materials is utilized for the development of non-volatile ferroelectric random access memory (FeRAM).
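
    A minimal worked example of quantifying polarization by the dipole moment, as described above: the dipole moment of a displaced charge pair is p = q·d, and the polarization is the dipole moment per unit volume, P = N·p. The charge, displacement and dipole density below are illustrative order-of-magnitude values, not material data from this book.

    # Hedged sketch: polarization as dipole moment per unit volume, P = N * q * d.
    ELEMENTARY_CHARGE = 1.602e-19   # C
    N = 1e28                        # dipoles per cubic metre (illustrative)
    d = 1e-11                       # charge displacement in metres (illustrative)

    dipole_moment = ELEMENTARY_CHARGE * d    # p = q * d, in C*m
    polarization = N * dipole_moment         # P = N * p, in C/m^2

    print(f"Dipole moment p = {dipole_moment:.2e} C*m")
    print(f"Polarization  P = {polarization:.2e} C/m^2")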

  • von G. Muthulakshmi
    36,00 €

    Environmental pollution and the energy crisis forecast a scarcity of fuel, rising global temperatures and a loss of biodiversity. The exhaustion of fossil fuel reserves and rapidly rising global warming have heightened public awareness of the necessity to phase out the fossil fuel industry. Consequently, the rapid increase in energy production from renewable sources has driven the development of new-generation energy storage systems, because renewable sources cannot produce energy on demand. Renewable resources are virtually infinite in terms of duration but limited in terms of the energy available per unit of time; solar cells, wind turbines, solar thermal collectors and geothermal power are promising sources of energy with a smaller carbon footprint. As a result, energy storage devices are critical for maximizing the use of renewable energy and for reducing the carbon footprint on the environment. Moreover, energy storage devices are indispensable for the future of e-vehicles, consumer electronics and the transmission and distribution of electricity. Energy storage is essential to the energy security of today's energy networks. Most energy is stored in the form of raw or refined hydrocarbons, whether as coal heaps or oil and gas reserves; these energy sources cause environmental degradation through their emissions of greenhouse gases and heat. The only exception is pumped hydroelectric storage, which can provide a large amount of energy in a short period of time while also improving electric system reliability. The purpose and form of energy storage are likely to change significantly as energy systems evolve towards low-carbon technology, driven by perhaps two broad trends. First, with intermittent sources and static production such as nuclear power playing an important role in supplying electricity, it becomes difficult to match supply with demand, and imbalances will grow and dominate over time. Moving away from fossil fuel production means that, with the exception of flexible gas generation, most power sources can no longer be stored as hydrocarbons. Likewise, if a low-carbon electricity supply replaces oil and gas for domestic and industrial power needs, the structure of electricity demand will change dramatically.
