834 results for distributed heating
Abstract:
Failure detection is at the core of most fault-tolerance strategies, but it often depends on reliable communication. We present new failure-detector algorithms suitable as components of a fault-tolerance system that can be deployed under adverse network conditions (such as loosely connected and loosely administered computing grids). The detectors pack redundancy into heartbeat messages, thereby improving on the robustness of traditional protocols. Results from experiments conducted in a simulated environment with adverse network conditions show significant improvement over existing solutions.
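As a minimal sketch of this idea, assuming the redundancy takes the form of each heartbeat re-announcing its last few sequence numbers (the abstract does not specify the encoding, and all names below are illustrative), a detector tolerant to isolated message loss might look like this:

```python
import time

class RedundantHeartbeatDetector:
    """Sketch of a failure detector whose heartbeats carry the last WINDOW
    sequence numbers, so an isolated lost message can be recovered from a
    later one instead of triggering false suspicion."""

    WINDOW = 5  # how many past heartbeats each message re-announces

    def __init__(self, timeout=2.0):
        self.timeout = timeout
        self.last_seen = {}  # process id -> (highest seq, arrival time)

    def make_heartbeat(self, pid, seq):
        # Pack redundancy: re-announce the previous WINDOW-1 sequence numbers.
        return {"pid": pid,
                "seqs": list(range(max(0, seq - self.WINDOW + 1), seq + 1))}

    def on_heartbeat(self, msg):
        # Any of the packed sequence numbers is evidence the sender is alive.
        seq = max(msg["seqs"])
        prev_seq, _ = self.last_seen.get(msg["pid"], (-1, 0.0))
        if seq > prev_seq:
            self.last_seen[msg["pid"]] = (seq, time.monotonic())

    def suspects(self):
        now = time.monotonic()
        return [pid for pid, (_, t) in self.last_seen.items()
                if now - t > self.timeout]

d = RedundantHeartbeatDetector(timeout=2.0)
d.on_heartbeat(d.make_heartbeat("node-7", seq=42))
print(d.suspects())  # [] while node-7's heartbeats keep arriving
```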
Abstract:
Current scientific applications produce large amounts of data. Processing, handling, and analyzing such data require large-scale computing infrastructures such as clusters and grids. In this area, studies aim at improving the performance of data-intensive applications by optimizing data accesses. To achieve this goal, distributed storage systems have adopted techniques of data replication, migration, distribution, and access parallelism. The main drawback of those studies, however, is that they do not take application behavior into account when optimizing data access. This limitation motivated this paper, which applies strategies to support the online prediction of application behavior in order to optimize data access operations on distributed systems, without requiring any information on past executions. To accomplish this, the approach organizes application behaviors as time series and then analyzes and classifies those series according to their properties. Based on these properties, the approach selects modeling techniques to represent the series and produce predictions, which are later used to optimize data access operations. This new approach was implemented and evaluated using the OptorSim simulator, sponsored by the LHC-CERN project and widely employed by the scientific community. Experiments confirm that the new approach reduces application execution time by about 50 percent, especially when handling large amounts of data.
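A toy illustration of the property-driven selection step described above, assuming a simple trend test and two stand-in models (the paper's actual series classifiers and modeling techniques are not given in the abstract):

```python
import numpy as np

def predict_next_access(series):
    """Inspect a property of the observed access-size series, pick a model
    accordingly, and forecast the next value.  The property test and both
    models are illustrative stand-ins, not the paper's techniques."""
    s = np.asarray(series, dtype=float)
    t = np.arange(len(s))
    slope, intercept = np.polyfit(t, s, 1)
    # Property check: is there a pronounced linear trend relative to noise?
    trend_strength = abs(slope) * len(s) / (s.std() + 1e-9)
    if trend_strength > 1.0:
        return slope * len(s) + intercept  # trending: extrapolate the line
    return s.mean()                        # stationary: predict the mean

# e.g. a client reading steadily growing chunks:
print(predict_next_access([4, 8, 12, 16, 20]))  # ~24
```

A storage layer could use such a forecast, for instance, to size a prefetch request before the next access arrives.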
Abstract:
There is currently a strong interest in mirrorless lasing systems(1), in which the electromagnetic feedback is provided either by disorder (multiple scattering in the gain medium) or by order (multiple Bragg reflection). These mechanisms correspond, respectively, to random lasers(2) and photonic crystal lasers(3). The crossover regime between order and disorder, or correlated disorder, has also been investigated with some success(4-6). Here, we report one-dimensional photonic-crystal lasing (that is, distributed feedback lasing(7,8)) with a cold atom cloud that simultaneously provides both gain and feedback. The atoms are trapped in a one-dimensional lattice, producing a density modulation that creates a strong Bragg reflection at a small angle of incidence. Pumping the atoms with auxiliary beams induces four-wave mixing, which provides parametric gain. The combination of both ingredients generates a mirrorless parametric oscillation with a conical output emission, the apex angle of which is tunable via the lattice periodicity.
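One plausible way to read the tunability claim is through the first-order Bragg condition 2*d*cos(theta) = lambda, which ties the cone half-angle to the lattice period d; this back-of-the-envelope sketch is an assumption on our part, not the paper's model, and the numbers are placeholders:

```python
import math

def bragg_cone_half_angle(wavelength, lattice_period):
    # First-order Bragg condition for a density grating of period d:
    # 2 * d * cos(theta) = wavelength, theta measured from the lattice axis.
    # Feedback is then strongest on a cone of this half-angle.
    return math.degrees(math.acos(wavelength / (2.0 * lattice_period)))

# Assumed values: lambda = 780 nm (typical of cold-rubidium experiments)
# and a lattice period slightly above lambda/2, giving a narrow cone.
lam = 780e-9
print(bragg_cone_half_angle(lam, 0.51 * lam))  # ~11.3 degrees
```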
Abstract:
Synchronous distributed generators are prone to operating islanded after contingencies, which is usually not allowed due to safety and power-quality issues. Several anti-islanding techniques therefore exist; however, most of them have technical limitations and are likely to fail in certain situations. It is thus important to quantify whether the scheme under study is adequate. In this context, this paper proposes an index to evaluate the effectiveness of the anti-islanding frequency-based relays commonly used to protect synchronous distributed generators. The method is based on the calculation of a numerical index indicating the fraction of the overall analysis period during which the system is unprotected against islanding. Although this index can be calculated precisely from several electromagnetic transient simulations, a practical method is also proposed to compute it directly from simple analytical formulas or lookup tables. The results show that the proposed approach can assist distribution engineers in assessing and setting anti-islanding protection schemes.
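A minimal sketch of how such an index could be computed once the unprotected intervals are known, assuming the index is simply the unprotected fraction of the analysis period (the exact formula is in the paper, not the abstract; names and values are illustrative):

```python
def islanding_protection_index(unprotected_intervals, horizon):
    """Fraction of the analysis period during which frequency-based relays
    would fail to detect an island (e.g., power imbalance inside the relay's
    non-detection zone).  Intervals are assumed to come from EMT simulations
    or lookup tables, as the abstract suggests."""
    unprotected = sum(end - start for start, end in unprotected_intervals)
    return unprotected / horizon  # 0.0 = always protected, 1.0 = never

# e.g. two windows of vulnerability over a 24 h load profile:
print(islanding_protection_index([(2.0, 3.5), (13.0, 14.0)], 24.0))  # ~0.104
```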
Abstract:
Background: Recent studies reported an association between SLCO1B1 polymorphisms and the development of statin-induced myopathy. Because the Brazilian population is one of the most heterogeneous in the world, the main aim here was to evaluate SLCO1B1 polymorphisms according to ethnic group as an initial step for future pharmacogenetic studies. Methods: One hundred and eighty-two Amerindians plus 1,032 subjects from the general urban population were included. Genotypes for the SLCO1B1 rs4149056 (c.T521C, p.V174A, exon 5) and SLCO1B1 rs4363657 (g.T89595C, intron 11) polymorphisms were detected by polymerase chain reaction followed by high-resolution melting analysis on the Rotor Gene 6000® instrument. Results: The frequencies of the SLCO1B1 rs4149056 and rs4363657 C variant alleles were higher in Amerindians (28.3% and 26.1%) and lower in subjects of African descent (5.7% and 10.8%) than in the Mulatto (14.9% and 18.2%) and Caucasian-descent (14.8% and 15.4%) ethnic groups (p < 0.001 for both). Linkage disequilibrium analysis shows that these variant alleles follow different linkage disequilibrium patterns depending on ethnic origin. Conclusion: Our findings indicate interethnic differences in the SLCO1B1 rs4149056 C risk allele frequency among Brazilians. These data will be useful in developing effective programs for stratifying individuals regarding adherence, efficacy, and choice of statin type.
Abstract:
Recent work on argument structure has shown that there must be a synchronic relation between nouns and derived verbs that can be treated in structural terms. However, simple phonological/morphological identity, or a diachronic derivation between a verb and a noun, cannot guarantee that there is a denominal structure in a synchronic approach. In this paper we examine the phenomenon of Denominal Verbs in Brazilian Portuguese and argue for a distinction between etymological and synchronic morphological derivation. The objectives of this paper are 1) to identify synchronic and formal criteria defining which diachronically Denominal Verbs can also be considered denominal under a synchronic analysis; and 2) to detect in which cases the label "denominal" can justifiably be abandoned. Based on the results of argument structure tests submitted to the judgments of native speakers, it was possible to divide the supposedly homogeneous class of Denominal Verbs into three major groups: Real Denominal Verbs, Root-derived Verbs, and Ambiguous Verbs. Under a Distributed Morphology approach, the distinction between these groups can be explained based on the idea of phases within words and the locality restriction on the interpretation of roots.
Abstract:
This pioneering study characterized the chemical, physical and mineralogical aspects of the Urucum Standard manganese ore typology, and evaluated some of its metallurgical characteristics, such as the main mineral heat decompositions and particle disintegration at room temperature and under continuous heating. A one-ton sample of ore was received, homogenized and quartered. Representative samples were collected and characterized with the aid of techniques such as ICP-AES, XRD, SEM-EDS, BET and OM. Representative samples with particle sizes between 9.5 mm and 15.9 mm were separated to perform tumbling tests at room temperature, and thermogravimetry tests under constant flow of either air or nitrogen at different temperatures. After each heating cycle, the mechanical strength of the ore was evaluated by means of screening and tumbling procedures. The Urucum Standard typology was classified as an oxidized anhydrous ore with a high manganese content (~47%). This typology is mainly composed of cryptomelane and pyrolusite; however, there is a significant amount of hematite. The Urucum Standard particles presented low susceptibility to disintegration at room temperature, but susceptibility increased with temperature. No significant differences were observed between the tests performed with air and with nitrogen injection.
Abstract:
The heating of the solar corona has been investigated for four decades, and several mechanisms able to produce heating have been proposed. Until now, however, it has not been possible to produce quantitative estimates that would establish any of these heating mechanisms as the most important in the solar corona. In order to investigate which heating mechanism is most important, a more detailed approach is needed. In this thesis, the heating problem is approached "ab initio", using well-observed facts and including realistic physics in a 3D magneto-hydrodynamic simulation of a small part of the solar atmosphere. The "engine" of the heating mechanism is the solar photospheric velocity field, which braids the magnetic field into a configuration where energy has to be dissipated. The initial magnetic field is taken from an observation of a typical magnetic active region, scaled down to fit inside the computational domain. The driving velocity field is generated by an algorithm that reproduces the statistical and geometrical fingerprints of solar granulation. Using a standard model atmosphere as the thermal initial condition, the simulation goes through a short startup phase, in which the initial thermal stratification is quickly forgotten, after which it stabilizes in statistical equilibrium. In this state, the magnetic field is able to dissipate the same amount of energy as is estimated to be lost through radiation, the main energy loss mechanism in the solar corona. The simulation produces heating that is intermittent on the smallest resolved scales, along with hot loops similar to those observed through narrow-band filters in the ultraviolet. Other observed characteristics of the heating are reproduced, as well as a coronal temperature of roughly one million K. Because of the ab initio approach, the amount of heating produced in these simulations represents a lower limit to coronal heating, and the conclusion is that such heating of the corona is unavoidable.
Abstract:
The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of this thesis is a distributed measurement system to be installed in Medium Voltage power networks, together with the method developed to analyze the data acquired by the measurement system and to monitor power quality. Chapter 2 illustrates the increasing interest in power quality in electrical systems, reporting the international research activity on the problem and the relevant standards and guidelines issued. The quality of the voltage provided by utilities, and influenced by customers at the various points of a network, has come to attention only in recent years, in particular as a consequence of energy market liberalization. Traditionally, the quality of the delivered energy was associated mostly with its continuity, so reliability was the main characteristic to be ensured in power systems. Nowadays, the number and duration of interruptions are the "quality indicators" most commonly perceived by customers; for this reason, a short section is also dedicated to network reliability and its regulation. In this context, it should be noted that although the measurement system developed during the research activity belongs to the field of power-quality evaluation systems, the information registered in real time by its remote stations can be used to improve system reliability too. Given the vast range of power-quality-degrading phenomena that can occur in distribution networks, the study focuses on electromagnetic transients affecting line voltages. The outcome of the study is the design and realization of a distributed measurement system which continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component, and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines (the principle is sketched after this abstract). Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system, and they are observable before protection equipment intervenes. An important conclusion is that the method can improve the reliability of the monitored network, since knowing the location of a fault allows the energy manager to reduce as much as possible both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study is structured as follows: chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. The state of the art concerning methods to detect and locate faults in distribution networks is then presented. Finally, attention is paid to the particular technique adopted for this purpose in the thesis, and to the methods developed on the basis of that approach. Chapter 4 reports the configuration of the distribution networks on which the fault location method has been applied by means of simulations, together with the results obtained case by case.
In this way the performance of the location procedure is tested first under ideal and then under realistic operating conditions. Chapter 5 presents the measurement system designed to implement the transient detection and fault location method. The hardware belonging to the measurement chain of every acquisition channel in the remote stations is described. The global measurement system is then characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty on the estimated position of the fault in the network under test. Finally, this parameter is computed according to the Guide to the Expression of Uncertainty in Measurement, by means of a numerical procedure. The last chapter describes a device designed and realized during the PhD activity to substitute for the commercial capacitive voltage divider in the conditioning block of the measurement chain. This study aimed at providing an alternative to the existing transducer with equivalent performance at lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the method much more feasible to apply.
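To illustrate the principle of locating a disturbance from the arrival times registered by remote stations, here is a sketch of the classic two-terminal traveling-wave formula; the thesis's actual procedure (chapter 3) may differ, and all values below are placeholders:

```python
def fault_distance(L, v, t1, t2):
    """Classic two-terminal traveling-wave location: monitors at both ends
    of a line of length L timestamp the first transient wavefront, and the
    arrival-time difference places the fault.  Returns the distance from
    the monitor that recorded t1.  Illustrative only."""
    return (L + v * (t1 - t2)) / 2.0

# 10 km line, wavefront speed ~2.9e8 m/s, fault 4 km from station 1:
# the wave reaches station 1 after 4000/v s and station 2 after 6000/v s.
v = 2.9e8
print(fault_distance(10_000, v, 4_000 / v, 6_000 / v))  # -> 4000.0 m
```

The timestamps must come from a common time reference across stations, which is why the combined measurement uncertainty discussed in chapter 5 matters for the estimated fault position.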
Abstract:
Recent progress in microelectronics and wireless communications has enabled the development of low-cost, low-power, multifunctional sensors, which has allowed the birth of a new type of network named the wireless sensor network (WSN). The main features of such networks are: the nodes can be positioned randomly over a given field with high density; each node operates both as a sensor (collecting environmental data) and as a transceiver (transmitting information toward the data-retrieval point); and the nodes have limited energy resources. The use of wireless communications and the small size of the nodes make this type of network suitable for a large number of applications. For example, sensor nodes can be used to monitor a high-risk region, such as the area near a volcano; in a hospital they could be used to monitor the physical condition of patients. For each of these application scenarios, it is necessary to guarantee a trade-off between energy consumption and communication reliability. The thesis investigates the use of WSNs in two scenarios and, for each of them, proposes a solution to the associated problems in light of this trade-off. The first scenario considers a network with a high number of nodes deployed in a given geographical area without detailed planning, which have to transmit data toward a coordinator node, named the sink, assumed here to be located onboard an unmanned aerial vehicle (UAV). This is a practical example of reachback communication, characterized by a high density of nodes that have to transmit data reliably and efficiently toward a far receiver. Each node transmits a common shared message directly to the receiver onboard the UAV whenever it receives a broadcast message (triggered, for example, by the vehicle). We assume that the communication channels between the local nodes and the receiver are subject to fading and noise, so the receiver onboard the UAV must be able to fuse the weak and noisy signals coherently to receive the data reliably. A cooperative diversity concept is proposed as an effective solution to the reachback problem. In particular, a spread-spectrum (SS) transmission scheme is considered in conjunction with a fusion center that can exploit cooperative diversity without requiring stringent synchronization between nodes. The idea consists of simultaneous transmission of the common message by the nodes and Rake reception at the fusion center. The proposed solution is mainly motivated by two goals: the necessity of having simple nodes (to this aim, the computational complexity is moved to the receiver onboard the UAV), and the importance of guaranteeing high energy efficiency, thus increasing the network lifetime. The proposed scheme is analyzed in order to better understand the effectiveness of the approach. The performance metrics considered are both the theoretical limit on the maximum amount of data that can be collected by the receiver and the error probability with a given modulation scheme; since we deal with a WSN, both are evaluated taking the energy efficiency of the network into consideration.
The second scenario considers the use of a chain network for the detection of fires, using nodes that serve the double function of sensors and routers. The first function is the monitoring of a temperature parameter, which allows a local binary decision of target (fire) absent/present to be taken. The second is that each node receives the decision made by the previous node of the chain, compares it with the decision derived from its own observation of the phenomenon, and transmits the final result to the next node. The chain ends at the sink node, which transmits the received decision to the user. In this network the goals are to limit the throughput on each sensor-to-sensor link and to minimize the probability of error at the last stage of the chain. This is a typical scenario of distributed detection. To obtain good performance it is necessary to define, for each node, fusion rules that summarize the local observations and the decisions of the previous nodes into a final decision transmitted to the next node, of the kind sketched below. WSNs have also been studied from a practical point of view, describing both the main characteristics of the IEEE 802.15.4 standard and two commercial WSN platforms. Using a commercial WSN platform, an agricultural application was implemented and tested in a six-month on-field experiment.
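A minimal sketch of one stage of such a decision chain, assuming a simple two-threshold fusion rule (the thesis derives the actual per-node rules formally; thresholds and names here are illustrative):

```python
def node_decision(prev_decision, temperature, t_high=60.0, t_low=45.0):
    """One stage of a serial (chain) detection scheme: fuse the one-bit
    decision from the previous node with the local temperature reading and
    forward a single bit, keeping per-link throughput at one bit."""
    if temperature >= t_high:
        return 1               # strong local evidence: fire
    if temperature <= t_low:
        return 0               # strong local evidence: no fire
    return prev_decision       # ambiguous reading: trust the chain so far

# A decision propagating along a 4-node chain toward the sink:
readings = [52.0, 63.0, 50.0, 47.0]
decision = 0
for r in readings:
    decision = node_decision(decision, r)
print(decision)  # 1: the strong detection at the second node survives
```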
Abstract:
The progress of electron device integration has proceeded for more than 40 years following the well-known Moore's law, which states that the density of transistors on a chip doubles every 24 months. This trend has been made possible by the downsizing of MOSFET dimensions (scaling); however, new issues and new challenges are arising, and the conventional "bulk" architecture is becoming inadequate to face them. In order to overcome the limitations of conventional structures, the research community is preparing different solutions that need to be assessed. Possible solutions currently under scrutiny are:
• devices incorporating materials with properties different from those of silicon for the channel and the source/drain regions;
• new architectures such as Silicon-On-Insulator (SOI) transistors: the body thickness of Ultra-Thin-Body SOI devices is a new design parameter, and it makes it possible to keep short-channel effects under control without adopting high doping levels in the channel.
Among the solutions proposed to overcome the difficulties related to scaling, we can highlight heterojunctions at the channel edge, obtained by adopting, for the source/drain regions, materials with a band gap different from that of the channel material. This solution increases the injection velocity of the particles travelling from the source into the channel, and therefore the performance of the transistor in terms of delivered drain current. The first part of this thesis addresses the use of heterojunctions in SOI transistors: chapter 3 outlines the basics of heterojunction theory and the adoption of this approach in older technologies such as heterojunction bipolar transistors; it also describes the modifications introduced into the Monte Carlo code to simulate conduction band discontinuities, and the simulations performed on simplified one-dimensional structures to validate them. Chapter 4 presents the results of Monte Carlo simulations performed on double-gate SOI transistors featuring conduction band offsets between the source and drain regions and the channel. In particular, attention has been focused on the drain current and on internal quantities such as inversion charge, potential energy and carrier velocities. Both graded and abrupt discontinuities have been considered. The scaling of device dimensions and the adoption of innovative architectures also have consequences for power dissipation. In SOI technologies the channel is thermally insulated from the underlying substrate by a buried SiO2 oxide layer; this SiO2 layer has a thermal conductivity two orders of magnitude lower than that of silicon, and it impedes the dissipation of the heat generated in the active region. Moreover, the thermal conductivity of thin semiconductor films is much lower than that of bulk silicon, due to phonon confinement and boundary scattering. All these aspects cause severe self-heating effects (SHE), which detrimentally impact the carrier mobility and therefore the saturation drive current of high-performance transistors; as a consequence, thermal device design is becoming a fundamental part of integrated circuit engineering. The second part of this thesis discusses the problem of self-heating in SOI transistors. Chapter 5 describes the causes of heat generation and dissipation in SOI devices and provides a brief overview of the methods that have been proposed to model these phenomena.
To understand how this problem impacts the performance of different SOI architectures, three-dimensional electro-thermal simulations have been applied to the analysis of SHE in planar single- and double-gate SOI transistors as well as FinFETs featuring the same isothermal electrical characteristics. In chapter 6 the same simulation approach is employed extensively to study the impact of SHE on the performance of a FinFET representative of the high-performance transistor of the 45 nm technology node. Its effects on the ON-current, the maximum temperatures reached inside the device and the thermal resistance associated with the device itself, as well as the dependence of SHE on the main geometrical parameters, have been analyzed. Furthermore, the consequences for self-heating of technological solutions such as raised S/D extension regions or reduced fin height are explored. Finally, conclusions are drawn in chapter 7.
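As a first-order illustration of how a lumped thermal resistance links dissipated power to channel temperature in such analyses (the relation is a standard lumped model; the numbers below are placeholders, not values from the thesis):

```python
def channel_temperature(t_ambient, r_th, power):
    """First-order self-heating estimate: a lumped thermal resistance
    R_th (K/W), e.g. extracted from electro-thermal simulation, converts
    the dissipated power into a channel temperature rise."""
    return t_ambient + r_th * power

# e.g. a device dissipating 0.1 mW through an assumed R_th = 1e5 K/W:
print(channel_temperature(300.0, 1e5, 1e-4))  # 310.0 K
```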
Abstract:
Heterocyclic compounds represent almost two-thirds of all known organic compounds: they are widely distributed in nature and play a key role in a huge number of biologically important molecules, including some of the most significant for human beings. A powerful tool for the synthesis of such compounds is the hetero-Diels-Alder (HDA) reaction, a [4+2] cycloaddition between heterodienes and suitable dienophiles. Among the heterodienes usable in this six-membered heterocycle construction strategy, 3-trialkylsilyloxy-2-aza-1,3-dienes (Fig. 1) have proved particularly attractive. In this thesis work, HDA reactions between 2-azadienes and carbonylic and/or olefinic dienophiles are described. Moreover, the substitution of conventional heating by dielectric heating has been explored within the frame of Microwave-Assisted Organic Synthesis (MAOS), an up-to-date research field of great interest from both an academic and an industrial point of view. The reaction of azadiene 1 (Fig. 1) is described using carbonyl compounds, such as aldehydes and ketones, as dienophiles. The six-membered adducts thus obtained (Scheme 1) have been elaborated into biologically active compounds such as 1,3-aminols, which constitute the scaffold of a wide range of drugs (Prozac®, Duloxetine, Venlafaxine) with broad application in the treatment of severe diseases of the central nervous system (CNS). The reaction provides the formation of three new stereogenic centres (C-2, C-5, C-6); the diastereoselective outcome of these reactions has been investigated in depth using various combinations of achiral and chiral azadienes with aliphatic, aromatic or heteroaromatic aldehydes. Essentially the same approach has been used in the synthesis of the piperidin-2-one scaffold, substituting the carbonyl dienophile with an electron-poor olefin (Scheme 2). This scaffold is present in a very large number of natural substances and, more interestingly, is a required scaffold for a huge variety of biologically active compounds. Activated olefins bearing one or two sulfone groups were chosen as dienophiles, both for the intrinsic flexibility of the sulfone group, which may easily be removed or elaborated into more complex decorations of the heterocyclic ring, and for the electron-poor character of these dienophiles, which makes the resulting HDA reaction of the "normal electron demand" type. The syntheses of natural compounds such as racemic (±)-anabasine (an alkaloid of tobacco leaves) and (R)- and (S)-conhydrine (an alkaloid of the seeds and leaves of Conium maculatum) and its congeners are described (Fig. 2).
Abstract:
This dissertation examines the challenges and limits that graph analysis algorithms encounter on distributed architectures built from personal computers. In particular, it analyzes the behavior of the PageRank algorithm as implemented in a popular C++ library for distributed graph analysis, the Parallel Boost Graph Library (Parallel BGL). The results presented here show that the Bulk Synchronous Parallel (BSP) model of parallel programming is unsuitable for an efficient implementation of PageRank on clusters of personal computers. The implementation analyzed in fact exhibited negative scalability: the execution time of the algorithm increases linearly with the number of processors. These results were obtained by running the Parallel BGL PageRank on a cluster of 43 dual-core PCs with 2 GB of RAM each, using several graphs chosen so as to ease the identification of the variables influencing scalability. Graphs representing different models gave different results, showing that there is a relation between the clustering coefficient and the slope of the line representing execution time as a function of the number of processors. For example, Erdős–Rényi graphs, which have a low clustering coefficient, were the worst case in the PageRank tests, while Small-World graphs, which have a high clustering coefficient, were the best case. Graph size also had a particularly interesting influence on execution time: the relation between the number of nodes and the number of edges was shown to determine the total running time.
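For reference, here is a single-process sketch of the superstep structure that a BSP PageRank distributes across machines; in the distributed setting each superstep ends with a global barrier and every edge crossing a partition becomes network traffic, which is consistent with the negative scaling reported above. This is an illustrative sketch, not the Parallel BGL code:

```python
def pagerank_bsp(out_edges, n_iters=20, d=0.85):
    """PageRank organized as BSP supersteps: in each superstep every vertex
    sends its rank share along its out-edges (network messages in the real
    system), then an implicit barrier precedes the next superstep."""
    n = len(out_edges)
    rank = [1.0 / n] * n
    for _ in range(n_iters):
        incoming = [0.0] * n              # "messages" of this superstep
        for u, targets in enumerate(out_edges):
            if targets:
                share = rank[u] / len(targets)
                for v in targets:
                    incoming[v] += share  # a remote send when v is on
                                          # another machine
        # global barrier would sit here in the BSP model
        rank = [(1 - d) / n + d * r for r in incoming]
    return rank

# Tiny 3-vertex example: 0 -> 1, 1 -> 2, 2 -> 0 and 2 -> 1.
print(pagerank_bsp([[1], [2], [0, 1]]))
```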