974 results for Parallel key-insulation
Abstract:
Specialization to nectarivory is associated with radiations within different bird groups, including parrots. One of these groups, the Australasian lories, has been shown to be unexpectedly species-rich. Their shift to nectarivory may have created an ecological opportunity promoting species proliferation. Several morphological specializations of the feeding tract to nectarivory have been described for parrots. However, they have never been assessed in a quantitative framework that accounts for phylogenetic nonindependence. Using a phylogenetic comparative approach with broad taxon sampling and 15 continuous characters of the digestive tract, we demonstrate that nectarivorous parrots differ in several traits from the remaining parrots. These trait changes indicate phenotype-environment correlations and parallel evolution, and may reflect adaptations for feeding effectively on nectar. Moreover, the diet shift was associated with significant trait shifts at the base of the radiation of the lories, as shown by an alternative statistical approach. Their diet shift might be considered an evolutionary key innovation that promoted significant non-adaptive lineage diversification through allopatric partitioning of the same new niche. The lack of increased rates of cladogenesis in other nectarivorous parrots indicates that evolutionary innovations need not be associated one-to-one with diversification events.
Abstract:
The cost of spatial join processing can be very high because of the large sizes of spatial objects and the computation-intensive spatial operations. While parallel processing seems a natural solution to this problem, it is not clear how spatial data should be partitioned for this purpose. Various spatial data partitioning methods are examined in this paper. A framework combining the data-partitioning techniques used by most parallel join algorithms in relational databases with the filter-and-refine strategy for spatial operation processing is proposed for parallel spatial join processing. Object duplication caused by multi-assignment in spatial data partitioning can result in extra CPU cost as well as extra communication cost. We find that the key to overcoming this problem is to preserve spatial locality in task decomposition. We show in this paper that a near-optimal speedup can be achieved for parallel spatial join processing using our new algorithms.
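As a rough illustration of the duplication issue and the locality-preserving fix described above, here is a minimal Python sketch of grid-based partitioning combined with the well-known reference-point test for suppressing duplicate result pairs; the function names, grid layout, and the reference-point rule are illustrative assumptions, not the paper's algorithms.

    def grid_partition(rects, nx, ny, extent):
        # Assign each rectangle (id, xmin, ymin, xmax, ymax) to every grid cell
        # it overlaps (multi-assignment), which duplicates boundary objects.
        X0, Y0, X1, Y1 = extent
        cw, ch = (X1 - X0) / nx, (Y1 - Y0) / ny
        cells = {}
        for rect in rects:
            _, xmin, ymin, xmax, ymax = rect
            i0 = max(0, int((xmin - X0) // cw))
            i1 = min(nx - 1, int((xmax - X0) // cw))
            j0 = max(0, int((ymin - Y0) // ch))
            j1 = min(ny - 1, int((ymax - Y0) // ch))
            for i in range(i0, i1 + 1):
                for j in range(j0, j1 + 1):
                    cells.setdefault((i, j), []).append(rect)
        return cells, (X0, Y0, cw, ch)

    def join_cell(cell, rects_a, rects_b, grid):
        # Filter step inside one cell. The reference-point test reports a pair
        # only in the cell containing the lower-left corner of the MBR
        # intersection, so duplicated pairs are dropped without communication.
        X0, Y0, cw, ch = grid
        i, j = cell
        pairs = []
        for ida, ax0, ay0, ax1, ay1 in rects_a:
            for idb, bx0, by0, bx1, by1 in rects_b:
                ix, iy = max(ax0, bx0), max(ay0, by0)
                if ix <= min(ax1, bx1) and iy <= min(ay1, by1):  # MBRs overlap
                    if int((ix - X0) // cw) == i and int((iy - Y0) // ch) == j:
                        pairs.append((ida, idb))
        return pairs

Each cell can then be handed to a separate worker, and the candidate pairs it emits would still pass through a refinement step on the exact geometries.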
Abstract:
The results presented in Chapter 2 were included in the article Dantas JM, Campelo LM, Duke NEC, Salgueiro CA, Pokkuluri PR (2015) "The structure of PccH from Geobacter sulfurreducens - a novel low reduction potential monoheme cytochrome essential for accepting electrons from an electrode", FEBS Journal, 282, 2215-2231.
Abstract:
White sand forests, although low in nutrients, are characterized not only by several endemic plant species but also by several monodominant species. In general, plants in these forests have noticeably thin stems. The aim of this work was to elaborate a parallel dichotomous key for the identification of the angiosperm tree species occurring in white sand forests at the Allpahuayo Mishana National Reserve, Loreto, Peru. We compiled a list of species from several publications in order to have the most comprehensive list of species that occur in white sand forests. We found 219 angiosperm species; the most abundant were Pachira brevipes (26.27%), Caraipa utilis (17.90%), Dicymbe uaiparuensis (13.27%), Dendropanax umbellatus (3.28%), Sloanea spathulata (2.52%), Ternstroemia klugiana (2.30%), Haploclathra cordata (2.28%), Parkia igneiflora (1.20%), Emmotum floribundum (1.06%) and Ravenia biramosa (1.04%), among others. Most species of white sand forests can be distinguished using characteristics of stems, branches and leaves. This key is very useful for the development of floristic inventories and related projects in the white sand forests of the Allpahuayo Mishana National Reserve.
Abstract:
Master's dissertation in Molecular Biology, Biotechnology and Plant Bioentrepreneurship
Abstract:
Myocardial ischemic postconditioning (PostC) describes an acquired resistance to lethal ischemia-reperfusion (I/R) injury afforded by brief episodes of I/R applied immediately after the ischemic insult. Cardioprotection is conveyed by parallel signaling pathways converging to prevent mitochondrial permeability transition. Recent observations indicated that PostC is associated with free radical generation, including nitric oxide (NO(.)) and superoxide (O2(.-)), and that cardioprotection is abrogated by antioxidants. Since NO(.) and O2(.-) react to form peroxynitrite, we hypothesized that PostC might trigger the formation of peroxynitrite to promote cardioprotection in vivo. Rats were exposed to 45 min of myocardial ischemia followed by 3 h of reperfusion. PostC (3 cycles of 30 seconds ischemia/30 seconds reperfusion) was applied at the end of index ischemia. In a subgroup of rats, the peroxynitrite decomposition catalyst 5,10,15,20-tetrakis(4-sulphonatophenyl)porphyrinato iron (FeTPPS) was given intravenously (10 mg/kg) 5 minutes before PostC. Myocardial nitrotyrosine was determined as an index of peroxynitrite formation. Infarct size (colorimetric technique and plasma creatine kinase (CK) levels) and left ventricular (LV) function (micro-tip pressure transducer) were determined. A significant generation of 3-nitrotyrosine was detected just after the PostC manoeuvre. PostC resulted in a marked reduction of infarct size, CK release and LV systolic dysfunction. Treatment with FeTPPS before PostC abrogated the beneficial effects of PostC on myocardial infarct size and LV function. Thus, peroxynitrite formed in the myocardium during PostC induces cardioprotective mechanisms improving both the structural and functional integrity of the left ventricle exposed to ischemia and reperfusion in vivo.
Abstract:
The mutual information of independent parallel Gaussian-noise channels is maximized, under an average power constraint, by independent Gaussian inputs whose power is allocated according to the waterfilling policy. In practice, discrete signalling constellations with limited peak-to-average ratios (m-PSK, m-QAM, etc.) are used in lieu of the ideal Gaussian signals. This paper gives the power allocation policy that maximizes the mutual information over parallel channels with arbitrary input distributions. Such a policy admits a graphical interpretation, referred to as mercury/waterfilling, which generalizes the waterfilling solution and retains some of its intuition. The relationship between the mutual information of Gaussian channels and the nonlinear minimum mean-square error proves key to solving the power allocation problem.
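For comparison with the generalized policy, here is a minimal Python sketch of the classical waterfilling allocation for Gaussian inputs, the special case that mercury/waterfilling extends; the noise levels and power budget below are illustrative assumptions.

    import numpy as np

    def waterfilling(noise, total_power, tol=1e-9):
        # Maximize sum log2(1 + p_i / n_i) subject to sum p_i = total_power
        # and p_i >= 0, by bisecting on the common water level.
        lo, hi = 0.0, max(noise) + total_power  # brackets for the water level
        while hi - lo > tol:
            level = 0.5 * (lo + hi)
            if np.maximum(level - noise, 0.0).sum() > total_power:
                hi = level
            else:
                lo = level
        return np.maximum(lo - noise, 0.0)

    noise = np.array([0.5, 1.0, 2.0])        # illustrative noise variances
    p = waterfilling(noise, total_power=3.0)
    print(p, p.sum())                        # cleaner channels get more power

Mercury/waterfilling modifies this picture by inserting an input-dependent "mercury" layer under the water in each vessel, so that discrete constellations stop receiving power once their mutual information saturates.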
Abstract:
For the last decade, high-resolution MS (HR-MS) has been associated with qualitative analyses, while triple-quadrupole MS has been associated with routine quantitative analyses. However, a shift of this paradigm is taking place: quantitative and qualitative analyses will increasingly be performed by HR-MS, and it will become the common 'language' for most mass spectrometrists. Most analyses will be performed by full-scan acquisitions recording 'all' ions entering the HR-MS, with subsequent construction of narrow-width extracted-ion chromatograms. Ions will be available for absolute quantification, profiling and data mining. In parallel to quantification, metabotyping will be the next step in clinical LC-MS analyses because it should help in personalized medicine. This article aims to help analytical chemists who perform targeted quantitative acquisitions with triple-quadrupole MS make the transition to quantitative and qualitative analyses using HR-MS. Guidelines for the acceptance criteria of mass accuracy and for the determination of mass extraction windows in quantitative analyses are proposed.
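As a concrete, deliberately simplified picture of constructing a narrow-width extracted-ion chromatogram from full-scan data, here is a Python sketch; the data layout and the +/-5 ppm window are illustrative assumptions, not the article's recommended acceptance criteria.

    import numpy as np

    def extract_xic(scan_times, scan_mzs, scan_intensities, target_mz, ppm=5.0):
        # For each full scan, sum the intensities of all ions whose m/z falls
        # inside a +/- ppm mass extraction window around the target m/z.
        half_width = target_mz * ppm * 1e-6
        xic = []
        for mz, inten in zip(scan_mzs, scan_intensities):
            mask = np.abs(np.asarray(mz) - target_mz) <= half_width
            xic.append(np.asarray(inten)[mask].sum())
        return np.asarray(scan_times), np.asarray(xic)

    # Toy full-scan data: two scans, each a list of m/z values and intensities
    times = [0.0, 0.5]
    mzs = [[299.9985, 300.0002, 450.1], [300.0001, 512.3]]
    ints = [[80.0, 1000.0, 50.0], [900.0, 40.0]]
    print(extract_xic(times, mzs, ints, target_mz=300.0))

The resulting time/intensity trace can then be integrated for absolute quantification, exactly as a selected-reaction-monitoring trace would be.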
Abstract:
Phenotypic convergence is a widespread and well-recognized evolutionary phenomenon. However, the responsible molecular mechanisms often remain unknown, mainly because the genes involved have not been identified. A well-known example of physiological convergence is the C4 photosynthetic pathway, which evolved independently more than 45 times [1]. Here, we address the question of the molecular bases of the convergent C4 phenotypes in grasses (Poaceae) by reconstructing the evolutionary history of the genes encoding a key C4 enzyme, phosphoenolpyruvate carboxylase (PEPC). PEPC genes belong to a multigene family encoding distinct isoforms, of which only one is involved in C4 photosynthesis [2]. Using phylogenetic analyses, we show that grass C4 PEPCs appeared at least eight times independently from the same non-C4 PEPC. Twenty-one amino acids evolved under positive selection and converged to similar or identical amino acids in most of the grass C4 PEPC lineages. This is the first record of such a high level of molecular convergent evolution, illustrating the repeatability of evolution. These amino acids were responsible for a strong phylogenetic bias grouping all C4 PEPCs together. The C4-specific amino acids detected must be essential for C4 PEPC enzymatic characteristics, and their identification opens new avenues for engineering the C4 pathway in crops.
Abstract:
A high-speed, high-voltage solid-rotor induction machine provides beneficial features for natural gas compressor technology. The mechanical robustness of the machine enables its use in an integrated motor-compressor. The technology uses a centrifugal compressor mounted on the same shaft as the high-speed electrical machine driving it. No gearbox is needed, as the speed is determined by the frequency converter. Cooling is provided by the process gas, which flows through the motor and is capable of transferring the heat away from it. The technology has been used in compressors in the natural gas supply chain in central Europe. New areas of application include natural gas compressors working at the wellheads of subsea gas reservoirs. A key challenge for the design of such a motor is the resistance of the stator insulation to the raw natural gas from the well. The gas contains water and heavy hydrocarbon compounds and is far harsher than the sales gas in the natural gas supply network. The objective of this doctoral thesis is to discuss the resistance of the insulation to raw natural gas and the phenomena degrading the insulation.

The presence of partial discharges is analyzed in this doctoral dissertation. The breakdown voltage of the gas is measured as a function of pressure and gap distance. The partial discharge activity is measured on small samples representing the windings of the machine. The electric field behavior is also modeled by finite element methods. Based on the measurements, it is concluded that the discharges are expected to disappear at gas pressures above 4-5 bar. The disappearance of discharges is caused by the breakdown strength of the gas, which increases as the pressure increases. Based on the finite element analysis, the physical length of a discharge seen in the PD measurements at atmospheric pressure was approximated to be 40-120 µm.

The chemical aging of the insulation when exposed to raw natural gas is discussed based on a vast set of experimental tests with a gas mixture representing the real gas mixture at the wellhead. The mixture was created by mixing dry hydrocarbon gas, heavy hydrocarbon compounds, monoethylene glycol, and water. The mixture was made more aggressive by increasing the amount of liquid substances. Furthermore, the temperature and pressure were increased, which resulted in accelerated test conditions and decreased the time required to detect severe degradation. The test program included a comparison of materials; an analysis of the effects of different compounds in the gas mixture, namely water and heavy hydrocarbons, on the aging; an analysis of the effects of temperature and exposure duration; and an analysis of the effect of sudden pressure changes on the degradation of the insulating materials.

It was found in the tests that an insulation consisting of mica, glass, and epoxy resin can tolerate the raw natural gas, but it experiences some degradation. The key material in the composite insulation is the resin, which largely defines the performance of the insulation system. The degradation of the insulation is mostly determined by the amount of gas mixture diffused into it. The diffusion was seen to follow Fick's second law, but the coefficients were not accurately defined. The diffusion was not sensitive to temperature, but it was dependent on the thermodynamic state of the gas mixture, in other words, the amounts of liquid components in the gas. The weight increase observed was mostly related to heavy hydrocarbon compounds, which act as plasticizers in the epoxy resin. The diffusion of these compounds is determined by the crosslink density of the resin. Water causes slight changes in the chemical structure, but these changes do not significantly contribute to the aging phenomena. Sudden changes in pressure can lead to severe damage in the insulation, because the motion of the diffused gas is able to create internal cracks in the insulation. Thus, diffusion itself only reduces the mechanical strength of the insulation, but the ultimate breakdown can potentially be caused by a sudden drop in the pressure of the process gas.
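For reference, the one-dimensional form of Fick's second law that the uptake was seen to follow, together with the standard short-time sorption result for a plane sheet (textbook statements, not results quoted from the thesis):

    \frac{\partial C}{\partial t} = D \,\frac{\partial^2 C}{\partial x^2},
    \qquad
    \frac{M_t}{M_\infty} \approx \frac{4}{l}\sqrt{\frac{D t}{\pi}}

    % C: penetrant concentration, D: diffusion coefficient,
    % M_t: mass uptake at time t, l: sheet thickness (short-time regime).

Fitting the measured weight gain to the second expression is the usual way such diffusion coefficients are estimated, which is consistent with the observation that the coefficients here could not be pinned down accurately.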
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
This thesis presents a novel design paradigm, called Virtual Runtime Application Partitions (VRAP), to judiciously utilize on-chip resources. As the dark silicon era approaches, when power considerations will allow only a fraction of the chip to be powered on, judicious resource management will become a key consideration in future designs. Most work on resource management treats only the physical components (i.e., computation, communication, and memory blocks) as resources and manipulates the component-to-application mapping to optimize various parameters (e.g., energy efficiency). To further enhance the optimization potential, in addition to the physical resources we propose to manipulate abstract resources (i.e., the voltage/frequency operating point, the fault-tolerance strength, the degree of parallelism, and the configuration architecture). The proposed framework (i.e., VRAP) encapsulates methods, algorithms, and hardware blocks to provide each application with the abstract resources tailored to its needs. To test the efficacy of this concept, we have developed three distinct self-adaptive environments: (i) the Private Operating Environment (POE), (ii) the Private Reliability Environment (PRE), and (iii) the Private Configuration Environment (PCE), which collectively ensure that each application meets its deadlines using minimal platform resources. In this work, several novel architectural enhancements, algorithms, and policies are presented to realize the virtual runtime application partitions efficiently. Considering future design trends, we have chosen Coarse-Grained Reconfigurable Architectures (CGRAs) and Networks on Chip (NoCs) to test the feasibility of our approach. Specifically, we have chosen the Dynamically Reconfigurable Resource Array (DRRA) and McNoC as the representative CGRA and NoC platforms. The proposed techniques are compared and evaluated using a variety of quantitative experiments. Synthesis and simulation results demonstrate that VRAP significantly enhances energy and power efficiency compared to the state of the art.
Abstract:
The process of developing software that takes advantage of multiple processors is commonly referred to as parallel programming. For various reasons, this process is much harder than the sequential case. For decades, parallel programming was a problem for a small niche only: engineers working on parallelizing mostly numerical applications in High Performance Computing. This has changed with the advent of multi-core processors in mainstream computer architectures. Parallel programming nowadays becomes a problem for a much larger group of developers. The main objective of this thesis was to find ways to make parallel programming easier for them. Different aims were identified in order to reach the objective: research the state of the art of parallel programming today, improve the education of software developers about the topic, and provide programmers with powerful abstractions to make their work easier. To reach these aims, several key steps were taken. To start with, a survey was conducted among parallel programmers to find out about the state of the art. More than 250 people participated, yielding results about the parallel programming systems and languages in use, as well as about common problems with these systems. Furthermore, a study was conducted in university classes on parallel programming. It resulted in a list of frequently made mistakes that were analyzed and used to create a programmers' checklist to avoid them in the future. For programmers' education, an online resource was set up to collect experiences and knowledge in the field of parallel programming, called the Parawiki. Another key step in this direction was the creation of the Thinking Parallel weblog, where more than 50,000 readers to date have read essays on the topic. For the third aim (powerful abstractions), it was decided to concentrate on one parallel programming system: OpenMP. Its ease of use and high level of abstraction were the most important reasons for this decision. Two different research directions were pursued. The first one resulted in a parallel library called AthenaMP. It contains so-called generic components, derived from design patterns for parallel programming. These include functionality to enhance the locks provided by OpenMP, to perform operations on large amounts of data (data-parallel programming), and to enable the implementation of irregular algorithms using task pools. AthenaMP itself serves a triple role: the components are well-documented and can be used directly in programs, it enables developers to study the source code and learn from it, and it is possible for compiler writers to use it as a testing ground for their OpenMP compilers. The second research direction was targeted at changing the OpenMP specification to make the system more powerful. The main contributions here were a proposal to enable thread cancellation and a proposal to avoid busy waiting. Both were implemented in a research compiler, shown to be useful in example applications, and proposed to the OpenMP Language Committee.
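To illustrate the task-pool pattern mentioned above, here is a minimal Python sketch of the idea; it is a generic sketch, not AthenaMP's actual C++/OpenMP interface, and the function names are invented for illustration.

    import threading, queue

    def run_task_pool(initial_tasks, worker_fn, num_threads=4):
        # Minimal task pool for irregular algorithms: workers pull tasks from
        # a shared queue, and worker_fn may yield newly discovered tasks.
        tasks = queue.Queue()
        for t in initial_tasks:
            tasks.put(t)

        def worker():
            while True:
                task = tasks.get()
                if task is None:              # sentinel: shut this worker down
                    return
                for new in worker_fn(task):   # enqueue any new tasks first...
                    tasks.put(new)
                tasks.task_done()             # ...then mark this one finished

        threads = [threading.Thread(target=worker) for _ in range(num_threads)]
        for th in threads:
            th.start()
        tasks.join()                          # wait until all tasks are done
        for _ in threads:
            tasks.put(None)                   # release the blocked workers
        for th in threads:
            th.join()

Because workers can enqueue tasks they discover while working, the pattern suits irregular algorithms whose total amount of work is not known up front.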
Abstract:
A key capability of data-race detectors is to determine whether one thread executes logically in parallel with another or whether the threads must operate in series. This paper provides two algorithms, one serial and one parallel, to maintain series-parallel (SP) relationships "on the fly" for fork-join multithreaded programs. The serial SP-order algorithm runs in O(1) amortized time per operation. In contrast, the previously best algorithm requires a time per operation that is proportional to Tarjan's functional inverse of Ackermann's function. SP-order employs an order-maintenance data structure that allows us to implement a more efficient "English-Hebrew" labeling scheme than was used in earlier race detectors, which immediately yields an improved determinacy-race detector. In particular, any fork-join program running in T₁ time on a single processor can be checked on the fly for determinacy races in O(T₁) time. Corresponding improved bounds can also be obtained for more sophisticated data-race detectors, for example, those that use locks. By combining SP-order with Feng and Leiserson's serial SP-bags algorithm, we obtain a parallel SP-maintenance algorithm, called SP-hybrid. Suppose that a fork-join program has n threads, T₁ work, and a critical-path length of T∞. When executed on P processors, we prove that SP-hybrid runs in O((T₁/P + PT∞) lg n) expected time. To understand this bound, consider that the original program obtains linear speed-up over a 1-processor execution when P = O(T₁/T∞). In contrast, SP-hybrid obtains linear speed-up when P = O(√(T₁/T∞)), but the work is increased by a factor of O(lg n).
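To illustrate the "English-Hebrew" labeling idea, here is a minimal Python sketch over an SP parse tree; the representation and names are invented for illustration, and this is an offline toy rather than the paper's O(1)-amortized on-the-fly algorithm.

    # SP parse tree: leaves are threads; internal nodes are S (series, children
    # run one after another) or P (parallel, children may run concurrently).
    class Node:
        def __init__(self, kind, children=(), name=None):
            self.kind, self.children, self.name = kind, list(children), name

    def label(root):
        # 'English' order: plain left-to-right walk of the leaves.
        # 'Hebrew' order: same walk, but P-node children visited right-to-left.
        english, hebrew = {}, {}
        def walk(node, order, flip_parallel):
            if node.kind == 'thread':
                order[node.name] = len(order)
                return
            kids = node.children
            if flip_parallel and node.kind == 'P':
                kids = list(reversed(kids))
            for child in kids:
                walk(child, order, flip_parallel)
        walk(root, english, flip_parallel=False)
        walk(root, hebrew, flip_parallel=True)
        return english, hebrew

    def logically_parallel(a, b, english, hebrew):
        # Two threads are logically parallel iff the two orders disagree:
        # a series relationship keeps them in the same order in both walks.
        return (english[a] < english[b]) != (hebrew[a] < hebrew[b])

    # t1; (t2 || t3): t1 precedes both in series; t2 and t3 are parallel.
    t = lambda n: Node('thread', name=n)
    tree = Node('S', [t('t1'), Node('P', [t('t2'), t('t3')])])
    eng, heb = label(tree)
    print(logically_parallel('t1', 't2', eng, heb))  # False (series)
    print(logically_parallel('t2', 't3', eng, heb))  # True  (parallel)

The order-maintenance data structure in the paper is what lets these labels be assigned and compared incrementally as the program forks and joins, instead of rebuilding the tree walks as this sketch does.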
Abstract:
The Java language first came to public attention in 1995. Within a year, it was being speculated that Java might be a good language for parallel and distributed computing. Its core features, including being object oriented and platform independent, as well as having built-in network support and threads, have encouraged this view. Today, Java is being used in almost every type of computer-based system, ranging from sensor networks to high-performance computing platforms, and from enterprise applications through to complex research-based simulations. In this paper, the key features that make Java a good language for parallel and distributed computing are first discussed. Two Java-based middleware systems, namely MPJ Express, an MPI-like Java messaging system, and Tycho, a wide-area asynchronous messaging framework with an integrated virtual registry, are then discussed. The paper concludes by highlighting the advantages of using Java as middleware to support distributed applications.