942 results for Parallel programming model
Abstract:
An efficient screening strategy for the identification of potentially interesting low-abundance antifungal natural products in crude extracts, combining a sensitive bioautography assay with high performance liquid chromatography (HPLC) microfractionation, was developed. This method relies on high performance thin layer chromatography (HPTLC) bioautography with a hypersusceptible engineered strain of Candida albicans (DSY2621) for bioactivity detection, followed by the evaluation of wild type strains in standard microdilution antifungal assays. Active extracts were microfractionated by HPLC in 96-well plates, and the fractions were subsequently submitted to the bioassay. This procedure enabled precise localisation of the antifungal compounds directly in the HPLC chromatograms of the crude extracts. HPLC-PDA-mass spectrometry (MS) data obtained in parallel to the HPLC antifungal profiles provided an initial chemical profile of the bioactive constituents. Transposition of the HPLC analytical conditions to medium-pressure liquid chromatography (MPLC) allowed the efficient isolation of the active constituents in milligram amounts for structure confirmation and more extensive characterisation of their biological activities. The antifungal properties of the isolated natural products were evaluated by their minimum inhibitory concentration (MIC) in a dilution assay against both wild type and engineered strains of C. albicans. The biological activity of the most promising agents was further evaluated in vitro by electron microscopy and in vivo in a Galleria mellonella model of C. albicans infection. The overall procedure represents a rational and comprehensive means of evaluating antifungal activity from various perspectives for the selection of initial hits that can be explored in more in-depth mode-of-action studies.
This strategy is illustrated by the identification and bioactivity evaluation of a series of antifungal compounds from the methanolic extract of a Rubiaceae plant, Morinda tomentosa, which was used as a model in these studies.
Abstract:
Biometric system performance can be improved by means of data fusion. Several kinds of information can be fused to obtain a more accurate classification (identification or verification) of an input sample. In this paper we present a method for computing the weights of a weighted-sum fusion of scores by means of a likelihood model, in which the maximum likelihood estimation is cast as a linear programming problem. Each score is derived from a GMM classifier working on a different feature extractor. Our experimental results assessed the robustness of the system against changes over time (different sessions) and against a change of microphone. The improvements obtained were significantly better (error bars of two standard deviations) than a uniform weighted sum, a uniform weighted product, or the best single classifier. The proposed method scales computationally with the number of scores to be fused in the same way as the simplex method for linear programming.
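A minimal sketch of weighted-sum score fusion (toy data; a coarse grid search over the weight simplex stands in for the linear programming solution described above, and all names and numbers are illustrative):

```python
# Hypothetical illustration of weighted-sum score fusion, not the authors'
# code. Each sample has one score per classifier; the fused score is
# sum_i w_i * s_i. A grid search over the 2-classifier weight simplex
# stands in for the simplex-method LP described in the abstract.

def fuse(scores, weights):
    """Weighted-sum fusion of per-classifier scores for one sample."""
    return sum(w * s for w, s in zip(weights, scores))

def accuracy(samples, weights, threshold=0.5):
    """Fraction of (scores, label) pairs classified correctly at threshold."""
    correct = 0
    for scores, label in samples:
        pred = 1 if fuse(scores, weights) > threshold else 0
        correct += (pred == label)
    return correct / len(samples)

def grid_search_weights(samples, steps=20):
    """Search weights (w, 1 - w) on the simplex for the best accuracy."""
    best_w, best_acc = None, -1.0
    for k in range(steps + 1):
        w = (k / steps, 1 - k / steps)
        acc = accuracy(samples, w)
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc

# Toy data: classifier 1 is informative, classifier 2 is misleading,
# so the learned weights should favour classifier 1 over uniform weighting.
samples = [((0.9, 0.1), 1), ((0.8, 0.2), 1), ((0.2, 0.9), 0), ((0.1, 0.8), 0)]
weights, acc = grid_search_weights(samples)
```

On this toy set, uniform weights misclassify most samples, while the searched weights separate the classes perfectly, which is the effect the abstract reports for the likelihood-based weighting.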
Abstract:
High altitude constitutes an exciting natural laboratory for medical research. While the initial aim of high-altitude research was to understand the adaptation of the organism to hypoxia and to find treatments for altitude-related diseases, over the past decade or so the scope of this research has broadened considerably. Two important observations laid the foundation for this broadening of scientific scope. First, high-altitude pulmonary edema (HAPE) represents a unique model for studying fundamental mechanisms of pulmonary hypertension and lung edema in humans. Secondly, the ambient hypoxia associated with high-altitude exposure facilitates the detection of pulmonary and systemic vascular dysfunction at an early stage. Here, we review studies that, by capitalizing on these observations, have led to the description of novel mechanisms underpinning lung edema and pulmonary hypertension and to the first direct demonstration of fetal programming of vascular dysfunction in humans.
Abstract:
The continuous production of vascular tissues through secondary growth results in radial thickening of plant organs and is pivotal for various aspects of plant growth and physiology, such as water transport capacity or resistance to mechanical stress. It is driven by the vascular cambium, which produces inward secondary xylem and outward secondary phloem. In the herbaceous plant Arabidopsis thaliana (Arabidopsis), secondary growth occurs in stems, in roots and in the hypocotyl. In the latter, radial growth is most prominent and not obscured by parallel ongoing elongation growth. Moreover, its progression is reminiscent of the secondary growth mode of tree trunks. Thus, the Arabidopsis hypocotyl is a very good model to study basic molecular mechanisms of secondary growth. Genetic approaches have succeeded in the identification of various factors, including peptides, receptors, transcription factors and hormones, which appear to participate in a complex network that controls radial growth. Many of these players are conserved between herbaceous and woody plants. In this review, we will focus on what is known about molecular mechanisms and regulators of vascular secondary growth in the Arabidopsis hypocotyl.
Abstract:
To support the analysis of driver behavior at rural freeway work zone lane closure merge points, Center for Transportation Research and Education staff collected traffic data at merge areas using video image processing technology. The collection of the data and the calculation of the capacity of lane closures are reported in a companion report, "Traffic Management Strategies for Merge Areas in Rural Interstate Work Zones". These data are used in the work reported in this document to calibrate a microscopic simulation model of a typical Iowa rural freeway lane closure. The model developed is a high fidelity computer simulation with an animation interface that simulates traffic operations at a work zone lane closure. It enables traffic engineers to visually demonstrate the delay that is likely to result when freeway reconstruction makes it necessary to close freeway lanes. Further, the model is sensitive to variations in driver behavior and is used to test the impact of slow moving vehicles and other driver behaviors. This report consists of two parts. The first part describes the development of the work zone simulation model; the simulation analysis is calibrated and verified with data collected at a work zone on Interstate Highway 80 in Scott County, Iowa. The second part is a user's manual for the simulation model, provided to assist users with its set-up and operation. No prior computer programming skills are required to use the simulation model.
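The kind of delay forecast such a model produces can be approximated at the back-of-envelope level by deterministic queueing (a hypothetical sketch with invented numbers, not the report's microscopic simulation):

```python
# Hypothetical sketch, not the report's model: deterministic queueing
# estimate of delay at a lane closure, comparing hourly demand against
# the reduced work-zone capacity. A queue grows whenever demand exceeds
# capacity and drains when it falls below.

def queue_profile(demand_per_hour, capacity_per_hour):
    """Return the queue length (vehicles) at the end of each hour."""
    queue, profile = 0.0, []
    for demand in demand_per_hour:
        queue = max(0.0, queue + demand - capacity_per_hour)
        profile.append(queue)
    return profile

def total_delay_veh_hours(profile):
    """Trapezoidal approximation of the area under the queueing diagram."""
    delay, prev = 0.0, 0.0
    for q in profile:
        delay += (prev + q) / 2.0
        prev = q
    return delay

demand = [1200, 1600, 1800, 1500, 1100, 900]  # vehicles per hour (invented)
capacity = 1400                               # single open lane, veh/h
profile = queue_profile(demand, capacity)
delay = total_delay_veh_hours(profile)
```

The microscopic simulation in the report resolves individual merges and driver behavior; this macroscopic sketch only shows why delay appears once demand exceeds the closure's capacity.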
Abstract:
This master's thesis further develops the bridge crane calculation program of the company KCI Konecranes. The most important development targets of the program were surveyed with a user questionnaire, and from these the most requested topics, best suited to the strength-of-materials scope of the thesis, were selected. The two topics chosen are working out the strength calculation of the two-part web of a box profile and designing the finite element model of the eight-wheel end carriage of a bridge crane. The thesis works out the theory related to these development targets, but the actual programming is left outside the scope of the work. In a box profile with a two-part web, the upper part of the web beneath the trolley rail is made thicker so that the web can withstand the local stress caused by the trolley wheel load, the so-called crushing stress. Determining the crushing stress in the web plates is the most important task of the strength calculation of the two-part web. The most suitable methods for determining the membrane stress and the stress concentrations caused by crushing in different constructions were sought from the literature and from standards. The membrane stress can be determined reliably using either the 45-degree rule or the method given in the standard, and the magnitude of the stress concentrations is obtained by multiplying the membrane stress by stress concentration factors. The validity of the methods was verified by building dozens of finite element models of the web with different dimensions and boundary conditions and comparing the results of the element models with hand calculations. The hand-calculated stresses were made to match the element model results closely. The buckling and fatigue calculation of the two-part web was studied in a preliminary manner. Eight-wheel end carriages are used in large bridge cranes to reduce the wheel loads and the crushing stresses on the runway. For the eight-wheel end carriage of a bridge crane, finite element models were designed for both constructions in use: the articulated and the rigid-frame model.
Existing models were utilized in building the finite element models, which speeds up adding them to the program code and ensures their compatibility with the other calculation modules. The boundary conditions of the vibration analysis of the element models were examined. Based on the study, no changes to the vibration analysis boundary conditions are needed, but the boundary conditions of the static analysis still require further investigation.
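The 45-degree load-spread rule mentioned above can be sketched as follows (illustrative formulas and dimensions only, not the Konecranes program; the stress concentration factor is an invented example value):

```python
# Hypothetical sketch of the 45-degree rule for local crushing stress.
# A trolley wheel load F spreads at 45 degrees through the rail and the
# top flange before reaching the web, so the effective loaded web length
# grows by twice the spreading height.

def effective_length(contact_len_mm, rail_height_mm, flange_thickness_mm):
    """Loaded web length after a 45-degree spread through rail and flange."""
    return contact_len_mm + 2.0 * (rail_height_mm + flange_thickness_mm)

def crushing_stress(load_n, web_thickness_mm, eff_length_mm):
    """Membrane (crushing) stress in the web plate, N/mm^2 (MPa)."""
    return load_n / (web_thickness_mm * eff_length_mm)

l_eff = effective_length(contact_len_mm=50.0, rail_height_mm=50.0,
                         flange_thickness_mm=20.0)   # 50 + 2*70 = 190 mm
sigma = crushing_stress(load_n=200_000.0, web_thickness_mm=10.0,
                        eff_length_mm=l_eff)          # membrane stress, MPa
peak = sigma * 1.8  # stress concentration factor (illustrative value only)
```

As in the abstract, the peak stress is obtained by multiplying the membrane stress by a stress concentration factor; the factor 1.8 here is purely illustrative.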
Abstract:
Insults during the fetal period predispose the offspring to systemic cardiovascular disease, but little is known about the pulmonary circulation and the underlying mechanisms. Maternal undernutrition during pregnancy may represent a model to investigate underlying mechanisms, because it is associated with systemic vascular dysfunction in the offspring in animals and humans. In rats, restrictive diet during pregnancy (RDP) increases oxidative stress in the placenta. Oxygen species are known to induce epigenetic alterations and may cross the placental barrier. We hypothesized that RDP in mice induces pulmonary vascular dysfunction in the offspring that is related to an epigenetic mechanism. To test this hypothesis, we assessed pulmonary vascular function and lung DNA methylation in offspring of RDP and in control mice at the end of a 2-wk exposure to hypoxia. We found that endothelium-dependent pulmonary artery vasodilation in vitro was impaired and hypoxia-induced pulmonary hypertension and right ventricular hypertrophy in vivo were exaggerated in offspring of RDP. This pulmonary vascular dysfunction was associated with altered lung DNA methylation. Administration of the histone deacetylase inhibitors butyrate and trichostatin A to offspring of RDP normalized pulmonary DNA methylation and vascular function. Finally, administration of the nitroxide Tempol to the mother during RDP prevented vascular dysfunction and dysmethylation in the offspring. These findings demonstrate that in mice undernutrition during gestation induces pulmonary vascular dysfunction in the offspring by an epigenetic mechanism. A similar mechanism may be involved in the fetal programming of vascular dysfunction in humans.
Abstract:
We propose a finite element approximation of a system of partial differential equations describing the coupling between the propagation of electrical potential and large deformations of the cardiac tissue. The underlying mathematical model is based on the active strain assumption, in which the deformation tensor is assumed to admit a multiplicative decomposition into a passive and an active part, the latter carrying the information of the electrical potential propagation and the anisotropy of the cardiac tissue into the equations of either incompressible or compressible nonlinear elasticity, which govern the mechanical response of the biological material. In addition, by changing from an Eulerian to a Lagrangian configuration, the bidomain or monodomain equations modeling the evolution of the electrical propagation exhibit a nonlinear diffusion term. Piecewise quadratic finite elements are employed to approximate the displacement field, whereas the pressure, electrical potentials and ionic variables are approximated by piecewise linear elements. Various numerical tests performed with a parallel finite element code illustrate that the proposed model can capture some important features of the electromechanical coupling, and show that our numerical scheme is efficient and accurate.
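The active strain assumption mentioned above is commonly written as a multiplicative split of the deformation gradient; the following uses standard notation from the active strain literature and is not necessarily the paper's exact formulation:

```latex
% Multiplicative decomposition: F_A carries the electrically driven active
% deformation, F_E the passive elastic response.
F = F_E \, F_A, \qquad
F_A = I + \gamma_f \, \mathbf{f}_0 \otimes \mathbf{f}_0 ,
```

where \(\mathbf{f}_0\) is the reference fiber direction and \(\gamma_f\) an activation function driven by the electrophysiological variables; the specific form of \(F_A\) shown here is a common choice in that literature, stated as an assumption.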
Abstract:
Numerical weather prediction and climate simulation have been among the computationally most demanding applications of high performance computing ever since they were started in the 1950s. Since the 1980s, the most powerful computers have featured an ever larger number of processors; by the early 2000s, this number was often several thousand. An operational weather model must use all these processors in a highly coordinated fashion. The critical resource in running such models is not computation, but the amount of necessary communication between the processors: the communication capacity of parallel computers often falls far short of their computational power. The articles in this thesis cover fourteen years of research into how to harness thousands of processors for a single weather forecast or climate simulation, so that the application can benefit as much as possible from the power of parallel high performance computers. The results attained in these articles have already been widely applied, so that currently most of the organizations that carry out global weather forecasting or climate simulation anywhere in the world use methods introduced in them. Some further studies extend parallelization opportunities into other parts of the weather forecasting environment, in particular to data assimilation of satellite observations.
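The communication pattern at the heart of such parallelization can be illustrated with a toy halo exchange (a sequential, schematic stand-in for real message passing; all names and numbers are illustrative):

```python
# Schematic illustration, not an operational model: a 1-D domain split
# across "processors", each step exchanging one halo cell with each
# neighbour before a 3-point smoothing update. The point: per-step
# communication per processor is O(1) halo cells while computation is
# O(n) interior cells, yet at large processor counts the communication
# phase is what limits scaling.

def step(subdomains):
    """One smoothing step over all subdomains with halo exchange."""
    halos = []
    for i, sub in enumerate(subdomains):        # "communication" phase
        left = subdomains[i - 1][-1] if i > 0 else sub[0]
        right = subdomains[i + 1][0] if i < len(subdomains) - 1 else sub[-1]
        halos.append((left, right))
    new = []
    for (left, right), sub in zip(halos, subdomains):   # "computation" phase
        padded = [left] + sub + [right]
        new.append([(padded[j - 1] + padded[j] + padded[j + 1]) / 3.0
                    for j in range(1, len(padded) - 1)])
    return new

# Four "processors", eight cells each; a spike near a boundary diffuses
# across subdomains only because halos are exchanged every step.
subdomains = [[0.0] * 8 for _ in range(4)]
subdomains[1][7] = 1.0
for _ in range(10):
    subdomains = step(subdomains)
total = sum(sum(sub) for sub in subdomains)
```

In a real model the exchange is done with message passing between distributed processors; here it is serialized, and the edge boundary condition is a crude placeholder.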
Abstract:
Conventionally, the calculation of an axial flux permanent magnet machine is done by means of 3D FEM methods, so that the radius-dependent, non-uniform structure of the teeth and the other electrical and magnetic parts of the machine can be taken into consideration. This calculation procedure, however, requires a lot of time and computer resources. This study proves that analytical methods can also be applied to perform the calculation successfully. The analytical calculation can be summarized in the following steps: first the magnet is divided into slices, then the calculation is carried out for each section individually, and finally the partial results are combined into the final results. It is obvious that using this method can save a lot of design and calculation time. The calculation program is designed to model the magnetic and electrical circuits of surface-mounted axial flux permanent magnet synchronous machines in such a way that it takes into account possible magnetic saturation of the iron parts. The result of the calculation is the torque of the motor, including its vibrations. The motor geometry, the materials, and either the torque or the pole angle are defined, and the motor can be fed with three-phase currents of arbitrary shape and amplitude. There are no limits on the size or the number of pole pairs, nor on many other factors. The calculation steps and the number of magnet sections are selectable, but the calculation time depends strongly on them. The results are compared to measurements of real prototypes. The permanent magnet creates part of the flux in the magnetic circuit. The form and amplitude of the flux density in the air gap depend on the geometry and material of the magnetic circuit, on the length of the air gap, and on the remanence flux density of the magnet. Slotting is taken into account by using the Carter factor in the slot opening area.
The calculation is simple and fast if the magnet shape is a square with no skew relative to the stator slots. With a more complicated magnet shape the calculation has to be done in several sections; clearly, the result becomes more accurate as the number of sections increases. In a radial flux motor all sections of the magnets create force at the same radius. In an axial flux motor, each radial section creates force at a different radius, and the torque is the sum of these. The magnetic circuit of the motor, consisting of the stator iron, rotor iron, air gap, magnet and slot, is modelled with a reluctance net that accounts for the saturation of the iron. This means that several iterations, in which the permeability is updated, have to be performed in order to obtain the final results. The motor torque is calculated using the instantaneous flux linkage and stator currents. Flux linkage refers to the part of the flux created by the permanent magnets and the stator currents that passes through the coils in the stator teeth. The angle between this flux and the phase currents defines the torque created by the magnetic circuit. Due to the winding structure of the stator, and in order to limit the leakage flux, the slot openings of the stator are normally not made of ferromagnetic material, even though semi-magnetic slot wedges are used in some cases. At the slot opening faces the flux enters the iron almost normally (tangentially with respect to the rotor flux), creating tangential forces on the rotor. This phenomenon is called cogging. The flux in the slot opening area is not equal on the two sides of an opening, nor across different slot openings, so these forces do not compensate each other. In the calculation it is assumed that the flux entering the left side of an opening is the component to the left of the geometrical centre of the slot.
This torque component, together with the torque component calculated using the Lorentz force, makes up the total torque of the motor. It is easy to see that when all the magnet edges, where the derivative of the magnet flux density is at its highest, enter the slot openings at the same time, the result is a considerable cogging torque. To reduce the cogging torque, the magnet edges can be shaped so that they are not parallel to the stator slots, which is the common way to solve the problem. In doing so, the edge may be spread along the whole slot pitch, and thus the high derivative component is also spread evenly along the rotation. Besides shaping the magnets, they may also be placed somewhat asymmetrically on the rotor surface. The asymmetric distribution can be made in many different ways: all the magnets may have a different deflection from the symmetrical centre point, or they can be shifted, for example, in pairs. Some factors limit the deflection. The first is that the magnets cannot overlap; the magnet shape and its relative width compared to the pole define the deflection in this case. The other factor is that shifting the poles limits the maximum torque of the motor: if the edges of adjacent magnets are very close to each other, the leakage flux from one pole to the other increases, thus reducing the air gap magnetization. The asymmetric model needs some assumptions and simplifications in order to limit the size of the model and the calculation time. The reluctance net is made for a symmetric distribution. If the magnets are distributed asymmetrically, the flux in the different pole pairs will not be exactly the same. Therefore, the assumption that the flux flows from the edges of the model to the next pole pairs (in the calculation model, from one edge to the other) is not correct.
If this effect were to be taken into account in multi-pole-pair machines, all the poles, in other words the whole machine, would have to be modelled in the reluctance net. The error resulting from this incorrect assumption is, nevertheless, negligible.
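The torque summation over radial sections described above can be sketched as follows (a minimal illustration with hypothetical numbers and a simplified force distribution, not the thesis program):

```python
# Hypothetical sketch: torque of an axial flux machine computed section
# by section. The magnet is sliced radially; each slice contributes a
# tangential force at its own mean radius, and the total torque is the
# sum over slices, as described in the abstract.
import math

def slice_torque(force_density, r_inner, r_outer, n_slices):
    """Sum F_i * r_i over equal-width radial slices.

    force_density(r): tangential force per metre of radial length at
    radius r (a placeholder for the reluctance-net result).
    """
    dr = (r_outer - r_inner) / n_slices
    torque = 0.0
    for i in range(n_slices):
        r_mid = r_inner + (i + 0.5) * dr   # mean radius of slice i
        torque += force_density(r_mid) * dr * r_mid
    return torque

# Illustrative case: constant airgap shear stress sigma gives a force
# density 2*pi*r*sigma, so the exact torque is 2*pi*sigma*(ro^3 - ri^3)/3.
sigma = 20_000.0  # N/m^2, invented value
density = lambda r: 2.0 * math.pi * r * sigma
coarse = slice_torque(density, 0.10, 0.20, 2)
fine = slice_torque(density, 0.10, 0.20, 400)
exact = 2.0 * math.pi * sigma * (0.20**3 - 0.10**3) / 3.0
```

As the abstract notes, increasing the number of sections makes the result more accurate: the fine division here lands much closer to the closed-form value than the coarse one.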
Abstract:
In the last decade, defeasible argumentation frameworks have evolved into a sound setting for formalizing commonsense, qualitative reasoning. The logic programming paradigm has proven particularly useful for developing different argument-based frameworks on the basis of different variants of logic programming which incorporate defeasible rules. Most such frameworks, however, are unable to deal with explicit uncertainty or with vague knowledge, as defeasibility is directly encoded in the object language. This paper presents Possibilistic Defeasible Logic Programming (P-DeLP), a new logic programming language which combines features from argumentation theory and logic programming, incorporating as well the treatment of possibilistic uncertainty. These features are formalized on the basis of PGL, a possibilistic logic based on Gödel fuzzy logic. One of the applications of P-DeLP is providing an intelligent agent with non-monotonic, argumentative inference capabilities. In this paper we also provide a better understanding of such capabilities by defining two non-monotonic operators which model the expansion of a given program P by adding new weighted facts associated with argument conclusions and warranted literals, respectively. Different logical properties of the proposed operators are studied.
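The possibilistic flavor of argument comparison can be sketched as follows (a toy encoding under Gödel min semantics; the facts, rules and weights are invented for illustration and this is not the P-DeLP implementation):

```python
# Toy illustration: possibilistic clauses carry necessity degrees in
# [0, 1]; under Godel (min) semantics the strength of a derived
# conclusion is the minimum of the degrees of the clauses used, and
# between two conflicting arguments the one with the strictly higher
# necessity degree defeats the other.

def argument_strength(weights):
    """Necessity degree of a conclusion: min over the clauses used."""
    return min(weights)

def defeats(attacker_weights, target_weights):
    """An argument defeats a conflicting one if it is strictly stronger."""
    return argument_strength(attacker_weights) > argument_strength(target_weights)

# Argument A for flies(tweety): bird fact (0.9) + defeasible rule (0.6).
a = [0.9, 0.6]
# Argument B for ~flies(tweety): penguin fact (0.8) + firmer rule (0.95).
b = [0.8, 0.95]
```

Here B's weakest link (0.8) beats A's (0.6), so the conclusion that tweety does not fly is warranted; this min-based comparison is the basic mechanism, while the real language adds the full argumentation machinery.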
Abstract:
Demyelinating diseases are characterized by a loss of oligodendrocytes leading to axonal degeneration and impaired brain function. Current strategies used for the treatment of demyelinating disease such as multiple sclerosis largely rely on modulation of the immune system. Only limited treatment options are available for treating the later stages of the disease, and these treatments require regenerative therapies to ameliorate the consequences of oligodendrocyte loss and axonal impairment. Directed differentiation of adult hippocampal neural stem/progenitor cells (NSPCs) into oligodendrocytes may represent an endogenous source of glial cells for cell-replacement strategies aiming to treat demyelinating disease. Here, we show that Ascl1-mediated conversion of hippocampal NSPCs into mature oligodendrocytes enhances remyelination in a diphtheria-toxin (DT)-inducible, genetic model for demyelination. These findings highlight the potential of targeting hippocampal NSPCs for the treatment of demyelinated lesions in the adult brain.
Abstract:
As the development of integrated circuit technology continues to follow Moore's law, the complexity of circuits increases exponentially. Traditional hardware description languages such as VHDL and Verilog are no longer powerful enough to cope with this level of complexity and do not provide facilities for hardware/software codesign. Languages such as SystemC are intended to solve these problems by combining the expressive power of high level programming languages with the hardware oriented facilities of hardware description languages. To fully replace older languages in the design flow of digital systems, SystemC should also be synthesizable. The devices required by modern high speed networks often share the same tight constraints on, e.g., size, power consumption and price with embedded systems, but also have very demanding real time and quality of service requirements that are difficult to satisfy with general purpose processors. Dedicated hardware blocks of an application specific instruction set processor are one way to combine fast processing speed, energy efficiency, flexibility and relatively low time-to-market. Common features can be identified in the network processing domain, making it possible to develop specialized but configurable processor architectures. One such architecture is TACO, which is based on the transport triggered architecture. The architecture offers a high degree of parallelism and modularity and greatly simplified instruction decoding. For this M.Sc. (Tech.) thesis, a simulation environment for the TACO architecture was developed with SystemC 2.2, using an old version written with SystemC 1.0 as a starting point. The environment enables rapid design space exploration by providing facilities for hardware/software codesign and simulation, and an extendable library of automatically configured reusable hardware blocks.
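The transport triggered idea underlying TACO can be sketched in miniature (a hypothetical toy interpreter, not the TACO simulator or SystemC code):

```python
# Toy sketch of a transport triggered architecture: the only instruction
# is a move between ports. Writing to a functional unit's trigger port
# starts its operation, so a program is just a schedule of data
# transports; this is what simplifies instruction decoding.

class Adder:
    """Functional unit with an operand port, a trigger port and a result."""
    def __init__(self):
        self.operand = 0
        self.result = 0

    def write(self, port, value):
        if port == "operand":
            self.operand = value          # plain data move
        elif port == "trigger":           # writing here *triggers* the add
            self.result = self.operand + value

def run(moves, units):
    """Execute a program: a list of (value, (unit, port)) transports."""
    for value, (unit, port) in moves:
        units[unit].write(port, value)

units = {"add": Adder()}
# Program: move 2 to add.operand, then move 3 to add.trigger.
run([(2, ("add", "operand")), (3, ("add", "trigger"))], units)
total = units["add"].result
```

Real TTAs move data between unit ports over interconnect buses under compiler control; this sketch only shows the trigger-port convention.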
Other topics that are covered are the differences between SystemC 1.0 and 2.2 from the viewpoint of hardware modeling, and compilation of a SystemC model into synthesizable VHDL with Celoxica Agility SystemC Compiler. A simulation model for a processor for TCP/IP packet validation was designed and tested as a test case for the environment.
Abstract:
We present a model for transport in multiply scattering media based on a three-dimensional generalization of the persistent random walk. The model assumes that photons move along directions that are parallel to the axes. Although this hypothesis is not realistic, it allows us to solve exactly the problem of multiple scattering propagation in a thin slab. Among other quantities, the transmission probability and the mean transmission time can be calculated exactly. Besides being completely solvable, the model could be used as a benchmark for approximation schemes to multiple light scattering.
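A minimal Monte Carlo sketch of such an axis-aligned persistent walk (illustrative parameters and persistence rule; the paper solves the model exactly rather than by simulation):

```python
# Hypothetical illustration of a cubic-lattice persistent random walk:
# photons move along the six axis directions; at each step they keep
# their direction with persistence probability p, otherwise scatter into
# one of the other five directions. We estimate the probability of
# transmission through a slab of thickness L (in step lengths) and the
# mean number of steps of transmitted photons.
import random

def transmit(L, p, rng, max_steps=10_000):
    """One photon entering the slab at z = 0 moving in +z.

    Returns (transmitted, steps): transmitted is True if the photon
    leaves through z >= L, False if it exits back through z < 0.
    """
    directions = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    z, d = 0, (0, 0, 1)
    for step in range(1, max_steps + 1):
        z += d[2]
        if z >= L:
            return True, step
        if z < 0:
            return False, step
        if rng.random() > p:              # scatter: pick one of the others
            d = rng.choice([v for v in directions if v != d])
    return False, max_steps

rng = random.Random(42)
results = [transmit(L=10, p=0.7, rng=rng) for _ in range(2000)]
T = sum(t for t, _ in results) / len(results)
mean_steps = (sum(s for t, s in results if t)
              / max(1, sum(t for t, _ in results)))
```

Any transmitted photon needs at least L steps in the z direction alone, so the mean transmission time exceeds the ballistic time; the paper derives such quantities in closed form instead of by sampling.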