908 results for Photoinduced CS in Molecular system
Abstract:
Evolution of compositions in time, space, temperature or other covariates is frequent in practice. For instance, the radioactive decomposition of a sample changes its composition with time. Some of the involved isotopes decompose into other isotopes of the sample, thus producing a transfer of mass from some components to other ones, but preserving the total mass present in the system. This evolution is traditionally modelled as a system of ordinary differential equations for the mass of each component. However, this kind of evolution can be decomposed into a compositional change, expressed in terms of simplicial derivatives, and a mass evolution (constant in this example). A first result is that the simplicial system of differential equations is non-linear, even though some subcompositions behave linearly. The goal is to study the characteristics of such simplicial systems of differential equations, such as linearity and stability. This is performed by extracting the compositional differential equations from the mass equations. Then, simplicial derivatives are expressed in coordinates of the simplex, thus reducing the problem to the standard theory of systems of differential equations, including stability. The characterisation of stability of these non-linear systems relies on the linearisation of the system of differential equations at the stationary point, if any. The eigenvalues of the linearised matrix and the associated behaviour of the orbits are the main tools. For a three-component system, these orbits can be plotted either in coordinates of the simplex or in a ternary diagram. A characterisation of processes with transfer of mass in closed systems in terms of stability is thus concluded. Two examples are presented for illustration, one of them a radioactive decay.
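As a concrete illustration of the mass equations versus the compositional picture, the following minimal sketch (with a hypothetical decay constant and simple Euler integration, not taken from the paper) integrates a two-component closed system A → B, checks that total mass is conserved while mass is transferred, and tracks the composition through its ilr coordinate:

```python
import math

def simulate_decay(lam=1.0, dt=0.001, steps=10_000):
    """Euler-integrate a two-component closed system A -> B.

    Masses obey dmA/dt = -lam*mA, dmB/dt = +lam*mA, so the total
    mass is conserved while the composition (mA, mB)/total evolves.
    lam, dt and steps are illustrative values, not from the paper.
    """
    mA, mB = 1.0, 0.0
    for _ in range(steps):
        flux = lam * mA * dt          # mass transferred A -> B this step
        mA -= flux
        mB += flux
    total = mA + mB
    xA, xB = mA / total, mB / total   # composition on the 2-part simplex
    # ilr (isometric log-ratio) coordinate of the composition
    z = math.log(xA / xB) / math.sqrt(2)
    return total, xA, xB, z
```

Running this, the total mass stays at 1 while the composition drifts toward pure B; the ilr coordinate decreases without bound, reflecting that pure decay has no stationary composition in the interior of the simplex.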
Abstract:
This paper focuses on the problem of locating single-phase faults in mixed distribution electric systems, with overhead lines and underground cables, using voltage and current measurements at the sending end and the sequence model of the network. Since calculating series impedance for underground cables is not as simple as in the case of overhead lines, the paper proposes a methodology to obtain an estimate of the zero-sequence impedance of underground cables starting from previous single-phase faults in the system in which an electric arc occurred at the fault location. For this reason, the signal is first pretreated to eliminate its voltage peaks, so that the analysis can work with a signal as close to a sine wave as possible.
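The core idea of estimating an impedance from past fault records can be sketched as follows. This is a highly simplified illustration, not the paper's method: it ignores arc voltage and shunt capacitance and assumes the crude model V0 = z0 · d · I0 for each recorded fault, with the record format invented here for the example:

```python
def estimate_z0(records):
    """Estimate per-km zero-sequence impedance from past fault records.

    Each record is (V0, I0, d_km): zero-sequence voltage and current
    phasors (complex numbers) measured at the sending end, and the
    known fault distance. Under the crude model V0 = z0 * d_km * I0,
    a least-squares fit over all records gives
    z0 = sum(conj(I0*d) * V0) / sum(|I0*d|**2).
    """
    num = sum((i * d).conjugate() * v for v, i, d in records)
    den = sum(abs(i * d) ** 2 for v, i, d in records)
    return num / den
```

With noise-free synthetic records the fit recovers the impedance exactly; with real measurements the least-squares form averages out measurement error across faults.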
Abstract:
This is a short presentation that introduces how models and modelling help us to solve large-scale problems in the real world. It introduces the idea that dynamic behaviour is caused by interacting components in the system. Feedback in the system makes behaviour prediction difficult unless we use modelling to support understanding.
Abstract:
Human butyrylcholinesterase (BChE; EC 3.1.1.8) is a polymorphic enzyme synthesized in the liver and in adipose tissue, widely distributed throughout the body, and responsible for hydrolyzing certain choline esters such as procaine, aliphatic esters such as acetylsalicylic acid, drugs such as methylprednisolone, mivacurium and succinylcholine, and drugs of use and/or abuse such as heroin and cocaine. It is encoded by the BCHE gene (OMIM 177400), for which more than 100 variants have been identified, some not fully studied, in addition to the most frequent form, called usual or wild-type. Different polymorphisms of the BCHE gene have been linked to the synthesis of enzymes with varying levels of catalytic activity. The molecular bases of some of these genetic variants have been reported, among them the Atypical (A), fluoride-resistant types 1 and 2 (F-1 and F-2), silent (S), Kalow (K), James (J) and Hammersmith (H) variants. In this study, the validated instrument Lifetime Severity Index for Cocaine Use Disorder (LSI-C) was applied to a group of patients to assess the severity of "cocaine" use over the lifetime. In addition, Single Nucleotide Polymorphisms (SNPs) in the BCHE gene known to be responsible for adverse reactions in "cocaine"-using patients were determined by gene sequencing, and the effect of the SNPs on protein function and structure was predicted using bioinformatics tools. The LSI-C instrument yielded results in four dimensions: lifetime use, recent use, psychological dependence, and attempts to quit.
Molecular analysis revealed two non-synonymous coding SNPs (cSNPs) in 27.3% of the sample, c.293A>G (p.Asp98Gly) and c.1699G>A (p.Ala567Thr), located in exons 2 and 4, which correspond, functionally, to the Atypical (A) variant [dbSNP: rs1799807] and the Kalow (K) variant [dbSNP: rs1803274] of the BChE enzyme, respectively. In silico prediction studies established a pathogenic character for the SNP p.Asp98Gly, whereas the SNP p.Ala567Thr showed neutral behavior. Analysis of the results suggests a relationship between polymorphisms or genetic variants responsible for low catalytic activity and/or low plasma concentration of the BChE enzyme and some of the adverse reactions observed in cocaine-using patients.
Abstract:
Attention deficit hyperactivity disorder (ADHD) is clinically defined as a behavioral disturbance characterized by inattention, hyperactivity and impulsivity. These features are classified into three subtypes: inattentive, hyperactive-impulsive and combined. Clinically, a broad spectrum is described that includes academic difficulties, learning disorders, cognitive deficits, conduct disorders, antisocial personality, poor interpersonal relationships and increased anxiety, which may persist into adulthood. Globally, prevalence has been estimated at between 1% and 22%, with wide variation depending on age, origin and social characteristics. In Colombia, studies conducted in Bogotá and Antioquia have established prevalences of 5% and 15%, respectively. The specific cause has not been fully elucidated; however, heritability close to 80% has been estimated in some populations, demonstrating the fundamental role of genetics in the etiology of the disorder. The genetic factors involved relate to neurochemical changes in the dopaminergic, serotonergic and noradrenergic systems, particularly in the frontal-subcortical systems, the prefrontal cerebral cortex, the ventral, medial and dorsolateral regions, and the anterior portion of the cingulate. Based on data from previous studies suggesting multifactorial polygenic inheritance, continuous efforts have been made to search for candidate genes through different strategies. In particular, alpha-2 adrenergic receptors are found in the cerebral cortex, where they serve association and memory functions, and they are the site of action of drugs commonly used to treat this disorder, which is the main evidence for the association of this receptor with the development of ADHD.
To date, more than 80 polymorphisms have been described in the ADRA2A gene, some of which have been associated with the condition. However, the results are controversial and vary according to the diagnostic methodology employed, the population studied, and patient history and comorbidities. This work aims to establish whether variations in the coding sequence of the ADRA2A gene could be related to the attention deficit hyperactivity disorder phenotype.
Abstract:
An analytic method to evaluate nuclear contributions to the electrical properties of polyatomic molecules is presented. Such contributions control changes induced by an electric field on the equilibrium geometry (nuclear relaxation contribution) and vibrational motion (vibrational contribution) of a molecular system. Expressions to compute the nuclear contributions have been derived from a power series expansion of the potential energy. These contributions to the electrical properties are given in terms of energy derivatives with respect to normal coordinates, electric field intensity, or both. Only one calculation of such derivatives at the field-free equilibrium geometry is required. To demonstrate the efficiency of the analytic evaluation of electrical properties (the so-called AEEP method), results for calculations on water and pyridine at the SCF/TZ2P and MP2/TZ2P levels of theory are reported. The results obtained are compared with previous theoretical calculations and with experimental values.
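The notion of a nuclear relaxation contribution can be made concrete on a one-mode model potential. The sketch below is not the AEEP formalism, just an illustration with hypothetical parameters: the field shifts the equilibrium geometry, and differentiating the geometry-relaxed energy with respect to the field yields an electronic plus a nuclear relaxation term in the polarizability:

```python
def model_energy(Q, F, k=2.0, a=0.5, alpha_e=1.2):
    """One-mode model potential E(Q, F) = k*Q**2/2 - a*Q*F - alpha_e*F**2/2.

    k is the force constant along the normal coordinate Q, a the mixed
    derivative -d2E/dQdF, alpha_e the electronic polarizability (all
    hypothetical values, not from the paper).
    """
    return 0.5 * k * Q**2 - a * Q * F - 0.5 * alpha_e * F**2

def relaxed_energy(F, k=2.0, a=0.5, alpha_e=1.2):
    # Stationarity dE/dQ = 0 gives the field-relaxed geometry Q* = a*F/k,
    # i.e. the geometry change induced by the field (nuclear relaxation).
    Q_star = a * F / k
    return model_energy(Q_star, F, k, a, alpha_e)

def total_polarizability(h=1e-3):
    # alpha = -d2E/dF2 at F = 0, by central finite difference on the
    # relaxed energy: electronic + nuclear relaxation contributions.
    e_plus, e0, e_minus = relaxed_energy(h), relaxed_energy(0.0), relaxed_energy(-h)
    return -(e_plus - 2 * e0 + e_minus) / h**2
```

For this quadratic model the analytic result is alpha_e + a**2/k = 1.325, and the finite-difference value matches it: the a**2/k term is exactly the polarizability gained by letting the geometry relax in the field.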
Abstract:
Around the 1990s, antibodies devoid of light chains were discovered in the family Camelidae, in which the variable domain consists solely of heavy chains (VHH) together with two constant domains (CH2 and CH3). These fragments became known as nanobodies, not only because of their small size and flexibility, but also because they represent a new generation of therapeutic antibodies with several advantages over conventional antibodies: they are not immunogenic and have high thermal and chemical stability, among many other inherent characteristics. Their applications are diverse: they can be used in medical treatment and diagnosis, in drug delivery and in vaccine development. One of the molecular technologies most widely used for the cloning and expression of nanobodies is phage display, which can be divided into two approaches: the phage vector system and the phagemid vector system. The most commonly used phage vectors are filamentous bacteriophages, such as M13, capable of infecting gram-negative bacteria such as Escherichia coli. It is a powerful and promising biotechnological tool, particularly prominent in medicine.
Abstract:
In molecular biology, it is often desirable to find common properties in large numbers of drug candidates. One family of methods stems from the data mining community, where algorithms to find frequent graphs have received increasing attention over the past years. However, the computational complexity of the underlying problem and the large amount of data to be explored essentially render sequential algorithms useless. In this paper, we present a distributed approach to the frequent subgraph mining problem to discover interesting patterns in molecular compounds. This problem is characterized by a highly irregular search tree, whereby no reliable workload prediction is available. We describe the three main aspects of the proposed distributed algorithm, namely, a dynamic partitioning of the search space, a distribution process based on a peer-to-peer communication framework, and a novel receiver-initiated load balancing algorithm. The effectiveness of the distributed method has been evaluated on the well-known National Cancer Institute's HIV-screening data set, where we were able to show close-to-linear speedup in a network of workstations. The proposed approach also allows for dynamic resource aggregation in a non-dedicated computational environment. These features make it suitable for large-scale, multi-domain, heterogeneous environments, such as computational grids.
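The receiver-initiated load balancing idea can be sketched in a few lines. This toy sequential simulation is not the paper's peer-to-peer protocol; the worker layout and splitting rule are assumptions for illustration: an idle worker requests work from a random peer, which donates half of its queue, so an initially skewed workload spreads across workers:

```python
import random
from collections import deque

def run_workers(tasks, n_workers=4, seed=0):
    """Toy receiver-initiated load balancing: all tasks start on worker 0;
    an idle worker asks a random peer for work and receives half of that
    peer's queue; each worker then processes one task per round.
    """
    rng = random.Random(seed)
    queues = [deque() for _ in range(n_workers)]
    queues[0].extend(tasks)               # highly irregular initial load
    done = []
    while any(queues):
        for w in range(n_workers):
            if not queues[w]:             # idle: request work from a peer
                donor = rng.randrange(n_workers)
                half = len(queues[donor]) // 2
                for _ in range(half):     # donor splits off half its queue
                    queues[w].append(queues[donor].pop())
            if queues[w]:
                done.append(queues[w].popleft())  # process one task
    return done
```

The key property of the receiver-initiated scheme is that balancing traffic is generated only by idle workers, so a busy system pays no coordination cost; this matters when, as here, task workloads cannot be predicted in advance.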
Abstract:
Europe's widely distributed climate modelling expertise, now organized in the European Network for Earth System Modelling (ENES), is both a strength and a challenge. Recognizing this, the European Union's Program for Integrated Earth System Modelling (PRISM) infrastructure project aims at designing a flexible and user-friendly environment to assemble, run and post-process Earth System models. PRISM was started in December 2001 with a duration of three years. This paper presents the major stages of PRISM, including: (1) the definition and promotion of scientific and technical standards to increase component modularity; (2) the development of an end-to-end software environment (graphical user interface, coupling and I/O system, diagnostics, visualization) to launch, monitor and analyse complex Earth system models built around state-of-the-art community component models (atmosphere, ocean, atmospheric chemistry, ocean bio-chemistry, sea-ice, land-surface); and (3) testing and quality standards to ensure high performance on a variety of computing platforms. PRISM is emerging as a core strategic software infrastructure for building the European research area in Earth system sciences. Copyright (c) 2005 John Wiley & Sons, Ltd.
Abstract:
A simple and practical technique is described for assessing the risks, that is, the potential for error and consequent loss, in software system development, as identified during the requirements engineering phase. The technique uses goal-based requirements analysis as a framework to identify and rate a set of key issues in order to arrive at estimates of the feasibility and adequacy of the requirements. The technique is illustrated by showing how it was applied to a real systems development project, and how problems in that project could have been identified earlier, thereby avoiding costly additional work and unhappy users.
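The rate-and-aggregate step can be illustrated with a minimal sketch. The issue names and the aggregation rules below are assumptions for the example, not the scheme defined in the paper:

```python
def requirements_risk(issue_ratings):
    """Aggregate per-issue ratings (0 = no concern, 1 = severe concern)
    into rough feasibility and adequacy estimates.

    Illustrative rules only: feasibility is dominated by the worst
    issue, adequacy by the average level of concern.
    """
    feasibility = 1.0 - max(issue_ratings.values())
    adequacy = 1.0 - sum(issue_ratings.values()) / len(issue_ratings)
    return feasibility, adequacy

# Hypothetical issue set rated for some project
ratings = {"unstable goals": 0.6,
           "missing stakeholders": 0.2,
           "untestable requirements": 0.4}
feasibility, adequacy = requirements_risk(ratings)
```

Even a crude aggregation like this surfaces the point of the technique: a single severe issue (here the 0.6 rating) caps feasibility regardless of how benign the other ratings are.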
Abstract:
Frequent pattern discovery in structured data is receiving increasing attention in many application areas of science. However, the computational complexity and the large amount of data to be explored often make sequential algorithms unsuitable. In this context, high-performance distributed computing becomes a very interesting and promising approach. In this paper we present a parallel formulation of the frequent subgraph mining problem to discover interesting patterns in molecular compounds. The application is characterized by a highly irregular tree-structured computation. No estimation is available for task workloads, which show a power-law distribution over a wide range. The proposed approach allows dynamic resource aggregation and provides fault and latency tolerance. These features make the distributed application suitable for multi-domain heterogeneous environments, such as computational Grids. The distributed application has been evaluated on the well-known National Cancer Institute's HIV-screening dataset.
Abstract:
The structure and size of the eyes generated in numerically simulated tropical cyclones and polar lows have been studied. A primitive-equation numerical model simulated systems in which the structures of the eyes formed were consistent with available observations. Whilst the tropical cyclone eyes generated were usually rapidly rotating, it appeared impossible for an eye formed in a system with a polar environment to develop this type of structure. The polar low eyes were found to be unable to warm through the subsidence of air with high values of potential temperature, as the environment was approximately statically neutral. Factors affecting the size of the eye were investigated through a series of controlled experiments. In mature tropical cyclone systems the size of the eye was insensitive to small changes in initial conditions, surface friction and latent and sensible heating from the ocean. In contrast, the eye size was strongly dependent on these parameters in the mature polar lows. Consistent with the findings, a mechanism is proposed in which the size of the eye in simulated polar lows is controlled by the strength of subsidence within it.
Abstract:
Current gas-based in vitro evaluation systems are extremely powerful research techniques. However, they have the potential to generate a great deal more than simple fermentation dynamics. Details from four experiments are presented in which adaptation, and novel application, of an in vitro system allowed widely differing objectives to be examined. In the first two studies, complement methodologies were utilised. In such assays, an activity or outcome is inferred through the occurrence of a secondary event rather than by direct observation. Using an N-deficient incubation medium, the increase in starch fermentation when supplemented with individual amino acids (i.e., known level of N), relative to that of urea (i.e., known quantity and N availability), provided an estimate of their microbial utilisation. Due to the low level of response observed with some amino acids (notably methionine and lysine), it was concluded that they may not need to be offered in a rumen-inert form to escape rumen microbial degradation. In another experiment, the extent to which degradation of plant cell wall components was inhibited by lipid supplementation was evaluated using fermentation gas release profiles of washed hay. The different responses due to lipid source and level of inclusion suggested that the degree of rumen protection required to ameliorate this depression was supplement dependent. That in vitro inocula differ in their microbial composition is of little interest per se, as long as the outcome is the same (i.e., that similar substrates are degraded at comparable rates and end-product release is equivalent). However, where a microbial population is deficient in a particular activity, increasing the level of inoculation will have no benefit. Estimates of hydrolytic activity were obtained by examining fermentation kinetics of specific substrates.
A number of studies identified a fundamental difference between rumen fluid and faecal inocula, with the latter having a lower fibrolytic activity, which could not be completely attributed to microbial numbers. The majority of forage maize is offered as an ensiled feed; however, most of the information on which decisions such as choice of variety, crop management and harvesting date are made is based on fresh-crop measurements. As such, an attempt was made to estimate ensiled maize quality from an in vitro analysis of the fresh crop. Fermentation profiles and chemical analysis confirmed changes in crop composition over the growing season, and loss of labile carbohydrates during ensiling. In addition, examination of degradation residues allowed metabolizable energy (ME) contents to be estimated. Due to difficulties associated with starch analysis, the observation that this parameter could be predicted by difference (together with an assumed degradability) allowed an estimate of ensiled maize ME to be developed from fresh material. In addition, the contribution of the main carbohydrates towards ME showed the importance of delaying harvest until maximum starch content has been achieved. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
Background noise should in theory hinder detection of auditory cues associated with approaching danger. We tested whether foraging chaffinches Fringilla coelebs responded to background noise by increasing vigilance, and examined whether this was explained by predation risk compensation or by a novel stimulus hypothesis. The former predicts that only the inter-scan interval should be modified in the presence of background noise, not vigilance levels generally. This is because noise hampers auditory cue detection and increases perceived predation risk primarily when in the head-down position, and also because previous tests have shown that only the inter-scan interval is correlated with predator detection ability in this system. Chaffinches modified only the inter-scan interval, supporting this hypothesis. At the same time they made significantly fewer pecks when feeding during the background noise treatment, and so the increased vigilance led to a reduction in intake rate, suggesting that compensating for the increased predation risk could indirectly lead to a fitness cost. Finally, the novel stimulus hypothesis predicts that chaffinches should habituate to the noise, which did not occur within a trial or over 5 subsequent trials. We conclude that auditory cues may be an important component of the trade-off between vigilance and feeding, and discuss possible implications for anti-predation theory and ecological processes.
Abstract:
We developed a stochastic simulation model incorporating most processes likely to be important in the spread of Phytophthora ramorum and similar diseases across the British landscape (covering Rhododendron ponticum in woodland and nurseries, and Vaccinium myrtillus in heathland). The simulation allows for movements of diseased plants within a realistically modelled trade network and long-distance natural dispersal. A series of simulation experiments was run with the model, varying the epidemic pressure and linkage between natural vegetation and horticultural trade, with or without disease spread in commercial trade, and with or without inspections-with-eradication, to give a 2 x 2 x 2 x 2 factorial design started at 10 arbitrary locations spread across England. Fifty replicate simulations were made at each set of parameter values. Individual epidemics varied dramatically in size due to stochastic effects throughout the model. Across a range of epidemic pressures, the size of the epidemic was 5-13 times larger when commercial movement of plants was included. A key unknown factor in the system is the area of susceptible habitat outside the nursery system. Inspections, with a probability of detection and efficiency of infected-plant removal of 80% and made at 90-day intervals, reduced the size of epidemics by about 60% across the three sectors with a density of 1% susceptible plants in broadleaf woodland and heathland. Reducing this density to 0.1% largely isolated the trade network, so that inspections reduced the final epidemic size by over 90%, and most epidemics ended without escape into nature. Even in this case, however, major wild epidemics developed in a few percent of cases. Provided the number of new introductions remains low, the current inspection policy will control most epidemics. However, as the rate of introduction increases, it can overwhelm any reasonable inspection regime, largely due to spread prior to detection.
(C) 2009 Elsevier B.V. All rights reserved.
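The shape of such a simulation experiment can be sketched minimally. This toy model is not the published one: the site layout, rates, and inspection rule are invented for illustration, keeping only the ingredients the abstract describes (stochastic spread via long-range trade and short-range natural dispersal, plus periodic inspections with imperfect detection):

```python
import random

def epidemic_size(n_sites=200, trade_rate=0.02, natural_rate=0.005,
                  inspect_every=90, p_detect=0.8, days=1000, seed=1):
    """Toy stochastic spread model: each day every infected site may
    infect a random site via trade or a neighbour via natural
    dispersal; periodic inspections remove each infected site with
    probability p_detect. Returns the final number of infected sites.
    """
    rng = random.Random(seed)
    infected = {0}                          # one arbitrary introduction
    for day in range(1, days + 1):
        for site in list(infected):
            if rng.random() < trade_rate:   # long-range trade movement
                infected.add(rng.randrange(n_sites))
            if rng.random() < natural_rate: # short-range natural dispersal
                infected.add((site + rng.choice((-1, 1))) % n_sites)
        if inspect_every and day % inspect_every == 0:
            infected = {s for s in infected if rng.random() >= p_detect}
            if not infected:
                break                       # epidemic eradicated
    return len(infected)
```

Because each replicate is driven by its own random seed, replicate runs vary widely in size, which is exactly why the study ran fifty replicates per parameter combination rather than a single simulation.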