964 results for Computational modelling by homology


Relevance:

40.00%

Abstract:

Reactive oxygen intermediates generated by the phagocyte NADPH oxidase are critically important components of host defense. However, these highly toxic oxidants can cause significant tissue injury during inflammation; thus, it is essential that their generation and inactivation are tightly regulated. We show here that an endogenous proline-arginine (PR)-rich antibacterial peptide, PR-39, inhibits NADPH oxidase activity by blocking assembly of this enzyme through interactions with Src homology 3 domains of a cytosolic component. This neutrophil-derived peptide inhibited the oxygen-dependent microbicidal activity of neutrophils in whole cells and in a cell-free assay of NADPH oxidase. Both the oxidase-inhibitory and direct antimicrobial activities were defined within the amino-terminal 26 residues of PR-39. Oxidase inhibition was attributed to binding of PR-39 to the p47phox cytosolic oxidase component. Its effects involve both a polybasic amino-terminal segment and a proline-rich core region of PR-39 that binds to the p47phox Src homology 3 domains and thereby inhibits interaction with p22phox, the small subunit of cytochrome b558. These findings suggest that PR-39, which has been shown to be involved in tissue repair processes, is a multifunctional peptide that can regulate NADPH oxidase production of the superoxide anion (O₂⁻), thus limiting excessive tissue damage during inflammation.

Relevance:

40.00%

Abstract:

We have identified a previously undescribed 62-kDa protein (p62) that does not contain phosphotyrosine yet binds specifically to the isolated Src homology 2 (SH2) domain of p56lck. The additional presence of the unique N-terminal region of p56lck prevents p62 binding to the SH2 domain. However, phosphorylation at Ser-59 (or, alternatively, its mutation to Glu) reverses this inhibition and allows interaction of the p56lck SH2 domain with p62. Moreover, p62 is associated with a serine/threonine kinase activity and also binds to ras GTPase-activating protein, a negative regulator of the ras signaling pathway. Thus, phosphotyrosine-independent binding of p62 to the p56lck SH2 domain appears to provide an alternative pathway for p56lck signaling that is regulated by Ser-59 phosphorylation.

Relevance:

40.00%

Abstract:

Azomethine ylides, generated from imines derived from O-cinnamyl or O-crotonyl salicylaldehyde and α-amino acids, undergo intramolecular 1,3-dipolar cycloaddition, leading to chromeno[4,3-b]pyrrolidines. Two sets of reaction conditions were used: (a) microwave-assisted heating (200 W, 185 °C) of a neat mixture of reagents, and (b) conventional heating (170 °C) in PEG-400 as solvent. In both cases, a mixture of two epimers at the position α to the nitrogen atom of the pyrrolidine nucleus was formed through the less energetic endo approach (B/C ring fusion). In many cases, the stereoisomer bearing a trans arrangement at the B/C ring fusion was formed in high proportion. Comprehensive computational and kinetic simulation studies are detailed. An analysis of the stability of the transient 1,3-dipoles, followed by an assessment of the intramolecular pathways and kinetics, is also reported.

Relevance:

40.00%

Abstract:

The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. The algorithm is applied in many areas: in medicine, for volumetric reconstruction of tomography data; in robotics, to reconstruct surfaces or scenes from range-sensor information; in industrial systems, for quality control of manufactured objects; and even in biology, to study the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, must be processed. Many variants have been proposed in the literature that improve performance by reducing the number of points or the required iterations, or by reducing the complexity of the most expensive phase: the closest-neighbor search. Despite lowering the computational cost, some of these variants degrade the final registration precision or the convergence domain, thus limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of computationally demanding problems of the kind described above can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, considering metrics with lower computational cost than the Euclidean distance, which is the de facto standard in implementations of the algorithm. In this analysis, the behavior of the algorithm in diverse topological spaces, each characterized by a different metric, has been studied to assess the convergence, efficacy and cost of the method and to determine which metric offers the best results. Given that distance calculation accounts for a significant part of the algorithm's total computation, any reduction in the cost of that operation can be expected to improve the overall performance of the method significantly. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and experimentally validated as comparable to the Euclidean distance on a heterogeneous set of objects, scenarios and initial configurations.
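
To make the metric-swapping idea concrete, here is a minimal ICP sketch (an illustration, not the thesis implementation): the closest-neighbor search runs under a Minkowski p-norm, so p=2 gives the usual Euclidean metric while p=1 (Manhattan) or p=np.inf (Chebyshev) are cheaper alternatives of the kind analyzed above. Function names and defaults are our assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, p=2.0, max_iter=50, tol=1e-8):
    """Rigidly align source (N,3) onto target (M,3); returns rotation R and translation t."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)                       # accelerates the closest-neighbor phase
    prev_err = np.inf
    for _ in range(max_iter):
        moved = source @ R.T + t
        # closest-neighbor search under the chosen Minkowski metric (p = 1, 2, np.inf, ...)
        dists, idx = tree.query(moved, p=p)
        matched = target[idx]
        # Kabsch/SVD solution of the rigid alignment for the current correspondences
        mu_s, mu_t = moved.mean(axis=0), matched.mean(axis=0)
        H = (moved - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
        R_step = Vt.T @ D @ U.T
        t_step = mu_t - R_step @ mu_s
        R, t = R_step @ R, R_step @ t + t_step   # compose with the running transform
        err = dists.mean()
        if abs(prev_err - err) < tol:            # stop when the mean residual stabilizes
            break
        prev_err = err
    return R, t
```

Only the `tree.query(..., p=p)` line changes between metrics, which is why a cheaper norm in that phase can pay off across the whole run.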

Relevance:

40.00%

Abstract:

Numerical modelling methodologies are important because of their application to engineering and scientific problems in which no analytical mathematical expression can be obtained to model the process. When the only available information is a set of experimental values of the variables that determine the state of the system, the modelling problem is equivalent to determining the hyper-surface that best fits the data. This paper presents a methodology based on the Galerkin formulation of the finite element method to obtain representations of relationships, defined a priori, between a set of variables: y = z(x1, x2, ..., xd). These representations are generated from the values of the variables in the experimental data. The piecewise approximation is an element of a Sobolev space and has derivatives defined, in a generalized sense, in this space. The approach requires inverting a linear system whose structure allows a fast solver algorithm, so the method can be used in a variety of fields as a multidisciplinary tool. The validity of the methodology is studied on two real applications: a problem in hydrodynamics and an engineering problem involving fluids, heat and transport in an energy generation plant. The predictive capacity of the methodology is also tested using cross-validation.
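
A hedged one-dimensional illustration of the fitting idea (the paper's method is formulated for d variables; `fe_fit` and the small ridge term below are ours, not the paper's): scattered data are fitted by a piecewise-linear finite-element expansion, and the resulting least-squares normal equations form a banded (here tridiagonal) system, which is the kind of structure a fast solver can exploit.

```python
import numpy as np

def fe_fit(x, y, nodes):
    """Least-squares coefficients of a piecewise-linear (hat-function) fit on a uniform mesh."""
    h = nodes[1] - nodes[0]
    n = len(nodes)
    A, b = np.zeros((n, n)), np.zeros(n)
    k = np.clip(((x - nodes[0]) / h).astype(int), 0, n - 2)   # element containing each sample
    w = (x - nodes[k]) / h                                    # local coordinate in [0, 1]
    phi = [1.0 - w, w]                                        # the two hat functions on the element
    for a in range(2):
        np.add.at(b, k + a, phi[a] * y)
        for c in range(2):
            np.add.at(A, (k + a, k + c), phi[a] * phi[c])     # assemble the normal matrix
    A += 1e-9 * np.eye(n)          # tiny ridge so data-free elements do not make A singular
    return np.linalg.solve(A, b)   # banded (tridiagonal) system: fast solvers apply

# Illustrative use: fit a noisy curve on 11 nodes
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 500)
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(500)
coeffs = fe_fit(x, y, np.linspace(0.0, 1.0, 11))
```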

Relevance:

40.00%

Abstract:

Atmospheric inverse modelling has the potential to provide observation-based estimates of greenhouse gas emissions at the country scale, thereby allowing for an independent validation of national emission inventories. Here, we present a regional-scale inverse modelling study to quantify the emissions of methane (CH₄) from Switzerland, making use of the newly established CarboCount-CH measurement network and a high-resolution Lagrangian transport model. In our reference inversion, prior emissions were taken from the "bottom-up" Swiss Greenhouse Gas Inventory (SGHGI) as published by the Swiss Federal Office for the Environment in 2014 for the year 2012. Overall we estimate national CH₄ emissions to be 196 ± 18 Gg yr⁻¹ for the year 2013 (1σ uncertainty). This result is in close agreement with the recently revised SGHGI estimate of 206 ± 33 Gg yr⁻¹ as reported in 2015 for the year 2012. Results from sensitivity inversions using alternative prior emissions, uncertainty covariance settings, large-scale background mole fractions, two different inverse algorithms (Bayesian and extended Kalman filter), and two different transport models confirm the robustness and independent character of our estimate. According to the latest SGHGI estimate the main CH₄ source categories in Switzerland are agriculture (78 %), waste handling (15 %) and natural gas distribution and combustion (6 %). The spatial distribution and seasonal variability of our posterior emissions suggest an overestimation of agricultural CH₄ emissions by 10 to 20 % in the most recent SGHGI, which is likely due to an overestimation of emissions from manure handling. Urban areas do not appear as emission hotspots in our posterior results, suggesting that leakages from natural gas distribution are only a minor source of CH₄ in Switzerland. This is consistent with rather low emissions of 8.4 Gg yr⁻¹ reported by the SGHGI but inconsistent with the much higher value of 32 Gg yr⁻¹ implied by the EDGARv4.2 inventory for this sector. Increased CH₄ emissions (up to 30 % compared to the prior) were deduced for the north-eastern parts of Switzerland. This feature was common to most sensitivity inversions, which is a strong indicator that it is a real feature and not an artefact of the transport model and the inversion system. However, it was not possible to assign an unambiguous source process to the region. The observations of the CarboCount-CH network provided invaluable and independent information for the validation of the national bottom-up inventory. Similar systems need to be sustained to provide independent monitoring of future climate agreements.
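
For readers unfamiliar with the machinery, the Bayesian branch of such an inversion reduces, in its textbook linear-Gaussian form, to the update below (a generic sketch, not the study's system; in practice H would hold the source-receptor sensitivities from the Lagrangian transport model, and the function name is ours).

```python
import numpy as np

def bayesian_inversion(x_a, B, y, R, H):
    """Posterior mean and covariance for y = H x + e, with Gaussian prior N(x_a, B)."""
    S = H @ B @ H.T + R                  # covariance of the model-data mismatch
    K = B @ H.T @ np.linalg.inv(S)       # gain weighting prior against observations
    x_hat = x_a + K @ (y - H @ x_a)      # posterior emission estimate
    P = (np.eye(len(x_a)) - K @ H) @ B   # posterior covariance: reduced uncertainty
    return x_hat, P
```

The extended Kalman filter variant mentioned above applies essentially the same gain sequentially in time rather than in one batch.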

Relevance:

40.00%

Abstract:

The C2 domain is one of the most frequent and widely distributed calcium-binding motifs. Its structure comprises an eight-stranded beta-sandwich, with two structural types that appear related by a circular permutation. Combining sequence, structural and modelling information, we have explored, at different levels of granularity, the functional characteristics of several families of C2 domains. At the coarsest level, the similarity correlates with key structural determinants of the C2 domain fold and, at the finest level, with the domain architecture of the proteins containing them, highlighting the functional diversity between the various subfamilies. This functional diversity appears as different conserved surface patches across this common fold. In some cases, these patches are related to substrate-binding sites, whereas in others they correspond to interfaces of presumably permanent interaction with other domains within the same polypeptide chain. For the patches related to substrate-binding sites, the predictions overlap with biochemical data and also provide some novel observations. For those acting as protein-protein interfaces, our modelling analysis suggests that slight variations between families result not only from complementary adaptations in the interfaces involved but also from differences in domain architecture. In the light of the sequence and structural genomics projects, the work presented here shows that modelling approaches, combined with careful sub-typing of protein families, will be a powerful combination for broader coverage in proteomics.

Relevance:

40.00%

Abstract:

The modelling of inpatient length of stay (LOS) has important implications for health care studies. Finite mixture distributions are usually used to model the heterogeneous LOS distribution, because a certain proportion of patients sustain a longer stay. However, because morbidity data are collected from hospitals, observations clustered within the same hospital are often correlated. A generalized linear mixed model approach is adopted to accommodate this inherent correlation via unobservable random effects. An EM algorithm is developed to obtain residual maximum quasi-likelihood estimates. The proposed hierarchical mixture regression approach enables the identification and assessment of factors influencing both the long-stay proportion and the LOS of the long-stay patient subgroup. A neonatal LOS data set is used for illustration.
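
One plausible way to write such a hierarchical mixture model (our notation, not necessarily the paper's exact parameterisation): for patient j in hospital i, the LOS density is a two-component mixture sharing a hospital-level random effect,

```latex
f(y_{ij} \mid u_i) = \pi\, f_1\!\left(y_{ij};\, \mathbf{x}_{ij}^{\top}\boldsymbol{\beta}_1 + u_i\right)
 + (1-\pi)\, f_2\!\left(y_{ij};\, \mathbf{x}_{ij}^{\top}\boldsymbol{\beta}_2 + u_i\right),
\qquad u_i \sim N(0, \sigma^2),
```

where 1 − π is the long-stay proportion and β₂ carries the covariate effects for the long-stay subgroup; the EM algorithm then alternates between posterior component memberships and updates of the regression and variance parameters.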

Relevance:

40.00%

Abstract:

The paper presents a computational system, based upon formal principles, for running spatial models of environmental processes. The simulator is named SimuMap because it is typically used to simulate spatial processes over a mapped representation of terrain. A model is formally represented in SimuMap as a set of coupled sub-models. The paper considers the situation where spatial processes operate at different time levels but are still integrated. Such a situation commonly occurs in watershed hydrology, where overland flow and stream channel flow have very different flow rates but are highly related, as they are subject to the same terrain runoff processes. SimuMap is able to run a network of sub-models that express different time-space derivatives for water flow processes. Sub-models may be coded generically in a map algebra programming language that uses a surface data model. To address the problem of differing time levels in simulation, the paper: (i) reviews general approaches for numerical solvers; (ii) considers the constraints that must be enforced to use more adaptive time steps in discrete-time simulations; and (iii) discusses the scaling of transfer rates in equations that use different time bases for their time-space derivatives. A multistep scheme is proposed for SimuMap and is presented along with a description of its visual programming interface, its modelling formalisms and future plans.
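
A hedged sketch of the multistep idea (not SimuMap's map-algebra code; all constants are illustrative): a fast process, such as overland flow, is sub-stepped inside each step of a slow process, such as channel flow, and the transfer between them is accumulated so that rates stay consistent across the two time bases.

```python
DT_FAST = 60.0                 # overland-flow step [s]
SUBSTEPS = 10                  # fast steps per channel step
DT_SLOW = DT_FAST * SUBSTEPS   # channel-flow step [s]

overland, channel = 100.0, 0.0  # storages [m^3], illustrative values
K_OVER = 1e-3                   # overland outflow rate constant [1/s]

for _ in range(24):             # 24 slow (channel) steps
    inflow = 0.0
    for _ in range(SUBSTEPS):   # integrate the fast process on its own time base
        q = K_OVER * overland   # outflow rate [m^3/s]
        overland -= q * DT_FAST
        inflow += q * DT_FAST   # accumulate the volume transferred this slow step
    channel += inflow           # slow process receives the aggregated transfer
```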

Relevance:

40.00%

Abstract:

Cluster analysis via a finite mixture model approach is considered. With this approach to clustering, the data can be partitioned into a specified number of clusters g by first fitting a mixture model with g components. An outright clustering of the data is then obtained by assigning an observation to the component to which it has the highest estimated posterior probability of belonging; that is, the ith cluster consists of those observations assigned to the ith component (i = 1,..., g). The focus is on the use of mixtures of normal components for the cluster analysis of data that can be regarded as being continuous. But attention is also given to the case of mixed data, where the observations consist of both continuous and discrete variables.
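
The following is a minimal sketch of that procedure for univariate normal components (a toy EM, not the text's software; `em_cluster` is our name): fit the g-component mixture, then assign each observation to the component with the highest estimated posterior probability.

```python
import numpy as np

def em_cluster(x, g, iters=200, seed=0):
    """Fit a g-component univariate normal mixture by EM, then assign clusters."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=g, replace=False)       # initialise means from the data
    sd = np.full(g, x.std())
    pi = np.full(g, 1.0 / g)
    for _ in range(iters):
        # E-step: tau[i, k] = posterior probability that point i belongs to component k
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / sd
        tau = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update mixing proportions, means and standard deviations
        nk = tau.sum(axis=0)
        pi = nk / len(x)
        mu = (tau * x[:, None]).sum(axis=0) / nk
        sd = np.maximum(np.sqrt((tau * (x[:, None] - mu) ** 2).sum(axis=0) / nk), 1e-6)
    return tau.argmax(axis=1)   # outright clustering: highest posterior probability

# Illustrative use on two well-separated groups
x = np.concatenate([np.random.normal(0, 1, 300), np.random.normal(5, 1, 200)])
labels = em_cluster(x, g=2)
```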

Relevance:

40.00%

Abstract:

This paper investigates how demographic (socioeconomic) and land-use (physical and environmental) data can be integrated within a decision support framework to formulate and evaluate land-use planning scenarios. A case-study approach is undertaken with land-use planning scenarios for a rapidly growing coastal area in Australia, the Shire of Hervey Bay. The town and surrounding area require careful planning of the future urban growth between competing land uses. Three potential urban growth scenarios are put forth to address this issue. Scenario A ('continued growth') is based on existing socioeconomic trends. Scenario B ('maximising rates base') is derived using optimisation modelling of land-valuation data. Scenario C ('sustainable development') is derived using a number of social, economic, and environmental factors and assigning weightings of importance to each factor using a multiple criteria analysis approach. The land-use planning scenarios are presented through the use of maps and tables within a geographical information system, which delineate future possible land-use allocations up until 2021. The planning scenarios are evaluated by using a goal-achievement matrix approach. The matrix is constructed with a number of criteria derived from key policy objectives outlined in the regional growth management framework and town planning schemes. The authors of this paper examine the final efficiency scores calculated for each of the three planning scenarios and discuss the advantages and disadvantages of the three land-use modelling approaches used to formulate the final scenarios.
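
As a toy illustration of the goal-achievement matrix evaluation (criteria, weights and scores below are invented for illustration; the paper derives its criteria from regional policy objectives), each scenario's efficiency score is a weighted sum of its achievement against the criteria:

```python
import numpy as np

weights = np.array([0.4, 0.35, 0.25])   # e.g. social, economic, environmental criteria
# rows: scenarios A, B, C; columns: achievement per criterion on a 0-1 scale
achievement = np.array([
    [0.6, 0.7, 0.4],    # A: continued growth
    [0.5, 0.9, 0.3],    # B: maximising rates base
    [0.7, 0.6, 0.8],    # C: sustainable development
])
efficiency = achievement @ weights       # weighted efficiency score per scenario
for name, score in zip("ABC", efficiency):
    print(f"Scenario {name}: {score:.2f}")
```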

Relevance:

40.00%

Abstract:

Modelling and optimization of the power draw of large SAG/AG mills is important because of the large power draw which modern mills require (5-10 MW). The cost of grinding is the single biggest cost within the entire process of mineral extraction. Traditionally, mill power draw has been modelled using empirical models. Although these models are reliable, they cannot represent mills and operating conditions that lie outside the boundaries of the model database, and, because of their static nature, they cannot capture the impact of changing conditions within the mill on the power draw. Despite advances in computing power, discrete element method (DEM) modelling of large mills with many thousands of particles can be a time-consuming task. The speed of computation is determined principally by two parameters: the number of particles involved and the material properties. The computational time step is set by the size of the smallest particle present in the model and by the material stiffness: small particles force a short time step, whereas large particles allow a larger one. Hence, given the simulation length required (usually 3-4 mill revolutions), it is advantageous that the smallest particles in the model are not unnecessarily small. The objective of this work is to compare the net power draw of a mill whose charge is characterised by different size distributions, while preserving constant charge mass and mill speed.
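
A back-of-envelope sketch of why the smallest particle controls the step (generic linear spring-dashpot stability estimate, not the paper's scheme; values are illustrative): the contact integration is stable only below roughly dt ≈ 2·sqrt(m/k), and since m scales with r³, halving the minimum radius cuts the admissible step by a factor of about 2.8.

```python
import math

def critical_dt(radius_m, density_kg_m3, stiffness_n_m):
    """Stability estimate dt ≈ 2*sqrt(m/k) for a linear spring contact model."""
    mass = density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m ** 3
    return 2.0 * math.sqrt(mass / stiffness_n_m)

print(critical_dt(0.010, 2700.0, 1e6))  # 10 mm particle
print(critical_dt(0.005, 2700.0, 1e6))  # 5 mm particle -> ~2.8x smaller step
```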

Relevance:

40.00%

Abstract:

Background and Aims: The morphogenesis and architecture of a rice plant, Oryza sativa, are critical factors in the yield equation, but they are not well studied because of the lack of appropriate tools for 3D measurement. The architecture of rice plants is characterized by a large number of tillers and leaves. The aims of this study were to specify rice plant architecture and to find appropriate functions to represent its 3D growth across all growth stages. Methods: A japonica-type rice, 'Namaga', was grown in pots under outdoor conditions. A 3D digitizer was used to measure the rice plant structure at intervals from the young seedling stage to maturity. The L-system formalism was applied to create '3D virtual rice' plants, incorporating models of phenological development and leaf emergence period as functions of temperature and photoperiod, which were used to determine the timing of tiller emergence. Key Results: The relationships between nodal positions and leaf lengths, leaf angles and tiller angles were analysed and used to determine growth functions for the models. The '3D virtual rice' reproduces the structural development of isolated plants and provides a good estimation of the tillering process and of the accumulation of leaves. Conclusions: The results indicate that the '3D virtual rice' can demonstrate differences in structure and development between cultivars and under different environmental conditions. Future work, needed to reflect both cultivar and environmental effects on model performance and to link the model with physiological models, is proposed in the discussion.
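
For readers new to the formalism, an L-system rewrites every symbol of a string in parallel at each step, which is how organ emergence can be scheduled symbol by symbol. The toy rules below are generic (not the 'Namaga' rice model): an apex A produces an internode I, a bracketed leaf L, and a new apex.

```python
# Toy L-system: apex -> internode + leaf + apex; internodes persist unchanged
RULES = {"A": "I[L]A",
         "I": "I"}

def derive(axiom, steps):
    s = axiom
    for _ in range(steps):
        # parallel rewrite: every symbol is replaced in the same pass
        s = "".join(RULES.get(c, c) for c in s)
    return s

print(derive("A", 3))    # -> I[L]I[L]I[L]A
```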