917 results for "Lipschitz perturbation"


Relevance:

10.00%

Publisher:

Abstract:

In this paper we present a new, accurate form of the heat balance integral method, termed the Combined Integral Method (or CIM). The application of this method to Stefan problems is discussed. For simple test cases the results are compared with exact and asymptotic limits. In particular, it is shown that the CIM is more accurate than the second-order, large Stefan number perturbation solution for a wide range of Stefan numbers. In the initial examples it is shown that the CIM reduces the standard problem, consisting of a PDE defined over a domain specified by an ODE, to the solution of one or two algebraic equations. The latter examples, where the boundary temperature varies with time, reduce to a set of three first-order ODEs.
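For context on the comparison above: the exact (Neumann) solution of the classical one-phase Stefan problem likewise reduces to a single transcendental algebraic equation for the similarity parameter lambda. A minimal Python sketch of that reference solution (not the CIM itself; the bracketing interval is an assumption):

# Neumann solution of the one-phase Stefan problem: the melt front follows
# s(t) = 2*lam*sqrt(t), where lam solves the transcendental equation
#   sqrt(pi) * lam * exp(lam**2) * erf(lam) = St   (St = Stefan number).
from math import sqrt, pi, exp, erf
from scipy.optimize import brentq

def neumann_lambda(stefan):
    f = lambda lam: sqrt(pi) * lam * exp(lam**2) * erf(lam) - stefan
    return brentq(f, 1e-9, 5.0)  # bracket assumed adequate for moderate St

for St in (0.1, 1.0, 10.0):
    print(f"St = {St:5.1f}  ->  lambda = {neumann_lambda(St):.4f}")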

Relevance:

10.00%

Publisher:

Abstract:

Purpose of the study: Reconstruction of the anterior cruciate ligament (ACL) controls laxity but does not restore strictly normal 3D kinematics. The purpose of this study was to compare the kinematics of the pathological knee with that of the healthy knee after ACL plasty, applying a new ambulatory system based on miniature sensors. Material and method: Five patients with an isolated injury of the ACL participated in this study. The patients were assessed after injury (T1), at five months (T2) and at 14 months (T3) after surgery. The assessment included laxity (KT-1000), the IKDC score and the Lysholm score. The 3D angles of the knees were measured while walking 30 m on flat ground using a system composed of two small inertial units (3D accelerometer and 3D gyroscope) and a portable recorder. Functional settings were optimised and validated to ensure easy, precise measurement of the 3D angles. Symmetry of the two knees was quantified using a symmetry index (SI) (difference in amplitude normalised in relation to mean amplitude) and the correlation coefficient (CC). Results: Clinical indicators improved during follow-up (IKDC T1: 3C, 2C; T2: 5B; T3: 2A, 3B; subjective IKDC: 53-95; Lysholm: 67-96). Mean laxity improved from 8.6 mm to 2.5 mm. The gait analysis showed increased symmetry in terms of amplitude for flexion-extension (SI: −17% at T1, −1% at T2, 1% at T3) and an increase in symmetry in terms of the rotation signature (CC: 0.16 at T1, 0.99 at T2, 0.99 at T3). There was no clear trend for varus-valgus. Discussion: This study demonstrates the clinical application of the new ambulatory system for measuring 3D angles of the knee joint. Joint symmetry increased after ACL plasty but still showed some perturbation at 14 months. The results observed here are in agreement with the literature. Other patients and other types of gait are being analysed. Conclusion: This portable system allows gait analysis outside the laboratory, before and after ACL injury, and is very useful for follow-up after surgery.
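The symmetry index used above (amplitude difference normalised by the mean amplitude) is straightforward to reproduce; a minimal Python sketch with hypothetical amplitudes, not the study's raw data:

# Symmetry index (SI): amplitude difference between knees, normalised by the
# mean amplitude and expressed in percent; 0% means perfect symmetry.
def symmetry_index(amp_injured, amp_healthy):
    mean_amp = (amp_injured + amp_healthy) / 2.0
    return 100.0 * (amp_injured - amp_healthy) / mean_amp

# Hypothetical flexion-extension amplitudes (degrees):
print(symmetry_index(52.0, 61.5))  # about -17%, comparable to T1 in the study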

Relevance:

10.00%

Publisher:

Abstract:

MOTIVATION: Combinatorial interactions of transcription factors with cis-regulatory elements control the dynamic progression through successive cellular states and thus underpin all metazoan development. The construction of network models of cis-regulatory elements, therefore, has the potential to generate fundamental insights into cellular fate and differentiation. Haematopoiesis has long served as a model system to study mammalian differentiation, yet modelling based on experimentally informed cis-regulatory interactions has so far been restricted to pairs of interacting factors. Here, we have generated a Boolean network model based on detailed cis-regulatory functional data connecting 11 haematopoietic stem/progenitor cell (HSPC) regulator genes. RESULTS: Despite its apparent simplicity, the model exhibits surprisingly complex behaviour that we charted using strongly connected components and shortest-path analysis in its Boolean state space. This analysis of our model predicts that HSPCs display heterogeneous expression patterns and possess many intermediate states that can act as 'stepping stones' for the HSPC to achieve a final differentiated state. Importantly, an external perturbation or 'trigger' is required to exit the stem cell state, with distinct triggers characterizing maturation into the various lineages. By focusing on intermediate states occurring during erythrocyte differentiation, from our model we predicted a novel negative regulation of Fli1 by Gata1, which we confirmed experimentally, thus validating our model. In conclusion, we demonstrate that an advanced mammalian regulatory network model based on experimentally validated cis-regulatory interactions has allowed us to make novel, experimentally testable hypotheses about transcriptional mechanisms that control differentiation of mammalian stem cells. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
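To make the state-space analysis concrete, a minimal Python sketch of the same kind of computation on a hypothetical three-gene toy network (not the 11-gene HSPC model): enumerate the synchronous Boolean state space and report the terminal strongly connected components, which are the attractors:

# Enumerate the synchronous state space of a toy 3-gene Boolean network and
# find its attractors as terminal strongly connected components (SCCs).
from itertools import product
import networkx as nx

# Hypothetical update rules: each gene's next state from the current (a, b, c).
rules = {
    'a': lambda a, b, c: a and not c,   # 'a' self-sustains unless repressed by 'c'
    'b': lambda a, b, c: a or b,        # 'a' activates 'b', which self-sustains
    'c': lambda a, b, c: b and not a,   # 'b' activates 'c', 'a' represses it
}

G = nx.DiGraph()
for state in product([0, 1], repeat=3):
    succ = tuple(int(rules[g](*state)) for g in ('a', 'b', 'c'))
    G.add_edge(state, succ)

# An attractor is an SCC with no edges leaving it (a terminal SCC).
cond = nx.condensation(G)
for scc_id in cond.nodes:
    if cond.out_degree(scc_id) == 0:
        print("attractor:", sorted(cond.nodes[scc_id]['members']))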

Relevance:

10.00%

Publisher:

Abstract:

MOTIVATION: In silico modeling of gene regulatory networks has gained momentum recently due to increased interest in analyzing the dynamics of biological systems. This has been further facilitated by the increasing availability of experimental data on gene-gene, protein-protein and gene-protein interactions. The two dynamical properties that are often experimentally testable are perturbations and stable steady states. Although much work has been done on the identification of steady states, little has been reported on in silico modeling of cellular differentiation processes. RESULTS: In this manuscript, we provide algorithms based on reduced ordered binary decision diagrams (ROBDDs) for Boolean modeling of gene regulatory networks. Algorithms for synchronous and asynchronous transition models are proposed and their computational properties analyzed. These algorithms allow users to compute cyclic attractors of large networks that are currently not feasible using existing software. This provides a framework to analyze the effect of multiple gene perturbation protocols and their effect on cell differentiation processes. The algorithms were validated on the T-helper model, correctly identifying the steady states and the Th1-Th2 cellular differentiation process. AVAILABILITY: The software binaries for Windows and Linux platforms can be downloaded from http://si2.epfl.ch/~garg/genysis.html.
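The distinction between the two transition models can be illustrated independently of the ROBDD machinery used by the actual software; a minimal Python sketch with a hypothetical three-gene network:

# Synchronous vs asynchronous successors of a Boolean network state.
# 'rules' maps each gene index to its update function over the state tuple.
rules = {
    0: lambda s: s[1],             # gene 0 activated by gene 1
    1: lambda s: not s[2],         # gene 1 repressed by gene 2
    2: lambda s: s[0] and s[1],    # gene 2 needs both 0 and 1
}

def synchronous_successor(state):
    # All genes updated simultaneously: exactly one successor per state.
    return tuple(int(rules[i](state)) for i in range(len(state)))

def asynchronous_successors(state):
    # One gene updated at a time: up to n distinct successors per state.
    succs = set()
    for i in range(len(state)):
        new = list(state)
        new[i] = int(rules[i](state))
        succs.add(tuple(new))
    return succs - {state}  # drop the trivial 'no change' transitions

s = (1, 1, 0)
print("sync :", synchronous_successor(s))
print("async:", asynchronous_successors(s))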

Relevance:

10.00%

Publisher:

Abstract:

Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches beyond the local scale still represents a major challenge, yet doing so is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity and the larger-scale trend of the prevailing hydraulic conductivity field. The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges.
In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proves highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
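A minimal Python sketch of the gradual-deformation proposal referred to above, under the common assumption of a multivariate Gaussian prior (grid size, covariance model and angle are hypothetical): combining the current model with an independent prior realization through a single angle preserves the prior covariance, while the angle tunes the perturbation strength:

# Gradual-deformation proposal for MCMC under a Gaussian prior:
#   m_new = m_cur * cos(theta) + z * sin(theta),  z ~ N(0, C_prior)
# Because cos^2 + sin^2 = 1, m_new retains the prior covariance C_prior;
# small theta gives a local move, theta = pi/2 an independent prior draw.
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Hypothetical exponential prior covariance over a 1-D grid of n cells.
x = np.arange(n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))

def prior_draw():
    return L @ rng.standard_normal(n)

def gradual_deformation_proposal(m_cur, theta):
    z = prior_draw()
    return m_cur * np.cos(theta) + z * np.sin(theta)

m = prior_draw()
m_new = gradual_deformation_proposal(m, theta=0.2)  # gentle perturbation
print(np.std(m), np.std(m_new))  # comparable spread: prior statistics preserved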

Relevance:

10.00%

Publisher:

Abstract:

Mepraia spinolai is a silvatic species of Triatominae which prefers microhabitats in or near rock piles. It is also able to maintain populations of similar or larger size near houses. The density of bugs in quarries near Santiago, Chile, differed among microhabitats and varied significantly within sites according to season. M. spinolai was not found in sites characterized by human perturbation of the quarries. Our results confirm M. spinolai as a silvatic triatomine whose importance as a vector of Chagas disease will depend on contact with humans; this could occur if the habitats where populations of this species occur are exploited for urban development.

Relevance:

10.00%

Publisher:

Abstract:

Division of labour is one of the most prominent features of social insects. The efficient allocation of individuals to different tasks requires dynamic adjustment in response to environmental perturbations. Theoretical models suggest that the colony-level flexibility in responding to external changes and internal perturbation may depend on the within-colony genetic diversity, which is affected by the number of breeding individuals. However, these models have not considered the genetic architecture underlying the propensity of workers to perform the various tasks. Here, we investigated how both within-colony genetic variability (stemming from variation in the number of matings by queens) and the number of genes influencing the stimulus threshold at which workers begin to perform a given task jointly influence task allocation efficiency. We used a numerical agent-based model to investigate the situation where workers had to perform either a regulatory task or a foraging task. One hundred generations of artificial selection in populations consisting of 500 colonies revealed that an increased number of matings always improved colony performance, whatever the number of loci encoding the thresholds of the regulatory and foraging tasks. However, the beneficial effect of additional matings was particularly important when the genetic architecture of queens comprised one or a few genes for the foraging task's threshold. By contrast, a higher number of genes encoding the foraging task reduced colony performance, with the detrimental effect being stronger when queens had mated with several males. Finally, the number of genes encoding the threshold for the regulatory task had only a minor effect on colony performance. Overall, our numerical experiments support the importance of mating frequency for the efficiency of division of labour and also reveal complex interactions between the number of matings and genetic architecture.
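The threshold mechanism described above is commonly formalised as a fixed-response-threshold rule; a minimal Python sketch of that generic rule (not the authors' exact agent-based model; all rates and ranges are hypothetical):

# Fixed-response-threshold model of division of labour (generic sketch):
# each worker engages in a task when its stimulus exceeds the worker's
# genetically determined threshold; demand grows each step and is reduced
# in proportion to the number of engaged workers.
import numpy as np

rng = np.random.default_rng(1)
n_workers, n_steps = 100, 200
delta, alpha = 1.0, 2.0          # stimulus growth rate and per-capita work rate

# Hypothetical genetically variable thresholds for two tasks
# (regulatory and foraging), one row per worker.
thresholds = rng.uniform(1.0, 10.0, size=(n_workers, 2))
stimuli = np.zeros(2)

for _ in range(n_steps):
    stimuli += delta                          # demand accumulates
    engaged = stimuli[None, :] > thresholds   # who works on which task
    stimuli -= alpha * engaged.sum(axis=0) / n_workers
    stimuli = np.clip(stimuli, 0.0, None)

print("final stimuli:", stimuli)
print("workers per task:", (stimuli[None, :] > thresholds).sum(axis=0))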

Relevance:

10.00%

Publisher:

Abstract:

To put constraints on the Mesozoic to recent growth of the Anti-Atlas system, we investigated the temperature-time history of rocks by applying extensive low-temperature thermochronological analysis to three Precambrian inliers along the coast and 250 km into the interior. Bedrocks yield old U-Th/He ages on zircon (248-193 Ma) and apatite (150-50 Ma), as well as apatite fission-track ages of 173-121 Ma. These datasets are interpreted as recording passive-margin upward movements from central Atlantic rifting until the Early Cretaceous. A phase of sedimentary burial is evidenced for the Cretaceous-Eocene. The extent of this thin (1.5 km) basin is loosely constrained but may have reached the western regions of northern Africa. The effects of the existing thermal perturbation of lithospheric origin 100 km below the Atlas show that the 60-120 °C isotherms are not much deflected. Large-scale uplift has possibly occurred in the western Anti-Atlas since c. 30 Ma and is associated with a mean denudation rate of 0.08 km Ma⁻¹.

Relevance:

10.00%

Publisher:

Abstract:

HIV-1 infects CD4+ T cells and completes its replication cycle in approximately 24 hours. We employed repeated measurements in a standardized cell system and rigorous mathematical modeling to characterize the emergence of the viral replication intermediates and their impact on the cellular transcriptional response with high temporal resolution. We observed 7,991 (73%) of the 10,958 expressed genes to be modulated in concordance with key steps of viral replication. Fifty-two percent of the overall variability in the host transcriptome was explained by linear regression on the viral life cycle. This profound perturbation of cellular physiology was investigated in the light of several regulatory mechanisms, including transcription factors, miRNAs, host-pathogen interaction, and proviral integration. Key features were validated in primary CD4+ T cells, and with viral constructs using alternative entry strategies. We propose a model of early massive cellular shutdown and progressive upregulation of the cellular machinery to complete the viral life cycle.
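The reported 52% figure is a variance-explained (R²) computation from a linear regression of expression on the viral life cycle; a minimal Python sketch of that calculation on simulated data (dimensions, covariate and noise level are hypothetical):

# Fraction of variance in expression explained by regressing on a
# viral life-cycle covariate (ordinary least squares R^2).
import numpy as np

rng = np.random.default_rng(2)
n_timepoints, n_genes = 12, 500

stage = np.linspace(0, 1, n_timepoints)              # life-cycle progression
X = np.column_stack([np.ones(n_timepoints), stage])  # intercept + covariate

# Simulated expression: gene-specific response to the life cycle plus noise.
beta = rng.normal(0, 1, size=(2, n_genes))
Y = X @ beta + rng.normal(0, 0.7, size=(n_timepoints, n_genes))

coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ coef
ss_res = (resid**2).sum()
ss_tot = ((Y - Y.mean(axis=0))**2).sum()
print(f"overall variance explained: {1 - ss_res / ss_tot:.1%}")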

Relevance:

10.00%

Publisher:

Abstract:

The quantity of interest for high-energy photon beam therapy recommended by most dosimetric protocols is the absorbed dose to water. Thus, ionization chambers are calibrated in absorbed dose to water, which is the same quantity as that calculated by most treatment planning systems (TPS). However, when measurements are performed in a low-density medium, the presence of the ionization chamber generates a perturbation at the level of the secondary particle range. Therefore, the measured quantity is close to the absorbed dose to a volume of water equivalent to the chamber volume. This quantity is not equivalent to the dose calculated by a TPS, which is the absorbed dose to an infinitesimally small volume of water. This phenomenon can lead to an overestimation of the absorbed dose measured with an ionization chamber of up to 40% in extreme cases. In this paper, we propose a method to calculate correction factors based on Monte Carlo simulations. These correction factors are obtained as the ratio of the absorbed dose to water in a low-density medium $\bar{D}^{\mathrm{low}}_{w,Q,V_1}$, averaged over a scoring volume $V_1$ for a geometry where $V_1$ is filled with the low-density medium, to the absorbed dose to water $\bar{D}^{\mathrm{low}}_{w,Q,V_2}$, averaged over a volume $V_2$ for a geometry where $V_2$ is filled with water. In the Monte Carlo simulations, $\bar{D}^{\mathrm{low}}_{w,Q,V_2}$ is obtained by replacing the volume of the ionization chamber by an equivalent volume of water, in accordance with the definition of the absorbed dose to water. The method is validated in two different configurations, which allowed us to study the behavior of this correction factor as a function of depth in the phantom, photon beam energy, phantom density and field size.
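Written out as a display equation (notation as reconstructed above), the proposed correction factor is simply the ratio of the two Monte Carlo dose estimates:

% Correction factor: dose scored in V1 (filled with the low-density medium)
% divided by dose scored in V2 (ionization chamber volume replaced by water).
\[
  k = \frac{\bar{D}^{\mathrm{low}}_{w,Q,V_1}}{\bar{D}^{\mathrm{low}}_{w,Q,V_2}}
\]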

Relevance:

10.00%

Publisher:

Abstract:

Self-organizing maps (Kohonen, 1997) are a type of artificial neural network developed to explore patterns in high-dimensional multivariate data. The conventional version of the algorithm involves the use of the Euclidean metric in the process of adaptation of the model vectors, thus rendering the whole methodology, in theory, incompatible with non-Euclidean geometries. In this contribution we explore the two main aspects of the problem:
1. Whether the conventional approach using the Euclidean metric can yield valid results with compositional data.
2. Whether a modification of the conventional approach, replacing vectorial sum and scalar multiplication by the canonical operators in the simplex (i.e. perturbation and powering), can converge to an adequate solution.
Preliminary tests showed that both methodologies can be used on compositional data. However, the modified version of the algorithm performs more poorly than the conventional version, in particular when the data are pathological. Moreover, the conventional approach converges faster to a solution when the data are "well-behaved".
Key words: self-organizing map; artificial neural networks; compositional data
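The canonical simplex operators mentioned in point 2 have a compact closed form; a minimal Python sketch (function names are mine, not from a particular package):

# Aitchison-geometry operations on compositions (vectors of positive parts
# that sum to a constant, here 1).
import numpy as np

def closure(x):
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def perturb(x, y):
    # Perturbation (the simplex analogue of vector addition):
    # component-wise product, re-closed.
    return closure(np.asarray(x) * np.asarray(y))

def power(x, alpha):
    # Powering (the simplex analogue of scalar multiplication):
    # component-wise power, re-closed.
    return closure(np.asarray(x, dtype=float) ** alpha)

x = closure([1.0, 2.0, 7.0])
y = closure([2.0, 2.0, 1.0])
print(perturb(x, y))      # x "shifted" by y in the simplex
print(power(x, 0.5))      # x "scaled" by 0.5 in the simplex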

Relevance:

10.00%

Publisher:

Abstract:

The formulation of Perturbation Theory in matrix notation is reviewed, and a simple application is presented: the solution of the problem of a particle subject to an attractive potential inside a one-dimensional quantum box.
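A minimal numerical sketch of this type of application: the first-order perturbation correction E_n^(1) = <n|V|n> for a particle in a one-dimensional box with an attractive square well at its centre (box width, well depth and half-width are my assumptions, not the article's):

# First-order perturbation theory for a particle in a 1-D box of width L
# with an attractive square well of depth V0 and half-width a at the centre:
#   E_n^(1) = <n|V|n> = integral of |psi_n(x)|^2 V(x) dx
import numpy as np

L, V0, a = 1.0, 5.0, 0.1          # hypothetical box width, well depth, half-width
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]
V = np.where(np.abs(x - L / 2) < a, -V0, 0.0)

for n in range(1, 4):
    psi = np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)   # box eigenfunctions
    E1 = np.sum(psi**2 * V) * dx                          # first-order shift
    print(f"n = {n}:  E^(1) = {E1:+.4f}")
# Even n give nearly zero shift: those states have a node at the well's centre.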

Relevance:

10.00%

Publisher:

Abstract:

The use of the perturbation and power transformation operations permits the investigation of linear processes in the simplex as in a vector space. When the investigated geochemical processes can be constrained by the use of a well-known starting point, the eigenvectors of the covariance matrix of a non-centred principal component analysis allow compositional changes to be modelled relative to a reference point. The results obtained for the chemistry of water collected in the River Arno (central-northern Italy) open new perspectives for considering relative changes of the analysed variables and for hypothesising the relative effect of different acting physical-chemical processes, thus laying the basis for quantitative modelling.
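A minimal Python sketch of the approach described, under standard compositional-data conventions (data, reference point and dimensions are hypothetical): samples are perturbed by the inverse of the reference composition, clr-transformed, and decomposed by SVD without column centring:

# Non-centred PCA of compositional changes relative to a reference point:
# differences from the reference are taken with the perturbation operator,
# mapped to real space with the clr transform, and decomposed by SVD
# WITHOUT centring, so the leading eigenvector describes change away from
# the reference.
import numpy as np

def closure(x):
    return x / x.sum(axis=-1, keepdims=True)

def clr(x):
    logx = np.log(x)
    return logx - logx.mean(axis=-1, keepdims=True)

rng = np.random.default_rng(3)
X = closure(rng.lognormal(size=(30, 4)))       # hypothetical 4-part compositions
ref = closure(np.array([1.0, 1.0, 1.0, 1.0]))  # hypothetical starting point

Z = clr(closure(X / ref))          # perturbation by the inverse of the reference
U, s, Vt = np.linalg.svd(Z, full_matrices=False)  # non-centred decomposition

print("dominant direction of change (clr space):", Vt[0])
print("variance share:", s[0]**2 / (s**2).sum())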

Relevance:

10.00%

Publisher:

Abstract:

R, available from http://www.r-project.org/, is 'GNU S': a language and environment for statistical computing and graphics. Many classical and modern statistical techniques have been implemented in the environment; there are 8 standard packages, and many more are available through the CRAN family of Internet sites (http://cran.r-project.org). We have started to develop a library of functions in R to support the analysis of mixtures, and our goal is a MixeR package for compositional data analysis that provides support for:
- operations on compositions: perturbation and power multiplication, subcomposition with or without residuals, centring of the data, computation of Aitchison, Euclidean and Bhattacharyya distances, compositional Kullback-Leibler divergence, etc.;
- graphical presentation of compositions in ternary diagrams and tetrahedrons, with additional features: barycentre, geometric mean of the data set, percentile lines, marking and colouring of subsets of the data set and their geometric means, annotation of individual data points in the set, etc.;
- dealing with zeros and missing values in compositional data sets, with R procedures for simple and multiplicative replacement strategies;
- time series analysis of compositional data.
We will present the current status of MixeR development and illustrate its use on selected data sets.
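As an illustration of one of the listed operations (in Python rather than R, so not the MixeR implementation itself): the Aitchison distance is the Euclidean distance between clr-transformed compositions:

# Aitchison distance: Euclidean distance in clr coordinates.
import numpy as np

def clr(x):
    x = np.asarray(x, dtype=float)
    logx = np.log(x / x.sum())   # the closure constant cancels in the log-ratio
    return logx - logx.mean()

def aitchison_distance(x, y):
    return np.linalg.norm(clr(x) - clr(y))

print(aitchison_distance([1, 2, 7], [2, 2, 6]))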

Relevance:

10.00%

Publisher:

Abstract:

Primary immune thrombocytopenia (ITP) is an acquired autoimmune disorder characterised by reduced platelet survival and perturbed platelet production. No simple clinical test exists to prove the autoimmune nature of the condition; for this reason, it is almost always a diagnosis of exclusion. Although platelet counts are often below 10 x 10⁹/l at initial presentation, the bleeding tendency is surprisingly moderate in the majority of patients. Initial treatment always relies on corticosteroids, combined with intravenous immunoglobulins and platelet transfusions in complicated forms with significant haemorrhage. In children, the disease is often induced by viral infections, and its course is benign and spontaneously remitting in the majority of cases. In adults, the disease is more often persistent or chronically relapsing, and the platelet count often remains at a level sufficient to prevent spontaneous haemorrhage. Only a small proportion of patients suffer from prolonged severe thrombocytopenia accompanied by regular bleeding with a risk of potentially fatal haemorrhage. It is probably this small group of patients that will benefit most from new therapeutic options such as thrombopoietin receptor agonists. In the light of these new possibilities, a group of Swiss haematologists convened to develop guidelines for the management of ITP adapted to the needs and practices of our country.