101 results for Optimal solution
Abstract:
The shape of supercoiled DNA molecules in solution is directly visualized by cryo-electron microscopy of vitrified samples. We observe that: (i) supercoiled DNA molecules in solution adopt an interwound rather than a toroidal form, (ii) the diameter of the interwound superhelix changes from about 12 nm to 4 nm upon addition of magnesium salt to the solution and (iii) the partition of the linking deficit between twist and writhe can be quantitatively determined for individual molecules.
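The partition in (iii) rests on White's theorem, Lk = Tw + Wr: any linking deficit ΔLk must be absorbed as a change in twist, as writhe, or both. A minimal numeric sketch (the values are illustrative, not taken from the study):

```python
# White's theorem: Lk = Tw + Wr, so the linking deficit
# dLk = Lk - Lk0 partitions into a twist change (dTw) and writhe (Wr).

def partition_linking_deficit(d_lk, writhe):
    """Given the linking deficit and a measured writhe,
    return the twist change dTw = dLk - Wr."""
    return d_lk - writhe

# e.g. a plasmid with dLk = -25 whose measured writhe is -18
d_tw = partition_linking_deficit(-25.0, -18.0)
print(d_tw)  # -7.0: the remainder of the deficit is absorbed as twist
```

Measuring the writhe of an individual molecule from the micrographs thus fixes its twist change.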
Identification of optimal structural connectivity using functional connectivity and neural modeling.
Abstract:
The complex network dynamics that arise from the interaction of the brain's structural and functional architectures give rise to mental function. Theoretical models demonstrate that the structure-function relation is maximal when the global network dynamics operate at a critical point of state transition. In the present work, we used a dynamic mean-field neural model to fit empirical structural connectivity (SC) and functional connectivity (FC) data acquired in humans and macaques and developed a new iterative-fitting algorithm to optimize the SC matrix based on the FC matrix. A dramatic improvement of the fitting of the matrices was obtained with the addition of a small number of anatomical links, particularly cross-hemispheric connections, and reweighting of existing connections. We suggest that the notion of a critical working point, where the structure-function interplay is maximal, may provide a new way to link behavior and cognition, and a new perspective to understand recovery of function in clinical conditions.
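The iterative-fitting idea can be caricatured as a stochastic search over SC entries in which a perturbation (a reweighted or newly added link) is kept only when it improves the SC-to-FC fit. The sketch below uses a toy linear stand-in for the forward model (the study used a dynamic mean-field model); the sizes, update rule, and forward model are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                      # toy number of brain regions
sc = rng.random((n, n)); sc = (sc + sc.T) / 2
np.fill_diagonal(sc, 0.0)                  # toy structural connectivity
fc_emp = rng.random((n, n)); fc_emp = (fc_emp + fc_emp.T) / 2
iu = np.triu_indices(n, 1)                 # compare upper triangles only

def model_fc(s):
    s = s / s.max()
    return s + 0.5 * s @ s                 # placeholder forward model

def fit(s):
    """Correlation between model FC and empirical FC."""
    return np.corrcoef(model_fc(s)[iu], fc_emp[iu])[0, 1]

score0 = score = fit(sc)
for _ in range(500):                       # iterative reweighting loop
    i, j = rng.integers(0, n, size=2)
    if i == j:
        continue
    trial = sc.copy()                      # perturb one link; a new link
    trial[i, j] = trial[j, i] = max(0.0,   # appears if the entry was zero
                                    trial[i, j] + rng.normal(0.0, 0.05))
    new = fit(trial)
    if new > score:                        # keep only fit-improving changes
        sc, score = trial, new
```

The accepted perturbations play the role of the added cross-hemispheric links and reweighted connections described above.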
Abstract:
Study objectives: Many major drugs are not available in paediatric form. The aim of this study was to develop a stable liquid solution of captopril for oral paediatric use allowing individualised dosage and easy administration to newborn and young patients. Methods: A specific HPLC-UV method was developed. In a pilot study, a number of formulations described in the literature as affording one-month stability were examined. In the long-term study proper, the formulation that gave the best results was then prepared in large batches and its stability monitored for two years at 5°C and room temperature, and for one year at 40°C. Results: Most formulations described in the literature were found wanting in our pilot study. A simple solution of the drug (1 mg/mL) in purified water (European Pharmacopeia) containing 0.1% disodium edetate (EDTA-Na) as preservative proved chemically and microbiologically stable at 5°C and room temperature for two years. Conclusion: The proposed in-house formulation fulfils stringent criteria of purity and stability and is fully acceptable for oral administration to newborn and young patients.
Abstract:
Designing an efficient sampling strategy is of crucial importance for habitat suitability modelling. This paper compares four such strategies, namely 'random', 'regular', 'proportional-stratified' and 'equal-stratified', to investigate (1) how they affect prediction accuracy and (2) how sensitive they are to sample size. In order to compare them, a virtual species approach (Ecol. Model. 145 (2001) 111) in a real landscape, based on reliable data, was chosen. The distribution of the virtual species was sampled 300 times using each of the four strategies in four sample sizes. The sampled data were then fed into a GLM to make two types of prediction: (1) habitat suitability and (2) presence/absence. Comparing the predictions to the known distribution of the virtual species allows model accuracy to be assessed. Habitat suitability predictions were assessed by Pearson's correlation coefficient and presence/absence predictions by Cohen's K agreement coefficient. The results show the 'regular' and 'equal-stratified' sampling strategies to be the most accurate and most robust. We propose the following characteristics to improve sample design: (1) increase sample size, (2) prefer systematic to random sampling and (3) include environmental information in the design.
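The four designs can be sketched on a toy set of 100 sites split into four strata; the stratum sizes and weights are illustrative assumptions, not the paper's implementation:

```python
import random

random.seed(1)
population = list(range(100))                       # 100 candidate sites
strata = [population[i:i + 25] for i in range(0, 100, 25)]  # 4 equal strata
stratum_weight = [0.1, 0.2, 0.3, 0.4]               # e.g. share of landscape

def simple_random(n):                               # 'random'
    return random.sample(population, n)

def regular(n):                                     # 'regular' (systematic):
    step = len(population) // n                     # every step-th site
    return population[::step][:n]

def equal_stratified(n):                            # 'equal-stratified':
    per = n // len(strata)                          # same count per stratum
    return [s for st in strata for s in random.sample(st, per)]

def proportional_stratified(n):                     # 'proportional-stratified':
    return [s for st, w in zip(strata, stratum_weight)  # count ∝ weight
            for s in random.sample(st, round(n * w))]
```

Feeding each design's sample into the GLM and scoring the predictions, as described above, is what separates the strategies.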
Abstract:
With increased activity and reduced financial and human resources, there is a need for automation in clinical bacteriology. Initial processing of clinical samples includes repetitive and fastidious steps. These tasks are suitable for automation, and several instruments are now available on the market, including the WASP (Copan), Previ-Isola (BioMerieux), Innova (Becton-Dickinson) and Inoqula (KIESTRA) systems. These new instruments allow efficient and accurate inoculation of samples, including four main steps: (i) selecting the appropriate Petri dish; (ii) inoculating the sample; (iii) spreading the inoculum on agar plates to obtain, upon incubation, well-separated bacterial colonies; and (iv) accurate labelling and sorting of each inoculated medium. The challenge for clinical bacteriologists is to determine which automated system is ideal for their own laboratory. Indeed, different solutions will be preferred according to the number and variety of samples, and to the types of sample that will be processed with the automated system. The final choice is difficult, because audits proposed by manufacturers risk being biased towards their own company's solution, and because these automated systems may not be easily tested on site prior to the final decision, owing to the complexity of computer connections between the laboratory information system and the instrument. This article thus summarizes the main parameters that need to be taken into account when choosing the optimal system, and provides some clues to help clinical bacteriologists make their choice.
Abstract:
We study optimal public health care rationing and private sector price responses. Consumers differ in their wealth and illness severity (defined as treatment cost). Due to a limited budget, some consumers must be rationed. Rationed consumers may purchase from a monopolistic private market. We consider two information regimes. In the first, the public supplier rations consumers according to their wealth information (means testing). In equilibrium, the public supplier must ration both rich and poor consumers. Rationing some poor consumers implements price reduction in the private market. In the second information regime, the public supplier rations consumers according to consumers' wealth and cost information. In equilibrium, consumers are allocated the good if and only if their costs are below a threshold (cost effectiveness). Rationing based on cost results in higher equilibrium consumer surplus than rationing based on wealth.
Abstract:
An autoregulation-oriented strategy has been proposed to guide neurocritical therapy toward the optimal cerebral perfusion pressure (CPPOPT). The influence of ventilation changes is, however, unclear. We sought to find out whether short-term moderate hypocapnia (HC) shifts the CPPOPT or affects its detection. Thirty patients with traumatic brain injury (TBI), who required sedation and mechanical ventilation, were studied during 20 min of normocapnia (5.1±0.4 kPa) and 30 min of moderate HC (4.4±3.0 kPa). Monitoring included bilateral transcranial Doppler of the middle cerebral arteries (MCA), invasive arterial blood pressure (ABP), and intracranial pressure (ICP). The Mx autoregulatory index provided a measure of the CPP responsiveness of MCA flow velocity. CPPOPT was assessed as the CPP at which autoregulation (Mx) was working with maximal efficiency. During normocapnia, CPPOPT (left: 80.65±6.18; right: 79.11±5.84 mm Hg) was detectable in 12 of 30 patients. Moderate HC did not shift this CPPOPT but enabled its detection in another 17 patients (CPPOPT left: 83.94±14.82; right: 85.28±14.73 mm Hg). The detection of CPPOPT was achieved via a significantly improved Mx autoregulatory index and an increase in mean CPP. It appeared that short-term moderate HC augmented the detection of an optimum CPP, and may therefore usefully support CPP-guided therapy in patients with TBI.
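A common way to operationalize "the CPP at which Mx is minimal" is to bin Mx against CPP and take the vertex of a fitted parabola; the sketch below illustrates that idea on synthetic data and is not necessarily the authors' exact pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
cpp = rng.uniform(60, 100, 400)                      # monitored CPP, mm Hg
true_opt = 80.0                                      # synthetic ground truth
# synthetic Mx: autoregulation is most efficient (Mx lowest) near CPPopt
mx = 0.0008 * (cpp - true_opt) ** 2 + rng.normal(0, 0.01, cpp.size)

bins = np.arange(60, 105, 5)                         # 5 mm Hg CPP bins
centers = (bins[:-1] + bins[1:]) / 2
mx_binned = [mx[(cpp >= lo) & (cpp < hi)].mean()
             for lo, hi in zip(bins[:-1], bins[1:])]

a, b, c = np.polyfit(centers, mx_binned, 2)          # parabolic fit
cpp_opt = -b / (2 * a)                               # vertex = estimated CPPopt
```

When the Mx-versus-CPP curve is too flat or noisy, the fit has no clear minimum, which is one way a patient's CPPopt can be undetectable, as in 18 of 30 patients during normocapnia above.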
Abstract:
Cytotoxic T cell (CTL) activation by antigen requires the specific detection of peptide-major histocompatibility complex class I (pMHC) molecules on the target-cell surface by the T cell receptor (TCR). We examined the effect of mutations in the antigen-binding site of a Kb-restricted TCR on T cell activation, antigen binding and dissociation from antigen. These parameters were also examined for variants derived from a Kd-restricted peptide that was recognized by a CTL clone. Using these two independent systems, we show that T cell activation can be impaired by mutations that either decrease or increase the binding half-life of the TCR-pMHC interaction. Our data indicate that efficient T cell activation occurs within an optimal dwell-time range of TCR-pMHC interaction. This restricted dwell-time range is consistent with the exclusion of either extremely low or high affinity T cells from the expanded population during immune responses.
Abstract:
In the traditional actuarial risk model, if the surplus is negative, the company is ruined and has to go out of business. In this paper we distinguish between ruin (negative surplus) and bankruptcy (going out of business), where the probability of bankruptcy is a function of the level of negative surplus. The idea for this notion of bankruptcy comes from the observation that in some industries, companies can continue doing business even though they are technically ruined. Assuming that dividends can only be paid with a certain probability at each point of time, we derive closed-form formulas for the expected discounted dividends until bankruptcy under a barrier strategy. Subsequently, the optimal barrier is determined, and several explicit identities for the optimal value are found. The surplus process of the company is modeled by a Wiener process (Brownian motion).
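Under the stated assumptions (Brownian surplus, barrier dividends, a bankruptcy rate that grows with the depth of ruin), the value of a barrier b can be approximated by simulation; the parameter values and the linear bankruptcy-rate function below are illustrative choices, not the paper's closed-form solution:

```python
import math
import random

def discounted_dividends(b, x0=2.0, mu=0.5, sigma=1.0, delta=0.05,
                         bankruptcy_rate=1.0, dt=0.01, horizon=20.0,
                         seed=None):
    """Discounted dividends paid under a barrier strategy until bankruptcy
    (crude Euler discretization of the continuous-time model)."""
    rng = random.Random(seed)
    x, t, total = x0, 0.0, 0.0
    while t < horizon:
        x += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if x > b:                                   # surplus above the barrier
            total += math.exp(-delta * t) * (x - b) # ...is paid as a dividend
            x = b
        # while ruined (x < 0), bankruptcy occurs at a rate that grows
        # with the level of negative surplus (illustrative linear form)
        if x < 0 and rng.random() < bankruptcy_rate * (-x) * dt:
            break
        t += dt
    return total

# crude Monte Carlo estimate of the value of barrier b = 1.5
est = sum(discounted_dividends(1.5, seed=s) for s in range(100)) / 100
```

Repeating the estimate over a grid of barriers and taking the maximizer mimics, numerically, the optimal-barrier determination done in closed form in the paper.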
Abstract:
Lung transplantation is an established therapy for end-stage pulmonary disorders in selected patients without significant comorbidities. The particular constraints associated with organ transplantation from deceased donors involve specific allocation rules in order to optimise the medical efficacy of the procedure. Comparison of different policies adopted by national transplant agencies reveals that an optimal and unique allocation system is an elusive goal, and that practical, geographical and logistic parameters must be taken into account. A solution to attenuate the imbalance between the number of lung transplant candidates and the limited availability of organs is to consider marginal donors. In particular, assessment and restoration of gas exchange capacity ex vivo in explanted lungs is a new and promising approach that some lung transplant programmes have started to apply in clinical practice. Chronic lung allograft dysfunction, and especially bronchiolitis obliterans, remains the major medium- and long-term problem in lung transplantation with a major impact on survival. Although there is to date no cure for established bronchiolitis obliterans, new preventive strategies have the potential to limit the burden of this feared complication. Unfortunately, randomised prospective studies are infrequent in the field of lung transplantation, and data obtained from larger studies involving kidney or liver recipients are not always relevant for this purpose.
Abstract:
Some methadone maintenance treatment (MMT) programs prescribe inadequate daily methadone doses. Patients complain of withdrawal symptoms and continue illicit opioid use, yet practitioners are reluctant to increase doses above certain arbitrary thresholds. Serum methadone levels (SMLs) may guide practitioners' dosing decisions, especially for those patients who have low SMLs despite higher methadone doses. Such variation is due in part to the complexities of methadone metabolism. The medication itself is a racemic (50:50) mixture of two enantiomers: an active "R" form and an essentially inactive "S" form. Methadone is metabolized primarily in the liver, by up to five cytochrome P450 isoforms, and individual differences in enzyme activity help explain the wide range of active R-enantiomer concentrations in patients given identical doses of racemic methadone. Most clinical research studies have used methadone doses of less than 100 mg/day (d) and have not reported corresponding SMLs. New research suggests that doses ranging from 120 mg/d to more than 700 mg/d, with correspondingly higher SMLs, may be optimal for many patients. Each patient presents a unique clinical challenge, and there is no way of prescribing a single best methadone dose to achieve a specific blood level as a "gold standard" for all patients. Clinical signs and patient-reported symptoms of abstinence syndrome, and continuing illicit opioid use, are effective indicators of dose inadequacy. There does not appear to be a maximum daily dose limit when determining what is adequately "enough" methadone in MMT.
Abstract:
The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy. These methods have governing equations that are the same over a large range of frequencies, allowing processes to be studied in an analogous manner on scales ranging from a few meters below the surface down to several hundreds of kilometers depth. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models, and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods.
In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. These constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. 
This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy where the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized. The methodology is also applied to an injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume due to the larger amount of prior information that is included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically possible based on present-day understanding. This issue may be related to the base resistivity model used, therefore indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful to characterize and monitor the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and the posterior uncertainty quantification.
In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.
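The pixel-based MCMC strategy can be sketched with a Metropolis sampler on a toy linear problem; the forward operator, smoothness prior, and noise level below are illustrative stand-ins for the plane-wave EM setting:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_obs = 10, 15
G = rng.normal(size=(n_obs, n_pix))              # toy linear forward operator
m_true = np.linspace(0.0, 1.0, n_pix)            # "pixel" model to recover
d_obs = G @ m_true + rng.normal(0.0, 0.05, n_obs)
sigma, lam = 0.05, 10.0                          # data error sd, reg. weight

def log_post(m):
    misfit = np.sum((G @ m - d_obs) ** 2) / (2 * sigma ** 2)
    roughness = lam * np.sum(np.diff(m) ** 2)    # smoothness regularization
    return -(misfit + roughness)

m = np.zeros(n_pix)
lp = log_post(m)
samples = []
for it in range(20000):
    proposal = m + rng.normal(0.0, 0.01, n_pix)  # random-walk proposal
    lp_prop = log_post(proposal)
    if np.log(rng.random()) < lp_prop - lp:      # Metropolis acceptance
        m, lp = proposal, lp_prop
    if it >= 10000:                              # keep post-burn-in samples
        samples.append(m.copy())
post_mean = np.mean(samples, axis=0)             # posterior mean model
```

The spread of `samples` around `post_mean` is the pixel-wise uncertainty estimate; as noted above, a strong regularization weight `lam` speeds convergence but narrows those uncertainties, which can exclude the true model where sensitivity is limited.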
Abstract:
The population density of an organism is one of the main aspects of its environment, and should therefore strongly influence its adaptive strategy. The r/K theory, based on the logistic model, was developed to formalize this influence. K-selection is classically thought to favour large body sizes. This prediction, however, cannot be directly derived from the logistic model: some auxiliary hypotheses are therefore implicit. These must be made explicit if the theory is to be tested. An alternative approach, based on the Euler-Lotka equation, shows that density itself is irrelevant, but that the relative effect of density on adult and juvenile features is crucial. For instance, increasing density will select for a larger body size if the density affects mainly juvenile growth and/or survival. In this case, density should indeed favour large body sizes. The theory appears nevertheless inconsistent, since a probable consequence of increasing body size will be a decrease in the carrying capacity.
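The Euler-Lotka argument can be made concrete with a small life table: density enters only through its effect on age-specific survivorship l(x) and fecundity m(x), and the growth rate r solves Σx e^(-rx) l(x) m(x) = 1. An illustrative example (life table values are made up) solved by bisection:

```python
import math

l = {1: 0.5, 2: 0.3, 3: 0.1}     # survivorship to age x (illustrative)
m = {1: 1.0, 2: 2.0, 3: 3.0}     # fecundity at age x (illustrative)

def euler_lotka(r):
    """Left-hand side of the Euler-Lotka equation minus 1."""
    return sum(math.exp(-r * x) * l[x] * m[x] for x in l) - 1.0

lo, hi = -2.0, 2.0               # bracket the root
for _ in range(60):              # bisection: euler_lotka is decreasing in r
    mid = (lo + hi) / 2
    if euler_lotka(mid) > 0:
        lo = mid
    else:
        hi = mid
r = (lo + hi) / 2
```

Density-dependent depression of, say, juvenile survivorship is modelled by shrinking l(1), and its selective effect is read off from how r responds, with no direct density term in the equation.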
Abstract:
The HOT study (Hypertension Optimal Treatment) is an international clinical study on primary prevention of cardiovascular events in 19,193 hypertensive patients worldwide. It aims at identifying the optimal diastolic blood pressure target (< 90, < 85 or < 80 mmHg?) in order to maximize the possible benefit of antihypertensive therapy. In addition, the HOT study investigates whether low doses of aspirin (75 mg/day) are able to reduce the occurrence of severe cardiovascular events. In Switzerland, a total of 797 patients have been enrolled in the study. Antihypertensive therapy was initiated with felodipine (Plendil, 5 mg/day). This vasoselective calcium antagonist reduced diastolic blood pressure to < 90 mmHg in one of two patients, and to < 80 mmHg in one of three, within the first three months. Within one year, a reduction of diastolic blood pressure to < 90 or < 80 mmHg was reached in nine or six out of ten patients, respectively, by combining felodipine with other antihypertensive drugs (ACE inhibitors, beta blockers and diuretics).