952 results for Helicity method, subtraction method, numerical methods, random polarizations
Abstract:
OBJECT: In this study the accuracy of multislice computerized tomography (MSCT) angiography in the postoperative examination of clip-occluded intracranial aneurysms was compared with that of intraarterial digital subtraction (DS) angiography. METHODS: Forty-nine consecutive patients with 60 clipped aneurysms (41 of which had ruptured) were studied with the aid of postoperative MSCT and DS angiography. Both types of radiological studies were reviewed independently by two observers to assess the quality of the images, the artifacts left by the clips, the completeness of aneurysm occlusion, the patency of the parent vessel, and the duration and cost of the examination. The quality of MSCT angiography was good in 42 patients (86%). Poor-quality MSCT angiograms (14%) resulted from the late acquisition of images in three patients and the presence of clip or motion artifacts in four. Occlusion of the aneurysm on good-quality MSCT angiograms was confirmed in all but two patients, in whom a small (2-mm) remnant was confirmed on DS angiograms. In one patient, occlusion of a parent vessel was seen on DS angiograms but missed on MSCT angiograms. The sensitivity and specificity for detecting neck remnants on MSCT angiography were both 100%, and the sensitivity and specificity for evaluating vessel patency were 80 and 100%, respectively (95% confidence interval 29.2-100%). Interobserver agreements were 0.765 and 0.86, respectively. The mean duration of the examination was 13 minutes for MSCT angiography and 75 minutes for DS angiography (p < 0.05). Multislice CT angiography was highly cost-effective (p < 0.01). CONCLUSIONS: Current-generation MSCT angiography is an accurate noninvasive tool for the assessment of clipped aneurysms in the anterior circulation. Its high sensitivity and low cost warrant its use for routine postoperative control examinations following clip placement on an aneurysm. Digital subtraction angiography must be performed if the interpretation of MSCT angiograms is doubtful or if the aneurysm is located in the posterior circulation.
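For readers checking figures like these, the sketch below shows the underlying 2x2 arithmetic in Python; the counts in the example are hypothetical, since the abstract does not report the raw contingency tables.

```python
# Illustrative only: sensitivity, specificity and Cohen's kappa from
# hypothetical 2x2 counts (not the study's raw data).

def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

def cohens_kappa(a, b, c, d):
    """Kappa for two raters; a, d = agreements, b, c = disagreements."""
    n = a + b + c + d
    po = (a + d) / n                                        # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2   # chance agreement
    return (po - pe) / (1 - pe)

# Example: 2 true remnants found, none missed, 40 correct negatives
print(sensitivity(tp=2, fn=0), specificity(tn=40, fp=0))
print(cohens_kappa(40, 2, 3, 55))   # ~0.90 for these made-up counts
```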
Abstract:
Background: Local antibiotics may significantly improve the treatment outcome in bone infection without systemic toxicity. For impregnation of polymethylmethacrylate (PMMA), gentamicin, vancomycin and/or clindamycin are currently used. A new lipopeptide antibiotic, daptomycin, is a promising candidate for local treatment due to its spectrum against staphylococci and enterococci (including multiresistant strains) and its concentration-dependent, rapid bactericidal activity. We investigated the activity of antibiotic-loaded PMMA against Staphylococcus epidermidis biofilms using an ultra-sensitive bacterial heat detection method (microcalorimetry). Methods: Staphylococcus epidermidis (strain RP62A, susceptible to daptomycin, vancomycin and gentamicin) at a concentration of 10^6 bacteria/ml was incubated with a 2-g PMMA block (Palacos, Heraeus Medical, Hanau, Germany) in 25 ml tryptic soy broth (TSB) supplemented with calcium. PMMA blocks were preloaded with daptomycin, vancomycin or gentamicin, each at 2 g/40 g PMMA (= 100 mg/block). After 72 h of incubation at 35 °C under static conditions, PMMA blocks were rinsed 5 times in phosphate-buffered saline (PBS) and transferred into a 4-ml microcalorimetry ampoule filled with 1 ml TSB. Bacterial heat production, which is proportional to the quantity of biofilm on the PMMA surface, was measured by isothermal microcalorimetry. The detection time was calculated as the time until the heat flow reached 20 microwatts. Results: Biomechanical properties did not differ between antibiotic-loaded and non-loaded PMMA blocks. The mean detection time (± standard deviation) of bacterial heat was 6.5 ± 0.4 h for PMMA without antibiotics (negative control), 13.5 ± 4.6 h for PMMA with daptomycin, 14.0 ± 4.1 h for PMMA with vancomycin and 5.0 ± 0.4 h for PMMA with gentamicin. Conclusion: Our data indicate that antibiotics at 2 g/40 g PMMA did not change the biomechanical properties of bone cement. Daptomycin and vancomycin were more active than gentamicin against S. epidermidis biofilms when all were tested at 2 g/40 g PMMA. As a next step, higher concentrations of daptomycin and their elution kinetics need to be determined to optimize its antibiofilm activity before use in the clinical setting.
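The detection-time readout described above reduces to a threshold crossing on the heat-flow trace. A minimal Python sketch follows, using a synthetic exponential growth curve in place of real microcalorimetry data.

```python
import numpy as np

# Sketch of the detection-time readout: the first time point at which
# the heat flow crosses 20 microwatts. The trace below is synthetic;
# real data would be loaded from the isothermal microcalorimeter.

def detection_time(t_hours, heat_uw, threshold_uw=20.0):
    """Return the first time the heat flow reaches the threshold."""
    above = np.nonzero(heat_uw >= threshold_uw)[0]
    return t_hours[above[0]] if above.size else None

t = np.linspace(0, 24, 1000)          # hours
heat = 0.5 * np.exp(0.35 * t)         # toy exponential growth curve (uW)
print(detection_time(t, heat))        # ~10.5 h for this toy trace
```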
Abstract:
Liquid scintillation counting (LSC) is one of the most widely used methods for determining the activity of 241Pu. One of the main challenges of this counting method is the efficiency calibration of the system for the low beta energies of 241Pu (Emax = 20.8 keV). In this paper we compare the two most frequently used methods, the CIEMAT/NIST efficiency tracing (CNET) method and the experimental quench correction curve method. Both methods proved to be reliable, and agree within their uncertainties, for the expected quenching conditions of the sources.
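As an illustration of the quench correction curve method (the second of the two methods compared), the sketch below interpolates counting efficiency at a sample's quench-indicating parameter; all numbers are made up, and the CIEMAT/NIST method would derive the efficiency from a tracer model instead.

```python
import numpy as np

# Sketch of the experimental quench-correction-curve method with
# hypothetical numbers: efficiencies measured on quenched standards are
# interpolated at the sample's quench-indicating parameter (QIP) to
# convert net count rate to activity.

qip_standards = np.array([350., 400., 450., 500., 550.])   # hypothetical QIP
eff_standards = np.array([0.18, 0.25, 0.31, 0.36, 0.40])   # hypothetical 241Pu eff.

def activity_bq(net_cpm, qip):
    eff = np.interp(qip, qip_standards, eff_standards)  # linear interpolation
    return net_cpm / 60.0 / eff                         # counts/min -> Bq

print(activity_bq(net_cpm=1200.0, qip=480.0))           # ~59 Bq here
```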
Abstract:
An active strain formulation for orthotropic constitutive laws arising in cardiac mechanics modeling is introduced and studied. The passive mechanical properties of the tissue are described by the Holzapfel-Ogden relation. In the active strain formulation, the Euler-Lagrange equations for minimizing the total energy are written in terms of active and passive deformation factors, where the active part is assumed to depend, at the cell level, on the electrodynamics and on the specific orientation of the cardiac cells. The well-posedness of the linear system derived from a generic Newton iteration of the original problem is analyzed and different mechanical activation functions are considered. In addition, the active strain formulation is compared with the classical active stress formulation from both numerical and modeling perspectives. Taylor-Hood and MINI finite elements are employed to discretize the mechanical problem. The results of several numerical experiments show that the proposed formulation is mathematically consistent and is able to represent the key features of the phenomenon, while allowing savings in computational costs.
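The abstract does not spell out the activation ansatz; the orthotropic form below is the one commonly used in active strain cardiac models and is given here as an illustrative assumption, with f0, s0, n0 the fiber, sheet and normal directions.

```latex
% Multiplicative active strain decomposition (a standard form, assumed
% here; the abstract does not give the paper's exact ansatz):
\[
  \mathbf{F} = \mathbf{F}_E \, \mathbf{F}_A, \qquad
  \mathbf{F}_A = \mathbf{I}
    + \gamma_f \, \mathbf{f}_0 \otimes \mathbf{f}_0
    + \gamma_s \, \mathbf{s}_0 \otimes \mathbf{s}_0
    + \gamma_n \, \mathbf{n}_0 \otimes \mathbf{n}_0 ,
\]
% with activation parameters \gamma_f, \gamma_s, \gamma_n depending on
% the cell-level electrodynamics, and the passive Holzapfel--Ogden
% energy evaluated on the elastic part:
% \Psi = \Psi_{\mathrm{HO}}(\mathbf{F}\mathbf{F}_A^{-1}).
```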
Abstract:
Glutathione (GSH) dysregulation at the gene, protein, and functional levels has been observed in schizophrenia patients. Together with disease-like anomalies observed in GSH-deficit experimental models, this suggests that such redox dysregulation can play a critical role in altering neural connectivity and synchronization, and may thus contribute to schizophrenia symptoms. To determine whether increased GSH levels would modulate EEG synchronization, N-acetyl-cysteine (NAC), a glutathione precursor, was administered to patients in a randomized, double-blind, crossover protocol for 60 days, followed by placebo for another 60 days (or vice versa). We analyzed whole-head topography of the multivariate phase synchronization (MPS) for 128-channel resting-state EEGs that were recorded at the onset, at the point of crossover, and at the end of the protocol. In this proof-of-concept study, the treatment with NAC significantly increased MPS compared to placebo over the left parieto-temporal, the right temporal, and the bilateral prefrontal regions. These changes were robust both at the group and at the individual level. Although MPS increase was observed in the absence of clinical improvement at a group level, it correlated with individual change estimated by Liddle's disorganization scale. Therefore, significant changes in EEG synchronization induced by NAC administration may precede clinically detectable improvement, highlighting its possible utility as a biomarker of treatment efficacy. TRIAL REGISTRATION: ClinicalTrials.gov NCT01506765.
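The multivariate phase synchronization measure itself is not specified in the abstract; as a simplified illustration of the underlying idea, the sketch below computes a bivariate phase-locking value from instantaneous Hilbert phases on synthetic signals.

```python
import numpy as np
from scipy.signal import hilbert

# Simplified pairwise phase-locking value (PLV) between two channels.
# The study uses a *multivariate* measure over 128 channels; this
# bivariate PLV only sketches the core idea (instantaneous phases from
# the analytic signal).

def plv(x, y):
    phase_x = np.angle(hilbert(x))    # instantaneous phase of x
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 500)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * rng.standard_normal(t.size)
print(plv(x, y))                      # approaches 1 for strongly locked signals
```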
Abstract:
A simple and sensitive LC-MS method was developed and validated for the simultaneous quantification of aripiprazole (ARI), atomoxetine (ATO), duloxetine (DUL), clozapine (CLO), olanzapine (OLA), sertindole (STN), venlafaxine (VEN) and their active metabolites dehydroaripiprazole (DARI), norclozapine (NCLO), dehydrosertindole (DSTN) and O-desmethylvenlafaxine (OVEN) in human plasma. The above-mentioned compounds and the internal standard (remoxipride) were extracted from 0.5 mL plasma by solid-phase extraction (mixed-mode support). The analytical separation was carried out by reversed-phase liquid chromatography at basic pH (pH 8.1) in gradient mode. All analytes were monitored by MS detection in single-ion-monitoring mode, and the method was validated over the corresponding therapeutic ranges: 2-200 ng/mL for DUL, OLA and STN, 4-200 ng/mL for DSTN, 5-1000 ng/mL for ARI and DARI, and 2-1000 ng/mL for ATO, CLO, NCLO, VEN and OVEN. For all investigated compounds, good performance in terms of recovery, selectivity, stability, repeatability, intermediate precision, trueness and accuracy was obtained. Real patient plasma samples were then successfully analysed.
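Quantification in a method like this typically rests on an internal-standard calibration curve; the sketch below illustrates that step with invented numbers, not the paper's validation data.

```python
import numpy as np

# Sketch of internal-standard calibration as commonly used in LC-MS
# quantification (illustrative values): the analyte/IS peak-area ratio
# is regressed against known calibrator concentrations, and unknowns
# are back-calculated from the fitted line.

conc = np.array([2., 10., 50., 100., 200.])              # ng/mL calibrators
area_ratio = np.array([0.021, 0.105, 0.52, 1.03, 2.08])  # analyte/IS areas

slope, intercept = np.polyfit(conc, area_ratio, 1)       # linear calibration

def quantify(ratio):
    return (ratio - intercept) / slope                   # back-calculate ng/mL

print(quantify(0.78))                                    # ~75 ng/mL on this toy curve
```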
Abstract:
Traffic safety engineers are among the early adopters of Bayesian statistical tools for analyzing crash data. As in many other areas of application, empirical Bayes methods were their first choice, perhaps because they represent an intuitively appealing, yet relatively easy to implement alternative to purely classical approaches. With the enormous progress in numerical methods made in recent years and with the availability of free, easy-to-use software that permits implementing a fully Bayesian approach, however, there is now ample justification to progress towards fully Bayesian analyses of crash data. The fully Bayesian approach, in particular as implemented via multi-level hierarchical models, has many advantages over the empirical Bayes approach. In a full Bayesian analysis, prior information and all available data are seamlessly integrated into posterior distributions on which practitioners can base their inferences. All uncertainties are thus accounted for in the analyses and there is no need to pre-process data to obtain Safety Performance Functions and other such prior estimates of the effect of covariates on the outcome of interest. In this light, fully Bayesian methods may well be less costly to implement and may result in safety estimates with more realistic standard errors. In this manuscript, we present the full Bayesian approach to analyzing traffic safety data and focus on highlighting the differences between the empirical Bayes and the full Bayes approaches. We use an illustrative example to discuss a step-by-step Bayesian analysis of the data and to show some of the types of inferences that are possible within the full Bayesian framework.
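As a concrete (and generic) example of the fully Bayesian formulation discussed above, a hierarchical Poisson regression for crash counts might be specified as follows; the manuscript's exact model is not given in the abstract.

```latex
% A generic fully Bayesian hierarchical model for crash counts y_i at
% site i (illustrative; not the manuscript's exact specification):
\begin{align*}
  y_i \mid \lambda_i &\sim \operatorname{Poisson}(\lambda_i),\\
  \log \lambda_i &= \mathbf{x}_i^{\top}\boldsymbol{\beta} + \varepsilon_i,\\
  \varepsilon_i &\sim \mathcal{N}(0,\sigma^2),\\
  \boldsymbol{\beta} &\sim \mathcal{N}(\mathbf{0},\tau^2\mathbf{I}),
  \qquad \sigma^2,\ \tau^2 \sim \text{hyperpriors}.
\end{align*}
% Empirical Bayes would plug in point estimates of the hyperparameters
% (e.g., from a fitted Safety Performance Function); full Bayes
% propagates their uncertainty into the posterior.
```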
Abstract:
Intensity-modulated radiotherapy (IMRT) treatment plan verification by comparison with measured data requires access to the linear accelerator and is time-consuming. In this paper, we propose a method for monitor unit (MU) calculation and plan comparison for step and shoot IMRT based on the Monte Carlo code EGSnrc/BEAMnrc. The beamlets of an IMRT treatment plan are individually simulated using Monte Carlo and converted into absorbed dose to water per MU. The dose of the whole treatment can be expressed through a linear matrix equation of the MU and dose per MU of every beamlet. Due to the positivity of the absorbed dose and MU values, this equation is solved for the MU values using a non-negative least-squares fit optimization algorithm (NNLS). The Monte Carlo plan is formed by multiplying the Monte Carlo absorbed dose to water per MU with the Monte Carlo/NNLS MU. For validation, treatment plans for several localizations calculated with a commercial treatment planning system (TPS) are compared with the proposed method. The Monte Carlo/NNLS MUs are close to the ones calculated by the TPS and lead to a treatment dose distribution which is clinically equivalent to the one calculated by the TPS. This procedure can be used as an IMRT QA and further development could allow this technique to be used for other radiotherapy techniques like tomotherapy or volumetric modulated arc therapy.
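The MU reconstruction step can be written as d = D m with m >= 0 and solved by NNLS. The sketch below illustrates this with synthetic numbers in place of the EGSnrc/BEAMnrc beamlet doses.

```python
import numpy as np
from scipy.optimize import nnls

# Sketch of the MU reconstruction: given a matrix D of Monte Carlo dose
# to water per MU (voxels x beamlets) and a target dose vector d, solve
# for non-negative monitor units. Numbers are synthetic; in the paper D
# comes from per-beamlet EGSnrc/BEAMnrc simulations.

rng = np.random.default_rng(1)
n_voxels, n_beamlets = 200, 12
D = rng.random((n_voxels, n_beamlets))     # dose per MU of each beamlet
mu_true = rng.uniform(5, 50, n_beamlets)   # "unknown" MUs to recover
d = D @ mu_true                            # dose the plan should deliver

mu_fit, residual = nnls(D, d)              # non-negative least squares
print(np.allclose(mu_fit, mu_true, atol=1e-6), residual)
```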
Abstract:
BACKGROUND AND PURPOSE: To determine whether infarct core or penumbra is the more significant predictor of outcome in acute ischemic stroke, and whether the results are affected by the statistical method used. METHODS: Clinical and imaging data were collected in 165 patients with acute ischemic stroke. We reviewed the noncontrast head computed tomography (CT) to determine the Alberta Stroke Program Early CT Score and to assess for a hyperdense middle cerebral artery. We reviewed the CT-angiogram for site of occlusion and collateral flow score. From perfusion-CT, we calculated the volumes of infarct core and ischemic penumbra. Recanalization status was assessed on early follow-up imaging. Clinical data included age, several time points, National Institutes of Health Stroke Scale at admission, treatment type, and modified Rankin score at 90 days. Two multivariate regression analyses were conducted to determine which variables predicted outcome best. In the first analysis, we did not include recanalization status among the potential predicting variables. In the second, we included recanalization status and its interaction with perfusion-CT variables. RESULTS: Among the 165 study patients, 76 had a good outcome (modified Rankin score ≤2) and 89 had a poor outcome (modified Rankin score >2). In our first analysis, the most important predictors were age (P<0.001) and National Institutes of Health Stroke Scale at admission (P=0.001). The imaging variables were not important predictors of outcome (P>0.05). In the second analysis, when the recanalization status and its interaction with perfusion-CT variables were included, recanalization status and perfusion-CT penumbra volume became the significant predictors (P<0.001). CONCLUSIONS: Imaging prediction of tissue fate, more specifically imaging of the ischemic penumbra, matters only if recanalization can also be predicted.
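As an illustration of the second analysis, the sketch below fits a logistic regression with a penumbra-by-recanalization interaction on simulated data; the study's actual models included more covariates.

```python
import numpy as np
import statsmodels.api as sm

# Sketch of the second analysis: logistic regression of poor outcome
# (mRS > 2) on penumbra volume, recanalization, and their interaction.
# Data are simulated; the study fitted richer multivariate models.

rng = np.random.default_rng(2)
n = 165
penumbra = rng.uniform(0, 150, n)      # mL
recan = rng.integers(0, 2, n)          # recanalized yes/no
logit = -1.0 + 0.02 * penumbra - 0.03 * penumbra * recan
poor = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(np.column_stack([penumbra, recan, penumbra * recan]))
fit = sm.Logit(poor.astype(float), X).fit(disp=0)
print(fit.params)   # the interaction term captures "penumbra matters
                    # only when recanalization occurs"
```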
Abstract:
U-Pb dating of zircons by laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS) is a widely used analytical technique in Earth Sciences. For U-Pb ages below 1 billion years (1 Ga), Pb-206/U-238 dates are usually used, as they show the least bias from external parameters such as the presence of initial lead and its isotopic composition in the analysed mineral. Precision and accuracy of the Pb/U ratio are thus of the highest importance in LA-ICPMS geochronology. We first evaluate the statistical distribution of the sweep intensities using goodness-of-fit tests, in order to find a model probability distribution that fits the data and to apply an appropriate formulation for the standard deviation. We then discuss three main methods to calculate the Pb/U intensity ratio and its uncertainty in LA-ICPMS: (1) the ratio-of-the-mean-intensities method, (2) the mean-of-the-intensity-ratios method and (3) the intercept method. These methods apply different functions to the same raw intensity vs. time data to calculate the mean Pb/U intensity ratio. Thus, the calculated intensity ratio and its uncertainty depend on the method applied. We demonstrate that the accuracy and, conditionally, the precision of the ratio-of-the-mean-intensities method are invariant to the intensity fluctuations and averaging related to the dwell time selection and off-line data transformation (averaging of several sweeps); we present a statistical approach for calculating the uncertainty of this method for transient signals. We also show that the accuracy of methods (2) and (3) is influenced by the intensity fluctuations and averaging, and that the extent of this influence can amount to tens of percentage points; the uncertainty of these methods also depends on how the signal is averaged. Each of the above methods imposes requirements on the instrumentation. The ratio-of-the-mean-intensities method is sufficiently accurate provided the laser-induced fractionation between the beginning and the end of the signal is kept low and linear. Based on a comprehensive series of analyses with different ablation pit sizes, energy densities and repetition rates for a 193 nm ns-ablation system, we show that such fractionation behaviour requires a low ablation speed (low energy density and low repetition rate). Overall, we conclude that the ratio-of-the-mean-intensities method combined with low sampling rates is the most mathematically accurate among the existing data treatment methods for U-Pb zircon dating by sensitive sector field ICPMS.
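The difference between the estimators is easy to see numerically. The sketch below computes methods (1)-(3) on one synthetic sweep series with a true Pb/U ratio of 0.05 and no downhole fractionation, so all three should agree.

```python
import numpy as np

# Contrast of the three intensity-ratio estimators discussed above on a
# synthetic sweep series (Pb and U intensities in counts).

rng = np.random.default_rng(3)
n_sweeps = 300
u = rng.normal(1e5, 1e4, n_sweeps).clip(min=1)   # 238U sweep intensities
pb = 0.05 * u + rng.normal(0, 300, n_sweeps)     # 206Pb, true ratio 0.05

ratio_of_means = pb.mean() / u.mean()            # method (1)
mean_of_ratios = (pb / u).mean()                 # method (2)

# Method (3): fit the per-sweep ratio vs. time and extrapolate to t = 0
# (used to correct laser-induced downhole fractionation; none is
# simulated here, so the three estimates should nearly coincide).
t = np.arange(n_sweeps, dtype=float)
slope, intercept = np.polyfit(t, pb / u, 1)

print(ratio_of_means, mean_of_ratios, intercept)
```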
Abstract:
The main objective of this research is to examine the effects that different methods of RAP stockpile fractionation have on the volumetric mix design properties of high-RAP-content surface mixes, with the goal of meeting all specified criteria for standard HMA mix designs. To determine the distribution of fine aggregates and binder in the RAP stockpiles, RAP materials were separated on each sieve size, and the composition of the material retained on each sieve was analyzed to determine the optimum fractionation method. The fractionation methods were designed to split the stockpile at a specified sieve size in order to control the amount of fine RAP material, which contains higher amounts of fine aggregates and dust. These fine RAP materials were used in reduced proportions or eliminated entirely, thereby decreasing the amount of fine aggregate introduced to the mix. Mix designs were performed using RAP materials from four different stockpiles, and the two fractionation methods were used with high RAP contents of up to 50% by virgin binder replacement. By using a fractionation method, a mix with up to 50% RAP was successfully designed that met all Superpave criteria and the asphalt film thickness requirement by controlling the dust content from the RAP stockpiles.
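The binder replacement bookkeeping behind the "50% by virgin binder replacement" figure is simple arithmetic; the sketch below shows one common form of it, with illustrative values (agency definitions vary).

```python
# Sketch of "% binder replacement" bookkeeping for high-RAP mixes
# (illustrative values; specifications differ by agency).

def binder_replacement(rap_frac, rap_binder_content, total_binder_content):
    """Percent of total binder contributed by the RAP.

    rap_frac: RAP mass fraction of the mix (e.g., 0.40)
    rap_binder_content: binder fraction of the RAP (e.g., 0.05)
    total_binder_content: design binder fraction of the mix (e.g., 0.055)
    """
    return 100.0 * rap_frac * rap_binder_content / total_binder_content

# A 40% RAP mix with 5% RAP binder and 5.5% total binder:
print(binder_replacement(0.40, 0.05, 0.055))   # ~36% replacement
```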
Abstract:
One of the most central tasks in the statistical analysis of mathematical models is the estimation of the unknown model parameters. In this Master's thesis we are interested in the distributions of the unknown parameters and in numerical methods suitable for constructing them, especially in cases where the model is nonlinear with respect to the parameters. Among the various numerical methods, the main emphasis is on Markov chain Monte Carlo (MCMC) methods. These computationally intensive methods have recently grown in popularity, mainly because of the increase in available computing power. The theory of both Markov chains and Monte Carlo simulation is presented to the extent needed to justify why the methods work. Among recently developed methods, adaptive MCMC methods in particular are examined. The approach of the thesis is practical, and various issues related to the implementation of MCMC methods are emphasized. In the empirical part of the thesis, the distributions of the unknown parameters of five example models are examined using the methods presented in the theoretical part. The models describe chemical reactions and are formulated as systems of ordinary differential equations. The models were collected from chemists at Lappeenranta University of Technology and Åbo Akademi University in Turku.
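As a flavor of the adaptive MCMC methods emphasized in the thesis, the sketch below implements a Haario-style adaptive Metropolis sampler on a toy two-dimensional Gaussian target; the thesis applies such samplers to posteriors of ODE model parameters.

```python
import numpy as np

# Minimal adaptive Metropolis sketch (Haario-style covariance
# adaptation) on a toy 2-D Gaussian target; real applications would use
# the log-posterior of the ODE model parameters instead.

COV_TARGET_INV = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))

def log_target(x):
    return -0.5 * x @ COV_TARGET_INV @ x

def adaptive_metropolis(n_iter=20000, d=2, adapt_start=1000, sd=2.4**2 / 2):
    rng = np.random.default_rng(4)
    chain = np.zeros((n_iter, d))
    x, logp = np.zeros(d), log_target(np.zeros(d))
    cov = np.eye(d) * 0.1                           # initial proposal covariance
    for i in range(1, n_iter):
        if i > adapt_start:                         # adapt proposal covariance
            cov = sd * np.cov(chain[:i].T) + 1e-6 * np.eye(d)
        prop = rng.multivariate_normal(x, cov)
        logp_prop = log_target(prop)
        if np.log(rng.random()) < logp_prop - logp:  # Metropolis accept/reject
            x, logp = prop, logp_prop
        chain[i] = x
    return chain

print(adaptive_metropolis()[10000:].std(axis=0))     # ~1 in each coordinate
```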
Abstract:
In this paper, we consider active sampling to label pixels grouped by hierarchical clustering. The objective of the method is to match the data relationships discovered by the clustering algorithm with the user's desired class semantics. The former are represented as a complete tree to be pruned; the latter are iteratively provided by the user. The proposed active learning algorithm searches for the pruning of the tree that best matches the labels of the sampled points. By choosing the part of the tree to sample from according to the current pruning's uncertainty, sampling is focused on the most uncertain clusters. This way, large clusters for which the class membership is already fixed are no longer queried, and sampling is focused on the division of clusters showing mixed labels. The model is tested on a VHR image in a multiclass classification setting. The method clearly outperforms random sampling in a transductive setting, but cannot generalize to unseen data, since it aims at optimizing the classification of a given cluster structure.
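As a sketch of the uncertainty-focused sampling idea (not the paper's full tree-pruning algorithm), the snippet below queries clusters in proportion to the entropy of their sampled labels.

```python
import numpy as np

# Sketch of "sample where the current pruning is uncertain": clusters
# whose sampled labels are mixed get high entropy and are queried more.

def cluster_entropy(label_counts):
    p = label_counts / label_counts.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def pick_cluster_to_sample(counts_per_cluster, rng):
    ent = np.array([cluster_entropy(c) for c in counts_per_cluster])
    if ent.sum() == 0:                  # all clusters pure: sample uniformly
        return rng.integers(len(ent))
    return rng.choice(len(ent), p=ent / ent.sum())

rng = np.random.default_rng(5)
counts = [np.array([9., 1.]), np.array([5., 5.]), np.array([10., 0.])]
print(pick_cluster_to_sample(counts, rng))   # the mixed cluster 1 is most likely
```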
Abstract:
Despite the importance of water in our daily lives, some of its properties remain unexplained. Indeed, the interactions of water with organic particles are investigated in research groups all over the world, but controversy still surrounds many aspects of their description. In my work I have tried to understand these interactions on a molecular level using both analytical and numerical methods. Recent investigations describe liquid water as a random network formed by hydrogen bonds. The insertion of a hydrophobic particle at low temperature breaks some of the hydrogen bonds, which is energetically unfavorable. The water molecules, however, rearrange in a cage-like structure around the solute particle. Even stronger hydrogen bonds are formed between water molecules, and thus the solute particles are soluble. At higher temperatures, this strict ordering is disrupted by thermal movements, and the solution of particles becomes unfavorable. They minimize their exposed surface to water by aggregating. At even higher temperatures, entropy effects become dominant and water and solute particles mix again. Using a model based on these changes in water structure I have reproduced the essential phenomena connected to hydrophobicity. These include an upper and a lower critical solution temperature, which define temperature and density ranges in which aggregation occurs. Outside of this region the solute particles are soluble in water. Because I was able to demonstrate that the simple mixture model implicitly contains many-body interactions between the solute molecules, I feel that the study contributes to an important advance in the qualitative understanding of the hydrophobic effect. I have also studied the aggregation of hydrophobic particles in aqueous solutions in the presence of cosolvents. Here I have demonstrated that the important features of the destabilizing effect of chaotropic cosolvents on hydrophobic aggregates may be described within the same two-state model, with adaptations to focus on the ability of such substances to alter the structure of water. The relevant phenomena include a significant enhancement of the solubility of non-polar solute particles and preferential binding of chaotropic substances to solute molecules. In a similar fashion, I have analyzed the stabilizing effect of kosmotropic cosolvents in these solutions. Including the ability of kosmotropic substances to enhance the structure of liquid water leads to reduced solubility, a larger aggregation regime and the preferential exclusion of the cosolvent from the hydration shell of hydrophobic solute particles. I have further adapted the MLG model to include the solvation of amphiphilic solute particles in water by allowing different distributions of hydrophobic regions at the molecular surface; I have found aggregation of the amphiphiles and the formation of various types of micelle as a function of the hydrophobicity pattern. I have demonstrated that certain features of micelle formation may be reproduced by the adapted model through alterations of water structure near different surface regions of the dissolved amphiphiles. Hydrophobicity remains a controversial quantity also in protein science. Based on the surface exposure of the 20 amino acids in native proteins, I have defined a new hydrophobicity scale, which may lead to an improvement in the comparison of experimental data with the results from theoretical HP models. Overall, I have shown that the primary features of the hydrophobic interaction in aqueous solutions may be captured within a model which focuses on alterations in water structure around non-polar solute particles. The results obtained within this model may illuminate the processes underlying the hydrophobic interaction.

Life on our planet began in water and could not exist without it: animal and plant cells contain up to 95% water. Despite its importance in our everyday lives, some of water's properties remain unexplained. In particular, the study of the interactions between water and organic particles occupies research groups all over the world and is far from finished. In my work I have tried to understand, at the molecular level, these interactions that are essential for life, using a simple model of water to describe aqueous solutions of different particles. Although water is generally a good solvent, a large group of molecules, called hydrophobic molecules (from the Greek "hydro" = "water" and "phobia" = "fear"), are not easily soluble in water. These hydrophobic particles try to avoid contact with water and therefore form aggregates to minimize their surface exposed to water. This force between the particles is called the hydrophobic interaction, and the physical mechanisms that lead to it are not well understood at present. In my study I described the effect of hydrophobic particles on liquid water, with the aim of clarifying the mechanism of the hydrophobic interaction, which is fundamental to the formation of membranes and the functioning of biological processes in our body. Recently, liquid water has been described as a random network formed by hydrogen bonds. Introducing a hydrophobic particle into this structure destroys some of the hydrogen bonds, while the water molecules arrange themselves around the particle in a cage that recovers even stronger hydrogen bonds (between water molecules): the particles are then soluble in water. At higher temperatures, the thermal agitation of the molecules becomes important and breaks the cage structure around the hydrophobic particles. Dissolution then becomes unfavorable, and the particles separate from the water, forming two phases. At very high temperatures, the thermal motion in the system becomes so strong that the particles mix with the water molecules again. Using a model that describes the system in terms of restructuring in liquid water, I was able to reproduce the physical phenomena linked to hydrophobicity. I showed that the hydrophobic interactions between several particles can be expressed in a model that takes into account only the hydrogen bonds between water molecules. Encouraged by these promising results, I included in my model substances frequently used to stabilize or destabilize aqueous solutions of hydrophobic particles, and I was able to reproduce the effects due to the presence of these substances. In addition, I was able to describe the formation of micelles by amphiphilic particles such as lipids, whose surface is partly hydrophobic and partly hydrophilic ("hydro-philic" = "water-loving"), as well as the folding of proteins driven by hydrophobicity, which guarantees the correct functioning of the biological processes in our body. In my future studies I will continue to investigate aqueous solutions of different particles using the techniques acquired during my thesis work, trying to understand the physical properties of the liquid most important for our life: water.