96 results for Model-based optimization
Abstract:
This paper presents a very fine-grid hydrological model based on the spatiotemporal distribution of precipitation and on the topography. The goal is to estimate the flood on a catchment area using a Probable Maximum Precipitation (PMP) leading to a Probable Maximum Flood (PMF). The spatiotemporal distribution of the precipitation was generated using six clouds modeled by the advection-diffusion equation. The equation describes the movement of the clouds over the terrain and also gives the evolution of the rain intensity in time. This hydrological modeling is followed by a hydraulic modeling of the surface and subterranean flows, carried out considering the factors that contribute to the hydrological cycle, such as infiltration, exfiltration and snowmelt. The model was applied to several Swiss basins using measured rain, with results showing a good correlation between the simulated and observed flows. This good correlation supports the validity of the model and gives confidence that the results can be extrapolated to extreme rainfall phenomena of the PMP type. In this article we present some results obtained using a PMP rainfall and the developed model.
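The advection-diffusion equation referred to above has, in its standard form (the paper's exact source terms, notation and boundary conditions are not specified in this abstract):

```latex
\frac{\partial c}{\partial t} + \mathbf{u}\cdot\nabla c
  = \nabla\cdot\left(D\,\nabla c\right) + S ,
```

where, generically, \(c\) would be the cloud water content governing rain intensity, \(\mathbf{u}\) the advection (wind) velocity field over the terrain, \(D\) the turbulent diffusion coefficient, and \(S\) a source/sink term; these symbols are placeholders, not the paper's own notation.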
Abstract:
Introduction: Ankle arthrodesis (AD) and total ankle replacement (TAR) are typical treatments for ankle osteoarthritis (AO). Despite clinical interest, their outcomes have rarely been evaluated using objective criteria. Gait analysis and plantar pressure assessment are appropriate for detecting pathologies in orthopaedics, but they are mostly used in the lab with few gait cycles. In this study, we propose an ambulatory device based on inertial and plantar pressure sensors to compare gait during long-distance trials between healthy subjects (H) and patients with AO or treated by AD or TAR. Methods: Our study included four groups: 11 patients with AO, 9 treated by TAR, 7 treated by AD and 6 control subjects. An ambulatory system (Physilog®, CH) was used for gait analysis; plantar pressure measurements were done using a portable insole (Pedar®-X, DE). The subjects were asked to walk 50 meters in two trials. The mean value and coefficient of variation of spatio-temporal gait parameters were calculated for each trial. Pressure distribution was analyzed in ten subregions of the foot. All parameters were compared among the four groups using multi-level model-based statistical analysis. Results: Significant differences (p<0.05) from control were observed for AO patients in maximum force in the medial hindfoot and forefoot and in the central forefoot. These differences were no longer significant in the TAR and AD groups. Cadence and speed of all pathologic groups showed significant differences from control. Both treatments showed a significant improvement in double support and stance. TAR decreased variability in speed, stride length and knee ROM. Conclusions: In spite of the small sample size, this study showed that ankle function after AO treatments can be evaluated objectively based on plantar pressure and spatio-temporal gait parameters measured during unconstrained walking outside the lab.
The combination of these two ambulatory techniques provides a promising way to evaluate foot function in clinics.
Abstract:
INTRODUCTION: Osteoset(®) T is a calcium sulphate void filler containing 4% tobramycin sulphate, used to treat bone and soft tissue infections. Despite systemic exposure to the antibiotic, no pharmacokinetic studies in humans have been published so far. Based on the observations made in our patients, a model predicting tobramycin serum levels and evaluating their toxicity potential is presented. METHODS: Following implantation of Osteoset(®) T, tobramycin serum concentrations were monitored systematically. A pharmacokinetic analysis was performed using a non-linear mixed effects model based on a one-compartment model with first-order absorption. RESULTS: Data from 12 patients treated between October 2006 and March 2008 were analysed. Concentration profiles were consistent with first-order slow release and single-compartment kinetics, whilst showing important variability. Predicted tobramycin serum concentrations depended clearly on both the implanted drug amount and renal function. DISCUSSION AND CONCLUSION: Despite the popularity of aminoglycosides for local antibiotic therapy, pharmacokinetic data for this indication are scarce, and not available for calcium sulphate as a carrier material. Systemic exposure to tobramycin after implantation of Osteoset(®) T appears reassuring with regard to toxicity potential, except in cases of markedly impaired renal function. We recommend adapting the dosage to the estimated creatinine clearance rather than solely to the patient's weight.
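A one-compartment model with first-order absorption has a closed-form solution (the Bateman equation). A minimal sketch, with illustrative parameter values rather than the study's fitted estimates:

```python
import math

# One-compartment model with first-order absorption (Bateman equation).
# All parameter values below are illustrative, not the study's estimates.
def concentration(t, dose, F, ka, ke, V):
    """Serum concentration at time t (hours), assuming ka != ke."""
    return (F * dose * ka) / (V * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

dose, F, V = 240.0, 1.0, 20.0   # mg, bioavailable fraction, litres (hypothetical)
ka, ke = 0.05, 0.2              # 1/h: slow release (ka) slower than elimination (ke)
curve = [concentration(t, dose, F, ka, ke, V) for t in range(0, 241, 12)]
tmax = max(range(len(curve)), key=curve.__getitem__)   # grid index of the peak
```

With ka < ke the profile shows "flip-flop" kinetics, as expected for a slow-release carrier: the terminal decline reflects the release rate rather than elimination.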
Abstract:
Recent ink dating methods have focused mainly on changes in solvent amounts occurring over time. A promising method was developed at the Landeskriminalamt of Munich using thermal desorption (TD) followed by gas chromatography/mass spectrometry (GC/MS) analysis. Sequential extractions of the phenoxyethanol present in ballpoint pen ink entries were carried out at two different temperatures. This method is applied in forensic practice and is currently implemented in several laboratories participating in the InCID group (International Collaboration on Ink Dating). However, harmonization of the method between the laboratories proved to be a particularly sensitive and time-consuming task. The main aim of this work was therefore to implement the TD-GC/MS method at the Bundeskriminalamt (Wiesbaden, Germany) in order to evaluate whether results were comparable to those obtained in Munich. First, validation criteria such as limits of reliable measurement, linearity and repeatability were determined. Samples were prepared in three different laboratories using the same inks and analyzed using two TDS-GC/MS instruments (one in Munich and one in Wiesbaden). The inter- and intra-laboratory variability of the ageing parameter was determined and ageing curves were compared. While inks stored in similar conditions yielded comparable ageing curves, it was observed that significantly different storage conditions had an influence on the resulting ageing curves. Finally, interpretation models, such as thresholds and trend tests, were evaluated and discussed in view of the obtained results. Trend tests were considered more suitable than threshold models. As both approaches showed limitations, an alternative model, based on the slopes of the ageing curves, was also proposed.
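A slope-based interpretation model of the kind proposed at the end amounts to fitting a line through the ageing parameter measured at several times and using the slope as the decision quantity. A stdlib-only sketch with hypothetical numbers (the paper's actual ageing parameter, time points and decision rule are not reproduced here):

```python
# Least-squares slope of the ageing parameter over time; the values below
# are hypothetical and only illustrate the idea of a slope-based model.
def slope(ts, ys):
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(ys) / n
    return sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) / \
           sum((t - tbar) ** 2 for t in ts)

weeks = [0, 1, 2, 3]
ageing_param = [0.42, 0.38, 0.35, 0.33]   # hypothetical: a still-drying entry
s = slope(weeks, ageing_param)            # negative slope -> entry still ageing
```

A near-zero slope would instead suggest an entry old enough that solvent loss has levelled off.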
Abstract:
Meiotic drive has attracted much interest because it concerns the robustness of Mendelian segregation and its genetic and evolutionary stability. We studied chromosomal meiotic drive in the common shrew (Sorex araneus, Insectivora, Mammalia), which exhibits one of the most remarkable chromosomal polymorphisms within mammalian species. The open question of the evolutionary success of metacentric chromosomes (Robertsonian fusions) versus acrocentrics in the common shrew prompted us to test whether a segregation distortion in favor of metacentrics is present in female and/or male meiosis. Performing crosses under controlled laboratory conditions with animals from natural populations, we found a clear trend toward a segregation distortion in favor of metacentrics during male meiosis, two chromosome combinations (gm and jl) being significantly preferred over their acrocentric homologs. Apart from one Robertsonian fusion (hi), this trend was absent in female meiosis. We propose a model based on recombination events between twin acrocentrics to explain the difference in transmission ratios of the same metacentric in different sexes and the unequal drive of particular metacentrics in the same sex. Pooled data for female and male meiosis revealed a trend toward stronger segregation distortion for larger metacentrics. This is partially in agreement with the frequency of metacentrics occurring in natural populations of a chromosome race showing a high degree of chromosomal polymorphism.
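A segregation distortion of this kind is, at its simplest, a departure from 1:1 Mendelian transmission, which can be checked with an exact binomial test. A stdlib-only sketch with hypothetical counts (the paper's actual counts and statistical procedure are not reproduced here):

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial p-value: sum the probabilities of all
    outcomes no more likely than the observed count, under H0 of 1:1."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    obs = probs[k]
    return sum(q for q in probs if q <= obs + 1e-12)

# Hypothetical example: metacentric transmitted in 70 of 100 scored meioses.
pval = binom_two_sided_p(70, 100)   # small p-value -> evidence of drive
```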
Abstract:
General clustering deals with weighted objects and fuzzy memberships. We investigate the group- or object-aggregation-invariance properties possessed by the relevant functionals (effective number of groups or objects, centroids, dispersion, mutual object-group information, etc.). The classical squared Euclidean case can be generalized to non-Euclidean distances, as well as to non-linear transformations of the memberships, yielding the c-means clustering algorithm as well as two presumably new procedures, the convex and pairwise convex clustering. Cluster stability and aggregation-invariance of the optimal memberships associated to the various clustering schemes are examined as well.
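As a point of reference for the generalizations discussed above, the classical squared-Euclidean fuzzy c-means alternates membership and centroid updates. A minimal NumPy sketch on toy data (this is the textbook algorithm, not the paper's convex or pairwise convex procedures):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means: alternate centroid and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per object
    centroids = None
    for _ in range(iters):
        W = U ** m                                  # fuzzified memberships
        centroids = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))            # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, centroids

# Two well-separated toy clusters
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
U, C = fuzzy_c_means(X, 2)
```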
Abstract:
Crystal size distributions (CSD) of periclase in contact metamorphic dolomite marbles are presented for two profiles near the Cima Uzza summit in the southern Adamello Massif (Italy). The database was combined with geochemical and petrological information to deduce the controls on the periclase-forming reaction. The contact metamorphic dolomite marbles are exposed at the contact of mafic intrusive rocks and are partially surrounded by them. Brucite is retrograde and pseudomorphs spherical periclase crystals. Prograde periclase growth is the consequence of limited infiltration of water-rich fluid at T near 605 °C. Stable isotope data show depletion in (13)C and (18)O over a narrow region (40 cm) near the magmatic contact, whereas the periclase-forming reaction front extends up to 4 m from the contact. CSD analyses along the two profiles show that the median grain size of the periclase crystals does not change, but that there is a progressively greater distribution of grain sizes, including a greater proportion of larger grains, with increasing distance from the contact. A qualitative model, based on the textural and geochemical data, attributes these variations in grain size to changing reaction affinities along a kinetically dispersed infiltration front. This study highlights the need to invoke disequilibrium processes for metamorphic mineral growth and expands the use of CSDs to systems of mineral formation driven by fluid infiltration.
Abstract:
Spatial data analysis, mapping and visualization are of great importance in various fields: environment, pollution, natural hazards and risks, epidemiology, spatial econometrics, etc. A basic task of spatial mapping is to make predictions based on some empirical data (measurements). A number of state-of-the-art methods can be used for the task: deterministic interpolations; methods of geostatistics, i.e. the family of kriging estimators (Deutsch and Journel, 1997); machine learning algorithms such as artificial neural networks (ANN) of different architectures; hybrid ANN-geostatistics models (Kanevski and Maignan, 2004; Kanevski et al., 1996), etc. All the methods mentioned above can be used for solving the problem of spatial data mapping. Environmental empirical data are always contaminated/corrupted by noise, often of unknown nature. That is one of the reasons why deterministic models can be inconsistent, since they treat the measurements as values of some unknown function that should be interpolated. Kriging estimators treat the measurements as a realization of some spatial random process. To obtain an estimate with kriging, one has to model the spatial structure of the data: the spatial correlation function or (semi-)variogram. This task can be complicated if there is not a sufficient number of measurements, and the variogram is sensitive to outliers and extremes. ANNs are a powerful tool, but they also suffer from a number of drawbacks. ANNs of a special type, multilayer perceptrons, are often used as a detrending tool in hybrid (ANN+geostatistics) models (Kanevski and Maignan, 2004). Therefore, the development and adaptation of a method that is nonlinear and robust to noise in measurements, that can deal with small empirical datasets and that has a solid mathematical background is of great importance. The present paper deals with such a model, based on Statistical Learning Theory (SLT): Support Vector Regression.
SLT is a general mathematical framework devoted to the problem of estimating dependencies from empirical data (Hastie et al., 2004; Vapnik, 1998). SLT models for classification, Support Vector Machines, have shown good results on different machine learning tasks. The results of SVM classification of spatial data are also promising (Kanevski et al., 2002). The properties of SVM for regression, Support Vector Regression (SVR), are less studied. First results of the application of SVR to spatial mapping of physical quantities were obtained by the authors for mapping of medium porosity (Kanevski et al., 1999) and for mapping of radioactively contaminated territories (Kanevski and Canu, 2000). The present paper is devoted to further understanding of the properties of the SVR model for spatial data analysis and mapping. A detailed description of SVR theory can be found in (Cristianini and Shawe-Taylor, 2000; Smola, 1996), and the basic equations for nonlinear modeling are given in Section 2. Section 3 discusses the application of SVR to spatial data mapping on a real case study: soil pollution by the Cs137 radionuclide. Section 4 discusses the properties of the model applied to noisy data and data with outliers.
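To illustrate the model under discussion: SVR fits a kernel expansion under the epsilon-insensitive loss, which is what makes it robust to measurement noise. The sketch below uses a naive sub-gradient descent on a toy 1-D "spatial" profile purely for illustration; production SVR implementations solve the dual quadratic program instead, and all parameter values here are arbitrary:

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """RBF (Gaussian) kernel matrix between two point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def svr_fit(X, y, eps=0.05, lam=1e-3, lr=0.01, iters=500):
    """Toy kernelized SVR: sub-gradient descent on eps-insensitive loss
    plus a small quadratic regularizer (illustrative, not a QP solver)."""
    K = rbf(X, X)
    alpha = np.zeros(len(X))
    for _ in range(iters):
        r = K @ alpha - y                                   # residuals
        g = np.where(np.abs(r) > eps, np.sign(r), 0.0)      # loss sub-gradient
        alpha -= lr * (K @ g / len(X) + 2.0 * lam * K @ alpha)
    return alpha

rng = np.random.default_rng(1)
X = np.linspace(0, 4, 40)[:, None]                 # 1-D "spatial" coordinates
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(40)   # noisy smooth field
alpha = svr_fit(X, y)
pred = rbf(X, X) @ alpha
```

Points whose residuals stay inside the epsilon tube contribute nothing to the loss, which is the mechanism behind SVR's sparseness and noise robustness mentioned in the text.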
Abstract:
BACKGROUND AND PURPOSE: Hyperglycemia after stroke is associated with larger infarct volume and poorer functional outcome. In an animal stroke model, the association between serum glucose and infarct volume is described by a U-shaped curve with a nadir ≈7 mmol/L. However, a similar curve in human studies has never been reported. The objective of the present study was to investigate the association between serum glucose levels and functional outcome in patients with acute ischemic stroke. METHODS: We analyzed 1446 consecutive patients with acute ischemic stroke. Serum glucose was measured on admission at the emergency department together with multiple other metabolic, clinical, and radiological parameters. The National Institutes of Health Stroke Scale (NIHSS) score was recorded at 24 hours, and the Rankin score was recorded at 3 and 12 months. The association between serum glucose and favorable outcome (Rankin score ≤2) was explored in univariate and multivariate analyses. The model was further analyzed in a robust regression model based on fractional polynomial (-2-2) functions. RESULTS: Serum glucose is independently correlated with functional outcome at 12 months (OR, 1.15; P=0.01). Other predictors of outcome include admission NIHSS score (OR, 1.18; P<0.001), age (OR, 1.06; P<0.001), prestroke Rankin score (OR, 20.8; P=0.004), and leukoaraiosis (OR, 2.21; P=0.016). Using these factors in a multiple logistic regression analysis, the area under the receiver-operating characteristic curve is 0.869. The association between serum glucose and Rankin score at 12 months is described by a J-shaped curve with a nadir of 5 mmol/L. Glucose values between 3.7 and 7.3 mmol/L are associated with favorable outcome. A similar curve was generated for the association of glucose and 24-hour NIHSS score, for which glucose values between 4.0 and 7.2 mmol/L are associated with an NIHSS score <7.
DISCUSSION: Both hypoglycemia and hyperglycemia are dangerous in acute ischemic stroke, as shown by a J-shaped association between serum glucose and both 24-hour and 12-month outcome. Initial serum glucose values between 3.7 and 7.3 mmol/L are associated with favorable outcome.
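The key property of a J- or U-shaped risk curve is that its nadir identifies an optimal glucose range. A synthetic sketch of that idea, fitting a simple quadratic and reading off the vertex (the study itself used fractional-polynomial regression on patient data, not this toy fit, and the data below are invented):

```python
import numpy as np

# Synthetic "risk vs glucose" data with a known nadir, for illustration only.
rng = np.random.default_rng(0)
glucose = np.linspace(2.5, 15.0, 200)                       # mmol/L
true_nadir = 5.0
risk = 0.04 * (glucose - true_nadir) ** 2 + 0.2 \
       + 0.01 * rng.standard_normal(200)

b2, b1, b0 = np.polyfit(glucose, risk, 2)    # quadratic fit, highest power first
nadir = -b1 / (2.0 * b2)                     # vertex of the fitted parabola
```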
Abstract:
The circadian timing system controls cell cycle, apoptosis, drug bioactivation, and transport and detoxification mechanisms in healthy tissues. As a consequence, the tolerability of cancer chemotherapy varies up to severalfold as a function of the circadian timing of drug administration in experimental models. The best antitumor efficacy of single-agent or combination chemotherapy usually corresponds to the delivery of anticancer drugs near their respective times of best tolerability. Mathematical models reveal that such coincidence between chronotolerance and chronoefficacy is best explained by differences in the circadian and cell cycle dynamics of host and cancer cells, especially with regard to circadian entrainment and cell cycle variability. In the clinic, a large improvement in tolerability was shown in international randomized trials where cancer patients received the same sinusoidal chronotherapy schedule over 24 h as compared with constant-rate infusion or wrongly timed chronotherapy. However, sex, genetic background, and lifestyle were found to influence optimal chronotherapy scheduling. These findings support systems biology approaches to cancer chronotherapeutics. They involve the systematic experimental mapping and modeling of chronopharmacology pathways in synchronized cell cultures and their adjustment to mouse models of both sexes and distinct genetic backgrounds, as recently shown for irinotecan. Model-based personalized circadian drug delivery aims at jointly improving the tolerability and efficacy of anticancer drugs based on the circadian timing system of individual patients, using dedicated circadian biomarker and drug delivery technologies.
Abstract:
At the beginning of the 21st century, a new social arrangement of work poses a series of questions and challenges to scholars who aim to help people develop their working lives. Given the globalization of career counseling, we decided to address these issues and then to formulate potentially innovative responses in an international forum. We used this approach to avoid the difficulties of creating models and methods in one country and then trying to export them to other countries where they would be adapted for use. This article presents the initial outcome of this collaboration, a counseling model and methods. The life-designing model for career intervention endorses five presuppositions about people and their work lives: contextual possibilities, dynamic processes, non-linear progression, multiple perspectives, and personal patterns. Thinking from these five presuppositions, we have crafted a contextualized model based on the epistemology of social constructionism, particularly recognizing that an individual's knowledge and identity are the product of social interaction and that meaning is co-constructed through discourse. The life-design framework for counseling implements the theories of self-constructing [Guichard, J. (2005). Life-long self-construction. International Journal for Educational and Vocational Guidance, 5, 111-124] and career construction [Savickas, M. L. (2005). The theory and practice of career construction. In S. D. Brown & R. W. Lent (Eds.), Career development and counselling: putting theory and research to work (pp. 42-70). Hoboken, NJ: Wiley] that describe vocational behavior and its development. Thus, the framework is structured to be life-long, holistic, contextual, and preventive.
Abstract:
BACKGROUND: The risks of public exposure to sudden decompression have, until now, been related to civil aviation and, to a lesser extent, to diving activities. However, engineers are currently planning the use of low-pressure environments for underground transportation. This method has been proposed for the future Swissmetro, a high-speed underground train designed for inter-urban links in Switzerland. HYPOTHESIS: The use of a low-pressure environment in an underground public transportation system must be considered carefully with regard to decompression risks. Indeed, due to the enclosed environment, both decompression kinetics and safety measures may differ from aviation decompression cases. METHOD: A theoretical study of decompression risks was conducted at an early stage of the Swissmetro project. A three-compartment theoretical model, based on the physics of fluids, was implemented with flow-processing software (Ithink 5.0). Simulations were conducted in order to analyze "decompression scenarios" for a wide range of parameters relevant in the context of the Swissmetro main study. RESULTS: Simulation results cover a wide range from slow to explosive decompression, depending on the simulation parameters. Not surprisingly, the leaking orifice area has a tremendous impact on barotraumatic effects, while the tunnel pressure may significantly affect both hypoxic and barotraumatic effects. Calculations have also shown that reducing the free space around the vehicle may significantly mitigate an accidental decompression. CONCLUSION: Numerical simulations are relevant for assessing decompression risks in the future Swissmetro system. The decompression model has proven useful in assisting both design choices and safety management.
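The physics behind such scenarios can be caricatured with a much simpler model than the study's three-compartment one: isothermal blowdown of cabin air through a leak orifice into a low-pressure tunnel, integrated with explicit Euler. All parameter values are illustrative, and the orifice law below is a deliberate simplification (incompressible, discharge coefficient 1):

```python
import math

R, T = 287.0, 293.0   # specific gas constant of air (J/(kg K)), temperature (K)

def decompression(p_cabin, p_tunnel, V, A, dt=0.01, t_end=60.0):
    """Cabin-pressure history for a leak of area A (m^2) from volume V (m^3),
    using a simple orifice mass flow and an isothermal mass balance."""
    p, history = p_cabin, [p_cabin]
    for _ in range(int(t_end / dt)):
        if p > p_tunnel:
            rho = p / (R * T)                          # ideal gas density
            mdot = A * math.sqrt(2.0 * rho * (p - p_tunnel))   # orifice flow, Cd=1
            p -= (mdot * R * T / V) * dt               # pV = mRT, isothermal
        history.append(p)
    return history

# 80 m^3 cabin at 1 bar leaking into a 0.1 bar tunnel through a 0.01 m^2 hole
history = decompression(101325.0, 10132.5, V=80.0, A=0.01)
```

Even this caricature reproduces the abstract's headline sensitivity: the decay rate scales with the orifice area A, while the tunnel pressure sets the asymptote the cabin decays toward.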
Abstract:
The transcriptome is the readout of the genome. Identifying common features in it across distant species can reveal fundamental principles. To this end, the ENCODE and modENCODE consortia have generated large amounts of matched RNA-sequencing data for human, worm and fly. Uniform processing and comprehensive annotation of these data allow comparison across metazoan phyla, extending beyond earlier within-phylum transcriptome comparisons and revealing ancient, conserved features. Specifically, we discover co-expression modules shared across animals, many of which are enriched in developmental genes. Moreover, we use expression patterns to align the stages in worm and fly development and find a novel pairing between worm embryo and fly pupae, in addition to the embryo-to-embryo and larvae-to-larvae pairings. Furthermore, we find that the extent of non-canonical, non-coding transcription is similar in each organism, per base pair. Finally, we find in all three organisms that the gene-expression levels, both coding and non-coding, can be quantitatively predicted from chromatin features at the promoter using a 'universal model' based on a single set of organism-independent parameters.
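The "universal model" idea (a single organism-independent set of parameters mapping promoter chromatin features to expression levels) can be caricatured as one regression fitted once and then reused unchanged across datasets. A synthetic sketch; the feature count, sample size and coefficients are invented for illustration and are not the consortium's model:

```python
import numpy as np

rng = np.random.default_rng(2)
n_genes, n_marks = 300, 4                      # hypothetical promoter chromatin marks
chromatin = rng.random((n_genes, n_marks))
true_w = np.array([2.0, 1.0, -0.5, 0.25])      # invented "universal" weights
log_expr = chromatin @ true_w + 0.1 * rng.standard_normal(n_genes)

# Fit one organism-independent weight vector by least squares (with intercept)
design = np.c_[chromatin, np.ones(n_genes)]
w, *_ = np.linalg.lstsq(design, log_expr, rcond=None)
pred = design @ w
corr = np.corrcoef(pred, log_expr)[0, 1]
```

The point of the exercise: if one weight vector predicts expression well in every organism, the chromatin-to-expression mapping itself is conserved.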
Abstract:
Background: As imatinib pharmacokinetics are highly variable, plasma levels differ widely among patients on the same dosage. Retrospective studies in chronic myeloid leukemia (CML) patients showed significant correlations between low levels and suboptimal response, and between high levels and poor tolerability. Monitoring of plasma levels is thus increasingly advised, targeting trough concentrations of 1000 μg/L and above. Objectives: Our study was launched to assess the clinical usefulness of systematic imatinib therapeutic drug monitoring (TDM) in CML patients. The present preliminary evaluation questions the appropriateness of dosage adjustment following plasma level measurement to reach the recommended trough level, while allowing an interval of 4-24 h after the last drug intake for blood sampling. Methods: Initial blood samples from the first 9 patients in the intervention arm were obtained 4-25 h after the last dose. Trough levels in 7 patients were predicted to be far from the target (6 <750 μg/L, and 1 >1500 μg/L with poor tolerance), based on a Bayesian approach using a population pharmacokinetic model. Individual dosage adjustments were undertaken in 5 patients, who had a control measurement 1-4 weeks after the dosage change. Predicted trough levels were compared with the prior model-based extrapolations. Results: Before dosage adjustment, observed concentrations extrapolated to trough ranged from 359 to 1832 μg/L (median 710; mean 804, CV 53%) in the 9 patients. After dosage adjustment they were expected to lie between 720 and 1090 μg/L (median 878; mean 872, CV 13%). Observed levels of the 5 recheck measurements extrapolated to trough actually ranged from 710 to 1069 μg/L (median 1015; mean 950, CV 16%) and differed by 21 to 241 μg/L in absolute value from the model-based predictions (median 175; mean 157, CV 52%). Differences between observed and predicted trough levels were larger when the interval between the last drug intake and sampling was very short (~4 h).
Conclusion: These preliminary results suggest that TDM of imatinib using a Bayesian interpretation is able to bring trough levels closer to 1000 μg/L (with the CV decreasing from 53% to 16%). While this may simplify blood collection in daily practice, as samples do not have to be drawn exactly at trough, the largest possible interval after the last drug intake nonetheless remains preferable. This encourages the evaluation of the clinical benefit of a routine TDM intervention in CML patients, which the randomized Swiss I-COME study aims to do.
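The core of "extrapolating to trough" can be sketched with first-order elimination alone: a level drawn t hours after the last dose is projected forward to the 24-h trough. The half-life below is illustrative, and the real study used a full population pharmacokinetic model with Bayesian individual estimates rather than this single-exponential shortcut:

```python
import math

def extrapolate_to_trough(c_obs, t_obs, tau=24.0, half_life=18.0):
    """Project an observed level (ug/L) drawn t_obs hours post-dose to the
    trough at tau hours, assuming first-order elimination and completed
    absorption. half_life is an illustrative value, not a fitted estimate."""
    ke = math.log(2.0) / half_life
    return c_obs * math.exp(-ke * (tau - t_obs))

# A level of 1200 ug/L drawn 10 h post-dose projects to a lower 24-h trough
trough = extrapolate_to_trough(1200.0, 10.0)
```

This also shows why very early samples (~4 h) extrapolate poorly, as noted in the Results: the longer the projection interval, the more any model error is amplified.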
Abstract:
Despite its importance in our everyday lives, some properties of water remain unexplained. The study of the interactions between water and organic particles occupies research groups all over the world and is far from finished. In my work I have tried to understand, at the molecular level, these interactions that are essential to life. To do so, I used a simple model of water to describe aqueous solutions of various particles. Recently, liquid water has been described as a structure formed by a random network of hydrogen bonds. Introducing a hydrophobic particle into this structure at low temperature destroys some hydrogen bonds, which is energetically unfavorable. The water molecules then arrange themselves around the particle, forming a cage that recovers even stronger hydrogen bonds (between water molecules): the particles are then soluble in water. At higher temperatures, the thermal agitation of the molecules becomes significant and breaks the hydrogen bonds. The dissolution of the particles then becomes energetically unfavorable, and the particles separate from the water, forming aggregates that minimize their surface exposed to water. At very high temperatures, however, entropic effects become so strong that the particles mix with the water molecules again. Using a model based on these changes in the hydrogen-bond structure, I was able to reproduce the main phenomena associated with hydrophobicity. I found a two-phase coexistence region between the lower and upper critical solution temperatures, in which the hydrophobic particles aggregate. Outside this region, the particles are dissolved in water.
I showed that the hydrophobic interaction is described by a model that takes into account only the changes in the structure of liquid water in the presence of a hydrophobic particle, rather than direct interactions between the particles. Encouraged by these promising results, I studied aqueous solutions of hydrophobic particles in the presence of kosmotropic and chaotropic cosolvents. These are substances that stabilize or destabilize aggregates of hydrophobic particles. Their presence can be included in the model by describing their effect on the structure of water. I was able to reproduce the elevated concentration of chaotropic cosolvents in the immediate vicinity of the particle, and the opposite effect in the case of kosmotropic cosolvents. This change in cosolvent concentration near hydrophobic particles is the main cause of its effect on the solubility of the hydrophobic particles. I showed that the adapted model correctly predicts the implicit effects of cosolvents on many-body interactions between hydrophobic particles. Furthermore, I extended the model to describe amphiphilic particles such as lipids. I found the formation of different types of micelles depending on the distribution of hydrophobic regions on the particle surface. Hydrophobicity also remains a controversial subject in protein science. I defined a new hydrophobicity scale for the amino acids that form proteins, based on their water-exposed surfaces in native proteins. This scale allows a better comparison between experiments and theoretical results. The model developed in my work thus contributes to a better understanding of aqueous solutions of hydrophobic particles.
I believe that the analytical and numerical results obtained shed some light on the physical processes that underlie the hydrophobic interaction.

Despite the importance of water in our daily lives, some of its properties remain unexplained. Indeed, the interactions of water with organic particles are investigated in research groups all over the world, but controversy still surrounds many aspects of their description. In my work I have tried to understand these interactions on a molecular level using both analytical and numerical methods. Recent investigations describe liquid water as a random network formed by hydrogen bonds. The insertion of a hydrophobic particle at low temperature breaks some of the hydrogen bonds, which is energetically unfavorable. The water molecules, however, rearrange in a cage-like structure around the solute particle. Even stronger hydrogen bonds are formed between water molecules, and thus the solute particles are soluble. At higher temperatures, this strict ordering is disrupted by thermal movements, and the solution of particles becomes unfavorable. They minimize their exposed surface to water by aggregating. At even higher temperatures, entropy effects become dominant and water and solute particles mix again. Using a model based on these changes in water structure I have reproduced the essential phenomena connected to hydrophobicity. These include an upper and a lower critical solution temperature, which define temperature and density ranges in which aggregation occurs. Outside of this region the solute particles are soluble in water. Because I was able to demonstrate that the simple mixture model implicitly contains many-body interactions between the solute molecules, I feel that the study contributes an important advance in the qualitative understanding of the hydrophobic effect. I have also studied the aggregation of hydrophobic particles in aqueous solutions in the presence of cosolvents.
Here I have demonstrated that the important features of the destabilizing effect of chaotropic cosolvents on hydrophobic aggregates may be described within the same two-state model, with adaptations to focus on the ability of such substances to alter the structure of water. The relevant phenomena include a significant enhancement of the solubility of non-polar solute particles and preferential binding of chaotropic substances to solute molecules. In a similar fashion, I have analyzed the stabilizing effect of kosmotropic cosolvents in these solutions. Including the ability of kosmotropic substances to enhance the structure of liquid water leads to reduced solubility, a larger aggregation regime and the preferential exclusion of the cosolvent from the hydration shell of hydrophobic solute particles. I have further adapted the MLG model to include the solvation of amphiphilic solute particles in water by allowing different distributions of hydrophobic regions at the molecular surface. I have found aggregation of the amphiphiles and the formation of various types of micelle as a function of the hydrophobicity pattern. I have demonstrated that certain features of micelle formation may be reproduced by adapting the model to describe alterations of water structure near different surface regions of the dissolved amphiphiles. Hydrophobicity also remains a controversial quantity in protein science. Based on the surface exposure of the 20 amino acids in native proteins I have defined a new hydrophobicity scale, which may lead to an improvement in the comparison of experimental data with the results from theoretical HP models. Overall, I have shown that the primary features of the hydrophobic interaction in aqueous solutions may be captured within a model which focuses on alterations in water structure around non-polar solute particles.
The results obtained within this model may illuminate the processes underlying the hydrophobic interaction.

Life on our planet began in water and could not exist without it: the cells of animals and plants contain up to 95% water. Despite its importance in our everyday lives, some properties of water remain unexplained. In particular, the study of the interactions between water and organic particles occupies research groups all over the world and is far from finished. In my work I have tried to understand, at the molecular level, these interactions that are essential to life. To do so, I used a simple model of water to describe aqueous solutions of various particles. Although water is generally a good solvent, a large group of molecules, called hydrophobic molecules (from the Greek "hydro" = "water" and "phobia" = "fear"), is not easily soluble in water. These hydrophobic particles try to avoid contact with water, and therefore form aggregates to minimize their surface exposed to water. This force between the particles is called the hydrophobic interaction, and the physical mechanisms that give rise to it are not well understood at present. In my study I described the effect of hydrophobic particles on liquid water. The objective was to clarify the mechanism of the hydrophobic interaction, which is fundamental to the formation of membranes and the functioning of biological processes in our bodies. Recently, liquid water has been described as a random network formed by hydrogen bonds. Introducing a hydrophobic particle into this structure destroys some hydrogen bonds, while the water molecules arrange themselves around the particle, forming a cage that recovers even stronger hydrogen bonds (between water molecules): the particles are then soluble in water.
At higher temperatures, the thermal agitation of the molecules becomes significant and breaks the cage structure around the hydrophobic particles. The dissolution of the particles then becomes unfavorable, and the particles separate from the water, forming two phases. At very high temperatures, the thermal motions in the system become so strong that the particles mix with the water molecules again. Using a model that describes the system in terms of restructuring of liquid water, I succeeded in reproducing the physical phenomena associated with hydrophobicity. I showed that the hydrophobic interactions between several particles can be expressed in a model that takes into account only the hydrogen bonds between water molecules. Encouraged by these promising results, I included in my model substances frequently used to stabilize or destabilize aqueous solutions of hydrophobic particles. I was able to reproduce the effects due to the presence of these substances. Moreover, I was able to describe the formation of micelles by amphiphilic particles such as lipids, whose surface is partly hydrophobic and partly hydrophilic ("hydro-philic" = "water-loving"), as well as the folding of proteins due to hydrophobicity, which guarantees the correct functioning of the biological processes in our bodies. In my future studies I will continue to investigate aqueous solutions of various particles using the techniques acquired during my thesis work, trying to understand the physical properties of the liquid most important to our lives: water.