980 results for Model Correlation
Abstract:
Biofilms are communities of microorganisms embedded in a complex exopolymeric matrix. They are known to play an important role as a diffusion barrier in environmental systems and in human health, giving rise to increased resistance to antibiotics and disinfectants. Since mass transfer in a biofilm is mainly due to molecular diffusion, it is essential to understand the main parameters influencing diffusive fluxes. In this work, we studied a Pseudomonas fluorescens biofilm and two model hydrogels (agarose and alginate) for which self-diffusion (Brownian motion) and mutual diffusion coefficients were quantified. Fluorescence correlation spectroscopy was used to measure self-diffusion coefficients in a confocal volume of ca. 1 μm³ in the gels or biofilms, whereas mutual diffusion measurements were made with a diffusion cell. In addition, microelectrode voltammetry was used to evaluate the Donnan potential of the gels in order to determine its impact on diffusion. For the agarose hydrogel, the combined observations of a decrease in the self-diffusion coefficient and an increase in mutual diffusion with decreasing ionic strength were attributed to the Donnan potential of the gel. Measurements of the Donnan effect (a difference of -30 mV between ionic strengths of 10⁻⁴ and 10⁻¹ M) and the corresponding accumulation of ions in the hydrogel (a 13-fold increase relative to the solution) indicated that electrostatic interactions can strongly influence the diffusive flux of cations, even in a weakly charged hydrogel such as agarose. Curiously, for a more highly charged gel such as calcium alginate, varying the ionic strength and pH produced only slight variations in the diffusion of charged probes within the hydrogel.
These results suggest that, in influencing solute diffusion, the direct effect of cations on the gel structure (induced compression and/or swelling) was much more effective than the Donnan effect. Likewise, for a bacterial biofilm, self-diffusion coefficients were practically constant over a wide range of ionic strengths (10⁻⁴-10⁻¹ M), both for small negatively or positively charged solutes (ratio of the self-diffusion coefficient in the biofilm to that in solution, Db/Dw ≈ 85%) and for nanoparticles (Db/Dw ≈ 50%), suggesting that the obstruction effect of biofilms outweighs the charge effect. The results of this study showed that, among the major factors affecting diffusion in an oligotrophic environmental biofilm (steric exclusion, electrostatic and hydrophobic interactions), obstruction effects appear to be the most important for understanding solute diffusion. While charge effects did not appear to be important for the self-diffusion of charged substrates in the alginate hydrogel or in the bacterial biofilm, they played a key role in understanding diffusion through agarose. Taken together, these results should be very useful for assessing the bioavailability of trace contaminants and nanoparticles in the environment.
Abstract:
Open surgical repair of abdominal aortic aneurysms is increasingly being replaced by endovascular aneurysm repair (EVAR) using stent-grafts (SGs). However, the efficacy of this less invasive approach is compromised by the incidence of persistent flow within the aneurysm, called endoleak, which can lead to aneurysm rupture if undetected. Consequently, long-term surveillance by annual computed tomography (CT) is required, which increases the cost of the EVAR procedure and exposes the patient to ionizing radiation and a nephrotoxic contrast agent. The mechanism of aneurysm rupture secondary to endoleak is related to an aneurysm sac pressure close to systemic pressure. There is a relationship between sac shrinkage or expansion and sac pressurization. Residual pressurization of the abdominal aortic aneurysm induces pulsation and blood circulation within the sac, thereby preventing sac thrombosis and aneurysm healing. Non-invasive vascular elastography (NIVE) using the Lagrangian Speckle Model Estimator (LSME) could become a complementary imaging technique for the follow-up of aneurysms after endovascular repair. NIVE can provide important information on the organization of the thrombus in the aneurysm sac and on the detection of endoleaks. Characterization of thrombus organization was not possible in a previous NIVE study. One limitation of that study was the absence of CT examination as a gold standard for the diagnosis of endoleaks.
We sought to apply and optimize the NIVE technique for the follow-up of abdominal aortic aneurysms (AAA) after EVAR with a stent-graft in a canine model, with the goal of detecting and characterizing endoleaks and thrombus organization. SGs were implanted in a group of 18 dogs with an aneurysm created in the abdominal aorta. Type I endoleaks were created in 4 aneurysms and type II endoleaks in 13 aneurysms, while one aneurysm had no endoleak. Doppler ultrasound (DUS) and NIVE examinations were performed before and then at 1 week, 1 month, 3 months and 6 months after EVAR. Angiography, CT and macroscopic sectioning were performed at the time of sacrifice. Strain values were computed using the LSME algorithm. Regions of endoleak, fresh (unorganized) thrombus and solid (organized) thrombus were identified and segmented by comparing the CT and macroscopic findings. Strain values in areas of endoleak, fresh thrombus and organized thrombus were compared. Strain values differed significantly between areas of endoleak and areas of fresh or organized thrombus, and between areas of fresh and organized thrombus. All endoleaks were clearly characterized by the elastography examinations. No correlation was found between strain values and the type of endoleak, sac pressure, endoleak size or aneurysm size.
Abstract:
We study the analytical solution of the Monte Carlo dynamics in the spherical Sherrington-Kirkpatrick model using the technique of the generating function. Explicit solutions for one-time observables (like the energy) and two-time observables (like the correlation and response function) are obtained. We show that the crucial quantity which governs the dynamics is the acceptance rate. At zero temperature, an adiabatic approximation reveals that the relaxational behavior of the model corresponds to that of a single harmonic oscillator with an effective renormalized mass.
Abstract:
We propose a short-range generalization of the p-spin interaction spin-glass model. The model is well suited to test the idea that an entropy collapse is at the bottom line of the dynamical singularity encountered in structural glasses. The model is studied in three dimensions through Monte Carlo simulations, which put in evidence fragile glass behavior with stretched exponential relaxation and super-Arrhenius behavior of the relaxation time. Our data are in favor of a Vogel-Fulcher behavior of the relaxation time, related to an entropy collapse at the Kauzmann temperature. We, however, encounter difficulties analogous to those found in experimental systems when extrapolating thermodynamical data at low temperatures. We study the spin-glass susceptibility, investigating the behavior of the correlation length in the system. We find that the increase of the relaxation time is accompanied by a very slow growth of the correlation length. We discuss the scaling properties of off-equilibrium dynamics in the glassy regime, finding qualitative agreement with the mean-field theory.
Abstract:
Severe local storms, including tornadoes, damaging hail and wind gusts, frequently occur over the eastern and northeastern states of India during the pre-monsoon season (March-May). Forecasting thunderstorms is one of the most difficult tasks in weather prediction, due to their rather small spatial and temporal extent and the inherent non-linearity of their dynamics and physics. In this paper, sensitivity experiments are conducted with the WRF-NMM model to test the impact of convective parameterization schemes on simulating severe thunderstorms that occurred over Kolkata on 20 May 2006 and 21 May 2007, and the model results are validated against observations. In addition, a simulation without a convective parameterization scheme was performed for each case to determine whether the model could simulate the convection explicitly. A statistical analysis based on mean absolute error, root mean square error and correlation coefficient is performed to compare the simulated and observed data for the different convective schemes. This study shows that the prediction of thunderstorm-affected parameters is sensitive to the convective scheme. The Grell-Devenyi cloud ensemble convective scheme simulated the thunderstorm activity well in terms of timing, intensity and region of occurrence compared with the other convective schemes and with the explicit simulation.
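The validation statistics named in this abstract (mean absolute error, root mean square error and Pearson correlation coefficient) can be sketched in a few lines of stdlib-only Python; the series below are invented illustrations, not data from the WRF-NMM runs:

```python
import math

def validation_stats(observed, simulated):
    """Mean absolute error, root mean square error and Pearson
    correlation coefficient between observed and simulated series."""
    n = len(observed)
    mae = sum(abs(o - s) for o, s in zip(observed, simulated)) / n
    rmse = math.sqrt(sum((o - s) ** 2
                         for o, s in zip(observed, simulated)) / n)
    mo = sum(observed) / n
    ms = sum(simulated) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(observed, simulated))
    var_o = sum((o - mo) ** 2 for o in observed)
    var_s = sum((s - ms) ** 2 for s in simulated)
    r = cov / math.sqrt(var_o * var_s)
    return mae, rmse, r

# Invented example: observed vs. simulated surface temperature (deg C)
obs = [30.1, 31.5, 29.8, 27.2, 25.0]
sim = [29.5, 32.0, 30.5, 26.5, 25.5]
mae, rmse, r = validation_stats(obs, sim)
print(round(mae, 3), round(rmse, 3), round(r, 3))  # prints 0.6 0.607 0.969
```

A lower MAE/RMSE and a correlation closer to 1 would indicate the better-performing convective scheme.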
Abstract:
The aim of this study is to investigate the role of operational flexibility in effective project management in the construction industry. The specific objectives are to: a) identify the determinants of operational flexibility potential in construction project management; b) investigate the contribution of each of the determinants to operational flexibility potential in the construction industry; c) investigate the moderating factors of operational flexibility potential in a construction project environment; d) investigate whether moderated operational flexibility potential mediates the path between predictors and effective construction project management; and e) develop and test a conceptual model of achieving operational flexibility for effective project management. The purpose of this study is to find out ways to utilize flexibility in order to manage an uncertain project environment and ultimately achieve effective project management, and to establish in what configuration these operational flexibility determinants are demanded by the construction project environment in order to achieve project success. This research was conducted in three phases, namely: (i) an exploratory phase; (ii) a questionnaire development phase; and (iii) a data collection and analysis phase. The study requires firm-level analysis, and therefore real estate developers who are members of CREDAI, Kerala Chapter, were considered. This study provides a framework on the functioning of operational flexibility, offering guidance to researchers and practitioners for discovering means to gain operational flexibility in construction firms. The findings provide an empirical understanding of the kinds of resources and capabilities a construction firm must accumulate to respond flexibly to a changing project environment, offering practitioners insights into practices that build a firm's operational flexibility potential.
Firms are dealing with complex, continuously changing and uncertain environments due to trends of globalization, technical changes and innovations, and changes in customers' needs and expectations. To cope with an increasingly uncertain and quickly changing environment, firms strive for flexibility. To achieve the level of flexibility that adds value for customers, firms should look at flexibility from a day-to-day operational perspective. Each dimension of operational flexibility is derived from competences and capabilities. In this thesis, only those flexibility dimensions that directly add value in the customers' eyes, through their influence on customer satisfaction and learning exploitation, are studied, to answer the following research questions: "What is the impact of operational flexibility on customer satisfaction?" and "What are the predictors of operational flexibility in the construction industry?" These questions can only be answered after answering questions such as "Why do firms need operational flexibility?" and "How can firms achieve operational flexibility?" in the context of the construction industry. The need for construction firms to be flexible, via the effective utilization of organizational resources and capabilities for improved responsiveness, is important because of the increasing rate of change in the business environment within which they operate. Achieving operational flexibility is also important because it has a significant correlation with project effectiveness and hence a firm's turnover. It is essential for academics and practitioners to recognize that the attainment of operational flexibility involves different types, namely: (i) modification, (ii) new product development and (iii) demand management, each requiring different configurations of predictors (i.e., resources, capabilities and strategies).
Construction firms should consider these relationships and implement appropriate management practices to develop and configure the right kind of resources, capabilities and strategies for achieving the different operational flexibility types.
Abstract:
We present a semiclassical adiabatic model, taking into account spontaneous dipole radiation, for the angular dependence of quasimolecular X-ray emission in heavy-ion colliding systems. Using the most characteristic levels from a DFS correlation diagram, we are able to explain the behaviour of the observed anisotropy.
Abstract:
The interaction of short intense laser pulses with atoms/molecules produces a multitude of highly nonlinear processes requiring a non-perturbative treatment. Detailed study of these highly nonlinear processes by numerically solving the time-dependent Schrödinger equation becomes a daunting task when the number of degrees of freedom is large. Moreover, the coupling between the electronic and nuclear degrees of freedom further aggravates the computational problems. In the present work we show that the time-dependent Hartree (TDH) approximation, which neglects the correlation effects, gives an unreliable description of the system dynamics both in the absence and presence of an external field. A theoretical framework is required that treats the electrons and nuclei on an equal footing and fully quantum mechanically. To address this issue we discuss two approaches, namely the multicomponent density functional theory (MCDFT) and the multiconfiguration time-dependent Hartree (MCTDH) method, that go beyond the TDH approximation and describe the correlated electron-nuclear dynamics accurately. In the MCDFT framework, where the time-dependent electronic and nuclear densities are the basic variables, we discuss an algorithm to calculate the exact Kohn-Sham (KS) potentials for small model systems. By simulating the photodissociation process in a model hydrogen molecular ion, we show that the exact KS potentials contain all the many-body effects and give an insight into the system dynamics. In the MCTDH approach, the wave function is expanded as a sum of products of single-particle functions (SPFs). The MCTDH method is able to describe the electron-nuclear correlation effects, as the SPFs and the expansion coefficients evolve in time and give an accurate description of the system dynamics.
We show that the MCTDH method is suitable to study a variety of processes such as the fragmentation of molecules, high-order harmonic generation, the two-center interference effect, and the lochfrass effect. We discuss these phenomena in a model hydrogen molecular ion and a model hydrogen molecule. Inclusion of absorbing boundaries in the mean-field approximation and its consequences are discussed using the model hydrogen molecular ion. To this end, two types of calculations are considered: (i) a variational approach with a complex absorbing potential included in the full many-particle Hamiltonian and (ii) an approach in the spirit of time-dependent density functional theory (TDDFT), including complex absorbing potentials in the single-particle equations. It is elucidated that for small grids the TDDFT approach is superior to the variational approach.
Abstract:
Bloom-forming and toxin-producing cyanobacteria remain a persistent nuisance across the world. Modelling of cyanobacteria in freshwaters is an important tool for understanding their population dynamics and predicting the location and timing of bloom events in lakes and rivers. A new deterministic mathematical model was developed, which simulates the growth and movement of cyanobacterial blooms in river systems. The model focuses on the mathematical description of bloom formation, vertical migration and lateral transport of colonies within river environments, taking into account the major factors that affect cyanobacterial bloom formation in rivers, including light, nutrients and temperature. A technique called generalised sensitivity analysis was applied to the model to identify the critical parameter uncertainties and to investigate the interactions among the chosen parameters. The analysis suggested that 8 out of 12 parameters were significant in obtaining the observed cyanobacterial behaviour in a simulation. It was found that there was a high degree of correlation between the half-saturation rate constants used in the model.
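"Generalised sensitivity analysis" commonly refers to Hornberger-Spear-style regionalised sensitivity analysis: sample parameter sets, split runs into behavioural and non-behavioural, and score each parameter by how far apart its two marginal distributions lie. A minimal stdlib-only sketch, with an invented stand-in for the cyanobacteria model (the model, bounds and behavioural criterion below are all illustrative assumptions):

```python
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: maximum distance
    between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for x in sorted(set(a) | set(b)):
        fa = sum(1 for v in a if v <= x) / len(a)
        fb = sum(1 for v in b if v <= x) / len(b)
        d = max(d, abs(fa - fb))
    return d

def generalised_sensitivity(model, bounds, behavioural, n=1000, seed=1):
    """Regionalised sensitivity analysis: sample parameter sets
    uniformly within bounds, split them into behavioural and
    non-behavioural runs, and score each parameter by the KS distance
    between the two marginal parameter distributions."""
    rng = random.Random(seed)
    ok, bad = [], []
    for _ in range(n):
        theta = [rng.uniform(lo, hi) for lo, hi in bounds]
        (ok if behavioural(model(theta)) else bad).append(theta)
    return [ks_statistic([t[i] for t in ok], [t[i] for t in bad])
            for i in range(len(bounds))]

# Toy model: output depends strongly on theta[0], barely on theta[1]
toy_model = lambda th: 2.0 * th[0] + 0.01 * th[1]
scores = generalised_sensitivity(toy_model, [(0.0, 1.0), (0.0, 1.0)],
                                 behavioural=lambda y: y > 1.0)
# scores[0] (the sensitive parameter) far exceeds scores[1]
```

Parameters with a large KS score are the ones whose uncertainty matters for reproducing the observed bloom behaviour.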
Abstract:
Models of the dynamics of nitrogen in soil (soil-N) can be used to aid the fertilizer management of a crop. The predictions of soil-N models can be validated by comparison with observed data. Validation generally involves calculating non-spatial statistics of the observations and predictions, such as their means, their mean squared-difference, and their correlation. However, when the model predictions are spatially distributed across a landscape the model requires validation with spatial statistics. There are three reasons for this: (i) the model may be more or less successful at reproducing the variance of the observations at different spatial scales; (ii) the correlation of the predictions with the observations may be different at different spatial scales; (iii) the spatial pattern of model error may be informative. In this study we used a model, parameterized with spatially variable input information about the soil, to predict the mineral-N content of soil in an arable field, and compared the results with observed data. We validated the performance of the N model spatially with a linear mixed model of the observations and model predictions, estimated by residual maximum likelihood. This novel approach allowed us to describe the joint variation of the observations and predictions as: (i) independent random variation that occurred at a fine spatial scale; (ii) correlated random variation that occurred at a coarse spatial scale; (iii) systematic variation associated with a spatial trend. The linear mixed model revealed that, in general, the performance of the N model changed depending on the spatial scale of interest. At the scales associated with random variation, the N model underestimated the variance of the observations, and the predictions were correlated poorly with the observations. At the scale of the trend, the predictions and observations shared a common surface. 
The spatial pattern of the error of the N model suggested that the observations were affected by the local soil condition, but this was not accounted for by the N model. In summary, the N model would be well suited to field-scale management of soil nitrogen, but poorly suited to management at finer spatial scales. This information was not apparent with a non-spatial validation. (c) 2007 Elsevier B.V. All rights reserved.
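The core idea, that model performance can differ with spatial scale, can be illustrated with a toy stdlib-only sketch (not the REML linear mixed model the study actually uses): on an invented 1-D transect where the predictions capture the broad trend but miss fine-scale fluctuations, correlation improves once values are block-averaged to a coarser scale.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x); mx = sum(x) / n; my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x)
                           * sum((b - my) ** 2 for b in y))

def block_average(values, k):
    """Mean over consecutive blocks of k samples: a coarser spatial scale."""
    return [sum(values[i:i + k]) / k
            for i in range(0, len(values) - k + 1, k)]

# Invented transect: observations = broad trend + fine-scale fluctuation;
# the model predicts only the trend.
trend = [1, 2, 3, 4, 5, 6, 7, 8]
observed = [t + (1 if i % 2 else -1) for i, t in enumerate(trend)]
predicted = trend

fine_r = pearson(observed, predicted)            # imperfect at fine scale
coarse_r = pearson(block_average(observed, 2),
                   block_average(predicted, 2))  # near 1 at coarse scale
```

Here `coarse_r` exceeds `fine_r`, mirroring the abstract's conclusion that the N model suits field-scale management better than finer-scale management.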
Abstract:
To gain a new perspective on the interaction of the Atlantic Ocean and the atmosphere, the relationship between the atmospheric and oceanic meridional energy transports is studied in a version of HadCM3, the U.K. Hadley Centre's coupled climate model. The correlation structure of the energy transports in the atmosphere and Atlantic Ocean as a function of latitude, and the cross correlation between the two systems are analyzed. The processes that give rise to the correlations are then elucidated using regression analyses. In northern midlatitudes, the interannual variability of the Atlantic Ocean energy transport is dominated by Ekman processes. Anticorrelated zonal winds in the subtropics and midlatitudes, particularly associated with the North Atlantic Oscillation (NAO), drive anticorrelated meridional Ekman transports. Variability in the atmospheric energy transport is associated with changes in the stationary waves, but is only weakly related to the NAO. Nevertheless, atmospheric driving of the oceanic Ekman transports is responsible for a bipolar pattern in the correlation between the atmosphere and Atlantic Ocean energy transports. In the Tropics, the interannual variability of the Atlantic Ocean energy transport is dominated by an adjustment of the tropical ocean to coastal upwelling induced along the Venezuelan coast by a strengthening of the easterly trade winds. Variability in the atmospheric energy transport is associated with a cross-equatorial meridional overturning circulation that is only weakly associated with variability in the trade winds along the Venezuelan coast. In consequence, there is only very limited correlation between the atmosphere and Atlantic Ocean energy transports in the Tropics of HadCM3.
Abstract:
In this study, the processes affecting sea surface temperature variability over the 1992–98 period, encompassing the very strong 1997–98 El Niño event, are analyzed. A tropical Pacific Ocean general circulation model, forced by a combination of weekly ERS1–2 and TAO wind stresses, and climatological heat and freshwater fluxes, is first validated against observations. The model reproduces the main features of the tropical Pacific mean state, despite a weaker than observed thermal stratification, a 0.1 m s−1 too strong (weak) South Equatorial Current (North Equatorial Countercurrent), and a slight underestimate of the Equatorial Undercurrent. Good agreement is found between the model dynamic height and TOPEX/Poseidon sea level variability, with correlation/rms differences of 0.80/4.7 cm on average in the 10°N–10°S band. The model sea surface temperature variability is a bit weak, but reproduces the main features of interannual variability during the 1992–98 period. The model compares well with the TAO current variability at the equator, with correlation/rms differences of 0.81/0.23 m s−1 for surface currents. The model therefore reproduces well the observed interannual variability, with wind stress as the only interannually varying forcing. This good agreement with observations provides confidence in the comprehensive three-dimensional circulation and thermal structure of the model. A close examination of mixed layer heat balance is thus undertaken, contrasting the mean seasonal cycle of the 1993–96 period and the 1997–98 El Niño. In the eastern Pacific, cooling by exchanges with the subsurface (vertical advection, mixing, and entrainment), the atmospheric forcing, and the eddies (mainly the tropical instability waves) are the three main contributors to the heat budget. In the central–western Pacific, the zonal advection by low-frequency currents becomes the main contributor. 
Westerly wind bursts (in December 1996 and March and June 1997) were found to play a decisive role in the onset of the 1997–98 El Niño. They contributed to the early warming in the eastern Pacific because the downwelling Kelvin waves that they excited diminished subsurface cooling there. But it is mainly through eastward advection of the warm pool that they generated temperature anomalies in the central Pacific. The end of El Niño can be linked to the large-scale easterly anomalies that developed in the western Pacific and spread eastward, from the end of 1997 onward. In the far-western Pacific, because of the shallower than normal thermocline, these easterlies cooled the SST by vertical processes. In the central Pacific, easterlies pushed the warm pool back to the west. In the east, they led to a shallower thermocline, which ultimately allowed subsurface cooling to resume and to quickly cool the surface layer.
Abstract:
Heat waves are expected to increase in frequency and magnitude with climate change. The first part of a study to produce projections of the effect of future climate change on heat-related mortality is presented. Separate city-specific empirical statistical models that quantify significant relationships between summer daily maximum temperature (T max) and daily heat-related deaths are constructed from historical data for six cities: Boston, Budapest, Dallas, Lisbon, London, and Sydney. ‘Threshold temperatures’ above which heat-related deaths begin to occur are identified. The results demonstrate significantly lower thresholds in ‘cooler’ cities exhibiting lower mean summer temperatures than in ‘warmer’ cities exhibiting higher mean summer temperatures. Analysis of individual ‘heat waves’ illustrates that a greater proportion of mortality is due to mortality displacement in cities with less sensitive temperature–mortality relationships than in those with more sensitive relationships, and that mortality displacement is no longer a feature more than 12 days after the end of the heat wave. Validation techniques through residual and correlation analyses of modelled and observed values and comparisons with other studies indicate that the observed temperature–mortality relationships are represented well by each of the models. The models can therefore be used with confidence to examine future heat-related deaths under various climate change scenarios for the respective cities (presented in Part 2).
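The "threshold temperature" concept can be illustrated with a minimal hockey-stick fit: heat-related deaths are zero up to a threshold and rise linearly above it, and the threshold is found by grid search. This is a sketch of the general approach under invented data, not the city-specific models used in the study:

```python
def fit_threshold_model(tmax, deaths, candidate_thresholds):
    """Grid-search fit of a 'hockey stick' temperature-mortality model:
    heat deaths = slope * max(0, Tmax - threshold).
    Returns the (threshold, slope, sse) with the smallest squared error."""
    best = None
    for t0 in candidate_thresholds:
        excess = [max(0.0, t - t0) for t in tmax]
        sxx = sum(x * x for x in excess)
        if sxx == 0.0:
            continue  # no days above this candidate threshold
        slope = sum(x * d for x, d in zip(excess, deaths)) / sxx
        sse = sum((d - slope * x) ** 2 for x, d in zip(excess, deaths))
        if best is None or sse < best[2]:
            best = (t0, slope, sse)
    return best

# Invented data with a true threshold of 24 deg C and 2 extra deaths per deg C
tmax = [20, 22, 24, 26, 28, 30]
deaths = [0, 0, 0, 4, 8, 12]
threshold, slope, sse = fit_threshold_model(tmax, deaths, range(18, 30))
# recovers threshold 24 and slope 2.0
```

A lower fitted threshold for a "cooler" city than for a "warmer" one is exactly the pattern the abstract reports.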
Abstract:
Estimating the magnitude of Agulhas leakage, the volume flux of water from the Indian to the Atlantic Ocean, is difficult because of the presence of other circulation systems in the Agulhas region. Indian Ocean water in the Atlantic Ocean is vigorously mixed and diluted in the Cape Basin. Eulerian integration methods, where the velocity field perpendicular to a section is integrated to yield a flux, have to be calibrated so that only the flux by Agulhas leakage is sampled. Two Eulerian methods for estimating the magnitude of Agulhas leakage are tested within a high-resolution two-way nested model with the goal to devise a mooring-based measurement strategy. At the GoodHope line, a section halfway through the Cape Basin, the integrated velocity perpendicular to that line is compared to the magnitude of Agulhas leakage as determined from the transport carried by numerical Lagrangian floats. In the first method, integration is limited to the flux of water warmer and more saline than specific threshold values. These threshold values are determined by maximizing the correlation with the float-determined time series. By using the threshold values, approximately half of the leakage can directly be measured. The total amount of Agulhas leakage can be estimated using a linear regression, within a 90% confidence band of 12 Sv. In the second method, a subregion of the GoodHope line is sought so that integration over that subregion yields an Eulerian flux as close to the float-determined leakage as possible. It appears that when integration is limited within the model to the upper 300 m of the water column within 900 km of the African coast the time series have the smallest root-mean-square difference. This method yields a root-mean-square error of only 5.2 Sv but the 90% confidence band of the estimate is 20 Sv. 
It is concluded that the optimum thermohaline threshold method leads to more accurate estimates even though the directly measured transport is a factor of two lower than the actual magnitude of Agulhas leakage in this model.
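The thermohaline threshold method described above can be sketched as a grid search over temperature/salinity thresholds that maximizes correlation with the float-derived leakage series, followed by a linear regression calibration. The helper names (`calibrate_thresholds`, `sections`) and the toy section data are illustrative assumptions, not the model's actual fields:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x); mx = sum(x) / n; my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x)
                           * sum((b - my) ** 2 for b in y))

def calibrate_thresholds(sections, leakage, t_grid, s_grid):
    """Choose the (temperature, salinity) thresholds whose thresholded
    Eulerian transport correlates best with the float-derived leakage,
    then calibrate leakage ~ a * transport + b by least squares."""
    best = None
    for t_thr in t_grid:
        for s_thr in s_grid:
            series = [sum(f for T, S, f in cells if T > t_thr and S > s_thr)
                      for cells in sections]
            if len(set(series)) < 2:
                continue  # constant series: correlation undefined
            r = pearson(series, leakage)
            if best is None or r > best[0]:
                best = (r, t_thr, s_thr, series)
    r, t_thr, s_thr, series = best
    mx = sum(series) / len(series); my = sum(leakage) / len(leakage)
    a = (sum((x - mx) * (y - my) for x, y in zip(series, leakage))
         / sum((x - mx) ** 2 for x in series))
    return t_thr, s_thr, a, my - a * mx, r

# Invented data: each time step holds (temperature, salinity, flux) cells;
# half the leakage is carried by one warm, salty cell, as in the abstract.
leakage = [10.0, 12.0, 8.0, 14.0, 9.0, 11.0, 13.0, 7.0]  # float-derived, Sv
noise = [3.0, -1.0, 2.0, 0.0, 4.0, -2.0, 1.0, 3.0]       # unrelated flux
sections = [[(16.0, 35.5, 0.5 * l), (8.0, 34.5, m)]
            for l, m in zip(leakage, noise)]
t_thr, s_thr, a, b, r = calibrate_thresholds(
    sections, leakage, t_grid=[5.0, 10.0, 14.0], s_grid=[34.0, 35.0])
# the warm salty cell is isolated and leakage = 2 * transport is recovered
```

The regression slope near 2 reflects the abstract's point that the directly measured transport is about half the actual leakage, with the linear fit restoring the total.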
Abstract:
The formulation of a new process-based crop model, the general large-area model (GLAM) for annual crops, is presented. The model has been designed to operate on spatial scales commensurate with those of global and regional climate models. It aims to simulate the impact of climate on crop yield. Procedures for model parameter determination and optimisation are described, and demonstrated for the prediction of groundnut (i.e. peanut; Arachis hypogaea L.) yields across India for the period 1966-1989. Optimal parameters (e.g. extinction coefficient, transpiration efficiency, rate of change of harvest index) were stable over space and time, provided the estimate of the yield technology trend was based on the full 24-year period. The model has two location-specific parameters: the planting date and the yield gap parameter. The latter varies spatially and is determined by calibration. The optimal value varies slightly when different input data are used. The model was tested using a historical data set on a 2.5° x 2.5° grid to simulate yields. Three sites are examined in detail: grid cells from Gujarat in the west, Andhra Pradesh towards the south, and Uttar Pradesh in the north. Agreement between observed and modelled yield was variable, with correlation coefficients of 0.74, 0.42 and 0, respectively. Skill was highest where the climate signal was greatest, and correlations were comparable to or greater than correlations with seasonal mean rainfall. Yields from all 35 cells were aggregated to simulate all-India yield. The correlation coefficient between observed and simulated yields was 0.76, and the root mean square error was 8.4% of the mean yield. The model can be easily extended to any annual crop for the investigation of the impacts of climate variability (or change) on crop yield over large areas. (C) 2004 Elsevier B.V. All rights reserved.