78 results for good lives model
Abstract:
During April-May 2010 volcanic ash clouds from the Icelandic Eyjafjallajökull volcano reached Europe causing an unprecedented disruption of the EUR/NAT region airspace. Civil aviation authorities banned all flight operations because of the threat posed by volcanic ash to modern turbine aircraft. New quantitative airborne ash mass concentration thresholds, still under discussion, were adopted for discerning regions contaminated by ash. This has implications for ash dispersal models routinely used to forecast the evolution of ash clouds. In this new context, quantitative model validation and assessment of the accuracy of current state-of-the-art models are of paramount importance. The passage of volcanic ash clouds over central Europe, a territory hosting a dense network of meteorological and air quality observatories, generated a quantity of observations unusual for volcanic clouds. From the ground, the cloud was observed by aerosol lidars, lidar ceilometers, sun photometers, other remote-sensing instruments and in-situ collectors. From the air, sondes and multiple aircraft missions also took extremely valuable in-situ and remote-sensing measurements. These measurements constitute an excellent database for model validation. Here we validate the FALL3D ash dispersal model by comparing model results with ground- and airplane-based measurements obtained during the initial 14–23 April 2010 Eyjafjallajökull explosive phase. We run the model at high spatial resolution using as input hourly-averaged observed heights of the eruption column and the total grain size distribution reconstructed from field observations. Model results are then compared against remote ground-based and in-situ aircraft-based measurements, including lidar ceilometers from the German Meteorological Service, aerosol lidars and sun photometers from the EARLINET and AERONET networks, and flight missions of the German DLR Falcon aircraft.
We find good quantitative agreement, with an error similar to the spread in the observations (though depending on the method used to estimate the mass eruption rate), for both airborne and ground mass concentrations. Such verification results help us understand and constrain the accuracy and reliability of ash transport models and are of enormous relevance for designing future operational mitigation strategies at Volcanic Ash Advisory Centers.
Abstract:
Nitrogen adsorption on carbon nanotubes is widely studied because nitrogen adsorption isotherm measurement is a standard method applied for porosity characterization. A further reason is that carbon nanotubes are potential adsorbents for the separation of nitrogen from oxygen in air. The study presented here describes the results of GCMC simulations of nitrogen (three-site model) adsorption on single- and multi-walled closed nanotubes. The results obtained are described by a new adsorption isotherm model proposed in this study. The model can be treated as the tube analogue of the GAB isotherm, taking into account the lateral adsorbate-adsorbate interactions. We show that the model describes the simulated data satisfactorily. Next, this new approach is applied to the description of experimental data measured on different commercially available (and characterized using HRTEM) carbon nanotubes. We show that a generally quite good fit is observed, and it is therefore suggested that the mechanism of adsorption in the studied materials is mainly determined by adsorption on tubes separated at large distances, so the tubes behave almost independently.
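The tube analogue with lateral interactions is the paper's own contribution and is not reproduced here, but its parent model is standard. A minimal Python sketch of the classical GAB isotherm, with illustrative (not fitted) parameter values:

```python
import numpy as np

def gab_isotherm(x, n_m, C, K):
    """Guggenheim-Anderson-de Boer (GAB) adsorption isotherm.

    x   : relative pressure p/p0
    n_m : monolayer capacity
    C   : energy constant for the first adsorbed layer
    K   : multilayer correction constant (K = 1 recovers the BET form)
    """
    return n_m * C * K * x / ((1.0 - K * x) * (1.0 - K * x + C * K * x))

# With K = 1 the GAB expression reduces to the classical BET isotherm;
# the parameter values below are purely illustrative.
x = np.linspace(0.05, 0.35, 7)
bet = gab_isotherm(x, n_m=1.0, C=50.0, K=1.0)
```

Fitting `n_m`, `C` and `K` to measured (or simulated) uptake data by least squares is the usual route to porosity parameters such as the monolayer capacity.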
Abstract:
The aim of the work was to study the survival of Lactobacillus plantarum NCIMB 8826 in model solutions and to develop a mathematical model describing its dependence on pH, citric acid and ascorbic acid. A Central Composite Design (CCD) was developed studying each of the three factors at five levels within the following ranges: pH (3.0-4.2), citric acid (6-40 g/L), and ascorbic acid (100-1000 mg/L). In total, 17 experimental runs were carried out. The initial cell concentration in the model solutions was approximately 1 × 10^8 CFU/mL; the solutions were stored at 4°C for 6 weeks. Analysis of variance (ANOVA) of the stepwise regression demonstrated that a second-order polynomial model fits the data well. The results demonstrated that high pH and citric acid concentration enhanced cell survival; on the other hand, ascorbic acid did not have an effect. Cell survival during storage was also investigated in various types of juices, including orange, grapefruit, blackcurrant, pineapple, pomegranate, cranberry and lemon juice. The model predicted cell survival well in orange, blackcurrant and pineapple juice; however, it failed to predict cell survival in grapefruit and pomegranate juice, indicating the influence of additional factors, besides pH and citric acid, on cell survival. Very good cell survival (less than a 0.4 log decrease) was observed after 6 weeks of storage in orange, blackcurrant and pineapple juice, all of which had a pH of about 3.8. Cell survival in cranberry and pomegranate juice decreased very quickly, whereas in the case of lemon juice the cell concentration decreased approximately 1.1 logs after 6 weeks of storage, despite the fact that lemon juice had the lowest pH (pH ~2.5) among all the juices tested.
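The second-order polynomial fitted to the CCD data is an ordinary least-squares response surface. A minimal sketch, using hypothetical synthetic data in two of the three factors (coded levels, made-up coefficients, noiseless so the fit recovers them exactly):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a survival response as a function of coded pH and
# citric acid levels, generated from known (illustrative) coefficients.
ph = rng.uniform(-1, 1, 50)
ca = rng.uniform(-1, 1, 50)
y = 0.5 + 0.8 * ph + 0.3 * ca - 0.2 * ph**2 - 0.1 * ca**2 + 0.15 * ph * ca

# Design matrix for a full second-order polynomial in two factors:
# intercept, linear, quadratic and interaction terms.
X = np.column_stack([np.ones_like(ph), ph, ca, ph**2, ca**2, ph * ca])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

In a real CCD analysis the 17 experimental runs replace the synthetic points, a third factor adds four more columns, and stepwise regression drops the terms that ANOVA finds insignificant (here, ascorbic acid).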
Taking into account the results from the compositional analysis of the juices and the model, it was deduced that in certain juices other compounds, likely proteins and dietary fibre, seemed to protect the cells during storage. In contrast, in certain juices, such as pomegranate, cell survival was much lower than expected; this could be due to the presence of antimicrobial compounds, such as phenolic compounds.
Abstract:
Ethnopharmacological relevance: Studies on traditional Chinese medicine (TCM), like those of other systems of traditional medicine (TM), are very variable in their quality, content and focus, resulting in issues around their acceptability to the global scientific community. In an attempt to address these issues, a European Union-funded FP7 consortium, composed of both Chinese and European scientists and named “Good practice in traditional Chinese medicine” (GP-TCM), has devised a series of guidelines and technical notes to facilitate good practice in collecting, assessing and publishing TCM literature as well as highlighting the scope of information that should be in future publications on TMs. This paper summarises these guidelines, together with what has been learned through GP-TCM collaborations, focusing on some common problems and proposing solutions. The recommendations also provide a template for the evaluation of other types of traditional medicine such as Ayurveda, Kampo and Unani. Materials and methods: GP-TCM provided a means by which experts in different areas relating to TCM were able to collaborate in forming a literature review good practice panel which operated through e-mail exchanges, teleconferences and focused discussions at annual meetings. The panel involved coordinators and representatives of each GP-TCM work package (WP), with the latter managing the testing and refining of such guidelines within the context of their respective WPs and providing feedback. Results: A Good Practice Handbook for Scientific Publications on TCM was drafted during the three years of the consortium, showing the value of such networks. A “deliverable – central questions – labour division” model was established to guide the literature evaluation studies of each WP.
The model investigated various scoring systems and their ability to provide consistent and reliable semi-quantitative assessments of the literature, notably in respect of the botanical ingredients involved and the scientific quality of the work described. This resulted in the compilation of (i) a robust scoring system and (ii) a set of minimum standards for publishing in the herbal medicines field, based on an analysis of the main problems identified in published TCM literature.
Abstract:
The impact of the Reformation was felt strongly in the nature and character of the priesthood, and in the function and reputation of the priest. A shift in the understanding of the priesthood was one of the most tangible manifestations of doctrinal change, evident in the physical arrangement of the church, in the language of the liturgy, and in the relaxation of the discipline of celibacy, which had for centuries bound priests in the Latin tradition to a life of perpetual continence. Clerical celibacy, and accusations of clerical incontinence, featured prominently in evangelical criticisms of the Catholic church and priesthood, which made a good deal of polemical capital out of the perceived relationship of the priest and the efficacy of his sacred function. Citing St Paul, Protestant polemicists presented clerical marriage as the only appropriate remedy for priestly immorality. But did the advent of a married priesthood create more problems than it solved? The polemical certainties that informed evangelical writing on sacerdotal celibacy did not guarantee the immediate acceptance of a married priesthood, and the vocabulary that had been used to denounce clergy who failed in their obligation to celibacy was all too readily turned against the married clergy. The anti-clerical lexicon, and its usage, remained remarkably static despite the substantial doctrinal and practical challenges posed to the traditional model of priesthood by the Protestant Reformation.
Abstract:
Salmonella are closely related to commensal Escherichia coli but have gained virulence factors enabling them to behave as enteric pathogens. Less well studied are the similarities and differences that exist between the metabolic properties of these organisms that may contribute toward niche adaptation of Salmonella pathogens. To address this, we have constructed a genome-scale Salmonella metabolic model (iMA945). The model comprises 945 open reading frames or genes, 1964 reactions, and 1036 metabolites. There was significant overlap with genes present in the E. coli MG1655 model iAF1260. In silico growth predictions were simulated using the model on different carbon, nitrogen, phosphorus, and sulfur sources. These were compared with substrate utilization data gathered from high-throughput phenotyping microarrays, revealing good agreement. Of the compounds tested, the majority were utilizable by both Salmonella and E. coli. Nevertheless, a number of differences were identified, both between Salmonella and E. coli and within the Salmonella strains included. These differences provide valuable insight into the distinctions between a commensal and a closely related pathogen, and among different pathogenic strains, opening new avenues for future exploration.
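Growth predictions from a genome-scale model of this kind are typically computed by flux balance analysis: a linear program that maximizes a biomass flux subject to steady-state mass balance. The sketch below uses a hypothetical three-reaction network, not iMA945 itself, to show the formulation:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (purely illustrative):
#   R1: uptake -> A,   R2: A -> B,   R3: B -> biomass
# Stoichiometric matrix S, rows = metabolites (A, B), cols = reactions.
S = np.array([[1.0, -1.0,  0.0],   # metabolite A
              [0.0,  1.0, -1.0]])  # metabolite B

# Flux bounds: uptake R1 capped at 10 units, others effectively open.
bounds = [(0, 10), (0, 100), (0, 100)]

# Maximize biomass flux v3 (linprog minimizes, so negate it) subject to
# the steady-state constraint S v = 0.
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
fluxes = res.x  # optimal flux distribution, limited here by the uptake cap
```

Swapping the carbon source in a real model amounts to changing which exchange-reaction bounds are open, which is how in silico utilization of each substrate is tested against the phenotyping arrays.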
Abstract:
The aim of this study was, within a sensitivity analysis framework, to determine if additional model complexity gives a better capability to model the hydrology and nitrogen dynamics of a small Mediterranean forested catchment, or if the additional parameters cause over-fitting. Three nitrogen models of varying hydrological complexity were considered. For each model, general sensitivity analysis (GSA) and Generalized Likelihood Uncertainty Estimation (GLUE) were applied, each based on 100,000 Monte Carlo simulations. The results highlighted the most complex structure as the most appropriate, providing the best representation of the non-linear patterns observed in the flow and streamwater nitrate concentrations between 1999 and 2002. Its 5% and 95% GLUE bounds, obtained considering a multi-objective approach, provide the narrowest band for streamwater nitrogen, which suggests increased model robustness, though all models exhibit alternating periods of good and poor fit between simulated outcomes and observed data. The results confirm the importance of the riparian zone in controlling the short-term (daily) streamwater nitrogen dynamics in this catchment, but not the overall flux of nitrogen from the catchment. It was also shown that as the complexity of a hydrological model increases over-parameterisation occurs, but the converse is true for a water quality model, where additional process representation leads to additional acceptable model simulations. Water quality data help constrain the hydrological representation in process-based models. Increased complexity was justifiable for modelling river-system hydrochemistry.
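The GLUE procedure behind those 5%/95% bounds is conceptually simple: sample parameters, score each simulation with a likelihood measure, keep the "behavioural" runs, and take percentiles of their outputs. A minimal sketch on a hypothetical one-parameter model (stand-in for the catchment models; names and thresholds are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_model(theta, t):
    """Hypothetical one-parameter recession curve standing in for the
    catchment nitrogen models in the text."""
    return np.exp(-theta * t)

t = np.linspace(0, 5, 20)
obs = toy_model(0.7, t) + rng.normal(0, 0.02, t.size)  # synthetic "observations"

# GLUE: Monte Carlo sample the parameter, score each run (Nash-Sutcliffe
# efficiency here), and keep runs above a behavioural threshold.
thetas = rng.uniform(0.1, 2.0, 10_000)
sims = np.array([toy_model(th, t) for th in thetas])
nse = 1 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)
behavioural = sims[nse > 0.9]

# 5% / 95% uncertainty bounds at each time step from the behavioural set.
lower, upper = np.percentile(behavioural, [5, 95], axis=0)
```

The width of `upper - lower` is the band the abstract compares across models: a narrower band from the same threshold suggests a better-constrained structure.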
Abstract:
Government and institutionally-driven ‘good practice transfer’ initiatives are consistently presented as a means to enhance construction firm and industry performance. Two implicit tenets of these initiatives appear to be: knowledge embedded in good practice will transfer automatically; and the potential of implementing good practice will be capitalised upon regardless of the context where it is to be used. The validity of these tenets is increasingly being questioned and, concurrently, more nuanced knowledge production understandings are being developed which recognise and incorporate context-specificity. This research contributes to this growing, more critical agenda by examining the actual benefits accrued from good practice transfer from the perspective of a small specialist trade contracting firm. A concept model for successful good practice transfer is developed from a single longitudinal case study within a small heating and plumbing firm. The concept model consists of five key variables: environment, strategy, people, technology, and organisation of work. The key findings challenge the implicit assumptions prevailing in the existing literature and support a contingency approach which argues that successful good practice transfer is not just a matter of adopting good practice and mechanistically inserting it into the firm, but requires addressing ‘behavioural’ aspects. For successful good practice transfer, small specialist trade contracting firms need to develop and operationalise organisational slack, mechanisms for scanning external stimuli and absorbing knowledge. They also need to formulate and communicate client-driven external strategies; to motivate and educate people at all levels; to possess internal or accessible complementary skills and knowledge; to have ‘soft focus’ immediate/mid-term benefits at a project level; and to embed good practice in current work practices.
Abstract:
An analytical model is developed to predict the surface drag exerted by internal gravity waves on an isolated axisymmetric mountain over which there is a stratified flow with a velocity profile that varies relatively slowly with height. The model is linear with respect to the perturbations induced by the mountain, and solves the Taylor–Goldstein equation with variable coefficients using a Wentzel–Kramers–Brillouin (WKB) approximation, formally valid for high Richardson numbers, Ri. The WKB solution is extended to a higher order than in previous studies, enabling a rigorous treatment of the effects of shear and curvature of the wind profile on the surface drag. In the hydrostatic approximation, closed formulas for the drag are derived for generic wind profiles, where the relative magnitude of the corrections to the leading-order drag (valid for a constant wind profile) does not depend on the detailed shape of the orography. The drag is found to vary proportionally to Ri⁻¹, decreasing as Ri decreases for a wind that varies linearly with height, and increasing as Ri decreases for a wind that rotates with height maintaining its magnitude. In these two cases the surface drag is predicted to be aligned with the surface wind. When one of the wind components varies linearly with height and the other is constant, the surface drag is misaligned with the surface wind, especially for relatively small Ri. All these results are shown to be in fairly good agreement with numerical simulations of mesoscale nonhydrostatic models, for high and even moderate values of Ri.
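The equation named above has a standard form. As a hedged sketch for a two-dimensional shear flow and a single Fourier component (the paper itself treats directional shear over an axisymmetric mountain, which this simple form omits), the Taylor–Goldstein equation and the leading-order WKB vertical wavenumber are:

```latex
% Taylor–Goldstein equation for a single Fourier mode \hat{w}(z) of
% horizontal wavenumber k (the k^2 term is dropped in the hydrostatic limit):
\frac{d^2\hat{w}}{dz^2}
  + \left( \frac{N^2}{U^2} - \frac{1}{U}\frac{d^2 U}{dz^2} - k^2 \right)\hat{w} = 0
% Leading-order WKB vertical wavenumber, valid for large Ri:
m(z) \approx \frac{N(z)}{U(z)}
```

The higher-order WKB terms mentioned in the abstract supply the shear (dU/dz) and curvature (d²U/dz²) corrections to this leading-order wavenumber, which is what carries the Ri⁻¹ dependence into the drag formulas.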
Abstract:
During winter the ocean surface in polar regions freezes over to form sea ice. In the summer the upper layers of sea ice and snow melt, producing meltwater that accumulates in Arctic melt ponds on the surface of sea ice. An accurate estimate of the fraction of the sea ice surface covered in melt ponds is essential for a realistic estimate of the albedo for global climate models. We present a melt-pond–sea-ice model that simulates the three-dimensional evolution of melt ponds on an Arctic sea ice surface. The advancements of this model compared to previous models are the inclusion of snow topography; the calculation of meltwater transport rates from hydraulic gradients and ice permeability; and the incorporation of a detailed one-dimensional thermodynamic radiative balance. Results of model runs simulating first-year and multiyear sea ice are presented. Model results show good agreement with observations, with duration of pond coverage, pond area, and ice ablation comparing well for both the first-year ice and multiyear ice cases. We investigate the sensitivity of the melt pond cover to changes in ice topography, snow topography, and vertical ice permeability. Snow was found to have an important impact mainly at the start of the melt season, whereas initial ice topography strongly controlled pond size and pond fraction throughout the melt season. A reduction in ice permeability allowed surface flooding of relatively flat, first-year ice but had little impact on the pond coverage of rougher, multiyear ice. We discuss our results, including model shortcomings and areas of experimental uncertainty.
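Transport from hydraulic gradients and permeability, as described above, is Darcy's law. A minimal sketch for vertical drainage through the ice, with illustrative constants rather than the paper's calibrated values:

```python
# Darcy's law for vertical meltwater drainage through sea ice:
#   q = (k / mu) * rho * g * dh/dz   (volume flux per unit area, m/s)
# The numbers below are textbook values for water near 0 degC, not the
# parameters of the melt-pond model itself.
RHO_WATER = 1000.0   # kg m^-3
G = 9.81             # m s^-2
MU = 1.79e-3         # Pa s, dynamic viscosity of water near 0 degC

def darcy_flux(permeability, head_gradient):
    """Volume flux (m s^-1) through ice of a given permeability (m^2)
    driven by a hydraulic head gradient (m of head per m of depth)."""
    return (permeability / MU) * RHO_WATER * G * head_gradient

# Flux scales linearly with permeability: a tenfold drop in permeability
# gives a tenfold drop in drainage, consistent with the flooding of
# low-permeability, relatively flat first-year ice noted above.
q_permeable = darcy_flux(1e-11, 0.01)
q_tight = darcy_flux(1e-12, 0.01)
```

In the full model, ponds grow wherever the melt rate exceeds this drainage rate, so permeability directly controls which parts of the surface flood.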
Abstract:
The fourth assessment report of the Intergovernmental Panel on Climate Change (IPCC) includes a comparison of observation-based and modeling-based estimates of the aerosol direct radiative forcing. In this comparison, satellite-based studies suggest a more negative aerosol direct radiative forcing than modeling studies. A previous satellite-based study, part of the IPCC comparison, uses aerosol optical depths and accumulation-mode fractions retrieved by the Moderate Resolution Imaging Spectroradiometer (MODIS) at collection 4. The latest version of MODIS products, named collection 5, improves aerosol retrievals. Using these products, the direct forcing in the shortwave spectrum defined with respect to present-day natural aerosols is now estimated at −1.30 and −0.65 Wm−2 on a global clear-sky and all-sky average, respectively, for 2002. These values are still significantly more negative than the numbers reported by modeling studies. By accounting for differences between present-day natural and preindustrial aerosol concentrations, sampling biases, and investigating the impact of differences in the zonal distribution of anthropogenic aerosols, good agreement is reached between the direct forcing derived from MODIS and the Hadley Centre climate model HadGEM2-A over clear-sky oceans. Results also suggest that satellite estimates of anthropogenic aerosol optical depth over land should be coupled with a robust validation strategy in order to refine the observation-based estimate of aerosol direct radiative forcing. In addition, the complex problem of deriving the aerosol direct radiative forcing when aerosols are located above cloud still needs to be addressed.
Abstract:
The wood mouse is a common and abundant species in agricultural landscapes and is a focal species in pesticide risk assessment. Empirical studies on the ecology of the wood mouse have provided sufficient information for the species to be modelled mechanistically. An individual-based model was constructed to explicitly represent the locations and movement patterns of individual mice. This, together with the schedule of pesticide application, allows prediction of the risk to the population from pesticide exposure. The model included life-history traits of wood mice as well as typical landscape dynamics in agricultural farmland in the UK. The model obtains a good fit to the available population data and is fit for risk assessment purposes. It can help identify spatio-temporal situations with the largest potential risk of exposure and enables extrapolation from individual-level endpoints to population-level effects. The largest risk of exposure to pesticides was found when good crop growth in the “sink” fields coincided with high “source” population densities in the hedgerows. Keywords: Population dynamics, Pesticides, Ecological risk assessment, Habitat choice, Agent-based model, NetLogo
Abstract:
Climate models predict a large range of possible future temperatures for a particular scenario of future emissions of greenhouse gases and other anthropogenic forcings of climate. Given that further warming in coming decades could threaten increasing risks of climatic disruption, it is important to determine whether model projections are consistent with temperature changes already observed. This can be achieved by quantifying the extent to which increases in well mixed greenhouse gases and changes in other anthropogenic and natural forcings have already altered temperature patterns around the globe. Here, for the first time, we combine multiple climate models into a single synthesized estimate of future warming rates consistent with past temperature changes. We show that the observed evolution of near-surface temperatures appears to indicate lower ranges (5–95%) for warming (0.35–0.82 K and 0.45–0.93 K by the 2020s (2020–2029) relative to 1986–2005 under the RCP4.5 and 8.5 scenarios respectively) than the equivalent ranges projected by the CMIP5 climate models (0.48–1.00 K and 0.51–1.16 K respectively). Our results indicate that for each RCP the upper end of the range of CMIP5 climate model projections is inconsistent with past warming.
Abstract:
A simple four-dimensional assimilation technique, called Newtonian relaxation, has been applied to the Hamburg climate model (ECHAM) to enable comparison of model output with observations for short periods of time. The prognostic model variables vorticity, divergence, temperature, and surface pressure have been relaxed toward European Centre for Medium-Range Weather Forecasts (ECMWF) global meteorological analyses. Several experiments have been carried out, in which the values of the relaxation coefficients have been varied to find out which values are most suitable for our purpose. To be able to use the method for validation of model physics or chemistry, good agreement of the model-simulated mass and wind fields is required. In addition, the model physics should not be disturbed too strongly by the relaxation forcing itself. Both aspects have been investigated. Good agreement with basic observed quantities, like wind, temperature, and pressure, is obtained for most simulations in the extratropics. Derived variables, like precipitation and evaporation, have been compared with ECMWF forecasts and observations. Agreement for these variables is smaller than for the basic observed quantities. Nevertheless, considerable improvement is obtained relative to a control run without assimilation. Differences between tropics and extratropics are smaller than for the basic observed quantities. Results also show that precipitation and evaporation are affected by a sort of continuous spin-up which is introduced by the relaxation: the bias (ECMWF-ECHAM) increases with increasing relaxation forcing. In agreement with this result, we found that with increasing relaxation forcing the vertical exchange of tracers by turbulent boundary layer mixing and, to a lesser extent, by convection, is reduced.
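Newtonian relaxation (nudging) adds a restoring term to each prognostic equation, pulling the model state toward the analyses on a chosen time scale. A minimal sketch on a scalar toy model (the tendency function and all values are illustrative, not ECHAM's):

```python
def nudged_step(x, x_analysis, dt, tau, tendency):
    """One forward-Euler step of a model with Newtonian relaxation:
        dx/dt = F(x) + (x_analysis - x) / tau
    tau is the relaxation time scale; a smaller tau (i.e. a larger
    relaxation coefficient 1/tau) forces the state more strongly toward
    the analyses, at the cost of disturbing the model's own physics."""
    return x + dt * (tendency(x) + (x_analysis - x) / tau)

# Toy scalar "model" with a damping tendency, relaxed toward a constant
# analysis value of 5.0.
tendency = lambda x: -0.1 * x
x, x_an = 0.0, 5.0
for _ in range(1000):
    x = nudged_step(x, x_an, dt=0.1, tau=1.0, tendency=tendency)
# The state settles where the model tendency balances the nudging term:
# -0.1 x + (5 - x) = 0, i.e. x = 5 / 1.1, close to but not at the analysis.
```

The residual offset from the analysis value is the scalar analogue of the relaxation-strength-dependent bias reported for precipitation and evaporation.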
Abstract:
Mesospheric temperature inversions are well established observed phenomena, yet their properties remain the subject of ongoing research. Comparisons between Rayleigh-scatter lidar temperature measurements obtained by the University of Western Ontario's Purple Crow Lidar (42.9°N, 81.4°W) and the Canadian Middle Atmosphere Model are used to quantify the statistics of inversions. In both model and measurements, inversions occur most frequently in the winter and exhibit an average amplitude of ∼10 K. The model exhibits virtually no inversions in the summer, while the measurements show a strongly reduced frequency of occurrence with an amplitude about half that in the winter. A simple theory of mesospheric inversions based on wave saturation is developed, with no adjustable parameters. It predicts that the environmental lapse rate must be less than half the adiabatic lapse rate for an inversion to form, and it predicts the ratio of the inversion amplitude and thickness as a function of environmental lapse rate. Comparison of this prediction to the actual amplitude/thickness ratio using the lidar measurements shows good agreement between theory and measurements.
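The wave-saturation threshold stated above is simple enough to encode directly. A one-line check in Python, with the dry adiabatic lapse rate; the numeric test values are illustrative:

```python
GAMMA_AD = 9.8  # K km^-1, dry adiabatic lapse rate

def inversion_possible(env_lapse_rate):
    """Saturation criterion quoted in the text: an inversion can form
    only if the environmental lapse rate (K km^-1) is less than half
    the adiabatic lapse rate."""
    return env_lapse_rate < 0.5 * GAMMA_AD

# E.g. a 3 K km^-1 environment admits inversions; a 7 K km^-1 one does not.
```

The same theory's second prediction, the amplitude-to-thickness ratio as a function of environmental lapse rate, is what the lidar comparison tests.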