172 results for Emerging Modelling Paradigms and Model Coupling
Abstract:
Model differences in projections of extratropical regional climate change due to increasing greenhouse gases are investigated using two atmospheric general circulation models (AGCMs): ECHAM4 (Max Planck Institute, version 4) and CCM3 (National Center for Atmospheric Research Community Climate Model, version 3). Sea-surface temperature (SST) fields calculated from observations and from coupled versions of the two models are used to force each AGCM in experiments based on time-slice methodology. Results from the forced AGCMs are then compared to coupled model results from the Coupled Model Intercomparison Project 2 (CMIP2) database. The time-slice methodology is verified by showing that the response of each model to doubled CO2 and SST forcing from the CMIP2 experiments is consistent with the results of the coupled GCMs. The differences in the responses of the models are attributed to (1) the different tropical SST warmings in the coupled simulations and (2) the different atmospheric model responses to the same tropical SST warmings. Both are found to contribute importantly to differences in the implied Northern Hemisphere (NH) winter extratropical regional 500 mb height and tropical precipitation climate changes. Forced teleconnection patterns arising from tropical SST differences are primarily responsible for sensitivity differences in the extratropical North Pacific, but have relatively little impact on the North Atlantic. There are also significant differences in the extratropical response of the models to the same tropical SST anomalies due to differences in numerical and physical parameterizations; these parameterization differences dominate in the North Atlantic. Differences between the control climates of the two coupled models and the current climate, in particular for the coupled model containing CCM3, are also shown to be important in producing differences in extratropical regional sensitivity.
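The two-way attribution described in this abstract lends itself to a simple decomposition: with each AGCM forced by each coupled model's SST warming, the inter-model response difference splits exactly into an SST-forcing term and a model-physics term. The sketch below illustrates the arithmetic with invented placeholder numbers, not the study's data.

```python
# Illustrative decomposition implied by the time-slice design: each AGCM
# is forced with each coupled model's tropical SST warming, and the
# inter-model response difference is split into an SST term and a model
# term. All numbers are placeholders, not results from the paper.

# response[agcm][sst] = hypothetical regional 500 mb height change (m)
# when `agcm` is forced with the SST warming from coupled model `sst`
response = {
    "ECHAM4": {"ECHAM4_SST": 42.0, "CCM3_SST": 30.0},
    "CCM3":   {"ECHAM4_SST": 25.0, "CCM3_SST": 18.0},
}

total = response["ECHAM4"]["ECHAM4_SST"] - response["CCM3"]["CCM3_SST"]
# contribution of the different coupled-model SST warmings (same AGCM)
sst_term = response["ECHAM4"]["ECHAM4_SST"] - response["ECHAM4"]["CCM3_SST"]
# contribution of different model physics (same SST forcing)
model_term = response["ECHAM4"]["CCM3_SST"] - response["CCM3"]["CCM3_SST"]

assert abs(total - (sst_term + model_term)) < 1e-9  # decomposition is exact
print(f"total={total:+.1f} m  SST forcing={sst_term:+.1f} m  model physics={model_term:+.1f} m")
```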
Abstract:
Dynamical downscaling is frequently used to investigate the dynamical variables of extratropical cyclones, for example precipitation, using very high-resolution models nested within coarser-resolution models to understand the processes that lead to intense precipitation. It is also used in climate change studies, either with long time series to investigate trends in precipitation, or to examine small-scale dynamical processes in specific case studies. This study investigates some of the problems associated with dynamical downscaling and identifies the optimum configuration for matching the distribution and intensity of a precipitation field to observations. The Met Office Unified Model is run in limited-area mode with grid spacings of 12, 4 and 1.5 km, driven by boundary conditions from the ECMWF Operational Analysis, to produce high-resolution simulations of the summer 2007 UK flooding events. The numerical weather prediction model is initialised at varying times before the peak precipitation is observed, to test the importance of the initialisation and boundary conditions and to establish for how long the simulation can usefully be run. The results are verified against rain-gauge data and show that the model intensities are most similar to observations when the model is initialised 12 hours before the peak precipitation is observed. It was also shown that using non-gridded datasets makes verification more difficult, with the density of observations affecting the intensities observed. It is concluded that the simulations can produce realistic precipitation intensities when driven by the coarser-resolution data.
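A minimal sketch of the kind of point verification described here: comparing modelled peak intensities across initialisation lead times and grid spacings against a gauge observation. Every value below is an invented placeholder, not Met Office output or gauge data.

```python
# Hypothetical verification table: model peak precipitation intensity by
# initialisation lead time (hours before the observed peak) and grid
# spacing (km), compared against one rain-gauge observation. All values
# are invented for illustration.
gauge_peak_mm_h = 24.0  # assumed observed peak intensity, mm/h

model_peaks = {  # (lead_h, dx_km): modelled peak intensity, mm/h
    (24, 12.0): 11.0, (24, 4.0): 14.5, (24, 1.5): 16.0,
    (12, 12.0): 15.0, (12, 4.0): 21.0, (12, 1.5): 23.0,
    (6, 12.0): 13.5,  (6, 4.0): 18.0,  (6, 1.5): 19.5,
}

for (lead_h, dx_km), peak in sorted(model_peaks.items()):
    bias = peak - gauge_peak_mm_h  # positive = model wetter than gauge
    print(f"lead {lead_h:>2} h, dx {dx_km:>4} km: peak {peak:5.1f} mm/h (bias {bias:+5.1f})")
```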
Abstract:
This article describes a case study involving information technology managers and their new-programmer recruitment policy, but the primary interest is methodological. The processes of issue generation and selection and of model conceptualization are described. Early use of “magnetic hexagons” allowed the generation of a range of issues, most of which would not have emerged if system dynamics elicitation techniques had been employed. With the selection of a specific issue, flow diagramming was used to conceptualize a model, with computer implementation and scenario generation following naturally. Observations are made on the processes of system dynamics modeling, particularly on the need to employ general techniques of knowledge elicitation in the early stages of interventions. It is proposed that flexible approaches should be used to generate, select, and study the issues, since these reduce any biasing of the elicitation toward system dynamics problems and also allow the participants to take up the most appropriate problem-structuring approach.
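For readers unfamiliar with the modeling stage this abstract refers to, the sketch below shows the sort of stock-and-flow structure that a recruitment-policy model conceptualized by flow diagramming typically reduces to. The structure and parameters are invented for illustration and are not the study's model.

```python
# A minimal stock-and-flow sketch in the spirit of system dynamics: a
# single "programmers" stock driven by a recruitment inflow (closing the
# gap to a desired staffing level over a hiring delay) and an attrition
# outflow. All parameters are hypothetical.
def simulate(years=10, dt=0.25):
    programmers = 50.0      # initial stock (people)
    desired = 80.0          # target staffing level (people)
    hiring_delay = 1.5      # years taken to close the staffing gap
    attrition_rate = 0.15   # fraction of staff leaving per year
    history, t = [], 0.0
    while t <= years:
        history.append((t, programmers))
        recruitment = max(desired - programmers, 0.0) / hiring_delay
        attrition = attrition_rate * programmers
        programmers += (recruitment - attrition) * dt  # Euler integration
        t += dt
    return history

for t, p in simulate()[::8]:  # report every 2 simulated years
    print(f"t={t:4.1f} y  programmers={p:5.1f}")
```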
Abstract:
Calculations using a numerical model of the convection-dominated high-latitude ionosphere are compared with observations made by EISCAT as part of the UK-POLAR Special Programme. The data used were for 24–25 October 1984, which was characterized by an unusually steady IMF, with Bz < 0 and By > 0; in the calculations it was assumed that a steady IMF implies steady convection conditions. Using the electric field models of Heppner and Maynard (1983) appropriate to By > 0 and precipitation data taken from Spiro et al. (1982), we calculated the velocities and electron densities appropriate to the EISCAT observations. Many of the general features of the velocity data were reproduced by the model. In particular, the phasing of the change from eastward to westward flow in the vicinity of the Harang discontinuity, flows near the dayside throat and a region of slow flow at higher latitudes near dusk were well reproduced. In the afternoon sector, modelled velocity values were significantly less than those observed. Electron density calculations showed good agreement with EISCAT observations near the F-peak, but compared poorly with observations near 211 km. In both cases, the greatest disagreement occurred in the early part of the observations, where the convection pattern was poorly known and showed some evidence of long-term temporal change. Possible causes for the disagreement between observations and calculations are discussed and shown to raise interesting and, as yet, unresolved questions concerning the interpretation of the data. For the data set used, the late-afternoon dip in electron density observed near the F-peak and interpreted as the signature of the mid-latitude trough is well reproduced by the calculations. The calculations indicate that it does not arise from long residence times of plasma on the nightside, but is the signature of a gap between two major ionization sources, viz. photoionization and particle precipitation.
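The convection velocities that such models derive from an imposed electric field follow the E x B drift relation, v = (E x B) / |B|^2. A minimal numerical sketch with illustrative field magnitudes (not the Heppner-Maynard model fields) is:

```python
# E x B plasma drift, the basic relation behind convection velocities in
# high-latitude ionosphere models. Field values below are illustrative
# order-of-magnitude choices, not the model fields used in the study.
import numpy as np

E = np.array([0.0, 30e-3, 0.0])   # electric field, V/m (30 mV/m, northward)
B = np.array([0.0, 0.0, -50e-6])  # magnetic field, T (~50 uT, downward)

v = np.cross(E, B) / np.dot(B, B)  # drift velocity, m/s
print(f"E x B drift: {v} m/s, speed ~ {np.linalg.norm(v):.0f} m/s")
```

With these magnitudes the drift speed is E/B = 0.03 / 5e-5 = 600 m/s, a typical high-latitude convection speed.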
Abstract:
This paper addresses the challenging domain of vehicle classification from pole-mounted roadway cameras, specifically from side-profile views. A new public vehicle dataset is made available, consisting of over 10,000 side-profile images (86 make/model and 9 sub-type classes). Five state-of-the-art classifiers are applied to the dataset, with the best achieving high classification rates of 98.7% for sub-type and 99.7-99.9% for make and model recognition, confirming the assertion that single vehicle side-profile images can be used for robust classification.
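As an indication of how classifiers of this kind are typically built, here is a transfer-learning sketch: a pretrained ResNet-50 with its head replaced for the 86 make/model classes. The paper's actual classifiers, training setup and data pipeline are not specified here, so everything below is an assumption-laden illustration rather than the authors' method.

```python
# Transfer-learning sketch for make/model classification (hypothetical
# setup; not the paper's classifiers). Requires torch and torchvision.
import torch
import torch.nn as nn
from torchvision import models

NUM_MAKE_MODEL_CLASSES = 86  # make/model class count reported above

# downloads pretrained ImageNet weights; replace the final layer
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_MAKE_MODEL_CLASSES)

# one illustrative training step on a dummy batch of side-profile images
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
images = torch.randn(8, 3, 224, 224)  # stand-in for real camera images
labels = torch.randint(0, NUM_MAKE_MODEL_CLASSES, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"dummy-batch loss: {loss.item():.3f}")
```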
Abstract:
The interaction between tryptophan-rich puroindoline proteins and model bacterial membranes at the air-liquid interface has been investigated by FTIR spectroscopy, surface pressure measurements and Brewster angle microscopy. The role of different lipid constituents in the interactions between lipid membrane and protein was studied using wild-type (Pin-b) and mutant (Trp44 to Arg44, Pin-bs) puroindoline proteins. The results show differences in the lipid selectivity of the two proteins in terms of preferential binding to specific lipid head groups in mixed lipid systems. Wild-type Pin-b was able to penetrate mixed layers of phosphatidylethanolamine (PE) and phosphatidylglycerol (PG) head groups more deeply than the mutant Pin-bs. Increasing saturation of the lipid tails increased penetration and adsorption of wild-type Pin-b, but again the response of the mutant form differed. The results provide insight into the role of membrane architecture, lipid composition and fluidity in the antimicrobial activity of proteins. The data show distinct differences in the lipid-binding behavior of Pin-b as a result of a single residue mutation, highlighting the importance of hydrophobic and charged amino acids in antimicrobial protein and peptide activity.
Abstract:
Given capacity limits, only a subset of stimuli give rise to a conscious percept. Neurocognitive models suggest that humans have evolved mechanisms that operate without awareness and prioritize threatening stimuli over neutral stimuli in subsequent perception. In this meta-analysis, we review evidence for this ‘standard hypothesis’ emanating from three widely used, but rather different, experimental paradigms that have been used to manipulate awareness. We found a small pooled threat-bias effect in the masked visual probe paradigm, a medium effect in the binocular rivalry paradigm and highly inconsistent effects in the breaking continuous flash suppression paradigm. Substantial heterogeneity was explained by stimulus type: the only threat stimuli that were robustly prioritized across all three paradigms were fearful faces. Meta-regression revealed that anxiety may modulate threat biases, but only under specific presentation conditions. We also found that insufficiently rigorous awareness measures, inadequate control of response biases and low-level confounds may undermine claims of genuine unconscious threat processing. Considering the data together, we suggest that uncritical acceptance of the standard hypothesis is premature: current behavioral evidence for threat-sensitive visual processing that operates without awareness is weak.
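Pooled effects like those reported here are conventionally computed with a random-effects model. Below is a minimal DerSimonian-Laird sketch using invented effect sizes and variances, not the meta-analysis data.

```python
# DerSimonian-Laird random-effects pooling: estimate between-study
# variance (tau^2) from Cochran's Q, then pool with adjusted weights.
# Effect sizes and sampling variances below are hypothetical.
import math

effects = [0.10, 0.25, 0.05, 0.40, 0.15]    # hypothetical study effects (g)
variances = [0.02, 0.03, 0.01, 0.05, 0.02]  # hypothetical sampling variances

w = [1 / v for v in variances]              # fixed-effect (inverse-variance) weights
fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
Q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
df = len(effects) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / c)               # between-study variance estimate

w_re = [1 / (v + tau2) for v in variances]  # random-effects weights
pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
print(f"pooled g = {pooled:.3f} "
      f"(95% CI {pooled - 1.96 * se:.3f} to {pooled + 1.96 * se:.3f}), "
      f"tau^2 = {tau2:.3f}")
```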
Abstract:
We present a kinetic double-layer model coupling aerosol surface and bulk chemistry (K2-SUB), based on the PRA framework of gas-particle interactions (Pöschl-Rudich-Ammann, 2007). K2-SUB is applied to a popular model system of atmospheric heterogeneous chemistry: the interaction of ozone with oleic acid. We show that our modelling approach allows de-convoluting surface and bulk processes, which has been a controversial topic and remains an important challenge for the understanding and description of atmospheric aerosol transformation. In particular, we demonstrate how a detailed treatment of adsorption and reaction at the surface can be coupled to a description of bulk reaction and transport that is consistent with traditional resistor model formulations. From literature data we have derived a consistent set of kinetic parameters that characterise mass transport and chemical reaction of ozone at the surface and in the bulk of oleic acid droplets. Owing to the wide range of rate coefficients reported from different experimental studies, the exact proportions between surface and bulk reaction rates remain uncertain. Nevertheless, the model results suggest an important role of chemical reaction in the bulk and an approximate upper limit of ~10⁻¹¹ cm² s⁻¹ for the surface reaction rate coefficient. Sensitivity studies show that the surface accommodation coefficient of the gas-phase reactant has a strong non-linear influence on both surface and bulk chemical reactions. We suggest that K2-SUB may be used to design, interpret and analyse future experiments for better discrimination between surface and bulk processes in the oleic acid-ozone system, as well as in other heterogeneous reaction systems of atmospheric relevance.
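A heavily simplified two-compartment sketch of the surface/bulk de-convolution idea: oleic acid is consumed through a surface channel (adsorbed ozone reacting at the interface) and a bulk channel (dissolved ozone reacting in the droplet interior). All concentrations and the bulk rate coefficient below are order-of-magnitude placeholders, not the paper's fitted parameters; only the surface coefficient echoes the upper limit quoted above.

```python
# Two decoupled loss channels for oleic acid (OL): surface reaction with
# adsorbed ozone and bulk reaction with dissolved ozone, both treated as
# pseudo-first-order with ozone held fixed. Illustrative values only.
from scipy.integrate import solve_ivp

O3_SURF = 1e10   # adsorbed ozone, molecules cm^-2 (assumed constant)
O3_BULK = 1e13   # dissolved ozone, molecules cm^-3 (assumed constant)
K_SURF = 1e-11   # surface rate coefficient, cm^2 s^-1 (upper limit above)
K_BULK = 1e-15   # bulk rate coefficient, cm^3 s^-1 (placeholder)

def rhs(t, y):
    ol_surf, ol_bulk = y  # OL at surface (cm^-2) and in bulk (cm^-3)
    return [-K_SURF * O3_SURF * ol_surf,
            -K_BULK * O3_BULK * ol_bulk]

sol = solve_ivp(rhs, (0.0, 300.0), [1e14, 1e21], t_eval=[0, 30, 100, 300])
for t, s, b in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"t={t:5.0f} s  surface OL={s:.2e} cm^-2  bulk OL={b:.2e} cm^-3")
```

With these placeholders the surface channel (pseudo-first-order rate 0.1 s⁻¹) depletes an order of magnitude faster than the bulk channel (0.01 s⁻¹), which is the kind of separation the full K2-SUB model is designed to resolve.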
Abstract:
Europe's widely distributed climate modelling expertise, now organized in the European Network for Earth System Modelling (ENES), is both a strength and a challenge. Recognizing this, the European Union's Program for Integrated Earth System Modelling (PRISM) infrastructure project aims to design a flexible, user-friendly environment to assemble, run and post-process Earth system models. PRISM started in December 2001 with a duration of three years. This paper presents the major stages of PRISM, including: (1) the definition and promotion of scientific and technical standards to increase component modularity; (2) the development of an end-to-end software environment (graphical user interface, coupling and I/O system, diagnostics, visualization) to launch, monitor and analyse complex Earth system models built around state-of-the-art community component models (atmosphere, ocean, atmospheric chemistry, ocean biogeochemistry, sea ice, land surface); and (3) testing and quality standards to ensure high performance on a variety of computing platforms. PRISM is emerging as a core strategic software infrastructure for building the European research area in Earth system sciences.
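The coupling pattern that such an environment standardises is simple to state: component models advance independently and exchange boundary fields through a coupler at fixed intervals. The toy loop below is a schematic Python illustration of that pattern, not the PRISM or OASIS API, and its physics is deliberately trivial.

```python
# Toy atmosphere-ocean coupling loop: each component steps forward and
# the exchanged fields (SST one way, heat flux the other) are swapped at
# every coupling interval. Schematic only; not a real coupler interface.
import numpy as np

class Atmosphere:
    def __init__(self, n): self.t_air = np.full(n, 15.0)  # deg C
    def step(self, sst):                   # relax air temperature toward SST
        self.t_air += 0.1 * (sst - self.t_air)
        return 20.0 * (self.t_air - sst)   # toy heat flux to ocean, W/m^2

class Ocean:
    def __init__(self, n): self.sst = np.full(n, 18.0)    # deg C
    def step(self, heat_flux):             # adjust mixed layer with the flux
        self.sst += heat_flux * 1e-3
        return self.sst

atm, ocn = Atmosphere(4), Ocean(4)
sst = ocn.sst.copy()
for step in range(5):                      # coupling loop: exchange each step
    flux = atm.step(sst)                   # atmosphere receives SST, returns flux
    sst = ocn.step(flux)                   # ocean receives flux, returns new SST
    print(f"step {step}: mean SST {sst.mean():.3f} C, mean flux {flux.mean():+.3f} W/m^2")
```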
Abstract:
A wide variety of exposure models are currently employed for health risk assessments. Individual models have been developed to meet the chemical exposure assessment needs of Government, industry and academia. These existing exposure models can be broadly categorised according to the following types of exposure source: environmental, dietary, consumer product, occupational, and aggregate and cumulative. Aggregate exposure models consider multiple exposure pathways, while cumulative models consider multiple chemicals. In this paper each of these basic types of exposure model is briefly described, along with any inherent strengths or weaknesses, with the UK as a case study. Examples are given of specific exposure models that are currently used, or that have the potential for future use, and key differences in the modelling approaches adopted are discussed. The use of exposure models is currently fragmentary in nature: specific organisations with exposure assessment responsibilities tend to use a limited range of models, and the modelling techniques adopted in current exposure models have evolved along distinct lines for the various types of source. Indeed, different organisations may be using different models for very similar exposure assessment situations. This lack of consistency between exposure modelling practices can make the exposure assessment process more complex to understand, can lead to inconsistency between organisations in how critical modelling issues are addressed (e.g. variability and uncertainty), and has the potential to communicate mixed messages to the general public. Further work should be conducted to integrate the various approaches and models, where possible and where regulatory remits allow, to achieve a coherent and consistent exposure modelling process. We recommend the development of an overall framework for exposure and risk assessment with common approaches and methodology, a screening tool for exposure assessment, collection of better input data, probabilistic modelling, validation of model inputs and outputs, and a closer working relationship between scientists, policy makers and staff from different Government departments. A much increased effort is required in the UK to address these issues. The result will be a more robust, transparent, valid and more comparable exposure and risk assessment process.
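The probabilistic modelling recommended above usually means Monte Carlo propagation of input distributions through the exposure equation. A minimal aggregate-exposure sketch is shown below; the two pathways and all distribution parameters are invented for illustration, not regulatory values.

```python
# Monte Carlo aggregate exposure: sum intake over two hypothetical
# pathways with lognormal inputs, normalise by body weight, and read
# off distribution percentiles. All parameters are invented.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# pathway 1: dietary intake = concentration (mg/kg) x consumption (kg/day)
conc = rng.lognormal(mean=np.log(0.05), sigma=0.5, size=n)
consumption = rng.lognormal(mean=np.log(0.3), sigma=0.3, size=n)

# pathway 2: consumer product, direct dose (mg/day)
product_dose = rng.lognormal(mean=np.log(0.01), sigma=0.8, size=n)

body_weight = rng.normal(70.0, 12.0, size=n).clip(min=40.0)   # kg
exposure = (conc * consumption + product_dose) / body_weight  # mg/kg bw/day

print(f"median: {np.median(exposure):.2e} mg/kg bw/day, "
      f"95th percentile: {np.percentile(exposure, 95):.2e}")
```

The percentile outputs are the point of the exercise: they express variability and uncertainty explicitly, which is one of the critical modelling issues the paper notes organisations handle inconsistently.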
Abstract:
Soil organic carbon (SOC) plays a vital role in ecosystem function, determining soil fertility, water-holding capacity and susceptibility to land degradation. In addition, SOC is related to atmospheric CO2 levels, with soils having the potential for C release or sequestration depending on land use, land management and climate. The United Nations Framework Convention on Climate Change and its Kyoto Protocol, and the United Nations Conventions to Combat Desertification and on Biodiversity, all recognize the importance of SOC and point to the need for quantification of SOC stocks and changes. An understanding of SOC stocks and changes at the national and regional scale is necessary to further our understanding of the global C cycle, to assess the responses of terrestrial ecosystems to climate change and to aid policy makers in making land use/management decisions. Several studies have considered SOC stocks at the plot scale, but these are site-specific and of limited value in making inferences about larger areas. Some studies have used empirical methods to estimate SOC stocks and changes at the regional scale, but such studies are limited in their ability to project future changes, and most have been carried out using temperate data sets. The computational method outlined by the Intergovernmental Panel on Climate Change (IPCC) has been used to estimate SOC stock changes at the regional scale in several studies, including a recent study considering five contrasting eco-regions. This 'one-step' approach fails to account for the dynamic manner in which SOC changes are likely to occur following changes in land use and land management. A dynamic modelling approach allows estimates to be made in a manner that accounts for the underlying processes leading to SOC change. Ecosystem models designed for site-scale applications can be linked to spatial databases, giving spatially explicit results that allow geographic areas of change in SOC stocks to be identified. Some studies have used variations on this approach to estimate SOC stock changes at the sub-national and national scale for areas of the USA and Europe, and at the watershed scale for areas of Mexico and Cuba. However, a need remained for a national- and regional-scale, spatially explicit system that is generically applicable and can be applied to as wide a range of soil types, climates and land uses as possible. The Global Environment Facility Soil Organic Carbon (GEFSOC) Modelling System was developed in response to this need. The GEFSOC system allows estimates of SOC stocks and changes to be made for diverse conditions, providing essential information for countries wishing to take part in an emerging C market, and bringing us closer to an understanding of the future role of soils in the global C cycle.
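At its simplest, the dynamic modelling approach contrasted here with the one-step IPCC method evolves SOC stocks as carbon inputs minus first-order decomposition, dC/dt = I - kC. The single-pool sketch below illustrates that logic with invented parameters; real systems such as GEFSOC use multi-pool ecosystem models linked to spatial databases.

```python
# Single-pool SOC dynamics: annual stocks under input rate I (t C/ha/yr)
# and first-order decay constant k (1/yr). A land-use change is modelled
# as a step change in I and k. All values are hypothetical.
def soc_trajectory(c0, inputs, k, years):
    """Annual-step SOC stock (t C/ha) from dC/dt = inputs - k*C."""
    stocks, c = [c0], c0
    for _ in range(years):
        c += inputs - k * c
        stocks.append(c)
    return stocks

# hypothetical conversion: grassland -> cropland halves carbon inputs
# and speeds decomposition, so the stock relaxes toward a lower I/k
before = soc_trajectory(c0=60.0, inputs=3.0, k=0.05, years=20)
after = soc_trajectory(c0=before[-1], inputs=1.5, k=0.08, years=50)
print(f"equilibrium before ~ {3.0 / 0.05:.0f} t C/ha; "
      f"after conversion trends toward ~ {1.5 / 0.08:.1f} t C/ha "
      f"(stock after 50 y: {after[-1]:.1f})")
```

The equilibrium stock I/k makes the dynamic point explicit: a change in land use shifts both the target stock and the transient path toward it, which the one-step approach cannot represent.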