873 results for bare public-key model
Abstract:
Many urban surface energy balance models now exist. These vary in complexity from simple schemes that represent the city as a concrete slab, to those which incorporate detailed representations of momentum and energy fluxes distributed within the atmospheric boundary layer. While many of these schemes have been evaluated against observations, with some models even compared with the same data sets, such evaluations have not been undertaken in a controlled manner to enable direct comparison. For other types of climate model, for instance the Project for Intercomparison of Land-Surface Parameterization Schemes (PILPS) experiments (Henderson-Sellers et al., 1993), such controlled comparisons have been shown to provide important insights into both the mechanics of the models and the physics of the real world. This paper describes the progress that has been made to date on a systematic and controlled comparison of urban surface schemes. The models to be considered, and their key attributes, are described, along with the methodology to be used for the evaluation.
Abstract:
A novel analytical model for mixed-phase, unblocked and unseeded orographic precipitation with embedded convection is developed and evaluated. The model takes an idealised background flow and terrain geometry, and calculates the area-averaged precipitation rate and other microphysical quantities. The results provide insight into key physical processes, including cloud condensation, vapour deposition, evaporation, sublimation, as well as precipitation formation and sedimentation (fallout). To account for embedded convection in nominally stratiform clouds, diagnostics for purely convective and purely stratiform clouds are calculated independently and combined using weighting functions based on relevant dynamical and microphysical time scales. An in-depth description of the model is presented, as well as a quantitative assessment of its performance against idealised, convection-permitting numerical simulations with a sophisticated microphysics parameterisation. The model is found to accurately reproduce the simulation diagnostics over most of the parameter space considered.
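As a rough illustration of the weighting idea described above, the sketch below blends purely stratiform and purely convective precipitation diagnostics with a weight built from their characteristic time scales. The weighting form, parameter names and values are assumptions made for illustration, not the paper's actual formulation.

```python
# Hypothetical sketch of blending stratiform and convective diagnostics
# with a time-scale-based weight. The weighting form and the numbers are
# illustrative assumptions, not the paper's formulation.

def blended_precip_rate(p_strat, p_conv, tau_conv, tau_strat):
    """Area-averaged precipitation rate (mm/h) as a convex combination
    of the purely stratiform and purely convective diagnostics."""
    # A short convective time scale relative to the stratiform one
    # shifts weight toward the convective diagnostic.
    w_conv = tau_strat / (tau_strat + tau_conv)  # assumed weighting function
    return w_conv * p_conv + (1.0 - w_conv) * p_strat

# Example: a strongly convective regime (short convective time scale)
print(blended_precip_rate(p_strat=1.2, p_conv=4.0,
                          tau_conv=600.0, tau_strat=3600.0))
```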
Abstract:
A new model has been developed for assessing multiple sources of nitrogen in catchments. The model (INCA) is process based and uses reaction kinetic equations to simulate the principal mechanisms operating. The model allows for plant uptake, surface and sub-surface pathways and can simulate up to six land uses simultaneously. The model can be applied to a catchment as a semi-distributed simulation and has an inbuilt multi-reach structure for river systems. Sources of nitrogen can be from atmospheric deposition, from the terrestrial environment (e.g. agriculture, leakage from forest systems, etc.), from urban areas or from direct discharges via sewage or intensive farm units. The model operates on a daily time step and can provide information in the form of time series at key sites, as profiles down river systems, or as statistical distributions. The process model is described and, in a companion paper, the model is applied to the River Tywi catchment in South Wales and the Great Ouse in Bedfordshire.
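The "reaction kinetic equations" mentioned above are, in spirit, coupled first-order rate laws for the nitrogen stores. The minimal sketch below shows that structure for hypothetical nitrate and ammonium pools on a daily time step; the pools, processes and rate constants are illustrative assumptions, not INCA's published equations.

```python
# Minimal sketch of coupled first-order nitrogen kinetics (kg N/ha).
# Pools, processes and rate constants are illustrative assumptions,
# not INCA's actual equations or parameter values.

def step_nitrogen(no3, nh4, deposition, k_nitrif=0.05, k_uptake=0.02,
                  k_denitrif=0.01, dt=1.0):
    """Advance nitrate (no3) and ammonium (nh4) stores by one day."""
    nitrification = k_nitrif * nh4        # NH4 -> NO3
    uptake = k_uptake * no3               # plant uptake of nitrate
    denitrification = k_denitrif * no3    # gaseous loss
    nh4 += dt * (deposition - nitrification)
    no3 += dt * (nitrification - uptake - denitrification)
    return no3, nh4

no3, nh4 = 5.0, 2.0
for day in range(365):                    # one year of daily steps
    no3, nh4 = step_nitrogen(no3, nh4, deposition=0.03)
print(round(no3, 2), round(nh4, 2))
```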
Abstract:
Climate model ensembles are widely heralded for their potential to quantify uncertainties and generate probabilistic climate projections. However, such technical improvements to modeling science will do little to deliver on their ultimate promise of improving climate policymaking and adaptation unless the insights they generate can be effectively communicated to decision makers. While some of these communicative challenges are unique to climate ensembles, others are common to hydrometeorological modeling more generally, and to the tensions arising between the imperatives for saliency, robustness, and richness in risk communication. The paper reviews emerging approaches to visualizing and communicating climate ensembles and compares them to the more established and thoroughly evaluated communication methods used in the numerical weather prediction domains of day-to-day weather forecasting (in particular probabilities of precipitation), hurricane and flood warning, and seasonal forecasting. This comparative analysis informs recommendations on best practice for climate modelers, as well as prompting some further thoughts on key research challenges to improve the future communication of climate change uncertainties.
Abstract:
This paper introduces and evaluates DryMOD, a dynamic water balance model of the key hydrological processes in drylands that is based on free, public-domain datasets. The rainfall model of DryMOD makes optimal use of spatially disaggregated Tropical Rainfall Measuring Mission (TRMM) datasets to simulate hourly rainfall intensities at a spatial resolution of 1-km. Regional-scale applications of the model in seasonal catchments in Tunisia and Senegal characterize runoff and soil moisture distribution and dynamics in response to varying rainfall data inputs and soil properties. The results highlight the need for hourly-based rainfall simulation and for correcting TRMM 3B42 rainfall intensities for the fractional cover of rainfall (FCR). Without FCR correction and disaggregation to 1 km, TRMM 3B42 based rainfall intensities are too low to generate surface runoff and to induce substantial changes to soil moisture storage. The outcomes from the sensitivity analysis show that topsoil porosity is the most important soil property for simulation of runoff and soil moisture. Thus, we demonstrate the benefit of hydrological investigations at a scale for which reliable information on soil profile characteristics exists and which is sufficiently fine to account for their heterogeneity. Where such information is available, application of DryMOD can assist in the spatial and temporal planning of water harvesting according to runoff-generating areas and the runoff ratio, as well as in the optimization of agricultural activities based on realistic representation of soil moisture conditions.
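To make the runoff mechanism concrete, here is a minimal hourly bucket sketch in the spirit of the abstract: grid-average rainfall is concentrated over the rained-on fraction (the FCR correction), infiltration is limited by the remaining pore space of the topsoil, and the excess becomes surface runoff. Variable names, the correction form and all values are assumptions, not DryMOD's published equations.

```python
# Illustrative hourly bucket water balance: FCR-corrected rainfall,
# infiltration limited by topsoil pore space, excess becoming runoff.
# All names, the correction and the values are assumptions, not DryMOD.

def hourly_step(storage, rain_grid_mm, fcr, porosity, depth_mm=100.0,
                et_mm=0.2):
    """Return updated topsoil moisture storage (mm) and runoff (mm)."""
    # Concentrate the grid-average rain over the rained-on fraction only.
    rain_local = rain_grid_mm / max(fcr, 1e-6)
    capacity = porosity * depth_mm                  # pore volume (mm)
    infil = min(rain_local, max(capacity - storage, 0.0))
    runoff = max(rain_local - infil, 0.0) * fcr     # back to grid average
    storage = max(storage + infil - et_mm, 0.0)
    return storage, runoff

storage, total_runoff = 30.0, 0.0
for rain in [0.0, 2.5, 8.0, 0.0, 1.0]:              # grid-mean rain (mm/h)
    storage, runoff = hourly_step(storage, rain, fcr=0.25, porosity=0.4)
    total_runoff += runoff
print(round(storage, 1), round(total_runoff, 2))
```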
Abstract:
Cholesterol is one of the key constituents for maintaining the cellular membrane and thus the integrity of the cell itself. In contrast, high levels of cholesterol in the blood are known to be a major risk factor in the development of cardiovascular disease. We formulate a deterministic nonlinear ordinary differential equation model of the sterol regulatory element binding protein 2 (SREBP-2) cholesterol genetic regulatory pathway in a hepatocyte. The mathematical model includes a description of transcription by SREBP-2 of mRNA, which is subsequently translated, leading to the formation of 3-hydroxy-3-methylglutaryl coenzyme A reductase (HMGCR), a main precursor of cholesterol synthesis. Cholesterol synthesis subsequently leads to the regulation of SREBP-2 via a negative feedback formulation. Parameterised with data from the literature, the model is used to understand how SREBP-2 transcription and regulation affects cellular cholesterol concentration. Model stability analysis shows that the only positive steady state of the system exhibits purely oscillatory, damped oscillatory or monotonic behaviour under certain parameter conditions. In light of our findings we postulate how cholesterol homeostasis is maintained within the cell, and the advantages of our model formulation are discussed with respect to other models of genetic regulation within the literature.
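The feedback loop described above can be written as three coupled ODEs. The sketch below is a minimal hypothetical version, with cholesterol repressing SREBP-2 activity through a Hill function; the equations and parameter values are illustrative assumptions, not the paper's model.

```python
# Minimal hypothetical negative-feedback loop: SREBP-2 drives HMGCR mRNA,
# HMGCR drives cholesterol synthesis, and cholesterol represses SREBP-2
# activity via a Hill function. Not the paper's actual equations.
from scipy.integrate import solve_ivp

def rhs(t, y, k_m=1.0, k_h=0.5, k_c=0.8, d_m=0.3, d_h=0.2, d_c=0.1,
        K=1.0, n=2):
    m, h, c = y                       # mRNA, HMGCR, cholesterol
    srebp2 = K**n / (K**n + c**n)     # repression by cholesterol
    return [k_m * srebp2 - d_m * m,   # transcription and mRNA decay
            k_h * m - d_h * h,        # translation and protein turnover
            k_c * h - d_c * c]        # synthesis and consumption

sol = solve_ivp(rhs, (0.0, 200.0), [0.1, 0.1, 0.1])
print(sol.y[:, -1])  # approach to the unique positive steady state
```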
Abstract:
The Plaut, McClelland, Seidenberg and Patterson (1996) connectionist model of reading was evaluated at two points early in its training against reading data collected from British children on two occasions during their first year of literacy instruction. First, the network's non-word reading was poor relative to word reading when compared with the children. Second, the network made more non-lexical than lexical errors, the opposite pattern to the children. Three adaptations were made to the training of the network to bring it closer to the learning environment of a child: an incremental training regime was adopted; the network was trained on grapheme–phoneme correspondences; and a training corpus based on words found in children's early reading materials was used. The modifications caused a sharp improvement in non-word reading, relative to word reading, resulting in a near perfect match to the children's data on this measure. The modified network, however, continued to make predominantly non-lexical errors, although evidence from a small-scale implementation of the full triangle framework suggests that this limitation stems from the lack of a semantic pathway. Taken together, these results suggest that, when properly trained, connectionist models of word reading can offer insights into key aspects of reading development in children.
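The incremental training regime mentioned above can be illustrated schematically: the corpus is introduced in growing tranches rather than all at once. The toy network, random "corpus" and encoding below are stand-ins chosen for brevity, not the actual Plaut et al. architecture or training set.

```python
# Schematic incremental training: the corpus is presented in growing
# tranches. Toy delta-rule network on random patterns, purely to show
# the regime's structure; not the actual reading model or corpus.
import numpy as np

rng = np.random.default_rng(1)
X_all = rng.integers(0, 2, size=(60, 20)).astype(float)  # "orthography"
Y_all = rng.integers(0, 2, size=(60, 10)).astype(float)  # "phonology"

W = np.zeros((20, 10))
for stage_size in [10, 30, 60]:           # corpus grows in tranches
    X, Y = X_all[:stage_size], Y_all[:stage_size]
    for epoch in range(200):
        pred = 1.0 / (1.0 + np.exp(-X @ W))
        W += 0.05 * X.T @ (Y - pred) / stage_size

accuracy = np.mean((1.0 / (1.0 + np.exp(-X_all @ W)) > 0.5) == Y_all)
print(round(float(accuracy), 3))
```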
Abstract:
This paper evaluates environmental externality when the structure of the externality is cumulative. The evaluation exercise is based on the assumption that the agents in question form conjectural variations. A number of environments are encompassed within this classification and have received due attention in the literature. Each of these heterogeneous environments, however, possesses considerable analytical homogeneity and permits a general model treatment. These environments include environmental externality, oligopoly and the analysis of the private provision of public goods. We highlight the general analytical approach by focusing on this latter context, in which debate centers around four issues: the existence of free-riding, the extent to which contributions are matched equally across individuals, the nature of conjectures consistent with equilibrium, and the allocative inefficiency of alternative regimes. This paper resolves each of these issues, with the following conclusions: a consistent-conjectures equilibrium exists in the private provision of public goods, and it is the monopolistic-conjectures equilibrium. Agents act identically, contributing positive amounts of the public good in an efficient allocation of resources. There is complete matching of contributions among agents, no free-riding, and the allocation is independent of the number of members within the community. Thus the Olson conjecture, that inefficiency is exacerbated by community size, has no foundation in a consistent-conjectures, cumulative-externality context.
Abstract:
This article builds on advances in social ontology to develop a new understanding of how mainstream economic modelling affects reality. We propose a new framework for analysing and describing how models intervene in the social sphere. This framework allows us to identify and articulate three key epistemic features of models as interventions: specificity, portability and formal precision. The second part of the article uses our framework to demonstrate how specificity, portability and formal precision explain the use of moral hazard models in a variety of different policy contexts, including worker compensation schemes, bank regulation and the euro-sovereign debt crisis.
Abstract:
This article provides a critical overview of Public-Private Partnerships (PPPs) in Russia and Kazakhstan and examines the rationale underpinning such partnerships. The analysis discusses the reasons why governments in Russia and Kazakhstan focus principally on concessions as a form of PPP and goes on to provide a critical assessment of the key approaches and situational factors relating to concessions in these two countries. The article finds that external globalization impulses pressed Russia and Kazakhstan to align their policies and institutions with western orthodoxy and perceived international best practice. An ever-increasing emphasis on use of PPPs has been a key feature of this alignment. However, the governments of Russia and Kazakhstan have increasingly resorted to concessions as progress with the development and implementation of Western style PPP models has stalled. This article concludes that the governments of Russia and Kazakhstan have demonstrated an overly optimistic approach to PPP and as a result may have substantially understated their overall concessional risks and costs. Features of Russian and Kazakhstani PPP arrangements such as ambiguity in output specification and extensive reliance on government subsidies, combined with lack of expertise of private partners, may significantly decrease concession benefits.
Abstract:
Risk assessment for mammals is currently based on external exposure measurements, but effects of toxicants are better correlated with the systemically available dose than with the externally administered dose. For risk assessment of pesticides, toxicokinetics should therefore be interpreted in the context of potential exposure in the field, taking account of the timescale of exposure and individual patterns of feeding. Internal concentration is the net result of absorption, distribution, metabolism and excretion (ADME). We present a case study for thiamethoxam to show how data from an ADME study on rats can be used to parameterize a body burden model which predicts body residue levels after exposure to an LD50 dose, either as a bolus or eaten at different feeding rates. Kinetic parameters were determined in male and female rats after intravenous and oral administration of 14C-labelled thiamethoxam, by fitting one-compartment models to measured pesticide concentrations in blood for each individual separately. The concentration of thiamethoxam in blood over time correlated closely with concentrations in other tissues and so was considered representative of pesticide concentration in the whole body. Body burden model simulations showed that maximum body weight-normalized doses of thiamethoxam were lower if the same external dose was ingested normally than if it was force-fed in a single bolus dose. This indicates lower risk to rats through dietary exposure than would be estimated from the bolus LD50. The key questions that should be answered before using the body burden approach in risk assessment, the data requirements, and the assumptions made in this study are discussed in detail.
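For concreteness, a one-compartment model with first-order absorption and elimination can be simulated to compare a bolus with the same total dose eaten gradually, which is the qualitative comparison the abstract reports. The rate constants and dose below are assumptions, not the study's fitted values.

```python
# One-compartment model with first-order absorption (ka) and elimination
# (ke): same total dose given as a bolus versus eaten over 12 hours.
# Rate constants and dose are assumptions, not the study's fitted values.
import numpy as np

def peak_body_burden(dose_rate, hours=24.0, ka=1.5, ke=0.2, dt=0.01):
    """dose_rate(t) gives intake (mg/h); returns peak body burden (mg)."""
    gut, body, peak = 0.0, 0.0, 0.0
    for t in np.arange(0.0, hours, dt):
        absorbed = ka * gut
        gut += dt * (dose_rate(t) - absorbed)
        body += dt * (absorbed - ke * body)
        peak = max(peak, body)
    return peak

total = 100.0                                       # mg in both cases
bolus = peak_body_burden(lambda t: total / 0.01 if t < 0.01 else 0.0)
dietary = peak_body_burden(lambda t: total / 12.0 if t < 12.0 else 0.0)
print(round(bolus, 1), round(dietary, 1))           # bolus peak is higher
```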
Abstract:
We present a refined parametric model for forecasting electricity demand which performed particularly well in the recent Global Energy Forecasting Competition (GEFCom 2012). We begin by motivating and presenting a simple parametric model, treating the electricity demand as a function of the temperature and the day of the week. We then set out a series of refinements of the model, explaining the rationale for each, and using the competition scores to demonstrate that each successive refinement step increases the accuracy of the model's predictions. These refinements include combining models from multiple weather stations, removing outliers from the historical data, and special treatment of public holidays.
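A minimal sketch of the kind of simple parametric starting model described, with demand regressed on a low-order polynomial in temperature plus day-of-week effects, is given below. The synthetic data and functional form are illustrative assumptions, not the competition entry itself.

```python
# Sketch of a simple parametric load model: demand regressed on a
# quadratic in temperature plus day-of-week dummies, fitted by least
# squares on synthetic data. Illustrative only, not the GEFCom entry.
import numpy as np

rng = np.random.default_rng(0)
n = 500
temp = rng.uniform(-5, 30, n)                 # temperature (deg C)
dow = rng.integers(0, 7, n)                   # day of week, 0 = Monday
demand = (50 + 0.05 * (temp - 18) ** 2        # heating/cooling bowl shape
          + 3.0 * (dow < 5)                   # weekday uplift
          + rng.normal(0, 1, n))

# Design matrix: intercept, temp, temp^2, dummies for days 1..6
X = np.column_stack([np.ones(n), temp, temp ** 2] +
                    [(dow == d).astype(float) for d in range(1, 7)])
beta, *_ = np.linalg.lstsq(X, demand, rcond=None)
print(np.round(beta, 2))
```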
Abstract:
This article considers the issue of low levels of motivation for foreign language learning in England by exploring how language learning is conceptualised by different key voices in that country through the examination of written data: policy documents and reports on the UK's language needs, curriculum documents, and press articles. The extent to which this conceptualisation has changed over time is explored, through the consideration of documents from two time points, before and after a change in government in the UK. The study uses corpus analysis methods in this exploration. The picture that emerges is a complex one regarding how the 'problems' and 'solutions' surrounding language learning in that context are presented in public discourse. This, we conclude, has implications for the likely success of measures adopted to increase language learning uptake in that context.
Abstract:
Accident and Emergency (A&E) units provide a route for patients requiring urgent admission to acute hospitals. Public concern over long waiting times for admissions motivated this study, whose aim is to explore the factors which contribute to such delays. The paper discusses the formulation and calibration of a system dynamics model of the interaction of demand pattern, A&E resource deployment, other hospital processes and bed numbers; and the outputs of policy analysis runs of the model which vary a number of the key parameters. Two significant findings have policy implications. One is that while some delays to patients are unavoidable, reductions can be achieved by selective augmentation of resources within, and relating to, the A&E unit. The second is that reductions in bed numbers do not increase waiting times for emergency admissions, their effect instead being to increase sharply the number of cancellations of admissions for elective surgery. This suggests that basing A&E policy solely on any single criterion will merely succeed in transferring the effects of a resource deficit to a different patient group.
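The second finding can be reproduced qualitatively with a very small stock-and-flow sketch: if emergency admissions take priority for beds, a bed shortage surfaces as cancelled elective operations rather than longer emergency waits. The structure and numbers below are illustrative assumptions, not the paper's calibrated model.

```python
# Tiny stock-and-flow sketch: emergency admissions take priority for
# beds, so a bed shortage appears as cancelled elective operations.
# Structure and numbers are illustrative, not the calibrated model.

beds_total = 100
occupied = 90.0
cancellations = 0.0
for day in range(30):
    discharges = 0.1 * occupied               # 10% of occupied beds free up
    emergency_demand = 8.0                    # admissions needed per day
    elective_planned = 5.0                    # operations scheduled per day
    free_beds = beds_total - occupied + discharges
    emergency_admits = min(emergency_demand, free_beds)
    elective_admits = min(elective_planned, free_beds - emergency_admits)
    cancellations += elective_planned - elective_admits
    occupied += emergency_admits + elective_admits - discharges
print(round(occupied, 1), round(cancellations, 1))
```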
Abstract:
Incomplete understanding of three aspects of the climate system (equilibrium climate sensitivity, rate of ocean heat uptake and historical aerosol forcing) and the physical processes underlying them leads to uncertainties in our assessment of the global-mean temperature evolution in the twenty-first century [1,2]. Explorations of these uncertainties have so far relied on scaling approaches [3,4], large ensembles of simplified climate models [1,2], or small ensembles of complex coupled atmosphere–ocean general circulation models [5,6] which under-represent uncertainties in key climate system properties derived from independent sources [7-9]. Here we present results from a multi-thousand-member perturbed-physics ensemble of transient coupled atmosphere–ocean general circulation model simulations. We find that model versions that reproduce observed surface temperature changes over the past 50 years show global-mean temperature increases of 1.4-3 K by 2050, relative to 1961-1990, under a mid-range forcing scenario. This range of warming is broadly consistent with the expert assessment provided by the Intergovernmental Panel on Climate Change Fourth Assessment Report [10], but extends towards larger warming than observed in ensembles of opportunity [5] typically used for climate impact assessments. From our simulations, we conclude that warming by the middle of the twenty-first century that is stronger than earlier estimates is consistent with recent observed temperature changes and a mid-range 'no mitigation' scenario for greenhouse-gas emissions.