944 results for Model-based bootstrap


Relevance: 100.00%

Publisher:

Abstract:

In their contribution to PNAS, Penner et al. (1) used a climate model to estimate the radiative forcing by the aerosol first indirect effect (cloud albedo effect) in two different ways. First, they derived a statistical relationship between the logarithm of cloud droplet number concentration, ln Nc, and the logarithm of aerosol optical depth, ln AOD (or of the aerosol index, ln AI), for present-day and preindustrial aerosol fields, a method applied earlier to satellite data (2). Second, they computed the radiative flux perturbation between two simulations with and without anthropogenic aerosol sources. They find a radiative forcing that is a factor of 3 lower in the former approach than in the latter [as Penner et al. (1) correctly noted, only their “inline” results are useful for the comparison]. This study is a very interesting contribution, but we believe it deserves several clarifications.
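
The first of these two routes can be made concrete with a short, purely illustrative sketch (hypothetical numbers throughout, not the configuration of Penner et al.): a regression of ln Nc on ln AI is fitted to present-day fields, the fitted slope is applied to the present-day and preindustrial AI fields to infer the anthropogenic change in droplet number, and that change is mapped onto a cloud-albedo perturbation through the standard Twomey susceptibility.

import numpy as np

rng = np.random.default_rng(0)

# hypothetical present-day fields over flattened grid boxes
ln_ai_pd = rng.normal(-1.5, 0.5, 10000)                         # ln(aerosol index), present day
ln_nc_pd = 4.0 + 0.45 * ln_ai_pd + rng.normal(0.0, 0.3, 10000)  # ln(cloud droplet number)
ln_ai_pi = ln_ai_pd - 0.6                                       # hypothetical preindustrial AI

# 1) statistical relationship d(ln Nc)/d(ln AI) fitted to present-day output
slope, intercept = np.polyfit(ln_ai_pd, ln_nc_pd, 1)

# 2) anthropogenic change in ln Nc inferred from the change in ln AI
delta_ln_nc = slope * (ln_ai_pd.mean() - ln_ai_pi.mean())

# 3) cloud-albedo perturbation via the Twomey susceptibility dA/d(ln Nc) ~ A(1 - A)/3
albedo = 0.5
delta_albedo = albedo * (1 - albedo) / 3 * delta_ln_nc
print(f"slope = {slope:.2f}, delta ln Nc = {delta_ln_nc:.2f}, delta cloud albedo = {delta_albedo:.3f}")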

Relevance: 100.00%

Publisher:

Abstract:

The MATLAB model is contained within the compressed folders (available as .zip and .tgz). The model uses MERRA reanalysis data (more than 34 years available) to estimate the hourly aggregated wind power generation for a predefined (fixed) distribution of wind farms. A ready-made example is included for the Great Britain wind farm distribution of April 2014 ("CF.dat"): an hourly time series of GB-total capacity factor spanning the period 1980-2013 inclusive. Given the global nature of reanalysis data, the model can be applied to any specified distribution of wind farms in any region of the world. Users are, however, strongly advised to bear in mind the limitations of reanalysis data when using this model/data. This is discussed in our paper: Cannon, Brayshaw, Methven, Coker and Lenaghan, "Using reanalysis data to quantify extreme wind power generation statistics: a 33 year case study in Great Britain", submitted to Renewable Energy in March 2014. Additional information about the model is contained in the model code itself, in the accompanying ReadMe file, and on our website: http://www.met.reading.ac.uk/~energymet/data/Cannon2014/
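
A minimal sketch of how the released capacity-factor series might be analysed (in Python rather than MATLAB, for illustration), assuming "CF.dat" is a single plain-text column of hourly GB-total capacity factors; the ReadMe should be consulted for the actual file layout:

import numpy as np

# assumed layout: one hourly GB-total capacity factor (0-1) per line, 1980-2013
cf = np.loadtxt("CF.dat")

print(f"hours: {cf.size}, mean CF: {cf.mean():.3f}, 5th percentile: {np.percentile(cf, 5):.3f}")

# frequency and duration of low-output events (capacity factor below 0.10)
low = cf < 0.10
padded = np.concatenate(([0], low.astype(int), [0]))
edges = np.flatnonzero(np.diff(padded))
starts, ends = edges[::2], edges[1::2]       # contiguous low-output spells
durations = ends - starts                    # spell lengths in hours
if durations.size:
    print(f"low-output events: {durations.size}, mean duration: {durations.mean():.1f} h, "
          f"longest: {durations.max()} h")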

Relevance: 100.00%

Publisher:

Abstract:

In many lower-income countries, the establishment of marine protected areas (MPAs) involves significant opportunity costs for artisanal fishers, reflected in changes in how they allocate their labor in response to the MPA. The resource economics literature rarely addresses such labor allocation decisions of artisanal fishers and how, in turn, these contribute to the impact of MPAs on fish stocks, yield, and income. This paper develops a spatial bio-economic model of a fishery adjacent to a village whose residents allocate their labor between fishing and on-shore wage opportunities, establishing a spatial Nash equilibrium at a steady-state fish stock in response to various locations for no-take-zone MPAs and managed-access MPAs. Villagers' fishing location decisions are based on distance costs, fishing returns, and wages. The MPA's location thus determines its impact on fish stocks, fish yield, and villager income through distance costs, congestion, and fish dispersal. Incorporating wage labor opportunities into the framework allows examination of the MPA's impact on rural incomes; the results show that win-wins between yield and stocks occur in very different MPA locations than do win-wins between income and stocks. Similarly, villagers in a high-wage setting face a lower burden from MPAs than do those in low-wage settings. Motivated by issues of central importance in Tanzania and Costa Rica, we impose various policies on this fishery (location-specific no-take zones, increases in on-shore wages, and restriction of MPA access to a subset of villagers) to analyze the impact of an MPA on fish stocks and rural incomes in such settings.
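
The mechanics of such a model can be illustrated with a deliberately crude toy (invented parameters, not the paper's calibrated bio-economic model): fish stocks in a line of patches grow logistically and disperse, villagers shift labor between an on-shore wage and the open patches until net fishing returns no longer exceed the wage, and a no-take zone is imposed on two patches.

import numpy as np

P, V = 10, 200                         # patches along the coast, villagers
wage = 1.0                             # on-shore wage (hypothetical units)
q, r, K, d = 0.3, 0.4, 10.0, 0.1       # catchability, growth rate, carrying capacity, dispersal
dist_cost = 0.05 * np.arange(P)        # travel cost rises with distance from the village
mpa = np.zeros(P, dtype=bool)
mpa[3:5] = True                        # hypothetical no-take zone covering patches 3 and 4

stock = np.full(P, K / 2)
effort = np.where(mpa, 0.0, 1.0)       # fishers per patch; remaining labor works on-shore

for _ in range(5000):
    # per-fisher net return in each patch: catch per unit effort minus travel cost
    ret = q * stock - dist_cost
    # labor drifts toward patches whose return beats the wage (crude tatonnement to Nash)
    open_ = ~mpa
    effort[open_] = np.clip(effort[open_] + 0.05 * (ret[open_] - wage), 0.0, None)
    if effort.sum() > V:               # cannot employ more fishers than villagers
        effort *= V / effort.sum()
    # stock dynamics: logistic growth, harvest, nearest-neighbour dispersal (on a ring, for simplicity)
    harvest = q * stock * effort
    dispersal = d * (np.roll(stock, 1) + np.roll(stock, -1) - 2 * stock)
    stock = np.maximum(stock + r * stock * (1 - stock / K) - harvest + dispersal, 0.0)

fishers = effort.sum()
income = (q * stock - dist_cost) @ effort + wage * (V - fishers)
print(f"fishers: {fishers:.1f}, total stock: {stock.sum():.1f}, village income: {income:.1f}")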

Relevance: 100.00%

Publisher:

Abstract:

Security administrators face the challenge of designing, deploying and maintaining a variety of configuration files related to security systems, especially in large-scale networks. These files have heterogeneous syntaxes and follow differing semantic concepts. Nevertheless, they are interdependent, because the security services have to cooperate and their configurations must be consistent with each other so that global security policies are completely and correctly enforced. To tackle this problem, our approach supports a comfortable definition of an abstract high-level security policy and provides an automated derivation of the desired configuration files. It is an extension of policy-based management and policy hierarchies, combining model-based management (MBM) with system modularization. MBM employs an object-oriented model of the managed system to obtain the details needed for automated policy refinement. The modularization into abstract subsystems (ASs) segments the system, and the model, into units that more closely encapsulate related system components and provide focused abstract views. As a result, scalability is achieved and even comprehensive IT systems can be modelled in a unified manner. The associated tool MoBaSeC (Model-Based-Service-Configuration) supports interactive graphical modelling, automated model analysis and policy refinement with the derivation of configuration files. We describe the MBM and AS approaches, outline the tool functions, and exemplify their applications and the results obtained. Copyright (C) 2010 John Wiley & Sons, Ltd.
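
The refinement idea can be illustrated with a miniature sketch (hypothetical class and rule names; this is not the MoBaSeC API): a single abstract policy is refined against an object model partitioned into abstract subsystems and emitted as per-service configuration fragments.

from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    kind: str                     # e.g. "firewall", "vpn-gateway"
    settings: dict = field(default_factory=dict)

@dataclass
class AbstractSubsystem:
    name: str
    services: list

@dataclass
class HighLevelPolicy:
    subject: str                  # abstract role, e.g. "branch-office"
    target: str                   # abstract resource, e.g. "erp-server"
    action: str                   # e.g. "allow-https"

def refine(policy: HighLevelPolicy, subsystems: list) -> dict:
    """Derive low-level configuration fragments from one abstract policy."""
    configs = {}
    for subsystem in subsystems:
        for svc in subsystem.services:
            if svc.kind == "firewall" and policy.action == "allow-https":
                configs[svc.name] = (
                    f"permit tcp from {policy.subject} to {policy.target} port 443"
                )
            elif svc.kind == "vpn-gateway":
                configs[svc.name] = f"tunnel {policy.subject} <-> {subsystem.name}"
    return configs

dmz = AbstractSubsystem("dmz", [Service("fw1", "firewall"), Service("vpn1", "vpn-gateway")])
print(refine(HighLevelPolicy("branch-office", "erp-server", "allow-https"), [dmz]))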

Relevance: 100.00%

Publisher:

Abstract:

We study the exact solution of an N-state vertex model based on the representation of the U_q[SU(2)] algebra at roots of unity with diagonal open boundaries. We find that the respective reflection equation provides us with one general class of diagonal K-matrices possessing one free parameter. We determine the eigenvalues of the double-row transfer matrix and the respective Bethe ansatz equations within the algebraic Bethe ansatz framework. The structure of the Bethe ansatz equations combines a pseudomomentum function depending on a free parameter with scattering phase shifts that are fixed by the roots of unity and the boundary variables. (C) 2010 Elsevier B.V. All rights reserved.
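
Schematically, and only to fix notation for readers outside integrable systems (the concrete functions are model-specific and given in the paper, not here), the diagonal K-matrix, the double-row transfer matrix and the generic shape of the resulting Bethe ansatz equations follow Sklyanin's construction:

K^{-}(\lambda) = \mathrm{diag}\big(k_1(\lambda,\xi),\ldots,k_N(\lambda,\xi)\big), \qquad t(\lambda) = \mathrm{Tr}_0\!\left[ K_0^{+}(\lambda)\, T_0(\lambda)\, K_0^{-}(\lambda)\, T_0^{-1}(-\lambda) \right],

\Big[\phi(\lambda_j;\xi)\Big]^{L}\, \beta(\lambda_j;\xi) \;=\; \prod_{k \neq j} S(\lambda_j - \lambda_k)\, S(\lambda_j + \lambda_k), \qquad j = 1,\ldots,n,

where \phi plays the role of the pseudomomentum function carrying the free boundary parameter \xi, \beta is the boundary factor, and S collects the two-particle phase shifts set by the root-of-unity deformation.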

Relevance: 100.00%

Publisher:

Abstract:

A series of hot-compression tests and Taylor-model simulations were carried out with the intention of developing a simple expression for the proof stress of magnesium alloy AZ31 during hot working. A crude approximation of wrought textures as a mixture of a single ideal texture component and a random background was employed. The shears carried by each deformation system were calculated using a full-constraint Taylor model for a selection of ideal orientations as well as for random textures. These shears, in combination with the measured proof stresses, were employed to estimate the critical resolved shear stresses for basal slip, prismatic slip, ⟨c+a⟩ second-order pyramidal slip, and {10-12} twinning. The model thus established provides a semianalytical estimation of the proof stress (a one-off Taylor simulation is required) and also indicates whether or not twinning is expected. The approach is valid for temperatures between ∼150 °C and ∼450 °C, depending on the texture, strain rate, and strain path.
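
As a hedged illustration of the fitting step (invented numbers and a reduced set of deformation modes, not the study's Taylor-simulation output), the Taylor work balance sigma * d_eps = sum_s tau_s * d_gamma_s lets the critical resolved shear stresses be recovered from measured proof stresses by least squares:

import numpy as np

modes = ["basal", "prismatic", "pyramidal <c+a>", "twinning"]

# rows: texture/loading cases, columns: accumulated shear per unit strain on each mode
# (these activities are invented; in the study they come from full-constraint Taylor runs)
A = np.array([
    [2.1, 0.3, 0.4, 0.0],   # basal-dominated orientation
    [0.4, 2.0, 0.5, 0.1],   # prismatic-dominated orientation
    [0.5, 0.4, 1.8, 0.6],   # c-axis compression type case
    [1.0, 1.0, 1.0, 1.0],   # random-texture background
])

sigma = np.array([95.0, 140.0, 210.0, 130.0])   # measured proof stresses, MPa (invented)

# least-squares estimate of the CRSS vector tau (MPa)
tau, *_ = np.linalg.lstsq(A, sigma, rcond=None)
for m, t in zip(modes, tau):
    print(f"{m:>16s}: CRSS ~ {t:6.1f} MPa")

# predicted proof stress for a new texture described by its shear activities
new_case = np.array([1.2, 0.8, 0.9, 0.3])
print("predicted proof stress:", new_case @ tau, "MPa")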

Relevance: 100.00%

Publisher:

Abstract:

This paper describes the development of a new approach to the use of ICT for teaching courses in the interpretation and evaluation of evidence. It is based on ideas developed for teaching science to schoolchildren, in particular the importance of models and qualitative reasoning skills. In the first part, we analyse the basis of current research into “evidence scholarship” and the demands such a system would have to meet. In the second part, we introduce the details of a system that we initially developed to assist police in the interpretation of evidence.

Relevance: 100.00%

Publisher:

Abstract:

This article presents a new conceptual model detailing consumer complaint responses to exposure to unacceptable advertising. The model is initiated by consumer perceptions of negative inequity, which elicit one of three complaint responses depending on triggers that may influence complaining propensity, such as demographic, psychographic, cultural, situational and social factors. The complainant's perception of the process encountered, together with the overall outcome of the experience, affects future complaint behaviour: in this evolving model the end reaction flows on to form the consumer's next response to a similar situation. The advertising industry in Australia is valued at over $8 billion annually, and some advertisements have been identified as 'unacceptable' by elements in society. Industry and regulatory response to consumer complaints is thus an important area to address, and there is no extant literature utilising such a holistic model.

Relevance: 100.00%

Publisher:

Abstract:

Email is one of the most popular services on the Internet and has brought great convenience to our daily work and life. However, unsolicited messages, or spam, flood our mailboxes, wasting bandwidth, time and money. To address this problem, this paper presents a rough set based model that classifies emails into three categories, spam, non-spam and suspicious, rather than the two classes (spam and non-spam) used in most current approaches. Compared with popular classification methods such as naive Bayes, the proposed model reduces the rate at which non-spam email is misclassified as spam.
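
A minimal sketch of the three-way decision, assuming a rough-set-style split into positive (spam), negative (non-spam) and boundary (suspicious) regions driven by two probability thresholds; the toy corpus and the naive Bayes scorer below are placeholders rather than the paper's rough set construction:

from collections import Counter
import math

spam_docs = ["win money now", "cheap money offer now", "win a prize now"]
ham_docs  = ["meeting agenda attached", "lunch tomorrow", "project status update"]

def token_counts(docs):
    c = Counter()
    for d in docs:
        c.update(d.split())
    return c

spam_c, ham_c = token_counts(spam_docs), token_counts(ham_docs)
spam_total, ham_total = sum(spam_c.values()), sum(ham_c.values())
vocab = set(spam_c) | set(ham_c)

def spam_probability(message):
    """Naive Bayes posterior P(spam | message) with Laplace smoothing."""
    log_spam, log_ham = math.log(0.5), math.log(0.5)
    for w in message.split():
        log_spam += math.log((spam_c[w] + 1) / (spam_total + len(vocab)))
        log_ham  += math.log((ham_c[w] + 1) / (ham_total + len(vocab)))
    return 1 / (1 + math.exp(log_ham - log_spam))

def classify(message, alpha=0.8, beta=0.2):
    """Three-way decision: spam above alpha, non-spam below beta, else suspicious."""
    p = spam_probability(message)
    if p >= alpha:
        return "spam"
    if p <= beta:
        return "non-spam"
    return "suspicious"          # deferred for manual review

for m in ["win money tomorrow", "project meeting now", "cheap prize offer"]:
    print(m, "->", classify(m))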

Relevance: 100.00%

Publisher:

Abstract:

Spam is commonly defined as unsolicited email, and the goal of spam categorization is to distinguish spam from legitimate email messages. Many researchers have tried to separate spam from legitimate email using machine learning algorithms based on statistical learning methods. In this paper, an innovative and intelligent spam filtering model is proposed based on the support vector machine (SVM). The model combines linear and nonlinear SVM techniques, with the linear SVM handling text-based spam classification, where messages share similar characteristics. The proposed model handles both text-based and image-based email messages by selecting an appropriate kernel function for the information transformation.
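
A sketch of the kernel-selection idea, assuming scikit-learn and toy data (the feature choices are illustrative, not the paper's pipeline): a linear SVM over sparse TF-IDF vectors for text emails, and an RBF-kernel SVM over dense numeric descriptors extracted from image-based emails.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC, SVC
import numpy as np

# text route: linear kernel on TF-IDF vectors
texts = ["cheap meds buy now", "meeting moved to 3pm",
         "win a free prize today", "quarterly report attached"]
labels = [1, 0, 1, 0]                      # 1 = spam, 0 = legitimate

tfidf = TfidfVectorizer()
X_text = tfidf.fit_transform(texts)
text_clf = LinearSVC().fit(X_text, labels)
print(text_clf.predict(tfidf.transform(["free meds prize"])))

# image route: RBF kernel on dense image descriptors
# hypothetical per-image features, e.g. [text-pixel ratio, colour entropy, size in kB]
X_img = np.array([[0.8, 0.2, 45.0], [0.1, 0.9, 300.0],
                  [0.7, 0.3, 60.0], [0.05, 0.8, 250.0]])
img_labels = [1, 0, 1, 0]
img_clf = SVC(kernel="rbf", gamma="scale").fit(X_img, img_labels)
print(img_clf.predict([[0.75, 0.25, 50.0]]))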

Relevance: 100.00%

Publisher:

Abstract:

Several sets of model-based estimates (synthetic estimates) of the prevalence of risk factors for coronary heart disease for small areas in England have been developed. These have been used in policy documents to indicate which areas are in need of intervention. In general, these models have not been subjected to validity assessment. This paper describes a validity assessment of 16 sets of synthetic estimates, by comparison of the models with national, regional and local survey-based estimates, and local mortality rate estimates. Model-based estimates of the prevalence of smoking, low fruit and vegetable consumption, obesity, hypertension and raised cholesterol are found to be valid.
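
One simple way such a comparison can be set up (hypothetical numbers; the paper's actual validity criteria and data are not reproduced here) is to summarise agreement between synthetic and direct survey estimates by mean bias, correlation, and the share of areas whose synthetic value falls inside the survey's 95% confidence interval:

import numpy as np

rng = np.random.default_rng(1)
n_areas = 50
survey_prev = rng.uniform(0.15, 0.35, n_areas)               # direct survey prevalence estimates
survey_se = rng.uniform(0.01, 0.03, n_areas)                 # survey standard errors
synthetic_prev = survey_prev + rng.normal(0, 0.02, n_areas)  # model-based (synthetic) estimates

bias = np.mean(synthetic_prev - survey_prev)
corr = np.corrcoef(synthetic_prev, survey_prev)[0, 1]
inside_ci = np.mean(np.abs(synthetic_prev - survey_prev) <= 1.96 * survey_se)

print(f"mean bias: {bias:+.3f}")
print(f"correlation with survey estimates: {corr:.2f}")
print(f"share of areas within survey 95% CI: {inside_ci:.0%}")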