978 results for input parameter value recommendation
Abstract:
Calliandra calothyrsus is a tree legume native to Mexico and Central America. The species has attracted considerable attention for its capacity to produce both fuelwood and foliage for either green manure or fodder. Its high content of proanthocyanidins (condensed tannins) and associated low digestibility has, however, limited its use as a feed for ruminants, and there is also a widespread perception that wilting the leaves further reduces their nutritive value. Nevertheless, there has been increasing uptake of calliandra as fodder in certain regions, notably the Central Highlands of Kenya. The present study, conducted in Embu, Kenya, investigated effects of provenance, wilting, cutting frequency and seasonal variation both in the laboratory (in vitro digestibility, crude protein, neutral detergent fibre, extractable and bound proanthocyanidins) and in on-station animal production trials with growing lambs and lactating goats. The local Kenyan landrace of calliandra (Embu) and a closely related Guatemalan provenance (Patulul) were found to differ significantly from, and to be superior to, a provenance from Nicaragua (San Ramon) in most of the laboratory traits measured, as well as in animal production and feed efficiency. Cutting frequency had no important effect on quality, and although all quality traits displayed seasonal variation there was little discernible pattern to this variation. Wilting had a much less negative effect than expected, and for lambs fed calliandra as a supplement to a low-quality basal feed (maize stover), wilting actually gave higher live-weight gain and feed efficiency. Conversely, with a high-quality basal diet (Napier grass) wilting enhanced intake but not live-weight gain, so feed efficiency was greater for fresh material. The difference between fresh and wilted leaves was not great enough to justify the current widespread recommendation that calliandra should always be fed fresh.
Abstract:
We present a novel topology of the radial basis function (RBF) neural network, referred to as the boundary value constraints (BVC)-RBF, which is able to satisfy a set of BVC automatically. Unlike most existing neural networks, in which the model is identified by learning from observational data only, the proposed BVC-RBF offers a generic framework that takes into account both deterministic prior knowledge and stochastic data in an intelligent manner. Like a conventional RBF, the proposed BVC-RBF has a linear-in-the-parameters structure, so that many of the existing algorithms for linear-in-the-parameters models are directly applicable, which is a significant advantage. The BVC satisfaction properties of the proposed BVC-RBF are discussed. Finally, for completeness, numerical examples based on the combined D-optimality-based orthogonal least squares algorithm are used to illustrate the performance of the proposed BVC-RBF.
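A minimal sketch of the linear-in-the-parameters property the abstract relies on (the centres, width, and test function below are illustrative assumptions, and no boundary value constraints are built in): once the basis functions are fixed, the model is linear in its weights, so an ordinary least squares solver estimates the parameters directly.

# Illustrative sketch (not the authors' BVC construction): a Gaussian RBF model
# that is linear in its weights, so ordinary least squares recovers the parameters.
import numpy as np

def rbf_design(x, centres, width):
    """Gaussian RBF design matrix: one column per centre."""
    return np.exp(-(x[:, None] - centres[None, :]) ** 2 / (2.0 * width ** 2))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)  # noisy observations

centres = np.linspace(0.0, 1.0, 10)                 # assumed fixed centres
Phi = rbf_design(x, centres, width=0.1)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)         # linear-in-the-parameters estimation
y_hat = Phi @ w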
Abstract:
This letter argues that the current controversy about whether W_buoyancy, the power input due to the surface buoyancy fluxes, is large or small in the oceans stems from two distinct and incompatible views on how W_buoyancy relates to the volume-integrated work of expansion/contraction B. The current prevailing view is that W_buoyancy should be identified with the net value of B, which current theories estimate to be small. The alternative view, defended here, is that only the positive part of B, i.e., the part converting internal energy into mechanical energy, should enter the definition of W_buoyancy, since the negative part of B is associated with the non-viscous dissipation of mechanical energy. Two indirect methods suggest that, by contrast, the positive part of B is potentially large.
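In schematic notation (the symbol b for the local rate of work of expansion/contraction is introduced here for illustration and is not the letter's own), the two views can be contrasted as follows:

\[
B = B^{+} + B^{-}, \qquad
B^{+} = \int_V \max(b, 0)\, dV, \qquad
B^{-} = \int_V \min(b, 0)\, dV,
\]
\[
W_{\mathrm{buoyancy}} \simeq B \ \ \text{(prevailing view)}, \qquad
W_{\mathrm{buoyancy}} \simeq B^{+} \ \ \text{(alternative view)},
\]

with |B^-|, in the alternative view, counted as a non-viscous sink of mechanical energy rather than as a reduction of the buoyancy power input.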
Abstract:
A novel algorithm for solving nonlinear discrete-time optimal control problems with model-reality differences is presented. The technique uses dynamic integrated system optimisation and parameter estimation (DISOPE), which achieves the correct optimal solution in spite of deficiencies in the mathematical model employed in the optimisation procedure. A new method for approximating some Jacobian trajectories required by the algorithm is introduced. It is shown that the iterative procedure associated with the algorithm naturally suits applications to batch chemical processes.
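The DISOPE details (Jacobian approximations, modifier updates) are not given in the abstract, so the following is only a schematic sketch of the general idea on a made-up scalar example: alternate between optimising the controls on a corrected model and re-estimating the correction terms from the response of the "real" system, which here has a deliberately different dynamics coefficient.

# Schematic sketch (assumed toy setup, not the published DISOPE algorithm):
# alternate between (i) optimising controls on a corrected model and
# (ii) re-estimating correction terms from the "real" system response.
import numpy as np
from scipy.optimize import minimize

N = 10                       # horizon
a_real, a_model = 0.8, 0.7   # deliberate model-reality mismatch
x0 = 1.0

def simulate(a, u, alpha):
    x = np.empty(N + 1); x[0] = x0
    for k in range(N):
        x[k + 1] = a * x[k] + u[k] + alpha[k]
    return x

def cost(x, u):
    return np.sum(x[:-1] ** 2 + u ** 2) + x[-1] ** 2

alpha = np.zeros(N)          # model correction terms
u = np.zeros(N)
for it in range(20):
    # system optimisation on the corrected model
    res = minimize(lambda u_: cost(simulate(a_model, u_, alpha), u_), u)
    u_new = res.x
    # "parameter estimation": nudge the model towards the real trajectory
    x_real = simulate(a_real, u_new, np.zeros(N))
    x_model = simulate(a_model, u_new, alpha)
    alpha += 0.5 * (x_real[1:] - x_model[1:])   # relaxed correction update
    if np.max(np.abs(u_new - u)) < 1e-6:
        break
    u = u_new

At convergence, the controls are optimal for a model that reproduces the real trajectory, which is the sense in which the correct optimum can be reached despite the model mismatch.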
Abstract:
In this paper we consider boundary integral methods applied to boundary value problems for the positive definite Helmholtz-type problem −ΔU + α²U = 0 in a bounded or unbounded domain, with the parameter α real and possibly large. Applications arise in the implementation of space-time boundary integral methods for the heat equation, where α is proportional to 1/√(δt) and δt is the time step. The corresponding layer potentials arising from this problem depend nonlinearly on the parameter α and have kernels which become highly peaked as α → ∞, causing standard discretization schemes to fail. We propose a new collocation method with a robust convergence rate as α → ∞. Numerical experiments on a model problem verify the theoretical results.
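One standard way to see where a large α of this form arises (the abstract does not specify the time-stepping scheme, so the backward Euler step below is an assumption):

\[
\frac{U^{n+1} - U^{n}}{\delta t} = \Delta U^{n+1}
\quad\Longrightarrow\quad
-\Delta U^{n+1} + \frac{1}{\delta t}\, U^{n+1} = \frac{1}{\delta t}\, U^{n},
\]

so the homogeneous operator has the form −ΔU + α²U with α² = 1/δt, i.e. α ∝ 1/√(δt), which grows without bound as the time step is refined.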
Abstract:
A mechanism for amplification of mountain waves, and their associated drag, by parametric resonance is investigated using linear theory and numerical simulations. This mechanism, which is active when the Scorer parameter oscillates with height, was recently classified by previous authors as intrinsically nonlinear. Here it is shown that, if friction is included in the simplest possible form as a Rayleigh damping, and the solution to the Taylor-Goldstein equation is expanded in a power series of the amplitude of the Scorer parameter oscillation, linear theory can replicate the resonant amplification produced by numerical simulations with some accuracy. The drag is significantly altered by resonance in the vicinity of n/l_0 = 2, where l_0 is the unperturbed value of the Scorer parameter and n is the wave number of its oscillation. Depending on the phase of this oscillation, the drag may be substantially amplified or attenuated relative to its non-resonant value, displaying either single maxima or minima, or double extrema near n/l_0 = 2. Both non-hydrostatic effects and friction tend to reduce the magnitude of the drag extrema. However, in exactly inviscid conditions, the single drag maximum and minimum are suppressed. Since friction in the atmosphere is often small but non-zero outside the boundary layer, modelling of the drag amplification mechanism addressed here should be quite sensitive to the type of turbulence closure employed in numerical models, or to computational dissipation in nominally inviscid simulations.
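For orientation only (the notation and the sinusoidal form of the oscillation are illustrative assumptions, and the Rayleigh damping term is omitted), the setting can be summarised by the Taylor-Goldstein equation with a height-oscillating Scorer parameter,

\[
\frac{d^{2}\hat{w}}{dz^{2}} + \left[\, l^{2}(z) - k^{2}\,\right]\hat{w} = 0,
\qquad
l^{2}(z) = l_{0}^{2}\left[\,1 + a\cos(nz + \varphi)\,\right],
\]

where \hat{w}(z) is the Fourier-transformed vertical velocity, k the horizontal wave number, and a and φ the amplitude and phase of the oscillation; expanding \hat{w} in powers of a yields the resonant response near n/l_0 = 2 discussed above.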
Abstract:
We first propose a simple task for eliciting attitudes toward risky choice, the SGG lottery-panel task, which consists of a series of lotteries constructed to compensate riskier options with higher risk-return trade-offs. Using Principal Component Analysis, we show that the SGG lottery-panel task is capable of capturing two dimensions of individual risky decision making, i.e. subjects' average risk taking and their sensitivity towards variations in risk-return. From the results of a large experimental dataset, we confirm that the task systematically captures a number of regularities, such as: a tendency toward risk-averse behavior (only around 10% of choices are compatible with risk neutrality); an attraction to certain payoffs compared to low-risk lotteries, compatible with the over- (under-) weighting of small (large) probabilities predicted by PT; and gender differences, i.e. males being consistently less risk averse than females, but both genders being similarly responsive to increases in the risk premium. Another interesting result is that in hypothetical choices most individuals increase their risk taking in response to an increase in the return to risk, as predicted by PT, while across panels with real rewards we see even more changes, but opposite to the expected pattern of riskier choices for higher risk-returns. We therefore conclude from our data that an "economic anomaly" emerges in the real-reward choices, opposite to the hypothetical choices. These findings are in line with Camerer's (1995) view that although in many domains paid subjects probably do exert extra mental effort which improves their performance, choice over money gambles is not likely to be a domain in which effort will improve adherence to rational axioms (p. 635). Finally, we demonstrate that both dimensions of risk attitudes, average risk taking and sensitivity towards variations in the return to risk, are desirable not only to describe behavior under risk but also to explain behavior in other contexts, as illustrated by an example. In the second study, we propose three additional treatments intended to elicit risk attitudes under high stakes and mixed-outcome (gains and losses) lotteries. Using a dataset obtained from a hypothetical implementation of the tasks, we show that the new treatments are able to capture both dimensions of risk attitudes. This new dataset allows us to describe several regularities, both at the aggregate and within-subjects level. We find that in every treatment over 70% of choices show some degree of risk aversion and only between 0.6% and 15.3% of individuals are consistently risk neutral within the same treatment. We also confirm the existence of gender differences in the degree of risk taking, that is, in all treatments females prefer safer lotteries compared to males. Regarding our second dimension of risk attitudes, we observe in all treatments an increase in risk taking in response to risk-premium increases. Treatment comparisons reveal other regularities, such as a lower degree of risk taking in large-stake treatments compared to low-stake treatments, and a lower degree of risk taking when losses are incorporated into the large-stake lotteries. These results are compatible with previous findings in the literature on stake-size effects (e.g., Binswanger, 1980; Antoni Bosch-Domènech & Silvestre, 1999; Hogarth & Einhorn, 1990; Holt & Laury, 2002; Kachelmeier & Shehata, 1992; Kühberger et al., 1999; B. J. Weber & Chapman, 2005; Wik et al., 2007) and the domain effect (e.g., Brooks & Zank, 2005; Schoemaker, 1990; Wik et al., 2007). For the small-stake treatments, by contrast, we find that the effect of incorporating losses into the outcomes is not so clear: at the aggregate level an increase in risk taking is observed, but also more dispersion in the choices, whilst at the within-subjects level the effect weakens. Finally, regarding responses to the risk premium, we find that sensitivity is lower in the mixed-lottery treatments (SL and LL) than in the gains-only treatments. In general, sensitivity to risk-return is more affected by the domain than by the stake size. After having described the properties of risk attitudes as captured by the SGG risk elicitation task and its three new versions, it is important to recall that the danger of using unidimensional descriptions of risk attitudes goes beyond their incompatibility with modern economic theories such as PT and CPT, all of which call for tests with multiple degrees of freedom. Being faithful to this recommendation, the contribution of this essay is an empirically and endogenously determined bi-dimensional specification of risk attitudes, useful for describing behavior under uncertainty and for explaining behavior in other contexts. Hopefully, this will contribute to creating large datasets containing a multidimensional description of individual risk attitudes, while at the same time allowing for a robust context, compatible with present and even more complex future descriptions of human attitudes towards risk.
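A hypothetical illustration of the two-dimensional extraction step (the panel structure, simulated choices and parameter values below are made up and are not the SGG data): applying PCA to a subjects-by-panels matrix of choices separates an average-risk-taking component from a sensitivity-to-risk-return component.

# Hypothetical illustration: extracting two dimensions of risk attitudes from
# lottery-panel choices with PCA (data and panel structure are made up here).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_subjects, n_panels = 300, 4

# Simulated choice data: each entry is the riskiness rank of the lottery a subject
# picks in a panel; the risk premium is assumed to increase across panels.
avg_risk = rng.normal(3.0, 1.0, n_subjects)        # latent average risk taking
sensitivity = rng.normal(0.5, 0.3, n_subjects)     # latent response to the risk premium
premium = np.arange(n_panels)                      # increasing risk-return across panels
choices = (avg_risk[:, None] + sensitivity[:, None] * premium[None, :]
           + rng.normal(0.0, 0.5, (n_subjects, n_panels)))

pca = PCA(n_components=2)
scores = pca.fit_transform(choices)   # component 1 ~ average risk taking,
                                      # component 2 ~ sensitivity to risk-return
print(pca.explained_variance_ratio_)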
Abstract:
The permeability parameter (C) for the movement of cephalosporin C across the outer membrane of Pseudomonas aeruginosa was measured using the widely accepted method of Zimmermann & Rosselet. In one experiment, the value of C varied continuously from 4.2 to 10.8 cm³ min⁻¹ (mg dry wt cells)⁻¹ over a range of concentrations of the test substrate, cephalosporin C, from 50 to 5 μM. Dependence of C on the concentration of test substrate was still observed when the effect of a possible electric potential difference across the outer membrane was corrected for. In quantitative studies of β-lactam permeation, the dependence of C on the concentration of β-lactam should be taken into account.
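For context (the notation is the one commonly used when this method is described, not taken from the abstract itself): the Zimmermann & Rosselet approach equates, at steady state, the Fick's-law permeation rate across the outer membrane with the Michaelis-Menten rate of hydrolysis by the periplasmic β-lactamase,

\[
V = C\,(S_o - S_p), \qquad V = \frac{V_{\max}\, S_p}{K_m + S_p},
\]

where S_o and S_p are the β-lactam concentrations outside the outer membrane and in the periplasm, so that C is recovered as V/(S_o − S_p); the finding above is that C obtained in this way is not a constant but varies with S_o.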
Abstract:
Classical regression methods take vectors as covariates and estimate the corresponding vectors of regression parameters. When addressing regression problems on covariates of more complex form, such as multi-dimensional arrays (i.e. tensors), traditional computational models can be severely compromised by ultrahigh dimensionality as well as complex structure. By exploiting the special structure of tensor covariates, the tensor regression model provides a promising way to reduce the model's dimensionality to a manageable level, thus leading to efficient estimation. Most existing tensor-based methods estimate each individual regression problem independently, based on a tensor decomposition that allows an input tensor to be projected simultaneously onto more than one direction along each mode. In practice, however, multi-dimensional data are collected under the same or very similar conditions, so that the data share some common latent components but can also have their own independent parameters for each regression task. It is therefore beneficial to analyse the regression parameters of all the regressions in a linked way. In this paper, we propose a tensor regression model based on Tucker decomposition, which simultaneously identifies both the common components of the parameters across all the regression tasks and the independent factors contributing to each particular regression task. Under this paradigm, the number of independent parameters along each mode is constrained by a sparsity-preserving regulariser. Linked multiway parameter analysis and sparsity modelling further reduce the total number of parameters, with lower memory cost than existing tensor-based counterparts. The effectiveness of the new method is demonstrated on real data sets.
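A sketch of the forward model only (dimensions, ranks and random factors below are illustrative, and the paper's estimation procedure, sharing scheme and sparsity regulariser are not implemented): the coefficient tensor is built from a Tucker core and factor matrices, and each prediction is the inner product of a tensor covariate with that coefficient tensor.

# Illustrative forward model only (not the authors' estimation procedure):
# a coefficient tensor with Tucker structure and linear predictions via an inner product.
import numpy as np

rng = np.random.default_rng(2)
n, d1, d2, d3 = 50, 8, 6, 4          # samples and tensor covariate dimensions
r1, r2, r3 = 3, 2, 2                 # Tucker ranks

X = rng.standard_normal((n, d1, d2, d3))        # tensor covariates
U1 = rng.standard_normal((d1, r1))              # factor matrices (could be shared
U2 = rng.standard_normal((d2, r2))              #  across related regression tasks)
U3 = rng.standard_normal((d3, r3))
G = rng.standard_normal((r1, r2, r3))           # task-specific core tensor

# Coefficient tensor B = G x_1 U1 x_2 U2 x_3 U3
B = np.einsum('abc,ia,jb,kc->ijk', G, U1, U2, U3)

# Predictions y_i = <X_i, B>
y = np.einsum('nijk,ijk->n', X, B)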
Abstract:
The subgrid-scale spatial variability in cloud water content can be described by a parameter f called the fractional standard deviation, which is equal to the standard deviation of the cloud water content divided by the mean. This parameter is an input to schemes that calculate the impact of subgrid-scale cloud inhomogeneity on gridbox-mean radiative fluxes and microphysical process rates. A new regime-dependent parametrization of the spatial variability of cloud water content is derived from CloudSat observations of ice clouds. In addition to the dependencies on horizontal and vertical resolution and cloud fraction included in previous parametrizations, the new parametrization includes an explicit dependence on cloud type. The new parametrization is then implemented in the Global Atmosphere 6 (GA6) configuration of the Met Office Unified Model and used to model the effects of subgrid variability of both ice and liquid water content on radiative fluxes and on autoconversion and accretion rates in three 20-year atmosphere-only climate simulations. These simulations show the impact of the new regime-dependent parametrization in diagnostic radiation calculations, in interactive radiation calculations, and in both interactive radiation calculations and a new warm microphysics scheme. The control simulation uses a globally constant f value of 0.75 to model the effect of cloud water content variability on radiative fluxes. The use of the new regime-dependent parametrization in the model results in a global mean f which is higher than the control's fixed value and a global distribution of f which is closer to CloudSat observations. When the new regime-dependent parametrization is used in radiative transfer calculations only, the magnitudes of short-wave and long-wave top-of-atmosphere cloud radiative forcing are reduced, increasing the existing global mean biases in the control. When it is also applied in a new warm microphysics scheme, the short-wave global mean bias is reduced.
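A minimal sketch of the definition (the gridbox layout and gamma-distributed water contents below are made-up data, not CloudSat retrievals): f is simply the standard deviation of the cloud water content within a gridbox divided by its mean.

# Minimal sketch with made-up data: the fractional standard deviation f of
# cloud water content within each gridbox, i.e. standard deviation divided by mean.
import numpy as np

rng = np.random.default_rng(3)
n_gridboxes, n_subcolumns = 100, 64

# Hypothetical in-cloud water content samples for each gridbox (kg kg^-1)
cwc = rng.gamma(shape=2.0, scale=1e-4, size=(n_gridboxes, n_subcolumns))

f = cwc.std(axis=1) / cwc.mean(axis=1)   # fractional standard deviation per gridbox
print(f.mean())                          # compare with e.g. a fixed value of 0.75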
Abstract:
In this article, along with others, we take the position that the Null-Subject Parameter (NSP) (Chomsky 1981; Rizzi 1982) cluster of properties is narrower in scope than some originally contended. We test for the resetting of the NSP by English L2 learners of Spanish at the intermediate level, including poverty-of-the-stimulus knowledge of the Overt Pronoun Constraint (Montalbetti 1984). Our participants are tested before and after five months' residency in Spain in an effort to see whether increased amounts of native exposure are particularly beneficial for parameter resetting. Although we demonstrate NSP resetting for some of the L2 learners, our data essentially show that even with additional time and exposure to native input there is no immediate beneficial effect on NSP resetting.
Abstract:
For over three decades, negotiated planning obligations have been the primary form of land value capture in England. A significant policy innovation, which has diffused and evolved over the last decade, has been the use of financial calculations to estimate the extent to which policies on planning obligations, both for actual proposed development projects and in plan making, affect the financial viability of development. This paper assesses the extent to which the use of financial appraisals has provided a robust, just and practical procedure to support land value capture. It is concluded that development viability appraisals are saturated with intrinsic uncertainty and that land value capture based on such calculations is, to some extent, capricious. Further failings include clear incentives for developers and land owners to bias viability calculations, the economic dependence of many viability consultants on developers and land owners, a lack of transparency, contested or ambiguous guidance, and the opportunities for bias created by input uncertainty. It is argued that how viability calculations are applied has been, is being and will continue to be shaped by power relations.
Abstract:
In this paper we consider the case of a Bose gas in low dimension in order to illustrate the applicability of a method that allows us to construct analytical relations, valid for a broad range of coupling parameters, for a function whose asymptotic expansions are known. The method is well suited to investigating the stability of a collection of Bose particles trapped in a one-dimensional configuration in the case where the scattering length is negative. The eigenvalues of this interacting quantum one-dimensional many-particle system become negative when the interactions overcome the trapping energy, and in this case the system becomes unstable. Here we calculate the critical coupling parameter and apply the result to the case of lithium atoms, obtaining the critical number of particles at the limit of stability.
Abstract:
A major problem in e-service development is the prioritization of the requirements of different stakeholders. The main stakeholders are governments and their citizens, all of whom have different and sometimes conflicting requirements. In this paper, the prioritization problem is addressed by combining a value-based approach with an illustration technique. This paper examines the following research question: how can multiple stakeholder requirements be illustrated from a value-based perspective in order to be prioritizable? We used an e-service development case taken from a Swedish municipality to elaborate on our approach. Our contributions are: 1) a model of the relevant domains for requirement prioritization, namely government, citizens, technology, finances, and laws and regulations; and 2) a requirement fulfillment analysis (RFA) tool that consists of a requirement-goal-value matrix (RGV) and a calculation and illustration module (CIM). The model reduces cognitive load, helps developers to focus on value fulfillment in e-service development and supports them in the formulation of requirements. It also offers an input to public policy makers, should they aim to target values in the design of e-services.
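The abstract does not describe how the RGV matrix or the CIM module compute their scores, so the following is purely a hypothetical sketch of the kind of calculation such a matrix could support (the requirements, values, weights and scoring rule are all invented here):

# Hypothetical sketch only: the abstract does not specify how the RGV matrix or the
# CIM module compute scores, so the weights and scoring rule below are made up.
import numpy as np

requirements = ["e-ID login", "mobile access", "status tracking"]
values = ["citizen benefit", "government efficiency", "legal compliance"]

# Requirement-goal-value matrix: how strongly each requirement supports each value (0-3)
rgv = np.array([
    [3, 1, 2],
    [2, 1, 0],
    [3, 2, 1],
])
value_weights = np.array([0.5, 0.3, 0.2])      # assumed stakeholder weighting

fulfilment = rgv @ value_weights               # simple weighted score per requirement
ranking = [requirements[i] for i in np.argsort(-fulfilment)]
print(dict(zip(requirements, fulfilment)), ranking)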
Abstract:
The aim of this study is to propose the implementation of a statistical model for computing volatility that is not widespread in the Brazilian literature, the local scale model (LSM), presenting its advantages and disadvantages relative to the models customarily used for risk measurement. Daily quotes of the Ibovespa from January 2009 to December 2014 are used to estimate the parameters, and out-of-sample tests are performed to assess the empirical accuracy of the models, comparing the VaR obtained for the period from January to December 2014. Explanatory variables were introduced in an attempt to improve the models, and the American counterpart of the Ibovespa, the Dow Jones index, was chosen because it exhibited properties such as high correlation, Granger causality, and a significant log-likelihood ratio. One of the innovations of the local scale model is that it does not use the variance directly, but rather its reciprocal, called the "precision" of the series, which follows a kind of multiplicative random walk. The LSM captured all the stylized facts of the financial series, and the results favoured its use; the model is therefore an efficient and parsimonious alternative specification for estimating and forecasting volatility, insofar as it has only one parameter to be estimated, which represents a paradigm shift relative to conditional heteroscedasticity models.
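As a hedged sketch of the structure described above, in notation adapted from common presentations of local scale models (the distribution of the multiplicative shock and any discount parameter are not specified in the abstract):

\[
y_t \mid \theta_t \sim N\!\left(0,\; \theta_t^{-1}\right), \qquad \theta_t = \theta_{t-1}\,\eta_t,
\]

where θ_t is the "precision" of the series (the reciprocal of the conditional variance) and η_t is a positive random shock, so that θ_t follows a multiplicative random walk and only the single parameter governing that walk needs to be estimated.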