930 results for Input-output data
Abstract:
The purpose of this paper is to design a control law for continuous systems with Boolean inputs that allows the output to track a desired trajectory. Such systems are controlled by switching devices. Systems of this type, with Boolean inputs, have found increasing use in the electric industry: power supplies include such systems, and a power converter is one example. For instance, in power electronics the control variable is the switching ON and OFF of components such as thyristors or transistors. In this paper, a method is proposed for designing a control law in state space for such systems. The approach is implemented in simulation for the control of an electronic circuit.
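The abstract does not reproduce the state-space design itself; as a rough, hypothetical illustration of output tracking with a Boolean (ON/OFF) input, the sketch below implements a plain hysteresis (bang-bang) switching law on a first-order circuit model. The plant, the band, and every parameter are assumptions for illustration, not the method proposed in the paper.

```python
import numpy as np

def simulate_boolean_control(ref, dt=1e-5, band=0.05):
    """Hysteresis (bang-bang) tracking sketch for a first-order circuit
    dx/dt = (-x + u * V) / tau with Boolean input u in {0, 1}: the switch
    turns ON when the output falls 'band' below the reference and OFF when
    it rises 'band' above it. All parameters are illustrative."""
    tau, V = 1e-3, 10.0   # assumed time constant and supply voltage
    x, u = 0.0, 0
    out = []
    for r in ref:
        if x < r - band:
            u = 1          # component (e.g. transistor) switched ON
        elif x > r + band:
            u = 0          # component switched OFF
        x += dt * (-x + u * V) / tau
        out.append(x)
    return np.array(out)

t = np.arange(0, 0.02, 1e-5)
y = simulate_boolean_control(5.0 + np.sin(2 * np.pi * 100 * t))
```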
Abstract:
Analyzes the use of linear and neural network models for financial distress classification, with emphasis on the issues of input variable selection and model pruning. A data-driven method for selecting input variables (financial ratios, in this case) is proposed. A case study involving 60 British firms in the period 1997-2000 is used for illustration. It is shown that the use of the Optimal Brain Damage pruning technique can considerably improve the generalization ability of a neural model. Moreover, the set of financial ratios obtained with the proposed selection procedure is shown to be an appropriate alternative to the ratios usually employed by practitioners.
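As background for the pruning step mentioned above, the sketch below shows the core of Optimal Brain Damage: each weight is ranked by the saliency s_i = 0.5 * H_ii * w_i^2 and the least salient fraction is removed. The weight matrix and the diagonal-Hessian estimate here are placeholders, not the authors' financial-distress model.

```python
import numpy as np

def obd_prune(weights, hessian_diag, frac=0.1):
    """Optimal Brain Damage: zero out the fraction of weights with the
    lowest saliency s_i = 0.5 * H_ii * w_i**2, where H_ii is the diagonal
    of the Hessian of the training error with respect to the weights."""
    saliency = 0.5 * hessian_diag * weights**2
    k = int(frac * weights.size)
    idx = np.argsort(saliency, axis=None)[:k]   # k least-salient weights
    pruned = weights.flatten()
    pruned[idx] = 0.0
    return pruned.reshape(weights.shape)

# toy example: random weights and a crude stand-in for the Hessian diagonal
rng = np.random.default_rng(0)
w = rng.normal(size=(10, 5))
h = rng.uniform(0.1, 1.0, size=w.shape)         # placeholder for d2E/dw2
w_pruned = obd_prune(w, h, frac=0.2)
```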
Abstract:
Valuation is the process of estimating price. The methods used to determine value attempt to model the thought processes of the market and thus estimate price by reference to observed historic data. This can be done using either an explicit model, which models the worth calculation of the most likely bidder, or an implicit model, which uses historic data, suitably adjusted, as a short cut to determine value by reference to previous similar sales. The former is generally referred to as the Discounted Cash Flow (DCF) model and the latter as the capitalisation (or All Risk Yield) model. However, regardless of the technique used, the valuation will be affected by uncertainties: uncertainty in the comparable data available, uncertainty in current and future market conditions, and uncertainty in the specific inputs for the subject property. These input uncertainties translate into uncertainty in the output figure, the estimate of price. In a previous paper, we considered the way in which uncertainty is allowed for in the capitalisation model in the UK. In this paper, we extend the analysis to look at the way in which uncertainty can be incorporated into the explicit DCF model. This is done by recognising that the input variables are uncertain and that each has a probability distribution pertaining to it. Thus, by utilising a probability-based valuation model (using Crystal Ball) it is possible to incorporate uncertainty into the analysis and address the shortcomings of the current model. Although the capitalisation model is discussed, the paper concentrates upon the application of Crystal Ball to the Discounted Cash Flow approach.
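A minimal sketch of the probability-based DCF idea, using NumPy in place of Crystal Ball: each uncertain input is drawn from an assumed distribution and the valuation is repeated over many trials, so the output is a distribution for the estimate of price rather than a single figure. All distributions and numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000          # number of Monte Carlo trials
years = 5           # explicit forecast horizon

# illustrative input distributions (assumptions, not market data)
rent = rng.normal(100_000, 10_000, n)           # annual rent
growth = rng.normal(0.02, 0.01, n)              # rental growth rate
discount = rng.triangular(0.06, 0.08, 0.10, n)  # discount rate
exit_yield = rng.normal(0.07, 0.005, n)         # capitalisation rate at exit

# discounted cash flow: rent grows each year, terminal value at exit yield
value = np.zeros(n)
for t in range(1, years + 1):
    value += rent * (1 + growth) ** t / (1 + discount) ** t
value += (rent * (1 + growth) ** years / exit_yield) / (1 + discount) ** years

print(f"mean value: {value.mean():,.0f}")
print(f"5th-95th percentile: {np.percentile(value, [5, 95])}")
```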
Abstract:
ERA-Interim is the latest global atmospheric reanalysis produced by the European Centre for Medium-Range Weather Forecasts (ECMWF). The ERA-Interim project was conducted in part to prepare for a new atmospheric reanalysis to replace ERA-40, one that will extend back to the early part of the twentieth century. This article describes the forecast model, data assimilation method, and input datasets used to produce ERA-Interim, and discusses the performance of the system. Special emphasis is placed on various difficulties encountered in the production of ERA-40, including the representation of the hydrological cycle, the quality of the stratospheric circulation, and the consistency in time of the reanalysed fields. We provide evidence for substantial improvements in each of these aspects. We also identify areas where further work is needed and describe opportunities and objectives for future reanalysis projects at ECMWF.
Abstract:
This paper redefines technical efficiency by incorporating provision of environmental goods as one of the outputs of the farm. The proportion of permanent and rough grassland to total agricultural land area is used as a proxy for the provision of environmental goods. Stochastic frontier analysis was conducted using a Bayesian procedure. The methodology is applied to panel data on 215 dairy farms in England and Wales. Results show that farm efficiency rankings change when provision of environmental outputs by farms is incorporated in the efficiency analysis, which may have important political implications.
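For context, the sketch below writes out the classical normal/half-normal stochastic production frontier that underlies this kind of efficiency analysis; the paper itself estimates the frontier by a Bayesian procedure, so this maximum-likelihood form is only the underlying model, not the authors' estimator, and the synthetic data are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def sfa_loglik(params, y, X):
    """Log-likelihood of the normal/half-normal stochastic frontier
    y = X @ beta + v - u, with noise v ~ N(0, sv^2) and one-sided
    inefficiency u ~ |N(0, su^2)|."""
    beta, log_sv, log_su = params[:-2], params[-2], params[-1]
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma, lam = np.hypot(sv, su), su / sv
    eps = y - X @ beta                        # composed error v - u
    return np.sum(np.log(2 / sigma) + norm.logpdf(eps / sigma)
                  + norm.logcdf(-eps * lam / sigma))

# synthetic data and a maximum-likelihood fit (illustrative only)
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
y = (X @ np.array([1.0, 0.5, 0.3]) + rng.normal(0, 0.2, 200)
     - np.abs(rng.normal(0, 0.3, 200)))
fit = minimize(lambda p: -sfa_loglik(p, y, X), x0=np.zeros(5))
```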
Abstract:
In this study, we systematically compare a wide range of observational and numerical precipitation datasets for Central Asia. Data considered include two re-analyses, three datasets based on direct observations, and the output of a regional climate model simulation driven by a global re-analysis. These are validated and intercompared with respect to their ability to represent the Central Asian precipitation climate. In each of the datasets, we consider the mean spatial distribution and the seasonal cycle of precipitation, the amplitude of interannual variability, the representation of individual yearly anomalies, the precipitation sensitivity (i.e. the response to wet and dry conditions), and the temporal homogeneity of precipitation. Additionally, we carried out part of these analyses for datasets available in real time. The mutual agreement between the observations is used as an indication of how far these data can be used for validating precipitation data from other sources. In particular, we show that the observations usually agree qualitatively on anomalies in individual years while it is not always possible to use them for the quantitative validation of the amplitude of interannual variability. The regional climate model is capable of improving the spatial distribution of precipitation. At the same time, it strongly underestimates summer precipitation and its variability, while interannual variations are well represented during the other seasons, in particular in the Central Asian mountains during winter and spring.
Abstract:
This paper describes a method that employs Earth Observation (EO) data to calculate spatiotemporal estimates of soil heat flux, G, using a physically-based method (the Analytical Method). The method involves a harmonic analysis of land surface temperature (LST) data. It also requires an estimate of near-surface soil thermal inertia; this property depends on soil textural composition and varies as a function of soil moisture content. The EO data needed to drive the model equations, and the ground-based data required to provide verification of the method, were obtained over the Fakara domain within the African Monsoon Multidisciplinary Analysis (AMMA) program. LST estimates (3 km × 3 km, one image every 15 min) were derived from MSG-SEVIRI data. Soil moisture estimates were obtained from ENVISAT-ASAR data, while estimates of leaf area index, LAI, (to calculate the effect of the canopy on G, largely due to radiation extinction) were obtained from SPOT-HRV images. The variation of these variables over the Fakara domain, and implications for values of G derived from them, were discussed. Results showed that this method provides reliable large-scale spatiotemporal estimates of G. Variations in G could largely be explained by the variability in the model input variables. Furthermore, it was shown that this method is relatively insensitive to model parameters related to the vegetation or soil texture. However, the strong sensitivity of thermal inertia to soil moisture content at low values of relative saturation (<0.2) means that in arid or semi-arid climates accurate estimates of surface soil moisture content are of utmost importance, if reliable estimates of G are to be obtained. This method has the potential to improve large-scale evaporation estimates, to aid land surface model prediction and to advance research that aims to explain failure in energy balance closure of meteorological field studies.
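A minimal sketch of the harmonic-analysis step at the heart of the Analytical Method, assuming a single-day LST series sampled every 15 minutes: LST is decomposed into Fourier harmonics, and each harmonic is scaled by the thermal inertia Γ and √(nω) and phase-shifted by π/4 to yield G. The toy series and the value of Γ are assumptions.

```python
import numpy as np

def soil_heat_flux(lst, dt, gamma, n_harmonics=3):
    """Analytical Method sketch: G(t) = Gamma * sum_n A_n * sqrt(n*w) *
    sin(n*w*t + phi_n + pi/4), with A_n and phi_n taken from a Fourier
    (harmonic) analysis of the land surface temperature series, which is
    assumed to span exactly one diurnal cycle."""
    t = np.arange(lst.size) * dt
    period = lst.size * dt              # assumed: one full day of data
    w = 2 * np.pi / period              # fundamental angular frequency
    coeffs = np.fft.rfft(lst - lst.mean()) * 2 / lst.size
    g = np.zeros_like(t)
    for n in range(1, n_harmonics + 1):
        a_n = np.abs(coeffs[n])
        phi_n = np.angle(coeffs[n]) + np.pi / 2   # cosine phase -> sine phase
        g += gamma * a_n * np.sqrt(n * w) * np.sin(n * w * t + phi_n + np.pi / 4)
    return g

# toy diurnal LST cycle sampled every 15 min; gamma in J m-2 K-1 s-1/2
dt = 900.0
t = np.arange(96) * dt
lst = 300 + 10 * np.sin(2 * np.pi * t / 86400)
G = soil_heat_flux(lst, dt, gamma=800.0)
```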
Abstract:
A new approach to the study of the local organization in amorphous polymer materials is presented. The method couples neutron diffraction experiments that explore the structure on the spatial scale 1–20 Å with the reverse Monte Carlo fitting procedure to predict structures that accurately represent the experimental scattering results over the whole momentum transfer range explored. Molecular mechanics and molecular dynamics techniques are also used to produce atomistic models independently from any experimental input, thereby providing a test of the viability of the reverse Monte Carlo method in generating realistic models for amorphous polymeric systems. An analysis of the obtained models in terms of single chain properties and of orientational correlations between chain segments is presented. We show the viability of the method with data from molten polyethylene. The analysis derives a model with average C-C and C-H bond lengths of 1.55 Å and 1.1 Å respectively, average backbone valence angle of 112°, a torsional angle distribution characterized by a fraction of trans conformers of 0.67 and, finally, a weak interchain orientational correlation at around 4 Å.
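As an illustration of the reverse Monte Carlo fitting procedure named above, the sketch below performs a single RMC move: a randomly chosen atom is displaced, and the move is accepted or rejected by a Metropolis criterion on the χ² misfit between the model and "experimental" structure factors. The toy S(q) calculation and all parameters are assumptions, not the production code used for the polyethylene fits.

```python
import numpy as np

def rmc_step(positions, s_exp, q, sigma, box, rng, max_move=0.2):
    """One reverse Monte Carlo move: displace a random atom, then accept
    with the Metropolis criterion on the chi^2 misfit between model and
    experimental structure factors (illustrative sketch only)."""
    def structure_factor(pos):
        # crude isotropic S(q) from pair distances (toy implementation)
        d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        d = d[np.triu_indices(len(pos), k=1)]
        return 1 + np.array([(np.sin(qi * d) / (qi * d)).mean() for qi in q])

    def chi2(pos):
        return np.sum((structure_factor(pos) - s_exp) ** 2 / sigma**2)

    old = chi2(positions)
    trial = positions.copy()
    i = rng.integers(len(positions))
    trial[i] = (trial[i] + rng.uniform(-max_move, max_move, 3)) % box
    new = chi2(trial)
    # accept downhill moves always, uphill moves with Boltzmann-like prob.
    if new < old or rng.random() < np.exp(old - new):
        return trial, new
    return positions, old

# toy usage with placeholder "experimental" data
rng = np.random.default_rng(1)
box = 20.0
pos = rng.uniform(0, box, (50, 3))
q = np.linspace(0.5, 10, 40)
pos, c2 = rmc_step(pos, np.ones_like(q), q, sigma=0.05, box=box, rng=rng)
```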
Abstract:
This chapter aims to provide an overview of building simulation in a theoretical and practical context. The following sections demonstrate the importance of simulation programs at a time when society is shifting towards a low carbon future and the practice of sustainable design becomes mandatory. The initial sections acquaint the reader with basic terminology and comment on the capabilities and categories of simulation tools before discussing the historical development of programs. The main body of the chapter considers the primary benefits and users of simulation programs, looks at the role of simulation in the construction process and examines the validity and interpretation of simulation results. The latter half of the chapter looks at program selection and discusses software capability, product characteristics, input data and output formats. The inclusion of a case study demonstrates the simulation procedure and key concepts. Finally, the chapter closes with a look into the future, commenting on the development of simulation capability, user interfaces and how simulation will continue to empower building professionals as society faces new challenges in a rapidly changing landscape.
Abstract:
Data assimilation refers to the problem of finding trajectories of a prescribed dynamical model in such a way that the output of the model (usually some function of the model states) follows a given time series of observations. Typically though, these two requirements cannot both be met at the same time: tracking the observations is not possible without the trajectory deviating from the proposed model equations, while adherence to the model requires deviations from the observations. Thus, data assimilation faces a trade-off. In this contribution, the sensitivity of the data assimilation with respect to perturbations in the observations is identified as the parameter that controls the trade-off. A relation between the sensitivity and the out-of-sample error is established, which allows the latter to be calculated under operational conditions. A minimum out-of-sample error is proposed as a criterion to set an appropriate sensitivity and to settle the discussed trade-off. Two approaches to data assimilation are considered, namely variational data assimilation and Newtonian nudging, also known as synchronization. Numerical examples demonstrate the feasibility of the approach.
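A minimal sketch of Newtonian nudging on the Lorenz-63 system (an assumed test model chosen for brevity, not necessarily the paper's examples): the model tendency is relaxed toward the observations by a term k(y_obs − h(x)), and the gain k plays exactly the role of the sensitivity that controls the trade-off discussed above.

```python
import numpy as np

def nudged_lorenz63(x0, obs, dt, k, steps_per_obs):
    """Newtonian nudging (synchronization) sketch on Lorenz-63:
    dx/dt = f(x) + k * (y_obs - x[0]), where only the first state
    component is observed. Larger k tracks the data more closely at the
    cost of larger deviations from the model equations."""
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

    def f(x):
        return np.array([sigma * (x[1] - x[0]),
                         x[0] * (rho - x[2]) - x[1],
                         x[0] * x[1] - beta * x[2]])

    x = x0.copy()
    traj = []
    for y in obs:
        for _ in range(steps_per_obs):
            nudge = np.array([k * (y - x[0]), 0.0, 0.0])
            x = x + dt * (f(x) + nudge)   # forward Euler for brevity
            traj.append(x.copy())
    return np.array(traj)

rng = np.random.default_rng(0)
obs = 1.0 + 0.1 * rng.standard_normal(200)   # synthetic observations
traj = nudged_lorenz63(np.array([1.0, 1.0, 1.0]), obs,
                       dt=0.01, k=5.0, steps_per_obs=5)
```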
Abstract:
Background and Aims Forest trees directly contribute to carbon cycling in forest soils through the turnover of their fine roots. In this study we aimed to calculate root turnover rates of common European forest tree species and to compare them with most frequently published values. Methods We compiled available European data and applied various turnover rate calculation methods to the resulting database. We used Decision Matrix and Maximum-Minimum formula as suggested in the literature. Results Mean turnover rates obtained by the combination of sequential coring and Decision Matrix were 0.86 yr⁻¹ for Fagus sylvatica and 0.88 yr⁻¹ for Picea abies when maximum biomass data were used for the calculation, and 1.11 yr⁻¹ for both species when mean biomass data were used. Using mean biomass rather than maximum resulted in about 30% higher values of root turnover. Using the Decision Matrix to calculate turnover rate doubled the rates when compared to the Maximum-Minimum formula. The Decision Matrix, however, makes use of more input information than the Maximum-Minimum formula. Conclusions We propose that calculations using the Decision Matrix with mean biomass give the most reliable estimates of root turnover rates in European forests and should preferentially be used in models and C reporting.
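The Maximum-Minimum formula referred to above reduces to a one-line calculation, shown below with both the mean-biomass and the maximum-biomass reference on an invented sequential-coring series; the Decision Matrix involves case-by-case rules on paired sampling dates and is not reproduced here.

```python
import numpy as np

def turnover_max_min(biomass, use_mean=True):
    """Maximum-Minimum formula sketch: annual fine-root production is
    taken as the difference between the yearly maximum and minimum
    standing biomass; turnover rate (yr^-1) is production divided by the
    mean (or maximum) biomass."""
    production = biomass.max() - biomass.min()
    reference = biomass.mean() if use_mean else biomass.max()
    return production / reference

# invented sequential-coring series (g m-2), one value per sampling date
biomass = np.array([310.0, 280.0, 355.0, 400.0, 330.0, 290.0])
print(turnover_max_min(biomass))                  # mean-biomass reference
print(turnover_max_min(biomass, use_mean=False))  # max-biomass reference
```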
Abstract:
Sensory afferent signals from neck muscles have been postulated to influence central cardiorespiratory control as components of postural reflexes, but neuronal pathways for this action have not been identified. The intermedius nucleus of the medulla (InM) is a target of neck muscle spindle afferents and is ideally located to influence such reflexes but is poorly investigated. To aid identification of the nucleus, we initially produced three-dimensional reconstructions of the InM in both mouse and rat. Neurochemical analysis including transgenic reporter mice expressing green fluorescent protein in GABA-synthesizing neurons, immunohistochemistry, and in situ hybridization revealed that the InM is neurochemically diverse, containing GABAergic and glutamatergic neurons with some degree of colocalization with parvalbumin, neuronal nitric oxide synthase, and calretinin. Projections from the InM to the nucleus tractus solitarius (NTS) were studied electrophysiologically in rat brainstem slices. Electrical stimulation of the NTS resulted in antidromically activated action potentials within InM neurons. In addition, electrical stimulation of the InM resulted in EPSPs that were mediated by excitatory amino acids and IPSPs mediated solely by GABA(A) receptors or by GABA(A) and glycine receptors. Chemical stimulation of the InM resulted in (1) a depolarization of NTS neurons that was blocked by NBQX (2,3-dioxo-6-nitro-1,2,3,4-tetrahydrobenzo[f]quinoxaline-7-sulfonamide) or kynurenic acid and (2) a hyperpolarization of NTS neurons that was blocked by bicuculline. Thus, the InM contains neurochemically diverse neurons and sends both excitatory and inhibitory projections to the NTS. These data provide a novel pathway that may underlie possible reflex changes in autonomic variables after neck muscle spindle afferent activation.
Abstract:
Cross-bred cow adoption is an important and potent policy variable precipitating subsistence household entry into emerging milk markets. This paper focuses on the problem of designing policies that encourage and sustain milk-market expansion among a sample of subsistence households in the Ethiopian highlands. In this context it is desirable to measure households’ ‘proximity’ to market in terms of the level of deficiency of essential inputs. This problem is compounded by four factors. One is the existence of cross-bred cow numbers (count data) as an important, endogenous decision by the household; second is the lack of a multivariate generalization of the Poisson regression model; third is the censored nature of the milk sales data (sales from non-participating households are, essentially, censored at zero); and fourth is an important simultaneity that exists between the decision to adopt a cross-bred cow, the decision about how much milk to produce, the decision about how much milk to consume and the decision to market that milk which is produced but not consumed internally by the household. Routine application of Gibbs sampling and data augmentation overcomes these problems in a relatively straightforward manner. We model the count data from two sites close to Addis Ababa in a latent, categorical-variable setting with known bin boundaries. The single-equation model is then extended to a multivariate system that accommodates the covariance between cross-bred-cow adoption, milk-output, and milk-sales equations. The latent-variable procedure proves tractable in extension to the multivariate setting and provides important information for policy formation in emerging-market settings.
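A sketch of the data-augmentation idea for the censored milk-sales equation, reduced to a univariate Tobit form: latent sales for censored (non-participating) households are imputed from a truncated normal inside each Gibbs sweep, after which the regression parameters have standard conjugate draws. This is illustrative and numerically naive; the paper's model is a multivariate system that also handles the count-data adoption equation.

```python
import numpy as np
from scipy.stats import norm

def gibbs_tobit(y, X, n_iter=2000, rng=None):
    """Gibbs sampler with data augmentation for y_i = max(0, x_i'beta + e_i),
    e_i ~ N(0, sig2): censored observations are imputed from a normal
    truncated above at zero, then beta (under a N(0, I) prior) and sig2
    are drawn from their conjugate conditionals."""
    rng = rng or np.random.default_rng(0)
    n, p = X.shape
    beta, sig2 = np.zeros(p), 1.0
    cens = y <= 0
    z = y.astype(float).copy()
    draws = []
    for _ in range(n_iter):
        # 1. impute latent sales for censored households (inverse-CDF draw)
        mu_c, s = X[cens] @ beta, np.sqrt(sig2)
        u = rng.uniform(0, 1, cens.sum())
        z[cens] = mu_c + s * norm.ppf(u * norm.cdf(-mu_c / s))
        # 2. draw beta | z, sig2 from its normal conditional
        V = np.linalg.inv(X.T @ X / sig2 + np.eye(p))
        beta = rng.multivariate_normal(V @ (X.T @ z) / sig2, V)
        # 3. draw sig2 | z, beta from its inverse-gamma conditional
        resid = z - X @ beta
        sig2 = 1.0 / rng.gamma(n / 2, 2.0 / (resid @ resid))
        draws.append(beta.copy())
    return np.array(draws)
```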
Abstract:
This paper summarizes and analyses available data on the surface energy balance of Arctic tundra and boreal forest. The complex interactions between ecosystems and their surface energy balance are also examined, including climatically induced shifts in ecosystem type that might amplify or reduce the effects of potential climatic change. High latitudes are characterized by large annual changes in solar input. Albedo decreases strongly from winter, when the surface is snow-covered, to summer, especially in nonforested regions such as Arctic tundra and boreal wetlands. Evapotranspiration (QE) of high-latitude ecosystems is less than from a freely evaporating surface and decreases late in the season, when soil moisture declines, indicating stomatal control over QE, particularly in evergreen forests. Evergreen conifer forests have a canopy conductance half that of deciduous forests and consequently lower QE and higher sensible heat flux (QH). There is a broad overlap in energy partitioning between Arctic and boreal ecosystems, although Arctic ecosystems and light taiga generally have higher ground heat flux because there is less leaf and stem area to shade the ground surface, and the thermal gradient from the surface to permafrost is steeper. Permafrost creates a strong heat sink in summer that reduces surface temperature and therefore heat flux to the atmosphere. Loss of permafrost would therefore amplify climatic warming. If warming caused an increase in productivity and leaf area, or fire caused a shift from evergreen to deciduous forest, this would increase QE and reduce QH. Potential future shifts in vegetation would have varying climate feedbacks, with largest effects caused by shifts from boreal conifer to shrubland or deciduous forest (or vice versa) and from Arctic coastal to wet tundra. An increase in logging activity in the boreal forests appears to reduce QE by roughly 50%, with little change in QH, while the ground heat flux is strongly enhanced.
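The energy partitioning described above is often summarised by the Bowen ratio β = QH/QE; the tiny sketch below, with assumed flux values, illustrates the conifer/deciduous contrast the abstract notes.

```python
def bowen_ratio(qh, qe):
    """Bowen ratio beta = QH / QE: beta > 1 means energy is partitioned
    toward sensible rather than latent heat."""
    return qh / qe

# assumed midsummer fluxes in W m-2, chosen only to illustrate the contrast
print(bowen_ratio(qh=200.0, qe=120.0))   # evergreen conifer: beta ~ 1.7
print(bowen_ratio(qh=120.0, qe=220.0))   # deciduous forest: beta ~ 0.5
```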
Abstract:
This article reports on a detailed empirical study of the way narrative task design influences the oral performance of second-language (L2) learners. Building on previous research findings, two dimensions of narrative design were chosen for investigation: narrative complexity and inherent narrative structure. Narrative complexity refers to the presence of simultaneous storylines; in this case, we compared single-story narratives with dual-story narratives. Inherent narrative structure refers to the order of events in a narrative; we compared narratives where this was fixed to others where the events could be reordered without loss of coherence. Additionally, we explored the influence of learning context on performance by gathering data from two comparable groups of participants: 60 learners in a foreign language context in Teheran and 40 in an L2 context in London. All participants recounted two of four narratives from cartoon picture prompts, giving a between-subjects design for narrative complexity and a within-subjects design for inherent narrative structure. The results show clearly that for both groups, L2 performance was affected by the design of the task: Syntactic complexity was supported by narrative storyline complexity and grammatical accuracy was supported by an inherently fixed narrative structure. We reason that the task of recounting simultaneous events leads learners into attempting more hypotactic language, such as subordinate clauses that follow, for example, while, although, at the same time as, etc. We reason also that a tight narrative structure allows learners to achieve greater accuracy in the L2 (within minutes of performing less accurately on a loosely structured narrative) because the tight ordering of events releases attentional resources that would otherwise be spent on finding connections between the pictures. The learning context was shown to have no effect on either accuracy or fluency but an unexpectedly clear effect on syntactic complexity and lexical diversity. The learners in London seem to have benefited from being in the target language environment by developing not more accurate grammar but a more diverse resource of English words and syntactic choices. In a companion article (Foster & Tavakoli, 2009), we compared their performance with native-speaker baseline data and found that, in terms of nativelike selection of vocabulary and phrasing, the learners in London are closing in on native-speaker norms. The study provides empirical evidence that L2 performance is affected by task design in predictable ways. It also shows that living within the target language environment, and presumably using the L2 in a host of everyday tasks outside the classroom, confers a distinct lexical advantage, not a grammatical one.