930 results for Input-output data
Abstract:
Abstract taken from the publication
Abstract:
Abstract taken from the publication
Abstract:
Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them also involves complicated workflows implemented as shell scripts. A new grid middleware system that is well suited to climate modelling applications is presented in this paper. Grid Remote Execution (G-Rex) allows climate models to be deployed as Web services on remote computer systems and then launched and controlled as if they were running on the user's own computer. Output from the model is transferred back to the user while the run is in progress to prevent it from accumulating on the remote system and to allow the user to monitor the model. G-Rex has a REST architectural style, featuring a Java client program that can easily be incorporated into existing scientific workflow scripts. Some technical details of G-Rex are presented, with examples of its use by climate modellers.
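The abstract describes a REST-style service whose client can be called from existing workflow scripts; the sketch below only illustrates that general pattern. The URL, endpoint paths and JSON fields are hypothetical and are not the actual G-Rex client or service API.

```python
# Illustrative sketch only: the base URL, endpoints and JSON fields are
# hypothetical and do NOT reflect the real G-Rex API.
import requests

BASE = "https://example.org/grex/services/mymodel"  # hypothetical service URL

# Submit a run, analogous to launching the model from a workflow script.
job = requests.post(f"{BASE}/jobs", json={"args": ["--years", "10"]}).json()
job_id = job["id"]

# Stream output back while the run is in progress, so data do not
# accumulate on the remote system and progress can be monitored locally.
with requests.get(f"{BASE}/jobs/{job_id}/output", stream=True) as r:
    with open("model_output.nc", "wb") as f:
        for chunk in r.iter_content(chunk_size=1 << 20):
            f.write(chunk)
```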
Abstract:
Global hydrological models (GHMs) simulate the land-surface hydrological dynamics of continental-scale river basins. Here we describe one such GHM, the Macro-scale Probability-Distributed Moisture model.09 (Mac-PDM.09). The model has undergone a number of revisions since it was last applied in the hydrological literature, and this paper provides a detailed description of the latest version. The main revisions are: (1) the ability to run the model for n repetitions, which provides more robust estimates of extreme hydrological behaviour; (2) the ability to use a gridded field of the coefficient of variation (CV) of daily rainfall for the stochastic disaggregation of monthly precipitation to daily precipitation; and (3) the ability to force the model with daily as well as monthly input climate data. We demonstrate the effect that each of these three revisions has on simulated runoff relative to the model before the revisions were applied. Importantly, we show that when Mac-PDM.09 is forced with monthly input data, it produces a negative runoff bias relative to daily forcings in regions of the globe where the day-to-day variability in relative humidity is high. The runoff bias can reach -80% for a small selection of catchments, although the absolute magnitude of the bias may be small. As such, we recommend that future applications of Mac-PDM.09 using monthly climate forcings acknowledge this bias as a limitation of the model. The performance of Mac-PDM.09 is evaluated by validating simulated runoff against observed runoff for 50 catchments. We also present a sensitivity analysis demonstrating that simulated runoff is considerably more sensitive to the method of PE calculation than to perturbations in the soil moisture and field capacity parameters.
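As an illustration of the second revision, the sketch below disaggregates a monthly precipitation total into daily values with a prescribed coefficient of variation. It is a generic gamma-based scheme written for this summary, not necessarily the disaggregation method actually implemented in Mac-PDM.09.

```python
# Generic stochastic disaggregation of a monthly precipitation total into daily
# values with a prescribed CV (illustrative; not the Mac-PDM.09 scheme itself).
import numpy as np

def disaggregate_month(monthly_total, cv, n_days=30, rng=None):
    rng = rng or np.random.default_rng()
    mean_daily = monthly_total / n_days
    # Gamma distribution parameterised by mean and CV: shape = 1/CV^2, scale = mean*CV^2
    shape = 1.0 / cv**2
    scale = mean_daily * cv**2
    daily = rng.gamma(shape, scale, size=n_days)
    # Rescale so the daily values sum exactly to the monthly total
    return daily * (monthly_total / daily.sum())

daily_precip = disaggregate_month(monthly_total=90.0, cv=1.2, n_days=30)
```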
Abstract:
Tropospheric ozone is an air pollutant thought to reduce crop yields across Europe. Much experimental scientific work has been completed or is currently underway to quantify yield effects at ambient ozone levels. In this research, we seek to evaluate directly whether such effects are observed at the farm level. This is done by intersecting a farm-level panel dataset for winter wheat farms in England and Wales with information on ambient ozone, and estimating a production function with ozone as a fixed input. Panel data methods, Generalised Method of Moments (GMM) techniques and nested exogeneity tests are employed in the estimation. The results confirm a small but statistically significant negative effect of ambient ozone levels on wheat yields.
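A stylised form of such a specification (our notation, not the paper's exact model) is

\ln y_{it} = \alpha_i + x_{it}'\beta + \gamma\, O_{it} + \varepsilon_{it},

where y_{it} is wheat yield on farm i in year t, x_{it} are conventional inputs, O_{it} is ambient ozone treated as a fixed input and \alpha_i is a farm effect; the sign and significance of \gamma carry the ozone effect, while GMM estimation and the exogeneity tests address potential correlation between inputs and \varepsilon_{it}.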
Abstract:
Rats with fornix transection, or with cytotoxic retrohippocampal lesions that removed entorhinal cortex plus ventral subiculum, performed a task that permits incidental learning about either allocentric (Allo) or egocentric (Ego) spatial cues without the need to navigate by them. Rats learned eight visual discriminations among computer-displayed scenes in a Y-maze, using the constant-negative paradigm. Every discrimination problem included two familiar scenes (constants) and many less familiar scenes (variables). On each trial, the rats chose between a constant and a variable scene, with the choice of the variable rewarded. In six problems, the two constant scenes had correlated spatial properties, either Allo (each constant always appeared in the same maze arm) or Ego (each constant always appeared in a fixed direction from the start arm) or both (Allo + Ego). In two No-Cue (NC) problems, the two constants appeared in randomly determined arms and directions. Intact rats learn problems with an added Allo or Ego cue faster than NC problems; this facilitation provides indirect evidence that they learn the associations between scenes and spatial cues, even though that is not required for problem solution. The fornix and retrohippocampal-lesioned groups learned NC problems at a similar rate to sham-operated controls and showed as much facilitation of learning by added spatial cues as did the controls; therefore, both lesion groups must have encoded the spatial cues and incidentally learned their associations with particular constant scenes. Similar facilitation was seen in subgroups that had short or long prior experience with the apparatus and task. Therefore, neither major hippocampal input-output system is crucial for learning about allocentric or egocentric cues in this paradigm, which does not require rats to control their choices or navigation directly by spatial cues.
Abstract:
This paper formally derives a blocked version of a new path-based neural branch prediction algorithm (FPP), using blocks of size two, to obtain a lower-cost hardware solution while maintaining an input-output characteristic similar to the original algorithm. The blocked solution, here referred to as the B2P algorithm, is obtained using graph theory and retiming methods. Verification approaches were exercised to show that the prediction performances obtained from the FPP and B2P algorithms differ by no more than one misprediction per thousand instructions using a known framework for branch prediction evaluation. For a chosen FPGA device, circuits generated from the B2P algorithm showed average area savings of over 25% against circuits for the FPP algorithm with similar time performance, making the proposed blocked predictor superior from a practical viewpoint.
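For orientation, the sketch below shows a generic path-based neural (perceptron-style) predictor, in which the prediction is a sum of weights selected by the addresses along the branch path. It is illustrative only and is not the FPP or B2P algorithm derived in the paper.

```python
# Generic path-based neural branch predictor sketch (illustrative only;
# not the FPP/B2P algorithms from the paper).
HIST_LEN = 8               # number of path elements used
TABLE_SIZE = 1024          # rows in the weight table
THRESHOLD = 2 * HIST_LEN   # common training-threshold heuristic

weights = [[0] * (HIST_LEN + 1) for _ in range(TABLE_SIZE)]

def predict(pc, path):
    """path: addresses of the last HIST_LEN branches, most recent first."""
    y = weights[pc % TABLE_SIZE][0]              # bias weight for this branch
    for i, addr in enumerate(path):
        y += weights[addr % TABLE_SIZE][i + 1]   # one weight per path position
    return y >= 0, y                             # (predicted taken?, confidence)

def train(pc, path, taken, y):
    """Update on a misprediction or when confidence is below the threshold."""
    if (y >= 0) != taken or abs(y) <= THRESHOLD:
        delta = 1 if taken else -1
        weights[pc % TABLE_SIZE][0] += delta
        for i, addr in enumerate(path):
            weights[addr % TABLE_SIZE][i + 1] += delta
```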
Abstract:
Human-like computer interaction systems require far more than just simple speech input/output. Such a system should communicate with the user verbally, using a conversational style of language. It should be aware of its surroundings and use this context in any decisions it makes. As a synthetic character, it should have a computer-generated human-like appearance, which, in turn, should be used to convey emotions, expressions and gestures. Finally, and perhaps most important of all, the system should interact with the user in real time, in a fluent and believable manner.
Abstract:
Two approaches are presented to calculate the weights for a Dynamic Recurrent Neural Network (DRNN) in order to identify the input-output dynamics of a class of nonlinear systems. The number of states of the identified network is constrained to be the same as the number of states of the plant.
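A typical continuous-time DRNN structure used for this kind of input-output identification (notation illustrative; the paper's exact parameterisation may differ) is

\dot{\hat{x}} = -A\,\hat{x} + W\,\sigma(\hat{x}) + B\,u, \qquad \hat{y} = C\,\hat{x},

with the dimension of \hat{x} fixed to the number of plant states, as the abstract's constraint requires; the two approaches then amount to different ways of choosing the weight matrices so that \hat{y} reproduces the plant's input-output behaviour.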
Abstract:
Recent experimental evidence suggests a finer genetic, structural and functional subdivision of the layers which form a cortical column. The classical layer II/III (LII/III) of rodent neocortex integrates ascending sensory information with contextual cortical information for behavioral read-out. We systematically investigated to what extent regular-spiking supragranular pyramidal neurons, located at different depths within the cortex, show different input-output connectivity patterns. Combining glutamate uncaging with whole-cell recordings and biocytin filling, we revealed a novel cellular organization of LII/III: (i) “Lower LII/III” pyramidal cells receive a very strong excitatory input from lemniscal LIV and far fewer inputs from paralemniscal LVa. They project to all layers of the home column, including a feedback projection to LIV, whereas transcolumnar projections are relatively sparse. (ii) “Upper LII/III” pyramidal cells also receive their strongest input from LIV but, in addition, a very strong and dense excitatory input from LVa. They project extensively to LII/III as well as to LVa and LVb of their home and neighboring columns. (iii) “Middle LII/III” pyramidal cells show an intermediate connectivity phenotype that stands in many ways in between the features described for lower versus upper LII/III. “Lower LII/III” intracolumnarly segregates and transcolumnarly integrates lemniscal information, whereas “upper LII/III” seems to integrate lemniscal with paralemniscal information. This suggests a fine-grained functional subdivision of the supragranular compartment containing multiple circuits, without any obvious cytoarchitectonic, or other structural or functional, correlate of a laminar border in rodent barrel cortex.
Abstract:
The current work discusses the compositional analysis of spectra that may be related to amorphous materials lacking discernible Lorentzian, Debye or Drude responses. We propose to model such a response using a 3-dimensional random RLC network with a descriptor formulation that is converted into an input-output transfer function representation. A wavelet identification study of these networks is performed to infer the composition of the networks. It was concluded that wavelet filter banks enable a parsimonious representation of the dynamics of excited randomly connected RLC networks. Furthermore, chemometric classification using the proposed technique enables the discrimination of dielectric samples with different compositions. The methodology is promising for the classification of amorphous dielectrics.
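In general terms, a descriptor (generalized state-space) formulation and the input-output transfer function obtained from it take the form

E\,\dot{x}(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t) + D\,u(t), \qquad G(s) = C\,(sE - A)^{-1}B + D,

where, for the networks studied here, the matrices would be assembled from the R, L and C elements of the random network; the specific assembly is not detailed in this summary.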
Abstract:
We extend extreme learning machine (ELM) classifiers to complex Reproducing Kernel Hilbert Spaces (RKHS) where the input/output variables as well as the optimization variables are complex-valued. A new family of classifiers, called complex-valued ELM (CELM), suitable for complex-valued multiple-input–multiple-output processing is introduced. In the proposed method, the associated Lagrangian is computed using induced RKHS kernels, adopting a Wirtinger calculus approach formulated as a constrained optimization problem, similarly to the conventional ELM classifier formulation. When training the CELM, the Karush–Kuhn–Tucker (KKT) theorem is used to solve the dual optimization problem, which consists of simultaneously satisfying the smallest-training-error and smallest-output-weight-norm criteria. The proposed formulation also addresses aspects of quaternary classification within a Clifford algebra context. For 2D complex-valued inputs, user-defined complex-coupled hyper-planes divide the classifier input space into four partitions. For 3D complex-valued inputs, the formulation generates three pairs of complex-coupled hyper-planes through orthogonal projections; the six hyper-planes then divide the 3D space into eight partitions. It is shown that the CELM problem formulation is equivalent to solving six real-valued ELM tasks, which are induced by projecting the chosen complex kernel across the different user-defined coordinate planes. A classification example of powdered samples on the basis of their terahertz spectral signatures is used to demonstrate the advantages of the CELM classifiers compared to their SVM counterparts. The proposed classifiers retain the advantages of their ELM counterparts, in that they can perform multiclass classification with lower computational complexity than SVM classifiers. Furthermore, because of their ability to perform classification tasks fast, the proposed formulations are of interest to real-time applications.
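For reference, the conventional equality-constrained ELM problem that the CELM extends can be written (standard notation) as

\min_{\beta,\,\xi} \; \tfrac{1}{2}\lVert\beta\rVert^{2} + \tfrac{C}{2}\sum_{i=1}^{N}\lVert\xi_i\rVert^{2} \quad \text{s.t.} \quad h(x_i)\,\beta = t_i - \xi_i, \quad i = 1,\dots,N,

where h(x_i) is the hidden-layer (kernel-induced) feature mapping, \beta the output weights, t_i the targets and \xi_i the training errors; in the complex-valued case these quantities are complex and the KKT conditions of the dual are derived with Wirtinger calculus, as described above.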
Abstract:
In Sweden, solar irradiation and space heating loads are unevenly distributed over the year, while domestic hot water loads may be nearly constant. Test results on solar collector performance are often reported as the yearly output of a certain collector at fixed temperatures, e.g. 25, 50 and 75 °C. These data are not suitable for dimensioning solar systems, because the actual performance of the collector depends heavily on the solar fraction and on the load distribution over the year.

At higher latitudes it is difficult to attain high solar fractions for buildings, due to overheating in summer and the small marginal output for added collector area. Solar collectors with internal reflectors offer possibilities to evade overheating problems and deliver more energy in seasons when the load is higher. There are methods for estimating the yearly angular irradiation distribution, but there is a lack of methods for describing the load and the storage in such a way as to enable optical design of season- and load-adapted collectors.

This report describes two methods for estimating solar system performance with relevance for season and load adaptation. Results regarding attainable solar fractions as a function of collector features, load profiles, load levels and storage characteristics are reported. The first method uses monthly collector output data at fixed temperatures from the simulation program MINSUN to estimate solar fractions for different load profiles and load levels. The load level is defined as the estimated yearly collector output at constant collector temperature divided by the yearly load. The following table exemplifies the results:

Collector type   Load profile     Load level   Solar fraction   Improvement over flat plate
Flat plate       DHW              75 %         59 %             -
Load adapted     DHW              75 %         66 %             12 %
Flat plate       Space heating    50 %         22 %             -
Load adapted     Space heating    50 %         28 %             29 %

The second method utilises simulations with one-hour timesteps for collectors connected to a simplified storage and a variable load. Collector output, optical and thermal losses, heat overproduction, load level and storage temperature are presented as functions of solar incidence angles. These data are suitable for the optical design of load-adapted solar collectors. Results for a Stockholm location indicate that a solar combisystem with a solar fraction around 30 % should have collectors that reduce heat production at solar heights above 30 degrees and have optimum efficiency for solar heights between 8 and 30 degrees.
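In symbols (our notation, following the definition above), the load level is

\mathrm{LL} = \frac{Q_{\mathrm{coll}}(T_{\mathrm{const}})}{Q_{\mathrm{load}}},

where Q_coll(T_const) is the estimated yearly collector output at a constant collector temperature and Q_load is the yearly load; the solar fractions in the table are then the share of that load actually met by the solar system.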
Abstract:
This paper investigates the degree of short-run and long-run co-movement in U.S. sectoral output data by estimating sectoral trends and cycles. A theoretical model based on Long and Plosser (1983) is used to derive a reduced form for sectoral output from first principles. Cointegration and common features (cycles) tests are performed; sectoral output data seem to share a relatively high number of common trends and a relatively low number of common cycles. A special trend-cycle decomposition of the data set is performed, and the results indicate a very similar cyclical behavior across sectors and a very different behavior for trends. Indeed, the sectors' cyclical components appear as one. In a variance decomposition analysis, prominent sectors such as Manufacturing and Wholesale/Retail Trade exhibit relatively important transitory shocks.
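A stylised version of the kind of decomposition involved (notation illustrative, not the paper's exact specification) writes the vector of sectoral outputs as

y_t = A\,\tau_t + C\,c_t, \qquad \tau_t = \tau_{t-1} + \eta_t,

where \tau_t collects common stochastic trends and c_t common stationary cycles; the finding of many common trends and few common cycles constrains the respective column dimensions of A and C, and the result that cyclical components "appear as one" corresponds to c_t being effectively one-dimensional.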
Abstract:
This study aims to clarify some aspects of the trade relationship between Brazil and China. It is largely a descriptive work that uses different product aggregations (Harmonized System; Broad Economic Categories) in order to build a complete picture of this relationship. The total effects on Brazilian production arising from Chinese demand for Brazilian products are also analysed, using the input-output matrix toolkit. The same effect is also computed for Brazil's trade with the other countries of the world, and a comparison is established. The study closes with an analysis of trade possibilities between the two countries that have not yet been explored.
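The input-output calculation behind these total effects is, in its standard Leontief form (standard notation; the abstract does not give the details of the Brazilian matrix),

x = (I - A)^{-1} f,

where A is the matrix of technical coefficients, f is a final-demand vector (here, Brazilian exports to China, or to the rest of the world for the comparison), and x is the total gross output, direct plus indirect, required to meet that demand.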