959 results for Output data


Relevance: 60.00%

Abstract:

Small errors proved catastrophic. Our purpose is to remark that a very small cause which escapes our notice can determine a considerable effect that we cannot fail to see, and then we say that the effect is due to chance. Small differences in the initial conditions produce very great ones in the final phenomena; a small error in the former will produce an enormous error in the latter. When dealing with any kind of electrical device specification, it is important to note that a pair of test conditions defines a test: the forcing function and the limit. Forcing functions define the external operating constraints placed upon the device under test, and the test itself determines how well the device responds to those constraints. Forcing inputs to threshold, for example, represents the most demanding form of testing because it places those inputs as close as possible to the actual switching points and guarantees that the device will meet its input-output specifications.

Prediction becomes impossible by the classical analytical methods bounded by Newton and Euclid. We have found that nonlinear dynamics is the natural state of all circuits and devices, and that opportunities exist for effective error detection in a nonlinear dynamics and chaos environment.

Today, a set of linear limits is established around every aspect of a digital or analog circuit, and devices falling outside them are considered bad after failing the test. Deterministic chaos in circuits is a fact, not a possibility, as this Ph.D. research has confirmed. Under standard linear informational methodologies, this chaotic data product is usually undesirable, and we are trained to prefer a more regular stream of output data.

This Ph.D. research explored the possibility of taking the foundation of a well-known simulation and modeling methodology and introducing nonlinear dynamics and chaos precepts to produce a new error-detection instrument able to bring together streams of data scattered in space and time, thereby mastering deterministic chaos and changing the reputation of chaotic data as a potential risk to practical system-status determination.
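
The sensitive dependence on initial conditions described above is easy to demonstrate numerically. Below is a minimal sketch, using the chaotic logistic map rather than the dissertation's circuit models, showing how a difference of one part in a million in the initial condition grows to order one within a few dozen iterations; the map and all parameter values are assumptions chosen for illustration only.

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x_{n+1} = r * x_n * (1 - x_n) in its chaotic regime
# (r = 4). Purely illustrative; not the dissertation's circuits.

def logistic_map(x0: float, r: float = 4.0, steps: int = 40) -> list[float]:
    """Iterate the logistic map from x0 for `steps` iterations."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_map(0.300000)   # nominal initial condition
b = logistic_map(0.300001)   # perturbed by 1e-6

for n in (0, 10, 20, 30):
    print(f"n={n:2d}  |a-b| = {abs(a[n] - b[n]):.6f}")
# The separation grows from 1e-6 to order 1 within a few dozen steps:
# a small error in the initial state becomes an enormous final error.
```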

Relevance: 60.00%

Abstract:

If we classify the variables in a program into various security levels, then a secure information flow analysis aims to verify statically that information in the program can flow only in ways consistent with the specified security levels. One well-studied approach is to formulate the rules of the secure information flow analysis as a type system. A major trend of recent research focuses on accommodating various sophisticated modern language features. However, this approach often leads to overly complicated and restrictive type systems, making them unfit for practical use. Moreover, problems essential to practical use, such as type inference and error reporting, have received little attention. This dissertation identified and solved major theoretical and practical hurdles to the application of secure information flow.

We adopted a minimalist approach to designing our language to ensure a simple, lenient type system. We started with a small imperative language and added only the features we deemed most important for practical use. One language feature we addressed is arrays. Because of the various leaking channels associated with array operations, arrays have received complicated and restrictive typing rules in other secure languages. We presented a novel approach to array operations that leads to simple and lenient typing of arrays.

Type inference is necessary because a user is usually concerned only with the security types of a program's input/output variables and would like the types of all auxiliary variables to be inferred automatically. We presented a type inference algorithm B and proved its soundness and completeness. Moreover, algorithm B stays close to the program and the type system, and therefore facilitates informative error reporting, generated in a cascading fashion. Algorithm B and the error reporting have been implemented and tested.

Lastly, we presented a novel framework for developing applications that ensure user information privacy. In this framework, core computations are defined as code modules that involve input/output data from multiple parties. Secure flow policies are refined incrementally based on feedback from type checking and inference. Core computations interact with code modules from the involved parties only through well-defined interfaces, and all code modules are digitally signed to ensure their authenticity and integrity.
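
As a rough illustration of the kind of rule such a type system enforces, here is a minimal sketch of a two-level (Low/High) flow check for simple assignments. The language, the lattice, and the single typing rule are simplifications invented for illustration; they are not the dissertation's actual type system or its algorithm B.

```python
# A minimal two-level secure-flow check for straight-line assignments.
# Rule: an assignment x := e is legal only if the level of every
# variable read in e is <= the level of x (no High -> Low flow).

from enum import IntEnum

class Level(IntEnum):
    LOW = 0    # public
    HIGH = 1   # secret

def check_assign(env: dict[str, Level], target: str,
                 sources: list[str]) -> None:
    """Reject the assignment if secret data would flow to a public var."""
    rhs_level = max((env[v] for v in sources), default=Level.LOW)
    if rhs_level > env[target]:
        raise TypeError(f"illegal flow: {sources} ({rhs_level.name}) -> "
                        f"{target} ({env[target].name})")

env = {"secret": Level.HIGH, "public": Level.LOW, "out": Level.HIGH}
check_assign(env, "out", ["secret", "public"])   # ok: High := join(High, Low)
try:
    check_assign(env, "public", ["secret"])      # High -> Low: rejected
except TypeError as e:
    print(e)
```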

Relevance: 60.00%

Abstract:

This dissertation comprises three individual chapters that examine different explanatory variables affecting firm performance. Chapter Two proposes an additional determinant of firm survival. Based on a detailed examination of firm survival in the British automobile industry between 1895 and 1970, we conclude that a firm's selection of submarket (defined by quality level) influenced survival. In contrast to findings for the US automobile industry, there is no evidence of first-mover advantage in the market as a whole; however, we do find evidence of first-mover advantage after conditioning on submarket choice. Chapter Three examines the effects of product line expansion on firm performance in terms of survival time. Based on a detailed examination of firm survival time in the same industry and period, we find that diversification exerts a positive effect on firm survival. Furthermore, our findings support the literature with respect to the impacts of submarket type, pre-entry experience, and timing of entry on firm survival time. Chapter Four examines corporate diversification in U.S. manufacturing and service firms. We develop measures of how related a firm's diverse activities are, using input-output data and the NAICS classification to construct indexes of "vertical relatedness" and "complementarity", and we find strong relationships between these two measures. We use profitability and excess value as measures of firm performance. Econometric analysis reveals no relationship between the degree of relatedness of diversification and firm performance over the study period.
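
As a sketch of how a relatedness index can be read off input-output data, the following computes a simple "vertical relatedness" measure for an industry pair from a direct-requirements matrix. The tiny matrix and the exact formula are illustrative assumptions; the chapter's own indexes may be constructed differently.

```python
# Vertical relatedness from an input-output coefficient matrix.
# io[i, j] = dollars of industry i's output used to produce one dollar
# of industry j's output (direct requirements). Values are invented;
# real work would use official input-output tables and NAICS codes.

import numpy as np

io = np.array([
    [0.05, 0.20, 0.01],   # steel
    [0.02, 0.03, 0.15],   # parts
    [0.00, 0.01, 0.04],   # autos
])

def vertical_relatedness(i: int, j: int, io: np.ndarray) -> float:
    """Average of the two directed input requirements between i and j."""
    return 0.5 * (io[i, j] + io[j, i])

print(vertical_relatedness(0, 2, io))   # steel <-> autos: weakly related
print(vertical_relatedness(1, 2, io))   # parts <-> autos: strongly related
```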

Relevance: 60.00%

Abstract:

In the current study, we compared the technical efficiency of smallholder rice farmers with and without credit in northern Ghana using data from a farm household survey. We fitted a stochastic frontier production function to input and output data to measure technical efficiency. We addressed self-selection into credit participation using propensity score matching and found that mean efficiency did not differ between credit users and non-users: credit-participating households had an efficiency of 63.0 percent, compared to 61.7 percent for non-participants. The results indicate significant inefficiencies in production and thus considerable scope for improving farmers' technical efficiency through better use of available resources at the current level of technology. Apart from labour and capital, all the conventional farm inputs had a significant effect on rice production. The determinants of efficiency included the respondent's age, sex, educational status, distance to the nearest market, herd ownership, access to irrigation and specialisation in rice production. From a policy perspective, we recommend that credit be channelled to farmers who demonstrate a need for it and show commitment to improving their production through external financing. Such a screening mechanism will ensure that credit goes to the farmers who need it to improve their technical efficiency.
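
To illustrate the mechanics of frontier-based efficiency scoring, here is a minimal sketch using corrected OLS (COLS) on a simulated Cobb-Douglas production function, a simpler stand-in for the stochastic frontier actually estimated in the study; all data, parameters, and variable names are invented.

```python
# Corrected-OLS (COLS) efficiency scoring on simulated farm data:
# fit a Cobb-Douglas frontier in logs, shift the intercept so the
# frontier envelopes the data, and score each farm against it.

import numpy as np

rng = np.random.default_rng(42)
n = 200
land = rng.uniform(0.5, 5.0, n)            # ha of rice
fert = rng.uniform(10, 200, n)             # kg of fertiliser
ineff = rng.exponential(0.2, n)            # one-sided inefficiency
y = 2.0 * land**0.6 * fert**0.3 * np.exp(-ineff)   # output (tonnes)

# ln y = b0 + b1 ln(land) + b2 ln(fert) - u
X = np.column_stack([np.ones(n), np.log(land), np.log(fert)])
beta, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)

resid = np.log(y) - X @ beta
te = np.exp(resid - resid.max())           # technical efficiency in (0, 1]
print(f"mean TE = {te.mean():.3f}")
```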

Relevance: 60.00%

Abstract:

The transfer of the right information at the right time, and high-quality work at every stage of a company's order-to-delivery chain, are key factors in fulfilling the value proposition and the quality promised to the customer. The goal of this Master's thesis is to develop tools for an SME for better information management and high-quality work in its ERP system. The research method was action research: the author took part in the target company's daily work for four months. Data were also collected through semi-structured interviews and a survey; the research approach is qualitative. The thesis consists of a theoretical part and an applied part, after which the results are summarised in the conclusions and the summary. ERP systems collect and store the information that employees, and the people working at the company's interfaces, enter into them. It is therefore extremely important that the company has documented, uniform operating models for the processes it uses to store information in its systems. This thesis examines the SME's current practices for storing information in its ERP system and then develops uniform instructions for entering a sales order contract into the ERP system. The theoretical part presents quality from different perspectives, explains what quality management systems are and how they are developed, and covers the principles of make-to-order production and the significance of an ERP system for the business. It lays the groundwork for the applied part, in which, after a problem analysis, the company's own quality management system is developed together with new working models for exchanging and storing information. A further result is more efficient use of the ERP system, implemented by the software vendor: unnecessary item codes were removed from the program and its configuration was streamlined. The thesis produced work instructions for carrying out the core processes, as well as a dedicated quality management system to support the company's core and support processes and its information management.

Relevance: 60.00%

Abstract:

Master's dissertation — Universidade de Brasília, Faculdade Gama, Graduate Program in Biomedical Engineering, 2015.

Relevance: 60.00%

Abstract:

Summary: Climate change has the potential to affect rainfall, temperature and air humidity, which in turn drive plant evapotranspiration and crop water requirements. The purpose of this research is to assess climate change impacts on irrigation water demand, based on future scenarios derived from the PRECIS (Providing Regional Climates for Impacts Studies) system, using boundary conditions of the HadCM3 model dynamically downscaled with the Hadley Centre regional circulation model HadRM3P. Monthly time series of average temperature and rainfall were generated for 1961-90 (baseline) and for the future (2040). Reference evapotranspiration was estimated from monthly average temperature. The projected climate change impact on irrigation water demand proved to be the combined result of evapotranspiration and rainfall trends. Impacts were mapped over the target region using geostatistical methods. Average crop water needs were estimated to increase by 18.7% and 22.2% for the 2040 A2 and B2 scenarios, respectively.

Objective: To analyze climate change impacts on irrigation water requirements at the river basin scale, using downscaling techniques applied to a climate change model.

Method: The study area lies between 4°39′30″ and 5°40′00″ South and 37°35′30″ and 38°27′00″ West. The crop pattern in the target area was characterized with respect to the types of irrigated crops, their areas and cropping schedules, and the area and type of irrigation systems adopted. The PRECIS system (Jones et al., 2004) was used to generate climate predictions for the target area, using the boundary conditions of the Hadley Centre model HadCM3 (Johns et al., 2003). The time scale of interest for the climate change impact evaluation was the year 2040, representing the period 2025 to 2055. The output data from the climate model were interpolated over latitude/longitude by ordinary kriging, using tools available in a Geographic Information System, to produce thematic maps.
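
The abstract estimates reference evapotranspiration from monthly average temperature alone. Thornthwaite's (1948) formula is the classical temperature-only method, sketched below as a plausible illustration; the paper does not name the exact formula it used, so treat the choice of method, and the sample temperatures, as assumptions.

```python
# Thornthwaite potential evapotranspiration from monthly mean
# temperatures, before the standard day-length correction.

def thornthwaite_pet(monthly_t: list[float]) -> list[float]:
    """Unadjusted monthly PET (mm/month) from 12 monthly means (deg C)."""
    heat = sum((max(t, 0.0) / 5.0) ** 1.514 for t in monthly_t)
    a = (6.75e-7 * heat**3 - 7.71e-5 * heat**2
         + 1.792e-2 * heat + 0.49239)
    return [16.0 * (10.0 * max(t, 0.0) / heat) ** a for t in monthly_t]

# Illustrative monthly means for a semi-arid site (deg C):
temps = [27.1, 27.3, 26.8, 26.5, 26.2, 25.8,
         25.9, 26.7, 27.6, 28.0, 28.1, 27.6]
print([round(v, 1) for v in thornthwaite_pet(temps)])   # mm/month
```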

Relevance: 60.00%

Abstract:

The Belt and Road Initiative (BRI) is a project launched by the Chinese Government whose main goal is to connect more than 65 countries in Asia, Europe, Africa and Oceania by developing infrastructure and facilities. To support the prevention or mitigation of landslide hazards that may affect the mainland infrastructure of the BRI, a landslide susceptibility analysis of the countries involved has been carried out. Given the large study area, the analysis used a multi-scale approach: susceptibility was mapped first at continental scale and then at national scale. The study area selected for the continental assessment is South Asia, where a pixel-based landslide susceptibility map was produced using the Weight of Evidence method and validated with Receiver Operating Characteristic (ROC) curves. We then selected the regions of west Tajikistan and north-east India for investigation at national scale. Data scarcity is a common condition in many countries involved in the Initiative; therefore, in addition to the landslide susceptibility assessment of west Tajikistan, conducted with a Generalized Additive Model and validated by ROC curves, we examined, in the same study area, the effect of an incomplete landslide dataset on the predictive capacity of statistical models. The entire PhD research activity was conducted using only open data and open-source software. In this context, an open-source plugin for QGIS was implemented to support the analysis. The SZ-tool allows the user to carry out susceptibility assessments from data preprocessing through susceptibility mapping to the final classification. All output data of the analyses conducted are freely available and downloadable. This text describes the research activity of the last three years; each chapter reports the text of an article published in an international scientific journal during the PhD.
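
To make the Weight of Evidence method concrete, here is a minimal sketch computing the positive and negative weights for a single binary predictor class from pixel counts; the counts are invented, and a real analysis combines many predictor classes.

```python
# Weight of Evidence for one binary predictor class B (e.g. a slope
# class) against a binary landslide inventory L, from a 2x2 table.

import math

def weights_of_evidence(n_b_l, n_b_notl, n_notb_l, n_notb_notl):
    """W+ and W- from counts of pixels in/out of class B, with/without L."""
    n_l = n_b_l + n_notb_l              # landslide pixels
    n_notl = n_b_notl + n_notb_notl     # landslide-free pixels
    w_plus = math.log((n_b_l / n_l) / (n_b_notl / n_notl))
    w_minus = math.log((n_notb_l / n_l) / (n_notb_notl / n_notl))
    return w_plus, w_minus

w_plus, w_minus = weights_of_evidence(
    n_b_l=800, n_b_notl=9_200,          # landslides inside / outside B
    n_notb_l=200, n_notb_notl=89_800,   # complement of B
)
print(f"W+ = {w_plus:.2f}, W- = {w_minus:.2f}, "
      f"contrast = {w_plus - w_minus:.2f}")
# Summing the weights of all predictor classes at a pixel (plus the
# prior log-odds) gives the posterior log-odds of landslide presence.
```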

Relevance: 40.00%

Abstract:

The aim of this paper is to analyse the impact of university knowledge and technology transfer activities on academic research output. Specifically, we study whether researchers with collaborative links to the private sector publish less than their peers without such links, after controlling for other sources of heterogeneity. We report findings from a longitudinal dataset on researchers from two engineering departments in the UK between 1985 and 2006. Our results indicate that researchers with industrial links publish significantly more than their peers. Academic productivity, though, is higher for low levels of industry involvement than for high levels.

Relevance: 40.00%

Abstract:

We evaluate conditional predictive densities for U.S. output growth and inflation using a number of commonly used forecasting models that rely on a large number of macroeconomic predictors. More specifically, we evaluate how well conditional predictive densities based on the commonly used normality assumption fit actual realizations out-of-sample. Our focus on predictive densities acknowledges the possibility that, although some predictors can improve or deteriorate point forecasts, they might have the opposite effect on higher moments. We find that normality is rejected for most models in some dimension according to at least one of the tests we use. Interestingly, however, combinations of predictive densities appear to be correctly approximated by a normal density: the simple, equal average when predicting output growth and Bayesian model average when predicting inflation.
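
The core idea behind such density evaluations can be sketched as follows: if a normal predictive density is correctly specified, the standardized forecast errors are i.i.d. N(0, 1) and their probability integral transforms are uniform on (0, 1). The simulated data and the particular tests below (Jarque-Bera, Kolmogorov-Smirnov) are illustrative assumptions; the paper's own test battery differs.

```python
# Checking a normal predictive density via standardized errors and
# probability integral transforms (PITs), on simulated data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
T = 200
mu = rng.normal(2.0, 0.5, T)               # point forecasts
sigma = np.full(T, 1.0)                    # forecast std deviations
y = mu + rng.standard_t(df=4, size=T)      # fat-tailed realisations

z = (y - mu) / sigma                       # standardized errors
pit = stats.norm.cdf(z)                    # probability integral transforms

print("Jarque-Bera p-value:", stats.jarque_bera(z).pvalue)
print("KS-vs-uniform p-value:", stats.kstest(pit, "uniform").pvalue)
# Fat tails in the realisations should lead both tests to reject.
```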

Relevance: 40.00%

Abstract:

Various studies investigating the future impacts of integrating high levels of renewable energy make use of historical meteorological (met) station data to produce estimates of future generation. Hourly means of 10 m horizontal wind speed are extrapolated to a standard turbine hub height using the wind profile power law or the log law, and used to simulate the hypothetical power output of a turbine at that location; repeating this procedure over many viable locations can build a picture of future electricity generation. However, the estimate of hub-height wind speed depends on the choice of the wind shear exponent a or the roughness length z0, and requires a number of simplifying assumptions. This paper investigates the sensitivity of generation estimates to this choice using a case study of a met station in West Freugh, Scotland. The results show that the wind shear exponent is a particularly sensitive parameter whose choice can lead to significant variation in estimated hub-height wind speed, and hence in the estimated future generation potential of a region.
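
The two extrapolation rules at issue are easy to state and compare. The sketch below applies the power law and the log law to an illustrative 10 m wind observation; the hub height, shear exponents, and roughness length are assumed values, not the paper's calibration for West Freugh.

```python
# Hub-height wind speed by the wind profile power law and the log law.

import math

def power_law(v_ref: float, z_ref: float, z_hub: float, alpha: float) -> float:
    """Power law: v(z) = v_ref * (z / z_ref)**alpha."""
    return v_ref * (z_hub / z_ref) ** alpha

def log_law(v_ref: float, z_ref: float, z_hub: float, z0: float) -> float:
    """Log law: v(z) = v_ref * ln(z / z0) / ln(z_ref / z0)."""
    return v_ref * math.log(z_hub / z0) / math.log(z_ref / z0)

v10 = 6.0                                   # 10 m hourly-mean wind (m/s)
for alpha in (0.10, 0.14, 0.20):            # plausible onshore range
    v80 = power_law(v10, z_ref=10, z_hub=80, alpha=alpha)
    print(f"alpha={alpha:.2f}: v(80 m) = {v80:.2f} m/s")
print(f"log law, z0=0.03 m: v(80 m) = {log_law(v10, 10, 80, 0.03):.2f} m/s")
# Because turbine power scales roughly with v**3 below rated speed,
# a modest spread in alpha gives a large spread in estimated output.
```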

Relevance: 40.00%

Abstract:

Many macroeconomic series, such as U.S. real output growth, are sampled quarterly, although potentially useful predictors are often observed at a higher frequency. We examine whether a mixed-data sampling (MIDAS) approach can improve forecasts of output growth. The MIDAS specification used in the comparison includes an autoregressive term in a novel way. We find that using monthly data on the current quarter leads to significant improvements in forecasting current- and next-quarter output growth, and that MIDAS is an effective way to exploit monthly data compared with alternative methods.
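
A minimal sketch of a MIDAS regression with an autoregressive term is given below, using exponential Almon weights to aggregate three monthly observations per quarter. The data are simulated and the specification is a generic illustration, not the paper's novel AR treatment.

```python
# Generic AR-MIDAS regression: quarterly y_t on its own lag and on
# three monthly indicator values aggregated with Almon weights.

import numpy as np
from scipy.optimize import minimize

def almon_weights(theta1, theta2, m=3):
    """Normalised exponential Almon weights over m high-frequency lags."""
    k = np.arange(1, m + 1)
    w = np.exp(theta1 * k + theta2 * k**2)
    return w / w.sum()

rng = np.random.default_rng(1)
T, m = 120, 3
x = rng.normal(size=(T, m))                # 3 monthly obs per quarter
true_w = almon_weights(0.5, -0.3)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.3 * y[t-1] + 1.5 * x[t] @ true_w + rng.normal(scale=0.5)

def ssr(params):
    c, rho, beta, t1, t2 = params
    fit = c + rho * np.r_[0, y[:-1]] + beta * x @ almon_weights(t1, t2)
    return np.sum((y - fit) ** 2)

res = minimize(ssr, x0=[0, 0.1, 1.0, 0.1, -0.1], method="Nelder-Mead")
print("estimated (c, rho, beta, theta1, theta2):", np.round(res.x, 2))
```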

Relevance: 40.00%

Abstract:

We consider the impact of data revisions on the forecast performance of a SETAR regime-switching model of U.S. output growth. Data uncertainty affects real-time forecast performance both through the model's parameter estimates and through the forecast being conditioned on data measured with error. We find that benchmark revisions do affect the performance of the non-linear model of the growth rate, and that its performance relative to a linear comparator deteriorates in real time compared with a pseudo out-of-sample forecasting exercise.
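
For concreteness, here is a minimal two-regime SETAR sketch showing how conditioning on data measured with error can place a forecast in the wrong regime; all parameter values are invented.

```python
# A two-regime SETAR(2; 1, 1) conditional-mean forecast: growth follows
# one AR(1) below the threshold and another above it.

def setar_step(y_lag: float, threshold: float = 0.0) -> float:
    """One-step SETAR forecast of growth given lagged growth."""
    if y_lag <= threshold:                 # low-growth / recession regime
        return -0.2 + 0.3 * y_lag
    else:                                  # expansion regime
        return 0.5 + 0.4 * y_lag

# Conditioning on a noisy first release rather than the revised value
# can flip the regime, and with it the forecast:
first_release, revised = -0.1, 0.2
print("forecast from first release:", setar_step(first_release))  # regime 1
print("forecast from revised data :", setar_step(revised))        # regime 2
```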

Relevance: 40.00%

Abstract:

Vintage-based vector autoregressive models of a single macroeconomic variable are shown to be a useful vehicle for obtaining forecasts of different maturities of future and past observations, including estimates of post-revision values. The forecasting performance of models which include information on annual revisions is superior to that of models which only include the first two data releases. However, the empirical results indicate that a model which reflects the seasonal nature of data releases more closely does not offer much improvement over an unrestricted vintage-based model which includes three rounds of annual revisions.
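
A minimal sketch of the vintage-based setup is given below: each row stacks the estimates of recent quarters as published in one vintage, and a VAR is fitted across vintages, so the forecast of the next vintage vector delivers both a forecast of the new release and estimates of post-revision values. The tiny "data triangle" is invented; a real application would use a real-time dataset such as the Philadelphia Fed's.

```python
# A vintage-based VAR(1) on invented growth-rate vintages.
# rows = vintage dates; columns = that vintage's estimates of quarters
# t-2, t-1, t (the last column is the newest first release).

import numpy as np

vintages = np.array([
    [0.50, 0.70, 0.40],
    [0.55, 0.65, 0.30],
    [0.52, 0.68, 0.35],
    [0.54, 0.60, 0.45],
    [0.53, 0.62, 0.50],
    [0.51, 0.66, 0.38],
    [0.52, 0.64, 0.42],
])

# VAR(1) across vintages by multivariate least squares:
Y = vintages[1:]
X = np.column_stack([np.ones(len(vintages) - 1), vintages[:-1]])
B, *_ = np.linalg.lstsq(X, Y, rcond=None)

forecast = np.r_[1.0, vintages[-1]] @ B   # next vintage vector: revised
print(np.round(forecast, 3))             # values plus the new release
```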

Relevance: 40.00%

Abstract:

We examine how the accuracy of real-time forecasts from models that include autoregressive terms can be improved by estimating the models on ‘lightly revised’ data instead of using data from the latest-available vintage. The benefits of estimating autoregressive models on lightly revised data are related to the nature of the data revision process and the underlying process for the true values. Empirically, we find improvements in root mean square forecasting error of 2–4% when forecasting output growth and inflation with univariate models, and of 8% with multivariate models. We show that multiple-vintage models, which explicitly model data revisions, require large estimation samples to deliver competitive forecasts.
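
The comparison the paper runs can be mimicked on simulated data: generate a true AR(1) series, noisy first releases, and lightly revised values; estimate the AR(1) on each variant; and compare real-time forecast accuracy. Everything below (the revision process, noise scales, and evaluation window) is an illustrative assumption, not the paper's design.

```python
# Compare AR(1) forecasts of first releases when the model is estimated
# on lightly revised data versus on final (true) values.

import numpy as np

rng = np.random.default_rng(7)
T = 500
true = np.zeros(T)
for t in range(1, T):
    true[t] = 0.4 * true[t-1] + rng.normal(scale=1.0)

first = true + rng.normal(scale=0.6, size=T)     # noisy first releases
light = true + rng.normal(scale=0.2, size=T)     # after early revisions

def ar1_fit(y):
    """OLS slope and intercept of y_t on y_{t-1}."""
    b, a = np.polyfit(y[:-1], y[1:], 1)
    return a, b

def rmsfe(est_data, target=first):
    """Root mean square error of 100 one-step real-time forecasts."""
    a, b = ar1_fit(est_data[:-100])              # estimate on history
    preds = a + b * est_data[-101:-1]            # condition on same vintage
    return np.sqrt(np.mean((target[-100:] - preds) ** 2))

print("RMSFE, estimated on lightly revised data:", round(rmsfe(light), 3))
print("RMSFE, estimated on final (true) values :", round(rmsfe(true), 3))
```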