853 results for employment uncertainty
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This paper proposes a Fuzzy Goal Programming (FGP) model for a real aggregate production-planning problem, applied to a Brazilian sugar and ethanol milling company. The FGP model depicts the comprehensive production process of sugar, ethanol, molasses and derivatives, and considers the uncertainties involved in ethanol and sugar production. Decisions related to the agricultural and logistics phases were considered on a weekly planning horizon covering the whole harvesting season and the periods between harvests. The research provided interesting results about decisions in the agricultural stages of cutting, loading and transportation assigned to sugarcane suppliers and, especially, about milling decisions, where the choice of production process includes storage and logistics distribution. (C) 2014 Elsevier B.V. All rights reserved.
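To make the FGP formulation concrete, here is a minimal max-min sketch in Python, solved as a linear program with SciPy. The two products, the profit and capacity goals, their tolerances and all coefficients are hypothetical, not figures from the paper: each fuzzy goal gets a linear membership function over its tolerance interval, and the auxiliary variable lambda, the smallest membership degree, is maximized.

```python
from scipy.optimize import linprog

# Hypothetical two-product plan: x1, x2 are production quantities.
# Fuzzy goal 1: profit 3*x1 + 2*x2 should be "roughly >= 100" (tolerance 20).
# Fuzzy goal 2: capacity use x1 + x2 should be "roughly <= 25" (tolerance 5).
# Linear memberships turn these into
#   3*x1 + 2*x2 >= 80 + 20*lam   and   x1 + x2 <= 30 - 5*lam,
# and the LP maximizes lam, the smallest membership degree.
c = [0.0, 0.0, -1.0]                     # variables (x1, x2, lam); maximize lam
A_ub = [[-3.0, -2.0, 20.0],              # -(3*x1 + 2*x2) + 20*lam <= -80
        [1.0, 1.0, 5.0]]                 #   x1 + x2 + 5*lam <= 30
b_ub = [-80.0, 30.0]
bounds = [(0, None), (0, None), (0, 1)]  # lam is a membership degree in [0, 1]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x1, x2, lam = res.x
print(f"x1={x1:.2f}, x2={x2:.2f}, overall satisfaction lambda={lam:.2f}")
```

At the optimum, lambda reports how fully the least-satisfied goal is met, which is how an FGP compromise between conflicting production targets is usually read.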
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
We consider model selection uncertainty in linear regression. We study theoretically and by simulation the approach of Buckland and co-workers, who proposed estimating a parameter common to all models under study by taking a weighted average over the models, using weights obtained from information criteria or the bootstrap. This approach is compared with the usual approach in which the 'best' model is used, and with Bayesian model averaging. The weighted predictor behaves similarly to model averaging, with generally more realistic mean-squared errors than the usual model-selection-based estimator.
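As a worked illustration of the weighted-average idea, the sketch below fits a few nested polynomial regressions to synthetic data, converts AIC differences into weights in the style of Buckland and co-workers, and compares the weighted prediction with that of the single 'best' model. The data, the candidate models and the prediction point are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = np.linspace(0, 1, n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=n)   # true model is linear

def fit_poly(deg):
    X = np.vander(x, deg + 1, increasing=True)      # design matrix 1, x, ..., x^deg
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = deg + 2                                     # coefficients plus error variance
    aic = n * np.log(rss / n) + 2 * k               # Gaussian AIC up to a constant
    return beta, aic

fits = [fit_poly(d) for d in (1, 2, 3, 4)]
aic = np.array([a for _, a in fits])
w = np.exp(-0.5 * (aic - aic.min()))
w /= w.sum()                                        # model weights from AIC

x0 = 0.5                                            # predict at a common point
preds = np.array([np.polyval(b[::-1], x0) for b, _ in fits])
print("weights:", np.round(w, 3))
print("weighted prediction:  ", preds @ w)
print("best-model prediction:", preds[np.argmax(w)])
```

With several near-tied models the weighted predictor spreads its bets, which is why its mean-squared error tends to be more realistic than committing to whichever model happens to win the selection step.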
Abstract:
A fuzzy rule-based system was developed in this study, resulting in an index that indicates the level of uncertainty related to commercial transactions between cassava growers and their dealers. The system was developed based on the Transaction Cost Economics approach, from input variables regarding information sharing between grower and dealer on "Demand/purchase Forecasting", "Production Forecasting" and "Production Innovation". The output variable is the level of uncertainty in the transaction between seller and buyer, which may serve as a system for detecting inefficiencies. Evidence from 27 cassava growers registered in the Regional Development Offices of Tupa and Assis, São Paulo, Brazil, and from 48 of their dealers supported the development of the system. The mathematical model indicated that 55% of the growers present a Very High level of uncertainty and 33% present a Medium or High level; the others present a Low or Very Low level of uncertainty. From the model, simulations of external interventions can be run in order to reduce the level of uncertainty and, thus, lower transaction costs.
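A hand-rolled Mamdani-style sketch of such a rule-based index is given below. The triangular membership functions, the two rules and the 0-10 scoring of information sharing are hypothetical stand-ins for the paper's input variables and its five output levels.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

# Hypothetical 0-10 information-sharing scores stand in for the paper's inputs.
def low(x):  return tri(x, -1.0, 0.0, 5.0)
def high(x): return tri(x, 5.0, 10.0, 11.0)

def uncertainty_index(demand_sharing, production_sharing):
    z = np.linspace(0, 10, 201)                       # output universe (0 = certain)
    # Rule 1: poor sharing on either input -> high transaction uncertainty.
    r1 = max(low(demand_sharing), low(production_sharing))
    # Rule 2: good sharing on both inputs -> low transaction uncertainty.
    r2 = min(high(demand_sharing), high(production_sharing))
    # Clip each rule's output set by its firing strength, aggregate with max.
    agg = np.maximum(np.minimum(r1, high(z)), np.minimum(r2, low(z)))
    return np.sum(z * agg) / max(np.sum(agg), 1e-12)  # centroid defuzzification

print(uncertainty_index(2.0, 3.0))   # poor sharing -> index near the high end
print(uncertainty_index(9.0, 8.0))   # good sharing -> index near the low end
```

In the same spirit as the paper's model, external interventions can be simulated simply by moving the input scores and reading off the change in the defuzzified index.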
Abstract:
Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.
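The sketch below makes the layering concrete with a deliberately small example: a process model (site-level abundance), a data model (imperfect counts), and grid-based inference for the top-level parameter. All numbers are invented and the detection probability is assumed known; Binomial thinning of a Poisson keeps the marginal likelihood in closed form.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical hierarchy: lambda -> site abundance N_j -> imperfect count y_j.
lam_true, p_detect, n_sites = 12.0, 0.6, 40
N = rng.poisson(lam_true, n_sites)       # process model: true abundance per site
y = rng.binomial(N, p_detect)            # data model: each animal seen w.p. p

# Binomial thinning of a Poisson gives y_j ~ Poisson(p * lambda), so the
# marginal likelihood of lambda is available in closed form.
lam_grid = np.linspace(1, 30, 500)
log_lik = y.sum() * np.log(p_detect * lam_grid) - n_sites * p_detect * lam_grid
post = np.exp(log_lik - log_lik.max())   # flat prior on the grid
post /= post.sum()

mean = np.sum(lam_grid * post)
sd = np.sqrt(np.sum((lam_grid - mean) ** 2 * post))
print(f"posterior mean {mean:.2f} +/- {sd:.2f} (true {lam_true})")
print(f"naive estimate ignoring detection: {y.mean():.2f}")
```

The naive site mean is biased low by the detection layer; propagating both levels of the hierarchy is what recovers lambda with an honest uncertainty statement.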
Abstract:
Analytical methods accounting for imperfect detection are often used to facilitate reliable inference in population and community ecology. We contend that similar approaches are needed in disease ecology because these complicated systems are inherently difficult to observe without error. For example, wildlife disease studies often assign individuals, populations, or spatial units to states (e.g., susceptible, infected, post-infected), but the uncertainty associated with these state assignments remains largely ignored or unaccounted for. We demonstrate how recent developments incorporating observation error through repeated sampling extend quite naturally to hierarchical spatial models of disease effects, prevalence, and dynamics in natural systems. A highly pathogenic strain of avian influenza virus in migratory waterfowl and a pathogenic fungus recently implicated in the global loss of amphibian biodiversity are used as motivating examples. Both show that relatively simple modifications to study designs can greatly improve our understanding of complex spatio-temporal disease dynamics by rigorously accounting for uncertainty at each level of the hierarchy.
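A minimal sketch of the repeated-sampling idea, in the spirit of a single-season occupancy model: each unit is in a true state (say, infected or not) with probability psi, each of K visits detects that state with probability p, and both parameters are recovered by maximum likelihood. The simulation settings are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # inverse logit

rng = np.random.default_rng(2)
psi_true, p_true, n_sites, n_visits = 0.6, 0.4, 200, 4
z = rng.binomial(1, psi_true, n_sites)        # latent true state of each unit
det = rng.binomial(n_visits, p_true * z)      # detections over repeated visits

def neg_log_lik(theta):
    psi, p = expit(theta)                     # parameters kept on the logit scale
    never = det == 0
    # Units detected at least once are truly positive; binomial detection
    # likelihood (the binomial coefficient is constant, so it is dropped).
    ll_pos = (np.log(psi) + det[~never] * np.log(p)
              + (n_visits - det[~never]) * np.log(1 - p))
    # Units never detected: either positive-but-missed or truly negative.
    ll_zero = np.log(psi * (1 - p) ** n_visits + (1 - psi))
    return -(ll_pos.sum() + never.sum() * ll_zero)

fit = minimize(neg_log_lik, x0=np.zeros(2), method="Nelder-Mead")
print("psi, p estimates:", np.round(expit(fit.x), 3))  # close to (0.6, 0.4)
```

Without the repeated visits, psi and p are confounded (only their product is identified), which is precisely the state-assignment uncertainty the abstract argues disease studies should stop ignoring.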
Abstract:
Categorical data cannot be interpolated directly because they are outcomes of discrete random variables. Categorical variables are therefore transformed into indicator functions that can be handled by interpolation methods, and the interpolated indicator values are then back-transformed to the original categories. However, aspects such as the variability and uncertainty of interpolated values of categorical data have never been considered. In this paper we show that the interpolation variance can be used to map an uncertainty zone around boundaries between types of categorical variables. Moreover, it is shown that the interpolation variance is a component of the total variance of the categorical variables, as measured by the coefficient of unalikeability. (C) 2011 Elsevier Ltd. All rights reserved.
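The sketch below runs the indicator workflow on a hypothetical 1-D transect, with inverse-distance weighting as a simple stand-in for kriging: categories are transformed to indicators, the indicators are interpolated, the result is back-transformed by taking the most probable class, and the unalikeability u = 1 - sum_k p_k^2 flags the uncertainty zone around class boundaries.

```python
import numpy as np

# Hypothetical transect with three classes observed at six sample points.
x_obs = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
cat_obs = np.array([0, 0, 0, 1, 1, 2])           # class labels at the samples
n_classes = 3
indicators = np.eye(n_classes)[cat_obs]          # indicator (one-hot) transform

def interpolate(x0, power=2.0):
    """Inverse-distance weighting of the indicators (stand-in for kriging)."""
    d = np.abs(x_obs - x0)
    if np.any(d == 0):
        return indicators[d == 0][0]             # exact hit: return the sample
    w = 1.0 / d ** power
    return (w[:, None] * indicators).sum(axis=0) / w.sum()

for x0 in (1.5, 2.5, 3.5, 4.5):
    p = interpolate(x0)                          # interpolated class proportions
    u = 1.0 - np.sum(p ** 2)                     # unalikeability: 0 = certain
    print(f"x={x0}: back-transformed class {p.argmax()}, uncertainty {u:.3f}")
```

The printed uncertainty peaks at x=2.5 and x=4.5, i.e., exactly at the class boundaries, which is the zone the interpolation variance is meant to map.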
Abstract:
This article investigates the effect of product market liberalisation on employment, allowing for interactions between policies and institutions in product and labour markets. Using panel data for OECD countries over the period 1980-2002, we present evidence that product market deregulation is more effective at the margin when labour market regulation is high. The data also suggest that product market liberalisation may promote employment-enhancing labour market reforms.
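The sketch below shows, on synthetic data, how an interaction term captures the headline finding: the employment effect of product market deregulation is larger where labour market regulation is high. It is a bare OLS illustration with invented coefficients; country and year fixed effects, and the actual OECD panel, are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400                                    # synthetic country-year observations
pmr = rng.uniform(0, 1, n)                 # product market regulation (1 = strict)
lmr = rng.uniform(0, 1, n)                 # labour market regulation
# Employment falls with both, and the negative interaction makes the PMR effect
# strongest where labour markets are most regulated, mirroring the claim above.
emp = 60 - 5 * pmr - 3 * lmr - 4 * pmr * lmr + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), pmr, lmr, pmr * lmr])
beta, *_ = np.linalg.lstsq(X, emp, rcond=None)
print("coefficients:", np.round(beta, 2))  # roughly [60, -5, -3, -4]
# Employment gain from one unit of product market deregulation (lower pmr):
for level in (0.2, 0.8):
    gain = -(beta[1] + beta[3] * level)
    print(f"at labour regulation {level}: gain = {gain:.2f}")
```

The gain is larger at the higher level of labour market regulation, which is what 'more effective at the margin' means operationally in such a specification.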
Abstract:
In this paper, the effects of uncertainty and expected costs of failure on optimum structural design are investigated by comparing three distinct formulations of structural optimization problems. Deterministic Design Optimization (DDO) allows one to find the shape or configuration of a structure that is optimum in terms of mechanics, but the formulation grossly neglects parameter uncertainty and its effects on structural safety. Reliability-Based Design Optimization (RBDO) has emerged as an alternative to properly model the safety-under-uncertainty part of the problem. With RBDO, one can ensure that a minimum (and measurable) level of safety is achieved by the optimum structure. However, results are dependent on the failure probabilities used as constraints in the analysis. Risk Optimization (RO) increases the scope of the problem by addressing the competing goals of economy and safety. This is accomplished by quantifying the monetary consequences of failure, as well as the costs associated with construction, operation and maintenance. RO yields the optimum topology and the optimum point of balance between economy and safety. Results are compared for some example problems. The broader RO solution is found first, and the optimum results are used as constraints in DDO and RBDO. Results show that even when optimum safety coefficients are used as constraints in DDO, the formulation leads to configurations which respect these design constraints and reduce manufacturing costs, but increase total expected costs (including expected costs of failure). When the (optimum) system failure probability is used as a constraint in RBDO, this solution also reduces manufacturing costs while increasing total expected costs. This happens when the costs associated with different failure modes are distinct. Hence, a general equivalence between the formulations cannot be established: optimum structural design considering expected costs of failure cannot be controlled solely by safety factors nor by failure probability constraints, but will depend on the actual structural configuration. (C) 2011 Elsevier Ltd. All rights reserved.
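A one-variable toy version of the comparison is sketched below; the resistance model, the load and all costs are invented. RO minimizes construction cost plus expected failure cost directly, and its optimum then supplies the failure-probability constraint for RBDO and the central safety factor for DDO. With a single failure mode the three coincide by construction, which makes the abstract's point from the other direction: it is the presence of several failure modes with different costs that breaks the equivalence.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

# Hypothetical bar: resistance R ~ N(10*d, sigma^2) grows with design variable d,
# load S is deterministic; unit and failure costs are illustrative only.
S, sigma, c_unit, c_fail = 100.0, 15.0, 1.0, 5000.0

def p_fail(d):
    return norm.cdf((S - 10.0 * d) / sigma)      # P[R < S]

def total_cost(d):
    return c_unit * d + p_fail(d) * c_fail       # construction + expected failure

# RO: minimize total expected cost directly.
ro = minimize_scalar(total_cost, bounds=(5, 20), method="bounded")
d_ro = ro.x
print(f"RO:   d={d_ro:.2f}, Pf={p_fail(d_ro):.2e}, total cost={ro.fun:.2f}")

# RBDO: cheapest design meeting the RO-optimal Pf; the constraint is active and
# monotone here, so it can be inverted in closed form.
d_rbdo = (S - sigma * norm.ppf(p_fail(d_ro))) / 10.0
print(f"RBDO: d={d_rbdo:.2f} under the constraint Pf <= {p_fail(d_ro):.2e}")

# DDO: cheapest design meeting the central safety factor taken from the RO optimum.
gamma = 10.0 * d_ro / S
print(f"DDO:  d={gamma * S / 10.0:.2f} under the safety factor gamma={gamma:.2f}")
```

Once a second failure mode with a different consequence cost is added, a single Pf constraint or safety factor can no longer reproduce the RO optimum, which is the non-equivalence the abstract reports.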
Abstract:
AC Biosusceptometry (ACB) has previously been employed to record gastrointestinal motility. Our data show a reliable and successful evaluation of the gastrointestinal transit of liquid and solid meals in rats, which matters given the scarcity of methods and the number of experiments needed for the approval of drugs and medicinal plants. ACB permits real-time, simultaneous experiments in the same animal under preserved physiological conditions, handling both kinds of meal with simplicity and accuracy.
Abstract:
Doctoral Program: Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería
Abstract:
In the context of a testing laboratory, one of the most important aspects to deal with is the measurement result. Whenever decisions are based on measurement results, it is important to have some indication of the quality of the results. In every area concerned with noise measurement many standards are available, but without an expression of uncertainty it is impossible to judge whether two results are in compliance or not. ISO/IEC 17025 is an international standard related to the competence of calibration and testing laboratories. It contains the requirements that testing and calibration laboratories have to meet if they wish to demonstrate that they operate a quality system, are technically competent, and are able to generate technically valid results. ISO/IEC 17025 deals specifically with the requirements for the competence of laboratories performing testing and calibration and for the reporting of the results, which may or may not contain opinions and interpretations of the results. The standard requires appropriate methods of analysis to be used for estimating uncertainty of measurement. From this point of view, for a testing laboratory performing sound power measurement according to specific ISO standards and European Directives, the evaluation of measurement uncertainties is the most important factor to deal with. Sound power level measurement according to ISO 3744:1994, performed with a limited number of microphones distributed over a surface enveloping a source, is affected by a certain systematic error and a related standard deviation. Comparing measurements carried out with different microphone arrays is difficult, because the results are affected by systematic errors and standard deviations that depend on the number of microphones arranged on the surface, their spatial positions and the complexity of the sound field. A statistical approach can give an overview of the differences between sound power levels evaluated with different microphone arrays and an evaluation of the errors that afflict this kind of measurement. In contrast to the classical approach, which tends to follow the ISO GUM, this thesis presents a different point of view on the problem of comparing results obtained from different microphone arrays.
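A Monte Carlo sketch of the statistical approach is given below. A hypothetical source has a known surface-average level plus Gaussian 'directivity' deviations that a finite microphone array undersamples; repeating an ISO 3744-style energetic average over many trials shows how both the systematic error and the standard deviation of the sound power estimate depend on the number of microphones. The directivity model and all levels are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical source: 80 dB surface-average level over a hemisphere of radius
# 1 m, with 3 dB Gaussian directivity deviations at individual positions.
S_surface = 2 * np.pi * 1.0**2
lp_avg, directivity_sd = 80.0, 3.0
lw_ref = lp_avg + 10 * np.log10(S_surface)       # reference sound power level

def estimate_lw(n_mics, n_trials=5000):
    # Each microphone sees the average level plus a local directivity deviation.
    lp = lp_avg + rng.normal(0, directivity_sd, (n_trials, n_mics))
    # Energetic mean over the array, then the measurement-surface term.
    lw = 10 * np.log10(np.mean(10 ** (lp / 10), axis=1)) + 10 * np.log10(S_surface)
    return lw.mean() - lw_ref, lw.std()          # systematic error, std deviation

for n in (4, 10, 20, 40):
    bias, sd = estimate_lw(n)
    print(f"{n:2d} microphones: systematic error {bias:+.2f} dB, std dev {sd:.2f} dB")
```

Adding microphones shrinks the standard deviation quickly, but the systematic error settles at the offset between the energetic and the decibel average of the field, which is why array comparisons cannot be judged on spread alone.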