148 results for inaccuracy


Relevance:

10.00%

Publisher:

Abstract:

Uncertainty in data affects the decision-making process, as it increases both the risk and the cost of the decision. One of the challenges in minimising the impact of bounded uncertainty on any scheduling algorithm is the lack of information: only the upper and lower bounds are provided, without any known probability or membership function. By contrast, probabilistic uncertainty can draw on probability distributions, and fuzzy uncertainty on membership functions. McNaughton's algorithm finds the optimum schedule that minimises the makespan while allowing preemption of tasks. The challenge here is the bounded inaccuracy of the algorithm's input parameters, known as bounded uncertain data. This research uses interval programming to minimise the impact of bounded uncertainty in the input parameters on McNaughton's algorithm; it reduces the uncertainty of the cost-function estimate and increases its optimality. The research is based on the hypothesis that performing the calculations on interval values and then approximating the end result produces more accurate results than approximating each interval input and then performing numerical calculations.
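
As a rough illustration of the hypothesis (a minimal sketch, not the thesis's implementation; the task intervals and machine count below are hypothetical), McNaughton's optimal preemptive makespan bound, C_max = max(max_j p_j, Σ_j p_j / m), can be evaluated directly on interval-valued processing times, deferring any approximation to the final result:

```python
# Minimal sketch: McNaughton's makespan bound on interval-valued
# processing times. Tasks are (lower, upper) bounds with no known
# distribution; m identical machines, preemption allowed.

def mcnaughton_interval_makespan(tasks, m):
    """Interval estimate of the optimal preemptive makespan.

    Classic McNaughton bound: C_max = max(max_j p_j, sum_j p_j / m).
    The bound is monotone increasing in each p_j, so the interval
    endpoints can be propagated independently.
    """
    lo_sum = sum(lo for lo, _ in tasks)
    hi_sum = sum(hi for _, hi in tasks)
    lo_max = max(lo for lo, _ in tasks)
    hi_max = max(hi for _, hi in tasks)
    return (max(lo_max, lo_sum / m), max(hi_max, hi_sum / m))

if __name__ == "__main__":
    tasks = [(2.0, 3.0), (4.0, 5.0), (1.5, 2.5)]  # hypothetical intervals
    print(mcnaughton_interval_makespan(tasks, m=2))  # (4.0, 5.25)
```

Only at the end would a point estimate (e.g. the interval midpoint) be taken from the resulting makespan interval, in line with the stated hypothesis.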

Relevance:

10.00%

Publisher:

Abstract:

The digital elevation model (DEM) plays a substantial role in hydrological studies, from understanding catchment characteristics and setting up a hydrological model to mapping flood risk in a region. Depending on the nature of the study and its objectives, a high-resolution, reliable DEM is often desired for setting up a sound hydrological model. However, such a DEM is not always available and is generally expensive. Obtained through radar-based remote sensing, the Shuttle Radar Topography Mission (SRTM) is a publicly available DEM with a resolution of 92 m outside the US. It is a valuable source of elevation data where no surveyed DEM is available. However, apart from its coarse resolution, SRTM suffers from inaccuracy, especially over areas with dense vegetation cover, because radar signals cannot penetrate the canopy. This can lead to improper model setup as well as erroneous flood-risk mapping. This paper attempts to improve the SRTM dataset using the Normalised Difference Vegetation Index (NDVI), derived from the Visible Red and Near-Infrared bands of Landsat imagery at 30 m resolution, together with Artificial Neural Networks (ANNs). The assessment of the improvement and the applicability of this method in hydrology are highlighted and discussed.
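
A minimal sketch of the kind of workflow described (assumed, not the paper's exact model; the synthetic data, network size, and error structure below are hypothetical stand-ins) pairs the standard NDVI formula, NDVI = (NIR − Red) / (NIR + Red), with a small ANN that learns the vegetation-induced elevation bias:

```python
# Sketch: ANN correction of SRTM heights driven by Landsat-derived NDVI.
# All arrays below are synthetic stand-ins for the real rasters.
import numpy as np
from sklearn.neural_network import MLPRegressor

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red), per pixel."""
    return (nir - red) / (nir + red + 1e-9)

rng = np.random.default_rng(0)
n = 500
red = rng.uniform(0.05, 0.25, n)                # visible red reflectance
nir = rng.uniform(0.20, 0.60, n)                # near-infrared reflectance
v = ndvi(red, nir)

srtm = rng.uniform(50.0, 200.0, n)              # SRTM heights at sample points
truth = srtm - 15.0 * v + rng.normal(0, 1, n)   # reference DEM; canopy bias
                                                # assumed to grow with NDVI

# Learn the elevation error from NDVI (and height), then subtract it.
X = np.column_stack([v, srtm])
err = srtm - truth
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                     random_state=0).fit(X, err)
corrected = srtm - model.predict(X)
```

In practice the reference elevations would come from surveyed ground points or a higher-quality DEM over part of the catchment, and the trained network would then be applied to the full SRTM raster.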

Relevance:

10.00%

Publisher:

Abstract:

This study examines the contemporary viewer and proposes a review of the literature on the Spanish researcher Jesús Martín-Barbero. The review, based on Martín-Barbero's observations and on Latin American Cultural Studies, regards the individual as part of the processes of communication and draws on the concept of cultural mediation, treating the viewer as a component/agent/object of the production of meaning. The study perceives the viewer as an element of inaccuracy/inadequacy and understands it as the subject of these complex reactive processes involving communication and culture, contrasting this with the impressions that resulted from a discussion between a television professional and a communication researcher about that same viewer. The work also points to the viewer as a component of the mapping Martín-Barbero desired for the interpretation of contemporary mass processes, his concept of the "night map".

Relevance:

10.00%

Publisher:

Abstract:

The idea of considering imprecision in probabilities is old, beginning with the work of George Boole, who in 1854 sought to reconcile classical logic, which allows the modelling of complete ignorance, with probabilities. In 1921, John Maynard Keynes, in his book, made explicit use of intervals to represent imprecision in probabilities. But it was only with the work of Walley in 1991 that principles were established which should be respected by any probability theory that deals with imprecision. With the emergence of the theory of fuzzy sets, introduced by Lotfi Zadeh in 1965, another way of dealing with uncertainty and imprecision of concepts became available. Several ways of bringing Zadeh's ideas into probability theory were soon proposed, to deal with imprecision either in the events associated with the probabilities or in the probability values themselves. In particular, from 2003 James Buckley began developing a probability theory in which the values of the probabilities are fuzzy numbers. This fuzzy probability follows principles analogous to Walley's imprecise probabilities. On the other hand, the use of real numbers between 0 and 1 as truth degrees, as originally proposed by Zadeh, has the drawback of using very precise values to deal with uncertainty (how can one distinguish an element that satisfies a property to degree 0.423 from one that satisfies it to degree 0.424?). This motivated the development of several extensions of fuzzy set theory that incorporate some kind of imprecision. This work considers the extension proposed by Krassimir Atanassov in 1983, which adds an extra degree of uncertainty to model the hesitation felt when assigning the membership degree: one value indicates the degree to which the object belongs to the set, while the other indicates the degree to which it does not belong. In Zadeh's fuzzy set theory, this non-membership degree is, by default, the complement of the membership degree. In Atanassov's approach, the non-membership degree is somewhat independent of the membership degree, and the difference between the non-membership degree and the complement of the membership degree reveals the hesitation in assigning a membership degree. This extension is today called Atanassov's intuitionistic fuzzy set theory. It is worth noting that the term "intuitionistic" here bears no relation to its meaning in the context of intuitionistic logic. In this work, two proposals for interval probability are developed: the restricted interval probability and the unrestricted interval probability. Two notions of fuzzy probability are also introduced: the constrained fuzzy probability and the unconstrained fuzzy probability. Finally, two notions of intuitionistic fuzzy probability are introduced: the restricted intuitionistic fuzzy probability and the unrestricted intuitionistic fuzzy probability.
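
As a small illustration (a sketch of the standard Atanassov definitions, not this work's probability constructions), an intuitionistic pair assigns membership μ and non-membership ν independently, subject to μ + ν ≤ 1, with the slack π = 1 − μ − ν read as hesitation:

```python
# Sketch of Atanassov intuitionistic fuzzy degrees: membership mu and
# non-membership nu are assigned independently, subject to mu + nu <= 1.
from dataclasses import dataclass

@dataclass
class IFDegree:
    mu: float   # degree to which the object belongs to the set
    nu: float   # degree to which it does not belong

    def __post_init__(self):
        assert 0.0 <= self.mu <= 1.0 and 0.0 <= self.nu <= 1.0
        assert self.mu + self.nu <= 1.0, "intuitionistic condition"

    @property
    def hesitation(self) -> float:
        # pi = 1 - mu - nu: the hesitation in assigning membership.
        # In ordinary (Zadeh) fuzzy sets nu = 1 - mu, so pi = 0.
        return 1.0 - self.mu - self.nu

d = IFDegree(mu=0.6, nu=0.3)
print(d.hesitation)  # ~0.1
```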

Relevance:

10.00%

Publisher:

Abstract:

Reactive-optimisation procedures are responsible for the minimisation of online power losses in interconnected systems. These procedures are performed separately at each control centre and involve external network representations. If total losses can be minimised by the implementation of calculated local control actions, the entire system benefits economically, but such control actions generally result in a certain degree of inaccuracy, owing to errors in the modelling of the external system. Since these errors are inevitable, they must at least be maintained within tolerable limits by external-modelling approaches. Care must be taken to avoid unrealistic loss minimisation, as the local-control actions adopted can lead the system to points of operation which will be less economical for the interconnected system as a whole. The evaluation of the economic impact of the external modelling during reactive-optimisation procedures in interconnected systems, in terms of both the amount of losses and constraint violations, becomes important in this context. In the paper, an analytical approach is proposed for such an evaluation. Case studies using data from the Brazilian South-Southeast system (810 buses) have been carried out to compare two different external-modelling approaches, both derived from the equivalent-optimal-power-flow (EOPF) model. Results obtained show that, depending on the external-model representation adopted, the loss representation can be flawed. Results also suggest some modelling features that should be adopted in the EOPF model to enhance the economy of the overall system.
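
For reference (a standard power-flow identity, not the paper's EOPF formulation; the branch and bus data here are placeholders), the total active losses being minimised can be evaluated from a solved power flow as follows:

```python
# Sketch: total active losses from a solved power flow, summed per branch:
# P_loss = g_ij * (V_i^2 + V_j^2 - 2 V_i V_j cos(theta_i - theta_j))
import math

def total_losses(branches, V, theta):
    """branches: list of (i, j, g_ij) with series conductance g_ij (p.u.)
    V, theta: bus voltage magnitudes (p.u.) and angles (rad), by bus index
    """
    loss = 0.0
    for i, j, g in branches:
        loss += g * (V[i]**2 + V[j]**2
                     - 2.0 * V[i] * V[j] * math.cos(theta[i] - theta[j]))
    return loss

# Hypothetical two-branch example
print(total_losses([(0, 1, 5.0), (1, 2, 4.0)],
                   V=[1.02, 1.00, 0.98],
                   theta=[0.0, -0.05, -0.09]))
```

Comparing this quantity under different external-network representations is the kind of economic evaluation the paper's analytical approach addresses.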

Relevance:

10.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

10.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

10.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

10.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

10.00%

Publisher:

Abstract:

Graduate Program in Cartographic Sciences - FCT

Relevance:

10.00%

Publisher:

Abstract:

Graduate Program in Genetics and Animal Breeding - FCAV

Relevance:

10.00%

Publisher:

Abstract:

Many organizations currently face inventory management problems such as distributing inventory on time and maintaining the correct inventory levels to satisfy the customer or end user. Organizations understand the need to maintain accurate inventory levels but sometimes fall short, leading to a wide performance gap in maintaining inventory accurately. Inventory inaccuracy can consume much of the investment in purchasing inventory and often leads to excessive inventory. The research objective of this thesis is to provide decision-making criteria to management for closing or maintaining a warehouse based on basic purchasing and holding-cost information. The specific objectives provide information regarding the impact of inventory carrying cost, obsolete inventory, and inventory turns. The methodology section explains the carrying-cost ratio, which helps inventory managers adopt best practices to avoid obsolete inventory and reduce excessive inventory levels. The research model provides decision-making criteria based on the performance metric developed. The model and metric were validated by analysis of warehouse data, and the results indicated a shift from a two-echelon inventory supply chain to a one-echelon, Just-In-Time (JIT) based inventory supply chain. The recommendations from the case study were used by a health care organization to reorganize its supply chain, resulting in a reduction of excessive inventory.
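
As a pointer to the quantities involved (textbook definitions given as a minimal sketch, not the thesis's exact performance metric; the figures are illustrative), inventory turns and the carrying-cost ratio can be computed as:

```python
# Sketch of the standard inventory figures the methodology relies on.

def inventory_turns(cogs: float, avg_inventory_value: float) -> float:
    """Annual cost of goods sold divided by average inventory value."""
    return cogs / avg_inventory_value

def carrying_cost_ratio(annual_carrying_cost: float,
                        avg_inventory_value: float) -> float:
    """Annual holding cost (storage, capital, obsolescence) as a
    fraction of the average inventory value held."""
    return annual_carrying_cost / avg_inventory_value

print(inventory_turns(1_200_000, 300_000))    # 4.0 turns/year
print(carrying_cost_ratio(75_000, 300_000))   # 0.25
```

Low turns combined with a high carrying-cost ratio is the pattern that flags obsolete or excessive inventory in a decision of this kind.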

Relevance:

10.00%

Publisher:

Abstract:

This study describes the enantioselective analysis of unbound and total concentrations of tramadol and its main metabolites O-desmethyltramadol (M1) and N-desmethyltramadol (M2) in human plasma. Sample preparation was preceded by an ultrafiltration step to separate the unbound drug. Both the ultrafiltrate and plasma samples were submitted to liquid/liquid extraction with methyl t-butyl ether. Separation was performed on a Chiralpak® AD column, and tandem mass spectrometry with an electrospray ionization source, positive ion mode and multiple reaction monitoring was used as the detection system. Linearity was observed in the ranges 0.2-600 and 0.5-250 ng/mL for the total and unbound concentrations of the tramadol enantiomers, respectively, and 0.1-300 and 0.25-125 ng/mL for the total and unbound concentrations of the M1 and M2 enantiomers, respectively. The lower limits of quantitation were 0.2 and 0.5 ng/mL for the total and unbound concentrations of each tramadol enantiomer, respectively, and 0.1 and 0.25 ng/mL for the total and unbound concentrations of the M1 and M2 enantiomers, respectively. Intra- and interassay reproducibility and inaccuracy did not exceed 15%. Clinical application of the method to patients with neuropathic pain showed plasma accumulation of (+)-tramadol and (+)-M2 after a single oral dose of racemic tramadol. The unbound fractions of tramadol, M1 and M2 were not enantioselective in the patients investigated.
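
For orientation (the standard definition, not part of the paper's validated method; the concentrations are illustrative), the unbound fraction follows directly from the paired ultrafiltrate and total plasma measurements:

```python
# Sketch: unbound fraction from ultrafiltrate vs. total concentration.
def fraction_unbound(c_unbound_ng_ml: float, c_total_ng_ml: float) -> float:
    """fu = unbound concentration / total concentration."""
    return c_unbound_ng_ml / c_total_ng_ml

print(fraction_unbound(42.0, 180.0))  # ~0.23
```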

Relevance:

10.00%

Publisher:

Abstract:

This work reports on two different projects carried out during the three-year Doctor of Philosophy course. In the first year, a project on a Capacitive Pressure Sensor Array for Aerodynamic Applications was developed in the Applied Aerodynamics research team of the Second Faculty of Engineering, University of Bologna, Forlì, Italy, in collaboration with the ARCES laboratories of the same university. Capacitive pressure sensors were designed and fabricated, and their mechanical and electrical behaviours were investigated theoretically and experimentally by means of finite element method simulations and wind tunnel tests. During the design phase, the sensor figures of merit were considered and evaluated for specific aerodynamic applications. The aim of this work is the production of low-cost MEMS-alternative devices suitable for a sensor network to be implemented in an air data system. The last two years were dedicated to a project on a Wireless Pressure Sensor Network for Nautical Applications. The aim of the developed sensor network is to sense the weak pressure field acting on the sail plan of a full-batten sail by means of instrumented battens, providing a real-time differential pressure map over the entire sail surface. The wireless sensor network and the sensing unit were designed, fabricated and tested in the faculty laboratories. A static non-linear coupled mechanical-electrostatic simulation was developed to predict the pressure-versus-capacitance static characteristic suitable for the transduction process and to tune the geometry of the transducer to reach the required resolution, sensitivity and time response over the appropriate full-scale pressure input. A time-dependent viscoelastic error model was inferred and developed from experimental data in order to model, predict and reduce the inaccuracy bound due to the viscoelastic phenomena affecting the Mylar® polyester film used for the sensor diaphragm. The two above-mentioned projects are strictly related but are presented separately in this work.
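
A minimal sketch of a capacitive transducer's static characteristic (a small-deflection, parallel-plate approximation, not the thesis's coupled mechanical-electrostatic simulation; the diaphragm dimensions and material constants below are hypothetical, Mylar-like values):

```python
# Sketch: approximate C(p) for a clamped circular diaphragm of radius a
# and thickness t over a gap, using the small-deflection center deflection
# w0 = p a^4 / (64 D), with plate stiffness D = E t^3 / (12 (1 - nu^2)),
# and treating the plates as parallel with the gap reduced by w0.
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(p, a, t, gap, E, nu):
    D = E * t**3 / (12.0 * (1.0 - nu**2))   # flexural rigidity
    w0 = p * a**4 / (64.0 * D)              # center deflection under p (Pa)
    area = math.pi * a**2
    return EPS0 * area / (gap - w0)         # crude but monotone in p

# Hypothetical diaphragm: 10 mm radius, 50 um thick, 100 um gap
print(capacitance(p=500.0, a=10e-3, t=50e-6,
                  gap=100e-6, E=4e9, nu=0.38))  # ~28 pF
```

Sweeping p through the full-scale input and inspecting dC/dp is the kind of tuning of resolution and sensitivity the simulation described above performs with far higher fidelity.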

Relevance:

10.00%

Publisher:

Abstract:

Smoke spikes occurring during transient engine operation have detrimental health effects and increase fuel consumption by requiring more frequent regeneration of the diesel particulate filter. This paper proposes a decision tree approach to real-time detection of smoke spikes for control and on-board diagnostics purposes. A contemporary, electronically controlled heavy-duty diesel engine was used to investigate the deficiencies of smoke control based on the fuel-to-oxygen-ratio limit. With the aid of transient and steady state data analysis and empirical as well as dimensional modeling, it was shown that the fuel-to-oxygen ratio was not estimated correctly during the turbocharger lag period. This inaccuracy was attributed to the large manifold pressure ratios and low exhaust gas recirculation flows recorded during the turbocharger lag period, which meant that the engine control module correlations for the exhaust gas recirculation flow and the volumetric efficiency had to be extrapolated. The engine control module correlations were based on steady state data, and it was shown that, unless the turbocharger efficiency is artificially reduced, the large manifold pressure ratios observed during the turbocharger lag period cannot be achieved at steady state. Additionally, the cylinder-to-cylinder variations during this period were shown to be sufficiently significant to make the average fuel-to-oxygen ratio a poor predictor of the transient smoke emissions. The steady state data also showed higher smoke emissions with higher exhaust gas recirculation fractions at constant fuel-to-oxygen-ratio levels. This suggests that, even if the fuel-to-oxygen ratios were estimated accurately for each cylinder, they would still be ineffective as smoke limiters. A decision tree trained on snap throttle data and pruned with engineering knowledge was able to use the inaccurate engine control module estimates of the fuel-to-oxygen ratio, together with the engine control module estimate of the exhaust gas recirculation fraction, the engine speed, and the manifold pressure ratio, to predict 94% of all spikes occurring over the Federal Test Procedure cycle. The advantages of this non-parametric approach over commonly used parametric empirical methods such as regression were described. Finally, an application of accurate smoke spike detection was illustrated with dimensional and empirical modeling: increasing the injection pressure at points of high opacity reduces the cumulative particulate matter emissions substantially with a minimal increase in the cumulative nitrogen oxide emissions.
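
A minimal sketch of the approach (hypothetical data, feature ranges, and labels; the paper's tree was trained on snap-throttle data and pruned with engineering knowledge rather than a depth cap) using the four features the study identifies:

```python
# Sketch: decision-tree smoke-spike detector on ECM-style features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0.02, 0.12, n),   # ECM fuel-to-oxygen ratio estimate
    rng.uniform(0.00, 0.35, n),   # ECM EGR fraction estimate
    rng.uniform(600, 2100, n),    # engine speed, rpm
    rng.uniform(0.8, 1.6, n),     # manifold pressure ratio
])
# Hypothetical label: spikes cluster at rich mixtures during turbo lag
# (large pressure ratio, low EGR flow).
y = ((X[:, 0] > 0.09) & (X[:, 3] > 1.3) & (X[:, 1] < 0.10)).astype(int)

tree = DecisionTreeClassifier(max_depth=4).fit(X, y)  # depth cap as a
print(tree.score(X, y))                               # stand-in for pruning
```

The non-parametric tree can exploit the ECM estimates even where they are biased, because it learns thresholds on the observed (inaccurate) values rather than assuming a functional form, which is the advantage over regression noted above.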