7 results for uncertainty-based coordination
in CORA - Cork Open Research Archive - University College Cork - Ireland
Abstract:
The last 30 years have seen Fuzzy Logic (FL) emerge as a method that either complements or challenges stochastic methods, the traditional approach to modelling uncertainty. However, the circumstances under which FL or stochastic methods should be used remain contested: the areas of application of statistical and FL methods overlap, and opinions differ as to when each method should be used. Practically relevant case studies comparing the two methods are lacking. This work compares stochastic and FL methods for the assessment of spare capacity, using pharmaceutical high purity water (HPW) utility systems as an example. The goal of this study was to find the most appropriate method for modelling uncertainty in industrial-scale HPW systems. The results provide evidence suggesting that stochastic methods are superior to FL methods for simulating uncertainty in chemical plant utilities, including HPW systems, in typical cases where extreme events (for example, peaks in demand) or day-to-day variation, rather than average values, are of interest. Average production output or other statistical measures may, for instance, be of interest in the assessment of workshops. Furthermore, the results indicate that the stochastic model should be used only if a deterministic simulation shows it to be necessary. Consequently, this thesis concludes that either deterministic or stochastic methods should be used to simulate uncertainty in chemical plant utility systems, and by extension some process systems, because extreme events and the modelling of day-to-day variation are important in capacity extension projects. Other reasons why stochastic HPW models are preferred to FL HPW models include: 1. The computer code for stochastic models is typically less complex than that of FL models, reducing code maintenance and validation issues. 2. In many respects FL models are similar to deterministic models. The need for an FL model rather than a deterministic model is therefore questionable for industrial-scale HPW systems as presented here (as well as other similar systems), since the latter require simpler models. 3. An FL model may be difficult to "sell" to an end-user, as its results represent "approximate reasoning", a definition of which is, however, lacking. 4. Stochastic models may be applied, with relatively minor modifications, to other systems, whereas FL models may not. For instance, the stochastic HPW system model could be used to model municipal drinking water systems, whereas the FL HPW model could not, or should not, be used on such systems. This is because the FL and stochastic modelling philosophies for a HPW system are fundamentally different. The stochastic model treats schedule and volume uncertainties as random phenomena described by statistical distributions based on either estimated or historical data. The FL model, on the other hand, simulates schedule uncertainties based on estimated operator behaviour, e.g. the tiredness of the operators and their working schedule. In a municipal drinking water distribution system, however, the notion of an "operator" breaks down. 5. Stochastic methods can account for uncertainties that are difficult to model with FL. The FL HPW system model does not account for dispensed volume uncertainty, as there appears to be no reasonable way to account for it with FL, whereas the stochastic model includes volume uncertainty.
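As a hedged illustration of the stochastic modelling philosophy described above (schedule and volume uncertainties treated as random phenomena drawn from statistical distributions), the following minimal Python sketch simulates daily HPW demand with day-to-day variation and occasional peaks, and estimates how often demand exceeds a given production capacity. The distributions, parameter values, and the capacity figure are hypothetical and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical parameters -- not taken from the thesis.
n_days = 10_000          # number of simulated operating days
base_demand_m3 = 40.0    # mean daily HPW demand (m^3)
day_to_day_sd = 6.0      # day-to-day variation (m^3)
peak_prob = 0.05         # probability of a demand peak on a given day
peak_extra_m3 = 25.0     # mean extra volume on a peak day (m^3)
capacity_m3 = 70.0       # assumed daily production capacity (m^3)

# Day-to-day variation modelled as normal noise around the base demand.
demand = rng.normal(base_demand_m3, day_to_day_sd, size=n_days)

# Occasional extreme events (peaks in demand) modelled as a Bernoulli trigger
# with an exponentially distributed extra volume.
peaks = rng.random(n_days) < peak_prob
demand[peaks] += rng.exponential(peak_extra_m3, size=peaks.sum())

shortfall_days = (demand > capacity_m3).mean()
print(f"Mean daily demand: {demand.mean():.1f} m^3")
print(f"Fraction of days exceeding capacity: {shortfall_days:.3%}")
```

Because the question of interest is the frequency of extreme events rather than the average, the tail of the simulated distribution (the shortfall fraction) is the relevant output, which is the kind of quantity the abstract argues stochastic models capture more naturally than FL models.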
Abstract:
For at least two millennia and probably much longer, the traditional vehicle for communicating geographical information to end-users has been the map. With the advent of computers, the means of both producing and consuming maps have been radically transformed, while the inherent nature of the information product has also expanded and diversified rapidly. This has given rise in recent years to the new concept of geovisualisation (GVIS), which draws on the skills of the traditional cartographer but extends them into three spatial dimensions and may also add temporality, photorealistic representations and/or interactivity. Demand for GVIS technologies and their applications has increased significantly in recent years, driven by the need to study complex geographical events, and in particular their associated consequences, and to communicate the results of these studies to a diversity of audiences and stakeholder groups. GVIS involves data integration, multi-dimensional spatial display, advanced modelling techniques, dynamic design and development environments, and field-specific application needs. To meet these needs, GVIS tools should be both powerful and inherently usable, in order to facilitate their role in helping to interpret and communicate geographic problems. However, no framework currently exists for ensuring this usability. The research presented here seeks to fill this gap by addressing the challenges of incorporating user requirements into GVIS tool design. It starts from the premise that usability in GVIS should be incorporated and implemented throughout the whole design and development process. To facilitate this, Subject Technology Matching (STM) is proposed as a new approach to assessing and interpreting user requirements. Based on STM, a new design framework called Usability Enhanced Coordination Design (UECD) is then presented, with the purpose of improving the overall usability of the design outputs. UECD places GVIS experts in a new key role in the design process, creating a more coordinated and integrated workflow and more focused, interactive usability testing. To prove the concept, these theoretical elements of the framework have been implemented in two test projects: one is the creation of a coastal inundation simulation for Whitegate, Cork, Ireland; the other is a flood mapping tool for Zhushan Town, Jiangsu, China. The two case studies successfully demonstrated the potential merits of the UECD approach when GVIS techniques are applied to geographic problem solving and decision making. The thesis delivers a comprehensive understanding of the development and challenges of GVIS technology, its usability concerns, and the associated user-centred design (UCD); it explores the possibility of embedding a UCD framework in GVIS design; it constructs a new theoretical design framework, UECD, which aims to make the whole design process usability-driven; and it develops the key concept of STM into a template set to improve the performance of a GVIS design. These key conceptual and procedural foundations can be built on by future research aimed at further refining and developing UECD as a useful design methodology for GVIS scholars and practitioners.
Abstract:
In developing a biosensor, the most important aspects that need to be emphasised are the specificity and selectivity of the transducer. These two vital prerequisites are of paramount importance in ensuring a robust and reliable biosensor. Improvements in electrochemical sensors can be achieved by using microelectrodes and by modifying the electrode surface (using chemical or biological recognition layers to improve sensitivity and selectivity). The fabrication and characterisation of silicon-based and glass-based gold microelectrode arrays with various geometries (band and disc) and dimensions (ranging from 10 μm to 100 nm) are reported. It was found that silicon-based transducers with a 10 μm gold microelectrode array exhibited the most stable and reproducible electrochemical measurements, hence this dimension was selected for further study. Chemical electrodeposition on both the 10 μm microband and microdisc arrays was found to be viable, using electro-assisted self-assembled sol-gel silica films and nanoporous-gold electrodeposition, respectively. The fabrication and characterisation of an on-chip electrochemical cell, with a fixed diameter/width dimension and varying interspacing, is also reported. In this regard, the 10 μm microelectrode array with an interspacing distance of 100 μm exhibited the best electrochemical response. Surface functionalisation of single-chip planar gold macroelectrodes was also studied for the immobilisation of histidine-tagged protein and antibody. Imaging techniques such as atomic force microscopy, fluorescence microscopy and scanning electron microscopy were employed to complement the electrochemical characterisations. A long-chain thiol self-assembled monolayer with NTA-metal ligand coordination was selected for the histidine-tagged protein, while a silanisation technique was selected for antibody immobilisation. The final part of the thesis describes the development of a label-free T-2 immunosensor using an impedimetric approach. Good antibody calibration curves were obtained for both the 10 μm microband and the 10 μm microdisc arrays. For the establishment of the T-2/HT-2 toxin calibration curve, it was found that a larger microdisc array dimension was required to produce a better calibration curve. The calibration curves established in buffer solution show that the microelectrode arrays were sensitive and able to detect levels of T-2/HT-2 toxin as low as 25 ppb (25 μg kg-1), with a limit of quantitation of 4.89 ppb for the 10 μm microband array and 1.53 ppb for the 40 μm microdisc array.
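For orientation only: limit-of-quantitation figures like those quoted above are commonly derived from the slope of a calibration curve and the noise of blank measurements (e.g. the convention LOQ = 10·σ_blank / slope). The snippet below is a generic sketch of that convention with made-up numbers; it is not the calibration procedure used in the thesis, which builds impedimetric calibration curves for T-2/HT-2 toxin.

```python
import numpy as np

# Hypothetical calibration data (concentration in ppb vs. sensor response);
# values are illustrative only, not from the thesis.
conc = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0])
response = np.array([0.02, 0.31, 0.58, 1.42, 2.85, 5.70])

slope, intercept = np.polyfit(conc, response, 1)

# Standard deviation of repeated blank (zero-concentration) measurements.
blank_sd = np.std([0.02, 0.03, 0.01, 0.02, 0.04], ddof=1)

lod = 3.3 * blank_sd / slope   # common convention for limit of detection
loq = 10.0 * blank_sd / slope  # common convention for limit of quantitation
print(f"slope = {slope:.3f} per ppb, LOD = {lod:.2f} ppb, LOQ = {loq:.2f} ppb")
```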
Abstract:
In many real-world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions, and each decision has a set of alternatives. Each alternative depends on the state of the world and is evaluated with respect to a number of criteria. In this thesis, we consider decision-making problems in two scenarios. In the first scenario, the current state of the world, under which the decisions are to be made, is known in advance. In the second scenario, the current state of the world is unknown at the time of making decisions. For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the algorithms for solving these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm for generating the upper bound at each node of the search tree (in the context of maximizing the values of the objectives). Since the size of the guiding upper-bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper-bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and we use such preferences to infer other preferences. The induced preference relation is then used to eliminate dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three different approaches. The first is based on linear programming; the second uses a distance-based algorithm (which uses a measure of the distance between a point and a convex cone); the third makes use of matrix multiplication, which results in much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference inference technique, can also be given an alternative semantics based on the well-known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before. For decision-making problems under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves the efficiency.
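As a minimal sketch of the Pareto machinery the abstract refers to (maximising a vector of objectives and keeping only undominated utility vectors), the following Python fragment filters a set of utility vectors down to its Pareto-maximal elements. It illustrates plain Pareto dominance only; the thesis's trade-off-induced dominance checks (linear programming, distance-to-cone, matrix multiplication) and ϵ-coverings are not reproduced here, and the example data are invented.

```python
from typing import List, Tuple

Vector = Tuple[float, ...]

def dominates(u: Vector, v: Vector) -> bool:
    """True if u Pareto-dominates v (maximisation): u is at least as good
    on every objective and strictly better on at least one."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_maximal(vectors: List[Vector]) -> List[Vector]:
    """Return the undominated (Pareto-maximal) subset of `vectors`."""
    return [u for u in vectors if not any(dominates(v, u) for v in vectors)]

if __name__ == "__main__":
    utilities = [(3.0, 1.0), (2.0, 2.0), (1.0, 3.0), (2.0, 1.0), (0.5, 0.5)]
    print(pareto_maximal(utilities))  # -> [(3.0, 1.0), (2.0, 2.0), (1.0, 3.0)]
```

Keeping the size of such undominated sets manageable during search, for example via elicited trade-offs or ϵ-coverings, is exactly the efficiency concern the abstract addresses.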
Abstract:
Many studies have shown the considerable potential of remote-sensing-based methods for deriving estimates of lake water quality. However, the reliable application of these methods across time and space is complicated by the diversity of lake types, sensor configurations, and the multitude of different algorithms proposed. This study tested one operational and 46 empirical algorithms sourced from the peer-reviewed literature that have individually shown potential for estimating lake water quality properties in the form of chlorophyll-a (algal biomass) and Secchi disc depth (SDD; water transparency) in independent studies. Nearly half (19) of the algorithms were unsuitable for use with the remote-sensing data available for this study. The remaining 28 were assessed using the Terra/Aqua satellite archive to identify the best-performing algorithms in terms of accuracy and transferability within the period 2001–2004 in four test lakes, namely Vänern, Vättern, Geneva, and Balaton. These lakes represent the broad continuum of large European lake types, varying in terms of eco-region (latitude/longitude and altitude), morphology, mixing regime, and trophic status. All algorithms were tested for each lake separately and for all lakes combined, to assess the degree of their applicability in ecologically different sites. None of the algorithms assessed in this study exhibited promise when all four lakes were combined into a single data set, and most algorithms performed poorly even for specific lake types. A chlorophyll-a retrieval algorithm originally developed for eutrophic lakes showed the most promising results (R² = 0.59) in oligotrophic lakes. Two SDD retrieval algorithms, one originally developed for turbid lakes and the other for lakes with various characteristics, exhibited promising results in relatively less turbid lakes (R² = 0.62 and 0.76, respectively). The results presented here highlight the complexity associated with remotely sensed lake water quality estimates and the high degree of uncertainty due to various limitations, including lake water optical properties and the choice of methods.
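To make concrete what testing an empirical algorithm typically involves in this setting, the sketch below fits a generic band-ratio chlorophyll-a model against hypothetical in-situ measurements and reports the R² value of the kind used to compare algorithms in the abstract. The NIR/red band-ratio form and the data are illustrative assumptions; they are not the 47 algorithms or the Terra/Aqua data evaluated in the study.

```python
import numpy as np

# Hypothetical matched data: satellite reflectances and in-situ chlorophyll-a.
# Band names and values are illustrative only.
nir_reflectance = np.array([0.010, 0.016, 0.022, 0.035, 0.050, 0.070])
red_reflectance = np.array([0.020, 0.022, 0.024, 0.027, 0.030, 0.033])
insitu_chl = np.array([2.1, 5.4, 9.8, 22.0, 41.0, 78.0])  # mg m^-3

# Generic empirical band-ratio form (often used for turbid, eutrophic waters):
# log(chl) ~ a * (NIR/red) + b.
ratio = nir_reflectance / red_reflectance
a, b = np.polyfit(ratio, np.log(insitu_chl), 1)
predicted_chl = np.exp(a * ratio + b)

# Coefficient of determination between predicted and observed chlorophyll-a.
ss_res = np.sum((insitu_chl - predicted_chl) ** 2)
ss_tot = np.sum((insitu_chl - insitu_chl.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"fitted a={a:.2f}, b={b:.2f}, R^2={r_squared:.2f}")
```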
Abstract:
In this paper, we consider Preference Inference based on a generalised form of Pareto order. Preference Inference aims at reasoning over an incomplete specification of user preferences. We focus on two problems. The Preference Deduction Problem (PDP) asks if another preference statement can be deduced (with certainty) from a set of given preference statements. The Preference Consistency Problem (PCP) asks if a set of given preference statements is consistent, i.e., the statements do not contradict each other. Here, preference statements are direct comparisons between alternatives (strict and non-strict). It is assumed that a set of evaluation functions is known by which all alternatives can be rated. We consider Pareto models, which induce order relations on the set of alternatives in a Pareto manner, i.e., one alternative is preferred to another only if it is preferred on every component of the model. We describe characterisations for deduction and consistency based on an analysis of the set of evaluation functions, and present algorithmic solutions and complexity results for PDP and PCP, based on Pareto models in general and for a special case. Furthermore, a comparison shows that inference based on Pareto models is less cautious than inference based on some other well-known types of preference model.
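As a small, hedged illustration of the Pareto models described above (an alternative is preferred to another only if it is at least as good under every evaluation function of the model), the sketch below compares two alternatives under a set of evaluation functions. The evaluation functions, the alternatives, and the weak/strict convention are invented for illustration; the paper's PDP and PCP algorithms over sets of preference statements are not reproduced here.

```python
from typing import Callable, Dict, List

Alternative = Dict[str, float]
Evaluation = Callable[[Alternative], float]

def pareto_weakly_prefers(a: Alternative, b: Alternative,
                          evaluations: List[Evaluation]) -> bool:
    """a is weakly Pareto-preferred to b: at least as good under every
    evaluation function in the model."""
    return all(f(a) >= f(b) for f in evaluations)

def pareto_strictly_prefers(a: Alternative, b: Alternative,
                            evaluations: List[Evaluation]) -> bool:
    """Strict preference: weakly preferred and strictly better somewhere."""
    return (pareto_weakly_prefers(a, b, evaluations)
            and any(f(a) > f(b) for f in evaluations))

if __name__ == "__main__":
    # Invented evaluation functions rating, say, cars by comfort and economy.
    evaluations = [lambda x: x["comfort"], lambda x: -x["price"]]
    car_a = {"comfort": 8.0, "price": 20_000.0}
    car_b = {"comfort": 7.0, "price": 22_000.0}
    print(pareto_strictly_prefers(car_a, car_b, evaluations))  # True
```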
Abstract:
Background: Raised blood pressure is an important risk factor for cardiovascular diseases and chronic kidney disease. We estimated worldwide trends in mean systolic and mean diastolic blood pressure, and the prevalence of, and number of people with, raised blood pressure, defined as systolic blood pressure of 140 mm Hg or higher or diastolic blood pressure of 90 mm Hg or higher. Methods: For this analysis, we pooled national, subnational, or community population-based studies that had measured blood pressure in adults aged 18 years and older. We used a Bayesian hierarchical model to estimate trends from 1975 to 2015 in mean systolic and mean diastolic blood pressure, and the prevalence of raised blood pressure for 200 countries. We calculated the contributions of changes in prevalence versus population growth and ageing to the increase in the number of adults with raised blood pressure. Findings: We pooled 1479 studies that had measured the blood pressures of 19·1 million adults. Global age-standardised mean systolic blood pressure in 2015 was 127·0 mm Hg (95% credible interval 125·7–128·3) in men and 122·3 mm Hg (121·0–123·6) in women; age-standardised mean diastolic blood pressure was 78·7 mm Hg (77·9–79·5) for men and 76·7 mm Hg (75·9–77·6) for women. Global age-standardised prevalence of raised blood pressure was 24·1% (21·4–27·1) in men and 20·1% (17·8–22·5) in women in 2015. Mean systolic and mean diastolic blood pressure decreased substantially from 1975 to 2015 in high-income western and Asia Pacific countries, moving these countries from having some of the highest worldwide blood pressure in 1975 to the lowest in 2015. Mean blood pressure also decreased in women in central and eastern Europe, Latin America and the Caribbean, and, more recently, central Asia, Middle East, and north Africa, but the estimated trends in these super-regions had larger uncertainty than in high-income super-regions. By contrast, mean blood pressure might have increased in east and southeast Asia, south Asia, Oceania, and sub-Saharan Africa. In 2015, central and eastern Europe, sub-Saharan Africa, and south Asia had the highest blood pressure levels. Prevalence of raised blood pressure decreased in high-income and some middle-income countries; it remained unchanged elsewhere. The number of adults with raised blood pressure increased from 594 million in 1975 to 1·13 billion in 2015, with the increase largely in low-income and middle-income countries. The global increase in the number of adults with raised blood pressure is a net effect of increase due to population growth and ageing, and decrease due to declining age-specific prevalence. Interpretation: During the past four decades, the highest worldwide blood pressure levels have shifted from high-income countries to low-income countries in south Asia and sub-Saharan Africa due to opposite trends, while blood pressure has been persistently high in central and eastern Europe.
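The decomposition mentioned in the Findings (the increase in the number of adults with raised blood pressure as a net effect of population growth and ageing versus declining age-specific prevalence) can be illustrated with a simple counterfactual calculation. The sketch below uses two age groups and entirely made-up numbers; the study's actual decomposition method and data are not reproduced here.

```python
import numpy as np

# Made-up illustration: population (millions) and age-specific prevalence of
# raised blood pressure in two age groups, at a baseline and a later year.
pop_1975 = np.array([2000.0, 500.0])   # adults aged 18-49, 50+ (millions)
pop_2015 = np.array([3200.0, 1300.0])
prev_1975 = np.array([0.15, 0.45])     # age-specific prevalence
prev_2015 = np.array([0.12, 0.38])     # declining prevalence in both groups

n_1975 = np.sum(pop_1975 * prev_1975)
n_2015 = np.sum(pop_2015 * prev_2015)

# Counterfactual: 2015 population structure with 1975 age-specific prevalence.
n_growth_ageing_only = np.sum(pop_2015 * prev_1975)

growth_ageing_effect = n_growth_ageing_only - n_1975    # pushes the number up
prevalence_effect = n_2015 - n_growth_ageing_only       # pulls it back down
print(f"1975: {n_1975:.0f} M, 2015: {n_2015:.0f} M "
      f"(+{growth_ageing_effect:.0f} M from growth/ageing, "
      f"{prevalence_effect:.0f} M from changing age-specific prevalence)")
```

In this toy example the number of adults with raised blood pressure still rises overall because the growth-and-ageing effect outweighs the decline in age-specific prevalence, mirroring the net effect described in the abstract.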