9 results for handling

in CORA - Cork Open Research Archive - University College Cork - Ireland


Relevance:

20.00%

Publisher:

Abstract:

Hazard perception has been found to correlate with crash involvement, and has thus been suggested as the most likely source of any skill gap between novice and experienced drivers. The most commonly used method for measuring hazard perception is to evaluate the perception-reaction time to filmed traffic events. It can be argued that this method lacks ecological validity and may be of limited value in predicting the actions drivers will take in response to the hazards they encounter. The first two studies of this thesis compare novice and experienced drivers’ performance on a hazard detection test, requiring discrete button-press responses, with their behaviour in a more dynamic driving environment, requiring hazard handling ability. Results indicate that the hazard handling test is more successful at identifying experience-related differences in response time to hazards. Hazard detection test scores were strongly related to performance on a driver theory test, implying that traditional hazard perception tests may be focusing more on declarative knowledge of driving than on the procedural knowledge required to successfully avoid hazards while driving. One in five Irish drivers crashes within a year of passing their driving test. This suggests that the current driver training system does not fully prepare drivers for the dangers they will encounter. Thus, the third and fourth studies in this thesis focus on the development of two simulator-based training regimes. In the third study participants receive intensive training on the molar elements of driving, i.e. speed and distance evaluation. The fourth study focuses on training higher-order situation awareness skills, including perception, comprehension and projection. Results indicate significant improvement in aspects of speed, distance and situation awareness across training days. However, neither training programme leads to significant improvements in hazard handling performance, highlighting the difficulties of applying learning to situations not previously encountered.

Relevance:

20.00%

Publisher:

Abstract:

The contribution of buildings towards total worldwide energy consumption in developed countries is between 20% and 40%. Heating, Ventilation and Air Conditioning (HVAC), and more specifically Air Handling Unit (AHU), energy consumption accounts on average for 40% of a typical medical device manufacturing or pharmaceutical facility’s energy consumption. Studies have indicated that 20–30% energy savings are achievable by recommissioning HVAC systems, and more specifically AHU operations, to rectify faulty operation. Automated Fault Detection and Diagnosis (AFDD) is a process concerned with partially or fully automating the commissioning process through the detection of faults. An expert system is a knowledge-based system which employs Artificial Intelligence (AI) methods to replicate the knowledge of a human subject matter expert in a particular field, such as engineering, medicine, finance or marketing, to name a few. This thesis details the research and development work undertaken in the creation and testing of a new AFDD expert system for AHUs which can be installed in minimal set-up time on a large cross-section of AHU types in a building management system vendor-neutral manner. Both simulated and extensive field testing were undertaken against a widely available and industry-known expert set of rules, the Air Handling Unit Performance Assessment Rules (APAR) (and a later, more developed version known as APAR_extended), in order to prove its effectiveness. Specifically, in tests against a dataset of 52 simulated faults, this new AFDD expert system identified all 52 derived issues whereas the APAR ruleset identified just 10. In tests using actual field data from 5 operating AHUs in 4 manufacturing facilities, the newly developed AFDD expert system for AHUs was shown to identify four individual fault case categories that the APAR method did not, as well as showing improvements in the area of fault diagnosis.
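
By way of illustration, the sketch below shows the general shape of one APAR-style temperature rule check as it might be coded: in heating mode, the supply air temperature should exceed the mixed air temperature plus the heat gained across the supply fan. The sensor names, the assumed fan temperature rise and the error threshold are invented for demonstration and are not taken from the thesis or the published APAR ruleset.

    # Minimal sketch of an APAR-style rule check for an AHU in heating mode.
    # Field names and numeric thresholds are assumed values for illustration.

    from dataclasses import dataclass

    @dataclass
    class AhuSample:
        supply_air_temp: float    # deg C, downstream of the supply fan
        mixed_air_temp: float     # deg C, downstream of the mixing box
        heating_valve_pos: float  # 0.0 (closed) to 1.0 (fully open)

    def heating_mode_rule(sample: AhuSample,
                          fan_temp_rise: float = 1.1,
                          error_threshold: float = 1.0) -> bool:
        """Return True if the sample looks faulty under this rule."""
        if sample.heating_valve_pos <= 0.05:
            return False  # rule only applies while the heating coil is active
        expected_minimum = sample.mixed_air_temp + fan_temp_rise - error_threshold
        return sample.supply_air_temp < expected_minimum

    # Supply air cooler than mixed air while the valve is 80% open -> flagged.
    print(heating_mode_rule(AhuSample(17.5, 18.0, 0.8)))  # True

A deployed AFDD system would evaluate many such rules over streams of AHU sensor data and combine their firings for diagnosis; this sketch only shows the shape of a single rule evaluation.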

Relevance:

10.00%

Publisher:

Abstract:

The concept of police accountability is not susceptible to a universal or concise definition. In the context of this thesis it is treated as embracing two fundamental components. First, it entails an arrangement whereby an individual, a minority and the whole community have the opportunity to participate meaningfully in the formulation of the principles and policies governing police operations. Second, it presupposes that those who have suffered as victims of unacceptable police behaviour should have an effective remedy. These ingredients, however, cannot operate in a vacuum. They must find an accommodation with the equally vital requirement that the burden of accountability should not be so demanding that the delivery of an effective police service is fatally impaired. While much of the current debate on police accountability in Britain and the USA revolves around the issue of where the balance should be struck in this accommodation, Ireland lacks the very foundation for such a debate as it suffers from a serious deficit in research and writing on the police generally. This thesis aims to fill that gap by laying the foundations for an informed debate on police accountability and related aspects of policing in Ireland. Broadly speaking, the thesis contains three major interrelated components. The first is concerned with the police in Ireland and the legal, constitutional and political context in which the force operates. This reveals that although the Garda Síochána is established as a national force, the legal prescriptions concerning its role and governance are very vague. Although a similar legislative format in Britain, and elsewhere, has been interpreted as conferring operational autonomy on the police, it has not stopped successive Irish governments from exercising close control over the police. The second component analyses the structure and operation of the traditional police accountability mechanisms in Ireland, namely the law and the democratic process. It concludes that some basic aspects of the peculiar legal, constitutional and political structures of policing seriously undermine their capacity to deliver effective police accountability. In the case of the law, for example, the status of, and the broad discretion vested in, each individual member of the force ensure that the traditional legal actions cannot always provide redress where individuals or collective groups feel victimised. In the case of the democratic process, the integration of the police into the excessively centralised system of executive government, coupled with the refusal of the Minister for Justice to accept responsibility for operational matters, projects a barrier between the police and their accountability to the public. The third component details proposals on how the current structures of police accountability in Ireland can be strengthened without interfering with the fundamentals of the law, the democratic process or the legal and constitutional status of the police. The key elements in these proposals are the establishment of an independent administrative procedure for handling citizen complaints against the police and the establishment of a network of local police-community liaison councils throughout the country, coupled with a centralised parliamentary committee on the police. While these proposals are analysed from the perspective of maximising the degree of police accountability to the public, they also take into account the need to ensure that the police capacity to deliver an effective police service is not unduly impaired as a result.

Relevance:

10.00%

Publisher:

Abstract:

This Portfolio of Exploration (PoE) tracks a transformative learning developmental journey directed at changing meaning-making structures and mental models within an innovation practice. The explicit purpose of the Portfolio is to develop new and different perspectives that enable the handling of new and more complex phenomena through self-transformation and increased emotional intelligence development. The Portfolio provides a response to the question: ‘What are the key determinants that enable a Virtual Team (VT) to flourish, where flourishing means developing and delivering on the firm’s innovative imperatives?’ Furthermore, the PoE is structured as an investigation into how higher-order meaning making promotes ‘entrepreneurial services’ within an intra-firm virtual team, with a secondary aim of identifying how reasoning about trust influences KGPs to exchange knowledge. I have developed a framework which specifically focuses on the effectiveness of any firm’s VT through transforming the meaning making of the VT participants. I hypothesized that it is the way KGPs make meaning (reasoning about trust) which differentiates the firm as a growing firm in the sense of Penrosean resources: ‘inducement to expand and a limit of expansion’ (Penrose, 1959). Reasoning about trust is used as a higher-order meaning-making concept in line with Kegan’s (1994) conception of complex meaning making, which is the combining of ideas and data in ways that transform meaning and prompt participants to find new ways of knowledge generation. Simply, it is the VT participants who develop higher-order meaning making that hold the capabilities to transform the firm from within, providing a unique competitive advantage that enables the firm to grow.

Relevance:

10.00%

Publisher:

Abstract:

Choosing the right or the best option is often a demanding and challenging task for the user (e.g., a customer in an online retailer) when there are many available alternatives. In fact, the user rarely knows which offering will provide the highest value. To reduce the complexity of the choice process, automated recommender systems generate personalized recommendations. These recommendations take into account the preferences collected from the user in an explicit (e.g., letting users express their opinion about items) or implicit (e.g., studying some behavioral features) way. Such systems are widespread; research indicates that they increase customers' satisfaction and lead to higher sales. Preference handling is one of the core issues in the design of every recommender system. This kind of system often aims at guiding users in a personalized way to interesting or useful options in a large space of possible options. Therefore, it is important for them to capture and model the user's preferences as accurately as possible. In this thesis, we develop a comparative preference-based user model to represent the user's preferences in conversational recommender systems. This type of user model allows the recommender system to capture several preference nuances from the user's feedback. We show that, when applied to conversational recommender systems, the comparative preference-based model is able to guide the user towards the best option while the system is interacting with her. We empirically test and validate the suitability and the practical computational aspects of the comparative preference-based user model and the related preference relations by comparing them to a sum-of-weights-based user model and the related preference relations.

Product configuration, scheduling a meeting and the construction of autonomous agents are among several artificial intelligence tasks that involve a process of constrained optimization, that is, optimization of behavior or options subject to given constraints with regard to a set of preferences. When solving a constrained optimization problem, pruning techniques, such as the branch and bound technique, aim at directing the search towards the best assignments, thus allowing the bounding functions to prune more branches in the search tree. Several constrained optimization problems may exhibit dominance relations. These dominance relations can be particularly useful in constrained optimization problems as they can instigate new ways (rules) of pruning non-optimal solutions. Such pruning methods can achieve dramatic reductions in the search space while looking for optimal solutions. A number of constrained optimization problems can model the user's preferences using comparative preferences. In this thesis, we develop a set of pruning rules used in the branch and bound technique to efficiently solve this kind of optimization problem. More specifically, we show how to generate newly defined pruning rules from a dominance algorithm that refers to a set of comparative preferences. These rules include pruning approaches (and combinations of them) which can drastically prune the search space. They mainly reduce the number of (expensive) pairwise comparisons performed during the search while guiding constrained optimization algorithms to find optimal solutions. Our experimental results show that the pruning rules we have developed, and their different combinations, have varying impact on the performance of the branch and bound technique, as sketched below.
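
As a rough illustration of the pruning idea described above, the following sketch runs a branch-and-bound search over a toy multi-objective assignment problem and prunes any branch whose optimistic bound is dominated by a utility vector already found. Pareto dominance is used here as a stand-in for the dominance relation induced by comparative preferences, and the problem data and additive utility model are invented for demonstration; this is not the thesis's algorithm.

    # Toy branch-and-bound over a two-variable assignment problem with
    # two-objective utilities; a branch is pruned when its optimistic bound is
    # dominated by a vector already on the frontier.

    def dominates(u, v):
        """u dominates v (maximising): at least as good everywhere, better somewhere."""
        return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

    def add(u, v):
        return tuple(a + b for a, b in zip(u, v))

    def optimistic_bound(current, remaining):
        """Assume every remaining variable contributes its per-objective maximum."""
        for domain in remaining:
            best = tuple(max(vec[i] for vec in domain.values())
                         for i in range(len(current)))
            current = add(current, best)
        return current

    def branch_and_bound(domains, current=(0, 0), frontier=None):
        frontier = [] if frontier is None else frontier
        if not domains:  # complete assignment: update the undominated frontier
            if not any(dominates(f, current) for f in frontier) and current not in frontier:
                frontier[:] = [f for f in frontier if not dominates(current, f)] + [current]
            return frontier
        for value, vec in domains[0].items():
            extended = add(current, vec)
            if any(dominates(f, optimistic_bound(extended, domains[1:])) for f in frontier):
                continue  # pruning rule: the bound cannot beat the incumbent frontier
            branch_and_bound(domains[1:], extended, frontier)
        return frontier

    # Two decisions, each option scored on (price utility, quality utility).
    options = [{"a": (3, 1), "b": (1, 3)}, {"c": (2, 2), "d": (0, 4)}]
    print(branch_and_bound(options))  # -> [(5, 3), (3, 5), (1, 7)]

A richer preference relation than Pareto dominance would let the dominance test discard more candidates and so prune more aggressively, which is the effect the pruning rules in the thesis aim at.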

Relevance:

10.00%

Publisher:

Abstract:

The thesis examines Milton's strategic use of romance in Paradise Lost, arguing that such a handling of romance is a provocative realignment of its values according to the poet’s Christian focus. The thesis argues that Milton's use of romance is not simply the importation of a tradition into the poem; it entails a backward judgement on that tradition, defining its idealising tendencies as fundamentally misplaced. The thesis also examines the Caroline uses of romance and chivalry in the 1630s to provide a vision of British unification, and Milton's reaction to this political agenda.

Relevance:

10.00%

Publisher:

Abstract:

The overall objective of this thesis is to integrate a number of micro/nanotechnologies into cartridge-type systems that implement biochemical protocols. Instrumentation and systems were developed to interface with such cartridge systems: (i) implementing microfluidic handling, (ii) executing thermal control during biochemical protocols and (iii) detecting biomolecules associated with inherited or infectious disease. This system implements biochemical protocols for DNA extraction, amplification and detection. A digital microfluidic chip (electrowetting on dielectric) manipulated droplets of sample and reagent, implementing sample preparation protocols. The cartridge system also integrated a planar magnetic microcoil device to generate local magnetic field gradients, manipulating magnetic beads. For hybridisation detection, a fluorescence microarray screening for mutations associated with the CFTR gene is printed on a waveguide surface and integrated within the cartridge. A second cartridge system was developed to implement amplification and detection, screening for DNA associated with disease-causing pathogens, e.g. Escherichia coli. This system incorporates (i) elastomeric pinch valves isolating liquids during biochemical protocols and (ii) a silver nanoparticle microarray for fluorescent signal enhancement, using localised surface plasmon resonance. The microfluidic structures allowed the sample and reagent to be loaded and moved between chambers, with external heaters implementing the thermal steps for nucleic acid amplification and detection. In a technique allowing probe DNA to be immobilised within a microfluidic system using three-dimensional (3D) hydrogel structures, a prepolymer solution containing probe DNA was formulated and introduced into the microfluidic channel. Photo-polymerisation was undertaken, forming 3D hydrogel structures attached to the microfluidic channel surface. The prepolymer material, poly(ethylene glycol) (PEG), was used to form hydrogel structures containing probe DNA. This hydrogel formulation process was fast compared to conventional biomolecule immobilisation techniques and was also biocompatible with the immobilised biomolecules, as verified by on-chip hybridisation assays. The process allowed control over hydrogel height growth at the micron scale.

Relevance:

10.00%

Publisher:

Abstract:

In many real-world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions and each decision has a set of alternatives. Each alternative depends on the state of the world, and is evaluated with respect to a number of criteria. In this thesis, we consider decision making problems in two scenarios. In the first scenario, the current state of the world, under which the decisions are to be made, is known in advance. In the second scenario, the current state of the world is unknown at the time of making decisions.

For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the algorithms that solve these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm for generating the upper bound at each node of the search tree (in the context of maximizing values of objectives). Since the size of the guiding upper bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and use such preferences to infer other preferences. The induced preference relation is then used to eliminate the dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three different approaches. The first is based on a linear programming approach; the second uses a distance-based algorithm (which uses a measure of the distance between a point and a convex cone); the third makes use of matrix multiplication, which results in much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference inference technique, can also be given an alternative semantics based on the well-known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before.

For decision making problems under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved; a small illustration follows below. In addition, we define a method for incorporating user trade-offs, which also greatly improves the efficiency.
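
As a small illustration of the ϵ-covering idea mentioned above, the sketch below retains a subset of utility vectors such that every vector is approximately dominated, within a factor (1 + ϵ) on each objective, by some retained vector (maximisation). The greedy construction and the example data are assumptions for demonstration, not the algorithm used in the thesis.

    # Greedy construction of an epsilon-covering for a set of utility vectors.

    def eps_covers(u, v, eps):
        """u epsilon-covers v if scaling u by (1 + eps) dominates v componentwise."""
        return all((1 + eps) * a >= b for a, b in zip(u, v))

    def eps_covering(vectors, eps):
        cover = []
        # Visit vectors with larger objective sums first so they serve as cover points.
        for v in sorted(vectors, key=sum, reverse=True):
            if not any(eps_covers(c, v, eps) for c in cover):
                cover.append(v)
        return cover

    utilities = [(10.0, 1.0), (9.8, 1.02), (5.0, 5.0), (1.0, 10.0), (1.02, 9.8)]
    print(eps_covering(utilities, eps=0.1))  # -> [(10.0, 1.0), (1.0, 10.0), (5.0, 5.0)]

Vectors that lie within 10% of a retained point on every objective are dropped, which is what keeps an ϵ-covering much smaller than the full Pareto set.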

Relevance:

10.00%

Publisher:

Abstract:

It is estimated that the quantity of digital data being transferred, processed or stored at any one time currently stands at 4.4 zettabytes (4.4 × 2^70 bytes), and this figure is expected to have grown by a factor of 10, to 44 zettabytes, by 2020. Exploiting this data is, and will remain, a significant challenge. At present there is the capacity to store 33% of the digital data in existence at any one time; by 2020 this capacity is expected to fall to 15%. These statistics suggest that, in the era of Big Data, the identification of important, exploitable data will need to be done in a timely manner. Systems for the monitoring and analysis of data, e.g. stock markets, smart grids and sensor networks, can be made up of massive numbers of individual components. These components can be geographically distributed yet may interact with one another via continuous data streams, which in turn may affect the state of the sender or receiver. This introduces a dynamic causality, which further complicates the overall system by introducing a temporal constraint that is difficult to accommodate. Practical approaches to realising the system described above have led to a multiplicity of analysis techniques, each of which concentrates on specific characteristics of the system being analysed and treats these characteristics as the dominant component affecting the results being sought. This multiplicity of analysis techniques introduces another layer of heterogeneity, that is, heterogeneity of approach, partitioning the field to the extent that results from one domain are difficult to exploit in another. The question asked is whether a generic solution for the monitoring and analysis of data can be identified that accommodates temporal constraints, bridges the gap between expert knowledge and raw data, and enables data to be effectively interpreted and exploited in a transparent manner. The approach proposed in this dissertation acquires, analyses and processes data in a manner that is free of the constraints of any particular analysis technique, while at the same time facilitating these techniques where appropriate. Constraints are applied by defining a workflow based on the production, interpretation and consumption of data. This supports the application of different analysis techniques to the same raw data without the danger of incorporating hidden bias that may exist. To illustrate and to realise this approach, a software platform has been created that allows for the transparent analysis of data, combining analysis techniques with a maintainable record of provenance so that independent third-party analysis can be applied to verify any derived conclusions. In order to demonstrate these concepts, a complex real-world example involving the near real-time capturing and analysis of neurophysiological data from a neonatal intensive care unit (NICU) was chosen. A system was engineered to gather raw data, analyse that data using different analysis techniques, uncover information, incorporate that information into the system and curate the evolution of the discovered knowledge. The application domain was chosen for three reasons: firstly, because it is complex and no comprehensive solution exists; secondly, because it requires tight interaction with domain experts, thus requiring the handling of subjective knowledge and inference; and thirdly, because, given the dearth of neurophysiologists, there is a real-world need to provide a solution for this domain.
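
To give a flavour of the provenance-tracking idea described above, the sketch below stores each derived result alongside a record of the raw-data fingerprint, the analysis applied and its parameters, so that a third party could trace and re-run the analysis. The record fields, hashing scheme and example analysis are assumptions for illustration, not the platform built in the dissertation.

    # Attach a provenance record to each derived result.

    import hashlib
    import json
    from datetime import datetime, timezone

    def fingerprint(data) -> str:
        """Stable hash of the raw input, used to tie results back to the data."""
        return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()

    def run_analysis(raw_samples, analysis_fn, params):
        result = analysis_fn(raw_samples, **params)
        provenance = {
            "input_hash": fingerprint(raw_samples),
            "analysis": analysis_fn.__name__,
            "parameters": params,
            "produced_at": datetime.now(timezone.utc).isoformat(),
        }
        return {"result": result, "provenance": provenance}

    # Example: a simple moving average over a hypothetical physiological signal.
    def moving_average(samples, window=4):
        return [sum(samples[i:i + window]) / window
                for i in range(len(samples) - window + 1)]

    record = run_analysis([3.1, 2.9, 3.4, 3.0, 2.8, 3.2], moving_average, {"window": 3})
    print(record["provenance"]["analysis"], len(record["result"]))  # moving_average 4

Because the provenance travels with the result rather than being baked into one analysis technique, a different technique can later be applied to the same fingerprinted raw data and the two conclusions compared on equal terms.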