12 results for Performance-based Research Funding Systems
in CORA - Cork Open Research Archive - University College Cork - Ireland
Abstract:
In the development of wave energy converters, the mooring system is a key component for safe station-keeping and an important factor in the cost of wave energy production. Generally, when designing a mooring system for a wave energy converter, two important conditions must be considered: (i) the mooring system must be strong enough to limit drifting motions, even in extreme wave, tidal and wind conditions, and (ii) it must be compliant enough that its impact on wave energy production is minimised. These two conditions are frequently found to be contradictory. Existing solutions mainly rely on heavy chains, which create a catenary-shaped mooring configuration that allows only limited flexibility within the mooring system, so very large forces may still act on the mooring lines and thus on the anchors. This solution is normally quite expensive once the costs of materials and installation are included. This paper presents a new mooring solution for wave energy converters developed within the FP7 project 'GeoWAVE', which aims to develop a new generation of mooring system that minimises the loads on mooring lines and anchors, the impact on the device motions used for power conversion, and, where applicable, the mooring footprint; new types of anchors are also addressed within the project. This paper, however, focuses on the new mooring system, presenting wave tank test results for a Pelamis wave energy converter model fitted with the newly developed mooring system. The results show that the new generation of mooring system significantly reduces the loads on mooring lines and anchors, as well as the device excursions, when compared to a conventional catenary mooring.
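For context on the conventional configuration criticised above, the shape and tension of a chain catenary mooring line are usually described by the classical quasi-static catenary relations (a textbook result, not the GeoWAVE design; symbols are defined in the comments):

```latex
% Classical quasi-static relations for a single catenary mooring line.
% w: submerged weight per unit length of chain, H: horizontal tension component,
% y(x): line elevation above the touchdown point at horizontal position x.
\[
  y(x) = a\left(\cosh\frac{x}{a} - 1\right), \qquad a = \frac{H}{w},
\]
\[
  T(y) = H + w\,y .
\]
```

Because the line tension T grows with elevation y, large excursions in extreme seas translate directly into large line and anchor loads, which is the tradeoff a more compliant mooring aims to break.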
Abstract:
The desire to obtain competitive advantage is a motivator for implementing Enterprise Resource Planning (ERP) systems (Adam & O'Doherty, 2000). However, while it is accepted that Information Technology (IT) in general may contribute to the improvement of organisational performance (Melville, Kraemer, & Gurbaxani, 2004), the nature and extent of that contribution is poorly understood (Jacobs & Bendoly, 2003; Ravichandran & Lertwongsatien, 2005). Accordingly, Henderson and Venkatraman (1993) assert that it is the application of business and IT capabilities to develop and leverage a firm's IT resources for organisational transformation, rather than the acquired technological functionality, that secures competitive advantage for firms. Applying the Resource-Based View of the firm (Wernerfelt, 1984) and Dynamic Capabilities Theory (DCT), in particular Teece and Pisano (1998), may yield insights into whether or not the use of Enterprise Systems enhances organisations' core capabilities and thereby secures competitive advantage, sustainable or otherwise (Melville et al., 2004). An operational definition of Core Capabilities that is independent of the construct of Sustained Competitive Advantage is formulated. This study proposes and utilises an applied Dynamic Capabilities framework to facilitate the investigation of the role of Enterprise Systems. The objective of this research is to investigate the role of Enterprise Systems in the Core Dynamic Capabilities of Asset Lifecycle Management. The study explores the activities of Asset Lifecycle Management, the Core Dynamic Capabilities inherent in Asset Lifecycle Management, and the footprint of Enterprise Systems on those Dynamic Capabilities. Additionally, the study explains the mechanisms by which Enterprise Systems sustain the Exploitability and the Renewability of those Core Dynamic Capabilities. The study finds that Enterprise Systems contribute directly to the Value, Exploitability and Renewability of Core Dynamic Capabilities, and indirectly to their Inimitability and Non-substitutability. The study concludes by presenting an applied Dynamic Capabilities framework, which integrates Alter's (1992) definition of Information Systems with Teece and Pisano's (1998) model of Dynamic Capabilities to provide a robust diagnostic for determining the sustained value-generating contributions of Enterprise Systems. These frameworks are used in the conclusions to frame the findings of the study; the conclusions go on to assert that the frameworks are free-standing and analytically generalisable, per Siggelkow (2007) and Yin (2003).
Abstract:
Constraint programming has emerged as a successful paradigm for modelling combinatorial problems arising from practical situations. In many of those situations, we are not provided with an immutable set of constraints. Instead, a user will modify their requirements, in an interactive fashion, until they are satisfied with a solution. Examples of such applications include, amongst others, model-based diagnosis, expert systems and product configurators. The system the user interacts with must be able to assist by showing the consequences of their requirements. Explanations are the ideal tool for providing this assistance. However, existing notions of explanation fail to provide sufficient information. We define new forms of explanation that aim to be more informative. Although explanation generation is a very hard task, the applications we consider demand a satisfactory level of interactivity, so we cannot afford long computation times. We introduce the concept of representative sets of relaxations: a compact set of relaxations that shows the user at least one way to satisfy each of their requirements and at least one way to relax each of them, and we present an algorithm that efficiently computes such sets. We introduce the concept of most soluble relaxations, which maximise the number of products they allow, and present algorithms to compute such relaxations in times compatible with interactivity, achieved by making use, interchangeably, of different types of compiled representations. We propose to generalise the concept of prime implicates to constraint problems through the concept of domain consequences, and suggest generating them as a compilation strategy. This sets out a new approach to compilation and makes it possible to address explanation-related queries efficiently. Finally, we define ordered automata to compactly represent large sets of domain consequences, in a way orthogonal to existing compilation techniques, which represent large sets of solutions.
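To make the central notion concrete, here is a toy sketch (illustration only: the catalogue, requirement names and brute-force search are invented, and the thesis's algorithms are far more efficient) of maximal relaxations and a representative set over a three-requirement product configuration:

```python
# Toy sketch of "representative sets of relaxations" (brute force, for illustration).
from itertools import combinations

# A tiny product catalogue: each product is a dict of features (hypothetical data).
catalogue = [
    {"colour": "red", "engine": "diesel", "sunroof": False},
    {"colour": "red", "engine": "petrol", "sunroof": True},
    {"colour": "blue", "engine": "diesel", "sunroof": True},
]

# User requirements, as named predicates over a product.
requirements = {
    "red": lambda p: p["colour"] == "red",
    "diesel": lambda p: p["engine"] == "diesel",
    "sunroof": lambda p: p["sunroof"],
}

def satisfiable(names):
    """A subset of requirements is satisfiable if some product meets them all."""
    return any(all(requirements[n](p) for n in names) for p in catalogue)

def maximal_relaxations(names):
    """All satisfiable subsets not contained in a larger satisfiable subset."""
    names = list(names)
    sat = [frozenset(c) for k in range(len(names), -1, -1)
           for c in combinations(names, k) if satisfiable(c)]
    return [s for s in sat if not any(s < t for t in sat)]

def representative_set(relaxations, names):
    """Greedy cover: for each requirement keep one relaxation containing it
    and one excluding it, so the user sees a way to satisfy or drop each one."""
    chosen = []
    for n in names:
        for want_in in (True, False):
            if not any((n in r) == want_in for r in chosen):
                match = next((r for r in relaxations if (n in r) == want_in), None)
                if match is not None:
                    chosen.append(match)
    return chosen

relaxations = maximal_relaxations(requirements)
for r in representative_set(relaxations, requirements):
    print(sorted(r))
```

Here no product is simultaneously red, diesel and sunroofed, so the three maximal relaxations are the three pairs of requirements, and the representative set shows the user one way to keep, and one way to drop, each requirement.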
Abstract:
Buildings consume 40% of Ireland's total annual energy, translating to €3.5 billion (2004). The EPBD directive (effective January 2003) places an onus on all member states to rate the energy performance of all buildings in excess of 50 m². Energy and environmental performance management systems do not exist for residential buildings and, for non-residential buildings, consist of an ad-hoc integration of wired building management systems and Monitoring & Targeting systems. These systems are unsophisticated and do not easily lend themselves to cost-effective retrofit or integration with other enterprise management systems. It is commonly agreed that a 15-40% reduction in building energy consumption is achievable by operating buildings efficiently, compared with typical practice. Existing research has identified that the level of information available to building managers from existing Building Management Systems and Environmental Monitoring Systems (BMS/EMS) is insufficient to perform the required performance-based building assessment. The cost of installing additional sensors and meters is extremely high, primarily due to the cost of wiring and the labour required. From this perspective, wireless sensor technology can deliver reliable sensor data at the temporal and spatial granularity required for building energy management. In this paper, a wireless sensor network mote hardware design and implementation is presented for a building energy management application. Appropriate sensors were selected and interfaced with the developed system, based on user requirements, to meet both the building monitoring and metering requirements. Besides the sensing capability, actuation and interfacing to external meters/sensors are provided to perform the management control and data recording tasks associated with minimising energy consumption in the built environment, and to support the development of appropriate Building Information Models (BIM) to enable the design and development of energy-efficient spaces.
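As a rough illustration of the mote's role (a hypothetical sketch only; the paper's actual hardware, sensor set and radio stack are not reproduced here), a building-monitoring node essentially runs a sense-transmit-actuate loop:

```python
# Minimal sketch of a sensing/actuation loop for a building-monitoring mote
# (hypothetical structure; not the paper's firmware).
import time
import random

def read_sensors():
    """Stand-in for ADC reads; a real mote samples attached transducers."""
    return {
        "temperature_c": 19.0 + random.random() * 4,   # room temperature
        "humidity_pct": 40 + random.random() * 20,     # relative humidity
        "power_w": 150 + random.random() * 50,         # external meter interface
    }

def actuate(heating_on):
    """Stand-in for a relay/actuator output driven by the mote."""
    print(f"heating relay -> {'ON' if heating_on else 'OFF'}")

def transmit(sample):
    """Stand-in for a radio send to the base station / data sink."""
    print("tx:", sample)

SETPOINT_C = 21.0
SAMPLE_PERIOD_S = 1.0  # real deployments sample far less often to save energy

for _ in range(5):  # would run indefinitely on a real mote
    sample = read_sensors()
    transmit(sample)                               # data recording for the energy model/BIM
    actuate(sample["temperature_c"] < SETPOINT_C)  # simple thermostat-style control
    time.sleep(SAMPLE_PERIOD_S)
```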
Abstract:
An aim of proactive risk management strategies is the timely identification of safety-related risks. One way to achieve this is by deploying early warning systems. Early warning systems aim to provide useful information on the presence of potential threats to the system, the level of vulnerability of a system, or both, in a timely manner. This information can then be used to take proactive safety measures. The United Nations has recommended that any early warning system should have four essential elements: risk knowledge, a monitoring and warning service, dissemination and communication, and a response capability. This research deals with the risk knowledge element of an early warning system, which contains models of possible accident scenarios. These accident scenarios are created using hazard analysis techniques, which can be categorised as traditional or contemporary. Traditional hazard analysis techniques assume that accidents occur due to a sequence of events, whereas contemporary hazard analysis techniques assume that safety is an emergent property of complex systems. The problem is that no software editor is available that lets analysts create models of accident scenarios based on contemporary hazard analysis techniques and, at the same time, generate computer code representing those models. This research aims to enhance the process of generating computer code from graphical models that associate early warning signs and causal factors with a hazard, based on contemporary hazard analysis techniques. For this purpose, the thesis investigates the use of Domain Specific Modeling (DSM) technologies. The contribution of this thesis is the design and development of a set of three graphical Domain Specific Modeling Languages (DSMLs) that, when combined, provide all of the constructs necessary for safety experts and practitioners to conduct hazard and early warning analysis based on a contemporary hazard analysis approach. The languages represent the elements and relations necessary to define accident scenarios and their associated early warning signs. The three DSMLs were incorporated into a prototype software editor that enables safety scientists and practitioners to create and edit hazard and early warning analysis models in a usable manner and, as a result, to generate executable code automatically. This research demonstrates that DSM technologies can be used to develop a set of three DSMLs that allow users to conduct hazard and early warning analysis in a more usable manner. Furthermore, the three DSMLs and their dedicated editor, presented in this thesis, may significantly enhance the process of creating the risk knowledge element of computer-based early warning systems.
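The model-to-code step can be pictured with a toy sketch (the model structure, names and thresholds below are invented; the thesis's DSMLs are graphical and far richer): a hazard model linking early warning signs and causal factors to a hazard is traversed to emit an executable monitor:

```python
# Toy sketch of generating executable code from a hazard model (illustration only).

hazard_model = {
    "hazard": "TankOverpressure",
    "causal_factors": ["ReliefValveStuck", "ControllerFault"],
    "early_warning_signs": [
        {"signal": "pressure_bar", "operator": ">", "threshold": 8.5},
        {"signal": "valve_cycles_per_hour", "operator": ">", "threshold": 20},
    ],
}

def generate_monitor(model):
    """Emit Python source for a function that checks the model's warning signs."""
    lines = [f"def check_{model['hazard'].lower()}(signals):",
             "    warnings = []"]
    for sign in model["early_warning_signs"]:
        cond = f"signals[{sign['signal']!r}] {sign['operator']} {sign['threshold']}"
        lines.append(f"    if {cond}:")
        lines.append(f"        warnings.append({sign['signal']!r})")
    lines.append("    return warnings")
    return "\n".join(lines)

code = generate_monitor(hazard_model)
print(code)                      # the generated source
namespace = {}
exec(code, namespace)            # compile the generated monitor
print(namespace["check_tankoverpressure"]({"pressure_bar": 9.0,
                                           "valve_cycles_per_hour": 5}))
# -> ['pressure_bar']: one early warning sign of the hazard is active.
```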
Abstract:
Based on the experience that today's students find it more difficult than students of previous decades to relate to literature and to appreciate its high cultural value, this paper argues that too little is known about the actual teaching and learning processes which take place in literature courses. It further argues that, in order to ensure the survival of literary studies in German curricula, future research needs to elucidate for students, the wider public and, most importantly, educational policy makers why the study of literature should continue to have an important place in modern language curricula. Contending that students' willingness to engage with literature will, in the future, depend to a great extent on the use of imaginative methodology on the part of the teacher, we give a detailed account of an action research project carried out at University College Cork from October to December 2002 which set out to explore the potential of a drama-in-education approach to the teaching and learning of foreign language literature. We give concrete examples of how this approach works in practice, situate our approach within the subject debate surrounding Drama and the Language Arts, and evaluate in detail the learning processes which are typical of performance-based literature learning. Based on converging evidence from different data sources and overall very positive feedback from students, we conclude by recommending that modern language departments introduce courses which offer a hands-on experience of literature that is different from that encountered in lectures and teacher-directed seminars.
Abstract:
Current building regulations are generally prescriptive in nature. It is widely accepted in Europe that this form of building regulation is stifling technological innovation and leading to inadequate energy efficiency in the building stock. This has increased the motivation to move design practices towards a more 'performance-based' model in order to curb the inflated levels of energy use consumed by the building stock. A performance-based model assesses the interaction of all building elements and the resulting impact on holistic building energy use. However, this is a nebulous task, because building energy use is affected by a myriad of heterogeneous agents. Accordingly, it is imperative that appropriate methods, tools and technologies are employed for energy prediction, measurement and evaluation throughout the project's life cycle. This research also holds it imperative that the data be universally accessible by all stakeholders, and explores the use of a centrally based product model for the exchange of building information. This research describes the development and implementation of a new building energy-use performance assessment methodology. Termed the Building Effectiveness Communications ratios (BECs) methodology, this performance-based framework is capable of translating complex definitions of sustainability for energy efficiency and depicting universally understandable views at all stages of the Building Life Cycle (BLC) for the project's stakeholders. The enabling yardsticks of building energy-use performance, termed Ir and Pr, provide continuous design and operations feedback to aid the building's decision makers. Utilised effectively, the methodology can deliver quality assurance throughout the BLC by providing project teams with quantitative measurement of energy efficiency. Armed with these enabling tools for project stakeholder communication, it is envisaged that project teams will be better placed to augment a knowledge base and generate more efficient additions to the building stock.
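The abstract does not define the Ir and Pr ratios, so the following is only a generic illustration of the style of yardstick a performance-based framework tracks across the BLC: the ratio of measured to predicted energy use (a hypothetical metric, not the thesis's definition).

```python
# Generic illustration of a performance-based energy yardstick.
# NOT the thesis's Ir/Pr (which the abstract does not define).

def performance_ratio(measured_kwh, predicted_kwh):
    """> 1.0 means the building uses more energy than its design intent."""
    if predicted_kwh <= 0:
        raise ValueError("predicted energy use must be positive")
    return measured_kwh / predicted_kwh

# Example: a monthly check against the design-stage simulation.
print(performance_ratio(measured_kwh=12_400, predicted_kwh=10_800))  # ~1.15
```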
Abstract:
A wireless sensor network can become partitioned due to node failure, requiring the deployment of additional relay nodes in order to restore network connectivity. This introduces an optimisation problem involving a tradeoff between the number of additional nodes required and the cost of moving through the sensor field for the purpose of node placement. This tradeoff is application-dependent, influenced for example by the relative urgency of network restoration. In addition, minimising the number of relay nodes might lead to long routing paths to the sink, which may cause problems of data latency. This latency matters greatly in wireless sensor network applications such as battlefield surveillance, intrusion detection, disaster rescue and highway traffic coordination, where real-time constraints must not be violated. Therefore, we also consider the problem of deploying multiple sinks in order to improve network performance. Previous research has considered only parts of this problem in isolation, and has not properly addressed moving through a constrained environment, discovering changes to that environment during the repair, or network quality after the restoration. In this thesis, we first consider a base problem in which we assume the exploration tasks have already been completed, so our aim is to optimise the use of resources in the static, fully observed problem. In the real world, we would not know the radio and physical environments after damage, and this creates a dynamic problem in which the damage must be discovered. We therefore extend to the dynamic problem, in which network repair involves both exploration and restoration. We then add a hop-count constraint for network quality, requiring that the desired locations can communicate with a sink within a hop-count limit after the network is restored. For each variant of the network repair problem, we propose different solutions (heuristics and/or complete algorithms) that prioritise different objectives. We evaluate our solutions in simulation, assessing the quality of solutions (node cost, movement cost, computation time and total restoration time) while varying the problem type and the capability of the agent that makes the repair. We show that the relative importance of the objectives influences the choice of algorithm, and that different movement speeds for the repairing agent have a significant impact on performance and must be taken into account when selecting an algorithm. In particular, the node-based approaches are best on node cost, and the path-based approaches are best on mobility cost. For total restoration time, the node-based approaches are best with a fast-moving agent, while the path-based approaches are best with a slow-moving agent; for an agent of medium speed, the total restoration times of the two families of approaches are almost balanced.
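The hop-count constraint described above can be checked cheaply once a candidate repair is proposed. Here is a small sketch (topology and names invented; the thesis's algorithms solve the much harder placement problem, not just this feasibility check) using a multi-source breadth-first search from the sinks:

```python
# Feasibility check for the hop-count constraint: after repair, every desired
# location must reach some sink within `limit` hops. Runs in O(V + E).
from collections import deque

def hops_from_sinks(adjacency, sinks):
    """Multi-source BFS: fewest hops from each node to its nearest sink."""
    dist = {s: 0 for s in sinks}
    queue = deque(sinks)
    while queue:
        node = queue.popleft()
        for neighbour in adjacency[node]:
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    return dist

def satisfies_hop_limit(adjacency, sinks, desired, limit):
    dist = hops_from_sinks(adjacency, sinks)
    return all(n in dist and dist[n] <= limit for n in desired)

# Tiny repaired-network example: a chain a-b-c-d-e, sink at 'a', hop limit 3.
adjacency = {
    "a": ["b"], "b": ["a", "c"], "c": ["b", "d"],
    "d": ["c", "e"], "e": ["d"],
}
print(satisfies_hop_limit(adjacency, sinks=["a"], desired=["d", "e"], limit=3))
# False: 'e' is 4 hops from 'a', so another relay or a second sink is needed.
```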
Abstract:
This PhD thesis investigates the potential use of science communication models to engage a broader swathe of actors in decision-making in relation to scientific and technological innovation, in order to address possible democratic deficits in science and technology policy-making. A four-pronged research approach is employed to examine different representations of the public(s) and different modes of engagement. The first case study investigates whether patient groups could represent an alternative, needs-driven approach to biomedical and health sciences R&D. This is followed by an enquiry into the potential for Science Shops to represent a bottom-up approach to promoting research and development of local relevance. The barriers to and opportunities for the involvement of scientific researchers in science communication are next investigated via a national survey, comparable to a similar survey conducted in the UK. The final case study investigates to what extent opposition to or support for nanotechnology (as an emerging technology) is reflected amongst the YouTube user community, and the findings are considered in the context of how support for or opposition to new or emerging technologies can be addressed using conflict-resolution-based approaches to manage potential conflict trajectories. The research indicates that the majority of communication exercises of relevance to science policy and planning take the form of a one-way flow of information, with little or no facility for public feedback. This thesis proposes that a more bottom-up approach to research and technology would help broaden acceptability of, and accountability for, decisions made relating to new or existing technological trajectories. Such an approach could be better integrated with, and complementary to, the activities of government, institutions (e.g. universities) and research funding agencies, and could help ensure that public needs and issues are addressed more directly by the research community. It could also facilitate the empowerment of societal stakeholders with respect to scientific literacy and agenda-setting. One-way information relays could be adapted to facilitate feedback from representative groups, e.g. non-governmental organisations or civil society organisations (such as patient groups), in order to enhance the functioning and socio-economic relevance of knowledge-based societies to the betterment of human livelihoods.
Abstract:
Absorption heat transformers are thermodynamic systems capable of recycling industrial waste heat by increasing its temperature. Triple absorption heat transformers (TAHTs) can increase the temperature of this waste heat by up to approximately 145°C. The principal factors influencing the thermodynamic performance of a TAHT, and general points of operating optima, were identified using a multivariate statistical analysis, prior to using heat exchange network modelling techniques to dissect the design of the TAHT and systematically reassemble it in order to minimise internal exergy destruction within the unit. This enabled first and second law efficiency improvements of up to 18.8% and 31.5% respectively, compared to conventional TAHT designs. The economic feasibility of such a thermodynamically optimised cycle was investigated by applying it to an oil refinery in Ireland, demonstrating that, in general, the capital cost of a TAHT makes it difficult to achieve acceptable rates of return. The TAHT's capital cost may be decreased by redesigning its individual pieces of equipment and reducing their size; the potential benefits of using a bubble column absorber were therefore investigated in this thesis. An experimental bubble column was constructed and used to track the collapse of steam bubbles being absorbed into a hotter lithium bromide salt solution. Extremely high mass transfer coefficients of approximately 0.0012 m/s were observed, a significant improvement over previously investigated absorbers. Two separate models were developed: a combined heat and mass transfer model describing the rate of collapse of the bubbles, and a stochastic model describing the hydrodynamic motion of the collapsing vapour bubbles, taking into account random fluctuations observed in the experimental data. Both models showed good agreement with the collected data and demonstrated that the difference between the solution's temperature and its boiling temperature is the primary factor influencing the absorber's performance.
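A minimal sketch of the mass-transfer side of such a collapse model (all parameter values are invented except the order of magnitude of the mass transfer coefficient quoted above; the thesis's model additionally couples heat transfer and stochastic hydrodynamics): a mass balance on the vapour in the bubble, with absorption flux k_L·Δw·ρ_l over the bubble surface, gives dR/dt = -(ρ_l/ρ_v)·k_L·Δw.

```python
# Simplified mass-transfer-controlled collapse of a steam bubble absorbing into
# LiBr solution (toy ODE; vapour density assumed constant, heat effects ignored).

RHO_V = 0.6       # kg/m^3, steam density in the bubble (assumed)
RHO_L = 1500.0    # kg/m^3, LiBr solution density (assumed)
K_L = 0.0012      # m/s, liquid-side mass transfer coefficient (order reported above)
DRIVING_W = 0.01  # kg water / kg solution, absorption driving force (made up)

def collapse(radius_m, dt=1e-5):
    """Integrate dR/dt = -(rho_l / rho_v) * k_L * dw until the bubble vanishes."""
    t = 0.0
    history = []
    rate = (RHO_L / RHO_V) * K_L * DRIVING_W  # m/s, constant under these assumptions
    while radius_m > 0:
        history.append((t, radius_m))
        radius_m -= rate * dt
        t += dt
    return history

trace = collapse(radius_m=1e-3)  # a 1 mm bubble
print(f"collapse time ~ {trace[-1][0]*1e3:.2f} ms over {len(trace)} steps")
```

Under these made-up parameters the bubble collapses in roughly 33 ms, illustrating why such high mass transfer coefficients make a bubble column an attractively compact absorber.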