41 results for System-Level Models
in Aston University Research Archive
Abstract:
To explore the views of pharmacy and rheumatology stakeholders about system-related barriers to medicines optimisation activities with young people with long-term conditions. A three-phase consensus-building study comprising (1) focus groups with community and hospital pharmacists; (2) semi-structured telephone interviews with lay and professional adolescent rheumatology stakeholders and pharmacy policymakers, and (3) multidisciplinary discussion groups with community and hospital pharmacists and rheumatology staff. Qualitative verbatim transcripts from phases 1 and 2 were subjected to framework analysis. Themes from phase 1 underpinned a briefing for phase 2 interviewees. Themes from phases 1 and 2 generated elements of good pharmacy practice and current/future pharmacy roles for ranking in phase 3. Results from the phase 3 prioritisation and ranking exercises were captured on self-completion data collection forms, entered into an Excel spreadsheet and subjected to descriptive statistical analysis. Institutional ethical approval was given by the Aston University Health and Life Sciences Research Ethics Committee. Four focus groups were conducted with 18 pharmacists across England, Scotland and Wales (7 hospital, 10 community and 1 community/public health). Fifteen stakeholders took part in telephone interviews (3 pharmacist commissioners; 2 pharmacist policymakers; 2 pharmacy staff members (1 community and 1 hospital); 4 rheumatologists; 1 specialist nurse, and 3 lay juvenile arthritis advocates). Twenty-five participants took part in three discussion groups in adolescent rheumatology centres across England and Scotland (9 community pharmacists; 4 hospital pharmacists; 6 rheumatologists; 5 specialist nurses, and 1 physiotherapist). In all phases of the study, system-level issues were acknowledged as barriers to greater engagement with young people and families. Community pharmacists in the focus groups reported that opportunities for engaging with young people were limited when parents collected prescriptions alone, a view other stakeholders shared. Moreover, institutional/company policies on prescription collection (an activity largely disallowed for a young person under 16 without an accompanying parent) were identified by hospital and community pharmacists as barriers to open discussion and engagement. Few community pharmacists reported using the Medicines Use Review (England/Wales) or Chronic Medication Service (Scotland) as a medicines optimisation activity with young people; many were unsure about consent procedures. Despite these limitations, rheumatology stakeholders ranked highly the potential for pharmacists to empower young people with general health care skills, such as repeat prescription ordering. The pharmacy profession lacks vision for its role in the care of young people with long-term conditions. Pharmacists and rheumatology stakeholders identified system-level barriers to greater engagement with young people who take medicines regularly. We acknowledge that the modest number of participants may have had a particular interest in, and thus a bias towards, the topic, but this underscores their frank admission of the challenges. Professional guidance and policy, practice frameworks and institutional/company policies must promote flexibility for pharmacy staff to recognise and empower young people who are able to give consent and take responsibility for medicines activities. This will increase mutual confidence and trust, and foster pharmacy's role in teaching general health care skills. In this way, pharmacists will be able to build long-term relationships with young people and families.
Abstract:
This special issue of the Journal of the Operational Research Society is dedicated to papers on the related subjects of knowledge management and intellectual capital. These subjects continue to generate considerable interest amongst both practitioners and academics. This issue demonstrates that operational researchers have many contributions to offer to the area, especially by bringing multi-disciplinary, integrated and holistic perspectives. The papers included are both theoretical and practical, and include a number of case studies showing how knowledge management has been implemented in practice; these may assist other organisations in their search for a better means of managing what is now recognised as a core organisational activity. A growing number of organisations have accepted that the proper handling of information and knowledge is a significant factor in their success, but that implementing a strategy and processes for such handling is a challenge. It is here, in the particular area of knowledge process handling, that the contributions of operational researchers can be seen most clearly, as the papers included in this issue illustrate. The issue comprises nine papers, contributed by authors based in eight different countries on five continents. Lind and Seigerroth describe an approach that they call team-based reconstruction, intended to help articulate knowledge in a particular organisational context. They illustrate the use of this approach with three case studies, two in manufacturing and one in public sector health care. Different ways of carrying out reconstruction are analysed, and the benefits of team-based reconstruction are established. Edwards and Kidd, and Connell, Powell and Klein, both concentrate on knowledge transfer. Edwards and Kidd discuss the issues involved in transferring knowledge across frontiers of various kinds, from those within organisations to those between countries. They present two examples, one in distribution and the other in manufacturing. They conclude that trust and culture both play an important part in facilitating such transfers, that IT should be kept in a supporting role in knowledge management projects, and that a staged approach to this IT support may be the most effective. Connell, Powell and Klein consider the oft-quoted distinction between explicit and tacit knowledge, and argue that such a distinction is sometimes unhelpful. They suggest that knowledge should rather be regarded as a holistic systemic property. The consequences of this for knowledge transfer are examined, with a particular emphasis on what this might mean for the practice of OR. Their view of OR in the context of knowledge management very much echoes Lind and Seigerroth's focus on knowledge for human action. This is an interesting convergence of views given that, broadly speaking, one set of authors comes from within the OR community, and the other from outside it. Hafeez and Abdelmeguid present the nearest to a 'hard' OR contribution of the papers in this special issue. In their paper they construct and use system dynamics models to investigate alternative ways in which an organisation might close a knowledge gap or skills gap. The methods they use have the potential to be generalised to any other quantifiable aspects of intellectual capital. The contribution by Revilla, Sarkis and Modrego is also at the 'hard' end of the spectrum.
They evaluate the performance of public–private research collaborations in Spain, using an approach based on data envelopment analysis. They found that larger organisations tended to perform relatively better than smaller ones, even though the approach used takes scale effects into account. Perhaps more interesting was that many factors that might have been thought relevant, such as the organisation's existing knowledge base or how widely applicable the results of the project would be, had no significant effect on performance. It may be that how well the partnership between the collaborators works (not a factor it was possible to take into account in this study) is more important than most other factors. Mak and Ramaprasad introduce the concept of a knowledge supply network. This builds on existing ideas of supply chain management, but also integrates the design chain and the marketing chain, to address all the intellectual property connected with the network as a whole. The authors regard the knowledge supply network as the natural focus for considering knowledge management issues. They propose seven criteria for evaluating knowledge supply network architecture, and illustrate their argument with an example from the electronics industry: integrated circuit design and fabrication. Hasan and Crawford's interest lies in the holistic approach to knowledge management. They demonstrate their argument, that there is no simple IT solution for organisational knowledge management efforts, through two case study investigations. These case studies, in Australian universities, are investigated through cultural-historical activity theory, which focuses the study on the activities carried out by people in support of their interpretations of their role, the opportunities available and the organisation's purpose. Human activities, it is argued, are mediated by the available tools, including IT and IS and, in this particular context, KMS. It is this argument that places the available technology into the knowledge activity process and permits the future design of KMS to be improved through the lessons learnt by studying these knowledge activity systems in practice. Wijnhoven concentrates on knowledge management at the operational level of the organisation. He is concerned with studying the transformation of certain inputs to outputs (the operations function) and the consequent realisation of organisational goals via the management of these operations. He argues that the inputs and outputs of this process in the context of knowledge management are different types of knowledge, names the operation method 'knowledge logistics', and calls the method of transformation 'learning'. This theoretical paper discusses the operational management of four types of knowledge object (explicit understanding, information, skills, and norms and values) and shows how, through the proposed framework, learning can transfer these objects to clients in a logistical process without a major transformation in content. Millie Kwan continues this theme with a paper about process-oriented knowledge management. In her case study she discusses an implementation of knowledge management where the knowledge is centred around an organisational process, and the mission, rationale and objectives of the process define the scope of the project. Her case concerns the effective use of real estate (property and buildings) within a Fortune 100 company.
In order to manage the knowledge about this property and the process by which the best 'deal' for internal customers and the overall company was reached, a KMS was devised. She argues that process knowledge is a source of core competence and thus needs to be strategically managed. Finally, you may also wish to read a related paper originally submitted for this Special Issue, 'Customer knowledge management' by Garcia-Murillo and Annabi, which was published in the August 2002 issue of the Journal of the Operational Research Society, 53(8), 875–884.
Abstract:
The simulation of a power system such as the More Electric Aircraft is a complex problem. The simulation has conflicting requirements: for example, to reduce run-times, power ratings that need to be established over long periods of the flight can be calculated using a fairly coarse model, whereas power quality must be established over relatively short periods with a detailed model. An important issue is therefore to establish the requirements of the simulation work at an early stage. This paper describes the modelling and simulation strategy adopted for the UK TIMES project, which is investigating the optimisation of the More Electric Aircraft at the system level. Four main requirements of the simulation work have been identified, resulting in four different types of simulation. Each of the simulations is described, along with preliminary models and results.
Abstract:
With its low power consumption and flexible networking capabilities, IEEE 802.15.4 is widely regarded as a strong candidate communication technology for wireless sensor networks (WSNs). With an increasing number of deployments of 802.15.4-based WSNs, multiple WSNs can be expected to coexist, fully or partially overlapping, in residential or enterprise areas. As WSNs are usually deployed without coordination, communication can degrade significantly under the 802.15.4 channel access scheme, with a large impact on system performance. This thesis investigates the effectiveness of 802.15.4 networks in supporting WSN applications in various environments, especially when hidden terminals are present due to the uncoordinated coexistence problem. Both analytical models and system-level simulators are developed to analyse the performance of the random access scheme specified by the IEEE 802.15.4 medium access control (MAC) standard for several network scenarios. The first part of the thesis investigates the effectiveness of a single 802.15.4 network in supporting WSN applications. A Markov chain based analytic model is applied to model the MAC behaviour of the IEEE 802.15.4 standard, and a discrete event simulator is developed to analyse the performance and verify the proposed analytical model. It is observed that 802.15.4 networks, with their various functionalities, can adequately support most WSN applications. Following the investigation of a single network, the uncoordinated coexistence problem of multiple 802.15.4 networks deployed with fully or partially overlapping communication ranges is investigated in the next part of the thesis. Both non-sleep and sleep modes are investigated under different channel conditions by analytic and simulation methods to obtain a comprehensive performance evaluation. It is found that the uncoordinated coexistence problem can significantly degrade the performance of 802.15.4 networks, which is then unlikely to satisfy the QoS requirements of many WSN applications. The proposed analytic model is validated by simulations and can be used to obtain optimal parameter settings before WSN deployment, mitigating interference risks.
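As a rough illustration of the kind of discrete event simulation described above, the sketch below models saturated nodes contending under slotted CSMA/CA with the default 802.15.4 backoff parameters. The packet length, the saturated traffic, the single CCA and the perfectly shared channel are simplifying assumptions for illustration, not the thesis's actual simulator.

```python
import random

SLOTS = 200_000                      # simulated backoff slots
PKT_SLOTS = 6                        # transmission time in slots (assumed)
MAC_MIN_BE, MAC_MAX_BE = 3, 5        # default 802.15.4 backoff exponents
MAX_BACKOFFS = 4                     # macMaxCSMABackoffs

def simulate(n_nodes=10, seed=1):
    rng = random.Random(seed)
    backoff = [rng.randrange(2 ** MAC_MIN_BE) for _ in range(n_nodes)]
    be = [MAC_MIN_BE] * n_nodes      # current backoff exponent per node
    nb = [0] * n_nodes               # failed clear-channel assessments
    tx_left = [0] * n_nodes          # slots left in an ongoing transmission
    sent = collided = dropped = 0
    for _ in range(SLOTS):
        busy = any(t > 0 for t in tx_left)
        starters = []
        for i in range(n_nodes):
            if tx_left[i] > 0:       # still transmitting
                tx_left[i] -= 1
            elif backoff[i] > 0:     # counting down random backoff
                backoff[i] -= 1
            elif busy:               # CCA failed: back off with wider window
                nb[i] += 1
                if nb[i] > MAX_BACKOFFS:
                    dropped += 1     # channel access failure, drop packet
                    nb[i], be[i] = 0, MAC_MIN_BE
                else:
                    be[i] = min(be[i] + 1, MAC_MAX_BE)
                backoff[i] = rng.randrange(2 ** be[i])
            else:
                starters.append(i)   # CCA clear: start transmitting now
        if len(starters) == 1:
            sent += 1
        elif starters:
            collided += len(starters)    # simultaneous starts collide
        for i in starters:           # the channel is occupied either way
            tx_left[i] = PKT_SLOTS
            nb[i], be[i] = 0, MAC_MIN_BE
            backoff[i] = rng.randrange(2 ** MAC_MIN_BE)
    return sent, collided, dropped

print(simulate(n_nodes=10))   # increase n_nodes to watch contention losses grow
```

Hidden terminals, central to the coexistence study, could be introduced by computing `busy` per node from a hearing matrix rather than globally, so that some transmitters are invisible to each other's CCA.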
Abstract:
For remote, semi-arid areas, brackish groundwater (BW) desalination powered by solar energy may be the most technically and economically viable means of alleviating water stress. For such systems, a high recovery ratio is desirable because of the technical and economic difficulties of concentrate management. It has been demonstrated that current, conventional solar reverse osmosis (RO) desalination could be improved 40- to 200-fold by eliminating unnecessary energy losses. In this work, a batch-RO system that can be powered by a thermal Rankine cycle has been developed. By directly recycling high-pressure concentrate and by using a linkage connection to provide increasing feed pressure, the batch-RO has been shown to achieve a 70% saving in energy consumption compared to a continuous single-stage RO system. Theoretical investigations of the mass transfer phenomena, including dispersion and concentration polarization, have been carried out to complement and guide the experimental efforts. The performance evaluation of the batch-RO system, named DesaLink, is based on extensive experimental tests. Operating DesaLink with compressed air as the power supply under laboratory conditions, a freshwater production of approximately 300 litres per day was recorded at a concentration of around 350 ppm, while the feed water concentration ranged from 2500 to 4500 ppm; the corresponding linkage efficiency was around 40%. On the computational side, simulation models have been developed and validated for each of the subsystems of DesaLink, from which an integrated model of the whole system has been realised. The models, both the subsystem ones and the integrated one, have been shown to predict the system performance accurately under specific operational conditions. A simulation case study has been performed using the developed model. Simulation results indicate that the system could achieve a water production of 200 m3 per year using a widely available evacuated tube solar collector with an area of only 2 m2. This freshwater production would satisfy the drinking water needs of 163 inhabitants in the Rajasthan region, the area for which the case study was performed.
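The energy advantage of batch operation can be illustrated with an idealised back-of-envelope comparison: a continuous single-stage system produces all permeate against the final brine osmotic pressure, whereas a batch system tracks the rising osmotic pressure. The sketch below assumes osmotic pressure linear in concentration, complete salt rejection, ideal pumps and ideal brine pressure recovery; the feed osmotic pressure is a rule-of-thumb value, not a measured figure from the thesis, so the ideal saving it prints understates the experimentally reported 70%, which also reflects avoided real-world losses.

```python
import math

PI_FEED_BAR = 2.8                 # rough osmotic pressure of ~3500 ppm feed
BAR_M3_TO_KWH = 1 / 36            # 1 bar*m3 = 100 kJ = 1/36 kWh

def sec_continuous(r, pi_f=PI_FEED_BAR):
    """Single-stage RO: all permeate is produced against the final
    brine osmotic pressure pi_f / (1 - r)."""
    return pi_f / (1 - r) * BAR_M3_TO_KWH

def sec_batch(r, pi_f=PI_FEED_BAR):
    """Batch RO: pressure tracks the instantaneous osmotic pressure,
    i.e. integrate pi_f / (1 - v) over the recovered volume fraction v."""
    return pi_f * math.log(1 / (1 - r)) / r * BAR_M3_TO_KWH

for r in (0.5, 0.7, 0.9):
    c, b = sec_continuous(r), sec_batch(r)
    print(f"recovery {r:.0%}: continuous {c:.3f} kWh/m3, "
          f"batch {b:.3f} kWh/m3 ({1 - b / c:.0%} saving)")
```

Even in this ideal comparison the saving is roughly 48% at 70% recovery and grows with recovery, which is why batch operation pays off precisely in the high-recovery regime that concentrate management demands.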
Abstract:
It is widely supposed that things tend to look blurred when they are moving fast. Previous work has shown that this is true for sharp edges but, paradoxically, blurred edges look sharper when they are moving than when stationary. This is 'motion sharpening'. We show that blurred edges also look up to 50% sharper when they are presented briefly (8-24 ms) than at longer durations (100-500 ms) without motion. This argues strongly against high-level models of sharpening based specifically on compensation for motion blur. It also argues against a recent, low-level, linear filter model that requires motion to produce sharpening. No linear filter model can explain our finding that sharpening was similar for sinusoidal and non-sinusoidal gratings, since linear filters can never distort sine waves. We also conclude that the idea of a 'default' assumption of sharpness is not supported by experimental evidence. A possible source of sharpening is a nonlinearity in the contrast response of early visual mechanisms to fast or transient temporal changes, perhaps based on the magnocellular (M-cell) pathway. Our finding that sharpening is not diminished at low contrast sets strong constraints on the nature of the nonlinearity.
Abstract:
Observations are often nested within other units. This is particularly the case in the educational sector, where school performance in terms of value added is the result of the school's contribution as well as pupil academic ability and other pupil-related features. Traditionally, the literature uses parametric Multi-Level Models (i.e. assuming a priori a particular functional form for the production process) to estimate the performance of nested entities. This paper discusses the use of the non-parametric Free Disposal Hull model (i.e. making no a priori assumptions on the production process) as an alternative approach. While taking into account contextual characteristics as well as atypical observations, we show how to decompose non-parametrically the overall inefficiency of a pupil into a unit-specific and a higher-level (i.e. school) component. Using a sample of entry and exit attainments of 3017 girls in British ordinary single-sex schools, we test the robustness of the non-parametric and parametric estimates. We find that the two methods agree in the relative measures of the scope for potential attainment improvement. Further, the two methods agree on the variation in pupil attainment and the proportion attributable to the pupil and school levels.
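To make the decomposition concrete, the sketch below computes output-oriented Free Disposal Hull scores for synthetic pupils (entry attainment as input, exit attainment as output) and splits each pupil's overall score against the pooled frontier multiplicatively into a within-school component and a school component. The synthetic data and this particular multiplicative split are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def fdh_score(x, y, X, Y):
    """Output-oriented FDH score (>= 1): the largest factor by which exit
    attainment y could be scaled while remaining dominated by an observed
    peer with no higher entry attainment x."""
    feasible = X <= x                     # peers using no more input
    return Y[feasible].max() / y if feasible.any() else 1.0

rng = np.random.default_rng(0)
n = 300
school = rng.integers(0, 5, n)            # 5 hypothetical schools
entry = rng.uniform(20, 80, n)            # entry attainment (input)
exit_ = entry * rng.uniform(0.8, 1.3, n) + 2.0 * school   # exit attainment

overall = np.array([fdh_score(entry[i], exit_[i], entry, exit_)
                    for i in range(n)])
within = np.array([fdh_score(entry[i], exit_[i],
                             entry[school == school[i]],
                             exit_[school == school[i]])
                   for i in range(n)])
school_part = overall / within            # overall = pupil part * school part
print("mean pupil-level component:", within.mean().round(3))
print("mean school-level component:", school_part.mean().round(3))
```

Because a school's own frontier is a subset of the pooled frontier, the within-school score never exceeds the overall score, so the school component is always at least 1, i.e. it isolates the shortfall attributable to the school rather than the pupil.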
Abstract:
The behaviour of control functions in safety-critical software systems is typically bounded to prevent the occurrence of known system-level hazards. These bounds are typically derived through safety analyses and can be implemented through necessary design features. However, the unpredictability of real-world problems can result in changes in the operating context that invalidate the behavioural bounds themselves, for example, unexpected hazardous operating contexts arising from failures or degradation. For highly complex problems it may be infeasible to determine, prior to deployment, the precise desired behavioural bounds of a function that addresses or minimises risk in hazardous operating cases. This paper presents an overview of the safety challenges associated with such problems and how they might be addressed. A self-management framework is proposed that performs on-line risk management. The features of the framework are shown in the context of employing intelligent adaptive controllers operating within complex and highly dynamic problem domains such as Gas-Turbine Aero Engine control. Safety assurance arguments enabled by the framework and necessary for certification are also outlined.
Abstract:
The connectivity of the Internet at the Autonomous System level is influenced by the routing policies that network operators implement. These in turn impose a direction on the propagation of address advertisements and, consequently, on the paths that can be used to reach those destinations. We propose using directed graphs to properly represent how destination advertisements propagate through the Internet, and the number of arc-disjoint paths to quantify the network's path diversity. Moreover, in order to understand the effects that policies have on the connectivity of the Internet, numerical analyses of the resulting directed graphs were conducted. Results demonstrate that, even after policies have been applied, there remains path diversity that the Border Gateway Protocol cannot currently exploit.
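As a minimal illustration of the proposed measure, the sketch below builds a toy policy-directed AS graph and counts arc-disjoint paths; by Menger's theorem this count equals the local edge connectivity of the directed graph. The topology is invented for illustration.

```python
import networkx as nx

# Toy AS-level digraph: arcs point in the direction in which an address
# advertisement originated by AS "d" is allowed to propagate under
# operator policies (topology is an assumption for this example).
G = nx.DiGraph([
    ("d", "a"), ("d", "b"),
    ("a", "c"), ("b", "c"), ("b", "e"),
    ("c", "s"), ("e", "s"),
])

# Menger's theorem: max number of arc-disjoint d->s paths equals the
# local edge connectivity between d and s in the directed graph.
print(nx.edge_connectivity(G, "d", "s"))        # -> 2 in this toy graph
for path in nx.edge_disjoint_paths(G, "d", "s"):
    print(path)                                  # the two arc-disjoint routes
```

Repeating the count from a destination towards every other AS yields the kind of path-diversity distribution analysed in the paper.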
Abstract:
Software development methodologies are becoming increasingly abstract, progressing from low level assembly and implementation languages such as C and Ada, to component based approaches that can be used to assemble applications using technologies such as JavaBeans and the .NET framework. Meanwhile, model driven approaches emphasise the role of higher level models and notations, and embody a process of automatically deriving lower level representations and concrete software implementations. The relationship between data and software is also evolving. Modern data formats are becoming increasingly standardised, open and empowered in order to support a growing need to share data in both academia and industry. Many contemporary data formats, most notably those based on XML, are self-describing, able to specify valid data structure and content, and can also describe data manipulations and transformations. Furthermore, while applications of the past have made extensive use of data, the runtime behaviour of future applications may be driven by data, as demonstrated by the field of dynamic data driven application systems. The combination of empowered data formats and high level software development methodologies forms the basis of modern game development technologies, which drive software capabilities and runtime behaviour using empowered data formats describing game content. While low level libraries provide optimised runtime execution, content data is used to drive a wide variety of interactive and immersive experiences. This thesis describes the Fluid project, which combines component based software development and game development technologies in order to define novel component technologies for the description of data driven component based applications. The thesis makes explicit contributions to the fields of component based software development and visualisation of spatiotemporal scenes, and also describes potential implications for game development technologies. The thesis also proposes a number of developments in dynamic data driven application systems in order to further empower the role of data in this field.
Abstract:
This project was undertaken for Hamworthy Hydraulics Limited. Its objective was to design and develop a controller package for a variable displacement hydraulic pump for use mainly on mobile earth-moving machinery. A survey was undertaken of control options used in practice, and from this a design specification was formulated, the successful implementation of which would give Hamworthy an advantage over its competitors. Two different modes for the controller were envisaged: one using conventional hydro-mechanics, the other based upon a microprocessor. To meet short-term customer prototype requirements, the first section of work was the realisation of the hydro-mechanical system. Mathematical models were made to evaluate controller stability and hence aid design. The final package met the requirements of the specification, and a single version could operate all sizes of variable displacement pump in the Hamworthy range. The controller options and combinations totalled twenty-four. The hydro-mechanical controller was complex, and it was realised that a microprocessor system would allow all options to be implemented with just one design of hardware, greatly simplifying production. The final section of this project was to determine whether such a design was feasible. This entailed finding cheap, reliable transducers, using mathematical models to predict electro-hydraulic interface stability, testing such interfaces and finally incorporating a microprocessor in an interactive control loop. The study revealed that such a system was technically possible but would cost 60% more than its hydro-mechanical counterpart. It was therefore concluded that, in the short term, for the markets considered, the hydro-mechanical design was the better solution. Regarding the microprocessor system, the final conclusion was that, because the cost of the electro-hydraulic system relative to the hydro-mechanical one is decreasing, the electro-hydraulic controller will gradually become more attractive, and Hamworthy should therefore continue its development.
Abstract:
Image segmentation is one of the most computationally intensive operations in image processing and computer vision. This is because a large volume of data is involved and many different features have to be extracted from the image data. This thesis is concerned with the investigation of practical issues related to the implementation of several classes of image segmentation algorithms on parallel architectures. The Transputer is used as the basic building block of the hardware architectures, and Occam is used as the programming language. The segmentation methods chosen for implementation are: convolution, for edge-based segmentation; the Split and Merge algorithm, for segmenting non-textured regions; and the Granlund method, for segmentation of textured images. Three different convolution methods have been implemented. The direct method of convolution, carried out in the spatial domain, uses the array architecture. The other two methods, based on convolution in the frequency domain, require the two-dimensional Fourier transform. Parallel implementations of two different Fast Fourier Transform algorithms have been developed, incorporating original solutions. The array architecture has been adopted for the Row-Column method, and the pyramid architecture for the Vector-Radix method. The texture segmentation algorithm, for which a system-level design is given, demonstrates a further application of the Vector-Radix Fourier transform. A novel concurrent version of the quad-tree based Split and Merge algorithm has been implemented on the pyramid architecture. The performance of the developed parallel implementations is analysed; many of the obtained speed-up and efficiency measures show values close to their respective theoretical maxima, and, where appropriate, comparisons are drawn between different implementations. The thesis concludes with comments on general issues related to the use of the Transputer system as a development tool for image processing applications, and on issues related to the engineering of concurrent image processing applications.
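The Row-Column decomposition that maps onto the array architecture can be stated compactly: a 2D DFT factorises into independent 1D FFTs over rows and then over columns, so each processor can transform its own strip of the image. A NumPy sketch standing in for the Transputer array:

```python
import numpy as np

def fft2_row_column(img):
    """Row-Column 2D FFT: 1D FFTs over rows (mutually independent, hence
    trivially parallel across processors), then 1D FFTs over columns."""
    rows = np.fft.fft(img, axis=1)    # each row independently
    return np.fft.fft(rows, axis=0)   # then each column independently

img = np.random.default_rng(0).random((8, 8))
assert np.allclose(fft2_row_column(img), np.fft.fft2(img))
```

On distributed hardware, the data exchange between the row pass and the column pass becomes the dominant communication cost, which is the kind of implementation issue the thesis analyses on the Transputer architectures.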
Abstract:
Congestion control is critical for the provisioning of quality of service (QoS) over dedicated short range communications (DSRC) vehicle networks for road safety applications. In this paper we propose a congestion control method for DSRC vehicle networks at road intersections, with the aims of providing high-availability and low-latency channels for high priority emergency safety applications while maximizing channel utilization for low priority routine safety applications. In this method, an offline simulation-based approach is used to find the best possible configurations of message rate and MAC layer backoff exponent (BE) for a given number of vehicles equipped with DSRC radios. The identified best configurations are then used online by a roadside access point (AP) for system operation. Simulation results demonstrate that this adaptive method significantly outperforms a fixed control method as the number of vehicles varies. The impact of errors in estimating the number of vehicles in the network on system-level performance is also investigated.
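A minimal sketch of the offline-tuned lookup scheme described above: an offline simulation sweep produces the best (message rate, BE) pair per vehicle count, and the AP applies the nearest entry online. All numbers below are illustrative placeholders, not values from the paper.

```python
# Offline simulation sweep (not shown) yields the best configuration of
# message rate and MAC backoff exponent per vehicle count; the roadside
# AP then looks up the nearest entry online. Placeholder values only.
BEST_CONFIG = {        # vehicles -> (messages per second, backoff exponent)
    20: (10, 3),
    40: (8, 4),
    60: (5, 5),
    80: (4, 6),
}

def select_config(n_vehicles: int) -> tuple[int, int]:
    """Return the configuration tuned for the nearest simulated count."""
    nearest = min(BEST_CONFIG, key=lambda n: abs(n - n_vehicles))
    return BEST_CONFIG[nearest]

rate, be = select_config(47)          # AP's current estimate: 47 vehicles
print(f"AP broadcasts: use {rate} msg/s, BE={be}")
```

In these terms, the paper's final experiment on estimation error corresponds to calling `select_config` with a wrong `n_vehicles` and measuring the performance penalty.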
Abstract:
This thesis examined solar thermal collectors for use in alternative hybrid solar-biomass power plant applications in Gujarat, India. Following a preliminary review, the cost-effective selection and design of the solar thermal field were identified as critical factors underlying the success of hybrid plants. Consequently, the existing solar thermal technologies were reviewed and ranked for use in India by means of a multi-criteria decision-making method, the Analytical Hierarchy Process (AHP). Informed by the outcome of the AHP, the thesis went on to pursue the Linear Fresnel Reflector (LFR), the design of which was optimised with the help of ray-tracing. To further enhance collector performance, LFR concepts incorporating novel mirror spacing and drive mechanisms were evaluated. Subsequently, a new variant, termed the Elevation Linear Fresnel Reflector (ELFR), was designed, constructed and tested at Aston University, UK, allowing theoretical models of solar thermal field performance to be verified. Based on the resulting characteristics of the LFR, and data gathered for the other hybrid system components, models of hybrid LFR- and ELFR-biomass power plants were developed and analysed in TRNSYS®. The techno-economic and environmental consequences of varying the size of the solar field in relation to the total plant capacity were modelled for a series of case studies covering different applications: tri-generation (electricity, ice and heat), electricity-only generation, and process heat. The case studies also encompassed varying site locations, capacities, operational conditions and financial situations. For a hybrid tri-generation plant in Gujarat, the recommendation was an LFR solar thermal field of 14,000 m2 aperture with a 3 tonne biomass boiler, generating 815 MWh per annum of electricity for nearby villages and 12,450 tonnes of ice per annum for local fisheries and food industries. However, at the expense of a 0.3 ¢/kWh increase in levelised energy costs, the ELFR increased the savings of biomass (100 t/a) and land (9 ha/a). For solar thermal applications in areas with high land cost, the ELFR reduced levelised energy costs. Off-grid hybrid plants for tri-generation were determined to be the most feasible application in India. Although biomass-only plants were found to be more economically viable, it was concluded that hybrid systems will soon become cost-competitive and can considerably improve current energy security and biomass supply chain issues in India.
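For reference, the AHP weighting step used in such a technology ranking works as sketched below: a Saaty-style pairwise comparison matrix is reduced to criterion weights via its principal eigenvector, followed by a consistency check. The criteria and the judgments here are invented for illustration, not the thesis's data.

```python
import numpy as np

# Core AHP calculation: pairwise comparison matrix -> principal-eigenvector
# weights -> consistency ratio. Criteria and judgments are assumptions.
criteria = ["cost", "efficiency", "land use"]
A = np.array([[1.0, 3.0, 5.0],        # A[i, j]: importance of i over j
              [1/3, 1.0, 3.0],        # (reciprocal by construction)
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)           # index of the principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # criterion weights, summing to 1

lam = eigvals.real[k]
ci = (lam - len(A)) / (len(A) - 1)    # consistency index
cr = ci / 0.58                        # Saaty's random index RI = 0.58 for n = 3
print(dict(zip(criteria, w.round(3))), f"CR = {cr:.3f}")   # CR < 0.1 is acceptable
```

The weights are then combined with each technology's score on each criterion to produce the overall ranking that informed the choice of the LFR.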
Abstract:
The importance of interorganizational networks in supporting or hindering the achievement of organizational objectives is now widely acknowledged. Network research is directed at understanding network processes and structures, and their impact upon performance. A key process is learning. The concepts of individual, group and organizational learning are long established. This article argues that learning might also usefully be regarded as occurring at a fourth system level, the interorganizational network. The concept of network learning - learning by a group of organizations as a group - is presented, and differentiated from other types of learning, notably interorganizational learning (learning in interorganizational contexts). Four cases of network learning are identified and analysed to provide insights into network learning processes and outcomes. It is proposed that 'network learning episode' offers a suitable unit of analysis for the empirical research needed to develop our understanding of this potentially important concept.