855 results for trade-offs
Abstract:
Eddy currents induced in a magnetic resonance imaging (MRI) cryostat bore during the pulsing of gradient coils can be exploited constructively, together with the gradient currents that generate them, to obtain good gradient uniformity within a specified imaging volume over time. This is achieved by simultaneously optimizing the spatial distribution and temporal pre-emphasis of the gradient coil current, to account for the spatial and temporal variation of the secondary magnetic fields produced by the induced eddy currents. The method allows the tailored design of gradient coil/magnet configurations and the associated engineering trade-offs. To compute the transient eddy currents within a realistic cryostat vessel, a low-frequency finite-difference time-domain (FDTD) method using a total-field scattered-field (TFSF) scheme has been implemented and validated.
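To illustrate the TFSF scheme named above, the sketch below implements a minimal one-dimensional FDTD update with a TFSF source injection at a single grid node. The grid size, boundary position and Gaussian pulse are illustrative assumptions only; the paper's low-frequency, three-dimensional cryostat model is far more involved.

```java
// Minimal 1D FDTD with a total-field/scattered-field (TFSF) source.
// Hypothetical parameters; no absorbing boundaries, so fields reflect
// at the grid edges.
public class Fdtd1dTfsf {
    public static void main(String[] args) {
        final int nz = 200, steps = 250, src = 50; // grid nodes, time steps, TF/SF boundary
        final double imp0 = 377.0;                 // free-space impedance
        double[] ez = new double[nz], hy = new double[nz];

        for (int t = 0; t < steps; t++) {
            // Magnetic-field update (everywhere).
            for (int i = 0; i < nz - 1; i++)
                hy[i] += (ez[i + 1] - ez[i]) / imp0;
            // TFSF correction: remove the incident E-field from the node
            // just outside the total-field region.
            hy[src - 1] -= gaussian(t) / imp0;

            // Electric-field update (everywhere).
            for (int i = 1; i < nz; i++)
                ez[i] += (hy[i] - hy[i - 1]) * imp0;
            // TFSF correction: inject the incident field at the boundary.
            ez[src] += gaussian(t + 0.5);
        }
        System.out.printf("Ez at probe node 150: %.6f%n", ez[150]);
    }

    // Incident waveform: a Gaussian pulse centred at step 30.
    static double gaussian(double t) {
        double d = (t - 30.0) / 10.0;
        return Math.exp(-d * d);
    }
}
```

The two correction lines are the essence of TFSF: they confine the incident wave to the total-field region (nodes at and beyond src), so the scattered-field region contains only the response of whatever objects are placed in the grid.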
Abstract:
In some contexts data envelopment analysis (DEA) gives poor discrimination between the performance of units. While this may reflect genuine uniformity of performance, it may also reflect a lack of sufficient observations or other factors limiting discrimination. In this paper we present an overview of the main approaches that can be used to improve the discrimination of DEA. These include simple methods, such as the aggregation of inputs or outputs and the use of longitudinal data; more advanced methods, such as weight restrictions, production trade-offs and unobserved units; and a relatively new method based on selective proportionality between the inputs and outputs. © 2007 Springer Science+Business Media, LLC.
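For reference, the weight-based methods listed above constrain the standard DEA multiplier model, which for the unit under assessment $o$ (in its textbook CRS form, not reproduced from the paper) reads

$$\max_{u,v}\ \sum_{r} u_r y_{ro} \quad \text{s.t.} \quad \sum_{i} v_i x_{io} = 1, \qquad \sum_{r} u_r y_{rj} - \sum_{i} v_i x_{ij} \le 0 \;\; \forall j, \qquad u_r, v_i \ge 0,$$

where $x_{ij}$ and $y_{rj}$ are the inputs and outputs of unit $j$. Weight restrictions sharpen discrimination by adding constraints on $u$ and $v$, for example $u_1 \ge \alpha\, u_2$ for some judgement-based constant $\alpha$.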
Abstract:
We investigate apodisation profiles of fibre Bragg gratings to determine key factors in filter design, using a novel apodisation technique. This highlights some practical fabrication limitations and provides important information concerning trade-offs between sidelobe suppression and bandwidth.
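As background for the trade-off mentioned, a common apodisation choice (a standard profile, not necessarily the novel technique used here) is a raised-cosine taper of the coupling coefficient $\kappa$ along the grating length $L$:

$$\kappa(z) = \frac{\kappa_0}{2}\left[1 + \cos\!\left(\frac{2\pi\,(z - L/2)}{L}\right)\right], \qquad 0 \le z \le L.$$

Smoothing the ends of the grating in this way suppresses reflection sidelobes, but it shortens the effective grating length and so broadens the main reflection band, which is the sidelobe-suppression/bandwidth trade-off referred to above.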
Abstract:
With the advent of distributed computer systems with a largely transparent user interface, new questions have arisen regarding the management of such an environment by an operating system. One fertile area of research is that of load balancing, which attempts to improve system performance by redistributing the workload submitted to the system by the users. Early work in this field concentrated on static placement of computational objects to improve performance, given prior knowledge of process behaviour. More recently this has evolved into studying dynamic load balancing with process migration, thus allowing the system to adapt to varying loads. In this thesis, we describe a simulated system which facilitates experimentation with various load balancing algorithms. The system runs under UNIX and provides functions for user processes to communicate through software ports; processes reside on simulated homogeneous processors, connected by a user-specified topology, and a mechanism is included to allow migration of a process from one processor to another. We present the results of a study of adaptive load balancing algorithms, conducted using the aforementioned simulated system, under varying conditions; these results show the relative merits of different approaches to the load balancing problem, and we analyse the trade-offs between them. Following from this study, we present further novel modifications to suggested algorithms, and show their effects on system performance.
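As a concrete illustration of the kind of policy such a simulator compares, the sketch below shows a toy sender-initiated, threshold-based migration decision. The thresholds, load metric and types are assumptions for illustration, not the thesis's implementation.

```java
import java.util.Comparator;
import java.util.List;

public class ThresholdBalancer {
    static final double HIGH = 0.8, LOW = 0.4;   // hypothetical load thresholds

    record Processor(int id, double load) {}

    /** Sender-initiated policy: an overloaded processor picks the
     *  least-loaded peer, migrating only if the gap is worth the cost. */
    static Integer chooseTarget(Processor self, List<Processor> peers) {
        if (self.load() < HIGH) return null;      // not overloaded: keep the process
        Processor best = peers.stream()
                .min(Comparator.comparingDouble(Processor::load))
                .orElse(null);
        // Migrate only if the candidate target is genuinely underloaded.
        return (best != null && best.load() < LOW) ? best.id() : null;
    }

    public static void main(String[] args) {
        var peers = List.of(new Processor(1, 0.9), new Processor(2, 0.2));
        System.out.println(chooseTarget(new Processor(0, 0.95), peers)); // prints 2
    }
}
```

Adaptive algorithms of this family differ mainly in who initiates migration (sender or receiver), how load is measured, and how thresholds adapt to overall system load, which is broadly the design space such simulation studies explore.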
Abstract:
Multiple-antenna systems offer significant performance enhancement and will be applied to the next generation of broadband wireless communications. This thesis presents investigations of multiple-antenna systems, namely multiple-input multiple-output (MIMO) and cooperative communication (CC) systems, and their performance in more realistic propagation environments than those reported previously. For MIMO systems, the investigations are conducted via theoretical modelling and simulations in a double-scattering environment. The results show that in flat fading channels system performance depends on how the scatterer density varies, while in frequency-selective fading channels it is affected by the length of the coding block as well as the scatterer density. In realistic propagation environments, fading correlation also has an impact on CC systems, where the antennas can be further apart than in MIMO systems. A general stochastic model is applied to study the effects of fading correlation on the performance of CC systems. This model reflects the asymmetric nature of the wireless channels in a CC system. The results demonstrate the varied effects of fading correlation under different protocols and channel conditions. The performance of CC systems is further studied at the packet level, using both simulations and an experimental testbed. The results obtained verify various performance trade-offs of the cooperative relaying network (CRN) investigated in different propagation environments, and suggest that a proper selection of relaying algorithms and other techniques can meet the quality-of-service requirements of different applications.
Abstract:
Designers of self-adaptive systems often formulate adaptive design decisions while making unrealistic or myopic assumptions about the system's requirements and environment. The decisions taken during this formulation are crucial for satisfying requirements. In environments characterized by uncertainty and dynamism, deviation from these assumptions is the norm and may trigger 'surprises'. Our method allows designers to make explicit links between the possible emergence of surprises, risks and design trade-offs. The method can be used to explore the design decisions for self-adaptive systems and to choose among decisions that better fulfil (or rather partially fulfil) non-functional requirements and address their trade-offs. The analysis can also provide designers with valuable input for refining the adaptation decisions to balance, for example, resilience (i.e. satisfiability of non-functional requirements and their trade-offs) and stability (i.e. minimizing the frequency of adaptation). The objective is to provide designers of self-adaptive systems with a basis for multi-dimensional what-if analysis to revise and improve their understanding of the environment and its effect on non-functional requirements, and thereby improve decision-making. We have applied the method to a wireless sensor network for flood prediction. The application shows that the method gives rise to questions that were not explicitly asked before at design time and assists designers in the process of risk-aware, what-if and trade-off analysis.
Abstract:
We assess the feasibility of hybrid solar-biomass power plants for use in India in various applications, including tri-generation, electricity generation and process heat. To cover this breadth of scenarios we analyse, with the help of simulation models, case studies with peak thermal capacities ranging from 2 to 10 MW. Evaluations are made against technical, financial and environmental criteria. Suitable solar multiples, based on the trade-offs among the various criteria, range from 1 to 2.5. Compared to conventional energy sources, levelised energy costs are high, but competitive in comparison to other renewables such as photovoltaics and wind. Long payback periods for hybrid plants mean that they cannot compete directly with biomass-only systems; however, a 1.2-3.2 times increase in feedstock price would make hybrid systems cost competitive. Furthermore, in comparison to biomass-only operation, hybrid operation saves up to 29% of biomass and land, at an 8.3-24.8 $/GJ/a increase in cost per exergy loss and a 1.8-5.2 ¢/kWh increase in levelised energy cost. Hybrid plants will become an increasingly attractive option as the cost of solar thermal falls and feedstock, fossil fuel and land prices continue to rise. In the foreseeable future, solar will continue to rely on subsidies, and it is recommended that tri-generation plants be subsidised preferentially. © 2012 Elsevier Ltd.
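The levelised energy cost figures above follow the standard definition, stated here for reference (the paper's exact cost model may differ): lifetime costs divided by lifetime energy output, both discounted,

$$\mathrm{LEC} = \frac{\sum_{t=1}^{n} (I_t + O_t + F_t)\,(1+r)^{-t}}{\sum_{t=1}^{n} E_t\,(1+r)^{-t}},$$

where $I_t$, $O_t$ and $F_t$ are the investment, operating and fuel (feedstock) expenditures in year $t$, $E_t$ is the energy delivered in year $t$, $r$ is the discount rate and $n$ the plant lifetime. The sensitivity to feedstock price noted above enters through $F_t$.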
Abstract:
Requirements are sensitive to the context in which the system-to-be must operate. Where such context is well-understood and is static or evolves slowly, existing RE techniques can be made to work well. Increasingly, however, development projects are being challenged to build systems to operate in contexts that are volatile over short periods in ways that are imperfectly understood. Such systems need to be able to adapt to new environmental contexts dynamically, but the contextual uncertainty that demands this self-adaptive ability makes it hard to formulate, validate and manage their requirements. Different contexts may demand different requirements trade-offs. Unanticipated contexts may even lead to entirely new requirements. To help counter this uncertainty, we argue that requirements for self-adaptive systems should be run-time entities that can be reasoned over in order to understand the extent to which they are being satisfied and to support adaptation decisions that can take advantage of the systems' self-adaptive machinery. We take our inspiration from the fact that explicit, abstract representations of software architectures used to be considered design-time-only entities but computational reflection showed that architectural concerns could be represented at run-time too, helping systems to dynamically reconfigure themselves according to changing context. We propose to use analogous mechanisms to achieve requirements reflection. In this paper we discuss the ideas that support requirements reflection as a means to articulate some of the outstanding research challenges.
Abstract:
In a Data Envelopment Analysis model, some of the weights used to compute the efficiency of a unit can have zero or negligible value despite the importance of the corresponding input or output. This paper offers an approach to preventing inputs and outputs from being ignored in the DEA assessment in the multiple-input, multiple-output VRS environment, building on an approach introduced in Allen and Thanassoulis (2004) for single-input, multiple-output CRS cases. The proposed method is based on the idea of introducing unobserved DMUs, created by adjusting the input and output levels of certain observed, relatively efficient DMUs in a manner which reflects a combination of technical information and the decision maker's value judgements. In contrast to many alternative techniques used to constrain weights and/or improve envelopment in DEA, this approach allows one to impose local information on production trade-offs which is in line with the general VRS technology. The suggested procedure is illustrated using real data. © 2011 Elsevier B.V. All rights reserved.
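For context, production trade-offs in the sense of Podinovski extend the VRS technology as follows (a standard statement, paraphrased rather than taken from this paper): each trade-off $(P_t, Q_t)$ asserts that inputs may change by $P_t$ provided outputs simultaneously change by $Q_t$, so the feasible points $(x, y)$ satisfy

$$x \ \ge\ \sum_{j} \lambda_j x_j + \sum_{t} \pi_t P_t, \qquad y \ \le\ \sum_{j} \lambda_j y_j + \sum_{t} \pi_t Q_t, \qquad \sum_{j} \lambda_j = 1, \quad \lambda_j, \pi_t \ge 0.$$

Unobserved DMUs of the kind proposed here play an analogous role: adjusted copies of efficient observed DMUs enlarge the envelopment locally, in directions the decision maker deems technologically meaningful.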
Abstract:
Investigating the recent direct action campaigns against genetically modified crops in France and the United Kingdom, the authors set out to understand how contrasting judicial systems and cultures affect the way that activists choose to commit ostensibly illegal actions and how they negotiate the trade-offs between effectiveness and public accountability. The authors find evidence that prosecution outcomes across different judicial systems are consistent and relatively predictable and consequently argue that the concept of a “judicial opportunity structure” is useful for developing scholars’ understanding of social movement trajectories. The authors also find that these differential judicial opportunities cannot adequately account for the tactical choices made by activists with respect to the staging of covert or overt direct action; rather, explanations of tactical choice are better accounted for by movement ideas, cultures, and traditions.
Abstract:
A view has emerged within manufacturing and service organizations that the operations management function can hold the key to achieving competitive edge. This has recently been emphasized by the demands for greater variety and higher quality, which must be set against a background of increasing cost of resources. As nations' trade barriers are progressively lowered and removed, producers of goods and service products are becoming more exposed to competition that may come from virtually anywhere in the world. Simply to survive in this climate, many organizations have found it necessary to improve their manufacturing or service delivery systems. To become real "winners", some have adopted a strategic approach to operations and completely reviewed and restructured their approach to production system design and operations planning and control. The articles in this issue of the International Journal of Operations & Production Management have been selected to illustrate current thinking and practice in relation to this situation. They are all based on papers presented at the Sixth International Conference of the Operations Management Association-UK, held at Aston University in June 1991. The theme of the conference was "Achieving Competitive Edge", and authors from 15 countries around the world contributed more than 80 papers. Within this special issue five topic areas are addressed, with two articles relating to each: strategic management of operations; managing change; production system design; production control; and service operations. Under strategic management of operations, De Toni, Filippini and Forza propose a conceptual model which considers the performance of an operating system as a source of competitive advantage through the "operation value chain" of design, purchasing, production and distribution. Their model is set within the context of the tendency towards globalization. New's article is somewhat in contrast to the more fashionable literature on operations strategy. It challenges the validity of the current idea of "world-class manufacturing" and instead urges a reconsideration of the view that strategic "trade-offs" are necessary to achieve a competitive edge. The importance of managing change has for some time been recognized within the field of organization studies, but its relevance to operations management is only now being realized. Berger considers the use of "organization design", "sociotechnical systems" and change strategies, and contrasts these with the more recent idea of the "dialogue perspective". A tentative model is suggested to improve the analysis of different strategies in a situation-specific context. Neely and Wilson look at an essential prerequisite if change is to be effected in an efficient way, namely product goal congruence. Using a case study as its basis, their article suggests a method of measuring goal congruence as a means of identifying the extent to which key performance criteria relating to quality, time, cost and flexibility are understood within an organization. The two articles on production system design represent important contributions to the debate on flexible production organization and autonomous group working. Rosander uses the results from cases to test the applicability of "flow groups" as the optimal way of organizing batch production. Schuring also examines cases, to determine the reasons behind the adoption of "autonomous work groups" in The Netherlands and Sweden.
Both these contributions help to provide a greater understanding of the production philosophies which have emerged as alternatives to more conventional systems for intermittent and continuous production. The production control articles are both concerned with the concepts of "push" and "pull", the two broad approaches to material planning and control. Hirakawa, Hoshino and Katayama have developed a hybrid model, suitable for multistage manufacturing processes, which combines the benefits of both systems. They discuss the theoretical arguments in support of the system and illustrate its performance with numerical studies. Slack and Correa's concern is with the flexibility characteristics of push and pull material planning and control systems. They use the case of two plants using the different systems to compare their performance within a number of predefined flexibility types. The two final contributions, on service operations, are complementary. The article by Voss really relates to manufacturing but examines the application of service industry concepts within the UK manufacturing sector. His studies in a number of companies support the idea of the "service factory" and offer a new perspective for manufacturing. Harvey's contribution, by contrast, is concerned with the application of operations management principles in the delivery of professional services. Using the case of social-service provision in Canada, it demonstrates how concepts such as "just-in-time" can be used to improve service performance. The ten articles in this special issue of the journal address a wide range of issues and situations. Their common aspect is that, together, they demonstrate the extent to which competitiveness can be improved via the application of operations management concepts and techniques.
Abstract:
Background: Qualitative research makes an important contribution to our understanding of health and healthcare. However, qualitative evidence can be difficult to search for and identify, and the effectiveness of different types of search strategies is unknown. Methods: Three search strategies for qualitative research in the example area of support for breast-feeding were evaluated using six electronic bibliographic databases. The strategies were based on thesaurus terms, free-text terms and broad-based terms, and were combined with recognised search terms for support for breast-feeding previously used in a Cochrane review. For each strategy, we evaluated the recall (potentially relevant records found) and precision (proportion of retrieved records actually relevant). Results: A total yield of 7420 potentially relevant records was retrieved by the three strategies combined. Of these, 262 were judged relevant. Using any one strategy alone would have missed relevant records. The broad-based strategy had the highest recall and the thesaurus strategy the highest precision. Precision was generally poor: 96% of records initially identified as potentially relevant were deemed irrelevant. Searching for qualitative research thus involves trade-offs between recall and precision. Conclusions: These findings confirm that strategies that attempt to maximise the number of potentially relevant records found are likely to return a large number of false positives. They also suggest that a range of search terms is required to optimise searching for qualitative evidence. This underlines the problems of current methods for indexing qualitative research in bibliographic databases and indicates where improvements need to be made.
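The reported figures fix the precision directly:

$$\text{precision} = \frac{\text{relevant retrieved}}{\text{total retrieved}} = \frac{262}{7420} \approx 3.5\%,$$

consistent with the statement that roughly 96% of the retrieved records were irrelevant. Recall, by contrast, is the proportion of all relevant records that a strategy retrieves, and here can only be assessed relative to the combined yield of the three strategies.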
Abstract:
This paper investigates whether AspectJ can be used for efficient profiling of Java programs. Profiling differs from other applications of AOP (e.g. tracing), since it necessitates efficient and often complex interactions with the target program. As such, it was uncertain whether AspectJ could achieve this goal. We therefore investigate four common profiling problems (heap usage, object lifetime, wasted time and time-spent) and report on how well AspectJ handles them. For each, we provide an efficient implementation, discuss any trade-offs or limitations, and present the results of an experimental evaluation of the costs of using it. Our conclusions are mixed. On the one hand, we find that AspectJ is sufficiently expressive to describe the four profiling problems and reasonably efficient in most cases. On the other hand, we find several limitations in the current AspectJ implementation that severely hamper its suitability for profiling. Copyright © 2006 John Wiley & Sons, Ltd.
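To give a flavour of the approach evaluated, the sketch below is a minimal annotation-style AspectJ aspect for the 'time-spent' problem. The com.example package and the aspect's structure are assumptions for illustration, not the authors' code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class TimeSpentProfiler {
    // Accumulated wall-clock nanoseconds per method signature.
    private static final Map<String, LongAdder> totals = new ConcurrentHashMap<>();

    // Advise every method in the (hypothetical) target package; exclude the
    // profiler itself so its own methods are not advised.
    @Around("execution(* com.example..*.*(..)) && !within(TimeSpentProfiler)")
    public Object time(ProceedingJoinPoint jp) throws Throwable {
        long start = System.nanoTime();
        try {
            return jp.proceed();                  // run the original method
        } finally {
            totals.computeIfAbsent(jp.getSignature().toShortString(),
                                   k -> new LongAdder())
                  .add(System.nanoTime() - start);
        }
    }
}
```

The around advice itself adds per-call overhead, which is precisely the kind of cost the paper's experimental evaluation quantifies.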