811 results for dynamic performance appraisal
Abstract:
Approximately half of the houses in Northern Ireland were built before any form of minimum thermal specification or energy efficiency standard was enforced. Furthermore, 44% of households are categorised as being in fuel poverty, that is, spending more than 10% of household income on heating to reach an acceptable level of thermal comfort. To bring the existing housing stock up to an acceptable standard, retrofitting to improve energy efficiency is essential, and it is also necessary to study the effectiveness of such improvements under future climate scenarios. This paper presents the results of year-long performance monitoring of two houses that have undergone retrofits to improve energy efficiency. Using wireless sensor technology, internal temperature and humidity, external weather, and household gas and electricity usage were monitored for a year. Simulations in the IES-VE dynamic building modelling software were calibrated against the monitoring data to ASHRAE Guideline 14 standards. The energy performance and the internal environment of the houses were then assessed for current and future climate scenarios, and the results show the need for a holistic, balanced retrofitting strategy.
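For context, ASHRAE Guideline 14 judges a calibrated model by its normalised mean bias error (NMBE) and coefficient of variation of the root-mean-square error, CV(RMSE). A minimal sketch of that check, assuming hourly data and the commonly cited hourly thresholds (|NMBE| ≤ 10%, CV(RMSE) ≤ 30%); variable names are illustrative:

```python
import numpy as np

def guideline14_check(measured, simulated, n_params=1):
    """Return NMBE (%), CV(RMSE) (%), and a pass/fail flag against the
    ASHRAE Guideline 14 hourly calibration criteria."""
    m = np.asarray(measured, dtype=float)
    s = np.asarray(simulated, dtype=float)
    n, mean_m = len(m), m.mean()
    nmbe = 100.0 * (m - s).sum() / ((n - n_params) * mean_m)
    cv_rmse = 100.0 * np.sqrt(((m - s) ** 2).sum() / (n - n_params)) / mean_m
    return nmbe, cv_rmse, (abs(nmbe) <= 10.0) and (cv_rmse <= 30.0)
```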
Abstract:
As part of its single technology appraisal (STA) process, the National Institute for Health and Care Excellence (NICE) invited the company that manufactures cabazitaxel (Jevtana®, Sanofi, UK) to submit evidence for the clinical and cost effectiveness of cabazitaxel for treatment of patients with metastatic hormone-relapsed prostate cancer (mHRPC) previously treated with a docetaxel-containing regimen. The School of Health and Related Research Technology Appraisal Group at the University of Sheffield was commissioned to act as the independent Evidence Review Group (ERG). The ERG produced a critical review of the evidence for the clinical and cost effectiveness of the technology based upon the company's submission to NICE. Clinical evidence for cabazitaxel was derived from a multinational randomised open-label phase III trial (TROPIC) of cabazitaxel plus prednisone or prednisolone compared with mitoxantrone plus prednisone or prednisolone, which was assumed to represent best supportive care. The NICE final scope identified a further three comparators: abiraterone in combination with prednisone or prednisolone; enzalutamide; and radium-223 dichloride for the subgroup of people with bone metastasis only (no visceral metastasis). The company did not consider radium-223 dichloride to be a relevant comparator. Neither abiraterone nor enzalutamide has been directly compared in a trial with cabazitaxel. Instead, clinical evidence was synthesised within a network meta-analysis (NMA). Results from TROPIC showed that cabazitaxel was associated with a statistically significant improvement in both overall survival and progression-free survival compared with mitoxantrone. Results from a random-effects NMA, as conducted by the company and updated by the ERG, indicated that there was no statistically significant difference between the three active treatments for both overall survival and progression-free survival. Utility data were not collected as part of the TROPIC trial, and were instead taken from the company's UK early access programme. Evidence on resource use came from the TROPIC trial, supplemented by both expert clinical opinion and a UK clinical audit. List prices were used for mitoxantrone, abiraterone and enzalutamide as directed by NICE, although commercial in-confidence patient-access schemes (PASs) are in place for abiraterone and enzalutamide. The confidential PAS was used for cabazitaxel. Sequential use of the advanced hormonal therapies (abiraterone and enzalutamide) does not usually occur in clinical practice in the UK. Hence, cabazitaxel could be used within two pathways of care: either when an advanced hormonal therapy was used pre-docetaxel, or when one was used post-docetaxel. The company believed that the former pathway was more likely to represent standard National Health Service (NHS) practice, and so its main comparison was between cabazitaxel and mitoxantrone, with effectiveness data from the TROPIC trial. Results of the company's updated cost-effectiveness analysis estimated a probabilistic incremental cost-effectiveness ratio (ICER) of £45,982 per quality-adjusted life-year (QALY) gained, which the committee considered to be the most plausible value for this comparison. Cabazitaxel was estimated to be both cheaper and more effective than abiraterone. Cabazitaxel was estimated to be cheaper but less effective than enzalutamide, resulting in an ICER of £212,038 per QALY gained for enzalutamide compared with cabazitaxel.
The ERG noted that radium-223 is a valid comparator (for the indicated subgroup), and that it may be used in either of the two care pathways. Hence, its exclusion leads to uncertainty in the cost-effectiveness results. In addition, the company assumed that there would be no drug wastage when cabazitaxel was used, with cost-effectiveness results being sensitive to this assumption: modelling drug wastage increased the ICER comparing cabazitaxel with mitoxantrone to over £55,000 per QALY gained. The ERG updated the company's NMA and used a random-effects model to perform a fully incremental analysis between cabazitaxel, abiraterone, enzalutamide and best supportive care using PASs for abiraterone and enzalutamide. Results showed that both cabazitaxel and abiraterone were extendedly dominated by the combination of best supportive care and enzalutamide. Preliminary guidance from the committee, which included wastage of cabazitaxel, did not recommend its use. In response, the company provided both a further discount to the confidential PAS for cabazitaxel and confirmation from NHS England that it is appropriate to supply and purchase cabazitaxel in pre-prepared intravenous-infusion bags, which would remove the cost of drug wastage. As a result, the committee recommended use of cabazitaxel as a treatment option in people with an Eastern Cooperative Oncology Group performance status of 0 or 1 whose disease had progressed during or after treatment with at least 225 mg/m² of docetaxel, as long as it was provided at the discount agreed in the PAS and purchased either in pre-prepared intravenous-infusion bags or in vials at a reduced price to reflect the average per-patient drug wastage.
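For readers outside health economics: the incremental cost-effectiveness ratio used throughout is ICER = ΔC/ΔE, the extra cost per extra QALY of one option over another, and an option is "extendedly dominated" when a more effective alternative delivers QALYs at a lower incremental cost per QALY. A minimal sketch with purely hypothetical figures (the actual PAS prices are confidential):

```python
def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost-effectiveness ratio of option B versus option A."""
    return (cost_b - cost_a) / (qaly_b - qaly_a)

# Hypothetical illustration only -- not the confidential PAS figures.
bsc = (20_000, 0.80)   # (total cost in GBP, QALYs) for best supportive care
cab = (55_000, 1.50)   # cabazitaxel
enz = (80_000, 1.62)   # enzalutamide

icer_bsc_cab = icer(*bsc, *cab)   # cabazitaxel vs best supportive care
icer_cab_enz = icer(*cab, *enz)   # enzalutamide vs cabazitaxel
# On a frontier ordered by QALYs, the middle option is extendedly
# dominated if its ICER exceeds that of the next more effective option.
extendedly_dominated = icer_bsc_cab > icer_cab_enz
```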
Abstract:
This thesis makes use of the unique reregulation of the pharmaceutical monopoly in Sweden to critically examine intra-industry firm heterogeneity. It contributes to existing divestiture research by studying the dynamics of reconfigurations of value constellations and their effects on the value creation of divested pharmacies. Because the findings showed that the predominant theory of intra-industry firm heterogeneity could not explain firm performance, the value constellation concept was applied, as it captured the phenomenon. A patterned finding showed how reconfigurations of value constellations in a reregulated market characterized by strict rules, regulations and high competition did not generate additional value for firms in the short term. My study reveals that value creation is hampered in situations where rules and regulations significantly affect firms' ability to reconfigure their value constellations. The key practical implication is an alternative perspective on fundamental aspects of the reregulation and on how policy-makers may impede firm performance and the intended creation of new value, not only for firms but for society as a whole.
Abstract:
Innovation is a strategic necessity for the survival of today's organizations. The wide recognition of innovation as a competitive necessity, particularly in dynamic market environments, makes it an evergreen domain for research. This dissertation deals with innovation in small Information Technology (IT) firms in India. The IT industry in India has been a phenomenal success story of the last three decades, and is today facing a crucial phase in its history, characterized by the need for fundamental changes in strategies, driven by innovation. This study, while motivated by the dynamics of changing times, importantly addresses the research gap on small-firm innovation in Indian IT. The study addresses three main objectives: (a) drivers of innovation in small IT firms in India; (b) impact of innovation on firm performance; and (c) variation in the extent of innovation adoption in small firms. Product and process innovation were identified as the two most contextually relevant types of innovation for small IT firms. The antecedents of innovation were identified as Intellectual Capital, Creative Capability, Top Management Support, Organization Learning Capability, Customer Involvement, External Networking and Employee Involvement. The survey method was adopted for data collection, and the study unit was the firm. Surveys were conducted in 2014 across five South Indian cities. A small firm was defined as one with 10-499 employees. Responses from 205 firms were chosen for analysis. Rigorous statistical analysis was done to generate meaningful insights. The set of drivers of product innovation (Intellectual Capital, Creative Capability, Top Management Support, Customer Involvement, External Networking, and Employee Involvement) differed from that of process innovation (Creative Capability, Organization Learning Capability, External Networking, and Employee Involvement). Both product and process innovation had a strong impact on firm performance. Firms that adopted a combination of product innovation and process innovation had the highest levels of firm performance. Product innovation and process innovation fully mediated the relationship between all seven antecedents and firm performance. The results of this study have several important theoretical and practical implications. To the best of the researcher's knowledge, this is the first time that an empirical study of firm-level innovation of this kind has been undertaken in India. A measurement model for product and process innovation was developed, and the drivers of innovation were established statistically. Customer Involvement, External Networking and Employee Involvement are elements of Open Innovation; all three had strong associations with product innovation, and the latter two had strong associations with process innovation. The results showed that the proclivity for Open Innovation is healthy in the Indian context. Practical implications are outlined concerning how firms can organize themselves for innovation, the human talent for innovation, and the right culture for innovation and open innovation. While some specific examples of possible future studies have been recommended, the researcher believes that the study provides numerous opportunities to further this line of enquiry.
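A minimal sketch of how a full-mediation claim of this kind is commonly tested, via a bootstrapped product-of-coefficients indirect effect (this is an assumed illustration, not the author's exact procedure; variable names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def indirect_effect_ci(x, m, y, n_boot=2000):
    """95% bootstrap CI for the indirect effect a*b in the simple
    mediation model x -> m -> y; mediation is supported when the
    interval excludes zero."""
    n, est = len(x), []
    for _ in range(n_boot):
        i = rng.integers(0, n, n)
        xi, mi, yi = x[i], m[i], y[i]
        a = np.polyfit(xi, mi, 1)[0]                  # path x -> m
        X = np.column_stack([np.ones(n), mi, xi])     # y ~ m + x
        b = np.linalg.lstsq(X, yi, rcond=None)[0][1]  # path m -> y given x
        est.append(a * b)
    return np.percentile(est, [2.5, 97.5])
```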
Abstract:
Currently, no standard mix design procedure is available for CIR-emulsion in Iowa. The CIR-foam mix design process developed during the previous phase was applied to CIR-emulsion mixtures with varying emulsified asphalt contents. Dynamic modulus, dynamic creep, static creep and raveling tests were conducted to evaluate the short- and long-term performance of CIR-emulsion mixtures at various testing temperatures and loading conditions. A potential benefit of this research is a better understanding of CIR-emulsion material properties in comparison with those of CIR-foam material, which would allow the selection of the most appropriate CIR technology and of the type and amount of the optimum stabilization material. Dynamic modulus, flow number and flow time of CIR-emulsion mixtures using CSS-1h were generally higher than those using HFMS-2p. Flow number and flow time of CIR-emulsion using RAP materials from Story County were higher than those using RAP from Clayton County. Flow number and flow time of CIR-emulsion with 0.5% emulsified asphalt were higher than those with 1.0% or 1.5%. Raveling loss of CIR-emulsion with 1.5% emulsified asphalt was significantly less than that with 0.5% or 1.0%. Test results in terms of dynamic modulus, flow number, flow time and raveling loss of CIR-foam mixtures were generally better than those of CIR-emulsion mixtures. Given the limited RAP sources used in this study, it is recommended that the CIR-emulsion mix design procedure be validated against several RAP sources and emulsion types.
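For reference, the dynamic modulus reported here is the standard ratio of peak stress to peak strain under sinusoidal loading, with the phase lag distinguishing elastic from viscous response:

```latex
|E^*| = \frac{\sigma_0}{\varepsilon_0}, \qquad
\phi = \frac{t_{\mathrm{lag}}}{t_{\mathrm{period}}} \times 360^{\circ}
```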
Abstract:
A large class of computational problems is characterised by frequent synchronisation and computational requirements that change as a function of time. When such a problem is solved on a message-passing multiprocessor machine [5], the combination of these characteristics leads to system performance that deteriorates over time. As the communication performance of parallel hardware steadily improves, load balance becomes a dominant factor in obtaining high parallel efficiency. Performance can be improved by periodic redistribution of the computational load; however, redistribution can sometimes be very costly. We study the issue of deciding when to invoke a global load re-balancing mechanism. Such a decision policy must actively weigh the costs of remapping against the performance benefits, and should be general enough to apply automatically to a wide range of computations. This paper discusses a generic strategy for Dynamic Load Balancing (DLB) in unstructured mesh computational mechanics applications. The strategy is intended to handle varying levels of load change throughout the run. The major issues involved in a generic dynamic load balancing scheme are investigated, together with techniques to automate the implementation of a dynamic load balancing mechanism within the Computer Aided Parallelisation Tools (CAPTools) environment, a semi-automatic tool for the parallelisation of mesh-based FORTRAN codes.
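A minimal sketch of such a decision policy, assuming an accumulated-loss heuristic: remap when the time lost to imbalance since the last remap exceeds the cost of remapping. All names and measured quantities are illustrative:

```python
def should_remap(step_times, remap_cost, loss_since_remap):
    """Decide whether a global remap pays for itself.

    step_times       -- measured per-processor times for the latest step
    remap_cost       -- estimated wall-clock cost of one remapping
    loss_since_remap -- running list of per-step imbalance losses (mutated)
    """
    imbalance_loss = max(step_times) - sum(step_times) / len(step_times)
    loss_since_remap.append(imbalance_loss)
    if sum(loss_since_remap) > remap_cost:
        loss_since_remap.clear()   # reset the accumulator after remapping
        return True
    return False
```

In use, the solver would call this once per synchronisation step and trigger its repartitioner whenever the function returns True.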
Abstract:
Introduction: Judo is a sport involving a wide variety of movements, actions and physical abilities, among them postural control, balance, flexibility and strength. Among the body regions most affected in judo practice, the knee has one of the highest injury incidences. The aim of this study was to evaluate the effects of applying Dynamic Tape (DT), a biomechanical tape, on quadriceps function in male judo athletes with non-specific knee pain, in terms of balance, strength, flexibility and pain. Methods: The sample consisted of 37 individuals, each tested first without Dynamic Tape and subsequently with Dynamic Tape. The tests applied were the Standing Stork Test (SST), the Y Balance Test (YBT), the Four Square Step Test (FSST), the Single Leg Hop Test (SLHT), lower-limb flexion and extension tests, and a numeric pain rating scale administered at the end of all tests. Results: No significant difference was observed for the SST (p=0.6794), but the YBT, SLHT, flexion and extension tests and pain scale (p<0.0001), as well as the FSST (p=0.0026), showed statistically significant differences between the taped and untaped conditions, with the application of DT producing positive effects on athlete performance. Conclusion: The application of DT did not significantly improve static balance, but it did influence semi-dynamic and dynamic balance, flexibility and pain.
Abstract:
In many areas of simulation, a crucial component of efficient numerical computation is the use of solution-driven adaptive features: locally adapted meshing or re-meshing, and dynamically changing computational tasks. The full advantages of high-performance computing (HPC) technology will therefore only be exploited when efficient parallel adaptive solvers can be realised. The resulting requirement for HPC software is dynamic load balancing, which for many mesh-based applications means dynamic mesh re-partitioning. The DRAMA project has been initiated to address this issue, with a particular focus on the requirements of industrial Finite Element codes, although codes using Finite Volume formulations will also be able to make use of the project results.
Abstract:
As the complexity of parallel applications increases, the performance limitations resulting from computational load imbalance become dominant. Mapping the problem space to the processors of a parallel machine in a manner that balances the workload of each processor will typically reduce the run-time. In many cases the computation time required for a given calculation cannot be predetermined, even at run-time, and so a static partitioning of the problem yields poor performance. For problems in which the computational load across the discretisation is dynamic and inhomogeneous, for example multi-physics problems involving fluid and solid mechanics with phase changes, the workload of a static subdomain will change over the course of the computation and cannot be estimated beforehand. For such applications the mapping of load to processors must change dynamically at run-time in order to maintain reasonable efficiency. The issues of dynamic load balancing are examined in the context of PHYSICA, a three-dimensional unstructured-mesh multi-physics continuum mechanics modelling code.
Abstract:
This paper presents a new dynamic load balancing technique for structured mesh computational mechanics codes in which the processor partition range limits of just one of the partitioned dimensions use non-coincident limits, as opposed to using coincident limits in all of the partitioned dimensions. The partition range limits are 'staggered', allowing greater flexibility in obtaining a balanced load distribution than when the limits are changed 'globally', as a load increase or decrease on one processor no longer restricts the load decrease or increase on a neighbouring processor. The automatic implementation of this 'staggered' load balancing strategy within an existing parallel code is presented in this paper, along with some preliminary results.
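A hedged sketch of the idea, assuming a 2-D structured mesh with a per-cell load estimate: rows are first split into processor groups, then each row group chooses its own column split points, so the column limits need not coincide across groups. All names are illustrative:

```python
import numpy as np

def split_points(weights, parts):
    """Indices splitting `weights` into `parts` contiguous chunks of
    roughly equal total weight (greedy cut of the cumulative sum)."""
    cum = np.cumsum(weights)
    targets = cum[-1] * np.arange(1, parts) / parts
    return np.searchsorted(cum, targets).tolist()

def staggered_partition(load, row_parts, col_parts):
    """Partition a 2-D load array with 'staggered' column limits:
    each row group picks its own, non-coincident column cuts."""
    row_cuts = split_points(load.sum(axis=1), row_parts)
    bounds = []
    for r0, r1 in zip([0] + row_cuts, row_cuts + [load.shape[0]]):
        col_cuts = split_points(load[r0:r1].sum(axis=0), col_parts)
        bounds.append(((r0, r1), col_cuts))
    return bounds
```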
Abstract:
This paper describes two new techniques designed to enhance the performance of fire field modelling software. The two techniques are "group solvers" and automated dynamic control of the solution process, both of which are currently under development within the SMARTFIRE Computational Fluid Dynamics environment. The "group solver" is a derivation of common solver techniques used to obtain numerical solutions to the algebraic equations associated with fire field modelling. The purpose of "group solvers" is to reduce the computational overheads associated with traditional numerical solvers typically used in fire field modelling applications. In an example, discussed in this paper, the group solver is shown to provide a 37% saving in computational time compared with a traditional solver. The second technique is the automated dynamic control of the solution process, which is achieved through the use of artificial intelligence techniques. This is designed to improve the convergence capabilities of the software while further decreasing the computational overheads. The technique automatically controls solver relaxation using an integrated production rule engine with a blackboard to monitor and implement the required control changes during solution processing. Initial results for a two-dimensional fire simulation are presented that demonstrate the potential for considerable savings in simulation run-times when compared with control sets from various sources. Furthermore, the results demonstrate the potential for enhanced solution reliability due to obtaining acceptable convergence within each time step, unlike some of the comparison simulations.
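A minimal sketch of the kind of rule-based relaxation control described above; the rules and thresholds here are illustrative assumptions, not SMARTFIRE's actual production rules:

```python
def adjust_relaxation(residuals, relax, lo=0.2, hi=0.95):
    """Toy production rules driven by the recent residual history:
    damp the solver when residuals oscillate, accelerate it when
    they fall steadily."""
    if len(residuals) < 3:
        return relax
    r1, r2, r3 = residuals[-3:]
    if (r2 < r1 and r3 > r2) or (r2 > r1 and r3 < r2):  # oscillation
        return max(lo, relax * 0.9)
    if r3 < r2 < r1:                                    # steady decrease
        return min(hi, relax * 1.05)
    return relax
```

A monitoring loop would append each equation's residual after every sweep and feed the history back through rules like these, mimicking the blackboard-and-rule-engine arrangement described in the paper.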
Abstract:
Deployment of low-power basestations within cellular networks can potentially increase both capacity and coverage. However, such deployments require efficient resource allocation schemes for managing interference from the low-power and macro basestations that are located within each other's transmission range. In this dissertation, we propose novel and efficient dynamic resource allocation algorithms in the frequency, time and space domains. We show that the proposed algorithms perform better than the current state-of-the-art resource management algorithms. In the first part of the dissertation, we propose an interference management solution in the frequency domain. We introduce a distributed frequency allocation scheme that shares frequencies between macro and low-power pico basestations, and guarantees a minimum average throughput to users. The scheme seeks to minimize the total number of frequencies needed to honor the minimum throughput requirements. We evaluate our scheme using detailed simulations and show that it performs on par with the centralized optimum allocation. Moreover, our proposed scheme outperforms a static frequency reuse scheme and the centralized optimal partitioning between the macro and pico basestations. In the second part of the dissertation, we propose a time domain solution to the interference problem. We consider the problem of maximizing the alpha-fairness utility over heterogeneous wireless networks (HetNets) by jointly optimizing user association, wherein each user is associated with any one transmission point (TP) in the network, and the activation fractions of all TPs. The activation fraction of a TP is the fraction of the frame duration for which it is active, and together these fractions influence the interference seen in the network. To address this joint optimization problem, which we show is NP-hard, we propose an alternating optimization based approach wherein the activation fractions and the user association are optimized in an alternating manner. The subproblem of determining the optimal activation fractions is solved using a provably convergent auxiliary function method, while the subproblem of determining the user association is solved via a simple combinatorial algorithm. Meaningful performance guarantees are derived in either case. Simulation results over a practical HetNet topology reveal the superior performance of the proposed algorithms and underscore the significant benefits of the joint optimization. In the final part of the dissertation, we propose a space domain solution to the interference problem. We consider the problem of maximizing system utility by optimizing over the set of user and TP pairs in each subframe, where each user can be served by multiple TPs. To address this optimization problem, which is NP-hard, we propose a solution scheme based on a difference-of-submodular-functions optimization approach. We evaluate our scheme using detailed simulations and show that it performs on par with a much more computationally demanding difference-of-convex-functions optimization scheme. Moreover, the proposed scheme performs within a reasonable percentage of the optimal solution. We further demonstrate the advantage of the proposed scheme by studying its performance under variation in different network topology parameters.
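For reference, the alpha-fair utility maximised in the second part is the standard parametrised family, where α = 0 recovers sum throughput, α = 1 proportional fairness, and α → ∞ max-min fairness:

```latex
U_\alpha(x) =
\begin{cases}
\dfrac{x^{1-\alpha}}{1-\alpha}, & \alpha \ge 0,\ \alpha \neq 1,\\[4pt]
\log x, & \alpha = 1.
\end{cases}
```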
Abstract:
In the half-duplex relay channel applying the decode-and-forward protocol, the relay introduces energy over random time intervals into the channel as observed at the destination. Consequently, during simulation, the average signal power seen at the destination becomes known only at run-time. Therefore, in order to obtain specific performance measures at the signal-to-noise ratio (SNR) of interest, strategies are required to adjust the noise variance during simulation run-time. It is necessary that these strategies result in the same performance as measured under real-world conditions. This paper introduces three noise power allocation strategies and demonstrates their applicability using numerical and simulation results.
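A hedged sketch of one such strategy (an assumed illustration, not necessarily one of the paper's three): estimate the average received signal power at run-time and scale the noise variance so the realised SNR matches the target:

```python
import numpy as np

def noise_std(received, target_snr_db):
    """Noise standard deviation per real dimension so that measured signal
    power over noise power equals the target SNR (complex baseband)."""
    sig_power = np.mean(np.abs(received) ** 2)      # known only at run-time
    noise_power = sig_power / 10.0 ** (target_snr_db / 10.0)
    return np.sqrt(noise_power / 2.0)

# usage sketch: add complex AWGN scaled to a 10 dB target
# sigma = noise_std(y, 10.0)
# y_noisy = y + sigma * (np.random.randn(*y.shape) + 1j * np.random.randn(*y.shape))
```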
Abstract:
The aim of this thesis was threefold: firstly, to compare current player tracking technology in a single game of soccer; secondly, to investigate the running requirements of elite women's soccer, in particular the use and application of athlete tracking devices; and finally, to examine how game style can be quantified and defined. Study One compared four different match analysis systems commonly used in both research and applied settings: video-based time-motion analysis, a semi-automated multiple-camera-based system, and two commercially available Global Positioning System (GPS) based player tracking systems operating at 1 Hertz (Hz) and 5 Hz respectively. A comparison was made between the systems when recording the same game. Total distance covered during the match ranged from 10 830 ± 770 m (semi-automated multiple-camera-based system) to 9 510 ± 740 m (video-based time-motion analysis). At running speeds categorised as high-intensity running (>15 km·h⁻¹), the semi-automated multiple-camera-based system reported the highest distance, 2 650 ± 530 m, while video-based time-motion analysis reported the least distance covered, 1 610 ± 370 m. At speeds considered to be sprinting (>20 km·h⁻¹), video-based time-motion analysis reported the highest value (420 ± 170 m) and the 1 Hz GPS units the lowest value (230 ± 160 m). These results demonstrate that there are differences in the determination of absolute distances, and that comparisons of results between match analysis systems should be made with caution. Currently, there is no criterion measure for these match analysis methods, so it was not possible to determine whether one system was more accurate than another. Study Two applied player-tracking technology (GPS) to measure activity profiles and determine the physical demands of Australian international-level women soccer players. In four international women's soccer games, data were collected on a total of 15 Australian players using a 5 Hz GPS-based athlete tracking device. Results indicated that Australian women soccer players covered 9 140 ± 1 030 m during 90 min of play. This total distance was less than the 10 300 m reportedly covered by female soccer players in the Danish First Division. However, there was no apparent difference in estimated maximal aerobic capacity (V̇O₂max), as measured by multi-stage shuttle tests, between these studies. This study suggests that contextual information, including the "game style" of both the team and the opposition, may influence physical performance in games. Study Three examined the effect of the level of the opposition on the physical output of Australian women soccer players. In total, 58 game files from 5 Hz athlete-tracking devices across 13 international matches were collected. These files were analysed to examine relationships between the physical demands, represented by total distance covered, high-intensity running (HIR) and distance covered sprinting, and the level of the opposition, represented by the Fédération Internationale de Football Association (FIFA) ranking at the time of the match. Higher-ranked opponents elicited less high-speed running and greater low-speed activity compared with teams of similar or lower ranking. These results are important to coaches and practitioners in preparing players for international competition, and showed that the physical demands differed depending on the level of the opponents.
The results also highlighted the need for continued research into integrating contextual information in team sports and demonstrated that soccer can be described as having dynamic and interactive systems. The influence of playing strategy, tactics and, consequently, the overall game style was highlighted as playing a significant part in the physical demands of the players. Study Four explored the concept of game style in field sports such as soccer. The aim of this study was to provide an applied framework with suggested metrics for use by coaches, media, practitioners and sports scientists. Based on the findings of Studies 1-3 and a systematic review of the relevant literature, a theoretical framework was developed to better understand how a team's game style could be quantified. Soccer games can be broken into key moments of play, and for each of these moments we categorised metrics that provide insight into success or otherwise, to help quantify and measure different playing styles. This study highlights that, to date, there has been no clear definition of game style in team sports, and as such a novel definition of game style is proposed for use by coaches, sport scientists, performance analysts, the media and the general public. Studies 1-3 outline four common methods of measuring the physical demands of soccer: video-based time-motion analysis, GPS at 1 Hz and at 5 Hz, and semi-automated multiple-camera-based systems. As there are no semi-automated multiple-camera-based systems available in Australia, primarily for cost and logistical reasons, GPS is widely accepted for tracking player movements in team sports in training and competition environments. This research identified that, although there are some limitations, GPS player-tracking technology may be a valuable tool for assessing running demands in soccer players and can subsequently contribute to our understanding of game style. The results also reinforce the differences between methods used to analyse player movement patterns in field sports such as soccer and demonstrate that results from different systems, such as GPS-based athlete tracking devices and semi-automated multiple-camera-based systems, cannot be used interchangeably. Indeed, the magnitude of the measurement differences between methods suggests that significant measurement error is evident, even when the same technology is used at different sampling rates, such as GPS systems measuring at either 1 Hz or 5 Hz. It was also recognised that other factors influence how team sport athletes behave within an interactive system, including the strength of the opposition and their style of play. In turn, these can affect the physical demands on players, which change from game to game, and even within games, depending on these contextual features. Finally, the concept of game style and how it might be measured was examined. Game style was defined as "the characteristic playing pattern demonstrated by a team during games. It will be regularly repeated in specific situational contexts such that measurement of variables reflecting game style will be relatively stable. Variables of importance are player and ball movements, interaction of players, and will generally involve elements of speed, time and space (location)".
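As an illustration of the speed-zone approach used across these studies, a minimal sketch using the thresholds above (HIR > 15 km·h⁻¹, sprinting > 20 km·h⁻¹); the 5 Hz sampling rate and function names are assumptions:

```python
import numpy as np

def distance_by_zone(speeds_kmh, hz=5):
    """Distance (m) per speed zone from GPS speed samples taken at `hz` Hz."""
    s = np.asarray(speeds_kmh, dtype=float)
    d = (s / 3.6) / hz            # metres covered during each sample
    return {
        "total_m":  d.sum(),
        "hir_m":    d[s > 15].sum(),   # high-intensity running
        "sprint_m": d[s > 20].sum(),   # sprinting
    }
```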
Abstract:
The asynchronous polyphase induction motor has been the motor of choice in industrial settings for roughly the past half century, because power electronics can be used to control its output behavior. Before that, the dc motor was widely used because of its easy speed and torque controllability. Two further reasons for the induction motor's dominance are its ruggedness and low cost. The induction motor is a rugged machine because it is brushless and has fewer internal parts that need maintenance or replacement, which also makes it low-cost in comparison with other motors, such as the dc motor. Because of these factors, the induction motor and drive system have been gaining market share in industry, and even in alternative applications such as hybrid electric vehicles and electric vehicles. The subject of this thesis is to ascertain the advantages and disadvantages of various control algorithms and to give recommendations for their use under certain conditions and in distinct applications. Four drives are compared as fairly as possible through their parameter sensitivities, dynamic responses, and steady-state errors. Different switching techniques are used to show that the motor drive is separate from the switching scheme; changing the switching scheme produces entirely different responses for each motor drive.