48 results for "Metrics of management"
Abstract:
The problem of learning from imbalanced data is of critical importance in a large number of application domains and can be a bottleneck for conventional learning methods that assume the data distribution to be balanced. The class imbalance problem arises when one class massively outnumbers the other. If imbalanced data are used directly, the disparity between the majority and minority classes biases machine learning models and produces unreliable outcomes. There has been increasing interest in this research area and a number of algorithms have been developed; however, independent evaluation of these algorithms remains limited. This paper aims to evaluate the performance of five representative data sampling methods, namely SMOTE, ADASYN, BorderlineSMOTE, SMOTETomek and RUSBoost, that deal with class imbalance problems. A comparative study is conducted and the performance of each method is critically analysed in terms of assessment metrics. © 2013 Springer-Verlag.
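As a hedged illustration of how the five sampling methods named above are typically applied (a minimal sketch using the imbalanced-learn package on synthetic data, not the paper's own experimental setup):
```python
# Illustrative sketch only: applies the five sampling methods named above
# to a synthetic imbalanced dataset using the imbalanced-learn package.
from collections import Counter

from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE, ADASYN, BorderlineSMOTE
from imblearn.combine import SMOTETomek
from imblearn.ensemble import RUSBoostClassifier

# Synthetic two-class data with a roughly 9:1 majority/minority ratio (hypothetical).
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

samplers = {
    "SMOTE": SMOTE(random_state=0),
    "ADASYN": ADASYN(random_state=0),
    "BorderlineSMOTE": BorderlineSMOTE(random_state=0),
    "SMOTETomek": SMOTETomek(random_state=0),
}
for name, sampler in samplers.items():
    X_res, y_res = sampler.fit_resample(X, y)
    print(name, Counter(y_res))

# RUSBoost is an ensemble classifier rather than a resampler, so it is
# trained directly on the imbalanced data.
clf = RUSBoostClassifier(random_state=0).fit(X, y)
print("RUSBoost training accuracy:", clf.score(X, y))
```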
Abstract:
Purpose: There is an urgent need to develop diagnostic tests to improve the detection of pathogens causing life-threatening infection (sepsis). SeptiFast is a CE-marked multi-pathogen real-time PCR system capable of detecting DNA sequences of bacteria and fungi present in blood samples within a few hours. We report here a systematic review and meta-analysis of diagnostic accuracy studies of SeptiFast in the setting of suspected sepsis.
Methods: A comprehensive search strategy was developed to identify studies that compared SeptiFast with blood culture in suspected sepsis. Methodological quality was assessed using QUADAS. Heterogeneity of studies was investigated using a coupled forest plot of sensitivity and specificity and a scatter plot in receiver operating characteristic space. The bivariate model method was used to estimate summary sensitivity and specificity.
Results: From 41 phase III diagnostic accuracy studies, summary sensitivity and specificity for SeptiFast compared with blood culture were 0.68 (95% CI 0.63–0.73) and 0.86 (95% CI 0.84–0.89), respectively. Study quality was judged to be variable, with important deficiencies overall in design and reporting that could affect the derived diagnostic accuracy metrics.
Conclusions: SeptiFast appears to have higher specificity than sensitivity, but deficiencies in study quality are likely to render this body of work unreliable. Based on the evidence presented here, it remains difficult to make firm recommendations about the likely clinical utility of SeptiFast in the setting of suspected sepsis.
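For readers unfamiliar with the pooled quantities, the sketch below computes per-study sensitivity and specificity from hypothetical 2x2 counts and a naive weighted average; it only illustrates the metrics themselves, not the bivariate random-effects model used in the meta-analysis:
```python
import numpy as np

# Hypothetical 2x2 counts per study: (TP, FP, FN, TN) against blood culture.
studies = np.array([
    [30, 12, 14, 180],
    [22,  9, 11, 140],
    [45, 20, 21, 300],
])

tp, fp, fn, tn = studies.T
sensitivity = tp / (tp + fn)   # SeptiFast-positive among culture-positive
specificity = tn / (tn + fp)   # SeptiFast-negative among culture-negative

for i, (se, sp) in enumerate(zip(sensitivity, specificity), start=1):
    print(f"study {i}: sensitivity={se:.2f}, specificity={sp:.2f}")

# A naive sample-size-weighted average; the paper instead fits a bivariate
# model that accounts for between-study correlation and heterogeneity.
weights = studies.sum(axis=1)
print("pooled (naive):",
      np.average(sensitivity, weights=weights),
      np.average(specificity, weights=weights))
```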
Abstract:
Data registration refers to a series of techniques for matching or bringing similar objects or datasets into alignment. These techniques enjoy widespread use in a diverse variety of applications, such as video coding, tracking, object and face detection and recognition, surveillance and satellite imaging, medical image analysis and structure from motion. Registration methods are as numerous as their manifold uses, ranging from pixel-level and block- or feature-based methods to Fourier-domain methods.
This book focuses on providing algorithms and techniques for image and video registration together with quality performance metrics. The authors provide various assessment metrics for measuring registration quality alongside analyses of registration techniques, introducing and explaining both familiar and state-of-the-art registration methodologies used in a variety of targeted applications.
Key features:
- Provides a state-of-the-art review of image and video registration techniques, allowing readers to develop an understanding of how well the techniques perform by using specific quality assessment criteria
- Addresses a range of applications from familiar image and video processing domains to satellite and medical imaging among others, enabling readers to discover novel methodologies with utility in their own research
- Discusses quality evaluation metrics for each application domain with an interdisciplinary approach from different research perspectives
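As a minimal sketch of one of the Fourier-domain registration methods mentioned above (phase correlation between two cyclically shifted images), assuming grayscale NumPy arrays of equal size:
```python
import numpy as np

def phase_correlation_shift(ref, moving):
    """Estimate the cyclic (row, col) shift that, applied to `moving` with np.roll,
    re-aligns it with `ref` (classic Fourier-domain phase correlation)."""
    F_ref = np.fft.fft2(ref)
    F_mov = np.fft.fft2(moving)
    cross_power = F_ref * np.conj(F_mov)
    cross_power /= np.abs(cross_power) + 1e-12     # normalise to unit magnitude
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Wrap peaks beyond half the image size back to negative shifts.
    shifts = [p - n if p > n // 2 else p for p, n in zip(peak, ref.shape)]
    return tuple(int(s) for s in shifts)

# Hypothetical usage: a (5, 3) pixel cyclic shift is recovered as (-5, -3),
# i.e. the shift that moves `moving` back onto `ref`.
rng = np.random.default_rng(0)
ref = rng.random((128, 128))
moving = np.roll(ref, shift=(5, 3), axis=(0, 1))
print(phase_correlation_shift(ref, moving))   # (-5, -3)
```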
Abstract:
The motivation for this study was to reduce physics workload relating to patient-specific quality assurance (QA). VMAT plan delivery accuracy was determined from analysis of pre- and on-treatment trajectory log files and phantom-based ionization chamber array measurements. The correlation in this combination of measurements for patient-specific QA was investigated. The relationship between delivery errors and plan complexity was investigated as a potential method to further reduce patient-specific QA workload. Thirty VMAT plans from three treatment sites - prostate only, prostate and pelvic node (PPN), and head and neck (H&N) - were retrospectively analyzed in this work. The 2D fluence delivery reconstructed from pretreatment and on-treatment trajectory log files was compared with the planned fluence using gamma analysis. Pretreatment dose delivery verification was also carried out using gamma analysis of ionization chamber array measurements compared with calculated doses. Pearson correlations were used to explore any relationship between trajectory log file (pretreatment and on-treatment) and ionization chamber array gamma results (pretreatment). Plan complexity was assessed using the MU/arc and the modulation complexity score (MCS), with Pearson correlations used to examine any relationships between complexity metrics and plan delivery accuracy. Trajectory log files were also used to further explore the accuracy of MLC and gantry positions. Pretreatment 1%/1 mm gamma passing rates for trajectory log file analysis were 99.1% (98.7%-99.2%), 99.3% (99.1%-99.5%), and 98.4% (97.3%-98.8%) (median (IQR)) for prostate, PPN, and H&N, respectively, and were significantly correlated with on-treatment trajectory log file gamma results (R = 0.989, p < 0.001). Pretreatment ionization chamber array (2%/2 mm) gamma results were also significantly correlated with on-treatment trajectory log file gamma results (R = 0.623, p < 0.001). Furthermore, all gamma results displayed a significant correlation with MCS (R > 0.57, p < 0.001), but not with MU/arc. Average MLC position and gantry angle errors were 0.001 ± 0.002 mm and 0.025° ± 0.008° over all treatment sites and were not found to affect delivery accuracy. However, variability in MLC speed was found to be directly related to MLC position accuracy. The accuracy of VMAT plan delivery assessed using pretreatment trajectory log file fluence delivery and ionization chamber array measurements was strongly correlated with on-treatment trajectory log file fluence delivery. The strong correlation between trajectory log file and phantom-based gamma results demonstrates potential to reduce our current patient-specific QA. Additionally, insight into MLC and gantry position accuracy through trajectory log file analysis and the strong correlation between gamma analysis results and the MCS could also provide further methodologies to optimize both the VMAT planning and QA processes.
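As a simplified, hedged illustration of the gamma analysis referred to above (a 1D global gamma index on hypothetical dose profiles, not the 2D fluence comparison performed in the study):
```python
import numpy as np

def gamma_index_1d(ref_pos, ref_dose, eval_pos, eval_dose,
                   dose_tol=0.01, dist_tol=1.0):
    """Global 1D gamma index: for each evaluated point, the minimum combined
    dose-difference / distance-to-agreement measure over the reference profile.
    dose_tol is a fraction of the reference maximum (e.g. 0.01 for 1%);
    dist_tol is in the same units as the positions (e.g. 1 mm)."""
    dose_norm = dose_tol * ref_dose.max()
    gammas = np.empty_like(eval_dose)
    for i, (xe, de) in enumerate(zip(eval_pos, eval_dose)):
        dist2 = ((ref_pos - xe) / dist_tol) ** 2
        dose2 = ((ref_dose - de) / dose_norm) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return gammas

# Hypothetical profiles: a planned Gaussian dose peak and a slightly shifted delivery.
x = np.linspace(-20, 20, 401)                     # positions in mm
planned = np.exp(-x**2 / 50.0)
delivered = np.exp(-(x - 0.3)**2 / 50.0)          # 0.3 mm delivery shift

gamma = gamma_index_1d(x, planned, x, delivered, dose_tol=0.01, dist_tol=1.0)
print("1%/1 mm passing rate:", 100.0 * np.mean(gamma <= 1.0), "%")
```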
Abstract:
The end of Dennard scaling has pushed power consumption into a first-order concern for current systems, on par with performance. As a result, near-threshold voltage computing (NTVC) has been proposed as a potential means to tackle the limited cooling capacity of CMOS technology. Hardware operating in the NTV regime consumes significantly less power, at the cost of lower frequency and thus reduced performance, as well as increased error rates. In this paper, we investigate whether a low-power system-on-chip, consisting of ARM's asymmetric big.LITTLE technology, can be an alternative to conventional high-performance multicore processors in terms of power/energy in an unreliable scenario. For our study, we use the Conjugate Gradient solver, an algorithm representative of the computations performed by a large range of scientific and engineering codes.
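For reference, a minimal NumPy sketch of the Conjugate Gradient method used as the benchmark kernel (a textbook dense version, not the authors' optimized implementation):
```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A (textbook CG)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Hypothetical SPD system for demonstration.
rng = np.random.default_rng(0)
M = rng.random((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.random(50)
x = conjugate_gradient(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```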
Abstract:
Re-imagining of the aerial transportation system has become increasingly important as the need for significant environmental and economic efficiency gains has become ever more prevalent. A number of studies have highlighted the benefits of adopting air-to-air refuelling within civil aviation. It also opens up the potential for increased operational flexibility through smaller aircraft, shifting emphasis away from the traditional hub-and-spoke method of operation towards more flexible point-to-point operations. It is proposed here that one technology can act as an enabler for the other, realising benefits that neither can achieve as a standalone. The impact of an air-to-air refuelling enabled point-to-point system is discussed, and the effect on economic and environmental cost metrics relative to traditional operations is evaluated. An idealised airport configuration study shows the difference in fuel burn for point-to-point networks to vary from -23% to +28% relative to hub-and-spoke, depending on the configuration. The sensitivity of the concepts is further explored in a second study based on real airport configurations. The complex effect of the choice of a point-to-point or hub-and-spoke system on fuel burn, operating cost and revenue potential is highlighted. Fuel burn savings of 15% can be achieved with AAR over traditional refuelling operations, with point-to-point networks increasing the available seat miles (by approximately 20%) without a proportional increase in operating cost or fuel.
Abstract:
As a post-CMOS technology, the incipient Quantum-dot Cellular Automata (QCA) technology has various advantages. A key aspect which makes it highly desirable is its low power dissipation. One method used to analyse power dissipation in QCA circuits is bit erasure analysis. This method has been applied to analyse previously proposed QCA binary adders. However, a number of improved QCA adders have been proposed more recently that have only been evaluated in terms of area and speed. As the three key performance metrics for QCA circuits are speed, area and power, in this paper a bit erasure analysis of these adders will be presented to determine their power dissipation. The adders to be analysed are the Carry Flow Adder (CFA), the Brent-Kung Adder (B-K), the Ladner-Fischer Adder (L-F) and a more recently developed area-delay efficient adder. This research will allow for a more comprehensive comparison between the different QCA adder proposals. To the best of the authors' knowledge, this is the first time power dissipation analysis has been carried out on these adders.
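As a hedged aside on the physics behind bit erasure analysis: each irreversibly erased bit dissipates at least k_B·T·ln 2 of energy (Landauer's bound), so a lower-bound estimate can be sketched as below, with the erased-bit count per addition treated as a hypothetical input rather than a figure taken from the adders being compared:
```python
import math

BOLTZMANN = 1.380649e-23  # J/K

def landauer_lower_bound(bits_erased, temperature_kelvin=300.0):
    """Minimum energy (J) dissipated by irreversibly erasing `bits_erased` bits."""
    return bits_erased * BOLTZMANN * temperature_kelvin * math.log(2)

# Hypothetical example: a 32-bit adder design assumed to erase 2 bits per full-adder cell.
n_bits = 32
bits_erased_per_add = 2 * n_bits
print(f"Landauer lower bound at 300 K: "
      f"{landauer_lower_bound(bits_erased_per_add):.3e} J per addition")
```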
Abstract:
Directional modulation (DM) is an emerging technology for securing wireless communications at the physical layer. Unlike conventional key-based cryptographic methods and key-based physical layer security approaches, this promising technology locks information signals without any requirement for keys. The locked information can only be fully recovered by the legitimate receiver(s) known a priori by the DM transmitters. This paper reviews the origin of the DM concept and, particularly, its development in recent years, including its mathematical model, assessment metrics, synthesis approaches, physical realizations, and finally its potential aspects for future studies.
Abstract:
AIMS: To determine the incidence and predictive factors of rib fracture and chest wall pain after lung stereotactic ablative radiotherapy (SABR).
MATERIALS AND METHODS: Patients were treated with lung SABR of 48-60 Gy in four to five fractions. The treatment plan and follow-up computed tomography scans of 289 tumours in 239 patients were reviewed. Dose-volume histogram (DVH) metrics and clinical factors were evaluated as potential predictors of chest wall toxicity.
RESULTS: The median follow-up was 21.0 months (range 6.2-52.1). Seventeen per cent (50/289) developed a rib fracture, of which 44% (22/50) were symptomatic; the median time to fracture was 16.4 months. On univariate analysis, female gender, osteoporosis, tumours adjacent (within 5 mm) to the chest wall and all of the chest wall DVH metrics predicted for rib fracture, but only tumour location adjacent to the chest wall remained significant in the multivariate model (P < 0.01). The 2 year fracture-free probability for those adjacent to the chest wall was 65.6%. Among those tumours adjacent to the chest wall, only osteoporosis (P = 0.02) predicted for fracture, whereas none of the chest wall DVH metrics were predictive. Eight per cent (24/289) experienced chest wall pain without fracture.
CONCLUSIONS: None of the chest wall DVH metrics independently predicted for SABR-induced rib fracture when tumour location is taken into account. Patients with tumours adjacent (within 5 mm) to the chest wall are at greater risk of rib fracture after lung SABR, and among these, an additional risk was observed in osteoporotic patients.
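As an illustrative sketch of the kind of chest wall DVH metrics referred to above (for example the maximum dose and the volume receiving at least 30 Gy), assuming a flat array of per-voxel doses and a known voxel volume; the specific metrics and thresholds used in the study may differ:
```python
import numpy as np

def dvh_metrics(voxel_doses_gy, voxel_volume_cc, dose_threshold_gy=30.0):
    """Simple DVH summaries for one structure: Dmax and V(threshold) in cc."""
    d_max = float(np.max(voxel_doses_gy))
    v_thresh_cc = float(np.sum(voxel_doses_gy >= dose_threshold_gy) * voxel_volume_cc)
    return {"Dmax_Gy": d_max, f"V{dose_threshold_gy:g}Gy_cc": v_thresh_cc}

# Hypothetical chest wall dose distribution (Gy) on a 2 mm voxel grid (0.008 cc/voxel).
rng = np.random.default_rng(0)
doses = rng.gamma(shape=2.0, scale=8.0, size=50_000)
print(dvh_metrics(doses, voxel_volume_cc=0.008, dose_threshold_gy=30.0))
```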
Abstract:
We present a rigorous methodology and new metrics for fair comparison of server and microserver platforms. Deploying our methodology and metrics, we compare a microserver with ARM cores against two servers with x86 cores running the same real-time financial analytics workload. We define workload-specific but platform-independent performance metrics for platform comparison, targeting both datacenter operators and end users. Our methodology establishes that a server based on the Xeon Phi co-processor delivers the highest performance and energy efficiency. However, by scaling out energy-efficient microservers, we achieve competitive or better energy efficiency than a power-equivalent server with two Sandy Bridge sockets, despite the microserver's slower cores. Using a new iso-QoS metric, we find that the ARM microserver scales enough to meet market throughput demand, that is, a 100% QoS in terms of timely option pricing, with as little as 55% of the energy consumed by the Sandy Bridge server.
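A hedged sketch of the flavour of workload-specific, platform-independent metrics described (throughput per joule and an iso-QoS check); the function names and figures below are hypothetical placeholders, not the paper's published measurements:
```python
def energy_efficiency(options_priced, energy_joules):
    """Workload-specific efficiency metric: options priced per joule consumed."""
    return options_priced / energy_joules

def meets_iso_qos(options_on_time, options_demanded, qos_target=1.0):
    """Iso-QoS check: was the demanded option volume priced on time at the target QoS?"""
    return (options_on_time / options_demanded) >= qos_target

# Hypothetical measurements for one pricing window (placeholder numbers only).
demand = 1_000_000
platforms = {
    "ARM microserver cluster": {"on_time": 1_000_000, "energy_j": 5_500.0},
    "two-socket Sandy Bridge server": {"on_time": 1_000_000, "energy_j": 10_000.0},
}
for name, m in platforms.items():
    print(f"{name}: {energy_efficiency(m['on_time'], m['energy_j']):.1f} options/J, "
          f"100% QoS met: {meets_iso_qos(m['on_time'], demand)}")
```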
Abstract:
The end of Dennard scaling has promoted low power consumption into a first-order concern for computing systems. However, conventional power conservation schemes such as voltage and frequency scaling are reaching their limits when used in performance-constrained environments. New technologies are required to break the power wall while sustaining performance on future processors. Low-power embedded processors and near-threshold voltage computing (NTVC) have been proposed as viable solutions to tackle the power wall in future computing systems. Unfortunately, these technologies may also compromise per-core performance and, in the case of NTVC, reliability. These limitations would make them unsuitable for HPC systems and datacenters. In order to demonstrate that emerging low-power processing technologies can effectively replace conventional technologies, this study relies on ARM's big.LITTLE processors as both an actual and emulation platform, and state-of-the-art implementations of the CG solver. For NTVC in particular, the paper describes how efficient algorithm-based fault tolerance schemes preserve the power and energy benefits of very low voltage operation.
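As a hedged sketch of one common algorithm-based fault tolerance idea applicable to CG, a checksum test on the matrix-vector product (the schemes used in the study may differ):
```python
import numpy as np

def checked_matvec(A, x, col_checksum, rtol=1e-8):
    """Matrix-vector product with an ABFT-style checksum test.
    col_checksum holds the precomputed column sums of A, so that
    sum(A @ x) should equal col_checksum @ x in exact arithmetic."""
    y = A @ x
    expected = col_checksum @ x
    if not np.isclose(y.sum(), expected, rtol=rtol):
        raise RuntimeError("checksum mismatch: possible silent fault, recompute")
    return y

# Hypothetical usage inside a CG iteration.
rng = np.random.default_rng(0)
A = rng.random((100, 100))
A = A @ A.T + 100 * np.eye(100)      # SPD test matrix
col_checksum = A.sum(axis=0)         # computed once, reused every iteration
p = rng.random(100)
Ap = checked_matvec(A, p, col_checksum)
print("checksum-verified matvec norm:", np.linalg.norm(Ap))
```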
Abstract:
Bridge construction responds to the need for environmentally friendly design of motorways and facilitates the passage through sensitive natural areas and the bypassing of urban areas. However, according to numerous research studies, bridge construction presents substantial budget overruns. Therefore, it is necessary early in the planning process for the decision makers to have reliable estimates of the final cost based on previously constructed projects. At the same time, the current European financial crisis reduces the available capital for investments and financial institutions are even less willing to finance transportation infrastructure. Consequently, it is even more necessary today to estimate the budget of high-cost construction projects (such as road bridges) with reasonable accuracy, in order for the state funds to be invested with lower risk and the projects to be designed with the highest possible efficiency. In this paper, a Bill-of-Quantities (BoQ) estimation tool for road bridges is developed in order to support the decisions made at the preliminary planning and design stages of highways. Specifically, a Feed-Forward Artificial Neural Network (ANN) with a hidden layer of 10 neurons is trained to predict the superstructure material quantities (concrete, pre-stressed steel and reinforcing steel) using the width of the deck, the adjusted length of span or cantilever and the type of the bridge as input variables. The training dataset includes actual data from 68 recently constructed concrete motorway bridges in Greece. According to the relevant metrics, the developed model captures very well the complex interrelations in the dataset and demonstrates strong generalisation capability. Furthermore, it outperforms the linear regression models developed for the same dataset. Therefore, the proposed cost estimation model stands as a useful and reliable tool for the construction industry as it enables planners to reach informed decisions for technical and economic planning of concrete bridge projects from their early implementation stages.
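A minimal sketch of a comparable feed-forward network with a single hidden layer of 10 neurons, built with scikit-learn; the input features mirror those listed above, but the data and relations are synthetic placeholders, not the Greek bridge dataset:
```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical inputs: deck width (m), adjusted span/cantilever length (m),
# bridge type (coded); targets: concrete, pre-stressed steel, reinforcing steel.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.uniform(10, 15, 200),        # deck width
    rng.uniform(20, 60, 200),        # adjusted length
    rng.integers(0, 3, 200),         # bridge type code
])
y = np.column_stack([
    5.0 * X[:, 0] * X[:, 1],         # concrete, toy relation
    0.02 * X[:, 0] * X[:, 1],        # pre-stressed steel, toy relation
    0.08 * X[:, 0] * X[:, 1],        # reinforcing steel, toy relation
]) * rng.normal(1.0, 0.05, (200, 3))

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
)
model.fit(X, y)
print("R^2 on training data:", model.score(X, y))
```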
Abstract:
This paper addresses the representation of landscape complexity in stated preferences research. It integrates landscape ecology and landscape economics and conducts the landscape analysis in a three-dimensional space to provide ecologically meaningful quantitative landscape indicators that are used as variables for the monetary valuation of landscape in a stated preferences study. Expected heterogeneity in taste intensity across respondents is addressed with a mixed logit model in willingness-to-pay space. The results suggest that the integration of landscape ecology metrics in a stated preferences model provides useful insights for valuing landscape and landscape changes.
Abstract:
This paper discusses the use of primary frequency response metrics to assess the dynamics of frequency disturbance data in the presence of high system non-synchronous penetration (SNSP) and system inertia variation. The Irish power system has been chosen as a study case as it experiences a significant level of SNSP from wind turbine generation and imported active power from HVDC interconnectors. Several recorded actual frequency disturbances were used in the analysis. These data were measured and collected from the Irish power system from October 2010 to June 2013. The paper shows the impact of system inertia and SNSP variation on the performance of primary frequency response metrics, namely: nadir frequency, rate of change of frequency, and inertial and primary frequency response.
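As a minimal sketch of how two of the metrics named above, nadir frequency and rate of change of frequency, can be extracted from a recorded frequency trace; the trace below is synthetic, not Irish system data:
```python
import numpy as np

def nadir_and_rocof(time_s, freq_hz, rocof_window_s=0.5):
    """Return the frequency nadir (Hz) and the most negative RoCoF (Hz/s)
    estimated over a sliding window of rocof_window_s."""
    nadir = float(np.min(freq_hz))
    dt = time_s[1] - time_s[0]
    window = max(1, int(round(rocof_window_s / dt)))
    rocof = (freq_hz[window:] - freq_hz[:-window]) / (window * dt)
    return nadir, float(np.min(rocof))

# Synthetic post-contingency trace: 50 Hz system, loss of infeed at t = 1 s.
t = np.arange(0, 20, 0.02)
f = 50.0 - 0.4 * (1 - np.exp(-(t - 1.0) / 3.0)) * np.exp(-(t - 1.0) / 8.0) * (t > 1.0)
nadir, rocof = nadir_and_rocof(t, f)
print(f"nadir = {nadir:.3f} Hz, steepest RoCoF = {rocof:.3f} Hz/s")
```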
Abstract:
This paper presents initial results of evaluating the suitability of the conventional two-tone CW passive intermodulation (PIM) test for characterizing modulated signal distortion by passive nonlinearities in base station antennas and the RF front-end. A comprehensive analysis of analog and digitally modulated waveforms in transmission lines with weak distributed nonlinearity has been performed using harmonic balance analysis and X-parameters in the Advanced Design System (ADS) simulator. The nonlinear distortion metrics used in the conventional two-tone CW PIM test have been compared with the respective spectral metrics applied to the modulated waveforms, such as the adjacent channel power ratio (ACPR) and error vector magnitude (EVM). It is shown that the results of two-tone CW PIM tests are consistent with the metrics used to assess the signal integrity of both analog and digitally modulated waveforms.
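For readers unfamiliar with the two modulated-waveform metrics mentioned, ACPR and EVM, the sketch below applies their usual definitions to a hypothetical band-limited baseband signal passed through a weak cubic nonlinearity as a stand-in for PIM-like distortion; the sample rate, bandwidths and channel offset are arbitrary:
```python
import numpy as np
from scipy.signal import welch, firwin, lfilter

def acpr_db(iq, fs, chan_bw, adj_offset):
    """Adjacent channel power ratio: adjacent-channel power over main-channel power, in dB."""
    f, psd = welch(iq, fs=fs, nperseg=4096, return_onesided=False)
    f, psd = np.fft.fftshift(f), np.fft.fftshift(psd)   # sort frequencies ascending
    df = f[1] - f[0]
    def band_power(center):
        return np.sum(psd[np.abs(f - center) <= chan_bw / 2]) * df
    return 10 * np.log10(band_power(adj_offset) / band_power(0.0))

def evm_percent(measured, reference):
    """Waveform-level RMS error vector magnitude relative to the reference samples."""
    err = measured - reference
    return 100 * np.sqrt(np.mean(np.abs(err) ** 2) / np.mean(np.abs(reference) ** 2))

# Hypothetical 10 MHz-wide baseband signal with a weak third-order nonlinearity,
# which produces adjacent-channel spectral regrowth and in-band error.
rng = np.random.default_rng(0)
fs, chan_bw = 100e6, 10e6
noise = rng.normal(size=2**16) + 1j * rng.normal(size=2**16)
taps = firwin(129, chan_bw / 2, fs=fs)                  # low-pass to the channel bandwidth
clean = lfilter(taps, 1.0, noise)
distorted = clean + 1e-3 * clean * np.abs(clean) ** 2   # cubic distortion term

print("ACPR = %.1f dB" % acpr_db(distorted, fs, chan_bw, adj_offset=chan_bw))
print("EVM  = %.3f %%" % evm_percent(distorted, clean))
```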