108 results for BENCHMARK


Relevance: 10.00%

Abstract:

The development of smart grid technologies and appropriate charging strategies are key to accommodating large numbers of Electric Vehicles (EVs) charging on the grid. In this paper a general framework is presented for formulating the EV charging optimization problem and three different charging strategies are investigated and compared from the perspective of charging fairness while taking into account power system constraints. Two strategies are based on distributed algorithms, namely, Additive Increase and Multiplicative Decrease (AIMD), and Distributed Price-Feedback (DPF), while the third is an ideal centralized solution used to benchmark performance. The algorithms are evaluated using a simulation of a typical residential low-voltage distribution network with 50% EV penetration. © 2013 IEEE.
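The abstract names the algorithms without implementation detail, so the following is only a minimal sketch of the AIMD idea as it transfers from congestion control to EV charging: each vehicle ramps its charging rate up linearly and backs off multiplicatively whenever the network signals an overload. The parameters (alpha, beta, feeder capacity) and the broadcast congestion signal are illustrative assumptions, not values from the paper.

```python
# Minimal AIMD charging sketch; alpha, beta, capacity and the congestion
# signal are hypothetical, not taken from the paper.
def aimd_charging(num_evs=50, capacity=100.0, alpha=0.5, beta=0.5,
                  max_rate=7.4, steps=200):
    rates = [0.0] * num_evs
    for _ in range(steps):
        congested = sum(rates) > capacity      # broadcast overload signal
        for i in range(num_evs):
            if congested:
                rates[i] *= beta               # multiplicative decrease
            else:
                rates[i] = min(rates[i] + alpha, max_rate)  # additive increase
    return rates

print(sum(aimd_charging()))  # aggregate demand oscillates around capacity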

Relevance: 10.00%

Abstract:

Increasingly semiconductor manufacturers are exploring opportunities for virtual metrology (VM) enabled process monitoring and control as a means of reducing non-value-added metrology and achieving ever more demanding wafer fabrication tolerances. However, developing robust, reliable and interpretable VM models can be very challenging due to the highly correlated input space often associated with the underpinning data sets. A particularly pertinent example is etch rate prediction of plasma etch processes from multichannel optical emission spectroscopy data. This paper proposes a novel input-clustering based forward stepwise regression methodology for VM model building in such highly correlated input spaces. Max Separation Clustering (MSC) is employed as a pre-processing step to identify a reduced set of well-conditioned, representative variables that can then be used as inputs to state-of-the-art model building techniques such as Forward Selection Regression (FSR), Ridge Regression, LASSO and Forward Selection Ridge Regression (FSRR). The methodology is validated on a benchmark semiconductor plasma etch dataset and the results obtained are compared with those achieved when the state-of-the-art approaches are applied directly to the data without the MSC pre-processing step. Significant performance improvements are observed when MSC is combined with FSR (13%) and FSRR (8.5%), but not with Ridge Regression (-1%) or LASSO (-32%). The optimal VM results are obtained using the MSC-FSR and MSC-FSRR generated models. © 2012 IEEE.
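As a point of reference for the model-building stage, here is a minimal forward stepwise (FSR-style) selection loop on synthetic data; the MSC pre-processing step and the ridge/LASSO variants are not reproduced, and all data and parameter choices are illustrative.

```python
import numpy as np

def forward_selection(X, y, max_vars=10):
    """Greedy forward stepwise regression: at each step, add the input
    column that most reduces the residual sum of squares (illustrative)."""
    n, p = X.shape
    selected = []
    for _ in range(min(max_vars, p)):
        best_j, best_rss = None, np.inf
        for j in range(p):
            if j in selected:
                continue
            cols = X[:, selected + [j]]
            beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            rss = np.sum((y - cols @ beta) ** 2)
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = 3 * X[:, 2] - 2 * X[:, 7] + rng.normal(size=100)
print(forward_selection(X, y, max_vars=2))  # expect columns 2 and 7
```

In a highly correlated input space this greedy loop becomes unstable, which is precisely the failure mode the MSC pre-processing step is meant to mitigate.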

Relevance: 10.00%

Abstract:

In this paper a multiple classifier machine learning methodology for Predictive Maintenance (PdM) is presented. PdM is a prominent strategy for dealing with maintenance issues given the increasing need to minimize downtime and associated costs. One of the challenges with PdM is generating so-called 'health factors', or quantitative indicators of the status of a system associated with a given maintenance issue, and determining their relationship to operating costs and failure risk. The proposed PdM methodology allows dynamic decision rules to be adopted for maintenance management and can be used with high-dimensional and censored data problems. This is achieved by training multiple classification modules with different prediction horizons to provide different performance trade-offs in terms of frequency of unexpected breaks and unexploited lifetime, and then employing this information in an operating-cost-based maintenance decision system to minimise expected costs. The effectiveness of the methodology is demonstrated using a simulated example and a benchmark semiconductor manufacturing maintenance problem.
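As a rough illustration of the multi-horizon idea (not the paper's actual modules or cost model), the sketch below trains one binary classifier per prediction horizon on synthetic data and triggers maintenance when the expected breakdown cost exceeds the cost of intervening; every horizon, cost figure and feature here is an assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_horizon_classifiers(X, time_to_failure, horizons=(5, 10, 20)):
    """One binary classifier per horizon h: 'will the tool fail within h cycles?'"""
    models = {}
    for h in horizons:
        y = (time_to_failure <= h).astype(int)
        models[h] = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    return models

def maintenance_decision(models, x, c_break=100.0, c_maint=10.0):
    """Trigger maintenance when expected breakdown cost exceeds maintenance cost."""
    risk = max(m.predict_proba(x.reshape(1, -1))[0, 1] for m in models.values())
    return "maintain" if risk * c_break > c_maint else "run on"

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))
ttf = rng.integers(1, 50, size=500)        # synthetic time-to-failure labels
models = train_horizon_classifiers(X, ttf)
print(maintenance_decision(models, X[0]))
```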

Relevance: 10.00%

Abstract:

Virtual metrology (VM) aims to predict metrology values using sensor data from production equipment and physical metrology values of preceding samples. VM is a promising technology for the semiconductor manufacturing industry as it can reduce the frequency of in-line metrology operations and provide supportive information for other operations such as fault detection, predictive maintenance and run-to-run control. The prediction models for VM can be drawn from a large variety of linear and nonlinear regression methods, and the selection of a proper regression method for a specific VM problem is not straightforward, especially when the candidate predictor set is of high dimension, correlated and noisy. Using process data from a benchmark semiconductor manufacturing process, this paper evaluates the performance of four typical regression methods for VM: multiple linear regression (MLR), least absolute shrinkage and selection operator (LASSO), neural networks (NN) and Gaussian process regression (GPR). It is observed that GPR performs the best among the four methods and that, remarkably, the performance of linear regression approaches that of GPR as the subset of selected input variables is increased. The observed competitiveness of high-dimensional linear regression models, which does not hold true in general, is explained in the context of extreme learning machines and functional link neural networks.
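A minimal version of such a comparison can be set up with scikit-learn; the synthetic response below merely stands in for the benchmark etch data, so the relative errors are illustrative rather than a reproduction of the paper's finding.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the benchmark process data (not the real dataset).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X_tr, y_tr)
mlr = LinearRegression().fit(X_tr, y_tr)
for name, model in [("GPR", gpr), ("MLR", mlr)]:
    print(name, mean_squared_error(y_te, model.predict(X_te)))
```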

Relevance: 10.00%

Abstract:

Objective: Molecular pathology relies on identifying anomalies using PCR or analysis of DNA/RNA. This is important in solid tumours, where molecular stratification of patients defines targeted treatment. These molecular biomarkers rely on examination of tumour, annotation for possible macrodissection/tumour cell enrichment and the estimation of % tumour. Manually marking up tumour is error-prone. Method: We have developed a method for automated tumour mark-up and % cell calculations using image analysis, called TissueMark®, based on texture analysis for lung, colorectal and breast cancer (cases = 245, 100 and 100 respectively). Pathologists marked slides for tumour and reviewed the automated analysis. A subset of slides was manually counted for tumour cells to provide a benchmark for automated image analysis. Results: There was a strong concordance between pathological and automated mark-up (100% acceptance rate for macrodissection). We also showed a strong concordance between manually and automatically drawn boundaries (median exclusion/inclusion error of 91.70%/89%). EGFR mutation analysis was precisely the same for manual and automated annotation-based macrodissection. The annotation accuracy rates in breast and colorectal cancer were 83% and 80% respectively. Finally, region-based estimations of tumour percentage using image analysis showed significant correlation with actual cell counts. Conclusion: Image analysis can be used for macrodissection to (i) annotate tissue for tumour and (ii) estimate the % tumour cells, and represents an approach to standardising/improving molecular diagnostics.
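For the region-based percentage estimate, the basic computation reduces to classifying image tiles and reporting the tumour fraction. The sketch below uses a crude variance-based texture proxy purely for illustration; TissueMark's actual texture features and classifier are not described in this abstract, so every threshold and feature choice here is an assumption.

```python
import numpy as np

def tumour_fraction(image, tile=32, var_threshold=0.02):
    """Classify each tile as 'tumour' when its intensity variance exceeds a
    threshold (a stand-in texture feature), then report the tumour-tile fraction."""
    h, w = image.shape
    flags = []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            patch = image[r:r + tile, c:c + tile]
            flags.append(patch.var() > var_threshold)
    return np.mean(flags)

img = np.random.default_rng(2).random((256, 256))  # placeholder for a slide image
print(f"estimated tumour fraction: {tumour_fraction(img):.2%}")
```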

Relevance: 10.00%

Abstract:

The original goals of the JET ITER-like wall included the study of the impact of an all-W divertor on plasma operation (Coenen et al 2013 Nucl. Fusion 53 073043) and fuel retention (Brezinsek et al 2013 Nucl. Fusion 53 083023). ITER has recently decided to install a full-tungsten (W) divertor from the start of operations. One of the key inputs required in support of this decision was the study of the possibility of W melting and melt splashing during transients. Damage of this type can modify surface topology, which could increase disruption frequency or compromise subsequent plasma operation. Although every effort will be made to avoid leading edges, ITER plasma stored energies are sufficient that transients can drive shallow melting on the top surfaces of components. JET is able to produce ELMs large enough to allow access to transient melting in a regime of relevance to ITER.

Transient W melt experiments were performed in JET using a dedicated divertor module and a sequence of I_p = 3.0 MA / B_T = 2.9 T H-mode pulses with an input power of P_in = 23 MW, a stored energy of ~6 MJ and regular type I ELMs at ΔW_ELM = 0.3 MJ and f_ELM ~ 30 Hz. By moving the outer strike point onto a dedicated leading edge in the W divertor, the base temperature was raised within ~1 s to a level allowing transient, ELM-driven melting during the subsequent 0.5 s. Such ELMs (ΔW ~ 300 kJ per ELM) are comparable to mitigated ELMs expected in ITER (Pitts et al 2011 J. Nucl. Mater. 415 (Suppl.) S957-64).

Although significant material losses in terms of ejections into the plasma were not observed, there is indirect evidence that some small droplets (~80 μm) were released. Almost 1 mm (~6 mm³) of W was moved by ~150 ELMs within 7 subsequent discharges. The impact on the main plasma parameters was minor and no disruptions occurred. The W melt gradually moved along the leading edge towards the high-field side, driven by j × B forces. The evaporation rate determined from spectroscopy is 100 times less than expected from steady-state melting and is thus consistent only with transient melting during the individual ELMs. Analysis of IR data and spectroscopy, together with modelling using the MEMOS code (Bazylev et al 2009 J. Nucl. Mater. 390-391 810-13), point to transient melting as the main process. 3D MEMOS simulations on the consequences of multiple ELMs for damage of tungsten castellated armour have been performed.

These experiments provide the first experimental evidence for the absence of significant melt splashing at transient events resembling mitigated ELMs on ITER and establish a key experimental benchmark for the MEMOS code.

Relevance: 10.00%

Abstract:

Fully Homomorphic Encryption (FHE) is a recently developed cryptographic technique which allows computations on encrypted data. There are many interesting applications for this encryption method, especially within cloud computing. However, the computational complexity is such that it is not yet practical for real-time applications. This work proposes optimised hardware architectures of the encryption step of an integer-based FHE scheme with the aim of improving its practicality. A low-area design and a high-speed parallel design are proposed and implemented on a Xilinx Virtex-7 FPGA, targeting the available DSP slices, which offer high-speed multiplication and accumulation. Both use the Comba multiplication scheduling method to manage the large multiplications required with unevenly sized multiplicands and to minimise the number of read and write operations to RAM. Results show that speed-up factors of 3.6 and 10.4 can be achieved for the encryption step with medium-sized security parameters for the low-area and parallel designs respectively, compared to the benchmark software implementation on an Intel Core2 Duo E8400 platform running at 3 GHz.
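The Comba method referenced here schedules a large multiplication column by column so that each output word is completed, and written out, exactly once. A software sketch of the scheduling follows (32-bit words chosen arbitrarily; the hardware design maps these accumulations onto DSP slices rather than Python integers).

```python
# Illustrative Comba (column-wise) multi-word multiplication.
WORD = 32
MASK = (1 << WORD) - 1

def comba_mul(a, b):
    """Multiply two little-endian word arrays column by column, so each
    result word is finished (one RAM write) before the next column starts."""
    n, m = len(a), len(b)
    res = [0] * (n + m)
    acc = 0
    for k in range(n + m - 1):
        for i in range(max(0, k - m + 1), min(k + 1, n)):
            acc += a[i] * b[k - i]      # accumulate all products of one column
        res[k] = acc & MASK             # emit the finished word
        acc >>= WORD                    # carry into the next column
    res[n + m - 1] = acc & MASK
    return res

def to_words(x, n):
    return [(x >> (WORD * i)) & MASK for i in range(n)]

a, b = 0x123456789ABCDEF0, 0xFEDCBA9876543210
words = comba_mul(to_words(a, 2), to_words(b, 2))
value = sum(w << (WORD * i) for i, w in enumerate(words))
assert value == a * b
print(hex(value))
```

Because each column's result leaves the accumulator exactly once, the number of RAM writes equals the number of output words, which is the property the designs exploit to minimise memory traffic.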

Relevance: 10.00%

Abstract:

For some time, the satisfiability formulae that have been the most difficult to solve for their size have been crafted to be unsatisfiable by the use of cardinality constraints. Recent solvers have introduced explicit checking of such constraints, rendering previously difficult formulae trivial to solve. A family of unsatisfiable formulae is described that is derived from the sgen4 family but cannot be solved using cardinality constraint detection and reasoning alone. These formulae were found to be the most difficult during the SAT2014 competition by a significant margin and include the shortest unsolved benchmark in the competition, sgen6-1200-5-1.cnf.
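For context, a cardinality constraint bounds how many of a set of literals may be true; the smallest case, "at most one", has the classic pairwise CNF encoding sketched below. sgen-style generators combine many such constraints over overlapping variable groups, and the new family is specifically built so that detecting these constraints alone does not suffice, so this is only the basic building block, not the sgen6 construction.

```python
from itertools import combinations

# Pairwise CNF encoding of an "at most one" cardinality constraint over
# DIMACS-style variables (positive integers); negatives mean negation.
def at_most_one(variables):
    """For every pair (xi, xj), forbid both being true: clause (-xi v -xj)."""
    return [[-a, -b] for a, b in combinations(variables, 2)]

def exactly_one(variables):
    return [list(variables)] + at_most_one(variables)

print(exactly_one([1, 2, 3]))
# [[1, 2, 3], [-1, -2], [-1, -3], [-2, -3]]
```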

Relevance: 10.00%

Abstract:

One of the most widely used techniques in computer vision for foreground detection is to model each background pixel as a Mixture of Gaussians (MoG). While this is effective for a static camera with a fixed or a slowly varying background, it fails to handle any fast, dynamic movement in the background. In this paper, we propose a generalised framework, called region-based MoG (RMoG), that takes into consideration neighbouring pixels while generating the model of the observed scene. The model equations are derived from Expectation Maximisation theory for batch mode, and stochastic approximation is used for online mode updates. We evaluate our region-based approach against ten sequences containing dynamic backgrounds, and show that the region-based approach provides a performance improvement over the traditional single-pixel MoG. For feature and region sizes that are equal, the effect of increasing the learning rate is to reduce both true and false positives. Comparison with four state-of-the-art approaches shows that RMoG outperforms the others in reducing false positives whilst still maintaining reasonable foreground definition. Lastly, using the ChangeDetection (CDNet 2014) benchmark, we evaluated RMoG against numerous surveillance scenes and found it to be amongst the leading performers for dynamic background scenes, whilst providing comparable performance for other commonly occurring surveillance scenes.
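The single-pixel MoG baseline that RMoG generalises is available off the shelf in OpenCV; the snippet below shows that baseline only, since the region-based extension is the paper's contribution and is not in OpenCV. The input file name and parameter values are placeholders.

```python
import cv2

# Per-pixel MoG background subtraction via OpenCV's MOG2 implementation.
cap = cv2.VideoCapture("surveillance.mp4")   # hypothetical input file
mog = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                         detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = mog.apply(frame, learningRate=0.005)  # foreground mask per frame
    cv2.imshow("foreground", mask)
    if cv2.waitKey(30) & 0xFF == 27:             # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```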

Relevance: 10.00%

Abstract:

Comet C/2012 S1 (ISON) is unique in that it is a dynamically new comet derived from the Oort cloud reservoir of comets with a sun-grazing orbit. Infrared (IR) and visible wavelength observing campaigns were planned on NASA's Stratospheric Observatory For Infrared Astronomy (SOFIA) and on the National Solar Observatory Dunn (DST) and McMath-Pierce Solar Telescopes, respectively. We highlight our early results. SOFIA (+FORCAST [1]) mid- to far-IR images and spectroscopy (~5-35 μm) of the dust in the coma of ISON are to be obtained by the ISON-SOFIA Team during a flight window 2013 Oct 21-23 UT (r_h≈1.18 AU). Dust characteristics, identified through the 10 μm silicate emission feature and its strength [2], as well as spectral features from cometary crystalline silicates (Forsterite) at 11.05-11.2 μm, and near 16, 19, 23.5, 27.5, and 33 μm are compared with other Oort cloud comets that span the range of small and/or highly porous grains (e.g., C/1995 O1 (Hale-Bopp) [3,4,5] and C/2001 Q4 (NEAT) [6]) to large and/or compact grains (e.g., C/2007 N4 (Lulin) [7] and C/2006 P1 (McNaught) [8]). Measurement of the crystalline peaks in contrast to the broad 10 and 20 μm amorphous silicate features yields the cometary silicate crystalline mass fraction [9], which is a benchmark for radial transport in our protoplanetary disk [10]. The central wavelength positions, relative intensities, and feature asymmetries for the crystalline peaks may constrain the shapes of the crystals [11]. Only SOFIA can look for cometary organics in the 5-8 μm region. Spatially resolved measurements of atoms and simple molecules from when comet ISON is near the Sun (r_h<0.4 AU, near Nov-20--Dec-03 UT) were proposed by the ISON-DST Team. Comet ISON is the first comet since comet Ikeya-Seki (1965f) [12,13] suitable for studying the alkali metals Na and K and the atoms specifically attributed to dust grains including Mg, Si, Fe, as well as Ca. DST's Horizontal Grating Spectrometer (HGS) measures 4 settings: Na I, K, C2 to sample cometary organics (along with Mg I), and [O I] as a proxy for activity from water [14] (along with Si I and Fe I). State-of-the-art instruments that will also be employed include IBIS [15], which is a Fabry-Perot spectral imaging system that concurrently measures lines of Na, K, Ca II, or Fe, and ROSA (CSUN/QUB) [16], which is a rapid imager that simultaneously monitors Ca II or CN. From McMath-Pierce, the Solar-Stellar Spectrograph also will target ISON (320-900 nm, R~21,000, r_h

Relevance: 10.00%

Abstract:

Energy consumption has become an important area of research of late. With the advent of new manycore processors, situations have arisen where not all the processors need to be active to reach an optimal relation between performance and energy usage. In this paper a study of the power and energy usage of a series of benchmarks, the PARSEC and SPLASH-2X benchmark suites, on the Intel Xeon Phi for different thread configurations is presented. To carry out this study, a tool was designed to monitor and record the power usage in real time during execution and afterwards to compare the results.
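The abstract describes the tool only in outline, so the following is a heavily hedged sketch of the general pattern it implies: sample a power sensor in a background thread while the benchmark runs, then integrate power over time to obtain energy. The sensor path and the benchmark command are hypothetical placeholders, not the actual tool; the Xeon Phi exposes power through its MIC driver, and other platforms differ.

```python
import subprocess
import threading
import time

POWER_SENSOR = "/sys/class/hypothetical_mic/power_uw"  # hypothetical path

def read_power_watts():
    with open(POWER_SENSOR) as f:
        return int(f.read()) / 1e6      # microwatts to watts

def monitor(samples, stop, period=0.1):
    while not stop.is_set():
        samples.append((time.time(), read_power_watts()))
        time.sleep(period)

samples, stop = [], threading.Event()
t = threading.Thread(target=monitor, args=(samples, stop))
t.start()
subprocess.run(["./benchmark", "--threads", "60"])   # hypothetical workload
stop.set()
t.join()

# Trapezoidal integration of power over time gives energy in joules.
energy = sum((t2 - t1) * (p1 + p2) / 2
             for (t1, p1), (t2, p2) in zip(samples, samples[1:]))
print(f"energy: {energy:.1f} J over {len(samples)} samples")
```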

Relevance: 10.00%

Abstract:

Economic and environmental load dispatch aims to determine the amount of electricity generated from power plants to meet load demand while minimizing fossil fuel costs and air pollution emissions, subject to operational and licensing requirements. These two scheduling problems are commonly formulated with non-smooth cost functions, respectively considering various effects and constraints such as the valve-point effect, power balance and ramp-rate limits. The expected increase in plug-in electric vehicles is likely to have a significant impact on the power system due to high charging power consumption and significant uncertainty in charging times. In this paper, multiple electric vehicle charging profiles are comparatively integrated into a 24-hour load demand in an economic and environmental dispatch model. Self-learning teaching-learning based optimization (TLBO) is employed to solve the non-convex non-linear dispatch problems. Numerical results on well-known benchmark functions, as well as test systems with different scales of generation units, show the significance of the new scheduling method.
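The core TLBO loop the paper builds on is compact enough to sketch; it is shown here on the sphere benchmark function with conventional parameter choices, without the self-learning modification or any of the dispatch constraints.

```python
import numpy as np

def tlbo(f, dim=10, pop=30, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Basic TLBO: a teacher phase pulling learners toward the best solution,
    then a learner phase where each learner moves relative to a random peer."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))
    for _ in range(iters):
        fit = np.apply_along_axis(f, 1, X)
        teacher, mean = X[fit.argmin()], X.mean(axis=0)
        # Teacher phase: shift the class toward the teacher, away from the mean.
        Tf = rng.integers(1, 3)                    # teaching factor in {1, 2}
        X_new = np.clip(X + rng.random((pop, dim)) * (teacher - Tf * mean), lo, hi)
        improve = np.apply_along_axis(f, 1, X_new) < fit
        X[improve] = X_new[improve]
        # Learner phase: pairwise interaction with a random classmate.
        fit = np.apply_along_axis(f, 1, X)
        for i in range(pop):
            j = rng.integers(pop)
            step = (X[i] - X[j]) if fit[i] < fit[j] else (X[j] - X[i])
            cand = np.clip(X[i] + rng.random(dim) * step, lo, hi)
            if f(cand) < fit[i]:
                X[i] = cand
    return X[np.apply_along_axis(f, 1, X).argmin()]

best = tlbo(lambda x: np.sum(x ** 2))              # sphere benchmark function
print(np.sum(best ** 2))                           # should be near zero
```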

Relevance: 10.00%

Abstract:

An investigation into exchange-traded fund (ETF) outperformance during the period 2008-2012 is undertaken utilizing a data set of 288 U.S. traded securities. ETFs are tested for net asset value (NAV) premium, underlying index and market benchmark outperformance, with Sharpe, Treynor, and Sortino ratios employed as risk-adjusted performance measures. A key contribution is the application of an innovative generalized stepdown procedure in controlling for data-snooping bias. We find that a large proportion of optimized replication and debt asset class ETFs display risk-adjusted premiums, with energy and precious metals focused funds outperforming the S&P 500 market benchmark.
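Of the measures used, Sharpe and Sortino are straightforward to compute from a return series, as sketched below on synthetic daily returns; the sqrt(252) annualisation is a common convention and not a detail taken from the study (Treynor additionally requires a beta estimate against the market benchmark).

```python
import numpy as np

def sharpe(returns, rf=0.0):
    """Annualised Sharpe ratio: mean excess return over its standard deviation."""
    excess = returns - rf
    return np.sqrt(252) * excess.mean() / excess.std(ddof=1)

def sortino(returns, rf=0.0):
    """Annualised Sortino ratio: penalises downside deviation only."""
    excess = returns - rf
    downside = np.minimum(excess, 0.0)
    return np.sqrt(252) * excess.mean() / np.sqrt((downside ** 2).mean())

rng = np.random.default_rng(3)
etf = rng.normal(5e-4, 0.01, 1250)   # five years of synthetic daily returns
print(f"Sharpe {sharpe(etf):.2f}, Sortino {sortino(etf):.2f}")
```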

Relevance: 10.00%

Abstract:

We investigate a collision-sensitive secondary network that intends to opportunistically aggregate and utilize spectrum of a primary network to achieve higher data rates. In opportunistic spectrum access with imperfect sensing of idle primary spectrum, secondary transmission can collide with primary transmission. When the secondary network aggregates more channels in the presence of imperfect sensing, collisions can occur more often, limiting the performance gained by spectrum aggregation. In this context, we aim to address a fundamental question: how much spectrum aggregation is worthwhile under imperfect sensing? For collision occurrence, we focus on two different types of collision: one imposed by asynchronous transmission and the other by imperfect spectrum sensing. The collision probability expression has been derived in closed form with various secondary network parameters: primary traffic load, secondary user transmission parameters, spectrum sensing errors, and the number of aggregated sub-channels. In addition, the impact of spectrum aggregation on data rate is analysed under the constraint of collision probability. Then, we solve an optimal spectrum aggregation problem and propose a dynamic spectrum aggregation approach to increase the data rate subject to practical collision constraints. Our simulation results show clearly that the proposed approach outperforms the benchmark that passively aggregates sub-channels without collision awareness.
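The paper derives the collision probability in closed form; as an illustration of the underlying trade-off only, the Monte-Carlo sketch below shows how the chance of at least one collision grows with the number of aggregated sub-channels under a simple busy/misdetection model. All parameter values are invented.

```python
import random

def collision_prob(n_channels, load=0.3, p_md=0.05, trials=100_000):
    """Each sub-channel is busy with probability 'load' and a busy channel
    is misdetected as idle with probability 'p_md'; transmitting on a busy
    channel counts as a collision for that trial."""
    collisions = 0
    for _ in range(trials):
        for _ in range(n_channels):
            busy = random.random() < load
            misdetected = random.random() < p_md
            if busy and misdetected:      # secondary transmits into primary
                collisions += 1
                break
    return collisions / trials

random.seed(0)
for n in (1, 2, 4, 8):
    print(n, "sub-channels:", round(collision_prob(n), 3))
```

The printout shows the collision probability climbing with aggregation, which is exactly why the proposed approach must cap aggregation under a collision constraint rather than aggregate passively.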

Relevance: 10.00%

Abstract:

Dynamic economic load dispatch (DELD) is one of the most important steps in power system operation. Various optimisation algorithms for solving the problem have been developed; however, due to the non-convex characteristics and large dimensionality of the problem, it is necessary to explore new methods to further improve the dispatch results and minimise the costs. This article proposes a hybrid differential evolution (DE) algorithm, namely clonal selection-based differential evolution (CSDE), to solve the problem. CSDE is an artificial intelligence technique that can be applied to complex optimisation problems that are, for example, nonlinear, large-scale, non-convex and discontinuous. This hybrid algorithm combines the clonal selection algorithm (CSA) as the local search technique to update the best individual in the population, which enhances the diversity of the solutions and prevents premature convergence in DE. Furthermore, we investigate four mutation operations which are used in CSA as the hyper-mutation operations. Finally, an efficient solution repair method is designed for DELD to satisfy the complicated equality and inequality constraints of the power system and guarantee the feasibility of the solutions. Two benchmark power systems are used to evaluate the performance of the proposed method. The experimental results show that the proposed CSDE/best/1 approach significantly outperforms nine other variants of CSDE and DE, as well as most other published methods, in terms of solution quality and convergence characteristics.
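The DE/best/1 strategy that the winning CSDE variant builds around follows the standard mutation-crossover-selection pattern sketched below; the clonal-selection hyper-mutation and the constraint-repair method are omitted, and the parameter values are conventional defaults rather than the paper's settings.

```python
import numpy as np

def de_best_1(f, dim=10, pop=40, iters=300, F=0.5, CR=0.9,
              lo=-5.0, hi=5.0, seed=0):
    """DE/best/1/bin: mutate around the current best, binomial crossover,
    greedy one-to-one selection."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))
    fit = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        best = X[fit.argmin()]
        for i in range(pop):
            r1, r2 = rng.choice(pop, size=2, replace=False)
            mutant = np.clip(best + F * (X[r1] - X[r2]), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True     # guarantee at least one gene
            trial = np.where(cross, mutant, X[i])
            f_trial = f(trial)
            if f_trial < fit[i]:                # greedy selection
                X[i], fit[i] = trial, f_trial
    return X[fit.argmin()], fit.min()

x, fx = de_best_1(lambda v: np.sum(v ** 2))     # sphere test function
print(fx)                                       # should be near zero
```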