28 results for Weighting
Abstract:
Guest editorial Ali Emrouznejad is a Senior Lecturer at the Aston Business School in Birmingham, UK. His areas of research interest include performance measurement and management, efficiency and productivity analysis, as well as data mining. He has published widely in various international journals. He is an Associate Editor of the IMA Journal of Management Mathematics and Guest Editor of several special issues of journals including the Journal of the Operational Research Society, Annals of Operations Research, Journal of Medical Systems, and International Journal of Energy Management Sector. He serves on the editorial boards of several international journals and is co-founder of Performance Improvement Management Software. William Ho is a Senior Lecturer at the Aston University Business School. Before joining Aston in 2005, he worked as a Research Associate in the Department of Industrial and Systems Engineering at the Hong Kong Polytechnic University. His research interests include supply chain management, production and operations management, and operations research. He has published extensively in international journals including Computers & Operations Research, Engineering Applications of Artificial Intelligence, European Journal of Operational Research, Expert Systems with Applications, International Journal of Production Economics, International Journal of Production Research, and Supply Chain Management: An International Journal. His first authored book was published in 2006. He is an Editorial Board member of the International Journal of Advanced Manufacturing Technology and an Associate Editor of the OR Insight Journal. Currently, he is a Scholar of the Advanced Institute of Management Research.
Uses of frontier efficiency methodologies and multi-criteria decision making for performance measurement in the energy sector This special issue focuses on holistic, applied research on performance measurement in energy sector management, with the aim of bridging the gap between industry and academia. After a rigorous refereeing process, seven papers were included in this special issue. The volume opens with five data envelopment analysis (DEA)-based papers. Wu et al. apply the DEA-based Malmquist index to evaluate the changes in relative efficiency and the total factor productivity of coal-fired electricity generation in 30 Chinese administrative regions from 1999 to 2007. Factors considered in the model include fuel consumption, labor, capital, sulphur dioxide emissions, and electricity generated. The authors reveal that the east provinces were relatively and technically more efficient, whereas the west provinces had the highest growth rate in the period studied. Ioannis E. Tsolas applies the DEA approach to assess the performance of Greek fossil fuel-fired power stations, taking undesirable outputs such as carbon dioxide and sulphur dioxide emissions into consideration. In addition, the bootstrapping approach is deployed to address the uncertainty surrounding DEA point estimates, and to provide bias-corrected estimations and confidence intervals for the point estimates. The author finds that, in the sample, the non-lignite-fired stations are on average more efficient than the lignite-fired stations. Maethee Mekaroonreung and Andrew L. Johnson compare the relative performance of three DEA-based measures, which estimate production frontiers and evaluate the relative efficiency of 113 US petroleum refineries while considering undesirable outputs.
Three inputs (capital, energy consumption, and crude oil consumption), two desirable outputs (gasoline and distillate generation), and an undesirable output (toxic release) are considered in the DEA models. The authors discover that refineries in the Rocky Mountain region performed the best, and that about 60 percent of oil refineries in the sample could improve their efficiencies further. H. Omrani, A. Azadeh, S. F. Ghaderi, and S. Abdollahzadeh present an integrated approach, combining DEA, corrected ordinary least squares (COLS), and principal component analysis (PCA), to calculate the relative efficiency scores of 26 Iranian electricity distribution units from 2003 to 2006. Specifically, both DEA and COLS are used to check three internal consistency conditions, whereas PCA is used to verify and validate the final ranking results of either DEA (consistency) or DEA-COLS (non-consistency). Three inputs (network length, transformer capacity, and number of employees) and two outputs (number of customers and total electricity sales) are considered in the model. Virendra Ajodhia applies three DEA-based models to evaluate the relative performance of 20 electricity distribution firms from the UK and the Netherlands. The first model is a traditional DEA model for analyzing cost-only efficiency. The second model includes (inverse) quality by modelling total customer minutes lost as an input. The third model is based on the idea of using total social costs, including the firm’s private costs and the interruption costs incurred by consumers, as an input. Both energy delivered and number of consumers are treated as outputs in the models. After the five DEA papers, Stelios Grafakos, Alexandros Flamos, Vlasis Oikonomou, and D. Zevgolis present a multiple criteria analysis weighting approach to evaluate energy and climate policy.
The proposed approach is akin to the analytic hierarchy process, which consists of pairwise comparisons, consistency verification, and criteria prioritization. In the approach, stakeholders and experts in the energy policy field are incorporated in the evaluation process through an interactive means with verbal, numerical, and visual representation of their preferences. A total of 14 evaluation criteria were considered and classified under four objectives: climate change mitigation, energy effectiveness, socioeconomic, and competitiveness and technology. Finally, Borge Hess applies the stochastic frontier analysis approach to analyze the impact of various business strategies, including acquisitions, holding structures, and joint ventures, on a firm’s efficiency within a sample of 47 natural gas transmission pipelines in the USA from 1996 to 2005. The author finds no significant change in a firm’s efficiency following an acquisition, and only weak evidence of efficiency improvements caused by the new shareholder. Moreover, parent companies appear not to influence a subsidiary’s efficiency positively. In addition, the analysis shows a negative impact of joint ventures on the technical efficiency of the pipeline company. To conclude, we are grateful to all the authors for their contributions, and to all the reviewers for their constructive comments, which made this special issue possible. We hope that this issue will contribute significantly to performance improvement in the energy sector.
Abstract:
In this paper, we propose a new similarity measure to compute the pair-wise similarity of text-based documents based on the patterns of the words in the documents. First, we develop a kappa measure for pair-wise comparison of documents; then we use the ordered weighted averaging (OWA) operator to define a document similarity measure for a set of documents.
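The two-step construction described above can be sketched as follows: a kappa-style, chance-corrected agreement score over word presence/absence patterns, then ordered weighted averaging over a collection of such scores. The kappa formulation and the OWA weight vector below are illustrative assumptions, not the paper's exact definitions.

```python
def kappa(doc_a, doc_b, vocab):
    """Chance-corrected agreement (Cohen's kappa) between two documents'
    word presence/absence patterns over a shared vocabulary."""
    a = [w in doc_a for w in vocab]
    b = [w in doc_b for w in vocab]
    n = len(vocab)
    p_observed = sum(x == y for x, y in zip(a, b)) / n   # agreement rate
    pa, pb = sum(a) / n, sum(b) / n
    p_chance = pa * pb + (1 - pa) * (1 - pb)             # expected by chance
    return (p_observed - p_chance) / (1 - p_chance) if p_chance < 1 else 1.0

def owa(values, weights):
    """Ordered weighted averaging: weights attach to rank positions of the
    sorted values, not to particular inputs."""
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))
```

An OWA weight vector such as (0.5, 0.3, 0.2) emphasises the strongest pair-wise agreements in the set without discarding the rest.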
Abstract:
This article presents a potential method to assist developers of future bioenergy schemes when selecting from available suppliers of biomass materials. The method aims to allow tacit requirements placed on biomass suppliers to be considered at the design stage of new developments. The method used is a combination of the Analytic Hierarchy Process and Quality Function Deployment methods (AHP-QFD). The output of the method is a ranking and relative weighting of the available suppliers, which could be used to improve optimization algorithms such as linear and goal programming. The paper is at a conceptual stage and no results have been obtained. The aim is to use the AHP-QFD method to bridge the gap between the treatment of explicit and tacit requirements of bioenergy schemes, allowing decision makers to identify the most successful supply strategy available.
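The AHP step of AHP-QFD derives relative weights from a pairwise comparison matrix. A common approximation of the principal eigenvector, column normalisation followed by row averaging, can be sketched as below; the matrix entries are illustrative, not drawn from the paper.

```python
def ahp_priorities(pairwise):
    """Approximate AHP priority vector from a reciprocal pairwise
    comparison matrix, using the column-normalisation / row-averaging
    shortcut for the principal eigenvector."""
    n = len(pairwise)
    col_sums = [sum(row[j] for row in pairwise) for j in range(n)]
    # Normalise each column so it sums to 1, then average across each row.
    normalised = [[pairwise[i][j] / col_sums[j] for j in range(n)]
                  for i in range(n)]
    return [sum(normalised[i]) / n for i in range(n)]
```

For a perfectly consistent matrix such as [[1, 3], [1/3, 1]] ("supplier A is 3 times preferable to B"), this yields priorities (0.75, 0.25), which could then feed the QFD weighting and downstream linear or goal programming models.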
Abstract:
To solve multi-objective problems, multiple reward signals are often scalarized into a single value and further processed using established single-objective problem solving techniques. While the field of multi-objective optimization has made many advances in applying scalarization techniques to obtain good solution trade-offs, the utility of applying these techniques in the multi-objective multi-agent learning domain has not yet been thoroughly investigated. In this setting, agents learn the value of their decisions by linearly scalarizing their reward signals at the local level, while acceptable system-wide behaviour emerges. However, the non-linear relationship between the weighting parameters of the scalarization function and the learned policy makes the discovery of system-wide trade-offs time consuming. Our first contribution is a thorough analysis of well-known scalarization schemes within the multi-objective multi-agent reinforcement learning setup. The analysed approaches intelligently explore the weight-space in order to find a wider range of system trade-offs. In our second contribution, we propose a novel adaptive weight algorithm which interacts with the underlying local multi-objective solvers and allows for a better coverage of the Pareto front. Our third contribution is the experimental validation of our approach by learning bi-objective policies in self-organising smart camera networks. We note that our algorithm (i) explores the objective space faster on many problem instances, (ii) obtains solutions that exhibit a larger hypervolume, and (iii) achieves a greater spread in the objective space.
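As a concrete illustration of linear scalarization at the local level, the sketch below shows how a fixed weight vector selects one policy from a set of hypothetical bi-objective reward vectors; sweeping the weight recovers different trade-offs, but not uniformly, which is the difficulty the adaptive weight algorithm targets. All candidate rewards are invented for illustration.

```python
def linear_scalarize(rewards, weights):
    """Collapse a multi-objective reward vector into a single scalar."""
    return sum(w * r for w, r in zip(weights, rewards))

# Hypothetical bi-objective rewards for three candidate policies,
# e.g. (tracking performance, energy saved) in a smart camera network.
candidates = [(0.9, 0.1), (0.6, 0.6), (0.1, 0.9)]

def best_for_weight(w1):
    """Policy an agent would converge to under fixed weights (w1, 1 - w1)."""
    return max(candidates, key=lambda r: linear_scalarize(r, (w1, 1.0 - w1)))
```

Note that evenly spaced weights do not map to evenly spaced Pareto points; the middle policy is only selected in a narrow band of weights, so a naive weight sweep can waste effort rediscovering the same extreme trade-offs.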
Abstract:
As we settle into a new year, this second issue of Contact Lens and Anterior Eye allows us to reflect on how new research in this field impacts our understanding, but more importantly, how we use this evidence base to enhance our day to day practice, to educate the next generation of students and to construct the research studies to deepen our knowledge still further. The end of 2014 saw the publication of the UK government's Research Excellence Framework (REF), which ranks Universities in terms of their outputs (which include their papers, publications and research income), environment (infrastructure and staff support) and, for the first time, impact (defined as “any effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia” [8]). The REF is a process of expert review, carried out in 36 subject-based units of assessment, of which our field is typically submitted to the Allied Health, Dentistry, Nursing and Pharmacy panel. Universities that offer Optometry did very well, with Cardiff, Manchester and Aston in the top 10% of the 94 Universities that submitted to this panel (Grade Point Average ranked order). While the format of the next exercise (probably in 2020) to allocate the more than £2 billion of UK government research funds is yet to be determined, it is already rumoured that impact will contribute an even larger proportion to the weighting. Hence it is even more important to reflect on the impact of our research. In this issue, Elisseef and colleagues [5] examine the intriguing potential of modifying a lens surface to allow it to bind to known wetting agents (in this case hyaluronic acid) to enhance water retention. Such a technique has the capacity to reduce friction between the lens surface and the eyelids/ocular surface, presumably leading to higher comfort and less reason for patients to discontinue lens wear.
Several papers in this issue report on the validity of new high precision, fast scanning imaging and quantification equipment, utilising techniques such as Scheimpflug imaging, partial coherence interferometry, aberrometry and video, allowing detailed assessment of anterior chamber biometry, corneal topography, corneal biomechanics, peripheral refraction, ocular aberrations and lens fit. The challenge is how to use this advanced instrumentation, which is becoming increasingly available, to create real impact. Many challenges in contact lenses and the anterior eye still prevail in 2015, such as: - While contact lens and refractive surgery complications are relatively rare, they are still too often devastating to the individual and their quality of life (such as the impact and prognosis of patients with Acanthamoeba keratitis reported by Jhanji and colleagues in this issue [7]). How can we detect those patients who are going to be affected, and what modifications do we need to make to contact lenses and patient management to prevent this occurring? - Drop-out from contact lenses still occurs at a rapid rate, and symptoms of dry eye seem to be the leading cause driving this discontinuation of wear [1] and [2]. What design, coating, material and lubricant release mechanism will make a step change in end of day comfort in particular? - Presbyopia is a major challenge to hassle-free quality vision and is one of the first signs of ageing noticed by many people. As an emmetrope approaching presbyopia, I have a vested interest in new medical devices that will give me high quality vision at all distances when my arms won’t stretch any further. Perhaps a new definition of presbyopia could be when you start to orientate your smartphone in the landscape direction to gain the small increase in print size needed to read!
Effective accommodating intraocular lenses that truly mimic the pre-presbyopic crystalline lens are still a way off [3], and hence simultaneous images achieved through contact lenses, intraocular lenses or refractive surgery still have a secure future. However, splitting light reaching the retina and requiring the brain to suppress blurred images will always be a compromise on contrast sensitivity and is liable to cause dysphotopsia; so how will new designs account for differences in a patient's task demands and own optical aberrations to allow focused patient selection, optimising satisfaction? - Drug delivery from contact lenses offers much in terms of compliance and quality of life for patients with chronic ocular conditions such as glaucoma, dry eye and, perhaps in the future, dry age-related macular degeneration; but scientific proof-of-concept publications (see ElShaer et al. [6]) have not yet led to commercial products. Part of this is presumably the regulatory complexity of combining a medical device (the contact lens) and a pharmaceutical agent. Will 2015 be the year when this innovation finally becomes a reality for patients, bringing them an enhanced quality of life through their eye care practitioners and allowing researchers to further validate the use of pharmaceutical contact lenses and propose enhancements as the technology matures? - Last, but by no means least, is the field of myopia control, the topic of the first day of the BCLA's Conference in Liverpool, June 6–9th 2015. The epidemic of myopia is a blight, particularly in Asia, with significant concerns over sight threatening pathology resulting from the elongated eye.
This is a field where real impact is already being realised through new soft contact lens optics, orthokeratology and low dose pharmaceuticals [4], but we still need to be able to better predict which technique will work best for an individual, and to develop new techniques to retard myopia progression in those who don’t respond to current treatments, without increasing their risk of complications or the treatment impacting their quality of life. So what will your New Year's resolution be to make 2015 a year of real impact, whether by advancing science or applying the findings published in journals such as Contact Lens and Anterior Eye to make a real difference to your patients’ lives?
Abstract:
Many candidate biomarkers of human ageing have been proposed in the scientific literature, but in all cases their variability in cross-sectional studies is considerable, and therefore no single measurement has proven to serve, on its own, as a useful marker of biological age. A plausible reason for this is the intrinsic multi-causal and multi-system nature of the ageing process. The recently completed MARK-AGE study was a large-scale integrated project supported by the European Commission. The major aim of this project was to conduct a population study comprising about 3200 subjects in order to identify a set of biomarkers of ageing which, as a combination of parameters with appropriate weighting, would measure biological age better than any marker in isolation.
Abstract:
The objective of this thesis is to discover “How are informal decisions reached by screeners when filtering out undesirable job applications?” Grounded theory techniques were employed in the field to observe and analyse informal decisions at the source, made by screeners in three distinct empirical studies. Whilst grounded theory provided the method for case and cross-case analysis, literature from academic and non-academic sources was evaluated and integrated to strengthen this research and create a foundation for understanding informal decisions. As informal decisions in early hiring processes have been under-researched, this thesis contributes to current knowledge in several ways. First, it locates the Cycle of Employment, which enhances Robertson and Smith’s (1993) Selection Paradigm through the integration of the stages that individuals occupy whilst seeking employment. Secondly, a general depiction of the Workflow of General Hiring Processes provides a template for practitioners to map and further develop their organisational processes. Finally, it highlights the emergence of the Locality Effect, a geographically driven heuristic and bias that can significantly impact recruitment and informal decisions. Although screeners make informal decisions using multiple variables, informal decisions are made in stages, as evidenced in the Cycle of Employment. Moreover, informal decisions can be erroneous as a result of majority and minority influence, the weighting of information, the injection of inappropriate information and criteria, and the influence of an assessor. This thesis considers these faults and develops a basic framework for understanding informal decisions from which future research can be launched.
Abstract:
In this paper, we construct a composite indicator to estimate the potential of four Central and Eastern European countries (the Czech Republic, Hungary, Poland and Slovakia) to benefit from productivity spillovers from foreign direct investment (FDI) in the manufacturing sector. Such transfers of technology are one of the main benefits of FDI for the host country, and should also be one of the main determinants of the FDI incentives offered to investing multinationals by governments, but they are difficult to assess ex ante. For our composite index, we use six components to proxy the main channels and determinants of these spillovers. We have tried several weighting and aggregation methods, and we consider our results robust. According to our results, between 2003 and 2007 all four countries were able to increase their potential to benefit from such spillovers, although there are large differences between them. The Czech Republic clearly has the most potential to benefit from productivity spillovers, while Poland has the least. The relative positions of Hungary and Slovakia depend to some extent on the exact weighting and aggregation method applied to the individual components of the index, but the differences are not large. These conclusions have important implications both for the investment strategies of multinationals and for government FDI policies.
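To illustrate why the choice of weighting and aggregation method can shift country rankings, the sketch below normalises component scores and aggregates them both arithmetically and geometrically; the components, weights, and scores are invented, and the index's actual six components are not reproduced.

```python
import math

def minmax_normalise(values):
    """Rescale one component's country scores to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def composite_linear(score_matrix, weights):
    """Weighted arithmetic aggregation; score_matrix[i][j] is the
    normalised score of country i on component j."""
    return [sum(w * s for w, s in zip(weights, row)) for row in score_matrix]

def composite_geometric(score_matrix, weights):
    """Weighted geometric aggregation; penalises uneven profiles, so it
    can reorder countries relative to the arithmetic version."""
    return [math.prod(max(s, 1e-9) ** w for w, s in zip(weights, row))
            for row in score_matrix]
```

Running both aggregations (and several weight vectors) over the same normalised components is one simple way to check, as the paper does, whether the resulting country ranking is robust.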
Abstract:
The modern grid system, or the smart grid, is likely to be populated with multiple distributed energy sources, e.g. wind power, PV power, and Plug-in Electric Vehicles (PEVs). It will also include a variety of linear and nonlinear loads. The intermittent nature of renewable energies like PV and wind turbines, and the increased penetration of Electric Vehicles (EVs), make the stable operation of the utility grid system challenging. In order to ensure stable operation of the utility grid system and to support smart grid functionalities such as fault ride-through, frequency response, reactive power support, and mitigation of power quality issues, an energy storage system (ESS) could play an important role. A fast acting bidirectional energy storage system which can rapidly provide and absorb power and/or VARs for a sufficient time is a potentially valuable tool to support this functionality. Battery energy storage systems (BESS) are one of a range of suitable energy storage systems because they can provide and absorb power for a sufficient time as well as respond reasonably fast. Conventional BESS already existing on the grid system are made up primarily of new batteries. The cost of these batteries can be high, which makes most BESS an expensive solution. In order to assist the move towards a low carbon economy and to reduce battery cost, this work researches the opportunities for the re-use of batteries after their primary use in low and ultra-low carbon vehicles (EV/HEV) on the electricity grid system. This research aims to develop a new generation of second life battery energy storage systems (SLBESS) which could interface to the low/medium voltage network to provide the necessary grid support in a reliable and cost-effective manner. The reliability/performance of these batteries is not clear, but is almost certainly worse than that of a new battery.
Manufacturers indicate that a mixture of gradual degradation and sudden failure are both possible, and failure mechanisms are likely to be related to how hard the batteries were driven inside the vehicle. Figures from a number of sources, including DECC (the Department of Energy and Climate Change) and the Arup and Cenex reports, indicate anything from 70,000 to 2.6 million electric and hybrid vehicles on the road by 2020. Once a vehicle battery has degraded to around 70-80% of its capacity, it is considered to be at the end of its first life application. This leaves capacity available for a second life at a much cheaper cost than a new BESS. Assuming a battery capability of around 5-18kWh (MHEV 5kWh - BEV 18kWh battery) and an approximate 10 year life span, this equates to a projection of battery storage capability available for second life of >1GWh by 2025. Moreover, each vehicle manufacturer has different specifications for battery chemistry, number and arrangement of battery cells, capacity, voltage, size, etc. To enable research and investment in this area and to maximize the remaining life of these batteries, one of the design challenges is to combine these hybrid batteries into a grid-tie converter where their different performance characteristics and parameter variation can be catered for, and where a hot swapping mechanism is available so that as a battery ends its second life, it can be replaced without affecting the overall system operation. This integration of either single types of batteries with vastly different performance capability, or of a hybrid battery system, into a grid-tie energy storage system is different from currently existing work on battery energy storage systems (BESS), which deals with a single type of battery with common characteristics. This thesis addresses and solves the power electronic design challenges in integrating second life hybrid batteries into a grid-tie energy storage unit for the first time.
This study details a suitable multi-modular power electronic converter and its various switching strategies which can integrate widely different batteries to a grid-tie inverter irrespective of their characteristics, voltage levels and reliability. The proposed converter provides high efficiency and enhanced control flexibility, and has the capability to operate in different operational modes from the input to the output. Designing an appropriate control system for this kind of hybrid battery storage system is also important because of the variation in battery types, differences in characteristics and different levels of degradation. This thesis proposes a generalised distributed power sharing strategy based on a weighting function, which aims to optimally use a set of hybrid batteries according to their relative characteristics while providing the necessary grid support by distributing the power between the batteries. The strategy is adaptive in nature and varies as the individual battery characteristics change in real time, for example as a result of degradation. A suitable bidirectional distributed control strategy, or module independent control technique, has been developed corresponding to each mode of operation of the proposed modular converter. Stability is an important consideration in the control of all power converters, and as such this thesis investigates the control stability of the multi-modular converter in detail. Many controllers use PI/PID based techniques with fixed control parameters. However, this is not found to be suitable from a stability point of view. Issues of control stability using this controller type under one of the operating modes have led to the development of an alternative adaptive and nonlinear Lyapunov based control for the modular power converter.
Finally, a detailed simulation and experimental validation of the proposed power converter operation, power sharing strategy, proposed control structures and control stability issue have been undertaken using a grid connected laboratory based multi-modular hybrid battery energy storage system prototype. The experimental validation has demonstrated the feasibility of this new energy storage system operation for use in future grid applications.
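The flavour of the weighting-function-based power sharing strategy described above can be sketched as distributing a grid power request in proportion to a per-module weight. The weight used here (remaining capacity times state of charge) is purely illustrative; the thesis derives its own adaptive weighting from the batteries' relative characteristics as they change in real time.

```python
def share_power(p_demand_kw, batteries):
    """Split a grid power request across heterogeneous second-life battery
    modules in proportion to a per-module weight. Weight = remaining
    capacity x state of charge (an illustrative choice, not the thesis's)."""
    weights = [b["capacity_kwh"] * b["soc"] for b in batteries]
    total = sum(weights)
    # Each module contributes its weighted fraction of the total demand.
    return [p_demand_kw * w / total for w in weights]
```

Because the weights are recomputed from current module state, a module whose effective capacity degrades is automatically asked for less power, which mirrors the adaptive behaviour the thesis describes.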
Abstract:
Implementation of a Monte Carlo simulation for the solution of population balance equations (PBEs) requires choice of the initial sample number (N0), the number of replicates (M), and the number of bins for probability distribution reconstruction (n). It is found that the squared Hellinger distance, H^2, is a useful measurement of the accuracy of a Monte Carlo (MC) simulation, and can be related directly to N0, M, and n. Asymptotic approximations of H^2 are deduced and tested for both one-dimensional (1-D) and 2-D PBEs with coalescence. The central processing unit (CPU) cost, C, is found to follow a power-law relationship, C = a·M·N0^b, with the CPU cost index, b, indicating the weighting of N0 in the total CPU cost. n must be chosen to balance accuracy and resolution. For fixed n, the product M × N0 determines the accuracy of the MC prediction; if b > 1, then the optimal solution strategy uses multiple replicates and a small sample size. Conversely, if 0 < b < 1, one replicate and a large initial sample size are preferred. © 2015 American Institute of Chemical Engineers AIChE J, 61: 2394–2402, 2015
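A minimal sketch of the two quantities named in the abstract: the squared Hellinger distance between two binned probability distributions, in its standard discrete form H^2 = 1 - Σ√(p_i·q_i), and the power-law CPU cost C = a·M·N0^b that drives the replicate-versus-sample-size trade-off.

```python
import math

def squared_hellinger(p, q):
    """H^2 = 1 - sum_i sqrt(p_i * q_i) over matching probability bins;
    0 for identical distributions, 1 for disjoint support."""
    return 1.0 - sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

def cpu_cost(a, m, n0, b):
    """Power-law CPU cost C = a * M * N0**b from the abstract."""
    return a * m * n0 ** b
```

For fixed total work M × N0, when b > 1 the cost of one large run grows faster than that of many small replicates, which is exactly why replication wins in that regime.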
Abstract:
Heterogeneous multi-core FPGAs contain different types of cores, which can improve efficiency when used with an effective online task scheduler. However, it is not easy to find the right cores for tasks when there are multiple objectives or dozens of cores. Inappropriate scheduling may cause hot spots which decrease the reliability of the chip. Given that, our research builds a simulation platform to evaluate all kinds of scheduling algorithms on a variety of architectures. On this platform, we provide an online scheduler which uses a multi-objective evolutionary algorithm (EA). Comparing the EA with current algorithms such as Predictive Dynamic Thermal Management (PDTM) and Adaptive Temperature Threshold Dynamic Thermal Management (ATDTM), we find some drawbacks in previous work. First, current algorithms are overly dependent on manually set constant parameters. Second, those algorithms neglect optimization for heterogeneous architectures. Third, they use single-objective methods, or use a linear weighting method to convert a multi-objective optimization into a single-objective optimization. Unlike other algorithms, the EA is adaptive and does not require resetting parameters when workloads switch from one to another. EAs also improve performance when used on heterogeneous architectures. An efficient Pareto front can be obtained with EAs for the purpose of multiple objectives.
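The Pareto front the EA scheduler seeks is simply the non-dominated subset of candidate schedules, which linear weighting cannot fully recover in general. A minimal sketch under a minimisation convention, with hypothetical (peak temperature, makespan) objective pairs:

```python
def dominates(a, b):
    """a dominates b (minimisation): no worse in every objective and
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of candidate schedules, e.g. points of
    (peak temperature, makespan); both objectives are hypothetical."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

An EA maintains a population of such points and improves the whole front at once, rather than re-running a weighted single-objective search per trade-off.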
Abstract:
Data fluctuation in multiple measurements of Laser Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on the Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance analysis accuracy is to improve the quality and consistency of the emission signal, such as by averaging the spectral signals or standardizing the spectra over a number of laser shots. The proposed method focuses instead on enhancing the robustness of the quantitative analysis regression model. The proposed RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation according to the statistical distribution of the measured spectral data. Through the improved segmented weighting function, the information on spectral data within the normal distribution is retained in the regression model, while the information on outliers is restrained or removed. Copper elemental concentration analysis experiments on 16 certified standard brass samples were carried out. The average relative standard deviation obtained from the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness than quantitative analysis methods based on Partial Least Squares (PLS) regression, the standard Support Vector Machine (SVM) and WLS-SVM. It was also demonstrated that the improved weighting function had better comprehensive performance in model robustness and convergence speed than four known weighting functions.
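The general idea of a segmented weighting function can be sketched as a piecewise map from standardized residuals to weights: full weight in the normal band, decaying weight in a transition band, and near-zero weight for outliers. The break points and shape below are illustrative assumptions, not the paper's fitted segmentation.

```python
def segmented_weight(residual, c1=2.5, c2=3.0):
    """Piecewise weight for a standardized residual (a generic sketch of
    the segmented weighting idea; c1 and c2 are illustrative break points):
    |r| <= c1      -> full weight (data consistent with the normal band)
    c1 < |r| <= c2 -> linearly decaying weight (transition band)
    |r| > c2       -> near-zero weight (outlier effectively removed)"""
    r = abs(residual)
    if r <= c1:
        return 1.0
    if r <= c2:
        return (c2 - r) / (c2 - c1)
    return 1e-8
```

In a WLS-SVM-style iteration, these weights rescale each sample's error term in the next fit, so outlying shots stop dragging the calibration curve while well-behaved spectra keep full influence.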
Abstract:
We study the problem of detecting sentences describing adverse drug reactions (ADRs) and frame the problem as binary classification. We investigate different neural network (NN) architectures for ADR classification. In particular, we propose two new neural network models: the Convolutional Recurrent Neural Network (CRNN), formed by concatenating convolutional neural networks with recurrent neural networks, and the Convolutional Neural Network with Attention (CNNA), formed by adding attention weights into convolutional neural networks. We evaluate various NN architectures on a Twitter dataset containing informal language and an Adverse Drug Effects (ADE) dataset constructed by sampling from MEDLINE case reports. Experimental results show that all the NN architectures considerably outperform the traditional maximum entropy classifiers trained from n-grams with different weighting strategies on both datasets. On the Twitter dataset, all the NN architectures perform similarly. On the ADE dataset, however, the plain CNN performs better than the more complex variants. Nevertheless, CNNA allows the visualisation of the attention weights of words when making classification decisions and hence is more appropriate for the extraction of word subsequences describing ADRs.
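One classical weighting scheme of the kind used by such n-gram baseline classifiers is tf-idf. A minimal unigram version is sketched below as a generic illustration; the paper's exact feature configuration is not reproduced.

```python
import math
from collections import Counter

def tfidf(docs):
    """tf-idf weights for unigram features; docs is a list of token lists.
    Terms that occur in every document receive zero weight, while terms
    concentrated in few documents are up-weighted."""
    n = len(docs)
    # Document frequency: in how many documents does each token appear?
    df = Counter(tok for doc in docs for tok in set(doc))
    weighted = []
    for doc in docs:
        tf = Counter(doc)
        weighted.append({tok: (cnt / len(doc)) * math.log(n / df[tok])
                         for tok, cnt in tf.items()})
    return weighted
```

These weighted bags of n-grams would then be fed to a maximum entropy (logistic regression) classifier, the baseline family the NN architectures are compared against.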