29 results for Multi-objective analysis


Relevance:

100.00%

Publisher:

Abstract:

Transportation service operators are witnessing a growing demand for bi-directional movement of goods. Given this, this thesis considers an extension to the vehicle routing problem (VRP) known as the delivery and pickup transportation problem (DPP), where delivery and pickup demands may occupy the same route. The problem is formulated here as the vehicle routing problem with simultaneous delivery and pickup (VRPSDP), which requires the concurrent service of both demands at the customer location. This formulation provides the greatest opportunity for cost savings for both the service provider and the recipient. The aims of this research are to propose a new theoretical design to solve the multi-objective VRPSDP, provide software support for the suggested design and validate the method through a set of experiments. A new, real-life-based multi-objective VRPSDP is studied here, which requires the minimisation of three often conflicting objectives: operated vehicle fleet size, total routing distance and the maximum variation between route distances (workload variation). The former two objectives are commonly encountered in the domain; the latter is introduced here because it is essential for real-life routing problems. The VRPSDP is a hard combinatorial optimisation problem, therefore an approximation method, the Simultaneous Delivery and Pickup method (SDPmethod), is proposed to solve it. The SDPmethod consists of three phases. The first phase constructs a set of diverse partial solutions, one of which is expected to form part of the near-optimal solution. The second phase determines assignment possibilities for each sub-problem. The third phase solves the sub-problems using a parallel genetic algorithm. The suggested genetic algorithm is improved by the introduction of a set of tools: a genetic operator switching mechanism driven by diversity thresholds, an accuracy analysis tool and a new fitness evaluation mechanism. This three-phase method addresses a shortcoming in the domain, where an initial solution is built only to be completely dismantled and redesigned in the optimisation phase. In addition, a new routing heuristic, RouteAlg, is proposed to solve the VRPSDP sub-problem, the travelling salesman problem with simultaneous delivery and pickup (TSPSDP). The experimental studies are conducted using the well-known Salhi and Nagy (1999) benchmark test problems, where the SDPmethod and RouteAlg solutions are compared with prominent works in the VRPSDP domain. The SDPmethod proves to be an effective method for solving the multi-objective VRPSDP, and RouteAlg for the TSPSDP.
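
As a concrete illustration of the three objectives, the following is a minimal sketch (not the thesis's SDPmethod) of how a candidate VRPSDP solution, represented as a list of routes, could be scored on fleet size, total distance and workload variation. The Point representation, Euclidean distances and function names are illustrative assumptions.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) depot/customer coordinates

def route_distance(depot: Point, route: List[Point]) -> float:
    """Length of a single route: depot -> customers in order -> depot."""
    stops = [depot] + route + [depot]
    return sum(math.dist(a, b) for a, b in zip(stops, stops[1:]))

def evaluate(depot: Point, routes: List[List[Point]]) -> Tuple[int, float, float]:
    """Return the three objectives to be minimised:
    (fleet size, total routing distance, workload variation)."""
    lengths = [route_distance(depot, r) for r in routes if r]
    fleet_size = len(lengths)                                   # one vehicle per non-empty route
    total_distance = sum(lengths)
    workload_variation = max(lengths) - min(lengths) if lengths else 0.0
    return fleet_size, total_distance, workload_variation

# Example: two routes out of a depot at the origin
depot = (0.0, 0.0)
routes = [[(1.0, 2.0), (3.0, 1.0)], [(-2.0, -1.0)]]
print(evaluate(depot, routes))
```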

Relevance:

100.00%

Publisher:

Abstract:

To solve multi-objective problems, multiple reward signals are often scalarized into a single value and further processed using established single-objective problem-solving techniques. While the field of multi-objective optimization has made many advances in applying scalarization techniques to obtain good solution trade-offs, the utility of applying these techniques in the multi-objective multi-agent learning domain has not yet been thoroughly investigated. Agents learn the value of their decisions by linearly scalarizing their reward signals at the local level, while acceptable system-wide behaviour results. However, the non-linear relationship between the weighting parameters of the scalarization function and the learned policy makes the discovery of system-wide trade-offs time-consuming. Our first contribution is a thorough analysis of well-known scalarization schemes within the multi-objective multi-agent reinforcement learning setting. The analysed approaches intelligently explore the weight space in order to find a wider range of system trade-offs. In our second contribution, we propose a novel adaptive-weight algorithm which interacts with the underlying local multi-objective solvers and allows for better coverage of the Pareto front. Our third contribution is the experimental validation of our approach by learning bi-objective policies in self-organising smart camera networks. We note that our algorithm (i) explores the objective space faster on many problem instances, (ii) obtains solutions that exhibit a larger hypervolume and (iii) achieves a greater spread in the objective space.
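
For readers unfamiliar with scalarization, the sketch below shows the linear scheme the abstract refers to: each agent collapses its reward vector into a single value with a weight vector, and different weight choices steer learning toward different trade-offs. The function name and the bi-objective example are illustrative assumptions, not the paper's algorithm.

```python
from typing import Sequence

def linear_scalarize(rewards: Sequence[float], weights: Sequence[float]) -> float:
    """Collapse a multi-objective reward vector into a single scalar reward."""
    assert len(rewards) == len(weights)
    return sum(w * r for w, r in zip(weights, rewards))

# A bi-objective reward (e.g. tracking quality vs. communication cost in a camera network)
reward = (0.8, -0.3)

# Two weightings produce two different scalar signals, hence different learned policies
print(linear_scalarize(reward, (0.9, 0.1)))  # favours the first objective
print(linear_scalarize(reward, (0.5, 0.5)))  # equal emphasis
```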

Relevance:

100.00%

Publisher:

Abstract:

PURPOSE: To assess the repeatability of an objective image analysis technique to determine intraocular lens (IOL) rotation and centration. SETTING: Six ophthalmology clinics across Europe. METHODS: One hundred seven patients implanted with Akreos AO aspheric IOLs with orientation marks were imaged. Image quality was rated by a masked observer. The axis of rotation was determined from a line bisecting the IOL orientation marks. This was normalized for rotation of the eye between visits using the axis bisecting 2 consistent conjunctival vessels or iris features. The centers of ovals overlaid to circumscribe the IOL optic edge and the pupil or limbus were compared to determine IOL centration. Intrasession repeatability was assessed in 40 eyes and the variability of repeated analysis examined. RESULTS: Intrasession rotational stability of the IOL was ±0.79 degrees (SD) and centration was ±0.10 mm horizontally and ±0.10 mm vertically. Repeated analysis variability of the same image was ±0.70 degrees for rotation and ±0.20 mm horizontally and ±0.31 mm vertically for centration. Eye rotation (absolute) between visits was 2.23 ± 1.84 degrees (10% > 5 degrees of rotation) using one set of consistent conjunctival vessels or iris features and 2.03 ± 1.66 degrees (7% > 5 degrees of rotation) using the average of 2 sets (P = .13). Poorer image quality resulted in larger apparent absolute IOL rotation (r = -0.45, P < .001). CONCLUSIONS: Objective analysis of digital retroillumination images allows sensitive assessment of IOL rotation and centration stability. Eye rotation between images can lead to significant errors if not taken into account. Image quality is important to analysis accuracy.
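
A minimal sketch of the geometry described above: the IOL axis is taken as the line through the two orientation marks, and the apparent rotation between visits is corrected by the rotation of a line through two fixed ocular landmarks (conjunctival vessels or iris features). The coordinates, function names and sign convention are illustrative assumptions, not the published analysis software.

```python
import math
from typing import Tuple

Point = Tuple[float, float]  # (x, y) pixel coordinates in the retroillumination image

def axis_angle(p1: Point, p2: Point) -> float:
    """Angle (degrees) of the line through two landmarks, relative to horizontal."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def iol_rotation(marks_v1, marks_v2, eye_v1, eye_v2) -> float:
    """Apparent IOL rotation between visits, normalised for rotation of the eye itself."""
    iol_change = axis_angle(*marks_v2) - axis_angle(*marks_v1)   # IOL orientation marks
    eye_change = axis_angle(*eye_v2) - axis_angle(*eye_v1)       # conjunctival/iris landmarks
    return iol_change - eye_change

# Example with made-up coordinates from two visits
print(iol_rotation(((100, 200), (300, 210)), ((100, 200), (300, 225)),
                   ((50, 400), (350, 405)),  ((50, 400), (350, 407))))
```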

Relevance:

100.00%

Publisher:

Abstract:

Market mechanisms are a means by which resources in contention can be allocated between contending parties, both in human economies and in those populated by software agents. Designing such mechanisms has traditionally been carried out by hand, and more recently by automation. Assessing these mechanisms typically involves evaluating them with respect to multiple conflicting objectives, which can often be nonlinear, noisy and expensive to compute. For typical performance objectives, it is known that designed mechanisms often fall short of being optimal across all objectives simultaneously. However, in all previous automated approaches, either only a single objective is considered, or the multiple performance objectives are combined into a single objective. In this paper we do not aggregate objectives; instead we consider a direct, novel application of multi-objective evolutionary algorithms (MOEAs) to the problem of automated mechanism design. This allows the automatic discovery of the trade-offs that such objectives impose on mechanisms. We pose the problem of mechanism design, specifically for the class of linear redistribution mechanisms, as a naturally existing multi-objective optimisation problem. We apply a modified version of NSGA-II to design mechanisms within this class, given economically relevant objectives such as welfare and fairness. This application of NSGA-II exposes trade-offs between objectives, revealing relationships between them that were otherwise unknown for this mechanism class. The understanding of the trade-offs gained from applying MOEAs can thus help practitioners apply the discovered mechanisms insightfully in their respective real or artificial markets.
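
The core idea NSGA-II relies on is Pareto dominance: a candidate mechanism is kept only if no other candidate is at least as good on every objective and strictly better on at least one. Below is a minimal sketch of that filter, assuming both objectives (e.g. welfare and fairness) are to be maximised; it is not the modified NSGA-II used in the paper.

```python
from typing import List, Tuple

Objectives = Tuple[float, float]  # (welfare, fairness), both to be maximised

def dominates(a: Objectives, b: Objectives) -> bool:
    """True if a is at least as good as b on every objective and better on one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates: List[Objectives]) -> List[Objectives]:
    """Keep only non-dominated candidates: the observable trade-off surface."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

# Each tuple is a candidate mechanism evaluated on (welfare, fairness)
evaluated = [(0.90, 0.40), (0.70, 0.70), (0.60, 0.65), (0.95, 0.30)]
print(pareto_front(evaluated))   # -> [(0.90, 0.40), (0.70, 0.70), (0.95, 0.30)]
```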

Relevance:

100.00%

Publisher:

Abstract:

Purpose: To assess the validity and repeatability of objective compared to subjective contact lens fit analysis. Methods: Thirty-five subjects (aged 22.0 ± 3.0 years) wore two different soft contact lens designs. Four lens fit variables (centration, horizontal lag, post-blink movement in up-gaze and push-up recovery speed) were assessed subjectively (four observers) and objectively from slit-lamp biomicroscopy captured images and video. The analysis was repeated a week later. Results: The average of the four experienced observers was compared to objective measures, but centration, movement on blink, lag and push-up recovery speed all varied significantly between them (p < 0.001). Horizontal lens centration was on average close to central as assessed both objectively and subjectively (p > 0.05). The 95% confidence interval of subjective repeatability was better than objective assessment (±0.128 mm versus ±0.168 mm, p = 0.417), but utilised only 78% of the objective range. Vertical centration assessed objectively showed a slight inferior decentration (0.371 ± 0.381 mm) with good inter- and intrasession repeatability (p > 0.05). Movement-on-blink was estimated lower subjectively than measured objectively (0.269 ± 0.179 mm versus 0.352 ± 0.355 mm; p = 0.035), but had better repeatability (±0.124 mm versus ±0.314 mm 95% confidence interval) unless correcting for the smaller range (47%). Horizontal lag was estimated lower subjectively (0.562 ± 0.259 mm) than measured objectively (0.708 ± 0.374 mm, p < 0.001), had poorer repeatability (±0.132 mm versus ±0.089 mm 95% confidence interval) and had a smaller range (63%). Subjective categorisation of push-up speed of recovery showed reasonable differentiation relative to objective measurement (p < 0.001). Conclusions: The objective image analysis allows an accurate, reliable and repeatable assessment of soft contact lens fit characteristics, making it a useful tool for research and for the optimisation of lens fit in clinical practice.
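
The repeatability figures quoted above (e.g. ±0.128 mm) are 95% limits on test-retest differences. One common way to obtain such a value is the coefficient of repeatability, 1.96 times the standard deviation of the paired differences between two sessions; the sketch below is an illustrative calculation under that assumption, not the paper's exact statistical procedure, and the data are hypothetical.

```python
import statistics
from typing import Sequence

def coefficient_of_repeatability(session1: Sequence[float], session2: Sequence[float]) -> float:
    """1.96 x SD of paired differences: the range within which ~95% of repeats fall."""
    diffs = [a - b for a, b in zip(session1, session2)]
    return 1.96 * statistics.stdev(diffs)

# Hypothetical vertical centration (mm) for the same lenses measured a week apart
visit1 = [0.35, 0.42, 0.30, 0.55, 0.38]
visit2 = [0.33, 0.47, 0.28, 0.50, 0.41]
print(round(coefficient_of_repeatability(visit1, visit2), 3))
```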

Relevance:

100.00%

Publisher:

Abstract:

Heterogeneous multi-core FPGAs contain different types of cores, which can improve efficiency when used with an effective online task scheduler. However, it is not easy to find the right cores for tasks when there are multiple objectives or dozens of cores. Inappropriate scheduling may cause hot spots which decrease the reliability of the chip. Given that, our research builds a simulation platform to evaluate a range of scheduling algorithms on a variety of architectures. On this platform, we provide an online scheduler which uses a multi-objective evolutionary algorithm (EA). Comparing the EA with current algorithms such as Predictive Dynamic Thermal Management (PDTM) and Adaptive Temperature Threshold Dynamic Thermal Management (ATDTM), we find some drawbacks in previous work. First, current algorithms are overly dependent on manually set constant parameters. Second, those algorithms neglect optimization for heterogeneous architectures. Third, they use single-objective methods, or use a linear weighting method to convert a multi-objective optimization into a single-objective optimization. Unlike other algorithms, the EA is adaptive and does not require resetting parameters when workloads switch from one to another. The EA also improves performance when used on heterogeneous architectures. An efficient Pareto front can be obtained with the EA when multiple objectives are pursued.
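
To make the scheduling objectives concrete, the sketch below evaluates one candidate task-to-core mapping on two criteria of the kind discussed: estimated completion time and a crude load-imbalance proxy for hot spots. An EA would compare such objective vectors by Pareto dominance rather than by a fixed weighted sum. The core speeds, task costs and the load proxy are illustrative assumptions, not the platform described in the abstract.

```python
from typing import List, Tuple

def evaluate_mapping(task_cost: List[float],
                     core_speed: List[float],
                     mapping: List[int]) -> Tuple[float, float]:
    """Return (makespan, peak load share) for one task-to-core assignment."""
    core_time = [0.0] * len(core_speed)
    for task, core in enumerate(mapping):
        core_time[core] += task_cost[task] / core_speed[core]   # slower cores take longer
    makespan = max(core_time)                                    # overall completion time
    peak_load = max(core_time) / sum(core_time)                  # crude hot-spot indicator
    return makespan, peak_load

# Four tasks on a heterogeneous pair of cores (core 1 is twice as fast as core 0)
print(evaluate_mapping([4.0, 2.0, 3.0, 1.0], [1.0, 2.0], [0, 1, 1, 0]))
```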

Relevance:

100.00%

Publisher:

Abstract:

This paper introduces a new mathematical method for improving the discrimination power of data envelopment analysis and for completely ranking the efficient decision-making units (DMUs). A fuzzy concept is utilised. For this purpose, all DMUs are first evaluated with the CCR model. Thereafter, the resulting weights for each output are treated as fuzzy sets and are then converted to fuzzy numbers. The introduced model is a multi-objective linear model whose endpoints are the highest and lowest of the weighted values. An added advantage of the model is its ability to handle the infeasibility situation sometimes faced by previously introduced models.
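
For reference, the standard input-oriented, multiplier-form CCR model used in the first step is reproduced below for DMU o with inputs x_{ij} and outputs y_{rj}; this is the textbook formulation, not the fuzzy extension proposed in the paper.

```latex
\begin{aligned}
\max \quad & \theta_o = \sum_{r} u_r \, y_{ro} \\
\text{s.t.} \quad & \sum_{i} v_i \, x_{io} = 1, \\
& \sum_{r} u_r \, y_{rj} - \sum_{i} v_i \, x_{ij} \le 0 \qquad \text{for every DMU } j, \\
& u_r, v_i \ge 0 .
\end{aligned}
```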

Relevance:

100.00%

Publisher:

Abstract:

Data envelopment analysis (DEA), as introduced by Charnes, Cooper, and Rhodes (1978), is a linear programming technique that has been widely used to evaluate the relative efficiency of a set of homogenous decision-making units (DMUs). In many real applications, the input-output variables cannot be precisely measured. This is particularly important in assessing the efficiency of DMUs using DEA, since the efficiency scores of inefficient DMUs are very sensitive to possible data errors. Hence, several approaches have been proposed to deal with imprecise data. Perhaps the most popular fuzzy DEA model is based on the α-cut. One drawback of the α-cut approach is that it cannot include all information about uncertainty. This paper aims to introduce an alternative linear programming model that can include some uncertainty information from the intervals within the α-cut approach. We introduce the concept of the "local α-level" to develop a multi-objective linear programming model to measure the efficiency of DMUs under uncertainty. An example is given to illustrate the use of this method.
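
The α-cut mentioned here is the standard construction for a fuzzy number Ã with membership function μ: for each level α in (0, 1] it yields an ordinary interval, so fuzzy inputs and outputs become interval data. For a triangular fuzzy number (l, m, u) this interval has the closed form shown below; the paper's own "local α-level" refinement is not reproduced here.

```latex
[\tilde{A}]_{\alpha} = \{\, x : \mu_{\tilde{A}}(x) \ge \alpha \,\},
\qquad
[(l, m, u)]_{\alpha} = \bigl[\, l + \alpha (m - l), \; u - \alpha (u - m) \,\bigr].
```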

Relevance:

100.00%

Publisher:

Abstract:

Aims: To establish the sensitivity and reliability of objective image analysis in direct comparison with subjective grading of bulbar hyperaemia. Methods: Images of the same eyes were captured across a range of bulbar hyperaemia caused by vasodilation. The progression was recorded and 45 images extracted. The images were objectively analysed on 14 occasions using previously validated edge-detection and colour-extraction techniques. They were also graded by 14 eye-care practitioners (ECPs) and 14 non-clinicians (NCs) using the Efron scale. Six ECPs repeated the grading on three separate occasions. Results: Subjective grading was only able to differentiate images with differences in grade of 0.70-1.03 Efron units (sensitivity of 0.30-0.53), compared to 0.02-0.09 Efron units with objective techniques (sensitivity of 0.94-0.99). Significant differences were found between ECPs, and individual repeats were also inconsistent (p < 0.001). Objective analysis was 16 times more reliable than subjective analysis. The NCs used wider ranges of the scale but were more variable than the ECPs, implying that training may have an effect on grading. Conclusions: Objective analysis may offer a new gold standard in anterior ocular examination, and should be developed further as a clinical research tool to allow more highly powered analysis and to enhance the clinical monitoring of anterior eye disease.
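
As an indication of what "edge detection and colour extraction" amount to in this context, the sketch below computes two simple per-image scores from a bulbar conjunctiva photograph: the fraction of edge pixels (a proxy for vessel coverage) and the relative strength of the red channel. OpenCV and NumPy are assumed, and the thresholds and metrics are illustrative only; the validated techniques cited in the abstract are not reproduced here.

```python
import cv2
import numpy as np

def hyperaemia_scores(image_path: str) -> tuple:
    """Return (edge fraction, relative redness) for a conjunctival image."""
    bgr = cv2.imread(image_path)
    if bgr is None:
        raise FileNotFoundError(image_path)
    grey = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(grey, 50, 150)                 # vessel edges (thresholds illustrative)
    edge_fraction = float(np.count_nonzero(edges)) / edges.size
    b, g, r = [c.astype(np.float64) for c in cv2.split(bgr)]
    relative_redness = float(np.mean(r / (r + g + b + 1e-6)))
    return edge_fraction, relative_redness

# print(hyperaemia_scores("bulbar_image.png"))  # hypothetical file name
```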

Relevance:

100.00%

Publisher:

Abstract:

Lack of discrimination power and poor weight dispersion remain major issues in Data Envelopment Analysis (DEA). Since the initial multiple criteria DEA (MCDEA) model was developed in the late 1990s, only goal programming approaches, namely GPDEA-CCR and GPDEA-BCC, have been introduced for solving these problems in a multi-objective framework. We found the GPDEA models to be invalid and demonstrate that our proposed bi-objective multiple criteria DEA (BiO-MCDEA) outperforms the GPDEA models in the aspects of discrimination power and weight dispersion, as well as requiring less computational code. An application to energy dependency among 25 European Union member countries is further used to demonstrate the efficacy of our approach.

Relevance:

100.00%

Publisher:

Abstract:

One of the major challenges in measuring efficiency in terms of resources and outcomes is the assessment of the evolution of units over time. Although Data Envelopment Analysis (DEA) has been applied to time series datasets, DEA models, by construction, form the reference set for inefficient units (lambda values) based on their distance from the efficient frontier, that is, in a spatial manner. However, when dealing with temporal datasets, the proximity in time between units should also be taken into account, since it reflects the structural resemblance among time periods of a unit that evolves. In this paper, we propose a two-stage spatiotemporal DEA approach, which captures both the spatial and the temporal dimension through a multi-objective programming model. In the first stage, DEA is solved iteratively, admitting for each unit only previous DMUs as peers in its reference set. In the second stage, the lambda values derived from the first stage are fed into a multi-objective mixed integer linear programming model, which filters peers in the reference set based on weights assigned to the spatial and temporal dimensions. The approach is demonstrated on a real-world example drawn from software development.
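
A minimal sketch of the first stage described above, assuming a single unit observed over several periods with one column of inputs and one of outputs per period: for each period t, an input-oriented envelopment (CCR) model is solved with only periods up to t admitted as peers, so the lambda values can only point backwards in time. scipy.optimize.linprog is used for brevity; the second-stage multi-objective MILP is not reproduced, and the data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def temporal_ccr(X: np.ndarray, Y: np.ndarray, t: int) -> tuple:
    """Input-oriented CCR efficiency of period t (0-based), with only periods 0..t as peers.
    X: inputs, shape (m, T); Y: outputs, shape (s, T). Returns (theta, lambdas)."""
    m, _ = X.shape
    s, _ = Y.shape
    n = t + 1                                            # admissible peers: periods 0..t
    # Decision variables: [theta, lambda_0 .. lambda_t]; minimise theta.
    c = np.concatenate(([1.0], np.zeros(n)))
    # Input constraints:  sum_j lambda_j * x_ij - theta * x_it <= 0
    A_in = np.hstack((-X[:, [t]], X[:, :n]))
    b_in = np.zeros(m)
    # Output constraints: -sum_j lambda_j * y_rj <= -y_rt  (outputs at least matched)
    A_out = np.hstack((np.zeros((s, 1)), -Y[:, :n]))
    b_out = -Y[:, t]
    res = linprog(c, A_ub=np.vstack((A_in, A_out)), b_ub=np.concatenate((b_in, b_out)),
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0], res.x[1:]

# One input and one output observed over four periods (illustrative figures)
X = np.array([[10.0, 9.0, 8.0, 8.5]])
Y = np.array([[5.0, 6.0, 7.0, 6.5]])
theta, lambdas = temporal_ccr(X, Y, t=3)
print(round(theta, 3), np.round(lambdas, 3))
```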

Relevance:

90.00%

Publisher:

Abstract:

Group decision making is the study of identifying and selecting alternatives based on the values and preferences of the decision maker. Making a decision implies that there are several alternative choices to be considered. This paper uses the concept of Data Envelopment Analysis to introduce a new mathematical method for selecting the best alternative in a group decision-making environment. The introduced model is a multi-objective model that is converted into a multi-objective linear programming model, from which the optimal solution is obtained. A numerical example shows how the new model can be applied to rank the alternatives or to choose a subset of the most promising alternatives.

Relevance:

90.00%

Publisher:

Abstract:

The present work describes the development of a proton induced X-ray emission (PIXE) analysis system, especially designed and built for routine quantitative multi-elemental analysis of a large number of samples. The historical and general developments of the analytical technique and the physical processes involved are discussed. The philosophy, design, constructional details and evaluation of a versatile vacuum chamber, an automatic multi-sample changer, an on-demand beam pulsing system and an ion beam current monitoring facility are described. The system calibration, using thin standard foils of Si, P, S, Cl, K, Ca, Ti, V, Fe, Cu, Ga, Ge, Rb, Y and Mo, was undertaken at proton beam energies of 1 to 3 MeV in steps of 0.5 MeV and compared with theoretical calculations. An independent calibration check using bovine liver Standard Reference Material was performed. The minimum detectable limits have been experimentally determined at detector positions of 90° and 135° with respect to the incident beam for the above range of proton energies as a function of atomic number Z. The system has detection limits of typically 10⁻⁷ to 10⁻⁹ g for elements 14 < Z […] analysis and calculations of areal density of thin foils using Rutherford backscattering data. Amniotic fluid samples supplied by South Sefton Health Authority were successfully analysed for their low baseline elemental concentrations. In conclusion, the findings of this work are discussed, with suggestions for further work.