87 results for: data movement problem
Abstract:
This paper proposes a data envelopment analysis (DEA) model for selecting the most efficient alternative in advanced manufacturing technology in the presence of both cardinal and ordinal data. The paper explains the drawback of using an iterative method to find the most efficient alternative and proposes a new DEA model that does not require solving a series of LPs. A numerical example illustrates the model, and an application in technology selection with multiple inputs and outputs shows the usefulness of the proposed approach. © 2012 Springer-Verlag London Limited.
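For orientation, the sketch below shows the standard input-oriented CCR envelopment LP that DEA-based selection methods build on, i.e. the program an iterative approach would solve once per alternative. It is not the paper's cardinal/ordinal model; the toy data and function name are illustrative.

```python
# Minimal sketch of the input-oriented CCR efficiency LP (not the paper's model).
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """X: (m inputs x n DMUs), Y: (s outputs x n DMUs); returns theta for DMU o."""
    m, n = X.shape
    s, _ = Y.shape
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimise theta
    A_in = np.hstack([-X[:, [o]], X])            # sum_j lam_j x_ij - theta x_io <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])    # -sum_j lam_j y_rj <= -y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

X = np.array([[2.0, 3.0, 4.0], [1.0, 2.0, 1.0]])  # toy data: 2 inputs, 3 alternatives
Y = np.array([[3.0, 5.0, 4.0]])                   # 1 output
print([round(ccr_efficiency(X, Y, o), 3) for o in range(3)])
```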
Abstract:
Existing formulations of the assignment problem, which assigns n jobs to n individuals, are limited to costs or profits measured as crisp values. However, in many real applications, costs are not deterministic numbers. This paper develops a procedure based on the data envelopment analysis (DEA) method to solve assignment problems with fuzzy costs or fuzzy profits for each possible assignment. It aims to obtain the points with maximum membership values for the fuzzy parameters while maximizing the profit or minimizing the assignment cost. In this method, a discrete approach is first presented to rank the fuzzy numbers. Then, corresponding to each fuzzy number, a crisp number is introduced using the efficiency concept. A numerical example is used to illustrate the usefulness of this new method. © 2012 Operational Research Society Ltd. All rights reserved.
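As a hedged illustration of the overall pipeline (not the paper's DEA-based ranking step), the sketch below defuzzifies triangular fuzzy costs by their centroid and then solves the resulting crisp assignment problem; the fuzzy cost values are invented for the example.

```python
# Triangular fuzzy costs -> crisp costs (centroid) -> standard assignment solve.
# The centroid stands in for the paper's DEA/efficiency-based ranking of fuzzy numbers.
import numpy as np
from scipy.optimize import linear_sum_assignment

# fuzzy_cost[i][j] = (low, mode, high) fuzzy cost of assigning job j to person i
fuzzy_cost = np.array([
    [(3, 4, 6), (7, 8, 9)],
    [(2, 5, 7), (1, 2, 4)],
], dtype=float)

crisp = fuzzy_cost.mean(axis=2)            # centroid of a triangular number = (l+m+h)/3
rows, cols = linear_sum_assignment(crisp)  # minimise total defuzzified cost
print(list(zip(rows, cols)), crisp[rows, cols].sum())
```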
Abstract:
In the traditional TOPSIS, the ideal solutions are assumed to be located at the endpoints of the data interval. However, not all performance attributes have ideal values at the endpoints. We term performance attributes whose ideal values lie at the extreme points Type-1 attributes. Type-2 attributes, by contrast, possess ideal values somewhere within the data interval rather than at the extreme endpoints. This poses a preference ranking problem when all attributes are computed under the assumption that they are of the Type-1 kind. To overcome this issue, we propose a new Fuzzy DEA method for computing the ideal values and distance function of Type-2 attributes in a TOPSIS methodology. Our method allows Type-1 and Type-2 attributes to be included in an evaluation system without compromising the ranking quality. The efficacy of the proposed model is illustrated with a vendor evaluation case for a high-tech investment decision-making exercise. A comparison analysis with the traditional TOPSIS is also presented. © 2012 Springer Science+Business Media B.V.
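The sketch below illustrates the underlying idea in plain TOPSIS terms, without the paper's Fuzzy DEA machinery: a Type-2 attribute's ideal value is an interior target rather than an endpoint, and distances to the ideal and anti-ideal are measured accordingly. The target value and vendor data are assumptions for illustration.

```python
# TOPSIS-style ranking with one Type-1 (benefit) and one Type-2 (interior-ideal) attribute.
import numpy as np

X = np.array([[0.6, 5.0],
              [0.9, 7.2],
              [0.7, 9.5]])          # rows: vendors; col 0: Type-1, col 1: Type-2
target = 7.0                        # assumed interior ideal for the Type-2 attribute

ideal = np.array([X[:, 0].max(), target])
anti = np.array([X[:, 0].min(), X[np.argmax(np.abs(X[:, 1] - target)), 1]])

d_pos = np.linalg.norm(X - ideal, axis=1)   # distance to ideal solution
d_neg = np.linalg.norm(X - anti, axis=1)    # distance to anti-ideal solution
closeness = d_neg / (d_pos + d_neg)         # TOPSIS closeness coefficient
print(np.argsort(-closeness))               # ranking, best vendor first
```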
Abstract:
Although crisp data are fundamentally indispensable for determining the profit Malmquist productivity index (MPI), the observed values in real-world problems are often imprecise or vague. These imprecise or vague data can be suitably characterized with fuzzy and interval methods. In this paper, we reformulate the conventional profit MPI problem as an imprecise data envelopment analysis (DEA) problem, and propose two novel methods for measuring the overall profit MPI when the inputs, outputs, and price vectors are fuzzy or vary in intervals. We develop a fuzzy version of the conventional MPI model by using a ranking method, and solve the model with a commercial off-the-shelf DEA software package. In addition, we define an interval for the overall profit MPI of each decision-making unit (DMU) and divide the DMUs into six groups according to the intervals obtained for their overall profit efficiency and MPIs. We also present two numerical examples to demonstrate the applicability of the two proposed models and exhibit the efficacy of the procedures and algorithms. © 2011 Elsevier Ltd.
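For reference, the standard geometric-mean form of the Malmquist index is reproduced below; the paper's overall profit MPI builds on this form with profit-based efficiency measures and fuzzy or interval data, so the notation here is generic rather than the paper's.

```latex
% Standard geometric-mean Malmquist productivity index, where D^t denotes the
% distance function relative to the period-t technology; MPI > 1 indicates
% productivity growth between periods t and t+1.
\[
\mathrm{MPI} =
\left[
\frac{D^{t}\!\left(x^{t+1}, y^{t+1}\right)}{D^{t}\!\left(x^{t}, y^{t}\right)}
\cdot
\frac{D^{t+1}\!\left(x^{t+1}, y^{t+1}\right)}{D^{t+1}\!\left(x^{t}, y^{t}\right)}
\right]^{1/2}
\]
```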
Abstract:
The increased data complexity and task interdependency associated with servitization represent significant barriers to its adoption. The outline of a business game is presented which demonstrates the increasing complexity of the management problem when moving through the Base, Intermediate and Advanced levels of servitization. Linked Data is proposed as an agile set of technologies, based on well-established standards, for data exchange both in the game and, more generally, in supply chains.
Abstract:
EEG hyperscanning is a method for studying two or more individuals simultaneously with the objective of elucidating how co-variations in their neural activity (i.e., hyperconnectivity) are influenced by their behavioral and social interactions. The aim of this study was to compare the performance of different hyperconnectivity measures using (i) simulated data, where the degree of coupling could be systematically manipulated, and (ii) individually recorded human EEG combined into pseudo-pairs of participants where no hyper-connections could exist. With simulated data we found that each of the most widely used measures of hyperconnectivity was biased and detected hyper-connections where none existed. With pseudo-pairs of human data we found spurious hyper-connections that arose because there were genuine similarities between the EEG recorded from different people independently but under the same experimental conditions. Specifically, there were systematic differences between experimental conditions in the rhythmicity of the EEG that were common across participants. As any imbalance between experimental conditions in terms of stimulus presentation or movement may affect the rhythmicity of the EEG, this problem could apply in many hyperscanning contexts. Furthermore, as these spurious hyper-connections reflected real similarities between the EEGs, they were not Type I errors that could be overcome by some appropriate statistical control. However, some measures that have not previously been used in hyperconnectivity studies, notably the circular correlation coefficient (CCorr), were less susceptible to detecting spurious hyper-connections of this type. The reason for this advantage in performance is discussed and the use of the CCorr as an alternative measure of hyperconnectivity is advocated. © 2013 Burgess.
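A sketch of the CCorr measure advocated above, following the standard Fisher-Lee definition of circular correlation between two phase series; the simulated coupled phases are illustrative, and in practice the phases would come from, e.g., the Hilbert transform of band-filtered EEG.

```python
# Circular correlation coefficient (CCorr) between two phase time series.
import numpy as np

def circular_mean(phi):
    return np.angle(np.mean(np.exp(1j * phi)))   # direction of the mean resultant vector

def ccorr(alpha, beta):
    a = np.sin(alpha - circular_mean(alpha))
    b = np.sin(beta - circular_mean(beta))
    return np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2))

rng = np.random.default_rng(0)
phase1 = rng.uniform(-np.pi, np.pi, 1000)
phase2 = np.mod(phase1 + rng.normal(0, 0.5, 1000) + np.pi, 2 * np.pi) - np.pi  # coupled
print(ccorr(phase1, phase2))   # near 1 for strongly coupled phases, near 0 for independent
```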
Abstract:
Optical data communication systems are prone to a variety of processes that modify the transmitted signal and introduce errors in the discrimination of 1s from 0s. This is a difficult, and commercially important, problem to solve. Errors must be detected and corrected at high speed, and the classifier must be very accurate; ideally, it should also be tunable to the characteristics of individual communication links. We show that simple single-layer neural networks may be used to address these problems, and examine how different input representations affect the accuracy of bit error correction. Our results lead us to conclude that a system based on these principles can perform at least as well as an existing non-trainable error correction system, whilst being tunable to suit the individual characteristics of different communication links.
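A minimal single-layer network of the kind described, sketched as a logistic unit trained on noisy signal-level windows; the input representation and toy channel model are assumptions, not the paper's setup.

```python
# Single-layer (logistic) classifier deciding 1s from 0s given noisy received samples.
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 5                       # d received samples per bit-decision window
bits = rng.integers(0, 2, n)
X = bits[:, None] + rng.normal(0, 0.4, (n, d))   # assumed noisy signal levels

w, b, lr = np.zeros(d), 0.0, 0.1
for _ in range(200):                 # batch gradient descent on the logistic loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - bits
    w -= lr * X.T @ g / n
    b -= lr * g.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print("training accuracy:", (pred == bits).mean())
```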
Abstract:
An iterative method for the reconstruction of solutions to second-order elliptic equations from Cauchy data given on a part of the boundary is presented. At each iteration step, a series of mixed well-posed boundary value problems is solved for the elliptic operator and its adjoint. A convergence proof of this method in a weighted L2 space is included. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
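The abstract does not spell out the scheme, but iterations of this family typically take a Landweber-like form. Below is a hedged sketch under assumed notation, with Lu = 0 in \Omega, Cauchy data (f, g) on the accessible boundary part \Gamma_0, and unknown Neumann data \eta on the inaccessible part \Gamma_1; the paper's actual mixed problems may differ.

```latex
% Hedged sketch of a Landweber-type iteration for the elliptic Cauchy problem
% (notation assumed here, not copied from the paper):
\begin{align*}
&\text{direct step: solve } L u_k = 0 \text{ in } \Omega,\quad
  u_k = f \text{ on } \Gamma_0,\quad
  \partial_\nu u_k = \eta_k \text{ on } \Gamma_1;\\
&\text{adjoint step: solve } L^{*} v_k = 0 \text{ in } \Omega,\quad
  v_k = \partial_\nu u_k - g \text{ on } \Gamma_0,\quad
  \partial_\nu v_k = 0 \text{ on } \Gamma_1;\\
&\text{update: } \eta_{k+1} = \eta_k + \gamma\, v_k\big|_{\Gamma_1},
  \qquad \gamma > 0 \text{ a fixed relaxation parameter,}
\end{align*}
% iterated until a discrepancy-type stopping rule fires when the data are noisy.
```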
Abstract:
In this paper, three iterative procedures (the Landweber-Fridman, conjugate gradient and minimal error methods) for obtaining a stable solution to the Cauchy problem in slow viscous flows are presented and compared. A section is devoted to the numerical investigation of these algorithms, in which we use the boundary element method together with efficient stopping criteria for terminating the iteration process in order to obtain stable solutions.
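Of the three procedures, the Landweber-Fridman iteration is the simplest to sketch. Below it is applied to a generic discretised ill-posed system Ax = y with a discrepancy-principle stopping rule; the paper works with boundary element discretisations of the Stokes system, so the toy matrix here is purely illustrative.

```python
# Landweber-Fridman iteration with discrepancy-principle stopping on A x = y.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(40, 12))              # toy operator (real problems are far worse conditioned)
x_true = rng.normal(size=12)
noise = 1e-3
y = A @ x_true + noise * rng.normal(size=40)

omega = 1.0 / np.linalg.norm(A, 2) ** 2    # relaxation parameter, 0 < omega < 2/||A||^2
tau, delta = 1.5, noise * np.sqrt(40)      # stop when ||residual|| <= tau * (noise level)
x = np.zeros(12)
for k in range(100000):
    r = y - A @ x
    if np.linalg.norm(r) <= tau * delta:   # discrepancy principle: stop before noise amplifies
        break
    x = x + omega * A.T @ r                # Landweber step
print(k, np.linalg.norm(x - x_true))
```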
Abstract:
An iterative procedure for determining temperature fields from Cauchy data given on a part of the boundary is presented. At each iteration step, a series of mixed well-posed boundary value problems is solved for the heat operator and its adjoint. A convergence proof of this method in a weighted L2-space is included, as well as a stopping criterion for the case of noisy data. Moreover, a solvability result in a weighted Sobolev space for a second-order parabolic initial boundary value problem with mixed boundary conditions is presented. Regularity of the solution is proved. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
Abstract:
An iterative method for reconstruction of the solution to a parabolic initial boundary value problem of second order from Cauchy data is presented. The data are given on a part of the boundary. At each iteration step, a series of well-posed mixed boundary value problems is solved for the parabolic operator and its adjoint. The convergence proof of this method in a weighted L2-space is included.
Abstract:
Transportation service operators are witnessing a growing demand for the bi-directional movement of goods. Given this, this thesis considers an extension to the vehicle routing problem (VRP) known as the delivery and pickup transportation problem (DPP), where delivery and pickup demands may occupy the same route. The problem is formulated here as the vehicle routing problem with simultaneous delivery and pickup (VRPSDP), which requires concurrent service of the demands at the customer location. This formulation provides the greatest opportunity for cost savings for both the service provider and the recipient. The aims of this research are to propose a new theoretical design to solve the multi-objective VRPSDP, provide software support for the suggested design and validate the method through a set of experiments. A new real-life-based multi-objective VRPSDP is studied here, which requires the minimisation of three often conflicting objectives: operated vehicle fleet size, total routing distance and the maximum variation between route distances (workload variation). The former two objectives are commonly encountered in the domain; the latter is introduced here because it is essential for real-life routing problems. Since the VRPSDP is a hard combinatorial optimisation problem, an approximation method, the Simultaneous Delivery and Pickup method (SDPmethod), is proposed to solve it. The SDPmethod consists of three phases. The first phase constructs a set of diverse partial solutions, one of which is expected to form part of the near-optimal solution. The second phase determines assignment possibilities for each sub-problem. The third phase solves the sub-problems using a parallel genetic algorithm. The suggested genetic algorithm is improved by the introduction of a set of tools: a genetic operator switching mechanism via diversity thresholds, an accuracy analysis tool and a new fitness evaluation mechanism. This three-phase method is proposed to address a shortcoming in the domain, where an initial solution is built only to be completely dismantled and redesigned in the optimisation phase. In addition, a new routing heuristic, RouteAlg, is proposed to solve the VRPSDP sub-problem, the travelling salesman problem with simultaneous delivery and pickup (TSPSDP). The experimental studies are conducted using the well-known benchmark test problems of Salhi and Nagy (1999), where the SDPmethod and RouteAlg solutions are compared with prominent works in the VRPSDP domain. The SDPmethod is shown to be an effective method for solving the multi-objective VRPSDP, and the RouteAlg for the TSPSDP.
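The defining complication of the VRPSDP is visible in a few lines: because every customer is served by one concurrent delivery-and-pickup visit, the vehicle load fluctuates along the route and capacity can be violated mid-route even when the total demands fit. The check below is a generic illustration with invented data, not the thesis's SDPmethod or RouteAlg.

```python
# Load-feasibility check for a VRPSDP route: the vehicle leaves the depot with
# all deliveries on board, and at each customer drops off the delivery and
# collects the pickup; the running load must never exceed capacity.
def route_is_feasible(route, delivery, pickup, capacity):
    load = sum(delivery[c] for c in route)     # initial load out of the depot
    if load > capacity:
        return False
    for c in route:
        load = load - delivery[c] + pickup[c]  # simultaneous delivery and pickup
        if load > capacity:
            return False
    return True

delivery = {1: 4, 2: 3, 3: 5}
pickup   = {1: 1, 2: 6, 3: 1}
print(route_is_feasible([1, 2, 3], delivery, pickup, capacity=13))  # True
print(route_is_feasible([2, 1, 3], delivery, pickup, capacity=13))  # False: visit order matters
```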
Abstract:
We extend a meshless method of fundamental solutions recently proposed by the authors for the one-dimensional two-phase inverse linear Stefan problem to the nonlinear case. In this latter situation the free surface is also considered unknown, which is more realistic from the practical point of view. Building on the earlier work, the solution is approximated in each phase by a linear combination of fundamental solutions to the heat equation. The implementation and analysis are more complicated in the present situation, since one needs to deal with a nonlinear minimization problem to identify the free surface. Furthermore, the inverse problem is ill-posed, since small errors in the input measured data can cause large deviations in the desired solution. Therefore, regularization needs to be incorporated in the objective function that is minimized in order to obtain a stable solution. Numerical results are presented and discussed. © 2014 IMACS.
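For concreteness, a hedged sketch of the MFS ingredients for the one-dimensional heat equation is given below, with notation assumed here rather than taken from the paper: the fundamental solution, the linear-combination ansatz used in each phase, and a Tikhonov-regularised objective of the kind the abstract refers to.

```latex
% H is the Heaviside function; the source points (\xi_j, \tau_j) are placed
% outside the solution domain, as is usual in the MFS.
\[
F(x,t) = \frac{H(t)}{2\sqrt{\pi t}}\, e^{-x^{2}/(4t)},
\qquad
u(x,t) \approx \sum_{j=1}^{N} c_j\, F\!\left(x-\xi_j,\, t-\tau_j\right).
\]
% A Tikhonov-regularised least-squares objective, with regularisation
% parameter \lambda > 0 and s parametrising the unknown free surface:
\[
\min_{c,\,s}\; \left\lVert u(c,s) - u^{\mathrm{meas}} \right\rVert_2^{2}
+ \lambda \left( \lVert c \rVert_2^{2} + \lVert s \rVert_2^{2} \right).
\]
```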
Abstract:
In this paper, we address the problem of robust information embedding in digital data. Such a process is carried out by introducing modifications to the original data that one would like to keep minimal. It assumes that the data, which includes the embedded information, is corrupted before the extraction is carried out. We propose a principled way to tailor an efficient embedding process for given data and noise statistics. © Springer-Verlag Berlin Heidelberg 2005.
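As a baseline illustration of such an embedding process (the paper's contribution is a principled tailoring to the given data and noise statistics, which is not reproduced here), the sketch below embeds one bit by additive spread-spectrum modulation with a small modification strength and extracts it by correlation after corruption.

```python
# Generic additive spread-spectrum embedding and correlation-based extraction.
import numpy as np

rng = np.random.default_rng(3)
host = rng.normal(0, 1, 4096)          # original data
w = rng.choice([-1.0, 1.0], 4096)      # secret spreading pattern
alpha, bit = 0.05, 1                   # small alpha keeps the modification minimal

marked = host + alpha * (1 if bit else -1) * w
received = marked + rng.normal(0, 0.2, 4096)   # corruption before extraction

decoded = int(np.dot(received, w) > 0)  # correlation detector
print(decoded == bit)
```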
Abstract:
Energy consumption has been a key concern in data gathering in wireless sensor networks. Previous research shows that modulation scaling is an efficient technique for reducing energy consumption. However, such a technique also affects both packet delivery latency and packet loss, and may therefore have adverse effects on application quality. In this paper, we study the problem of modulation scaling and energy optimization. A mathematical model is proposed to analyze the impact of modulation scaling on the overall energy consumption, end-to-end mean delivery latency and mean packet loss rate. A centralized optimal management mechanism is developed based on the model, which adaptively adjusts the modulation levels to minimize energy consumption while ensuring the QoS of data gathering. Experimental results show that the management mechanism saves significant energy in all the investigated scenarios. Some valuable observations are also made in the experiments. © 2004 IEEE.
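A commonly used modulation-scaling energy model makes the analysed trade-off concrete; the form below is an assumption for illustration, and the paper's exact model may differ.

```latex
% With b bits per symbol, symbol rate R_s, and radio constants C and F
% (all assumed notation):
\[
E_{\mathrm{bit}}(b) = \frac{C\,(2^{b}-1) + F}{b},
\qquad
t_{\mathrm{bit}}(b) = \frac{1}{b\,R_s}.
\]
% Lowering b reduces the energy per bit, since the transmit power required to
% hold the error rate grows roughly like 2^b, but it increases per-bit latency;
% hence modulation levels must be chosen to minimise energy subject to latency
% and loss (QoS) constraints.
```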