14 results for "many-to-many assignment problem"

in the Cambridge University Engineering Department Publications Database


Relevance: 100.00%

Abstract:

Confronted with high-variety, low-volume market demands, many companies, especially Japanese electronics manufacturers, have reconfigured their conveyor assembly lines and adopted seru production systems. A seru production system is a new type of work-cell-based manufacturing system, and successful practice shows that it can combine the flexibility of a job shop with the high efficiency of a conveyor assembly line. Multi-skilled workers are the most important precondition for implementing seru production, so issues concerning their training and assignment are central. In this paper, we investigate the worker training and assignment problem that arises when a conveyor assembly line is entirely reconfigured into several serus. We formulate a mathematical model with two objectives: minimizing the total training cost and balancing the total processing times among the multi-skilled workers in each seru. To obtain a satisfactory task-to-worker training plan and worker-to-seru assignment plan, a three-stage, nine-step heuristic algorithm is developed to solve this model. Several computational cases are then solved in MATLAB, and the results validate the performance of the proposed mathematical model and heuristic algorithm. © 2013 Springer-Verlag London.
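
As a rough illustration of the two objectives described above, the following sketch scores a candidate training plan and seru assignment by total training cost and by the within-seru workload spread. All data, sizes and variable names are invented; this is not the paper's model or its three-stage heuristic.

```python
# A minimal sketch of scoring a candidate plan against the two objectives.
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_tasks, n_serus = 6, 8, 2
train_cost = rng.uniform(1, 5, (n_workers, n_tasks))  # cost of training worker w on task t
proc_time = rng.uniform(2, 6, (n_workers, n_tasks))   # time worker w needs for task t

skills = rng.random((n_workers, n_tasks)) < 0.5       # candidate task-to-worker training plan
seru_of = rng.integers(0, n_serus, n_workers)         # candidate worker-to-seru assignment

# Objective 1: total training cost of the plan.
total_training_cost = train_cost[skills].sum()

# Objective 2: balance of total processing time among workers within each seru,
# measured here as the max-min workload spread summed over serus.
loads = (proc_time * skills).sum(axis=1)
imbalance = sum(np.ptp(loads[seru_of == s])
                for s in range(n_serus) if (seru_of == s).any())

print(f"training cost = {total_training_cost:.1f}, workload imbalance = {imbalance:.1f}")
```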

Relevance: 100.00%

Abstract:

Many problems in control and signal processing can be formulated as sequential decision problems for general state-space models. However, except for some simple models, analytical solutions are unavailable and one has to resort to approximation. In this thesis, we investigate problems where Sequential Monte Carlo (SMC) methods can be combined with gradient-based search to solve online optimisation problems. The main contributions of the thesis are as follows. Chapter 4 focuses on the sensor scheduling problem cast as a controlled Hidden Markov Model. We consider the case in which the state, observation and action spaces are continuous; this general case is important as it is the natural framework for many applications. In sensor scheduling, the aim is to minimise the variance of the estimation error of the hidden state with respect to the action sequence. We present a novel SMC method that uses a stochastic gradient algorithm to find optimal actions, in contrast to existing work in the literature that only solves approximations to the original problem. In Chapter 5 we show how SMC can be used to solve a risk-sensitive control problem. We adopt the Feynman-Kac representation of a controlled Markov chain flow and exploit the properties of the logarithmic Lyapunov exponent, which leads to a policy-gradient solution for the parameterised problem. The resulting SMC algorithm has a structure similar to the Recursive Maximum Likelihood (RML) algorithm for online parameter estimation. In Chapters 6, 7 and 8, dynamic graphical models are combined with state-space models for the purpose of online decentralised inference. We concentrate on the distributed parameter estimation problem using two maximum-likelihood techniques, namely Recursive Maximum Likelihood (RML) and Expectation Maximization (EM); the resulting algorithms can be interpreted as extensions of the Belief Propagation (BP) algorithm that compute likelihood gradients. In order to design an SMC algorithm, Chapter 8 uses a nonparametric approximation of Belief Propagation. The algorithms are successfully applied to the sensor localisation problem for sensor networks of small and medium size.
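
For readers unfamiliar with SMC, the minimal bootstrap particle filter below shows the propagate/weight/resample cycle on which the methods above build. The scalar AR(1) model and all parameters are illustrative, not one of the thesis's applications.

```python
# A minimal bootstrap particle filter for a scalar AR(1) state-space model.
import numpy as np

rng = np.random.default_rng(1)
T, N = 50, 500                        # time steps, particles
phi, sig_x, sig_y = 0.9, 1.0, 0.5     # assumed dynamics and noise levels

# Simulate data from x_t = phi * x_{t-1} + v_t,  y_t = x_t + w_t.
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + sig_x * rng.standard_normal()
y = x + sig_y * rng.standard_normal(T)

particles = rng.standard_normal(N)
for t in range(T):
    particles = phi * particles + sig_x * rng.standard_normal(N)  # propagate
    logw = -0.5 * ((y[t] - particles) / sig_y) ** 2               # weight
    w = np.exp(logw - logw.max()); w /= w.sum()
    mean_t = w @ particles                                        # filtered mean
    particles = particles[rng.choice(N, N, p=w)]                  # resample

print(f"filtered mean at t={T-1}: {mean_t:.3f}, true state: {x[-1]:.3f}")
```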

Relevance: 100.00%

Abstract:

It has been widely recognized that combining carbon nanotubes (CNTs) with low-molar-mass thermotropic liquid crystals (tLCs) not only provides a useful way to align CNTs, but also dramatically enhances tLC performance, especially in liquid crystal display technology. Such CNT-tLC nanocomposites have raised hopes of addressing many stubborn problems within the field, such as low contrast, slow response, and narrow viewing angle. However, this material development has been limited by the poor solubility of CNTs in tLCs. Here, we describe an effective strategy to solve the problem: prior to integration with tLCs, pristine CNTs are physically "coated" with a liquid crystalline polymer (LCP) that is compatible with tLCs. The homogeneous CNT-tLC composite obtained in this way is stable for over 6 months, and the concentration of CNTs in tLCs can reach 1 wt %. We further demonstrate the alignment of CNTs at high CNT concentrations by an electric field, together with a theoretical model of the impedance response of the CNT-tLC mixture.

Relevance: 100.00%

Abstract:

Most academic control schemes for MIMO systems assume that all control variables are updated simultaneously. MPC outperforms other control strategies through its ability to deal with constraints, but this requires on-line optimization, so computational complexity can become an issue when applying MPC to complex systems with fast response times. The multiplexed MPC scheme described in this paper solves the MPC problem for each subsystem sequentially and updates each subsystem's controls as soon as the solution is available, thus distributing the control moves over a complete update cycle. The resulting computational speed-up allows faster response to disturbances, and hence improved performance, despite finding sub-optimal solutions to the original problem. The multiplexed MPC scheme is also closer to industrial practice in many cases. This paper presents initial stability results for two variants of multiplexed MPC and illustrates the performance benefit with an example. Copyright © 2005 IFAC.
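
The toy sketch below conveys the multiplexed idea only: at each step a single subsystem's input is re-optimised, with the others held fixed, and applied as soon as it is available. It substitutes an unconstrained least-squares step for the constrained MPC solve, and the plant matrices are invented.

```python
# Toy multiplexed update: one input channel re-optimised per step.
import numpy as np

rng = np.random.default_rng(2)
n, m = 4, 2                                   # states, inputs (two subsystems)
A = 0.95 * np.eye(n) + 0.05 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))

x = rng.standard_normal(n)
u = np.zeros(m)
for k in range(20):
    j = k % m                                 # subsystem updated at this step
    # Choose u[j] to minimise ||x_next||^2 with the other inputs frozen.
    r = A @ x + B @ u - B[:, j] * u[j]        # response excluding channel j
    u[j] = -np.dot(B[:, j], r) / np.dot(B[:, j], B[:, j])
    x = A @ x + B @ u                         # apply the new input immediately
print("state norm after 20 steps:", np.linalg.norm(x))
```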

Relevance: 100.00%

Abstract:

Information-theoretic active learning has been widely studied for probabilistic models. For simple regression, an optimal myopic policy is easily tractable; however, for other tasks and for more complex models, such as classification with nonparametric models, the optimal solution is harder to compute, and current approaches make approximations to achieve tractability. We propose an approach that expresses information gain in terms of predictive entropies and apply this method to the Gaussian Process Classifier (GPC). Our approach makes minimal approximations to the full information-theoretic objective. Its empirical performance compares favourably to many popular active learning algorithms, with equal or lower computational complexity; it also compares well to decision-theoretic approaches, which are privy to more information and require much more computation time. Finally, by further developing a reformulation of binary preference learning as a classification problem, we extend our algorithm to Gaussian Process preference learning.
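
The predictive-entropy formulation can be written as the information gain I(y; f | x) = H[E_f p(y|x,f)] − E_f H[p(y|x,f)]. The sketch below approximates it by Monte Carlo for a toy Bayesian logistic classifier standing in for the GPC; the posterior samples and candidate pool are invented for illustration.

```python
# Monte Carlo estimate of information gain via predictive entropies.
import numpy as np

rng = np.random.default_rng(3)

def bern_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

W = rng.standard_normal((200, 2))             # posterior weight samples
X_pool = rng.standard_normal((100, 2))        # unlabelled candidate inputs

probs = 1.0 / (1.0 + np.exp(-X_pool @ W.T))   # p(y=1 | x, w) per sample
gain = bern_entropy(probs.mean(axis=1)) - bern_entropy(probs).mean(axis=1)

print("most informative query index:", int(np.argmax(gain)))
```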

Relevance: 100.00%

Abstract:

When Balancing Domain Decomposition by Constraints (BDDC) is applied to a problem with many substructures, solving the coarse problem exactly becomes a bottleneck that spoils the scalability of the solver. However, it is straightforward in BDDC to substitute the exact solution of the coarse problem with another step of the BDDC method, with subdomains playing the role of elements. In this way, the three-level BDDC method is obtained; applied recursively, this approach yields the multilevel BDDC method. We present a detailed description of a recently developed parallel implementation of this algorithm. The implementation is applied to an engineering problem of linear elasticity and to a benchmark problem of Stokes flow in a cavity, and results of the multilevel approach are compared with those of the standard (two-level) BDDC method.
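
The recursive substitution can be shown schematically: the exact coarse solve at each level is replaced by a recursive call on a coarsened problem, with a direct solve only at the last level. The skeleton below is a generic multilevel sketch on a 1D Laplacian, not a BDDC implementation.

```python
# Schematic of the multilevel substitution idea only.
import numpy as np

def coarse_solve(A, r, level, max_level):
    """Solve A z = r directly at the last level, else recurse on a coarser
    problem built by aggregating pairs of unknowns (piecewise-constant P)."""
    if level == max_level or A.shape[0] <= 2:
        return np.linalg.solve(A, r)
    nc = A.shape[0] // 2
    P = np.zeros((A.shape[0], nc))
    for i in range(nc):
        P[2 * i, i] = P[2 * i + 1, i] = 1.0
    Ac = P.T @ A @ P                          # Galerkin coarse operator
    return P @ coarse_solve(Ac, P.T @ r, level + 1, max_level)

# 1D Laplacian test problem (illustrative only).
n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
z = coarse_solve(A, b, level=1, max_level=3)
print("residual of the 3-level approximate solve:", np.linalg.norm(b - A @ z))
```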

Relevance: 100.00%

Abstract:

Graphene is emerging as a viable alternative to conventional optoelectronic, plasmonic and nanophotonic materials. The interaction of light with charge carriers creates an out-of-equilibrium distribution, which relaxes on an ultrafast timescale to a hot Fermi-Dirac distribution that subsequently cools by emitting phonons. Although the slower relaxation mechanisms have been extensively investigated, the initial stages still pose a challenge. Experimentally, they defy the resolution of most pump-probe setups, owing to the extremely fast sub-100 fs carrier dynamics. Theoretically, massless Dirac fermions represent a novel many-body problem, fundamentally different from Schrödinger fermions. Here we combine pump-probe spectroscopy with a microscopic theory to investigate electron-electron interactions during the early stages of relaxation. We identify the mechanisms controlling the ultrafast dynamics, in particular the role of collinear scattering. This gives rise to Auger processes, including charge multiplication, which is key in photovoltage generation and photodetectors.

Relevance: 100.00%

Abstract:

The pharmaceuticals industry is at a crossroads. There are growing concerns that illegitimate products are penetrating the supply chain, and many countries have proposals to apply RFID and other traceability technologies to solve this problem. However, there are several trade-offs, one of the most crucial being between data visibility and confidentiality. In this paper, we use the TrakChain assessment framework tools to study the US pharmaceutical supply chain and to compare candidate solutions for achieving traceability data security: Point-of-Dispense Authentication, Network-based electronic Pedigree, and Document-based electronic Pedigree. We also propose extensions to a supply chain authorization language that can capture the expressive data-sharing conditions considered necessary by the industry's trading partners. © 2013 IEEE.

Relevance: 100.00%

Abstract:

Image convolution is conventionally approximated by the LTI discrete model, and it is well recognized that the higher the sampling rate, the better the approximation. However, images or 3D data are sometimes only available at a lower sampling rate due to physical constraints of the imaging system. In this paper, we model the under-sampled observation as the result of combining convolution and subsampling. Because the wavelet coefficients of piecewise smooth images tend to be sparse and well modelled by tree-like structures, we propose the L0 reweighted-L2 minimization (L0RL2) algorithm to solve this problem. The algorithm promotes model-based sparsity by minimizing a reweighted L2 norm, which approximates the L0 norm, and by enforcing a tree model over the weights. We test the algorithm on three examples (a simple ring, the cameraman image and a 3D microscope dataset) and show that good results can be obtained. © 2010 IEEE.
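
A stripped-down version of the reweighted-L2 idea, without the convolution/subsampling operator or the wavelet tree model of the paper, looks as follows: weights w_i = 1/(x_i² + ε) make the weighted L2 penalty mimic the L0 norm. Problem sizes and parameters are illustrative.

```python
# Reweighted-L2 iteration for a generic sparse recovery problem.
import numpy as np

rng = np.random.default_rng(4)
m, n, k = 40, 80, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 3.0 * rng.standard_normal(k)
y = A @ x_true

x, lam, eps = np.zeros(n), 1e-2, 1e-3
for _ in range(30):
    w = 1.0 / (x ** 2 + eps)                                 # reweighting step
    x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)  # weighted ridge

print("largest entries found at:", np.sort(np.argsort(-np.abs(x))[:k]))
print("true support:           ", np.flatnonzero(x_true))
```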

Relevance: 100.00%

Abstract:

Object matching is a fundamental operation in data analysis. It typically requires the definition of a similarity measure between the classes of objects to be matched. Instead, we develop an approach that performs matching while requiring a similarity measure only within each of the classes. This is achieved by maximizing the dependency between matched pairs of observations by means of the Hilbert-Schmidt Independence Criterion (HSIC). The problem can be cast as a quadratic assignment problem with special structure, and we present a simple algorithm for finding a locally optimal solution.
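
A minimal sketch of the idea, assuming a biased HSIC estimate and a naive pairwise-swap local search (the paper's optimisation procedure may differ). The data are synthetic: Y is a noisy, permuted copy of X, and we try to recover the permutation.

```python
# HSIC-based matching with a greedy pairwise-swap local search.
import numpy as np

rng = np.random.default_rng(5)
n = 30
X = rng.standard_normal((n, 2))
perm_true = rng.permutation(n)
Y = X[perm_true] + 0.05 * rng.standard_normal((n, 2))

def rbf(Z, gamma=1.0):
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

H = np.eye(n) - np.ones((n, n)) / n              # centring matrix
K, L = H @ rbf(X) @ H, H @ rbf(Y) @ H

def hsic(pi):                                    # biased HSIC estimate
    return (K * L[np.ix_(pi, pi)]).sum() / (n - 1) ** 2

pi = np.arange(n)
improved = True
while improved:                                  # greedy swaps to a local optimum
    improved = False
    for i in range(n):
        for j in range(i + 1, n):
            cand = pi.copy()
            cand[i], cand[j] = cand[j], cand[i]
            if hsic(cand) > hsic(pi):
                pi, improved = cand, True

print("correct matches:", int((pi == np.argsort(perm_true)).sum()), "of", n)
```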

Relevance: 100.00%

Abstract:

In this paper, we adopt a differential-geometric viewpoint to tackle the problem of learning a distance online. As this problem can be cast as the estimation of a fixed-rank positive semidefinite (PSD) matrix, we develop algorithms that exploit the rich geometric structure of the set of fixed-rank PSD matrices. We propose a method that separately updates the subspace of the matrix and its projection onto that subspace. A proper weighting of the two iterations enables continuous interpolation between the problem of learning a subspace and that of learning a distance when the subspace is fixed. © 2009 IEEE.
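
The two-step structure might be sketched as follows, with W = U B Uᵀ (U orthonormal n×r, B PSD r×r): a QR-retracted gradient step on the subspace U, then a projected step on B with eigenvalue clipping to keep it PSD. The squared loss on observed distances, the data and the step size are all invented for illustration and are not the authors' exact updates.

```python
# Two-step update on a fixed-rank PSD factorisation W = U B U^T.
import numpy as np

rng = np.random.default_rng(6)
n, r, eta, N = 10, 3, 0.02, 200
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
B = np.eye(r)

Hf = rng.standard_normal((n, r))
W_true = Hf @ Hf.T / n                               # rank-r target metric
X = rng.standard_normal((N, n))
y = np.einsum('ij,jk,ik->i', X, W_true, X)           # observed x^T W x values

for _ in range(1000):
    err = np.einsum('ij,jk,ik->i', X, U @ B @ U.T, X) - y
    G = 2 * (X.T * err) @ X / N                      # Euclidean gradient in W
    Q, R = np.linalg.qr(U - eta * G @ U @ B)         # subspace step + retraction
    U = Q * np.where(np.diag(R) < 0, -1.0, 1.0)      # fix QR sign ambiguity
    B = B - eta * (U.T @ G @ U)                      # step on the projection
    ev, V = np.linalg.eigh(B)
    B = (V * np.clip(ev, 1e-8, None)) @ V.T          # clip to keep B PSD

W = U @ B @ U.T
print("relative error:", np.linalg.norm(W - W_true) / np.linalg.norm(W_true))
```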

Relevance: 100.00%

Abstract:

We solve the problem of steering a three-level quantum system from one eigenstate to another in minimum time and study its possible extension to the time-optimal control problem for a general n-level quantum system. For the three-level system we find all optimal controls by identifying two types of symmetry in the problem, a discrete ℤ2 × S3 symmetry and a continuous S1 symmetry, and exploiting them to solve the problem through discrete reduction and symplectic reduction. We then study, in the same framework, the geometry that arises in the time-optimal control of a general n-level quantum system. © 2007 IEEE.
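
The steering problem itself (not its time-optimal solution) can be illustrated by integrating the Schrödinger equation for a three-level Λ-system under piecewise-constant controls; the couplings, schedule and durations below are invented.

```python
# Steering a three-level system with two piecewise-constant controls,
# integrated exactly on each interval via the eigendecomposition of the
# Hermitian Hamiltonian.
import numpy as np

def propagate(psi, H, dt):
    E, V = np.linalg.eigh(H)                      # H is Hermitian
    return V @ (np.exp(-1j * E * dt) * (V.conj().T @ psi))

H1 = np.zeros((3, 3), complex); H1[0, 1] = H1[1, 0] = 1.0   # couples |1>,|2>
H2 = np.zeros((3, 3), complex); H2[1, 2] = H2[2, 1] = 1.0   # couples |2>,|3>

psi = np.array([1.0, 0.0, 0.0], complex)          # start in eigenstate |1>
for u1, u2 in [(1.0, 0.0), (0.0, 1.0)] * 20:      # a crude alternating schedule
    psi = propagate(psi, u1 * H1 + u2 * H2, dt=0.1)

print("population transferred to |3>:", abs(psi[2]) ** 2)
```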

Relevance: 100.00%

Abstract:

Nanomagnetic structures have the potential to surpass silicon's scaling limitations, both as elements in hybrid CMOS logic and as novel computational elements. Magnetic force microscopy (MFM) offers a convenient characterization technique for use in the design of such nanomagnetic structures. However, MFM measures the magnetic field, not the sample's magnetization, so the question of whether the relationship between an external magnetic field and a magnetization distribution is unique is a relevant one. To study this problem we present a simple algorithm that searches for magnetization distributions consistent with both an external magnetic field and the qualitative features of solutions to the micromagnetic equations. The algorithm is not computationally intensive and is found to be effective for our test cases. On the basis of our results we propose a systematic approach for interpreting MFM measurements.
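
The uniqueness question can be probed with a toy forward model: enumerate candidate magnetisation patterns and keep those whose modelled stray field matches the measurement. The 1D point-source model below is schematic and is not the paper's algorithm.

```python
# Toy probe of field-to-magnetisation uniqueness by exhaustive search.
import numpy as np
from itertools import product

n_cells, height = 8, 1.0
xs_cells = np.arange(n_cells, dtype=float)
xs_probe = np.linspace(0.0, n_cells - 1.0, 24)    # probe positions above sample

def stray_field(m):
    d2 = (xs_probe[:, None] - xs_cells[None, :]) ** 2 + height ** 2
    return (m[None, :] / d2).sum(axis=1)          # superposed decaying kernels

m_true = np.array([1, 1, -1, 1, -1, -1, 1, -1], dtype=float)
field = stray_field(m_true)

matches = [m for m in product([-1.0, 1.0], repeat=n_cells)
           if np.allclose(stray_field(np.array(m)), field, atol=1e-9)]
print(f"{len(matches)} magnetisation pattern(s) reproduce the measured field")
```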

Relevance: 100.00%

Abstract:

Differential settlement of foundations is one of the main causes of failure in historic buildings. Finite element analysis provides a useful tool for predicting the structural damage caused by given ground displacements and for assessing the need for strengthening techniques. Current damage classifications for buildings subject to settlement base the assessment of potential damage on the expected crack pattern of the structure. In this paper, the correlation between the physical description of damage in terms of crack width and the interpretation of the finite element analysis output is analyzed. Different discrete and continuum crack models are applied to simulate an experiment carried out on a scale model of a historic masonry building, the Loggia Palace in Brescia (Italy). Results are discussed, and a modified version of the fixed total strain smeared crack model is evaluated in order to resolve the problem of calculating the exact crack width.
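
As a generic back-of-envelope relation (standard crack-band reasoning, not the paper's modified model), a crack width can be recovered from a smeared-crack result by lumping the element's inelastic crack strain over a crack bandwidth h; the values below are assumed.

```python
# Crack width from a smeared crack strain via the crack-band relation w = eps * h.
eps_crack = 2.0e-3   # crack strain from the FE output (assumed value)
h = 50.0             # crack bandwidth in mm, often tied to the element size
print(f"estimated crack width: {eps_crack * h:.2f} mm")   # -> 0.10 mm
```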