Abstract:
The paper considers an on-line single machine scheduling problem where the goal is to minimize the makespan. The jobs are partitioned into families and a setup is performed every time the machine starts processing a batch of jobs of the same family. The scheduler is aware of the number of families and knows the setup time of each family, although information about a job only becomes available when that job is released. We give a lower bound on the competitive ratio of any on-line algorithm. Moreover, for the case of two families, we provide an algorithm with a competitive ratio that achieves this lower bound. As the number of families increases, the lower bound approaches 2, and we give a simple algorithm with a competitive ratio of 2.
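To make the batching objective concrete, the sketch below (Python, with invented job data) computes the makespan of a given single-machine sequence when a family setup is paid every time the machine starts a batch of a different family. It only illustrates the problem setting, not the paper's on-line algorithms.

```python
# Illustration of the objective only; job names, processing times and setup
# times are invented, and no on-line decision-making is modelled here.

def makespan(schedule, processing_time, family, setup_time):
    """Makespan of a sequence on one machine with per-family batch setups."""
    time, current_family = 0.0, None
    for job in schedule:
        f = family[job]
        if f != current_family:        # a new batch starts: pay the family setup
            time += setup_time[f]
            current_family = f
        time += processing_time[job]
    return time

processing_time = {"j1": 3, "j2": 2, "j3": 4}
family = {"j1": "A", "j2": "B", "j3": "A"}
setup_time = {"A": 1, "B": 2}

# Batch the two family-A jobs together, then the family-B job:
print(makespan(["j1", "j3", "j2"], processing_time, family, setup_time))  # 12
```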
Abstract:
We consider the problem of scheduling families of jobs in a two-machine open shop so as to minimize the makespan. The jobs of each family can be partitioned into batches, and a family setup time on each machine is required before the first job is processed and whenever a machine switches from processing a job of one family to a job of another family. For this NP-hard problem the literature contains (5/4)-approximation algorithms that cannot be improved upon within the class of group technology algorithms, in which each family is kept as a single batch. We demonstrate that there is no advantage in splitting a family more than once. We present an algorithm that splits one family at most once on a machine and delivers a worst-case performance ratio of 6/5.
Abstract:
We consider various single machine scheduling problems in which the processing time of a job depends either on its position in a processing sequence or on its start time. We focus on problems of minimizing the makespan or the sum of (weighted) completion times of the jobs. In many situations we show that the objective function is priority-generating, and therefore the corresponding scheduling problem under series-parallel precedence constraints is polynomially solvable. In other situations we provide counter-examples that show that the objective function is not priority-generating.
Abstract:
This paper considers two-machine flow shop scheduling problems with machine availability constraints. When the processing of a job is interrupted by an unavailability period of a machine, we consider both the resumable scenario in which the processing can be resumed when the machine next becomes available, and the semi-resumable scenario in which some portion of the processing is repeated but the job is otherwise resumable. For the problem with several non-availability intervals on the first machine under the resumable scenario, we present a fast (3/2)-approximation algorithm. For the problem with one non-availability interval under the semi-resumable scenario, a polynomial-time approximation scheme is developed.
Abstract:
We consider the two-machine open shop scheduling problem in which the jobs are brought to the system by a single transporter and moved between the processing machines by the same transporter. The purpose is to split the jobs into batches and to find the sequence of moves of the transporter so that the time by which the completed jobs are collected together on board the transporter is minimal. We present a 7/5-approximation algorithm.
Abstract:
We consider single machine scheduling and due date assignment problems in which the processing time of a job depends on its position in a processing sequence. The objective functions include the cost of changing the due dates, the total cost of discarded jobs that cannot be completed by their due dates and, possibly, the total earliness of the scheduled jobs. We present polynomial-time dynamic programming algorithms in the case of two popular due date assignment methods: CON and SLK. The considered problems are related to mathematical models of cooperation between the manufacturer and the customer in supply chain scheduling.
Abstract:
This paper presents results from a detailed and comprehensive finite element simulation of the practical operation of an electrical machine. The results have been used in practice to design more efficient equipment.
Abstract:
Support vector machine (SVM) is a powerful technique for data classification. Despite its good theoretical foundations and high classification accuracy, the standard SVM is not suitable for classifying large data sets, because the training complexity of SVM depends heavily on the size of the data set. This paper presents a novel SVM classification approach for large data sets based on minimum enclosing ball clustering. After the training data are partitioned by the proposed clustering method, the cluster centers are used for the first-stage SVM classification. The clusters whose centers are support vectors, together with those clusters that contain points of different classes, are then used for the second-stage SVM classification; by this stage most of the data have been removed. Several experimental results show that the proposed approach achieves classification accuracy comparable to that of the classic SVM, while training is significantly faster than with several other SVM classifiers.
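The rough sketch below (Python with scikit-learn) illustrates the two-stage idea; KMeans is used only as a stand-in for the minimum enclosing ball clustering of the paper, binary labels in {0, 1} are assumed, and all parameter values are placeholders rather than the authors' settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def two_stage_svm(X, y, n_clusters=50):
    """Sketch: cluster, train an SVM on centers, retrain on selected clusters."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    centers = km.cluster_centers_
    # Label each center by the majority class of its cluster (y in {0, 1}).
    center_labels = np.array([
        np.bincount(y[km.labels_ == c]).argmax() for c in range(n_clusters)
    ])

    # Stage 1: SVM trained on the cluster centers only.
    stage1 = SVC(kernel="rbf").fit(centers, center_labels)
    sv_clusters = set(stage1.support_)              # centers that became support vectors
    mixed = {c for c in range(n_clusters)
             if len(np.unique(y[km.labels_ == c])) > 1}  # clusters with both classes

    # Stage 2: SVM retrained on the raw points of the selected clusters only;
    # most of the original data is discarded at this point.
    keep = np.isin(km.labels_, list(sv_clusters | mixed))
    return SVC(kernel="rbf").fit(X[keep], y[keep])
```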
Abstract:
This paper proposes a new hierarchical learning structure, namely holistic triple learning (HTL), for extending the binary support vector machine (SVM) to multi-classification problems. For an N-class problem, an HTL constructs a decision tree up to a bounded depth. A leaf node of the decision tree may be assigned a holistic triple learning unit whose generalisation abilities are assessed and approved, while each remaining node in the decision tree accommodates a standard binary SVM classifier. The holistic triple classifier is a regression model trained on three classes, whose training algorithm originates from a recently proposed implementation technique, namely the least-squares support vector machine (LS-SVM). A major novelty of the holistic triple classifier is the reduced number of support vectors in the solution. For the resultant HTL-SVM, an upper bound on the generalisation error can be obtained. The time complexity of training the HTL-SVM is analysed and shown to be comparable to that of training the one-versus-one (1-vs-1) SVM, particularly on small-scale datasets. Empirical studies show that the proposed HTL-SVM achieves competitive classification accuracy with a reduced number of support vectors compared to the popular 1-vs-1 alternative.
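As background on the least-squares training used by the triple units, a minimal LS-SVM binary classifier can be obtained by solving a single linear system rather than a quadratic program. The sketch below assumes an RBF kernel, labels in {-1, +1} and invented hyperparameters; it is not the HTL training procedure itself.

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    """Gaussian kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Least-squares SVM: training reduces to one linear solve."""
    n = len(y)
    Omega = np.outer(y, y) * rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]
    # Decision function: sign(sum_i alpha_i y_i K(x, x_i) + b).
    return lambda Xnew: np.sign(rbf(Xnew, X, sigma) @ (alpha * y) + b)
```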
Abstract:
Based on an algorithm for pattern matching in character strings, we implement a pattern matching machine that searches for occurrences of patterns in multidimensional time series. Before the search takes place, the time series are encoded in user-designed alphabets. The patterns, in turn, are formulated as regular expressions composed of letters from these alphabets and of operators. Furthermore, we develop a genetic algorithm to breed patterns that maximize a user-defined fitness function. In an application to financial data, we show that patterns bred to predict high exchange-rate volatility in training samples retain statistically significant predictive power in validation samples.
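A toy version of the encode-then-match step, assuming a univariate series, a three-letter alphabet defined by two thresholds, and an invented pattern (the multidimensional encoding, the operators and the genetic algorithm of the paper are not reproduced):

```python
import re
import numpy as np

def encode(series, thresholds=(-0.5, 0.5), letters="abc"):
    """Map each value to a letter: a = low, b = middle, c = high."""
    return "".join(letters[np.searchsorted(thresholds, v)] for v in series)

series = np.array([0.1, 0.9, 1.2, -0.7, 0.0, 1.5, 1.1])
text = encode(series)                  # "bccabcc"
pattern = re.compile(r"c{2,}")         # two or more consecutive 'high' values
print([(m.start(), m.group()) for m in pattern.finditer(text)])  # [(1, 'cc'), (5, 'cc')]
```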
Abstract:
A parallel kinematic machine (PKM) topology can only give its best performance when its geometrical parameters are optimized. In this paper, the dimensional synthesis of a newly developed PKM is presented for the first time. An optimization method is developed with the objective of maximizing both the workspace volume and the global dexterity of the PKM. Results show that the method can effectively identify design parameter changes under different weighted objectives. The PKM with optimized dimensions has a large workspace-to-footprint ratio and a large well-conditioned workspace, which justifies its suitability for large-volume machining.
Abstract:
This paper investigates the construction of linear-in-the-parameters (LITP) models for multi-output regression problems. Most existing stepwise forward algorithms choose the regressor terms one by one, each time maximizing the model error reduction ratio. The drawback is that such procedures cannot guarantee a sparse model, especially under highly noisy learning conditions. The main objective of this paper is to improve the sparsity and generalization capability of a model for multi-output regression problems while reducing the computational complexity. This is achieved by proposing a novel multi-output two-stage locally regularized model construction (MTLRMC) method using the extreme learning machine (ELM). In this new algorithm, the nonlinear parameters in each term, such as the width of the Gaussian function and the power of a polynomial term, are first determined by the ELM. An initial multi-output LITP model is then generated according to the termination criteria in the first stage. The significance of each selected regressor is checked, and the insignificant ones are replaced in the second stage. The proposed method can produce an optimized compact model by using the regularized parameters. Further, to reduce the computational complexity, a proper regression context is used to allow fast implementation of the proposed method. Simulation results confirm the effectiveness of the proposed technique.
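For readers unfamiliar with the ELM step, the bare-bones multi-output ELM regressor below shows the randomized hidden layer followed by a least-squares fit of the output weights; it is only the starting point the abstract refers to, not the two-stage MTLRMC procedure, and the activation function and layer sizes are assumptions.

```python
import numpy as np

def elm_fit(X, Y, n_hidden=100, seed=0):
    """X: (n, d) inputs, Y: (n, m) multi-output targets."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))     # random input weights (never trained)
    b = rng.normal(size=n_hidden)                   # random biases
    H = np.tanh(X @ W + b)                          # hidden-layer outputs
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)    # least-squares output weights
    return lambda Xnew: np.tanh(Xnew @ W + b) @ beta
```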
Abstract:
Efficient identification and follow-up of astronomical transients is hindered by the need for humans to manually select promising candidates from data streams that contain many false positives. These artefacts arise in the difference images that are produced by most major ground-based time-domain surveys with large-format CCD cameras. This dependence on humans to reject bogus detections is unsustainable for next-generation all-sky surveys, and significant effort is now being invested to solve the problem computationally. In this paper, we explore a simple machine learning approach to real-bogus classification by constructing a training set from the image data of approximately 32 000 real astrophysical transients and bogus detections from the Pan-STARRS1 Medium Deep Survey. We derive our feature representation from the pixel intensity values of a 20 × 20 pixel stamp around the centre of each candidate. This differs from previous work in that it works directly on the pixels rather than relying on catalogued domain knowledge for feature design or selection. Three machine learning algorithms are trained (artificial neural networks, support vector machines and random forests) and their performances are tested on a held-out subset of 25 per cent of the training data. We find the best results from the random forest classifier and demonstrate that, by accepting a false positive rate of 1 per cent, the classifier initially suggests a missed detection rate of around 10 per cent. However, we also find that a combination of bright-star variability, nuclear transients and uncertainty in human labelling means that our best estimate of the missed detection rate is approximately 6 per cent.
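A hedged illustration of the pixel-based classification step: flatten 20 × 20 stamps into 400-dimensional feature vectors, hold out 25 per cent, and train a random forest. The data here are synthetic, and the survey pipeline and the paper's exact settings are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
stamps = rng.normal(size=(2000, 20, 20))      # stand-in for difference-image stamps
labels = rng.integers(0, 2, size=2000)        # 1 = real transient, 0 = bogus

X = stamps.reshape(len(stamps), -1)           # raw pixel intensities as features
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]        # threshold to trade false positives
                                              # against missed detections
```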
Abstract:
Tanpura string vibrations have been investigated previously using numerical models based on energy-conserving schemes derived from a Hamiltonian description in one-dimensional form. Such time-domain models have the property that, for the lossless case, the numerical Hamiltonian (representing the total energy of the system) can be proven to be constant from one time step to the next, irrespective of any of the system parameters; in practice the Hamiltonian can be shown to be conserved to within machine precision. Models of this kind can reproduce the jvari effect, which results from the bridge-string interaction. However, the one-dimensional formulation has recently been shown to fail to replicate the jvari's strong dependence on the thread placement. As a first step towards simulations that accurately emulate this sensitivity to the thread placement, a two-dimensional model is proposed, incorporating coupling of controllable strength between the two string polarisations at the string termination opposite the barrier. In addition, a friction force acting when the string slides across the bridge in the horizontal direction is introduced, thus providing a further damping mechanism. In this preliminary study, the string is terminated at the position of the thread. As in the one-dimensional model, an implicit scheme has to be used to solve the system, employing Newton's method to calculate the updated positions and momenta of each string segment. The two-dimensional model is proven to be energy-conserving when the loss parameters are set to zero, irrespective of the coupling constant. Both frequency-dependent and frequency-independent losses are then added to the string, so that the model can be compared to analogous instruments. The influence of the coupling and of the bridge friction is investigated.
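As a much-simplified illustration of the energy-conservation check, the sketch below integrates a lossless 1D wave-equation string with fixed ends using an explicit finite-difference scheme and evaluates the discrete energy that this particular scheme conserves. The paper's model (two polarisations, bridge and thread barrier, implicit Newton updates) is considerably richer, and every parameter here is made up.

```python
import numpy as np

N, c = 100, 1.0
dx = 1.0 / N
dt = dx / c                          # Courant number 1 for this toy scheme
lam2 = (c * dt / dx) ** 2

x = np.linspace(0.0, 1.0, N + 1)
u_prev = np.sin(np.pi * x)           # initial displacement, string starting at rest
u_now = u_prev.copy()

def energy(u_new, u_old):
    """Discrete energy conserved by the explicit scheme (fixed ends)."""
    kinetic = 0.5 * dx * np.sum(((u_new - u_old) / dt) ** 2)
    potential = 0.5 * c**2 / dx * np.sum(np.diff(u_new) * np.diff(u_old))
    return kinetic + potential

E0 = energy(u_now, u_prev)
for _ in range(1000):
    u_next = np.zeros_like(u_now)    # endpoints stay clamped at zero
    u_next[1:-1] = (2 * u_now[1:-1] - u_prev[1:-1]
                    + lam2 * (u_now[2:] - 2 * u_now[1:-1] + u_now[:-2]))
    u_prev, u_now = u_now, u_next

print(E0, energy(u_now, u_prev))     # the two values agree to machine precision
```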