950 results for Time Machine


Relevance:

30.00%

Publisher:

Abstract:

This work applies a variety of multilinear function factorisation techniques to extract appropriate features or attributes from high dimensional multivariate time series for classification. Recently, a great deal of work has centred around designing time series classifiers using more and more complex feature extraction and machine learning schemes. This paper argues that complex learners and domain specific feature extraction schemes of this type are not necessarily needed for time series classification, as excellent classification results can be obtained by simply applying a number of existing matrix factorisation or linear projection techniques, which are simple and computationally inexpensive. We highlight this using a geometric separability measure and classification accuracies obtained through experiments on four different high dimensional multivariate time series datasets. © 2013 IEEE.
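
The paper's core claim is that a simple linear projection already yields discriminative features for multivariate time series. As a hedged illustration (not the authors' exact pipeline), the sketch below flattens each series, projects it with PCA, and classifies with a 1-nearest-neighbour rule; the toy data shapes and the PCA/1-NN choices are assumptions.

```python
# Minimal sketch: a linear projection (PCA) as a feature extractor for
# multivariate time series, followed by a simple 1-NN classifier.
# The data are random noise, so accuracy is near chance; this only
# demonstrates the pipeline, not the paper's reported results.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy data: 200 series, 12 channels, 64 time steps, 4 classes.
n_series, n_channels, n_steps = 200, 12, 64
X = rng.normal(size=(n_series, n_channels, n_steps))
y = rng.integers(0, 4, size=n_series)

# Flatten each multivariate series into one high-dimensional vector,
# then project to a low-dimensional space and classify.
X_flat = X.reshape(n_series, -1)
clf = make_pipeline(PCA(n_components=20), KNeighborsClassifier(n_neighbors=1))
print(cross_val_score(clf, X_flat, y, cv=5).mean())
```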

Relevance:

30.00%

Publisher:

Abstract:

Large concrete structures need to be inspected in order to assess their current physical and functional state, to predict future conditions, to support investment planning and decision making, and to allocate limited maintenance and rehabilitation resources. Current procedures in condition and safety assessment of large concrete structures are performed manually leading to subjective and unreliable results, costly and time-consuming data collection, and safety issues. To address these limitations, automated machine vision-based inspection procedures have increasingly been proposed by the research community. This paper presents current achievements and open challenges in vision-based inspection of large concrete structures. First, the general concept of Building Information Modeling is introduced. Then, vision-based 3D reconstruction and as-built spatial modeling of concrete civil infrastructure are presented. Following that, the focus is set on structural member recognition as well as on concrete damage detection and assessment exemplified for concrete columns. Although some challenges are still under investigation, it can be concluded that vision-based inspection methods have significantly improved over the last 10 years, and now, as-built spatial modeling as well as damage detection and assessment of large concrete structures have the potential to be fully automated.

Relevance:

30.00%

Publisher:

Abstract:

A programmable vision chip for real-time vision applications is presented. The chip architecture is a combination of a SIMD processing element array and row-parallel processors, which can perform pixel-parallel and row-parallel operations at high speed. It implements the mathematical morphology method to carry out low-level and mid-level image processing and sends out image features for high-level image processing without an I/O bottleneck. The chip can perform many algorithms through software control. The simulated maximum frequency of the vision chip is 300 MHz at a 16 × 16 pixel resolution. It achieves a rate of 1000 frames per second in real-time vision. A prototype chip with a 16 × 16 PE array was fabricated in a 0.18 μm standard CMOS process. It has a pixel size of 30 μm × 40 μm and 8.72 mW power consumption with a 1.8 V power supply. Experiments including the mathematical morphology method and a target tracking application demonstrate that the chip is fully functional and can be applied in real-time vision applications.
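
To make the image-processing side concrete, the following sketch shows the basic binary morphology operators (erosion, dilation, opening) in software; it illustrates the operations the chip computes, not the SIMD hardware or its instruction set, and the image and structuring element are illustrative.

```python
# Software sketch of binary morphology operations (erosion, dilation,
# opening) of the kind the vision chip implements in hardware; this is
# only an illustration of the operators, not of the PE array itself.
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

img = np.zeros((16, 16), dtype=bool)
img[4:12, 4:12] = True          # a filled square
img[7, 2] = True                # isolated noise pixel

se = np.ones((3, 3), dtype=bool)  # 3x3 structuring element

eroded  = binary_erosion(img, structure=se)
dilated = binary_dilation(img, structure=se)
opened  = binary_dilation(eroded, structure=se)   # opening = erosion then dilation

print("noise pixel removed by opening:", not opened[7, 2])
```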

Relevance:

30.00%

Publisher:

Abstract:

Chinese Association for Cryptologic Research; State Key Laboratory of Information Security, Institute of Software; Graduate University of the Chinese Academy of Sciences; National Natural Science Foundation of China

Relevance:

30.00%

Publisher:

Abstract:

The amount of computation required to solve many early vision problems is prodigious, and so it has long been thought that systems that operate in a reasonable amount of time will only become feasible when parallel systems become available. Such systems now exist in digital form, but most are large and expensive. These machines constitute an invaluable test-bed for the development of new algorithms, but they can probably not be scaled down rapidly in both physical size and cost, despite continued advances in semiconductor technology and machine architecture. Simple analog networks can perform interesting computations, as has been known for a long time. We have reached the point where it is feasible to experiment with implementation of these ideas in VLSI form, particularly if we focus on networks composed of locally interconnected passive elements, linear amplifiers, and simple nonlinear components. While there have been excursions into the development of ideas in this area since the very beginnings of work on machine vision, much work remains to be done. Progress will depend on careful attention to matching of the capabilities of simple networks to the needs of early vision. Note that this is not at all intended to be anything like a review of the field, but merely a collection of some ideas that seem to be interesting.
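
A classic example of such a network is a resistive grid that smooths a noisy image by relaxing toward a minimum-power state. The sketch below simulates that idea digitally with a Jacobi relaxation of a membrane-style smoothing objective; the grid size, smoothing weight, and periodic boundary handling are illustrative assumptions rather than details from the text.

```python
# Numerical sketch of the kind of computation a locally connected resistive
# network can perform: iterative relaxation toward a smoothed version of a
# noisy input (a membrane-style regularizer).
import numpy as np

def relax(d, lam=1.0, iters=500):
    """Minimize sum (u - d)^2 + lam * sum of squared neighbour differences."""
    u = d.copy()
    for _ in range(iters):
        # Sum of the four neighbours (periodic boundaries via np.roll,
        # for simplicity only).
        nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
              np.roll(u, 1, 1) + np.roll(u, -1, 1))
        # Jacobi update: data term pulls toward d, lam pulls toward neighbours.
        u = (d + lam * nb) / (1.0 + 4.0 * lam)
    return u

noisy = np.eye(8) + 0.3 * np.random.default_rng(1).normal(size=(8, 8))
print(np.round(relax(noisy, lam=2.0), 2))
```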

Relevance:

30.00%

Publisher:

Abstract:

The development of increasingly sophisticated and powerful computers in the last few decades has frequently stimulated comparisons between them and the human brain. Such comparisons will become more earnest as computers are applied more and more to tasks formerly associated with essentially human activities and capabilities. The expectation of a coming generation of "intelligent" computers and robots with sensory, motor and even "intellectual" skills comparable in quality to (and quantitatively surpassing) our own is becoming more widespread and is, I believe, leading to a new and potentially productive analytical science of "information processing". In no field has this new approach been so precisely formulated and so thoroughly exemplified as in the field of vision. As the dominant sensory modality of man, vision is one of the major keys to our mastery of the environment, to our understanding and control of the objects which surround us. If we wish to create robots capable of performing complex manipulative tasks in a changing environment, we must surely endow them with (among other things) adequate visual powers. How can we set about designing such flexible and adaptive robots? In designing them, can we make use of our rapidly growing knowledge of the human brain, and if so, how? At the same time, can our experiences in designing artificial vision systems help us to understand how the brain analyzes visual information?

Relevance:

30.00%

Publisher:

Abstract:

The performance of a randomized version of the subgraph-exclusion algorithm (called Ramsey) for CLIQUE by Boppana and Halldorsson is studied on very large graphs. We compare the performance of this algorithm with the performance of two common heuristic algorithms, the greedy heuristic and a version of simulated annealing. These algorithms are tested on graphs with up to 10,000 vertices on a workstation and graphs as large as 70,000 vertices on a Connection Machine. Our implementations establish the ability to run clique approximation algorithms on very large graphs. We test our implementations on a variety of different graphs. Our conclusions indicate that on randomly generated graphs minor changes to the distribution can cause dramatic changes in the performance of the heuristic algorithms. The Ramsey algorithm, while not as good as the others for the most common distributions, seems more robust and provides a more even overall performance. In general, and especially on deterministically generated graphs, a combination of simulated annealing with either the Ramsey algorithm or the greedy heuristic seems to perform best. This combined algorithm works particularly well on large Keller and Hamming graphs and has a competitive overall performance on the DIMACS benchmark graphs.
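
For orientation, the sketch below implements a generic greedy clique heuristic of the kind used as a baseline: repeatedly pick the vertex with the most neighbours among the remaining candidates and restrict the candidate set to its neighbourhood. The exact greedy rule, the Ramsey algorithm, and the simulated-annealing variant studied in the paper are not reproduced here.

```python
# Minimal sketch of a greedy clique heuristic: repeatedly pick a
# highest-degree vertex of the remaining candidate set and restrict the
# candidates to its neighbourhood. Illustrative only; the paper's exact
# greedy rule may differ.
import networkx as nx

def greedy_clique(G):
    clique, candidates = [], set(G.nodes)
    while candidates:
        # Choose the candidate with the most neighbours inside the candidate set.
        v = max(candidates, key=lambda u: len(set(G[u]) & candidates))
        clique.append(v)
        candidates &= set(G[v])
    return clique

G = nx.gnp_random_graph(200, 0.5, seed=42)
print("clique size found:", len(greedy_clique(G)))
```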

Relevance:

30.00%

Publisher:

Abstract:

This article introduces a new neural network architecture, called ARTMAP, that autonomously learns to classify arbitrarily many, arbitrarily ordered vectors into recognition categories based on predictive success. This supervised learning system is built up from a pair of Adaptive Resonance Theory modules (ARTa and ARTb) that are capable of self-organizing stable recognition categories in response to arbitrary sequences of input patterns. During training trials, the ARTa module receives a stream {a^(p)} of input patterns, and ARTb receives a stream {b^(p)} of input patterns, where b^(p) is the correct prediction given a^(p). These ART modules are linked by an associative learning network and an internal controller that ensures autonomous system operation in real time. During test trials, the remaining patterns a^(p) are presented without b^(p), and their predictions at ARTb are compared with b^(p). Tested on a benchmark machine learning database in both on-line and off-line simulations, the ARTMAP system learns orders of magnitude more quickly, efficiently, and accurately than alternative algorithms, and achieves 100% accuracy after training on less than half the input patterns in the database. It achieves these properties by using an internal controller that conjointly maximizes predictive generalization and minimizes predictive error by linking predictive success to category size on a trial-by-trial basis, using only local operations. This computation increases the vigilance parameter ρa of ARTa by the minimal amount needed to correct a predictive error at ARTb. Parameter ρa calibrates the minimum confidence that ARTa must have in a category, or hypothesis, activated by an input a^(p) in order for ARTa to accept that category, rather than search for a better one through an automatically controlled process of hypothesis testing. Parameter ρa is compared with the degree of match between a^(p) and the top-down learned expectation, or prototype, that is read out subsequent to activation of an ARTa category. Search occurs if the degree of match is less than ρa. ARTMAP is hereby a type of self-organizing expert system that calibrates the selectivity of its hypotheses based upon predictive success. As a result, rare but important events can be quickly and sharply distinguished even if they are similar to frequent events with different consequences. Between input trials ρa relaxes to a baseline vigilance ρ̄a. When ρa is large, the system runs in a conservative mode, wherein predictions are made only if the system is confident of the outcome. Very few false-alarm errors then occur at any stage of learning, yet the system reaches asymptote with no loss of speed. Because ARTMAP learning is self-stabilizing, it can continue learning one or more databases, without degrading its corpus of memories, until its full memory capacity is utilized.
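
The vigilance test and match tracking are the heart of the mechanism described above. The fragment below is a heavily simplified, hedged sketch using a fuzzy-ART-style match function |min(I, w)| / |I|; complement coding, the learning laws, and the full ARTa/ARTb/map-field architecture are omitted, and all numeric values are illustrative.

```python
# Highly simplified sketch of the ARTMAP vigilance test and match tracking.
# Parameter names follow the abstract; the match function and numbers are
# illustrative assumptions, not the complete ARTMAP dynamics.
import numpy as np

def match(I, w):
    return np.minimum(I, w).sum() / I.sum()

def choose_category(I, weights, rho_a):
    """Return index of first category whose match passes vigilance, else None."""
    # Order candidates by a simple choice value (higher overlap first).
    order = sorted(range(len(weights)),
                   key=lambda j: -np.minimum(I, weights[j]).sum())
    for j in order:
        if match(I, weights[j]) >= rho_a:      # vigilance test
            return j
    return None

def match_tracking(I, w_winner, rho_a, eps=1e-3):
    """After a predictive error at ARTb, raise rho_a just above the current
    match so the winning ARTa category is rejected and search resumes."""
    return match(I, w_winner) + eps

I = np.array([0.9, 0.1, 0.4])
weights = [np.array([0.8, 0.2, 0.5]), np.array([0.1, 0.9, 0.1])]
rho = 0.7
j = choose_category(I, weights, rho)
print("accepted category:", j)
rho = match_tracking(I, weights[j], rho)   # pretend its prediction was wrong
print("raised vigilance:", round(rho, 3), "->", choose_category(I, weights, rho))
```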

Relevance:

30.00%

Publisher:

Abstract:

A novel hybrid data-driven approach is developed for forecasting power system parameters with the goal of increasing the efficiency of short-term forecasting studies for non-stationary time series. The proposed approach is based on mode decomposition and a feature analysis of initial retrospective data using the Hilbert-Huang transform and machine learning algorithms. The random forests and gradient boosting trees learning techniques were examined. The decision tree techniques were used to rank the importance of variables employed in the forecasting models. The Mean Decrease Gini index is employed as an impurity function. The resulting hybrid forecasting models employ the radial basis function neural network and support vector regression. Apart from the introduction and references, the paper is organized as follows. The second section presents the background and a review of several approaches to short-term forecasting of power system parameters. In the third section, a hybrid machine learning-based algorithm using the Hilbert-Huang transform is developed for short-term forecasting of power system parameters. The fourth section describes the decision tree learning algorithms used to rank variable importance. Finally, the last section presents experimental results for the following electric power problems: active power flow forecasting, electricity price forecasting, and wind speed and direction forecasting.
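
As an illustration of the variable-ranking step, the sketch below builds lagged features from a toy series and ranks them with a random forest's impurity-based importances. The abstract's Mean Decrease Gini index applies to classification trees; scikit-learn's feature_importances_ for a regressor use the analogous mean decrease in impurity (variance), and the lag set and data are assumptions.

```python
# Sketch of decision-tree-based feature ranking for a forecasting task:
# lagged values of a toy series are ranked by impurity-based importance
# from a random forest. Illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
t = np.arange(2000)
series = np.sin(2 * np.pi * t / 24) + 0.1 * rng.normal(size=t.size)  # toy "load"

lags = [1, 2, 3, 12, 24, 48]
X = np.column_stack([series[max(lags) - lag:-lag] for lag in lags])
y = series[max(lags):]

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for lag, imp in sorted(zip(lags, forest.feature_importances_),
                       key=lambda p: -p[1]):
    print(f"lag {lag:>2}: importance {imp:.3f}")
```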

Relevance:

30.00%

Publisher:

Abstract:

An enterprise information system (EIS) is an integrated data-applications platform characterized by diverse, heterogeneous, and distributed data sources. For many enterprises, a number of business processes still depend heavily on static rule-based methods and extensive human expertise. Enterprises are faced with the need for optimizing operation scheduling, improving resource utilization, discovering useful knowledge, and making data-driven decisions.

This thesis research is focused on real-time optimization and knowledge discovery that addresses workflow optimization, resource allocation, as well as data-driven predictions of process-execution times, order fulfillment, and enterprise service-level performance. In contrast to prior work on data analytics techniques for enterprise performance optimization, the emphasis here is on realizing scalable and real-time enterprise intelligence based on a combination of heterogeneous system simulation, combinatorial optimization, machine-learning algorithms, and statistical methods.

On-demand digital-print service is a representative enterprise requiring a powerful EIS. We use real-life data from Reischling Press, Inc. (RPI), a digital-print-service provider (PSP), to evaluate our optimization algorithms.

In order to handle the increase in volume and diversity of demands, we first present a high-performance, scalable, and real-time production scheduling algorithm for production automation based on an incremental genetic algorithm (IGA). The objective of this algorithm is to optimize the order dispatching sequence and balance resource utilization. Compared to prior work, this solution is scalable for a high volume of orders and it provides fast scheduling solutions for orders that require complex fulfillment procedures. Experimental results highlight its potential benefit in reducing production inefficiencies and enhancing the productivity of an enterprise.
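
A minimal permutation-GA skeleton conveys the flavour of such a scheduling optimizer, though the thesis's incremental GA additionally handles newly arriving orders and resource balancing. Everything below (objective, operators, parameters, data) is an illustrative assumption, not the RPI production model.

```python
# Minimal permutation-GA skeleton for order dispatching: evolve an order
# sequence that minimizes total tardiness on a single resource.
import random

random.seed(0)
proc = [4, 2, 7, 3, 5, 6, 1, 8]                 # processing times
due  = [10, 6, 20, 9, 14, 25, 3, 30]            # due dates

def tardiness(seq):
    t, total = 0, 0
    for j in seq:
        t += proc[j]
        total += max(0, t - due[j])
    return total

def crossover(a, b):
    """Order crossover (OX): copy a slice from parent a, fill the rest from b."""
    i, k = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:k] = a[i:k]
    rest = [g for g in b if g not in child]
    for idx in list(range(0, i)) + list(range(k, len(a))):
        child[idx] = rest.pop(0)
    return child

def mutate(seq):
    i, k = random.sample(range(len(seq)), 2)
    seq[i], seq[k] = seq[k], seq[i]

pop = [random.sample(range(len(proc)), len(proc)) for _ in range(30)]
for _ in range(200):
    pop.sort(key=tardiness)
    parents = pop[:10]
    children = [crossover(*random.sample(parents, 2)) for _ in range(20)]
    for c in children:
        if random.random() < 0.2:
            mutate(c)
    pop = parents + children

best = min(pop, key=tardiness)
print("best sequence:", best, "total tardiness:", tardiness(best))
```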

We next discuss analysis and prediction of different attributes involved in hierarchical components of an enterprise. We start with a study of the fundamental processes related to real-time prediction. Our process-execution time and process status prediction models integrate statistical methods with machine-learning algorithms. In addition to improved prediction accuracy compared to stand-alone machine-learning algorithms, they also perform a probabilistic estimation of the predicted status. An order generally consists of multiple series and parallel processes. We next introduce an order-fulfillment prediction model that combines the advantages of multiple classification models by incorporating flexible decision-integration mechanisms. Experimental results show that adopting due dates recommended by the model can significantly reduce the enterprise late-delivery ratio. Finally, we investigate service-level attributes that reflect the overall performance of an enterprise. We analyze and decompose time-series data into different components according to their hierarchical periodic nature, perform correlation analysis, and develop univariate prediction models for each component as well as multivariate models for correlated components. Predictions for the original time series are aggregated from the predictions of its components. In addition to a significant increase in mid-term prediction accuracy, this distributed modeling strategy also improves short-term time-series prediction accuracy.

In summary, this thesis research has led to a set of characterization, optimization, and prediction tools for an EIS to derive insightful knowledge from data and use it as guidance for production management. It is expected to provide solutions for enterprises to increase reconfigurability, accomplish more automated procedures, and obtain data-driven recommendations or effective decisions.

Relevance:

30.00%

Publisher:

Abstract:

This paper considers the problem of sequencing n jobs in a three-machine flow shop with the objective of minimizing the makespan, which is the completion time of the last job. An O(n log n) time heuristic that is based on Johnson's algorithm is presented. It is shown to generate a schedule with length at most 5/3 times that of an optimal schedule, thereby reducing the previous best available worst-case performance ratio of 2. An application to the general flow shop is also discussed.
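
Since the heuristic builds on Johnson's algorithm, the two-machine rule itself is sketched below for reference: jobs with p1 <= p2 go first in non-decreasing order of p1, then the remaining jobs in non-increasing order of p2. The paper's three-machine heuristic and its 5/3 bound are not reproduced here.

```python
# Johnson's rule for the two-machine flow shop, the building block the
# heuristic above is based on (per the abstract). The three-machine
# heuristic itself is not reproduced here.
def johnson(jobs):
    """jobs: list of (p1, p2) processing times; returns an optimal sequence."""
    front = sorted((j for j, (a, b) in enumerate(jobs) if a <= b),
                   key=lambda j: jobs[j][0])
    back = sorted((j for j, (a, b) in enumerate(jobs) if a > b),
                  key=lambda j: -jobs[j][1])
    return front + back

def makespan(jobs, seq):
    c1 = c2 = 0
    for j in seq:
        c1 += jobs[j][0]
        c2 = max(c1, c2) + jobs[j][1]
    return c2

jobs = [(3, 6), (5, 2), (1, 2), (6, 6), (7, 4)]
seq = johnson(jobs)
print("sequence:", seq, "makespan:", makespan(jobs, seq))
```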

Relevance:

30.00%

Publisher:

Abstract:

In many practical situations, batching of similar jobs to avoid setups is performed while constructing a schedule. This paper addresses the problem of non-preemptively scheduling independent jobs in a two-machine flow shop with the objective of minimizing the makespan. Jobs are grouped into batches. A sequence independent batch setup time on each machine is required before the first job is processed, and when a machine switches from processing a job in some batch to a job of another batch. Besides its practical interest, this problem is a direct generalization of the classical two-machine flow shop problem with no grouping of jobs, which can be solved optimally by Johnson's well-known algorithm. The problem under investigation is known to be NP-hard. We propose two O(n log n) time heuristic algorithms. The first heuristic, which creates a schedule with minimum total setup time by forcing all jobs in the same batch to be sequenced in adjacent positions, has a worst-case performance ratio of 3/2. By allowing each batch to be split into at most two sub-batches, a second heuristic is developed which has an improved worst-case performance ratio of 4/3. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
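
To fix the timing model, the sketch below evaluates the makespan of a given batched schedule, charging each machine its setup time whenever it switches to a new batch. Setups are treated as non-anticipatory here, which may differ from the paper's exact convention, and the 3/2 and 4/3 heuristics themselves are not reproduced.

```python
# Sketch of makespan evaluation for a batched two-machine flow-shop schedule:
# each machine incurs its setup time whenever it starts a new batch.
# Illustrative timing convention only.
def batched_makespan(schedule, s1, s2):
    """schedule: list of (batch_id, p1, p2) in processing order."""
    c1 = c2 = 0
    prev1 = prev2 = None
    for batch, p1, p2 in schedule:
        if batch != prev1:
            c1 += s1                 # setup on machine 1 at a batch change
        c1 += p1
        prev1 = batch
        start2 = max(c1, c2)         # job available and machine 2 free
        if batch != prev2:
            start2 += s2             # setup on machine 2 at a batch change
        c2 = start2 + p2
        prev2 = batch
    return c2

sched = [("A", 3, 2), ("A", 2, 4), ("B", 1, 3), ("B", 4, 1)]
print("makespan:", batched_makespan(sched, s1=2, s2=1))
```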

Relevance:

30.00%

Publisher:

Abstract:

The paper considers the open shop scheduling problem to minimize the makespan, provided that one of the machines has to process the jobs according to a given sequence. We show that in the preemptive case the problem is polynomially solvable for an arbitrary number of machines. If preemption is not allowed, the problem is NP-hard in the strong sense if the number of machines is variable, and is NP-hard in the ordinary sense in the case of two machines. For the latter case we give a heuristic algorithm that runs in linear time and produces a schedule with the makespan that is at most 5/4 times the optimal value. We also show that the two-machine problem in the nonpreemptive case is solvable in pseudopolynomial time by a dynamic programming algorithm, and that the algorithm can be converted into a fully polynomial approximation scheme. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 705–731, 1998

Relevance:

30.00%

Publisher:

Abstract:

The paper considers the single machine due date assignment and scheduling problems with n jobs in which the due dates are to be obtained from the processing times by adding a positive slack q. A schedule is feasible if there are no tardy jobs and the job sequence respects given precedence constraints. The value of q is chosen so as to minimize a function ϕ(F, q) which is non-decreasing in each of its arguments, where F is a certain non-decreasing earliness penalty function. Once q is chosen or fixed, the corresponding scheduling problem is to find a feasible schedule with the minimum value of the function F. In the case of arbitrary precedence constraints the problems under consideration are shown to be NP-hard in the strong sense even for F being total earliness. If the precedence constraints are defined by a series-parallel graph, both the scheduling and due date assignment problems are proved solvable in time, provided that F is either the sum of linear functions or the sum of exponential functions. The running time of the algorithms can be reduced to if the jobs are independent.

Scope and purpose: We consider the single machine due date assignment and scheduling problems and design fast algorithms for their solution under a wide range of assumptions. The problems under consideration arise in production planning when the management is faced with the problem of setting realistic due dates for a number of orders. The due dates of the orders are determined by increasing the time needed for their fulfillment by a common positive slack. If the slack is set to be large enough, the due dates can be easily maintained, thereby producing a good image of the firm. This, however, may result in substantial holding costs for the finished products before they are brought to the customer. The objective is to explore the trade-off between the size of the slack and the arising holding costs for the early orders.
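
A small numeric sketch of the due-date model may help: with due dates d_j = p_j + q, the smallest feasible slack for a given sequence equals the latest job start time, and each job's earliness is d_j - C_j. Precedence constraints and the objective ϕ(F, q) are omitted, and the data are illustrative.

```python
# Sketch of the common-slack due-date model: due date = processing time + q.
# For a fixed sequence with no idle time, the minimal slack that keeps every
# job non-tardy is the latest start time; earliness of a job is d_j - C_j.
def slack_and_earliness(proc_in_sequence):
    c, starts, completions = 0, [], []
    for p in proc_in_sequence:
        starts.append(c)
        c += p
        completions.append(c)
    q = max(starts)                       # minimal slack with no tardy job
    earliness = [(p + q) - c for p, c in zip(proc_in_sequence, completions)]
    return q, sum(earliness)

print(slack_and_earliness([4, 1, 3, 2]))   # (q, total earliness)
```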

Relevance:

30.00%

Publisher:

Abstract:

This paper considers the problem of sequencing n jobs in a two-machine re-entrant shop with the objective of minimizing the maximum completion time. The shop consists of two machines, M1 and M2, and each job has the processing route (M1, M2, M1). An O(n log n) time heuristic is presented which generates a schedule with length at most 4/3 times that of an optimal schedule, thereby improving the best previously available worst-case performance ratio of 3/2.