903 results for Artificial intelligence -- Data processing
Abstract:
The eng-genes concept involves the use of fundamental known system functions as activation functions in a neural model to create a 'grey-box' neural network. One of the main issues in eng-genes modelling is to produce a parsimonious model given a model construction criterion. The challenges are that (1) the eng-genes model is in most cases a heterogeneous network consisting of more than one type of nonlinear basis function, and each basis function may have a different set of parameters to be optimised; and (2) the number of hidden nodes has to be chosen based on a model selection criterion. This is a hard mixed-integer problem, and this paper investigates the use of a forward selection algorithm to optimise both the network structure and the parameters of the system-derived activation functions. Results are included from case studies performed on a simulated continuously stirred tank reactor process, and using actual data from a pH neutralisation plant. The resulting eng-genes networks demonstrate superior simulation performance and transparency over a range of network sizes when compared to conventional neural models. (c) 2007 Elsevier B.V. All rights reserved.
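As a rough illustration of the forward selection idea described above, the sketch below greedily adds candidate basis functions (which may come from different nonlinear families) according to their reduction of the residual sum of squares; the function names, the single-candidate least-squares step and the stopping rule are assumptions of this sketch, not the authors' exact eng-genes procedure.

```python
# Greedy forward selection over a heterogeneous pool of candidate basis
# functions (hypothetical sketch). Each candidate is a fixed nonlinear feature
# column; the node that most reduces the residual sum of squares is added
# until the relative improvement falls below a tolerance.
import numpy as np

def forward_select(candidates, y, max_nodes=10, tol=1e-3):
    """candidates: list of 1-D arrays (one column per candidate basis function),
    y: target vector. Returns the indices of the selected nodes."""
    selected, residual = [], y.astype(float)
    for _ in range(max_nodes):
        best_idx, best_gain = None, 0.0
        for i, phi in enumerate(candidates):
            if i in selected:
                continue
            # Least-squares weight of this single candidate against the residual
            w = residual @ phi / (phi @ phi)
            gain = (w * phi) @ (2 * residual - w * phi)   # SSE reduction
            if gain > best_gain:
                best_idx, best_gain = i, gain
        if best_idx is None or best_gain < tol * (y @ y):
            break                                          # stopping criterion
        phi = candidates[best_idx]
        residual = residual - (residual @ phi / (phi @ phi)) * phi
        selected.append(best_idx)
    return selected
```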
Abstract:
The United States Supreme Court case of 1991, Feist Publications, Inc. v. Rural Tel. Service Co., continues to be highly significant for property in data and databases, but remains poorly understood. The approach taken in this article contrasts with previous studies. It focuses upon the “not original” rather than the original. The delineation of the absence of a modicum of creativity in selection, coordination, and arrangement of data as a component of the not original forms a pivotal point in the Supreme Court decision. The author also aims at elucidation rather than critique, using close textual exegesis of the Supreme Court decision. The results of the exegesis are translated into a more formal logical form to enhance clarity and rigor.
The insufficiently creative is initially characterized as “so mechanical or routine.” Mechanical and routine are understood in their ordinary discourse senses, as a conjunction or as connected by AND, and as the central clause. Subsequent clauses amplify the senses of mechanical and routine without disturbing their conjunction.
The delineation of the absence of a modicum of creativity can be correlated with classic conceptions of computability. The insufficiently creative can then be understood as a routine selection, coordination, or arrangement produced by an automatic mechanical procedure or algorithm. An understanding of a modicum of creativity and of copyright law is also indicated.
The value of the exegesis and interpretation is identified as its final simplicity, clarity, comprehensiveness, and potential practical utility.
Abstract:
This paper investigates the application of complex wavelet transforms to the field of digital data hiding. Complex wavelets offer improved directional selectivity and shift invariance over their discretely sampled counterparts, allowing watermark distortions to be better adapted to the host media. Two methods of deriving visual models for the watermarking system are adapted to the complex wavelet transforms and their performances are compared. To improve capacity, a spread transform embedding algorithm is devised, which combines the robustness of spread-spectrum methods with the high capacity of quantization-based methods. Using established information-theoretic methods, limits of watermark capacity are derived that demonstrate the superiority of complex wavelets over discretely sampled wavelets. Finally, results for the algorithm against commonly used attacks demonstrate its robustness and the improved performance offered by complex wavelet transforms.
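As a hedged illustration of the spread-transform idea, the toy sketch below embeds one bit by quantizing the projection of a block of coefficients onto a spreading vector (dither modulation); the function names and the quantization step size are illustrative assumptions, not the paper's embedder.

```python
# Toy spread-transform dither modulation: project the host coefficients onto a
# spreading vector, quantize the projection to one of two dithered lattices to
# encode a bit, and spread the correction back along that direction.
import numpy as np

def st_embed(coeffs, bit, spread, delta=2.0):
    """coeffs: 1-D array of wavelet coefficients, bit: 0 or 1,
    spread: unit-norm spreading vector of the same length."""
    proj = coeffs @ spread
    dither = 0.0 if bit == 0 else delta / 2.0
    quantized = delta * np.round((proj - dither) / delta) + dither
    return coeffs + (quantized - proj) * spread

def st_extract(coeffs, spread, delta=2.0):
    proj = coeffs @ spread
    # Choose the bit whose dithered quantizer lies closest to the projection
    d0 = abs(proj - delta * np.round(proj / delta))
    d1 = abs(proj - (delta * np.round((proj - delta / 2) / delta) + delta / 2))
    return 0 if d0 <= d1 else 1
```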
Abstract:
A time-of-flight (ToF) mass spectrometer suitable, in terms of sensitivity, detector response and time resolution, for fast transient Temporal Analysis of Products (TAP) kinetic catalyst characterization is reported. Technical difficulties associated with such an application, as well as the solutions implemented as adaptations of the ToF apparatus, are discussed. The performance of the ToF was validated, and the full linearity of the detector over its dynamic range was explored to ensure its applicability to TAP experiments. The reported TAP-ToF setup is the first system to achieve the level of sensitivity that allows monitoring of the full 0-200 AMU range simultaneously with sub-millisecond time resolution. In this new setup, the high sensitivity allows the use of low-intensity pulses, ensuring that transport through the reactor occurs in the Knudsen diffusion regime and that the data can therefore be fully analysed using the reported theoretical TAP models and data processing.
Abstract:
This paper presents a feature selection method for data classification, which combines a model-based variable selection technique and a fast two-stage subset selection algorithm. The relationship between a specified (and complete) set of candidate features and the class label is modelled using a non-linear full regression model which is linear-in-the-parameters. The performance of a sub-model, measured by the sum of squared errors (SSE), is used to score the informativeness of the subset of features involved in that sub-model. The two-stage subset selection algorithm converges to a solution sub-model in which the SSE is locally minimized. The features involved in the solution sub-model are selected as inputs to support vector machines (SVMs) for classification. The memory requirement of this algorithm is independent of the number of training patterns, a property that makes the method suitable for applications executed on mobile devices where physical RAM is very limited. An application was developed for activity recognition, which implements the proposed feature selection algorithm and an SVM training procedure. Experiments are carried out with the application running on a PDA for human activity recognition using accelerometer data. A comparison with an information-gain-based feature selection method demonstrates the effectiveness and efficiency of the proposed algorithm.
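The sketch below illustrates the general idea under stated assumptions: features are added greedily to a linear-in-the-parameters model of the class label, scored by the SSE, and the chosen subset then feeds an SVM. The helper name sse_forward_select and the plain forward search are illustrative, not the paper's fast two-stage algorithm.

```python
# Forward feature selection scored by the SSE of a linear-in-the-parameters
# model, followed by SVM classification on the selected subset (sketch).
import numpy as np
from sklearn.svm import SVC

def sse_forward_select(X, y, k):
    """X: (n, d) feature matrix, y: numeric class labels, k: subset size."""
    selected = []
    for _ in range(k):
        best_j, best_sse = None, np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            cols = X[:, selected + [j]]
            theta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            sse = np.sum((y - cols @ theta) ** 2)
            if sse < best_sse:
                best_j, best_sse = j, sse
        selected.append(best_j)
    return selected

# Usage sketch: pick 5 features, then classify with an SVM on the reduced inputs.
# feats = sse_forward_select(X_train, y_train.astype(float), 5)
# clf = SVC(kernel='rbf').fit(X_train[:, feats], y_train)
```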
Abstract:
This paper describes the application of an improved nonlinear principal component analysis (PCA) to the detection of faults in polymer extrusion processes. Since the processes are complex in nature and nonlinear relationships exist between the recorded variables, an improved nonlinear PCA, which incorporates the radial basis function (RBF) networks and principal curves, is proposed. This algorithm comprises two stages. The first stage involves the use of the serial principal curve to obtain the nonlinear scores and approximated data. The second stage is to construct two RBF networks using a fast recursive algorithm to solve the topology problem in traditional nonlinear PCA. The benefits of this improvement are demonstrated in the practical application to a polymer extrusion process.
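As a rough sketch of the second stage only, the code below fits two Gaussian RBF networks by ordinary least squares, one mapping the data to the nonlinear scores and one mapping the scores back to a reconstruction; the centre selection, width and solver are assumptions of this sketch and do not reproduce the paper's fast recursive algorithm.

```python
# Stage-two sketch: given nonlinear scores t from a principal-curve fit, one
# RBF network maps x -> t and another maps t -> x_hat for reconstruction.
import numpy as np

def rbf_design(X, centres, width):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf(X, T, n_centres=20, width=1.0):
    """Return (centres, weights) so that rbf_design(X, centres, width) @ weights ~ T."""
    idx = np.random.choice(len(X), n_centres, replace=False)   # centres drawn from the data
    centres = X[idx]
    Phi = rbf_design(X, centres, width)
    W, *_ = np.linalg.lstsq(Phi, T, rcond=None)
    return centres, W

# c_f, W_f = fit_rbf(X, t_scores)   # forward net: data -> nonlinear scores
# c_b, W_b = fit_rbf(t_scores, X)   # backward net: scores -> reconstruction for fault residuals
```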
Abstract:
In this paper, we present a novel approach to person verification by fusing face and lip features. Specifically, the face is modeled by the discriminative common vector and the discrete wavelet transform. Our lip features are simple geometric features based on a lip contour, which can be interpreted as multiple spatial widths and heights measured from the center of mass. To combine these features, we consider two simple fusion strategies, data fusion before training and score fusion after training, working with two different face databases. Fusing the two modalities boosts performance, achieving equal error rates as low as 0.4% and 0.28%, respectively, and confirming that our approach of fusing lips and face is effective and promising.
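A minimal sketch of the score-fusion strategy is given below; the z-score normalization and the fusion weights are illustrative assumptions, not values reported in the paper.

```python
# Score fusion: each modality produces a match score, the scores are
# z-normalized and combined by a weighted sum, and verification thresholds
# the fused score.
import numpy as np

def fuse_scores(face_scores, lip_scores, w_face=0.6, w_lip=0.4):
    z = lambda s: (s - s.mean()) / s.std()
    return w_face * z(np.asarray(face_scores)) + w_lip * z(np.asarray(lip_scores))

# accept = fuse_scores(face_s, lip_s) > threshold   # threshold tuned, e.g. at the EER point
```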
Abstract:
It is convenient and effective to solve nonlinear problems with a model that has a linear-in-the-parameters (LITP) structure. However, the nonlinear parameters of each model term (e.g. the width of a Gaussian function) need to be pre-determined, either from expert experience or through exhaustive search. An alternative approach is to optimize them by a gradient-based technique (e.g. Newton's method). Unfortunately, all of these methods still require considerable computation. Recently, the extreme learning machine (ELM) has shown its advantages in terms of fast learning from data, but the sparsity of the constructed model cannot be guaranteed. This paper proposes a novel algorithm for the automatic construction of a nonlinear system model based on the extreme learning machine. This is achieved by effectively integrating the ELM and leave-one-out (LOO) cross-validation with our two-stage stepwise construction procedure [1]. The main objective is to improve the compactness and generalization capability of the model constructed by the ELM method. Numerical analysis shows that the proposed algorithm involves only about half the computation of the orthogonal least squares (OLS) based method. Simulation examples are included to confirm the efficacy and superiority of the proposed technique.
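The sketch below shows one plausible way to combine an ELM with LOO selection via the PRESS statistic, assuming a tanh hidden layer and ridge-regularized output weights; it does not reproduce the two-stage stepwise construction procedure of [1].

```python
# ELM with leave-one-out error from the PRESS statistic (sketch). Hidden-layer
# parameters are drawn at random; only the output weights are solved analytically.
import numpy as np

def elm_press(X, y, n_hidden, reg=1e-6, seed=0):
    """Return (output weights, LOO mean-squared error) for one random ELM."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))        # random input weights
    b = rng.normal(size=n_hidden)                       # random biases
    H = np.tanh(X @ W + b)                              # hidden-layer outputs
    A = H.T @ H + reg * np.eye(n_hidden)
    beta = np.linalg.solve(A, H.T @ y)                  # ridge least-squares output weights
    # PRESS: leave-one-out residuals from the hat-matrix diagonal
    hat_diag = np.einsum('ij,ji->i', H @ np.linalg.inv(A), H.T)
    loo_resid = (y - H @ beta) / (1.0 - hat_diag)
    return beta, np.mean(loo_resid ** 2)

# Model selection sketch: pick the hidden-layer size with the smallest LOO error.
# best = min(range(2, 40), key=lambda m: elm_press(X_train, y_train, m)[1])
```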
Abstract:
A technique for automatic exploration of the genetic search region through fuzzy coding (Sharma and Irwin, 2003) has been proposed. Fuzzy coding (FC) provides the value of a variable on the basis of the optimum number of selected fuzzy sets and their effectiveness in terms of degree-of-membership. It is an indirect encoding method and has been shown to perform better than other conventional binary, Gray and floating-point encoding methods. However, the static range of the membership functions is a major problem in fuzzy coding, resulting in longer times to arrive at an optimum solution in large or complicated search spaces. This paper proposes a new algorithm, called fuzzy coding with a dynamic range (FCDR), which dynamically allocates the range of the variables to evolve an effective search region, thereby achieving faster convergence. Results are presented for two benchmark optimisation problems, and also for a case study involving neural identification of a highly non-linear pH neutralisation process from experimental data. It is shown that dynamic exploration of the genetic search region is effective for parameter optimisation in problems where the search space is complicated.
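As a simplified illustration, the sketch below decodes one variable from fuzzy-coded genes by a membership-weighted average of fuzzy-set centres and re-centres the search range around the current best solution; the set shapes, the defuzzification rule and the shrink factor are assumptions, not the exact FCDR algorithm.

```python
# Fuzzy-coded decoding and a dynamic-range update (illustrative sketch).
import numpy as np

def decode(memberships, lo, hi, n_sets=5):
    """memberships: gene-supplied membership degrees for n_sets fuzzy sets
    covering [lo, hi]; returns the decoded variable value."""
    centres = np.linspace(lo, hi, n_sets)               # evenly spaced fuzzy-set centres
    m = np.clip(np.asarray(memberships, dtype=float), 0.0, 1.0)
    return float(centres @ m / m.sum()) if m.sum() > 0 else 0.5 * (lo + hi)

def update_range(best_value, lo, hi, shrink=0.7):
    """Shrink the variable range and re-centre it on the current best solution."""
    half = 0.5 * shrink * (hi - lo)
    return best_value - half, best_value + half
```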
Abstract:
In the literature, politeness has been researched within many disciplines. Although Brown and Levinson’s theory of politeness (1978, 1987) is often cited, it is primarily a linguistic theory and has been criticized for its lack of generalizability to all cultures. Consequently, there is a need for a more comprehensive approach to understand and explain politeness. We suggest applying a social signal framework that considers politeness as a communicative state. By doing so, we aim to unify and explain politeness and its corresponding research and identify further research needed in this area.
Abstract:
Traditional static analysis fails to auto-parallelize programs with complex control and data flow. Furthermore, thread-level parallelism in such programs is often restricted to pipeline parallelism, which can be hard for a programmer to discover. In this paper we propose a tool that, based on profiling information, helps the programmer to discover parallelism. The programmer hand-picks the code transformations from among the proposed candidates, which are then applied by automatic code transformation techniques.
This paper contributes to the literature by presenting a profiling tool for discovering thread-level parallelism. We track dependencies at the whole-data-structure level, rather than at the element or byte level, in order to limit the profiling overhead. We perform a thorough analysis of the needs and costs of this technique. Furthermore, we present and validate the belief that programs with complex control and data flow contain significant amounts of exploitable coarse-grain pipeline parallelism in their outer loops. This observation validates our approach of tracking whole-data-structure dependencies. As state-of-the-art compilers focus on loops iterating over data structure members, it also explains why our approach finds coarse-grain pipeline parallelism in cases that have remained out of reach for state-of-the-art compilers. In cases where traditional compilation techniques do find parallelism, our approach discovers higher degrees of parallelism, yielding a 40% speedup over traditional compilation techniques. Moreover, we demonstrate real speedups on multiple hardware platforms.
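The toy sketch below illustrates, in Python, the idea of tracking dependencies at whole-data-structure granularity: accesses are logged per structure per loop iteration, and a read of a value written in an earlier iteration is flagged as a cross-iteration dependence. This is a conceptual illustration only; the actual tool profiles compiled programs rather than Python code.

```python
# Whole-data-structure dependence tracking (conceptual sketch). Structures with
# no cross-iteration dependences are candidates for pipelining across iterations.
from collections import defaultdict

class StructureProfiler:
    def __init__(self):
        self.last_write = {}                       # structure name -> last writing iteration
        self.cross_iter = defaultdict(int)         # structure name -> cross-iteration read count

    def write(self, name, iteration):
        self.last_write[name] = iteration

    def read(self, name, iteration):
        if name in self.last_write and self.last_write[name] < iteration:
            self.cross_iter[name] += 1             # value produced in an earlier iteration

    def report(self):
        return dict(self.cross_iter)               # structures that block parallel iterations

# p = StructureProfiler()
# for i, block in enumerate(blocks):
#     p.write('packet_queue', i); p.read('packet_queue', i)   # same-iteration use only
```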
Abstract:
With the significant increase in the number of digital cameras used for various purposes, there is a pressing demand for advanced video analysis techniques that can systematically interpret and understand the semantics of video content recorded for security surveillance, intelligent transportation, health care, video retrieval and summarization. Understanding and interpreting human behaviour through video analysis faces considerable challenges due to non-rigid human motion, self- and mutual occlusions, and changes in lighting conditions. To address these problems, advanced image and signal processing techniques such as neural networks, fuzzy logic, probabilistic estimation theory and statistical learning have been extensively investigated.
Abstract:
Colour-based particle filters have been used extensively in the literature, giving rise to multiple applications. However, tracking coloured objects through time has an important drawback, since the way in which the camera perceives the colour of the object can change. Simple updates are often used to address this problem, but they imply a risk of distorting the model and losing the target. In this paper, a joint image characteristic-space tracking is proposed, which updates the model simultaneously with the object location. In order to avoid the curse of dimensionality, a Rao-Blackwellised particle filter has been used. Using this technique, the hypotheses are evaluated depending on the difference between the model and the current target appearance during the updating stage. Convincing results have been obtained in sequences under both sudden and gradual illumination condition changes. Crown Copyright (C) 2010 Published by Elsevier B.V. All rights reserved.
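As a simplified illustration of joint location/appearance tracking, the sketch below propagates particles over positions, weights them by a Bhattacharyya colour-histogram similarity and updates each particle's colour model conditionally on its hypothesis; the exponential model update stands in for the paper's Rao-Blackwellised marginalisation and, together with frame_hist and the parameter values, is an assumption of this sketch.

```python
# One step of a joint location/appearance particle filter (illustrative sketch).
import numpy as np

def bhattacharyya(p, q):
    return np.sqrt(p * q).sum()

def step(particles, models, frame_hist, motion_std=5.0, alpha=0.1):
    """particles: (N, 2) positions, models: (N, B) colour histograms,
    frame_hist(pos): histogram of the frame patch at pos (B bins, sums to 1)."""
    particles = particles + np.random.normal(0, motion_std, particles.shape)  # propagate
    weights = np.empty(len(particles))
    for i, pos in enumerate(particles):
        obs = frame_hist(pos)
        weights[i] = bhattacharyya(models[i], obs)             # appearance likelihood
        models[i] = (1 - alpha) * models[i] + alpha * obs      # per-hypothesis model update
    weights /= weights.sum()
    idx = np.random.choice(len(particles), len(particles), p=weights)  # resample
    return particles[idx], models[idx], weights
```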