998 results for Maximal Models
Abstract:
Traffic classification using machine learning continues to be an active research area. The majority of work in this area uses off-the-shelf machine learning tools and treats them as black-box classifiers. This approach turns all the modelling complexity into a feature selection problem. In this paper, we build a problem-specific solution to the traffic classification problem by designing a custom probabilistic graphical model. Graphical models are a modular framework for designing classifiers that incorporate domain-specific knowledge. More specifically, our solution introduces semi-supervised learning, which means we learn from both labelled and unlabelled traffic flows. We show that our solution performs competitively compared to previous approaches while using less data and simpler features. Copyright © 2010 ACM.
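The semi-supervised ingredient can be illustrated with a minimal, generic sketch: an EM-style Gaussian naive Bayes classifier that clamps the few labelled flows and lets the unlabelled flows refine the class parameters. This is only an illustration of the general idea, not the paper's custom graphical model, and the two flow features used here are hypothetical placeholders.

```python
# Minimal sketch: semi-supervised EM for a Gaussian naive Bayes flow classifier.
# Generic illustration, not the paper's custom graphical model; the flow
# features (mean packet size, inter-arrival time) are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

# A few labelled flows per class, plus many unlabelled flows.
X_lab = np.array([[60, 0.01], [1400, 0.20], [70, 0.02], [1300, 0.25]], float)
y_lab = np.array([0, 1, 0, 1])                      # 0 = interactive, 1 = bulk
X_unl = np.vstack([rng.normal([65, 0.015], [10, 0.005], (50, 2)),
                   rng.normal([1350, 0.22], [80, 0.04], (50, 2))])

def fit_nb(X, w):
    """Weighted per-class Gaussian parameters (w: responsibility per sample)."""
    mu = (w[:, None] * X).sum(0) / w.sum()
    var = (w[:, None] * (X - mu) ** 2).sum(0) / w.sum() + 1e-6
    return mu, var, w.sum()

def log_lik(X, mu, var):
    return -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(1)

# Initialise responsibilities from the labels, then run EM using all flows;
# labelled responsibilities stay clamped, unlabelled ones are re-estimated.
X_all = np.vstack([X_lab, X_unl])
resp = np.zeros((len(X_all), 2))
resp[:len(X_lab)] = np.eye(2)[y_lab]
resp[len(X_lab):] = 0.5
for _ in range(20):
    params = [fit_nb(X_all, resp[:, k]) for k in range(2)]        # M-step
    ll = np.stack([log_lik(X_all, mu, var) + np.log(n) for mu, var, n in params], 1)
    post = np.exp(ll - ll.max(1, keepdims=True))
    post /= post.sum(1, keepdims=True)                            # E-step
    resp[len(X_lab):] = post[len(X_lab):]

print("predicted class of first unlabelled flow:", post[len(X_lab)].argmax())
```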
Abstract:
In this paper, we describe models and algorithms for detection and tracking of group and individual targets. We develop two novel group dynamical models, within a continuous time setting, that aim to mimic behavioural properties of groups. We also describe two possible ways of modelling interactions between closely spaced targets, using a Markov Random Field (MRF) and repulsive forces. These can be combined together with a group structure transition model to create realistic evolving group models. We use a Markov Chain Monte Carlo (MCMC)-Particles Algorithm to perform sequential inference. Computer simulations demonstrate the ability of the algorithm to detect and track targets within groups, as well as infer the correct group structure over time. ©2008 IEEE.
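A pairwise repulsive factor of the kind an MRF-style prior might use can be sketched very compactly: configurations in which targets sit too close together receive lower weight. The Gaussian potential and its constants below are illustrative assumptions, not the paper's exact model.

```python
# Minimal sketch of a pairwise repulsive (MRF-style) prior over target positions.
# The potential phi(d) = strength * exp(-(d/radius)^2) and its constants are
# illustrative assumptions, not the paper's model.
import numpy as np

def repulsion_log_weight(positions, strength=5.0, radius=2.0):
    """Sum a pairwise penalty over all target pairs; return the log of the factor."""
    penalty = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(positions[i] - positions[j])
            penalty += strength * np.exp(-(d / radius) ** 2)
    return -penalty            # log of the unnormalised MRF factor

# Two hypothetical configurations of three targets in 2D.
spread_out = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
bunched_up = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5]])
print(repulsion_log_weight(spread_out))   # near 0: little penalty
print(repulsion_log_weight(bunched_up))   # strongly negative: heavy penalty
```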
Abstract:
Standard algorithms in tracking and other state-space models assume identical and synchronous sampling rates for the state and measurement processes. However, real trajectories of objects are typically characterized by prolonged smooth sections, with sharp, but infrequent, changes. Thus, a more parsimonious representation of a target trajectory may be obtained by direct modeling of maneuver times in the state process, independently from the observation times. This is achieved by assuming the state arrival times to follow a random process, typically specified as Markovian, so that state points may be allocated along the trajectory according to the degree of variation observed. The resulting variable dimension state inference problem is solved by developing an efficient variable rate particle filtering algorithm to recursively update the posterior distribution of the state sequence as new data becomes available. The methodology is quite general and can be applied across many models where dynamic model uncertainty occurs on-line. Specific models are proposed for the dynamics of a moving object under internal forcing, expressed in terms of the intrinsic dynamics of the object. The performance of the algorithms with these dynamical models is demonstrated on several challenging maneuvering target tracking problems in clutter. © 2006 IEEE.
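The core idea of decoupling maneuver times from observation times can be sketched with a toy particle filter in which each particle carries its own random maneuver times. The 1D constant-velocity-between-maneuvers dynamics, the exponential (memoryless, hence Markovian) arrival process, and all noise levels below are illustrative assumptions, not the paper's specific models.

```python
# Minimal sketch of a variable rate particle filter: each particle's maneuver
# times are drawn from an exponential arrival process, independently of the
# fixed observation times. All models and constants here are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N = 500                                              # number of particles
obs_times = np.arange(1.0, 10.0, 1.0)
obs = 2.0 * obs_times + rng.normal(0.0, 0.3, len(obs_times))   # toy measurements

particles = [{"t": 0.0, "x": 0.0, "v": 2.0} for _ in range(N)]

for t_obs, y in zip(obs_times, obs):
    for p in particles:
        # Insert maneuver times (exponential inter-arrivals) up to the observation time.
        while True:
            dt = rng.exponential(scale=3.0)
            if p["t"] + dt > t_obs:
                break
            p["t"] += dt
            p["x"] += p["v"] * dt                    # propagate to the maneuver time
            p["v"] += rng.normal(0.0, 1.0)           # new velocity drawn at the maneuver
        p["x"] += p["v"] * (t_obs - p["t"])          # coast to the observation time
        p["t"] = t_obs
    # Weight by a Gaussian observation likelihood, then resample.
    w = np.array([np.exp(-0.5 * ((y - p["x"]) / 0.3) ** 2) for p in particles]) + 1e-300
    w /= w.sum()
    particles = [dict(particles[i]) for i in rng.choice(N, size=N, p=w)]

print("posterior mean position at t = 9:", np.mean([p["x"] for p in particles]))
```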
Abstract:
The Chinese language is based on characters which are syllabic in nature. Since languages have syllabotactic rules which govern the construction of syllables and their allowed sequences, Chinese character sequence models can be used as a first-level approximation of allowed syllable sequences. N-gram character sequence models were trained on 4.3 billion characters. Characters are used as a first-level recognition unit with multiple pronunciations per character. For comparison, the CU-HTK Mandarin word-based system was used to recognize words, which were then converted to character sequences. The character-only system error rates for one-best recognition were slightly worse than word-based character recognition. However, combining the two systems using log-linear combination gives better results than either system separately. An equally weighted combination gave consistent CER gains of 0.1-0.2% absolute over the word-based standard system. Copyright © 2009 ISCA.
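The log-linear combination step can be illustrated with a small sketch: each competing character-sequence hypothesis gets a weighted sum of the two systems' log scores, and the best-scoring hypothesis is selected. The hypothesis names and scores below are made up; in the actual system the combination is applied to real recognition hypotheses.

```python
# Minimal sketch of log-linearly combining word-based and character-based
# system scores for the same candidate character sequences. All numbers are
# hypothetical placeholders.
import math

candidates = {
    "hyp_A": {"word_lm": math.log(0.50), "char_lm": math.log(0.30)},
    "hyp_B": {"word_lm": math.log(0.30), "char_lm": math.log(0.45)},
    "hyp_C": {"word_lm": math.log(0.20), "char_lm": math.log(0.25)},
}

def combined_score(scores, w_word=0.5, w_char=0.5):
    # Equally weighted log-linear combination, matching the 0.5/0.5 setup reported.
    return w_word * scores["word_lm"] + w_char * scores["char_lm"]

best = max(candidates, key=lambda h: combined_score(candidates[h]))
print("selected hypothesis:", best)
```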
Abstract:
Turbomachinery noise radiating into the rearward arc is an important problem. This noise is scattered by the trailing edges of the nacelle and the jet exhaust, and interacts with the shear layers between the external flow, bypass stream and jet, en route to the far field. In the past a range of relevant model problems involving semi-infinite cylinders have been solved. However, one limitation of these previous solutions is that they do not allow for the jet nozzle protruding a finite distance beyond the end of the nacelle (or in certain configurations being buried a finite distance upstream). With this in mind, we have used the matrix Wiener-Hopf technique to allow precisely this finite nacelle-jet nozzle separation to be included. We have previously reported results for the case of hard-walled ducts, which requires factorisation of a 2 × 2 matrix. In this paper we extend this work by allowing one of the duct walls, in this case the outer wall of the jet pipe, to be acoustically lined. This results in the need to factorise a 3 × 3 matrix, which is completed by use of a combination of pole-removal and Padé approximant techniques. Sample results are presented, investigating in particular the effects of exit plane stagger and liner impedance. Here we take the mean flow to be zero, but extension to nonzero Mach numbers in the core and bypass flow has also been completed. Copyright © 2009 by Nigel Peake & Ben Veitch.
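For orientation, the scalar analogue of the factorisation at the heart of this method can be written down in the standard textbook form; the 3 × 3 matrix kernel treated in the paper requires the pole-removal and Padé approximant machinery described in the abstract rather than this closed-form split.

```latex
% Scalar Wiener--Hopf split (standard Cauchy-integral form), given only as the
% scalar analogue of the matrix factorisation used in the paper.
\[
  K(\alpha) = K_{+}(\alpha)\,K_{-}(\alpha),
  \qquad
  K_{\pm}(\alpha)
  = \exp\!\left\{ \pm\frac{1}{2\pi i}
      \int_{-\infty}^{\infty} \frac{\log K(\zeta)}{\zeta - \alpha}\, d\zeta \right\},
\]
where $K_{+}$ is analytic in the upper half of the complex $\alpha$-plane and
$K_{-}$ in the lower half, the two formulas applying for
$\operatorname{Im}\alpha \gtrless 0$ respectively.
```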
Abstract:
Existing devices for communicating information to computers are bulky, slow to use, or unreliable. Dasher is a new interface incorporating language modelling and driven by continuous two-dimensional gestures, delivered e.g. via a mouse, touchscreen, or eye-tracker. Tests have shown that this device can be used to enter text at a rate of up to 34 words per minute, compared with typical ten-finger keyboard typing of 40-60 words per minute. Although the interface is slower than a conventional keyboard, it is small and simple, and could be used on personal data assistants and by motion-impaired computer users.
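The way the language model drives the interface can be sketched in a few lines: the display interval is divided among candidate next characters in proportion to their predicted probabilities, so likely characters become larger, easier-to-reach targets. The toy probabilities below are made up, not Dasher's actual model.

```python
# Minimal sketch of probability-proportional interval allocation, the idea
# behind Dasher's display. The toy unigram probabilities are placeholders.
probs = {"e": 0.40, "t": 0.25, "a": 0.20, "q": 0.10, "z": 0.05}

def allocate_intervals(probs):
    intervals, low = {}, 0.0
    for ch, p in probs.items():
        intervals[ch] = (low, low + p)   # each character gets a slice of [0, 1)
        low += p
    return intervals

for ch, (lo, hi) in allocate_intervals(probs).items():
    print(f"{ch}: [{lo:.2f}, {hi:.2f})  height {hi - lo:.2f}")
```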
Abstract:
Approximate Bayesian computation (ABC) is a popular technique for analysing data for complex models where the likelihood function is intractable. It involves using simulation from the model to approximate the likelihood, with this approximate likelihood then being used to construct an approximate posterior. In this paper, we consider methods that estimate the parameters by maximizing the approximate likelihood used in ABC. We give a theoretical analysis of the asymptotic properties of the resulting estimator. In particular, we derive results analogous to those of consistency and asymptotic normality for standard maximum likelihood estimation. We also discuss how sequential Monte Carlo methods provide a natural method for implementing our likelihood-based ABC procedures.
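A minimal sketch of the likelihood-maximisation idea: at each candidate parameter value, the approximate likelihood is estimated by simulating from the model and kernel-smoothing the discrepancy between simulated and observed summary statistics, and the estimate is then maximised over a grid. The Gaussian model, the sample-mean summary, and the bandwidth below are illustrative choices, not the paper's general setting (and a grid search stands in for the sequential Monte Carlo implementation discussed there).

```python
# Minimal sketch of maximising an ABC (simulation-based) approximate likelihood
# on a grid. Model, summary statistic, and bandwidth are illustrative choices.
import numpy as np

rng = np.random.default_rng(2)
y_obs = rng.normal(1.5, 1.0, size=100)      # "observed" data with true mean 1.5
s_obs = y_obs.mean()                        # summary statistic

def abc_log_lik(theta, n_sim=2000, h=0.05):
    # Simulate n_sim datasets at theta and kernel-smooth the summary discrepancy.
    s_sim = rng.normal(theta, 1.0, size=(n_sim, 100)).mean(axis=1)
    kern = np.exp(-0.5 * ((s_sim - s_obs) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return np.log(kern.mean() + 1e-300)

grid = np.linspace(0.5, 2.5, 41)
log_liks = [abc_log_lik(t) for t in grid]
print("ABC maximum-likelihood estimate:", grid[int(np.argmax(log_liks))])
```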
Abstract:
Based on the scaling criteria for polymer flooding reservoirs obtained in our previous work, in which gravity and capillary forces, compressibility, non-Newtonian behavior, absorption, dispersion, and diffusion are considered, eight partial similarity models are designed. A new numerical approach to sensitivity analysis is suggested to quantify the dominance degree of the relaxed dimensionless parameters of a partial similarity model. A sensitivity factor quantifying the dominance degree of each relaxed dimensionless parameter is defined. By solving the dimensionless governing equations including all dimensionless parameters, the sensitivity factor of each relaxed dimensionless parameter is calculated for each partial similarity model; thus, the dominance degree of each relaxed parameter is quantitatively determined. Based on the sensitivity analysis, the effect coefficient of a partial similarity model is defined as the sum, over its relaxed dimensionless parameters, of the product of each parameter's sensitivity factor and its relative relaxation quantity. The effect coefficient is used as a criterion to evaluate each partial similarity model, so the partial similarity model with the smallest effect coefficient can be singled out as the best approximation to the prototype. Results show that the precision of a partial similarity model is determined not only by the number of satisfied dimensionless parameters but also by the relative relaxation quantities of the relaxed ones.
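The effect-coefficient ranking described above reduces to a simple computation: for each partial similarity model, sum the products of sensitivity factor and relative relaxation quantity over its relaxed dimensionless parameters, then pick the model with the smallest sum. The sketch below uses hypothetical numbers, not the paper's data.

```python
# Minimal sketch of the effect-coefficient criterion: sum (sensitivity factor
# x relative relaxation quantity) over the relaxed dimensionless parameters of
# each partial similarity model. The values are hypothetical placeholders.
models = {
    "model_1": [(0.8, 0.10), (0.3, 0.40)],   # (sensitivity factor, relative relaxation)
    "model_2": [(0.5, 0.05)],
    "model_3": [(0.9, 0.30), (0.2, 0.20), (0.1, 0.50)],
}

def effect_coefficient(relaxed_params):
    return sum(s * r for s, r in relaxed_params)

coeffs = {name: effect_coefficient(p) for name, p in models.items()}
best = min(coeffs, key=coeffs.get)
print(coeffs, "-> closest to the prototype:", best)
```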
Abstract:
The transitions between the different contact models, including the Hertz, Bradley, Johnson-Kendall-Roberts (JKR), Derjaguin-Muller-Toporov (DMT) and Maugis-Dugdale (MD) models, are revealed by analyzing their contact pressure profiles and surface interactions. Inside the contact area, the surface interaction/adhesion induces tensile contact pressure around the contact edge. Outside the contact area, whether or not the surface interaction is considered has a significant influence on the equilibrium of the contact system. The difference in contact pressure due to the surface interaction inside the contact area, and the equilibrium influenced by the surface interaction outside the contact area, are physically responsible for the different results of the different models. A systematic study of the transitions between the different models is presented by analyzing the contact pressure profiles and the surface interactions both inside and outside the contact area. The definitions of contact radius and the flatness of contact surfaces are also discussed. (C) Koninklijke Brill NV, Leiden, 2008.
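The practical consequence of the different treatments of adhesion can be seen by comparing the contact radii the standard textbook formulas predict for a sphere on a half-space under the same load. This is background to the models named above, not the paper's own analysis, and the material values are hypothetical.

```python
# Minimal sketch comparing contact-radius predictions of the Hertz, DMT, and
# JKR models for a sphere on a half-space (standard textbook formulas).
# Material values are hypothetical, purely for illustration.
import numpy as np

R = 10e-6        # sphere radius (m)
E_star = 1e9     # reduced elastic modulus (Pa)
w = 0.05         # work of adhesion (J/m^2)
F = 1e-6         # applied load (N)

a_hertz = (3 * R * F / (4 * E_star)) ** (1 / 3)
a_dmt = (3 * R * (F + 2 * np.pi * w * R) / (4 * E_star)) ** (1 / 3)
a_jkr = (3 * R / (4 * E_star) *
         (F + 3 * np.pi * w * R +
          np.sqrt(6 * np.pi * w * R * F + (3 * np.pi * w * R) ** 2))) ** (1 / 3)

# Adhesion enlarges the contact: Hertz < DMT < JKR at the same load.
print(f"a_Hertz = {a_hertz:.3e} m, a_DMT = {a_dmt:.3e} m, a_JKR = {a_jkr:.3e} m")
```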
Abstract:
Two types of peeling experiments are performed in the present research. One is for the Al film/Al2O3 substrate system with an adhesive layer between the film and the substrate. The other is for the Cu film/Al2O3 substrate system without an adhesive layer between the film and the substrate, in which the Cu films are electroplated onto the Al2O3 substrates. For the case with an adhesive layer, two kinds of adhesives are selected, both mixtures of epoxy and polyimide, with mass ratios of 1:1.5 and 1:1, respectively. The relationships between the energy release rate, the film thickness and the adhesive layer thickness are measured during the steady-state peeling process, and the effects of the adhesive layer on the energy release rate are analyzed. Using the experimental results, several analytical criteria for steady-state peeling, based on the beam bending model and on a two-dimensional finite element analysis model, are critically assessed. Through this assessment we find that the cohesive zone criterion based on the beam bending model is suitable for the weak-interface-strength case and describes a macroscale fracture process zone, while the two-dimensional finite element model is effective for both strong and weak interfaces and describes a small-scale fracture process zone. (C) 2007 Elsevier Ltd. All rights reserved.
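As background to the energy release rate extracted from such peel tests, the classical steady-state relation for a thin, flexible, inextensible film is given below; the paper's own criteria go further by accounting for film bending and the adhesive layer.

```latex
% Classical steady-state peel relation (thin, flexible, inextensible film),
% quoted only as background; the paper's criteria include bending and the
% adhesive layer.
\[
  G \;=\; \frac{P}{b}\,\bigl(1 - \cos\theta\bigr),
\]
where $P$ is the peel force, $b$ the film width, and $\theta$ the peel angle.
```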