857 results for Facial Object Based Method
Abstract:
With the rapid increase in both centralized video archives and distributed WWW video resources, content-based video retrieval is gaining in importance. To support such applications efficiently, content-based video indexing must be addressed. Typically, each video is represented by a sequence of frames. Due to the high dimensionality of frame representation and the large number of frames, video indexing introduces an additional degree of complexity. In this paper, we address the problem of content-based video indexing and propose an efficient solution, called the Ordered VA-File (OVA-File), based on the VA-file. OVA-File is a hierarchical structure with two novel features: 1) it partitions the whole file into slices such that only a small number of slices are accessed and checked during k Nearest Neighbor (kNN) search, and 2) it handles insertions of new vectors into the OVA-File efficiently, such that the average distance between the new vectors and the approximations near that position is minimized. To facilitate search, we present an efficient approximate kNN algorithm named Ordered VA-LOW (OVA-LOW) based on the proposed OVA-File. OVA-LOW first chooses candidate OVA-Slices by ranking the distances between their corresponding centers and the query vector, and then visits all approximations in the selected OVA-Slices to compute the approximate kNN. The number of candidate OVA-Slices is controlled by a user-defined parameter delta. By adjusting delta, OVA-LOW provides a trade-off between query cost and result quality. Query by video clip consisting of multiple frames is also discussed. Extensive experimental studies using real video data sets showed that our methods yield a significant speed-up over an existing VA-file-based method and iDistance, with high query result quality. Furthermore, by incorporating the temporal correlation of video content, our methods achieved much more efficient performance.
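The slice-ranking search described in this abstract can be sketched roughly as follows. This is a simplified stand-in, not the actual OVA-File: a real VA-file stores compressed approximations, whereas `build_slices` here simply sorts full vectors by their first coordinate and splits them evenly; `delta` plays the role of the user-defined slice budget.

```python
import numpy as np

def build_slices(vectors, num_slices):
    """Partition vectors into contiguous slices after sorting by the first
    dimension (a simplified stand-in for the VA-file ordering)."""
    order = np.argsort(vectors[:, 0])
    slices = np.array_split(vectors[order], num_slices)
    centers = np.array([s.mean(axis=0) for s in slices])
    return slices, centers

def ova_low_knn(query, slices, centers, k, delta):
    """Approximate kNN: rank slices by center distance to the query, scan
    only the delta nearest slices, return the k closest vectors found."""
    ranked = np.argsort(np.linalg.norm(centers - query, axis=1))
    candidates = np.vstack([slices[i] for i in ranked[:delta]])
    dists = np.linalg.norm(candidates - query, axis=1)
    return candidates[np.argsort(dists)[:k]]
```

With `delta` equal to the number of slices the search degenerates to an exact scan; smaller `delta` trades result quality for fewer slice visits, as the abstract describes.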
Abstract:
The Gauss-Marquardt-Levenberg (GML) method of computer-based parameter estimation, in common with other gradient-based approaches, suffers from the drawback that it may become trapped in local objective function minima, and thus report optimized parameter values that are not, in fact, optimized at all. This can seriously degrade its utility in the calibration of watershed models where local optima abound. Nevertheless, the method also has advantages, chief among these being its model-run efficiency, and its ability to report useful information on parameter sensitivities and covariances as a by-product of its use. It is also easily adapted to maintain this efficiency in the face of potential numerical problems (that adversely affect all parameter estimation methodologies) caused by parameter insensitivity and/or parameter correlation. The present paper presents two algorithmic enhancements to the GML method that retain its strengths, but which overcome its weaknesses in the face of local optima. Using the first of these methods an intelligent search for better parameter sets is conducted in parameter subspaces of decreasing dimensionality when progress of the parameter estimation process is slowed either by numerical instability incurred through problem ill-posedness, or when a local objective function minimum is encountered. The second methodology minimizes the chance of successive GML parameter estimation runs finding the same objective function minimum by starting successive runs at points that are maximally removed from previous parameter trajectories. As well as enhancing the ability of a GML-based method to find the global objective function minimum, the latter technique can also be used to find the locations of many non-global optima (should they exist) in parameter space. 
This can provide a useful means of inquiring into the well-posedness of a parameter estimation problem, and for detecting the presence of bimodal parameter and predictive probability distributions. The new methodologies are demonstrated by calibrating a Hydrological Simulation Program-FORTRAN (HSPF) model against a time series of daily flows. Comparison with the SCE-UA method in this calibration context demonstrates a high level of comparative model run efficiency for the new method. (c) 2006 Elsevier B.V. All rights reserved.
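The second enhancement, starting successive runs at points maximally removed from previous parameter trajectories, reduces to a maximin distance choice. A minimal sketch, assuming candidate start points are sampled beforehand and previous trajectories are stored as a flat array of visited parameter points (both assumptions, not the paper's actual sampling scheme):

```python
import numpy as np

def next_start(candidates, visited):
    """Pick the candidate start point whose minimum distance to all
    previously visited parameter points is largest (maximin restart)."""
    # Pairwise distances: (num_candidates, num_visited)
    d = np.linalg.norm(candidates[:, None, :] - visited[None, :, :], axis=2)
    # For each candidate, its distance to the nearest visited point
    nearest = d.min(axis=1)
    return candidates[int(np.argmax(nearest))]
```

A restart chosen this way is, by construction, in the region of parameter space least explored by earlier GML runs, which is what makes it likely to descend into a different objective function minimum.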
Abstract:
One approach to microbial genotyping is to make use of sets of single-nucleotide polymorphisms (SNPs) in combination with binary markers. Here we report the modification and automation of a SNP-plus-binary-marker-based approach to the genotyping of Staphylococcus aureus and its application to 391 S. aureus isolates from southeast Queensland, Australia. The SNPs used were arcC210, tpi243, arcC162, gmk318, pta294, tpi36, tpi241, and pta383. These provide a Simpson's index of diversity (D) of 0.95 with respect to the S. aureus multilocus sequence typing database and define 61 genotypes and the major clonal complexes. The binary markers used were pvl, cna, sdrE, pT181, and pUB110. Two novel real-time PCR formats for interrogating these markers were compared. One of these makes use of light upon extension (LUX) primers and biplexed reactions, while the other is a streamlined modification of kinetic PCR using SYBR green. The latter format proved to be more robust. In addition, automated methods for DNA template preparation, reaction setup, and data analysis were developed. A single SNP-based method for ST-93 (Queensland clone) identification was also devised. The genotyping revealed the numerical importance of the South West Pacific and Queensland community-acquired methicillin-resistant S. aureus (MRSA) clones and the clonal complex 239 Aus-1/Aus-2 hospital-associated MRSA. There was a strong association between the community-acquired clones and pvl.
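The Simpson's index of diversity quoted above (D = 0.95) is a standard quantity: given the number of isolates observed for each genotype, it is the probability that two isolates drawn at random belong to different genotypes. A sketch using the conventional unbiased formula (the counts in the test are made up for illustration):

```python
def simpsons_diversity(counts):
    """Simpson's index of diversity D = 1 - sum(n_j*(n_j-1)) / (N*(N-1)),
    where n_j is the number of isolates of genotype j and N the total."""
    n = sum(counts)
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))
```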
Abstract:
Study Design. Development of an automatic measurement algorithm and comparison with manual measurement methods. Objectives. To develop a new computer-based method for automatic measurement of vertebral rotation in idiopathic scoliosis from computed tomography images and to compare the automatic method with two manual measurement techniques. Summary of Background Data. Techniques have been developed for vertebral rotation measurement in idiopathic scoliosis using plain radiographs, computed tomography, or magnetic resonance images. All of these techniques require manual selection of landmark points and are therefore subject to interobserver and intraobserver error. Methods. We developed a new method for automatic measurement of vertebral rotation in idiopathic scoliosis using a symmetry ratio algorithm. The automatic method provided values comparable with Aaro and Ho's manual measurement methods for a set of 19 transverse computed tomography slices through apical vertebrae, and with Aaro's method for a set of 204 reformatted computed tomography images through vertebral endplates. Results. Confidence intervals (95%) for intraobserver and interobserver variability using manual methods were in the range 5.5 to 7.2. The mean (+/- SD) difference between automatic and manual rotation measurements for the 19 apical images was -0.5 degrees +/- 3.3 degrees for Aaro's method and 0.7 degrees +/- 3.4 degrees for Ho's method. The mean (+/- SD) difference between automatic and manual rotation measurements for the 204 endplate images was 0.25 degrees +/- 3.8 degrees. Conclusions. The symmetry ratio algorithm allows automatic measurement of vertebral rotation in idiopathic scoliosis without intraobserver or interobserver error due to landmark point selection.
Abstract:
Capacity limits in visual attention have traditionally been studied using static arrays of elements from which an observer must detect a target defined by a certain visual feature or combination of features. In the current study we use this visual search paradigm, with accuracy as the dependent variable, to examine attentional capacity limits for different visual features undergoing change over time. In Experiment 1, detectability of a single changing target was measured under conditions where the type of change (size, speed, colour), the magnitude of change, the set size and the homogeneity of the unchanging distractors were all systematically varied. Psychometric function slopes were calculated for different experimental conditions, and 'change thresholds' extracted from these slopes were used in Experiment 2, in which multiple supra-threshold changes were made simultaneously, either to a single stimulus element or to two or three different stimulus elements. These experiments provide an objective psychometric paradigm for measuring changes in visual features over time. The results favour object-based accounts of visual attention and show consistent differences in the allocation of attentional capacity to different perceptual dimensions.
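A "change threshold" of the kind extracted in Experiment 1 is typically read off a fitted psychometric function: the change magnitude at which detection probability reaches some criterion level. As a hedged illustration only (the logistic form, its parameters, and the 75%-correct criterion are assumptions here, not details taken from the study):

```python
import math

def logistic_p(x, x0, k):
    """Psychometric function: probability of detecting a change of
    magnitude x, with midpoint x0 and slope parameter k."""
    return 1.0 / (1.0 + math.exp(-k * (x - x0)))

def change_threshold(x0, k, p=0.75):
    """Change magnitude at which detection probability reaches p
    (the inverse of the logistic above)."""
    return x0 + math.log(p / (1 - p)) / k
```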
Abstract:
One of the critical challenges in automatic recognition of TV commercials is to generate a unique, robust and compact signature. Uniqueness means the ability to identify similarity among commercial video clips that may have slight content variation. Robustness means the ability to match commercial video clips containing the same content but possibly with different digitalization/encoding, some noisy data, and/or transmission and recording distortion. Efficiency refers to the capability of matching commercial video sequences effectively with low computation cost and storage overhead. In this paper, we present a binary-signature-based method that meets all three criteria above by combining the techniques of ordinal and color measurements. Experimental results on a large real commercial video database show that our novel approach delivers significantly better performance compared with existing methods.
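The ordinal part of such a signature can be sketched as follows: each frame is divided into blocks and only the rank order of the block intensities is kept, which survives monotone brightness and encoding changes. The 2x2 grid and the use of plain rank tuples are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def ordinal_signature(frame, grid=2):
    """Rank the mean intensities of grid x grid blocks; the rank pattern
    (the ordinal measure) is invariant to monotone intensity changes."""
    h, w = frame.shape
    block_means = [frame[i*h//grid:(i+1)*h//grid,
                         j*w//grid:(j+1)*w//grid].mean()
                   for i in range(grid) for j in range(grid)]
    # argsort of argsort converts values to their ranks
    return tuple(np.argsort(np.argsort(block_means)))
```

Two frames whose intensities differ only by a monotone transform (e.g. a global brightness or contrast shift from re-encoding) produce identical signatures, which is the robustness property the abstract asks for.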
Abstract:
Non-technical loss (NTL) identification and prediction are important tasks for many utilities. Data from a customer information system (CIS) can be used for NTL analysis. However, in order to perform NTL analysis accurately and efficiently, the original CIS data need to be pre-processed before any detailed analysis can be carried out. In this paper, we propose a feature-selection-based method for CIS data pre-processing that extracts the most relevant information for further analysis such as clustering and classification. By removing irrelevant and redundant features, feature selection is an essential step in the data mining process: finding an optimal subset of features improves the quality of the results, giving faster processing, higher accuracy and simpler results with fewer features. A detailed feature selection analysis is presented in the paper. Both time-domain and load-shape data are compared based on accuracy, consistency and the statistical dependencies between features.
Abstract:
Management of collaborative business processes that span multiple business entities has emerged as a key requirement for business success. These processes are embedded in sets of rules describing complex message-based interactions between parties, such that if a logical expression defined on the set of received messages is satisfied, one or more outgoing messages are dispatched. The execution of these processes presents significant challenges, since each content-rich message may contribute towards the evaluation of multiple expressions in different ways and the sequence of message arrival cannot be predicted. These challenges must be overcome in order to develop an efficient execution strategy for collaborative processes in an intensive operating environment with a large number of rules and a very high throughput of messages. In this paper, we present a discussion of issues relevant to the evaluation of such expressions and describe a basic query-based method for this purpose, including suggested indexes for improved performance. We conclude by identifying several potential future research directions in this area. © 2010 IEEE. All rights reserved.
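The indexing idea, letting each arriving message touch only the expressions that mention it, can be sketched with a simple inverted index over conjunctive rules. Treating each rule as a plain conjunction of required message types is a simplifying assumption; the expressions the paper considers are more general.

```python
from collections import defaultdict

class RuleEngine:
    """Index rules by the message types they reference, so an arriving
    message is routed only to the rules that mention it."""
    def __init__(self, rules):
        self.rules = rules                                # name -> required types
        self.pending = {r: set(req) for r, req in rules.items()}
        self.index = defaultdict(list)                    # type -> rule names
        for r, req in rules.items():
            for t in req:
                self.index[t].append(r)

    def receive(self, msg_type):
        """Record one message arrival; return the rules whose conjunction
        is now fully satisfied (they fire and then reset)."""
        fired = []
        for r in self.index[msg_type]:
            self.pending[r].discard(msg_type)
            if not self.pending[r]:
                fired.append(r)
                self.pending[r] = set(self.rules[r])
        return fired
```

Because the arrival order cannot be predicted, each rule simply tracks its remaining unmet message types; the inverted index keeps the per-message work proportional to the rules actually affected rather than the full rule set.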
Abstract:
A new control algorithm using a parallel braking resistor (BR) and a serial fault current limiter (FCL) for power system transient stability enhancement is presented in this paper. The proposed control algorithm can prevent transient instability during the first swing by immediately removing the transient energy gained during the faulted period. It can also reduce generator oscillation time and efficiently bring the system back to the post-fault equilibrium. The algorithm uses a new system-energy-function-based method to choose the optimal switching point. The parallel BR and serial FCL resistor can be switched at the calculated optimal point to obtain the best control result. This method allows optimum dissipation of the transient energy caused by the disturbance, so that the system returns to equilibrium in minimum time. Case studies are given to verify the efficiency and effectiveness of this new control algorithm.
Abstract:
Investment in mining projects, like most business investment, is susceptible to risk and uncertainty. The ability to effectively identify, assess and manage risk may enable strategic investments to be sheltered and operations to perform closer to their potential. In mining, geological uncertainty is seen as the major contributor to not meeting project expectations. The need to assess and manage geological risk for project valuation and decision-making translates to the need to assess and manage risk in any pertinent parameter of open pit design and production scheduling. This is achieved by taking geological uncertainty into account in the mine optimisation process. This thesis develops methods that enable geological uncertainty to be effectively modelled and the resulting risk in long-term production scheduling to be quantified and managed. One of the main accomplishments of this thesis is the development of a new, risk-based method for the optimisation of long-term production scheduling. In addition to maximising economic returns, the new method minimises the risk of deviating from production forecasts, given the understanding of the orebody. This ability represents a major advance in the risk management of open pit mining.
Abstract:
We have carried out a discovery proteomics investigation aimed at identifying disease biomarkers present in saliva and, more specifically, early biomarkers of inflammation. The proteomic characterization of saliva is possible thanks to straightforward and non-invasive sample collection that allows repeated analyses for pharmacokinetic studies. These advantages are particularly relevant in the case of newborn patients. The study was carried out with samples collected during the first 48 hours of life of the newborns according to an approved Ethics Committee procedure. In particular, the salivary samples were collected from healthy and infected (n=1) newborns. Proteins were extracted through cycles of sonication, precipitated in ice-cold acetone, resuspended and resolved by 2D electrophoresis. MALDI TOF/TOF mass spectrometry analysis was performed on each spot to obtain protein identifications. We then compared the salivary proteomes of healthy and infected newborns in order to investigate proteins differentially expressed under inflammatory conditions. In particular, the protein alpha-1-antitrypsin (A1AT), which is correlated with inflammation, was found to be differentially expressed in the infected newborn's saliva. Therefore, in the second part of the project we aimed to develop a robust LC-MS-based method that identifies and quantifies this inflammatory protein in saliva, which might represent the first relevant step towards diagnosing a condition of inflammation with a non-invasive assay. The same LC-MS method is also useful for investigating the presence of the F allelic variant of A1AT in biological samples, which is correlated with the onset of pulmonary diseases.
In the last part of the work we analysed newborn saliva samples in order to investigate how phospholipids and mediators of inflammation (eicosanoids) vary under inflammatory conditions; a trend in lysophosphatidylcholine composition according to inflammatory state was observed.
Abstract:
This study evaluated the anteroposterior positioning of the mandibular first molars during orthodontic treatment, using the lower lingual arch as an anchorage accessory in the Straight-Wire technique, compared with cases treated by the Edgewise technique without the lingual arch. Two groups were selected, both presenting Angle Class I malocclusion, treated with extraction of the upper and lower first premolars. A sample of 255 lateral cephalometric radiographs was used, obtained from Brazilian patients of both sexes, with a mean age of 13 years and 6 months and with different facial growth patterns. Based on the analysis and discussion of the results, it was concluded that: 1) from the start of treatment to the end of the leveling phase, coronal anchorage loss of the mandibular first molar was greater in cases treated with the Straight-Wire technique; 2) from the end of the leveling phase to the end of treatment, coronal and radicular anchorage loss of the mandibular first molar was greater with the Edgewise technique; 3) from the start to the end of treatment, radicular anchorage loss was greater in patients treated with the Edgewise technique; and 4) the anteroposterior displacement of the mandibular incisors showed no statistically significant difference between the two techniques at any of the stages observed.
Abstract:
A simple method for training the dynamical behavior of a neural network is derived. It is applicable to any training problem in discrete-time networks with arbitrary feedback. The algorithm resembles back-propagation in that an error function is minimized using a gradient-based method, but the optimization is carried out in the hidden part of state space, either instead of, or in addition to, weight space. Computational results are presented for some simple dynamical training problems, one of which requires a response to a signal 100 time steps in the past.
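The key move, descending an error function with respect to the hidden state trajectory rather than (or in addition to) the weights, can be sketched for a scalar discrete-time network. The objective below (a dynamics-consistency term plus a final-output term) and the numeric gradient are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def trajectory_error(S, w, x, y):
    """E = sum_t (S[t+1] - tanh(w*S[t] + x[t]))^2 + (S[-1] - y)^2 for a
    scalar network s[t+1] = tanh(w*s[t] + x[t]) (hypothetical setup)."""
    dyn = sum((S[t + 1] - np.tanh(w * S[t] + x[t])) ** 2
              for t in range(len(S) - 1))
    return dyn + (S[-1] - y) ** 2

def descend_states(S, w, x, y, lr=0.05, steps=200, eps=1e-5):
    """Gradient descent over the hidden states themselves (numeric
    forward-difference gradient), with the weight held fixed --
    optimization in state space rather than weight space."""
    S = S.copy()
    for _ in range(steps):
        g = np.zeros_like(S)
        base = trajectory_error(S, w, x, y)
        for i in range(len(S)):
            Sp = S.copy()
            Sp[i] += eps
            g[i] = (trajectory_error(Sp, w, x, y) - base) / eps
        S -= lr * g
    return S
```

In the full method the weights would be updated alongside (or after) the states; here only the state-space descent is shown to isolate the idea.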
Abstract:
The n-tuple pattern recognition method has been tested using a selection of 11 large data sets from the European Community StatLog project, so that the results could be compared with those reported for the 23 other algorithms the project tested. The results indicate that this ultra-fast memory-based method is a viable competitor with the others, which include optimisation-based neural network algorithms, even though the theory of memory-based neural computing is less highly developed in terms of statistical theory.
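A minimal n-tuple (WISARD-style) classifier over binary vectors illustrates why the method is so fast: training and classification are just memory writes and lookups on randomly chosen bit tuples, with no iterative optimisation. The parameters below are arbitrary, and real implementations address RAM nodes rather than a Python dict.

```python
import random

class NTupleClassifier:
    """Memory-based n-tuple classifier over fixed-length binary vectors."""
    def __init__(self, n_bits, n, num_tuples, seed=0):
        rng = random.Random(seed)
        # Each tuple is a random selection of n input bit positions.
        self.tuples = [rng.sample(range(n_bits), n) for _ in range(num_tuples)]
        self.memory = set()          # {(label, tuple_index, bit_pattern)}

    def _patterns(self, x):
        return [tuple(x[i] for i in idx) for idx in self.tuples]

    def train(self, x, label):
        """Training is a single pass: memorise each tuple's bit pattern."""
        for t, p in enumerate(self._patterns(x)):
            self.memory.add((label, t, p))

    def classify(self, x, labels):
        """Score each class by how many tuple patterns it has seen before."""
        scores = {c: sum((c, t, p) in self.memory
                         for t, p in enumerate(self._patterns(x)))
                  for c in labels}
        return max(scores, key=scores.get)
```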
Abstract:
In this paper we propose a data envelopment analysis (DEA) based method for assessing the comparative efficiencies of units operating production processes where input-output levels are inter-temporally dependent. One cause of inter-temporal dependence between input and output levels is capital stock, which influences output levels over many production periods. Such units cannot be assessed by traditional or 'static' DEA, which assumes input-output correspondences are contemporaneous in the sense that the output levels observed in a time period are the product solely of the input levels observed during that same period. The method developed in the paper overcomes the problem of inter-temporal input-output dependence by using the input-output 'paths' mapped out by operating units over time as the basis for assessing them. As an application, we compare the results of the dynamic and static models for a set of UK universities. The results suggest that the dynamic model captures efficiency better than the static model. © 2003 Elsevier Inc. All rights reserved.
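For intuition only: static DEA in the degenerate single-input, single-output, constant-returns case reduces to scaling each unit's output/input ratio by the best ratio observed. The paper's dynamic, multi-input model instead requires a linear program per unit over input-output paths; this sketch shows only the special case.

```python
def ccr_efficiency(inputs, outputs):
    """Static CCR efficiency scores for the single-input, single-output
    case: each unit's output/input ratio divided by the best ratio, so
    frontier units score 1.0 and the rest score strictly less."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]
```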