985 results for automatic model
Abstract:
This thesis falls within the area of early detection of masses, one of the clearest signs of breast cancer, in mammographic images. First, an extensive analysis of the different methods in the literature was carried out, concluding that these methods depend on different parameters: the size and shape of the mass and the density of the breast. The objective of the thesis is therefore to analyse, design, and implement a detection method that is robust and independent of these three parameters. To this end, a deformable template of the mass was built from the analysis of real masses; this model is then searched for in the images following a probabilistic scheme, yielding a set of suspicious regions. Using 2DPCA analysis, an algorithm was built that can discern whether or not these regions really are a mass. Breast density is a parameter that enters the algorithm naturally.
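As an illustration of the 2DPCA step, here is a minimal Python/NumPy sketch, assuming equal-sized patches and an illustrative number of retained axes, of how a 2DPCA basis can be learned from training patches and used to project a suspicious region onto a low-dimensional feature matrix:

    import numpy as np

    def twodpca_basis(patches, d):
        # patches: array of shape (M, m, n) holding training region patches
        mean = patches.mean(axis=0)
        centred = patches - mean
        # image scatter matrix G = (1/M) * sum_i (A_i - mean)^T (A_i - mean)
        G = np.einsum('imk,iml->kl', centred, centred) / len(patches)
        w, V = np.linalg.eigh(G)           # eigenvalues in ascending order
        return mean, V[:, ::-1][:, :d]     # top-d projection axes, shape (n, d)

    def twodpca_features(patch, mean, X):
        return (patch - mean) @ X          # feature matrix Y = A X, shape (m, d)

A suspicious region could then be labelled mass or non-mass by, for example, nearest-neighbour comparison of its feature matrix against those of training masses; how the thesis combines this with breast density is not reproduced here.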
Abstract:
There are still major challenges in the area of automatic indexing and retrieval of multimedia content for very large corpora. Current indexing and retrieval applications still use keywords to index multimedia content, and those keywords usually do not provide any knowledge about the semantic content of the data. With the increasing amount of multimedia content, it is inefficient to continue with this approach. In this paper, we describe the DREAM project, which addresses these challenges by proposing a new framework for semi-automatic annotation and retrieval of multimedia based on semantic content. The framework uses Topic Map technology as a tool to model the knowledge automatically extracted from the multimedia content by an Automatic Labelling Engine. We describe how we acquire knowledge from the content and represent it, with the support of NLP, to automatically generate Topic Maps. The framework is described in the context of film post-production.
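As a rough illustration of the structure such a framework populates, here is a minimal Python sketch of a topic map (topics, occurrences, and associations); the class layout and the example entries are illustrative assumptions, not DREAM's actual data model:

    from dataclasses import dataclass, field

    @dataclass
    class Topic:
        name: str
        occurrences: list = field(default_factory=list)   # e.g. time-coded clips

    @dataclass
    class TopicMap:
        topics: dict = field(default_factory=dict)
        associations: list = field(default_factory=list)  # (topic, relation, topic)

        def add_topic(self, name):
            return self.topics.setdefault(name, Topic(name))

        def associate(self, a, relation, b):
            self.add_topic(a)
            self.add_topic(b)
            self.associations.append((a, relation, b))

    # hypothetical output of an automatic labelling pass over film footage
    tm = TopicMap()
    tm.associate('scene_12', 'features', 'stunt_double')
    tm.add_topic('scene_12').occurrences.append('reel3.mov#t=04:12')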
Abstract:
A fundamental principle in practical nonlinear data modeling is the parsimonious principle of constructing the minimal model that explains the training data well. Leave-one-out (LOO) cross validation is often used to estimate generalization errors when choosing among different network architectures (M. Stone, "Cross-validatory choice and assessment of statistical predictions", J. R. Statist. Soc., Ser. B, 36, pp. 111-147, 1974). Based on minimizing an LOO criterion, either the mean square of the LOO errors for regression or the LOO misclassification rate for classification, we present two backward elimination algorithms as model post-processing procedures. The proposed procedures exploit an orthogonalization procedure to maintain orthogonality between the subspace spanned by the pruned model and the deleted regressor. It is then shown that the LOO criteria used in both algorithms can be calculated via analytic recursive formulas, derived in this contribution, without actually splitting the estimation data set, thereby reducing computational expense. Compared to most other model construction methods, the proposed algorithms are advantageous in several respects: (i) there are no tuning parameters to be optimized through an extra validation data set; (ii) the procedure is fully automatic, without an additional stopping criterion; and (iii) model structure selection is based directly on model generalization performance. Illustrative examples on regression and classification demonstrate that the proposed algorithms are viable post-processing methods for pruning a model to gain extra sparsity and improved generalization.
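For linear-in-the-parameters models, the best-known analytic LOO result is the PRESS formula e_i(loo) = e_i / (1 - h_ii), where h_ii is the i-th leverage; the paper's recursive formulas go further, but the following Python sketch (an illustration under that simpler formula, not the authors' algorithm) shows how backward elimination can score candidate prunings by LOO mean-squared error without refitting for each left-out point:

    import numpy as np

    def loo_mse(Phi, y):
        # Phi: (N, p) design matrix, y: (N,) targets
        theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        e = y - Phi @ theta
        # leverages h_ii = diag(Phi (Phi^T Phi)^-1 Phi^T)
        h = np.diag(Phi @ np.linalg.solve(Phi.T @ Phi, Phi.T))
        return np.mean((e / (1.0 - h)) ** 2)     # PRESS: analytic LOO residuals

    def backward_eliminate(Phi, y):
        cols = list(range(Phi.shape[1]))
        best = loo_mse(Phi, y)
        improved = True
        while improved and len(cols) > 1:
            improved = False
            for c in list(cols):
                trial = [k for k in cols if k != c]
                score = loo_mse(Phi[:, trial], y)
                if score < best:                 # prune regressor c if LOO improves
                    best, cols, improved = score, trial, True
                    break
        return cols, best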
Abstract:
The associative sequence learning model proposes that the development of the mirror system depends on the same mechanisms of associative learning that mediate Pavlovian and instrumental conditioning. To test this model, two experiments used the reduction of automatic imitation through incompatible sensorimotor training to assess whether mirror system plasticity is sensitive to contingency (i.e., the extent to which activation of one representation predicts activation of another). In Experiment 1, residual automatic imitation was measured following incompatible training in which the action stimulus was a perfect predictor of the response (contingent) or not at all predictive of the response (noncontingent). A contingency effect was observed: there was less automatic imitation, indicative of more learning, in the contingent group. Experiment 2 replicated this contingency effect and showed that, as predicted by associative learning theory, it can be abolished by signaling trials in which the response occurs in the absence of an action stimulus. These findings support the view that mirror system development depends on associative learning and indicate that this learning is not purely Hebbian. If this is correct, associative learning theory could be used to explain, predict, and intervene in mirror system development.
Abstract:
This paper describes the application of artificial neural networks for automatic tuning of PID controllers using the Model Reference Adaptive Control approach. The effectiveness of the proposed method is shown through a simulated application.
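A toy version of the idea can be sketched without the neural network: simulate a plant under PID control, compare its output with a reference model, and adjust the gains to shrink the tracking error. Here a crude finite-difference update stands in for the network, and the plant, reference model, and all constants are assumptions chosen for illustration:

    import numpy as np

    def tracking_cost(gains, steps=200, dt=0.05):
        Kp, Ki, Kd = gains
        a, b = 1.0, 0.5                    # assumed first-order plant: y' = -a*y + b*u
        am, r = 2.0, 1.0                   # reference model: ym' = -am*(ym - r)
        y = ym = integ = prev_err = 0.0
        cost = 0.0
        for _ in range(steps):
            err = r - y
            integ += err * dt
            deriv = (err - prev_err) / dt
            prev_err = err
            u = Kp * err + Ki * integ + Kd * deriv
            y += (-a * y + b * u) * dt
            ym += -am * (ym - r) * dt
            cost += (y - ym) ** 2 * dt     # deviation from the reference model
        return cost

    gains = np.array([1.0, 0.5, 0.1])      # initial Kp, Ki, Kd
    for _ in range(50):                    # normalised finite-difference descent
        grad = np.zeros(3)
        for i in range(3):
            d = np.zeros(3); d[i] = 1e-4
            grad[i] = (tracking_cost(gains + d) - tracking_cost(gains - d)) / 2e-4
        gains -= 0.05 * grad / (np.linalg.norm(grad) + 1e-12)
    print(gains, tracking_cost(gains))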
Abstract:
World-wide structural genomics initiatives are rapidly accumulating structures for which limited functional information is available. Additionally, state-of-the-art structure prediction programs are now capable of generating at least low-resolution structural models of target proteins. Accurate detection and classification of functional sites within both solved and modelled protein structures therefore represent an important challenge. We present a fully automatic site detection method, FuncSite, that uses neural network classifiers to predict the location and type of functionally important sites in protein structures. The method is designed primarily to require only backbone residue positions, without the need for specific side-chain atoms to be present. To highlight effective site detection in low-resolution structural models, FuncSite was used to screen model proteins generated with mGenTHREADER on a set of newly released structures. We found effective metal site detection even for moderate-quality protein models, illustrating the robustness of the method.
Abstract:
Flood extents caused by fluvial floods in urban and rural areas may be predicted by hydraulic models. Assimilation may be used to correct the model state and improve the estimates of the model parameters or external forcing. One common observation assimilated is the water level at various points along the modelled reach. Distributed water levels may be estimated indirectly along the flood extents in Synthetic Aperture Radar (SAR) images by intersecting the extents with the floodplain topography. It is necessary to select a subset of levels for assimilation because adjacent levels along the flood extent will be strongly correlated. A method for selecting such a subset automatically and in near real-time is described, which would allow the SAR water levels to be used in a forecasting model. The method first selects candidate waterline points in flooded rural areas having low slope. The waterline levels and positions are corrected for the effects of double reflections between the water surface and emergent vegetation at the flood edge. Waterline points are also selected in flooded urban areas away from radar shadow and layover caused by buildings, with levels similar to those in adjacent rural areas. The resulting points are thinned to reduce spatial autocorrelation using a top-down clustering approach. The method was developed using a TerraSAR-X image from a particular case study involving urban and rural flooding. The waterline points extracted proved to be spatially uncorrelated, with levels reasonably similar to those determined manually from aerial photographs, and in good agreement with those of nearby gauges.
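The thinning step can be illustrated with a generic top-down splitting scheme; this is a sketch, with the splitting rule and the choice of representative point as assumptions, not the authors' exact clustering:

    import numpy as np

    def thin_points(points, max_extent):
        # points: (N, 2) map coordinates of candidate waterline points;
        # max_extent: stand-in for the spatial autocorrelation length.
        # Recursively split the widest axis at its median until each cluster
        # spans less than max_extent, then keep the point nearest the centroid.
        span = points.max(axis=0) - points.min(axis=0)
        if span.max() <= max_extent or len(points) == 1:
            c = points.mean(axis=0)
            return [points[np.argmin(((points - c) ** 2).sum(axis=1))]]
        axis = int(np.argmax(span))
        cut = np.median(points[:, axis])
        left, right = points[points[:, axis] <= cut], points[points[:, axis] > cut]
        if len(left) == 0 or len(right) == 0:    # degenerate split: stop here
            c = points.mean(axis=0)
            return [points[np.argmin(((points - c) ** 2).sum(axis=1))]]
        return thin_points(left, max_extent) + thin_points(right, max_extent)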
Abstract:
Previous versions of the Consortium for Small-scale Modelling (COSMO) numerical weather prediction model have used a constant sea-ice surface temperature, but observations show a high degree of variability on sub-daily timescales. To account for this, we have implemented a thermodynamic sea-ice module in COSMO and performed simulations at a resolution of 15 km and 5 km for the Laptev Sea area in April 2008. Temporal and spatial variability of surface and 2-m air temperature are verified by four automatic weather stations deployed along the edge of the western New Siberian polynya during the Transdrift XIII-2 expedition and by surface temperature charts derived from Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data. A remarkable agreement between the new model results and these observations demonstrates that the implemented sea-ice module can be applied for short-range simulations. Prescribing the polynya areas daily, our COSMO simulations provide a high-resolution and high-quality atmospheric data set for the Laptev Sea for the period 14-30 April 2008. Based on this data set, we derive a mean total sea-ice production rate of 0.53 km3/day for all Laptev Sea polynyas under the assumption that the polynyas are ice-free and a rate of 0.30 km3/day if a 10-cm-thin ice layer is assumed. Our results indicate that ice production in Laptev Sea polynyas has been overestimated in previous studies.
Abstract:
A detailed climatology of cyclogenesis over the Southern Atlantic Ocean (SAO) from 1990 to 1999, and how it is simulated by the RegCM3 (Regional Climate Model), is presented here. The simulation used the National Centers for Environmental Prediction-Department of Energy (NCEP/DOE) reanalysis as initial and boundary conditions. The cyclones were identified with an automatic scheme that searches for cyclonic relative vorticity (ζ10) computed from the 10-m wind field. All systems with ζ10 ≤ -1.5 × 10^-5 s^-1 and a lifetime of 24 h or longer were included in the climatology. Over the SAO, 2,760 and 2,787 cyclogeneses were detected over the 10 years in the simulation and in NCEP, respectively, with annual means of 276.0 ± 11.2 and 278.7 ± 11.1. This result suggests that RegCM3 has good skill in simulating the cyclogenesis climatology. However, the largest model underestimations (-9.8%) are found for the initially stronger systems (ζ10 ≤ -2.5 × 10^-5 s^-1). Over the SAO, the annual cycle of cyclogenesis was found to depend on initial intensity. For systems initiated with ζ10 ≤ -1.5 × 10^-5 s^-1, the annual cycle is not well defined and the highest frequency occurs in autumn (summer) in NCEP (RegCM3). The stronger systems (ζ10 ≤ -2.5 × 10^-5 s^-1) show a well-characterized high frequency of cyclogenesis during winter in both NCEP and RegCM3. This work confirms the existence of three cyclogenetic regions in the western sector of the SAO, near the east coast of South America, and shows that RegCM3 is able to reproduce the main features of these cyclogenetic areas.
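The detection criterion can be illustrated with a short NumPy sketch that computes ζ10 from gridded 10-m winds and flags grid points at or below the threshold (in the Southern Hemisphere, cyclonic relative vorticity is negative); the 24-h lifetime tracking used in the paper is omitted:

    import numpy as np

    def relative_vorticity(u, v, dx, dy):
        # u, v: (ny, nx) 10-m wind components in m/s; dx, dy: grid spacing in m
        dvdx = np.gradient(v, dx, axis=1)
        dudy = np.gradient(u, dy, axis=0)
        return dvdx - dudy                       # zeta_10 = dv/dx - du/dy

    def cyclone_candidates(u, v, dx, dy, threshold=-1.5e-5):
        zeta = relative_vorticity(u, v, dx, dy)
        return np.argwhere(zeta <= threshold)    # grid points meeting the criterion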
Abstract:
Automated virtual camera control has been widely used in animation and interactive virtual environments. We have developed a free-viewpoint video system prototype, based on multiple sparse cameras, that allows users to control the position and orientation of a virtual camera, enabling a real scene to be observed in three dimensions (3D) from any desired viewpoint. Automatic camera control can be activated to follow objects selected by the user. Our method combines a simple geometric model of the scene composed of planes (the virtual environment), augmented with visual information from the cameras and pre-computed tracking information for moving targets, to generate novel perspective-corrected 3D views of the virtual camera and moving objects. To achieve real-time rendering performance, view-dependent texture-mapped billboards are used to render the moving objects at their correct locations, and foreground masks are used to remove the moving objects from the projected video streams. The current prototype runs on a PC with a common graphics card and can generate virtual 2D views from three cameras of resolution 768 x 576, with several moving objects, at about 11 fps.
Abstract:
Parkinson's disease (PD) is a degenerative illness whose cardinal symptoms include rigidity, tremor, and slowness of movement. In addition to its widely recognized effects, PD can have a profound effect on speech and voice. The speech symptoms most commonly demonstrated by patients with PD are reduced vocal loudness, monopitch, disruptions of voice quality, and an abnormally fast rate of speech; this cluster of speech symptoms is often termed hypokinetic dysarthria. The disease can be difficult to diagnose accurately, especially in its early stages, so automatic techniques based on artificial intelligence should increase diagnostic accuracy and help doctors make better decisions. The aim of this thesis work is to predict PD from audio files collected from various patients. The audio files are preprocessed to obtain the features; the preprocessed data contain 23 attributes and 195 instances, with on average six voice recordings per person. The number of instances is reduced using a data compression technique, the Discrete Cosine Transform (DCT). After compression, attribute selection is performed using several of WEKA's built-in methods, such as ChiSquared, GainRatio, and InfoGain; the attributes identified as important are then evaluated one by one using stepwise regression. The selected attributes are processed in WEKA using a cost-sensitive classifier with various algorithms, such as MultiPass LVQ, Logistic Model Tree (LMT), and K-Star. The classification results average around 80%; using the selected features, approximately 95% classification accuracy for PD is achieved. This shows that, using the audio dataset, PD can be predicted with a high level of accuracy.
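A comparable pipeline can be sketched outside WEKA with scikit-learn. The file name and column names below are assumptions modelled on the public UCI Parkinson's voice dataset (195 recordings, 23 attributes), and a class-weighted logistic regression stands in for the cost-sensitive classifiers named above:

    import pandas as pd
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    df = pd.read_csv('parkinsons.csv')           # assumed file and column layout
    X = df.drop(columns=['name', 'status'])      # 22 voice features per recording
    y = df['status']                             # 1 = PD, 0 = healthy

    clf = make_pipeline(
        StandardScaler(),
        SelectKBest(mutual_info_classif, k=10),  # keep the 10 most informative features
        LogisticRegression(class_weight='balanced', max_iter=1000),
    )
    print(cross_val_score(clf, X, y, cv=5).mean())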
Abstract:
A key to maintaining an enterprise's competitiveness is its ability to describe, standardize, and adapt the way it reacts to certain types of business events, and how it interacts with suppliers, partners, competitors, and customers. In this context, the field of organization modeling has emerged with the aim of creating models that help establish a state of self-awareness in the organization. This project's context is the use of the Semantic Web in the organizational modeling area. The advantages of Semantic Web technology can be used to improve the way organizations are modeled; this was accomplished by using a semantic wiki to model organizations. Our research and implementation had two main purposes: the formalization of textual content in semantic wiki pages, and the automatic generation of diagrams from organization data stored in the semantic wiki pages.
Abstract:
Nowadays, more than half of computer development projects fail to meet end users' expectations. One of the main causes is insufficient knowledge about the organization of the enterprise to be supported by the respective information system. The DEMO methodology (Design and Engineering Methodology for Organizations) has proven to be a well-defined method for specifying, through models and diagrams, the essence of any organization at a high level of abstraction. However, the methodology is platform-implementation independent and lacks the ability to save and propagate changes from the organization models to the implemented software in a runtime environment. The Universal Enterprise Adaptive Object Model (UEAOM) is a conceptual schema used as the basis for a wiki system that allows any organization to be modeled independently of its implementation, and that supports the aforementioned change propagation in a runtime environment. Based on DEMO and UEAOM, this project aims to develop efficient and standardized methods for automatically converting DEMO ontological models, based on the UEAOM specification, into BPMN (Business Process Model and Notation) process models with clear, unambiguous semantics, in order to facilitate the creation of processes that are almost ready to be executed on workflow systems that support BPMN.
Abstract:
This work aims to develop an intelligent system for detecting burn in the tangential surface grinding process using a multilayer perceptron neural network trained to generalize the process and thereby obtain the burn threshold. In general, the occurrence of burn in the grinding process can be detected by the DPO and FKS parameters, but these parameters are not effective under the machining conditions used in this work. The acoustic emission signal and the electric power of the grinding wheel drive motor are the input variables, and the output variable is the occurrence of burn. In the experimental work, one type of steel (quenched ABNT 1045) and one type of grinding wheel, the TARGA model ART 3TG80.3 NVHB, were employed.
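A minimal sketch of such a classifier, using synthetic stand-ins for the two measured signals (the real inputs would be the acoustic emission and drive motor power recorded during grinding passes):

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # synthetic per-pass features: acoustic emission RMS and motor power;
    # the distributions are assumptions, not the ABNT 1045 / TARGA wheel data
    rng = np.random.default_rng(0)
    ae = np.concatenate([rng.normal(1.0, 0.2, 100), rng.normal(2.0, 0.3, 100)])
    power = np.concatenate([rng.normal(0.8, 0.1, 100), rng.normal(1.5, 0.2, 100)])
    X = np.column_stack([ae, power])
    y = np.concatenate([np.zeros(100), np.ones(100)])   # 0 = no burn, 1 = burn

    # multilayer perceptron trained to generalize the burn threshold
    mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    mlp.fit(X, y)
    print(mlp.predict([[1.8, 1.4]]))                    # classify a new pass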