965 results for object modeling from images
Abstract:
The aim of this study is to explore the suitability of chromospheric images for magnetic modeling of active regions. We use high-resolution images (≈0.2"-0.3") from the Interferometric Bidimensional Spectrometer in the Ca II 8542 Å line, the Rapid Oscillations in the Solar Atmosphere instrument in the Hα 6563 Å line, and the Interface Region Imaging Spectrograph in the 2796 Å line, and compare non-potential magnetic field models obtained from those chromospheric images with those obtained from images of the Atmospheric Imaging Assembly in coronal (171 Å, etc.) and chromospheric (304 Å) wavelengths. Curvilinear structures are automatically traced in those images with the OCCULT-2 code, to which we forward-fitted magnetic field lines computed with the Vertical-Current Approximation Nonlinear Force-Free Field code. We find that the chromospheric images: (1) reveal crisp curvilinear structures (fibrils, loop segments, spicules) that are extremely well suited for constraining magnetic modeling; (2) that these curvilinear structures are field-aligned with the best-fit solution, with a median misalignment angle of μ2 ≈ 4°-7°; (3) that the free energy computed from coronal data may underestimate that obtained from chromospheric data by a factor of ≈2-4; (4) that the height range of chromospheric features is confined to h ≲ 4000 km, while coronal features are detected up to h = 35,000 km; and (5) that the plasma-β parameter is β ≈ 10^-5 - 10^-1 for all traced features. We conclude that chromospheric images reveal important magnetic structures that are complementary to coronal images and need to be included in comprehensive magnetic field models, something that is currently not accommodated in standard NLFFF codes.
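The fit-quality metric quoted above, the median misalignment angle μ2 between traced curvilinear structures and the model field lines, reduces to a simple vector computation. A minimal sketch (not the VCA-NLFFF implementation; the tangent and field vectors below are illustrative):

```python
import numpy as np

def misalignment_angles(tangents, field_dirs):
    """Angle in degrees between each traced-structure tangent and the local
    model field direction; directions are treated as unsigned, so an
    antiparallel pair still counts as aligned."""
    t = tangents / np.linalg.norm(tangents, axis=1, keepdims=True)
    b = field_dirs / np.linalg.norm(field_dirs, axis=1, keepdims=True)
    cos = np.clip(np.abs(np.sum(t * b, axis=1)), 0.0, 1.0)
    return np.degrees(np.arccos(cos))

# Median misalignment over a (toy) set of traced features
tangents = np.array([[1.0, 0.0], [1.0, 0.1], [-1.0, 0.0]])
fields = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])
mu2 = np.median(misalignment_angles(tangents, fields))
```

In practice the tangents come from the OCCULT-2 traces and the field directions from the model solution evaluated at the same points.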
Abstract:
Research in ubiquitous and pervasive technologies has made it possible to recognise activities of daily living through non-intrusive sensors. The data captured by these sensors must be classified using machine learning or knowledge-driven techniques to infer and recognise activities. Discovering activities and activity-object patterns from the sensors tagged to objects as they are used is critical to recognising the activities. In this paper, we propose a topic-model process for discovering activities and activity-object patterns from the interactions of low-level state-change sensors. We also develop a recognition and segmentation algorithm to recognise activities and their boundaries. The experimental results we present validate our framework and show that it is comparable to existing approaches.
Abstract:
This paper addresses the estimation of object boundaries from a set of 3D points. An extension of the constrained clustering algorithm developed by Abrantes and Marques in the context of edge linking is presented. The object surface is approximated using rectangular meshes and simplex nets. Centroid-based forces are used to attract the model nodes towards the data, using competitive learning methods. It is shown that competitive learning improves the model performance in the presence of concavities and makes it possible to discriminate close surfaces. The proposed model is evaluated using synthetic data and medical images (MRI and ultrasound).
Abstract:
The goal of image retrieval and matching is to find and locate object instances in images from a large-scale image database. While visual features are abundant, how to combine them so as to improve on the performance of the individual features remains a challenging task. In this work, we focus on leveraging multiple features for accurate and efficient image retrieval and matching. We first propose two graph-based approaches that rerank initially retrieved images for generic image retrieval. In the graphs, vertices are images and edges are similarities between image pairs. Our first approach fuses multiple graphs with a mixture Markov model based on a random walk over them. We introduce a probabilistic model to compute the importance of each feature for graph fusion under a naive Bayesian formulation, which requires similarity statistics from a manually labeled dataset containing irrelevant images. To reduce human labeling, we further propose a fully unsupervised reranking algorithm based on a submodular objective function that can be efficiently optimized by a greedy algorithm. By maximizing an information-gain term over the graph, our submodular function favors a subset of database images that are similar to the query images and resemble each other. The function also exploits the rank relationships of images across the multiple ranked lists obtained with different features. We then study a more specific application, person re-identification, where the database contains labeled images of human bodies captured by multiple cameras. Re-identification across multiple cameras is treated as a set of related tasks so as to exploit shared information. We apply a novel multi-task learning algorithm using both low-level features and attributes. A low-rank attribute embedding is jointly learned within the multi-task learning formulation to map the original binary attributes into a continuous attribute space, where incorrect and incomplete attributes are rectified and recovered.
To locate objects in images, we design an object detector based on object proposals and deep convolutional neural networks (CNNs), in view of the emergence of deep networks. We improve the Fast R-CNN framework and investigate two new strategies to detect objects accurately and efficiently: scale-dependent pooling (SDP) and cascaded rejection classifiers (CRC). SDP improves detection accuracy by exploiting the appropriate convolutional features depending on the scale of the input object proposals. CRC effectively utilizes convolutional features to eliminate most negative proposals in a cascaded manner while maintaining high recall for true objects. Together, the two strategies improve detection accuracy and reduce computational cost.
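The unsupervised reranking step in the abstract above relies on greedy maximization of a monotone submodular objective. As an illustrative stand-in for the paper's information-gain function, the sketch below greedily maximizes a facility-location coverage objective over a similarity graph (the function and variable names are hypothetical, not from the paper):

```python
import numpy as np

def greedy_rerank(sim, k):
    """Greedy maximization of the facility-location objective
    f(S) = sum_i max_{j in S} sim[i, j] over an image-similarity matrix.
    For any monotone submodular f, greedy selection achieves a (1 - 1/e)
    approximation guarantee."""
    n = sim.shape[0]
    selected = []
    covered = np.zeros(n)          # best similarity to the selected set so far
    for _ in range(k):
        # marginal gain of adding each candidate column j
        gains = np.maximum(sim, covered[:, None]).sum(axis=0) - covered.sum()
        gains[selected] = -np.inf  # never re-pick a selected image
        j = int(np.argmax(gains))
        selected.append(j)
        covered = np.maximum(covered, sim[:, j])
    return selected

# Images 0 and 1 are near-duplicates; a coverage objective picks one of
# them plus the distinct image 2 rather than both duplicates.
sim = np.array([[1.0, 0.9, 0.0],
                [0.9, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
top = greedy_rerank(sim, 2)
```

The actual objective in the work also incorporates query similarity and cross-list rank information; this sketch shows only the greedy-selection mechanics.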
Abstract:
Humans have a remarkable ability to extract information from visual data acquired by sight. Through a learning process that starts at birth and continues throughout life, image interpretation becomes almost instinctive. At a glance, one can easily describe a scene with reasonable precision, naming its main components. Usually, this is done by extracting low-level features such as edges, shapes, and textures, and associating them with high-level meanings; in this way, a semantic description of the scene is produced. An example of this is the human capacity to recognize and describe other people's physical and behavioral characteristics, or biometrics. Soft biometrics also represent inherent characteristics of the human body and behaviour, but do not allow unique identification of a person. Computer vision aims to develop methods capable of performing visual interpretation with performance similar to humans. This thesis proposes computer vision methods that allow high-level information extraction from images in the form of soft biometrics. The problem is approached in two ways, with unsupervised and supervised learning methods. The first seeks to group images via automatic feature-extraction learning, using convolution techniques, evolutionary computing, and clustering; the images employed in this approach contain faces and people. The second approach employs convolutional neural networks, which can operate on raw images, learning both the feature extraction and classification processes. Here, images are classified according to gender and to clothes, divided into the upper and lower parts of the human body. The first approach, when tested with different image datasets, obtained an accuracy of approximately 80% for faces versus non-faces and 70% for people versus non-people. The second, tested on images and videos, obtained an accuracy of about 70% for gender, 80% for upper-body clothes, and 90% for lower-body clothes.
The results of these case studies show that the proposed methods are promising, allowing automatic high-level annotation of images. This opens possibilities for applications in diverse areas, such as content-based image and video search and automatic video surveillance, reducing human effort in manual annotation and monitoring.
Abstract:
The objective of this study was to analyze changes in the spectral behavior of the soybean crop through spectral profiles of the vegetation indexes NDVI and GVI, expressed in different physical values such as the apparent bi-directional reflectance factor (BRF), surface BRF, and normalized BRF derived from Landsat 5/TM images. A soybean area located in Cascavel, Paraná, was monitored using five Landsat 5/TM images during the 2004/2005 crop season. The images were submitted to radiometric transformation, atmospheric correction, and normalization to derive the physical values of apparent BRF, surface BRF, and normalized BRF. NDVI and GVI images were generated in order to distinguish the spectral response of soybean biomass. The treatments showed different results for apparent, surface, and normalized BRF. Through the profiles of average NDVI and GVI it was possible to monitor the entire soybean cycle, characterizing its development. It was also observed that the normalized BRF data negatively affected the spectral curve of the soybean crop, mainly during the vegetative growth phase, in the 12-9-2004 image.
Abstract:
Context. In April 2004, the first image was obtained of a planetary-mass companion (now known as 2M 1207 b) in orbit around a self-luminous object different from our own Sun (the young brown dwarf 2MASSW J1207334-393254, hereafter 2M 1207 A). That 2M 1207 b probably formed via fragmentation and gravitational collapse offered proof that such a mechanism can form bodies in the planetary-mass regime. However, the predicted mass, luminosity, and radius of 2M 1207 b depend on its age, distance, and other observables, such as effective temperature. Aims. To refine our knowledge of the physical properties of 2M 1207 b and its nature, we accurately determined the distance to the 2M 1207 A and b system by measuring its trigonometric parallax at the milliarcsecond level. Methods. With the ESO NTT/SUSI2 telescope, we began a campaign of photometric and astrometric observations in 2006 to measure the trigonometric parallax of 2M 1207 A. Results. An accurate distance (52.4 +/- 1.1 pc) to 2M 1207 A was measured. From the distance and proper motions we derived spatial velocities that are fully compatible with TWA membership. Conclusions. With this new distance estimate, we discuss three scenarios regarding the nature of 2M 1207 b: (1) a cool (1150 +/- 150 K) companion of mass 4 +/- 1 M-Jup; (2) a warmer (1600 +/- 100 K) and heavier (8 +/- 2 M-Jup) companion occulted by an edge-on circumsecondary disk; or (3) a hot protoplanet collision afterglow.
Abstract:
This paper presents new experimental flow boiling heat transfer results in micro-scale tubes. The experimental data were obtained in a horizontal 2.3 mm I.D. stainless steel tube with a heated length of 464 mm, with R134a and R245fa as working fluids, mass velocities ranging from 50 to 700 kg m(-2) s(-1), heat fluxes from 5 to 55 kW m(-2), exit saturation temperatures of 22, 31 and 41 degrees C, and vapor qualities ranging from 0.05 to 0.99. Flow pattern characterization was also performed from images obtained by high-speed filming. Heat transfer coefficients from 1 to 14 kW m(-2) K(-1) were measured. The heat transfer coefficient was found to be a strong function of heat flux, mass velocity, and vapor quality. The experimental data were compared against ten flow boiling predictive methods from the literature. The methods of Liu and Winterton [3], Zhang et al. [5], and Saitoh et al. [6] worked best for both fluids, capturing most of the experimental heat transfer trends.
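The heat transfer coefficients reported above (1-14 kW m⁻² K⁻¹) follow from the standard data reduction h = q'' / (T_wall − T_sat). A minimal sketch, assuming measured wall and saturation temperatures (the numbers below are illustrative, not from the paper):

```python
def heat_transfer_coeff(q_flux, t_wall, t_sat):
    """Local flow-boiling heat transfer coefficient h = q'' / (T_wall - T_sat),
    in W m^-2 K^-1, the usual reduction for a uniformly heated tube."""
    return q_flux / (t_wall - t_sat)

# e.g. a 30 kW m^-2 heat flux with a 3 K wall superheat
h = heat_transfer_coeff(30e3, 34.0, 31.0)  # 10 kW m^-2 K^-1
```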
Abstract:
In this paper, we propose a method based on association-rule mining to enhance the diagnosis of medical images (mammograms). It combines low-level features automatically extracted from images with high-level knowledge from specialists to search for patterns. Our method analyzes medical images and automatically generates diagnosis suggestions by mining association rules. These suggestions are used to accelerate the image analysis performed by specialists and to provide them with an alternative opinion to consider. The proposed method uses two new algorithms, PreSAGe and HiCARe. The PreSAGe algorithm combines feature selection and discretization in a single step and reduces the mining complexity. Experiments show that PreSAGe is highly suitable for feature selection and discretization in medical images. HiCARe is a new associative classifier with an important property that makes it unique: it assigns multiple keywords per image to suggest a diagnosis with high accuracy. Our method was applied to real datasets, and the results show high sensitivity (up to 95%) and accuracy (up to 92%), allowing us to claim that association rules are a powerful means of assisting in the diagnosis task.
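The support/confidence filtering at the core of association-rule mining can be sketched in a few lines. This is not the PreSAGe/HiCARe implementation, just a toy Apriori-style miner over hypothetical image-keyword transactions:

```python
from itertools import permutations

def mine_rules(transactions, min_support=0.3, min_conf=0.8):
    """Tiny Apriori-style miner: keep frequent single items, then emit
    one-to-one rules A -> B whose support(A∪B) and confidence(A -> B)
    pass the thresholds. `transactions` is a list of sets."""
    n = len(transactions)
    support = lambda items: sum(items <= t for t in transactions) / n
    items = {frozenset([i]) for t in transactions for i in t}
    frequent = [s for s in items if support(s) >= min_support]
    rules = []
    for a, b in permutations(frequent, 2):
        s_ab = support(a | b)
        if s_ab >= min_support and s_ab / support(a) >= min_conf:
            rules.append((set(a), set(b), s_ab / support(a)))
    return rules

# Hypothetical keyword sets extracted from four mammograms
scans = [{"mass", "dense"}, {"mass", "dense"}, {"mass"}, {"calcification"}]
rules = mine_rules(scans)
```

On this toy data, only the rule {dense} → {mass} survives both thresholds; rare keywords fall below the support cutoff, and the reverse rule fails the confidence cutoff.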
Abstract:
Ontologies are becoming an important mechanism to build information systems. Nevertheless, there is still no systematic approach to support the design of such systems using tools that are common to information systems developers. In this paper, we propose an approach for deriving object frameworks from domain ontologies and then we show the application of this approach in the software process domain.
Abstract:
In recent years, easy access, in terms of cost, to tools for producing, editing, and distributing audiovisual content has contributed to the exponential growth in the daily production of this type of content. In this paradigm of multimedia-content overabundance, a large percentage of video sequences contain explicit material, and stricter control is needed so that it is not easily accessible to minors. The concept of explicit content can be characterized in different ways; the work described in this document focuses on the automatic detection of female nudity in video sequences. This process of automatic detection and classification of adult material can be an important tool in the management of a television channel. Hundreds of hours of material may be received daily, making a manual quality-control process impractical. The solution created in the context of this dissertation was studied and developed around a specific broadcasting product: the mxfSPEEDRAIL F1000, a solution by the company MOG Technologies. The main objective of the project is the development of a C++ library, accessible during the ingest process, that makes it possible, through an analysis based on computer vision techniques, to detect and flag in the signal's metadata which frames potentially contain explicit content. The developed solution uses a set of state-of-the-art techniques adapted to the problem at hand, including algorithms for skin segmentation and object detection in images. Finally, a critical analysis of the solution developed in this dissertation is carried out, so that future developments can improve it in terms of resource consumption during analysis and in terms of its success rate.
Abstract:
The analysis of the significance of both the content and the discourse generated by images is approached with qualitative methods. The interpretation of the manifest and latent data of an image is the result of the observer's interaction with the image, and is affected by knowledge of the context, the symbolic interpretation made of it, and the social, historical, and cultural environment in which both the observer and the image itself are immersed.
Abstract:
An important statistical development of the last 30 years has been the advance in regression analysis provided by generalized linear models (GLMs) and generalized additive models (GAMs). Here we introduce a series of papers prepared within the framework of an international workshop entitled Advances in GLMs/GAMs modeling: from species distribution to environmental management, held in Riederalp, Switzerland, 6-11 August 2001. We first discuss some general uses of statistical models in ecology and provide a short review of several key examples of the use of GLMs and GAMs in ecological modeling efforts. We next present an overview of GLMs and GAMs and discuss some of their related statistics used for predictor selection, model diagnostics, and evaluation. Included is a discussion of several new approaches applicable to GLMs and GAMs, such as ridge regression, an alternative to stepwise selection of predictors, and methods for identifying interactions through a combined use of regression trees and several other approaches. We close with an overview of the papers and how we feel they advance the application of these models in ecological modeling.
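A GLM of the kind discussed here is typically fit by iteratively reweighted least squares (IRLS). A minimal sketch for the binomial/logit case, on synthetic data (not from the workshop papers):

```python
import numpy as np

def fit_logistic_glm(x, y, n_iter=25):
    """Binomial GLM with logit link, fit by iteratively reweighted least
    squares (IRLS), the standard algorithm inside GLM software."""
    X = np.column_stack([np.ones(len(x)), x])    # add intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta                           # linear predictor
        mu = 1.0 / (1.0 + np.exp(-eta))          # inverse link (fitted mean)
        w = mu * (1.0 - mu)                      # binomial variance weights
        z = eta + (y - mu) / w                   # working response
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
    return beta

# Presence/absence responses along a single environmental gradient
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.0, 1.0, 0.0, 1.0, 1.0])
beta = fit_logistic_glm(x, y)
```

A GAM replaces the linear term in the predictor with smooth functions of the covariates but is fit with the same weighted-least-squares machinery.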
Abstract:
In urban communities, there are often limited amounts of right-of-way available for establishing a large setback distance from the curb for fixed objects. Urban communities must constantly weigh the cost of purchasing additional right-of-way for clear zones against the risk of fixed object crashes. From 2004 to 2006, this type of crash on curbed roads represented 15% of all fatal crashes and 3% of all crashes in the state of Iowa. Many states have kept the current minimum AASHTO recommendations as their minimum clear zone standards; however, other states have decided that these recommendations are insufficient and have increased the required minimum clear zone distance to better suit the judgment of local designers. This report presents research on the effects of the clear zone on urban curbed streets. The research was conducted in two phases. The first phase involved a synthesis of practice that included a literature review and a survey of practices in jurisdictions that have developmental and historical patterns similar to those of Iowa. The second phase involved investigating the benefits of a 10 ft clear zone, which included examining urban corridors in Iowa that meet or do not meet the 10 ft clear zone goal. The results of this study indicate that a consistent fixed object offset results in a reduction in the number of fixed object crashes, a 5 ft clear zone is most effective when the goal is to minimize the number of fixed object crashes, and a 3 ft clear zone is most effective when the goal is to minimize the cost of fixed object crashes.
Abstract:
PURPOSE: A new magnetic resonance imaging approach for detection of myocardial late enhancement during free-breathing was developed. METHODS AND RESULTS: For suppression of respiratory motion artifacts, a prospective navigator technology including real-time motion correction and a local navigator restore was implemented. Subject-specific inversion times were defined from images with incrementally increased inversion times acquired during a single dynamic scout scan, navigator-gated and real-time motion corrected during free-breathing. Subsequently, MR imaging of myocardial late enhancement was performed with navigator-gated and real-time motion corrected adjacent short-axis and long-axis (two-, three-, and four-chamber) views. This alternative approach was investigated in 7 patients with a history of myocardial infarction, 12 min after i.v. administration of 0.2 mmol/kg body weight gadolinium-DTPA. CONCLUSION: With the presented navigator-gated and real-time motion corrected sequence for MR imaging of myocardial late enhancement, data can be acquired entirely during free-breathing. The time constraints of a breath-hold technique are removed, and an optimized patient-specific inversion time is ensured.