897 results for Mesh generation from image data
Abstract:
Live-cell fluorescence microscopy produces large amounts of data. These data contain a wide diversity of shapes among the objects of interest and have a very low signal-to-noise ratio. To build an effective image-processing pipeline for fluorescence microscopy, a robust and reliable segmentation is essential, since segmentation is the first step of the processing. In this thesis, I present MinSeg, a segmentation algorithm for fluorescence microscopy images that makes few assumptions about the image and uses statistical properties to distinguish signal from noise. MinSeg makes no assumption about the size or shape of the objects in the image, and is therefore applicable to a wide variety of images. I also present a suite of algorithms, built on the MinSeg segmentation, for quantifying small complexes in single-molecule fluorescence microscopy experiments. This suite was used to quantify CENP-A, a variant of histone H3. Using this technique, we found that CENP-A is present mainly as a dimer.
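MinSeg's internals are not given in the abstract; purely as an illustration of the idea of statistically separating signal from noise without shape or size assumptions, here is a minimal Python sketch using a robust noise estimate (median/MAD) and an assumed k-sigma threshold. The function name, threshold and minimum object size are placeholders, not the published algorithm.

```python
import numpy as np
from scipy import ndimage

def segment_by_noise_statistics(img, k=4.0, min_size=3):
    """Illustrative shape-agnostic segmentation: keep pixels whose intensity
    exceeds a robust estimate of the background noise level."""
    # Robust background statistics: median and MAD are insensitive to the
    # (sparse) bright objects sitting on top of the noise floor.
    med = np.median(img)
    mad = np.median(np.abs(img - med))
    sigma = 1.4826 * mad          # MAD -> standard deviation for Gaussian noise
    mask = img > med + k * sigma  # pixels unlikely to be pure noise

    # Remove isolated single-pixel detections (false positives from noise tails).
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_size))
    return keep

# Usage on a synthetic noisy frame containing two dim spots
rng = np.random.default_rng(0)
frame = rng.normal(100.0, 5.0, size=(64, 64))
frame[20:23, 20:23] += 40.0
frame[40:42, 50:52] += 35.0
print(segment_by_noise_statistics(frame).sum(), "foreground pixels")
```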
Abstract:
Positron emission tomography (PET) is a molecular imaging modality that uses radiotracers labelled with positron-emitting isotopes to probe and quantify biological and physiological processes. It is currently used mainly in oncology, but it is increasingly used in cardiology, neurology and pharmacology as well. It is a modality intrinsically able to provide, with better sensitivity, functional information on cellular metabolism. Its main limitations are low spatial resolution and limited quantification accuracy. To overcome these limitations, which restrict the range of clinical applications of PET, new acquisition systems are equipped with a large number of small detectors offering better detection performance. Image reconstruction relies on iterative stochastic algorithms, which are better suited to low-count acquisitions. As a result, reconstruction times have become too long for clinical use. To reduce them, acquisition data are compressed and accelerated, but generally less accurate, versions of the iterative stochastic algorithms are used. The performance gains from the larger number of detectors are therefore limited by computation-time constraints. To break out of this loop and allow the use of robust reconstruction algorithms, much work has been devoted to accelerating these algorithms on high-performance GPU (Graphics Processing Unit) devices. In this work, we joined this community effort to develop, and introduce into the clinic, powerful reconstruction algorithms that improve spatial resolution and quantification accuracy in PET. We first worked on strategies to accelerate, on GPUs, the reconstruction of PET images from list-mode acquisition data. The list mode offers many advantages over sinogram-based reconstruction: among others, it allows motion correction and time-of-flight (TOF) information to be incorporated easily and accurately, improving quantification accuracy, and it allows spatio-temporal basis functions to be used for 4D reconstruction, so that kinetic parameters of metabolism can be estimated accurately. However, the use of this mode is very limited in the clinic, where PET is mostly used to estimate the standardized uptake value (SUV), a semi-quantitative measure that limits the functional character of the modality. Our contributions are the following: - The development of a new strategy to accelerate, on GPU devices, the 3D LM-OSEM (List-Mode Ordered-Subset Expectation-Maximization) algorithm, including the computation of the sensitivity matrix incorporating the patient attenuation factors and the detector normalization coefficients. The computation time obtained is not only compatible with clinical use of 3D LM-OSEM, but also makes it possible to envisage fast reconstructions for advanced PET applications such as real-time dynamic studies and parametric image reconstruction directly from the acquisition data. - The development and GPU implementation of the Multigrid/Multiframe approach to accelerate the LMEM (List-Mode Expectation-Maximization) algorithm. The objective is a new strategy to accelerate the reference algorithm LMEM, a powerful, convergent algorithm whose drawback is very slow convergence. The results obtained make near-real-time reconstruction foreseeable, both for examinations involving large amounts of acquisition data and for gated dynamic acquisitions. In the clinic, moreover, quantification is usually performed from acquisition data stored as sinograms, generally compressed. Previous work has shown, however, that this way of accelerating the reconstruction reduces quantification accuracy and degrades spatial resolution. For this reason, we parallelized and implemented on GPU the AW-LOR-OSEM (Attenuation-Weighted Line-of-Response OSEM) algorithm, a version of 3D OSEM that reconstructs from uncompressed sinograms while incorporating attenuation and normalization corrections into the sensitivity matrices. We compared two implementation approaches: in the first, the system matrix (SM) is computed on the fly during reconstruction, while the second uses a precomputed SM with better accuracy. The results show that the first implementation achieves roughly twice the computational efficiency of the second. The reported reconstruction times are compatible with clinical use of both strategies.
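The GPU implementations themselves are not reproduced here; as a language-neutral toy of the core list-mode EM update that LM-OSEM and LMEM build on (no subsets, TOF, attenuation, normalisation or GPU code), the following NumPy sketch uses an invented two-voxel, three-LOR geometry.

```python
import numpy as np

def lm_mlem(event_rows, A, sensitivity, n_iter=10):
    """Toy list-mode MLEM (no subsets, no TOF, no attenuation/normalisation).
    event_rows : indices into A, one entry per detected coincidence event.
    A          : (n_lors, n_voxels) geometric system matrix.
    sensitivity: per-voxel sensitivity image, s_j = sum_i A[i, j]."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        back = np.zeros_like(x)
        for i in event_rows:                    # loop over list-mode events
            proj = A[i] @ x                     # forward projection along one LOR
            if proj > 0:
                back += A[i] / proj             # back-project the ratio 1/proj
        x *= back / np.maximum(sensitivity, 1e-12)
    return x

# Tiny synthetic example: 2 voxels, 3 LORs, a handful of events
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
sens = A.sum(axis=0)
events = [0, 0, 2, 1, 2, 0]                     # event stream = LOR indices
print(lm_mlem(events, A, sens, n_iter=20))
```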
Abstract:
Spectroscopic studies of laser-induced plasma from a high-temperature superconducting material, viz., YBa2Cu3O7 (YBCO), have been carried out. Electron temperature and electron density measurements were made from spectral data. The Stark broadening of emission lines was used to determine the electron density, and the ratio of line intensities was exploited to determine the electron temperature. An initial electron temperature of 2.35 eV and electron density of 2.5 × 10^17 cm^-3 were observed. The dependence of electron temperature and density on different experimental parameters, such as distance from the target, delay time after the initiation of the plasma, and laser irradiance, is also discussed in detail. Index Headings: Laser-plasma spectroscopy; Plasma diagnostics; Emission spectroscopy; YBa2Cu3O7.
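For orientation, the two diagnostics mentioned follow standard textbook relations; the sketch below implements them with placeholder reference Stark width, atomic constants and intensities (not values from this study): electron density from the Stark FWHM in the common linear approximation, and excitation temperature from the Boltzmann two-line intensity ratio.

```python
import numpy as np

def ne_from_stark(fwhm_nm, w_ref_nm, n_ref=1e16):
    """Electron density (cm^-3) from the Stark FWHM of a line, using the
    common linear approximation n_e ~ (FWHM / 2*w_ref) * n_ref, where w_ref
    is the tabulated half-width at the reference density n_ref."""
    return (fwhm_nm / (2.0 * w_ref_nm)) * n_ref

def te_from_line_ratio(I1, I2, lam1_nm, lam2_nm, g1A1, g2A2, E1_eV, E2_eV):
    """Excitation temperature (eV) from the Boltzmann two-line ratio:
    I1/I2 = (g1 A1 lam2 / g2 A2 lam1) * exp(-(E1 - E2) / kT)."""
    boltzmann_term = (I1 / I2) * (g2A2 * lam1_nm) / (g1A1 * lam2_nm)
    return -(E1_eV - E2_eV) / np.log(boltzmann_term)

# Illustrative numbers only (not measurements from the paper)
print(ne_from_stark(fwhm_nm=0.05, w_ref_nm=0.01))          # ~2.5e16 cm^-3
print(te_from_line_ratio(0.4, 1.0, 450.0, 460.0,
                         g1A1=2.0e8, g2A2=1.5e8,
                         E1_eV=5.0, E2_eV=3.5))            # ~1.2 eV
```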
Abstract:
Data mining is one of the most active research areas today, with a wide variety of applications in everyday life. It is about finding interesting hidden patterns in a large historical database. For example, from a sales database one can find a pattern like "people who buy magazines tend to buy newspapers as well"; from a sales point of view, placing these items together in the shop can then increase sales. In this research work, data mining is applied to the domain of placement chance prediction, since making a wise career decision is crucial for anybody. In India, technical manpower analysis is carried out by the National Technical Manpower Information System (NTMIS), established in 1983-84 by India's Ministry of Education & Culture. The NTMIS comprises a lead centre in the IAMR, New Delhi, and 21 nodal centres located in different parts of the country. The Kerala State Nodal Centre is located at Cochin University of Science and Technology. The nodal centre collects placement information by sending postal questionnaires to graduates on a regular basis. From this raw data, a history database was prepared. Each record in this database includes entrance rank range, reservation, sector, sex, and a particular engineering branch. For each such combination of attributes, the corresponding placement chance is computed and stored in the history database. From this data, various popular data mining models are built and tested; these models can be used to predict the most suitable branch for a new student matching one of the above combinations of criteria. A detailed performance comparison of the various data mining models is also carried out. This research work proposes to use a combination of data mining models, namely a hybrid stacking ensemble, for better predictions. Strategies to predict the overall absorption rate for various branches, as well as the time it takes for all the students of a particular branch to get placed, are also proposed. Finally, this research work puts forward a new data mining algorithm, C4.5*stat, for numeric data sets, which has been shown to have competitive accuracy on the standard UCI benchmark data sets, and proposes an optimization strategy called parameter tuning to improve the standard C4.5 algorithm. In summary, this research work covers all four dimensions of a typical data mining research work: application to a domain, development of classifier models, optimization, and ensemble methods.
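The exact base learners of the hybrid stacking ensemble are not stated in the abstract; as a generic illustration of stacking for placement-chance prediction, the sketch below combines a decision tree, naive Bayes and k-NN under a logistic-regression meta-learner with scikit-learn. The toy records, attribute coding and model choices are assumptions.

```python
import pandas as pd
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Placeholder records: rank range, reservation category, sector, sex, branch
# coded as integers; 'placed' is the target. Real attribute coding will differ.
data = pd.DataFrame({
    "rank_range":  [1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3],
    "reservation": [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
    "sector":      [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
    "sex":         [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
    "branch":      [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2],
    "placed":      [1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1],
})
X, y = data.drop(columns="placed"), data["placed"]

# Base learners feed their out-of-fold predictions to a logistic-regression meta-model.
stack = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier(max_depth=3)),
                ("nb", GaussianNB()),
                ("knn", KNeighborsClassifier(n_neighbors=2))],
    final_estimator=LogisticRegression(),
    cv=2,
)
print(cross_val_score(stack, X, y, cv=2).mean())
```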
Abstract:
Reducing fishing pressure in coastal waters is the need of the day in the Indian marine fisheries sector, which is fast changing from a mere vocational activity to a capital-intensive industry. It requires continuous monitoring of resource exploitation through a scientifically acceptable methodology, with data on the production of each species stock, the number and characteristics of the fishing gears of the fleet, various biological characteristics of each stock, the impact of fishing on the environment, and the role of fishery-independent data on availability and abundance. Besides this, there are issues relating to capabilities in stock assessment, taxonomy research, biodiversity, conservation and fisheries management. Generation of a reliable database over a fixed time frame, and its analysis and interpretation, are necessary before drawing conclusions on stock size, maximum sustainable yield and maximum economic yield, and before implementing various fishing regulatory measures. India, being a signatory to several treaties and conventions, is obliged to carry out assessments of the exploited stocks and manage them at sustainable levels. Besides, the nation is bound by its obligations of protein food security to its people and livelihood security to those engaged in marine fishing related activities. There are also regional variabilities in fishing technology and fishery resources. All these make it mandatory for India to continue and strengthen its marine capture fisheries research in general and deep sea fisheries research in particular. Against this background, an attempt is made to strengthen knowledge of deep sea fish biodiversity, to generate data on the distribution, abundance and catch per unit effort of fishery resources available beyond 200 m in the EEZ off the southwest coast of India, and to unravel some aspects of the life history traits of potentially important non-conventional fish species inhabiting depths beyond 200 m. This study was carried out as part of the Project on Stock Assessment and Biology of Deep Sea Fishes of the Indian EEZ (MoES, Govt. of India).
Abstract:
Fujaba is an Open Source UML CASE tool project started at the software engineering group of Paderborn University in 1997. In 2002 Fujaba was redesigned and became the Fujaba Tool Suite, with a plug-in architecture allowing developers to add functionality easily while retaining full control over their contributions. Multiple application domains: Fujaba has followed the model-driven development philosophy right from its beginning in 1997. In the early days, Fujaba had a special focus on code generation from UML diagrams, resulting in a visual programming language with a special emphasis on object-structure manipulating rules. Today, at least six rather independent tool versions are under development in Paderborn, Kassel, and Darmstadt, supporting (1) reengineering, (2) embedded real-time systems, (3) education, (4) specification of distributed control systems, (5) integration with the ECLIPSE platform, and (6) MOF-based integration of system (re-)engineering tools. International community: to our knowledge, quite a number of research groups have also chosen Fujaba as a platform for UML and MDA related research activities. In addition, quite a number of Fujaba users send requests for more functionality and extensions. Therefore, the 8th International Fujaba Days aimed at bringing together Fujaba developers and Fujaba users from all over the world to present their ideas and projects and to discuss them with each other and with the Fujaba core development team.
Abstract:
Speaker: Dr Kieron O'Hara. Time: 04/02/2015 11:00-11:45. Location: B32/3077. Abstract: In order to reap the potential societal benefits of big and broad data, it is essential to share and link personal data. However, privacy and data protection considerations mean that, to be shared, personal data must be anonymised, so that the data subject cannot be identified from the data. Anonymisation is therefore a vital tool for data sharing, but deanonymisation, or reidentification, is always possible given sufficient auxiliary information (and as the amount of data grows, both in terms of creation and in terms of availability in the public domain, the probability of finding such auxiliary information grows). This creates issues for the management of anonymisation, which are exacerbated not only by uncertainties about the future, but also by misunderstandings about the process(es) of anonymisation. This talk discusses these issues in relation to privacy, risk management and security, reports on recent theoretical tools created by the UKAN network of statistics professionals (on which the author is one of the leads), and asks how long anonymisation can remain a useful tool, and what might replace it.
Abstract:
Over the last three decades, different methodologies have been formulated to estimate health conditions worldwide, in terms of measuring the global and specific burden of morbidity and disability and of estimating the effectiveness of public health interventions. In Colombia, the most significant advance regarding disability is the Registry for the Localization and Characterization of Persons with Disabilities, developed by DANE in 2003. This research used the Registry data and analysed the environmental, personal and social contextual factors of the ICF in order to identify the relationships that determine disability. The secondary analysis draws on 86,622 records (DANE, 2005-2006) from the 20 localities of the Capital District of Bogotá. Variables were selected by convenience, following empirical referents of the ICF determinants related to the Registry modules on location and housing, personal identification, characterization and origin of the disability, health, education and participation. Frequency distributions, in absolute and percentage values, were obtained for each variable. The overall analysis by groups of factors, personal and environmental, suggests a greater weight of the latter in the generation and exacerbation of disability, insofar as they respond to determinants related to ways and conditions of living associated with services, systems and policies.
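The reported analysis (absolute and percentage frequency distributions per variable, compared across personal and environmental factor groups) is straightforward to reproduce on any coded extract of such a registry; a minimal pandas sketch with invented column names and values follows.

```python
import pandas as pd

# Placeholder extract of the registry; real module and variable names differ.
records = pd.DataFrame({
    "locality":       ["Usaquén", "Suba", "Kennedy", "Suba", "Kennedy"],
    "origin":         ["illness", "accident", "illness", "congenital", "illness"],
    "attends_school": ["no", "no", "yes", "no", "no"],
})

# Absolute and percentage frequency distribution for each variable.
for col in records.columns:
    counts = records[col].value_counts()
    table = pd.DataFrame({"n": counts,
                          "pct": (100 * counts / len(records)).round(1)})
    print(f"\n{col}\n{table}")
```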
Abstract:
Large scale image mosaicing methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots in gathering optical data from the seafloor. Cost and weight constraints mean that low-cost remotely operated vehicles (ROVs) usually have a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predetermined trajectory that provides several non-time-consecutive overlapping image pairs. Finding these pairs (a process known as topology estimation) is indispensable for obtaining globally consistent mosaics and accurate trajectory estimates, which are necessary for a global view of the surveyed area, especially when optical sensors are the only data source. This thesis presents a set of consistent methods aimed at creating large-area image mosaics from optical data obtained during surveys with low-cost underwater vehicles. First, a global alignment method developed within a feature-based image mosaicing (FIM) framework, where nonlinear minimisation is substituted by two linear steps, is discussed. Then, a simple four-point mosaic rectifying method is proposed to reduce distortions that might occur due to lens distortion, error accumulation and the difficulties of optical imaging in an underwater medium. The topology estimation problem is addressed by means of a combined augmented-state and extended Kalman filter framework, aimed at minimising the total number of matching attempts and simultaneously obtaining the best possible trajectory. Potential image pairs are predicted by taking into account the uncertainty in the trajectory. The contribution of matching an image pair is investigated using information theory principles. Lastly, a different solution to the topology estimation problem is proposed in a bundle adjustment framework. Innovative aspects include the use of a fast image similarity criterion combined with a minimum spanning tree (MST) solution to obtain a tentative topology. This topology is improved by attempting image matching with the pairs for which there is the most overlap evidence. Unlike previous approaches to large-area mosaicing, our framework is able to deal naturally with cases where time-consecutive images cannot be matched successfully, such as completely unordered sets. Finally, the efficiency of the proposed methods is discussed and a comparison made with other state-of-the-art approaches, using a series of challenging datasets in underwater scenarios.
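The thesis's own topology-estimation machinery is not reproduced here; as an illustration of the MST idea it mentions (a tentative topology built from a fast pairwise image-similarity score), the sketch below computes a minimum spanning tree over a placeholder similarity matrix with SciPy.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def tentative_topology(similarity):
    """Tentative mosaic topology: a minimum spanning tree over the image
    graph, using cost = 1 - similarity so high-similarity pairs are kept."""
    cost = 1.0 - similarity
    np.fill_diagonal(cost, 0.0)          # zero entries are treated as absent edges
    mst = minimum_spanning_tree(csr_matrix(cost))
    rows, cols = mst.nonzero()
    return list(zip(rows.tolist(), cols.tolist()))   # image pairs to match first

# Placeholder pairwise similarity matrix for 4 survey images
sim = np.array([[1.0, 0.8, 0.1, 0.2],
                [0.8, 1.0, 0.7, 0.1],
                [0.1, 0.7, 1.0, 0.6],
                [0.2, 0.1, 0.6, 1.0]])
print(tentative_topology(sim))
```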
Abstract:
Ozone profiles from the Microwave Limb Sounder (MLS) onboard the Aura satellite of NASA's Earth Observing System (EOS) were experimentally added to the European Centre for Medium-Range Weather Forecasts (ECMWF) four-dimensional variational (4D-Var) data assimilation system (version CY30R1), in which total ozone columns from the Scanning Imaging Absorption Spectrometer for Atmospheric CHartographY (SCIAMACHY) onboard the Envisat satellite and partial profiles from the Solar Backscatter Ultraviolet (SBUV/2) instrument onboard the NOAA-16 satellite have been operationally assimilated. As shown by results for the autumn of 2005, additional constraints from MLS data significantly improved the agreement of the analyzed ozone fields with independent observations throughout most of the stratosphere, owing to the daily near-global coverage and good vertical resolution of MLS observations. The largest impacts were seen in the middle and lower stratosphere, where model deficiencies could not be effectively corrected by the operational observations without the additional information on the ozone vertical distribution provided by MLS. Even in the upper stratosphere, where ozone concentrations are mainly determined by rapid chemical processes, dense and vertically resolved MLS data helped reduce the biases related to model deficiencies. These improvements resulted in a more realistic and consistent description of spatial and temporal variations in stratospheric ozone, as demonstrated by cases in dynamically and chemically active regions. However, combined assimilation of the often discrepant ozone observations might lead to underestimation of tropospheric ozone. In addition, model deficiencies induced large biases in the upper stratosphere in the medium-range (5-day) ozone forecasts.
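For readers unfamiliar with variational assimilation, the sketch below writes out the generic cost function such systems minimise, in its simplest 3D-Var form (4D-Var additionally propagates the state through the forecast model over the assimilation window), together with a toy gradient-descent analysis; the background state, observation operator and error covariances are placeholders, not the ECMWF configuration.

```python
import numpy as np

def var_cost_and_grad(x, xb, y, H, B_inv, R_inv):
    """J(x) = 1/2 (x - xb)^T B^-1 (x - xb) + 1/2 (Hx - y)^T R^-1 (Hx - y).
    Returns the cost and its gradient (simplest 3D-Var form)."""
    dxb = x - xb
    dy = H @ x - y
    J = 0.5 * dxb @ B_inv @ dxb + 0.5 * dy @ R_inv @ dy
    grad = B_inv @ dxb + H.T @ R_inv @ dy
    return J, grad

# Toy analysis: 3 model levels of ozone, 2 observations
xb = np.array([1.0, 2.0, 3.0])               # background profile
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.5, 0.5]])              # observation operator
y = np.array([1.2, 2.8])                     # observations
B_inv = np.linalg.inv(0.1 * np.eye(3))       # inverse background-error covariance
R_inv = np.linalg.inv(0.05 * np.eye(2))      # inverse observation-error covariance

x = xb.copy()
for _ in range(200):                         # plain gradient descent
    J, g = var_cost_and_grad(x, xb, y, H, B_inv, R_inv)
    x -= 0.01 * g
print("analysis:", x.round(3), "cost:", round(J, 4))
```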
Abstract:
In this paper, we introduce a novel high-level visual content descriptor devised for semantic-based image classification and retrieval. The work can be treated as an attempt to bridge the so-called "semantic gap". The proposed image feature vector model is fundamentally underpinned by the image labelling framework called Collaterally Confirmed Labelling (CCL), which combines the collateral knowledge extracted from the collateral texts of the images with state-of-the-art low-level image processing and visual feature extraction techniques to automatically assign linguistic keywords to image regions. Two different high-level image feature vector models are developed based on the CCL labelling results, for the purposes of image data clustering and retrieval respectively. A subset of the Corel image collection has been used to evaluate our proposed method. The experimental results to date already indicate that our proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models.
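The CCL descriptor itself is not specified in the abstract; as a generic illustration of fusing assigned keywords with low-level visual features into a single vector and ranking images by cosine similarity, see the sketch below. The vocabulary, weighting and feature dimensions are invented.

```python
import numpy as np

VOCAB = ["sky", "water", "grass", "building", "animal"]   # placeholder keyword vocabulary

def combined_descriptor(keywords, visual_feats, alpha=0.5):
    """Concatenate a binary keyword vector with L2-normalised visual features,
    weighting the semantic and visual parts by alpha / (1 - alpha)."""
    kw = np.array([1.0 if w in keywords else 0.0 for w in VOCAB])
    kw = kw / (np.linalg.norm(kw) + 1e-12)
    vf = np.asarray(visual_feats, dtype=float)
    vf = vf / (np.linalg.norm(vf) + 1e-12)
    return np.concatenate([alpha * kw, (1 - alpha) * vf])

def rank_by_cosine(query, database):
    sims = [query @ d / (np.linalg.norm(query) * np.linalg.norm(d) + 1e-12)
            for d in database]
    return np.argsort(sims)[::-1]                # most similar images first

db = [combined_descriptor({"sky", "water"},    [0.2, 0.7, 0.1]),
      combined_descriptor({"grass", "animal"}, [0.6, 0.1, 0.3]),
      combined_descriptor({"building"},        [0.1, 0.2, 0.9])]
q = combined_descriptor({"sky"}, [0.3, 0.6, 0.1])
print(rank_by_cosine(q, db))
```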
Abstract:
In principle the global mean geostrophic surface circulation of the ocean can be diagnosed by subtracting a geoid from a mean sea surface (MSS). However, because the resulting mean dynamic topography (MDT) is approximately two orders of magnitude smaller than either of the constituent surfaces, and because the geoid is most naturally expressed as a spectral model while the MSS is a gridded product, in practice complications arise. Two algorithms for combining MSS and satellite-derived geoid data to determine the ocean's mean dynamic topography (MDT) are considered in this paper: a pointwise approach, whereby the gridded geoid height field is subtracted from the gridded MSS; and a spectral approach, whereby the spherical harmonic coefficients of the geoid are subtracted from an equivalent set of coefficients representing the MSS, from which the gridded MDT is then obtained. The essential difference is that with the latter approach the MSS is truncated, a form of filtering, just as with the geoid. This ensures that errors of omission resulting from the truncation of the geoid, which are small in comparison to the geoid but large in comparison to the MDT, are matched, and therefore negated, by similar errors of omission in the MSS. The MDTs produced by both methods require additional filtering. However, the spectral MDT requires less filtering to remove noise, and therefore it retains more oceanographic information than its pointwise equivalent. The spectral method also results in a more realistic MDT at coastlines.
1. Introduction. An important challenge in oceanography is the accurate determination of the ocean's time-mean dynamic topography (MDT). If this can be achieved with sufficient accuracy for combination with the time-dependent component of the dynamic topography, obtainable from altimetric data, then the resulting sum (i.e., the absolute dynamic topography) will give an accurate picture of surface geostrophic currents and ocean transports.
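The omission-error argument can be made concrete with a one-dimensional Fourier analogy (synthetic signals, not real geodetic data): differencing the full-resolution "MSS" against a band-limited "geoid" model leaves a spurious short-wavelength residual, whereas truncating the MSS to the geoid's maximum degree first lets the omission errors cancel.

```python
import numpy as np

# 1-D spectral analogy: the full "geoid" has power at high wavenumbers, but the
# geoid *model* is truncated at L_MAX; the "MDT" is a small low-wavenumber signal.
N, L_MAX = 512, 40
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
mdt_true = 0.1 * np.cos(8 * x)
geoid_full = np.cos(3 * x) + 0.8 * np.cos(25 * x) + 0.3 * np.cos(120 * x)
geoid_model = np.cos(3 * x) + 0.8 * np.cos(25 * x)     # omission error at k = 120
mss = geoid_full + mdt_true

# Pointwise approach: full-resolution MSS minus truncated geoid model.
mdt_pointwise = mss - geoid_model                      # keeps the spurious k = 120 term

# "Spectral" approach: truncate the MSS to the geoid model's resolution first,
# so the omission errors of the two surfaces cancel.
spec = np.fft.rfft(mss)
spec[L_MAX + 1:] = 0.0
mdt_spectral = np.fft.irfft(spec, n=N) - geoid_model

print("pointwise RMS error:", np.sqrt(np.mean((mdt_pointwise - mdt_true) ** 2)))
print("spectral  RMS error:", np.sqrt(np.mean((mdt_spectral - mdt_true) ** 2)))
```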
Abstract:
Stratospheric Sounding Units (SSU) on the NOAA operational satellites have been the main source of near global temperature trend data above the lower stratosphere. They have been used extensively for comparison with model-derived trends. The SSU senses in the 15 micron band of CO2 and hence the weighting function is sensitive to changes in CO2 concentrations. The impact of this change in weighting function has been ignored in all recent trend analyses. We show that the apparent trends in global mean brightness temperature due to the change in weighting function vary from about -0.4 K/decade to 0.4 K/decade depending on the altitude sensed by the different SSU channels. For some channels, this apparent trend is of a similar size to the trend deduced from SSU data but ignoring the change in weighting function. In the mid-stratosphere, the revised trends are now significantly more negative and in better agreement with model-calculated trends.
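The mechanism described, an apparent trend produced solely by a CO2-driven shift of the weighting function while the temperature profile stays fixed, can be illustrated with a toy calculation; the Gaussian weighting function, idealised profile and size of the peak shift below are placeholders, not the actual SSU channel characteristics.

```python
import numpy as np

z = np.linspace(0.0, 60.0, 601)                  # altitude grid, km
dz = z[1] - z[0]

def weighting_function(peak_km, width_km=8.0):
    """Idealised Gaussian weighting function, normalised to unit area."""
    w = np.exp(-0.5 * ((z - peak_km) / width_km) ** 2)
    return w / (w.sum() * dz)

def temperature_profile():
    """Fixed, idealised stratospheric profile: no real temperature trend."""
    return 220.0 + 1.2 * np.clip(z - 15.0, 0.0, None)

# Brightness temperature ~ weighting-function-weighted mean temperature.
T = temperature_profile()
bt_start = np.sum(weighting_function(peak_km=35.0) * T) * dz
bt_end   = np.sum(weighting_function(peak_km=36.5) * T) * dz   # peak lifted by rising CO2

decades = 3.0                                     # e.g. a 30-year record
print("apparent trend (K/decade):", round((bt_end - bt_start) / decades, 2))
```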
Abstract:
Within this paper modern techniques such as satellite image analysis and tools provided by geographic information systems (GIS) are exploited in order to extend and improve existing techniques for mapping the spatial distribution of sediment transport processes. The processes of interest comprise mass movements such as solifluction, slope wash, dirty avalanches and rock and boulder falls. They differ considerably in nature and therefore different approaches are required for deriving their spatial extent. A major challenge is addressing the difference between the comparably coarse resolution of the available satellite data (Landsat TM/ETM+, 30 m x 30 m) and the actual scale of sediment transport in this environment. A three-step approach has been developed which is based on the concept of Geomorphic Process Units (GPUs): parameterization, process area delineation and combination. Parameters include land cover from satellite data and digital elevation model derivatives. Process areas are identified using a hierarchical classification scheme utilizing thresholds and definition of topology. The approach has been developed for the Karkevagge in Sweden and could be successfully transferred to the Rabotsbekken catchment at Okstindan, Norway, using similar input data. Copyright (C) 2008 John Wiley & Sons, Ltd.
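The hierarchical, threshold-based delineation of Geomorphic Process Units can be sketched schematically; the rules, class codes and thresholds below are invented placeholders, not the published classification scheme.

```python
import numpy as np

# Placeholder rasters: slope in degrees and a coded land-cover grid
# (0 = vegetated, 1 = bare soil/debris, 2 = rock face).
rng = np.random.default_rng(1)
slope = rng.uniform(0, 70, size=(50, 50))
cover = rng.integers(0, 3, size=(50, 50))

# Hierarchical, rule-based delineation of process areas: later rules only
# apply where no earlier (higher-priority) rule has fired.
gpu_map = np.zeros(slope.shape, dtype=np.uint8)      # 0 = no process assigned
rules = [
    (4, (slope > 45) & (cover == 2)),                # rock and boulder fall source areas
    (3, (slope > 30) & (cover == 1)),                # slope wash on steep bare ground
    (2, (slope > 25) & (cover == 0)),                # dirty avalanches on vegetated slopes
    (1, (slope <= 25) & (cover == 0)),               # solifluction on gentle vegetated slopes
]
for code, condition in rules:
    gpu_map = np.where((gpu_map == 0) & condition, code, gpu_map)

codes, counts = np.unique(gpu_map, return_counts=True)
print(dict(zip(codes.tolist(), counts.tolist())))    # process-area extent in cells
```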
Abstract:
Remote sensing can potentially provide information useful for improving pollution transport modelling in agricultural catchments. Realisation of this potential will depend on the availability of the raw data, the development of information extraction techniques, and the impact of assimilating the derived information into models. High spatial resolution hyperspectral imagery of a farm near Hereford, UK is analysed. A technique is described to automatically identify the soil and vegetation endmembers within a field, enabling vegetation fractional cover estimation. Aerially-acquired laser altimetry is used to produce digital elevation models of the site. At the subfield scale, the hypothesis that higher resolution topography will make a substantial difference to contaminant transport is tested using the AGricultural Non-Point Source (AGNPS) model. Slope aspect and direction information are extracted from the topography at different resolutions to study the effects on soil erosion, deposition, runoff and nutrient losses. Field-scale models are often used to model drainage water, nitrate and runoff/sediment loss, but their demanding input data requirements make scaling up to catchment level difficult. By determining the input range of spatial variables gathered from EO data, and comparing the response of models to the range of variation measured, the critical model inputs can be identified. Response surfaces to variation in these inputs constrain uncertainty in model predictions and are presented. Although optical earth observation analysis can provide fractional vegetation cover, cloud cover and semi-random weather patterns can hinder data acquisition in Northern Europe. A Spring and Autumn cloud cover analysis is carried out over seven UK sites close to agricultural districts, using historic satellite image metadata, climate modelling and historic ground weather observations. Results are assessed in terms of probability of acquisition and implications for future earth observation missions. (C) 2003 Elsevier Ltd. All rights reserved.
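The endmember-based fractional vegetation cover estimation mentioned can be illustrated with simple two-endmember linear unmixing per pixel; the sketch below uses made-up soil and vegetation spectra and non-negative least squares, not the Hereford imagery or the technique's actual endmember identification step.

```python
import numpy as np
from scipy.optimize import nnls

# Placeholder endmember spectra (reflectance in a few hyperspectral bands).
soil = np.array([0.18, 0.22, 0.26, 0.30, 0.32])
veg  = np.array([0.05, 0.08, 0.06, 0.45, 0.50])
E = np.column_stack([soil, veg])                 # endmember matrix, bands x 2

def fractional_cover(pixel_spectrum):
    """Two-endmember linear unmixing with a non-negativity constraint;
    returns the vegetation fraction after normalising the abundances."""
    abundances, _ = nnls(E, pixel_spectrum)
    total = abundances.sum()
    return abundances[1] / total if total > 0 else 0.0

# A mixed pixel: 60 % vegetation, 40 % soil, plus a small offset as noise.
pixel = 0.4 * soil + 0.6 * veg + 0.005
print(round(fractional_cover(pixel), 2))         # ~0.6
```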