835 results for image-based rendering


Relevance:

80.00%

Publisher:

Abstract:

This report gives a comprehensive and up-to-date review of Alzheimer's disease biomarkers. Recent years have seen significant advances in this field. Whilst considerable effort has focused on Aβ- and tau-related markers, a substantial number of other molecules have been identified that may offer new opportunities.

This Report: Identifies 60 candidate Alzheimer's disease (AD) biomarkers and their associated studies. Of these, 49 are single species or single parameters, 7 are combinations or panels and 4 involve the measurement of two species or parameters or their ratios. These include proteins (n=34), genes (n=11), image-based parameters (n=7), small molecules (n=3), proteins + genes (n=2) and others (n=3). Of these, 30 (50%) relate to species identified in CSF and 19 (32%) were found in the blood. These candidates may be classified on the basis of their diagnostic utility, namely those which i) may allow AD to be detected when the disease has developed (48 of 75† = 64%), ii) may allow early detection of AD (18 of 75† = 24%) and iii) may allow AD to be predicted before the disease has begun to develop (9 of 75† = 12%). † Note: Of these, 11 were linked to two or more of these capabilities (e.g. allowed both early-stage detection and diagnosis after the disease has developed).

Biomarkers: The AD biomarkers identified in this report show significant diversity; however, of the 60 described, 18 (30%) are associated with amyloid beta (Aβ) and 9 (15%) relate to tau. The remainder of the biomarkers (just over half) fall into a number of different groups. Of these, some are associated with other hypotheses on the pathogenesis of AD; however, the vast majority are individually unique and not obviously linked with other markers. The analysis and discussion presented in this report include summaries of the studies and clinical trials that have led to the identification of these markers. Where they have been calculated, the diagnostic sensitivity, specificity and capacity of these markers to differentiate patients with suspected AD from healthy controls and from individuals believed to be suffering from other neurodegenerative conditions are indicated. These findings are discussed in relation to existing hypotheses on the pathogenesis of AD and the current drug development pipeline. Many uncertainties remain in relation to the pathogenesis of AD and in diagnosing and treating the disease, and many of the studies carried out to identify disease markers are at an early stage and will require confirmation through larger and longer investigations. Nevertheless, significant advances in the identification of AD biomarkers have now been made. Moreover, whilst much of the research on AD biomarkers has focused on amyloid- and tau-related species, it is evident that a substantial number of other species may provide important opportunities.

Purpose of Report: To provide a comprehensive review of important and recently discovered candidate biomarkers of AD, in particular those with the potential to reliably detect the disease or with utility in clinical development, drug repurposing, studies of the pathogenesis, and monitoring of drug response and the course of the disease.
Other key goals were to identify markers that support current pipeline developments, indicate new potential drug targets or advance understanding of the pathogenesis of this disease.

Drug Repurposing: Studies of the pathogenesis of AD have identified aberrant changes in a number of other disease areas, including inflammation, diabetes, oxidative stress, lipid metabolism and others. These findings have prompted studies to evaluate some existing approved drugs to treat AD. This report identifies studies of 9 established drug classes currently being investigated for potential repurposing.

Alzheimer's Disease: In 2005, the global prevalence of dementia was estimated at 25 million, with more than 4 million new cases occurring each year. It is also calculated that the number of people affected will double every 20 years, to 80 million by 2040, if a cure is not found. More than 50% of dementia cases are due to AD. Today, approximately 5 million individuals in the US suffer from AD, representing one in eight people over the age of 65. Direct and indirect costs of AD and other forms of dementia in the US are around $150 billion annually. Worldwide, costs for dementia care are estimated at $315 billion annually. Despite significant research into this debilitating and ultimately fatal disease, advances in the development of diagnostic tests for AD and, moreover, effective treatments remain elusive.

Background: Alzheimer's disease is the most common cause of dementia, yet its clinical diagnosis remains uncertain until an eventual post-mortem histopathology examination is carried out. Currently, therapy for patients with Alzheimer's disease only treats the symptoms; however, it is anticipated that new disease-modifying drugs will soon become available. The urgency for new and effective treatments for AD is matched by the need for new tests to detect and diagnose the condition. Uncertainties in the diagnosis of AD mean that the disease is often undiagnosed and undertreated. Moreover, it is clear that clinical confirmation of AD, using cognitive tests, can only be made after substantial neuronal cell loss has occurred; a process that may have taken place over many years. Poor response to current therapies may therefore, in part, reflect the fact that such treatments are generally commenced only after neuronal damage has occurred. The absence of tests to detect or diagnose presymptomatic AD also means that there is no standard that can be applied to validate experimental findings (e.g. in drug discovery) without performing lengthy studies and eventual confirmation by autopsy. These limitations are focusing considerable effort on the identification of biomarkers that advance understanding of the pathogenesis of AD and of how the disease can be diagnosed in its early stages and treated. It is hoped that developments in these areas will help physicians to detect AD and guide therapy before the first signs of neuronal damage appear. The last 5-10 years have seen substantial research into the pathogenesis of AD, and this has led to the identification of a substantial number of AD biomarkers, which offer important insights into this disease. This report brings together the latest advances in the identification of AD biomarkers and analyses the opportunities they offer in drug R&D and diagnostics.

Relevance:

80.00%

Publisher:

Abstract:

PURPOSE: Respiratory motion correction remains a challenge in coronary magnetic resonance imaging (MRI) and current techniques, such as navigator gating, suffer from sub-optimal scan efficiency and ease-of-use. To overcome these limitations, an image-based self-navigation technique is proposed that uses "sub-images" and compressed sensing (CS) to obtain translational motion correction in 2D. The method was preliminarily implemented as a 2D technique and tested for feasibility for targeted coronary imaging. METHODS: During a 2D segmented radial k-space data acquisition, heavily undersampled sub-images were reconstructed from the readouts collected during each cardiac cycle. These sub-images may then be used for respiratory self-navigation. Alternatively, a CS reconstruction may be used to create these sub-images, so as to partially compensate for the heavy undersampling. Both approaches were quantitatively assessed using simulations and in vivo studies, and the resulting self-navigation strategies were then compared to conventional navigator gating. RESULTS: Sub-images reconstructed using CS showed a lower artifact level than sub-images reconstructed without CS. As a result, the final image quality was significantly better when using CS-assisted self-navigation as opposed to the non-CS approach. Moreover, while both self-navigation techniques led to a 69% scan time reduction (as compared to navigator gating), there was no significant difference in image quality between the CS-assisted self-navigation technique and conventional navigator gating, despite the significant decrease in scan time. CONCLUSIONS: CS-assisted self-navigation using 2D translational motion correction demonstrated feasibility of producing coronary MRA data with image quality comparable to that obtained with conventional navigator gating, and does so without the use of additional acquisitions or motion modeling, while still allowing for 100% scan efficiency and an improved ease-of-use. In conclusion, compressed sensing may become a critical adjunct for 2D translational motion correction in free-breathing cardiac imaging with high spatial resolution. An expansion to modern 3D approaches is now warranted.
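
As a rough illustration of the translational part of this scheme, the sketch below registers each cardiac cycle's sub-image to a reference via phase correlation and then removes the estimated in-plane shift from that cycle's k-space as a linear phase ramp (Fourier shift theorem). It is written in Python/NumPy for a Cartesian grid for simplicity (the study used radial sampling and a CS reconstruction, neither of which is reproduced here); the function names, the integer-pixel precision, and the sign conventions are illustrative assumptions.

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the 2D translational shift between two sub-images via
    phase correlation (integer-pixel precision for brevity)."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    cross /= np.abs(cross) + 1e-12
    corr = np.abs(np.fft.ifft2(cross))
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    dims = np.array(corr.shape)
    peak[peak > dims / 2] -= dims[peak > dims / 2]   # wrap to signed shifts
    return peak                                      # (dy, dx) in pixels

def correct_kspace(kspace, shift):
    """Undo an in-plane translation by applying the corresponding linear
    phase ramp to (Cartesian) k-space data; verify the signs for your FFT
    and registration conventions."""
    ny, nx = kspace.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    return kspace * np.exp(2j * np.pi * (ky * shift[0] + kx * shift[1]))
```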

Relevance:

80.00%

Publisher:

Abstract:

Patient-specific simulations of the hemodynamics in intracranial aneurysms can be constructed by using image-based vascular models and CFD techniques. This work evaluates the impact of the choice of imaging technique on these simulations.

Relevance:

80.00%

Publisher:

Abstract:

BACKGROUND: Transient balanced steady-state free-precession (bSSFP) has shown substantial promise for noninvasive assessment of coronary arteries, but its utilization at 3.0 T and above has been hampered by susceptibility to field inhomogeneities that degrade image quality. The purpose of this work was to refine, implement, and test a robust, practical single-breathhold bSSFP coronary MRA sequence at 3.0 T and to test the reproducibility of the technique. METHODS: A 3D, volume-targeted, high-resolution bSSFP sequence was implemented. Localized image-based shimming was performed to minimize inhomogeneities of both the static magnetic field and the radio frequency excitation field. Fifteen healthy volunteers and three patients with coronary artery disease underwent examination with the bSSFP sequence (scan time = 20.5 ± 2.0 seconds), and acquisitions were repeated in nine subjects. The images were quantitatively analyzed using a semi-automated software tool, and the repeatability and reproducibility of measurements were determined using regression analysis and the intra-class correlation coefficient (ICC), in a blinded manner. RESULTS: The 3D bSSFP sequence provided uniform, high-quality depiction of coronary arteries (n = 20). The average visible vessel length of 100.5 ± 6.3 mm and sharpness of 55 ± 2% compared favorably with earlier reported navigator-gated bSSFP and gradient echo sequences at 3.0 T. Length measurements demonstrated a highly statistically significant degree of inter-observer (r = 0.994, ICC = 0.993), intra-observer (r = 0.894, ICC = 0.896), and inter-scan concordance (r = 0.980, ICC = 0.974). Furthermore, ICC values demonstrated excellent intra-observer, inter-observer, and inter-scan agreement for vessel diameter measurements (ICC = 0.987, 0.976, and 0.961, respectively), and vessel sharpness values (ICC = 0.989, 0.938, and 0.904, respectively). CONCLUSIONS: The 3D bSSFP acquisition, using a state-of-the-art MR scanner equipped with recently available technologies such as multi-transmit, a 32-channel cardiac coil, and localized B0 and B1+ shimming, allows accelerated and reproducible multi-segment assessment of the major coronary arteries at 3.0 T in a single breathhold. This rapid sequence may be especially useful for functional imaging of the coronaries, where the acquisition time is limited by the stress duration, and in cases where low navigator-gating efficiency prohibits acquisition of a free-breathing scan in a reasonable time period.
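
The agreement statistics quoted above rest on the intra-class correlation coefficient. As a reference point, here is a minimal NumPy sketch of one commonly used form, the two-way random-effects, absolute-agreement, single-rater ICC(2,1); the exact ICC variant used in the study is not stated in this summary, and the vessel-length numbers below are made up for illustration.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    scores has shape (n_subjects, n_raters)."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)                      # per-subject means
    col_means = x.mean(axis=0)                      # per-rater means
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
    resid = x - row_means[:, None] - col_means[None, :] + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

# Hypothetical vessel-length measurements (mm) from two observers
print(icc_2_1([[101.2, 100.8], [95.4, 96.1], [108.3, 107.9]]))
```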

Relevance:

80.00%

Publisher:

Abstract:

This work reports the development and implementation of a practical assignment for the Active and Robot Vision course. In the assignment, a system is designed and implemented that moves objects with a robot arm in three-dimensional space. The system uses digital images to determine the positions of the objects. In the implementation presented in this work, thresholding in the HSV color space was used to segment objects from the image based on their colors. The binary image produced by the segmentation was filtered with a median filter to remove noise. The position of an object in the binary image was determined by labeling connected groups of pixels with a connected-component labeling method, and the position of the largest labeled pixel group was taken as the object's position. The object positions in the image were mapped to three-dimensional coordinates using a calibrated camera. The system moved the objects based on their estimated three-dimensional positions.
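
A minimal OpenCV sketch of the image-processing pipeline described above (HSV thresholding, median filtering, connected-component labeling, and taking the centroid of the largest component) is shown below. The HSV threshold values and the file name are illustrative assumptions, and the camera calibration and 2D-to-3D mapping step is omitted.

```python
import cv2
import numpy as np

def locate_object(image_bgr, hsv_lo, hsv_hi):
    """Locate a colored object: threshold in HSV, median-filter the binary
    image, label connected components, return the largest component's centroid."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_lo, hsv_hi)        # binary image by color
    mask = cv2.medianBlur(mask, 5)                 # remove salt-and-pepper noise
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    if n < 2:                                      # label 0 is the background
        return None
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return centroids[largest]                      # (x, y) in pixels

# Example thresholds for a reddish object (hypothetical values and file name)
pos = locate_object(cv2.imread("scene.png"),
                    np.array([0, 120, 70]), np.array([10, 255, 255]))
```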

Relevance:

80.00%

Publisher:

Abstract:

The thesis addresses image-based characterization of fibers in pulp suspension during the papermaking process. The papermaking industry is focusing on process control optimization and automation, which make it possible to manufacture high-quality products in a resource-efficient way. As part of process control, pulp suspension analysis makes it possible to predict and modify the properties of the end product. This work is part of the tree species identification task and focuses on the analysis of fiber parameters in the pulp suspension at the wet stage of paper production. Existing machine vision methods for pulp characterization were investigated, and a method exploiting direction-sensitive filtering, non-maximum suppression, hysteresis thresholding, tensor voting, and curve extraction from tensor maps was developed. Applying the method to microscopic grayscale pulp images made it possible to detect curves corresponding to fibers and to compute their morphological characteristics. The performance of the method was evaluated against manually produced ground truth data. For the acacia pulp images, the estimation accuracies for fiber length, width, and curvature were found to be 84%, 85%, and 60%, respectively.
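
To make the morphological characteristics concrete, the sketch below shows one straightforward way to compute contour length, a curvature proxy, and a curl index for a fiber that has already been extracted as an ordered polyline of (x, y) points. It is not the thesis's implementation; width estimation, which requires local cross-sections of the grayscale image, is omitted.

```python
import numpy as np

def fiber_metrics(points):
    """Length, curvature proxy, and curl index for a fiber given as an
    ordered polyline of (x, y) points extracted from the image."""
    p = np.asarray(points, dtype=float)
    seg = np.diff(p, axis=0)                          # segment vectors
    length = np.linalg.norm(seg, axis=1).sum()        # contour (traced) length

    # Mean absolute turning angle per unit length as a crude curvature proxy
    ang = np.unwrap(np.arctan2(seg[:, 1], seg[:, 0]))
    curvature = np.abs(np.diff(ang)).sum() / length if length > 0 else 0.0

    # Curl index: contour length relative to the end-to-end (chord) distance
    chord = np.linalg.norm(p[-1] - p[0])
    curl = length / chord - 1.0 if chord > 0 else np.inf
    return length, curvature, curl
```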

Relevance:

80.00%

Publisher:

Abstract:

The papermaking industry has been continuously developing intelligent solutions to characterize the raw materials it uses, to control the manufacturing process in a robust way, and to guarantee the desired quality of the end product. Based on much-improved imaging techniques and image-based analysis methods, it has become possible to look inside the manufacturing pipeline and propose more effective alternatives to human expertise. This study is focused on the development of image analysis methods for the pulping process of papermaking. Pulping starts with wood disintegration and formation of the fiber suspension, which is subsequently bleached, mixed with additives and chemicals, and finally dried and shipped to the papermaking mills. At each stage of the process it is important to analyze the properties of the raw material to guarantee the product quality. In order to evaluate the properties of fibers, the main component of the pulp suspension, a framework for fiber characterization based on microscopic images is proposed in this thesis as the first contribution. The framework allows computation of fiber length and curl index that correlate well with the ground truth values. The bubble detection method, the second contribution, was developed in order to estimate the gas volume at the delignification stage of the pulping process based on high-resolution in-line imaging. The gas volume was estimated accurately and the solution enabled just-in-time process termination, whereas accurate estimation of bubble size categories remained challenging. As the third contribution of the study, optical flow computation was studied and the methods were successfully applied to pulp flow velocity estimation based on double-exposed images. Finally, a framework for classifying dirt particles in dried pulp sheets, including semisynthetic ground truth generation, feature selection, and performance comparison of state-of-the-art classification techniques, was proposed as the fourth contribution. The framework was successfully tested on semisynthetic and real-world pulp sheet images. These four contributions assist in developing integrated, factory-level, vision-based process control.
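
As one illustration of the bubble-detection idea (not the method actually developed in the thesis), the sketch below detects roughly circular bubbles in an 8-bit grayscale frame with a Hough circle transform and converts them to an equivalent-sphere gas volume; all parameter values and the pixel-to-millimetre scale are assumptions.

```python
import cv2
import numpy as np

def estimate_gas_volume(gray_u8, px_to_mm):
    """Rough gas-volume estimate: detect near-circular bubbles with a Hough
    circle transform and sum the volumes of equivalent spheres (mm^3)."""
    blurred = cv2.medianBlur(gray_u8, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=15, param1=100, param2=30,
                               minRadius=3, maxRadius=60)
    if circles is None:
        return 0.0
    radii_mm = circles[0, :, 2] * px_to_mm            # detected radii in mm
    return float(np.sum(4.0 / 3.0 * np.pi * radii_mm ** 3))
```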

Relevance:

80.00%

Publisher:

Abstract:

This Bachelor's thesis was carried out as part of the PulpVision research project, whose purpose is to develop image-based computation and classification methods for pulp quality monitoring in papermaking. As part of this research project, a method for detecting curved structures in images had previously been developed and applied to finding fibers in images; this method was used as the starting point for the thesis. The aim of the work was to investigate whether features computed from different fiber images can be used to identify the species of the fibers in an image. The fiber images contained fibers from four tree species and one plant: acacia, birch, pine, eucalyptus and wheat. For each species, 100 fiber images were selected and divided into two groups, the first of which was used as a training set and the second as a test set. Using the training set, descriptive features were computed for each fiber species, and these features were used to identify the fiber species in the test set images. The images were produced by CEMIS-Oulu (Center for Measurement and Information Systems), a unit of the University of Oulu that focuses on measurement technology. For each individual training image, the means and standard deviations of three features were computed: length, width and curvature. In addition, the number of fibers found in the image was counted. Different combinations of these features were used to test identification accuracy on the test set images with the k-nearest-neighbor method and a Naive Bayes classifier. The tests gave promising results; for example, using the mean length and mean width, an accuracy of about 98% was reached with both algorithms. The mean fiber length appeared to be the feature that best characterized the fiber images, and there was no large difference in accuracy between the algorithms used. Based on the test results, it can be concluded that identification of fiber images is possible, and only two features need to be computed from the fiber images to identify the fibers accurately. The classification algorithms used were very simple, but they performed well in the tests.
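
The classification step maps directly onto standard library classifiers. The scikit-learn sketch below mirrors the setup described above, using mean fiber length and mean fiber width as features; the feature values are placeholders, not the real per-image measurements.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# X: one feature vector per image, e.g. [mean fiber length, mean fiber width];
# y: species label. The numbers below are placeholders for the real data.
X_train = np.array([[1.1, 0.020], [0.9, 0.018], [2.6, 0.035], [2.8, 0.033]])
y_train = np.array(["acacia", "acacia", "pine", "pine"])
X_test = np.array([[2.7, 0.034]])

for clf in (KNeighborsClassifier(n_neighbors=3), GaussianNB()):
    clf.fit(X_train, y_train)
    print(type(clf).__name__, clf.predict(X_test))
```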

Relevance:

80.00%

Publisher:

Abstract:

We discuss a variety of object recognition experiments in which human subjects were presented with realistically rendered images of computer-generated three-dimensional objects, with tight control over stimulus shape, surface properties, illumination, and viewpoint, as well as subjects' prior exposure to the stimulus objects. In all experiments recognition performance was: (1) consistently viewpoint dependent; (2) only partially aided by binocular stereo and other depth information; (3) specific to viewpoints that were familiar; and (4) systematically disrupted by rotation in depth more than by deforming the two-dimensional images of the stimuli. These results are consistent with recently advanced computational theories of recognition based on view interpolation.

Relevance:

80.00%

Publisher:

Abstract:

1. Jerdon's courser Rhinoptilus bitorquatus is a nocturnally active cursorial bird that is only known to occur in a small area of scrub jungle in Andhra Pradesh, India, and is listed as critically endangered by the IUCN. Information on its habitat requirements is needed urgently to underpin conservation measures. We quantified the habitat features that correlated with the use of different areas of scrub jungle by Jerdon's coursers, and developed a model to map potentially suitable habitat over large areas from satellite imagery and facilitate the design of surveys of Jerdon's courser distribution. 2. We used 11 arrays of 5-m long tracking strips consisting of smoothed fine soil to detect the footprints of Jerdon's coursers, and measured tracking rates (tracking events per strip night). We counted the number of bushes and trees, and described other attributes of vegetation and substrate in a 10-m square plot centred on each strip. We obtained reflectance data from Landsat 7 satellite imagery for the pixel within which each strip lay. 3. We used logistic regression models to describe the relationship between tracking rate by Jerdon's coursers and characteristics of the habitat around the strips, using ground-based survey data and satellite imagery. 4. Jerdon's coursers were most likely to occur where the density of large (>2 m tall) bushes was in the range 300-700 ha(-1) and where the density of smaller bushes was less than 1000 ha(-1). This habitat was detectable using satellite imagery. 5. Synthesis and applications. The occurrence of Jerdon's courser is strongly correlated with the density of bushes and trees, and is in turn affected by grazing with domestic livestock, woodcutting and mechanical clearance of bushes to create pasture, orchards and farmland. It is likely that there is an optimal level of grazing and woodcutting that would maintain or create suitable conditions for the species. Knowledge of the species' distribution is incomplete and there is considerable pressure from human use of apparently suitable habitats. Hence, distribution mapping is a high conservation priority. A two-step procedure is proposed, involving the use of ground surveys of bush density to calibrate satellite image-based mapping of potential habitat. These maps could then be used to select priority areas for Jerdon's courser surveys. The use of tracking strips to study habitat selection and distribution has potential in studies of other scarce and secretive species.
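
The modelling step in point 3 is a standard binary regression. The scikit-learn sketch below shows the general shape of such a model, with made-up strip data and a quadratic density term so the fitted curve can express the hump-shaped response reported above; it is an illustration of the model family, not the paper's fitted model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-strip data: large-bush density (ha^-1) and whether
# Jerdon's courser tracks were detected on that strip.
density = np.array([100, 250, 400, 550, 700, 900, 1200, 500, 350, 800])
detected = np.array([0, 0, 1, 1, 1, 0, 0, 1, 1, 0])

d = density / 1000.0                        # rescale for numerical stability
X = np.column_stack([d, d ** 2])            # quadratic term -> hump-shaped response
model = LogisticRegression().fit(X, detected)

new_d = np.array([500.0, 1100.0]) / 1000.0
X_new = np.column_stack([new_d, new_d ** 2])
print(model.predict_proba(X_new)[:, 1])     # predicted probability of use
```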


Relevance:

80.00%

Publisher:

Abstract:

In this paper, we address issues in the segmentation of remotely sensed LIDAR (LIght Detection And Ranging) data. The LIDAR data, which were captured by an airborne laser scanner, contain 2.5-dimensional (2.5D) terrain surface height information, e.g. houses, vegetation, flat fields, rivers, basins, etc. Our aim in this paper is to segment ground (flat field) from non-ground (houses and high vegetation) in hilly urban areas. By projecting the 2.5D data onto a surface, we obtain a texture map as a grey-level image. Based on this image, Gabor wavelet filters are applied to generate Gabor wavelet features. These features are then grouped into windows, and within each window a combination of first- and second-order statistics is used as a measure to determine the surface properties. The test results show that ground areas can successfully be segmented from the LIDAR data. Most buildings and high vegetation can be detected. In addition, the Gabor wavelet transform can partially remove hill and slope effects in the original data by tuning the Gabor parameters.
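
A minimal sketch of this feature-extraction stage is given below: a small Gabor filter bank applied to the grey-level texture map, followed by first- and second-order statistics (mean and variance) per non-overlapping window. The kernel parameters, window size and number of orientations are assumptions, and the subsequent ground/non-ground decision rule is not shown.

```python
import cv2
import numpy as np

def gabor_window_features(gray, window=16):
    """Per-window first- and second-order statistics of Gabor filter responses
    computed on a grey-level texture map derived from the 2.5D LIDAR heights."""
    gray = gray.astype(np.float32)
    responses = []
    for theta in np.arange(0.0, np.pi, np.pi / 4):             # 4 orientations
        # getGaborKernel(ksize, sigma, theta, lambda, gamma, psi)
        kern = cv2.getGaborKernel((31, 31), 4.0, theta, 10.0, 0.5, 0.0)
        responses.append(cv2.filter2D(gray, cv2.CV_32F, kern))

    h, w = gray.shape
    feats = []
    for y in range(0, h - window + 1, window):
        for x in range(0, w - window + 1, window):
            stats = []
            for r in responses:
                patch = r[y:y + window, x:x + window]
                stats += [float(patch.mean()), float(patch.var())]  # 1st/2nd order
            feats.append(stats)
    return np.array(feats)     # one feature vector per non-overlapping window
```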

Relevance:

80.00%

Publisher:

Abstract:

The goal was to quantitatively estimate and compare the fidelity of images acquired with a digital imaging system (ADAR 5500) and generated through scanning of color infrared aerial photographs (SCIRAP), using image-based metrics. Images were collected nearly simultaneously in two repeated flights to generate multi-temporal datasets. The spatial fidelity of the ADAR images was lower than that of the SCIRAP images. Radiometric noise was higher for SCIRAP than for ADAR images, even though noise from misregistration effects was lower. These results suggest that, with careful control of film scanning, the overall fidelity of SCIRAP imagery can be comparable to that of digital multispectral camera data. Therefore, SCIRAP images can likely be used in conjunction with digital metric camera imagery in long-term land-cover change analyses.
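
The radiometric-noise comparison can be approximated with a very simple image-based proxy. The sketch below (not the metric used in the study, whose exact definition is not given here) estimates noise in a band as the median of local standard deviations, which is dominated by homogeneous areas.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_noise_estimate(band, win=5):
    """Crude radiometric-noise proxy: median of local standard deviations
    computed over small windows of a single band."""
    band = band.astype(np.float64)
    mean = uniform_filter(band, win)
    sq_mean = uniform_filter(band ** 2, win)
    local_var = np.clip(sq_mean - mean ** 2, 0.0, None)
    return float(np.median(np.sqrt(local_var)))
```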

Relevance:

80.00%

Publisher:

Abstract:

Periocular recognition has recently become an active topic in biometrics. Typically it uses 2D image data of the periocular region. This paper is the first description of combining 3D shape structure with 2D texture. A simple and effective technique using the iterative closest point (ICP) algorithm was applied for 3D periocular region matching. It proved robust for relatively unconstrained eye region capture and does not require any training. Local binary patterns (LBP) were applied for 2D image-based periocular matching. The two modalities were combined at the score level. This approach was evaluated using the Bosphorus 3D face database, which contains large variations in facial expressions, head poses and occlusions. The rank-1 accuracy achieved from the 3D data (80%) was better than that for 2D (58%), and the best accuracy (83%) was achieved by fusing the two types of data. This suggests that significant improvements to periocular recognition systems could be achieved using the 3D structure information that is now available from small and inexpensive sensors.
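
The 2D branch and the fusion step are easy to sketch. Below is a minimal version using scikit-image's uniform LBP, a chi-square histogram distance, and a weighted-sum score-level fusion; the ICP-based 3D matching, the score normalisation scheme and the fusion weight are not specified in this summary, so treat them as assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_hist(gray, P=8, R=1):
    """Uniform LBP histogram for a (grayscale) periocular region."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two LBP histograms (lower = more similar)."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def fused_score(score_2d, score_3d, w=0.5):
    """Score-level fusion of already-normalised 2D and 3D match scores."""
    return w * score_2d + (1.0 - w) * score_3d
```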

Relevance:

80.00%

Publisher:

Abstract:

Social media have gained great importance in recent years, transforming the way people organize, engage or simply exchange information on every kind of subject. The evolution of increasingly robust mobile communication technologies and the spread of smartphones, modern and complete devices for the convergence of voice and image, have played an important role in keeping people permanently connected with everything and everyone. This research sets out to discuss how word-of-mouth (eWOM) campaigns on Facebook (the largest social media platform of all) have been affecting the management of corporate reputation and brand image, based on field research that captured the views of executives at digital media agencies, complemented by secondary research analyzing the experiences of some highly visible companies. The results show that social media have made the reputation management process more complex: it is increasingly beyond the absolute control of organizations and increasingly shared with their stakeholders. They also indicate that social media may represent more opportunities for organizations that prepare for them, and more threats for those that move in the opposite direction.