935 results for Optical signal and image processing device
Abstract:
Polymer optical fibre (POF) Bragg gratings are useful for strain sensing applications requiring a large dynamic range. We report recent progress in developing polymer optical fibres with higher photosensitivity and in fabricating POF gratings at alternative wavelengths. © 2010 Optical Society of America.
Abstract:
Prenyltransferase enzymes promote the membrane localization of their target proteins by directing the attachment of a hydrophobic lipid group at a conserved C-terminal CAAX motif. Subsequently, the prenylated protein is further modified by postprenylation processing enzymes that cleave the terminal 3 amino acids and carboxymethylate the prenylated cysteine residue. Many prenylated proteins, including Ras1 and Ras-like proteins, require this multistep membrane localization process in order to function properly. In the human fungal pathogen Cryptococcus neoformans, previous studies have demonstrated that two distinct forms of protein prenylation, farnesylation and geranylgeranylation, are both required for cellular adaptation to stress, as well as full virulence in animal infection models. Here, we establish that the C. neoformans RAM1 gene encoding the farnesyltransferase β-subunit, though not strictly essential for growth under permissive in vitro conditions, is absolutely required for cryptococcal pathogenesis. We also identify and characterize postprenylation protease and carboxyl methyltransferase enzymes in C. neoformans. In contrast to the prenyltransferases, deletion of the genes encoding the Rce1 protease and Ste14 carboxyl methyltransferase results in subtle defects in stress response and only partial reductions in virulence. These postprenylation modifications, as well as the prenylation events themselves, do play important roles in mating and hyphal transitions, likely due to their regulation of peptide pheromones and other proteins involved in development. IMPORTANCE Cryptococcus neoformans is an important human fungal pathogen that causes disease and death in immunocompromised individuals. The growth and morphogenesis of this fungus are controlled by conserved Ras-like GTPases, which are also important for its pathogenicity. 
Many of these proteins require proper subcellular localization for full function, and they are directed to cellular membranes through a posttranslational modification process known as prenylation. These studies investigate the roles of one of the prenylation enzymes, farnesyltransferase, as well as the postprenylation processing enzymes in C. neoformans. We demonstrate that the postprenylation processing steps are dispensable for the localization of certain substrate proteins. However, both protein farnesylation and the subsequent postprenylation processing steps are required for full pathogenesis of this fungus.
Abstract:
X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].
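The attenuation-based projection measurement at the heart of CT follows the Beer-Lambert law: each detector reading is the incident intensity attenuated exponentially by the line integral of the attenuation coefficient along the ray, and the log-transformed reading recovers that line integral for reconstruction. A minimal sketch with illustrative values (the coefficients below are hypothetical, not measured data):

```python
import math

# Beer-Lambert: I = I0 * exp(-sum(mu_i * ds)) along one ray.
# Illustrative attenuation coefficients (1/cm) sampled along a ray
# through soft tissue with a small dense inclusion.
mu_along_ray = [0.20, 0.20, 0.45, 0.20, 0.20]  # hypothetical values
ds = 1.0    # path length per sample, cm
I0 = 1.0e5  # incident photon intensity (arbitrary units)

line_integral = sum(mu * ds for mu in mu_along_ray)
I = I0 * math.exp(-line_integral)

# CT reconstruction works with the log-transformed projection,
# which recovers the line integral of mu:
projection = -math.log(I / I0)
```

The `-log(I/I0)` step is why detector readings can be treated as line integrals by reconstruction algorithms such as filtered back-projection.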
Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.
As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. The imaging community therefore has a responsibility to optimize the radiation dose of CT examinations. The key to dose optimization is to determine the minimum radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would benefit significantly from effective metrics that characterize the radiation dose and image quality of a CT exam. Moreover, if the radiation dose and image quality could be accurately predicted before the exam begins, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models that prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to translate these theoretical models into clinical practice by developing an organ-based dose monitoring system and an image-based noise addition software tool for protocol optimization.
More specifically, Chapter 3 aims to develop an organ dose-prediction method for body CT examinations under constant tube current. The study modeled anatomical diversity and complexity using a large library of patient models with representative age, size, and gender distributions, and further evaluated the dependence of organ dose coefficients on patient size and scanner model. Distinct from prior work, these studies use the largest number of patient models to date, spanning representative age, weight percentile, and body mass index (BMI) ranges.
With effective quantification of organ dose under constant tube current condition, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with the dose estimated based on Monte Carlo simulations with TCM function explicitly modeled.
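The convolution-based estimation technique can be illustrated as follows: the dose deposited at a table position depends on the tube current at that position and at its neighbours, weighted by a dose-spread kernel. The kernel weights and mA profile below are hypothetical placeholders, not the fitted quantities used in the thesis:

```python
# Sketch: estimate a longitudinal dose profile by convolving a tube-current
# (mA) modulation profile with a normalized dose-spread kernel.
# Kernel and profile values are illustrative, not fitted data.

def convolve_dose(mA_profile, kernel):
    """Discrete convolution, 'same' length, zero-padded at the ends."""
    half = len(kernel) // 2
    n = len(mA_profile)
    dose = []
    for i in range(n):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i + k - half
            if 0 <= j < n:
                acc += w * mA_profile[j]
        dose.append(acc)
    return dose

# Hypothetical TCM profile along z (mA per slice) and a normalized kernel
# that spreads primary + scatter dose over neighbouring slices.
mA = [100, 120, 180, 220, 180, 120, 100]
kernel = [0.1, 0.2, 0.4, 0.2, 0.1]  # sums to 1.0

dose_profile = convolve_dose(mA, kernel)
```

With a normalized kernel, a flat mA profile reproduces itself away from the edges; a modulated profile yields a smoothed dose profile that peaks where the tube current peaks.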
Chapter 5 aims to implement the organ dose-estimation framework in clinical practice by developing an organ dose-monitoring program based on commercial software (Dose Watch, GE Healthcare, Waukesha, WI). The first phase of the study focused on body CT examinations: the patient's major body landmarks were extracted from the scout image to match each clinical patient to a computational phantom in the library. The organ dose coefficients were estimated from the CT protocol and patient size as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.
With effective methods to predict and monitor organ dose, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. It outlines the method developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.
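One common way to assess quantum noise in clinical images is to compute a local standard-deviation map and take a low percentile, so that patches containing anatomy or edges do not inflate the estimate. A sketch on synthetic data (the patch size, percentile, and HU values are illustrative assumptions, not the thesis's actual procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "body CT" slice: uniform anatomy (HU = 50) plus quantum noise
# (sigma = 10 HU), with one high-contrast insert that adds edges.
img = np.full((64, 64), 50.0) + rng.normal(0.0, 10.0, (64, 64))
img[20:30, 20:30] += 400.0  # bone-like structure

# The global std is inflated by anatomy and edges:
global_sd = img.std()

# Noise estimate from the local-std map: split into 8x8 patches and take a
# low percentile, so the few patches containing edges do not bias the result.
patches = img.reshape(8, 8, 8, 8).swapaxes(1, 2).reshape(-1, 8, 8)
local_sd = patches.std(axis=(1, 2), ddof=1)
noise_estimate = np.percentile(local_sd, 25)
```

On this synthetic slice the percentile-based estimate recovers the injected noise level (around 10 HU), while the naive global standard deviation is dominated by the high-contrast insert.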
Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
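Step (1), image-based noise addition, can be sketched under a simple model: quantum noise variance scales inversely with dose, so simulating a dose fraction r from a full-dose image requires adding noise with standard deviation sigma_full * sqrt(1/r - 1). The sketch below uses white Gaussian noise for brevity; a realistic tool, like the one the thesis develops, would additionally match the scanner's noise texture (power spectrum):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_low_dose(img_hu, sigma_full, dose_fraction, rng):
    """Add zero-mean noise so total noise variance matches the lower dose.

    Quantum noise variance scales ~1/dose, so the noise to ADD has
    std = sigma_full * sqrt(1/dose_fraction - 1). White Gaussian noise is
    a simplifying assumption for this sketch.
    """
    sigma_add = sigma_full * np.sqrt(1.0 / dose_fraction - 1.0)
    return img_hu + rng.normal(0.0, sigma_add, img_hu.shape)

sigma_full = 10.0  # measured full-dose noise (HU), illustrative value
full_dose = rng.normal(50.0, sigma_full, (128, 128))  # synthetic slice
half_dose = simulate_low_dose(full_dose, sigma_full, 0.5, rng)
```

Halving the dose multiplies the noise variance by two, so the simulated image's noise should rise by a factor of sqrt(2) relative to the full-dose input.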
Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.
Abstract:
This paper presents the application of custom classification techniques and posterior probability modeling (PPM) using Worldview-2 multispectral imagery to archaeological field survey. Research focuses on the identification of Neolithic felsite stone tool workshops in the North Mavine region of the Shetland Islands in Northern Scotland. Sample data from known workshops surveyed using differential GPS are used alongside known non-sites to train a linear discriminant analysis (LDA) classifier based on a combination of datasets including Worldview-2 bands, band difference ratios (BDR) and topographical derivatives. Principal components analysis is further used to test and reduce the dimensionality caused by redundant datasets. Probability models were generated by LDA using principal components and tested against sites identified through geological field survey. Testing demonstrates the prospective ability of this technique, with significance levels between 0.05 and 0.01 and gain statistics between 0.90 and 0.94, higher than those obtained using maximum likelihood and random forest classifiers. Results suggest that this approach is best suited to relatively homogeneous site types and performs better with correlated data sources. Finally, by combining posterior probability models and least-cost analysis, a survey least-cost efficacy model is generated, showing the utility of such approaches to archaeological field survey.
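In the two-class case, the LDA classifier at the core of this pipeline reduces to projecting feature vectors onto w = inv(S_pooled) @ (mu_site - mu_nonsite) and thresholding at the midpoint. A self-contained sketch on synthetic two-band features (the feature values are invented, not the paper's Worldview-2 data):

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_lda(X0, X1):
    """Two-class LDA: w = S_pooled^-1 (mu1 - mu0), threshold at the midpoint."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class covariance
    S = (np.cov(X0, rowvar=False) * (len(X0) - 1)
         + np.cov(X1, rowvar=False) * (len(X1) - 1)) / (len(X0) + len(X1) - 2)
    w = np.linalg.solve(S, mu1 - mu0)
    c = w @ (mu0 + mu1) / 2.0
    return w, c

def predict(w, c, X):
    """1 = site, 0 = non-site."""
    return (X @ w > c).astype(int)

# Synthetic "site" vs "non-site" feature vectors (e.g. two band ratios);
# the class separations are illustrative.
non_sites = rng.normal([0.0, 0.0], 0.5, (200, 2))
sites = rng.normal([2.0, 1.5], 0.5, (200, 2))

w, c = fit_lda(non_sites, sites)
acc = np.concatenate([predict(w, c, non_sites) == 0,
                      predict(w, c, sites) == 1]).mean()
```

The same discriminant scores can be converted to posterior probabilities (the PPM step) via the logistic of the signed distance to the threshold, under Gaussian class assumptions.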
Abstract:
Much of the bridge stock on major transport links in North America and Europe was constructed in the 1950s and 1960s and has since deteriorated or is carrying loads far in excess of the original design loads. Structural health monitoring (SHM) systems can provide valuable information on bridge capacity, but their application is currently limited by access and bridge type. This paper investigates the use of computer vision systems for SHM. A series of field tests was carried out to assess the accuracy of contactless displacement measurements. A video image of each test was processed using a modified version of the optical flow tracking method to track displacement, and the results were validated against an established measurement method using linear variable differential transformers (LVDTs). The calculated displacements agree to within 2% of the LVDT measurements; a number of post-processing methods were then applied to reduce this error further.
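As a simplified illustration of vision-based displacement measurement, the sketch below recovers a global integer-pixel shift between two frames by exhaustive normalized cross-correlation. This is a stand-in for, not a reproduction of, the paper's modified optical-flow tracker (which resolves sub-pixel, per-feature motion):

```python
import numpy as np

rng = np.random.default_rng(3)

def track_shift(ref, cur, max_shift=5):
    """Integer-pixel displacement via exhaustive normalized cross-correlation.

    Compares the central region of `ref` against shifted windows of `cur`
    and returns the (dy, dx) offset with the highest correlation.
    """
    best, best_score = (0, 0), -np.inf
    h, w = ref.shape
    m = max_shift
    core = ref[m:h - m, m:w - m]
    core_n = (core - core.mean()) / core.std()
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            win = cur[m + dy:h - m + dy, m + dx:w - m + dx]
            win_n = (win - win.mean()) / win.std()
            score = (core_n * win_n).mean()
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

frame = rng.normal(0.0, 1.0, (40, 40))                  # textured target
moved = np.roll(np.roll(frame, 2, axis=0), -1, axis=1)  # shift by (2, -1) px

dy, dx = track_shift(frame, moved)
```

Fitting a parabola to correlation scores around the peak is a common way to extend this to the sub-pixel accuracy that bridge monitoring requires.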
Abstract:
AIRES, Kelson R. T.; ARAÚJO, Hélder J.; MEDEIROS, Adelardo A. D. Plane Detection from Monocular Image Sequences. In: Proceedings of Visualization, Imaging and Image Processing (VIIP), Palma de Mallorca, Spain, 2008.
Abstract:
In situ methods for water quality assessment face both physical and time constraints: only a limited number of sampling points can be occupied, making it difficult to capture the range and variability of coastal processes and constituents. In addition, the mixing of fresh and oceanic water creates complex physical, chemical, and biological environments that are difficult to understand, so existing measurement methodologies face significant logistical, technical, and economic challenges. Remote sensing of ocean colour makes it possible to acquire information on the distribution of chlorophyll and other constituents over large areas of the ocean in short periods. Ocean colour data have many potential applications. Satellite-derived products are a key data source for studying the distribution patterns of organisms and nutrients (Guillaud et al. 2008) and for fishery research (Pillai and Nair 2010; Solanki et al. 2001), as well as for the study of the spatial and temporal variability of phytoplankton blooms, red tide identification and harmful algal bloom monitoring (Sarangi et al. 2001; Sarangi et al. 2004; Sarangi et al. 2005; Bhagirathan et al. 2014), river plume and upwelling assessment (Doxaran et al. 2002; Sravanthi et al. 2013), global productivity analyses (Platt et al. 1988; Sathyendranath et al. 1995; IOCCG 2006), and oil spill detection (Maianti et al. 2014). For remote sensing to be accurate in complex coastal waters, it has to be validated against in situ measurements. This thesis attempts to study, measure, and validate these complex waters with the help of satellite data. Monitoring the health of the Arabian Sea coastal ecosystem in a synoptic way requires intense, extensive, and continuous monitoring of water quality indicators. Phytoplankton abundance, determined from chl-a concentration, is considered an indicator of the state of coastal ecosystems.
Currently, satellite sensors provide the most effective means for frequent, synoptic, water-quality observations over large areas and represent a potential tool to effectively assess chl-a concentration over coastal and oceanic waters; however, algorithms designed to estimate chl-a at global scales have been shown to be less accurate in Case 2 waters, due to the presence of water constituents other than phytoplankton which do not co-vary with the phytoplankton. The constituents of Arabian Sea coastal waters are region-specific because of the inherent variability of these optically-active substances affected by factors such as riverine input (e.g. suspended matter type and grain size, CDOM) and phytoplankton composition associated with seasonal changes.
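The global chl-a algorithms referred to above are typically of the OCx band-ratio form: a polynomial in the log of a blue-to-green reflectance ratio, raised as a power of ten. A sketch with illustrative coefficients (operational coefficients are sensor-specific; the numbers below are placeholders, not an operational parameterization):

```python
import math

def ocx_chl(rrs_blue_max, rrs_green, coeffs):
    """OCx-style band-ratio chlorophyll-a estimate (mg m^-3).

    rrs_blue_max: max remote-sensing reflectance among the blue bands
    rrs_green:    green-band remote-sensing reflectance
    coeffs:       polynomial coefficients a0..a4 (sensor-specific;
                  the values used below are illustrative)
    """
    x = math.log10(rrs_blue_max / rrs_green)
    log_chl = sum(a * x ** i for i, a in enumerate(coeffs))
    return 10.0 ** log_chl

coeffs = [0.24, -2.74, 1.80, 0.00, -1.23]  # illustrative OCx-like values

clear_water = ocx_chl(0.010, 0.002, coeffs)  # high blue/green ratio
green_water = ocx_chl(0.004, 0.004, coeffs)  # ratio ~1, more chlorophyll
```

The inverse relation (higher blue/green ratio, lower chl-a) is exactly what breaks down in Case 2 waters, where suspended sediment and CDOM alter the blue and green reflectances independently of phytoplankton.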
Abstract:
Polymer optical fibers have historically occupied a niche as large-core flexible fibers operating over short distances. Beyond their practical passive application in short-haul communication, they constitute a promising research field as active devices with organic dopants. Organic dyes are preferred as dopants over organic semiconductors due to their higher optical cross section. Organic dyes used as the gain medium in a polymer fiber thus enable efficient, narrow-linewidth laser sources tunable throughout the visible region, as well as optical amplifiers with high gain. Dyes incorporated in fiber form have an added advantage over other solid-state forms such as films, since less pump power is required to excite the molecules in the fiber core, utilizing the pump power more effectively. In 1987, Muto et al. investigated a dye-doped step-index polymer fiber laser. Since then, numerous studies in this area have demonstrated laser emission from step-index, graded-index, and hollow optical fibers incorporating various dyes. Among these, Rhodamine 6G has been the most widely used laser dye for the last four decades; it has many desirable optical properties that make it preferable to other organic dyes such as Coumarin, Nile Blue, and Curcumin. The research focuses on the implementation of efficient fiber lasers and amplifiers over short fiber distances. Developing efficient plastic lasers with electrical pumping is a new prospect in this field, for which the lowest possible threshold pump energy of the gain medium in the cavity is an important parameter. One way of improving laser efficiency, through low threshold pump energy, is to modify the gain of the amplifiers in the resonator/cavity. Progress in the field of radiative decay engineering can pave the way to solving this problem.
A laser gain medium consisting of dye-nanoparticle composites can improve efficiency by lowering the lasing threshold and enhancing photostability. The electric field confined near the surface of metal nanoparticles by localized surface plasmon resonance can be very effective for exciting active centers, imparting high optical gain for lasing. Since the surface plasmon resonance of gold and silver nanoparticles lies in the visible range, the plasmon field generated by the particles can affect the spectral emission characteristics of organic dyes such as Rhodamine 6G. The change in emission of a dye placed near metal nanoparticles depends on the plasmon field strength, which in turn depends on the type of metal, the nanoparticle size, the surface modification of the particle, and the wavelength of the incident light. Progress in the fabrication of different types of nanostructures has led to the advent of nanospheres, nanoalloys, core-shell particles, and nanowires, to name a few. This thesis deals with the fabrication and characterisation of polymer optical fibers with various metallic and bimetallic nanostructures incorporated in the gain medium, for efficient fiber lasers with low threshold and improved photostability.
Abstract:
We present Dithen, a novel computation-as-a-service (CaaS) cloud platform specifically tailored to the parallel execution of large-scale multimedia tasks. Dithen handles the upload/download of both multimedia data and executable items, the assignment of compute units to multimedia workloads, and the reactive control of the available compute units to minimize the cloud infrastructure cost under deadline-abiding execution. Dithen combines three key properties: (i) the reactive assignment of individual multimedia tasks to available computing units according to availability and predetermined time-to-completion constraints; (ii) optimal resource estimation based on Kalman-filter estimates; (iii) the use of additive increase multiplicative decrease (AIMD) algorithms (well known as the resource-management mechanism of the Transmission Control Protocol) to control the number of units servicing workloads. The deployment of Dithen over Amazon EC2 spot instances is shown to be capable of processing more than 80,000 video transcoding, face detection and image processing tasks (equivalent to the processing of more than 116 GB of compressed data) for less than $1 in billing cost from EC2. Moreover, the proposed AIMD-based control mechanism, in conjunction with the Kalman estimates, is shown to provide more than a 27% reduction in EC2 spot instance cost against methods based on reactive resource estimation. Finally, Dithen is shown to offer a 38% to 500% reduction in billing cost against the current state of the art in CaaS platforms on Amazon EC2 (Amazon Lambda and Amazon Autoscale). A baseline version of Dithen is currently available at dithen.com.
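The AIMD control loop can be sketched in a few lines: add units when deadlines are at risk, multiplicatively release them when they are not. The alpha/beta values below are illustrative, not Dithen's tuned parameters:

```python
def aimd_step(units, deadline_at_risk, alpha=2, beta=0.5, floor=1):
    """One AIMD control step for a pool of compute units.

    If the workload risks missing deadlines, increase additively (+alpha);
    otherwise decrease multiplicatively (*beta). Parameter values are
    illustrative placeholders.
    """
    if deadline_at_risk:
        return units + alpha
    return max(floor, int(units * beta))

# Ramp up while deadlines are at risk, then back off as the queue drains.
units = 4
history = []
for risk in [True, True, True, False, False, True]:
    units = aimd_step(units, risk)
    history.append(units)
```

As in TCP, the additive ramp probes for the capacity the workload needs, while the multiplicative back-off releases (and stops paying for) idle instances quickly.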
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-07
Abstract:
An extensive literature review failed to uncover an adequate operational definition of dyslexia applicable to education. The predominant fields of research that have produced most of the studies on dyslexia are neurology, neurolinguistics and genetics. Their perspectives were shown to be more pertinent to medical experts than to teachers. The categorization of surface and deep dyslexia was shown to be the best description of dyslexia in an educational context. The purpose of the present thesis was to develop a theoretical conceptual framework which describes a link between dyslexia, a text-processing model and problem solving. This conceptual framework was validated by three experts, each specializing in a specific field (cognitive psychology, dyslexia or teaching).
The concept of problem solving was based on information-processing theories in cognitive psychology. This framework applies specifically to reading difficulties which are manifested by dyslexic children.