930 results for Line and edge detection
Abstract:
Firefly Curios and Sundry Lights contains 33 poems and 55 pages, mostly free verse lyric narratives issuing from various geographic, emotional, and temporal landscapes. The book is divided into four sections which might roughly be titled: "before," examining themes of childhood and death; "on-the-road," relaying the compulsion to travel; "odd-and-ends-limbo," including pieces which have no context within the timeline; and "in-one-place-for-now," reflecting modes of communication, ordering, and longing. Other concerns include speculations about existence, observations of nature, and the importance of science as a means of apprehending the world. The work reveals a belief in the interconnectedness of mind and matter, combines seriousness and humor, and displays a sonic sensibility. These poems of solitude and observation are themselves vehicles, their motion a means of dislocation in order to find the self. Firefly Curios and Sundry Lights is smaller than a bread box and you can dance to it.
Abstract:
Benzodiazepines are among the most prescribed anti-anxiety compounds and are present in many toxicological screens. These drugs are also prominent in the commission of drug-facilitated sexual assaults due to their effects on the central nervous system. Because of their potency, a low dose of these compounds is often administered to victims; therefore, the target detection limit for these compounds in biological samples is 10 ng/mL. Currently these compounds are predominantly analyzed using immunoassay techniques; however, more specific screening methods are needed. The goal of this dissertation was to develop a rapid, specific screening technique for benzodiazepines in urine samples utilizing surface-enhanced Raman spectroscopy (SERS), which has previously been shown to be capable of detecting trace quantities of pharmaceutical compounds in aqueous solutions. Surface-enhanced Raman spectroscopy has the advantage of overcoming the low sensitivity and fluorescence effects seen with conventional Raman spectroscopy. The spectra are obtained by applying an analyte onto a SERS-active metal substrate such as colloidal metal particles. SERS signals can be further increased with the addition of aggregating solutions. These agents cause the nanoparticles to amass and form hot spots which increase the signal intensity. In this work, the colloidal particles are spherical gold nanoparticles in aqueous solution with an average size of approximately 30 nm. The optimum aggregating agent for the detection of benzodiazepines was determined to be 16.7 mM MgCl2, providing the highest signal intensities at the lowest drug concentrations, with limits of detection between 0.5 and 127 ng/mL. A supported liquid extraction technique was utilized for rapid, clean extraction of benzodiazepines from urine at a pH of 5.0, with limits of detection between 6 and 640 ng/mL. It was shown that at this pH other drugs prevalent in urine samples can be removed, providing selective detection of the benzodiazepine of interest. This technique has been shown to provide rapid (less than twenty minutes), sensitive, and specific detection of benzodiazepines at low concentrations in urine. It provides the forensic community with a sensitive and specific screening technique for the detection of benzodiazepines in drug-facilitated assault cases.
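The abstract does not state how the quoted limits of detection were derived; a common convention for a linear calibration series is the 3.3·σ/slope rule. The sketch below illustrates that calculation with entirely hypothetical concentrations and intensities, so it is only a reference for the kind of computation involved, not the dissertation's actual procedure.

```python
import numpy as np

# Hypothetical SERS calibration data: drug concentration (ng/mL) vs. peak intensity (a.u.)
conc = np.array([0.0, 1.0, 5.0, 10.0, 50.0, 100.0])
intensity = np.array([12.0, 15.1, 27.3, 42.0, 161.0, 305.0])

# Fit a straight line: intensity = slope * conc + intercept
slope, intercept = np.polyfit(conc, intensity, 1)

# Residual standard deviation of the fit, used as a proxy for the blank noise sigma
residuals = intensity - (slope * conc + intercept)
sigma = residuals.std(ddof=2)

# Common LOD estimate: 3.3 * sigma / slope (ICH-style criterion)
lod = 3.3 * sigma / slope
print(f"slope = {slope:.2f} a.u. per ng/mL, LOD ~ {lod:.2f} ng/mL")
```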
Immunoexpression of integrins in ameloblastoma, adenomatoid odontogenic tumor, and human tooth germs
Abstract:
The expression of integrins alpha2beta1, alpha3beta1, and alpha5beta1 in 30 ameloblastomas (20 solid and 10 unicystic tumors), 12 adenomatoid odontogenic tumors (AOTs), and 5 human tooth germs in different stages of odontogenesis was analyzed. The distribution, location, pattern, and intensity of immunohistochemical expression were evaluated. Intensity was analyzed using scores (0 = absence, 1 = weak staining, and 2 = strong staining). No difference in the immunoexpression of the integrins was observed between solid and unicystic ameloblastomas. When these two ameloblastoma types were pooled into a single group, the following significant differences were found: immunoexpression of integrin alpha2beta1 was stronger in ameloblastomas than in AOTs and tooth germs, and the expression of integrin alpha5beta1 was stronger in ameloblastomas than in AOTs. The lack of detection of integrin alpha3beta1 in tooth germs and its detection in the odontogenic tumors studied suggest that this integrin might be used as a marker of neoplastic transformation in odontogenic tissues.
Abstract:
This report is a review of additive and subtractive manufacturing techniques. Additive manufacturing has resided largely in the prototyping realm, where it provides methods for producing complex freeform solid objects directly from a computer model without part-specific tooling or knowledge. But these technologies are evolving steadily and are beginning to encompass related systems of material addition, subtraction, assembly, and insertion of components made by other processes. Furthermore, these various additive processes are starting to evolve into rapid manufacturing techniques for mass-customized products, away from narrowly defined rapid prototyping. Taking this idea far enough down the line, and several years hence, a radical restructuring of manufacturing could take place. Manufacturing itself would move from a resource base to a knowledge base and from mass production of single-use products to mass-customized, high-value, life-cycle products. To date, the majority of research and development has focused on advanced development of existing technologies by improving processing performance, materials, modelling and simulation tools, and design tools to enable the transition from prototyping to the manufacturing of end-use parts.
Abstract:
In this study, we developed and improved the numerical mode matching (NMM) method, which has previously been shown to be a fast and robust semi-analytical solver for investigating the propagation of electromagnetic (EM) waves in an isotropic layered medium. The applicable models, such as cylindrical waveguides, optical fibers, and boreholes with earth geological formations, are generally modeled as an axisymmetric structure, i.e., an orthogonal-plano-cylindrically layered (OPCL) medium consisting of materials stratified planarly and layered concentrically in the orthogonal directions.
In this report, several important improvements have been made to extend the applications of this efficient solver to the anisotropic OPCL medium. The formulas for anisotropic media with three different diagonal elements in the cylindrical coordinate system are deduced to expand its application to more general materials. The perfectly matched layer (PML) is incorporated along the radial direction as an absorbing boundary condition (ABC) to make the NMM method more accurate and efficient for wave diffusion problems in unbounded media and applicable to scattering problems with lossless media. We manipulate the weak form of Maxwell's equations and impose the correct boundary conditions at the cylindrical axis to solve the singularity problem, which has been overlooked in previous work. The spectral element method (SEM) is introduced to compute eigenmodes of higher accuracy more efficiently and with fewer unknowns, achieving a faster mode matching procedure between different horizontal layers. We also prove the relationship of the fields between opposite mode indices for different types of excitations, which can reduce the computational time by half. The formulas for computing EM fields excited by an electric or magnetic dipole located at any position with an arbitrary orientation are deduced. The excitation is further generalized to line and surface current sources, which extends the application of the NMM to simulations of controlled-source electromagnetic techniques. Numerical simulations have demonstrated the efficiency and accuracy of this method.
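For readers unfamiliar with mode matching, the core idea is that within each horizontal layer the field is expanded in the eigenmodes of that layer's radially layered cross-section, and the expansion coefficients of adjacent layers are connected by enforcing tangential-field continuity at the interface. A schematic form of this expansion is given below; the notation is illustrative and not taken from the dissertation.

```latex
% Field in layer j expanded over its eigenmodes e_{j,n}(\rho) with axial wavenumbers k_{j,n};
% a_{j,n}^{\pm} are amplitudes of up- and down-going modes fixed by interface matching.
E_j(\rho, z) \;=\; \sum_{n} \left[ a_{j,n}^{+}\, e^{\,\mathrm{i} k_{j,n} (z - z_j)}
                                 + a_{j,n}^{-}\, e^{-\mathrm{i} k_{j,n} (z - z_j)} \right] e_{j,n}(\rho)
```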
Finally, the improved numerical mode matching (NMM) method is introduced to efficiently compute the electromagnetic response of an induction tool to orthogonal transverse hydraulic fractures in open or cased boreholes in hydrocarbon exploration. The hydraulic fracture is modeled as a slim circular disk which is symmetric with respect to the borehole axis and filled with electrically conductive or magnetic proppant. The NMM solver is first validated by comparing the normalized secondary field with experimental measurements and a commercial software package. We then quantitatively analyze the sensitivity of the induction response to fracture parameters, such as the length, conductivity, and permeability of the filled proppant, to evaluate the effectiveness of the induction logging tool for fracture detection and mapping. Casings with different thicknesses, conductivities, and permeabilities are modeled together with the fractures in boreholes to investigate their effects on fracture detection. The results reveal that the normalized secondary field is not weakened at low frequencies, even though the attenuation of the electromagnetic field through the casing is significant, so the induction tool remains applicable for fracture detection. A hybrid approach combining the NMM method and a BCGS-FFT-based integral-equation solver has been proposed to efficiently simulate open or cased boreholes with tilted fractures, which constitute a non-axisymmetric model.
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high amount of radiation dose to the patient compared to other x-ray imaging modalities and, as a result of this fact coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All things being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection, FBP, vs. Advanced Modeled Iterative Reconstruction, ADMIRE). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
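As an illustration of how a channelized Hotelling observer turns image data into a detectability figure of merit, the sketch below computes the channelized Hotelling detectability from two stacks of ROI images (signal-present and signal-absent). The channel set here is a generic difference-of-Gaussians bank and the data are synthetic; this is a standard textbook construction, not the specific observer configuration used in the dissertation.

```python
import numpy as np

def dog_channels(size, n_channels=4, sigma0=2.0, ratio=1.67):
    """Radially symmetric difference-of-Gaussians channel templates, shape (size*size, n_channels)."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = x**2 + y**2
    chans = []
    for j in range(n_channels):
        s1, s2 = sigma0 * ratio**j, sigma0 * ratio**(j + 1)
        c = np.exp(-r2 / (2 * s2**2)) - np.exp(-r2 / (2 * s1**2))
        chans.append(c.ravel() / np.linalg.norm(c))
    return np.stack(chans, axis=1)

def cho_detectability(signal_rois, noise_rois):
    """Detectability index d_a of a channelized Hotelling observer.
    signal_rois, noise_rois: arrays of shape (n_images, size, size)."""
    size = signal_rois.shape[1]
    U = dog_channels(size)                              # (size*size, n_channels)
    vs = signal_rois.reshape(len(signal_rois), -1) @ U  # channel outputs, signal present
    vn = noise_rois.reshape(len(noise_rois), -1) @ U    # channel outputs, signal absent
    dv = vs.mean(axis=0) - vn.mean(axis=0)              # mean channel-output difference
    S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))             # average channel covariance
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))

# Tiny synthetic example: a faint Gaussian blob in white noise
rng = np.random.default_rng(0)
size, n = 32, 200
y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
blob = 0.5 * np.exp(-(x**2 + y**2) / (2 * 3.0**2))
noise_rois = rng.normal(0, 1, (n, size, size))
signal_rois = rng.normal(0, 1, (n, size, size)) + blob
print("CHO d_a ~", cho_detectability(signal_rois, noise_rois))
```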
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
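The image-subtraction technique mentioned above isolates quantum noise by subtracting two repeated acquisitions of the same phantom so that the shared background and texture cancel, leaving only uncorrelated noise; the standard deviation of the difference is then divided by √2 to recover the single-image noise. A minimal sketch, assuming two co-registered repeat scans loaded as arrays (the synthetic data below stand in for real CT slices):

```python
import numpy as np

def subtraction_noise(scan_a, scan_b, roi):
    """Estimate single-image quantum noise (std, HU) from two repeat scans.

    scan_a, scan_b: 2D arrays of the same slice of the same phantom (repeat acquisitions)
    roi: boolean mask selecting the region in which to measure noise
    """
    diff = scan_a.astype(float) - scan_b.astype(float)   # deterministic background cancels
    return diff[roi].std() / np.sqrt(2.0)                # uncorrelated noise adds in quadrature

# Hypothetical usage with synthetic data
rng = np.random.default_rng(1)
background = rng.normal(40.0, 15.0, (256, 256))          # shared "texture"
scan_a = background + rng.normal(0.0, 10.0, background.shape)
scan_b = background + rng.normal(0.0, 10.0, background.shape)
roi = np.ones(background.shape, dtype=bool)
print("estimated noise ~", subtraction_noise(scan_a, scan_b, roi), "HU")
```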
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
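For context, the conventional NPS estimate averages the squared magnitude of the DFT of mean-subtracted square ROIs; the dissertation's contribution is extending this to irregularly shaped ROIs, which is not reproduced here. A minimal sketch of the standard rectangular-ROI estimator, under the usual normalization and with synthetic noise realizations:

```python
import numpy as np

def nps_2d(noise_rois, pixel_size):
    """Standard 2D NPS estimate from an ensemble of square noise-only ROIs.

    noise_rois: array (n_rois, N, N) of noise realizations (e.g., from repeat-scan subtraction)
    pixel_size: pixel spacing in mm (assumed isotropic)
    Returns the 2D NPS in HU^2*mm^2 with zero frequency at the array center.
    """
    n_rois, N, _ = noise_rois.shape
    rois = noise_rois - noise_rois.mean(axis=(1, 2), keepdims=True)   # remove residual mean
    dft2 = np.abs(np.fft.fftshift(np.fft.fft2(rois), axes=(1, 2))) ** 2
    return dft2.mean(axis=0) * pixel_size**2 / (N * N)

# Hypothetical usage: white-noise ROIs with sigma = 10 HU and 0.5 mm pixels
rng = np.random.default_rng(2)
rois = rng.normal(0.0, 10.0, (50, 64, 64))
nps = nps_2d(rois, pixel_size=0.5)
# Sanity check: integrating the NPS over frequency recovers the noise variance (~100 HU^2)
print("NPS integral ~ noise variance:", nps.sum() / (64 * 0.5) ** 2)
```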
To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
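The exact analytical lesion models are not reproduced in this abstract, but as an illustration of the general idea, a radially symmetric lesion with size, contrast, and edge profile as explicit parameters can be written, for example, as a sigmoid-edged disk; this particular form is illustrative and is not claimed to be the dissertation's model.

```latex
% C: lesion contrast (HU), R: effective radius, \sigma: edge-blur parameter shaping the profile
L(r) \;=\; \frac{C}{1 + \exp\!\left(\dfrac{r - R}{\sigma}\right)}, \qquad r = \|\mathbf{x} - \mathbf{x}_0\|
```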
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Also, lesion-less images were reconstructed. Noise, contrast, CNR, and detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard of care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
This work explores the use of statistical methods in describing and estimating camera poses, as well as the information feedback loop between camera pose and object detection. Surging development in robotics and computer vision has pushed the need for algorithms that infer, understand, and utilize information about the position and orientation of the sensor platforms when observing and/or interacting with their environment.
The first contribution of this thesis is the development of a set of statistical tools for representing and estimating the uncertainty in object poses. A distribution for representing the joint uncertainty over multiple object positions and orientations is described, called the mirrored normal-Bingham distribution. This distribution generalizes both the normal distribution in Euclidean space, and the Bingham distribution on the unit hypersphere. It is shown to inherit many of the convenient properties of these special cases: it is the maximum-entropy distribution with fixed second moment, and there is a generalized Laplace approximation whose result is the mirrored normal-Bingham distribution. This distribution and approximation method are demonstrated by deriving the analytical approximation to the wrapped-normal distribution. Further, it is shown how these tools can be used to represent the uncertainty in the result of a bundle adjustment problem.
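For reference, the Bingham distribution generalized here is an antipodally symmetric density on the unit hypersphere, commonly written with an orthogonal matrix M and a diagonal concentration matrix Z. The standard textbook form is shown below only to fix notation; it is not the mirrored normal-Bingham density itself.

```latex
% x lies on the unit hypersphere S^{d-1}; F(Z) is the normalizing constant;
% antipodal symmetry p(x) = p(-x) makes it well suited to unit quaternions.
p(\mathbf{x} \mid M, Z) \;=\; \frac{1}{F(Z)} \exp\!\left( \mathbf{x}^{\mathsf T} M Z M^{\mathsf T} \mathbf{x} \right),
\qquad \mathbf{x} \in S^{d-1}
```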
Another application of these methods is illustrated as part of a novel camera pose estimation algorithm based on object detections. The autocalibration task is formulated as a bundle adjustment problem using prior distributions over the 3D points to enforce the objects' structure and their relationship with the scene geometry. This framework is very flexible and enables the use of off-the-shelf computational tools to solve specialized autocalibration problems. Its performance is evaluated using a pedestrian detector to provide head and foot location observations, and it proves much faster and potentially more accurate than existing methods.
Finally, the information feedback loop between object detection and camera pose estimation is closed by utilizing camera pose information to improve object detection in scenarios with significant perspective warping. Methods are presented that allow the inverse perspective mapping traditionally applied to images to be applied instead to features computed from those images. For the special case of HOG-like features, which are used by many modern object detection systems, these methods are shown to provide substantial performance benefits over unadapted detectors while achieving real-time frame rates, orders of magnitude faster than comparable image warping methods.
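To make the feature-warping idea concrete: instead of warping the image under a ground-plane homography and recomputing features, the same homography can be applied directly to a precomputed feature map by resampling each channel. The sketch below does this with plain bilinear resampling of a multi-channel array; the homography values and feature map are placeholders, and this is a simplification of the HOG-specific treatment in the thesis, which must also account for gradient-orientation bins.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_feature_map(features, H):
    """Resample a (rows, cols, channels) feature map under a homography H.

    H maps output pixel coordinates (x, y, 1) to input coordinates, so pass the
    inverse of the forward image-to-ground homography. Channels are warped identically.
    """
    rows, cols, n_ch = features.shape
    ys, xs = np.mgrid[0:rows, 0:cols].astype(float)
    ones = np.ones_like(xs)
    src = H @ np.stack([xs.ravel(), ys.ravel(), ones.ravel()])   # project the output grid
    src_x = src[0] / src[2]
    src_y = src[1] / src[2]
    coords = np.stack([src_y.reshape(rows, cols), src_x.reshape(rows, cols)])
    out = np.empty_like(features)
    for c in range(n_ch):                                        # bilinear resample per channel
        out[:, :, c] = map_coordinates(features[:, :, c], coords, order=1, mode="constant")
    return out

# Hypothetical usage: a random 8-channel "feature map" and a mild perspective homography
rng = np.random.default_rng(3)
feat = rng.random((60, 80, 8))
H_inv = np.array([[1.0, 0.2, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.002, 1.0]])
warped = warp_feature_map(feat, H_inv)
print(warped.shape)
```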
The statistical tools and algorithms presented here are especially promising for mobile cameras, providing the ability to autocalibrate and adapt to the camera pose in real time. In addition, these methods have wide-ranging potential applications in diverse areas of computer vision, robotics, and imaging.
Abstract:
Sudden changes in the stiffness of a structure are often indicators of structural damage. Detection of such sudden stiffness change from the vibrations of structures is important for Structural Health Monitoring (SHM) and damage detection. Non-contact measurement of these vibrations is a quick and efficient way for successful detection of sudden stiffness change of a structure. In this paper, we demonstrate the capability of Laser Doppler Vibrometry to detect sudden stiffness change in a Single Degree Of Freedom (SDOF) oscillator within a laboratory environment. The dynamic response of the SDOF system was measured using a Polytec RSV-150 Remote Sensing Vibrometer. This instrument employs Laser Doppler Vibrometry for measuring dynamic response. Additionally, the vibration response of the SDOF system was measured through a MicroStrain G-Link Wireless Accelerometer mounted on the SDOF system. The stiffness of the SDOF system was experimentally determined through calibrated linear springs. The sudden change of stiffness was simulated by introducing the failure of a spring at a certain instant in time during a given period of forced vibration. The forced vibration on the SDOF system was in the form of a white noise input. The sudden change in stiffness was successfully detected through the measurements using Laser Doppler Vibrometry. This detection from optically obtained data was compared with a detection using data obtained from the wireless accelerometer. The potential of this technique is deemed important for a wide range of applications. The method is observed to be particularly suitable for rapid damage detection and health monitoring of structures under a model-free condition or where information related to the structure is not sufficient.
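A minimal numerical counterpart to the laboratory setup is a single-degree-of-freedom oscillator driven by white noise whose stiffness drops at a chosen instant, emulating the spring failure; the resulting displacement history is what the vibrometer or accelerometer would record. All parameter values below are illustrative, not those of the experiment.

```python
import numpy as np

# Illustrative SDOF parameters (not the experimental values)
m = 2.0                    # mass, kg
c = 1.5                    # viscous damping, N*s/m
k1, k2 = 4000.0, 2500.0    # stiffness before/after the simulated spring failure, N/m
t_fail = 5.0               # instant of sudden stiffness change, s

dt, T = 1e-3, 10.0
t = np.arange(0.0, T, dt)
rng = np.random.default_rng(4)
force = rng.normal(0.0, 5.0, t.size)        # white-noise forcing, N

x, v = 0.0, 0.0
disp = np.empty_like(t)
for i, ti in enumerate(t):                  # semi-implicit Euler time stepping
    k = k1 if ti < t_fail else k2           # sudden stiffness drop at t_fail
    a = (force[i] - c * v - k * x) / m
    v += a * dt
    x += v * dt
    disp[i] = x

# The natural frequency shifts from sqrt(k1/m) to sqrt(k2/m); a short-time spectral
# analysis of `disp` before and after t_fail would reveal the change.
print("f_n before ~ %.2f Hz, after ~ %.2f Hz" %
      (np.sqrt(k1 / m) / (2 * np.pi), np.sqrt(k2 / m) / (2 * np.pi)))
```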
Abstract:
The study of III-nitride materials (InN, GaN and AlN) gained huge research momentum after breakthroughs in the production of light-emitting diodes (LEDs) and laser diodes (LDs) over the past two decades. Last year, the Nobel Prize in Physics was awarded jointly to Isamu Akasaki, Hiroshi Amano and Shuji Nakamura for inventing a new energy-efficient and environmentally friendly light source: the blue light-emitting diode (LED) from III-nitride semiconductors in the early 1990s. Nowadays, III-nitride materials not only play an increasingly important role in lighting technology, but have also become prospective candidates in other areas, for example, radio-frequency (RF) high electron mobility transistors (HEMTs) and photovoltaics. These devices require the growth of high quality III-nitride films, which can be prepared using metal organic vapour phase epitaxy (MOVPE). The main aim of my thesis is to study and develop the growth of III-nitride films, including AlN, u-AlGaN, Si-doped AlGaN, and InAlN, serving as sample wafers for fabrication of ultraviolet (UV) LEDs, in order to replace the conventional bulky, expensive and environmentally harmful mercury lamp as a new UV light source. For application to UV LEDs, reducing the threading dislocation density (TDD) in AlN epilayers on sapphire substrates is a key parameter for achieving high-efficiency AlGaN-based UV emitters. In Chapter 4, after careful and systematic optimisation of the growth conditions, the screw- and edge-type dislocation densities in the AlN were reduced to around 2.2×10⁸ cm⁻² and 1.3×10⁹ cm⁻², respectively, using an optimized three-step process, as estimated by TEM. An atomically smooth surface with an RMS roughness of around 0.3 nm was achieved over a 5×5 µm² AFM scan. Furthermore, a one-dimensional step-motion model has been proposed to describe the surface morphology evolution, especially the step bunching feature found under non-optimal conditions. In Chapter 5, control of alloy composition and the maintenance of compositional uniformity across a growing epilayer surface were demonstrated for the development of u-AlGaN epilayers. Optimized conditions (i.e. a high growth temperature of 1245 °C) produced uniform and smooth films with a low RMS roughness of around 2 nm over a 20×20 µm² AFM scan. The dopant that is most commonly used to obtain n-type conductivity in AlxGa1-xN is Si. However, the incorporation of Si has been found to increase the strain relaxation and promote unintentional incorporation of other impurities (O and C) during Si-doped AlGaN growth. In Chapter 6, reducing edge-type TDs is observed to be an effective approach to improve the electrical and optical properties of Si-doped AlGaN epilayers. In addition, maximum electron concentrations of 1.3×10¹⁹ cm⁻³ and 6.4×10¹⁸ cm⁻³ were achieved in Si-doped Al0.48Ga0.52N and Al0.6Ga0.4N epilayers, respectively, as measured by the Hall effect. Finally, in Chapter 7, studies on the growth of InAlN/AlGaN multiple quantum well (MQW) structures were performed, and it was found that exposing the InAlN QW to a higher temperature during the ramp to the growth temperature of the AlGaN barrier (around 1100 °C) causes significant indium (In) desorption. To overcome this issue, a quasi-two-temperature (Q2T) technique was applied to protect the InAlN QW. After optimization, intense UV emission from the MQWs was observed in the spectral range from 320 to 350 nm, as measured by room-temperature photoluminescence.
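The Hall-effect measurement quoted above yields the electron concentration through the standard single-carrier relations below, where V_H is the measured Hall voltage, I the drive current, B the magnetic field, t the film thickness, and q the elementary charge; this is the generic textbook form, shown only for context.

```latex
R_H \;=\; \frac{V_H\, t}{I\, B}, \qquad n \;=\; \frac{1}{q\, |R_H|}
```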
Abstract:
The absence of rapid, low-cost and highly sensitive biodetection platforms has hindered the implementation of next-generation, cheap, early-stage clinical or home-based point-of-care diagnostics. Label-free optical biosensing with high sensitivity, throughput, compactness, and low cost plays an important role in resolving these diagnostic challenges and pushes the detection limit down to the single-molecule level. Optical nanostructures, specifically the resonant waveguide grating (RWG) and nano-ribbon cavity based biodetection, are promising in this context. The main element of this dissertation is the design, fabrication and characterization of RWG sensors for different spectral regions (e.g. visible, near infrared) for use in label-free optical biosensing, and also the exploration of different RWG parameters to maximize sensitivity and increase detection accuracy. Design and fabrication of the waveguide-embedded resonant nano-cavity are also studied. Multi-parametric analyses were done using a customized optical simulator to understand the operational principle of these sensors and, more importantly, the relationship between the physical design parameters and sensor sensitivities. Silicon nitride (SixNy) is a useful waveguide material because of its wide transparency across the whole infrared, visible and part of the UV spectrum, and its comparatively higher refractive index than the glass substrate. SixNy-based RWGs on glass substrates are designed and fabricated applying both electron beam lithography and low-cost nano-imprint lithography techniques. A chromium hard-mask-aided nano-fabrication technique is developed for making very high aspect ratio optical nano-structures on glass substrates. An aspect ratio of 10 for very narrow (~60 nm wide) grating lines is achieved, which is the highest presented so far. The fabricated RWG sensors are characterized for both bulk (183.3 nm/RIU) and surface sensitivity (0.21 nm/nm-layer), and then used for successful detection of Immunoglobulin-G (IgG) antibodies and antigen (~1 µg/mL) both in buffer and serum. Widely used optical biosensors like surface plasmon resonance and optical microcavities are limited in the separation of the bulk response from surface binding events, which is crucial for ultralow-concentration biosensing applications under thermal or other perturbations. A RWG-based dual resonance approach is proposed and verified by controlled experiments for separating the responses of bulk and surface sensitivity. The dual resonance approach gives a sensitivity ratio of 9.4, whereas the competing polarization-based approach can offer only 2.5. The improved performance of the dual resonance approach would help reduce the probability of false readings in precise bio-assay experiments where thermal variations are probable, such as in portable diagnostics.
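The dual-resonance idea can be summarized as solving a small linear system: each of the two resonances responds to a bulk refractive-index change and to an adsorbed-layer change with its own bulk and surface sensitivities, so two measured wavelength shifts determine both unknowns. In the sketch below, the bulk sensitivity of 183.3 nm/RIU and surface sensitivity of 0.21 nm/nm-layer are quoted from the abstract; the second-resonance sensitivities and the measured shifts are assumptions made only for illustration.

```python
import numpy as np

# Sensitivity matrix: rows are the two resonances, columns are (bulk, surface) sensitivities.
# First row uses values quoted in the abstract; the second row is hypothetical.
S = np.array([[183.3, 0.21],
              [ 60.0, 0.08]])

# Hypothetical measured resonance shifts (nm) for the two resonances
shifts = np.array([0.95, 0.31])

# Solve for the bulk index change (RIU) and adsorbed-layer thickness change (nm)
dn_bulk, dt_layer = np.linalg.solve(S, shifts)
print(f"bulk index change ~ {dn_bulk:.4e} RIU, adlayer growth ~ {dt_layer:.2f} nm")
```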
Abstract:
Aims The pubococcygeal line (PCL) is an important reference line for determining measures of pelvic organ support on sagittal-plane magnetic resonance imaging (MRI); however, there is no consensus on where to place the posterior point of the PCL. As coccyx movement produced during pelvic floor muscle (PFM) contractions may affect other measures, optimal placement of the posterior point is important. This study compared two methods for measuring the PCL, with different posterior points, on T2-weighted sagittal MRI to determine the effect of coccygeal movement on measures of pelvic organ support in older women. Methods MRI of the pelvis was performed in the midsagittal plane, at rest and during PFM contractions, on 47 community-dwelling women aged 60 and over. The first PCL was measured to the tip of the coccyx (PCLtip) and the second to the sacrococcygeal joint (PCLjnt). Four measures of pelvic organ support were made using each PCL as the reference line: urethrovesical junction height, uterovaginal junction height, M-line, and levator plate angle. Results During the PFM contraction the PCLtip shortened and lifted (P < 0.001); the PCLjnt did not change (P > 0.05). The changes in the four measures of pelvic organ support were smaller when measured relative to the PCLtip than when measured relative to the PCLjnt (P < 0.001). Conclusions Coccyx movement affected the length and position of the PCLtip, which resulted in underestimates of the pelvic-organ lift produced by the PFM contraction. Therefore, we recommend that the PCL be measured to the sacrococcygeal joint and not to the tip of the coccyx.
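Measures such as urethrovesical junction height are perpendicular distances from a landmark to the PCL, so shifting the posterior point changes the reference line and hence the measurement. A small geometric sketch of that calculation is given below; the midsagittal coordinates are entirely hypothetical and serve only to show how the two PCL definitions yield different heights for the same landmark.

```python
import numpy as np

def height_above_line(p_anterior, p_posterior, landmark):
    """Signed perpendicular distance (mm) of a landmark from the reference line.

    p_anterior:  anterior point of the PCL (inferior pubic point)
    p_posterior: posterior point (coccyx tip for PCLtip, sacrococcygeal joint for PCLjnt)
    landmark:    e.g., the urethrovesical junction
    """
    d = np.asarray(p_posterior, float) - np.asarray(p_anterior, float)
    r = np.asarray(landmark, float) - np.asarray(p_anterior, float)
    return float((d[0] * r[1] - d[1] * r[0]) / np.linalg.norm(d))   # 2D cross product / |d|

# Hypothetical midsagittal coordinates (x, y) in mm
pubis = (0.0, 0.0)
coccyx_tip = (95.0, -8.0)
sacrococcygeal_joint = (88.0, 4.0)
uvj = (30.0, 12.0)

print("UVJ height vs PCLtip:", round(height_above_line(pubis, coccyx_tip, uvj), 1), "mm")
print("UVJ height vs PCLjnt:", round(height_above_line(pubis, sacrococcygeal_joint, uvj), 1), "mm")
```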
Abstract:
In certain European countries and the United States of America, canines have been successfully used in human scent identification. There is, however, limited scientific knowledge on the composition of human scent and the detection mechanism that produces an alert from canines. This lack of information has resulted in successful legal challenges to human scent evidence in courts of law. The main objective of this research was to utilize science to validate the current practices of using human scent evidence in criminal cases. The goals of this study were to utilize Headspace Solid Phase Micro Extraction Gas Chromatography Mass Spectrometry (HS-SPME-GC/MS) to determine the optimum collection and storage conditions for human scent samples, to investigate whether the amount of DNA deposited upon contact with an object affects the alerts produced by human scent identification canines, and to create a prototype pseudo human scent which could be used for training purposes. Hand odor samples which were collected on different sorbent materials and exposed to various environmental conditions showed that human scent samples should be stored without prolonged exposure to UVA/UVB light to allow minimal changes to the overall scent profile. Various methods of collecting human scent from objects were also investigated, and it was determined that passive collection methods yield ten times more VOCs by mass than active collection methods. Through the use of the polymerase chain reaction (PCR), no correlation was found between the amount of DNA that was deposited upon contact with an object and the alerts that were produced by human scent identification canines. Preliminary studies conducted to create a prototype pseudo human scent showed that it is possible to produce fractions of a human scent sample which can be presented to the canines to determine whether specific fractions or the entire sample is needed to produce alerts by the human scent identification canines.
Abstract:
The presence of harmful algal blooms (HABs) is a growing concern in aquatic environments. Among HAB organisms, cyanobacteria are of special concern because they have been reported worldwide to cause environmental and human health problems through contamination of drinking water. Although several analytical approaches have been applied to monitoring cyanobacterial toxins, conventional methods are costly and time-consuming, with analyses taking weeks between field sampling and subsequent laboratory analysis. Capillary electrophoresis (CE) is a particularly suitable analytical separation method because it can couple very small samples and rapid separations to a wide range of selective and sensitive detection techniques. This paper demonstrates a method for rapid separation and identification of four microcystin variants commonly found in aquatic environments. Procedures for CE coupled to UV detection and to electrospray ionization time-of-flight mass spectrometry (ESI-TOF) were developed. All four analytes were separated within 6 minutes. The ESI-TOF experiment provides accurate molecular mass information, which further confirms the identity of the analytes.
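For completeness, migration times from such a CE separation are typically converted to apparent electrophoretic mobilities using the standard relation below, where L_t and L_d are the total and to-detector capillary lengths, V the applied voltage, and t_m the migration time; the symbols are generic and not taken from the paper.

```latex
\mu_{\text{app}} \;=\; \frac{L_d \, L_t}{t_m \, V}
```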