861 results for Diagnostic imaging Digital techniques
Abstract:
This dissertation presents the design of three high-performance successive-approximation-register (SAR) analog-to-digital converters (ADCs) using distinct digital background calibration techniques under the framework of a generalized code-domain linear equalizer. These digital calibration techniques effectively and efficiently remove the static mismatch errors in the analog-to-digital (A/D) conversion. They enable aggressive scaling of the capacitive digital-to-analog converter (DAC), which also serves as the sampling capacitor, to the kT/C limit. As a result, outstanding conversion linearity, high signal-to-noise ratio (SNR), high conversion speed, robustness, superb energy efficiency, and minimal chip area are achieved simultaneously. The first design is a 12-bit 22.5/45-MS/s SAR ADC in a 0.13-μm CMOS process. It employs a perturbation-based calibration, built on the superposition property of linear systems, to digitally correct the capacitor mismatch error in the weighted DAC. With 3.0-mW power dissipation at a 1.2-V supply and a 22.5-MS/s sample rate, it achieves a 71.1-dB signal-to-noise-plus-distortion ratio (SNDR) and a 94.6-dB spurious-free dynamic range (SFDR). At the Nyquist frequency, the conversion figure of merit (FoM) is 50.8 fJ/conversion-step, the best FoM reported to date (2010) for 12-bit ADCs. The SAR ADC core occupies 0.06 mm², while the estimated area of the calibration circuits is 0.03 mm². The second proposed technique is a bit-wise-correlation-based digital calibration. It exploits the statistical independence of an injected pseudo-random signal and the input signal to correct the DAC mismatch in SAR ADCs. This idea is experimentally verified in a 12-bit 37-MS/s SAR ADC fabricated in 65-nm CMOS and implemented by Pingli Huang. The prototype chip achieves a 70.23-dB peak SNDR and an 81.02-dB peak SFDR, while occupying 0.12 mm² of silicon area and dissipating 9.14 mW from a 1.2-V supply with the synthesized digital calibration circuits included. The third work is an 8-bit, 600-MS/s, 10-way time-interleaved SAR ADC array fabricated in a 0.13-μm CMOS process. It employs an adaptive digital equalization approach to calibrate both intra-channel nonlinearities and inter-channel mismatch errors. The prototype chip achieves 47.4-dB SNDR, 63.6-dB SFDR, less than 0.30-LSB differential nonlinearity (DNL), and less than 0.23-LSB integral nonlinearity (INL). The ADC array occupies an active area of 1.35 mm² and dissipates 30.3 mW, including the synthesized digital calibration circuits and an on-chip dual-loop delay-locked loop (DLL) for clock generation and synchronization.
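As an illustration of the perturbation-based calibration principle, the behavioral sketch below converts each sample twice with a known ±Δ offset; by superposition, the two digital outputs should differ by exactly 2Δ once the digital bit weights match the physical DAC weights, so any deviation can drive a normalized-LMS update. This is a toy model only, not the fabricated design: the 2% mismatch level, the perturbation size DELTA, and the step size MU are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
NBITS = 12
nominal = 2.0 ** np.arange(NBITS - 1, -1, -1)               # ideal bit weights, MSB first
w_true = nominal * (1 + 0.02 * rng.standard_normal(NBITS))  # ~2% capacitor mismatch
w_est = nominal.copy()                                      # digital weights to be learned
DELTA = 64.0    # known injected perturbation, in nominal LSB units (assumed exact)
MU = 0.1        # normalized-LMS step size

def sar_convert(v, w):
    """Successive approximation using the (mismatched) physical weights w."""
    bits = np.zeros(len(w))
    residue = v
    for i in range(len(w)):
        if residue >= w[i]:          # comparator decision against the DAC level
            bits[i] = 1.0
            residue -= w[i]
    return bits

for _ in range(200_000):
    x = rng.uniform(DELTA, 2**NBITS - 1 - DELTA)  # keep both conversions in range
    d = sar_convert(x + DELTA, w_true) - sar_convert(x - DELTA, w_true)
    # By superposition the two outputs differ by 2*DELTA (up to quantization)
    # once w_est matches w_true; the residual error drives the NLMS update.
    e = w_est @ d - 2.0 * DELTA
    n = d @ d
    if n > 0:
        w_est -= MU * e * d / n

print("max residual weight error (LSB):", np.max(np.abs(w_est - w_true)))
```

In this toy model the update runs entirely in the digital domain on the conversion results, mirroring how a background calibration can converge while the converter processes its normal input.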
Abstract:
Glioblastoma (GBM) is a highly aggressive and fatal brain cancer that is associated with a number of diagnostic, therapeutic, and treatment-monitoring challenges. At the time of writing, inhibition of a protein called poly (ADP-ribose) polymerase-1 (PARP-1) in combination with chemotherapy was being investigated as a novel approach to the treatment of these tumours. However, human studies have encountered toxicity problems due to sub-optimal PARP-1 inhibitor and chemotherapeutic dosing regimens. Nuclear imaging of PARP-1 could help to address these issues and provide additional insight into potential PARP-1 inhibitor resistance mechanisms. Furthermore, nuclear imaging of the translocator protein (TSPO) could be used to improve GBM diagnosis, pre-surgical planning, and treatment monitoring, as TSPO is overexpressed by GBM lesions in good contrast to surrounding brain tissue. To date, relatively few nuclear imaging radiotracers have been discovered for PARP-1. On the other hand, numerous tracers exist for TSPO, many of which have been investigated in humans. However, these TSPO radiotracers suffer from either poor pharmacokinetic properties or high sensitivity to a human TSPO polymorphism that can affect their binding to TSPO. Bearing in mind the above, and the high attrition rates associated with advancing radiotracers to the clinic, there is a need for novel radiotracers that can be used to image PARP-1 and TSPO. This thesis reports the pre-clinical discovery programme that led to the identification of two potent PARP-1 inhibitors, 4 and 17, which were successfully radiolabelled to generate the potential SPECT and PET imaging agents [123I]-4 and [18F]-17, respectively. Evaluation of these radiotracers in mice bearing subcutaneous human GBM xenografts using ex vivo biodistribution techniques revealed that the agents were retained in tumour tissue due to specific PARP-1 binding. This thesis also describes the pre-clinical in vivo evaluation of [18F]-AB5186, a novel radiotracer discovered previously within the research group with potential for PET imaging of TSPO. Using ex vivo autoradiography and PET imaging, the agent was shown to accumulate in intracranial human GBM tumour xenografts in good contrast to surrounding brain tissue, owing to specific binding to TSPO. The in vivo data for all three radiolabelled compounds warrant further pre-clinical investigation with potential for clinical advancement in mind.
Abstract:
An optical access engine integrated with diagnostic and optical measurement techniques is a powerful platform for engine research because it provides clear visual access to the combustion chamber inside the engine. An optical access engine customized from a 4-cylinder spark-ignited direct-injection (SIDI) production engine is located in the Advanced Power Systems Laboratories (APS LABS) at Michigan Technological University, where it has been set up for a range of engine research. In this report, two SAE papers on engine research utilizing optical access engines are reviewed to gain a basic understanding of the methodology. Although the optical engine in APS LABS differs somewhat from the engines used in the literature, the methodology in those papers provides guidelines for engine research with optical access engines. In addition, the optical access engine instrumentation, including the test cell setup and the optical engine setup, is described in detail, providing a solid record for later troubleshooting and reference. Finally, motoring tests, firing tests, and an optical imaging experiment were performed on the optical engine to validate the instrumentation. This report describes the instrumentation of the optical engine in the APS LABS as of April 2015.
Abstract:
Nanoparticles are often considered efficient drug delivery vehicles for dispensing therapeutic payloads precisely to diseased sites in the patient’s body, thereby minimizing the toxic side effects of the payloads on healthy tissue. However, the fundamental physics that underlies the nanoparticles’ intrinsic interaction with the surrounding cells is inadequately elucidated. The ability of nanoparticles to precisely control the release of their payloads externally (on demand), without depending on the physiological conditions of the target sites, has the potential to enable patient- and disease-specific nanomedicine, also known as Personalized NanoMedicine (PNM). In this dissertation, magneto-electric nanoparticles (MENs) were utilized for the first time to enable several important functions: (i) a field-controlled, high-efficacy, dissipation-free targeted drug delivery system with on-demand release at the sub-cellular level; (ii) non-invasive, energy-efficient stimulation of deep brain tissue at body temperature; and (iii) a high-sensitivity contrast agent to map neuronal activity in the brain non-invasively. First, this dissertation focuses on using MENs as energy-efficient, dissipation-free, field-controlled nano-vehicles for targeted delivery and on-demand release of the anti-cancer drug paclitaxel (Taxol) and the anti-HIV drug AZT 5’-triphosphate (AZTTP) from 30-nm MENs (CoFe2O4-BaTiO3), applying low-energy DC and low-frequency (below 1000 Hz) AC fields to separate the functions of delivery and release, respectively. Second, it examines, through numerical simulations, the use of MENs to non-invasively stimulate deep-brain neuronal activity by applying a low-energy, low-frequency external magnetic field to activate intrinsic electric dipoles at the cellular level. Third, it describes the use of MENs to track neuronal activity in the brain non-invasively, using magnetic resonance imaging and magnetic nanoparticle imaging to monitor changes in the magnetization of the MENs surrounding the neuronal tissue under different states. The potential therapeutic and diagnostic impact of this novel study is highly significant not only in HIV/AIDS, cancer, Parkinson’s disease, and Alzheimer’s disease, but also in many other CNS and other diseases where the ability to remotely control targeted drug delivery/release and diagnostics is key.
Abstract:
A prospective randomised controlled clinical trial of treatment decisions informed by invasive functional testing of coronary artery disease severity, compared with standard angiography-guided management, was implemented in 350 patients with a recent non-ST elevation myocardial infarction (NSTEMI) admitted to 6 hospitals in the National Health Service. The main aims of this study were to examine the utility of both invasive fractional flow reserve (FFR) and non-invasive cardiac magnetic resonance imaging (MRI) among patients with a recent diagnosis of NSTEMI. In summary, the findings of this thesis are: (1) the use of FFR combined with intravenous adenosine was feasible and safe among patients with NSTEMI and has clinical utility; (2) there was discordance between the visual, angiographic estimation of lesion significance and FFR; (3) the use of FFR led to changes in treatment strategy and an increase in the prescription of medical therapy in the short term compared with an angiographically guided strategy; and (4) the incidence of major adverse cardiac events (MACE) at 12 months of follow-up was similar in the two groups. Cardiac MRI was used in a subset of patients enrolled in two hospitals in the West of Scotland. T1 and T2 mapping methods were used to delineate territories of acute myocardial injury. T1 and T2 mapping were superior to conventional T2-weighted dark blood imaging for estimation of the ischaemic area-at-risk (AAR), with less artifact, in NSTEMI. There was poor correlation between the angiographic AAR and MRI methods of AAR estimation in patients with NSTEMI. FFR had high accuracy in predicting inducible perfusion defects demonstrated on stress perfusion MRI. This thesis describes the largest randomised trial published to date specifically examining the clinical utility of FFR in the NSTEMI population. We have provided evidence of the diagnostic and clinical utility of FFR in this group of patients, and evidence to inform larger studies. This thesis also describes the largest MRI cohort to date, including myocardial stress perfusion assessments, specifically in the NSTEMI population. We have demonstrated the diagnostic accuracy of FFR in predicting reversible ischaemia, referenced to a non-invasive gold standard with MRI. This thesis has also shown the futility of using dark blood oedema imaging among all-comer NSTEMI patients when compared with novel T1 and T2 mapping methods.
Abstract:
Given the high standards expected of diagnostic medical imaging, the analysis of information regarding waiting lists across different information systems is of the utmost importance. Such analysis may, on the one hand, improve diagnostic quality and, on the other hand, reduce waiting times, with a concomitant increase in the quality of services and a reduction in the inherent financial costs. Hence, the purpose of this study is to assess waiting times in the delivery of diagnostic medical imaging services, such as computed tomography and magnetic resonance imaging. This work is therefore focused on the development of a decision support system to assess waiting times in diagnostic medical imaging, drawing on operational data for selected attributes extracted from distinct information systems. The computational framework is built on top of a Logic Programming based Case-Based Reasoning approach to Knowledge Representation and Reasoning that caters for the handling of incomplete, unknown, or even self-contradictory information.
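As a minimal sketch of the case-based part of such a framework (hypothetical attributes and scaling; the thesis's Logic Programming formulation of unknown and self-contradictory values is considerably richer), unknown attribute values can be carried explicitly and simply reduce the evidence a stored case contributes:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Case:
    modality: Optional[str]    # e.g. "CT" or "MRI"; None means unknown
    priority: Optional[int]    # referral priority, 1 (urgent) .. 4 (routine)
    age: Optional[float]       # patient age in years
    waiting_days: float        # outcome: observed waiting time

def similarity(q: Case, c: Case) -> float:
    """Score in [0, 1]; attributes unknown on either side contribute nothing."""
    score, known = 0.0, 0
    if None not in (q.modality, c.modality):
        score += 1.0 if q.modality == c.modality else 0.0
        known += 1
    if None not in (q.priority, c.priority):
        score += 1.0 - abs(q.priority - c.priority) / 3.0
        known += 1
    if None not in (q.age, c.age):
        score += max(0.0, 1.0 - abs(q.age - c.age) / 50.0)
        known += 1
    if known == 0:
        return 0.0
    mean_sim = score / known    # similarity over the attributes that are known
    coverage = known / 3        # fraction of attributes with usable values
    return mean_sim * coverage  # incomplete evidence lowers the ranking

def estimate_waiting(query: Case, casebase: List[Case], k: int = 3) -> float:
    """k-nearest-neighbour estimate of the waiting time for the query."""
    top = sorted(casebase, key=lambda c: similarity(query, c), reverse=True)[:k]
    return sum(c.waiting_days for c in top) / len(top)

casebase = [
    Case("MRI", 2, 55.0, 34.0),
    Case("MRI", 2, None, 41.0),   # stored case with an unknown age
    Case("CT", 4, 63.0, 12.0),
]
query = Case("MRI", 2, 58.0, 0.0)  # waiting_days is the unknown being estimated
print(estimate_waiting(query, casebase, k=2))  # -> 37.5
```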
Abstract:
This paper considers the question, ‘What is co-creative media, and why is it a useful idea in social media research?’ The term ‘co-creative media’ is now used by Creative Industries researchers at QUT to describe their digital storytelling practices. Digital storytelling is a set of collaborative digital media production techniques that have been used to facilitate social participation in numerous Australian and international contexts. Digital storytelling has been adapted by Creative Industries researchers at QUT as a platform for researching the potential of vernacular creativity in a variety of contexts, including the social inclusion of marginalised and disadvantaged groups; the inclusion in public histories of narratives that might otherwise be overlooked; and the articulation of voices that otherwise remain silent in the formulation of social and economic development strategies. The adaptation of digital storytelling to different contexts has been shaped by the reflexive, recursive, and pragmatic requirements of action research. Amongst other things, this activity draws attention to the agency of researchers in facilitating these kinds of participatory media processes and outcomes. This discussion serves to problematise concepts of participatory media by introducing the term ‘co-creative media’ and differentiating it from other social media production practices.
Abstract:
Non-Alcoholic Fatty Liver Disease (NAFLD) is a condition that is frequently seen but seldom investigated. Until recently, NAFLD was considered benign, self-limiting, and unworthy of further investigation, an opinion based on retrospective studies with relatively small numbers and scant follow-up of histology data (1). The prevalence among adults in the USA is 30%, and NAFLD is recognised as a common and increasing form of liver disease in the paediatric population (1). Australian data from New South Wales suggest a prevalence of NAFLD in “healthy” 15-year-olds of 10% (2). Non-alcoholic fatty liver disease is a condition in which fat progressively invades the liver parenchyma. The degree of infiltration ranges from simple steatosis (fat only), to steatohepatitis (fat and inflammation), to steatohepatitis plus fibrosis (fat, inflammation, and fibrosis), to cirrhosis (replacement of liver texture by scarred, fibrotic, and non-functioning tissue). Non-alcoholic fatty liver is diagnosed by exclusion rather than inclusion: none of the currently available diagnostic techniques, namely liver biopsy, liver function tests (LFTs), and imaging (ultrasound, computerised tomography (CT), or magnetic resonance imaging (MRI)), is specific for non-alcoholic fatty liver. An association exists between NAFLD, non-alcoholic steatohepatitis (NASH), and irreversible liver damage, cirrhosis, and hepatoma. However, a more pervasive aspect of NAFLD is its association with the Metabolic Syndrome. This syndrome is characterised by increased insulin resistance (IR), and NAFLD is thought to be its hepatic manifestation. Those with NAFLD have an increased risk of death (3), and it is an independent predictor of atherosclerosis and cardiovascular disease (1). Liver biopsy is considered the gold standard for the diagnosis, grading, and staging of non-alcoholic fatty liver disease (4). Fatty liver is diagnosed when there is macrovesicular steatosis with displacement of the nucleus to the edge of the cell and at least 5% of the hepatocytes are seen to contain fat (4). Steatosis represents fat accumulation in liver tissue without inflammation. However, the condition is only termed non-alcoholic fatty liver disease when alcohol intake above 20–30 g per day has been excluded (5); non-alcoholic and alcoholic fatty liver are identical on histology (4). LFTs are indicative, not diagnostic: they indicate that a condition may be present but cannot identify which. A patient presenting with raised fasting blood glucose, low HDL (high-density lipoprotein), and elevated fasting triacylglycerols is likely to have NAFLD (6). Of the imaging techniques, MRI is the least variable and the most reproducible. With CT scanning, liver fat content can be semi-quantitatively estimated: with increasing hepatic steatosis, liver attenuation values decrease by 1.6 Hounsfield units for every milligram of triglyceride deposited per gram of liver tissue (7). Ultrasound permits early detection of fatty liver, often in the preclinical stages before symptoms are present and serum alterations occur. Earlier, accurate reporting of this condition will allow appropriate intervention, resulting in better patient health outcomes.
References
1. Chalasani N. Does fat alone cause significant liver disease: It remains unclear whether simple steatosis is truly benign. American Gastroenterological Association Perspectives, February/March 2008. www.gastro.org/wmspage.cfm?parm1=5097 Viewed 20th October 2008.
2. Booth M, George J, Denney-Wilson E. The population prevalence of adverse concentrations and associations with adiposity of liver tests among Australian adolescents. J Paediatr Child Health. 2008 November.
3. Catalano D, Trovato GM, Martines GF, Randazzo M, Tonzuso A. Bright liver, body composition and insulin resistance changes with nutritional intervention: a follow-up study. Liver Int. 2008 February;1280-9.
4. Choudhury J, Sanyal A. Clinical aspects of fatty liver disease. Semin Liver Dis. 2004;24(4):349-62.
5. Dionysos Study Group. Drinking habits as cofactors of risk for alcohol induced liver damage. Gut. 1997;41:845-50.
6. Preiss D, Sattar N. Non-alcoholic fatty liver disease: an overview of prevalence, diagnosis, pathogenesis and treatment considerations. Clin Sci. 2008;115:141-50.
7. American Gastroenterological Association. Technical review on nonalcoholic fatty liver disease. Gastroenterology. 2002;123:1705-25.
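As a back-of-envelope illustration of the CT relation quoted above, the snippet below inverts the 1.6-HU-per-(mg/g) slope from reference 7 to estimate hepatic triglyceride content; the 60-HU lean-liver baseline is an assumed figure for illustration only.

```python
BASELINE_HU = 60.0   # assumed attenuation of lean liver tissue (illustrative)
SLOPE = 1.6          # HU decrease per mg of triglyceride per g of liver tissue

def triglyceride_mg_per_g(measured_hu: float) -> float:
    """Rough estimate of hepatic triglyceride content from CT attenuation."""
    return max(0.0, (BASELINE_HU - measured_hu) / SLOPE)

print(triglyceride_mg_per_g(28.0))  # ~20 mg/g for a markedly hypodense liver
```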
Abstract:
Learning a digital tool is often a hidden process. We tend to learn new tools in a bewildering range of ways: formal, informal, structured, random, conscious, unconscious, individual, and group strategies may all play a part, but they are often lost to us in the complex and demanding process of learning. When we reflect carefully on the experience, however, some patterns and surprising techniques emerge. This monograph presents the thinking of seven students in MDN642, Digital Pedagogies, who deliberately reflected on the mental processes at work as they learnt a digital technology of their choice.
Abstract:
Research is often characterised as the search for new ideas and understanding. The language of this view privileges the cognitive and intellectual aspects of discovery. However, in the research process theoretical claims are usually evaluated in practice and, indeed, the observations and experiences of practical circumstances often lead to new research questions. This feedback loop between speculation and experimentation is fundamental to research in many disciplines, and it is also appropriate for research in the creative arts. In this chapter we examine how our creative desire for artistic expressivity results in an interplay between actions and ideas that directs the development of techniques and approaches for our audio/visual live-coding activities.
Abstract:
Cultural objects are increasingly generated and stored in digital form, yet effective methods for their indexing and retrieval remain an important area of research. The main problem arises from the disconnection between the content-based indexing approach used by computer scientists and the description-based approach used by information scientists. There is also a lack of representational schemes that allow the alignment of semantics and context with keywords and low-level features that can be automatically extracted from the content of these cultural objects. This paper presents an integrated approach to these problems that takes advantage of both computer science and information science perspectives. We first discuss the requirements from a number of perspectives: users, content providers, content managers, and technical systems. We then present an overview of our system architecture and describe the techniques that underlie the major components of the system, including automatic object category detection; user-driven tagging; metadata transformation and augmentation; and an expression language for digital cultural objects. In addition, we discuss our experience in testing and evaluating the system on some existing collections, analyse the difficulties encountered, and propose ways to address them.
Abstract:
Free-radical processes underpin the thermo-oxidative degradation of polyolefins. Thus, to extend the lifetime of these polymers, stabilizers are generally added during processing to scavenge the free radicals formed as the polymer degrades. Nitroxide radical precursors, such as hindered amine stabilizers (HAS) (1,2), are common polypropylene additives, as the nitroxide moiety is a potent scavenger of polymer alkyl radicals (R•). Oxidation of HAS by radicals formed during polypropylene degradation yields nitroxide radicals (R2NO•), which rapidly trap the polymer degradation species to produce alkoxyamines, thus retarding oxidative polymer degradation. This increase in polymer stability is demonstrated by a lengthening of the polymer's “induction period” (the time prior to a sharp rise in the oxidation of the polymer). Instrumental techniques such as chemiluminescence or infrared spectroscopy are somewhat limited in detecting changes in the polymer during the initial stages of degradation. Therefore, other methods for observing polymer degradation have been sought, as the useful life of a polymer does not extend far beyond its “induction period”.
Abstract:
The eyelids play an important role in lubricating and protecting the surface of the eye. Each blink serves to spread fresh tears, remove debris, and replenish the smooth optical surface of the eye. Yet little is known about how the eyelids contact the ocular surface and what pressure distribution exists between the eyelids and cornea. As the principal refractive component of the eye, the cornea is a major element of the eye's optics, and its optical properties are known to be susceptible to the pressure exerted by the eyelids. Eyelids that are abnormal due to disease exert altered pressure on the ocular surface because of changes in their shape, thickness, or position. Normal eyelids also cause corneal distortions, which are most often noticed when the eyelids rest closer to the corneal centre (for example during reading). There have been many reports of monocular diplopia after reading due to corneal distortion, but prior to videokeratoscopes these localised changes could not be measured. This thesis measured the influence of eyelid pressure on the cornea after short-term near tasks, and techniques were developed to quantify eyelid pressure and its distribution. The profile of the wave-like eyelid-induced corneal changes and the refractive effects of these distortions were investigated. Corneal topography changes due to both the upper and lower eyelids were measured for four tasks involving two angles of vertical downward gaze (20° and 40°) and two near-work tasks (reading and steady fixation). After examining the depth and shape of the corneal changes, conclusions were reached regarding the magnitude and distribution of upper and lower eyelid pressure for these task conditions. The degree of downward gaze appears to alter the upper eyelid pressure on the cornea, with deeper changes occurring at greater angles of downward gaze. Although the lower eyelid was further from the corneal centre in large angles of downward gaze, its effect on the cornea was greater than that of the upper eyelid. Eyelid tilt, curvature, and position were found to influence the magnitude of eyelid-induced corneal changes. Refractively, these corneal changes are clinically and optically significant, with mean spherical and astigmatic changes of about 0.25 D after only 15 minutes of downward gaze (40° reading and steady fixation conditions). Given the magnitude of these changes, eyelid pressure in downward gaze offers a possible explanation for some of the day-to-day variation observed in refraction. Considering the magnitude of these changes and previous work on their regression, it is recommended that sustained tasks performed in downward gaze be avoided for at least 30 minutes before corneal and refractive assessment requiring high accuracy. Novel procedures were developed to measure eyelid pressure using a thin (0.17 mm) tactile piezoresistive pressure sensor mounted on a rigid contact lens. A hydrostatic calibration system was constructed to convert the raw digital output of the sensors to actual pressure units. Conditioning the sensor prior to use regulated the measurement response, and sensor output was found to stabilise about 10 seconds after loading. The influences of various external factors on sensor output were studied. While the sensor output drifted slightly over several hours, the drift was not significant over the 30-second measurement time used for eyelid pressure, provided the lengths of the calibration and measurement recordings were matched.
The error associated with calibrating at room temperature but measuring at ocular surface temperature led to a very small overestimation of pressure. To optimally position the sensor-contact lens combination under the eyelid margin, an in vivo measurement apparatus was constructed. Using this system, increases in eyelid pressure were observed when the upper eyelid was placed on the sensor, and a significant further increase was apparent when the upper eyelid was pulled tighter against the eye. For a group of young adult subjects, upper eyelid pressure was measured using this piezoresistive sensor system. Three models of contact between the eyelid and the ocular surface were used to calibrate the pressure readings. The first model assumed contact between the eyelid and pressure sensor across at least the full pressure-cell width of 1.14 mm. For the second model, a contact imprint was measured using thin pressure-sensitive carbon paper placed under the eyelid, and this imprint width was used as the contact region. Lastly, as Marx's line has been implicated as the region of contact with the ocular surface, its width was measured and used as the contact region for the third model. The mean eyelid pressures calculated using these three models for the group of young subjects were 3.8 ± 0.7 mmHg (whole cell), 8.0 ± 3.4 mmHg (imprint width), and 55 ± 26 mmHg (Marx's line). The carbon imprints, made using Pressurex-micro, confirmed previous suggestions that a band of the eyelid margin makes primary contact with the ocular surface, and provided the best estimate of the contact region and hence of eyelid pressure. Although it is difficult to compare the results directly with previous attempts to measure eyelid pressure, the eyelid pressure calculated using this model was slightly higher than previous manometer measurements but showed good agreement with the eyelid force estimated using an eyelid tensiometer. The work described in this thesis has shown that the eyelids have a significant influence on corneal shape, even after short-term tasks (15 minutes). Instrumentation was developed using piezoresistive sensors to measure eyelid pressure, and measurements for the upper eyelid, combined with estimates of the contact region between the cornea and the eyelid, enabled quantification of upper eyelid pressure for a group of young adult subjects. These techniques will allow further investigation of the interaction between the eyelids and the surface of the eye.
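The three contact models amount to spreading the same measured load over different assumed widths, so the inferred pressure scales inversely with the contact width. The sketch below reproduces that scaling; only the 1.14-mm cell width and the whole-cell mean pressure come from the measurements above, while the imprint and Marx's line widths are hypothetical values back-calculated to match the reported means.

```python
CELL_WIDTH_MM = 1.14        # width of one piezoresistive pressure cell
mean_cell_pressure = 3.8    # mmHg, whole-cell mean for the young adult group

# Assumed contact widths (mm); the last two are illustrative back-calculations.
contact_models = {
    "whole cell": 1.14,
    "imprint width": 0.54,
    "Marx's line": 0.08,
}

for name, width_mm in contact_models.items():
    # Same load over a narrower band implies proportionally higher pressure.
    pressure = mean_cell_pressure * CELL_WIDTH_MM / width_mm
    print(f"{name:>13}: {pressure:5.1f} mmHg over {width_mm} mm")
```

With these assumed widths the scaling approximately reproduces the reported means of 3.8, 8.0, and 55 mmHg.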