956 results for Photon Counting
Abstract:
We explored the potential of a carbon nanotube (CNT) coating working in conjunction with a recently developed localized surface plasmon (LSP) device (based upon a nanostructured thin film consisting of nano-wires of platinum) with ultra-high sensitivity to changes in the surrounding refractive index. The uncoated LSP sensor's transmission resonances exhibited a refractive index sensitivity of Δλ/Δn ~ -6200 nm/RIU and ΔI/Δn ~ 5900 dB/RIU, which is the highest reported spectral sensitivity of a fiber optic sensor to bulk index changes within the gas regime. The complete device provides the first demonstration of the chemically specific gas sensing capabilities of CNTs utilizing their optical characteristics. This is demonstrated by investigating the spectral response of the sensor to alkane gases and carbon dioxide, before and after the adhesion of CNTs. The device shows a distinctive spectral response in the presence of gaseous CO2 over and above what is expected from general changes in the bulk refractive index. This fiber device yielded a limit of detection of 150 ppm for CO2 at a pressure of one atmosphere. Additionally, the adhered CNTs reduce the sensitivity of the device to changes in the bulk refractive index of the surrounding medium. The polarization properties of the LSP sensor resonances are also investigated, and it is shown that there is a reduction in the overall azimuthal polarization after the CNTs are applied. These optical devices offer a way of exploiting optically the chemical selectivity of carbon nanotubes, thus providing the potential for real-world applications in gas sensing in flammable and explosive environments. © (2015) Society of Photo-Optical Instrumentation Engineers (SPIE).
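As a rough, illustrative aside (not part of the original abstract), the quoted sensitivities can be turned into expected read-outs for a given index change; the index step Δn below is an arbitrary assumed value, and the script simply multiplies it by the reported figures.

```python
# Back-of-the-envelope use of the quoted sensitivities (illustrative only).
spectral_sensitivity_nm_per_riu = -6200.0   # dLambda/dn from the abstract
intensity_sensitivity_db_per_riu = 5900.0   # dI/dn from the abstract

delta_n = 1e-4  # hypothetical bulk refractive-index change in the gas regime

delta_lambda_nm = spectral_sensitivity_nm_per_riu * delta_n
delta_intensity_db = intensity_sensitivity_db_per_riu * delta_n

print(f"Index change {delta_n:.0e} RIU -> ~{delta_lambda_nm:.2f} nm resonance shift, "
      f"~{delta_intensity_db:.2f} dB intensity change")
```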
Abstract:
X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].
Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has prompted substantial efforts within the community to manage and optimize CT dose.
As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus a shared responsibility to optimize the radiation dose used in CT examinations. The key to dose optimization is to determine the minimum radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would benefit significantly from effective metrics that characterize the radiation dose and image quality of a CT exam. Moreover, if the radiation dose and image quality could be accurately predicted before the exam begins, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models to prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to translate the theoretical models into clinical practice by developing an organ-based dose monitoring system and an image-based noise addition software tool for protocol optimization.
More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current conditions. The study effectively modeled anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distributions. The dependence of organ dose coefficients on patient size and scanner model was further evaluated. Distinct from prior work, these studies use the largest number of patient models to date, with representative age, weight percentile, and body mass index (BMI) ranges.
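A minimal sketch of how such size-dependent organ dose coefficients are often parameterized, assuming the commonly used exponential dependence on effective patient diameter; the constants `a` and `b` and the example CTDIvol below are placeholder values, not results from the thesis.

```python
import numpy as np

def organ_dose_coefficient(effective_diameter_cm, a=3.0, b=0.04):
    """CTDIvol-normalized organ dose coefficient (mGy/mGy).

    Assumes an exponential decrease with patient size; a and b are
    placeholder fit constants, not values from the thesis.
    """
    return a * np.exp(-b * effective_diameter_cm)

def estimate_organ_dose(ctdi_vol_mgy, effective_diameter_cm):
    """Organ dose ~ coefficient(size) * CTDIvol."""
    return organ_dose_coefficient(effective_diameter_cm) * ctdi_vol_mgy

if __name__ == "__main__":
    for d in (20, 30, 40):  # effective diameters in cm
        print(f"{d} cm patient: {estimate_organ_dose(10.0, d):.2f} mGy at CTDIvol = 10 mGy")
```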
With effective quantification of organ dose under constant tube current conditions, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model was validated by comparing the predicted organ dose with dose estimates from Monte Carlo simulations in which the TCM function was explicitly modeled.
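A schematic sketch of the convolution idea described above, under simplifying assumptions: a one-dimensional dose-spread kernel along the scan (z) axis, an invented tube-current modulation profile, and a placeholder mA-to-mGy conversion factor.

```python
import numpy as np

# Hypothetical tube-current modulation profile along z (mA per slice position).
z = np.arange(0, 40)                      # slice index along the scan range
tcm_ma = 150 + 100 * np.sin(z / 6.0) ** 2  # invented modulation profile

# Simple dose-spread kernel modelling scatter along z (Gaussian, illustrative width).
kernel_z = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
kernel_z /= kernel_z.sum()

# Radiation field at each z position: local tube current blurred by the kernel.
dose_field = np.convolve(tcm_ma, kernel_z, mode="same")

# Organ dose ~ mean field over the slices the organ occupies, times a conversion factor.
organ_slices = slice(12, 20)              # hypothetical organ extent along z
conversion_mgy_per_ma = 0.05              # placeholder scanner/organ-specific factor
organ_dose_mgy = dose_field[organ_slices].mean() * conversion_mgy_per_ma
print(f"Estimated organ dose: {organ_dose_mgy:.1f} mGy")
```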
Chapter 5 aims to implement the organ dose-estimation framework in clinical practice by developing an organ dose-monitoring program built on commercial software (Dose Watch, GE Healthcare, Waukesha, WI). The first phase of the study focused on body CT examinations; the patient's major body landmarks were extracted from the scout image in order to match each clinical patient to a computational phantom in the library. The organ dose coefficients were estimated based on the CT protocol and patient size, as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.
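A minimal sketch of the phantom-matching step, assuming the match is made on effective diameter alone; the phantom names and diameters in `phantom_library` are invented, and a deployed system would also use age, gender, and landmark information.

```python
# Invented phantom library: name -> effective diameter in cm (illustrative values).
phantom_library = {
    "phantom_adult_S": 24.0,
    "phantom_adult_M": 30.0,
    "phantom_adult_L": 36.0,
}

def match_phantom(patient_effective_diameter_cm):
    """Pick the library phantom whose effective diameter is closest to the patient's."""
    return min(phantom_library,
               key=lambda name: abs(phantom_library[name] - patient_effective_diameter_cm))

print(match_phantom(31.5))  # -> "phantom_adult_M"
```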
With effective methods to predict and monitor organ dose, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. It outlines the method developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.
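One common way to estimate quantum noise directly from a clinical image is to take the distribution of local standard deviations over soft-tissue voxels and use its mode as the noise level; the sketch below follows that idea with illustrative HU thresholds and kernel size, and is not necessarily the exact method developed in the chapter.

```python
import numpy as np
from scipy.ndimage import generic_filter

def estimate_image_noise(ct_slice_hu, soft_tissue_range=(0, 100), kernel_size=5):
    """Estimate quantum noise as the mode of local standard deviations
    computed over soft-tissue voxels (illustrative thresholds)."""
    local_sd = generic_filter(ct_slice_hu.astype(float), np.std, size=kernel_size)
    mask = (ct_slice_hu >= soft_tissue_range[0]) & (ct_slice_hu <= soft_tissue_range[1])
    counts, edges = np.histogram(local_sd[mask], bins=100)
    mode_bin = counts.argmax()
    return 0.5 * (edges[mode_bin] + edges[mode_bin + 1])

# Synthetic demo: uniform 40 HU background with ~12 HU Gaussian noise.
rng = np.random.default_rng(0)
demo = 40 + rng.normal(0, 12, size=(128, 128))
print(f"Estimated noise: {estimate_image_noise(demo):.1f} HU")
```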
Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
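A minimal sketch of image-domain noise addition, assuming quantum-noise-dominated images whose variance scales inversely with dose; the spatially varying, texture-aware aspects of a practical technique are omitted, and the noise level and dose fraction are example inputs.

```python
import numpy as np

def simulate_reduced_dose(image_hu, full_dose_noise_hu, dose_fraction, seed=0):
    """Add zero-mean Gaussian noise so the simulated image has the variance
    expected at `dose_fraction` of the original dose.

    Under a quantum-noise assumption:
        sigma_total^2 = sigma_full^2 / dose_fraction
        sigma_added^2 = sigma_full^2 * (1/dose_fraction - 1)
    """
    rng = np.random.default_rng(seed)
    sigma_added = full_dose_noise_hu * np.sqrt(1.0 / dose_fraction - 1.0)
    return image_hu + rng.normal(0.0, sigma_added, size=image_hu.shape)

# Example usage (clinical_image would be an existing HU array):
# half_dose_image = simulate_reduced_dose(clinical_image, full_dose_noise_hu=10.0,
#                                         dose_fraction=0.5)
```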
Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.
Abstract:
In public venues, crowd size is a key indicator of crowd safety and stability. Crowding levels can be detected using holistic image features; however, this requires a large amount of training data to capture the wide variations in crowd distribution. If a crowd counting algorithm is to be deployed across a large number of cameras, such a large and burdensome training requirement is far from ideal. In this paper we propose an approach that uses local features to count the number of people in each foreground blob segment, so that the total crowd estimate is the sum of the group sizes. This results in an approach that is scalable to crowd volumes not seen in the training data and can be trained on a very small data set. Because a local approach is used, the proposed algorithm can easily be used to estimate crowd density throughout different regions of the scene and can be used in a multi-camera environment. A unique localised approach to ground truth annotation, which reduces the required training data, is also presented, since a localised approach to crowd counting has different training requirements to a holistic one. Testing on a large pedestrian database compares the proposed technique to existing holistic techniques and demonstrates improved accuracy, as well as superior performance when test conditions are unseen in the training set or when a minimal training set is used.
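A minimal sketch of the blob-wise counting idea, assuming a binary foreground mask is already available; the features (area and a crude perimeter) and the linear model weights are simplified stand-ins for the local features and regression used in the paper.

```python
import numpy as np
from scipy import ndimage

def count_crowd(foreground_mask, weights=(0.002, 0.01), bias=0.0):
    """Estimate crowd size as the sum of per-blob group-size estimates.

    Each foreground blob is described by simple local features (area and a
    crude perimeter) and mapped to a group size by a linear model with
    placeholder weights; the total count is the sum over blobs.
    """
    labels, n_blobs = ndimage.label(foreground_mask)
    total = 0.0
    for blob_id in range(1, n_blobs + 1):
        blob = labels == blob_id
        area = blob.sum()
        perimeter = area - ndimage.binary_erosion(blob).sum()  # boundary pixels
        group_size = weights[0] * area + weights[1] * perimeter + bias
        total += max(group_size, 0.0)
    return total
```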
Abstract:
Automated crowd counting allows excessive crowding to be detected immediately, without the need for constant human surveillance. Current crowd counting systems are location specific, and for these systems to function properly they must be trained on a large amount of data specific to the target location. As such, configuring multiple systems for use is a tedious and time-consuming exercise. We propose a scene invariant crowd counting system which can easily be deployed at a location different from the one where it was trained. This is achieved using a global scaling factor to relate crowd sizes from one scene to another. We demonstrate that a crowd counting system trained at one viewpoint can achieve a correct classification rate of 90% at a different viewpoint.
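A minimal sketch of applying a single global scaling factor between scenes; deriving the factor from the ratio of a person's expected pixel area in each view (obtainable from camera calibration) is an illustrative assumption, not necessarily the quantity used in the paper.

```python
def scale_count(count_from_trained_model, mean_person_area_train_px, mean_person_area_target_px):
    """Relate a crowd-size estimate between scenes with one global factor.

    The factor here is the ratio of expected per-person pixel areas in the
    training and target views; this choice is illustrative only.
    """
    global_scale = mean_person_area_train_px / mean_person_area_target_px
    return count_from_trained_model * global_scale
```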
Abstract:
This paper investigates the links between various approaches to managing equity and diversity and their effectiveness in changing measures of the inclusivity of women in organisations, as a means of auditing and mapping managing diversity outcomes in Australia. The authors argue that managing diversity is more than changing systems and counting numbers; it is also about managing the substantive culture change required to achieve inclusivity, particularly intercultural inclusivity. Research in one sector of the education industry that investigated the competency skills required for culture change is offered as a model or guide for understanding and reflecting upon intercultural competency and its sequential development.
Abstract:
In public venues, crowd size is a key indicator of crowd safety and stability. In this paper we propose a crowd counting algorithm that uses tracking and local features to count the number of people in each group as represented by a foreground blob segment, so that the total crowd estimate is the sum of the group sizes. Tracking is employed to improve the robustness of the estimate, by analysing the history of each group, including splitting and merging events. A simplified ground truth annotation strategy results in an approach with minimal setup requirements that is highly accurate.
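A minimal sketch of how a group's track history could be used to stabilise its size estimate, with very simplified merge handling; the class, its median smoothing, and the merge rule are illustrative assumptions rather than the paper's exact bookkeeping.

```python
class TrackedGroup:
    """Per-track group-size estimate smoothed over the group's history (illustrative)."""

    def __init__(self, initial_estimate):
        self.history = [initial_estimate]

    def update(self, frame_estimate):
        """Add the current frame's estimate and return a running median,
        damping single-frame segmentation errors."""
        self.history.append(frame_estimate)
        return sorted(self.history)[len(self.history) // 2]

    @staticmethod
    def merge(group_a, group_b):
        """When two blobs merge, start a new track whose size is the sum of both."""
        return TrackedGroup(group_a.history[-1] + group_b.history[-1])
```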
Abstract:
There is considerable public, political and professional debate about the need for additional hospital beds in Australia. However, there is no clarity regarding the definition, meaning and significance of hospital bed counts. Relative to population, bed availability in Australia has declined by 14.6% over the past 15 years (22.9% for public hospital beds). This decline is partly offset by reductions in length of stay and changes to models of care; however, the net effect is increased bed occupancy, which has in turn resulted in system-wide congestion. Future bed capability needs to be better planned to meet growing demands while continuing the trend toward more efficient use. Future planning should be based in part on weighted bed capability matched to need.
Abstract:
Railway signaling facilitates two main functions, namely train detection and train control, in order to maintain safe separation among trains. Track circuits are the most commonly used means of train detection, based on simple open/closed circuit principles, and the subsequent adoption of axle counters further allows trains to be detected under adverse track conditions. However, with electrification and power-electronics traction drive systems, aggravated by electromagnetic interference in the vicinity of the signaling system, railway engineers often find unstable or even faulty operation of track circuits and axle counting systems, which inevitably jeopardizes the safe operation of trains. A new means of train detection, completely free from electromagnetic interference, is therefore required for the modern railway signaling system. This paper presents a novel optical fiber sensor signaling system. The sensor operation, field setup, axle detection solution set, and test results of an installation in a trial system on a busy suburban railway line are given.
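A minimal sketch of one way axle passages could be counted from a fiber-optic strain signal, by detecting prominent peaks; the threshold and minimum peak separation are invented parameters, and a deployed system would also need direction discrimination and per-sensor calibration.

```python
import numpy as np
from scipy.signal import find_peaks

def count_axles(strain_signal, threshold=0.5, min_separation_samples=50):
    """Count axle passages as prominent peaks in a fiber-optic strain signal.

    The height threshold and minimum peak spacing are illustrative; real
    deployments would calibrate these per sensor and train speed.
    """
    peaks, _ = find_peaks(np.asarray(strain_signal),
                          height=threshold,
                          distance=min_separation_samples)
    return len(peaks)
```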
Abstract:
This paper describes a scene invariant crowd counting algorithm that uses local features to monitor crowd size. Unlike previous algorithms that require each camera to be trained separately, the proposed method uses camera calibration to scale between viewpoints, allowing a system to be trained and tested on different scenes. A pre-trained system could therefore be used as a turn-key solution for crowd counting across a wide range of environments. The use of local features allows the proposed algorithm to calculate local occupancy statistics, and Gaussian process regression is used to scale to conditions which are unseen in the training data, also providing confidence intervals for the crowd size estimate. A new crowd counting database is introduced to the computer vision community to enable a wider evaluation over multiple scenes, and the proposed algorithm is tested on seven datasets to demonstrate scene invariance and high accuracy. To the authors' knowledge this is the first system of its kind due to its ability to scale between different scenes and viewpoints.
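A minimal sketch of Gaussian process regression for crowd counting with confidence intervals, using scikit-learn; the single scaled occupancy feature, the kernel choice, and the training pairs are invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy training data: (scaled occupancy feature, crowd count) pairs; invented numbers.
X_train = np.array([[0.05], [0.10], [0.20], [0.35], [0.50]])
y_train = np.array([3, 7, 15, 26, 40])

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, y_train)

# Predict for an unseen (e.g. denser) condition and report a confidence interval.
X_test = np.array([[0.65]])
mean, std = gp.predict(X_test, return_std=True)
print(f"Estimated count: {mean[0]:.1f} +/- {1.96 * std[0]:.1f} (95% interval)")
```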
Abstract:
Early-number is a rich fabric of interconnected ideas that is often misunderstood and thus taught in ways that do not lead to rich understanding. In this presentation, a visual language is used to describe the organisation of this domain of knowledge. This visual language is based upon Piaget’s notion of reflective abstraction (Dubinsky, 1991; Piaget, 1977/2001), and thus captures the epistemological associations that link the problems, concepts and representations of the domain. The constructs of this visual language are introduced and then applied to the early-number domain. The introduction to this visual language may prompt reflection upon its suitability and significance to the description of other domains of knowledge. Through such a process of analysis and description, the visual language may serve as a scaffold for enhancing pedagogical content knowledge and thus ultimately improve learning outcomes.