996 results for COMMERCIAL DETECTOR ARRAYS


Relevance:

30.00%

Publisher:

Abstract:

The research reported in this thesis dealt with single crystals of thallium bromide grown for gamma-ray detector applications. The crystals were used to fabricate room-temperature gamma-ray detectors. Routinely produced TlBr detectors are often of poor quality. This study therefore concentrated on developing the manufacturing processes for TlBr detectors, together with characterisation methods that can be used to optimise TlBr purity and crystal quality. The processes of concern were TlBr raw-material purification, crystal growth, annealing and detector fabrication. The study focused on single crystals of TlBr grown from material purified by a hydrothermal recrystallisation method. In addition, hydrothermal conditions for synthesis, recrystallisation, crystal growth and annealing of TlBr crystals were examined. The final manufacturing process presented in this thesis starts from TlBr material purified by the Bridgman method. The material is then hydrothermally recrystallised in pure water. A travelling molten zone (TMZ) method is used for additional purification of the recrystallised product and then for the final crystal growth. Subsequent processing is similar to that described in the literature. In this thesis, the literature on improving the quality of TlBr material and crystals and on detector performance is reviewed. Ageing aspects as well as the influence of different factors (temperature, time, electrode material and so on) on detector stability are considered and examined. The results of the process development are summarised and discussed. This thesis shows a considerable improvement in the charge carrier properties of a detector due to the additional purification by hydrothermal recrystallisation. As an example, a thick (4 mm) TlBr detector produced by the process was fabricated and found to operate successfully in gamma-ray detection, confirming the validity of the proposed purification and technological steps. However, for a complete improvement of detector performance, further developments in crystal growth are required. The detector manufacturing process was optimised through characterisation of the material and crystals using methods such as X-ray diffraction (XRD), polarisation microscopy, high-resolution inductively coupled plasma mass spectrometry (HR-ICP-MS), Fourier transform infrared (FTIR) and ultraviolet-visible (UV-Vis) spectroscopy, field emission scanning electron microscopy (FESEM) with energy-dispersive X-ray spectroscopy (EDS), current-voltage (I-V) and capacitance-voltage (C-V) characterisation, and photoconductivity, as well as direct detector examination.
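
As an illustrative aside, not taken from the thesis, the standard single-carrier Hecht relation makes the link between the charge carrier properties mentioned above and the performance of a thick planar detector explicit:

```latex
% Single-carrier Hecht relation: charge collection efficiency \eta for an
% interaction at depth x_0 in a planar detector of thickness d under bias V,
% with carrier mobility-lifetime product \mu\tau (drift length \lambda = \mu\tau V / d).
\eta(x_0) = \frac{\mu\tau V}{d^{2}}
            \left[1 - \exp\!\left(-\frac{(d - x_0)\,d}{\mu\tau V}\right)\right]
```

For a 4 mm thick crystal the drift length μτV/d must approach the full thickness before charge collection becomes efficient, which is why the purification-driven increase in μτ is the relevant figure of merit.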

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents a highly sensitive genome-wide search method for recessive mutations. The method is suitable for distantly related samples that are divided into phenotype positives and negatives. High-throughput genotyping arrays are used to identify and compare homozygous regions between the cohorts. The method is demonstrated by comparing colorectal cancer patients against unaffected references. The objective is to find homozygous regions and alleles that are more common in cancer patients. We have designed and implemented software tools to automate the data analysis from genotypes to lists of candidate genes and their properties. The programs have been designed around a pipeline architecture that allows their integration with other programs, such as biological databases and copy-number analysis tools. The integration of the tools is crucial, as the genome-wide analysis of cohort differences produces many candidate regions not related to the studied phenotype. CohortComparator is a genotype comparison tool that detects homozygous regions and compares their loci and allele constitutions between two sets of samples. The data are visualised in chromosome-specific graphs illustrating the homozygous regions and alleles of each sample. Genomic regions that may harbour recessive mutations are emphasised with different colours, and a scoring scheme is given for these regions. The detection of homozygous regions, the cohort comparisons and the result annotations all rest on assumptions, many of which have been parameterised in our programs. The effect of these parameters and the suitable scope of the methods have been evaluated. Samples genotyped at different resolutions can be balanced using genotype estimates derived from their haplotypes, allowing them to be used within the same study.
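
A minimal sketch of the core comparison step, assuming biallelic genotype calls coded 0/1/2; the data model, thresholds and function names below are illustrative and not taken from CohortComparator:

```python
from typing import Dict, List, Tuple

def homozygous_runs(genotypes: List[int], min_len: int = 50) -> List[Tuple[int, int]]:
    """Return (start, end) index ranges of runs of homozygous calls (0 or 2)."""
    runs, start = [], None
    for i, g in enumerate(genotypes):
        if g in (0, 2):                      # homozygous call extends the run
            start = i if start is None else start
        else:                                # heterozygous call breaks the run
            if start is not None and i - start >= min_len:
                runs.append((start, i))
            start = None
    if start is not None and len(genotypes) - start >= min_len:
        runs.append((start, len(genotypes)))
    return runs

def region_frequency(cohort: Dict[str, List[int]], marker: int) -> float:
    """Fraction of samples whose homozygous runs cover a given marker index."""
    covered = sum(any(s <= marker < e for s, e in homozygous_runs(g))
                  for g in cohort.values())
    return covered / len(cohort)

# Candidate regions are markers covered more often in cases than in controls:
# score = region_frequency(cases, m) - region_frequency(controls, m)
```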

Relevance:

30.00%

Publisher:

Abstract:

This thesis describes methods for the reliable identification of hadronically decaying tau leptons in the search for the heavy Higgs bosons of the minimal supersymmetric standard model of particle physics (MSSM). The identification of hadronic tau lepton decays, i.e. tau jets, is applied to the gg->bbH, H->tautau and gg->tbH+, H+->taunu processes to be searched for in the CMS experiment at the CERN Large Hadron Collider. Of all the event selections applied in these final states, tau-jet identification is the single most important criterion for separating the tiny Higgs boson signal from the large number of background events. Tau-jet identification is studied with methods based on a signature of low charged-track multiplicity, the containment of the decay products within a narrow cone, an isolated electromagnetic energy deposition, a non-zero tau lepton flight path, the absence of electrons, muons and neutral hadrons in the decay signature, and a relatively small tau lepton mass compared to the mass of most hadrons. Furthermore, in the H+->taunu channel, helicity correlations are exploited to separate signal tau jets from those originating from W->taunu decays. Since many of these identification methods rely on the reconstruction of charged-particle tracks, the systematic uncertainties resulting from the mechanical tolerances of the tracking sensor positions are estimated with care. The tau-jet identification and other standard selection methods are applied to the search for the heavy neutral and charged Higgs bosons in the H->tautau and H+->taunu decay channels. For the H+->taunu channel, the tau-jet identification is redone and optimized with a more recent and more detailed event simulation than previously used in the CMS experiment. Both decay channels are found to be very promising for the discovery of the heavy MSSM Higgs bosons. The Higgs boson(s), whose existence has not yet been experimentally verified, are a part of the standard model and its most popular extensions; they are a manifestation of the mechanism that breaks the electroweak symmetry and generates masses for particles. Since the H->tautau and H+->taunu decay channels are important for the discovery of the Higgs bosons in a large region of the permitted parameter space, the analysis described in this thesis serves as a probe of the microcosm of particles and their interactions at energy scales beyond the standard model of particle physics.
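
A hedged sketch of the kind of cut-based selection described above; the variable names and threshold values are illustrative placeholders, not the thesis's actual working points:

```python
from dataclasses import dataclass

@dataclass
class TauJetCandidate:
    n_signal_tracks: int        # charged tracks in the narrow signal cone
    n_isolation_tracks: int     # charged tracks in the surrounding isolation annulus
    isolation_em_et: float      # electromagnetic E_T in the isolation annulus [GeV]
    leading_track_pt: float     # p_T of the leading track [GeV]
    flight_path_sig: float      # transverse flight-path significance
    visible_mass: float         # invariant mass of the visible decay products [GeV]

def passes_tau_id(c: TauJetCandidate) -> bool:
    """Illustrative cut-based hadronic-tau identification (placeholder thresholds)."""
    return (c.n_signal_tracks in (1, 3)          # 1- or 3-prong decay
            and c.n_isolation_tracks == 0        # track isolation
            and c.isolation_em_et < 1.5          # electromagnetic isolation
            and c.leading_track_pt > 20.0        # hard leading track
            and c.flight_path_sig > 0.0          # non-zero flight path
            and c.visible_mass < 1.8)            # below the tau mass
```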

Relevance:

30.00%

Publisher:

Abstract:

Silicon strip detectors are fast, cost-effective and have an excellent spatial resolution. They are widely used in high-energy physics experiments. Modern experiments, such as those at the LHC, impose harsh operating conditions on the detectors. The high radiation doses eventually cause the detectors to fail as a result of excessive radiation damage. This has led to a need to study radiation tolerance using various techniques and, at the same time, to a need to operate sensors approaching the end of their lifetimes. The goal of this work is to demonstrate that novel detectors can survive the environment that is foreseen for future high-energy physics experiments. To reach this goal, measurement apparatuses are built, the devices are used to measure the properties of irradiated detectors, the measurement data are analyzed, and conclusions are drawn. Three measurement apparatuses built as part of this work are described: two telescopes measuring tracks in a particle accelerator beam and one telescope measuring the tracks of cosmic particles. The telescopes comprise layers of reference detectors providing the reference track, slots for the devices under test, the supporting mechanics, electronics, software and the trigger system. All three devices work, and the differences between them are discussed. The reconstruction of the reference tracks and the analysis of the device under test are presented. Traditionally, silicon detectors have produced a very clear response to the particles being measured; for detectors nearing the end of their lifetimes, this is no longer true. A new method that exploits the reference tracks to form clusters is presented. The method provides less biased results than the traditional analysis, especially when studying the response of heavily irradiated detectors. Means of avoiding false results when demonstrating the particle-finding capabilities of a detector are also discussed. The devices and analysis methods are primarily used to study strip detectors made of magnetic Czochralski silicon. The detectors studied were irradiated to various fluences prior to measurement. The results show that magnetic Czochralski silicon has good radiation tolerance and is suitable for future high-energy physics experiments.
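
A minimal sketch of track-seeded clustering of the kind described above, assuming a strip pitch, a reference-track intercept and per-strip signals are available; the window size and names are illustrative, not the thesis's actual algorithm:

```python
from typing import Sequence

def track_seeded_cluster(strip_signals: Sequence[float],
                         track_intercept_um: float,
                         pitch_um: float = 80.0,
                         half_window: int = 2) -> float:
    """Sum the charge in a fixed window of strips around the strip pointed to
    by the reference track, instead of seeding on the highest strip.
    This avoids the bias towards upward noise fluctuations that a
    threshold-seeded cluster search suffers from in heavily irradiated sensors."""
    seed = int(round(track_intercept_um / pitch_um))
    lo = max(0, seed - half_window)
    hi = min(len(strip_signals), seed + half_window + 1)
    return sum(strip_signals[lo:hi])

# Example: strip signals in ADC counts, track predicted to cross near strip 4
charge = track_seeded_cluster([2.0, 1.5, 3.0, 9.0, 14.0, 7.5, 2.0, 1.0], 350.0)
```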

Relevance:

30.00%

Publisher:

Abstract:

A robust, compact optical measurement unit for motion measurement in micro-cantilever arrays enables the development of portable micro-cantilever sensors. This paper reports an optical-beam-deflection-based system for measuring the deflection of micro-cantilevers in an array; the system employs a single laser source, a single detector, and a resonating reflector to scan the measurement laser across the array. A strategy is also proposed to extract the deflection of individual cantilevers from the acquired data. The proposed system and measurement strategy are experimentally evaluated and demonstrated to measure the motion of multiple cantilevers in an array. (C) 2015 AIP Publishing LLC.
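
One way such data could be demultiplexed is sketched below, under the assumption that the reflector's scan phase is recorded alongside the detector signal; the binning scheme and names are assumptions, not the paper's actual strategy:

```python
import numpy as np

def deflection_per_cantilever(signal: np.ndarray,
                              scan_phase: np.ndarray,
                              n_cantilevers: int) -> np.ndarray:
    """Average the detector signal within equal phase bins of the resonant scan;
    each bin corresponds to the dwell time on one cantilever, so the bin mean
    tracks that cantilever's deflection."""
    edges = np.linspace(0.0, 2 * np.pi, n_cantilevers + 1)
    bins = np.digitize(np.mod(scan_phase, 2 * np.pi), edges) - 1
    return np.array([signal[bins == k].mean() if np.any(bins == k) else np.nan
                     for k in range(n_cantilevers)])
```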

Relevance:

30.00%

Publisher:

Abstract:

A substantial amount of important scientific information is contained within astronomical data at submillimeter and far-infrared (FIR) wavelengths, including information regarding dusty galaxies, galaxy clusters, and star-forming regions; however, these are among the least-explored wavelengths in astronomy because of the technological difficulties involved in such research. Over the past 20 years, considerable effort has been devoted to developing submillimeter- and millimeter-wavelength astronomical instruments and telescopes.

The number of detectors is an important property of such instruments and is the subject of the current study. Future telescopes will require as many as hundreds of thousands of detectors to meet the necessary requirements in terms of the field of view, scan speed, and resolution. A large pixel count is one benefit of the development of multiplexable detectors that use kinetic inductance detector (KID) technology.

This dissertation presents the development of a KID-based instrument including a portion of the millimeter-wave bandpass filters and all aspects of the readout electronics, which together enabled one of the largest detector counts achieved to date in submillimeter-/millimeter-wavelength imaging arrays: a total of 2304 detectors. The work presented in this dissertation has been implemented in the MUltiwavelength Submillimeter Inductance Camera (MUSIC), a new instrument for the Caltech Submillimeter Observatory (CSO).
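
As background, the frequency-domain multiplexing that such detector counts rely on can be sketched as follows; this is a schematic illustration of reading out many KIDs on a single line, not MUSIC's actual readout design, and the tone frequencies, sample rate and names are assumed:

```python
import numpy as np

fs = 500e3                              # readout sample rate [Hz] (assumed)
tones = np.array([10e3, 25e3, 40e3])    # probe-tone offsets, one per KID (assumed)
t = np.arange(4096) / fs

# Each KID imprints an amplitude/phase change on its own probe tone; the comb
# of tones is summed onto a single transmission line.
comb = sum(np.exp(2j * np.pi * f * t) for f in tones)

def channelize(timestream: np.ndarray) -> np.ndarray:
    """Recover the complex response of each KID by demodulating its tone."""
    return np.array([np.mean(timestream * np.exp(-2j * np.pi * f * t))
                     for f in tones])

responses = channelize(comb)            # ~1+0j per tone for the idle comb
```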

Relevance:

30.00%

Publisher:

Abstract:

Fast-moving arrays of periodic sub-diffraction-limit pits were dynamically read out via a silver thin film. The mechanism of the dynamic readout is analysed and discussed in detail, both experimentally and theoretically. The analysis and experiment show that, in the course of readout, surface plasmons are excited at the silver/air interface by the focused laser beam and amplified by the silver thin film. The surface plasmons are transmitted to the substrate/silver interface with a large enhancement. The surface waves at the substrate/silver interface are scattered by the sinusoidal pits of sub-diffraction-limit size. The scattered waves are collected by a converging lens and guided into the detector for the readout.
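
For reference, the standard surface plasmon dispersion relation (textbook electromagnetics, not a result derived in the paper) shows why the plasmon wave vector exceeds that of a free-space wave in the adjacent dielectric, which is the momentum mismatch that the focused beam and thin-film geometry must overcome:

```latex
% SPP wave vector at a metal/dielectric interface (\varepsilon_m: metal permittivity,
% \varepsilon_d: dielectric permittivity, k_0 = 2\pi/\lambda_0):
k_{\mathrm{SPP}} = k_0 \sqrt{\frac{\varepsilon_m \varepsilon_d}{\varepsilon_m + \varepsilon_d}}
\;>\; k_0 \sqrt{\varepsilon_d}
\qquad (\mathrm{Re}\,\varepsilon_m < -\varepsilon_d)
```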

Relevance:

30.00%

Publisher:

Abstract:

This paper reports the development of solar-blind aluminum gallium nitride (AlGaN) 128x128 UV focal plane arrays (FPAs). The back-illuminated hybrid FPA architecture consists of a 128x128 back-illuminated AlGaN p-i-n detector array that is bump-bonded to a matching 128x128 silicon CMOS readout integrated circuit (ROIC) chip. The 128x128 p-i-n photodiode arrays have cut-on and cut-off wavelengths of 233 and 258 nm, with a sharp reduction in response to UVB (280-320 nm) light. Several examples of solar-blind images are provided. This solar-blind FPA has much better application prospects.

Relevance:

30.00%

Publisher:

Abstract:

A miniaturized fluorescence detector using a high-brightness light-emitting diode as the excitation source was constructed and evaluated. A windowless flow cell based on a commercial four-port cross fitting was designed to reduce the stray-light level and to eliminate the need for optical alignment. The observed detection limit for fluorescein was 26 nM in the continuous-flow mode. The reproducibility of the response was evaluated by the FIA method and found to be within 2% RSD.

Relevance:

30.00%

Publisher:

Abstract:

We report on the successful fabrication of arrays of switchable nanocapacitors made by harnessing the self-assembly of materials. The structures are composed of arrays of 20-40 nm diameter Pt nanowires, spaced 50-100 nm apart, electrodeposited through nanoporous alumina onto a thin-film lower electrode on a silicon wafer. A thin-film ferroelectric (both barium titanate (BTO) and lead zirconate titanate (PZT)) has been deposited on top of the nanowire array, followed by the deposition of thin-film upper electrodes. The PZT nanocapacitors exhibit hysteresis loops with substantial remanent polarizations; the BTO nanocapacitors, although inferior in switching performance, show low-field dielectric behavior comparable to that of conventional thin-film heterostructures. While the registration is not yet sufficient for commercial RAM production, this is nevertheless an embryonic form of the highest-density hard-wired FRAM capacitor array reported to date, and it compares favorably with atomic force microscopy read-write densities.

Relevance:

30.00%

Publisher:

Abstract:

The initial part of this paper reviews the early challenges (c. 1980) in achieving real-time silicon implementations of DSP computations. In particular, it discusses research on application-specific architectures, including bit-level systolic circuits, that led to important advances in achieving the DSP performance levels then required. These were many orders of magnitude greater than those achievable using programmable (including early DSP) processors, and were demonstrated through the design of commercial digital correlator and digital filter chips. As is discussed, an important challenge was the application of these concepts to recursive computations such as occur, for example, in Infinite Impulse Response (IIR) filters. An important breakthrough was to show how fine-grained pipelining can be used if arithmetic is performed most significant bit (msb) first. This can be achieved using redundant number systems, including carry-save arithmetic. This research and its practical benefits were again demonstrated through a number of novel IIR filter chip designs which, at the time, exhibited performance much greater than previous solutions. The architectural insights gained, coupled with the regular nature of many DSP and video processing computations, also provided the foundation for new methods for the rapid design and synthesis of complex DSP System-on-Chip (SoC) Intellectual Property (IP) cores. This included the creation of a wide portfolio of commercial SoC video compression cores (MPEG2, MPEG4, H.264) for very high performance applications ranging from cell phones to High Definition TV (HDTV). The work provided the foundation for systematic methodologies, tools and design flows, including high-level design optimizations based on "algorithmic engineering", and also led to the creation of the Abhainn tool environment for the design of complex heterogeneous DSP platforms comprising processors and multiple FPGAs. The paper concludes with a discussion of the problems faced by designers in developing complex DSP systems using current SoC technology. © 2007 Springer Science+Business Media, LLC.
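
A small, self-contained illustration of the carry-save (redundant) arithmetic mentioned above, written in Python rather than hardware; the function names are illustrative, and the sketch only shows why additions become carry-propagation-free, not the circuit-level msb-first formulation used in the chips:

```python
def carry_save_add(a: int, b: int, c: int) -> tuple[int, int]:
    """3:2 compression: three operands in, a (sum, carry) pair out.
    Each bit position is handled independently, so no carry has to ripple
    across the word before the next pipeline stage can start -- the property
    that enables fine-grained pipelining of recursive filter arithmetic."""
    s = a ^ b ^ c                          # bitwise sum without carries
    carry = (a & b) | (b & c) | (a & c)    # carries generated at each position
    return s, carry << 1                   # carry feeds the next bit position

def resolve(s: int, carry: int) -> int:
    """Collapse the redundant (sum, carry) pair with one conventional add."""
    return s + carry

# Accumulate three numbers without intermediate carry propagation:
s, c = carry_save_add(13, 22, 7)
assert resolve(s, c) == 42
```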

Relevance:

30.00%

Publisher:

Abstract:

Large-scale commercial exploitation of wave energy is certain to require the deployment of wave energy converters (WECs) in arrays, creating 'WEC farms'. An understanding of the hydrodynamic interactions in such arrays is essential for determining optimum layouts of WECs, as well as for calculating the area of ocean that the farms will require. It is equally important to consider the potential impact of wave farms on the local and distal wave climates and on coastal processes; a poor understanding of the resulting environmental impact may hamper progress, as it would make planning consents more difficult to obtain. It is therefore clear that an understanding of the interactions between WECs within a farm is vital for the continued development of the wave energy industry.

To support WEC farm design, a range of different numerical models have been developed, with both wave phase-resolving and wave phase-averaging models now available. Phase-resolving methods are primarily based on potential flow models and include semi-analytical techniques, boundary element methods and methods involving the mild-slope equations. Phase-averaging methods are all based around spectral wave models, with supra-grid and sub-grid wave farm models available as alternative implementations.

The aims, underlying principles, strengths, weaknesses and obtained results of the main numerical methods currently used for modelling wave energy converter arrays are described in this paper, using a common framework. This allows a qualitative comparative analysis of the different methods to be performed at the end of the paper, including consideration of the conditions under which the models may be applied, the output of the models, and the relationship between array size and computational effort. Guidance is also presented for developers on the most suitable numerical method to use for given aspects of WEC farm design. For instance, certain models are more suitable for studying near-field effects, whilst others are preferable for investigating the far-field effects of WEC farms. Furthermore, the analysis presented in this paper identifies areas in which the numerical modelling of WEC arrays is relatively weak and thus highlights those in which future developments are required.

Relevance:

30.00%

Publisher:

Abstract:

The work presented describes the development and evaluation of two flow-injection analysis (FIA) systems for the automated determination of carbaryl in spiked natural waters and commercial formulations. Samples are injected directly into the system, where they are subjected to alkaline hydrolysis, forming 1-naphthol. This product is readily oxidised at a glassy carbon electrode. The electrochemical behaviour of 1-naphthol allows the development of an FIA system with an amperometric detector in which 1-naphthol determination, and thus measurement of carbaryl concentration, can be performed. A linear response over the range 1.0×10⁻⁷ to 1.0×10⁻⁵ mol L⁻¹, with a sampling rate of 80 samples h⁻¹, was recorded; the detection limit was 1.0×10⁻⁸ mol L⁻¹. A second FIA manifold was constructed with a colorimetric detector. This methodology was based on the coupling of 1-naphthol with phenylhydrazine hydrochloride to produce a red complex with maximum absorbance at 495 nm. The response was linear from 1.0×10⁻⁵ to 1.5×10⁻³ mol L⁻¹ with a detection limit of 1.0×10⁻⁶ mol L⁻¹, and the sample throughput was about 60 samples h⁻¹. The results provided by the two FIA methodologies were validated by comparison with results from a standard HPLC-UV technique; the relative deviation was <5%. Recovery trials were also carried out, and the values obtained ranged from 97.0 to 102.0% for both methods. The repeatability (RSD, %) of 12 consecutive injections of one sample was 0.8% and 1.6% for the amperometric and colorimetric systems, respectively.
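
The figures of merit quoted above (linear range, recovery and RSD) follow standard analytical definitions; a small sketch of how they might be computed from raw calibration and replicate data, with illustrative data and names not taken from the paper:

```python
import numpy as np

def calibration_slope(conc: np.ndarray, signal: np.ndarray) -> tuple[float, float]:
    """Least-squares slope and intercept of the calibration line."""
    slope, intercept = np.polyfit(conc, signal, 1)
    return slope, intercept

def relative_std_dev(replicates: np.ndarray) -> float:
    """Repeatability expressed as percent relative standard deviation (RSD, %)."""
    return 100.0 * replicates.std(ddof=1) / replicates.mean()

def recovery_percent(found: float, spiked: float) -> float:
    """Recovery of a spiked amount, in percent."""
    return 100.0 * found / spiked

# Example: twelve replicate injections of one sample (arbitrary signal units)
rsd = relative_std_dev(np.array([0.501, 0.498, 0.502, 0.499, 0.500, 0.503,
                                 0.497, 0.501, 0.500, 0.499, 0.502, 0.498]))
```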

Relevance:

30.00%

Publisher:

Abstract:

Optical spectroscopy is a very important measurement technique with high potential for numerous applications in industry and science. Low-cost, miniaturised spectrometers, for example, are needed especially for modern sensor systems and "smart personal environments", which are used above all in energy technology, metrology, safety and security, IT and medical technology. Among all approaches to miniaturised spectrometers, one of the most attractive is the Fabry-Pérot filter. In this approach, the combination of a Fabry-Pérot (FP) filter array and a detector array can function as a microspectrometer. Each detector is assigned to a single filter and detects the very narrow band of wavelengths transmitted by that filter. An array of FP filters is used in which each filter selects a different spectral line, the spectral position of each wavelength band being defined by the individual cavity height of the filter. The arrays were developed with filter sizes limited only by the dimensions of the individual detectors in the array. However, existing Fabry-Pérot filter microspectrometers require complicated fabrication steps for structuring the 3D filter cavities with their different heights, which are not cost-effective for industrial production. To reduce the cost while retaining the outstanding advantages of the FP filter structure, a new method for fabricating miniaturised FP filters by means of NanoImprint technology is developed and presented. In this case, the multiple cavity-fabrication steps are replaced by a single step that exploits the high vertical resolution of 3D NanoImprint technology. Because NanoImprint technology is used, the FP-filter-based miniaturised spectrometer is called a nanospectrometer. A static nanospectrometer consists of a static FP filter array on top of a detector array (see Fig. 1). Each FP filter in the array consists of a lower distributed Bragg reflector (DBR), a resonance cavity and an upper DBR. The upper and lower DBRs are identical and consist of periodically alternating thin dielectric layers of high- and low-refractive-index materials. The optical thickness of each dielectric thin-film layer in the DBR corresponds to a quarter of the design wavelength. Each FP filter is assigned to a defined area of the detector array; this area can contain individual detector elements or groups of them. The lateral geometries of the cavities are therefore built to match the corresponding detectors, and the lateral and vertical dimensions of the cavities are defined precisely by 3D NanoImprint technology. The cavities differ by only a few nanometres in the vertical direction. The precision of the cavities in the vertical direction is an important factor that determines the accuracy of the spectral position and the transmittance of each filter's transmission line.
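
For reference (standard thin-film optics, not taken from the thesis), the cavity height d sets the transmitted wavelength through the Fabry-Pérot resonance condition, with the DBR layers at a quarter of the design wavelength:

```latex
% Transmission maxima of order m for a cavity of refractive index n_c and height d;
% each DBR layer (indices n_H, n_L) has a quarter-wave optical thickness.
m\,\lambda_m = 2\, n_c\, d, \qquad
n_H d_H = n_L d_L = \frac{\lambda_{\mathrm{design}}}{4}
```

A difference of a few nanometres in d therefore shifts the transmission line by roughly Δλ ≈ 2 n_c Δd / m, which is why the vertical precision of the imprinted cavities governs the spectral accuracy of the nanospectrometer.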