980 results for peaks over threshold
Abstract:
Abnormally high price spikes in spot electricity markets represent a significant risk to market participants. A literature has therefore developed that focuses on forecasting the probability of such spike events, moving beyond simply forecasting the level of the price. Many univariate time series models have been proposed to deal with spikes within an individual market region. This paper is the first to develop a multivariate self-exciting point process model for dealing with price spikes across connected regions of the Australian National Electricity Market. The influence of the physical infrastructure connecting the regions on the transmission of spikes is examined. It is found that spikes are transmitted between the regions, that the size of spikes is influenced by the available transmission capacity, and that improved risk estimates are obtained when inter-regional linkages are taken into account.
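The self-exciting mechanism described here is essentially a multivariate Hawkes process: each region's spike intensity jumps after a spike in any connected region and then decays. Below is a minimal two-region sketch in Python with an exponential kernel; the parameter values (mu, alpha, beta) are purely illustrative, not the paper's estimates.

```python
import numpy as np

# Illustrative parameters for a two-region exponential-kernel Hawkes model:
# mu[i] = baseline spike rate of region i, alpha[i, j] = jump in region i's
# intensity after a spike in region j, beta = decay rate of that excitation.
mu = np.array([0.02, 0.03])
alpha = np.array([[0.30, 0.10],
                  [0.20, 0.40]])
beta = 0.5

def intensity(t, events):
    """Conditional spike intensity of each region at time t,
    given past spike times per region (list of arrays)."""
    lam = mu.copy()
    for j, times in enumerate(events):
        past = times[times < t]
        lam = lam + alpha[:, j] * np.sum(np.exp(-beta * (t - past)))
    return lam

# Region 0 spiked at t = 1 and t = 4, region 1 at t = 2: both intensities
# are elevated at t = 5, reflecting self- and cross-excitation.
events = [np.array([1.0, 4.0]), np.array([2.0])]
print(intensity(5.0, events))
```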
Abstract:
Atmospheric Rivers (ARs), narrow plumes of enhanced moisture transport in the lower troposphere, are a key synoptic feature behind winter flooding in midlatitude regions. This article develops an algorithm which uses the spatial and temporal extent of the vertically integrated horizontal water vapor transport to detect persistent ARs (lasting 18 h or longer) in five atmospheric reanalysis products. Applying the algorithm in the vicinity of Great Britain during the winter half-years of 1980–2010 (31 years) shows generally good agreement of AR occurrence between the products. The relationship between persistent AR occurrences and winter floods is demonstrated using winter peaks-over-threshold (POT) floods (with on average one flood peak per winter). In the nine study basins, the proportion of winter POT-1 floods associated with persistent ARs ranged from approximately 40 to 80%. A Poisson regression model was used to describe the relationship between the number of ARs in the winter half-years and large-scale climate variability. A significant negative dependence was found between AR totals and the Scandinavian Pattern (SCP), with a greater frequency of ARs associated with lower SCP values.
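The final step, regressing winter AR counts on a climate index, is a standard Poisson GLM with a log link. Here is a sketch using statsmodels on synthetic stand-in data; the SCP series and the coefficients are invented for illustration, not taken from the paper.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in data: one SCP value and one AR count per winter,
# 31 winters as in the study period. The negative slope (-0.4) mimics
# the reported inverse relationship.
rng = np.random.default_rng(0)
scp = rng.normal(0.0, 1.0, size=31)
ar_counts = rng.poisson(np.exp(2.0 - 0.4 * scp))    # log-linear Poisson mean

X = sm.add_constant(scp)                            # intercept + SCP covariate
fit = sm.GLM(ar_counts, X, family=sm.families.Poisson()).fit()
print(fit.params)   # fitted slope < 0: fewer ARs in high-SCP winters
```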
Abstract:
This work assesses the frequency of extreme values (EVs) of daily rainfall in the city of São Paulo, Brazil, over the period 1933-2005, based on the peaks-over-threshold (POT) and Generalized Pareto Distribution (GPD) approach. Usually, a GPD model is fitted to a sample of POT values selected with a constant threshold. In this work, however, we use time-dependent thresholds, composed of relatively large p quantiles (for example, p = 0.97) of the daily rainfall amounts computed from all available data. Samples of POT values were extracted for several values of p. Four different GPD models (GPD-1, GPD-2, GPD-3, and GPD-4) were fitted to each of these samples by the maximum likelihood (ML) method. The shape parameter was assumed constant across the four models, but time-varying covariates were incorporated into the scale parameter of GPD-2, GPD-3, and GPD-4, describing an annual cycle in GPD-2, a linear trend in GPD-3, and both an annual cycle and a linear trend in GPD-4. GPD-1, with constant scale and shape parameters, is the simplest model. To identify the best of the four models we used the rescaled Akaike Information Criterion (AIC) with second-order bias correction. This criterion selects GPD-3 as the best model, i.e. the one with a positive linear trend in the scale parameter. The slope of this trend is significant against the null hypothesis of no trend at about the 98% confidence level. The non-parametric Mann-Kendall test also showed a positive trend in the annual frequency of excesses over high thresholds, with a p-value that is virtually zero. There is therefore strong evidence that high quantiles of daily rainfall in the city of São Paulo have been increasing in magnitude and frequency over time. For example, the 0.99 quantile of the daily rainfall amount increased by about 40 mm between 1933 and 2005. Copyright © 2008 Royal Meteorological Society
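The GPD-3 model can be reproduced in a few lines: write the GPD log-likelihood with a log-linear scale sigma(t) = exp(a + b·t), maximize it numerically, and compare models with the bias-corrected AIC. A sketch on synthetic excesses follows; the data, threshold handling, and covariate scaling are simplified placeholders, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genpareto

# Synthetic excesses with a log-linear trend in the GPD scale: event times
# rescaled to [0, 1], shape xi = 0.15, scale sigma(t) = exp(2 + 0.5 t).
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 1.0, size=200))
y = genpareto.rvs(0.15, scale=np.exp(2.0 + 0.5 * t), random_state=rng)

def nll(params):
    """Negative log-likelihood of GPD excesses with sigma(t) = exp(a + b t)."""
    a, b, xi = params
    if abs(xi) < 1e-8:        # exponential limit handled separately in practice
        return np.inf
    sigma = np.exp(a + b * t)
    z = 1.0 + xi * y / sigma
    if np.any(z <= 0.0):
        return np.inf         # outside the GPD support
    return np.sum(np.log(sigma) + (1.0 + 1.0 / xi) * np.log(z))

res = minimize(nll, x0=[2.0, 0.0, 0.1], method="Nelder-Mead")
k, n = 3, len(y)
aicc = 2.0 * res.fun + 2.0 * k + 2.0 * k * (k + 1) / (n - k - 1)
print(res.x, aicc)            # b > 0 signals an upward trend in the scale
```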
Abstract:
This thesis develops and tests methodologies for improving the estimation of the design and safety-check floods used in assessing the hydrological safety of dams. The first part addresses the estimation of maximum-flow frequency curves and their extrapolation to high return periods. This question is highly relevant, since increasingly demanding hydrological safety standards for dams require very high design return periods whose estimation carries great uncertainty; it is therefore important to bring every available technique to bear on reducing that uncertainty. It is equally important to select the statistical model (distribution function and fitting procedure) so that it both describes the observed sample well and predicts high-return-period quantiles robustly. Studies were carried out at the national scale to determine the regionalisation scheme that gives the best results for annual maximum flows given the hydrological characteristics of Spanish basins and the amount of available data. The methodology starts from the identification of homogeneous regions, whose boundaries were drawn from the physiographic and climatic characteristics of the basins and the variability of their statistics, and whose homogeneity was subsequently tested. The statistical model for annual maximum flows that performs best across the different zones of peninsular Spain was then selected, both for describing the sample data and for extrapolating to the highest return periods. Model selection was based, among other things, on the synthetic generation of data series by Monte Carlo simulation and the statistical analysis of the results obtained by fitting distribution functions to these series under different hypotheses.
The second part addresses the relationship between peak flow and volume and the definition of design hydrographs based on it, which can be of great importance for dams with large reservoir volumes; the hydrological procedures commonly applied do not account for the statistical dependence between the two variables. A simple and robust procedure was developed to characterise this dependence, representing the joint distribution function of peak flow and volume through the marginal distribution of the peak and the conditional distribution of the volume given the peak, the latter modelled by a log-normal distribution fitted with a regional procedure. Its practical application is proposed through a probabilistic calculation based on the stochastic generation of a large number of hydrographs. Applying this procedure to dam hydrological safety requires a correct interpretation of the return period concept for bivariate hydrological variables, and such an interpretation is proposed: the return period is understood as the inverse of the probability of exceeding a given reservoir level. When this return period is related to the hydrological variables, the design hydrograph is no longer a single hydrograph but a family of hydrographs that produce the same maximum reservoir level, represented by a curve in the peak-volume plane. This family of design hydrographs depends on the dam itself, with the peak-volume curves varying with, for example, the reservoir volume or the spillway length. The procedure is illustrated through two case studies.
Finally, the thesis addresses the estimation of seasonal floods, which is fundamental when defining reservoir operation and can also matter when assessing the hydrological safety of existing dams. Seasonal flood estimation is complex and not yet fully resolved, and the procedures commonly used can present problems. Estimation based on partial-duration series, i.e. the peaks-over-threshold method, can be a valid alternative that resolves these problems in cases where the floods in the different seasons are generated by the same type of event. A study was carried out to verify whether the hypothesis of statistical homogeneity of flood flows across seasons is adequate in Spain. The seasonal periods for which the analysis is most appropriate were also examined, a question of great relevance for guaranteeing correct results.
In addition, a simple procedure was developed to set the data selection threshold in a way that guarantees the independence of the selected data, one of the main practical difficulties of the partial-duration-series technique. The practical application of seasonal frequency curves also requires a correct interpretation of the seasonal return period; a criterion is proposed for determining seasonal return periods consistently with the annual return period and with an adequate distribution of probability among the seasons. Finally, a procedure for estimating seasonal flows is presented and illustrated with a case study.
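One practical point raised above, selecting peaks over a threshold so that the retained events are independent, is often handled with a minimum-separation rule. The following sketch implements such a declustering criterion; it is a common textbook rule, not necessarily the procedure developed in the thesis.

```python
import numpy as np

def pot_peaks(flow, threshold, min_gap=7):
    """Indices of peaks over `threshold`, merging exceedances that occur
    within `min_gap` time steps into a single (largest) peak."""
    peaks, last_exc = [], None
    for i, q in enumerate(flow):
        if q <= threshold:
            continue
        if last_exc is not None and i - last_exc < min_gap:
            if q > flow[peaks[-1]]:
                peaks[-1] = i          # same cluster: keep the larger peak
        else:
            peaks.append(i)            # far enough from the last exceedance
        last_exc = i
    return np.array(peaks)

daily_flow = np.random.default_rng(2).gamma(2.0, 50.0, size=3650)  # 10 synthetic years
idx = pot_peaks(daily_flow, threshold=np.quantile(daily_flow, 0.98))
print(len(idx), "independent peaks retained")
```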
Abstract:
The thermodynamics of the binding of D-galactopyranoside (Gal), 2-acetamido-2-deoxygalactopyranoside (GalNAc), methyl-alpha-D-galactopyranoside, and methyl-beta-D-galactopyranoside to the basic agglutinin from winged bean (WBAI) in 0.02 M sodium phosphate and 0.15 M sodium chloride buffer have been investigated from 298.15 to 333.15 K by titration calorimetry and at the denaturation temperature by differential scanning calorimetry (DSC). WBAI is a dimer with two binding sites. The titration calorimetry yielded single-site binding constants ranging from (0.56 ± 0.14) × 10³ M⁻¹ for Gal at 323.15 K to (7.2 ± 0.5) × 10³ M⁻¹ for GalNAc at 298.15 K, and binding enthalpies ranging from −28.0 ± 2.0 kJ mol⁻¹ for GalNAc at 298.15 K to −14.3 ± 0.1 kJ mol⁻¹ for methyl-beta-D-galactopyranoside at 322.65 K. The denaturation transition consisted of two overlapping peaks over the pH range 5.6-7.4. Fits of the differential scanning calorimetry data to a two-state transition model showed that the low-temperature transition (341.6 ± 0.4 K at pH 7.4) consisted of two domains unfolding as a single entity, while the higher-temperature transition (347.8 ± 0.6 K at pH 7.4) is of the remaining WBAI dimer unfolding into two monomers. Both transitions shift to higher temperatures and higher calorimetric enthalpies with increase in added ligand concentration at pH 7.4. Analysis of the temperature increase as a function of added ligand concentration suggests that one ligand binds to the two domains unfolding at 341.6 ± 0.6 K and one ligand binds to the domain unfolding at 347.8 ± 0.6 K.
Abstract:
The Medipix pixel detectors were developed by the Medipix collaboration and allow real-time imaging. Their active surface of nearly 2 cm² is divided into 65536 pixels of 55 × 55 μm² each. Sixteen of these detectors, Medipix2 devices, are installed in the ATLAS experiment at CERN to measure in real time the radiation fields produced by hadron collisions at the LHC. They will soon be replaced by Timepix detectors, the most recent version, which directly measure the energy deposited in each pixel in time-over-threshold (TOT) mode when a particle crosses the semiconductor. To improve the analysis of the data collected with these Timepix detectors in ATLAS, a Geant4 simulation project was initiated by John Idárraga at the Université de Montréal. Within the ATLAS experiment, this simulation can be used together with Athena, the ATLAS analysis framework, and the full simulation of the ATLAS detector. Under the effect of their mutual repulsion, the charge carriers created in the semiconductor diffuse towards adjacent pixels, depositing energy in several pixels through charge sharing. An effective model of this lateral diffusion was developed to reproduce the phenomenon without solving a charge-transport differential equation. This model, together with the Timepix TOT mode, which measures the energy deposited in the detector, was included in the simulation to adequately reproduce the tracks left by particles in the semiconductor. The detector was first calibrated pixel by pixel using Am and Ba sources. The simulation was then validated against measurements of interactions of protons and α particles produced at the Tandem van de Graaff generator of the Laboratoire René-J.-A.-Lévesque at the Université de Montréal.
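The effective lateral-diffusion model described above can be sketched by spreading each deposit over the pixel matrix with a 2-D Gaussian, whose integral over a pixel follows from the error function. In the sketch below the pitch matches the Timepix pixel, but the diffusion width SIGMA is an assumed value, not the one tuned in the thesis.

```python
import numpy as np
from scipy.special import erf

PITCH = 55.0   # pixel pitch in micrometres (Timepix)
SIGMA = 20.0   # effective lateral diffusion width (assumption)

def pixel_fraction(x0, y0, ix, iy):
    """Fraction of a charge cloud centred at (x0, y0) collected by pixel (ix, iy),
    modelling charge sharing as a 2-D Gaussian instead of solving drift-diffusion."""
    def frac_1d(c, i):
        lo, hi = i * PITCH, (i + 1) * PITCH
        s = SIGMA * np.sqrt(2.0)
        return 0.5 * (erf((hi - c) / s) - erf((lo - c) / s))
    return frac_1d(x0, ix) * frac_1d(y0, iy)

# A hit near a pixel corner shares its charge among the four neighbours.
total = sum(pixel_fraction(54.0, 54.0, ix, iy) for ix in (0, 1) for iy in (0, 1))
print(total)   # ~1.0: almost all charge lands in the 2x2 neighbourhood
```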
Abstract:
A statistical methodology is proposed and tested for the analysis of extreme values of atmospheric wave activity at mid-latitudes. The adopted methods are the classical block-maximum and peaks-over-threshold approaches, based respectively on the generalized extreme value (GEV) distribution and the generalized Pareto distribution (GPD). Time series of the ‘Wave Activity Index’ (WAI) and the ‘Baroclinic Activity Index’ (BAI) are computed from simulations of the general circulation model ECHAM4.6, which is run under perpetual January conditions. Both the GEV and the GPD analyses indicate that the extremes of WAI and BAI are Weibull distributed, which corresponds to distributions with an upper bound. However, a remarkably large variability is found in the tails of such distributions; distinct simulations carried out under the same experimental setup provide appreciably different estimates of the 200-yr WAI return level. The consequences of this phenomenon for applications of the methodology to climate change studies are discussed. The atmospheric configurations characteristic of the maxima and minima of WAI and BAI are also examined.
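For the POT branch of this methodology, a return level such as the 200-yr one follows directly from the fitted GPD parameters and the threshold exceedance rate. Here is a sketch with scipy on a synthetic WAI-like series; the series, threshold choice, and season length are placeholders.

```python
import numpy as np
from scipy.stats import genpareto

# Synthetic WAI-like series: 200 simulated winters of 90 daily values.
rng = np.random.default_rng(3)
wai = rng.gumbel(10.0, 2.0, size=200 * 90)

u = np.quantile(wai, 0.99)                        # high threshold
excess = wai[wai > u] - u
xi, _, sigma = genpareto.fit(excess, floc=0.0)    # fit GPD to the excesses
lam = excess.size / 200.0                         # exceedances per winter

# T-winter return level: z_T = u + (sigma / xi) * ((lam * T)**xi - 1)
z200 = u + sigma / xi * ((lam * 200.0) ** xi - 1.0)
print(xi, z200)   # xi < 0 would indicate a bounded (Weibull-type) tail
```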
Abstract:
A frequently used diagram summarizing the annual- and global-mean energy budget of the earth and atmosphere indicates that the irradiance reaching the top of the atmosphere from the surface, through the midinfrared atmospheric window, is 40 W m−2; this can be compared to the total outgoing longwave radiation (OLR) of about 235 W m−2. The value of 40 W m−2 was estimated in an ad hoc manner. A more detailed calculation of this component, termed here the surface transmitted irradiance (STI), is presented, using a line-by-line radiation code and 3D climatologies of temperature, humidity, cloudiness, etc. No assumption is made as to the wavelengths at which radiation from the surface can reach the top of the atmosphere. The role of the water vapor continuum is highlighted. In clear skies, if the continuum is excluded, the global- and annual-mean STI is calculated to be about 100 W m−2 with a broad maximum throughout the tropics and subtropics. When the continuum is included, the clear-sky STI is reduced to 66 W m−2, with a distinctly different geographic distribution, with a minimum in the tropics and local peaks over subtropical deserts. The inclusion of clouds reduces the STI to about 22 W m−2. The actual value is likely somewhat smaller due to processes neglected here, and an STI value of 20 W m−2 (with an estimated uncertainty of about ±20%) is suggested to be much more realistic than the previous estimate of 40 W m−2. This indicates that less than one-tenth of the OLR originates directly from the surface.
Abstract:
Possible changes in the frequency and intensity of windstorms under future climate conditions during the 21st century are investigated based on an ECHAM5 GCM multi-scenario ensemble. The intensity of a storm is quantified by the associated estimated loss derived using an empirical model. The geographical focus is ‘Core Europe’, which comprises countries of Western Europe. Possible changes in losses are analysed by comparing ECHAM5 GCM data for recent (20C, 1960 to 2000) and future climate conditions (B1, A1B, A2; 2060 to 2100), each with three ensemble members. Changes are quantified using both rank statistics and return periods (RPs), the latter estimated by fitting an extreme value distribution to potential storm losses with the peaks-over-threshold method. The estimated losses for ECHAM5 20C and reanalysis events show similar statistical features in terms of return periods. Under future climate conditions, all climate scenarios show an increase in both the frequency and the magnitude of potential windstorm losses for Core Europe. Future losses double the highest ECHAM5 20C loss are identified for some countries. While positive changes in ranking are significant for many countries and multiple scenarios, significantly shorter RPs are mostly found under the A2 scenario for return levels corresponding to 20-yr losses or less. The emergence time of the statistically significant changes in loss varies from 2027 to 2100. These results imply an increased risk of windstorm-associated losses, largely attributable to changes in the meteorological severity of the events. Additionally, factors such as changes in the cyclone paths and in the location of the wind signatures relative to highly populated areas are also important in explaining the changes in estimated losses.
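The rank-statistics side of such a comparison can be illustrated with plotting positions, which assign an empirical return period to each ranked loss. A sketch on synthetic annual losses follows; the Weibull plotting position used here is one common choice among several.

```python
import numpy as np

# 40 synthetic annual maximum losses, sorted from largest to smallest.
losses = np.sort(np.random.default_rng(4).pareto(2.5, size=40))[::-1]
n = len(losses)
ranks = np.arange(1, n + 1)          # rank 1 = largest loss
T = (n + 1) / ranks                  # empirical return period in years

for loss, rp in list(zip(losses, T))[:3]:
    print(f"loss {loss:6.2f} -> roughly a {rp:4.1f}-yr event")
```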
Abstract:
Flow injection analysis (FIA) with amperometric detection was employed for the quantification of N-acetylcysteine (NAC) in pharmaceutical formulations, utilizing an ordinary pyrolytic graphite (OPG) electrode modified with cobalt phthalocyanine (CoPc). Cyclic voltammetry was used in preliminary studies to establish the best conditions for NAC analysis. In FIA-amperometric experiments the OPG-CoPc electrode exhibited sharp and reproducible current peaks over a wide linear working range (5.0 × 10⁻⁵ to 1.0 × 10⁻³ mol L⁻¹) in 0.1 mol L⁻¹ NaOH solution. High sensitivity (130 mA mol⁻¹ cm²) and a low detection limit (9.0 × 10⁻⁷ mol L⁻¹) were achieved using the sensor. The repeatability (R.S.D.) for 13 successive flow injections of a solution containing 5.0 × 10⁻⁴ mol L⁻¹ NAC was 1.1%. The new procedure was applied in analyses of commercial pharmaceutical products and the results were in excellent agreement with those obtained using the official titrimetric method. The proposed amperometric method is highly suitable for quality control analyses of NAC in pharmaceuticals since it is rapid, precise and requires much less work than the recommended titrimetric method. © 2010 Elsevier B.V. All rights reserved.
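The figures of merit quoted above (linear range, sensitivity, detection limit) come from a straight-line calibration. Below is a sketch of that computation with invented peak currents and an assumed blank noise, chosen only so the numbers land near the reported values.

```python
import numpy as np

# Five standards across the reported linear range; currents are invented,
# with a slope near the reported sensitivity so the LOD lands near 9e-7.
conc = np.array([5.0e-5, 1.0e-4, 2.5e-4, 5.0e-4, 1.0e-3])      # mol L^-1
current = 0.130 * conc + np.random.default_rng(7).normal(0.0, 2e-7, 5)

slope, intercept = np.polyfit(conc, current, 1)                 # A L mol^-1
sigma_blank = 4.0e-8                                            # assumed blank noise (A)
lod = 3.0 * sigma_blank / slope                                 # 3-sigma detection limit
print(f"slope = {slope:.3e} A L mol^-1, LOD = {lod:.1e} mol L^-1")
```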
Abstract:
A regional envelope curve (REC) of flood flows summarises the current bound on our experience of extreme floods in a region. RECs are available for most regions of the world. Recent scientific papers introduced a probabilistic interpretation of these curves and formulated an empirical estimator of the recurrence interval T associated with a REC, which, in principle, enables RECs to be used for design purposes in ungauged basins. The aim of this work is twofold. First, it extends the REC concept to extreme rainstorm events by introducing Depth-Duration Envelope Curves (DDECs), defined as the regional upper bound on all record rainfall depths to date for various rainfall durations. Second, it adapts the probabilistic interpretation proposed for RECs to DDECs and assesses the suitability of these curves for estimating the T-year rainfall event associated with a given duration, for large values of T. Probabilistic DDECs are complementary to regional frequency analysis of rainstorms, and their use in combination with a suitable rainfall-runoff model can provide useful indications of the magnitude of extreme floods for gauged and ungauged basins. The study focuses on two national datasets: the peaks-over-threshold (POT) series of rainfall depths with durations of 30 min and 1, 3, 9 and 24 h for 700 Austrian raingauges, and the annual maximum series (AMS) of rainfall depths with durations from 5 min to 24 h collected at 220 raingauges in northern-central Italy. Estimating the recurrence interval of a DDEC requires quantifying the equivalent number of independent data, which in turn is a function of the cross-correlation among the sequences. While quantifying and modelling intersite dependence is straightforward for AMS series, it can be cumbersome for POT series. This paper proposes a possible approach to this problem.
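A DDEC itself is simple to construct once the record depths are assembled: take the regional record per duration and bound the points with a straight line in log-log space. The sketch below uses synthetic records; the bounding rule (a fitted slope with the intercept raised to the binding record) is one plausible choice, not necessarily the authors'.

```python
import numpy as np

durations = np.array([0.5, 1.0, 3.0, 9.0, 24.0])            # hours
rng = np.random.default_rng(5)
# records[s, d]: record rainfall depth (mm) at station s for duration d
records = rng.gamma(4.0, 10.0, size=(700, 5)) * durations ** 0.4
regional_record = records.max(axis=0)                        # one point per duration

# Envelope: depth = a * duration**b, with the intercept lifted so the line
# bounds every regional record (equality at the binding duration).
slope, _ = np.polyfit(np.log(durations), np.log(regional_record), 1)
intercept = np.max(np.log(regional_record) - slope * np.log(durations))
envelope = np.exp(intercept) * durations ** slope
print(envelope >= regional_record - 1e-9)                    # all True
```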
Abstract:
Future experiments in nuclear and particle physics are moving towards the high-luminosity regime in order to access rare processes. In this framework, particle detectors require high rate capability together with excellent timing resolution for precise event reconstruction. The development of dedicated front-end electronics (FEE) for detectors has therefore become increasingly challenging and expensive, and a current trend in R&D is towards flexible FEE that can be easily adapted to a great variety of detectors without impairing the required high performance. This thesis reports on a novel FEE for two different detector types: imaging Cherenkov counters and plastic scintillator arrays. The former requires high sensitivity and precision for the detection of single-photon signals, while the latter is characterised by the slower and larger signals typical of scintillation processes. The FEE design was developed using high-bandwidth preamplifiers and fast discriminators which provide Time-over-Threshold (ToT). The use of discriminators allows for low power consumption, minimal dead time and self-triggering capability, all fundamental for high-rate applications. The output signals of the FEE are read out by a high-precision FPGA-based TDC system. A full characterisation of the analogue signals under realistic conditions showed that the ToT information can be used in a novel way for charge measurements or walk corrections, improving the attainable timing resolution, and detailed laboratory investigations proved the feasibility of the ToT method. The full readout chain was investigated in test experiments at the Mainz Microtron: counting rates of several MHz per channel were achieved, and a timing resolution of better than 100 ps was obtained after a walk correction based on ToT. Ongoing applications to fast time-of-flight counters and future developments of the FEE have also recently been investigated.
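The ToT-based walk correction mentioned above can be sketched as a calibration fit: small pulses cross the discriminator threshold late, so the time offset is modelled as a function of ToT and subtracted. The 1/sqrt(ToT) form below is a common parametrization, not necessarily the one used in this work, and the numbers are synthetic.

```python
import numpy as np

def fit_walk(tot, dt):
    """Least-squares fit of dt = p0 + p1 / sqrt(tot)."""
    A = np.column_stack([np.ones_like(tot), 1.0 / np.sqrt(tot)])
    p, *_ = np.linalg.lstsq(A, dt, rcond=None)
    return p

# Synthetic calibration data: true walk of 50 ps + 300 ps / sqrt(ToT [ns]),
# smeared with 30 ps of jitter.
rng = np.random.default_rng(6)
tot = rng.uniform(5.0, 100.0, size=5000)                           # ns
dt = 0.05 + 0.3 / np.sqrt(tot) + rng.normal(0.0, 0.03, tot.size)   # ns

p = fit_walk(tot, dt)
corrected = dt - (p[0] + p[1] / np.sqrt(tot))
print(np.std(dt), np.std(corrected))   # spread shrinks after the correction
```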
Abstract:
This thesis is focused on experimental work deepening the knowledge of monolithic detector blocks as an alternative to segmented detectors for Positron Emission Tomography (PET). It includes the development, characterisation, commissioning and evaluation of PET demonstrator prototypes built from monolithic blocks of cerium-doped lutetium yttrium orthosilicate (LYSO:Ce) read out by magnetically compatible sensors, both Avalanche Photodiodes (APDs) and Silicon Photomultipliers (SiPMs). The prototypes implemented with APDs were built to validate the viability of a previously simulated high-sensitivity PET prototype named BrainPET. This work describes and characterises the integrated front-end electronics used in these prototypes, together with the readout electronics developed specifically for them. The experimental set-ups for acquiring tomographic PET images and for training the neural-network algorithms used to estimate the positions at which γ photons impinge on the surface of the monolithic blocks are presented. With the BrainPET prototype, satisfactory results were obtained for energy resolution (13% FWHM), spatial precision of the monolithic blocks (~2 mm FWHM), and spatial resolution of the PET image (1.5-1.7 mm FWHM) at the centre of the field of view (FoV); a resolving power of ~2 mm was also demonstrated by simultaneously acquiring images of radioactive sources placed at known separations. However, two important limitations were detected. First, there was a lack of flexibility in working with an Application Specific Integrated Circuit (ASIC) whose electronic design was commercial rather than in-house, together with the high cost of modifying an ASIC design with such features. Second, the final characterisation of the BrainPET integrated electronics showed a timing resolution with ample room for improvement (~13 ns FWHM). Taking these limitations into account, along with the technological evolution towards SiPM arrays, the knowledge acquired with monolithic blocks was transferred to the newly available sensor technology, and a new front-end strategy was started with the FlexToT ASIC, an in-house ASIC based on a Time over Threshold (ToT) measurement scheme in which the duration of the output pulse is proportional to the deposited energy.
One of the most interesting features of a ToT architecture is that it encodes the amplitude of the analogue input signal in the duration of the output signal, delivering digital pulses directly instead of requiring the processing of analogue amplitudes. With this architecture, Analog to Digital Converters (ADCs) are replaced by Time to Digital Converters (TDCs), which can be implemented straightforwardly in Field Programmable Gate Arrays (FPGAs), reducing the power consumption and the complexity of the design. A new SiPM-based demonstrator prototype was built to validate the FlexToT ASIC for monolithic and segmented blocks. The front-end electronics needed to read out the FlexToT ASIC were designed and characterised, evaluating their linearity and dynamic range, their behaviour in the presence of noise, and the differential nonlinearity of the TDCs implemented in the FPGA. Furthermore, the electronics presented in this work can operate at high count rates and discriminate between different scintillators for phoswich applications. The FlexToT ASIC provides excellent coincidence timing resolution for events in the 511 keV photopeak (128 ps FWHM), overcoming the timing limitations of the BrainPET prototype, while the energy resolution obtained with monolithic blocks read out by FlexToT ASICs is 15.4% FWHM at 511 keV. Finally, good results were obtained for the PET image quality and the resolving power of the FlexToT demonstrator, with spatial resolutions of about 1.4 mm FWHM at the centre of the FoV.
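Since the FlexToT pulse duration is proportional to the deposited energy, mapping ToT to keV reduces to a linear calibration anchored at a known line such as the 511 keV photopeak. A minimal sketch follows; the calibration constants are invented for illustration.

```python
def tot_to_energy(tot_ns,
                  tot_photopeak_ns=180.0,   # invented calibration point
                  tot_offset_ns=20.0,       # invented zero-energy intercept
                  e_photopeak_kev=511.0):
    """Linear ToT -> energy map anchored at the 511 keV photopeak."""
    gain = e_photopeak_kev / (tot_photopeak_ns - tot_offset_ns)
    return gain * (tot_ns - tot_offset_ns)

print(tot_to_energy(180.0))   # 511.0 keV at the photopeak, by construction
print(tot_to_energy(100.0))   # a smaller pulse under the same calibration
```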
Abstract:
Protective/suppressive major histocompatibility complex (MHC) class II alleles have been identified in humans and mice where they exert a disease-protective and immunosuppressive effect. Various modes of action have been proposed, among them differential expression of MHC class II genes in different types of antigen-presenting cells impacting on the T helper type 1 (Th1)–Th2 balance. To test this possibility, the expression of H-2 molecules from the four haplotypes H-2b, H-2d, H-2k, and H-2q was determined on bone marrow-derived macrophages (BMDMs) and splenic B cells. The I-Ab and I-Ek molecules, both well characterized as protective/suppressive, are expressed at a high level on almost all CD11b+ BMDMs for 5–8 days, after which expression slowly declines. In contrast, I-Ad, I-Ak, and I-Aq expression is lower, peaks over a shorter period, and declines more rapidly. No differential expression could be detected on B cells. In addition, the differential MHC class II expression found on macrophages skews the cytokine response of T cells as shown by an in vitro restimulation assay with BMDMs as antigen-presenting cells. The results indicate that macrophages of the protective/suppressive haplotypes express MHC class II molecules at a high level and exert Th1 bias, whereas low-level expression favors a Th2 response. We suggest that the extent of expression of the class II gene gates the back signal from T cells and in this way controls the activity of macrophages. This effect mediated by polymorphic nonexon segments of MHC class II genes may play a role in determining disease susceptibility in humans and mice.