760 results for Positron annihilation


Relevance: 20.00%

Publisher:

Abstract:

The completion of the third-order QCD corrections to the inclusive top-pair production cross section near threshold demonstrates that the strong dynamics is under control at the few percent level. In this paper we consider the effects of the Higgs boson on the cross section and, for the first time, combine the third-order QCD result with the third-order P-wave, the leading QED and the leading non-resonant contributions. We study the size of the different effects and investigate the sensitivity of the cross section to variations of the top-quark Yukawa coupling due to possible new physics effects.

Relevance: 20.00%

Publisher:

Abstract:

In the present review, we provide an overview of the involvement of metabotropic glutamate receptor 5 (mGluR5) activity and density in pathological anxiety, mood disorders and addiction. Specifically, we describe mGluR5 studies in humans that employed Positron Emission Tomography (PET) and combine these findings with preclinical animal research. This combined view of different methodological approaches, from basic neurobiology to human studies, may give a more comprehensive and clinically relevant picture of mGluR5 function in mental health than preclinical data alone. We also review the current research data on mGluR5 along the Research Domain Criteria (RDoC). Firstly, we found evidence of abnormal glutamate activity related to the positive and negative valence systems, which suggests that antagonistic mGluR5 intervention has prominent anti-addictive, anti-depressive and anxiolytic effects. Secondly, there is evidence that mGluR5 plays an important role in systems for social functioning and in the response to social stress. Finally, mGluR5's role in sleep homeostasis suggests that this glutamate receptor may also be relevant to RDoC's arousal and modulatory systems domain. Glutamate was previously investigated mostly in non-human studies; however, initial human clinical PET research now also supports the hypothesis that, by mediating brain excitability, neuroplasticity and social cognition, abnormal metabotropic glutamate activity might predispose individuals to a broad range of psychiatric problems.

Relevance: 20.00%

Publisher:

Abstract:

AIM To evaluate the diagnostic value (sensitivity, specificity) of positron emission mammography (PEM) using the maximum PEM uptake value (PUVmax). PATIENTS, METHODS In a single-site, non-interventional study, 108 patients (107 women, 1 man) with a total of 151 suspected lesions were scanned with a PEM Flex Solo II (Naviscan) at 90 min p.i. with 3.5 MBq 18F-FDG per kg of body weight. In this ROI (region of interest)-based analysis, the maximum PEM uptake value (PUVmax) was determined in lesions, i.e. malignant tumours (PUVmax tumour) and benign lesions (PUVmax normal breast), and also in healthy tissue on the contralateral side (PUVmax contralateral breast), and these values were compared. In addition, the ratios PUVmax tumour / PUVmax contralateral breast and PUVmax normal breast / PUVmax contralateral breast were compared. The image data were interpreted independently by two experienced nuclear medicine physicians and compared with histology in cases of suspected carcinoma. RESULTS Based on the criterion PUVmax > 1.9, 31 of the 151 lesions in the patient cohort were found to be malignant (21%). A mean PUVmax tumour of 3.78 ± 2.47 was measured in malignant tumours, while a mean PUVmax normal breast of 1.17 ± 0.37 was found in the glandular tissue of the healthy breast, a statistically significant difference (p < 0.001). Similarly, the mean ratio between tumour and healthy glandular tissue in breast cancer patients (3.15 ± 1.58) was significantly higher than the corresponding ratio for benign lesions (1.17 ± 0.41, p < 0.001). CONCLUSION PEM is capable of differentiating breast tumours from benign lesions with 100% sensitivity and a high specificity of 96% when a threshold of PUVmax > 1.9 is applied.
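The diagnostic figures quoted in the conclusion follow from a simple threshold rule on PUVmax. The sketch below applies such a rule to hypothetical lesion data and computes sensitivity and specificity against a histology label; all values in the example are invented, and the cut-off of 1.9 is the only number taken from the abstract.

```python
def threshold_performance(puv_max, malignant, cutoff=1.9):
    """Classify lesions as PEM-positive when PUVmax > cutoff and compute
    sensitivity/specificity against the histology reference (True = malignant)."""
    tp = sum(p > cutoff and m for p, m in zip(puv_max, malignant))
    fn = sum(p <= cutoff and m for p, m in zip(puv_max, malignant))
    tn = sum(p <= cutoff and not m for p, m in zip(puv_max, malignant))
    fp = sum(p > cutoff and not m for p, m in zip(puv_max, malignant))
    return tp / (tp + fn), tn / (tn + fp)

# Invented example lesions: (PUVmax, malignant on histology?)
puv  = [3.5, 2.4, 5.1, 1.2, 1.0, 1.8, 2.1, 0.9]
hist = [True, True, True, False, False, False, False, False]
sens, spec = threshold_performance(puv, hist)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```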

Relevance: 20.00%

Publisher:

Abstract:

PURPOSE Our main objective was to prospectively determine the prognostic value of [(18)F]fluorodeoxyglucose positron emission tomography/computed tomography (PET/CT) after two cycles of rituximab plus cyclophosphamide, doxorubicin, vincristine, and prednisone given every 14 days (R-CHOP-14) under standardized treatment and PET evaluation criteria. PATIENTS AND METHODS Patients with any stage of diffuse large B-cell lymphoma were treated with six cycles of R-CHOP-14 followed by two cycles of rituximab. PET/CT examinations were performed at baseline, after two cycles (and after four cycles if the patient was PET-positive after two cycles), and at the end of treatment. PET/CT examinations were evaluated locally and by central review. The primary end point was event-free survival at 2 years (2-year EFS). RESULTS Median age of the 138 evaluable patients was 58.5 years with a WHO performance status of 0, 1, or 2 in 56%, 36%, or 8% of the patients, respectively. By local assessment, 83 PET/CT scans (60%) were reported as positive and 55 (40%) as negative after two cycles of R-CHOP-14. Two-year EFS was significantly shorter for PET-positive compared with PET-negative patients (48% v 74%; P = .004). Overall survival at 2 years was not significantly different, with 88% for PET-positive versus 91% for PET-negative patients (P = .46). By using central review and the Deauville criteria, 2-year EFS was 41% versus 76% (P < .001) for patients who had interim PET/CT scans after two cycles of R-CHOP-14 and 24% versus 72% (P < .001) for patients who had PET/CT scans at the end of treatment. CONCLUSION Our results confirmed that an interim PET/CT scan has limited prognostic value in patients with diffuse large B-cell lymphoma homogeneously treated with six cycles of R-CHOP-14 in a large prospective trial. At this point, interim PET/CT scanning is not ready for clinical use to guide treatment decisions in individual patients.

Relevance: 20.00%

Publisher:

Abstract:

CONTEXT Radiolabelled choline positron emission tomography has changed the management of prostate cancer patients. However, newly emerging radiopharmaceutical agents, such as radiolabelled prostate-specific membrane antigen, and promising new hybrid imaging devices will bring new challenges to the diagnostic field. OBJECTIVE The continuous evolution of nuclear medicine has improved the detection of recurrent prostate cancer (PCa), particularly of distant metastases. New horizons have been opened for radiolabelled choline positron emission tomography (PET)/computed tomography (CT) as a guide for salvage therapy or for the assessment of systemic therapies. In addition, new tracers and imaging tools have recently been tested, providing important information for the management of PCa patients. Herein we discuss: (1) the available evidence in the literature on radiolabelled choline PET and its recent indications, (2) the role of alternative radiopharmaceutical agents, and (3) the advantages of a recent hybrid imaging device (PET/magnetic resonance imaging) in PCa. EVIDENCE ACQUISITION Data from recently published (2010-2015) original articles concerning the role of choline PET/CT, new emerging radiotracers and a new imaging device are analysed. This review is reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. EVIDENCE SYNTHESIS In the restaging phase, the detection rate of choline PET varies between 4% and 97%, depending mainly on the site of recurrence and on prostate-specific antigen (PSA) levels. Both 68gallium (68Ga)-labelled prostate-specific membrane antigen (68Ga-PSMA) and 18F-fluciclovine have been shown to be more accurate than radiolabelled choline PET/CT in the detection of recurrent disease. In particular, 68Ga-PSMA has detection rates of 50% and 68% for PSA levels < 0.5 ng/ml and 0.5-2 ng/ml, respectively. Moreover, 68Ga-PSMA PET/magnetic resonance imaging has demonstrated higher accuracy in detecting PCa than PET/CT. New tracers, such as radiolabelled bombesin or urokinase-type plasminogen activator receptor ligands, are promising, but few data from clinical practice are available to date. CONCLUSIONS Some limitations emerge from the published papers, both for radiolabelled choline PET/CT and for the new radiopharmaceutical agents. Efforts are still needed to enhance the impact of published data in oncology, in particular when new radiopharmaceuticals are introduced into the clinical arena. PATIENT SUMMARY In this review, the authors summarise the latest evidence from clinical practice for the assessment of prostate cancer using nuclear medicine modalities such as positron emission tomography/computed tomography and positron emission tomography/magnetic resonance imaging.

Relevance: 20.00%

Publisher:

Abstract:

In order to evaluate factors regulating substrate metabolism in vivo, positron-emitting radionuclides were used to assess skeletal muscle blood flow and glucose utilization. The potassium analog Rb-82 was used to measure skeletal muscle blood flow, and the glucose analog 18F-2-deoxy-2-fluoro-D-glucose (FDG) was used to examine the kinetics of skeletal muscle glucose transport and phosphorylation.

Skeletal muscle blood flow in New Zealand White rabbits ranged from 1.0 to 70 ml/min/100 g, with the lowest flows occurring under baseline conditions and the highest flows measured immediately after exercise. Elevated plasma glucose had no effect on blood flow, whereas high physiologic to pharmacologic levels of insulin doubled flow as measured by radiolabeled microspheres, although a proportionate increase was not detected with Rb-82. The data suggest that skeletal muscle blood flow can be measured using the positron-emitting K+ analog Rb-82 under low-flow and high-flow conditions, but not when plasma insulin levels are elevated. This may be because insulin indirectly increases the Na+/K+-ATPase activity of the cell through a direct increase in Na+/H+ pump activity. The increased cation pump activity would counteract the normal decrease in extraction seen at higher flows, resulting in an underestimation of flow as measured by Rb-82.

Glucose uptake measured with FDG was analysed using a three-compartment mathematical model describing the rates of transport, counter-transport and phosphorylation of the hexose. The absolute values obtained for the metabolic rate of FDG were an order of magnitude higher than those reported by other investigators. Changes noted in the rate constant for transport (k1) disagreed with the a priori information on the effects of insulin on skeletal muscle hexose transport. Glucose metabolism was, however, found to increase above control levels with administration of insulin and with electrical stimulation. The data indicate that valid measurements of skeletal muscle glucose transport and phosphorylation using the positron-emitting glucose analog FDG require further model refinement and biochemical validation. (Abstract shortened with permission of author.)
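As a point of reference for the kinetic analysis described above, the sketch below integrates a generic three-compartment FDG model (plasma, free tissue tracer, phosphorylated tracer) and evaluates the usual influx macro-parameter K1*k3/(k2+k3). It only illustrates the model class mentioned in the abstract: the rate constants, the plasma input function, the plasma glucose level and the lumped constant are hypothetical values, not those estimated in the study.

```python
import numpy as np
from scipy.integrate import odeint

# Hypothetical rate constants (1/min) for a generic three-compartment FDG model:
# plasma Cp -> free tissue tracer Ce (K1), Ce -> plasma (k2),
# Ce -> phosphorylated Cm (k3), Cm -> Ce (k4).
K1, k2, k3, k4 = 0.10, 0.15, 0.08, 0.01

def cp(t):
    """Assumed plasma input function: a simple decaying bolus (arbitrary units)."""
    return 10.0 * np.exp(-0.1 * t) + 1.0

def model(y, t):
    ce, cm = y
    dce = K1 * cp(t) - (k2 + k3) * ce + k4 * cm
    dcm = k3 * ce - k4 * cm
    return [dce, dcm]

t = np.linspace(0, 90, 901)              # minutes
ce, cm = odeint(model, [0.0, 0.0], t).T  # tissue curves
tissue = ce + cm                         # what the scanner sees (blood volume term ignored)
print(f"tissue activity at 60 min ≈ {tissue[600]:.2f} (arbitrary units)")

# Net uptake macro-parameter and a metabolic rate estimate (hypothetical numbers).
Ki = K1 * k3 / (k2 + k3)     # influx constant
glucose_plasma = 5.0         # mmol/l, assumed
lumped_constant = 1.0        # assumed; relates FDG kinetics to glucose kinetics
MR_glucose = Ki * glucose_plasma / lumped_constant
print(f"Ki = {Ki:.4f} 1/min, estimated metabolic rate = {MR_glucose:.4f} (arbitrary units)")
```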

Relevance: 20.00%

Publisher:

Abstract:

The AEgIS experiment is an interdisciplinary collaboration between atomic, plasma and particle physicists, with the scientific goal of performing the first precision measurement of the Earth's gravitational acceleration on antimatter. The principle of the experiment is as follows: cold antihydrogen atoms are synthesized in a Penning-Malmberg trap, are Stark-accelerated towards a moiré deflectometer, the classical counterpart of an atom interferometer, and annihilate on a position-sensitive detector. Crucial to the success of the experiment is an antihydrogen detector that will be used to demonstrate the production of antihydrogen and also to measure the temperature of the anti-atoms and the creation of a beam. The operating requirements for the detector are very challenging: it must operate at close to 4 K inside a 1 T solenoidal magnetic field and identify the annihilation of the antihydrogen atoms produced during the 1 μs period of antihydrogen production. Our solution, called the FACT detector, is based on a novel multi-layer scintillating fiber tracker with SiPM readout and an off-the-shelf FPGA-based readout system. This talk will present the design of the FACT detector and detail its operation in the context of the AEgIS experiment.

Relevance: 20.00%

Publisher:

Abstract:

The goal of the AEgIS experiment is to measure the gravitational acceleration of antihydrogen – the simplest atom consisting entirely of antimatter – with the ultimate precision of 1%. We plan to verify the Weak Equivalence Principle (WEP), one of the fundamental laws of nature, with an antimatter beam. The experiment consists of a positron accumulator, an antiproton trap and a Stark accelerator in a solenoidal magnetic field to form and accelerate a pulsed beam of antihydrogen atoms towards a free-fall detector. The antihydrogen beam passes through a moiré deflectometer to measure the vertical displacement due to the gravitational force. A position and time sensitive hybrid detector registers the annihilation points of the antihydrogen atoms and their time-of-flight. The detection principle has been successfully tested with antiprotons and a miniature moiré deflectometer coupled to a nuclear emulsion detector.
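For orientation, the deflectometer measurement reduces to a simple kinematic relation. In the standard three-grating moiré geometry (not spelled out in the abstract), with grating separation L and horizontal beam velocity v, the transit time between consecutive gratings is τ = L/v and the gravitational shift of the fringe pattern grows quadratically with τ. A minimal worked form, where L and v are symbols assumed for this sketch rather than quantities quoted above:

```latex
% Vertical displacement accumulated by an antihydrogen atom between gratings
% separated by L and traversed with horizontal velocity v (assumed symbols):
\begin{align}
  \tau &= \frac{L}{v}, \\
  \delta y &= g\,\tau^{2} = g\,\frac{L^{2}}{v^{2}},
\end{align}
% so measuring the fringe shift \delta y together with the time of flight yields g for antimatter.
```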

Relevance: 20.00%

Publisher:

Abstract:

We developed a new FPGA-based method for coincidence detection in positron emission tomography. The method requires few device resources and no specific peripherals in order to resolve coincident digital pulses within a time window of a few nanoseconds. The method has been validated on a low-end Xilinx Spartan-3E and provided coincidence resolutions below 6 ns. This resolution depends directly on the signal propagation properties of the target device and on the maximum available clock frequency; it is therefore expected to improve considerably on higher-end FPGAs.
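The sketch below is a plain software model of the coincidence criterion itself (pairing pulses from two channels whose timestamps differ by less than a window of a few nanoseconds); it says nothing about the FPGA implementation described above, and all names and values in it are illustrative assumptions.

```python
from bisect import bisect_left

def find_coincidences(ts_a, ts_b, window_ns=6.0):
    """Pair events from two detector channels whose timestamps (ns) differ
    by less than window_ns. Both input lists are assumed to be sorted."""
    pairs = []
    for t in ts_a:
        i = bisect_left(ts_b, t - window_ns)
        # scan the few candidates that can fall inside the window
        while i < len(ts_b) and ts_b[i] <= t + window_ns:
            pairs.append((t, ts_b[i]))
            i += 1
    return pairs

# Illustrative timestamps (ns): two true coincidences plus uncorrelated singles.
channel_a = [100.0, 250.0, 400.0, 610.0]
channel_b = [102.5, 180.0, 399.0, 800.0]
print(find_coincidences(channel_a, channel_b))  # [(100.0, 102.5), (400.0, 399.0)]
```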

Relevance: 20.00%

Publisher:

Abstract:

Respiratory motion is a major source of reduced quality in positron emission tomography (PET). In order to minimize its effects, the use of respiratory-synchronized acquisitions, leading to gated frames, has been suggested. Such frames, however, are of low signal-to-noise ratio (SNR) as they contain reduced statistics. Super-resolution (SR) techniques make use of the motion in a sequence of images in order to improve their quality. They aim at enhancing a low-resolution image belonging to a sequence of images representing different views of the same scene. In this work, a maximum a posteriori (MAP) super-resolution algorithm has been implemented and applied to respiratory-gated PET images for motion compensation. An edge-preserving Huber regularization term was used to ensure convergence. Motion fields were recovered using a B-spline based elastic registration algorithm. The performance of the SR algorithm was evaluated on both simulated and clinical datasets by assessing image SNR, as well as the contrast, position and extent of the different lesions. Results were compared with summing the registered synchronized frames on both simulated and clinical datasets. The super-resolution image had higher SNR (by a factor of over 4 on average) and lesion contrast (by a factor of 2) than the single respiratory-synchronized frame using the same reconstruction matrix size. In comparison to the motion-corrected or motion-free images, a similar SNR was obtained, while improvements of up to 20% in the recovered lesion size and contrast were measured. Finally, the recovered lesion locations on the SR images were systematically closer to the true simulated lesion positions. These observations concerning the SNR, lesion contrast and size were confirmed on two clinical datasets included in the study. In conclusion, the use of SR techniques applied to respiratory motion synchronized images leads to motion compensation combined with improved image SNR and contrast, without any increase in the overall acquisition times.
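As a rough illustration of the MAP formulation mentioned above, the sketch below runs a few gradient steps of a penalized least-squares super-resolution problem on 1D signals, with a Huber penalty on finite differences as the edge-preserving term. The forward model (integer shift plus 2x decimation), the step size, the Huber parameter and the regularization weight are all assumptions of this example; the algorithm in the study additionally uses B-spline elastic motion fields and operates on 3D PET frames.

```python
import numpy as np

def shift(x, s):           # circular shift standing in for the recovered motion
    return np.roll(x, s)

def down(x, f=2):          # f-fold downsampling (simple decimation)
    return x[::f]

def up(x, f=2, n=None):    # adjoint of decimation: zero-insertion
    n = n if n is not None else len(x) * f
    y = np.zeros(n)
    y[::f] = x
    return y

def huber_grad(d, delta=0.05):
    """Gradient of the Huber function applied elementwise."""
    return np.where(np.abs(d) <= delta, d, delta * np.sign(d))

def map_sr(frames, shifts, n_hr, n_iter=200, step=0.1, lam=0.5, f=2):
    """Gradient descent on sum_k ||D S_k x - y_k||^2 + lam * Huber(diff(x))."""
    x = np.zeros(n_hr)
    for _ in range(n_iter):
        grad = np.zeros(n_hr)
        for y, s in zip(frames, shifts):
            r = down(shift(x, s), f) - y              # data residual per gated frame
            grad += shift(up(r, f, n_hr), -s)         # adjoint: upsample, shift back
        hg = huber_grad(np.diff(x, append=x[-1]))     # psi(Dx)
        grad += lam * (np.roll(hg, 1) - hg)           # D^T psi(Dx), circular approximation
        x -= step * grad
    return x

# Toy example: a high-resolution "lesion" profile observed as shifted, downsampled frames.
truth = np.zeros(64); truth[28:36] = 1.0
shifts = [0, 1, 2, 3]
frames = [down(shift(truth, s)) for s in shifts]
recon = map_sr(frames, shifts, n_hr=64)
print(np.round(recon[24:40], 2))
```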

Relevance: 20.00%

Publisher:

Abstract:

Ion beam therapy is a valuable method for the treatment of deep-seated and radio-resistant tumors thanks to the favorable depth-dose distribution characterized by the Bragg peak. Hadrontherapy facilities take advantage of the well-defined ion range, resulting in a highly conformal dose to the target volume, while the dose to critical organs is reduced compared with photon therapy. The need to monitor the delivery precision, i.e. the ion range, is unquestionable; different approaches have therefore been investigated, such as the detection of prompt photons or of the annihilation photons from positron-emitting nuclei created during the therapeutic treatment. Based on the measurement of this induced β+ activity, our group has developed several in-beam PET prototypes. The one under test is composed of two planar detector heads, each consisting of four modules with a total active area of 10 × 10 cm². A single detector module is made of a LYSO crystal matrix coupled to a position-sensitive photomultiplier and is read out by dedicated front-end electronics. A preliminary data-taking campaign was performed at the Italian National Centre for Oncological Hadron Therapy (CNAO, Pavia), using proton beams in the energy range 93–112 MeV impinging on a plastic phantom. The measured activity profiles are presented and compared with simulations based on the Monte Carlo FLUKA package.
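One common way to compare a measured β+ activity depth profile with a simulated one, shown here purely as an illustration and not necessarily the comparison performed in this work, is to extract the distal fall-off position (e.g. the depth at which the activity drops to 50% of its maximum on the far side) from both profiles and quote the shift. The toy profiles below are invented.

```python
import numpy as np

def distal_falloff_depth(depth_mm, activity, fraction=0.5):
    """Depth (mm) at which the activity falls to `fraction` of its maximum
    on the distal (far) side of the profile, found by linear interpolation."""
    a = np.asarray(activity, dtype=float)
    level = fraction * a.max()
    i_max = int(np.argmax(a))
    for i in range(i_max, len(a) - 1):
        if a[i] >= level > a[i + 1]:
            t = (a[i] - level) / (a[i] - a[i + 1])      # interpolate between samples
            return depth_mm[i] + t * (depth_mm[i + 1] - depth_mm[i])
    return float(depth_mm[-1])

# Toy profiles (arbitrary units) standing in for measured and FLUKA-simulated data.
depth = np.arange(0, 100, 1.0)                           # mm
measured  = np.exp(-0.5 * ((depth - 60) / 15) ** 2) * (depth < 78)
simulated = np.exp(-0.5 * ((depth - 61) / 15) ** 2) * (depth < 79)
shift = distal_falloff_depth(depth, measured) - distal_falloff_depth(depth, simulated)
print(f"distal 50% fall-off shift: {shift:.1f} mm")
```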

Relevance: 20.00%

Publisher:

Abstract:

We study the dynamics of the bistable logistic map with delayed feedback under the influence of white Gaussian noise and of periodic modulation applied to the variable. This system may serve as a model of population dynamics under finite resources in a noisy environment with seasonal fluctuations. While a very small amount of noise has no effect on the global structure of the coexisting attractors in phase space, intermediate noise totally eliminates one of the attractors. Slow periodic modulation enhances this attractor annihilation.
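A minimal numerical sketch of this kind of system is given below. The specific feedback, noise and modulation terms, and all parameter values, are assumptions of the example, chosen only to show how one would iterate a noisy, periodically modulated logistic map with delayed feedback and inspect where the trajectory settles; the abstract does not specify the exact equations used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(a=3.8, k=0.3, tau=2, noise=0.01, amp=0.02, period=500, n=20000, x0=0.3):
    """Iterate a delayed-feedback logistic map with additive Gaussian noise and slow
    periodic modulation (one plausible form; all parameters are illustrative):
        x[i+1] = a*x[i]*(1-x[i]) + k*(x[i-tau] - x[i]) + amp*sin(2*pi*i/period) + xi[i]
    """
    x = np.full(n, x0)
    for i in range(tau, n - 1):
        x[i + 1] = (a * x[i] * (1.0 - x[i])
                    + k * (x[i - tau] - x[i])
                    + amp * np.sin(2.0 * np.pi * i / period)
                    + noise * rng.normal())
        x[i + 1] = np.clip(x[i + 1], 0.0, 1.0)   # keep the population variable bounded
    return x

for sigma in (0.0005, 0.02):                      # very small vs. intermediate noise
    tail = simulate(noise=sigma)[-5000:]
    print(f"noise={sigma}: mean={tail.mean():.3f}, std={tail.std():.3f}")
```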

Relevance: 20.00%

Publisher:

Abstract:

γ-ray astronomy studies the most energetic particles arriving at the Earth from outer space. These γ rays are not generated by thermal processes in ordinary stars, but by particle acceleration mechanisms in astronomical objects such as active galactic nuclei, pulsars and supernovae, or possibly as a result of dark matter annihilation processes. The γ rays coming from these objects and their characteristics provide valuable information with which scientists try to understand the underlying physics of these objects and to develop theoretical models able to describe them accurately. The problem with observing γ rays is that they are absorbed in the upper layers of the atmosphere and do not reach the Earth's surface (otherwise the planet would be uninhabitable). There are therefore only two ways to observe γ rays: with detectors on board satellites, or by observing their secondary effects in the atmosphere.

When a γ ray reaches the atmosphere, it interacts with the particles in the air and generates a highly energetic electron-positron pair. These secondary particles generate in turn more particles, with less energy each time. While these particles are still energetic enough to travel faster than the speed of light in air, they produce a bluish radiation known as Cherenkov light during a few nanoseconds. From the Earth's surface, special telescopes known as Cherenkov telescopes or IACTs (Imaging Atmospheric Cherenkov Telescopes) are able to detect this Cherenkov light and even to take images of the Cherenkov showers. From these images it is possible to infer the main parameters of the original γ ray, and with enough γ rays it is possible to deduce important characteristics of the emitting object, hundreds of light-years away.

Detecting Cherenkov showers generated by γ rays is, however, not a simple task. Showers generated by low-energy γ rays contain few photons and last only a few nanoseconds, while those corresponding to high-energy γ rays, although they produce more photons and last longer, become increasingly rare as the energy grows. This results in two clearly differentiated development lines for IACTs: to detect low-energy showers, large reflectors are required to collect as many photons as possible from the few that these showers produce; on the contrary, small telescopes are able to detect high-energy showers, but a large area on the ground should be covered with them to increase the number of detected events.

With the aim of improving the sensitivity of current Cherenkov telescopes in the high (> 10 TeV), medium (100 GeV - 10 TeV) and low (10 GeV - 100 GeV) energy ranges, the CTA (Cherenkov Telescope Array) project was created. This project, with more than 27 participating countries, intends to build an observatory in each hemisphere, each one equipped with 4 large-size telescopes (LSTs), around 30 medium-size telescopes (MSTs) and up to 70 small-size telescopes (SSTs). With such an array, two goals will be achieved. First, the drastic increase in collection area with respect to current IACTs will lead to the detection of more γ rays in all energy ranges. Second, when the same Cherenkov shower is observed by several telescopes at the same time, it can be analysed much more accurately thanks to stereoscopic techniques.

The present thesis gathers several technical developments for the trigger system of the medium- and large-size telescopes of CTA. Because Cherenkov showers are so short, the digitization and readout systems for each pixel must work at very high frequencies (≈ 1 GHz). This makes it unfeasible to read out the data continuously, as the amount of data would be unmanageable. Instead, the analog signals are sampled and the samples are stored in a ring buffer able to hold a few µs of data. While the signals remain in the buffer, the trigger system performs a fast analysis of the received signals and decides whether the image in the buffer corresponds to a Cherenkov shower and deserves to be stored, or whether it can be ignored, allowing the buffer to be overwritten. The decision to save the image or not is based on the fact that Cherenkov showers produce photon detections in nearby pixels within very short time intervals, in contrast to the random arrival of night sky background (NSB) photons.

Checking whether more than a certain number of pixels in a trigger region have detected more than a certain number of photons within a time window of a few nanoseconds is enough to detect large showers. However, to optimize the sensitivity to low-energy showers it is more convenient to also take into account how many photons have been detected in each pixel (the sum-trigger technique). The trigger system developed in this thesis is intended to optimize the sensitivity to low-energy showers, so it performs an analog addition of the signals received by the pixels in a trigger region and compares the sum with a threshold that can be directly expressed as a number of detected photons (photoelectrons). The system allows trigger regions of 14, 21 or 28 pixels (2, 3 or 4 clusters of 7 pixels each) to be selected, with a high degree of overlap between them. In this way, any excess of light inside a compact region of 14, 21 or 28 pixels is detected and a trigger pulse is generated.

In the most basic version of the trigger system, this pulse is distributed throughout the camera by means of a dedicated distribution system, so that all the clusters are read out at the same time, independently of their position in the camera. The readout thus saves a complete camera image whenever the number of photoelectrons set as threshold is exceeded in a trigger region. This way of operating has two main drawbacks. First, the shower usually covers only a small part of the camera, so many pixels without relevant information are stored; with many telescopes, as will be the case in CTA, the amount of useless stored information can be very large. Second, each trigger stores only a few nanoseconds of information around the trigger time, whereas large showers can last considerably longer, so part of the information is lost due to the temporal truncation. To solve both limitations, a trigger and readout scheme based on two thresholds has been proposed: the high threshold decides whether there is a relevant event in the camera and, if so, only the trigger regions exceeding the low threshold are read out, during a longer time. In this way no information from empty pixels is stored, and the fixed images of the showers become short "videos" containing the temporal development of the shower. This new scheme is named COLIBRI (Concept for an Optimized Local Image Building and Readout Infrastructure) and is described in depth in chapter 5.

An important problem affecting sum-trigger schemes such as the one presented in this thesis is that, in order to add the signals from each pixel properly, they must all arrive at the adder at the same time. The photomultipliers used in each pixel introduce different delays, which must be compensated for the sums to be performed correctly. The effect of these delays has been analysed, and a delay compensation system has been developed.

The next trigger level consists of looking for simultaneous (or very close in time) triggers in neighbouring telescopes. This function, together with others related to interfacing different systems, has been implemented in a system named the Trigger Interface Board (TIB). It consists of a module that will be placed inside each LST and MST camera and connected to the neighbouring telescopes through optical fibers. When a telescope produces a local trigger, it is sent to all the connected neighbours and vice versa, so every telescope knows whether its neighbours have triggered. Once the delay differences due to propagation in the optical fibers and of the Cherenkov photons in the air, which depend on the pointing direction, have been compensated, the TIB looks for coincidences and, if the trigger condition is fulfilled, the camera is read out a fixed time after the local trigger arrived.

Although the whole trigger system is the result of a collaboration between several groups, mainly IFAE, CIEMAT, ICC-UB and UCM in Spain, with help from French and Japanese groups, the core of this thesis is the Level 1 trigger and the Trigger Interface Board, the two systems for which the author has been the principal engineer. For this reason, abundant technical information about these systems is included. There are important future development lines regarding both the camera trigger (implementation in ASICs) and the inter-telescope trigger (topological trigger), which will lead to interesting improvements over the current designs in the coming years and which will hopefully benefit the whole scientific community participating in CTA.
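As a toy illustration of the sum-trigger decision described above, the sketch below sums calibrated pixel signals (in photoelectrons) over overlapping trigger regions built from clusters of 7 pixels and flags a region whenever the sum exceeds a threshold. The camera geometry, the cluster map, the region size and the threshold value are all invented for the example; the real Level 1 system performs this addition in analog electronics on the actual camera layout.

```python
import numpy as np

rng = np.random.default_rng(1)

N_CLUSTERS = 20           # toy camera: 20 clusters of 7 pixels each (geometry is invented)
PIXELS_PER_CLUSTER = 7
REGION_CLUSTERS = 3       # trigger regions of 3 clusters = 21 pixels
THRESHOLD_PE = 25.0       # threshold expressed directly in photoelectrons (illustrative)

# Calibrated pixel amplitudes in photoelectrons: NSB-like noise everywhere,
# plus a compact excess (a "shower") injected into clusters 7-9.
signals = rng.poisson(0.8, size=(N_CLUSTERS, PIXELS_PER_CLUSTER)).astype(float)
signals[7:10] += rng.poisson(2.5, size=(3, PIXELS_PER_CLUSTER))

cluster_sums = signals.sum(axis=1)

# Overlapping regions: every run of REGION_CLUSTERS consecutive clusters.
triggered = []
for start in range(N_CLUSTERS - REGION_CLUSTERS + 1):
    region_sum = cluster_sums[start:start + REGION_CLUSTERS].sum()
    if region_sum > THRESHOLD_PE:
        triggered.append((start, round(region_sum, 1)))

print("trigger regions above threshold (start cluster, sum in p.e.):", triggered)
```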

Relevance: 20.00%

Publisher:

Abstract:

There is considerable evidence from animal studies that gonadal steroid hormones modulate neuronal activity and affect behavior. To study this in humans directly, we used H2(15)O positron-emission tomography to measure regional cerebral blood flow (rCBF) in young women during three pharmacologically controlled hormonal conditions spanning 4–5 months: ovarian suppression induced by the gonadotropin-releasing hormone agonist leuprolide acetate (Lupron), Lupron plus estradiol replacement, and Lupron plus progesterone replacement. Estradiol and progesterone were administered in a double-blind cross-over design. On each occasion positron-emission tomography scans were performed during (i) the Wisconsin Card Sorting Test, a neuropsychological test that physiologically activates prefrontal cortex (PFC) and an associated cortical network including inferior parietal lobule and posterior inferolateral temporal gyrus, and (ii) a no-delay matching-to-sample sensorimotor control task. During treatment with Lupron alone (i.e., with virtual absence of gonadal steroid hormones), there was marked attenuation of the typical Wisconsin Card Sorting Test activation pattern even though task performance did not change. Most strikingly, there was no rCBF increase in PFC. When either progesterone or estrogen was added to the Lupron regimen, there was normalization of the rCBF activation pattern with augmentation of the parietal and temporal foci and return of the dorsolateral PFC activation. These data directly demonstrate that the hormonal milieu modulates cognition-related neural activity in humans.

Relevance: 20.00%

Publisher:

Abstract:

Existing methods for assessing protein synthetic rates (PSRs) in human skeletal muscle are invasive and do not readily provide information about individual muscle groups. Recent studies in canine skeletal muscle yielded PSRs similar to results of simultaneous stable isotope measurements using l-[1-13C, methyl-2H3]methionine, suggesting that positron-emission tomography (PET) with l-[methyl-11C]methionine could be used along with blood sampling and a kinetic model to provide a less invasive, regional assessment of PSR. We have extended and refined this method in an investigation with healthy volunteers studied in the postabsorptive state. They received ≈25 mCi of l-[methyl-11C]methionine with serial PET imaging of the thighs and arterial blood sampling for a period of 90 min. Tissue and metabolite-corrected arterial blood time activity curves were fitted to a three-compartment model. PSR (nmol methionine⋅min−1⋅g muscle tissue−1) was calculated from the fitted parameter values and the plasma methionine concentrations, assuming equal rates of protein synthesis and degradation. Pooled mean PSR for the anterior and posterior sites was 0.50 ± 0.040. When converted to a fractional synthesis rate for mixed proteins in muscle, assuming a protein-bound methionine content of muscle tissue, the value of 0.125 ± 0.01%⋅h−1 compares well with estimates from direct tracer incorporation studies, which generally range from ≈0.05 to 0.09%⋅h−1. We conclude that PET can be used to estimate skeletal muscle PSR in healthy human subjects and that it holds promise for future in vivo, noninvasive studies of the influences of physiological factors, pharmacological manipulations, and disease states on this important component of muscle protein turnover and balance.
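To make the unit conversion in the last step concrete, the sketch below turns a PSR in nmol methionine per minute per gram of muscle into a fractional synthesis rate in %/h, given an assumed protein-bound methionine content of the tissue. The methionine content used (about 24 µmol/g) is a hypothetical value chosen only so that the example reproduces the numbers quoted above; it is not taken from the paper.

```python
def fractional_synthesis_rate(psr_nmol_per_min_per_g, met_bound_umol_per_g):
    """Convert a protein synthetic rate (nmol Met . min^-1 . g^-1) into a
    fractional synthesis rate (% of the protein-bound Met pool per hour)."""
    met_bound_nmol_per_g = met_bound_umol_per_g * 1000.0
    per_hour = psr_nmol_per_min_per_g * 60.0          # nmol Met incorporated per hour per g
    return per_hour / met_bound_nmol_per_g * 100.0    # % of the bound pool per hour

# PSR value from the abstract; the methionine content is an assumed round number.
print(fractional_synthesis_rate(0.50, 24.0))   # ~0.125 %/h
```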