175 results for Readout
Abstract:
Measurement of perfusion in longitudinal studies allows for the assessment of tissue integrity and the detection of subtle pathologies. In this work, the feasibility of measuring brain perfusion in rats with high spatial resolution using arterial spin labeling is reported. A flow-sensitive alternating inversion recovery sequence, coupled with a balanced gradient fast imaging with steady-state precession readout, was used to minimize ghosting and geometric distortions while achieving a high signal-to-noise ratio. Quantitative perfusion imaging using a single-subtraction method was implemented to address the effects of variable transit delays between the labeling of spins and their arrival at the imaging slice. Studies in six rats at 7 T showed good perfusion contrast with minimal geometric distortion. The measured blood flow values of 152.5 ± 6.3 ml/100 g per minute in gray matter and 72.3 ± 14.0 ml/100 g per minute in white matter are in good agreement with previously reported values based on autoradiography, considered the gold standard.
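The single-subtraction quantification can be sketched as follows. This is a minimal illustration assuming a simplified single-compartment perfusion model; the inversion time, blood T1, labeling efficiency, and partition coefficient below are placeholder values, not the acquisition parameters actually used in the study.

```python
import numpy as np

def asl_cbf(control, label, m0, ti=1.4, t1b=1.6, alpha=0.95, lam=0.9):
    """Simplified single-subtraction ASL quantification (illustrative model).

    control, label : mean control/label signal images (same shape)
    m0             : equilibrium magnetization image
    ti             : inversion time [s] (assumed value)
    t1b            : blood T1 [s] (placeholder; field-strength dependent)
    alpha          : labeling efficiency (assumed)
    lam            : blood-brain partition coefficient [ml/g] (assumed)
    Returns CBF in ml/100 g per minute.
    """
    dm = control - label                                  # perfusion-weighted difference
    cbf = lam * dm / (2 * alpha * m0 * ti * np.exp(-ti / t1b))
    return cbf * 100 * 60                                 # ml/g/s -> ml/100 g per minute
```

With a 1% control-label difference this yields a gray-matter-like flow value, but the absolute numbers depend entirely on the assumed constants.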
Abstract:
The geometric characterization of low-voltage dielectric electro-active polymer (EAP) structures, which are nanometers thick but cover areas of square centimeters, for applications such as artificial sphincters requires methods with nanometer precision. Direct optical detection is usually restricted to sub-micrometer resolution because of the wavelength of the light applied. We therefore propose to take advantage of a cantilever bending system with optical readout, which provides sub-micrometer resolution at the deflection of the free end. It is demonstrated that this approach allows us to detect the bending of rather conventional planar asymmetric dielectric EAP structures at voltages well below 10 V. For this purpose, we built 100 μm-thin silicone films between 50 nm-thin silver layers on a 25 μm-thin polyetheretherketone (PEEK) substrate. Increasing the applied voltage in steps of 50 V up to 1 kV resulted in a cantilever bending that exhibits the expected quadratic dependence only in restricted ranges. The mean laser beam displacement on the detector corresponded to 6 nm per volt. The apparatus will therefore become a powerful means to analyze, and thereby improve, low-voltage dielectric EAP structures with the aim of realizing nanometer-thin layers for stack actuators to be incorporated into artificial sphincter systems for treating severe urinary and fecal incontinence.
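The expected quadratic voltage dependence follows from the Maxwell stress scaling with V². A minimal sketch of checking that dependence on (voltage, displacement) data, with entirely made-up numbers:

```python
import numpy as np

# Illustrative: electrostrictive bending scales with Maxwell stress ~ V^2,
# so fit d = a*V^2 to (voltage, displacement) data. All values are invented.
voltages = np.arange(50.0, 1001.0, 50.0)          # 50 V steps up to 1 kV
true_a = 6e-3                                     # nm per V^2 (hypothetical)
displacement = true_a * voltages**2               # detector spot motion [nm]

# least-squares estimate of the quadratic coefficient
a_hat = np.sum(displacement * voltages**2) / np.sum(voltages**4)
```

On real data, deviations of a_hat across voltage sub-ranges would quantify where the quadratic law breaks down, as reported above.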
Abstract:
PLATO 2.0 has recently been selected for ESA’s M3 launch opportunity (2022/24). Providing accurate key planet parameters (radius, mass, density and age) in statistical numbers, it addresses fundamental questions such as: How do planetary systems form and evolve? Are there other systems with planets like ours, including potentially habitable planets? The PLATO 2.0 instrument consists of 34 small-aperture telescopes (32 with a 25 s readout cadence and 2 with a 2.5 s cadence) providing a wide field of view (2232 deg²) and a large photometric magnitude range (4–16 mag). It focuses on bright (4–11 mag) stars in wide fields to detect and characterize planets down to Earth-size by photometric transits, whose masses can then be determined by ground-based radial-velocity follow-up measurements. Asteroseismology will be performed for these bright stars to obtain highly accurate stellar parameters, including masses and ages. The combination of bright targets and asteroseismology results in high accuracy for the bulk planet parameters: 2 %, 4–10 % and 10 % for planet radii, masses and ages, respectively. The planned baseline observing strategy includes two long pointings (2–3 years) to detect and bulk-characterize planets reaching into the habitable zone (HZ) of solar-like stars, and an additional step-and-stare phase to cover in total about 50 % of the sky. PLATO 2.0 will observe up to 1,000,000 stars and detect and characterize hundreds of small planets, and thousands of planets in the Neptune to gas giant regime out to the HZ. It will therefore provide the first large-scale catalogue of bulk-characterized planets with accurate radii, masses, mean densities and ages. This catalogue will include terrestrial planets at intermediate orbital distances, where surface temperatures are moderate. Coverage of this parameter range with statistical numbers of bulk-characterized planets is unique to PLATO 2.0.
The PLATO 2.0 catalogue allows us, for example, to: complete our knowledge of planet diversity for low-mass objects; correlate the planet mean density-orbital distance distribution with predictions from planet formation theories; constrain the influence of planet migration and scattering on the architecture of multiple systems; and specify how planet and system parameters change with host star characteristics, such as type, metallicity and age. The catalogue will allow us to study planets and planetary systems at different evolutionary phases. It will further provide a census for small, low-mass planets. This will serve to identify objects which retained their primordial hydrogen atmosphere and, more generally, the typical characteristics of planets in this low-mass, low-density range. Planets detected by PLATO 2.0 will orbit bright stars, and many of them will be targets for future atmospheric spectroscopy. Furthermore, the mission has the potential to detect exomoons, planetary rings, and binary and Trojan planets. The planetary science possible with PLATO 2.0 is complemented by its impact on stellar and galactic science via asteroseismology, as well as light curves of all kinds of variable stars, together with observations of stellar clusters of different ages. This will allow us to improve stellar models and study stellar activity. A large number of well-determined ages from red giant stars will probe the structure and evolution of our Galaxy. Asteroseismic ages of bright stars in different phases of stellar evolution allow stellar age-rotation relationships to be calibrated. Together with the results of ESA’s Gaia mission, the results of PLATO 2.0 will provide a huge legacy to planetary, stellar and galactic science.
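Transit photometry yields the planet radius through the depth relation depth = (Rp/R*)²; combined with the radial-velocity mass from follow-up, this gives the mean density. A minimal sketch for a Sun-like host (no limb darkening, solar and Earth radii only):

```python
import math

R_SUN_KM = 695_700.0     # IAU nominal solar radius
R_EARTH_KM = 6_371.0     # mean Earth radius

def planet_radius_from_depth(depth, r_star_km=R_SUN_KM):
    """Planet radius from the fractional transit depth: depth = (Rp/R*)^2."""
    return r_star_km * math.sqrt(depth)

# An Earth/Sun analog produces a transit depth of only ~8.4e-5 (0.0084%),
# which is why bright targets and high photometric precision are needed.
depth_earth = (R_EARTH_KM / R_SUN_KM) ** 2
```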
Abstract:
CMOS sensors, or in general Active Pixel Sensors (APS), are rapidly replacing CCDs in the consumer camera market. Due to significant technological advances during the past years, these devices have started to compete with CCDs also for demanding scientific imaging applications, in particular in the astronomy community. CMOS detectors offer a series of inherent advantages compared with CCDs, due to the structure of their basic pixel cells, each of which contains its own amplifier and readout electronics. The most prominent advantages for space object observations are the extremely fast and flexible readout capabilities, the feasibility of electronic shuttering and precise epoch registration, and the potential to perform image processing operations on-chip and in real time. Here, the major challenges and design drivers for ground-based and space-based optical observation strategies for objects in Earth orbit have been analyzed. CMOS detector characteristics were critically evaluated and compared with the established CCD technology, especially with respect to the above-mentioned observations. Finally, we simulated several observation scenarios for ground- and space-based sensors by assuming different observation and sensor properties. We introduce end-to-end simulations of the ground- and space-based strategies to investigate the orbit determination accuracy and its sensitivity to different values of the frame rate, pixel scale, and astrometric and epoch registration accuracies. Two cases were simulated: a survey assuming a ground-based sensor observing objects in LEO for surveillance applications, and a statistical survey with a space-based sensor orbiting in LEO observing small-size debris in LEO. The ground-based LEO survey uses a dynamical fence close to the Earth shadow a few hours after sunset.
For the space-based scenario, a sensor in a sun-synchronous LEO, always pointing in the anti-sun direction to achieve optimum illumination conditions for small LEO debris, was simulated.
Abstract:
Many studies in the field of cell-based cartilage repair have focused on identifying markers associated with the differentiation status of human articular chondrocytes (HAC) that could predict their chondrogenic potency. A previous study from our group showed a correlation between the expression of S100 protein in HAC and their chondrogenic potential. The aims of the current study were to clarify which S100 proteins are associated with HAC differentiation status and to provide an S100-based assay for measuring HAC chondrogenic potential. The expression patterns of S100A1 and S100B were investigated in cartilage and in HAC cultured under conditions promoting dedifferentiation (monolayer culture) or redifferentiation (pellet culture or BMP4 treatment in monolayer culture), using characterized antibodies specifically recognizing S100A1 and S100B, by immunohistochemistry, immunocytochemistry, Western blot, and gene expression analysis. S100A1 and S100B were expressed homogeneously in all cartilage zones, and decreased during dedifferentiation. S100A1, but not S100B, was re-expressed in pellets and co-localized with collagen II. Gene expression analysis revealed concomitant modulation of S100A1, S100B, collagen type II, and aggrecan: down-regulation during monolayer culture and up-regulation upon BMP4 treatment. These results strongly support an association of S100A1, and to a lesser extent S100B, with the HAC differentiated phenotype. To facilitate their potential application, we established an S100A1/B-based flow cytometry assay for accurate assessment of HAC differentiation status. We propose S100A1 and S100B expression as markers to develop potency assays for cartilage regeneration cell therapies, and as a redifferentiation readout in monolayer cultures aiming to investigate stimuli for chondrogenic induction.
Abstract:
The currently proposed space debris remediation measures include the active removal of large objects and “just in time” collision avoidance by deviating the objects using, e.g., ground-based lasers. Both techniques require precise knowledge of the attitude state and state changes of the target objects: in the former case, to devise methods to grapple the target by a tug spacecraft; in the latter, to precisely propagate the orbits of potential collision partners, as disturbing forces like air drag and solar radiation pressure depend on the attitude of the objects. Non-resolved optical observations of the magnitude variations, so-called light curves, are a promising technique to determine rotation or tumbling rates and the orientations of the actual rotation axis of objects, as well as their temporal changes. The 1-meter telescope ZIMLAT of the Astronomical Institute of the University of Bern has been used to collect light curves of MEO and GEO objects for a considerable period of time. Recently, light curves of Low Earth Orbit (LEO) targets were acquired as well. We present different observation methods, including active tracking using a CCD subframe readout technique and the use of a high-speed scientific CMOS camera. Technical challenges when tracking objects with poor orbit predictions, as well as different data reduction methods, are addressed. Results from a survey of abandoned rocket upper stages in LEO, examples of abandoned payloads, and observations of high area-to-mass ratio debris will be presented. Eventually, first results of the analysis of these light curves are provided.
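Extracting a spin period from a light curve is, at its simplest, a periodogram peak search. The sketch below uses an FFT on evenly sampled synthetic data with invented numbers; real light curves are unevenly sampled, where a Lomb-Scargle periodogram would be the usual choice.

```python
import numpy as np

fs = 10.0                                  # frames per second (assumed camera rate)
t = np.arange(0.0, 60.0, 1.0 / fs)         # one minute of data
p_true = 4.0                               # apparent spin period [s] (made up)
rng = np.random.default_rng(0)
flux = 1.0 + 0.3 * np.sin(2 * np.pi * t / p_true) + 0.01 * rng.normal(size=t.size)

# Periodogram: remove the mean, take the FFT magnitude, pick the peak bin.
spec = np.abs(np.fft.rfft(flux - flux.mean()))
freqs = np.fft.rfftfreq(flux.size, d=1.0 / fs)
p_est = 1.0 / freqs[np.argmax(spec)]
```

A tumbling (non-principal-axis) rotator would show several peaks rather than one, which is why the temporal evolution of the full periodogram is of interest.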
Abstract:
The present study examined the impact of implant surface modifications on osseointegration in an osteoporotic rodent model. Sandblasted, acid-etched titanium implants were either used directly (control) or further modified by surface conditioning with NaOH or by coating with one of the following active agents: collagen/chondroitin sulphate, simvastatin, or zoledronic acid. Control and modified implants were inserted into the proximal tibia of aged ovariectomised (OVX) osteoporotic rats (n = 32/group). In addition, aged oestrogen-competent animals received either control or NaOH-conditioned implants. Animals were sacrificed 2 and 4 weeks post-implantation. The excised tibiae were utilised for biomechanical and morphometric readouts (n = 8/group/readout). Biomechanical testing revealed, at both time points, dramatically reduced osseointegration in the tibia of oestrogen-deprived osteoporotic animals compared with intact controls, irrespective of NaOH exposure. Consistently, histomorphometric and microCT analyses demonstrated diminished bone-implant contact (BIC), peri-implant bone area (BA), bone volume/tissue volume (BV/TV) and bone mineral density (BMD) in OVX animals. Surface coating with collagen/chondroitin sulphate had no detectable impact on osseointegration. Interestingly, statin coating resulted in a transient increase in BIC 2 weeks post-implantation, which, however, did not correspond to an improvement of the biomechanical readouts. Local exposure to zoledronic acid increased BIC, BA, BV/TV and BMD at 4 weeks, yet this translated into only a non-significant improvement of biomechanical properties. In conclusion, this study presents a rodent model mimicking severely osteoporotic bone. In contrast to the other bioactive agents, locally released zoledronic acid had a positive impact on osseointegration, albeit to a lesser extent than reported in less challenging models.
Abstract:
OBJECTIVES/HYPOTHESIS Assess the diagnostic and prognostic relevance of intraglandular lymph node (IGLN) metastases in primary parotid gland carcinomas (PGCs). STUDY DESIGN Retrospective study at a tertiary referral university hospital. METHODS We reviewed the records of 95 patients with primary PGCs, treated at least surgically, between 1997 and 2010. We assessed the clinicopathological associations of IGLN metastases, their prognostic significance, and their predictive value in the diagnosis of occult neck lymph node metastases. RESULTS Twenty-four (25.26%) patients had IGLN metastases. This feature was significantly more prevalent in patients with advanced pT status (P = .01), pN status (P < .01), and overall stage (P < .001); high-risk carcinomas (P = .01); as well as in patients with treatment failures (P < .01). IGLN involvement was significantly associated with decreased univariate disease-free survival (P < .001). The positive predictive value, negative predictive value, and accuracy of IGLN involvement in the detection of occult neck lymph node metastases were 63.64%, 90.48%, and 84.91%, respectively. The diagnostic values were generally higher in patients with low-risk subtypes of PGCs. CONCLUSIONS IGLN involvement provides prognostic information and is associated with advanced tumoral stage and higher risk of recurrence. This feature could be used as a potential readout to determine whether a neck dissection is needed in patients with clinically negative neck lymph nodes. LEVEL OF EVIDENCE 4.
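The reported predictive values are simple ratios over the 2×2 confusion matrix. As an illustration, hypothetical counts of TP = 7, FP = 4, TN = 38, FN = 4 reproduce the reported 63.64%, 90.48%, and 84.91%; these counts are inferred here for illustration only, not taken from the paper.

```python
def predictive_values(tp, fp, tn, fn):
    """PPV, NPV, and accuracy from a 2x2 confusion matrix."""
    ppv = tp / (tp + fp)                    # positive predictive value
    npv = tn / (tn + fn)                    # negative predictive value
    acc = (tp + tn) / (tp + fp + tn + fn)   # overall accuracy
    return ppv, npv, acc
```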
Abstract:
OBJECTIVE The aim of this study was to investigate the performance of the arterial enhancement fraction (AEF) in multiphasic computed tomography (CT) acquisitions for detecting hepatocellular carcinoma (HCC) in liver transplant recipients, in correlation with the pathologic analysis of the corresponding liver explants. MATERIALS AND METHODS Fifty-five transplant recipients were analyzed: 35 patients with 108 histologically proven HCC lesions and 20 patients with end-stage liver disease without HCC. Six radiologists evaluated the triphasic CT acquisitions with the AEF maps in a first readout. For the second readout, without the AEF maps, 3 radiologists analyzed triphasic CT acquisitions (group 1), whereas the other 3 readers had 4 contrast acquisitions available (group 2). A jackknife free-response receiver operating characteristic analysis was used to compare the readout performance of the readers. Receiver operating characteristic analysis was used to determine the optimal cutoff value of the AEF. RESULTS The figure of merit (θ = 0.6935) for the conventional triphasic readout was significantly inferior to that of the triphasic readout with additional use of the AEF (θ = 0.7478, P < 0.0001) in group 1. There was no significant difference between the four-phase conventional readout (θ = 0.7569) and the triphasic readout with the AEF (θ = 0.7615, P = 0.7541) in group 2. Without the AEF, HCC lesions were detected with a sensitivity of 30.7% (95% confidence interval [CI], 25.5%-36.4%) and a specificity of 97.1% (96.0%-98.0%) by group 1 looking at 3 CT acquisition phases, and with a sensitivity of 42.1% (36.2%-48.1%) and a specificity of 97.5% (96.4%-98.3%) by group 2 looking at 4 CT acquisition phases. Using the AEF maps, with both groups looking at the same 3 acquisition phases, the sensitivity was 47.7% (95% CI, 41.9%-53.5%) with a specificity of 97.4% (96.4%-98.3%) in group 1, and 49.8% (95% CI, 43.9%-55.8%) with a specificity of 97.6% (96.6%-98.4%) in group 2.
The optimal cutoff for the AEF was 50%. CONCLUSION The AEF is a helpful tool to screen for HCC with CT. The use of the AEF maps may significantly improve HCC detection, allowing the fourth CT acquisition phase to be omitted and thus making a 25% reduction in radiation dose possible.
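A commonly used definition of the arterial enhancement fraction is the arterial share of total enhancement relative to the unenhanced phase. The sketch below assumes that definition (verify against the paper's exact formula before reuse); the 50% cutoff is the one reported above, and the HU values are hypothetical.

```python
import numpy as np

def arterial_enhancement_fraction(native, arterial, venous, eps=1e-6):
    """AEF = (arterial - native) / (venous - native), per voxel.
    A common definition, assumed here; eps guards against division by zero."""
    native, arterial, venous = (np.asarray(x, float) for x in (native, arterial, venous))
    return (arterial - native) / np.maximum(venous - native, eps)

# Hypothetical HU values: a hypervascular (HCC-like) voxel vs. a background voxel
aef = arterial_enhancement_fraction([40.0, 40.0], [95.0, 55.0], [100.0, 100.0])
```

With a 50% cutoff, the first (hypervascular) voxel would be flagged and the second would not.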
Abstract:
Purpose To investigate whether nonhemodynamic resonant saturation effects can be detected in patients with focal epilepsy by using a phase-cycled stimulus-induced rotary saturation (PC-SIRS) approach with spin-lock (SL) preparation and whether they colocalize with the seizure onset zone and surface interictal epileptiform discharges (IED). Materials and Methods The study was approved by the local ethics committee, and all subjects gave written informed consent. Eight patients with focal epilepsy undergoing presurgical surface and intracranial electroencephalography (EEG) underwent magnetic resonance (MR) imaging at 3 T with a whole-brain PC-SIRS imaging sequence with alternating SL-on and SL-off and two-dimensional echo-planar readout. The power of the SL radiofrequency pulse was set to 120 Hz to sensitize the sequence to high gamma oscillations present in epileptogenic tissue. Phase cycling was applied to capture distributed current orientations. Voxel-wise subtraction of SL-off from SL-on images enabled the separation of T2* effects from rotary saturation effects. The topography of PC-SIRS effects was compared with the seizure onset zone at intracranial EEG and with surface IED-related potentials. Bayesian statistics were used to test whether prior PC-SIRS information could improve IED source reconstruction. Results Nonhemodynamic resonant saturation effects ipsilateral to the seizure onset zone were detected in six of eight patients (concordance rate, 0.75; 95% confidence interval: 0.40, 0.94) by means of the PC-SIRS technique. They were concordant with IED surface negativity in seven of eight patients (0.88; 95% confidence interval: 0.51, 1.00). Including PC-SIRS as prior information improved the evidence of the standard EEG source models compared with the use of uninformed reconstructions (exceedance probability, 0.77 vs 0.12; Wilcoxon test of model evidence, P < .05). 
Nonhemodynamic resonant saturation effects resolved in patients with favorable postsurgical outcomes but persisted in patients with postsurgical seizure recurrence. Conclusion Nonhemodynamic resonant saturation effects are detectable during interictal periods with the PC-SIRS approach in patients with epilepsy. The method may be useful for MR imaging-based detection of neuronal currents in a clinical environment. © RSNA, 2016. Online supplemental material is available for this article.
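The voxel-wise combination of SL-on and SL-off images that separates rotary saturation effects from shared T2* weighting can be illustrated with synthetic images; all numbers below are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (8, 8)
background = rng.uniform(0.8, 1.2, shape)          # shared T2*-weighted background
rotary_sat = np.zeros(shape)
rotary_sat[3:5, 3:5] = 0.1                         # focal rotary-saturation effect

sl_off = background                                # no spin-lock: background only
sl_on = background - rotary_sat                    # saturation reduces the signal

# Voxel-wise difference cancels the shared background and isolates the effect
effect_map = sl_off - sl_on
```

In this idealized sketch the difference recovers the focal effect exactly; in practice noise and imperfect registration make the separation only approximate.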
Abstract:
The molecular complex of sensory rhodopsin I (SRI) and its transducer HtrI mediates color-sensitive phototaxis in the archaeon Halobacterium salinarum. Orange light causes an attractant response by a one-photon reaction, and white light causes a repellent response by a two-photon reaction. Three aspects of this molecular complex were explored: (i) We determined the stoichiometry of SRI and HtrI to be 2:2 by gene fusion analysis. An SRI-HtrI fusion protein was expressed in H. salinarum and shown to mediate one-photon and two-photon phototaxis responses comparable to those of the wild-type complex. Disulfide crosslinking demonstrated that the fusion protein is a homodimer in the membrane. Measurement of photochemical reaction kinetics and pH titration of absorption spectra established that both SRI domains are complexed to HtrI in the fusion protein, and therefore the stoichiometry is 2:2. (ii) Cytoplasmic channel closure of SRI by HtrI, an important aspect of their interaction, was investigated by incremental HtrI truncation. We found that binding of the membrane-embedded portion of HtrI is insufficient for channel closure, whereas cytoplasmic extension of the second HtrI transmembrane helix by 13 residues blocks proton conduction through the channel as effectively as full-length HtrI. The closure activity is localized to 5 specific residues, each of which incrementally contributes to the reduction of proton conductivity. Moreover, these same residues in the dark incrementally and proportionally increase the pKa of the Asp76 counterion to the protonated Schiff base chromophore. We conclude that this critical region of HtrI alters the dark conformation of SRI as well as light-induced channel opening. (iii) We developed a procedure for reconstituting HtrI-free SRI and the SRI/HtrI complex into liposomes, which exhibit photocycles with opened and closed cytoplasmic channels, respectively, as in the membrane.
This opens the way for studying the light-induced conformational change and the interaction in vitro by fluorescence and spin labeling. Single-cysteine mutations were introduced into helix F of SRI, labeled with a nitroxide spin probe and a fluorescent probe, reconstituted into proteoliposomes, and light-induced conformational changes were detected in the complex. The probe signals can now be used as the readout of signaling to analyze mutants and the kinetics of signal relay.
Abstract:
High-resolution, small-bore PET systems suffer from a tradeoff between system sensitivity and image quality. In these systems, long crystals are necessary for high system sensitivity, but they allow mispositioning of the line of response due to parallax error, and this mispositioning blurs resolution. One means to allow long crystals without introducing parallax errors is to determine the depth of interaction (DOI) of the gamma-ray interaction within the detector module. While DOI has been investigated previously, newly available solid-state photomultipliers (SSPMs) are well-suited to PET applications and allow new modules to be investigated. Depth of interaction in full modules is a relatively new field, so even where high-performance DOI-capable modules are available, appropriate means to characterize and calibrate them are not well established. This work presents an investigation of DOI-capable arrays and techniques for characterizing and calibrating those modules. The methods introduced here accurately and reliably characterize and calibrate energy, timing, and event interaction positioning. Also presented are a characterization of the spatial resolution of DOI-capable modules and a measurement of DOI effects for different angles between detector modules. These arrays have been built into a prototype PET system that delivers better than 2.0 mm resolution with a single-sided stopping power in excess of 95% for 511 keV γ rays. The noise properties of SSPMs scale with the active area of the detector face, so the best signal-to-noise ratio is possible with parallel readout of each SSPM photodetector pixel rather than multiplexing signals together. This work additionally investigates several algorithms for improving timing performance using timing information from multiple SSPM pixels when light is distributed among several photodetectors.
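One simple way to combine timing information from multiple SSPM pixels, in the spirit of the algorithms mentioned, is an amplitude-weighted mean of the per-pixel trigger times, so that pixels that collected more scintillation light count more. This particular estimator is an illustrative choice, not necessarily one used in the work.

```python
import numpy as np

def combined_timestamp(times_ns, amplitudes):
    """Amplitude-weighted mean of per-pixel trigger times [ns].

    times_ns   : per-pixel timestamps
    amplitudes : per-pixel signal amplitudes (proportional to collected light)
    """
    t = np.asarray(times_ns, float)
    a = np.asarray(amplitudes, float)
    return float(np.sum(a * t) / np.sum(a))
```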
Abstract:
To ensure the integrity of an intensity-modulated radiation therapy (IMRT) treatment, each plan must be validated through a measurement-based quality assurance (QA) procedure, known as patient-specific IMRT QA. Many methods of measurement and analysis have evolved for this QA. There is no standard among clinical institutions, and many devices and action levels are used. Since the acceptance criteria determine whether the dosimetric tool’s output passes the patient plan, it is important to see how these parameters influence the performance of the QA device. While analyzing the results of IMRT QA, it is important to understand the variability in the measurements. Due to the different form factors of the many QA methods, this reproducibility can be device dependent. These questions of patient-specific IMRT QA reproducibility and performance were investigated across five dosimeter systems: a helical diode array, radiographic film, an ion chamber, a diode array (AP field-by-field, AP composite, and rotational composite), and an in-house designed multiple ion chamber phantom. The reproducibility was gauged for each device by comparing the coefficients of variation (CV) across six patient plans. The performance of each device was determined by comparing each one’s ability to accurately label a plan as acceptable or unacceptable relative to a gold standard. All methods demonstrated a CV of less than 4%. Film proved to have the highest variability in QA measurement, likely due to the high level of user involvement in the readout and analysis. This is further shown by the fact that the setup contributed more variation than the readout and analysis for all of the methods except film. When evaluated for the ability to correctly label acceptable and unacceptable plans, two distinct performance groups emerged, with the helical diode array, AP composite diode array, film, and ion chamber in the better group, and the rotational composite and AP field-by-field diode array in the poorer group.
Additionally, optimal threshold cutoffs were determined for each of the dosimetry systems. These findings, combined with practical considerations for factors such as labor and cost, can aid a clinic in its choice of an effective and safe patient-specific IMRT QA implementation.
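The reproducibility metric above is the coefficient of variation; a minimal sketch (the measurement values are invented):

```python
import numpy as np

def coefficient_of_variation(measurements):
    """CV = sample standard deviation / mean, as a percentage."""
    m = np.asarray(measurements, float)
    return 100.0 * m.std(ddof=1) / m.mean()

# Hypothetical repeated QA pass-rate measurements for one plan
cv = coefficient_of_variation([98.0, 100.0, 102.0])   # 2% for these numbers
```

Note ddof=1 gives the sample (not population) standard deviation, which is the appropriate choice for a small number of repeated plans.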
Abstract:
The AEgIS experiment is an interdisciplinary collaboration between atomic, plasma and particle physicists, with the scientific goal of performing the first precision measurement of the Earth's gravitational acceleration on antimatter. The principle of the experiment is as follows: cold antihydrogen atoms are synthesized in a Penning-Malmberg trap, are Stark-accelerated towards a moiré deflectometer, the classical counterpart of an atom interferometer, and annihilate on a position-sensitive detector. Crucial to the success of the experiment is an antihydrogen detector that will be used to demonstrate the production of antihydrogen and also to measure the temperature of the anti-atoms and the creation of a beam. The operating requirements for the detector are very challenging: it must operate at close to 4 K inside a 1 T solenoidal magnetic field and identify the annihilation of the antihydrogen atoms that are produced during the 1 μs period of antihydrogen production. Our solution, called the FACT detector, is based on a novel multi-layer scintillating fiber tracker with SiPM readout and an off-the-shelf FPGA-based readout system. This talk will present the design of the FACT detector and detail the operation of the detector in the context of the AEgIS experiment.
Abstract:
Gamma-ray astronomy studies the most energetic particles that reach the Earth from space. These γ rays are not generated by thermal processes in ordinary stars, but by particle acceleration mechanisms in celestial objects such as active galactic nuclei, pulsars and supernovae, or by possible dark matter annihilation processes. The γ rays coming from these objects and their characteristics provide valuable information with which scientists try to understand the physical processes that occur in them and to develop theoretical models that describe their behavior faithfully. The problem with observing γ rays is that they are absorbed by the upper layers of the atmosphere and do not reach the surface (otherwise, the Earth would be uninhabitable). There are thus only two ways to observe γ rays: placing detectors on board satellites, or observing the secondary effects that γ rays produce in the atmosphere. When a γ ray reaches the atmosphere, it interacts with the particles of the air and generates a highly energetic electron-positron pair. These secondary particles in turn generate further, progressively less energetic secondary particles. While these particles still have enough energy to travel faster than the speed of light in air, they produce a bluish glow known as Cherenkov radiation, lasting a few nanoseconds. From the Earth's surface, special telescopes known as Cherenkov telescopes or IACTs (Imaging Atmospheric Cherenkov Telescopes) can detect this Cherenkov radiation and even image the shape of the Cherenkov shower. From these images it is possible to determine the main characteristics of the original γ ray, and with enough rays one can deduce important characteristics of the object that emitted them, hundreds of light-years away.
However, detecting Cherenkov showers produced by γ rays is far from easy. Showers generated by low-energy γ photons emit few photons, and only for a few nanoseconds, while those corresponding to high-energy γ rays, although they produce more electrons and last longer, become rarer the higher their energy. This leads to two lines of Cherenkov telescope development: to observe low-energy showers, large reflectors are needed to collect many of the few photons these showers produce; conversely, high-energy showers can be detected with small telescopes, but it is advantageous to cover a large area of ground with them to increase the number of detected events. The CTA (Cherenkov Telescope Array) project was born with the goal of improving the sensitivity of current Cherenkov telescopes in the high (> 10 TeV), medium (100 GeV - 10 TeV) and low (10 GeV - 100 GeV) energy ranges. This project, in which more than 27 countries participate, aims to build an observatory in each hemisphere, each of which will have 4 large telescopes (LSTs), about 30 medium-sized ones (MSTs) and up to 70 small ones (SSTs). With such an array, two goals will be achieved. First, by drastically increasing the collection area with respect to current IACTs, more γ rays will be detected in all energy ranges. Second, when the same Cherenkov shower is observed by several telescopes at once, it can be analyzed with much greater precision thanks to stereoscopic techniques. This thesis covers several technical developments carried out as contributions to the medium and large CTA telescopes, specifically to the trigger system.
Since Cherenkov showers are so brief, the systems that digitize and read out the data from each pixel have to run at very high frequencies (≈1 GHz), which makes continuous operation unfeasible, as the amount of stored data would be unmanageable. Instead, the analog signals are sampled and the analog samples are kept in a circular buffer of a few µs. While the signals remain in the buffer, the trigger system performs a fast analysis of the received signals and decides whether the image in the buffer corresponds to a Cherenkov shower and deserves to be saved, or whether it can be ignored, allowing the buffer to be overwritten. The decision of whether the image deserves to be saved is based on the fact that Cherenkov showers produce photon detections in nearby pixels at very close times, unlike the randomly arriving photons of the NSB (night sky background). To detect large showers, it is enough to check that more than a certain number of pixels in a region have detected more than a certain number of photons within a time window of a few nanoseconds. To detect small showers, however, it is more convenient to take into account how many photons have been detected in each pixel (a technique known as a sum trigger). The trigger system developed in this thesis aims to optimize the sensitivity at low energies, so it analogically sums the signals received in each pixel of a trigger region and compares the result with a threshold directly expressible in detected photons (photoelectrons). The designed system allows trigger regions of selectable size: 14, 21 or 28 pixels (2, 3 or 4 clusters of 7 pixels each), with a high degree of overlap between them. In this way, any excess of light in a compact region of 14, 21 or 28 pixels is detected and generates a trigger pulse.
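The sum-trigger decision can be approximated digitally: sum the per-pixel signals in each (overlapping) trigger region and fire when the sum exceeds a threshold expressed in photoelectrons. The region definitions below are arbitrary toy examples, not the real 7-pixel-cluster camera geometry.

```python
import numpy as np

def sum_trigger(pixel_signals, regions, threshold_pe):
    """Return the indices of trigger regions whose summed signal exceeds
    the threshold (in photoelectrons)."""
    s = np.asarray(pixel_signals, float)
    return [i for i, region in enumerate(regions) if s[list(region)].sum() > threshold_pe]

# Toy camera: 12 pixels, a compact flash on pixels 2-4, overlapping regions.
pixels = np.zeros(12)
pixels[2:5] = [5.0, 8.0, 6.0]                        # photoelectrons per pixel
regions = [(0, 1, 2, 3), (2, 3, 4, 5), (6, 7, 8, 9)]
fired = sum_trigger(pixels, regions, threshold_pe=15.0)
```

Note how only the region containing the whole flash fires: a per-pixel majority condition would miss this faint, compact signal, which is the motivation for summing before thresholding.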
In the most basic version of the trigger system, this pulse is distributed throughout the camera through an intricate distribution system, so that all clusters are read at the same time, regardless of their position in the camera. In this way, the trigger system stores a complete camera image every time the number of photons set as the threshold is exceeded in a trigger region. However, this way of operating has two main drawbacks. First, the shower almost always occupies only a small area of the camera, so many pixels without any information are stored. With many telescopes, as will be the case of CTA, the amount of useless information stored for this reason can be considerable. On the other hand, each trigger stores only a few nanoseconds around the trigger instant. Large showers, however, can last considerably longer, so part of the information is lost due to temporal truncation. To solve both problems, a trigger and readout scheme based on two thresholds has been proposed. The high threshold decides whether there is an event in the camera and, if so, only the trigger regions exceeding the low threshold are read, for a longer time. In this way, storing information from empty pixels is avoided, and the fixed images of the showers become short "videos" representing the temporal development of the shower. This new scheme is called COLIBRI (Concept for an Optimized Local Image Building and Readout Infrastructure), and it is described in detail in chapter 5. An important problem affecting sumtrigger schemes like the one presented in this thesis is that, to add the signals coming from each pixel properly, they must take the same time to reach the adder.
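The two-threshold selection at the heart of this scheme can be sketched as follows. This is a simplified sketch of the idea, not the real readout logic; the function name and the threshold values are assumptions made for illustration.

```python
# Sketch of a two-threshold trigger/readout selection in the spirit of
# the COLIBRI scheme: a high threshold decides whether there is an event
# at all; if so, only the regions above a low threshold are read out,
# for a longer time window. Names and values are illustrative assumptions.

def two_threshold_readout(region_sums, high_thr, low_thr):
    """region_sums: summed signal per trigger region (photoelectrons).
    Returns the indices of the regions to read out, or [] if no event.
    """
    if not any(s > high_thr for s in region_sums):
        return []  # no camera-wide event: let the buffer be overwritten
    # Event present: read only regions carrying some signal,
    # skipping empty pixels entirely
    return [i for i, s in enumerate(region_sums) if s > low_thr]

print(two_threshold_readout([1.0, 25.0, 8.0, 0.5],
                            high_thr=20.0, low_thr=5.0))  # -> [1, 2]
```

Region 1 fires the high threshold, so the event is kept, and region 2 is read out as well because it exceeds the low threshold; the nearly empty regions 0 and 3 are discarded.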
The photomultipliers used in each pixel introduce different delays that must be compensated to perform the additions correctly. The effect of these delays has been studied, and a system to compensate for them has been developed. Finally, the next level of the trigger systems for effectively distinguishing Cherenkov showers from the NSB consists of looking for simultaneous (or very close in time) triggers in neighbouring telescopes. A system named Trigger Interface Board (TIB) has been developed with this function, together with other interfacing functions between systems. It consists of a module that will be mounted in the camera of each LST or MST, connected through optical fibers to the neighbouring telescopes. When a telescope has a local trigger, it is sent to all connected neighbours and vice versa, so each telescope knows whether its neighbours have triggered. Once the delay differences due to propagation in the optical fibers, and of the Cherenkov photons themselves in the air depending on the pointing direction, are compensated, coincidences are searched for, and if the trigger condition is fulfilled, the camera in question is read out, synchronized with the local trigger. Although the whole trigger system is the result of a collaboration between several groups, mainly IFAE, CIEMAT, ICC-UB and UCM in Spain, with the help of French and Japanese groups, the core of this thesis is the Level 1 trigger and the Trigger Interface Board, the two systems for which the author has been the main engineer. For this reason, this thesis includes abundant technical information about these systems.
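The per-pixel delay compensation that must precede the analog sum can be illustrated with a small digital analogue. This is only a sketch under assumed values: the sampling period, the delay figures and the function name are hypothetical, and the real system compensates delays in the analog domain.

```python
# Illustrative sketch of per-pixel delay compensation before summing:
# each pixel waveform is shifted back by its calibrated delay so that
# pulses from the same shower line up. Sample rate, delays and names
# are assumptions for this example only.

def align_and_sum(waveforms, delays_ns, sample_ns=1.0):
    """waveforms: dict pixel_id -> list of samples (photoelectrons)
    delays_ns:  dict pixel_id -> delay of that pixel's PMT chain (ns)
    sample_ns:  sampling period (1 ns for a ~1 GHz sampler)
    """
    n = max(len(w) for w in waveforms.values())
    summed = [0] * n
    for pix, wave in waveforms.items():
        shift = round(delays_ns.get(pix, 0.0) / sample_ns)
        for i, s in enumerate(wave):
            j = i - shift  # undo the delay: move the pulse earlier
            if 0 <= j < n:
                summed[j] += s
    return summed

# Two pixels seeing the same 3-sample pulse, the second delayed by 2 ns
waves = {0: [0, 1, 2, 1, 0, 0, 0], 1: [0, 0, 0, 1, 2, 1, 0]}
print(align_and_sum(waves, {0: 0.0, 1: 2.0}))  # -> [0, 2, 4, 2, 0, 0, 0]
```

With the delays compensated the two pulses add coherently to a peak of 4; summed without compensation, the same pulses would smear into a broader, lower bump that might stay below the trigger threshold.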
There are currently important future development lines regarding both the camera trigger (implementation in ASICs) and the inter-telescope trigger (topological trigger), which will lead to interesting improvements over the current designs in the coming years, and which will hopefully benefit the whole scientific community participating in CTA. ABSTRACT γ-ray astronomy studies the most energetic particles arriving at the Earth from outer space. These γ rays are not generated by thermal processes in ordinary stars, but by particle acceleration mechanisms in astronomical objects such as active galactic nuclei, pulsars and supernovae, or as a result of dark matter annihilation processes. The γ rays coming from these objects and their characteristics provide scientists with valuable information as they try to understand the underlying physics of these objects and to develop theoretical models able to describe them accurately. The problem when observing γ rays is that they are absorbed in the upper layers of the atmosphere, so they do not reach the Earth's surface (otherwise the planet would be uninhabitable). Therefore, there are only two possible ways to observe γ rays: using detectors on board satellites, or observing their secondary effects in the atmosphere. When a γ ray reaches the atmosphere, it interacts with the particles in the air, generating a highly energetic electron-positron pair. These secondary particles in turn generate more particles, with less energy each time. While these particles are still energetic enough to travel faster than the speed of light in the air, they produce a bluish radiation known as Cherenkov light during a few nanoseconds. From the Earth's surface, special telescopes known as Cherenkov telescopes or IACTs (Imaging Atmospheric Cherenkov Telescopes) are able to detect the Cherenkov light and even to take images of the Cherenkov showers.
From these images it is possible to infer the main parameters of the original γ ray, and with some γ rays it is possible to deduce important characteristics of the emitting object, hundreds of light-years away. However, detecting Cherenkov showers generated by γ rays is not a simple task. The showers generated by low-energy γ rays contain few photons and last only a few nanoseconds, while those corresponding to high-energy γ rays, although containing more photons and lasting longer, are much more unlikely. This results in two clearly differentiated development lines for IACTs: in order to detect low-energy showers, big reflectors are required to collect as many photons as possible of the few these showers contain. On the contrary, small telescopes are able to detect high-energy showers, but a large area on the ground should be covered to increase the number of detected events. With the aim of improving the sensitivity of current Cherenkov telescopes in the high (> 10 TeV), medium (100 GeV - 10 TeV) and low (10 GeV - 100 GeV) energy ranges, the CTA (Cherenkov Telescope Array) project was created. This project, with more than 27 participating countries, intends to build an observatory in each hemisphere, each one equipped with 4 large size telescopes (LSTs), around 30 middle size telescopes (MSTs) and up to 70 small size telescopes (SSTs). With such an array, two targets would be achieved. First, the drastic increase in collection area with respect to current IACTs will lead to the detection of more γ rays in all energy ranges. Secondly, when a Cherenkov shower is observed by several telescopes at the same time, it is possible to analyze it much more accurately thanks to stereoscopic techniques. The present thesis gathers several technical developments for the trigger system of the medium and large size telescopes of CTA. As the Cherenkov showers are so short, the digitization and readout systems corresponding to each pixel must work at very high frequencies (≈ 1 GHz).
This makes it unfeasible to read out the data continuously, because the amount of data would be unmanageable. Instead, the analog signals are sampled, storing the analog samples in a temporal ring buffer able to hold up to a few µs. While the signals remain in the buffer, the trigger system performs a fast analysis of the signals and decides whether the image in the buffer corresponds to a Cherenkov shower and deserves to be stored, or whether it can be ignored, allowing the buffer to be overwritten. The decision whether to save the image or not is based on the fact that Cherenkov showers produce photon detections in nearby pixels at very close times, in contrast to the random arrival of the NSB photons. Checking whether more than a certain number of pixels in a trigger region have detected more than a certain number of photons during a certain time window is enough to detect large showers. However, also taking into account how many photons have been detected in each pixel (the sumtrigger technique) is more convenient for optimizing the sensitivity to low-energy showers. The trigger system presented in this thesis aims to optimize the sensitivity to low-energy showers, so it performs the analog addition of the signals received in each pixel of the trigger region and compares the sum with a threshold that can be directly expressed as a number of detected photons (photoelectrons). The trigger system allows the selection of trigger regions of 14, 21, or 28 pixels (2, 3 or 4 clusters with 7 pixels each), with extensive overlapping. In this way, every light increment inside a compact region of 14, 21 or 28 pixels is detected and a trigger pulse is generated. In the most basic version of the trigger system, this pulse is simply distributed throughout the camera by means of a complex distribution system, in such a way that all the clusters are read at the same time, independently of their position in the camera.
Thus, the readout saves a complete camera image whenever the number of photoelectrons set as the threshold is exceeded in a trigger region. However, this way of operating has two important drawbacks. First, the shower usually covers only a small part of the camera, so many pixels without relevant information are stored. When there are many telescopes, as will be the case for CTA, the amount of useless stored information can be very high. On the other hand, with every trigger only some nanoseconds of information around the trigger time are stored. In the case of large showers, the duration of the shower can be considerably longer, losing information due to the temporal cut. With the aim of overcoming both limitations, a trigger and readout scheme based on two thresholds has been proposed. The high threshold decides whether there is a relevant event in the camera, and if so, only the trigger regions exceeding the low threshold are read, during a longer time. In this way, the information from empty pixels is not stored, and the fixed images of the showers become short "videos" containing the temporal development of the shower. This new scheme is named COLIBRI (Concept for an Optimized Local Image Building and Readout Infrastructure), and it is described in depth in chapter 5. An important problem affecting sumtrigger schemes like the one presented in this thesis is that, in order to add the signals from each pixel properly, they must arrive at the same time. The photomultipliers used in each pixel introduce different delays which must be compensated to perform the additions properly. The effect of these delays has been analyzed, and a delay compensation system has been developed. The next trigger level consists of looking for simultaneous (or very close in time) triggers in neighbouring telescopes. This function, together with others related to interfacing different systems, has been implemented in a system named Trigger Interface Board (TIB).
This system comprises one module which will be placed inside the LST and MST cameras, connected to the neighbouring telescopes through optical fibers. When a telescope receives a local trigger, it is re-sent to all the connected neighbours and vice versa, so every telescope knows whether its neighbours have been triggered. Once the delay differences due to propagation in the optical fibers and in the air, depending on the pointing direction, are compensated, the TIB looks for coincidences, and if the trigger condition is fulfilled, the camera is read out a fixed time after the local trigger arrived. Although the whole trigger system is the result of the cooperation of several groups, especially IFAE, CIEMAT, ICC-UB and UCM in Spain, with some help from French and Japanese groups, the Level 1 trigger and the Trigger Interface Board constitute the core of this thesis, as they are the two systems designed by the author. For this reason, a large amount of technical information about these systems has been included. There are important future development lines regarding both the camera trigger (implementation in ASICs) and the stereo trigger (topological trigger), which will produce interesting improvements over the current designs during the following years, and will hopefully be useful for the whole scientific community participating in CTA.
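The delay-compensated coincidence search performed between neighbouring telescopes can be sketched as follows. This is a simplified software analogue of the idea, assuming hypothetical telescope names, delay figures and coincidence window; the real TIB implements this in hardware.

```python
# Sketch of a stereo coincidence search in the spirit of the TIB:
# compensate each neighbour's known delay (fiber propagation plus the
# Cherenkov-front delay for the current pointing direction), then look
# for a trigger within a coincidence window around the local trigger.
# All names, delays and window sizes are illustrative assumptions.

def stereo_coincidence(local_t, neighbour_triggers, delays_ns, window_ns=50.0):
    """local_t: local trigger timestamp (ns)
    neighbour_triggers: dict telescope_id -> list of trigger timestamps (ns)
    delays_ns: dict telescope_id -> delay correction for that link (ns)
    Returns True if any delay-corrected neighbour trigger coincides.
    """
    for tel, times in neighbour_triggers.items():
        for t in times:
            # Remove the known per-telescope delay before comparing
            if abs((t - delays_ns.get(tel, 0.0)) - local_t) <= window_ns:
                return True
    return False

neigh = {"MST-2": [1180.0], "MST-3": [5000.0]}
delays = {"MST-2": 150.0, "MST-3": 150.0}
print(stereo_coincidence(1000.0, neigh, delays))  # -> True
```

Here MST-2's trigger, once its 150 ns link delay is removed, falls 30 ns from the local trigger and therefore counts as a coincidence, while MST-3's trigger is far outside the window.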