967 results for detectors


Relevance: 10.00%

Abstract:

Detector design is a key aspect of the development of new millimeter-wave systems. In this paper, two detectors in microstrip technology are presented. They use zero-bias Schottky diodes to detect signals from low frequencies up to 40 GHz. High sensitivity, a flat frequency response, and ultra-broadband operation are the main features of these designs. They are also cheap and easy to assemble because they are built in microstrip technology. This paper discusses the main technological issues that must be taken into account when designing such detectors.
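For orientation, a zero-bias Schottky diode operated below its compression point behaves as a square-law detector: the output voltage is proportional to the input RF power. A toy numerical illustration of that behavior (the responsivity value is invented, not taken from the paper):

```python
import numpy as np

# Toy illustration (not from the paper): a zero-bias Schottky detector in its
# square-law region produces an output voltage proportional to input RF power,
# V_out ≈ beta * P_in. The responsivity beta (V/W) here is a made-up value.
beta = 1000.0  # assumed responsivity, V/W (i.e., 1 mV per microwatt)

p_in_dbm = np.array([-40.0, -30.0, -20.0])    # input power levels, dBm
p_in_w = 1e-3 * 10.0 ** (p_in_dbm / 10.0)     # convert dBm to watts

v_out = beta * p_in_w                          # square-law detection
for p, v in zip(p_in_dbm, v_out * 1e3):
    print(f"P_in = {p:6.1f} dBm -> V_out ≈ {v:.3f} mV")
```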

Relevance: 10.00%

Abstract:

In the framework of the ITER Control Breakdown Structure (CBS), Plant System Instrumentation & Control (I&C) defines the hardware and software required to control one or more plant systems [1]. For diagnostics, most of the complex Plant System I&C is to be delivered by the ITER Domestic Agencies (DAs). As an example for the DAs, the ITER Organization (IO) has developed several use cases for diagnostics Plant System I&C that fully comply with the guidelines presented in the Plant Control Design Handbook (PCDH) [2]. One such use case is for neutron diagnostics, specifically the Fission Chamber (FC), which is responsible for delivering time-resolved measurements of neutron source strength and fusion power to aid in assessing the functional performance of ITER [3]. ITER will deploy four Fission Chamber units, each consisting of three individual FC detectors. Two of these detectors contain Uranium-235 for neutron detection, while a third "dummy" detector provides gamma and noise detection. The neutron flux from each MFC is measured by three methods:
• Counting Mode: counts individual pulses and records their positions in the acquisition record; the pulse parameters (threshold and width) are user-configurable.
• Campbelling Mode (Mean Square Voltage): measures the RMS deviation of the signal amplitude from its average value.
• Current Mode: integrates the signal amplitude over the measurement period.
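The three acquisition modes can be demonstrated on a simulated pulse train. The sketch below is illustrative only: the signal model, threshold and sampling rate are assumptions, not ITER parameters.

```python
import numpy as np

# Hedged sketch (not ITER code): the three fission-chamber acquisition modes
# applied to a simulated detector waveform.
rng = np.random.default_rng(0)
fs = 1e6                           # sampling rate, Hz (assumed)
n = 100_000
signal = rng.normal(0.0, 0.01, n)  # baseline noise, arbitrary units
for t in rng.integers(0, n - 5, 200):   # inject 200 pulses of amplitude 1.0
    signal[t:t + 5] += 1.0

threshold = 0.5

# Counting mode: number of discrete pulses (rising edges above threshold).
above = signal > threshold
counts = int(np.sum(above[1:] & ~above[:-1]))

# Campbelling (mean-square-voltage) mode: RMS deviation from the mean;
# by Campbell's theorem the variance is proportional to the event rate.
campbell = float(np.sqrt(np.mean((signal - signal.mean()) ** 2)))

# Current mode: integral of the signal over the measurement period.
current = float(signal.sum() / fs)

print(counts, campbell, current)
```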

Relevance: 10.00%

Abstract:

The emission of different harmful gases during the storage of solid fuels is a common phenomenon. The gases emitted while these fuels heat up are the same as those emitted during combustion, mainly CO and CO2 [1]. Nowadays, measurement of these emissions is mandatory, which is why many industrial facilities install gas detectors to monitor them. It would also be useful, however, to predict the emissions and to determine the temperatures at which the emission process begins.

Relevance: 10.00%

Abstract:

This work explores the feasibility of automatic gamma-radiation spectral decomposition by linear algebraic equation-solving algorithms based on pseudo-inverse techniques. The algorithms have been designed taking into account their possible implementation on specific-purpose processors of low complexity. In the first chapter, the techniques for the detection and measurement of gamma radiation employed to construct the spectra used throughout the research are reviewed. The basic concepts related to the nature and properties of hard electromagnetic radiation are re-examined, together with the physical and electronic processes involved in its detection, with special emphasis on the intrinsically statistical nature of the spectrum build-up process, which is treated as a classification of the number of individual photon detections as a function of the energy associated with each photon. To this end, a brief description is given of the most important matter-radiation interaction phenomena conditioning the detection and spectrum-formation processes. The radiation detector is considered the most critical element of the measurement system, since it strongly conditions the detection process. For this reason, the main detector types are reviewed, with special emphasis on semiconductor detectors, as these are the most frequently employed today. Finally, the fundamental electronic subsystems for preconditioning and treating the signal delivered by the detector, classically referred to as Nuclear Electronics, are described. As far as spectroscopy is concerned, the subsystem of most interest for the present work is the multichannel analyzer, which performs the qualitative treatment of the signal and builds a histogram of radiation intensity over the range of energies to which the detector is sensitive. The resulting N-dimensional vector is generally known as the radiation spectrum; the different radionuclides contributing to a composite source each leave their fingerprint in it.

The second chapter gives an exhaustive review of the mathematical methods devised to date for identifying the radionuclides present in a composite spectrum and for quantifying their relative activities. One of these, multiple linear regression, is proposed as the approach best suited to the constraints and restrictions of the problem: the ability to handle low-resolution spectra, the absence of a human operator (unsupervised operation), and the possibility of being supported by low-complexity algorithms implementable on dedicated VLSI processors. The analysis problem is formally stated in the third chapter following these guidelines, and it is shown that it admits a solution within the theory of linear associative memories: an operator based on this kind of structure can solve the desired spectral decomposition. In the same context, a pair of complementary adaptive algorithms is proposed for constructing the solving operator, whose arithmetic characteristics make them especially suitable for implementation on VLSI processors. The adaptive nature of the associative memory gives the operator great flexibility with respect to the progressive incorporation of new information.

The fourth chapter addresses an additional and highly complex problem: the treatment of the spectral deformations introduced by instrumental drifts in the detector and in the preconditioning electronics. These deformations invalidate the linear regression model used to describe the spectrum under analysis. A model including the drifts as additional contributions to the composite spectrum is therefore derived; it implies a simple extension of the associative memory, which then tolerates drifts in the problem mixture and performs a robust analysis of the contributions. The extension method is based on the assumption of small perturbations. Laboratory practice shows, however, that instrumental drifts can sometimes produce severe spectral distortions that cannot be handled by this model. The fifth chapter therefore recasts the problem of measurements affected by strong drifts in terms of non-linear optimization theory. This reformulation leads to a recursive algorithm, inspired by the Gauss-Newton method, that introduces the concept of feedback linear memory: an operator with substantially improved capability to decompose mixtures with strong drift, without the excessive computational load of classical non-linear optimization algorithms. The work concludes with a discussion of the results obtained at the three main levels of study addressed in chapters three, four and five, the main conclusions derived from the study, and an outline of possible lines of future work.
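Under the multiple-linear-regression view adopted above, a measured spectrum s is modeled as a linear mixture of reference spectra, s ≈ Ac, and the activities c are recovered with a pseudo-inverse. A minimal NumPy sketch of that idea (the reference spectra and mixing weights are invented for illustration):

```python
import numpy as np

# Toy reference spectra: Gaussian photopeaks for three hypothetical
# radionuclides over 256 energy channels (invented shapes, not real data).
channels = np.arange(256)

def peak(center, width=6.0):
    return np.exp(-0.5 * ((channels - center) / width) ** 2)

A = np.stack([peak(60), peak(120), peak(200)], axis=1)  # N x 3 library matrix

c_true = np.array([1.0, 0.4, 0.7])                       # true activities
rng = np.random.default_rng(1)
s = A @ c_true + rng.normal(0.0, 0.01, channels.size)    # noisy composite spectrum

# Pseudo-inverse decomposition: c_hat = pinv(A) @ s solves the
# least-squares problem min ||A c - s||_2.
c_hat = np.linalg.pinv(A) @ s
print(np.round(c_hat, 3))   # close to c_true
```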

Relevance: 10.00%

Abstract:

In classical distributed systems, each process has a unique identity. Today, new distributed systems have emerged in which a unique identity cannot always be assigned to each process. For example, in many sensor networks a unique identity cannot be included in each device because of its small storage capacity, its reduced computational power, or the huge number of devices to be identified. In these cases we have to work with anonymous distributed systems, where processes cannot be identified. Consensus cannot be solved in either classical or anonymous asynchronous distributed systems where processes can crash. To bypass this impossibility result, failure detectors are added to these systems. It is known that Ω is the weakest failure detector class for solving consensus in classical asynchronous systems when a majority of processes never crashes. Although AΩ was introduced as an anonymous version of Ω, finding the weakest failure detector for solving consensus in anonymous systems when a majority of processes never crashes remains an open question. Furthermore, AΩ has the important drawback that it is not implementable. Very recently, AΩ′ has been introduced as a counterpart of Ω for anonymous systems. In this paper, we show that the AΩ′ failure detector class is strictly weaker than AΩ (i.e., AΩ′ provides less information about process crashes than AΩ). We also present the first implementation of AΩ′ (hence showing that AΩ′ is implementable), and, finally, we include the first implementation of consensus in anonymous asynchronous systems augmented with AΩ′ in which a majority of processes does not crash.
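For intuition, the classical Ω failure detector eventually outputs the same correct process as leader at every process, and heartbeat counting is a common way to realize it when identifiers exist. The toy sketch below illustrates that classical, non-anonymous notion (the IDs, timeout and usage are invented); anonymous systems lack precisely the identifiers this sketch relies on, which is what makes detectors such as AΩ′ interesting.

```python
import time

# Toy Ω-style eventual leader election by heartbeats (illustrative only).
# Each process periodically broadcasts its ID; everyone trusts the smallest
# ID heard recently. Anonymous systems cannot do this.
class OmegaDetector:
    def __init__(self, my_id, timeout=3.0):
        self.my_id = my_id
        self.timeout = timeout
        self.last_heard = {}             # id -> last heartbeat time

    def on_heartbeat(self, sender_id):
        self.last_heard[sender_id] = time.monotonic()

    def leader(self):
        now = time.monotonic()
        self.last_heard[self.my_id] = now    # our own heartbeat
        alive = [p for p, t in self.last_heard.items() if now - t < self.timeout]
        return min(alive)                # eventually agrees at all correct processes

d = OmegaDetector(my_id=2)
d.on_heartbeat(0)
d.on_heartbeat(1)
print(d.leader())  # -> 0 while process 0 keeps heartbeating
```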

Relevance: 10.00%

Abstract:

Distributed computing models typically assume that every process in the system has a distinct identifier (ID), or that each process is programmed differently; such systems are called eponymous. In these systems the unique ID helps to solve problems: it can be incorporated into messages to make them traceable (i.e., to identify which process they are sent to or from), facilitating message transmission; several problems (leader election, consensus, etc.) can be solved without a priori knowledge of the network properties if processes have unique IDs; a value a process announces in its register will not be overwritten by other processes; and, in general, the ID is useful for breaking symmetry. Hence eponymous systems have significantly influenced the distributed computing community, both in theory and in practice. However, unique IDs also have disadvantages: they can leak information about the network (e.g., its size); processes in the system have no privacy; and assigning unique IDs is costly in bulk production (e.g., of sensors). This motivates homonymous systems, in which some processes may share the same ID and be programmed identically. Furthermore, if all processes share the same ID, or have no ID at all, the system is called anonymous. In homonymous and anonymous distributed systems, the symmetry problem (i.e., how to tell which process a message was sent from) is the main obstacle in the design of algorithms. This thesis proposes different symmetry-breaking methods (e.g., random functions, counting techniques) to solve agreement problems. Agreement is a fundamental problem in distributed computing comprising a family of abstractions; this thesis focuses mainly on the design of consensus, set agreement, and broadcast algorithms in anonymous and homonymous distributed systems. First, the fault-tolerant broadcast abstraction is studied in anonymous systems, treating reliable and fair lossy communication channels separately. Two classes of anonymous failure detectors, AΘ and AP∗, are proposed; both, together with the previously proposed failure detector ψ, are implemented and used to enrich the system model in order to implement the broadcast abstraction. Then, in the study of the consensus abstraction, it is proved that the AΩ′ failure detector class is strictly weaker than AΩ and that AΩ′ is implementable, and the first implementation of consensus in anonymous asynchronous distributed systems augmented with AΩ′, where a majority of processes does not crash, is given. Finally, a generalization of consensus, k-set agreement, is studied, together with the weakest failure detector L that solves it in asynchronous message-passing systems where processes may crash and recover, with homonyms (i.e., processes may have equal identities), and without complete initial knowledge of the membership.
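One of the symmetry-breaking devices mentioned in this abstract, the random function, can be illustrated with a toy: identical anonymous processes draw random labels and retry until all labels differ, after which messages become distinguishable. This is an invented illustration of the general idea, not the thesis's algorithm.

```python
import random

# Toy random symmetry breaking among n identical anonymous processes:
# each process draws a label from a large space; the round is re-run only
# if a collision occurs. With a 32-bit space, collisions are rare.
def break_symmetry(n, label_bits=32, rng=random.Random(7)):
    rounds = 0
    while True:
        rounds += 1
        labels = [rng.getrandbits(label_bits) for _ in range(n)]
        if len(set(labels)) == n:        # all labels distinct: symmetry broken
            return labels, rounds

labels, rounds = break_symmetry(8)
print(rounds, labels)
```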

Relevance: 10.00%

Abstract:

This thesis presents experimental work aimed at deepening the understanding of monolithic detector blocks as an alternative to segmented detectors for Positron Emission Tomography (PET). It includes the development, characterization, commissioning and evaluation of PET demonstrator prototypes based on monolithic blocks of cerium-doped lutetium yttrium orthosilicate (LYSO:Ce) read out by sensors compatible with high magnetic fields: avalanche photodiodes (APDs) and silicon photomultipliers (SiPMs). The prototypes implemented with APDs were built to study the viability of a previously simulated high-sensitivity PET prototype called BrainPET. This work describes and characterizes the integrated front-end electronics used in these prototypes, as well as the readout electronics developed specifically for them. The experimental setups used to obtain tomographic PET images and to train the neural-network algorithms that estimate the incidence positions of the γ photons on the surface of the monolithic blocks are presented. With the BrainPET prototype, satisfactory results were obtained for energy resolution (13% FWHM), spatial precision of the monolithic blocks (~2 mm FWHM) and spatial resolution of the PET image (1.5-1.7 mm FWHM at the center of the field of view, FoV). A resolving capability of ~2 mm in the PET image was also demonstrated by simultaneously acquiring images of radioactive sources placed at known distances. However, two important limitations of this prototype were identified. First, there was a lack of flexibility in working with an Application Specific Integrated Circuit (ASIC) whose electronic design was commercial rather than in-house, together with the high cost of modifying an ASIC design of such characteristics. Second, the final characterization of the BrainPET integrated electronics showed a timing resolution with ample room for improvement (~13 ns FWHM).

Taking these limitations into account, together with the technological evolution toward SiPM arrays, the knowledge acquired with the monolithic blocks was transferred to the newly available sensor technology, and a new front-end strategy was adopted: the FlexToT ASIC, an in-house design based on a Time-over-Threshold (ToT) measurement scheme in which the duration of the output pulse is proportional to the deposited energy. One of the most interesting features of this scheme is that it delivers digital pulses directly, instead of processing the amplitude of analog signals. This architecture replaces Analog-to-Digital Converters (ADCs) with Time-to-Digital Converters (TDCs), which can easily be implemented in Field Programmable Gate Arrays (FPGAs), reducing the power consumption and the complexity of the design. A new FlexToT demonstrator prototype based on SiPMs was built to validate this ASIC for monolithic or segmented blocks. The front-end electronics required to read out the FlexToT ASIC were designed and characterized, evaluating their linearity and dynamic range, their behavior in the presence of noise, and the differential nonlinearity of the TDCs implemented in the FPGA. Moreover, the electronics presented in this work can operate at high count rates and discriminate between different scintillators for phoswich applications. The FlexToT ASIC provides an excellent coincidence timing resolution for events in the 511 keV photopeak (128 ps FWHM), overcoming the timing limitations of the BrainPET prototype, while monolithic blocks read out by FlexToT ASICs yield an energy resolution of 15.4% FWHM at 511 keV. Finally, good results were obtained for the PET image quality and the resolving power of the FlexToT demonstrator, with spatial resolutions around 1.4 mm FWHM at the center of the FoV.
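The ToT principle described above can be shown numerically: measure how long a pulse stays above a threshold and use that width as the energy surrogate. A minimal sketch follows; the pulse shape, threshold and constants are invented, not FlexToT parameters.

```python
import numpy as np

# Toy Time-over-Threshold (ToT) measurement: the width of the interval
# during which a pulse exceeds a threshold grows with deposited energy.
t = np.linspace(0.0, 200.0, 4000)          # time axis, ns

def pulse(energy_kev, tau_rise=5.0, tau_fall=40.0):
    shape = np.exp(-t / tau_fall) - np.exp(-t / tau_rise)
    return energy_kev * shape / shape.max()   # peak amplitude == energy

threshold = 50.0                            # keV-equivalent threshold (assumed)
for energy in (200.0, 511.0, 1000.0):
    above = pulse(energy) > threshold
    tot_ns = np.count_nonzero(above) * (t[1] - t[0])
    print(f"E = {energy:6.1f} keV -> ToT ≈ {tot_ns:.1f} ns")
```

The monotonic (roughly logarithmic) energy-to-width mapping is what lets a TDC in an FPGA replace an ADC, as the abstract notes.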

Relevance: 10.00%

Abstract:

As a contribution to the study of heterogeneous media, this thesis covers theoretical modeling and simulation work on the optical properties of skin and of seawater, taken as paradigmatic examples of heterogeneous media. The starting point is the study of the propagation of optical radiation, in particular laser radiation, in biological tissue. The optical characterization of a tissue is critical for managing the radiation-tissue interaction on which both the diagnosis and the therapy of diseases and dysfunctions in the health sciences rely. A further aim is to provide a study methodology, with an engineering approach, for the optical properties of a heterogeneous medium in general, not necessarily biological tissue. As a consequence of this, and given the importance of water in biological tissues, the optical properties of water in a heterogeneous environment, seawater, are studied in a separate chapter. Seawater was selected as an additional object of study mainly because it is a heterogeneous system whose individual components are easy to describe and for which a broad literature is available, and because recent advances in photonic technologies are expected to allow their use in experimental methods of water analysis. Knowledge of its optical properties makes it possible to characterize different types of water according to their constituents and to identify their presence, which opens a wide range of applications.

In general terms, this thesis has achieved the following:
• A study of the state of the art of the optical properties of the skin and the identification of its light-scattering elements.
• A study methodology for obtaining data on the possible effects of radiation on biological tissues.
• The use of different software tools to simulate the transport of laser radiation in biological tissue.
• Simulation experiments involving lasers, biological tissues and detectors.
• Comparison of known experimental results with the simulated ones.
• A study of the instruments that measure the response to the propagation of laser radiation in anisotropic tissues.
• Original results for the diagnosis and treatment of skin, considering different skin types and, as a possible skin alteration, the presence of basal cell carcinoma (basalioma).
• The application of the methodology developed for skin to the simulation of seawater.
• Original simulation results on the amount of phytoplankton in water, aimed at facilitating the characterization of different types of water.

The dissertation is organized into six chapters and three annexes, each with its own bibliography. The first chapter focuses on the difficulty of studying and characterizing heterogeneous media, which behave inhomogeneously and anisotropically under optical radiation; it gives a brief introduction to the behavior of both tissues and the ocean under optical radiation and defines the main optical properties: absorption, scattering, anisotropy and the reflection coefficients. The second chapter approaches the problem of how to characterize the optical properties described in the first chapter: the theoretical models are introduced first, then the most widely used simulation methods, and finally the main techniques for measuring light propagation in living tissue. The third chapter, centered on the skin and its properties, synthesizes what is known about the behavior of skin under the propagation of optical radiation; its constituent elements and the different skin types are studied, and an example of an immediate application that benefits from this knowledge is described. Since the percentage of water in the human body is very high (about 70% in the skin), knowing how water affects the propagation of optical radiation provides useful reference patterns; to this end, seawater is studied. The fourth chapter examines the properties of seawater as a heterogeneous medium of particles: it presents a synthesis of the most significant scattering elements in the ocean, a study of their individual behavior under optical radiation, and their contribution to the ocean as a whole. The fifth chapter describes the results of the different simulations performed. The same simulation tools were used for the skin and for seawater, so both sets of results are presented in the same chapter. In the first case, different types of ocean water are analyzed by varying the phytoplankton concentration; the method makes it possible to verify the differences that can be found in the characterization and diagnosis of waters. The second case is the skin: the behavior of different skin types is studied to validate the method, and the results are shown to be compatible with current commercial applications such as laser hair removal. As a significant result, a possible methodology for the diagnosis of the skin cancer known as basal cell carcinoma is presented. A final chapter is devoted to future work based on real experimentation and the associated cost of carrying it out. The annexes deal, on the one hand, with the common thread of the whole thesis, the laser, and, on the other, with the absorption and scattering coefficients used in the simulations: the first condenses the main characteristics of laser radiation from the point of view of its generation, the second presents safety in its use, and the third collects the tables whose parameters are used in the experimental section. Although this thesis does not follow the canonical model of a doctoral dissertation, the reader will find the common structure of a thesis or research project interwoven in it: a state-of-the-art section with pedagogical examples to ease comprehension and a statement of objectives (chapters 1-4); a chapter subdivided into materials and methods, and results and discussion (chapter 5 with its subsections); and a closing look at the future work arising from the thesis (chapter 6).
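Light transport in heterogeneous media of this kind is commonly simulated with Monte Carlo random walks: step lengths are drawn from the extinction coefficient and scattering angles from the Henyey-Greenstein phase function. A minimal sketch under invented coefficients (not the thesis's values):

```python
import numpy as np

# Minimal Monte Carlo photon transport in a semi-infinite turbid medium
# (e.g., tissue or seawater). mu_a, mu_s and anisotropy g are illustrative.
rng = np.random.default_rng(42)
mu_a, mu_s, g = 0.1, 10.0, 0.9            # absorption/scattering, 1/mm
mu_t = mu_a + mu_s                        # extinction coefficient

def sample_hg(u):
    """Sample cos(theta) from the Henyey-Greenstein phase function."""
    frac = (1 - g * g) / (1 - g + 2 * g * u)
    return (1 + g * g - frac * frac) / (2 * g)

def new_direction(d):
    """Rotate unit direction d by an HG polar angle and a uniform azimuth."""
    cos_t = sample_hg(rng.random())
    sin_t = np.sqrt(1 - cos_t ** 2)
    phi = 2 * np.pi * rng.random()
    a = np.array([1.0, 0.0, 0.0]) if abs(d[2]) > 0.99 else np.array([0.0, 0.0, 1.0])
    u1 = np.cross(d, a); u1 /= np.linalg.norm(u1)
    u2 = np.cross(d, u1)
    return cos_t * d + sin_t * (np.cos(phi) * u1 + np.sin(phi) * u2)

depths = []
for _ in range(2_000):                    # photons launched straight down (+z)
    pos, d, w = np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0
    while w > 1e-2 and pos[2] >= 0:       # photon escapes once z < 0
        pos = pos + d * (-np.log(rng.random()) / mu_t)   # free path length
        w *= mu_s / mu_t                  # implicit absorption weighting
        d = new_direction(d)
    if pos[2] >= 0:
        depths.append(pos[2])

print(f"mean terminal depth ≈ {np.mean(depths):.2f} mm")
```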

Relevance: 10.00%

Abstract:

Hospitals and nursing homes today require a patient-nurse call system capable of controlling and managing every alarm that may be generated, in the shortest possible time and with maximum efficiency. To this end a complete solution called ConnectCare has been designed. The system's modular architecture and its use of IP communication allow it to be adapted to each situation, providing tailored solutions. The system is composed of a piece of software called Buslogic, which manages the alarms on a PC server, and of devices called Fonet Control TCP/IP, which play a dual role: on the one hand they act as telephone intercom devices, and on the other as alarm-management devices that also control external equipment. As a telephone intercom, the device integrates into the telephone network as an analog extension terminal, allowing intercommunication between the patient and the care staff; this intercom function is described only briefly, as it is not the subject of this project. The control function, by contrast, is the subject of this project, so its design and operation are described in greater depth. The control board receives signals from wired call devices, such as pushbutton handsets or bathroom pull-cords, and can also receive alerts from devices that are not strictly care-related, such as smoke detectors or motion detectors. In addition, it can control the lights of the residents' rooms and act on other external devices. A budget is then presented to give an idea of the cost involved; it is divided into two parts, the first covering the design of the control board and the second its serial production. Finally, the conclusions drawn from the project and possible improvements are discussed, ending with a demonstration of the equipment operating in real life.
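As a toy illustration of the kind of IP-based alarm handling such a server performs, the sketch below receives JSON alarm events over TCP and prioritizes them; the port, message schema and priorities are invented and are not the Buslogic protocol.

```python
import json
import socketserver

# Toy alarm server: receives one JSON alarm event per connection and
# prioritizes it. Port and message fields are invented for illustration.
PRIORITY = {"bathroom_pullcord": 0, "smoke": 0, "pushbutton": 1, "motion": 2}

class AlarmHandler(socketserver.StreamRequestHandler):
    def handle(self):
        event = json.loads(self.rfile.readline())
        prio = PRIORITY.get(event.get("type"), 3)
        print(f"room {event.get('room')}: {event.get('type')} (priority {prio})")
        self.wfile.write(b"ACK\n")    # acknowledge so the device can retry on loss

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 9000), AlarmHandler) as srv:
        srv.serve_forever()           # e.g. send: {"room": 12, "type": "smoke"}
```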

Relevance: 10.00%

Abstract:

This thesis studies indoor radon variations in two similar newly built dwellings in the same residential building in Madrid, one of them occupied and the other unoccupied. The radon concentration and the ambient parameters (pressure, temperature and humidity) were measured over eight months, using solid-state detectors for the radon monitoring. Simultaneously, several atmospheric variables were acquired from an atmospheric model. The data analysis relied mainly on the wavelet transform method. The results show that the radon level is slightly higher in the unoccupied dwelling than in the occupied one. The analysis revealed a specific seasonal pattern in the indoor radon concentration, and the anthropogenic influence was also analyzed: nearly periodic patterns could be observed over specific intervals regardless of whether the dwelling was occupied. In addition, the data were stored in OLAP cubes and analyzed with clustering and association algorithms, with the aim of discovering relationships between radon and external conditions such as pressure, atmospheric stability, etc. The methodology applied here may be useful for environmental studies in which indoor radon is measured.
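A minimal sketch of the kind of wavelet decomposition described, applied to a synthetic hourly series with a diurnal cycle and a slow drift; PyWavelets is assumed as the wavelet library (not necessarily the one used in the thesis) and all data are invented.

```python
import numpy as np
import pywt  # PyWavelets, assumed available (pip install PyWavelets)

# Synthetic "indoor radon" series: a daily cycle plus a slow drift and
# noise, sampled hourly for 60 days (all values invented).
rng = np.random.default_rng(3)
hours = np.arange(60 * 24)
series = (100
          + 20 * np.sin(2 * np.pi * hours / 24)         # diurnal component
          + 10 * np.sin(2 * np.pi * hours / (24 * 30))  # slow "seasonal" drift
          + rng.normal(0, 5, hours.size))

# Multilevel discrete wavelet decomposition: detail coefficients isolate
# short-period variations, the approximation keeps the slow trend.
coeffs = pywt.wavedec(series, "db4", level=5)
approx, details = coeffs[0], coeffs[1:]   # details: coarsest to finest
for lvl, d in zip(range(5, 0, -1), details):
    print(f"detail level {lvl}: energy = {np.sum(d ** 2):.0f}")
print(f"approximation (trend) energy = {np.sum(approx ** 2):.0f}")
```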

Relevance: 10.00%

Abstract:

The barn owl (Tyto alba) uses interaural time difference (ITD) cues to localize sounds in the horizontal plane. Low-order binaural auditory neurons with sharp frequency tuning act as narrow-band coincidence detectors; such neurons respond equally well to sounds with a particular ITD and its phase equivalents and are said to be phase ambiguous. Higher-order neurons with broad frequency tuning are unambiguously selective for single ITDs in response to broad-band sounds and show little or no response to phase equivalents. Selectivity for single ITDs is thought to arise from the convergence of parallel, narrow-band frequency channels that originate in the cochlea. ITD tuning to variable bandwidth stimuli was measured in higher-order neurons of the owl’s inferior colliculus to examine the rules that govern the relationship between frequency channel convergence and the resolution of phase ambiguity. Ambiguity decreased as stimulus bandwidth increased, reaching a minimum at 2–3 kHz. Two independent mechanisms appear to contribute to the elimination of ambiguity: one suppressive and one facilitative. The integration of information carried by parallel, distributed processing channels is a common theme of sensory processing that spans both modality and species boundaries. The principles underlying the resolution of phase ambiguity and frequency channel convergence in the owl may have implications for other sensory systems, such as electrolocation in electric fish and the computation of binocular disparity in the avian and mammalian visual systems.
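The phase ambiguity described here can be reproduced with a toy cross-correlation model of coincidence detection: a narrow-band tone produces near-maximal correlation at the true ITD and at all period-equivalent lags, whereas broadband noise produces a single dominant peak. All parameters below are invented.

```python
import numpy as np

# Toy coincidence-detector model: cross-correlate left/right "ear" signals.
# Narrow-band input -> peaks at the true ITD plus phase equivalents;
# broadband input -> one dominant peak (ambiguity resolved).
rng = np.random.default_rng(0)
fs = 50_000                      # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)
itd_samples = 10                 # true ITD = 10 samples = 200 microseconds

def best_lags(x):
    left = x[:-itd_samples]
    right = x[itd_samples:]      # right-ear signal leads by the ITD
    lags = np.arange(-50, 51)
    corr = np.array([np.dot(left[50:-50], right[50 + k:len(right) - 50 + k])
                     for k in lags])
    return lags[corr > 0.9 * corr.max()]   # near-maximal lags

tone = np.sin(2 * np.pi * 5000 * t)        # narrow-band: 5 kHz tone
noise = rng.normal(0, 1, t.size)           # broadband noise
print("tone peaks at lags:", best_lags(tone))    # many lags: phase ambiguity
print("noise peaks at lags:", best_lags(noise))  # single lag (the true ITD,
                                                 # -10 under this convention)
```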

Relevance: 10.00%

Abstract:

GeneSplicer is a new, flexible system for detecting splice sites in the genomic DNA of various eukaryotes. The system has been tested successfully using DNA from two reference organisms: the model plant Arabidopsis thaliana and human. It was compared to six programs representing the leading splice site detectors for each of these species: NetPlantGene, NetGene2, HSPL, NNSplice, GENIO and SpliceView. In each case GeneSplicer performed comparably to the best alternative, in terms of both accuracy and computational efficiency.
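Splice-site detectors of this kind typically score candidate positions around the conserved GT (donor) and AG (acceptor) dinucleotides, for instance with a position weight matrix. A toy donor-site scorer follows; the matrix values are invented and are not GeneSplicer's model.

```python
import math

# Toy donor-splice-site scorer: slide over the sequence, require the
# conserved "GT", and add log-odds scores from an invented position
# weight matrix covering positions -2..+4 around the GT.
PWM = {
    -2: {"A": 0.3, "C": 0.3, "G": 0.3, "T": 0.1},
    -1: {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
    +2: {"A": 0.6, "C": 0.1, "G": 0.2, "T": 0.1},   # first position after GT
    +3: {"A": 0.5, "C": 0.1, "G": 0.3, "T": 0.1},
    +4: {"A": 0.1, "C": 0.1, "G": 0.6, "T": 0.2},
}
BACKGROUND = 0.25

def donor_scores(seq, min_score=1.0):
    hits = []
    for i in range(2, len(seq) - 5):
        if seq[i:i + 2] != "GT":                   # obligate donor dinucleotide
            continue
        score = sum(math.log(PWM[k][seq[i + k]] / BACKGROUND) for k in PWM)
        if score >= min_score:
            hits.append((i, round(score, 2)))
    return hits

print(donor_scores("CAGGTAAGTTTGGTATCCAGGTGAGC"))
```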

Relevance: 10.00%

Abstract:

The determination of the three-dimensional layout of galaxies is critical to our understanding of the evolution of galaxies and the structures in which they lie, to our determination of the fundamental parameters of cosmology, and to our understanding of both the past and future histories of the universe at large. The mapping of the large scale structure in the universe via the determination of galaxy red shifts (Doppler shifts) is a rapidly growing industry thanks to technological developments in detectors and spectrometers at radio and optical wavelengths. First-order application of the red shift-distance relation (Hubble’s law) allows the analysis of the large-scale distribution of galaxies on scales of hundreds of megaparsecs. Locally, the large-scale structure is very complex but the overall topology is not yet clear. Comparison of the observed red shifts with ones expected on the basis of other distance estimates allows mapping of the gravitational field and the underlying total density distribution. The next decade holds great promise for our understanding of the character of large-scale structure and its origin.
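A first-order application of the red shift-distance relation is a one-line computation: v = cz and d = v/H0. A small worked example (the H0 value is chosen purely for illustration):

```python
# Hubble's law to first order: recession velocity v = c*z, distance d = v/H0.
# H0 is taken as 70 km/s/Mpc purely for illustration.
C_KM_S = 299_792.458      # speed of light, km/s
H0 = 70.0                 # Hubble constant, km/s per Mpc (assumed)

for z in (0.01, 0.05, 0.1):        # small red shifts, where first order holds
    v = C_KM_S * z                 # km/s
    d = v / H0                     # Mpc
    print(f"z = {z:5.2f} -> v ≈ {v:8.0f} km/s, d ≈ {d:6.0f} Mpc")
```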

Relevance: 10.00%

Abstract:

Establishing accurate extragalactic distances has provided an immense challenge to astronomers since the 1920s. The situation has improved dramatically as better detectors have become available and as several new, promising techniques have been developed. For the first time in the history of this difficult field, relative distances to galaxies are being compared on a case-by-case basis, and their quantitative agreement is being established. New instrumentation, the development of new techniques for measuring distances, and recent measurements with the Hubble Space Telescope have all resulted in new distances to galaxies with precision at the ±5–20% level. The current statistical uncertainty in some methods for measuring H0 is now only a few percent; with systematic errors, the total uncertainty is approaching ±10%. Hence, the historical factor-of-two uncertainty in the value of H0 is now behind us.