35 results for TIME-DOMAIN TECHNIQUE

at Universidad Politécnica de Madrid


Relevance:

100.00%

Publisher:

Abstract:

Time-domain laser reflectance spectroscopy (TDRS) was applied for the first time to evaluate internal fruit quality. This technique, well known in medicine-related fields, had not been used before in agricultural or food research. It allows the simultaneous, non-destructive measurement of two optical characteristics of the tissues: light scattering and absorption. Models to measure firmness and sugar and acid contents in kiwifruit, tomato, apple, peach, nectarine and other fruits were built using sequential statistical techniques: principal component analysis, multiple stepwise linear regression, clustering and discriminant analysis. Consistent correlations were established between the two parameters measured with TDRS, the absorption and transport scattering coefficients, and the chemical constituents (sugars and acids) and firmness, respectively. Classification models were built to sort fruits into three quality grades according to their firmness, soluble solids and acidity.
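The statistical pipeline described above can be illustrated with a short sketch. The snippet below is not the authors' code: it uses synthetic data and scikit-learn stand-ins (PCA, linear regression, linear discriminant analysis) for the sequential techniques named in the abstract; all variable names and values are hypothetical.

```python
# A minimal sketch (synthetic data, assumed dimensions) of a TDRS-style
# quality-prediction pipeline: PCA on optical parameters, linear regression
# for a chemical constituent, and discriminant analysis for 3-grade sorting.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 200
# Hypothetical TDRS measurements: absorption and transport scattering
# coefficients at several wavelengths (placeholder features).
X = rng.normal(size=(n, 8))
soluble_solids = X[:, 0] * 1.5 + rng.normal(scale=0.3, size=n) + 12.0  # deg Brix
firmness_grade = np.digitize(X[:, 1], bins=[-0.5, 0.5])                # 3 grades

X_pca = PCA(n_components=4).fit_transform(X)            # dimensionality reduction
reg = LinearRegression().fit(X_pca, soluble_solids)     # chemical-constituent model
lda = LinearDiscriminantAnalysis().fit(X_pca, firmness_grade)  # 3-class sorting

print("R^2 (soluble solids):", reg.score(X_pca, soluble_solids))
print("Grade accuracy:", lda.score(X_pca, firmness_grade))
```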

Relevance:

100.00%

Publisher:

Abstract:

Phase-sensitive optical time-domain reflectometry (ΦOTDR) is a simple and effective tool allowing the distributed monitoring of vibrations along single-mode fibers. We show in this Letter that modulation instability (MI) can induce a position-dependent signal fading in long-range ΦOTDR over conventional optical fibers. This fading leads to a complete masking of the interference signal recorded at certain positions and therefore to a sensitivity loss at these positions. We illustrate this effect both theoretically and experimentally. While this effect is detrimental in the context of distributed vibration analysis using ΦOTDR, we also believe that the technique provides a clear and insightful way to evidence the Fermi-Pasta-Ulam recurrence associated with the MI process.
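As a rough illustration of the physics invoked above, the following split-step Fourier sketch propagates a weakly noisy continuous wave through a nonlinear, anomalously dispersive fiber and shows how modulation-instability sidebands grow. It is not the Letter's model of position-dependent fading in a sensing setup; beta2, gamma, pump power and length are assumed, typical SMF-like values.

```python
# Split-step Fourier sketch of modulation instability on a CW pump
# (illustrative only; all fiber parameters are assumptions).
import numpy as np

np.random.seed(0)
beta2 = -21e-27        # s^2/m, anomalous group-velocity dispersion (assumed)
gamma = 1.3e-3         # 1/(W*m), Kerr nonlinear coefficient (assumed)
P0, L, nz = 0.5, 20e3, 2000          # pump power (W), fiber length (m), steps
dz = L / nz

nt = 2**12
t = np.linspace(-1e-9, 1e-9, nt, endpoint=False)          # 2 ns time window
w = 2 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])          # angular frequencies

A = np.sqrt(P0) * (1 + 1e-3 * np.random.randn(nt))         # CW pump + noise seed
half_disp = np.exp(0.5j * beta2 * w**2 * (dz / 2))         # half-step dispersion

for _ in range(nz):                                        # symmetric split step
    A = np.fft.ifft(half_disp * np.fft.fft(A))
    A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)         # Kerr nonlinearity
    A = np.fft.ifft(half_disp * np.fft.fft(A))

spec = np.abs(np.fft.fft(A))**2
frac = 1.0 - spec[0] / spec.sum()                          # power outside the carrier
print("fraction of pump power converted to MI sidebands: %.2f" % frac)
```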

Relevance:

100.00%

Publisher:

Abstract:

An electrically tunable system for the control of optical pulse sequences is proposed and demonstrated. It is based on the use of an electrooptic modulator for periodic phase modulation followed by a dispersive device to obtain the temporal Talbot effect. The proposed configuration allows for repetition rate multiplication with different multiplication factors and with the simultaneous control of the pulse train envelope by simply changing the electrical signal driving the modulator. Simulated and experimental results for an input optical pulse train of 10 GHz are shown for different multiplication factors and envelope shapes.
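A minimal numerical sketch of the underlying temporal Talbot mechanism is given below: a 10 GHz Gaussian pulse train acquires a quadratic spectral phase whose group-delay dispersion meets the fractional Talbot condition phi2 = T^2/(2*pi*m), which multiplies the repetition rate by m. This is a simplification of the electro-optic scheme described in the abstract (the phase is applied numerically, and the pulse width is assumed).

```python
# Fractional temporal Talbot effect: repetition-rate multiplication of a
# 10 GHz pulse train by m = 4 (assumed parameters, illustrative only).
import numpy as np

f_rep = 10e9                  # input repetition rate (Hz)
T = 1 / f_rep                 # input period
m = 4                         # multiplication factor
phi2 = T**2 / (2 * np.pi * m) # required GDD (s^2), fractional Talbot condition

nt = 2**14
t = np.linspace(0, 64 * T, nt, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])

# Input: Gaussian pulse train, ~2 ps pulses at 10 GHz
tau = 2e-12
field = sum(np.exp(-((t - (k + 0.5) * T)**2) / (2 * tau**2)) for k in range(64))

# Apply the dispersive (quadratic spectral) phase
out = np.fft.ifft(np.fft.fft(field) * np.exp(0.5j * phi2 * w**2))

# Count intensity peaks per input period to verify the multiplied rate
I = np.abs(out)**2
peaks = np.sum((I[1:-1] > I[:-2]) & (I[1:-1] > I[2:]) & (I[1:-1] > 0.1 * I.max()))
print("output pulses per input period ~", peaks / 64)
```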

Relevance:

100.00%

Publisher:

Abstract:

Understanding the structure and dynamics of the intricate network of connections among people who consume products through the Internet is an extremely useful asset for studying emergent properties related to social behavior. This knowledge could be useful, for example, to improve the performance of personal recommendation algorithms. In this contribution, we analyzed five-year records of movie-rating transactions provided by Netflix, a movie rental platform where users rate movies from an online catalog. This dataset can be studied as a bipartite user-item network whose structure evolves in time. Even though several topological properties of subsets of this bipartite network have been reported with a model that combines random and preferential attachment mechanisms [Beguerisse Díaz et al., 2010], there are still many aspects worth exploring, as they are connected to relevant phenomena underlying the evolution of the network. In this work, we test the hypothesis that bursty human behavior is essential to describe how a bipartite user-item network evolves in time. To that end, we propose a novel model in which, for user nodes, network growth follows a preferential attachment mechanism acting not only in the topological domain (i.e. based on node degrees) but also in the time domain. In the case of items, the model mixes degree preferential attachment and random selection. With these ingredients, the model not only reproduces the asymptotic degree distribution, but also shows excellent agreement with the Netflix data in several time-dependent topological properties.
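The growth mechanism described above can be sketched as follows. This toy model is not the exact model proposed in the work: user selection mixes a recency ("time-domain") preference with degree preferential attachment, item selection mixes uniform random choice with degree preferential attachment, and all probabilities and window sizes are assumptions.

```python
# Toy growing bipartite user-item network with degree and recency preferences
# (illustrative sketch; parameters are made up).
import random
import collections

random.seed(1)
n_steps = 20000
p_new_user, p_new_item = 0.05, 0.02   # assumed growth probabilities
q_recent, q_rand_item = 0.5, 0.3      # assumed mixing weights

user_deg = collections.Counter({0: 1})
item_deg = collections.Counter({0: 1})
recent_users = collections.deque(maxlen=100)   # recently active users ("bursts")
edges = [(0, 0)]

for step in range(n_steps):
    # pick (or create) a user: recency-biased, else degree-preferential
    if random.random() < p_new_user:
        u = len(user_deg)
    elif recent_users and random.random() < q_recent:
        u = random.choice(recent_users)
    else:
        u = random.choices(list(user_deg), weights=list(user_deg.values()))[0]
    # pick (or create) an item: uniform random, else degree-preferential
    if random.random() < p_new_item:
        i = len(item_deg)
    elif random.random() < q_rand_item:
        i = random.choice(list(item_deg))
    else:
        i = random.choices(list(item_deg), weights=list(item_deg.values()))[0]
    edges.append((u, i))
    user_deg[u] += 1
    item_deg[i] += 1
    recent_users.append(u)

print("users:", len(user_deg), "items:", len(item_deg), "edges:", len(edges))
print("max user degree:", max(user_deg.values()),
      "max item degree:", max(item_deg.values()))
```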

Relevance:

100.00%

Publisher:

Abstract:

Non-destructive measurement of fruit quality has been an important objective in recent years (Abbott, 1999). Near infrared spectroscopy (NIR) is applicable to the quantification of chemicals in foods, and NIR "laser spectroscopy" can be used to estimate the firmness of fruits. However, the main limitation of current optical techniques that measure light transmission is that they do not account for the coupling between absorption and scattering inside the tissue when quantifying the intensity of re-emitted light. Overcoming this limitation was the goal of the present work.

Relevance:

100.00%

Publisher:

Abstract:

Telepresence combines different sensory modalities, including vision and touch, to produce a feeling of being present in a remote location. The key element to successfully implement a telepresence system, and thus to allow telemanipulation of a remote environment, is force feedback. In a telemanipulation, mechanical energy must be conveyed from the human operator to the manipulated object in the remote environment. In general, energy is a property of all physical objects, fundamental to their mutual interactions, in which energy can be transferred among the objects and can change form but cannot be created or destroyed.

In this thesis, we exploit this fundamental principle to derive a novel bilateral control mechanism that allows designing stable teleoperation systems with any conceivable communication architecture. The rationale starts from the fact that the mechanical energy injected by a human operator into the system must be conveyed to the remote environment and vice versa. As will be seen, setting energy as the control variable allows a more general treatment of the controlled system than the conventional control of specific system variables. Through the Time Delay Power Network (TDPN) concept, the issue of defining the energy flows involved in a teleoperation system is solved independently of the communication architecture. In particular, communication time delays are found to be a source of virtual energy; this effect is observed for delays starting from 1 millisecond. Since this energy is added intrinsically, the resulting teleoperation system can be non-passive and thus become unstable. The Time Delay Power Networks are found to be carriers of the desired exchanged energy but also generators of virtual energy due to the time delay. Once these networks are identified, the Time Domain Passivity Control approach for TDPNs is proposed as a control mechanism to ensure system passivity and therefore system stability. The proposed method is based on the simple fact that this intrinsically added energy due to the communication must be transformed into dissipation. The system then becomes closer to the desired one, in which only the energy injected at one end is conveyed to the other. The resulting system presents two benefits: on one hand, system stability is guaranteed through passivity independently of the chosen control architecture and communication channel; on the other, performance is maximized in terms of energy transfer faithfulness. The proposed methods are supported by a set of experimental implementations using different control architectures and communication delays ranging from 2 to 900 milliseconds. An experiment that includes a space communication link based on the geostationary satellite ASTRA concludes this thesis.
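A minimal sketch of the time-domain passivity idea referred to above is shown below, assuming a sampled one-port formulation: an observer integrates the energy entering the port, and a variable damper dissipates any energy deficit. It illustrates the general passivity-observer/passivity-controller principle, not the thesis' TDPN implementation; all signals and gains are made up.

```python
# Time-domain passivity observer/controller sketch for a sampled one-port
# (illustrative only; impedance-type port assumed).
import math

def passivity_controller(forces, velocities, dt):
    """Return damped forces so the observed port energy never goes negative."""
    energy = 0.0
    out = []
    for f, v in zip(forces, velocities):
        energy += f * v * dt                 # passivity observer: E = sum(f*v*dt)
        if energy < 0.0 and abs(v) > 1e-9:   # port has generated energy
            alpha = -energy / (v * v * dt)   # damping that dissipates the deficit
            f = f + alpha * v
            energy = 0.0                     # deficit absorbed by the damper
        out.append(f)
    return out

# usage with made-up signals: an active (energy-generating) port
dt = 0.001
t = [k * dt for k in range(1000)]
vel = [math.sin(2 * math.pi * 1.0 * tk) for tk in t]
frc = [-2.0 * v for v in vel]
print("corrected first samples:",
      [round(f, 3) for f in passivity_controller(frc, vel, dt)[:3]])
```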

Relevance:

100.00%

Publisher:

Abstract:

The objective of this thesis is research on numerical algorithms for developing numerical tools to simulate seakeeping problems as well as wave-resistance problems of ships and floating structures. The first tool developed is a wave diffraction-radiation solver. It is based on the finite element method (FEM) to solve the Laplace equation, together with numerical schemes based on FEM, streamline integration and the finite difference method, tailored to the free-surface boundary condition. Numerical tools have also been developed to solve the rigid-body dynamics of multibody systems with kinematic links between bodies. This tool has been integrated with the wave diffraction-radiation solver to solve wave-body interaction problems. Coupling algorithms with other numerical tools have also been designed in order to solve multiphysics problems. In particular, couplings have been implemented with a FEM structural solver to solve fluid-structure interaction problems, with a mooring-line solver, and with a solver capable of simulating internal flows in tanks to solve coupled seakeeping-sloshing problems. Numerical simulations have been carried out to validate and verify the developed algorithms, as well as to analyse case studies with diverse applications in the areas of marine engineering, offshore engineering and offshore renewable energy.

Relevance:

100.00%

Publisher:

Abstract:

A structure vibrates as the sum of its infinite vibration modes, each defined by its modal parameters (natural frequencies, damping ratios and mode shapes). These parameters can be identified through Operational Modal Analysis (OMA). Thus, a research team of the Technical University of Madrid has identified the modal properties of a reinforced-concrete-frame building in Madrid using the Stochastic Subspace Identification (SSI) method, a time-domain technique for OMA. To complete the dynamic study of this building, a finite element (FE) model of this 19-storey building has been developed throughout this thesis. This model has been updated from the dynamic behaviour identified by the OMA. The objectives of this thesis are to: (i) identify the structure with several SSI methods and different time blocks, in such a way that the uncertainties of the modal parameters due to the estimation process are quantified; (ii) develop an FE model of the building and tune it from its dynamic behaviour; and (iii) assess the quality of the model. The model parameters used in this updating process were the thickness of the slabs, the material densities, the moduli of elasticity, the column dimensions and the foundation boundary conditions. It has been shown that the updated model represents the dynamic behaviour of the structure with very good accuracy. Thus, this model may be used within a structural health monitoring (SHM) framework and for damage detection. The influence of changing environmental factors (such as temperature or wind) on the modal parameters may be studied as future work.
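The model-updating step described above can be illustrated with a toy example: a two-degree-of-freedom shear model whose stiffness parameters are tuned so that its natural frequencies match frequencies identified by OMA. The masses, target frequencies and optimizer choice below are assumptions, not values from the thesis.

```python
# Toy FE model updating: fit stiffnesses of a 2-DOF shear building so that its
# natural frequencies match (hypothetical) OMA-identified frequencies.
import numpy as np
from scipy.optimize import minimize
from scipy.linalg import eigh

f_identified = np.array([1.2, 3.4])      # Hz, hypothetical OMA results
m = np.diag([2.0e5, 2.0e5])              # kg, lumped floor masses (assumed)

def natural_frequencies(k1, k2):
    """Natural frequencies (Hz) of a 2-DOF shear building."""
    K = np.array([[k1 + k2, -k2],
                  [-k2,      k2]])
    eigvals = eigh(K, m, eigvals_only=True)   # generalized K*v = w^2*M*v
    return np.sqrt(eigvals) / (2 * np.pi)

def misfit(x):
    f_model = natural_frequencies(*np.exp(x))  # log-parameters keep k > 0
    return np.sum((f_model - f_identified) ** 2)

x0 = np.log([1e7, 1e7])                   # initial stiffness guesses (N/m)
res = minimize(misfit, x0, method="Nelder-Mead", options={"maxiter": 2000})
k1, k2 = np.exp(res.x)
print("updated stiffnesses (N/m):", k1, k2)
print("model frequencies (Hz):", natural_frequencies(k1, k2))
```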

Relevance:

90.00%

Publisher:

Abstract:

This project aims to establish how to perform a correct and accurate analysis of SMATV (Satellite Master Antenna Television) networks, which are included within the ICT (Common Telecommunications Infrastructure), by means of the TDA (Time Domain Analysis) method. To do so, a theoretical study of ICTs and of the foundations of the TDA method is first carried out, serving as an introduction to the main subject of the project: characterizing, with the AWR simulation program, the most suitable signal for performing quality measurements on SMATV networks with the TDA technique, and carrying out a concise study of these networks. This is to be achieved through a proper definition of the parameters of the input signal that would be injected into the network in future test measurements. Once a reference signal has been obtained, different devices and elements that form SMATV networks are characterized in order to verify that a measurement made with the TDA method is as valid as one made with a vector network analyser (VNA).
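The basic idea behind time-domain analysis of such a network can be sketched as follows: frequency-domain reflection data (as a VNA would provide) are windowed and inverse Fourier transformed so that impedance discontinuities appear as peaks in time and hence in distance. The S11 data, frequency band and cable velocity factor below are synthetic assumptions, unrelated to the AWR simulations of the project.

```python
# Time-domain analysis from frequency-domain reflection data (synthetic S11).
import numpy as np
from scipy.signal import find_peaks

f = np.linspace(5e6, 2.15e9, 1024)       # sweep grid (Hz), assumed SMATV band
c, v_factor = 3e8, 0.8                    # coax velocity factor (assumed)
v = c * v_factor

# Synthetic S11: two small reflections located 10 m and 25 m down the line
d = np.array([10.0, 25.0])
r = np.array([0.2, 0.1])
s11 = sum(ri * np.exp(-2j * np.pi * f * 2 * di / v) for ri, di in zip(r, d))

h = np.fft.ifft(s11 * np.hanning(f.size), n=4096)    # windowed TD response
t = np.arange(4096) / (4096 * (f[1] - f[0]))          # time axis (s)
dist = t * v / 2                                      # round-trip time -> distance

pk, _ = find_peaks(np.abs(h[:2048]), height=0.3 * np.abs(h).max())
print("estimated discontinuities (m):", np.round(dist[pk], 1))
```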

Relevance:

90.00%

Publisher:

Abstract:

Temperature is a first-class design concern in modern integrated circuits. The important increase in power densities associated with recent technology evolutions has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature impacts several circuit parameters such as speed, cooling budgets, reliability, power consumption, etc. In order to fight against these negative effects, dynamic thermal management (DTM) techniques adapt the behaviour of the chip relying on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis. This thesis approaches the matter of on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues.

The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based upon a mechanism that obtains a pulse whose width depends on the variation of the leakage currents with temperature. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the width of the pulse displays an exponential dependence on temperature, the conversion into a digital word is realized by means of a logarithmic counter that performs both the time-to-digital conversion and the linearization of the output. The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by a very reduced area, 10,250 nm², and power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works at the time of first publication and, at the time of publication of this thesis, they still outperform all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity even without calibration: it displays a 3σ error of 1.97 °C, appropriate for DTM applications. As explained, the sensor is completely compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip.

The exacerbated process fluctuations that come with recent technology nodes jeopardize the linearity characteristics of the first sensor. In order to overcome these problems, a new temperature-inferring technique is proposed. In this case, we also rely on the thermal dependencies of the leakage currents that are used to discharge a floating node, but now the result comes from the ratio of two different measurements, in one of which we alter a characteristic of the discharging transistor: the gate voltage. This ratio proves to be very robust against process variations and displays a more than sufficient linearity with temperature: a 1.17 °C 3σ error considering process variations and performing two-point calibration. The implementation of the sensing part based on this new technique implies several issues, such as the generation of a process-variation-independent voltage reference, that are analysed in depth in the thesis. For the time-to-digital conversion, we employ the same digitization structure as the former sensor. A completely new standard cell library targeting low area and power overhead is built from scratch to implement the digitization part. Putting all the pieces together, we achieve a complete sensor system characterized by an ultra-low energy per conversion of 48-640 pJ and an area of 0.0016 mm²; this figure outperforms all previous works. To prove this statement, we perform a thorough comparison with over 40 works from the scientific literature.

Moving up to the system level, the third contribution is centred on the modelling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works from the literature target maximizing the accuracy of the system with the minimum number of monitors. In contrast, we introduce new quality metrics apart from just the number of sensors: we consider the power consumption, the sampling frequency, the possibility of choosing different types of monitors and the interconnection costs. The model is introduced in a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints, and a collection of monitor types; the algorithm yields the selected type of monitor, the number of monitors, their positions and the optimum sampling rate. We test the algorithm with the Alpha 21364 processor under several constraint configurations to prove its validity. When compared to other previous works in the literature, the modelling presented here is the most complete.

Finally, the last contribution targets the networking level: given an allocated set of temperature monitors, we focus on solving the problem of connecting them in an efficient way from the area and power perspectives. Our first proposal in this area is the introduction of a new interconnection hierarchy level, the threshing level, in between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller's viewpoint only a small amount of data, normally the extreme values, is of interest. To cover the new interconnection level, we propose a single-wire monitoring network based on a time-domain signalling scheme that significantly reduces both the switching activity over the wire and the power consumption of the network. This scheme codes the information in the time domain and allows a straightforward retrieval of an ordered list of values from the maximum to the minimum. If the scheme is applied to monitors that employ time-to-digital conversion, digitization resource sharing is achieved, producing an important saving in area and power consumption. Two prototypes of complete monitoring systems are presented; they significantly outperform previous works in terms of area and, especially, power consumption.
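The system-level placement problem described above lends itself to a compact sketch: a simulated annealing loop that places a fixed number of monitors on a toy thermal map so as to minimize the reconstruction error. The map, cost function and cooling schedule are assumptions for illustration, not the thesis' model.

```python
# Simulated annealing placement of temperature monitors on a toy thermal map.
import math
import random

random.seed(0)
GRID = 16                                    # die discretized into 16x16 cells

def temp(x, y):
    """Toy thermal map with two hot spots (arbitrary values, in Celsius)."""
    return (80 * math.exp(-((x - 4)**2 + (y - 4)**2) / 8.0)
            + 60 * math.exp(-((x - 12)**2 + (y - 10)**2) / 12.0) + 40)

cells = [(x, y) for x in range(GRID) for y in range(GRID)]

def cost(monitors):
    """Mean error when each cell is estimated by its nearest monitor."""
    err = 0.0
    for c in cells:
        m = min(monitors, key=lambda p: (p[0] - c[0])**2 + (p[1] - c[1])**2)
        err += abs(temp(*c) - temp(*m))
    return err / len(cells)

n_monitors, T = 6, 5.0
state = random.sample(cells, n_monitors)
cur = cost(state)
best, best_cost = list(state), cur
for _ in range(3000):
    cand = list(state)
    cand[random.randrange(n_monitors)] = random.choice(cells)  # move one monitor
    c_cost = cost(cand)
    if c_cost < cur or random.random() < math.exp((cur - c_cost) / T):
        state, cur = cand, c_cost
        if cur < best_cost:
            best, best_cost = list(state), cur
    T *= 0.999                                                  # cooling schedule

print("monitor positions:", sorted(best))
print("mean reconstruction error (C): %.2f" % best_cost)
```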

Relevance:

90.00%

Publisher:

Abstract:

In this project, noise analysis techniques are applied to characterize the dynamic response of several temperature sensors, both platinum resistance temperature detectors (RTDs) and thermocouples. These sensors are essential for the proper functioning of nuclear power plants and therefore need to be monitored to guarantee accurate measurements. Noise analysis techniques are passive, i.e. they do not affect plant operation, and they allow in situ monitoring of the sensors. Since temperature sensors can be treated as first-order systems, the main parameter to monitor is the response time, which can be obtained for each probe by means of techniques in the frequency domain (spectral analysis) or in the time domain (autoregressive models). Besides the estimation of the response time, a statistical characterization of the probes is performed. The goal is to understand the behaviour of the sensors and to monitor them so that faults can be diagnosed even at an incipient stage.
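The time-domain (autoregressive) route mentioned above can be illustrated with a short sketch: for a first-order sensor driven by noise sampled every dt seconds, an AR(1) fit of the record gives the coefficient a, from which the response time follows as tau = -dt/ln(a). The data below are simulated; the sampling period and the true time constant are assumed.

```python
# Response-time estimation of a first-order sensor from its noise record
# using an AR(1) model (synthetic data, assumed parameters).
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1                      # s, sampling period (assumed)
tau_true = 4.0                # s, "unknown" sensor response time
a_true = np.exp(-dt / tau_true)

# Simulate a noise record as a first-order (AR(1)) process
e = rng.normal(size=20000)
x = np.zeros_like(e)
for k in range(1, e.size):
    x[k] = a_true * x[k - 1] + e[k]

# Least-squares AR(1) fit, i.e. a time-domain technique
a_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
tau_hat = -dt / np.log(a_hat)
print("estimated response time: %.2f s (true %.1f s)" % (tau_hat, tau_true))
```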

Relevance:

80.00%

Publisher:

Abstract:

A particle accelerator is any device that, using electromagnetic fields, is able to transfer energy to charged particles (typically electrons or ionized atoms), accelerating and/or energizing them up to the level required for its purpose. The applications of particle accelerators are countless, ranging from the CRT of a conventional TV set, through medical X-ray devices, to the large ion colliders used to probe the smallest details of matter. Other engineering applications include ion implantation devices used to obtain better semiconductors and materials with remarkable properties; the study of materials that must withstand irradiation in future nuclear fusion plants also benefits from particle accelerators. Many devices are required for the correct operation of a particle accelerator. The most important are the particle sources, the guiding, focusing and correcting magnets, the radiofrequency accelerating cavities, the fast deflection devices, the beam diagnostic mechanisms and the particle detectors. Historically, most fast particle deflection devices have been built using copper coils and ferrite cores, which can produce a relatively fast magnetic deflection but need large voltages and currents to counteract the high coil inductance, limiting the response to the microsecond range. Beam stability considerations and the new range of energies and sizes of present-day accelerators and their rings require new devices featuring improved wakefield behaviour and faster response (in the nanosecond range). This can only be achieved by an electromagnetic deflection device based on a transmission line. The electromagnetic deflection device (strip-line kicker) produces a transverse displacement of the particle beam travelling close to the speed of light, in order to extract the particles to another experiment or to inject them into a different accelerator. The deflection is carried out by means of two short, opposite-phase pulses; the diversion of the particles is exerted by the integrated Lorentz force of the electromagnetic field travelling along the kicker. This thesis presents a detailed calculation, manufacturing and test methodology for strip-line kicker devices. The methodology is then applied to two real cases which are fully designed, built, tested and finally installed in the CTF3 accelerator facility at CERN (Geneva). Analytical and numerical calculations, both in 2D and 3D, are detailed starting from the basic specifications in order to obtain a conceptual design. Time-domain and frequency-domain calculations are developed in the process using different FDM and FEM codes; among the concepts analysed are scattering parameters, resonating high-order modes and wakefields. Several contributions are presented in the calculation process dealing specifically with strip-line kicker devices fed by electromagnetic pulses. Materials and components typically used for the fabrication of these devices are analysed in the manufacturing section. Mechanical supports and connections of electrodes are also detailed, with some interesting contributions on these concepts. The electromagnetic and vacuum tests, required to ensure that the manufactured devices fulfil the specifications, are then analysed. Finally, and only from the analytical point of view, the strip-line kickers are studied together with a pulsed power supply based on solid-state power switches (MOSFETs).
The solid state technology applied to pulsed power supplies is introduced and several circuit topologies are modelled and simulated to obtain fast and good flat-top pulses.

Relevance:

80.00%

Publisher:

Abstract:

This paper shows that today's modelling of electrical noise as coming from noisy resistances is nonsensical, contradicting the nature of resistors as systems that bear electrical noise. We present a new model for electrical noise that, while including the work of Johnson and Nyquist, also agrees with the quantum-mechanical description of noisy systems given by Callen and Welton, in which electrical energy fluctuates and is dissipated with time. Through the two currents that the admittance function links in the frequency domain with their common voltage, this new model shows the cause-effect connection that exists between fluctuation and dissipation of energy in the time domain. In spite of its radical departure from today's belief about electrical noise in resistors, this complex model for electrical noise is obtained from Nyquist's result by basic concepts of circuit theory and thermodynamics that also apply to capacitors and inductors.
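For reference, the classical Johnson-Nyquist result that the abstract builds on can be stated as follows (thermal equilibrium, one-sided spectral densities, in the low-frequency limit where $hf \ll k_B T$); this is a textbook relation, not a formula taken from the paper itself.

```latex
% Johnson-Nyquist noise of a resistance R at temperature T (one-sided PSD):
\[
  S_v(f) = 4 k_B T R
  \qquad\Longrightarrow\qquad
  \overline{v_n^{\,2}} = 4 k_B T R \,\Delta f ,
\]
% and its Norton dual for the conductance G = 1/R:
\[
  S_i(f) = 4 k_B T G .
\]
```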

Relevance:

80.00%

Publisher:

Abstract:

Application of the spectrum analyzer for illustrating several concepts associated with mobile communications is discussed. Specifically, two groups of observable features are described. First, time variation and frequency selectivity of multipath propagation can be revealed by carrying out simple measurements on commercial-network GSM and UMTS signals. Second, the main time-domain and frequency-domain features of GSM and UMTS radio signals can be observed. This constitutes a valuable tool for teaching mobile communication courses.

Relevance:

80.00%

Publisher:

Abstract:

This paper shows a physically cogent model for electrical noise in resistors that has been obtained from thermodynamic arguments. This new model, derived from the works of Johnson and Nyquist, also agrees with the quantum model for noisy systems handled by Callen and Welton in 1951, thus unifying these two physical viewpoints. It is a complex, or 2-D, noise model based on an admittance that considers both fluctuation and dissipation of electrical energy, improving on the real, 1-D, model in use, which considers only dissipation. The new model is expressed in the frequency domain through the two orthogonal currents linked with a common noise voltage by an admittance function. Its use in the time domain reveals the pitfall behind a paradox of statistical mechanics about systems considered energy-conserving and deterministic on the microscale yet dissipative and unpredictable on the macroscale, and it also shows how to use the fluctuation-dissipation theorem properly.