24 results for Analysis of principal component
Abstract:
The use of a common environment for processing different powder foods in industry has increased the risk of finding peanut traces in powder foods. The analytical methods commonly used to detect peanut, such as enzyme-linked immunosorbent assay (ELISA) and real-time polymerase chain reaction (RT-PCR), offer high specificity and sensitivity but are destructive and time-consuming, and require highly skilled experimenters. The feasibility of NIR hyperspectral imaging (HSI) is studied for the detection of peanut traces down to 0.01% by weight. A principal component analysis (PCA) was carried out on a dataset of peanut and flour spectra. The obtained loadings were applied to HSI images of wheat flour samples adulterated with peanut traces. As a result, the HSI images were reduced to score images with enhanced contrast between peanut and flour particles. Finally, a threshold was fixed in the score images to obtain a binary classification image, and the percentage of peanut adulteration was compared with the percentage of pixels identified as peanut particles. This study allowed the detection of peanut traces down to 0.01% and the quantification of peanut adulteration from 10% to 0.1% with a coefficient of determination (r2) of 0.946. These results show the feasibility of using HSI systems for the detection of peanut traces in conjunction with chemical procedures, such as RT-PCR and ELISA, to facilitate enhanced quality-control surveillance on food-product processing lines.
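The score-image pipeline described above can be sketched with scikit-learn on synthetic data. This is a minimal illustration, not the paper's method: the band count, class offsets and image layout below are invented stand-ins for real NIR spectra.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical reference spectra: 200 flour and 200 peanut spectra over
# 50 NIR bands, with a small reflectance offset separating the classes.
bands = 50
flour = rng.normal(1.0, 0.05, (200, bands))
peanut = rng.normal(1.3, 0.05, (200, bands))
reference = np.vstack([flour, peanut])

# Fit PCA on the reference spectra to obtain the loadings.
pca = PCA(n_components=3).fit(reference)

# Hyperspectral "image": 20x20 pixels of flour with two peanut pixels.
image = rng.normal(1.0, 0.05, (20, 20, bands))
image[5, 5] = rng.normal(1.3, 0.05, bands)
image[12, 7] = rng.normal(1.3, 0.05, bands)

# Project every pixel spectrum onto the first loading -> score image.
scores = pca.transform(image.reshape(-1, bands))[:, 0].reshape(20, 20)

# Threshold the score deviation to obtain a binary classification image.
dev = np.abs(scores - np.median(scores))
binary = dev > 0.5 * dev.max()
print("pixels classified as peanut:", int(binary.sum()))
```

The deviation-based threshold sidesteps the arbitrary sign of the principal component; in practice the threshold would be calibrated on reference samples.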
Abstract:
FBGs are excellent strain sensors because of their small size and multiplexing capability. Tens to hundreds of sensors may be embedded into a structure, as has already been demonstrated. Nevertheless, they only afford strain measurements at local points, so unless the damage affects the strain readings in a distinguishable manner, it will go undetected. This paper shows the experimental results obtained on the wing of a UAV, instrumented with 32 FBGs, before and after small damages were introduced. The PCA algorithm was able to distinguish the damage cases, even for small cracks. Principal Component Analysis (PCA) is a multivariate analysis technique that reduces a complex data set to a lower dimension, revealing hidden patterns in the data.
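A minimal sketch of how PCA can flag damage in multiplexed strain readings, assuming synthetic data and a simple Q-statistic alarm; this is a generic illustration of the idea, not the paper's algorithm or the UAV data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Hypothetical baseline: 300 load cases x 32 FBG strain readings, driven
# by a few global load patterns plus sensor noise.
patterns = rng.normal(0, 1, (3, 32))
baseline = rng.normal(0, 1, (300, 3)) @ patterns + rng.normal(0, 0.05, (300, 32))

# Fit PCA on the healthy data and keep the dominant components.
pca = PCA(n_components=3).fit(baseline)

def q_statistic(x):
    """Squared residual after projecting onto the healthy subspace."""
    recon = pca.inverse_transform(pca.transform(x))
    return np.sum((x - recon) ** 2, axis=1)

healthy = rng.normal(0, 1, (50, 3)) @ patterns + rng.normal(0, 0.05, (50, 32))
damaged = healthy.copy()
damaged[:, 10] += 0.8   # hypothetical local strain shift near a crack

threshold = q_statistic(baseline).max()
print("healthy alarms:", int((q_statistic(healthy) > threshold).sum()))
print("damaged alarms:", int((q_statistic(damaged) > threshold).sum()))
```

Damage perturbs the correlation structure among sensors, so its residual falls outside the subspace learned from healthy data even when each individual strain change is small.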
Abstract:
This paper presents a solution to the problem of recognizing the gender of a human face from an image. We adopt a holistic approach, using the cropped and normalized texture of the face as input to a Naïve Bayes classifier. We first introduce the Class-Conditional Probabilistic Principal Component Analysis (CC-PPCA) technique to reduce the dimensionality of the classification feature vector and enforce the independence assumption of the classifier. This new approach has the desirable property of providing a simple parametric model for the marginals, and the model can be estimated from very few data. In the experiments conducted, CC-PPCA achieves 90% classification accuracy, a result similar to the best reported in the literature. The proposed method is very simple to train and implement.
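The general idea, PCA-based dimensionality reduction feeding a Naive Bayes classifier, can be sketched as follows. scikit-learn has no CC-PPCA implementation, so plain PCA and GaussianNB stand in for it here, and the "face texture" vectors are synthetic stand-in data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)

# Hypothetical stand-in data: 40-dim texture vectors for two classes.
n, d = 200, 40
X = np.vstack([rng.normal(0.0, 1.0, (n, d)), rng.normal(0.6, 1.0, (n, d))])
y = np.array([0] * n + [1] * n)

# PCA decorrelates the features, which supports the Naive Bayes
# independence assumption; GaussianNB then models each marginal with a
# simple one-dimensional Gaussian.
model = make_pipeline(PCA(n_components=10), GaussianNB()).fit(X, y)
accuracy = model.score(X, y)
print(f"training accuracy: {accuracy:.2f}")
```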
Application of agency theory to the analysis of performance-based mechanisms in road management
Abstract:
The WCTR is a congress of recognized international prestige in the field of transport research; although the published proceedings are in digital format and have no ISSN or ISBN, we consider it important enough to be counted in the indicators. This paper develops a model based on agency theory to analyze road management systems (under the different contract forms available today) that employ a mechanism of performance indicators to establish the payment of the agent. The base assumptions are asymmetric information between the principal (public authorities) and the agent (contractor), and the risk aversion of the latter. It is assumed that the principal may only measure the agent's performance indirectly, by means of certain performance indicators that may be verified by the authorities. The model presumes a relation between the efforts made by the agent and the performance level measured by the corresponding indicators, though there may also be dispersion between both variables, which gives rise to a certain degree of randomness in the contract. An analysis of the optimal contract has been made on the basis of this model, in accordance with a series of parameters that characterize the economic environment and the particular conditions of road infrastructure. As a result of the analysis, an optimal contract should generally combine a fixed component and a payment in accordance with the performance level obtained. The higher the risk aversion of the agent and the greater the marginal cost of public funds, the lower the weight of this performance-based payment. By way of conclusion, the system of performance indicators should be as broad as possible but should not overweight those indicators that involve greater randomness in their results.
Abstract:
In the present paper, one year of PM10 and PM2.5 data from roadside and urban background monitoring stations in Athens (Greece), Madrid (Spain) and London (UK) are analysed in relation to other air pollutants (NO, NO2, NOx, CO, O3 and SO2) and several meteorological parameters (wind velocity, temperature, relative humidity, precipitation, solar radiation and atmospheric pressure), in order to investigate the sources and factors affecting particulate pollution in large European cities. Principal component and regression analyses are used to quantify the contribution of both combustion and non-combustion sources to the observed PM10 and PM2.5 levels. The analysis reveals that the EU legislated PM10 and PM2.5 limit values are frequently breached, posing a potential public health hazard in the areas studied. The seasonal variability patterns of particulates vary among cities and sites, with Athens and Madrid presenting higher PM10 concentrations during the warm period, suggesting a larger relative contribution of secondary and natural particles during hot and dry days. The contribution of non-combustion sources is estimated to vary substantially among cities, sites and seasons, ranging between 38-67% and 40-62% in London, 26-50% and 20-62% in Athens, and 31-58% and 33-68% in Madrid, for PM10 and PM2.5 respectively. Higher contributions from non-combustion sources are found at urban background sites in all three cities, whereas at the traffic sites the seasonal differences are smaller. In addition, the non-combustion fraction of both particle metrics is higher during the warm season at all sites. On the whole, the analysis provides evidence of the substantial impact of non-combustion sources on local air quality in all three cities.
While vehicular exhaust emissions account for a large part of the health risk posed by particle exposure, mitigation measures designed for their reduction will most likely have a major effect only at traffic sites, and additional measures will be necessary to control background levels. In any case, mitigation strategies should always aim at the greatest benefit to health.
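The combined principal component and regression approach can be sketched on synthetic data: a latent combustion factor drives the tracer gases, a latent non-combustion factor drives a meteorological proxy, PCA separates the factors, and PM10 is regressed on the component scores. All series and coefficients below are invented for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Hypothetical daily data for one year.
days = 365
combustion = rng.gamma(2.0, 1.0, days)
non_combustion = rng.gamma(2.0, 1.0, days)
pollutants = np.column_stack([
    2.0 * combustion + rng.normal(0, 0.2, days),      # NOx-like tracer
    1.5 * combustion + rng.normal(0, 0.2, days),      # CO-like tracer
    1.0 * non_combustion + rng.normal(0, 0.2, days),  # meteorological proxy
])
pm10 = 3.0 * combustion + 2.0 * non_combustion + rng.normal(0, 0.3, days)

# PCA separates the two underlying factors; PM10 is then regressed on
# the component scores to apportion its variance between them.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(pollutants))
r2 = LinearRegression().fit(scores, pm10).score(scores, pm10)
print(f"variance of PM10 explained by the two components: {r2:.2f}")
```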
Abstract:
This paper discusses a model based on agency theory to analyze the optimal transfer of construction risk in public works contracts. The base assumption is that of a contract between a principal (public authority) and an agent (firm), where the payment mechanism is linear and contains an incentive to enhance the agent's effort to reduce construction costs. A theoretical model is proposed, starting from a cost function with a random component and assuming that both the public authority and the firm are risk averse. The main outcome of the paper is that the optimal transfer of construction risk decreases as the variance of errors in cost forecasting, the risk aversion of the firm and the marginal cost of public funds grow larger, while it increases with the variance of errors in cost monitoring and the risk aversion of the public authority.
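The class of linear payment mechanisms analyzed can be sketched in standard principal-agent notation. This is a generic sketch of such models, with illustrative symbols rather than the paper's own:

```latex
% Illustrative sketch of a linear risk-sharing mechanism (generic notation).
\begin{align*}
C &= C_0 - e + \varepsilon_f, & \varepsilon_f &\sim N(0,\sigma_f^2) && \text{(cost-forecast error)}\\
\hat{C} &= C + \varepsilon_m, & \varepsilon_m &\sim N(0,\sigma_m^2) && \text{(cost-monitoring error)}\\
t(\hat{C}) &= \alpha + b\,(C_0 - \hat{C}), & b &\in [0,1] && \text{(share of risk transferred)}
\end{align*}
```

With exponential (CARA) utilities, the firm's certainty equivalent contains the incentive term $b\,e$ minus an effort cost $\psi(e)$ and a risk premium $\tfrac12 r_A b^2(\sigma_f^2+\sigma_m^2)$, so its effort satisfies $\psi'(e)=b$; the optimal $b$ then trades incentive power against the risk premia of both parties, which is the trade-off behind the comparative statics stated in the abstract.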
Abstract:
The pararotor is a decelerator device based on the autorotation of a rotating wing. When dropped, it generates an aerodynamic force parallel to the main motion direction that acts as a decelerating force. In this paper, the rotational motion equations are derived for vertical flight without any lateral wind component, and some simplifying assumptions are introduced to obtain analytic solutions of the motion. First, the equilibrium state is obtained as a function of the main parameters. Then the stability of the equilibrium is analyzed. The motion stability depends on two nondimensional parameters, which combine geometric, inertial, and aerodynamic characteristics of the device. Based on these two parameters, a stability diagram can be defined. Stability regions with different types of trajectories (nodes, spirals, foci) can be identified for spinning motion around axes close to the major, minor, and intermediate principal axes. It is found that the blades contribute to stability in the case of spin around the intermediate principal inertia axis, which is otherwise unstable. Subsequently, the equations for determining the angles of nutation and spin of the body are obtained, thus defining the orientation of the body in stationary motion and the parameters on which that orientation depends.
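The trajectory types named above (nodes, spirals, saddles) follow from the eigenvalues of the linearized system around the equilibrium. A generic sketch of that classification via the trace and determinant of the Jacobian, with hypothetical numerical values standing in for the paper's nondimensional parameters:

```python
import numpy as np

def equilibrium_type(J):
    """Classify a 2D equilibrium from the Jacobian of the linearized system."""
    tr, det = np.trace(J), np.linalg.det(J)
    if det < 0:
        return "saddle (unstable)"
    disc = tr ** 2 - 4 * det
    kind = "node" if disc >= 0 else "spiral"
    stability = "stable" if tr < 0 else "unstable"
    return f"{stability} {kind}"

# Spin about the intermediate principal axis of a free rigid body is a
# saddle; adding damping-like aerodynamic terms (hypothetical values)
# can turn the equilibrium into a stable spiral, as the paper finds
# the blades do.
print(equilibrium_type(np.array([[0.0, 1.0], [2.0, 0.0]])))
print(equilibrium_type(np.array([[-0.5, 1.0], [-2.0, -0.5]])))
```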
Abstract:
An international article that applies the NVA (Natural Variability Approach) criteria to five rivers of the Ebro basin.
Abstract:
The aim of this study is to evaluate the influence of drying fissures on the mechanical properties of timber beams. For that purpose, 40 sawn timber pieces of Scots pine (Pinus sylvestris L.), 150x200 mm in cross-section and 4200 mm in length, were tested according to EN 408, obtaining MOR and MOE. The fissures were registered in detail, measuring their length and position on each face of the beam, and their thickness and depth every 100 mm along the length. Only 10% of the pieces were rejected because of fissures, according to the UNE 56544 Spanish visual grading standard.
To evaluate the influence of fissures on mechanical properties, three global parameters were used: the Fissures Area Ratio, the ratio between the area occupied by fissures and the total area in the neutral-axis plane of the beam; the Fissures Volume Ratio, the ratio between the volume of fissures and the total volume of the beam; and the Fissures Average Depth; together with two local parameters: the Fissures Maximum Depth in the beam and the Fissures Depth in the broken zone of the beam. The density of the beams was also registered. These parameters were compared with the mechanical properties (bending strength, modulus of elasticity and rupture energy), and little relationship was found between them. The best correlations were found between the parameters based on fissure depth and both the modulus of elasticity and the bending strength.
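The parameter-versus-property comparison reduces to simple correlation analysis. A minimal sketch on invented data (the slope, scatter and units below are illustrative, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical data for 40 beams: a moderate negative dependence of MOE
# on fissure depth, plus independent scatter from other defects.
depth = rng.uniform(0, 40, 40)                        # fissure depth [mm]
moe = 10000 - 50 * depth + rng.normal(0, 600, 40)     # MOE [N/mm2]

# Pearson correlation between a fissure parameter and a mechanical property.
r = np.corrcoef(depth, moe)[0, 1]
print(f"correlation between fissure depth and MOE: r = {r:.2f}")
```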
Abstract:
The fundamental objective of this Ph.D. dissertation is to demonstrate that, under particular circumstances which cover most structures of practical interest, periodic structures can be understood and analyzed by means of closed waveguide theories and techniques. To that aim, a transversely periodic cylindrical structure is first considered and the wave equation is studied under a combination of perfectly conducting and periodic boundary conditions. This theoretical study runs parallel to the classic analysis of perfectly conducting closed waveguides. In the light of this study it becomes clear that, under certain very common periodicity conditions, transversely periodic cylindrical structures share many properties with closed waveguides. In particular, they can be characterized by a complete set of TEM, TE and TM modes. As a result, this dissertation introduces the transversely periodic waveguide concept. Once the analogies between the modes of a transversely periodic waveguide and those of a closed waveguide have been established, a well-known closed waveguide characterization method, the generalized Transverse Resonance Technique, is extended to obtain the transversely periodic modes. At this point, all the elements necessary to consider discontinuities between two different transversely periodic waveguides are at our disposal. The analysis of this type of discontinuity is carried out by means of another well-known closed waveguide method, the Mode Matching technique. The dissertation contains a sufficient number of examples, including the analysis of a wire-medium slab, a periodic surface of cross-shaped patches and a parallel-plate waveguide with a textured surface, which demonstrate that the Transverse Resonance - Mode Matching hybrid is highly precise, efficient and versatile.
Thus, the initial statement, "periodic structures can be understood and analyzed by means of closed waveguide theories and techniques", is corroborated. Finally, this dissertation contains an adaptation of the aforementioned generalized Transverse Resonance Technique by means of which laterally open periodic waveguides, such as the well-known Substrate Integrated Waveguides, can be analyzed without any approximation. The analysis of this type of structure has attracted much interest in the recent past, and the analysis techniques previously proposed always resorted to some kind of fictitious wall to close the structure.
Abstract:
Providing QoS in ad hoc networks covers a very wide field of application from the perspective of every level of the network architecture. In other words, a network provides QoS when it can guarantee reliable end-to-end communication between any pair of network nodes, by means of efficient management and administration of resources that allows a suitable differentiation of services according to the characteristics and demands of each application. The principal objective of this article is to analyze the quality-of-service parameters that reactive routing protocols such as AODV and DSR provide in mobile ad hoc networks, supported by the ns-2 simulator. We analyze the behavior of parameters such as throughput, packet loss and latency under each routing protocol, and show which protocol presents better Quality of Service (QoS) characteristics in MANET networks.
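The three QoS metrics compared (throughput, packet loss, latency) have standard definitions that can be computed from per-packet send/receive events, such as those parsed from a simulator trace. The records below are invented for illustration, not ns-2 output:

```python
# Hypothetical packet records: (send_time, recv_time) in seconds,
# with recv_time = None for a lost packet.
packets = [
    (0.00, 0.012), (0.01, 0.025), (0.02, None),
    (0.03, 0.041), (0.04, 0.055), (0.05, None),
]
packet_size_bits = 512 * 8   # assumed fixed payload of 512 bytes

delivered = [(s, r) for s, r in packets if r is not None]
loss_rate = 1 - len(delivered) / len(packets)
avg_latency = sum(r - s for s, r in delivered) / len(delivered)
duration = max(r for _, r in delivered) - min(s for s, _ in delivered)
throughput_bps = len(delivered) * packet_size_bits / duration

print(f"packet loss: {loss_rate:.0%}")
print(f"average latency: {avg_latency * 1000:.1f} ms")
print(f"throughput: {throughput_bps / 1000:.1f} kbit/s")
```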
Abstract:
INTRODUCTION: Objective assessment of motor skills has become an important challenge in minimally invasive surgery (MIS) training. Currently, there is no gold standard defining and determining residents' surgical competence. To aid in the decision process, we analyze the validity of a supervised classifier to determine the degree of MIS competence based on the assessment of psychomotor skills. METHODOLOGY: An adaptive neuro-fuzzy inference system (ANFIS) is trained to classify performance in a box trainer peg transfer task performed by two groups (expert/non-expert). There were 42 participants included in the study: the non-expert group consisted of 16 medical students and 8 residents (< 10 MIS procedures performed), whereas the expert group consisted of 14 residents (> 10 MIS procedures performed) and 4 experienced surgeons. Instrument movements were captured by means of the Endoscopic Video Analysis (EVA) tracking system. Nine motion analysis parameters (MAPs) were analyzed, including time, path length, depth, average speed, average acceleration, economy of area, economy of volume, idle time and motion smoothness. Data reduction was performed by means of principal component analysis, and the result was used to train the ANFIS net. Performance was measured by leave-one-out cross-validation. RESULTS: The ANFIS presented an accuracy of 80.95%, with 13 experts and 21 non-experts correctly classified. The total root mean square error was 0.88, while the area under the classifier's ROC curve (AUC) was measured at 0.81. DISCUSSION: We have shown the usefulness of ANFIS for the classification of MIS competence in a simple box trainer exercise. The main advantage of using ANFIS resides in its continuous output, which allows fine discrimination of surgical competence. There are, however, challenges that must be taken into account when considering the use of ANFIS (e.g. training time, architecture modeling).
Despite this, we have shown the discriminative power of ANFIS for a low-difficulty box trainer task, regardless of the individual significance of each MAP. Future studies are required to confirm these findings and to include new tasks, conditions and sample populations.
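The evaluation protocol described (PCA data reduction, a supervised classifier, leave-one-out cross-validation) can be sketched with scikit-learn. ANFIS is not available there, so logistic regression stands in for it, and the motion-analysis parameters below are synthetic stand-ins:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)

# Hypothetical motion-analysis parameters (9 MAPs) for 24 non-experts
# and 18 experts; experts are faster and smoother on average.
non_experts = rng.normal(1.0, 0.3, (24, 9))
experts = rng.normal(0.6, 0.3, (18, 9))
X = np.vstack([non_experts, experts])
y = np.array([0] * 24 + [1] * 18)

# PCA for data reduction, then a classifier evaluated with
# leave-one-out cross-validation (one fit per participant).
model = make_pipeline(PCA(n_components=3), LogisticRegression())
accuracy = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {accuracy:.2f}")
```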
Abstract:
Service-Oriented Computing (SOC) is a widely accepted paradigm for the development of flexible, distributed and adaptable software systems, in which service compositions perform more complex, higher-level, often cross-organizational tasks using atomic services or other service compositions. In such systems, Quality of Service (QoS) properties, such as performance, cost, availability or security, are critical for the usability of services and their compositions in concrete applications.
The analysis of these properties can become more precise and richer in information if it employs program analysis techniques, such as complexity and sharing analyses, which are able to simultaneously take into account the control and data structures, dependencies, and operations in a composition. Computation cost analysis for service composition can support predictive monitoring and proactive adaptation by automatically inferring upper and lower bounds on computation cost as functions of the value or size of the input messages. These cost functions can be used for adaptation by selecting the service candidates that minimize the total cost of the composition, based on the actual data passed to them. The cost functions can also be combined with empirically collected infrastructural parameters to produce QoS bound functions over the input data, which can be used to predict, at the moment of invocation, potential or imminent Service Level Agreement (SLA) violations. In mission-critical compositions, effective and accurate continuous QoS prediction can be achieved by constraint modeling of the composition QoS based on its structure, data known at runtime, and (when available) the results of complexity analysis. This approach can be applied to service orchestrations with centralized flow control, as well as to choreographies with multiple participants and complex stateful interactions. Sharing analysis can support adaptation actions, such as parallelization, fragmentation, and component selection, which are based on functional dependencies and on the information content of the composition messages, internal data, and activities, in the presence of complex control constructs, such as loops, branches, and sub-workflows.
Both the functional dependencies and the information content (described using user-defined attributes) can be expressed using a first-order logic (Horn clause) representation, and the analysis results can be interpreted as lattice-based conceptual models.
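The Horn-clause view of functional dependencies can be illustrated with a small forward-chaining sketch: each rule says that if everything in its body is available, its head becomes derivable. The data items and rules below are invented for illustration:

```python
# Hypothetical functional dependencies of a composition, as Horn clauses:
# (body: set of items that must be known) -> head: item derivable from them.
rules = [
    ({"order"}, "customer_id"),
    ({"order"}, "items"),
    ({"customer_id"}, "address"),
    ({"items", "address"}, "shipping_cost"),
]

def closure(facts, rules):
    """Forward chaining: derive everything reachable from the input facts."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= known and head not in known:
                known.add(head)
                changed = True
    return known

# Everything that functionally depends (transitively) on the input message.
print(closure({"order"}, rules))
```

Closures of this kind are the basic operation behind the lattice-based interpretation: the sets of derivable items, ordered by inclusion, form the lattice.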
Abstract:
The future of nuclear reactors will depend, among other aspects, on the capability of new technologies to solve the long-term challenges linked to them. These are the capability to provide a definitive, safe and reliable solution for nuclear waste; the limitation of the natural resources needed to fuel the reactors; and, last but not least, improved safety that would avoid any potential damage to the public or the environment as a consequence of any imaginable, or beyond imaginable, circumstance. Following these motivations, Generation IV of nuclear reactors arises, with the aim of providing sustainable, safe, economical and proliferation-resistant electricity.
Among the systems considered for Gen IV, fast reactors play a prominent role thanks to their potential capability to transmute actinides together with the optimal usage of natural resources, with sodium fast reactors being the most promising concept. As a consequence, this thesis was born within the framework of the CP-ESFR project, with the general aim of evaluating the core physics and safety of sodium fast reactors, as well as developing the appropriate tools to perform such analyses. Indeed, in the first part of this thesis work, the main core-physics characteristics of a representative sodium fast reactor are assessed, including a detailed analysis of the capability to transmute minor actinides. Part of the results obtained has been published in Annals of Nuclear Energy [96]. Moreover, by means of the analysis of a hypothetical Spanish nuclear scenario, the availability of the natural resources required to deploy a specific fleet of fast reactors is assessed, taking into account the breeding properties of such systems. This work also led to a publication in Energy Conversion and Management [97]. In order to perform those and other analyses, several models of the ESFR core were created for different codes. On the other hand, safety studies of sodium fast reactors require high-fidelity multidimensional analysis tools specific to these systems. Such tools should integrate neutronic and thermal-hydraulic phenomena in a multi-physics approach. Following this motivation, the neutron diffusion code ANDES is assessed for sodium fast reactor applications. ANDES is the nodal solver implemented inside the multigroup pin-by-pin diffusion code COBAYA3, and is based on the analytical ACMFD method. Thus, the ACMFD method was verified for SFR applications and, while doing so, some limitations were encountered, which are discussed throughout this work. In order to solve them, some new developments are proposed and implemented in ANDES.
Moreover, the code was satisfactorily coupled with the thermal-hydraulic code SUBCHANFLOW, recently developed at KIT. Finally, the different implementations are verified. In addition to those developments, the node-homogenized multigroup cross sections and other neutronic parameters were obtained for the ESFR core using the ERANOS and SERPENT codes, and were afterwards employed by ANDES to perform steady-state calculations. Moreover, as a result of the UPM contribution to the safety package of the CP-ESFR project, the point-kinetics parameters required by typical plant thermal-hydraulic codes were computed for the ESFR core using SERPENT, the final aim being the assessment of the impact of minor actinides on transient behaviour. All in all, the thesis provides a systematic and multi-purpose approach applied to the assessment of the safety and performance parameters of Generation-IV SFRs, using existing and newly developed analytical tools. A considerable amount of time was devoted to identifying the limitations that analytical nodal diffusion methods present when applied to fast reactors following a multigroup approach, and interesting solutions are proposed to overcome them.
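The point-kinetics parameters mentioned above (effective delayed-neutron fraction, mean generation time, precursor decay constants) drive the standard point-kinetics equations used by plant thermal-hydraulic codes in transient studies. A minimal one-delayed-group sketch is shown below; the numerical values are generic placeholders chosen only to illustrate the stiff prompt/delayed timescale separation, not ESFR results.

```python
# One-delayed-group point kinetics, explicit Euler integration.
# BETA, LAMBDA_GEN and DECAY are illustrative placeholder values.

BETA = 3.5e-3        # effective delayed-neutron fraction (illustrative)
LAMBDA_GEN = 4.0e-7  # mean neutron generation time, s (illustrative)
DECAY = 0.4          # one-group precursor decay constant, 1/s (illustrative)

def point_kinetics(rho, t_end, dt=1e-5):
    """Integrate the point-kinetics equations for a step reactivity `rho`,
    starting from an equilibrium state at unit power."""
    n = 1.0                                  # normalised neutron population
    c = BETA * n / (LAMBDA_GEN * DECAY)      # equilibrium precursor level
    for _ in range(int(t_end / dt)):
        dn = ((rho - BETA) / LAMBDA_GEN) * n + DECAY * c
        dc = (BETA / LAMBDA_GEN) * n - DECAY * c
        n += dt * dn
        c += dt * dc
    return n

# For a +0.1 $ step (rho = 0.1 * BETA), power prompt-jumps to roughly
# BETA / (BETA - rho) and then rises slowly on the delayed-neutron timescale.
```

Changing BETA is the simplest way to see why minor-actinide loading matters for transients: a smaller delayed fraction means a larger prompt jump for the same reactivity step.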
Resumo:
Atrial fibrillation (AF) is a common heart disorder. One of the most prominent hypotheses about its initiation and maintenance considers multiple uncoordinated activation foci inside the atrium. However, the implicit assumption behind all the signal-processing techniques used for AF, such as dominant-frequency and organization analysis, is the existence of a single regular component in the observed signals. In this paper we take into account the existence of multiple foci, performing a spectral analysis to detect their number and frequencies. In order to obtain a cleaner signal on which the spectral analysis can be performed, we introduce sparsity-aware learning techniques to infer the spike trains corresponding to the activations. The good performance of the proposed algorithm is demonstrated on both synthetic and real data. SUMMARY: An algorithm based on sparse regression techniques for the extraction of cardiac signals in patients with atrial fibrillation (AF).
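The idea of inferring spike trains by sparse regression and then reading activation frequencies off their spectrum can be sketched as follows. This is a generic L1-regularised deconvolution (ISTA) demo, not the paper's algorithm; the activation waveform, sampling rate and focus frequencies are all made-up illustration values.

```python
# Hedged sketch: recover a spike train by L1-regularised deconvolution
# (ISTA), then inspect its spectrum for the activation rates of the foci.
import numpy as np

def ista_deconvolve(y, h, lam=0.1, n_iter=500):
    """Solve min_x 0.5*||y - h*x||^2 + lam*||x||_1 with ISTA, where h*x is
    the convolution of the spike train x with the known waveform h."""
    n = len(y)
    # Build the n x n convolution matrix of h (lower-triangular banded)
    H = np.zeros((n, n))
    for i in range(len(h)):
        H += np.diag(np.full(n - i, h[i]), -i)
    L = np.linalg.norm(H, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = H.T @ (H @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Synthetic demo: two hypothetical foci firing at different rates
rng = np.random.default_rng(0)
fs = 200.0                         # sampling rate, Hz (illustrative)
t = np.arange(0, 2, 1 / fs)
spikes = np.zeros_like(t)
spikes[::40] = 1.0                 # focus 1: every 0.200 s -> 5 Hz
spikes[::29] += 1.0                # focus 2: every 0.145 s -> ~6.9 Hz
h = np.exp(-np.arange(20) / 4.0)   # assumed activation waveform
y = np.convolve(spikes, h)[: len(t)] + 0.05 * rng.standard_normal(len(t))

x_hat = ista_deconvolve(y, h, lam=0.2)
spec = np.abs(np.fft.rfft(x_hat))  # spectral lines near each firing rate
freqs = np.fft.rfftfreq(len(x_hat), 1 / fs)
```

Because soft thresholding yields exact zeros, `x_hat` is genuinely sparse, and its spectrum shows lines near each focus rate rather than the single dominant frequency a conventional analysis would assume.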