83 results for Deterministic imputation
at Universidad Politécnica de Madrid
Abstract:
The fuzzy min–max neural network classifier is a supervised learning method that takes the hybrid neural network and fuzzy systems approach. All input variables in the network are required to correspond to continuously valued variables, which can be a significant constraint in many real-world situations where there are not only quantitative but also categorical data. The usual way of dealing with this type of variable is to replace the categorical values with numerical ones and treat them as if they were continuously valued. But this method implicitly defines a possibly unsuitable metric for the categories. A number of different procedures have been proposed to tackle the problem. In this article, we present a new method. The procedure extends the fuzzy min–max neural network input to categorical variables by introducing new fuzzy sets, a new operation, and a new architecture. This provides for greater flexibility and wider application. The proposed method is then applied to missing data imputation in voting intention polls. The microdata of this type of poll (the set of the respondents' individual answers to the questions) are especially suited for evaluating the method, since they include a large number of numerical and categorical attributes.
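For orientation, the numerical core that this extension builds on is Simpson's classic hyperbox membership function; a minimal sketch follows (the article's categorical fuzzy sets and new operation are not reproduced here):

```python
import numpy as np

def hyperbox_membership(a, v, w, gamma=1.0):
    """Simpson's fuzzy min-max membership of pattern `a` in the hyperbox
    with min point `v` and max point `w` (all in [0, 1]^n). Returns 1.0
    inside the box and decays outside it, at a rate set by the
    sensitivity parameter `gamma`."""
    above = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, a - w)))
    below = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, v - a)))
    return (above + below).sum() / (2 * len(a))

# A pattern inside the box has membership 1; one outside it, less.
v, w = np.array([0.2, 0.2]), np.array([0.4, 0.5])
print(hyperbox_membership(np.array([0.3, 0.3]), v, w))  # 1.0
print(hyperbox_membership(np.array([0.9, 0.3]), v, w))  # 0.875
```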
Abstract:
There are many situations where input feature vectors are incomplete, and methods to tackle the problem have been studied for a long time. A commonly used procedure is to replace each missing value with an imputation. This paper presents a method to perform categorical missing data imputation from numerical and categorical variables. The imputations are based on Simpson’s fuzzy min–max neural networks, whose input variables for learning and classification are numerical only. The proposed method extends the input to categorical variables by introducing new fuzzy sets, a new operation, and a new architecture. The procedure is tested and compared with other methods using opinion poll data.
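The extended architecture is not detailed in the abstract; under the assumption that each trained hyperbox additionally records the categorical values of the patterns that shaped it, the imputation step itself could look roughly like this (all names and data hypothetical, and the membership function deliberately simplified):

```python
import numpy as np
from collections import Counter

def membership(a, v, w, gamma=1.0):
    # Simplified stand-in for the fuzzy min-max membership: 1 inside the
    # hyperbox [v, w], decaying linearly with mean distance outside it.
    d = np.maximum(0.0, np.maximum(a - w, v - a))
    return max(0.0, 1.0 - gamma * d.mean())

# Hypothetical trained hyperboxes, each with counts of a categorical answer.
hyperboxes = [
    {"v": np.array([0.1, 0.2]), "w": np.array([0.4, 0.5]),
     "counts": Counter({"party_A": 17, "party_B": 3})},
    {"v": np.array([0.6, 0.6]), "w": np.array([0.9, 0.9]),
     "counts": Counter({"party_B": 12, "party_A": 2})},
]

def impute(a, boxes):
    """Impute a missing categorical answer for numerical pattern `a` as
    the modal category of the best-matching hyperbox."""
    best = max(boxes, key=lambda b: membership(a, b["v"], b["w"]))
    return best["counts"].most_common(1)[0][0]

print(impute(np.array([0.3, 0.3]), hyperboxes))  # party_A
```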
Abstract:
Abstract is not available.
Abstract:
This paper presents a deterministic continuous model of proliferative cell activity. The classical series of connected compartments is revisited, along with a simple mathematical treatment of two hypotheses: constant transit times and harmonic Ts. Several examples are presented to support these ideas, taken both from the previous literature and from recent experiments with the fish Carassius auratus carried out at the Junta de Energía Nuclear, Madrid, Spain.
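The abstract does not give the model's equations; a minimal numerical sketch of the classical chain of connected compartments, assuming first-order transfer at rate 1/T_i per compartment as a stand-in for the constant-transit-time hypothesis:

```python
import numpy as np

def simulate_chain(T, n0, t_end, dt=0.01):
    """Euler integration of a chain of proliferative compartments.
    T[i] is the mean transit time of compartment i; cells leave
    compartment i at rate n[i]/T[i] and enter compartment i+1.
    Cells leaving the last compartment exit the system."""
    n = np.array(n0, dtype=float)
    k = 1.0 / np.asarray(T, dtype=float)   # transfer rates
    for _ in range(int(t_end / dt)):
        outflow = k * n
        n -= dt * outflow                  # cells leaving each compartment
        n[1:] += dt * outflow[:-1]         # ... arriving at the next one
    return n

print(simulate_chain(T=[10.0, 8.0, 12.0], n0=[1000, 0, 0], t_end=20.0))
```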
Abstract:
In this paper, we examine the issue of memory management in the parallel execution of logic programs. We concentrate on non-deterministic and-parallel schemes, which we believe present a relatively general set of problems to be solved, including most of those encountered in the memory management of or-parallel systems. We present a distributed stack memory management model which allows flexible scheduling of goals. Previously proposed models (based on the "Marker model") are lacking in that they impose restrictions on the selection of goals to be executed or may consume a large amount of virtual memory. This paper first presents results which imply that the above-mentioned shortcomings can have a significant performance impact. An extension of the Marker Model is then proposed which allows flexible scheduling of goals while keeping (virtual) memory consumption down. Measurements are presented which show the advantages of this solution. Methods for handling forward and backward execution, cut, and roll back are discussed in the context of the proposed scheme. In addition, the paper shows how the same mechanism for flexible scheduling can be applied to allow the efficient handling of the very general forms of suspension that can occur in systems which combine several types of and-parallelism with more sophisticated methods of executing logic programs. We believe that the results are applicable to many and- and or-parallel systems.
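A toy sketch of the marker idea the paper extends (markers delimiting per-goal sections of a shared stack, so that backward execution reclaims memory up to the marker); the real model must also handle interleaved sections created by flexible scheduling, which this sketch omits:

```python
class MarkedStack:
    """Toy model of marker-based stack management for and-parallel goals.
    Each scheduled goal pushes a marker; work done while executing the
    goal lands in its section, and failing the goal pops to its marker."""
    def __init__(self):
        self.cells = []      # trail of bindings / choice points
        self.markers = {}    # goal id -> start index of its section

    def start_goal(self, goal_id):
        self.markers[goal_id] = len(self.cells)   # push a marker

    def push(self, binding):
        self.cells.append(binding)

    def roll_back(self, goal_id):
        # Backward execution: discard the goal's entire section at once.
        self.cells = self.cells[: self.markers.pop(goal_id)]

stack = MarkedStack()
stack.start_goal("g1"); stack.push(("X", 1)); stack.push(("Y", 2))
stack.roll_back("g1")
print(stack.cells)   # [] -- memory reclaimed back to the marker
```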
Abstract:
This paper proposes a novel combination of artificial intelligence planning and other techniques for improving decision-making in the context of multi-step multimedia content adaptation. In particular, it describes a method that allows decision-making (selecting the adaptation to perform) in situations where third-party pluggable multimedia conversion modules are involved and the multimedia adaptation planner does not know their exact adaptation capabilities. In this approach, the multimedia adaptation planner module is only responsible for a part of the required decisions; the pluggable modules make additional decisions based on different criteria. We demonstrate that partial decision-making is not only attainable, but also introduces advantages with respect to a system in which these conversion modules are not capable of providing additional decisions. This means that transferring decisions from the multi-step multimedia adaptation planner to the pluggable conversion modules increases the flexibility of the adaptation. Moreover, by allowing conversion modules to be only partially described, the range of problems that these modules can address increases, while significantly decreasing both the description length of the adaptation capabilities and the planning decision time. Finally, we specify the conditions under which knowing the partial adaptation capabilities of a set of conversion modules will be enough to compute a proper adaptation plan.
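A rough sketch of the split the paper argues for, with all module names, formats, and capability fields hypothetical: the planner chains modules from their partial (input/output) descriptions only, leaving every internal decision to the modules at execution time:

```python
# Hypothetical pluggable conversion modules with partial descriptions.
modules = [
    {"name": "extract_audio", "consumes": "video/mp4", "produces": "audio/wav"},
    {"name": "transcode",     "consumes": "audio/wav", "produces": "audio/mp3"},
]

def plan(source, target, available):
    """Greedy forward chaining over partial capability descriptions:
    only declared input/output formats are matched; codec settings,
    scaling, etc. are deferred to each module when the plan runs."""
    chain, current = [], source
    for _ in range(len(available) + 1):   # bound the chain length
        if current == target:
            return chain
        step = next((m for m in available if m["consumes"] == current), None)
        if step is None:
            return None                   # no module can continue the chain
        chain.append(step["name"])
        current = step["produces"]
    return None

print(plan("video/mp4", "audio/mp3", modules))  # ['extract_audio', 'transcode']
```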
Abstract:
IDPSA (Integrated Deterministic-Probabilistic Safety Assessment) is a family of methods which use tightly coupled probabilistic and deterministic approaches to address the respective sources of uncertainty, enabling risk-informed decision making in a consistent manner. The starting point of the IDPSA framework is that the safety justification must be based on the coupling of deterministic (consequences) and probabilistic (frequency) considerations to address the mutual interactions between stochastic disturbances (e.g. equipment failures, human actions, stochastic physical phenomena) and the deterministic response of the plant (i.e. transients). This paper gives a general overview of some IDPSA methods as well as some possible applications to PWR safety analyses.
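An illustrative toy coupling of the two ingredients, with an entirely made-up plant model: stochastic disturbances are sampled, each sample drives a deterministic transient, and the frequency of exceeding a safety limit is estimated:

```python
import random

def peak_temperature(t_fail, t_recover, T0=300.0, heatup_rate=2.0):
    """Toy deterministic plant response: temperature ramps from the
    failure time until recovery; the peak scales with outage duration."""
    return T0 + heatup_rate * max(0.0, t_recover - t_fail)

def exceedance_probability(limit=450.0, trials=100_000, seed=42):
    """Sample stochastic disturbances, run the deterministic transient
    for each sample, and estimate P(peak temperature > limit)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        t_fail = rng.expovariate(1.0)                      # failure time (h)
        t_recover = t_fail + rng.lognormvariate(3.0, 0.5)  # repair time (h)
        if peak_temperature(t_fail, t_recover) > limit:
            hits += 1
    return hits / trials

print(f"P(peak temperature > limit) ~ {exceedance_probability():.4f}")
```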
Abstract:
This paper presents a physically cogent model for electrical noise in resistors obtained from thermodynamic reasoning. This new model, derived from the works of Johnson and Nyquist, also agrees with the quantum model for noisy systems handled by Callen and Welton in 1951, thus unifying these two physical viewpoints. It is a complex, or 2-D, noise model based on an admittance that considers both fluctuation and dissipation of electrical energy, improving on the real, or 1-D, model in use, which considers only dissipation. The new model is presented in the frequency domain through two orthogonal currents linked to a common noise voltage by an admittance function. Its use in the time domain reveals the pitfall behind a paradox of statistical mechanics, namely systems considered energy-conserving and deterministic on the microscale that are dissipative and unpredictable on the macroscale, and also shows how to use the Fluctuation-Dissipation Theorem properly.
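For reference, the two standard results the paper reconciles, in their usual textbook form (one-sided spectral densities in one common convention; not quoted from the paper itself):

```latex
% Classical Johnson--Nyquist result for a resistance R at temperature T:
S_V(f) = 4 k_B T R
% Callen--Welton (1951) quantum form, including the zero-point term,
% which reduces to the classical result when  h f \ll k_B T:
S_V(f) = 2 h f R \coth\!\left(\frac{h f}{2 k_B T}\right)
```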
Abstract:
In a series of attempts to research and document relevant sloshing-type phenomena, a series of experiments has been conducted. The aim of this paper is to describe the setup and data processing of such experiments. A sloshing tank is subjected to angular motion. As a result, pressure records are obtained at several locations, together with the motion data, torque, and a collection of image and video information. The experimental rig and the data acquisition systems are described. Useful information for experimental sloshing research practitioners is provided, related to the liquids used in the experiments, the dyeing techniques, tank building processes, synchronization of the acquisition systems, etc. A new procedure for reconstructing experimental data that takes experimental uncertainties into account is presented. This procedure is based on a least squares spline approximation of the data. Based on a deterministic approach to the first sloshing wave impact event in a sloshing experiment, an uncertainty analysis procedure for the associated first pressure peak value is described.
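A minimal sketch of the least squares spline reconstruction step, with a synthetic pressure record and an assumed measurement uncertainty feeding the smoothing condition (the paper's actual signals and uncertainty budget are not reproduced):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical pressure record: a noisy impact-like signal at 1 kHz.
t = np.linspace(0.0, 1.0, 1000)
noise = 0.05 * np.random.default_rng(0).normal(size=t.size)
p = np.exp(-30 * (t - 0.4) ** 2) + noise

# Least squares smoothing spline: weighting by 1/sigma and setting the
# smoothing factor s ~ number of points makes the fit consistent with
# the assumed measurement uncertainty.
sigma = 0.05
spline = UnivariateSpline(t, p, w=np.full(t.size, 1.0 / sigma), s=t.size)

p_reconstructed = spline(t)            # de-noised pressure signal
print(f"first-peak estimate: {p_reconstructed.max():.3f}")
```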
Abstract:
Within the framework of the Collaborative Project for a European Sodium Fast Reactor, the reactor physics group at UPM is working on the extension of its in-house multi-scale advanced deterministic code COBAYA3 to Sodium Fast Reactors (SFR). COBAYA3 is a 3D multigroup neutron kinetics diffusion code that can be used either as a pin-by-pin code or as a stand-alone nodal code by using the analytic nodal diffusion solver ANDES. It is coupled with thermal-hydraulics codes such as COBRA-TF and FLICA, allowing transient analysis of LWRs at both fine-mesh and coarse-mesh scales. In order to also enable 3D pin-by-pin and nodal coupled NK-TH simulations of SFRs, several developments are in progress. This paper presents the first steps towards the application of COBAYA3 to this type of reactor. The ANDES solver, already extended to triangular-Z geometry, has been applied to fast reactor steady-state calculations. The required cross section libraries were generated with the ERANOS code for several configurations. The limitations encountered in the application of the Analytic Coarse Mesh Finite Difference (ACMFD) method (implemented inside ANDES) to fast reactors are presented, and the sensitivity of the method when using a high number of energy groups is studied. ANDES performance is assessed by comparison with the results provided by ERANOS, using a mini-core model in 33 energy groups. Furthermore, a benchmark from the NEA for a small 3D FBR in hexagonal-Z geometry and 4 energy groups is also employed to verify the behavior of the code with few energy groups.
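For context, the steady-state multigroup neutron diffusion equations that such codes discretize, in their standard textbook form (G groups, removal cross section Sigma_r,g, scattering Sigma_s,g'->g, fission source scaled by k_eff; not quoted from the paper):

```latex
-\nabla \cdot D_g \nabla \phi_g + \Sigma_{r,g}\,\phi_g
  = \sum_{g' \neq g} \Sigma_{s,\,g' \to g}\,\phi_{g'}
  + \frac{\chi_g}{k_{\mathrm{eff}}} \sum_{g'} \nu\Sigma_{f,g'}\,\phi_{g'},
  \qquad g = 1,\dots,G
```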
Abstract:
After the devastating 12 January 2010 earthquake that hit the city of Port-au-Prince, Haiti, strategies to minimize the country's high seismic risk are being developed by local authorities, NGOs, and national and international institutions. Two important tasks toward this objective are, on the one hand, the evaluation of the seismic risk associated with possible future earthquakes, in order to gauge the dimensions of the catastrophe, and, on the other, the design of preventive measures and emergency plans to minimize the consequences of such events. In this sense, this Master's Thesis provides a detailed estimation of the damage that a possible future earthquake could cause in Port-au-Prince. A methodology to calculate the seismic risk is proposed, adapted to the conditions of the study area and calibrated using data from the 2010 earthquake. The work was conducted in the frame of the Sismo-Haití cooperative project, supported by the Universidad Politécnica de Madrid, which started ten months after the 2010 earthquake in answer to an aid request from the Haitian government. The seismic risk calculation requires two inputs: the seismic hazard (the expected ground motion due to a scenario earthquake of given magnitude and location) and the elements exposed to that hazard (a classification of the building stock into building typologies, together with their vulnerability). This vulnerability is described through damage functions: capacity curves, which represent the structural performance against the horizontal forces caused by earthquakes, and fragility curves, which give the probability of damage as the structure reaches its maximum spectral displacement under that horizontal force. The proposed methodology specifies guidelines and criteria to estimate the ground motion, assign the vulnerability, and evaluate the damage, covering the whole process. First, different ground motion prediction equations including the local effect are considered, and those that best fit the observations of the 2010 earthquake are identified. Second, the building stock is classified into typologies using information collected during a field campaign together with a database provided by the Ministry of Public Works of Haiti, which contains relevant information about all the buildings in the city; this leads to a total of 6 typologies. Finally, the damage is estimated using the capacity-spectrum method as implemented in the software SELENA (Molina et al., 2010). Data on the damage caused by the 2010 earthquake were used to calibrate the proposed calculation model: four ground motion models, three soil models, and a set of damage functions. With the calibrated model, a deterministic scenario corresponding to a possible earthquake with epicenter close to Port-au-Prince was then simulated. The results show considerable structural damage, and the associated economic and human losses would have a great impact on the country, highlighting the high structural vulnerability of the city.
This result will be provided to the Haitian authorities, constituting a sound basis for decision making and for the adoption of risk prevention and mitigation policies. It is recommended to direct efforts towards reducing the structural vulnerability (by reinforcing vulnerable buildings and adopting a seismic design code) and towards the development of emergency plans.
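The damage estimation step combines a performance-point displacement from the capacity spectrum with lognormal fragility curves; a minimal sketch with illustrative (hypothetical) fragility parameters for one typology:

```python
import math

def damage_probability(sd, sd_median, beta):
    """Lognormal fragility curve: probability of reaching or exceeding a
    damage state at spectral displacement `sd`, given the median capacity
    `sd_median` and log-standard deviation `beta`."""
    z = math.log(sd / sd_median) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical fragility parameters (median displacement in cm, beta):
states = {"slight": (1.0, 0.7), "moderate": (2.5, 0.7),
          "extensive": (6.0, 0.8), "complete": (12.0, 0.9)}
sd = 4.0   # performance-point displacement from the capacity spectrum
for name, (median, beta) in states.items():
    print(f"P(damage >= {name:9s}) = {damage_probability(sd, median, beta):.2f}")
```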
Abstract:
We propose an analysis for detecting procedures and goals that are deterministic (i.e., that produce at most one solution at most once), or predicates whose clause tests are mutually exclusive (which implies that at most one of their clauses will succeed) even if they are not deterministic. The analysis takes advantage of the pruning operator in order to improve the detection of mutual exclusion and determinacy. It also supports arithmetic equations and disequations, as well as equations and disequations on terms, for which we give a complete satisfiability testing algorithm with respect to the available type information. Information about determinacy can be used for program debugging and optimization, resource consumption and granularity control, abstraction-carrying code, etc. We have implemented the analysis and integrated it into the CiaoPP system, which also automatically infers the mode and type information that our analysis takes as input. Experiments performed on this implementation show that the analysis is fairly accurate and efficient.
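A drastically simplified illustration of the mutual exclusion idea, reducing the paper's satisfiability test over arithmetic and term (dis)equations to interval guards on a single integer argument:

```python
INF = float("inf")

def overlap(g1, g2):
    """The conjunction of two closed interval guards (lo, hi) is
    satisfiable iff the intervals intersect."""
    return max(g1[0], g2[0]) <= min(g1[1], g2[1])

# Guards of two clauses of a predicate p(X) over integers:
#   clause 1:  X =< 0        clause 2:  X >= 1
g_clause1, g_clause2 = (-INF, 0), (1, INF)
print(not overlap(g_clause1, g_clause2))
# True: the clause tests are mutually exclusive, so at most one clause
# of p/1 can succeed for any given input.
```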
Abstract:
Most of the prestressed concrete structures built in the last 50 years have demonstrated excellent durability when constructed in accordance with the rules of good design, detailing, and execution. This is largely a consequence of the attention paid to the feared stress corrosion cracking phenomenon, which is typical of high-strength prestressing steel wires. Less attention, however, has been paid to the stress corrosion cracking susceptibility of the anchorages for post-tensioning tendons, probably because few catastrophic failures have been reported. Damage tolerance and fracture mechanics concepts have recently started to be incorporated into some design codes for metallic civil engineering structures; however, they are still far from being assimilated and routinely used by civil engineers in their calculations. This limited knowledge of the damage tolerance basis leads to significant repair and maintenance costs. This work studies the applicability of fracture mechanics and damage tolerance concepts to the components of post-tensioning systems used in civil engineering, applying them to assess the susceptibility of prestressing steel wires to stress corrosion cracking and the reduction of the load-bearing capacity of anchorage heads due to the presence of defects. For this purpose, experimental work and numerical techniques have been combined. Surface defects in prestressing steel wires do not appear in isolation: they show a certain continuity in the axial direction and occur in large numbers. Hence a statistical approach, more appropriate here than a deterministic one, was adopted. Statistical models based on extreme value theory allowed the surface condition of 5.2 mm diameter wires to be characterized. The stress corrosion cracking susceptibility of the wire was then assessed through a testing campaign in line with the current standard, which allowed its behavior to be characterized statistically. In the light of the results, it has been possible to evaluate how the parameters defining the surface condition of the wire can determine the durability of the tendon with respect to its resistance to stress corrosion cracking, as evaluated by the tests specified in the standard. In the case of anchorage heads for post-tensioning tendons, defects appear in isolation and originate from dents, scratches, or corrosion pits that can be produced during manufacturing, transport, handling, or assembly. Given the nature of these defects, the deterministic approach is more appropriate than the statistical one. Assessing the relevance of a defect in a structural component requires the computation of stress intensity factors, which in turn allow evaluating whether the defect is critical or could become critical as it grows with time (by fatigue, corrosion, or a combination of both). In this work the defects have been idealized as cracks, a conservative hypothesis.
The stress intensity factors were calculated with finite element models of the anchorage head that simulate its real working conditions during its service life. These numerical models were used to analyze the influence on the failure load of the anchorage of factors such as the anchorage geometry, the support conditions, the anchorage material, and the size, shape, and position of the defect. The numerical results were satisfactorily validated against an experimental campaign on scale models of anchorage heads made of poly(methyl methacrylate), in which defects of several sizes and at different positions were artificially introduced.
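A sketch of the extreme value characterization applied to wire surface defects, assuming Gumbel-distributed block maxima of pit depths (synthetic data; the thesis's actual models and measurements are not reproduced):

```python
import numpy as np
from scipy.stats import gumbel_r

# Hypothetical data: maximum pit depth (micrometres) measured on each of
# 40 equal-length reference sections of a prestressing wire.
rng = np.random.default_rng(1)
max_depths = gumbel_r.rvs(loc=35.0, scale=8.0, size=40, random_state=rng)

# Fit the Gumbel (extreme value type I) model, the usual choice for
# block maxima of corrosion pit depths.
loc, scale = gumbel_r.fit(max_depths)

# Characteristic depth expected to be exceeded on 1 section in 100:
d100 = gumbel_r.ppf(1.0 - 1.0 / 100.0, loc=loc, scale=scale)
print(f"mu = {loc:.1f} um, beta = {scale:.1f} um, d_100 = {d100:.1f} um")
```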
Abstract:
This work is based on the prototype High Temperature Engineering Test Reactor (HTTR) of the Japan Atomic Energy Agency (JAEA). Its objective is to describe a deterministic model adequate for assessing its design safety margins via damage domains. The concept of a damage domain is defined, and its relevance to the ongoing effort to apply dynamic risk assessment methods and tools based on the Theory of Stimulated Dynamics (TSD) is shown. As an illustration, we present results for an abnormal control rod (CR) withdrawal during subcritical conditions and compare them with results obtained by JAEA. No attempt is made yet to assess the detailed scenarios; rather, the aim is to show how the approach may handle events of this kind.
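A toy illustration of mapping a damage domain: scan the uncertain-parameter plane of a postulated CR withdrawal with a surrogate deterministic response and mark where a safety limit is exceeded (model and numbers invented purely for illustration):

```python
import numpy as np

def peak_power(withdrawal_rate, scram_delay):
    """Surrogate deterministic transient: stand-in for the peak power
    reached after an abnormal CR withdrawal (illustrative model only)."""
    return withdrawal_rate * (1.0 + 3.0 * scram_delay)

rates = np.linspace(0.1, 2.0, 50)    # CR withdrawal rate (arbitrary units)
delays = np.linspace(0.0, 10.0, 50)  # protection/scram delay (s)
LIMIT = 8.0                          # hypothetical safety limit

# The damage domain is the region of the plane where the deterministic
# response exceeds the limit.
damage = np.array([[peak_power(r, d) > LIMIT for d in delays] for r in rates])
print(f"damage domain covers {damage.mean():.1%} of the scanned plane")
```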
Abstract:
Background: Malignancies arising in the large bowel cause the second largest number of deaths from cancer in the Western world. Despite the progress made during the last decades, colorectal cancer remains one of the most frequent and deadly neoplasias in Western countries. Methods: A genomic study of human colorectal cancer was carried out on a total of 31 tumoral samples, corresponding to different stages of the disease, and 33 non-tumoral samples. The study was carried out by hybridisation of the tumour samples against a reference pool of non-tumoral samples using Agilent Human 1A 60-mer oligo microarrays. The results obtained were validated by qRT-PCR. In the subsequent bioinformatics analysis, gene networks were built by means of Bayesian classifiers, variable selection, and bootstrap resampling. The consensus among all the induced models produced a hierarchy of dependences and, thus, of variables. Results: After an exhaustive pre-processing stage to ensure data quality (missing value imputation, probe quality checks, data smoothing, and intraclass variability filtering), the final dataset comprised a total of 8,104 probes. Next, a supervised classification approach and data analysis were carried out to obtain the most relevant genes; two of them are directly involved in cancer progression, and in particular in colorectal cancer. Finally, a supervised classifier was induced to classify new unseen samples. Conclusions: We have developed a tentative model for the diagnosis of colorectal cancer based on a biomarker panel. Our results indicate that the gene profile described herein can discriminate between non-cancerous and cancerous samples with 94.45% accuracy using different supervised classifiers (AUC values between 0.955 and 0.997).
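A hedged sketch of the general pattern (bootstrap resampling, variable selection by model consensus, then a supervised classifier scored by AUC) using scikit-learn on a toy expression matrix; the authors used Bayesian classifiers, for which the L1-penalized logistic regression here is only a stand-in:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

# Toy stand-in for the real data: 64 samples x 200 probes, binary labels
# (0 = non-tumoral, 1 = tumoral). The real dataset had 8,104 probes.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 200))
y = rng.integers(0, 2, size=64)

# Bootstrap consensus: count how often each probe survives selection.
scores = np.zeros(X.shape[1])
for b in range(100):
    Xb, yb = resample(X, y, random_state=b)
    model = LogisticRegression(penalty="l1", solver="liblinear").fit(Xb, yb)
    scores += (model.coef_[0] != 0)

top = np.argsort(scores)[-20:]   # consensus panel of the 20 sturdiest probes

# Induce a final supervised classifier on the panel and score it by AUC.
X_tr, X_te, y_tr, y_te = train_test_split(X[:, top], y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```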