14 results for model determination
at Universidad Politécnica de Madrid
Abstract:
The discrete element method (DEM) is a numerical technique widely used to simulate the mechanical behavior of granular materials involved in many food and agricultural industry processes. It is also a powerful tool for understanding many complex phenomena related to the mechanics of granular materials. However, to exploit the potential of this technique it is necessary to develop DEM models capable of representing reality accurately. For that, among other requirements, it is essential that the values of the microscopic material properties used to define the numerical model be determined accurately.
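As a hedged illustration of the calibration problem this abstract describes, the sketch below tunes two hypothetical micro-parameters so that a surrogate bulk observable matches a measured value; in a real study the surrogate function would be a full DEM simulation, and all names and values here are assumptions.

```python
# Illustrative DEM micro-parameter calibration loop (hypothetical surrogate;
# a real study would run a full DEM simulation at each iteration).
import numpy as np
from scipy.optimize import minimize

def simulated_angle_of_repose(params):
    """Stand-in for a DEM simulation returning a bulk observable.
    params = (sliding friction, rolling friction)."""
    mu, mu_r = params
    # Toy monotone surrogate, NOT a physical model.
    return np.degrees(np.arctan(mu + 0.5 * mu_r))

measured_angle = 32.0  # deg, e.g. from a heap test (illustrative)

def objective(params):
    return (simulated_angle_of_repose(params) - measured_angle) ** 2

result = minimize(objective, x0=[0.4, 0.1],
                  bounds=[(0.0, 1.5), (0.0, 1.0)], method="L-BFGS-B")
print("calibrated micro-parameters:", result.x)
```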
Abstract:
Independent Component Analysis (ICA) is a blind source separation method that aims to recover the pure source signals that are mixed in unknown proportions in the observed signals under study. It does so by searching for factors that are mutually statistically independent, and it can thus be classified among the latent-variable methods. As with other latent-variable methods, a careful investigation must be carried out to find out which factors are significant and which are not. It is therefore important to have a validation procedure to decide on the optimal number of independent components (ICs) to include in the final model. This is complicated by the fact that two consecutive models may differ in the order and signs of similarly indexed ICs; moreover, the structure of the extracted sources can change with the number of factors calculated. Two methods for determining the optimal number of ICs are proposed in this article and applied to simulated and real datasets to demonstrate their performance.
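To illustrate the order and sign ambiguity mentioned above, the following sketch (not the authors' procedure) matches components extracted at two model orders by maximum absolute correlation, which absorbs both sign flips and reordering; the data are a synthetic placeholder.

```python
# Matching ICs across two model orders by absolute correlation
# (illustrative only; synthetic mixed sources, not the article's method).
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 500)
S = np.c_[np.sin(2 * t), np.sign(np.cos(3 * t)), rng.laplace(size=500)]
X = S @ rng.standard_normal((3, 10)) + 0.05 * rng.standard_normal((500, 10))

def extract(n):
    return FastICA(n_components=n, random_state=0).fit_transform(X)

S3, S4 = extract(3), extract(4)
# |correlation| between components of the two models; match one-to-one
C = np.abs(np.corrcoef(S3.T, S4.T)[:3, 3:])
rows, cols = linear_sum_assignment(-C)  # assignment maximizing total |corr|
for r, c in zip(rows, cols):
    print(f"IC{r} of k=3 model ~ IC{c} of k=4 model, |corr| = {C[r, c]:.2f}")
```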
Abstract:
A novel methodology based on instrumented indentation is developed to determine the mechanical properties of amorphous materials that exhibit cohesive-frictional behaviour. The approach is based on the concept of a universal hardness equation, which results from the assumption of a characteristic indentation pressure proportional to the hardness. The actual universal hardness equation is obtained from a detailed finite element analysis of the process of sharp indentation for a very wide range of material properties, and the inverse problem (i.e. how to extract the elastic modulus, the compressive yield strength and the friction angle from instrumented indentation) is solved. The applicability and limitations of the novel approach are highlighted. Finally, the model is validated against experimental data on metallic and ceramic glasses as well as polymers, covering a wide range of amorphous materials in terms of elastic modulus, yield strength and friction angle.
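The inverse problem can be posed schematically as a least-squares fit of a forward model to the measured indentation quantities. The forward map below is a toy stand-in, not the paper's FEA-derived universal hardness equation; all coefficients and observables are invented for illustration.

```python
# Schematic inversion of indentation data (hypothetical forward model,
# NOT the paper's FEA-derived universal hardness equation).
import numpy as np
from scipy.optimize import least_squares

def forward(E, Y, phi):
    """Toy map from (modulus, yield strength, friction angle) to three
    indentation observables (hardness, contact stiffness, work ratio)."""
    H = 2.8 * Y * (1 + 0.3 * np.tan(phi)) * (1 + 0.05 * np.log(E / Y))
    S = 1.1 * np.sqrt(E * H)
    W = (Y / E) * (1 - 0.2 * np.tan(phi))
    return np.array([H, S, W])

measured = forward(70e9, 2.0e9, np.radians(15))   # synthetic "experiment"

def residual(x):
    return forward(*x) / measured - 1.0           # relative residuals

sol = least_squares(residual, x0=[100e9, 1.0e9, np.radians(5)],
                    bounds=([1e9, 1e8, 0.0], [500e9, 1e10, np.radians(40)]),
                    x_scale=[1e10, 1e9, 0.1])
print("recovered E, Y, phi(deg):",
      sol.x[0], sol.x[1], np.degrees(sol.x[2]))
```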
Abstract:
The Monte Carlo (MC) method can accurately compute the dose produced by medical linear accelerators. However, these calculations require a reliable description of the electron and/or photon beams delivering the dose, the phase space (PHSP), which is not usually available. A method is presented to derive a phase-space model from reference measurements that does not rely heavily on a detailed model of the accelerator head. The iterative optimization process extracts the characteristics of the particle beams that best explain the reference dose measurements in water and air, given a set of constraints.
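The iterative optimization can be sketched as a least-squares fit of beam parameters to a reference dose profile; the Gaussian kernel and parameters below are placeholders, not a real accelerator-head or phase-space model.

```python
# Minimal sketch of fitting beam parameters to reference dose data
# (toy Gaussian lateral-profile kernel, not a real linac model).
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-10, 10, 81)                       # off-axis position, cm
reference_profile = np.exp(-x**2 / (2 * 3.0**2))   # "measured" profile (toy)

def model_profile(params):
    width, flatness = params
    return (1 + flatness * (x / 10) ** 2) * np.exp(-x**2 / (2 * width**2))

def chi2(params):
    return np.sum((model_profile(params) - reference_profile) ** 2)

fit = minimize(chi2, x0=[2.0, 0.0], bounds=[(0.5, 8.0), (-0.5, 0.5)])
print("fitted beam parameters:", fit.x)
```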
Abstract:
The determination of the plasma potential V_pl of unmagnetized plasmas using the floating potential of emissive Langmuir probes operated in the strong-emission regime is investigated. The experiments show that, in most cases, the electron thermionic emission is orders of magnitude larger than the plasma thermal electron current. The temperature-dependent floating potentials of negatively biased (V_p < V_pl) emissive probes are in agreement with the predictions of a simple phenomenological model that considers, in addition to the plasma electrons, an additional electron group that contributes to the probe current. The latter would be constituted by a fraction of the repelled electron thermionic current, which might return to the probe with a different energy spectrum. Its origin would be a plasma potential well formed in the plasma sheath around the probe, acting as a virtual cathode, or collisions and electron thermalization processes. These results suggest that, for probe bias voltages close to the plasma potential (V_p ≈ V_pl), two electron populations coexist, i.e., the electrons from the plasma with temperature T_e and a large group of returned thermionic electrons. These results question the theoretical possibility of measuring the electron temperature using emissive probes biased to potentials V_p ≲ V_pl.
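A minimal numeric sketch of the floating-potential balance implied by such a model: collected plasma electrons plus a returned-thermionic group balance the emitted current (ion current neglected). The saturation currents, temperatures and returned fraction are illustrative, not the paper's calibrated values.

```python
# Hedged sketch of a floating-potential balance for a strongly emitting
# probe with a returned-thermionic electron group (illustrative numbers).
import numpy as np
from scipy.optimize import brentq

Te, Tw = 3.0, 0.25        # plasma-electron and emitted-electron temps (eV)
Vpl = 0.0                 # plasma potential, taken as reference
I_e0, I_em0, frac = 1.0, 100.0, 0.995  # saturation currents; returned fraction

def net_current(V):
    dV = V - Vpl                                  # probe bias below V_pl
    plasma_e = I_e0 * np.exp(dV / Te)             # collected plasma electrons
    returned = frac * I_em0 * np.exp(dV / Tw)     # returned thermionic group
    emitted = I_em0                               # strong emission regime
    return plasma_e + returned - emitted          # ion current neglected

V_float = brentq(net_current, -5.0, 0.0)
print(f"floating potential relative to V_pl: {V_float:.4f} V")
```

With these numbers the floating potential locks just below V_pl and depends on the emitted-electron temperature, consistent with the temperature-dependent floating potentials described in the abstract.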
Abstract:
In this work, a new methodology is devised to obtain the fracture properties of nuclear fuel cladding in the hoop direction. The proposed method combines ring compression tests and a finite element method that includes a damage model based on cohesive crack theory, applied to unirradiated hydrogen-charged ZIRLO™ nuclear fuel cladding. Samples with hydrogen concentrations from 0 to 2000 ppm were tested at 20 °C. Agreement between the finite element simulations and the experimental results is excellent in all cases. The parameters of the cohesive crack model are obtained from the simulations, with the fracture energy and fracture toughness being calculated in turn. The evolution of fracture toughness in the hoop direction with the hydrogen concentration (up to 2000 ppm) is reported for the first time for ZIRLO™ cladding. Additionally, the fracture micromechanisms are examined as a function of the hydrogen concentration. In the as-received samples, the micromechanism is the nucleation, growth and coalescence of voids, whereas in the samples with 2000 ppm a combination of quasi-cleavage and plastic deformation, along with secondary microcracking, is observed.
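In cohesive crack theory, the fracture energy is the area under the traction-separation curve, and a toughness estimate follows from Irwin's relation. A minimal sketch with an illustrative linear-softening law and approximate zirconium-alloy elastic constants (values assumed, not the paper's fitted parameters):

```python
# Fracture energy as the area under a cohesive traction-separation law,
# and toughness via Irwin's relation (illustrative law and values).
import numpy as np

sigma_c = 700e6            # cohesive strength, Pa (illustrative)
delta_c = 20e-6            # critical opening, m (illustrative)
delta = np.linspace(0.0, delta_c, 1000)
traction = sigma_c * (1 - delta / delta_c)   # linear softening

G_f = np.trapz(traction, delta)              # fracture energy, J/m^2
E, nu = 99e9, 0.37                           # approx. Zr-alloy constants
K_Ic = np.sqrt(G_f * E / (1 - nu**2))        # plane-strain toughness
print(f"G_f = {G_f:.0f} J/m^2, K_Ic = {K_Ic / 1e6:.1f} MPa*sqrt(m)")
```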
Abstract:
The study of atmospheric behaviour has been of particular importance in both the SESAR and NextGen programs, in which the current air traffic management (ATM) system is undergoing a profound transformation toward new paradigms, in Europe and the USA respectively, to guide and track aircraft more precisely on more efficient routes. Uncertainty is a fundamental characteristic of weather phenomena that is transferred to separation assurance, flight-path de-confliction and flight-planning applications. In this respect, wind is a key factor in predicting the future position of an aircraft, so a deeper and more accurate knowledge of the wind field will reduce ATC uncertainties. The purpose of this thesis is to develop a new and operationally useful technique to provide adequate and direct real-time atmospheric wind fields based on on-board aircraft data, in order to improve aircraft trajectory prediction. To achieve this objective, the following work has been accomplished. The different aircraft systems that provide the variables needed to derive the wind velocity have been described and analysed, as well as the capabilities that allow this information to be presented for air traffic management applications. The use of aircraft as wind sensors in a terminal area for real-time wind estimation has been explored in order to improve aircraft trajectory prediction. Computationally efficient methods have been developed to estimate the horizontal wind components from aircraft velocities (VGS, VCAS/VTAS), pressure and temperature data. These wind data were used to estimate a real-time wind field through a data-processing approach based on a minimum-variance method. Finally, the accuracy of this procedure has been evaluated so that the information can be useful to air traffic control.
The initial information comes from a sample of Flight Data Recorder (FDR) data from aircraft that landed at Madrid-Barajas Airport. Data from certain aircraft over a period of more than three months were used to derive the wind vector at each point of the airspace. A mathematical model based on different interpolation methods was used to obtain wind vectors in areas without available data. Three specific scenarios were used to validate two interpolation methods: a two-dimensional one that treats the two horizontal components independently, and a complex-variable formulation that links both components. These methods were tested in different scenarios with dissimilar results. The methodology has been implemented in a prototype MATLAB tool that automatically analyses FDR data and determines the wind vector field that aircraft encounter when flying in the airspace under study. Finally, the required conditions and the accuracy of the results were derived for this model.
The method developed could be fed by commercial aircraft using their currently available data sources and computational capabilities, providing the data to ATM systems where the proposed method could be run. The computed wind velocities, or the ground speed and true airspeed, could then be broadcast, for example, via the Aircraft Communications Addressing and Reporting System (ACARS), ADS-B Out messages, or Mode S. This new source would help update the wind information furnished in meteorological aeronautical products (PAM), airmen's meteorological information (AIRMET) and significant meteorological information (SIGMET).
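At its core, the wind extraction described above is the vector difference between the aircraft's ground-velocity vector (from ground speed and track) and its true-airspeed vector (from true airspeed and heading). A minimal sketch of this standard kinematic relation (function and variable names are illustrative):

```python
# Wind as the vector difference between ground velocity and true airspeed
# (standard kinematic relation; names and units are illustrative).
import numpy as np

def wind_components(v_gs, track_deg, v_tas, heading_deg):
    """Returns (east, north) wind components in the input speed units.
    Track and heading are measured in degrees clockwise from north."""
    trk, hdg = np.radians(track_deg), np.radians(heading_deg)
    ground = np.array([v_gs * np.sin(trk), v_gs * np.cos(trk)])
    air = np.array([v_tas * np.sin(hdg), v_tas * np.cos(hdg)])
    return ground - air

u, v = wind_components(v_gs=250.0, track_deg=95.0,
                       v_tas=230.0, heading_deg=90.0)
print(f"wind: u = {u:.1f} kt (east), v = {v:.1f} kt (north)")
```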
Abstract:
Models for prediction of oil content as percentage of dried weight in olive fruits were computed through PLS regression on NIR spectra. Spectral preprocessing was carried out by applying multiplicative signal correction (MSC), the Savitzky–Golay algorithm, standard normal variate correction (SNV), and detrending (D) to the NIR spectra. MSC was the preprocessing technique showing the best performance. Further reduction of variability was performed by applying the Wold method of orthogonal signal correction (OSC). The calibration model achieved an R² of 0.93, a SEPc of 1.42, and an RPD of 3.8. The R² obtained with the validation set remained 0.93, and the SEPc was 1.41.
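A hedged sketch of an MSC + PLS pipeline of the kind described, using scikit-learn and synthetic placeholder spectra; the MSC implementation here regresses each spectrum against the mean spectrum, and all data and figures of merit are illustrative.

```python
# Minimal MSC + PLS pipeline (synthetic placeholder spectra; MSC done
# against the mean spectrum). Illustrative, not the article's data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.standard_normal((120, 200)).cumsum(axis=1)   # fake NIR spectra
y = X[:, 90] * 0.05 + rng.normal(0, 0.2, 120)        # fake oil content

def msc(spectra):
    ref = spectra.mean(axis=0)
    out = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        b, a = np.polyfit(ref, s, 1)                 # s ~ a + b * ref
        out[i] = (s - a) / b
    return out

Xc = msc(X)
X_tr, X_te, y_tr, y_te = train_test_split(Xc, y, random_state=0)
pls = PLSRegression(n_components=8).fit(X_tr, y_tr)
resid = y_te - pls.predict(X_te).ravel()
sep = resid.std(ddof=1)
print(f"SEP = {sep:.2f}, RPD = {y_te.std(ddof=1) / sep:.2f}")
```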
Abstract:
The purpose of this paper is to show the importance of observing the cultural systems present in a territory as a reference for the design of urban infrastructure in new cities and regions undergoing rapid development. If we accept the idea that architecture is an instrument or cultural system developed by man to mediate with the environment, it is necessary to understand the elemental interaction between man and his environment to achieve a satisfactory design. To illustrate this, we present the case of the Eurasian Mediterranean region, where the architectural culture acts as a cultural system of adaptation to the environment, formed by an ancient process of selection. From simple observation of the architectural types, construction systems and environmental mechanisms treasured in the Mediterranean historical heritage, we can extract crucial information about this elemental interaction. Mediterranean architectural culture has environmental mechanisms responding to the needs of basic habitability, ethnic identity and passive conditioning. These mechanisms can be the basis of an innovative design that does not compromise the diversity and lifestyles of the human groups in the region. The main foundation of our investigation is the identification of the historical heritage of domestic architecture as the holder of the formation process of these mechanisms. The results allow us to affirm that the successful introduction of new urban infrastructure in an area needs a reliable reference, and that reference must be a cultural system that embodies, in essence, the environmental conditioning of human existence. Urban infrastructure must be sustainable, understood and accepted by its inhabitants. The last condition is all the more important when urban infrastructure is implemented in areas that are developing rapidly or where there is no architectural culture.
Abstract:
Since the memristor was first built in 2008 at HP Labs, countless devices and models have been presented, and new applications appear frequently. However, integrating the device at the circuit level is not straightforward, because the available models are still immature and/or impose high computational loads, making their simulation long and cumbersome. This study assists circuit and system designers in integrating memristors into their applications, while aiding model developers in validating their proposals. We introduce a memristor application framework to support the work of both the model developer and the circuit designer. First, the framework includes a library with the best-known memristor models, easily extensible with upcoming models; systematic modifications have been applied to these models to provide better convergence and significant simulation speedups. Second, a quick device simulator allows the study of the response of the models under different scenarios, helping the designer with stimulus and operation-time selection. Third, fine-tuning of the device, including parameter variations and threshold determination, is also supported. Finally, SPICE/Spectre subcircuit generation is provided to ease the integration of the devices in application circuits. The framework gives the designer full control over convergence, computational load, and the evolution of system variables, overcoming the usual problems in the integration of memristive devices.
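As an example of the kind of model such a framework hosts, the sketch below integrates the classic HP linear ion-drift memristor model under a sinusoidal drive; this is the standard textbook form, with illustrative parameter values not taken from the study.

```python
# HP linear ion-drift memristor model, a common baseline in memristor
# simulation work (textbook form; parameter values are illustrative).
import numpy as np

D, R_on, R_off, mu_v = 10e-9, 100.0, 16e3, 1e-14  # m, ohm, ohm, m^2/(V*s)
dt = 1e-5
t = np.arange(0.0, 2.0, dt)
v = np.sin(2 * np.pi * 1.0 * t)                   # 1 V, 1 Hz sine drive

w = 0.5 * D                                       # state: doped-region width
current = np.empty_like(t)
for k, vk in enumerate(v):
    M = R_on * (w / D) + R_off * (1.0 - w / D)    # memristance
    i = vk / M
    w = np.clip(w + mu_v * (R_on / D) * i * dt, 0.0, D)  # linear ion drift
    current[k] = i

print(f"current range: {current.min():.2e} .. {current.max():.2e} A")
```

Plotting `current` against `v` would show the pinched hysteresis loop characteristic of memristive devices, which is what a quick device simulator of the kind described above lets a designer inspect before circuit integration.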
Abstract:
The exhaustion, absence or simply the uncertainty about the size of fossil-fuel reserves, added to the variability of prices and the increasing instability of the supply chain, are strong incentives for the development of alternative energy sources and carriers. The attractiveness of hydrogen as an energy carrier is very high in a context that also includes strong public concern about pollution and greenhouse-gas emissions. Given its excellent environmental impact, the public acceptance of the new energy carrier will depend on controlling the risks associated with its handling and storage. Among these, the danger of a severe explosion appears as the major drawback of this alternative fuel.
This thesis investigates the numerical modeling of large-scale explosions, focusing on the simulation of turbulent combustion in large computational domains where the achievable resolution is strongly limited. The introduction gives a general description of explosion processes and concludes that the restrictions on resolution make it necessary to model the turbulence and combustion processes. Subsequently, a critical review of the available methodologies for both turbulence and combustion is carried out, pointing out the strengths, deficiencies and suitability of each. This review concludes that the only viable strategy for combustion modeling, given the existing limitations, is to use an expression for the turbulent burning velocity as a function of various parameters to close a balance equation for the combustion progress variable; such models are known as turbulent flame speed models. It also concludes that, depending on the resolution restrictions and geometry of each particular problem, the most adequate solution for modeling the turbulence is to use different simulation methodologies, LES or RANS, as appropriate.
Based on these findings, a combustion model is created within the turbulent flame speed framework that overcomes the deficiencies of the available models for problems requiring calculations at moderate or low resolution. In particular, the model uses a heuristic algorithm to keep the flame-brush thickness under control, a serious deficiency of the well-known Zimont model. Under this approach, the emphasis of the analysis lies in the accurate determination of the burning velocity, both laminar and turbulent. The laminar burning velocity is determined through a newly developed correlation able to describe the simultaneous influence of the equivalence ratio, temperature, pressure and steam dilution; the formulation obtained is valid for a wider domain of temperature, pressure and steam dilution than any previously available formulation. For the turbulent burning velocity, a number of correlations available in the literature were compared against experiments and ranked, with the outcome that the formulation due to Schmidt was the most adequate for the conditions studied.
Subsequently, the role of flame instabilities in the propagation of combustion fronts is assessed. Their significance is important for lean mixtures in which the turbulence intensity remains moderate, conditions that are typical of accidents in nuclear power plants. Therefore, a model is created to estimate the effect of instabilities, specifically the acoustic-parametric instability, on the flame propagation velocity. This includes the mathematical derivation of the heuristic formulation of Bauwens et al. for the burning-velocity enhancement due to flame instabilities, as well as an analysis of the stability of flames with respect to a cyclic velocity perturbation; the results are combined to build a model of the acoustic-parametric instability.
The model developed was then applied to several problems of significance for industrial safety, and the results were analysed and compared with the corresponding experimental data. Specifically, explosions in tunnels and in large containers were simulated, with and without concentration gradients and venting. As a general outcome, the model is validated, confirming its suitability for these problems. As a final task, a thorough study of the Fukushima-Daiichi catastrophe was carried out, aiming to determine the amount of hydrogen that exploded in reactor one, in contrast with other studies that focused on the amount of hydrogen generated during the accident. The research determined that the most probable amount of hydrogen consumed during the explosion was 130 kg. It is remarkable that the combustion of such a relatively small quantity of hydrogen can cause such significant damage, an indication of the importance of this type of investigation. The industrial branches that can benefit from the model developed in this thesis span the whole future hydrogen economy (fuel cells, vehicles, energy storage, etc.), with particular impact on the transport sector and on nuclear safety for both fission and fusion technologies.
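For reference, a turbulent flame speed closure of the kind described closes a transport equation for the Favre-averaged progress variable with a source term proportional to the turbulent burning velocity. A generic Zimont-type form (not necessarily the exact equation used in the thesis) is:

```latex
% Generic Zimont-type turbulent flame speed closure: transport of the
% Favre-averaged progress variable c with source rho_u * S_T * |grad c|.
\frac{\partial (\bar{\rho}\,\tilde{c})}{\partial t}
  + \nabla \cdot \left(\bar{\rho}\,\tilde{\mathbf{u}}\,\tilde{c}\right)
  = \nabla \cdot \left(\frac{\mu_t}{\mathrm{Sc}_t}\,\nabla \tilde{c}\right)
  + \rho_u\, S_T\, \lvert \nabla \tilde{c} \rvert
```

Here \(\tilde{c}\) is the progress variable, \(\rho_u\) the unburnt density, \(\mu_t/\mathrm{Sc}_t\) a turbulent diffusivity, and \(S_T\) the turbulent burning velocity supplied by a correlation such as Schmidt's; keeping the flame brush from thickening indefinitely is exactly the difficulty the heuristic algorithm mentioned above addresses.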
Abstract:
The annual energy conversion efficiency is calculated for a four-junction inverted metamorphic solar cell that has been completely characterized in the laboratory at room temperature using measurements fit to a comprehensive optoelectronic model of multijunction solar cells. A simple model of the temperature dependence is used to predict the performance of the solar cell under the varying temperature and spectra characteristic of Golden, CO for an entire year. The annual energy conversion efficiency is calculated by integrating the predicted cell performance over the entire year. The effects of geometric concentration, CPV system thermal characteristics, and luminescent coupling are highlighted.
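The annual integration can be sketched as an hourly sum of predicted output over predicted input; the efficiency value, linear temperature model and synthetic irradiance below are toy assumptions, not the characterized cell or the Golden, CO weather data.

```python
# Sketch of an annual-energy-efficiency integration from hourly predictions
# (toy efficiency/temperature model and synthetic irradiance).
import numpy as np

hours = np.arange(8760)
dni = 900 * np.sin(np.pi * (hours % 24) / 24) ** 2   # W/m^2, toy daily cycle
T_cell = 25 + 0.03 * dni                             # simple thermal model, C

eta_25 = 0.42                # efficiency at 25 C (illustrative)
temp_coeff = -0.0008         # relative efficiency slope, 1/K (illustrative)
eta = eta_25 * (1 + temp_coeff * (T_cell - 25))

energy_out = np.sum(eta * dni)                       # Wh per m^2 aperture
energy_in = np.sum(dni)
print(f"annual energy conversion efficiency: {energy_out / energy_in:.3f}")
```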
Abstract:
The refractive index and extinction coefficient of chemical vapour deposition grown graphene are determined by ellipsometry analysis. Graphene films were grown on copper substrates and transferred as both monolayers and bilayers onto SiO2/Si substrates by using standard manufacturing procedures. The chemical nature and thickness of residual debris formed after the transfer process were elucidated using photoelectron spectroscopy. The real layered structure so deduced has been used instead of the nominal one as the input in the ellipsometry analysis of monolayer and bilayer graphene, transferred onto both native and thermal silicon oxide. The effect of these contamination layers on the optical properties of the stacked structure is noticeable both in the visible and the ultraviolet spectral regions, thus masking the graphene optical response. Finally, the use of heat treatment under a nitrogen atmosphere of the graphene-based stacked structures, as a method to reduce the water content of the sample, and its effect on the optical response of both graphene and the residual debris layer are presented. The Lorentz-Drude model proposed for the optical response of graphene fits the experimental ellipsometric data fairly well for all the analysed graphene-based stacked structures.
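For reference, a generic Lorentz-Drude dielectric function and the optical constants derived from it can be sketched as follows; the oscillator parameters are illustrative placeholders, not the fitted values of the study.

```python
# Generic Lorentz-Drude dielectric function and derived n, k
# (illustrative parameters, not the study's fitted values).
import numpy as np

def lorentz_drude(E, eps_inf, Ep, gamma_d, oscillators):
    """E in eV; oscillators = [(f_j, E_j, gamma_j), ...]."""
    eps = eps_inf - Ep**2 / (E**2 + 1j * gamma_d * E)      # Drude term
    for f, E0, g in oscillators:
        eps += f * E0**2 / (E0**2 - E**2 - 1j * g * E)     # Lorentz terms
    return eps

E = np.linspace(0.5, 6.0, 500)                             # photon energy, eV
eps = lorentz_drude(E, eps_inf=1.0, Ep=2.0, gamma_d=0.2,
                    oscillators=[(1.5, 4.6, 1.0)])         # UV peak (assumed)
n_complex = np.sqrt(eps)                                   # n + i*k
n, k = n_complex.real, n_complex.imag
print(f"n, k at 2 eV: {np.interp(2.0, E, n):.2f}, {np.interp(2.0, E, k):.2f}")
```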
Abstract:
The energy spectrum of the confined states of a quantum dot intermediate band (IB) solar cell is calculated with a simplified model. Two peaks are usually visible at the lowest energy side of the subbandgap quantum-efficiency spectrum in these solar cells. They can be attributed to photon absorption between well-defined states. As a consequence, the horizontal size of the quantum dots can be determined, and the conduction (valence) band offset is also determined if the valence (conduction) offset is known.
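As a rough illustration of how a level spacing constrains the dot size, a one-dimensional particle-in-a-box estimate can be inverted for the box length; the effective mass and peak spacing below are assumed values for illustration, not taken from the paper.

```python
# Back-of-envelope link between QD lateral size and confined-level spacing
# using a 1D particle-in-a-box estimate (illustrative values).
import numpy as np

hbar = 1.054571817e-34     # J*s
m0 = 9.1093837015e-31      # kg
m_eff = 0.067 * m0         # electron effective mass (GaAs-like, assumed)
eV = 1.602176634e-19

dE = 0.070 * eV            # spacing of the two lowest QE peaks (assumed)
# Box levels: E_n = (hbar*pi*n)^2 / (2 m L^2), so E_2 - E_1 = 3 * E_1.
E1 = dE / 3
L = hbar * np.pi / np.sqrt(2 * m_eff * E1)
print(f"estimated lateral dot size: {L * 1e9:.1f} nm")
```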