17 results for Mixed model under selection

at Universidad Politécnica de Madrid


Relevance: 100.00%

Abstract:

The influence of climate on forest stand composition, development and growth is undeniable. Many studies have tried to quantify the effect of climatic variables on forest growth and yield. Such work becomes especially important given the need to predict the effects of climate change on the development of forest ecosystems. One way of approaching this problem is to include climatic variables in classic empirical growth models. The work has a double objective: (i) to identify the indicators which best describe the effect of climate on Pinus halepensis growth and (ii) to quantify that effect under several scenarios of rainfall decrease which are likely to occur in the Mediterranean area. A growth mixed model for P. halepensis including climatic variables is presented. Growth estimates are based on data from the Spanish National Forest Inventory (SNFI). The best results are obtained for the indices including rainfall, or rainfall and temperature together, with annual precipitation, precipitation effectiveness, Emberger's index and free bioclimatic intensity standing out among them. The final model includes Emberger's index, free bioclimatic intensity and interactions between competition and climate indices. The results show that a rainfall decrease of about 5% leads to a decrease in volume growth of 5.5-7.5%, depending on site quality.
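
The abstract above does not give the fitted equation, but the qualitative behaviour (a climate-index term inside a log-linear growth model, and the sensitivity of growth to a rainfall decrease) can be sketched in Python. All coefficients below are illustrative placeholders, not the study's fitted values:

```python
import math

# Hedged sketch: a log-linear stand growth model with a climate index,
# in the spirit of (but not identical to) the paper's mixed model.
B0, B_SITE, B_CLIM = -1.2, 0.8, 0.55   # hypothetical fixed effects

def volume_growth(site_index, climate_index):
    """Predicted annual volume growth under a log-linear model form."""
    return math.exp(B0 + B_SITE * math.log(site_index)
                    + B_CLIM * math.log(climate_index))

def growth_loss_pct(site_index, climate_index, rainfall_drop=0.05):
    """Relative growth loss (%) when the rainfall-driven climate index
    falls by the fraction `rainfall_drop`."""
    g0 = volume_growth(site_index, climate_index)
    g1 = volume_growth(site_index, climate_index * (1.0 - rainfall_drop))
    return 100.0 * (g0 - g1) / g0

# With a log-log climate term, a 5% index drop costs roughly B_CLIM * 5%.
loss = growth_loss_pct(site_index=20.0, climate_index=60.0)
```

The log-log form makes the growth loss approximately proportional to the climate elasticity, which is why the paper can report a near-linear 5.5-7.5% response band across site qualities.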

Relevance: 100.00%

Abstract:

Natural regeneration in Pinus pinea stands commonly fails throughout the Spanish Northern Plateau under current intensive regeneration treatments. As a result, extensive direct seeding is commonly conducted to guarantee regeneration. In a period of rationalization of the resources devoted to forest management, such techniques may become unaffordable. Given that the climatic and stand factors driving germination remain unknown, tools are required to understand the process and temper the use of direct seeding. In this study, the spatio-temporal pattern of germination of P. pinea was modelled with those purposes. The resulting findings will allow us to (1) determine the main ecological variables involved in germination in this species and (2) infer adequate silvicultural alternatives. The modelling approach focuses on covariates which are readily available to forest managers. A two-step nonlinear mixed model was fitted to predict germination occurrence and abundance in P. pinea under varying climatic, environmental and stand conditions, based on a germination data set covering a 5-year period. The results reveal that the process is primarily driven by climate variables. Favourable conditions for germination commonly occur in fall, although the optimum window is often narrow and may not occur at all in some years. At the spatial level, germination appears to be facilitated by high stand densities, suggesting that current felling intensity should be reduced. In accordance with other studies on P. pinea dispersal, denser stands during the regeneration period should reduce the present dependence on direct seeding.
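
The two-step structure described above is a hurdle-type model: one sub-model for whether germination occurs at all, and a second for how abundant it is when it does. The Python sketch below shows that structure only; the covariates and coefficients are hypothetical placeholders, not the fitted P. pinea model:

```python
import math

# Hedged sketch of a two-step (hurdle-type) germination model.
# Covariate names and coefficients are illustrative assumptions.

def p_occurrence(autumn_rain_mm, stand_density):
    """Step 1: logistic sub-model for the probability that any
    germination occurs in the plot."""
    z = -4.0 + 0.02 * autumn_rain_mm + 0.003 * stand_density
    return 1.0 / (1.0 + math.exp(-z))

def expected_abundance(autumn_rain_mm, stand_density):
    """Step 2: expected germinant count, conditional on occurrence."""
    return math.exp(0.5 + 0.004 * autumn_rain_mm + 0.001 * stand_density)

def expected_germinants(autumn_rain_mm, stand_density):
    """Unconditional expectation = P(occurrence) * E[count | occurrence]."""
    return (p_occurrence(autumn_rain_mm, stand_density)
            * expected_abundance(autumn_rain_mm, stand_density))
```

With positive density coefficients, as the abstract's findings suggest, denser stands raise both the occurrence probability and the conditional abundance.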

Relevance: 100.00%

Abstract:

The exhaustion, absence, or simple uncertainty about the size of fossil fuel reserves, added to the variability of their prices and the increasing instability of the supply chain, are strong incentives for the development of alternative energy sources and carriers. The attractiveness of hydrogen as an energy carrier is very high in a context that also includes strong public concern about pollution and greenhouse gas emissions. Given its excellent environmental impact, public acceptance of the new energy carrier will depend on controlling the risks associated with its handling and storage. Among these, the danger of a severe explosion appears as the major drawback of this alternative fuel. This thesis investigates the numerical modelling of large-scale explosions, focusing on the simulation of turbulent combustion in large computational domains where the achievable resolution is severely limited. The introduction gives a general description of explosion processes and concludes that the restrictions on resolution make it necessary to model both the turbulence and the combustion processes. Subsequently, a critical review of the available methodologies for turbulence and combustion is carried out, pointing out the strengths, deficiencies and suitability of each. The conclusion of this review is that the only viable strategy for combustion modelling under the existing constraints is the use of an expression for the turbulent burning velocity as a function of several parameters; models of this kind, known as turbulent flame speed models, close a balance equation for the combustion progress variable. It is also concluded that the most adequate solution for the turbulence is to use different simulation methodologies, LES or RANS, depending on the geometry and the resolution restrictions of each particular problem.

On the basis of these findings, a combustion model is created within the turbulent flame speed framework, able to overcome the deficiencies of the available models in problems that must be computed at moderate or low resolution. In particular, the model uses a heuristic algorithm to keep the thickness of the flame brush under control, a serious deficiency of the well-known Zimont model. Under this approach, the emphasis of the analysis lies on the accurate determination of the burning velocity, both laminar and turbulent. The laminar burning velocity is determined through a newly developed correlation that accounts for the simultaneous influence on it of the equivalence ratio, temperature, pressure and dilution with steam; the resulting formulation is valid over a wider range of temperature, pressure and steam dilution than any previously available formulation. The turbulent burning velocity, in turn, can be computed from correlations that express it as a function of several parameters; to select the most suitable one, the results of various expressions were compared against experiments, with the outcome that the formulation due to Schmidt is the most adequate for the conditions studied. Next, the role of flame instabilities in the propagation of combustion fronts is assessed. Their relevance is significant for fuel-lean mixtures in which the turbulence intensity remains moderate, conditions that are typical of accidents in nuclear power plants. A model is therefore developed to estimate the effect of instabilities, specifically the acoustic-parametric instability, on the flame propagation speed. The modelling includes the mathematical derivation of the heuristic formulation of Bauwebs et al. for the burning velocity enhancement due to flame instabilities, as well as an analysis of the stability of flames with respect to a cyclic velocity perturbation; these results are combined to complete the model of the acoustic-parametric instability.

The model is then applied to several problems of importance for industrial safety, namely explosions in tunnels and in large containers, with and without concentration gradients and venting, and the results are compared with the corresponding experimental data. As a general outcome, the model is validated, confirming its suitability for these problems. As a final task, an in-depth analysis of the Fukushima-Daiichi catastrophe is carried out, aimed at determining the amount of hydrogen that exploded in reactor one, in contrast with other studies that have focused on the amount of hydrogen generated during the accident. The research concludes that the most probable amount of hydrogen consumed during the explosion was 130 kg. That the combustion of such a relatively small quantity of hydrogen can cause so much damage is an indication of the importance of this type of investigation. The industrial branches for which the developed model is of interest span the whole future hydrogen economy (fuel cells, vehicles, energy storage, etc.), with particular impact on the transport sector and on nuclear safety, in both fission and fusion technologies.
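
The closure family discussed above can be illustrated in Python. The power-law turbulent-flame-speed form and the pressure/steam dependence of the laminar burning velocity below are generic shapes with placeholder constants; they are neither Schmidt's correlation nor the thesis's fitted formulation:

```python
import math

# Hedged sketch of a turbulent-flame-speed closure: the progress-variable
# equation is closed algebraically with s_T = f(s_L, u'). Generic form,
# illustrative constants.

def turbulent_burning_velocity(s_l, u_prime, a=1.0, n=0.7):
    """Generic power-law closure: s_T = s_L * (1 + a * (u'/s_L)^n)."""
    return s_l * (1.0 + a * (u_prime / s_l) ** n)

def laminar_burning_velocity(s_l0, p, p0=1.0e5, x_h2o=0.0, beta=-0.5, k=2.5):
    """Toy multi-parameter shape: s_L falls with pressure (power law)
    and with steam dilution (exponential). Constants are assumptions."""
    return s_l0 * (p / p0) ** beta * math.exp(-k * x_h2o)
```

The point of the sketch is the structure: an inexpensive algebraic s_T closure is what makes combustion tractable at the coarse resolutions the thesis targets, with all physics pushed into the laminar-velocity correlation and the closure constants.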

Relevance: 100.00%

Abstract:

Short-run forecasting of electricity prices has become necessary for power generation unit scheduling, since it is the basis of every profit maximization strategy. In this article a new and very simple method to compute accurate forecasts for electricity prices using mixed models is proposed. The main idea is to develop an efficient tool for one-step-ahead forecasting, combining several prediction methods whose forecasting performance has been checked and compared over a span of several years. As a further novelty, the 24 hourly time series have been modelled separately, instead of the complete price series, which takes advantage of the homogeneity of these 24 series. The purpose of this paper is to select the model that leads to smaller prediction errors and to obtain the appropriate length of time to use for forecasting. These results have been obtained by means of a computational experiment. A mixed model which combines the advantages of the two new models discussed is proposed. Some numerical results for the Spanish market are shown, but this new methodology can be applied to other electricity markets as well.
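
The per-hour decomposition can be sketched in a few lines of Python. The two component predictors used here (last observed value and a window mean) and the mixing weight are toy stand-ins for the paper's component models; only the structure — 24 separate one-step-ahead forecasts, then a combination — follows the text:

```python
# Hedged sketch: forecast each of the 24 hourly sub-series separately,
# mixing two simple component predictors per series.

def forecast_next_day(price_history, window=7, w=0.6):
    """price_history: list of past days, each a list of 24 hourly prices.
    Returns a 24-vector of one-step-ahead forecasts, one per hourly series."""
    forecasts = []
    for hour in range(24):
        series = [day[hour] for day in price_history]   # hourly sub-series
        naive = series[-1]                              # component model 1
        n = min(window, len(series))
        mean_w = sum(series[-n:]) / n                   # component model 2
        forecasts.append(w * naive + (1.0 - w) * mean_w)  # mixed forecast
    return forecasts
```

Treating each hour as its own homogeneous series is what lets such simple per-series predictors stay competitive, which is the abstract's motivation for the decomposition.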

Relevance: 100.00%

Abstract:

Emotion is generally argued to influence the behavior of living systems, largely in connection with flexibility and adaptivity. The way in which living systems act in response to particular situations of the environment has revealed the decisive importance of this feature in the success of behaviors, and this source of inspiration has influenced the way artificial systems are conceived. During the last decades, artificial systems have undergone such an evolution that every day more of them are integrated into our daily life. They have grown in complexity, and the subsequent effect is an increased demand for systems that ensure resilience, robustness, availability, security and safety, among others; all of these raise fundamental challenges in control design. This thesis has been developed within the framework of the Autonomous Systems project, a.k.a. the ASys-Project. Short-term objectives of immediate application focus on designing improved systems and bringing intelligence into control strategies. Beyond this, the long-term objectives underlying the ASys-Project concentrate on higher-order capabilities such as cognition, awareness and autonomy. This thesis is placed within the general fields of Engineering and Emotion science, and provides a theoretical foundation for engineering and designing computational emotion for artificial systems. The starting question that grounds this thesis addresses the problem of emotion-based autonomy, and how to feed systems back with valuable meaning forms the general objective. Both the starting question and the general objective have underlain the study of emotion: its influence on system behavior, the key foundations that justify this feature in living systems, how emotion is integrated within normal operation, and how this entire problem of emotion can be explained in artificial systems.
By assuming essential differences in structure, purpose and operation between living and artificial systems, the essential motivation has been to explore what emotion solves in nature and then to analyze analogies for man-made systems. This work provides a reference model in which a collection of entities, relationships, models, functions and informational artifacts interact to provide the system with non-explicit knowledge in the form of emotion-like relevances. The solution aims to provide a reference model under which to design solutions for emotional operation, related to the real needs of artificial systems. The proposal consists of a multi-purpose architecture implementing two broad modules that attend to: (a) the range of processes related to how the environment affects the system, and (b) the range of processes related to emotion-like perception and the higher levels of reasoning. This has required an intense and critical analysis, beyond the state of the art, of the most relevant theories of emotion and of technical systems, in order to obtain the required support for the foundations that sustain each model. The problem has been interpreted and is described on the basis of AGSys, an agent assumed to have the minimum rationality needed to perform emotional assessment. AGSys is a conceptualization of a model-based cognitive agent that embodies an inner agent, ESys, responsible for performing the emotional operation inside AGSys. The solution consists of multiple computational modules working in federation, aimed at forming a mutual feedback loop between AGSys and ESys. Throughout this solution, the environment and the effects that might influence the system are described as different problems. While AGSys operates as a common system within the external environment, ESys is designed to operate within a conceptualized inner environment.
This inner environment is built on the basis of the relevances that might occur inside AGSys in its interaction with the external environment. This allows separate, high-quality reasoning about the mission goals defined in AGSys and the emotional goals defined in ESys, and thereby provides a possible path for high-level reasoning under the influence of goal congruence. The high-level reasoning model uses knowledge about the stability of the emotional goals, opening new directions in which mission goals might be assessed under the situational state of this stability. This high-level reasoning is grounded in the work of MEP, a model of emotion perception conceived as an analogy of a well-known theory in emotion science. The operation of this model is described through a recursive process labeled the R-Loop, together with a system of emotional goals that are treated as individual agents. In this way, AGSys integrates knowledge about the relation between a perceived object and the effect this perception induces on the situational state of the emotional goals. This knowledge enables a higher-order system of information that sustains high-level reasoning; the extent to which such reasoning might be taken further is only delineated here and left as future work. This thesis has drawn on a wide range of fields of knowledge, structured around two main objectives: (a) psychology, cognitive science, neurology and the biological sciences, in order to understand the problem of emotional phenomena, and (b) a large number of computer science branches, such as Autonomic Computing (AC), self-adaptive software, self-X systems, Model Integrated Computing (MIC) and the models@runtime paradigm, among others, in order to obtain tools for designing each part of the solution.
The final approach has been built mainly on the basis of this acquired knowledge, and is described in terms of Artificial Intelligence and Model-Based Systems (MBS), with additional mathematical formalization where precision has been required. The approach describes a reference model to feed systems back with valuable meaning, allowing reasoning about (a) the relationship between the environment and the relevance of its effects on the system, and (b) dynamic evaluations of the inner situational state of the system as a result of those effects. This reasoning provides a framework of distinguishable states of AGSys derived from its own circumstances, which can be regarded as artificial emotion.
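
The AGSys/ESys mutual feedback loop described above can be reduced to a minimal sketch. The scalar relevance signal, the recursive update rule standing in for the R-Loop, and the decision threshold are all illustrative placeholders, not the thesis architecture:

```python
# Hedged, minimal sketch of an AGSys/ESys-style feedback loop.
# Class names follow the abstract; everything else is an assumption.

class ESys:
    """Inner agent: turns raw effects into an emotion-like relevance signal."""
    def __init__(self):
        self.state = 0.0
    def assess(self, effect):
        # R-Loop stand-in: recursively blend the new effect into the state.
        self.state = 0.8 * self.state + 0.2 * effect
        return self.state

class AGSys:
    """Outer model-based agent: acts in the environment, consults ESys."""
    def __init__(self):
        self.esys = ESys()
    def step(self, observation):
        relevance = self.esys.assess(observation)   # feedback from ESys
        # Mission-level decision modulated by the relevance signal.
        return "withdraw" if relevance > 0.5 else "proceed"
```

Even in this toy form, the separation is visible: the outer agent pursues mission goals while the inner agent maintains its own situational state, and only the relevance signal crosses between them.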

Relevance: 100.00%

Abstract:

The main subject under research in this Thesis is the study of the dynamic behaviour of a structure using models that describe the energy distribution between its components, and the application of these models to incipient damage detection. Dynamic tests are a way to extract information about the properties of a structure. If we have a model of the structure, it can be updated to reproduce, within a certain degree of accuracy, the same response as the real system under test. After damage occurs, the response to the same test will change to some extent; updating the model to the new conditions can reveal changes in the structural model that lead to the conclusion that damage has occurred. In this way, incipient damage detection is possible if we are able to detect small variations in the model parameters. The high-frequency regime turns out to be very appropriate for this kind of detection, because the response is highly sensitive to small geometric details: the characteristic length associated with the response is directly proportional to the propagation speed of acoustic waves in the solid, which is fixed for a given structure, and inversely proportional to the excitation frequency. At the same time, this feature of the high-frequency response makes a Finite Element model impractical, due to the high computational cost.

A widely used model in engineering for the high-frequency response of structures is SEA (Statistical Energy Analysis). SEA applies an energy balance to each structural component, relating the vibrational energy of the components to the power dissipated by each one and the power transmitted between them, whose sum must equal the power injected into each component. This relationship is linear and is characterized by the loss factors. The magnitudes involved in the response are averaged over geometry, frequency and time. Updating the SEA model to test data therefore means calculating the loss factors that reproduce the measured response. Done directly, this updating amounts to solving an inverse problem that is ill-conditioned. This Thesis proposes updating the SEA model not in terms of the loss factors but in terms of structural parameters that have physical meaning at high frequencies: the dissipation factor of each component, their modal densities, and the characteristic stiffnesses of the coupling elements. The loss factors are then calculated as functions of these parameters. This formulation is developed originally in this Thesis and rests mainly on the hypothesis of high modal density, i.e. that a large number of modes of each structural component participates in the response.

General SEA theory establishes the validity of the model under very restrictive hypotheses on the nature of the external excitations, such as requiring them to be of the local white noise type. This kind of load is difficult to reproduce under test conditions. In this Thesis we show with practical cases that this restriction can be relaxed; in particular, the results are good enough when the structure is subjected to a harmonic step load. Under these approximations, a stepwise optimization algorithm is developed that updates an SEA model to a transient test when the load is a harmonic step function. The algorithm updates the model not only for a single frequency band but for several frequency bands simultaneously, in order to pose a better-conditioned problem. Finally, a damage index is defined that measures the change in the loss-factor matrix when structural damage occurs at a specific location in a component. The response of a structure made of beams is simulated numerically, with damage introduced in a section of one of them; since this is a high-frequency calculation, the simulation uses the Spectral Element Method, for which it has been necessary to develop, within this Thesis, a spectral beam element damaged at a given section. The results allow the damaged structural component, and the section where the damage lies, to be located with a certain degree of confidence.
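
The SEA power balance described above is linear in the subsystem energies: for each subsystem, input power equals dissipated power plus net exchanged power, through the loss-factor matrix. A hedged two-subsystem Python sketch (loss factors, frequency and powers are illustrative numbers, not values from the Thesis):

```python
# Hedged sketch of the SEA balance  P = omega * L * E  for two subsystems,
# with loss-factor matrix L = [[eta1+eta12, -eta21], [-eta12, eta2+eta21]].

def sea_energies_2sub(omega, eta1, eta2, eta12, eta21, p1, p2):
    """Solve the 2x2 SEA power balance for the subsystem energies (e1, e2)."""
    a, b = omega * (eta1 + eta12), -omega * eta21
    c, d = -omega * eta12, omega * (eta2 + eta21)
    det = a * d - b * c          # 2x2 determinant (Cramer's rule)
    e1 = (p1 * d - b * p2) / det
    e2 = (a * p2 - c * p1) / det
    return e1, e2
```

Model updating inverts this relation: given measured energies and input powers, one searches for the parameters (here, the loss factors) that best reproduce them, which is the ill-conditioned inverse problem the Thesis reformulates in physical parameters.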

Relevance: 100.00%

Abstract:

This research investigates the ultimate earthquake resistance of typical RC moment-resisting frames designed according to current standards, in terms of ultimate energy absorption/dissipation capacity. Shake table tests of a 2/5 scale model, under several intensities of ground motion, are carried out. The loading effect of the earthquake is expressed as the total energy that the quake inputs to the structure, and the seismic resistance is interpreted as the amount of energy that the structure dissipates in the form of cumulative inelastic strain energy.
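
The energy formulation above can be illustrated for a simple oscillator: the relative input energy is accumulated as E_I = -∫ m · a_g(t) · v(t) dt over the record. The sketch below uses a linear SDOF system and a toy sinusoidal ground motion (the real specimen is an RC frame responding inelastically, so these numbers are purely illustrative):

```python
import math

# Hedged sketch: seismic input energy for a linear SDOF oscillator,
# integrated with a semi-implicit Euler scheme. Toy parameters.

def input_energy(m, k, c, a_ground, dt):
    """Integrate m*a + c*v + k*u = -m*a_g and accumulate the
    relative input energy E_I = -sum(m * a_g * v * dt)."""
    u = v = e_in = 0.0
    for a_g in a_ground:
        a = (-m * a_g - c * v - k * u) / m
        v += a * dt
        u += v * dt
        e_in += -m * a_g * v * dt    # increment of input energy
    return e_in

# Toy ground motion: a 1 Hz sine near the oscillator's natural frequency.
dt = 0.005
record = [2.0 * math.sin(2.0 * math.pi * 1.0 * i * dt) for i in range(2000)]
E_I = input_energy(m=1000.0, k=4.0e4, c=500.0, a_ground=record, dt=dt)
```

In the paper's framing, this input energy is the demand side; the capacity side is the cumulative inelastic strain energy the frame can dissipate before failure.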

Relevance: 100.00%

Abstract:

A disruption predictor based on support vector machines (SVM) has been developed to be used in JET. The training process uses thousands of discharges and, therefore, high-performance computing has been necessary to obtain the models. In this respect, several models have been generated with data from different JET campaigns. In addition, various kernels (mainly linear and RBF) and parameters have been tested. The main objective of this work has been the implementation of the predictor model under real-time constraints. A C-code software application has been developed to simulate the real-time behavior of the predictor. The application reads the signals from the JET database and simulates the real-time data processing; in particular, it reproduces the specific data hold method needed when reading data from the JET ATM real-time network. The simulator is fully configurable by means of text files to select models, signal thresholds, sampling rates, etc. Results with data from campaigns C23 to C28 are shown.
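
The real-time simulation loop can be sketched in Python (the actual simulator is C code). The linear decision function, the signals, the thresholds and the zero-order "data hold" below are hypothetical placeholders, not the JET predictor:

```python
# Hedged sketch of a streaming predictor loop: evaluate a (hypothetical,
# linear) SVM-like decision function sample by sample, with a simple
# data-hold standing in for the real-time network read, and raise an
# alarm when the score crosses a configurable threshold.

def simulate_predictor(signals, weights, bias, threshold, hold=1):
    """signals: list of samples, each a list of diagnostic values.
    Returns the index of the first alarm sample, or None if no alarm."""
    held = signals[0]
    for i, sample in enumerate(signals):
        if i % hold == 0:            # data hold: refresh every `hold` ticks
            held = sample
        score = sum(w * x for w, x in zip(weights, held)) + bias
        if score > threshold:        # SVM-style decision: sign of w.x + b
            return i                 # disruption alarm
    return None
```

Making `weights`, `threshold` and `hold` plain parameters mirrors the text-file configurability of the real simulator (model selection, signal thresholds, sampling rates).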

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The method used to obtain electrical equivalent models of piezoelectric materials deployed in road-traffic environments for energy harvesting applications is presented in this paper. The experimental results are processed in order to determine the optimal topological structure and the technology of the semiconductor elements used in the input stage of the power harvesting system. The model of the non-regulated power supply under variable current demand is also presented.
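One common electrical equivalent circuit for a piezoelectric element is the Butterworth-Van Dyke model: a static capacitance in parallel with a motional R-L-C branch. The paper fits its own model from experimental road-traffic data, so the sketch below, with invented component values, is only illustrative:

```python
import numpy as np

def bvd_impedance(f, C0, R1, L1, C1):
    """Impedance of a Butterworth-Van Dyke equivalent circuit: static
    capacitance C0 in parallel with a motional R1-L1-C1 branch.
    Component values here are illustrative, not fitted data."""
    w = 2 * np.pi * np.asarray(f, dtype=float)
    Zm = R1 + 1j * w * L1 + 1 / (1j * w * C1)  # motional branch
    Z0 = 1 / (1j * w * C0)                     # static branch
    return Z0 * Zm / (Z0 + Zm)

f = np.linspace(10e3, 100e3, 2000)
Z = bvd_impedance(f, C0=10e-9, R1=100.0, L1=20e-3, C1=1e-9)
f_res = f[np.argmin(np.abs(Z))]  # series resonance, near 1/(2*pi*sqrt(L1*C1))
print(f_res)
```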

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Assets are interrelated in the risk analysis methodologies for information systems promoted by international standards. This means that an attack on one asset can propagate through the network and threaten an organization's most valuable assets. It is therefore necessary to value all assets and the direct and indirect dependencies between them, as well as the probability of threats materializing and the degradation they can cause to the assets. However, the experts in charge of assigning such values often provide only vague and uncertain information, so fuzzy techniques can be very helpful in this field. These techniques are not free of difficulties, such as the need for an arithmetic appropriate to the model under consideration or the establishment of suitable similarity measures. Throughout this paper we propose a fuzzy treatment for the risk analysis models promoted by international methodologies through the establishment of such elements.
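As a toy illustration of the kind of machinery involved (the paper establishes its own fuzzy arithmetic and similarity measures), a trapezoidal fuzzy number can be stored as its four vertices and two expert valuations compared with a simple vertex-distance similarity:

```python
import numpy as np

def similarity(a, b):
    """A simple similarity between trapezoidal fuzzy numbers given as
    vertices (a1, a2, a3, a4) on a [0, 1]-normalized scale: one minus
    the mean absolute vertex distance. Illustrative only; the paper
    defines its own measures."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return 1.0 - float(np.mean(np.abs(a - b)))

# Two hypothetical expert valuations of the same asset
expert1 = (0.2, 0.4, 0.5, 0.7)
expert2 = (0.3, 0.4, 0.6, 0.8)
s = similarity(expert1, expert2)
print(round(s, 3))  # close vertices give a similarity near 1
```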

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Objectives: The study sought to evaluate the ability of cardiac magnetic resonance (CMR) to monitor acute and long-term changes in pulmonary vascular resistance (PVR) noninvasively. Background: PVR monitoring during the follow-up of patients with pulmonary hypertension (PH) and the response to vasodilator testing require invasive right heart catheterization. Methods: An experimental study in pigs was designed to evaluate the ability of CMR to monitor: 1) an acute increase in PVR generated by acute pulmonary embolization (n = 10); 2) serial changes in PVR in chronic PH (n = 22); and 3) changes in PVR during vasodilator testing in chronic PH (n = 10). CMR studies were performed with simultaneous hemodynamic assessment using a CMR-compatible Swan-Ganz catheter. Average flow velocity in the main pulmonary artery (PA) was quantified with phase contrast imaging. Pearson correlation and mixed model analysis were used to correlate changes in PVR with changes in CMR-quantified PA velocity. Additionally, PVR was estimated from CMR data (PA velocity and right ventricular ejection fraction) using a previously validated formula. Results: Changes in PA velocity strongly and inversely correlated with acute increases in PVR induced by pulmonary embolization (r = –0.92), serial PVR fluctuations in chronic PH (r = –0.89), and acute reductions during vasodilator testing (r = –0.89; p ≤ 0.01 for all). CMR-estimated PVR showed adequate agreement with invasive PVR (mean bias –1.1 Wood units; 95% confidence interval: –5.9 to 3.7), and changes in both indices correlated strongly (r = 0.86, p < 0.01). Conclusions: CMR allows for noninvasive monitoring of acute and chronic changes in PVR in PH. This capability may be valuable in the evaluation and follow-up of patients with PH.
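The agreement statistics quoted above (a mean bias with 95% limits) are of the Bland-Altman type, alongside a Pearson correlation. A minimal sketch on hypothetical paired readings, not the study's data:

```python
import numpy as np

def agreement(invasive, estimated):
    """Mean bias and 95% limits of agreement (Bland-Altman style)
    between invasively measured and CMR-estimated PVR."""
    d = np.asarray(estimated, float) - np.asarray(invasive, float)
    bias = d.mean()
    half = 1.96 * d.std(ddof=1)
    return bias, bias - half, bias + half

# Hypothetical paired PVR readings (Wood units)
invasive  = np.array([3.1, 5.4, 7.9, 10.2, 12.5, 6.3])
estimated = np.array([2.8, 4.9, 7.1,  9.5, 11.0, 5.9])
bias, lo, hi = agreement(invasive, estimated)
r = np.corrcoef(invasive, estimated)[0, 1]  # Pearson correlation
print(round(bias, 2), round(r, 2))
```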

Relevância:

100.00% 100.00%

Publicador:

Resumo:

- Context: Pinus pinea L. presents serious problems of natural regeneration in managed forests of Central Spain. The species exhibits specific traits linked to frugivore activity. Therefore, information on plant–animal interactions may be crucial to understand regeneration failure. - Aims: To determine the spatio-temporal pattern of P. pinea seed predation by Apodemus sylvaticus L. and the factors involved; to explore the importance of A. sylvaticus as a disperser of P. pinea; and to identify other frugivores and their seasonal patterns. - Methods: An intensive 24-month seed predation trial was carried out. The probability of seeds escaping predation was modelled through a zero-inflated binomial mixed model. Experiments on seed dispersal by A. sylvaticus were conducted. Cameras were set up to identify other potential frugivores. - Results: A decreasing rodent population in summer, together with masting, enhances seed survival. Seeds were exploited more rapidly near parent trees and shelters. A. sylvaticus dispersal activity was found to be scarce. Corvids only marginally preyed upon P. pinea seeds. - Conclusions: Survival of P. pinea seeds is climate-controlled through the timing of the dry period together with masting occurrence. Should germination not take place during the survival period, establishment may be limited. A. sylvaticus-mediated dispersal does not modify the seed shadow. The seasonality of corvid activity points to a role of corvids in dispersal.
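The zero-inflated binomial model mentioned in the Methods mixes structural zeros (e.g. sampling points where no seed can escape) with binomial counts. A sketch of its log-likelihood, with the random effects of the mixed model omitted and all values hypothetical:

```python
import numpy as np
from scipy.stats import binom

def zib_loglik(y, n, p, pi):
    """Log-likelihood of a zero-inflated binomial model: with
    probability pi the count is a structural zero, otherwise it is
    Binomial(n, p). Random/mixed effects are omitted in this sketch."""
    y = np.asarray(y)
    ll = np.where(
        y == 0,
        np.log(pi + (1 - pi) * binom.pmf(0, n, p)),  # zero from either source
        np.log(1 - pi) + binom.logpmf(y, n, p),      # non-zero binomial count
    )
    return float(ll.sum())

# Seeds escaping predation out of n = 10 per sampling point (hypothetical)
y = np.array([0, 0, 0, 2, 5, 0, 1, 3])
ll = zib_loglik(y, n=10, p=0.3, pi=0.4)
print(ll)
```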

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Waveguide transitions are structures used at microwave frequencies to transfer and match the signal travelling in one transmission system (e.g. a coaxial cable) to another transmission system or to a radiating system (e.g. a horn antenna). The two transmission systems between which the transition adapts the signal can be of different types, or of the same type but with some of their dimensions differing. There are different waveguide transitions that, depending on their purpose, are designed and built with different cross-sections: circular, rectangular, elliptical, or combinations of these. The cross-section of the guide need not have a known geometric shape, although the standards followed in this work refer specifically to rectangular and circular sections. In the work presented here, we aim to optimize, by means of parametric simulations, a transition between a coaxial cable with a K-type connector and a circular-section waveguide following the standard presented by Flann, Millitech and Antarfs for the WR34 band. The transition under study is a tapered transition, characterized by the progressive variation of one of its dimensions until it reaches the size defined in the corresponding standard. The optimization is based on the study of the S11 parameter of the structure across the working band; since the WR34 standard is used, this band spans 21.7 to 33 GHz. The design goal is an S11 response below -20 dB across the WR34 band, so that the transition is well matched. Finally, a criterion is proposed for optimizing this type of transition with the best-match objective, taking into account the impact of each section on the frequency range it influences, and the final characteristics of the transition under study are presented. This document briefly introduces the use of quarter-wave impedance transformers in transmission lines, the state of the art of the different techniques for their design, and the design and characterization proposal that is the object of this work. Next, the study case for the design of the transition to be integrated with a choke horn antenna is presented. The theoretical framework is then introduced, with some illustrative examples of rectangular and circular waveguide sections and the introduction of λ/4 adaptors in simulations. Subsequently, the implementation of the model under study in CST (Computer Simulation Technology) Studio Suite is explained. Finally, the discussion of the results, the conclusions and future lines of work are presented.
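The λ/4 impedance transformers mentioned above match a load ZL to a line Z0 with a quarter-wave section of impedance Zt = sqrt(Z0 * ZL). A sketch of the resulting |S11| over the WR34 band, using a generic transmission-line model with illustrative impedances (this is not the CST simulation of the tapered transition):

```python
import numpy as np

def quarter_wave_match(z0, zl):
    """Characteristic impedance of a quarter-wave transformer
    matching a real load ZL to a line Z0: Zt = sqrt(Z0 * ZL)."""
    return np.sqrt(z0 * zl)

def s11_db(z0, zl, zt, f, f0):
    """|S11| in dB seen through the transformer; electrical length
    beta*l = (pi/2) * f / f0, i.e. exactly lambda/4 at f0."""
    t = np.tan((np.pi / 2) * np.asarray(f, float) / f0)
    zin = zt * (zl + 1j * zt * t) / (zt + 1j * zl * t)  # input impedance
    gamma = (zin - z0) / (zin + z0)                     # reflection coefficient
    return 20 * np.log10(np.abs(gamma))

zt = quarter_wave_match(50.0, 100.0)       # ~70.7 ohm, illustrative impedances
f = np.linspace(21.7e9, 33e9, 200)         # WR34 band from the text
s11 = s11_db(50.0, 100.0, zt, f, 27.35e9)  # lambda/4 at band centre
print(round(s11.max(), 1))                 # worst in-band |S11|, at the band edges
```

For this 2:1 mismatch, a single section hovers around the -20 dB target at the band edges, which illustrates why the taper's per-section impact on its frequency range matters in the optimization.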

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Soil tillage mechanization, because of its energy consumption and direct impact on the environment, is the factor that most contributes to soil degradation and loss of productivity. Compaction, erosion, crusting and loss of structure should be considered among the factors that decrease productivity. All this makes it necessary to take care of agricultural soil management, trying to improve soil conditions and increase yields without compromising economic, ecological and environmental aspects. The present study adjusts the constitutive parameters of the Extended Drucker-Prager (DPE) model, which define the friction and dilatancy of the soil in the plastic deformation phase, in order to minimize the prediction errors when simulating the mechanical response of a Vertisol by the Finite Element Method. First, the theoretical foundations supporting the model were analyzed. The soil properties and physical-mechanical parameters required as input data by the model were determined, and the accuracy of the model's predictions of the mechanical response of the soil was assessed. The constitutive parameters defining the path of the plastic stress-strain curve were then estimated by the Levenberg-Marquardt function-approximation method. Finally, the accuracy of the predictions after the adjustments made to the model was verified. The results made it possible to determine the soil properties and parameters required as input data by the model, showing that their magnitude depends on the soil's moisture and density states; empirical models of these relations were obtained, exhibiting R² > 94%. The variables producing the inaccuracies of the constitutive model (friction angle and dilatancy) were identified, showing that they are linked to the failure and plastic deformation stage. Finally, the optimal values of these angles were estimated, reducing the prediction errors of the DPE model to below 4.35% and making it suitable for simulating the mechanical response of the soil under study.
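The Levenberg-Marquardt estimation step used above can be sketched with SciPy. The hardening law below is an illustrative stand-in, not the DPE stress-strain relation, and the "measured" data are synthetic:

```python
import numpy as np
from scipy.optimize import least_squares

def stress_model(params, strain):
    """Illustrative saturating hardening law sigma = a*(1 - exp(-b*strain));
    a stand-in for the plastic stress-strain curve fitted in the text."""
    a, b = params
    return a * (1.0 - np.exp(-b * strain))

def residuals(params, strain, stress):
    return stress_model(params, strain) - stress

# Synthetic "measured" curve generated with a = 200 kPa, b = 15
strain = np.linspace(0.0, 0.5, 40)
stress = stress_model((200.0, 15.0), strain)

fit = least_squares(residuals, x0=(100.0, 5.0), args=(strain, stress),
                    method="lm")   # Levenberg-Marquardt
print(np.round(fit.x, 1))          # recovers the generating parameters
```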

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Traditional architectural design and construction processes have proven historically deficient where optimization is concerned, particularly when compared to other typically industrial activities. The constant striving for effective industrialization, both to reach higher quality levels and to save resources, now has an unparalleled opportunity from the field of information technology: Building Information Modelling, or BIM. What may initially seem to be merely a certain type of computer program is in fact a "process" concept that subverts many routines common today in the development of architectural projects and constructions. Including and working with project data from the very beginning to the end of the full life cycle makes it possible to create a dynamic, updatable virtual reality, which in turn enables testing and optimization in every respect: before and during execution, as well as across the service life. Added to this is the opportunity to transmit complete project data efficiently, with hardly any loss or rework, to the manufacturing chain, which facilitates a truly significant industrialization of building construction. In the presence of a worldwide call for optimizing resources, along with an undeniable interest in increasing economic benefits by reducing the uncertainty in processes, BIM undoubtedly offers a chance for improvement, as acknowledged by its imminent mandatory implementation by governments (for example, the United Kingdom in 2016 and Spain in 2018). The changes in professional roles and procedures that incorporating BIM involves are highly significant and will set the course for future graduates of the Architecture, Engineering and Construction (AEC) disciplines. Higher education must respond swiftly to these new needs by incorporating this methodology into formal teaching and providing a synergetic vision that draws out the educational benefits inherent in the BIM framework itself. In this respect BIM, by gathering the data set under one single virtual model, offers a uniquely interesting potential. The three-dimensional reality of the model, continuously developed and updated, gives students a radically different handling of graphic representation, in which the partial views of sections and plans, so difficult to assimilate at the beginning of university studies, become mere a-posteriori requests, extracted from the virtual model as needed. The design is always carried out on the single model itself, independently of the working view chosen at any particular moment, with all data and their construction relations permanently updated and fully coherent. This condensed description of BIM's features already shapes a large part of the educational benefits offered by BIM processes, particularly regarding integrated design development and information management (including ICT). It also highlights the ease of visual understanding of architectural elements, technical systems, their intrinsic relationships, and construction processes. To this is added the experimental development the BIM platform grants through its collaborative software: simulation of the structural, energy and economic behavior, among many others, of the virtual model on the basis of the data inherent to the project. This doctoral dissertation presents an overall study aimed at making explicit both the virtues of, and possible reservations about, the use of BIM processes within the framework of a specific discipline: the teaching of Architecture. To this end, a general literature review on BIM and a specific one on teaching in Architecture were carried out, and the experiences of different stakeholder groups were analyzed in the specific context of teaching Architecture at Universidad Europea de Madrid. The analysis of benefits and reservations regarding the use of BIM was approached through student surveys and interviews with AEC professionals, whether or not involved with BIM; various educational experiences are described, and the academic management of their experimental implementation is analyzed. The conclusions of the study are synthesized into a Framework of Implementation of BIM methodology which, for greater clarity and ease of communication and handling, has been cast in an eminently graphic form. It gives guidance on teaching actions for the development of specific competencies, taking advantage of the conceptual flexibility of curricula within the European Higher Education Area (Bologna Declaration) to incorporate the new teaching tool naturally, in the service of the legally established educational objectives. The global approach of the proposed Implementation Framework facilitates the planning of educational actions from an overall perspective: combining one-off and vehicular BIM formats, establishing cross-disciplinary synergies, and harmonizing resources, so that the methodology can benefit both the assimilation of the knowledge and skills established for the degree and the BIM learning flow itself. Likewise it sets apart, even visually, those areas of knowledge in which, at least under current planning, the inclusion of BIM processes is not considered advantageous over other methodologies, or is even inadequate for the established teaching objectives. It is this last categorization that characterizes the conclusions of this research as a whole, centered on: 1. the unquestionable need to teach BIM concepts and processes from the very early stages of university education in Architecture; 2. the additional educational benefits BIM brings to the development of the very diverse competencies contemplated in the academic curriculum; and 3. the specific nature of the architect's professional role, which will demand a careful and balanced implementation of BIM that respects the traditionally effective methodologies of creative development and adds value through a symbiotic reorientation with parametric design and digital fabrication, enabling an ultimately generative design.