947 results for modeling and model calibration
Abstract:
Recently, vision-based advanced driver-assistance systems (ADAS) have received renewed interest as a means of enhancing driving safety. In particular, owing to their high performance-to-cost ratio, mono-camera systems are becoming the main focus of this field of work. In this paper we present a novel on-board road modeling and vehicle detection system, developed as part of the European I-WAY project. The system relies on a robust estimation of the perspective of the scene, which adapts to the dynamics of the vehicle and generates a stabilized rectified image of the road plane. This rectified plane is used by a recursive Bayesian classifier, which assigns pixels to classes corresponding to the elements of interest in the scenario. This stage works as an intermediate layer that isolates subsequent modules, since it absorbs the inherent variability of the scene. The system has been tested on-road in different scenarios, including varied illumination and adverse weather conditions, and the results have proved remarkable even for such complex scenarios.
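The road-plane rectification step described above is, in essence, an inverse perspective mapping: a homography warps the camera view onto a bird's-eye view of the road plane. A minimal sketch with OpenCV follows; the corner correspondences are illustrative assumptions, since in the actual system the perspective is re-estimated continuously from the vehicle dynamics.

```python
# Minimal inverse-perspective-mapping sketch (illustrative, not the I-WAY code).
# The four corner correspondences below are assumed values; a real system
# would derive the homography from calibration plus the estimated pitch/roll.
import cv2
import numpy as np

img = cv2.imread("road_frame.png")            # hypothetical input frame

# Trapezoid in the image that corresponds to a rectangle on the road plane.
src = np.float32([[520, 460], [760, 460], [1180, 700], [100, 700]])
dst = np.float32([[0, 0], [400, 0], [400, 600], [0, 600]])  # rectified plane (px)

H = cv2.getPerspectiveTransform(src, dst)     # 3x3 homography
birdseye = cv2.warpPerspective(img, H, (400, 600))
cv2.imwrite("road_rectified.png", birdseye)
```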
Abstract:
To study the interaction between fluid motion and vehicle dynamics, a model of four liquid-filled, two-axle container freight wagons was set up. The railway vehicle has been modelled as a multi-body system (MBS). To include fluid sloshing, an equivalent mechanical model has been developed and incorporated. The influence of several factors has been studied in computer simulations, such as track defects, curve negotiation, train velocity, wheel wear, liquid and solid wagonload, and container baffles. SIMPACK has been used for the MBS analysis, and ANSYS for liquid sloshing modelling and for validation of the equivalent mechanical systems. Acceleration and braking manoeuvres of the freight train set the liquid cargo into motion. This longitudinal sloshing motion of the fluid cargo inside the tanks initiated a swinging motion of some components of the coupling gear. The coupling gear consists of UIC standard traction hooks and coupling screws that are located between buffers. One of the coupling screws is placed in the traction hook of the opposite wagon, thus joining the two wagons, whereas the unused coupling screw rests on a hanger. Simulation results showed that, for certain combinations of type of liquid, filling level and container dimensions, the liquid cargo could provoke an undesirable, although not hazardous, release of the unused coupling screw from its hanger. The release occurred especially when a period of acceleration was followed by an abrupt braking manoeuvre at 1 m/s². It was shown that a resonance effect between the liquid's oscillation and the coupling screw's rotary motion could be the reason for the undesired release. Solutions are suggested to avoid the resonance problem, and directions for future research are given.
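The abstract does not give the form of the equivalent mechanical model, but longitudinal sloshing in a partially filled tank is commonly replaced by an equivalent pendulum whose frequency matches the first sloshing mode. A hedged sketch under that assumption (linear potential-flow theory for a rectangular tank) follows; the tank dimensions are placeholders, not values from the paper.

```python
# Sketch: first-mode sloshing frequency and equivalent pendulum length for a
# rectangular tank (linear potential-flow result), assuming this is the kind
# of equivalent mechanical model the study incorporates. Dimensions are
# placeholder values, not taken from the paper.
import math

g = 9.81        # gravity, m/s^2
L = 6.0         # tank length in the sloshing (longitudinal) direction, m
h = 1.2         # liquid fill depth, m

k = math.pi / L                          # wavenumber of the first mode
omega2 = g * k * math.tanh(k * h)        # natural frequency squared, rad^2/s^2
f_slosh = math.sqrt(omega2) / (2 * math.pi)
l_pend = g / omega2                      # pendulum with the same frequency

print(f"first sloshing mode: {f_slosh:.3f} Hz, equivalent pendulum {l_pend:.2f} m")
```

A resonance check of the kind reported in the paper would compare this frequency with the natural frequency of the coupling screw's rotary motion.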
Abstract:
This paper presents a new methodology to build parametric models that estimate global solar irradiation adjusted to specific on-site characteristics, based on the evaluation of variable importance. Those variables highly correlated with solar irradiation at a site are implemented in the model, and therefore different models might be proposed under different climates. This methodology is applied in a case study in the La Rioja region (northern Spain). A new model is proposed and evaluated for stability and accuracy against a review of twenty-two existing parametric models based on temperatures and rainfall, at seventeen meteorological stations in La Rioja. The model evaluation methodology is based on bootstrapping, which achieves a high level of confidence in model calibration and validation from short time series (in this case five years, from 2007 to 2011). The proposed model improves on the estimates of the other twenty-two models, with an average mean absolute error (MAE) of 2.195 MJ/m² day and an average confidence interval width (95% C.I., n=100) of 0.261 MJ/m² day. 41.65% of the daily residuals in the case of SIAR and 20.12% in that of SOS Rioja fall within the uncertainty tolerance of the pyranometers of the two networks (10% and 5%, respectively). Relative differences between measured and estimated irradiation on an annual cumulative basis are below 4.82%. Thus, the proposed model might be useful for estimating annual sums of global solar irradiation, with insignificant differences from pyranometer measurements.
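The bootstrapping scheme can be sketched as follows: resample the daily series with replacement, refit the parametric model on each resample, and collect out-of-sample MAE values to form the confidence interval. The model below is a generic placeholder, since the paper's actual parametric form is chosen per site via variable importance.

```python
# Hedged sketch of bootstrap calibration/validation for a parametric
# irradiation model. The model here is a generic least-squares fit on a
# feature matrix X (e.g. temperature range, rainfall indicators); the
# paper's actual model structure is site-specific.
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_mae(X, y, n_boot=100):
    maes = []
    n = len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)            # resample with replacement
        oob = np.setdiff1d(np.arange(n), idx)       # out-of-bag days for validation
        coef, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        maes.append(np.mean(np.abs(X[oob] @ coef - y[oob])))
    maes = np.array(maes)
    lo, hi = np.percentile(maes, [2.5, 97.5])
    return maes.mean(), hi - lo                     # average MAE, 95% CI width

# X, y would hold ~5 years of daily predictors and measured irradiation.
```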
Abstract:
Nowadays, Software Product Line (SPL) engineering [1] has been widely adopted in software development due to the significant improvements it provides, such as reducing cost and time-to-market and providing flexibility to respond to planned changes [2]. SPL takes advantage of common features among the products of a family through the systematic reuse of core assets and the effective management of variability across the products. SPL features are realized at the architectural level in product-line architecture (PLA) models. Therefore, suitable modeling and specification techniques are required to model variability. In fact, architectural variability modeling has become a challenge for SPL engineering (SPLE), since PLA modeling requires modeling variability not only at the level of the external architecture configuration (see the literature reviews [3,4]), but also at the level of the internal specification of components [5]. In addition, PLA modeling requires preserving the traceability between features and PLAs. Finally, it is important to take into account that PLA modeling should guide architects in modeling the PLA core assets and variability, and in deriving the customized products. To address these needs, we present in this demonstration the FPLA Modeling Framework.
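To make the variability notions concrete, the toy sketch below models a feature set with mandatory and optional features, a traceability map from features to architectural components, and product derivation by feature selection. It only illustrates the concepts discussed; it is not FPLA's actual API, and all names are hypothetical.

```python
# Toy illustration of PLA variability and feature-to-architecture
# traceability (not the FPLA Modeling Framework's actual API).
MANDATORY = {"engine"}
OPTIONAL = {"reporting", "encryption"}

# Traceability: each feature is realized by one or more components.
TRACE = {
    "engine": ["CoreEngine"],
    "reporting": ["ReportGenerator", "Exporter"],
    "encryption": ["CryptoAdapter"],
}

def derive_product(selected: set[str]) -> list[str]:
    """Derive a product architecture from a feature selection."""
    if not MANDATORY <= selected:
        raise ValueError("every product must include the mandatory features")
    unknown = selected - MANDATORY - OPTIONAL
    if unknown:
        raise ValueError(f"unknown features: {unknown}")
    return [c for f in sorted(selected) for c in TRACE[f]]

print(derive_product({"engine", "reporting"}))
# -> ['CoreEngine', 'ReportGenerator', 'Exporter']
```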
Abstract:
Leaf nitrogen and leaf surface area influence the exchange of gases between terrestrial ecosystems and the atmosphere, and play a significant role in the global cycles of carbon, nitrogen and water. The purpose of this study is to use field-based and satellite remote-sensing-based methods to assess leaf nitrogen pools in five diverse European agricultural landscapes located in Denmark, Scotland (United Kingdom), Poland, the Netherlands and Italy. REGFLEC (REGularized canopy reFLECtance) is an advanced image-based inverse canopy radiative transfer modelling system which has shown proficiency for regional mapping of leaf area index (LAI) and leaf chlorophyll (CHLl) using remote sensing data. In this study, high spatial resolution (10-20 m) remote sensing images acquired from the multispectral sensors aboard the SPOT (Satellite Pour l'Observation de la Terre) satellites were used to assess the capability of REGFLEC for mapping spatial variations in LAI and CHLl, and their relation to leaf nitrogen (Nl) data, in the five landscapes. REGFLEC is based on physical laws and includes an automatic model parameterization scheme which makes the tool independent of field data for model calibration. In this study, REGFLEC performance was evaluated using LAI measurements and non-destructive measurements (using a SPAD meter) of leaf-scale CHLl and Nl concentrations in 93 fields representing the crop- and grasslands of the five landscapes. Furthermore, empirical relationships between field measurements (LAI, CHLl and Nl) and five spectral vegetation indices (the Normalized Difference Vegetation Index, the Simple Ratio, the Enhanced Vegetation Index-2, the Green Normalized Difference Vegetation Index, and the green chlorophyll index) were used to assess field data coherence and to serve as a comparison basis for assessing REGFLEC model performance. The field measurements showed strong vertical CHLl gradient profiles in 26% of the fields, which affected REGFLEC performance as well as the relationships between the spectral vegetation indices (SVIs) and the field measurements. When the range of surface types increased, the REGFLEC results were in better agreement with field data than the empirical SVI regression models. Selecting only homogeneous canopies with uniform CHLl distributions as reference data for evaluation, REGFLEC was able to explain 69% of LAI observations (rmse = 0.76), 46% of measured canopy chlorophyll contents (rmse = 719 mg m−2) and 51% of measured canopy nitrogen contents (rmse = 2.7 g m−2). Better results were obtained for individual landscapes, except for Italy, where REGFLEC performed poorly due to a lack of dense vegetation canopies at the time of the satellite acquisition; the presence of vegetation is needed to parameterize the REGFLEC model. Combining the REGFLEC- and SVI-based model results to minimize errors, a "snap-shot" assessment of total leaf nitrogen pools in the five landscapes varied from 0.6 to 4.0 t km−2. Differences in leaf nitrogen pools between landscapes are attributed to seasonal variations, extents of agricultural area, species variations, and spatial variations in nutrient availability. In order to facilitate a substantial assessment of variations in Nl pools and their relation to landscape-based nitrogen and carbon cycling processes, time series of satellite data are needed.
The upcoming Sentinel-2 satellite mission will provide new multiple narrowband data opportunities at high spatio-temporal resolution which are expected to further improve remote sensing capabilities for mapping LAI, CHLl and Nl.
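The five spectral vegetation indices named above have standard band-ratio definitions; a sketch of these, assuming the usual formulations from the literature (the abstract does not restate them), is:

```python
# Standard formulations of the five SVIs named in the abstract, computed
# from green, red and near-infrared reflectance arrays (0-1). These are the
# usual literature definitions, assumed rather than quoted from the paper.
import numpy as np

def svis(green, red, nir):
    return {
        "NDVI": (nir - red) / (nir + red),
        "SR": nir / red,                                   # Simple Ratio
        "EVI2": 2.5 * (nir - red) / (nir + 2.4 * red + 1.0),
        "GNDVI": (nir - green) / (nir + green),
        "CI_green": nir / green - 1.0,                     # green chlorophyll index
    }

green, red, nir = np.array([0.08]), np.array([0.05]), np.array([0.45])
print({k: float(v[0]) for k, v in svis(green, red, nir).items()})
```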
Abstract:
The road transportation sector is responsible for around 25% of total man-made CO2 emissions worldwide. Considerable efforts are therefore underway to reduce these emissions through several approaches, including improved vehicle technologies, traffic management and changes in driving behaviour. Detailed traffic and emissions models are used extensively to assess the potential effects of these measures. However, if the input and calibration data are not sufficiently detailed, there is an inherent risk that the results may be inaccurate. This article presents the use of Floating Car Data to derive useful speed and acceleration values in the process of traffic model calibration, as a means of ensuring more accurate results when simulating the effects of particular measures. The data acquired include instantaneous GPS coordinates, used to track and select the itineraries, and speed and engine performance readings extracted directly from the on-board diagnostics system. Once the data are processed, the variations in several calibration parameters can be analyzed by comparing the base-case model with the measure application scenarios. Depending on the measure, the results show changes of up to 6.4% in maximum speed values, and reductions of nearly 15% in acceleration and braking levels, especially when eco-driving is applied.
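A core step in using Floating Car Data this way is turning the timestamped speed traces into the calibration quantities (maximum speed, acceleration and braking levels). A minimal sketch follows; the 0.5 m/s² event threshold and the synthetic trace are assumed placeholders, not values from the article.

```python
# Sketch: derive calibration quantities from a Floating Car Data speed trace.
# The 0.5 m/s^2 threshold separating acceleration/braking events from
# cruising is an assumed placeholder.
import numpy as np

def fcd_metrics(t, v):
    """t: timestamps (s), v: speeds (m/s) from GPS/OBD, same length."""
    a = np.diff(v) / np.diff(t)                   # finite-difference acceleration
    return {
        "v_max": float(v.max()),
        "accel_share": float(np.mean(a > 0.5)),   # share of samples accelerating hard
        "brake_share": float(np.mean(a < -0.5)),  # share of samples braking hard
    }

# Synthetic one-minute trace standing in for a processed FCD itinerary.
t = np.arange(0.0, 60.0, 1.0)
v = np.clip(10 + 4 * np.sin(t / 6.0), 0, None)
print(fcd_metrics(t, v))
```

Comparing these metrics between the base-case traces and the measure-application traces yields the relative changes the article reports.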
Abstract:
Perceptual voice evaluation according to the GRBAS scale is modelled using a linear combination of acoustic parameters calculated after a filter-bank analysis of the recorded voice signals. Modelling results indicate that, for breathiness and asthenia, more than 55% of the variance of the perceptual rates can be explained by such a model with only 4 latent variables. Moreover, the greater part of the explained variance can be attributed to only one or two latent variables, weighted similarly by all 5 listeners involved in the experiment. Correlation coefficients of around 0.6 between actual rates and model predictions are obtained.
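The mention of latent variables suggests a projection method such as partial least squares; a hedged sketch along those lines follows (the abstract does not name the estimator, so PLS and the synthetic data are stand-ins).

```python
# Hedged sketch: predicting a perceptual GRBAS rate from acoustic parameters
# with 4 latent variables, using PLS regression as a stand-in for whatever
# latent-variable model the study actually fits. Data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))                 # filter-bank acoustic parameters
y = X[:, :3] @ np.array([0.6, 0.3, 0.1]) + 0.3 * rng.normal(size=120)

pls = PLSRegression(n_components=4).fit(X, y)  # 4 latent variables, as in the paper
r = np.corrcoef(y, pls.predict(X).ravel())[0, 1]
print(f"correlation between rates and predictions: {r:.2f}")
```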
Abstract:
Explosions on buildings, whether accidental or intentional, are infrequent, but their effects can be catastrophic. It is desirable to be able to predict, with sufficient accuracy, the consequences of these dynamic actions on civil buildings, among which frame-type reinforced concrete structures are a common typology. This doctoral thesis explores different practical options for the numerical modeling and computational analysis of reinforced concrete structures subjected to explosions. Finite element models with explicit time integration are employed, which demonstrate their effective capacity to simulate the fast, highly nonlinear physical and structural phenomena that occur, predicting the damage caused both by the explosion itself and by the possible progressive collapse of the structure. The work has been carried out with the commercial finite element code LS-DYNA (Hallquist, 2006), in which several types of calculation model were developed, classified into two main types: 1) models based on continuum finite elements, in which the continuous medium is discretized directly by means of nodal displacement degrees of freedom; 2) models based on structural finite elements (beams and shells), which include kinematic hypotheses for linear or surface elements. These models are developed and discussed at three levels: 1) material behaviour, 2) the response of structural elements such as columns, beams and slabs, and 3) the response of complete buildings or significant parts of them.
Very detailed 3D continuum finite element models are developed, in which mass concrete and reinforcing steel are modeled separately. The concrete is represented with the CSCM constitutive model (Murray et al., 2007), which exhibits inelastic behaviour, with different responses in tension and compression, hardening, cracking and compression damage, and failure. The steel is represented with a bilinear elastic-plastic constitutive model with failure. The exact geometry of the concrete is modeled with 3D continuum finite elements, and each reinforcing bar with beam-type finite elements placed at its exact position within the concrete mass. The model mesh is built by superimposing the concrete continuum elements and the beam elements of the segregated reinforcement, which are forced to follow the deformation of the solid at each point by means of a penalty algorithm, thus reproducing the behaviour of reinforced concrete. In this work these models are referred to, for brevity, as continuum FE models. With these continuum FE models, the structural response of construction elements (columns, slabs and frames) to explosive actions is analysed. The models have also been compared with experimental results from tests on beams and slabs with different explosive charges, showing acceptable agreement and allowing calibration of the calculation parameters. However, such detailed models are not advisable for the analysis of complete buildings, since the large number of finite elements required raises their computational cost to the point of being unviable with current computing resources.
In addition, structural finite element models (beams and shells) are developed which, at a reduced computational cost, are able to reproduce the global behaviour of the structure with similar accuracy. Mass concrete and reinforcing steel are again modeled separately. The concrete is represented with the EC2 constitutive model (Hallquist et al., 2013), which also exhibits inelastic behaviour, with different responses in tension and compression, hardening, cracking and compression damage, and failure, and is used in shell-type finite elements. The steel is again represented with a bilinear elastic-plastic constitutive model with failure, using beam-type finite elements. An equivalent geometry of the concrete and the reinforcement is modeled, taking into account the relative position of the steel within the concrete mass. The two meshes are joined through common nodes, producing a joint response. These models are referred to, for brevity, as structural FE models. With the structural FE models, the same construction elements are simulated as with the continuum FE models, and by comparing their structural responses to explosions the former are calibrated, so that similar structural behaviour is obtained at a reduced computational cost.
It is verified that both the continuum FE models and the structural FE models are also accurate for the analysis of progressive collapse of a structure, and that they can be used for the simultaneous study of explosion damage and the subsequent collapse. To this end, formulations are included that account for self-weight, imposed loads, and contact between parts of the structure. Both models are validated against a full-scale test in which a module with six columns and two storeys collapses after the removal of one of its columns. The computational cost of the continuum FE model for this simulation is much higher than that of the structural FE model, making its application to complete buildings unviable, whereas the structural FE model yields a sufficiently accurate global response at an affordable cost. Finally, the structural FE models are used to analyse explosions on multi-storey buildings, and two explosive-charge scenarios are simulated for a complete building at a moderate computational cost.
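The penalty coupling between rebar beam nodes and the surrounding concrete solid, described above, can be sketched in one dimension: the rebar node feels a restoring force proportional to its offset from the solid's displacement field interpolated at its position. This is only a conceptual sketch of the constraint, with placeholder numbers, not LS-DYNA's implementation.

```python
# 1D conceptual sketch of penalty coupling of a rebar node to a solid
# element's displacement field (the actual LS-DYNA coupling algorithm is
# more elaborate; stiffness k_pen and geometry here are placeholders).
import numpy as np

x_nodes = np.array([0.0, 1.0])       # solid element end nodes (m)
u_nodes = np.array([0.002, 0.005])   # their current displacements (m)

x_rebar, u_rebar = 0.4, 0.0015       # rebar node position and displacement
k_pen = 1.0e9                        # penalty stiffness (N/m), placeholder

# Interpolate the solid displacement at the rebar position (linear shape fns).
xi = (x_rebar - x_nodes[0]) / (x_nodes[1] - x_nodes[0])
u_solid = (1 - xi) * u_nodes[0] + xi * u_nodes[1]

f_pen = k_pen * (u_solid - u_rebar)  # force pulling the rebar onto the solid field
# An equal and opposite force is distributed back to the solid nodes by (1-xi, xi).
print(f"penalty force on rebar node: {f_pen:.1f} N")
```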
Abstract:
Photogrammetry, as the science and technique of obtaining three-dimensional information about the object space from two-dimensional images, requires precise measurements, and in that context the geometric calibration of cameras occupies an important place. Knowledge of the internal geometry of the camera is fundamental to achieving greater precision in the measurements made. Aerial Photogrammetry uses metric cameras (manufactured exclusively for cartographic applications), which include photographic objectives with complex, high-quality lens systems. In Close Range Photogrammetry, however, non-metric cameras are being used more and more frequently, with lower-quality optics that demand a geometric calibration before or after each job. The calibration process involves three fundamental concepts: the camera model, the distortion model and the calibration method. The camera model is a mathematical model that approximates the original projective transformation to the physical reality of the lenses. That mathematical model includes a series of parameters, among which are those of the distortion model, which corrects the systematic errors of the image. Finally, the calibration method establishes how the parameters of the mathematical model are estimated and which optimization technique is employed. This thesis proposes the use of a two-dimensional calibration pattern that is displaced along the direction of the optical axis of the camera, thus providing three-dimensionality to the photographed scene. The pattern includes a large number of targets, which makes it possible to run tests with different geometric configurations. Taking the perspective projection (or pinhole) model as the camera model, tests are carried out with three different distortion models: the classical radial and tangential distortion model proposed by D.C. Brown, an approximation by Legendre polynomials, and a bicubic interpolation. From the combination of different geometric configurations and the most suitable distortion model, an optimal calibration methodology is established. To support this choice, a study of the precision obtained in the different tests is carried out, together with a stereoscopic check of a purpose-built test panel.
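Of the three distortion models compared, the classical Brown radial plus tangential model has a standard closed form; a sketch of applying it to normalized image coordinates follows (the coefficient values are placeholders, not calibrated results).

```python
# Brown's classical radial (k1, k2, k3) + tangential (p1, p2) distortion
# model, applied to normalized image coordinates. Coefficients are
# placeholder values, not results from the thesis.
def brown_distort(x, y, k1, k2, k3, p1, p2):
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

print(brown_distort(0.1, -0.05, k1=-0.2, k2=0.05, k3=0.0, p1=1e-4, p2=-5e-5))
```

Calibration then amounts to estimating these coefficients (together with the pinhole parameters) by minimizing the reprojection error over the pattern targets.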
Abstract:
In data assimilation, one prepares gridded data as the best possible estimate of the true initial state of a system by merging various measurements, irregularly distributed in space and time, with prior knowledge of the state given by a numerical model. Because it may improve forecasting or modeling and increase physical understanding of the systems considered, data assimilation now plays a very important role in studies of atmospheric and oceanic problems. Here, three examples are presented to illustrate the use of new types of observations and the ability to improve forecasting or modeling.
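In many assimilation schemes, this merging step is the best-linear-unbiased-estimate analysis update x_a = x_b + K(y − Hx_b) with gain K = BHᵀ(HBHᵀ + R)⁻¹. A small sketch with toy matrices (all values illustrative):

```python
# Toy analysis update (optimal interpolation / Kalman form): blend a model
# background x_b with observations y, given error covariances B and R.
import numpy as np

x_b = np.array([15.0, 14.0, 13.0])        # background state (e.g. temperatures)
B = 2.0 * np.eye(3)                       # background error covariance
H = np.array([[1.0, 0.0, 0.0],            # observe the 1st and 3rd grid points
              [0.0, 0.0, 1.0]])
y = np.array([16.2, 12.1])                # observations
R = 0.5 * np.eye(2)                       # observation error covariance

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
x_a = x_b + K @ (y - H @ x_b)                  # analysis state
print(x_a)
```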
Abstract:
Cytotoxic T cells recognize mosaic structures consisting of target peptides embedded within self major histocompatibility complex (MHC) class I molecules. This structure has been described in great detail for several peptide-MHC complexes. In contrast, how T-cell receptors recognize peptide-MHC complexes has been less well characterized. We have used a complete set of singly substituted analogs of a mouse MHC class I, Kk-restricted peptide, influenza hemagglutinin (Ha)255-262, to address the binding specificity of this MHC molecule. Using the same peptide-MHC complexes, we determined the fine specificity of two Ha255-262-specific, Kk-restricted T cells, and of a unique antibody, pSAN, specific for the same peptide-MHC complex. Independently, a model of the Ha255-262-Kk complex was generated through homology modeling and molecular mechanics refinement. The functional data and the model corroborated each other, showing that peptide residues 1, 3, 4, 6, and 7 were exposed on the MHC surface and recognized by the T cells. Thus, the majority, and perhaps all, of the side chains of the non-primary anchor residues may be available for T-cell recognition and contribute to the stringent specificity of T cells. A striking similarity between the specificity of the T cells and that of the pSAN antibody was found, and most of the peptide residues that could be recognized by the T cells could also be recognized by the antibody.
Abstract:
This work concerns the development of an automatic multiobjective calibrator for the SWMM (Storm Water Management Model), and the evaluation of some sources of uncertainty present in the calibration process, aiming at a satisfactory representation of the rainfall-runoff transformation. The code was written in the C language and applies the concepts of the multiobjective optimization method NSGA-II (Non-Dominated Sorting Genetic Algorithm) with controlled elitism, besides using the SWMM source code to determine the simulated flows. In parallel, a visual interface was also created to make the calibrator easier to use. The calibrator was tested on three different systems: a hypothetical system provided in the SWMM installation package; a small real system, called La Terraza, located in the municipality of Sierra Vista, Arizona (USA); and a larger system, the Córrego do Gregório watershed, located in the municipality of São Carlos (SP, Brazil). The results indicate that the calibrator generally achieves satisfactory efficiency, but is quite dependent on the quality of the field observations and on the input parameters chosen by the user. The tests demonstrated the importance of the choice of the events used in calibration, of setting adequate bounds on the values of the decision variables, of the choice of objective functions and, above all, of the quality and representativeness of the rainfall and streamflow monitoring data. It is concluded that the tests developed contribute to a deeper understanding of the processes involved in modeling and calibration, enabling advances in the reliability of modeling results.
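At the heart of an NSGA-II calibration loop is non-dominated sorting of candidate parameter sets against the chosen objective functions (for instance, errors in peak flow and in runoff volume between observed and simulated hydrographs; the specific objective pair here is illustrative). A minimal sketch of the dominance test and first-front extraction, independent of SWMM itself:

```python
# Minimal Pareto-front sketch for multiobjective calibration (the core idea
# behind NSGA-II's non-dominated sorting; the full algorithm adds ranking,
# crowding distance and controlled elitism). Objectives are minimized.
def dominates(a, b):
    """True if objective vector a dominates b (<= everywhere, < somewhere)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(objs):
    """Indices of the non-dominated candidates."""
    return [i for i, a in enumerate(objs)
            if not any(dominates(b, a) for j, b in enumerate(objs) if j != i)]

# Each tuple: (peak-flow error, volume error) for one SWMM parameter set.
objs = [(0.30, 0.05), (0.20, 0.25), (0.25, 0.08), (0.40, 0.40)]
print(first_front(objs))   # -> [0, 1, 2]; (0.40, 0.40) is dominated
```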
Abstract:
OBJECTIVE Type A aortic dissection is a life-threatening disease requiring immediate surgical treatment. With emerging catheter-based technologies, endovascular stent-graft implantation to treat aneurysms and dissections has become a standardized procedure. However, endovascular treatment of the ascending aorta remains challenging. We therefore designed an ascending aortic dissection model to allow simulation of endovascular treatment. METHODS Five formalin-fixed human aortas were prepared. The ascending aorta was opened semicircularly in the middle portion and the medial layer was separated from the intima. The intimal tube was readapted using running monofilament sutures. The preparations were assessed by 128-slice computed tomography. A bare-metal stent was implanted for thoracic endovascular aortic repair in 4 of the aortic dissection models. RESULTS Separation of the intimal and medial layers of the aorta was considered sufficient, because computed tomography showed a clear image of the dissection membrane in each aorta. The dissection was located 3.9 ± 1.4 cm proximally from the aortic annulus, with a length of 4.6 ± 0.9 cm. Before stent implantation, the mean distance from the intimal flap to the aortic wall was 0.63 ± 0.163 cm in the ascending aorta. After stent implantation, this distance decreased to 0.26 ± 0.12 cm. CONCLUSION This model of aortic dissection of the ascending human aorta was reproducible, with a comparable pathological and morphological appearance. The technique and model can be used to evaluate new stent-graft technologies to treat type A dissection and to facilitate training for surgeons.
Abstract:
High-impact, localized intense rainfall episodes represent a major socio-economic problem for societies worldwide, and at the same time these events are notoriously difficult to simulate properly in climate models. Here, the authors investigate how horizontal resolution and model formulation influence this issue by applying the HARMONIE regional climate model (HCLIM) with three different setups: two using convection parameterization at 15 and 6.25 km horizontal resolution (the latter within the "grey-zone" scale), with lateral boundary conditions provided by the ERA-Interim reanalysis and integrated over a pan-European domain, and one with explicit convection at 2 km resolution (HCLIM2) over the Alpine region, driven by the 15 km model. Seven summer seasons were sampled and validated against two high-resolution observational data sets. All HCLIM versions underestimate the number of dry days and hours by 20-40% and overestimate precipitation over the Alpine ridge. Also, only modest added value was found at "grey-zone" resolution. However, the single most important outcome is the substantial added value of HCLIM2 compared to the coarser model versions at sub-daily time scales. It better captures the local-to-regional spatial patterns of precipitation, reflecting a more realistic representation of local and meso-scale dynamics. Furthermore, the duration and spatial frequency of precipitation events, as well as extremes, are closer to observations. These characteristics are key ingredients in heavy rainfall events and associated flash floods, and the outstanding results using HCLIM in a convection-permitting setting are convincing and encourage further use of the model to study changes in such events in a changing climate.
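The dry-day statistic used in this validation is straightforward to compute once model and observed precipitation series are on a common grid; a sketch with the conventional 1 mm/day wet-day threshold follows (the paper's exact threshold, and the synthetic data, are assumptions here).

```python
# Sketch: dry-day frequency bias of a model precipitation series vs.
# observations. The 1 mm/day wet-day threshold is a conventional choice,
# assumed here rather than taken from the paper; the series are synthetic.
import numpy as np

def dry_day_fraction(precip_mm_per_day, threshold=1.0):
    precip = np.asarray(precip_mm_per_day)
    return np.mean(precip < threshold)

rng = np.random.default_rng(1)
obs = rng.gamma(shape=0.4, scale=6.0, size=92 * 7)     # 7 JJA seasons, synthetic
model = rng.gamma(shape=0.6, scale=4.0, size=92 * 7)   # stand-in with drizzle bias

bias = dry_day_fraction(model) / dry_day_fraction(obs) - 1
print(f"relative dry-day frequency bias: {bias:+.0%}")
```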
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06