978 results for prediction problems
Abstract:
One of the most important challenges in chemistry and materials science is the connection between the composition of a compound and its chemical and physical properties. In solids, these properties are greatly influenced by the crystal structure.

The prediction of hitherto unknown crystal structures under external conditions such as pressure and temperature is therefore one of the most important goals of theoretical chemistry. The stable structure of a compound is the global minimum of the potential energy surface, the high-dimensional representation of the enthalpy of the investigated system with respect to its structural parameters. Because the complexity of the problem grows exponentially with system size, it can only be solved via heuristic strategies.

Improvements to the artificial bee colony method, in which the local exploration of the potential energy surface is carried out by a large number of independent walkers, are developed and implemented. The result is an improved communication scheme between these walkers, which directs the search towards the most promising areas of the potential energy surface.

The minima hopping method uses short molecular dynamics simulations at elevated temperatures to direct the structure search from one local minimum of the potential energy surface to the next. A modification is developed and implemented in which the local information around each minimum is extracted and used to optimize the search direction. Our method uses this local information to increase the probability of finding new, lower local minima, which enhances the performance of the global optimization algorithm.

Hydrogen is a highly relevant system because of the possibility of finding a metallic phase, and even a superconductor with a high critical temperature. An application of a structure prediction method to SiH12 finds stable crystal structures in this material.
Additionally, it becomes metallic at relatively low pressures.
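The minima-hopping idea described above can be illustrated with a minimal sketch on a toy one-dimensional enthalpy landscape. This is not the authors' implementation: the short high-temperature MD escape is replaced here by a random perturbation, and the toy potential, step sizes and hop amplitude are all illustrative assumptions.

```python
import math
import random

def enthalpy(x):
    # Toy one-dimensional potential energy surface with several local minima;
    # the global minimum lies near x = -0.51.
    return 0.1 * x * x + math.sin(3.0 * x)

def local_minimize(f, x, step=2e-3, iters=2000):
    # Crude steepest-descent relaxation using a finite-difference gradient,
    # standing in for the local geometry optimization of a structure.
    for _ in range(iters):
        grad = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6
        x -= step * grad
    return x

def global_search(f, x0, hops=200, kick=1.5, seed=0):
    # Minima-hopping-style loop: relax to a minimum, perturb ("hop"), relax
    # again, and keep the move only if a lower minimum is found.
    rng = random.Random(seed)
    x = local_minimize(f, x0)
    best_x, best_e = x, f(x)
    for _ in range(hops):
        trial = local_minimize(f, x + rng.uniform(-kick, kick))
        if f(trial) < best_e:
            best_x, best_e = trial, f(trial)
            x = trial
    return best_x, best_e

best_x, best_e = global_search(enthalpy, x0=3.0)
```

Starting from x0 = 3.0, the search has to escape two higher-lying basins before reaching the global minimum, which is the situation the improved walker communication and search-direction optimization are meant to accelerate.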
Abstract:
The main objective of this project is to experimentally demonstrate geometrical nonlinear phenomena due to large displacements during resonant vibration of composite materials, and to explain the problem associated with fatigue prediction at resonant conditions. Three different composite blades were designed and manufactured for testing, differing in their composite layup (unidirectional, cross-ply, and angle-ply). The manual envelope bagging technique is explained as applied to the actual manufacturing of the components; problems encountered and their solutions are detailed. Forced response tests of the first flexural, first torsional, and second flexural modes were performed by means of a unique contactless excitation system which induced vibration using a pulsed airflow. Vibration intensity was acquired with a Polytec LDV system. The first flexural mode is found to be completely linear irrespective of the vibration amplitude. The first torsional mode exhibits a generally nonlinear softening behaviour which, interestingly, is coupled with a hardening behaviour for the unidirectional layup. The second flexural mode shows a hardening nonlinear behaviour for both the unidirectional and angle-ply blades, whereas it is slightly softening for the cross-ply layup. Using the same equipment as in the forced response analyses, free decay tests were performed at different airflow intensities. A Discrete Fourier Transform over the entire decay and a sliding DFT were computed to visualise the presence of nonlinear superharmonics in the decay signal and to determine when they were damped out of the vibration over the decay time. Linear modes exhibit an exponential decay, while nonlinearities are associated with a dry-friction damping phenomenon which tends to increase with increasing amplitude. The damping ratio is derived from the logarithmic decrement for the exponential branch of the decay.
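The last step, deriving the damping ratio from the logarithmic decrement of the exponential branch, can be sketched as follows. The peak amplitudes below are synthetic, generated from an assumed damping ratio of 0.01 purely to illustrate the recovery.

```python
import math

def damping_from_peaks(peaks):
    # Logarithmic decrement from successive positive peaks of a free decay,
    # delta = ln(x_i / x_{i+1}), averaged over all consecutive pairs.
    decs = [math.log(a / b) for a, b in zip(peaks, peaks[1:])]
    delta = sum(decs) / len(decs)
    # Damping ratio for a linear, viscously damped mode:
    # zeta = delta / sqrt(4*pi^2 + delta^2).
    zeta = delta / math.sqrt(4.0 * math.pi ** 2 + delta ** 2)
    return delta, zeta

# Synthetic exponential decay with a known damping ratio (illustrative only).
zeta_true = 0.01
delta_true = 2.0 * math.pi * zeta_true / math.sqrt(1.0 - zeta_true ** 2)
peaks = [math.exp(-delta_true * n) for n in range(10)]
delta, zeta = damping_from_peaks(peaks)
```

For a purely exponential branch the recovered ratio matches the assumed one exactly; on measured decays with dry-friction effects, only the exponential tail would be used, as the abstract describes.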
Abstract:
BACKGROUND Many preschool children have wheeze or cough, but only some have asthma later. Existing prediction tools are difficult to apply in clinical practice or exhibit methodological weaknesses. OBJECTIVE We sought to develop a simple and robust tool for predicting asthma at school age in preschool children with wheeze or cough. METHODS From a population-based cohort in Leicestershire, United Kingdom, we included 1- to 3-year-old subjects seeing a doctor for wheeze or cough and assessed the prevalence of asthma 5 years later. We considered only noninvasive predictors that are easy to assess in primary care: demographic and perinatal data, eczema, upper and lower respiratory tract symptoms, and family history of atopy. We developed a model using logistic regression, avoided overfitting with the least absolute shrinkage and selection operator penalty, and then simplified it to a practical tool. We performed internal validation and assessed its predictive performance using the scaled Brier score and the area under the receiver operating characteristic curve. RESULTS Of 1226 symptomatic children with follow-up information, 345 (28%) had asthma 5 years later. The tool consists of 10 predictors yielding a total score between 0 and 15: sex, age, wheeze without colds, wheeze frequency, activity disturbance, shortness of breath, exercise-related and aeroallergen-related wheeze/cough, eczema, and parental history of asthma/bronchitis. The scaled Brier scores for the internally validated model and tool were 0.20 and 0.16, and the areas under the receiver operating characteristic curves were 0.76 and 0.74, respectively. CONCLUSION This tool represents a simple, low-cost, and noninvasive method to predict the risk of later asthma in symptomatic preschool children, which is ready to be tested in other populations.
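The scaled Brier score used to report the tool's performance can be computed as below. The probabilities and outcomes are made-up illustrations, not study data; the definition (1 minus the ratio of the model's Brier score to that of a prevalence-only reference) is the standard one.

```python
def brier_score(probs, outcomes):
    # Mean squared difference between predicted probabilities and 0/1 outcomes.
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

def scaled_brier(probs, outcomes):
    # Scaled Brier score: 1 - Brier / Brier_ref, where the reference model
    # predicts the overall prevalence for everyone.
    # 0 = no better than prevalence alone, 1 = perfect prediction.
    prevalence = sum(outcomes) / len(outcomes)
    ref = brier_score([prevalence] * len(outcomes), outcomes)
    return 1.0 - brier_score(probs, outcomes) / ref

# Hypothetical predictions for four children (illustrative values only).
score = scaled_brier([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
```

A score of 0.20, as reported for the internally validated model, thus means a 20% reduction in squared prediction error relative to always predicting the 28% asthma prevalence.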
Abstract:
BACKGROUND Timing is critical for efficient hepatitis A vaccination in high-endemicity areas, as high levels of maternal IgG antibodies against the hepatitis A virus (HAV) present in the first year of life may impede the vaccine response. OBJECTIVES To describe the kinetics of the decline of anti-HAV maternal antibodies, and to estimate the time of complete loss of maternal antibodies in infants in León, Nicaragua, a region in which almost all mothers are anti-HAV seropositive. METHODS We collected cord blood samples from 99 healthy newborns together with 49 corresponding maternal blood samples, as well as further blood samples at 2 and 7 months of age. Anti-HAV IgG antibody levels were measured by enzyme immunoassay (EIA). We predicted the time when antibodies would fall below 10 mIU/ml, the presumed lowest level of seroprotection. RESULTS Seroprevalence was 100% at birth (GMC 8392 mIU/ml); maternal and cord blood antibody concentrations were similar. The maternal antibody levels of the infants decreased exponentially with age, and the half-life of the maternal antibody was estimated to be 40 days. The relationship between the antibody concentration at birth and the time until full waning was described as: critical age (months) = 3.355 + 1.969 × log10(antibody level at birth). The survival model estimated that loss of passive immunity will have occurred in 95% of infants by the age of 13.2 months. CONCLUSIONS Complete waning of maternal anti-HAV antibodies may take until early in the second year of life. The formula derived here, which relates maternal or cord blood antibody concentrations to the age at which passive immunity is lost, may be used to determine the optimal age for childhood HAV vaccination.
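The decay model and the waning formula reported above can be written out directly. Applying the formula to the cohort's geometric mean concentration at birth (8392 mIU/ml) predicts loss of seroprotection at roughly 11 months, of the same order as the reported 13.2-month estimate for 95% of infants.

```python
import math

def critical_age_months(ab_level_at_birth):
    # Age at which maternal anti-HAV antibodies are predicted to fall below
    # 10 mIU/ml, per the formula derived in the study:
    # critical age (months) = 3.355 + 1.969 * log10(Ab level at birth).
    return 3.355 + 1.969 * math.log10(ab_level_at_birth)

def antibody_level(level_at_birth, age_days, half_life_days=40.0):
    # Exponential decay of maternal antibody with the reported 40-day half-life.
    return level_at_birth * 0.5 ** (age_days / half_life_days)

age = critical_age_months(8392.0)       # roughly 11.1 months
level_40d = antibody_level(8392.0, 40.0)  # exactly half of the birth level
```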
Abstract:
Radon plays an important role in human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimating the mean radon exposure of the Swiss population: model-based predictions at the individual level, and measurement-based predictions based on measurements aggregated at the municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted with the population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model-based and the measurement-based predictions provide similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing the exposure distribution in a population. The model-based approach allows the prediction of radon levels at specific sites, which is needed in an epidemiological study, and its results do not depend on how the measurement sites were selected.
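The aggregation step of the measurement-based approach, weighting municipality mean radon levels by population size, reduces to a weighted average. The municipality values below are hypothetical placeholders, not Swiss data.

```python
def population_weighted_mean(mean_radon, population):
    # Measurement-based estimate: municipality mean radon levels (Bq/m^3)
    # weighted by the population size of each municipality.
    total = sum(population)
    return sum(r * p for r, p in zip(mean_radon, population)) / total

# Three hypothetical municipalities (illustrative values only).
exposure = population_weighted_mean([60.0, 90.0, 120.0],
                                    [50000, 30000, 20000])
```

In the study the municipality means are additionally corrected for the average floor distribution before weighting; that correction is omitted here.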
Abstract:
Theory: Interpersonal factors play a major role in causing and maintaining depression. It is unclear, however, to what degree significant others of the patient need to be involved to characterize the patient's interpersonal style. Our study therefore investigated how impact messages, as perceived by the patients' significant others, add to the prediction of psychotherapy process and outcome above and beyond routine assessments and therapist factors. Method: 143 outpatients with major depressive disorder were treated by 24 therapists with CBT or Exposure-Based Cognitive Therapy. Interpersonal style was measured pre- and post-therapy with the informant-based Impact Message Inventory (IMI), in addition to the self-report Inventory of Interpersonal Problems (IIP-32). Indicators of the patients' dominance and affiliation, as well as interpersonal distress, were calculated from these measures. Depressive and general symptomatology was assessed pre- and post-therapy and at three-month follow-up, as well as by process measures after every session. Results: Although reports by significant others did not add significantly to the prediction of the early therapeutic alliance, central mechanisms of change, or post-therapy outcome when therapist factors were included, the best predictor of outcome three months after therapy was an increase in dominance as perceived by significant others. Conclusions: The patients' significant others appear to provide important additional information about the patients' interpersonal style and should therefore be included in the diagnostic process. Moreover, practitioners should specifically target interpersonal change as a potential mechanism of change in psychotherapy for depression.
Abstract:
Determining the isotopic content of spent nuclear fuel as accurately as possible is gaining importance due to its safety and economic implications. Since higher burnups are nowadays achievable through increased initial enrichments, more efficient burnup strategies within the reactor cores, and the extension of irradiation periods, establishing and improving computational methodologies is mandatory in order to carry out reliable criticality and isotopic prediction calculations. Several codes (WIMSD5, SERPENT 1.1.7, SCALE 6.0, MONTEBURNS 2.0 and MCNP-ACAB) and methodologies are tested here and compared against consolidated benchmarks (an OECD/NEA pin cell moderated with light water) with the purpose of validating them and reviewing the state of isotopic prediction capabilities. These preliminary comparisons suggest what can generally be expected of these codes when applied to real problems. In the present paper, SCALE 6.0 and MONTEBURNS 2.0 are used to model the reported geometries, material compositions and burnup history of cycles 7-11 of the Spanish Vandellós II reactor and to reproduce measured isotopic compositions after irradiation and decay times. We analyze comparisons between the measurements and the results of each code for several levels of geometrical modelling detail, using different libraries and cross-section treatment methodologies. The power and flux normalization method implemented in MONTEBURNS 2.0 is discussed, and a new normalization strategy is developed to deal with the selected problems; further options are included to reproduce the temperature distributions of the materials within the fuel assemblies, and a new code is introduced to automate series of simulations and manage material information between them. In order to have a realistic confidence level in the prediction of spent fuel isotopic content, we have estimated uncertainties using our MCNP-ACAB system. This depletion code, which combines the neutron transport code MCNP and the inventory code ACAB, propagates uncertainties to the nuclide inventory, assessing the potential impact of uncertainties in the basic nuclear data: cross sections, decay data and fission yields.
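The idea behind propagating nuclear-data uncertainties into the inventory can be sketched in a drastically simplified form: a single nuclide depleted under constant flux, with its cross section sampled by Monte Carlo. This is nothing like the actual MCNP-ACAB coupling; the one-group model, normal distribution and all numbers are illustrative assumptions.

```python
import math
import random

def depleted_density(n0, sigma_barn, flux, time_s):
    # Single-nuclide depletion N(t) = N0 * exp(-sigma * phi * t),
    # with sigma in barns (1 barn = 1e-24 cm^2).
    return n0 * math.exp(-sigma_barn * 1e-24 * flux * time_s)

def propagate_cross_section_uncertainty(n0, sigma_barn, rel_unc, flux, time_s,
                                        samples=20000, seed=1):
    # Monte Carlo propagation: sample the cross section from a normal
    # distribution and collect the spread of the final inventory.
    rng = random.Random(seed)
    values = [depleted_density(n0, rng.gauss(sigma_barn, rel_unc * sigma_barn),
                               flux, time_s)
              for _ in range(samples)]
    mean = sum(values) / samples
    var = sum((v - mean) ** 2 for v in values) / (samples - 1)
    return mean, math.sqrt(var)

# Hypothetical 100-barn absorber, 5% cross-section uncertainty,
# one year (3.15e7 s) in a 1e14 n/cm^2/s flux.
mean, std = propagate_cross_section_uncertainty(1.0, 100.0, 0.05, 1e14, 3.15e7)
```

The relative spread of the final density is roughly the cross-section uncertainty scaled by the accumulated fluence exponent, which is the kind of sensitivity information such a propagation delivers per nuclide.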
Abstract:
Nowadays, Computational Fluid Dynamics (CFD) solvers are widely used within industry to model fluid flow phenomena. Several fluid flow model equations have been employed in recent decades to simulate and predict the forces acting, for example, on different aircraft configurations. Computational time and accuracy depend strongly on the fluid flow model equation and the spatial dimension of the problem considered. While simple models based on perfect flows, like panel methods or potential flow models, can be very fast to solve, they usually suffer from poor accuracy when simulating real (transonic, viscous) flows. On the other hand, more complex models such as the full Navier-Stokes equations provide high-fidelity predictions, but at a much higher computational cost. Thus, a good compromise between accuracy and computational time has to be found for engineering applications. A discretisation technique widely used within industry is the so-called Finite Volume approach on unstructured meshes. This technique spatially discretises the flow motion equations onto a set of elements which form a mesh, a discrete representation of the continuous domain. Using this approach, for a given flow model equation, the accuracy and computational time mainly depend on the distribution of the nodes forming the mesh. Therefore, a good compromise between accuracy and computational time might be obtained by carefully defining the mesh. However, defining an optimal mesh for complex flows and geometries requires a very high level of expertise in fluid mechanics and numerical analysis, and in most cases it is impossible simply to guess which regions of the computational domain will most affect the accuracy. Thus, it is desirable to have an automated remeshing tool, which is more flexible with unstructured meshes than with their structured counterpart. However, the adaptive methods currently in use still leave an open question: how to efficiently drive the adaptation?
Pioneering sensors based on flow features generally suffer from a lack of reliability, so in the last decade more effort has been put into developing numerical-error-based sensors, such as adjoint-based adaptation sensors. While very efficient at adapting meshes for a given functional output, the latter method is very expensive, as it requires solving a dual set of equations and computing the sensor on an embedded mesh. It would therefore be desirable to develop a more affordable numerical error estimation method. The current work aims at estimating the truncation error, which arises when discretising a partial differential equation and consists of the higher-order terms neglected in the construction of the numerical scheme. The truncation error provides very useful information, as it is strongly related to the flow model equation and its discretisation. On the one hand, it is a very reliable measure of the quality of the mesh, and therefore very useful for driving a mesh adaptation procedure. On the other hand, it is strongly linked to the flow model equation, so that a careful estimation actually gives information on how well a given equation is solved, which may be useful in the context of τ-extrapolation or zonal modelling. The work is organized as follows: Chap. 1 contains a short review of mesh adaptation techniques as well as numerical error prediction. In the first section, Sec. 1.1, the basic refinement strategies are reviewed and the main contributions to structured and unstructured mesh adaptation are presented. Sec. 1.2 introduces the definitions of the errors encountered when solving Computational Fluid Dynamics problems and reviews the most common approaches to predicting them. Chap. 2 is devoted to the mathematical formulation of truncation error estimation in the context of the finite volume methodology, as well as a complete verification procedure.
Several features are studied, such as the influence of grid non-uniformities, non-linearity, boundary conditions and non-converged numerical solutions. This verification part has been submitted and accepted for publication in the Journal of Computational Physics. Chap. 3 presents a mesh adaptation algorithm based on truncation error estimates and compares the results to a feature-based and an adjoint-based sensor (in collaboration with Jorge Ponsín, INTA). Two- and three-dimensional cases relevant for validation in the aeronautical industry are considered. This part has been submitted and accepted in the AIAA Journal. An extension to the Reynolds-Averaged Navier-Stokes equations is also included, where τ-estimation-based mesh adaptation and τ-extrapolation are applied to viscous wing profiles. The latter has been submitted to the Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering. Keywords: mesh adaptation, numerical error prediction, finite volume
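The truncation error discussed in this abstract can be seen in its simplest setting with a finite-difference sketch (this is not the thesis's finite-volume estimator): for a second-order central difference, the neglected Taylor terms give a leading error of (h²/6)·f'''(x), so halving h should divide the error by about four.

```python
import math

def central_diff(f, x, h):
    # Second-order central-difference approximation of f'(x).
    return (f(x + h) - f(x - h)) / (2.0 * h)

def truncation_error(f, dfdx, x, h):
    # Difference between the discrete operator and the exact derivative;
    # Taylor expansion gives the leading term (h**2 / 6) * f'''(x).
    return central_diff(f, x, h) - dfdx(x)

x = 0.7
e_coarse = truncation_error(math.sin, math.cos, x, 0.1)
e_fine = truncation_error(math.sin, math.cos, x, 0.05)
ratio = e_coarse / e_fine        # close to 4 for a second-order scheme
predicted = (0.1 ** 2 / 6.0) * (-math.cos(x))  # leading term, f''' = -cos
```

Regions where the measured truncation error is large are exactly where the estimate-driven adaptation of Chap. 3 would concentrate mesh points.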
Abstract:
Fuel cycles are designed with the aim of obtaining the highest possible amount of energy. Since higher burnup values are now reached, it is necessary to improve disposal designs, traditionally based on the conservative assumption that they contain fresh fuel. The criticality calculations involved must take burnup into account, making the most of the experimental and computational capabilities developed, respectively, to measure and predict the isotopic content of spent nuclear fuel. These high-burnup scenarios encourage a review of the computational tools to find possible weaknesses in the nuclear data libraries, in the methodologies applied, and in their range of applicability. Experimental measurements of spent nuclear fuel provide the perfect framework for benchmarking the best-known and most established codes in both industry and academic research. For the present paper, SCALE 6.0/TRITON and MONTEBURNS 2.0 have been chosen to follow the isotopic content of four samples irradiated in the Spanish Vandellós-II pressurized water reactor up to burnup values ranging from 40 GWd/MTU to 75 GWd/MTU. By comparison with the experimental data reported for these samples, we can probe the applicability of these codes to high-burnup problems. We have developed new computational tools within MONTEBURNS 2.0 which make it possible to handle an irradiation history that includes geometrical and positional changes of the samples within the reactor core. This paper describes the irradiation scenario against which the mentioned codes and our capabilities are to be benchmarked.
Abstract:
Crowd-induced dynamic loading in large structures, such as gymnasiums or stadiums, is usually modelled as a series of harmonic loads defined in terms of their Fourier coefficients. Different values of these coefficients, obtained from full-scale measurements, can be found in design codes. Recently, an alternative has been proposed, based on the random generation of load time histories that take into account the phase lag among individuals within the crowd. This paper presents the testing done on a structure designed to be a gymnasium. Two series of dynamic tests were performed on the gym slab. For the first test an electrodynamic shaker was placed at several locations, and during the second one people located inside a marked area bounced and jumped guided by different metronome rates. A finite element model (FEM) is presented, and a comparison of numerically predicted and experimentally observed vibration modes and frequencies has been used to assess its validity. The second group of measurements is then compared with predictions made using the FEM model and three alternatives for crowd-induced load modelling.
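The harmonic-load representation mentioned above is usually written F(t) = G·(1 + Σᵢ aᵢ·sin(2π·i·f·t + φᵢ)), with G the static crowd weight, f the jumping frequency and aᵢ the Fourier coefficients. A minimal sketch follows; the coefficient values are illustrative placeholders, not values taken from any design code.

```python
import math

def crowd_load(t, weight_n, jump_freq_hz, fourier_coeffs, phases=None):
    # Crowd-induced load as the static weight plus a series of harmonics of
    # the jumping frequency:
    # F(t) = G * (1 + sum_i a_i * sin(2*pi*i*f*t + phi_i)).
    phases = phases or [0.0] * len(fourier_coeffs)
    harmonics = sum(a * math.sin(2.0 * math.pi * (i + 1) * jump_freq_hz * t + phi)
                    for i, (a, phi) in enumerate(zip(fourier_coeffs, phases)))
    return weight_n * (1.0 + harmonics)

# One 700 N person jumping at 2 Hz, three illustrative harmonic coefficients.
f_static = crowd_load(0.0, 700.0, 2.0, [1.7, 1.0, 0.4])
f_peak = crowd_load(0.125, 700.0, 2.0, [1.7, 1.0, 0.4])
```

The randomly generated load histories discussed in the abstract amount to drawing the phases φᵢ (and coefficients) per individual and summing the resulting F(t) over the crowd.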
Abstract:
Passenger comfort in terms of acoustic noise levels is a key train design parameter, especially relevant in high-speed trains, where aerodynamic noise is dominant. The aim of the work described in this paper is to make progress in the understanding of the flow field around high-speed trains in an open field, a subject of interest to many researchers with direct industrial applications. The critical configuration of the train inside a tunnel is also studied, in order to evaluate the external loads arising from the noise sources of the train. The airborne noise coming from the wheels (wheel-rail interaction), which is the dominant source in a certain range of frequencies, is also investigated from both the numerical and experimental points of view. The numerical prediction of noise in the interior of the train is a very complex problem involving many different parameters: complex geometries and materials, different noise sources, complex interactions among those sources, a broad range of frequencies over which the phenomenon is important, etc. In recent years a research plan has been under development at IDR/UPM (Instituto de Microgravedad Ignacio Da Riva, Universidad Politécnica de Madrid) involving numerical simulations, wind tunnel tests and full-scale tests to address this problem. Comparison of numerical simulations with experimental data is a key factor in this process.
Abstract:
In hydrodynamics, sloshing refers to the motion of the free surface of a liquid inside a container subjected to external forces and perturbations, with large free-surface deformations. The liquid motion dynamics can generate considerable hydrodynamic loads which may affect the structural integrity of the container and the stability of the vehicle that carries it. The prediction of these dynamic loads is a major challenge for engineers working on the design of both the container and the vehicle. The sloshing phenomenon has been extensively investigated mathematically, numerically and experimentally. The experimental approach has been the most fruitful so far, due to the complexity of the problem, for which numerical and mathematical models are still incapable of predicting the sloshing loads with sufficient speed and accuracy. Sloshing flows are usually characterised by the presence of multiphase (gas-liquid) interaction and turbulence. Reducing the complexity of the sloshing problem as much as possible without losing its essence is the main challenge of this PhD thesis, in which experimental work on selected canonical cases is presented and documented in order to better understand the phenomenon and to provide useful information for numerical validations.
Liquid sloshing plays a key role in liquefied natural gas (LNG) maritime transportation. The LNG market is growing at more than three times the rate of the oil and traditional gas markets. Engineers working in research laboratories and companies are continuously looking for efficient and safe ways of containing, transferring and transporting the liquefied gas. LNG carrying vessels (LNGC) have evolved from a few 75000 m3 vessels thirty years ago to a huge fleet of ships with a capacity of 140000 m3 nowadays, with an increasing number of 175000 m3 and 250000 m3 units. The concept of FLNG (Floating Liquefied Natural Gas) has appeared recently. An FLNG unit is a high value-added vessel which can solve the problems of production, treatment, liquefaction and storage of the LNG, because the vessel is equipped with an extraction and liquefaction facility. The LNG is transferred from the FLNG to the LNGC in the open sea. The combination of partial fillings and wave-induced motions may generate sloshing flows inside both the LNGC and the FLNG tanks. This work has dealt with sloshing problems from an experimental and statistical point of view. A series of tasks have been carried out: 1. A sloshing rig has been set up. It allows for testing tanks with one degree of freedom angular motion. The rig has been instrumented to measure motions and pressures and to conduct video and image recording. 2. Regular motion impacts inside a rectangular-section LNGC tank model have been studied, with forced motion tests, in order to characterise the phenomenon from a statistical point of view by assessing the repeatability and practical ergodicity of the problem. 3. The regular motion analysis has been extended to an irregular motion framework in order to reproduce more realistic scenarios. 4. 
The coupled motion of a single degree of freedom angular motion system, excited by an external moment and affected by the fluid moment and the mechanical energy dissipation induced by sloshing inside the tank, has been investigated. 5. The last task of the thesis has been to conduct an experimental investigation focused on the strong interaction between a sloshing flow in a rectangular section of an LNGC tank subjected to regular excitation and an elastic body clamped to the tank. It is thus a fluid-structure interaction problem.
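The repeatability and practical-ergodicity assessment in task 2 can be illustrated with a toy computation: compare per-run statistics of impact-pressure peaks against the ensemble statistics over repeated runs. Everything below is synthetic; the log-normal peak model and the run and impact counts are placeholders, not the thesis data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for measured impact-pressure maxima: n_runs repeats
# of the same forced regular-motion test, n_impacts wave impacts per run.
# Real data would come from the rig's pressure sensors.
n_runs, n_impacts = 20, 200
# Impact-pressure peaks are often heavy-tailed; a log-normal is used here
# purely as a placeholder distribution.
peaks = rng.lognormal(mean=0.0, sigma=0.5, size=(n_runs, n_impacts))

# Repeatability: spread of per-run sample means relative to the ensemble mean.
run_means = peaks.mean(axis=1)
ensemble_mean = peaks.mean()
rel_spread = run_means.std() / ensemble_mean

# Practical ergodicity check: the average over one long run should approach
# the ensemble average over many repeated runs.
single_run_mean = peaks[0].mean()
print(f"ensemble mean      : {ensemble_mean:.3f}")
print(f"run-mean rel spread: {rel_spread:.3f}")
print(f"single-run mean    : {single_run_mean:.3f}")
```

For a stationary, practically ergodic process the relative spread of run means shrinks as the number of impacts per run grows, which is the kind of evidence the forced-motion tests are designed to produce.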
Abstract:
A method is presented for computing the average solution of problems that are too complicated for adequate resolution, but where information about the statistics of the solution is available. The method involves computing average derivatives by interpolation based on linear regression, and updating a measure constrained by the available crude information. Examples are given.
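As a rough illustration of estimating average derivatives by linear regression against prior statistics, the toy computation below regresses the derivative of a single resolved component of a linear system on its resolved value, and uses the fitted slope to advance the averaged solution. The system, the Gaussian prior measure and all names are invented for illustration; the paper's actual scheme, including the measure update, is more involved.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "full" dynamics du/dt = A u with u in R^d, built to be stable.
# We pretend we can only afford to track the first component on average.
d = 10
B = rng.normal(size=(d, d)) / np.sqrt(d)
A = -(B @ B.T)

# Prior statistics: samples of u drawn from an assumed Gaussian measure.
samples = rng.normal(size=(5000, d))

# "Average derivative by interpolation based on linear regression":
# regress the derivative of the resolved component on its resolved value.
x = samples[:, 0]
y = (samples @ A.T)[:, 0]          # f(u)_0 evaluated on each prior sample
slope, intercept = np.polyfit(x, y, 1)

# Advance the averaged resolved variable with the regressed derivative.
dt, steps = 0.01, 100
xbar = 1.0
for _ in range(steps):
    xbar += dt * (slope * xbar + intercept)
print(f"regressed slope: {slope:.3f}, averaged x after t=1: {xbar:.3f}")
```

Because the prior components are independent here, the regression recovers the diagonal entry A[0, 0], so the averaged component decays as it should; in a nonlinear problem the regression would instead supply a statistical closure for the unresolved terms.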
Abstract:
Mild traumatic brain injury (mTBI) is a common injury and a significant proportion of those affected report chronic symptoms. This study investigated prediction of post-concussion symptoms using an Emergency Department (ED) assessment that examined neuropsychological and balance deficits and pain severity of 29 concussed individuals. Thirty participants with minor orthopedic injuries and 30 ED visitors were recruited as control subjects. Concussed and orthopedically injured participants were followed up by telephone at one month to assess symptom severity. In the ED, concussed subjects performed worse on some neuropsychological tests and had impaired balance compared to controls. They also reported significantly more post-concussive symptoms at follow-up. Neurocognitive impairment, pain and balance deficits were all significantly correlated with severity of post-concussion symptoms. The findings suggest that a combination of variables assessable in the ED may be useful in predicting which individuals will suffer persistent post-concussion problems.
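Purely as an illustration of how such ED variables could be combined into a single risk prediction, the sketch below fits a logistic regression to synthetic data whose feature names mirror the study's predictors (neurocognitive score, balance deficit, pain severity). The data, effect sizes and model choice are invented and do not come from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic, z-scored ED assessment scores for n hypothetical patients.
n = 300
cognition = rng.normal(size=n)   # neuropsychological test performance
balance   = rng.normal(size=n)   # balance (higher = better)
pain      = rng.normal(size=n)   # pain severity (higher = worse)
X = np.column_stack([np.ones(n), cognition, balance, pain])

# Assumed ground truth: worse cognition/balance and more pain raise the
# odds of persistent symptoms at one-month follow-up.
true_w = np.array([-0.5, -1.0, -0.8, 1.2])
p = 1 / (1 + np.exp(-X @ true_w))
y = rng.random(n) < p            # 1 = persistent post-concussion symptoms

# Fit logistic regression by plain gradient descent (no external deps).
w = np.zeros(4)
for _ in range(2000):
    grad = X.T @ (1 / (1 + np.exp(-X @ w)) - y) / n
    w -= 0.5 * grad

print("fitted weights (intercept, cognition, balance, pain):", np.round(w, 2))
```

The sign pattern of the fitted weights is what a multivariable ED screening model would report; the study's actual analysis used correlations with symptom severity rather than this particular model.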
Abstract:
The main aim of this paper is to provide a tutorial on regression with Gaussian processes. We start from Bayesian linear regression, and show how by a change of viewpoint one can see this method as a Gaussian process predictor based on priors over functions, rather than on priors over parameters. This leads into a more general discussion of Gaussian processes in section 4. Section 5 deals with further issues, including hierarchical modelling and the setting of the parameters that control the Gaussian process, the covariance functions for neural network models and the use of Gaussian processes in classification problems.
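The function-space view described above condenses to a few lines of linear algebra: with a squared-exponential covariance and Gaussian observation noise, the predictive mean and variance at test inputs follow from a single Cholesky solve. The hyperparameter values below are fixed by hand for illustration; the tutorial discusses how to set them.

```python
import numpy as np

# Squared-exponential (RBF) covariance for 1-D inputs; length scale and
# signal variance are illustrative choices, not values from the tutorial.
def rbf(a, b, ell=1.0, sf=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(3)
X = np.linspace(0, 5, 20)
y = np.sin(X) + 0.1 * rng.normal(size=X.size)   # noisy observations
Xs = np.linspace(0, 5, 50)                       # test inputs
sn = 0.1                                         # assumed noise std

# GP posterior: condition the prior over functions on the training data.
K = rbf(X, X) + sn**2 * np.eye(X.size)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
Ks = rbf(X, Xs)
mu = Ks.T @ alpha                                  # predictive mean
v = np.linalg.solve(L, Ks)
var = np.diag(rbf(Xs, Xs)) - np.sum(v**2, axis=0)  # predictive variance

print(f"max |mu - sin(x*)| = {np.max(np.abs(mu - np.sin(Xs))):.3f}")
```

This is exactly the "priors over functions" predictor: no explicit weight vector appears, only covariances between training and test inputs, which is what makes the change of viewpoint from Bayesian linear regression useful.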