927 results for Viscous Dampers, Five Step Method, Equivalent Static Analysis Procedure, Yielding Frames, Passive Energy Dissipation Systems
Abstract:
This research analyzes the discourse of teachers of ancient Oriental arts, namely Yoga, Tai Chi Chuan, Kung Fu, Lian Gong and Qwan Ki Do, and reveals the meaning these subjects attribute to the health benefits of practices that are already considered alternatives to conventional health treatment. The research uses the phenomenological method, specifically the situated-phenomenon method. The method first provides a pre-reflection on the Oriental arts, presenting, in a non-systematic way, curiosities about their various aspects: history, main teaching methods, styles, the character of struggle, their links to health, the benefits of their practices, movements, philosophy, concepts, ways of acting, roots and fundamentals. In this article the pre-reflection is omitted in order to meet the congress page limit. Instead, we concentrate on the next step of the research and present the phenomenon situated in the experience of those who have lived it, in this case the five teachers of the arts mentioned above. The method performs an individual (idiographic) analysis of the discourses of the five teachers, one from each Oriental art, and then conducts the general (nomothetic) analysis. The result is an analysis of the generalities, convergences, divergences and individualities of the meanings expressed by the subjects, followed by a discussion of these data and a synthesis. The analyses reveal that Eastern practices such as the five analyzed here, those involving aspects of struggle and concentration, are excellent allies of health and well-being. Understanding the philosophical aspect is inherent in the practice of the movements; the character of struggle, transmitted through knowledge of the arts' origins and roots, generates a life philosophy of self-help, self-empowerment and tranquility that prepares practitioners to overcome everyday tensions, disputes and challenges. The discourses also reveal that the psychophysical aspects of disease are real, because the practice of the movements and physical drills of these Oriental arts has enabled visible improvements in health by working the mental and the physical in an integrated manner. Masters must command all this knowledge to better serve today's society, preserving these cultures and passing them on through the generations.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Topical Treatment Using Amphotericin B and DMSO for an Atypically Located Equine Cutaneous Pythiosis
Abstract:
Background: Cutaneous lesions caused by Pythium insidiosum infection are commonly observed in horses, especially those living in flooded environments. Equine pythiosis is characterized by the development of tumoral masses that are frequently located on the distal limbs, ventral abdomen, thorax, breast and face. The lesions are usually granulomatous, serosanguineous and ulcerated, and are most often destroyed by self-mutilation due to the intense pruritus. The proposed treatment includes surgical excision followed by antifungal drug administration, which can be systemic or topical. Amphotericin B in association with dimethyl sulfoxide (DMSO) has been successfully used for topical treatment of cutaneous pythiosis, owing to the ability of DMSO to carry any substance through plasma membranes. Case: The present report concerns a 12-year-old mixed-breed gelding presenting with self-mutilation of a tumoral mass located on the left flank. The owners reported that the horse had initially presented a small wound that had evolved into a 20-cm diameter mass in 4 weeks. Tissue samples were collected, processed and stained by the Gomori's methenamine silver (GMS) method. The histopathological analysis revealed Pythium insidiosum hyphae in granulomatous tissue, especially at the peripheral region, where kunkers were present. Surgical excision of the mass followed by cauterization was indicated as the initial treatment and, for financial reasons, the owners elected only topical antifungal therapy to control the infection after surgery. Flunixin meglumine was also administered for five days to control pain and inflammation. The wound was cleaned with povidone-iodine solution and rinsed with a solution containing 50 mg of amphotericin B in 10 mL of sterile water and 10 mL of DMSO. This procedure was carried out twice a day. The wound healed fast due to an excellent centripetal epithelialization, and the horse was discharged after 64 days showing only 5% of the initial wound area. The owner reported by telephone the complete healing and hair growth 10 days after discharge. Discussion: Despite the atypical location of the tumoral lesion described in the present report, the history and clinical manifestations, especially the intense pruritus, were similar to other characteristic reports of equine cutaneous pythiosis. The diagnosis was confirmed by the histopathological examination showing hyphal structures, described as evidence of the presence of Pythium insidiosum in the tissue. The surgical procedure was the first step to provide remission of clinical signs, and one day after surgery the pruritus disappeared. After excision of the granulomatous tissue and cauterization, daily topical administration of amphotericin B associated with DMSO was effective in destroying the infectious agent, as observed by the excellent epithelialization. A pink granulation tissue grew, providing an ideal surface for epithelial migration, and the healing process progressed quickly. Centripetal epithelialization reduced the wound to 3% of the initial area after 64 days of treatment, when the remaining wound was found almost completely healed and covered with hair. In the present report, the horse presenting pythiosis was treated only topically. The recommended therapy using an amphotericin B and DMSO solution was effective, economically viable and low risk, considering that the systemic antifungal therapy usually suggested is expensive and extremely nephrotoxic.
The atypical location of the lesion on the left flank shows that any anatomical region can be affected by the fungus, since the conditions for its development were present.
Abstract:
Purpose: To describe a new computerized method for the analysis of lid contour based on the measurement of multiple radial midpupil lid distances. Design: Evaluation of diagnostic technology. Participants and Controls: Monocular palpebral fissure images of 35 patients with Graves' upper eyelid retraction and of 30 normal subjects. Methods: Custom software was used to measure the conventional midpupil upper lid distance (MPLD) and 12 oblique MPLDs, at 15-degree intervals across the temporal (105, 120, 135, 150, 165, and 180 degrees) and nasal (75, 60, 45, 30, 15, and 0 degrees) sectors of the lid fissure. Main Outcome Measures: Mean, standard deviation, and 5th and 95th percentiles of the oblique MPLDs obtained for patients and controls; temporal/nasal MPLD ratios of the same angles with respect to the midline. Results: The MPLDs increased away from the vertical midline in both the nasal and temporal sectors of the fissure. In the control group the differences between the mean central MPLD (90 degrees) and those up to 30 degrees away in the nasal (75 and 60 degrees) and temporal (105 and 120 degrees) sectors were not significant. For greater eccentricities, all temporal and nasal mean MPLDs increased significantly. When the MPLDs of the same angles were compared between groups, the mean values of the Graves' patients differed from controls at all angles (F = 4192; P < 0.0001). The greatest temporal/nasal asymmetry occurred 60 degrees from the vertical midline. Conclusions: The measurement of radial MPLDs is a simple and effective way to characterize lid contour abnormalities. In patients with Graves' upper eyelid retraction, the method demonstrated that the maximum amplitude of the lateral lid flare sign occurred at 60 degrees from the vertical midline. Financial Disclosure(s): The authors have no proprietary or commercial interest in any of the materials discussed in this article. Ophthalmology 2012; 119: 625-628. (C) 2012 by the American Academy of Ophthalmology.
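As a hedged illustration of the radial measurement described above, the following minimal Python sketch (not the authors' software; the pixel data, ray tolerance and circular toy contour are all assumptions) measures the distance from the pupil center to an upper-lid contour along rays spaced 15 degrees apart:

```python
import numpy as np

# Minimal sketch of radial MPLD measurement (hypothetical data): distances
# from the pupil center to the lid contour along rays every 15 degrees
# (0 = nasal, 90 = vertical midline, 180 = temporal).
def radial_mplds(center, contour, angles_deg=range(0, 181, 15)):
    v = contour - np.asarray(center, dtype=float)
    out = {}
    for a in angles_deg:
        d = np.array([np.cos(np.radians(a)), np.sin(np.radians(a))])
        proj = v @ d                                    # distance along the ray
        perp = np.abs(v[:, 0] * d[1] - v[:, 1] * d[0])  # distance off the ray
        mask = (proj > 0) & (perp < 1.0)                # 1-px tolerance (assumed)
        out[a] = float(proj[mask].min()) if mask.any() else float("nan")
    return out

# Toy contour: a circular lid arc of radius 120 px centered on the pupil,
# so every radial MPLD should come out close to 120.
t = np.linspace(0.0, np.pi, 500)
contour = 120.0 * np.stack([np.cos(t), np.sin(t)], axis=1)
print(radial_mplds((0.0, 0.0), contour))
```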
Abstract:
A two-dimensional model to analyze the distribution of magnetic fields in the airgap of PM electrical machines is studied. A numerical algorithm for non-linear magnetic analysis of multiphase surface-mounted PM machines with semi-closed slots is developed, based on the equivalent magnetic circuit method. A modular structure geometry, whose basic element can be duplicated, allows any winding distribution topology to be modeled. In comparison with FEA, the method reduces computing time and allows parameter values to be changed directly in a user interface, without re-designing the model. Output torque and radial forces acting on the moving part of the machine can be calculated. In addition, an analytical model for radial force calculation in multiphase bearingless Surface-Mounted Permanent Magnet Synchronous Motors (SPMSM) is presented. It predicts the amplitude and direction of the force as functions of the torque current, the levitation current and the rotor position. It is based on the space-vector method, which also permits analysis of the machine during transients. The calculations are carried out by expanding the analytical functions in Fourier series, taking all possible interactions between stator and rotor mmf harmonic components into account; since the model is parametrized, the effects of the electrical and geometrical quantities of the machine can be analyzed. The model is implemented in the design of a control system for bearingless machines, as an accurate electromagnetic model integrated in a three-dimensional mechanical model, in which one end of the motor shaft is constrained to simulate the presence of a mechanical bearing while the other end is free, supported only by the radial forces developed by the interacting magnetic fields, realizing a bearingless system with three degrees of freedom. The complete model represents the design of the experimental system to be realized in the laboratory.
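To make the equivalent-magnetic-circuit idea concrete, here is a minimal sketch (all values illustrative, not the thesis model): airgap and iron paths become reluctances, the magnet becomes an MMF source, and node magnetic potentials are solved exactly like nodal analysis of a resistive network.

```python
import numpy as np

# Minimal equivalent-magnetic-circuit sketch (illustrative values): two
# stator teeth linked to a PM source through airgap reluctances, solved
# like nodal analysis of a resistive circuit.
mu0 = 4e-7 * np.pi
g, A_gap = 1e-3, 2e-4             # airgap length [m], tooth area [m^2] (assumed)
R_gap = g / (mu0 * A_gap)         # airgap reluctance [A/Wb]
R_iron = 0.05 * R_gap             # iron path reluctance (assumed linear here)
F_pm = 800.0                      # magnet MMF [A-turns] (assumed)

# Unknown magnetic scalar potentials u1, u2 at the two tooth nodes: G @ u = b.
G = np.array([[1/R_gap + 1/R_iron, -1/R_iron],
              [-1/R_iron,           1/R_gap + 1/R_iron]])
b = np.array([F_pm / R_gap, 0.0])
u = np.linalg.solve(G, b)
flux_tooth1 = (F_pm - u[0]) / R_gap   # flux crossing the airgap under tooth 1
print(f"node potentials = {u}, tooth-1 airgap flux = {flux_tooth1:.2e} Wb")
```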
Abstract:
Viscous dampers are very effective devices for seismic design and retrofitting. The objective of this thesis is to apply the Five-Step Procedure, developed by a research group at the University of Bologna, to size the viscous dampers to be installed in an existing precast RC structure. The idea is to place the viscous damping devices at different positions in the structure and then to identify and compare the performance of each placement configuration.
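As a hedged illustration of the sizing step, the sketch below uses the classical first-mode supplemental-damping formula (e.g., FEMA 356) as a stand-in for the actual Five-Step Procedure; the story data and target damping ratio are assumed, not taken from the thesis.

```python
import numpy as np

# Sketch: size identical linear viscous dampers for a target added damping
# ratio using the first-mode formula (stand-in, values assumed):
#   xi_d = T * sum_j(c_j * cos^2(theta_j) * phi_rj^2) / (4*pi * sum_i(m_i * phi_i^2))
T = 0.8                                    # first-mode period [s]
m = np.array([4.0e5, 4.0e5, 3.5e5])        # story masses [kg]
phi = np.array([0.33, 0.67, 1.00])         # first mode shape
theta = np.radians([35.0, 35.0, 35.0])     # damper inclination per story
phi_r = np.diff(np.concatenate(([0.0], phi)))  # interstory modal drifts
xi_target = 0.15                           # target added damping ratio

# With identical dampers c_j = c, invert the formula for c:
num = 4.0 * np.pi * np.sum(m * phi**2) * xi_target
den = T * np.sum(np.cos(theta)**2 * phi_r**2)
c = num / den
print(f"required damper coefficient c = {c:.3e} N*s/m per device")
```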
Abstract:
The purpose of this research project is to study an innovative method for the stability assessment of structural steel systems, namely the Modified Direct Analysis Method (MDM). This method is intended to simplify an existing design method, the Direct Analysis Method (DM), by assuming a sophisticated second-order elastic structural analysis will be employed that can account for member and system instability, and thereby allow the design process to be reduced to confirming the capacity of member cross-sections. This last check can be easily completed by substituting an effective length of KL = 0 into existing member design equations. This simplification will be particularly useful for structural systems in which it is not clear how to define the member slenderness L/r when the laterally unbraced length L is not apparent, such as arches and the compression chord of an unbraced truss. To study the feasibility and accuracy of this new method, a set of 12 benchmark steel structural systems previously designed and analyzed by former Bucknell graduate student Jose Martinez-Garcia and a single column were modeled and analyzed using the nonlinear structural analysis software MASTAN2. A series of Matlab-based programs were prepared by the author to provide the code checking requirements for investigating the MDM. By comparing MDM and DM results against the more advanced distributed plasticity analysis results, it is concluded that the stability of structural systems can be adequately assessed in most cases using MDM, and that MDM often appears to be a more accurate but less conservative method in assessing stability.
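A hedged sketch of the cross-section check described above: with an effective length of KL = 0 the column curve gives Fcr = Fy, so the axial capacity reduces to the squash load and only section strength must be verified. The AISC-style H1-1 interaction equations are used here as a plausible form of the check; the section properties and required strengths are assumed, not taken from the thesis.

```python
# MDM-style member check sketch: KL = 0 -> Fe -> infinity -> Fcr = Fy, so the
# compression capacity is the cross-section squash load (values assumed).
Fy = 345.0                    # yield stress [MPa]
Ag = 7610.0                   # gross area [mm^2] (assumed W-shape)
Zx = 1.28e6                   # plastic section modulus [mm^3]
phi_c, phi_b = 0.90, 0.90     # resistance factors

Pn = Fy * Ag                  # squash load, since KL = 0
Mn = Fy * Zx                  # plastic moment (compact, braced section assumed)
Pc, Mc = phi_c * Pn, phi_b * Mn

def interaction(Pr, Mr):
    """AISC H1-1 style beam-column interaction applied to the cross-section."""
    if Pr / Pc >= 0.2:
        return Pr / Pc + (8.0 / 9.0) * (Mr / Mc)
    return Pr / (2.0 * Pc) + Mr / Mc

# Required strengths from the second-order (Direct Analysis) model, assumed:
ratio = interaction(Pr=1.0e6, Mr=2.2e8)   # [N], [N*mm]
print(f"interaction ratio = {ratio:.2f} ({'OK' if ratio <= 1.0 else 'NG'})")
```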
Abstract:
Any functionally important mutation is embedded in an evolutionary matrix of other mutations. Cladistic analysis, based on this observation, is a method of investigating gene effects that uses a haplotype phylogeny to define a set of tests which localize causal mutations to branches of the phylogeny. Previous implementations of cladistic analysis have not addressed the issue of analyzing data from related individuals, although in human studies family data are usually needed to obtain unambiguous haplotypes. In this study, a method of cladistic analysis is described in which haplotype effects are parameterized in a linear model that accounts for familial correlations. The method was used to study the effect of apolipoprotein (Apo) B gene variation on total-, LDL-, and HDL-cholesterol, triglyceride, and Apo B levels in 121 French families. Five polymorphisms defined the Apo B haplotypes: the signal peptide insertion/deletion, Bsp1286I, XbaI, MspI, and EcoRI. Eleven haplotypes were found, and a haplotype phylogeny was constructed and used to define a set of tests of haplotype effects on lipid and Apo B levels. This new method of cladistic analysis, the parametric method, found significant single-haplotype effects for all variables. For HDL-cholesterol, 3 clusters of evolutionarily related haplotypes affecting levels were found. Haplotype effects accounted for about 10% of the genetic variance of triglyceride and HDL-cholesterol levels. The results of the parametric method were compared to those of a method of cladistic analysis based on permutational testing. The permutational method detected fewer haplotype effects, even when modified to account for correlations within families. Simulation studies exploring these differences found evidence of systematic errors in the permutational method due to the process by which haplotype groups were selected for testing. The applicability of cladistic analysis to human data was shown, and the parametric method is suggested as an improvement over the permutational method. This study has identified candidate haplotypes for sequence comparisons in order to locate the functional mutations in the Apo B gene which may influence plasma lipid levels.
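A hedged sketch of the parametric method's core idea: haplotype effects enter as fixed effects in a linear model with a per-family random intercept absorbing familial correlation. Here statsmodels' MixedLM is a generic stand-in for the thesis' specific covariance model, and the data are simulated, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: 121 families, 11 haplotypes, a family random
# effect, and one haplotype (number 4) shifting HDL by an assumed amount.
rng = np.random.default_rng(0)
n_fam, per_fam = 121, 4
fam = np.repeat(np.arange(n_fam), per_fam)
hap = rng.integers(1, 12, size=fam.size)          # haplotype labels 1..11
fam_eff = rng.normal(0, 5, n_fam)[fam]            # shared family effect
hdl = 50 + 3.0 * (hap == 4) + fam_eff + rng.normal(0, 8, fam.size)
df = pd.DataFrame({"hdl": hdl, "haplotype": hap, "family": fam})

# Random intercept per family models the familial correlation; Wald tests
# on the haplotype coefficients play the role of the branch-wise tests.
fit = smf.mixedlm("hdl ~ C(haplotype)", df, groups=df["family"]).fit()
print(fit.summary())
```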
Abstract:
Quantification of protein expression based on immunohistochemistry (IHC) is an important step in clinical diagnosis and translational tissue-based research. Manual scoring systems are used to evaluate protein expression based on staining intensities and distribution patterns, but visual scoring remains an inherently subjective approach. The aim of our study was to explore whether digital image analysis is an alternative, or even superior, tool for quantifying the expression of membrane-bound proteins. We analyzed five membrane-bound biomarkers (HER2, EGFR, pEGFR, β-catenin, and E-cadherin) by IHC on tumor tissue microarrays from 153 esophageal adenocarcinoma patients from a single-center study. The tissue cores were scored visually using an established routine scoring system as well as by digital image analysis, which yields a continuous spectrum of average staining intensity. We then compared both assessments using survival analysis as an end point. There were no significant correlations with patient survival using visual scoring of β-catenin, E-cadherin, pEGFR, or HER2. In contrast, digital image analysis showed significant associations with disease-free survival for β-catenin, E-cadherin, pEGFR, and HER2 (P = 0.0125, P = 0.0014, P = 0.0299, and P = 0.0096, respectively). For EGFR, the association with patient survival was greater with digital image analysis than with visual scoring (visual: P = 0.0045; image analysis: P < 0.0001). These results indicate that digital image analysis is superior to visual scoring: it is more sensitive and therefore better able to detect biological differences within the tissues, which improves the quality of quantification.
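A hedged sketch of the survival end-point comparison: cases are split at the median of the continuous image-analysis intensity and compared with a log-rank test on disease-free survival. The data are simulated and the column names hypothetical; lifelines' logrank_test is used as a standard implementation, not the study's pipeline.

```python
import numpy as np
import pandas as pd
from lifelines.statistics import logrank_test

# Simulated stand-in data: per-core mean staining intensity with an assumed
# effect on the event hazard, censored at 60 months of follow-up.
rng = np.random.default_rng(0)
n = 153
intensity = rng.uniform(0, 255, n)                   # mean membrane staining
hazard = 0.02 * np.exp(0.004 * (intensity - 128))    # assumed intensity effect
dfs = rng.exponential(1.0 / hazard)                  # months to event
event = dfs < 60                                     # event observed in follow-up
dfs = np.minimum(dfs, 60.0)                          # administrative censoring
df = pd.DataFrame({"intensity": intensity, "dfs": dfs, "event": event})

hi = df[df.intensity >= df.intensity.median()]       # median split
lo = df[df.intensity < df.intensity.median()]
res = logrank_test(hi.dfs, lo.dfs,
                   event_observed_A=hi.event, event_observed_B=lo.event)
print(f"log-rank p = {res.p_value:.4f}")
```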
Abstract:
Interruption is a known human factor that contributes to errors and catastrophic events in healthcare as well as in other high-risk industries. The landmark Institute of Medicine (IOM) report, To Err is Human, brought attention to the significance of preventable errors in medicine and suggested that interruptions could be a contributing factor. Previous studies of interruptions in healthcare did not offer a conceptual model by which to study interruptions. Given the serious consequences of interruptions investigated in other high-risk industries, there is a need for a model to describe, understand, explain, and predict interruptions and their consequences in healthcare. The purpose of this study was therefore to develop a model grounded in the literature and to use that model to describe and explain interruptions in healthcare, specifically those occurring in a Level One Trauma Center. A trauma center was chosen because this environment is characterized as intense, unpredictable, and interrupt-driven. Model development began with a review of the literature, which revealed that the concept of interruption did not have a consistent definition in either the healthcare or the non-healthcare literature. Walker and Avant's method of concept analysis was used to clarify and define the concept. The analysis led to the identification of five defining attributes: (1) a human experience, (2) an intrusion of a secondary, unplanned, and unexpected task, (3) discontinuity, (4) externally or internally initiated, and (5) situated within a context. Before an interruption can commence, however, five antecedent conditions must occur: (1) an intent to interrupt is formed by the initiator, (2) a physical signal passes a threshold test of detection by the recipient, (3) the sensory system of the recipient is stimulated to respond to the initiator, (4) an interruption task is presented to the recipient, and (5) the interruption task is either accepted or rejected by the recipient. An interruption was determined to be quantifiable by (1) the frequency of occurrence of interruptions, (2) the number of times the primary task was suspended to perform an interrupting task, (3) the length of time the primary task was suspended, and (4) the frequency of returning or not returning to the primary task. From the concept analysis, a definition of an interruption was derived from the literature: an interruption is a break in the performance of a human activity, initiated internal or external to the recipient and occurring within the context of a setting or location; this break results in the suspension of the initial task by initiating the performance of an unplanned task, with the assumption that the initial task will be resumed. The definition is inclusive of all the defining attributes of an interruption and offers a standard definition that can be used by the healthcare industry. From the definition, a visual model of an interruption was developed. The model was used to describe and explain the interruptions recorded in an instrumental case study of physicians and registered nurses (RNs) working in a Level One Trauma Center. Five physicians were observed for a total of 29 hours, 31 minutes; eight registered nurses were observed for a total of 40 hours, 9 minutes.
Observations were made on either the 0700-1500 or the 1500-2300 shift using the shadowing technique, and were recorded as field notes. The field notes were analyzed with a hybrid method of categorizing activities and interruptions, developed by combining a deductive a priori classification framework with the inductive process of line-by-line coding and constant comparison as stated in Grounded Theory. The following categories were identified as relevant to this study:
- Intended Recipient: the person to be interrupted.
- Unintended Recipient: not the intended recipient of an interruption, e.g., receiving a phone call that was incorrectly dialed.
- Indirect Recipient: the incidental recipient of an interruption, e.g., talking with another person, thereby suspending the original activity.
- Recipient Blocked: the intended recipient does not accept the interruption.
- Recipient Delayed: the intended recipient postpones an interruption.
- Self-interruption: a person, independent of another person, suspends one activity to perform another, e.g., while walking, stops abruptly and talks to another person.
- Distraction: briefly disengaging from a task.
- Organizational Design: the physical layout of the workspace that causes a disruption in workflow.
- Artifacts Not Available: supplies and equipment that are not available in the workspace, causing a disruption in workflow.
- Initiator: a person who initiates an interruption.
Interruption by Organizational Design and Artifacts Not Available were identified as two new categories of interruption that had not previously been cited in the literature. Analysis of the observations indicated that physicians performed slightly fewer activities per hour than RNs; this variance may be attributed to differing roles and responsibilities. Physicians had more of their activities interrupted than RNs, but RNs experienced more interruptions per hour. Other people were the most common medium through which an interruption was delivered; additional mediums included the telephone, pager, and one's self. Both physicians and RNs resumed an original interrupted activity more often than not, and in most cases performed only one or two interrupting activities before returning to the original interrupted activity. In conclusion, the model was found to explain all interruptions observed during the study; however, a more comprehensive study will be required to establish its predictive value.
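As a small illustration of the four quantification dimensions identified above, here is a hedged Python sketch; the event record and field names are hypothetical, not the study's instrument.

```python
from dataclasses import dataclass

# Hypothetical event record capturing the study's quantification dimensions.
@dataclass
class Interruption:
    primary_task: str
    suspended_s: float      # how long the primary task stayed suspended
    resumed: bool           # was the primary task resumed afterwards?

def summarize(events, observed_hours):
    """Compute the four quantification dimensions from an observation log."""
    n = len(events)
    return {
        "interruptions_per_hour": n / observed_hours,
        "tasks_suspended": sum(e.suspended_s > 0 for e in events),
        "total_suspended_s": sum(e.suspended_s for e in events),
        "resumption_rate": sum(e.resumed for e in events) / n if n else 0.0,
    }

log = [Interruption("charting", 45.0, True),
       Interruption("med prep", 120.0, False)]
print(summarize(log, observed_hours=1.0))
```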
Abstract:
We have performed quantitative X-ray diffraction (qXRD) analysis of 157 grab or core-top samples from the western Nordic Seas (WNS), between ~57° and 75°N and 5° to 45°W. The RockJock Vs6 analysis includes non-clay (20) and clay (10) mineral species in the <2 mm size fraction that sum to 100 weight %. The data matrix was reduced to 9 and 6 variables, respectively, by excluding minerals with low weight % and by grouping minerals into larger groups, such as the alkali and plagioclase feldspars. Because of its potential dual origins, calcite was placed outside of the sum. We initially hypothesized that a combination of regional bedrock outcrops and transport associated with drift ice, meltwater plumes, and bottom currents would result in 6 clusters defined by "similar" mineral compositions. The hypothesis was tested by use of a fuzzy k-means clustering algorithm, and key minerals were identified by step-wise Discriminant Function Analysis. Key minerals in defining the clusters include quartz, pyroxene, muscovite, and amphibole. With 5 clusters, 87.5% of the observations are correctly classified. The geographic distribution of the five k-means clusters compares reasonably well with the original hypothesis. The close spatial relationship between bedrock geology and discrete cluster membership stresses the importance of this variable both at the WNS scale and at a more local scale in NE Greenland.
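For readers unfamiliar with fuzzy clustering, here is a minimal self-contained fuzzy k-means (c-means) sketch on synthetic mineral weight-% data; it stands in for the study's algorithm, and the data matrix X is random, not the WNS dataset.

```python
import numpy as np

# Minimal fuzzy c-means: soft memberships U instead of hard assignments.
def fuzzy_cmeans(X, k=5, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(k), size=len(X))        # fuzzy memberships (rows sum to 1)
    for _ in range(iters):
        W = U ** m
        centers = W.T @ X / W.sum(axis=0)[:, None]    # membership-weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))            # closer centers -> higher membership
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Stand-in data: 157 samples x 9 mineral weight-% variables (random).
X = np.abs(np.random.default_rng(1).normal(20, 5, size=(157, 9)))
centers, U = fuzzy_cmeans(X, k=5)
print("hard labels of first 10 samples:", U.argmax(axis=1)[:10])
```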
Abstract:
Resource Analysis (a.k.a. Cost Analysis) tries to approximate the cost of executing programs as functions of their input data sizes, without actually having to execute the programs. While a powerful resource analysis framework for object-oriented programs existed before this thesis, advanced aspects that improve the efficiency, the accuracy and the reliability of the analysis results still needed further investigation. This thesis tackles that need from four different perspectives. (1) Shared mutable data structures are the bane of formal reasoning and static analysis. Analyses that keep track of heap-allocated data are referred to as heap-sensitive. Recent work proposes locality conditions for soundly tracking field accesses by means of ghost non-heap-allocated variables. This thesis presents two extensions to that approach: the first considers array accesses in addition to object fields, while the second handles cases in which the locality conditions cannot be proven unconditionally, by finding aliasing preconditions under which tracking such heap locations is feasible. (2) The aim of incremental analysis is, given a program, its analysis results and a series of changes to the program, to obtain the new analysis results as efficiently as possible, ideally without re-analyzing fragments of code that are not affected by the changes. During software development programs are permanently modified, yet most analyzers still read and analyze the entire program at once in a non-incremental way. This thesis presents an incremental resource usage analysis which, after a change to the program, reconstructs the upper bounds of all affected methods incrementally. To this purpose, we propose (i) a multi-domain incremental fixed-point algorithm that can be used by all the global analyses required to infer the cost, and (ii) a novel form of cost summaries that allows us to incrementally reconstruct only those components of the cost functions affected by the change. (3) Resource guarantees that are automatically inferred by static analysis tools are generally not considered completely trustworthy unless the tool implementation or the results are formally verified. Performing full-blown verification of such tools is a daunting task, since they are large and complex. This thesis instead focuses on developing a formal framework for verifying the resource guarantees obtained by the analyzers. We have implemented this idea using COSTA, a state-of-the-art cost analyzer for Java programs, and KeY, a state-of-the-art verification tool for Java source code: COSTA derives the upper bounds, while KeY proves the validity of these bounds and provides a certificate. The main contribution of our work is to show that this tool cooperation can be used for automatically producing verified resource guarantees.
(4) Distribution and concurrency are today mainstream. Concurrent objects form a well established model for distributed concurrent systems. In this model, objects are the concurrency units that communicate via asynchronous method calls. Distribution suggests that analysis must infer the cost of the diverse distributed components separately. In this thesis we propose a novel object-sensitive cost analysis which, by using the results gathered by a points-to analysis, can keep the cost of the diverse distributed components separate.
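A toy sketch of the incremental idea in point (2): after an edit, only the cost summaries of the changed method and its transitive callers are invalidated and recomputed, reusing cached summaries for everything else. The call graph and the additive cost model below are assumptions for illustration, not the thesis' actual domains.

```python
# Toy incremental cost analysis over a call graph (assumed example).
calls = {"main": ["f", "g"], "f": ["g"], "g": []}   # caller -> callees
local_cost = {"main": 2, "f": 5, "g": 3}            # per-method local cost
summary = {}                                        # cached cost summaries

def cost(m):
    if m not in summary:                            # recompute only stale entries
        summary[m] = local_cost[m] + sum(cost(c) for c in calls[m])
    return summary[m]

def invalidate(changed):
    stale = {changed}
    while True:                                     # close over transitive callers
        more = {c for c, cs in calls.items() if set(cs) & stale} - stale
        if not more:
            break
        stale |= more
    for m in stale:
        summary.pop(m, None)                        # only these will be recomputed
    return stale

print(cost("main"))        # initial analysis: 2 + (5 + 3) + 3 = 13
local_cost["g"] = 7        # simulate an edit to method g
print(invalidate("g"))     # {'g', 'f', 'main'} are the only stale summaries
print(cost("main"))        # incremental re-analysis: 2 + (5 + 7) + 7 = 21
```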
Abstract:
The subject of the present thesis is the interaction between a viscous fluid and a solid body in the presence of a free surface. The problem is first expressed theoretically, with particular focus on energy conservation and on the fluid-body interaction. The problem is considered 2D and monophasic, and a mathematical development allows the energy dissipation to be decomposed into terms related to the free surface and terms related to the enstrophy. The numerical model used in the thesis is based on Smoothed Particle Hydrodynamics (SPH), a meshless computational method that discretizes the fluid into particles. Analogously to what is done at the continuum level, the conservation properties are studied on the discrete system of particles. The boundary conditions for a body moving in a viscous flow are treated and discussed using the ghost-fluid method, and an explicit algorithm for the fluid-body coupling is developed. Following these theoretical developments, test cases are devised in order to verify the ability of the model to correctly reproduce the energy dissipation and the motion of the body. The attenuation of a standing wave is used to compare the numerical simulation with theoretical predictions. Further tests monitor the energy dissipation for more violent flows involving the fragmentation of the free surface, and the amount of energy dissipated by the different terms is assessed with the numerical model. Other numerical tests verify the fluid-body interaction technique: the forces exerted by waves on bodies with simple shapes, and the equilibrium of a floating body with a complex shape. Once the numerical model has been validated, simulations are performed to gain a more complete understanding of the physics involved in (almost) realistic cases. First, the flow past a cylinder under the free surface is studied at moderate Reynolds numbers, for a range of cylinder submergences and Froude numbers. The numerical solution allows an investigation of the complex flow patterns that occur: the wake of the cylinder interacts with the free surface, and some characteristic instabilities are identified. The second study addresses the sloshing problem, both experimentally and numerically. The analysis is restricted to shallow water and horizontal excitation, but a large number of conditions is studied, leading to a fairly complete understanding of the wave systems involved. The last part of the thesis also concerns a sloshing problem, but this time the tank rolls and is coupled with a mechanical system. The system is called pendulum-TLD (Tuned Liquid Damper); this kind of system is normally used for damping civil structures. The analysis is performed analytically, numerically and experimentally using liquids with different viscosities, focusing on non-linear features and dissipation mechanisms.
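To illustrate the SPH discretization mentioned above, here is a minimal sketch of a cubic-spline kernel and the density summation over particles; the toy 2D particle lattice and parameters are assumptions, not the thesis solver.

```python
import numpy as np

# Minimal SPH sketch: cubic-spline kernel and density summation in 2D.
def w_cubic(r, h):
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h**2)          # 2D normalization constant
    w = np.where(q < 1, 1 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2, 0.25 * (2 - q)**3, 0.0))
    return sigma * w

# Toy setup: 20x20 particle lattice on the unit square (assumed).
x = np.stack(np.meshgrid(np.linspace(0, 1, 20),
                         np.linspace(0, 1, 20)), -1).reshape(-1, 2)
dx = 1.0 / 19.0                                   # particle spacing
h = 1.5 * dx                                      # smoothing length (assumed)
mass = 1000.0 * dx**2                             # mass per particle, 1000 kg/m^2 sheet

r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2)
rho = (mass * w_cubic(r, h)).sum(axis=1)          # SPH density summation
print(f"interior density ~ {rho.max():.0f} kg/m^2 (target 1000; lower at edges)")
```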
Abstract:
Perched beaches are an attractive beach nourishment design alternative, especially when either the site conditions or the characteristics of the native and borrow sands lead to non-intersecting nourishment profiles. The observation and suggestion of this type of coastal defence scheme dates back to the 1960s, as does the international experience in the construction of this type of beach. However, in spite of its use and of the field and laboratory studies performed to date, no engineering design guidance is available to support it. This dissertation consists of an experimental analysis of beaches perched on a submerged sill, based on movable-bed physical model tests and on the use of dimensionless parameters in analyzing the results. It leads to the proposal of general functional design guidance that allows the designer, at a particular stretch of coast, to estimate the location and geometric characteristics of the submerged sill, as well as the suitable sand size to be used in the nourishment, for a given wave climate and beach material. The two-dimensional movable-bed experiments, in which five wave conditions are combined with three configurations of the submerged sill ("No structure", low structure or "Structure 1", and high structure or "Structure 2"), are described; the results are presented and their hydrodynamic implications are discussed in detail using dimensionless parameters. A detailed state-of-the-art review of perched beaches has been performed, presenting the concept and case studies from different countries, together with a careful review of the published literature on experimental studies of perched beaches, theoretical models, and other topics needed to formulate the methodology of this work. The study has been divided into two phases. In the first phase, experiments were carried out on a movable-bed physical model built at the Centro de Estudios de Puertos y Costas (CEPYC) of the Centro de Estudios y Experimentación de Obras Públicas (CEDEX). The wave flume is 36 m long, 3 m wide and 1.5 m high, and is equipped with a piston-type regular wave generator. The test plan consisted of 15 tests resulting from the five wave conditions attacking the three perched-beach configurations. During the tests the beach profile was surveyed at different intervals until equilibrium was reached; the retreat of the shoreline and the relative loss of sediment volume were obtained from these measurements. The total effective test time reaches nearly 650 hours, and the total number of beach evolution profiles measured amounts to 229. In the second phase, attention focused on the analysis of the results with the aim of understanding the phenomenon, identifying the governing variables and proposing engineering design guidelines. The effects of the wave height, the wave period, the dimensionless freeboard and the Dean parameter were analyzed; the results point out the difficulty of understanding how these structures work, since they turned out to be beneficial, neutral or harmful depending on the wave conditions and the structure configuration. The beach profile response was also studied as a function of other dimensionless parameters, such as the Reynolds and Froude numbers. In this analysis the "plunger" parameter was selected as the most representative, and relationships were identified between the plunger parameter and the wave steepness, the dimensionless crest width, the dimensionless crest height, and the Dean parameter. Finally, a four-step engineering design method is proposed that allows a preliminary functional design of the perched beach for a given wave condition. The most relevant contributions from the scientific point of view are:
- The acquisition of a consistent set of experimental results.
- The characterization of the behavior of perched beaches.
- The proposed relationships between the plunger parameter and the selected explanatory variables, which allow the beach behavior to be predicted.
- The proposed four-step design method for this type of coastal defense scheme.
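The four-step method itself is not reproduced here; as a hedged illustration, the sketch below only computes the dimensionless quantities named above (Dean parameter, wave steepness, relative freeboard) for a candidate design, with all input values assumed.

```python
import math

# Dimensionless quantities used in the analysis (illustrative values).
H = 1.5          # design wave height [m] (assumed)
T = 8.0          # wave period [s] (assumed)
w_s = 0.04       # sand settling velocity [m/s] (assumed, d50 ~ 0.3 mm)
freeboard = 0.6  # sill submergence below still water level [m] (assumed)

L0 = 9.81 * T**2 / (2 * math.pi)   # deep-water wavelength
dean = H / (w_s * T)               # Dean parameter (profile/nourishment behavior)
steepness = H / L0                 # deep-water wave steepness
F_rel = freeboard / H              # dimensionless freeboard

print(f"L0 = {L0:.1f} m, Dean = {dean:.2f}, "
      f"H/L0 = {steepness:.4f}, F/H = {F_rel:.2f}")
```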
Abstract:
In recent decades, full electric and hybrid electric vehicles have emerged as an alternative to conventional cars due to a range of factors, including environmental and economic aspects. These vehicles are the result of considerable efforts to seek ways of reducing the use of fossil fuel for vehicle propulsion. Sophisticated technologies such as hybrid and electric powertrains require careful study and optimization, and mathematical models play a key role here. Currently, many advanced mathematical analysis tools, as well as computer applications, have been built for vehicle simulation purposes. Given the great interest in hybrid and electric powertrains, along with the increasing importance of reliable computer-based models, the author decided to integrate both aspects in the research purpose of this work. Furthermore, this is one of the first final degree projects held at the ETSII (Higher Technical School of Industrial Engineers) that covers the study of hybrid and electric propulsion systems. The present project is based on MBS3D 2.0, specialized software for the dynamic simulation of multibody systems developed at the UPM Institute of Automobile Research (INSIA). Automobiles are a clear example of complex multibody systems, which are present in nearly every field of engineering. The work presented here benefits from the availability of the MBS3D software, which has proven to be a very efficient tool with a highly developed underlying mathematical formulation. On this basis, the focus of this project is the extension of MBS3D so that it can perform dynamic simulations of hybrid and electric vehicle models. This requires the joint simulation of the mechanical model of the vehicle together with the model of the hybrid or electric powertrain. These sub-models belong to completely different physical domains: the powertrain consists of energy storage systems, electrical machines and power electronics connected to purely mechanical components (wheels, suspension, transmission, clutch…). The challenge today is to create a global vehicle model that is valid for computer simulation. Therefore, the main goal of this project is to apply co-simulation methodologies to a comprehensive model of an electric vehicle, where sub-models from different areas of engineering are coupled. The created electric vehicle (EV) model consists of a separately excited DC electric motor, a Li-ion battery pack, a DC/DC chopper converter and a multibody vehicle model. Co-simulation techniques allow car designers to simulate complex vehicle architectures and behaviors which are usually difficult to implement in a real environment due to safety and/or economic reasons. In addition, multi-domain computational models help to detect the effects of different driving patterns and parameters and to improve the models in a fast and effective way. Automotive designers can greatly benefit from a multidisciplinary approach to new hybrid and electric vehicles. In this case, the global electric vehicle model includes an electrical subsystem and a mechanical subsystem. The electrical subsystem consists of three basic components: electric motor, battery pack and power converter. A modular representation is used for building the dynamic model of the vehicle drivetrain: every component of the drivetrain (submodule) is modeled separately and has its own general dynamic model, with clearly defined inputs and outputs.
All the submodules are then assembled according to the drivetrain configuration, and in this way the power flow across the components is completely determined. Dynamic models of electrical components are often based on equivalent circuits, where Kirchhoff's voltage and current laws are applied to derive the algebraic and differential equations. Here, a Randles circuit is used for the dynamic modeling of the battery, and the electric motor is modeled through the analysis of the equivalent circuit of a separately excited DC motor, in which the power converter is included. The mechanical subsystem is defined by the MBS3D equations, which consider the position, velocity and acceleration of all the bodies comprising the vehicle multibody system. MBS3D 2.0 is entirely written in MATLAB, and the structure of the program has been thoroughly studied and understood by the author. The MBS3D software is adapted according to the requirements of the applied co-simulation method: some of the core functions, such as the integrator and graphics, are modified, and several auxiliary functions are added in order to compute the mathematical model of the electrical components. By coupling and co-simulating both subsystems, it is possible to evaluate the dynamic interaction among all the components of the drivetrain. A 'tight-coupling' method is used to co-simulate the sub-models. This approach integrates all subsystems simultaneously, and the results of the integration are exchanged by function call; the integration is done jointly for the mechanical and the electrical subsystem under a single integrator, so the speed of integration is determined by the slower subsystem. Simulations are then used to show the performance of the developed EV model. However, this project focuses more on the validation of the computational and mathematical tool for electric and hybrid vehicle simulation. For this purpose, a detailed study and comparison of different integrators within the MATLAB environment is carried out. Consequently, the main efforts are directed towards the implementation of co-simulation techniques in the MBS3D software. In this regard, it is not intended to create an extremely precise EV model in terms of real vehicle performance, although an acceptable level of accuracy is achieved. The gap between the EV model and the real system is filled, in a way, by introducing the gas and brake pedal inputs, which reflect actual driver behavior. This input is included directly in the differential equations of the model and determines the amount of current provided to the electric motor. For a separately excited DC motor, the rotor current is proportional to the traction torque delivered to the car wheels. Therefore, as in real vehicle models, the propulsion torque in the mathematical model is controlled through acceleration and brake pedal commands. The designed transmission system also includes a reduction gear that adapts and transfers the torque coming from the motor drive. The main contribution of this project is, therefore, the implementation of a new calculation path for the wheel torques, based on the performance characteristics and outputs of the electric powertrain model. Originally, the wheel traction and braking torques were input to MBS3D through a vector directly computed by the user in a MATLAB script. Now, they are calculated as a function of the motor current, which in turn depends on the current provided by the battery pack across the DC/DC chopper converter.
The motor and battery currents and voltages are the solutions of the electrical ODE (Ordinary Differential Equation) system coupled to the multibody system. Simultaneously, the outputs of the MBS3D model are the position, velocity and acceleration of the vehicle at all times. The motor shaft speed is computed from the output vehicle speed considering the wheel radius, the gear reduction ratio and the transmission efficiency, and is then introduced in the differential equations corresponding to the electrical subsystem. In this way, MBS3D and the electrical powertrain model are interconnected and both subsystems exchange values, as expected with the tight-coupling approach. When programming mathematical models of complex systems, code optimization is a key step in the process. A way to improve the overall performance of the integration, making use of C/C++ as an alternative programming language, is described and implemented. Although this entails a higher computational burden, it leads to important advantages regarding co-simulation speed and stability. To do this, it is necessary to integrate MATLAB with another integrated development environment (IDE) where C/C++ code can be generated and executed. In this project, C/C++ files are programmed in Microsoft Visual Studio, and the interface between both IDEs is created by building C/C++ MEX-file functions. These programs contain functions or subroutines that can be dynamically linked and executed from MATLAB. This process achieves reductions in simulation time of up to two orders of magnitude. The tests performed with different integrators also reveal the stiff character of the differential equations corresponding to the electrical subsystem, and allow the improvement of the co-simulation process. When varying the parameters of the integration and/or the initial conditions of the problem, the solutions of the system of equations show better dynamic response and stability depending on the integrator used. Several integrators, with fixed and variable step size, and for stiff and non-stiff problems, are applied to the coupled ODE system; the results are then analyzed, compared and discussed. From all the above, the project can be divided into four main parts: 1. Creation of the equation-based electric vehicle model; 2. Programming, simulation and adjustment of the electric vehicle model; 3. Application of co-simulation methodologies to MBS3D and the electric powertrain subsystem; and 4. Code optimization and study of different integrators. Additionally, in order to fully understand the context of the project, the first chapters include an introduction to basic vehicle dynamics, the current classification of hybrid and electric vehicles and an explanation of the technologies involved, such as brake energy regeneration, electric and non-electric propulsion systems for EVs and HEVs (hybrid electric vehicles) and their control strategies. Later, the problem of dynamic modeling of hybrid and electric vehicles is discussed, and the integrated development environment and the simulation tool are briefly described. The core chapters include an explanation of the major co-simulation methodologies and how they have been programmed and applied to the electric powertrain model together with the multibody system dynamic model. Finally, the last chapters summarize the main results and conclusions of the project and propose further research topics.
In conclusion, co-simulation methodologies are applicable within the integrated development environments MATLAB and Visual Studio, and the simulation tool MBS3D 2.0, where equation-based models of multidisciplinary subsystems, consisting of mechanical and electrical components, are coupled and integrated in a very efficient way.
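To illustrate the tight-coupling idea in miniature, the following hedged Python sketch integrates the electrical states (armature current, Randles RC branch voltage) and the mechanical state (vehicle speed) jointly under a single stiff integrator, mirroring the single-integrator approach described above. All parameter values, the resistance law and the 60% pedal command are round-number assumptions, not the project's identified model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One ODE system holding both physical domains (all parameters assumed).
Ra, La, ke, kt = 0.3, 0.015, 0.9, 0.9        # separately excited DC motor
E0, R0, R1, C1 = 340.0, 0.08, 0.05, 900.0    # Randles battery model
m_v, r_w, G, eta = 1400.0, 0.3, 9.0, 0.95    # vehicle mass, wheel radius, gear

def rhs(t, y, duty):
    i, v1, v = y
    w_motor = v * G / r_w                    # motor speed from vehicle speed
    v_batt = E0 - R0 * i - v1                # battery terminal voltage
    di = (duty * v_batt - Ra * i - ke * w_motor) / La   # chopper -> armature
    dv1 = (i - v1 / R1) / C1                 # Randles RC branch dynamics
    f_tract = kt * i * G * eta / r_w         # wheel force from motor torque
    f_res = 150.0 + 0.45 * v**2              # rolling + aero resistance (assumed)
    dv = (f_tract - f_res) / m_v
    return [di, dv1, dv]

# Single (stiff) integrator for both subsystems = tight coupling in miniature.
sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0, 0.0], args=(0.6,),
                method="BDF", max_step=0.05)
print(f"speed after 20 s at 60% pedal: {sol.y[2, -1] * 3.6:.1f} km/h")
```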