33 results for Points and lines
at Universidad Politécnica de Madrid
Abstract:
Three-dimensional Direct Numerical Simulations combined with Particle Image Velocimetry experiments have been performed on a hemisphere-cylinder at Reynolds number 1000 and angle of attack 20°. At these flow conditions, a pair of vortices, the so-called "horn" vortices, is found to be associated with flow separation. In order to understand the highly complex phenomena associated with this fully three-dimensional, massively separated flow, different structural analysis techniques have been employed: Proper Orthogonal and Dynamic Mode Decompositions (POD and DMD, respectively), as well as critical-point theory. A single dominant frequency associated with the von Kármán vortex shedding has been identified in both the experimental and the numerical results. POD and DMD modes associated with this frequency were recovered in the analysis. Flow separation was also found to be intrinsically linked to the observed modes. In addition, critical-point theory has been applied in order to highlight possible links between the topology patterns on the surface of the body and the computed modes. Critical points and separation lines on the body surface show in detail the different flow patterns present in the base flow: a three-dimensional separation bubble and two pairs of unsteady vortex systems, the horn vortices mentioned before and the so-called "leeward" vortices. The horn vortices emerge perpendicularly from the body surface at the separation region, whereas the leeward vortices originate downstream of the separation bubble as a result of the boundary-layer separation. The frequencies associated with these vortical structures have been quantified.
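As an illustration of the decompositions named above, the following is a minimal Python sketch (not the authors' code) of snapshot POD and a basic rank-truncated DMD computed from a snapshot matrix via the SVD; the snapshot matrix, time step and truncation rank are placeholders.

import numpy as np

# X: snapshot matrix, each column is one flow-field snapshot (e.g. PIV or DNS
# velocity fields flattened to vectors), columns equally spaced in time by dt.
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 200))          # placeholder data
dt = 0.01                                     # placeholder time step

# --- snapshot POD: energy-ranked spatial modes from the SVD ---
U, S, Vt = np.linalg.svd(X - X.mean(axis=1, keepdims=True), full_matrices=False)
pod_modes = U                                  # spatial POD modes (columns)
pod_energy = S**2 / np.sum(S**2)               # relative energy of each mode

# --- basic DMD: modes carrying a single complex frequency each ---
X1, X2 = X[:, :-1], X[:, 1:]
Ur, Sr, Vtr = np.linalg.svd(X1, full_matrices=False)
r = 20                                         # truncation rank (assumption)
Ur, Sr, Vr = Ur[:, :r], Sr[:r], Vtr[:r].conj().T
Atilde = Ur.conj().T @ X2 @ Vr / Sr            # low-rank evolution operator
eigvals, W = np.linalg.eig(Atilde)
dmd_modes = X2 @ Vr / Sr @ W                   # DMD modes (columns)
dmd_freqs = np.log(eigvals).imag / (2 * np.pi * dt)  # frequency of each mode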
Abstract:
We have designed and implemented a framework that unifies unit testing and run-time verification (as well as static verification and static debugging). A key contribution of our approach is that a unified assertion language is used for all of these tasks. We first propose methods for compiling run-time checks for (parts of) assertions that cannot be verified at compile time, via program transformation. This transformation allows checking preconditions and postconditions, including conditional postconditions, properties at arbitrary program points, and certain computational properties. The implemented transformation includes several optimizations to reduce run-time overhead. We also propose a minimal addition to the assertion language which allows defining unit tests to be run in order to detect possible violations of the (partial) specifications expressed by the assertions. This language can express, for example, the input data for performing the unit tests or the number of times that the unit tests should be repeated. We have implemented the framework within the Ciao/CiaoPP system and effectively applied it to the verification of ISO-Prolog compliance and to the detection of different types of bugs in the Ciao system source code. Several experimental results are presented that illustrate different trade-offs among program size, running time, and the verbosity of the messages shown to the user.
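The assertion language itself is Prolog-based (Ciao), so the following Python sketch is only a conceptual analogue, not Ciao syntax: under that caveat, it shows how declarative pre/postconditions can be "compiled" into run-time checks wrapped around a procedure, and how the same assertions can drive a simple unit test; all names are hypothetical.

from functools import wraps

def check(pre=None, post=None):
    """Wrap a function with run-time precondition/postcondition checks,
    mimicking what a compile-time transformation of assertions would emit."""
    def transform(func):
        @wraps(func)
        def checked(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), f"precondition violated in {func.__name__}"
            result = func(*args, **kwargs)
            if post is not None:
                # conditional postcondition: only checked once the call succeeds
                assert post(result, *args, **kwargs), f"postcondition violated in {func.__name__}"
            return result
        return checked
    return transform

@check(pre=lambda xs: all(isinstance(x, int) for x in xs),
       post=lambda r, xs: r == sorted(xs))
def insertion_sort(xs):
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

# A minimal "unit test" driven by the same assertions: run the checked
# function on stored input data a given number of times.
for _ in range(3):
    insertion_sort([3, 1, 2])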
Abstract:
The hemisphere-cylinder, a cylinder capped with a half-sphere, may be considered a simplified model for several geometries found in industrial applications such as aircraft fuselages or submarines. Understanding the complex flow phenomena around this particular geometry is therefore of major industrial interest. This thesis investigates the origin and evolution of the known flow patterns (separation bubble, horn vortices and leeward vortices) that appear around the hemisphere-cylinder under separated flow conditions. To this aim, three-dimensional Direct Numerical Simulations (DNS) and experimental tests using Particle Image Velocimetry (PIV) have been performed for a variety of Reynolds numbers (Re) and angles of attack (AoA). Critical-point theory has been applied to the numerical simulations to provide, for the first time for this geometry, a bifurcation diagram that classifies the different flow-topology regimes as a function of the Reynolds number and the angle of attack. A complete characterization of the origin and evolution of the structural patterns characteristic of this geometry is presented. Surface critical points together with surface and volume streamlines describe the main flow structures and their strong dependence on the flow conditions, up to the structurally stable state. This state is associated with the horn-vortex pattern, defined by a characteristic topology found over a very wide range of Reynolds numbers and in both incompressible and compressible regimes. In addition, different structural analysis techniques have been employed to determine the flow structures and their associated frequencies: Proper Orthogonal Decomposition (POD), Dynamic Mode Decomposition (DMD) and Fourier analysis. These techniques have been applied to the experimental and numerical data, and the resulting modes and frequencies are shown to be in good agreement. Finally, a dominant frequency associated with an instability of the leeward vortices has been identified in both the experimental and the numerical results.
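As a pointer to the critical-point machinery used above, the following minimal Python sketch (not the thesis code) classifies a critical point of a two-dimensional surface vector field, such as the skin-friction field, from the trace and determinant of its local Jacobian; the example Jacobian is a placeholder.

import numpy as np

def classify_critical_point(J):
    """Classify a critical point of a 2-D surface vector field from its
    local Jacobian J (2x2), using the standard trace/determinant criteria."""
    tr, det = np.trace(J), np.linalg.det(J)
    if det < 0:
        return "saddle"
    disc = tr**2 - 4 * det
    kind = "node" if disc >= 0 else "focus"
    stability = "stable" if tr < 0 else "unstable"
    return f"{stability} {kind}"

# Example: Jacobian of the skin-friction field evaluated at a zero of the field
J = np.array([[-1.0, 3.0],
              [-3.0, -1.0]])
print(classify_critical_point(J))   # -> "stable focus"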
Abstract:
The traditional power grid is just a one-way supplier that gets no feedback data about the energy delivered, which tariffs could be the most suitable for customers, the shifting daily electricity needs of a facility, etc. Therefore, it is only natural that efforts are being invested in improving power-grid behavior and turning it into a Smart Grid. However, to this end, several components have to be either upgraded or created from scratch. Among the new components required, middleware appears as a critical one, since it will abstract away the diversity of the devices used for power transmission (smart meters, embedded systems, etc.) and will provide the application layer with a homogeneous interface to power production and consumption management data that could not be provided before. Additionally, middleware is expected to guarantee that updates to the current metering infrastructure (changes in service or hardware availability), or any legacy measuring appliance that is added, will be acknowledged in any future request. Finally, semantic features are of major importance for tackling scalability and interoperability issues. A survey of the most prominent middleware architectures for Smart Grids is presented in this paper, along with an evaluation of their features, strengths and weaknesses.
Abstract:
This article presents a mathematical method for producing hard-chine ship hulls based on a set of numerical parameters that are directly related to the geometric features of the hull and uniquely define a hull form for this type of ship. The term planing hull is used generically to describe the majority of hard-chine boats being built today. This article is focused on unstepped, single-chine hulls. B-spline curves and surfaces were combined with constraints on the significant ship curves to produce the final hull design. The hard-chine hull geometry was modeled by decomposing the surface geometry into boundary curves, which were defined by design constraints or parameters. In planing hull design, these control curves are the center, chine, and sheer lines as well as their geometric features including position, slope, and, in the case of the chine, enclosed area and centroid. These geometric parameters have physical, hydrodynamic, and stability implications from the design point of view. The proposed method uses two-dimensional orthogonal projections of the control curves and then produces three-dimensional (3-D) definitions using B-spline fitting of the 3-D data points. The fitting considers maximum deviation from the curve to the data points and is based on an original selection of the parameterization. A net of B-spline curves (stations) is then created to match the previously defined 3-D boundaries. A final set of lofting surfaces of the previous B-spline curves produces the hull surface.
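A minimal sketch of the kind of 3-D B-spline fitting described, using SciPy's splprep/splev; the sample points, smoothing value and deviation check are placeholders, and the article's own parameterization and maximum-deviation control are not reproduced.

import numpy as np
from scipy.interpolate import splprep, splev

# Digitized 3-D points along one control curve (e.g. the chine line); placeholder data.
t = np.linspace(0.0, 1.0, 25)
x = 10.0 * t                        # longitudinal position
y = 1.5 * np.sqrt(t)                # half-breadth
z = 0.4 * (1.0 - t) ** 2            # height above baseline

# Least-squares cubic B-spline fit; `s` bounds the sum of squared deviations from the data.
tck, u = splprep([x, y, z], s=1e-4, k=3)

# Evaluate the fitted curve densely, e.g. to place stations or to loft surfaces.
uu = np.linspace(0.0, 1.0, 200)
xs, ys, zs = splev(uu, tck)

# Maximum deviation of the fit at the data parameters (the article controls this explicitly).
xf, yf, zf = splev(u, tck)
max_dev = np.max(np.sqrt((xf - x)**2 + (yf - y)**2 + (zf - z)**2))
print(f"max deviation: {max_dev:.2e}")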
Abstract:
In this paper we present an innovative technique to tackle the problem of automatic road-sign detection and tracking using an on-board stereo camera. It involves a continuous 3D analysis of the road sign during the whole tracking process. Firstly, a color- and appearance-based model is applied to generate road-sign candidates in both stereo images. A sparse disparity map between the left and right images is then created for each candidate by using contour-based and SURF-based matching in the far and short range, respectively. Once the map has been computed, the correspondences are back-projected to generate a cloud of 3D points, and the best-fit plane is computed through RANSAC, ensuring robustness to outliers. Temporal consistency is enforced by means of a Kalman filter, which exploits the intrinsic smoothness of the 3D camera motion in traffic environments. Additionally, the estimation of the plane makes it possible to correct deformations due to perspective, thus easing further sign classification.
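A minimal NumPy sketch of RANSAC plane fitting on a 3-D point cloud of the kind produced by the back-projected correspondences; the thresholds, iteration count and synthetic cloud are placeholders, not the paper's implementation.

import numpy as np

def ransac_plane(points, n_iter=200, inlier_tol=0.02, rng=np.random.default_rng(0)):
    """Fit a plane n.x + d = 0 to an (N, 3) point cloud, robust to outliers."""
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:                      # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ sample[0]
        dist = np.abs(points @ n + d)         # point-to-plane distances
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

# Synthetic cloud: a tilted planar "sign" plus gross outliers (placeholder data).
rng = np.random.default_rng(1)
uv = rng.uniform(-0.5, 0.5, (300, 2))
plane_pts = np.c_[uv, 0.3 * uv[:, 0] - 0.1 * uv[:, 1] + 2.0] + 0.005 * rng.standard_normal((300, 3))
outliers = rng.uniform(-1, 3, (60, 3))
cloud = np.vstack([plane_pts, outliers])

model, inliers = ransac_plane(cloud)
print("plane normal:", model[0], "inlier ratio:", inliers.mean())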
Abstract:
The aim of the present study was to assess the effects of game timeouts on basketball teams' offensive and defensive performances according to momentary differences in score and game period. The sample consisted of 144 timeouts registered during 18 basketball games randomly selected from the 2007 European Basketball Championship (Spain). For each timeout, five ball possessions were registered before (n = 493) and after the timeout (n = 475). The offensive and defensive efficiencies were registered across the first 35 min and last 5 min of games. A k-means cluster analysis classified the timeouts according to momentary score status as follows: losing (−10 to −3 points), balanced (−2 to 3 points), and winning (4 to 10 points). Repeated-measures analysis of variance identified statistically significant main effects between pre- and post-timeout offensive and defensive values. Chi-square analysis of game period identified a higher percentage of timeouts called during the last 5 min of a game compared with the first 35 min (64.9 ± 9.1% vs. 35.1 ± 10.3%; χ² = 5.4, P < 0.05). Results showed higher post-timeout offensive and defensive performances. No other effect or interaction was found for defensive performances. Offensive performances were better in the last 5 min of games, with the smallest differences in balanced situations and the greatest differences in winning situations. Results also showed an interaction between timeouts and momentary differences in score, with increased values in losing and balanced situations but decreased values in winning situations. Overall, the results suggest that coaches should examine offensive and defensive performances according to game period and differences in score when considering whether to call a timeout.
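A minimal scikit-learn sketch of the k-means step described above, clustering timeouts into three groups by the momentary score difference; the score differences below are invented placeholders, not the championship sample.

import numpy as np
from sklearn.cluster import KMeans

# Momentary score difference (own score minus opponent's) at the instant each
# timeout was called; placeholder values, not the study's data.
rng = np.random.default_rng(0)
score_diff = rng.integers(-10, 11, size=144).reshape(-1, 1)

# Three clusters corresponding to losing / balanced / winning score status.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(score_diff)

for label in range(3):
    members = score_diff[km.labels_ == label].ravel()
    print(f"cluster {label}: {members.min():+d} to {members.max():+d} points, n = {members.size}")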
Abstract:
The critical moment (CM) in basketball is a game-related phenomenon with particular features determined by the idiosyncrasies of a team; it can affect the players and, therefore, the course of the game. This thesis studies the incidence of the CM in the Spanish A.C.B. basketball league through two complementary investigations, one quantitative and one qualitative, whose methodology is as follows. The quantitative research is based on the performance-analysis approach. Four A.C.B. seasons were studied (2007/08 to 2010/11) and, following the consulted literature, the critical moments were taken to be the last five minutes of games in which the point difference was six points, plus all overtimes played, so that 197 critical moments were studied. The study was contextualized using the situational variables "game location" (home or away), "team quality" (better or worse classified) and "competition" (LR and Playoff phases). For the interpretation of the results the following analyses were performed: 1) discriminant analysis, 2) multiple linear regression, and 3) multivariate general linear model analysis. The qualitative research is based on semi-structured interviews. Twelve coaches working in the A.C.B. League during the 2011/12 season were interviewed, with the aim of determining the coach's point of view on the CM concept and thereby providing a more practical approach, based on their knowledge and experience, to how to act in the face of a CM in basketball. The results of both investigations agree on the importance of the CM for the final outcome of the game. Likewise, the concept itself is highly complex, so both the scientific view provided by observation of the game and the coach's subjective perception of the phenomenon are considered essential; for the latter, the psychological aspects of the protagonists (players and coaches) are decisive.
Abstract:
Coordinate-free expressions are derived for the geometric characteristics of conics written in Bézier form, in terms of their control points and weights.
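For context (standard rational-Bézier background rather than the article's derivations), a conic segment in Bézier form with control points b0, b1, b2 and weights in the standard form 1, w, 1 is

\[
\mathbf{c}(t) \;=\; \frac{(1-t)^2\,\mathbf{b}_0 \;+\; 2w\,t(1-t)\,\mathbf{b}_1 \;+\; t^2\,\mathbf{b}_2}
                        {(1-t)^2 \;+\; 2w\,t(1-t) \;+\; t^2},
\qquad t \in [0,1],
\]

and the weight alone determines the type of conic: an ellipse for 0 < w < 1, a parabola for w = 1, and a hyperbola for w > 1. The geometric characteristics addressed in the article (centre, axes, foci, etc.) are likewise functions of the control points and the weight.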
Abstract:
The application of sustainability criteria must be understood as the essential procedure for the necessary restructuring of the construction sector, which mobilizes 10% of the world economy and accounts for more than one third of the world's resource consumption, around 30-40% of energy consumption and greenhouse-gas emissions, 30-40% of waste generation and 12% of all fresh-water use on the planet. This research is part of an overall strategy to promote the sustainability assessment of buildings in the Spanish context, taking a first step focused on the assessment of environmental performance. The thread of the research starts from the need to establish a theoretical framework of sustainability that clarifies concepts and defines appropriate assessment criteria. As a next step, the research reviews the international panorama of regulations and voluntary instruments, with the aim of clarifying the fuzzy picture that currently characterizes sustainability in the building sector and of framing the research within existing policies and programmes. The main objective is to propose a methodology for assessing the environmental aspects and impacts associated with the life cycle of buildings, applicable to the Spanish context, as one of the three dimensions that constitute the pillars of sustainability. The assessment of social and economic aspects, for which no sufficiently consistent methodological definition currently exists, is also examined in order to provide a holistic view of the assessment. Before developing the proposal, the basic features and limitations of the Life Cycle Assessment (LCA) methodology are described, and the state of the art of LCA applied to buildings is then examined through a critical review of the research work developed in recent years. This review allows conclusions to be drawn about its degree of consistency with the future regulatory environment and identifies two priority needs for action: the need for harmonization, given the strong methodological inconsistencies detected, which prevent comparison of the results obtained in assessment studies; and the need for simplification, given the inherent complexity of the assessment, so that, while maintaining the utmost rigor, its practical application in the Spanish context becomes feasible. Participation in standardization work at the European level has provided a critical view of the methodological implications of the standards under development, identifying the roadmap that will shape the European scene in the coming years. The proposed methodology integrates the general principles of LCA with the methodological protocol established in the European standard, additionally considering the normative references of construction practice in the Spanish context. In formulating the proposal, the applicable simplifications have been analysed in order to make its implementation feasible, focusing efforts on systematizing the concept of the functional equivalent, establishing recommendations on the type of data according to their availability, and critically reviewing the models used to calculate environmental impacts. The methodological implications of the proposal are described through a series of case studies, which illustrate its feasibility and the basic characteristics of its application. Finally, the aspects identified as priorities in shaping the scenario of future prospects, research lines and lines of action are outlined.
Abstract:
The safety assessment of historic masonry structures is an open problem. The material is heterogeneous and anisotropic, the previous state of stress is hard to know and the boundary conditions are uncertain. In the early 1950s it was proven that limit analysis is applicable to this kind of structure, and it has been considered a suitable tool ever since. In cases where no sliding occurs, the application of the standard limit analysis theorems constitutes an excellent tool because of its simplicity and robustness. It is not necessary to know the actual state of stress: it is enough to find any equilibrium solution that satisfies the limit conditions of the material, in the certainty that its load will be equal to or lower than the actual load at the onset of collapse. Moreover, this load at the onset of collapse is unique (uniqueness theorem) and can be obtained as the optimum of either of a pair of dual convex mathematical programs. However, when mechanisms at the onset of collapse may involve sliding, any solution must satisfy both the static and the kinematic constraints, as well as a special kind of disjunctive constraints linking them, which can be formulated as complementarity constraints, 0 ≤ y ⊥ z ≥ 0. In this case the existence of a unique solution is not guaranteed, so other methods must be sought to treat the uncertainty associated with its multiplicity. In recent years, research has focused on finding an absolute minimum below which collapse is impossible. This approach is easy to formulate from a mathematical point of view but computationally intractable, owing to the complementarity constraints, which are neither convex nor smooth. The resulting decision problem is NP-complete (nondeterministic polynomial-time complete) and the corresponding global optimization problem is NP-hard. Nevertheless, obtaining one solution (without guarantee of success) is an affordable problem. This thesis proposes solving the problem by Successive Linear Programming, taking advantage of the special structure of the complementarity constraints, which written in bilinear form read y·z = 0, y ≥ 0, z ≥ 0, and of the fact that the complementarity error (in bilinear form) is an exact penalty function. When it comes to finding the worst solution, however, the equivalent global optimization problem is intractable (NP-hard). Furthermore, as long as no maximum or minimum principle is demonstrated, it is questionable whether the effort spent in approximating this minimum is justified. Chapter 5 obtains, for a simple example, the frequency distribution of the load factor over all possible solutions at the onset of collapse. To this end, solutions are sampled by the Monte Carlo method, using an exact polytope-computation method as a reference. The ultimate goal is to determine to what extent the search for the global minimum is justified and to propose an alternative, probability-based approach to safety assessment. The frequency distributions of the load factors obtained for the case studied show that both the maximum and the minimum load factors are very infrequent, the more so the more perfect and continuous the contact is. These results confirm the interest of developing new probabilistic methods. Chapter 6 proposes such a method, based on obtaining multiple solutions from random starting points and qualifying the results by means of order statistics, with the purpose of determining the probability of the onset of collapse for each solution. The method is applied (in line with the reduction of expectations proposed by Ordinal Optimization) to obtain a solution that lies within a given percentage of the worst ones. Finally, Chapter 7 proposes hybrid methods, incorporating metaheuristics, for the cases in which the search for the global minimum is justified.
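A minimal sketch (not the thesis code) of the order-statistics argument behind the Ordinal Optimization step: the number of independent random-restart solutions needed so that, with a chosen confidence, at least one of them falls within the worst alpha-fraction of all onset-of-collapse solutions.

import math

def restarts_needed(alpha, confidence):
    """Number of independent random-restart solutions required so that, with the
    given confidence, at least one sample lies in the worst `alpha` fraction.
    Follows from order statistics: P(at least one hit) = 1 - (1 - alpha)**n."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - alpha))

# Example: to land, with 95% confidence, inside the worst 5% of load factors
print(restarts_needed(alpha=0.05, confidence=0.95))   # -> 59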
Abstract:
Transverse galloping is a type of aeroelastic instability characterised by large-amplitude, low-frequency oscillation of a structure in the direction normal to the mean wind direction. It normally appears in bodies with small stiffness and structural damping, provided the incident flow velocity is high enough. In the simplest approach, transverse galloping can be considered as a one-degree-of-freedom oscillator subjected to aerodynamic forces, which in turn can be described using a quasi-steady approximation. In this frame it has been demonstrated that the hysteresis phenomenon in transverse galloping is related to the existence of inflection points in the curve describing the dependence on the angle of attack of the aerodynamic force coefficient normal to the incident flow. Aiming at experimentally checking such a relationship between these inflection points and hysteresis, wind-tunnel experiments have been conducted. The experiments have been restricted to bodies with isosceles triangular cross-sections, whose galloping behaviour is well documented. The experimental results show that, in accordance with theoretical predictions, hysteresis takes place at the angles of attack where there are inflection points in the lift-coefficient curve, provided that the body is prone to gallop at these angles of attack.
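For reference, a minimal quasi-steady one-degree-of-freedom formulation of the kind referred to in the abstract (a standard textbook form, not necessarily the authors' exact notation or sign convention) is

\[
m\,\ddot{y} + 2 m \zeta \omega_n \dot{y} + m \omega_n^2 y
   \;=\; \tfrac{1}{2}\,\rho U^2 b\, C_y(\alpha),
\qquad
\alpha = \arctan\!\bigl(\dot{y}/U\bigr),
\]

where C_y(alpha) is the measured aerodynamic force coefficient in the direction of the transverse motion. The classical Glauert-Den Hartog necessary condition for the onset of galloping at angle of attack alpha_0 is

\[
\left.\frac{\mathrm{d}C_L}{\mathrm{d}\alpha}\right|_{\alpha_0} + C_D(\alpha_0) < 0,
\]

while the hysteresis studied here is governed by the higher-order behaviour of C_y(alpha), namely its inflection points (d^2 C_y / d alpha^2 = 0).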
Abstract:
A unified solution framework is presented for one-, two- or three-dimensional complex non-symmetric eigenvalue problems, respectively governing linear modal instability of incompressible fluid flows in rectangular domains having two, one or no homogeneous spatial directions. The solution algorithm is based on subspace iteration in which the spatial discretization matrix is formed, stored and inverted serially. Results delivered by spectral collocation based on the Chebyshev-Gauss-Lobatto (CGL) points and by a suite of high-order finite-difference methods, comprising the Dispersion-Relation-Preserving (DRP) and Padé finite-difference schemes previously employed for this type of work, as well as the Summation-by-Parts (SBP) scheme and the new high-order finite-difference scheme of order q (FD-q), have been compared from the point of view of accuracy and efficiency in standard validation cases of temporal local and BiGlobal linear instability. The FD-q method has been found to significantly outperform all other finite-difference schemes in solving classic linear local, BiGlobal, and TriGlobal eigenvalue problems, as regards both memory and CPU-time requirements. The results shown in the present study disprove the paradigm that spectral methods are superior to finite-difference methods in terms of computational cost at equal accuracy, the FD-q spatial discretization delivering a speedup of O(10^4). Consequently, the three-dimensional (TriGlobal) eigenvalue problems may be solved accurately on typical desktop computers with modest computational effort.
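A minimal Python sketch of the Chebyshev-Gauss-Lobatto collocation ingredients mentioned above, using the standard construction of the collocation points and first-derivative matrix (Trefethen); the eigenvalue solver itself and the FD-q matrices are not reproduced here, and the resolution is a placeholder.

import numpy as np

def cheb(N):
    """Chebyshev-Gauss-Lobatto points and first-derivative matrix (standard
    Trefethen construction). Differentiation of data sampled at x is D @ data."""
    if N == 0:
        return np.zeros((1, 1)), np.ones(1)
    x = np.cos(np.pi * np.arange(N + 1) / N)           # CGL points on [-1, 1]
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))    # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                         # diagonal via row sums
    return D, x

# Sanity check: differentiate sin(pi x) on the CGL grid.
D, x = cheb(32)
err = np.max(np.abs(D @ np.sin(np.pi * x) - np.pi * np.cos(np.pi * x)))
print(f"max derivative error: {err:.2e}")               # spectrally small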
Abstract:
Fractal and multifractal are concepts that have grown increasingly popular in soil analysis in recent years, along with the development of fractal models. One of the common steps is to calculate the slope of a linear fit, commonly using the least-squares method. This should not be a special problem; however, in many situations with experimental data the researcher has to select the range of scales at which to work, neglecting the remaining points, in order to achieve the good linearity that this type of analysis requires. Robust regression is a form of regression analysis designed to circumvent some limitations of traditional parametric and non-parametric methods. With this method we do not have to assume that an outlier is simply an extreme observation drawn from the tail of a normal distribution that does not compromise the validity of the regression results. In this work we have evaluated the capacity of robust regression to select the points of the experimental data to be used, trying to avoid subjective choices. Based on this analysis we have developed a new working methodology that involves two basic steps (a sketch follows the list):
- Evaluation of the improvement of the linear fit when consecutive points are eliminated, based on R² and p-values; in this way we consider the implications of reducing the number of points.
- Evaluation of the significance of the difference between the slope fitted with the two extreme points and the slope fitted with the available points.
We compare the results of applying this methodology with those of the commonly used least-squares one. The data selected for these comparisons come from experimental soil-roughness transects and from simulations based on the midpoint-displacement method with added trends and noise. The results are discussed, indicating the advantages and disadvantages of each methodology.
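A minimal sketch of the idea, comparing an ordinary least-squares slope with a robust Theil-Sen slope (one possible robust estimator, not necessarily the one used in the paper) on synthetic log-log scaling data contaminated at the largest scales; the paper's consecutive-point elimination procedure is more elaborate.

import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(0)

# Synthetic log-log scaling data: true slope 1.6 over the scaling range,
# plus a few points that break linearity at the largest scales (placeholders).
log_scale = np.linspace(0.0, 3.0, 25)
log_measure = 1.6 * log_scale + 0.05 * rng.standard_normal(25)
log_measure[-4:] += np.array([0.3, 0.6, 0.9, 1.2])      # departure from scaling

# Ordinary least squares over all points.
ols_slope, ols_intercept = np.polyfit(log_scale, log_measure, 1)

# Robust Theil-Sen estimate (median of pairwise slopes), less sensitive
# to the points outside the scaling range.
ts_slope, ts_intercept, lo, hi = theilslopes(log_measure, log_scale)

print(f"OLS slope:       {ols_slope:.3f}")
print(f"Theil-Sen slope: {ts_slope:.3f} (95% CI {lo:.3f} to {hi:.3f})")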