845 results for physical non-linearity
Abstract:
Water distribution network optimization is a challenging problem due to the size and complexity of these systems. The field has been investigated by many authors since the second half of the twentieth century. Recently, to overcome the discrete nature of the variables and the non-linearity of the equations, research has focused on the development of heuristic algorithms. These algorithms do not require continuity or linearity of the problem functions, because they are linked to an external hydraulic simulator that solves the mass-continuity and energy-conservation equations of the network. In this work, NSGA-II (Non-dominated Sorting Genetic Algorithm II) has been used. This is a heuristic multi-objective genetic algorithm based on the analogy of evolution in nature. Starting from an initial random set of solutions, called the population, it evolves them towards a front of solutions that minimize all the objectives simultaneously. This can be very useful in practical problems, where multiple and conflicting goals are common. One of the main drawbacks of these algorithms is their computational cost: being a stochastic search, many solutions must be analysed before good ones are found. The results of this thesis on the classical optimal design problem show that it is possible to improve results by modifying the mathematical definition of the objective functions and the survival criterion, by inserting good solutions created by a cellular automaton, and by using rules created by a classifier algorithm (C4.5). This part has been tested using the version of NSGA-II supplied by the Centre for Water Systems (University of Exeter, UK) in the MATLAB® environment. Even though steering the search can constrain the algorithm, with the risk of not finding the optimal set of solutions, it can greatly improve the results.
Subsequently, with the support of CINECA, a version of NSGA-II has been implemented in C and parallelized: the results for the global parallelization show the speed-up, while the results for the island parallelization show that communication among islands can improve the optimization. Finally, some tests on the optimization of pump scheduling have been carried out. In this case, good results are found for a small network, while the solutions of a large problem suffer from the lack of constraints on the number of pump switches. Possible future research concerns the insertion of further constraints and the guidance of the evolution. In conclusion, the optimization of water distribution systems is still far from a definitive solution, but improvements in this field can be very useful in reducing the cost of solutions to practical problems, where the high number of variables makes their management very difficult from a human point of view.
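The non-dominated sorting that drives NSGA-II's survival criterion can be sketched in a few lines. This is a minimal illustration of Pareto dominance for minimization, not the Exeter MATLAB implementation used in the thesis; the objective values are illustrative.

```python
def dominates(a, b):
    """True if objective vector a dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated solutions, i.e. the first front of NSGA-II's sorting."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Two objectives to minimize, e.g. pipe cost and pressure deficit (made-up values).
population = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
front = pareto_front(population)  # (3, 4) and (5, 5) are dominated by (2, 3)
```

In the full algorithm, successive fronts are peeled off the population and survivors within the last accepted front are chosen by crowding distance.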
Abstract:
Nowadays, offshore wind turbines represent a valid option for energy production, but with increased costs mainly due to the foundation technology required. Hybrid foundations, composed of a suction caisson onto which a tower supporting the nacelle and the blades is welded, allow a strong reduction of costs. Here a monopod configuration is studied in a sandy soil at a water depth of 10 m. Bearing capacity, sliding resistance and pull-out resistance are evaluated. In a second part, the installation process, which occurs in four steps, is analysed, considering also the effect of stress enhancement due to the frictional forces opposing penetration, which grow on both the inside and the outside of the skirt. In a three-dimensional finite element model built with Straus7, the soil non-linearity is treated in an approximate way through an iterative procedure based on the Yokota empirical decay curves.
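The iterative treatment of soil non-linearity described above can be sketched as a secant-stiffness fixed-point loop. Here a generic hyperbolic decay curve stands in for the Yokota empirical curves, and all parameter values are illustrative.

```python
def equivalent_linear(load, G0, gref=1e-3, tol=1e-8, max_iter=100):
    """Equivalent-linear iteration: update the secant modulus from a decay curve
    until modulus and strain are mutually consistent.
    Hyperbolic decay G(gamma) = G0 / (1 + gamma / gref) stands in for the
    Yokota relations. Returns (converged modulus, strain)."""
    G = G0
    for _ in range(max_iter):
        gamma = load / G                    # strain implied by current stiffness
        G_new = G0 / (1.0 + gamma / gref)   # modulus decayed to that strain level
        if abs(G_new - G) < tol * G0:
            return G_new, load / G_new
        G = G_new
    return G, load / G

G, gamma = equivalent_linear(load=1.0, G0=1e4)  # converges to G = 9000 here
```

The loop is the one-element analogue of the procedure applied element-by-element in the 3-D model.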
Abstract:
Heat treatment of steels is a process of fundamental importance in tailoring the properties of a material to the desired application; a model able to describe such a process would make it possible to predict the microstructure obtained from the treatment and the consequent mechanical properties of the material. During a heat treatment, a steel can undergo two different kinds of phase transitions [p.t.]: diffusive (second-order p.t.) and displacive (first-order p.t.). In this thesis an attempt is made to describe both in a thermodynamically consistent framework: a phase-field, diffuse-interface model accounting for the coupling between thermal, chemical and mechanical effects is developed, and a way to overcome the difficulties arising from the treatment of the non-local effects (gradient terms) is proposed. The governing equations are the balance of linear momentum, the Cahn-Hilliard equation and the balance of internal energy. The model is completed with a suitable description of the free energy, from which the constitutive relations are drawn. The equations are then cast in variational form, and different numerical techniques are used to deal with the principal features of the model: time dependency, non-linearity and the presence of high-order spatial derivatives. Simulations are performed using DOLFIN, a C++ library for the automated solution of partial differential equations by means of the finite element method; results are shown for different test cases. The analysis is restricted to a two-dimensional setting, which is simpler than a three-dimensional one but still meaningful.
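As a concrete illustration of the kind of equation involved, one explicit Euler step of the one-dimensional Cahn-Hilliard equation with a double-well free energy can be written as follows. This finite-difference sketch is far simpler than the thesis's coupled finite element model in DOLFIN; grid size and time step are illustrative.

```python
import numpy as np

def cahn_hilliard_step(c, dt=1e-6, dx=1.0/32, eps=0.05, M=1.0):
    """One explicit Euler step of c_t = M * (mu)_xx with periodic boundaries,
    where mu = f'(c) - eps^2 * c_xx and f(c) = (c^2 - 1)^2 / 4 is a double well."""
    lap = lambda u: (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    mu = c**3 - c - eps**2 * lap(c)   # chemical potential
    return c + dt * M * lap(mu)

# The scheme conserves total concentration, as Cahn-Hilliard dynamics must.
x = np.linspace(0.0, 1.0, 32, endpoint=False)
c0 = 0.1 * np.cos(2.0 * np.pi * x)
c = c0.copy()
for _ in range(50):
    c = cahn_hilliard_step(c)
```

The fourth-order spatial operator is what forces the very small explicit time step and motivates the implicit, variational treatment used in the thesis.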
Abstract:
The literature on the erosive potential of drinks and other products is summarised, and aspects of the conduct of screening tests as well as possible correlations of the erosive potential with various solution parameters are discussed. The solution parameters that have been suggested as important include pH, acid concentration (with respect to buffer capacity and concentration of undissociated acid), degree of saturation, calcium and phosphate concentrations, and inhibitors of erosion. Based on the available data, it is concluded that the dominant factor in erosion is pH. The effect of buffer capacity seems to be pH dependent. The degree of saturation probably has a non-linear relationship with erosion. While calcium at elevated concentrations is known to reduce erosion effectively, it is not known whether it is important at naturally occurring concentrations. Fluoride at naturally occurring concentrations is inversely correlated with erosive potential, but phosphate is probably not. Natural plant gums, notably pectin, do not inhibit erosion, so they are unlikely to interfere with the prediction of erosive potential. The non-linearity of some solution factors and interactions with pH need to be taken into account when developing multivariate models for predicting the erosive potential of different solutions. Finally, the erosive potential of solutions towards enamel and dentine might differ.
Abstract:
The mechanisms of Ar release from K-feldspar samples in laboratory experiments and during their geological history are assessed here. Modern petrology clearly established that the chemical and isotopic record of minerals is normally dominated by aqueous recrystallization. The laboratory critique is trickier, which explains why so many conflicting approaches have been able to survive long past their expiration date. Current models are evaluated for self-consistency; in particular, Arrhenian non-linearity leads to paradoxes. The models' testable geological predictions suggest that temperature-based downslope extrapolations often substantially overestimate observed geological Ar mobility. An updated interpretation is based on the unrelatedness of geological behaviour to laboratory experiments. The isotopic record of K-feldspar in geological samples is not a unique function of temperature, as recrystallization promoted by aqueous fluids is the predominant mechanism controlling isotope transport. K-feldspar should therefore be viewed as a hygrochronometer. Laboratory degassing proceeds from structural rearrangements and phase transitions such as are observed in situ at high temperature in Na and Pb feldspars. These effects violate the mathematics of an inert Fick's Law matrix and preclude downslope extrapolation. The similar upward-concave, non-linear shapes of the Arrhenius trajectories of many silicates, hydrous and anhydrous, are likely common manifestations of structural rearrangements in silicate structures.
Abstract:
Once seen as anomalous, facilitative interactions among plants and their importance for community structure and functioning are now widely recognized. The growing body of modelling, descriptive and experimental studies on facilitation covers a wide variety of terrestrial and aquatic systems throughout the globe. However, the lack of a general body of theory linking facilitation among different types of organisms and biomes and their responses to environmental changes prevents further advances in our knowledge regarding the evolutionary and ecological implications of facilitation in plant communities. Moreover, insights gathered from alternative lines of inquiry may substantially improve our understanding of facilitation, but these have been largely neglected thus far. Despite over 15 years of research and debate on this topic, there is no consensus on the degree to which plant–plant interactions change predictably along environmental gradients (i.e. the stress-gradient hypothesis), and this hinders our ability to predict how plant–plant interactions may affect the response of plant communities to ongoing global environmental change. The existing controversies regarding the response of plant–plant interactions across environmental gradients can be reconciled when clearly considering and determining the species-specificity of the response, the functional or individual stress type, and the scale of interest (pairwise interactions or community-level response). Here, we introduce a theoretical framework to do this, supported by multiple lines of empirical evidence. We also discuss current gaps in our knowledge regarding how plant–plant interactions change along environmental gradients. 
These include the existence of thresholds in the amount of species-specific stress that a benefactor can alleviate, the linearity or non-linearity of the response of pairwise interactions across distance from the ecological optimum of the beneficiary, and the need to explore further how frequent interactions among multiple species are and how they change across different environments. We review the latest advances in these topics and provide new approaches to fill current gaps in our knowledge. We also apply our theoretical framework to advance our knowledge on the evolutionary aspects of plant facilitation, and the relative importance of facilitation, in comparison with other ecological processes, for maintaining ecosystem structure, functioning and dynamics. We build links between these topics and related fields, such as ecological restoration, woody encroachment, invasion ecology, ecological modelling and biodiversity–ecosystem-functioning relationships. By identifying commonalities and insights from alternative lines of research, we further advance our understanding of facilitation and provide testable hypotheses regarding the role of (positive) biotic interactions in the maintenance of biodiversity and the response of ecological communities to ongoing environmental changes.
Abstract:
It is still an open question how equilibrium warming in response to increasing radiative forcing - the specific equilibrium climate sensitivity S - depends on background climate. We here present palaeodata-based evidence on the state dependency of S, by using CO2 proxy data together with a 3-D ice-sheet-model-based reconstruction of land ice albedo over the last 5 million years (Myr). We find that the land ice albedo forcing depends non-linearly on the background climate, while any non-linearity of CO2 radiative forcing depends on the CO2 data set used. This non-linearity has not, so far, been accounted for in similar approaches due to previously more simplistic approximations, in which land ice albedo radiative forcing was a linear function of sea level change. The latitudinal dependency of ice-sheet area changes is important for the non-linearity between land ice albedo and sea level. In our set-up, in which the radiative forcing of CO2 and of the land ice albedo (LI) is combined, we find a state dependence in the calculated specific equilibrium climate sensitivity, S[CO2,LI], for most of the Pleistocene (last 2.1 Myr). During Pleistocene intermediate glaciated climates and interglacial periods, S[CO2,LI] is on average ~ 45 % larger than during Pleistocene full glacial conditions. In the Pliocene part of our analysis (2.6-5 Myr BP) the CO2 data uncertainties prevent a well-supported calculation for S[CO2,LI], but our analysis suggests that during times without a large land ice area in the Northern Hemisphere (e.g. before 2.82 Myr BP), the specific equilibrium climate sensitivity, S[CO2,LI], was smaller than during interglacials of the Pleistocene. We thus find support for a previously proposed state change in the climate system with the widespread appearance of northern hemispheric ice sheets. 
This study points for the first time to a so far overlooked non-linearity in the land ice albedo radiative forcing, which is important for similar palaeodata-based approaches to calculate climate sensitivity. However, the implications of this study for a suggested warming under CO2 doubling are not yet entirely clear since the details of necessary corrections for other slow feedbacks are not fully known and the uncertainties that exist in the ice-sheet simulations and global temperature reconstructions are large.
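The quantity S discussed above is, in essence, a ratio of temperature change to radiative forcing. A minimal sketch follows, assuming the standard logarithmic CO2 forcing fit; the coefficient 5.35 W m^-2 is the commonly used Myhre-style value, and the numbers plugged in are illustrative, not data from this study.

```python
import math

def co2_forcing(co2_ppm, co2_ref=278.0, alpha=5.35):
    """Logarithmic CO2 radiative forcing in W m^-2 (standard empirical fit)."""
    return alpha * math.log(co2_ppm / co2_ref)

def specific_sensitivity(dT, dR_co2, dR_landice):
    """S[CO2,LI]: warming per unit of combined CO2 and land-ice albedo
    radiative forcing, in K per W m^-2."""
    return dT / (dR_co2 + dR_landice)

# Illustrative glacial-to-interglacial numbers:
dR = co2_forcing(280.0, co2_ref=190.0)   # deglacial CO2 rise, ~2.1 W m^-2
S = specific_sensitivity(5.0, dR, 3.0)   # 5 K warming, 3 W m^-2 ice-albedo forcing
```

The paper's point is that the land-ice term in the denominator is itself a non-linear function of the background climate, so S computed this way is state-dependent.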
Abstract:
The Baltic Sea has experienced three major intervals of bottom water hypoxia following the intrusion of seawater ca. 8 kyrs ago. These intervals occurred during the Holocene Thermal Maximum (HTM), Medieval Climate Anomaly (MCA) and during recent decades. Here, we show that sequestration of both Fe and Mn in Baltic Sea sediments generally increases with water depth, and we attribute this to shelf-to-basin transfer ("shuttling") of Fe and Mn. Burial of Mn in slope and basin sediments was enhanced following the lake-brackish/marine transition at the beginning of the hypoxic interval during the HTM. During hypoxic intervals, shelf-to-basin transfer of Fe was generally enhanced but that of Mn was reduced. However, intensification of hypoxia within hypoxic intervals led to decreased burial of both Mn and Fe in deep basin sediments. This implies a non-linearity in shelf Fe release upon expanding hypoxia with initial enhanced Fe release relative to oxic conditions followed by increased retention in shelf sediments, likely in the form of iron sulfide minerals. For Mn, extended hypoxia leads to more limited sequestration as Mn carbonate in deep basin sediments, presumably because of more rapid reduction of Mn oxides formed after inflows and subsequent escape of dissolved Mn to the overlying water. Our Fe records suggest that modern Baltic Sea hypoxia is more widespread than in the past. Furthermore, hypoxia-driven variations in shelf-to-basin transfer of Fe may have impacted the dynamics of P and sulfide in the Baltic Sea thus providing potential feedbacks on the further development of hypoxia.
Abstract:
Nowadays, Computational Fluid Dynamics (CFD) solvers are widely used within industry to model fluid flow phenomena. Several fluid flow model equations have been employed over the last decades to simulate and predict the forces acting, for example, on different aircraft configurations. Computational time and accuracy depend strongly on the fluid flow model equation and on the spatial dimension of the problem considered. While simple models based on perfect flows, like panel methods or potential flow models, can be very fast to solve, they usually suffer from poor accuracy when simulating real flows (transonic, viscous). On the other hand, more complex models such as the full Navier-Stokes equations provide high-fidelity predictions, but at a much higher computational cost. Thus, a good compromise between accuracy and computational time has to be found for engineering applications. A discretisation technique widely used within industry is the so-called Finite Volume approach on unstructured meshes. This technique spatially discretises the flow motion equations onto a set of elements which form a mesh, a discrete representation of the continuous domain. Using this approach, for a given flow model equation, the accuracy and computational time mainly depend on the distribution of the nodes forming the mesh. Therefore, a good compromise between accuracy and computational time might be obtained by carefully defining the mesh. However, defining an optimal mesh for complex flows and geometries requires a very high level of expertise in fluid mechanics and numerical analysis, and in most cases simply guessing which regions of the computational domain affect the accuracy most is impossible. Thus, it is desirable to have an automated remeshing tool, which is more flexible with unstructured meshes than with their structured counterpart. However, the adaptive methods currently in use still leave an open question: how to drive the adaptation efficiently?
Pioneering sensors based on flow features generally suffer from a lack of reliability, so in the last decade more effort has been put into developing numerical-error-based sensors, such as adjoint-based adaptation sensors. While very efficient at adapting meshes for a given functional output, the latter method is very expensive, as it requires solving a dual set of equations and computing the sensor on an embedded mesh. Therefore, it would be desirable to develop a more affordable numerical error estimation method. The current work aims at estimating the truncation error, which arises when discretising a partial differential equation and consists of the higher-order terms neglected in the construction of the numerical scheme. The truncation error provides very useful information, as it is strongly related to the flow model equation and its discretisation. On the one hand, it is a very reliable measure of the quality of the mesh, and therefore very useful for driving a mesh adaptation procedure. On the other hand, it is strongly linked to the flow model equation, so that a careful estimation actually gives information on how well a given equation is solved, which may be useful in the context of τ-extrapolation or zonal modelling. The work is organized as follows: Chap. 1 contains a short review of mesh adaptation techniques as well as numerical error prediction. In the first section, Sec. 1.1, the basic refinement strategies are reviewed and the main contributions to structured and unstructured mesh adaptation are presented. Sec. 1.2 introduces the definitions of the errors encountered when solving Computational Fluid Dynamics problems and reviews the most common approaches to predict them. Chap. 2 is devoted to the mathematical formulation of truncation error estimation in the context of the finite volume methodology, as well as to a complete verification procedure.
Several features are studied, such as the influence of grid non-uniformities, non-linearity, boundary conditions and non-converged numerical solutions. This verification part has been submitted and accepted for publication in the Journal of Computational Physics. Chap. 3 presents a mesh adaptation algorithm based on truncation error estimates and compares the results to a feature-based and an adjoint-based sensor (in collaboration with Jorge Ponsín, INTA). Two- and three-dimensional cases relevant for validation in the aeronautical industry are considered. This part has been submitted and accepted in the AIAA Journal. An extension to the Reynolds-Averaged Navier-Stokes equations is also included, where τ-estimation-based mesh adaptation and τ-extrapolation are applied to viscous wing profiles. The latter has been submitted to the Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering. Keywords: mesh adaptation, numerical error prediction, finite volume
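The local truncation error estimated in this work can be illustrated on the simplest possible case: inserting a smooth function into the central second-difference operator for u'' = f and measuring what is left over. This is only a model-problem sketch; the thesis works with finite volume discretisations of flow equations, not this 1-D operator.

```python
import math

def truncation_error(u, f, x, h):
    """Residual left when the exact solution of u'' = f is inserted into the
    central-difference operator: tau = (u(x-h) - 2 u(x) + u(x+h)) / h^2 - f(x).
    For smooth u this behaves like (h^2 / 12) * u''''(x), i.e. second order."""
    return (u(x - h) - 2.0 * u(x) + u(x + h)) / h**2 - f(x)

# u = sin solves u'' = -sin; halving h should divide tau by about 4.
tau_h  = truncation_error(math.sin, lambda x: -math.sin(x), 1.0, 0.10)
tau_h2 = truncation_error(math.sin, lambda x: -math.sin(x), 1.0, 0.05)
```

In an adaptation loop, cells where this residual is largest are the ones marked for refinement; in τ-extrapolation, the residual is reused as a source term to correct the solution.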
Abstract:
The objective of this paper is to design a path-following control system for a car-like mobile robot using classical linear control techniques, so that it adapts on-line to varying conditions during the trajectory-following task. The main advantage of the proposed control structure is that well-known linear control theory can be applied to calculate the PID controllers that fulfil the control requirements, while at the same time it is flexible enough to be applied under the non-linear, changing conditions of the path-following task. For this purpose, the Frenet-frame kinematic model of the robot is linearised at a varying working point that is calculated as a function of the actual velocity, the path curvature and the kinematic parameters of the robot, yielding a transfer function that varies along the trajectory. The proposed controller is formed by the combination of an adaptive PID and a feed-forward controller, which varies according to the working conditions and compensates the non-linearity of the system. The good features and flexibility of the proposed control structure have been demonstrated through realistic simulations that include both the kinematics and the dynamics of the car-like robot.
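The structure described, a PID whose gains are rescheduled at each working point plus a feed-forward term tied to the path curvature, can be sketched as follows. The scheduling law and all gains here are illustrative placeholders, not the paper's linearised Frenet-frame design.

```python
class AdaptivePID:
    """PID with gains rescheduled from the current working point (velocity,
    path curvature) plus a curvature feed-forward term. Illustrative only."""

    def __init__(self, kp=1.0, ki=0.1, kd=0.05):
        self.base = (kp, ki, kd)
        self.integral = 0.0
        self.prev_err = 0.0

    def schedule(self, v, curvature):
        # Placeholder scheduling law: soften gains at speed, stiffen in curves.
        s = (1.0 + abs(curvature)) / (1.0 + 0.1 * v)
        kp, ki, kd = self.base
        return kp * s, ki * s, kd * s

    def step(self, err, v, curvature, dt=0.01):
        kp, ki, kd = self.schedule(v, curvature)
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        feedforward = v * curvature   # steering action demanded by the path itself
        return kp * err + ki * self.integral + kd * deriv + feedforward
```

The feed-forward term handles the known demand of the reference path, leaving the PID to correct only the residual tracking error at the current working point.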
Abstract:
Output bits from an optical logic cell present noise due to the type of technique used to obtain the Boolean functions of two input data bits. We have simulated the behavior of an optically programmable logic cell working with Fabry-Perot laser diodes of the same type employed in optical communications (1550 nm) but working here as amplifiers. We report in this paper a study of the bit noise generated by the optical non-linearity process that allows the Boolean operation on two optical input data signals. Two types of optical logic cells are analyzed. The first shows a classical "on-off" behavior, with the LD amplifier operating in transmission; the second is a more complicated configuration with two LD amplifiers, one working in transmission and the other in reflection mode. This last configuration has a non-linear behavior emulating SEED-like properties. In both cases, depending on the value of the "1" input data signals to be processed, a different logic function can be obtained. A CW signal, known as the control signal, may also be applied to set the type of logic function. The signal-to-noise ratio is analyzed for different parameters, such as the signal wavelengths and the hysteresis regions associated with the device, in relation to the applied signal power levels. With this study we try to obtain a better understanding of the possible effects present in an optical logic gate based on laser diodes.
Abstract:
The aim of this work is to present the current state of the possible relations between the behaviour of the logic cells employed in Optical Computing and some non-linear behaviours obtained in complex systems. As will be shown, the architectures employed in computing systems, and more specifically the basic units of which they are composed, can give rise to situations not foreseen in advance and, as a consequence, generate processes other than those initially intended. In particular, it will be shown how chaotic behaviour can be obtained from a logic cell. This study is extended to the analysis of biological neural networks and to their possible modelling with the above techniques. As a concrete case study, a simulation of the vertebrate retina, obtained by means of the logic cells presented above, is provided.
Abstract:
Las pilas de los puentes son elementos habitualmente verticales que, generalmente, se encuentran sometidos a un estado de flexión compuesta. Su altura significativa en muchas ocasiones y la gran resistencia de los materiales constituyentes de estos elementos – hormigón y acero – hace que se encuentren pilas de cierta esbeltez en la que los problemas de inestabilidad asociados al cálculo en segundo orden debido a la no linealidad geométrica deben ser considerados. Además, la mayoría de las pilas de nuestros puentes y viaductos están hechas de hormigón armado por lo que se debe considerar la fisuración del hormigón en las zonas en que esté traccionado. Es decir, el estudio del pandeo de pilas esbeltas de puentes requiere también la consideración de un cálculo en segundo orden mecánico, y no solo geométrico. Por otra parte, una pila de un viaducto no es un elemento que pueda considerarse como aislado; al contrario, su conexión con el tablero hace que aparezca una interacción entre la propia pila y aquél que, en cierta medida, supone una cierta coacción al movimiento de la propia cabeza de pila. Esto hace que el estudio de la inestabilidad de una pila esbelta de un puente no puede ser resuelto con la “teoría del pandeo de la pieza aislada”. Se plantea, entonces, la cuestión de intentar definir un procedimiento que permita abordar el problema complicado del pandeo de pilas esbeltas de puentes pero empleando herramientas de cálculo no tan complejas como las que resuelven “el pandeo global de una estructura multibarra, teniendo en cuenta todas las no linealidades, incluidas las de las coacciones”. 
Es decir, se trata de encontrar un procedimiento, que resulta ser iterativo, que resuelva el problema planteado de forma aproximada, pero suficientemente ajustada al resultado real, pero empleando programas “convencionales” de cálculo que sean capaces de : - por una parte, en la estructura completa: o calcular en régimen elástico lineal una estructura plana o espacial multibarra compleja; - por otra, en un modelo de una sola barra aislada: o considerar las no linealidades geométricas y mecánicas a nivel tensodeformacional, o considerar la no linealidad producida por la fisuración del hormigón, o considerar una coacción “elástica” en el extremo de la pieza. El objeto de este trabajo es precisamente la definición de ese procedimiento iterativo aproximado, la justificación de su validez, mediante su aplicación a diversos casos paramétricos, y la presentación de sus condicionantes y limitaciones. Además, para conseguir estos objetivos se han elaborado unos ábacos de nueva creación que permiten estimar la reducción de rigidez que supone la fisuración del hormigón en secciones huecas monocajón de hormigón armado. También se han creado unos novedosos diagramas de interacción axil-flector válidos para este tipo de secciones en flexión biaxial. Por último, hay que reseñar que otro de los objetivos de este trabajo – que, además, le da título - era cuantificar el valor de la coacción que existe en la cabeza de una pila debido a que el tablero transmite las cargas de una pila al resto de los integrantes de la subestructura y ésta, por tanto, colabora a reducir los movimientos de la cabeza de pila en cuestión. Es decir, la cabeza de una pila no está exenta lo cual mejora su comportamiento frente al pandeo. El régimen de trabajo de esta coacción es claramente no lineal, ya que la rigidez de las pilas depende de su grado de fisuración. Además, también influye cómo las afecta la no linealidad geométrica que, para la misma carga, aumenta la flexión de segundo orden de cada pila. 
This document defines the value of this restraint and how it should be calculated, and verifies its agreement with the results obtained from the complete non-linear model. Bridge piers are vertical elements in which both axial loads and bending moments must be considered. They are often tall, and the strength of the materials they are made of (concrete and steel) is also high. This means that slender piers are very common, so the instabilities produced by second-order effects due to geometric non-linearity must be considered. In addition, the piers are usually made of reinforced concrete, so the effects of cracking of the concrete should also be evaluated. That is, the analysis of the instabilities of the piers of a bridge should consider both the mechanical and the geometric non-linearities. Moreover, the pier of a bridge is not an isolated element; quite the opposite: the connection of the pier to the deck means that the movements of the top of the pier are reduced compared with the situation of a free end at the top of the pier. This connection between the pier and the deck is the reason why the instability of the pier cannot be analysed with the method for the buckling of a single compressed member. The question therefore arises of defining an approximate method for analysing the buckling of the slender piers of a bridge that uses software less complex than what is needed to analyse the global buckling of a multi-beam structure considering all the non-linearities, including those of the restraints. The goal is thus to find a procedure for analysing this complex buckling problem using a simplified method. This method could be an iterative (step-by-step) procedure, accurate enough, relying on "conventional" software with the following capabilities: - For the analysis of the global structure: the ability to calculate a multi-beam structure using linear elastic analysis.
- For the analysis of a single-beam model: the ability to take into account the geometric and mechanical non-linearities at the stress-strain level; the ability to take into account the cracking of the concrete; and the ability to use partially stiff constraints (elastic springs) at the ends of the elements. One of the objectives of this document is precisely to define this simplified methodology, to justify the accuracy of the proposed procedure by applying it to several different bridges, and to present the exclusions and limitations of the proposed method. In addition, new charts have been created for calculating the reduction of the stiffness of hollow cross-sections made of reinforced concrete. New charts for calculating the reinforcement of hollow cross-sections under biaxial bending moments are also included in the document. Finally, another aim of the document, as stated in its title, is to define the value of the restraint at the top of the pier arising from the connection of the pier to the deck and, through it, to the other piers. That is, the top of the pier is not the free end of a beam, so the buckling resistance of the pier is significantly improved. This restraint is non-linear, because the stiffness of each pier depends on its level of cracking. Additionally, the geometric non-linearity must be considered, as there is an amplification of the bending moments due to the increased movements of the top of the pier. This document defines how this restraint is to be calculated; the accuracy of the calculation is also evaluated by comparing the final results with those of the complete non-linear analysis.
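The iterative coupling the abstract describes (linear global model supplies an equivalent spring at the pier head; a single cracked pier is then analysed with that spring; the pier's reduced stiffness feeds back into the global model) can be sketched as a fixed-point loop. The following is a minimal illustrative sketch, not the thesis' actual method: the spring model, the cracking law, and every number are invented assumptions, and the geometric second-order effect is approximated with the classical moment-amplification factor.

```python
import math

def spring_from_global_model(ei_eff, ei_gross, k_base=5.0e4):
    """Toy head restraint (kN/m) offered by deck + substructure:
    assumed proportional to the piers' current effective stiffness."""
    return k_base * (ei_eff / ei_gross)

def pier_top_deflection(P, H, L, EI, k_spring):
    """Top deflection of a cantilever pier (axial P, horizontal H) with an
    elastic spring at the head; geometric non-linearity approximated with
    the classical 1/(1 - P/Pcr) amplification factor."""
    p_cr = math.pi**2 * EI / (2.0 * L)**2           # Euler load, free-head cantilever
    delta_1 = H * L**3 / (3.0 * EI)                 # first-order cantilever deflection
    k_pier = 3.0 * EI / L**3                        # pier's own lateral stiffness
    delta = delta_1 * k_pier / (k_pier + k_spring)  # spring shares the load
    return delta / (1.0 - P / p_cr)                 # second-order amplification

def cracked_ei(ei_gross, delta, L, floor=0.5):
    """Toy cracking law: effective EI drops with drift, floored at 50 %."""
    return max(floor * ei_gross, ei_gross * (1.0 - 50.0 * delta / L))

def solve_pier(P=2000.0, H=100.0, L=30.0, ei_gross=8.0e6,
               tol=1e-8, max_iter=100):
    """Iterate until head deflection, spring and cracked EI are consistent."""
    ei, delta, k = ei_gross, 0.0, 0.0
    for _ in range(max_iter):
        k = spring_from_global_model(ei, ei_gross)       # global (linear) step
        new_delta = pier_top_deflection(P, H, L, ei, k)  # single-pier step
        ei = cracked_ei(ei_gross, new_delta, L)          # update cracking
        if abs(new_delta - delta) < tol:
            break
        delta = new_delta
    return delta, ei, k

delta, ei, k = solve_pier()
print(f"head deflection = {delta:.5f} m, EI_eff/EI = {ei/8.0e6:.3f}, k = {k:.0f} kN/m")
```

The loop converges quickly here because the toy cracking law is mild; in a real analysis each step would be a full structural computation, but the fixed-point structure is the same.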
Abstract:
This research reflects on questions of contemporary ethics in advertising aimed at the female audience. The discussion of these questions focuses on the deontological (conviction-based) strand. The aim of the study is to investigate how advertisements published in the magazines Claudia and Nova articulate questions of this ethics. Thus, content analysis was used to verify whether the advertisements followed the principles contained in the Brazilian advertising self-regulation code (Código Brasileiro de Auto-Regulamentação Publicitária). In a second stage, discourse analysis was used to investigate how the advertisements were constructed from the standpoint of ethics and of women in today's society. It was concluded that the representations of deontological ethics in advertising aimed at women occur in a non-linear and fragmented way. The non-linearity refers to the non-compliance with ethical principles by some of the advertisements analysed. The fragmentation concerns the way women are portrayed and the way products are promoted in the advertisements, based on different standards of conduct (principles) and on diverse values. At times the advertisements present the products truthfully or not; at times the women appear under an approach based on contemporary values or on traditional values, in differentiated ways.(AU)