914 results for Linear boundary value control problems


Relevance: 100.00%

Abstract:

We aim at understanding the multislip behaviour of metals subject to irreversible deformations at small scales. By focusing on the simple shear of a constrained single-crystal strip, we show that discrete Dislocation Dynamics (DD) simulations predict a strong latent hardening size effect, with smaller being stronger in the range [1.5 µm, 6 µm] for the strip height. We attempt to represent the DD pseudo-experimental results by developing a flow theory of Strain Gradient Crystal Plasticity (SGCP), involving both energetic and dissipative higher-order terms and, as a main novelty, a strain gradient extension of the conventional latent hardening. To assess the capability of the proposed SGCP theory, we implement it in a Finite Element (FE) code and set its material parameters on the basis of the DD results. The SGCP FE code is developed specifically for the boundary value problem under study, so that a fully implicit (backward Euler) consistent algorithm can be implemented. Special emphasis is placed on the role of the material length scales involved in the SGCP model, from both the mechanical and the numerical points of view.
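The abstract above mentions a fully implicit (backward Euler) consistent algorithm for the SGCP flow rule. The sketch below is not the paper's algorithm; it only illustrates, for a hypothetical single-slip power-law viscoplastic law with placeholder parameters (g0, m, gdot0), how a backward Euler update can be closed with Newton iteration on the slip increment.

```python
import numpy as np

def backward_euler_slip(tau_trial, mu, dt, g0=50.0, m=0.05, gdot0=1e-3,
                        tol=1e-12, max_iter=50):
    """One fully implicit (backward Euler) update of the slip increment dgamma
    for a single-slip, power-law viscoplastic flow rule:
        tau = tau_trial - mu*dgamma
        dgamma = dt*gdot0*sign(tau)*(|tau|/g0)**(1/m)
    solved with Newton iteration on the residual. All material parameters here
    are illustrative placeholders, not the calibration used in the paper."""
    dgamma = 0.0
    for _ in range(max_iter):
        tau = tau_trial - mu * dgamma                       # relaxed resolved stress
        r = dgamma - dt * gdot0 * np.sign(tau) * (abs(tau) / g0) ** (1.0 / m)
        if abs(r) < tol:
            break
        # derivative of the residual w.r.t. dgamma (algorithmic tangent)
        dr = 1.0 + dt * gdot0 * (1.0 / m) * (abs(tau) / g0) ** (1.0 / m - 1.0) * mu / g0
        dgamma -= r / dr
    return dgamma
```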

Relevance: 100.00%

Abstract:

The mechanical behavior of granular materials has traditionally been approached through two theoretical and computational frameworks: macromechanics and micromechanics. Macromechanics focuses on continuum-based models: the matter in the granular material is assumed to be homogeneous and continuously distributed over its volume, so that the smallest element cut from the body possesses the same physical properties as the body. In particular, it has equivalent mechanical properties, represented by complex, nonlinear constitutive relationships. Engineering problems are usually solved using computational methods such as FEM or FDM. Micromechanics, on the other hand, is the analysis of heterogeneous materials at the level of their individual constituents. In granular materials, if the properties of the particles are known, a micromechanical approach can lead to a predictive response of the whole heterogeneous material. Two classes of numerical techniques can be distinguished: computational micromechanics, which applies continuum mechanics to each of the phases of a representative volume element and then solves the resulting equations numerically, and atomistic methods (DEM), which apply rigid body dynamics together with interaction potentials to the particles. Statistical mechanics approaches sit between micromechanics and macromechanics: they seek to state what the expected macroscopic properties of a granular system are, starting from a micromechanical analysis of the features of the particles and their interactions. The main objective of this paper is to introduce this approach.
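As a minimal illustration of the atomistic (DEM) approach mentioned above, the sketch below advances two rigid spheres that interact through a linear normal spring while they overlap. The contact law, parameter values and integrator are illustrative assumptions, not the paper's model.

```python
import numpy as np

def dem_two_spheres(x1, x2, v1, v2, radius=0.01, mass=1e-3, kn=1e4,
                    dt=1e-6, steps=10000):
    """Minimal DEM sketch: two rigid spheres that interact through a linear
    normal spring while they overlap, advanced with symplectic Euler steps.
    All parameter values are illustrative, not taken from the paper."""
    x1, x2, v1, v2 = (np.array(a, dtype=float) for a in (x1, x2, v1, v2))
    for _ in range(steps):
        d = x2 - x1
        dist = np.linalg.norm(d)
        overlap = 2.0 * radius - dist
        if overlap > 0.0 and dist > 0.0:        # spheres in contact
            f = kn * overlap * (d / dist)       # repulsive force on sphere 2
        else:
            f = np.zeros_like(d)
        v1 -= f / mass * dt                     # Newton's third law on sphere 1
        v2 += f / mass * dt
        x1 += v1 * dt                           # positions updated with new velocities
        x2 += v2 * dt
    return x1, x2, v1, v2

# usage: two 1 cm spheres approaching head-on along x
print(dem_two_spheres([0.0, 0.0, 0.0], [0.025, 0.0, 0.0],
                      [0.5, 0.0, 0.0], [-0.5, 0.0, 0.0]))
```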

Relevance: 100.00%

Abstract:

Over the past few years, the common practice in air traffic management has been for commercial aircraft to fly along a set of predefined routes to reach their destination. Aircraft operators are now requesting more flexibility to fly according to their preferences, in order to achieve their business objectives. For this reason, much research effort is being invested in developing techniques to evaluate optimal aircraft trajectories and traffic synchronisation. A further issue is the inefficient use of the airspace caused by the use of barometric altitude, especially in the landing and takeoff phases and in Continuous Descent Approach (CDA) trajectories, where the appropriate reference setting (QNH or QFE) must currently be introduced. The interest of this research arises from the need to address this problem and to permit better airspace management. The main goals are to evaluate the impact, weaknesses and strengths of using geometric altitude instead of barometric altitude. In addition, this dissertation proposes the design of a simplified trajectory simulator able to predict aircraft trajectories. The model is based on a three-degrees-of-freedom point-mass aircraft model that can use aircraft performance data from the Base of Aircraft Data together with meteorological information. A feature of this trajectory simulator is its support for improving strategic and pre-tactical trajectory planning in the future Air Traffic Management (ATM) system. To this end, the error of the tool (the aircraft Trajectory Simulator) is measured by comparing its performance variables with actual flown trajectories obtained from Flight Data Recorder information. The trajectory simulator is validated by analysing the performance of different aircraft types on different routes. A fuel-consumption estimation error was identified, and a correction is proposed for each aircraft model. In the future ATM system, the trajectory becomes the fundamental element of a new set of operating procedures collectively referred to as Trajectory-Based Operations (TBO). Governmental institutions, academia and industry have therefore shown renewed interest in applying trajectory optimisation techniques to commercial aviation. The trajectory optimisation problem can be solved using optimal control methods. In this research we present and discuss the existing methods for solving optimal control problems, focusing on direct collocation, which has received recent attention from the scientific community. In particular, two families of collocation methods are analysed: Hermite-Legendre-Gauss-Lobatto collocation and pseudospectral collocation. They are first compared on a benchmark case study, the minimum-fuel trajectory problem with fixed arrival time. To test scalability to more realistic problems, the methods are also applied to a real Airbus A319 Cairo-Madrid flight. Results show that pseudospectral collocation, which proved numerically more accurate and computationally much faster, is suitable for the type of problems arising in trajectory optimisation applied to ATM. Fast and accurate optimal trajectories can contribute to meeting the new challenges of the future ATM system. Since atmospheric uncertainty is one of the most important issues in trajectory planning, the final objective of this dissertation is to obtain an order of magnitude of how much fuel consumption differs under different atmospheric conditions.

It is important to note that, in the strategic planning phase, the optimal trajectories are determined from meteorological predictions that differ from the conditions at the moment of the flight. The optimal trajectories show savings of at least 500 kg under most atmospheric conditions (different pressure and temperature at Mean Sea Level, and different temperature lapse rates) with respect to the conventional procedure simulated under the same atmospheric conditions. These results show that implementing optimal profiles is beneficial under the current ATM system.
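The dissertation's simulator is described as a three-degrees-of-freedom point-mass model fed with aircraft performance and meteorological data. The sketch below shows only the generic textbook form of such a model (vertical-plane equations, explicit Euler step); the thrust, lift, drag and fuel-flow models, e.g. from BADA, are assumed to be supplied by the caller, and nothing here reproduces the dissertation's exact simulator.

```python
import numpy as np

def point_mass_step(state, thrust, lift, drag, fuel_flow, dt, g=9.80665):
    """One explicit Euler step of a three-degrees-of-freedom point-mass aircraft
    model in the vertical plane. state = [x, h, v, gamma, m] with along-track
    distance x [m], altitude h [m], true airspeed v [m/s], flight-path angle
    gamma [rad] and mass m [kg]; forces in N, fuel_flow in kg/s. Generic
    textbook formulation; force models must be supplied by the caller."""
    x, h, v, gamma, m = state
    v_dot = (thrust - drag) / m - g * np.sin(gamma)
    gamma_dot = (lift - m * g * np.cos(gamma)) / (m * v)
    return np.array([
        x + v * np.cos(gamma) * dt,   # along-track distance
        h + v * np.sin(gamma) * dt,   # altitude
        v + v_dot * dt,               # true airspeed
        gamma + gamma_dot * dt,       # flight-path angle
        m - fuel_flow * dt,           # mass decreases as fuel burns
    ])
```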

Relevance: 100.00%

Abstract:

One key issue in the simulation of bare electrodynamic tethers (EDTs) is the accurate and fast computation of the collected current, an ambient-dependent operation needed to determine the Lorentz force at each time step. This paper introduces a novel semianalytical solution that allows researchers to compute the current distribution along the tether efficiently and effectively under orbital-motion-limited (OML) and beyond-OML conditions, i.e., when the tether radius is greater than a certain ambient-dependent threshold. The method reduces the original boundary value problem to a pair of nonlinear equations. If certain dimensionless variables are used, the beyond-OML effect simply makes the tether characteristic length L* larger and is decoupled from the current determination problem. A validation of the results and a comparison of performance in terms of computation time are provided, with respect to a previous ad hoc solution and a conventional shooting method.
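The paper benchmarks its semianalytical solution against a conventional shooting method. The sketch below illustrates generic single shooting for a toy two-point boundary value problem (u'' = u); the actual tether-current equations and their reduction to a pair of nonlinear equations are in the paper and are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def shoot(bvp_rhs, a, b, length, slope_lo, slope_hi):
    """Generic single-shooting solver for the two-point BVP
    u'' = f(x, u), u(0) = a, u(length) = b. Toy illustration only."""
    def endpoint_miss(s0):
        sol = solve_ivp(bvp_rhs, (0.0, length), [a, s0], rtol=1e-8, atol=1e-10)
        return sol.y[0, -1] - b              # error in the far boundary condition
    s_star = brentq(endpoint_miss, slope_lo, slope_hi)   # find the initial slope
    return solve_ivp(bvp_rhs, (0.0, length), [a, s_star], dense_output=True)

# usage: u'' = u (toy problem), u(0) = 0, u(1) = 1
sol = shoot(lambda x, y: [y[1], y[0]], a=0.0, b=1.0, length=1.0,
            slope_lo=0.1, slope_hi=2.0)
print(sol.sol(0.5))   # solution near the midpoint
```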

Relevance: 100.00%

Abstract:

This thesis studies the two-body, two-point boundary value problem (the trajectory passing through two given points), initially developed by Lambert, from whom it takes its name. In the past, Lambert's Problem was used to determine orbits from astronomical observations of celestial bodies. Today it is used continuously in orbit determination, planetary and interplanetary missions, space rendezvous and interception, and even in orbit corrections. Given its great importance, this work investigates its solution and its applications in current space missions. The open research field is very wide, so it is necessary to set specific, realistic objectives within the scope of a Thesis, while still showing clearly enough the potential of the results provided in this work and even allowing them to be extended to other areas of application. As a result of this analysis, the main aim of the Thesis is the development of algorithms to solve Lambert's Problem that can be applied very efficiently in the real missions where it appears. In all these developments, special attention has been paid to the required computational cost compared with currently existing methods, highlighting how to avoid the loss of precision inherent in this type of algorithm and the possibility of applying any iterative method that uses derivatives of any order. To meet these objectives, several solutions to Lambert's Problem are developed, all based on the resolution of transcendental equations, leading to the following main contributions of this work:
• A completely different, generic way to obtain the various equations for solving Lambert's Problem by analytical development, from scratch, starting from the known elementary equations of the conics (geometric and temporal), providing in all cases formulas for the calculation of derivatives of any order.
• A unified view of the most relevant existing equations, showing their equivalence with variants of the equations developed here.
• The derivation of a new variant of the equation, the main achievement of this Thesis, which outperforms all the others in efficiency (both computational cost and accuracy).
• A study of the sensitivity of the solution to variations in the initial data, and of how to apply the results to real cases of trajectory optimisation.
• In addition, from the results it is possible to deduce many properties used in the literature to simplify the problem, in particular the invariance property, which leads to the Simplified Transformed Problem.
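The contributions above revolve around solving transcendental equations with iterative methods that use derivatives of any order. As an illustration of that idea only (not the thesis's new Lambert equation), the sketch below applies Halley's method, which uses first and second derivatives of the residual, to the closely related Kepler equation.

```python
import math

def kepler_halley(M, e, tol=1e-14, max_iter=20):
    """Solve Kepler's equation E - e*sin(E) = M with Halley's method, which
    uses the first and second derivatives of the residual. Shown only to
    illustrate iterating a transcendental equation with higher-order
    derivatives, in the spirit of the Lambert formulations above."""
    E = M if e < 0.8 else math.pi              # standard starting guess
    for _ in range(max_iter):
        f = E - e * math.sin(E) - M            # residual
        fp = 1.0 - e * math.cos(E)             # first derivative
        fpp = e * math.sin(E)                  # second derivative
        dE = -2.0 * f * fp / (2.0 * fp * fp - f * fpp)   # Halley update
        E += dE
        if abs(dE) < tol:
            break
    return E

# usage: eccentric anomaly for M = 1.0 rad, e = 0.3
print(kepler_halley(1.0, 0.3))
```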

Relevance: 100.00%

Abstract:

This paper presents a primary-parallel secondary-series multicore forward microinverter for photovoltaic AC-module applications. The presented microinverter operates with constant off-time boundary mode control, providing MPPT capability and unity power factor. The proposed multitransformer solution allows the use of low-profile, unity-turns-ratio transformers. The transformers are therefore better coupled, and the overall performance of the microinverter is improved. Because of the multiphase solution, the number of devices increases, but the current stress and losses per device are reduced, contributing to easier thermal management. Furthermore, the decoupling capacitor is split among the phases, contributing to a low-profile solution without electrolytic capacitors, suitable for mounting in the frame of a PV module. The proposed solution is compared with the classical parallel-interleaved approach, showing better efficiency over a wide power range and an improved weighted efficiency.

Relevance: 100.00%

Abstract:

In general, a planing craft is designed to reach high speeds. This performance attribute is directly related to the size of the craft and to the power installed in its propulsion plant. Traditionally, during the design of a craft, performance analyses are carried out using results from existing vessels, taken from systematic series or from craft already developed by the shipyard and/or the designer. In addition, performance attributes can be determined through empirical and/or statistical methods, in which the craft is represented by its main geometric parameters, or from tests on reduced-scale models or prototypes. In the specific case of planing craft, the cost of reduced-scale tests is very high compared with the design cost, so most designers do not opt for experimental tests of new craft under development. Over the last years, the Savitsky method has been widely used to estimate the installed power of a planing craft. This method uses a set of semi-empirical equations to determine the forces acting on the craft, from which it is possible to determine the equilibrium operating position and the propulsive force needed to sail at a given speed. The Savitsky method is widely used in the early design stages, when the hull geometry has not yet been fully defined, because it uses only the main geometric characteristics of the craft to estimate the forces. As the design advances, the level of detail required of the performance estimates increases. For the structural design, for example, an estimate of the pressure field acting on the bottom of the hull is needed, which cannot be obtained with the Savitsky method. The computational method implemented in this dissertation aims to determine the flow characteristics and the pressure field acting on the hull of a planing craft sailing in calm water. The flow is determined through a boundary value problem in which the wetted surface of the hull is treated as a slender body. Owing to the use of slender-body theory, the problem can be treated separately at each section, where the boundary conditions are enforced through a vortex distribution.
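The Savitsky-style power estimate described above reduces, at its core, to finding the equilibrium running attitude at which hydrodynamic lift balances the craft's weight. The sketch below shows only that root-finding step, assuming the caller supplies a lift function (for instance built from Savitsky's semi-empirical equations, which are not reproduced here); the function name, units and bounds are illustrative.

```python
from scipy.optimize import brentq

def equilibrium_trim(lift_fn, weight, tau_min=0.5, tau_max=15.0):
    """Find the running trim angle tau [deg] at which the hydrodynamic lift
    balances the craft weight [N]. lift_fn(tau) must return lift in N and is
    supplied by the caller; the bracket [tau_min, tau_max] must contain a sign
    change of lift_fn(tau) - weight. Illustrative sketch only."""
    return brentq(lambda tau: lift_fn(tau) - weight, tau_min, tau_max)
```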

Relevance: 100.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance: 100.00%

Abstract:

Background: Epidemiologic evidence suggests that serum carotenoids are potent antioxidants and may play a protective role in the development of chronic diseases including cancers, cardiovascular disease, and inflammatory diseases. The role of these antioxidants in the pathogenesis of diabetes mellitus remains unclear. Objective: This study examined data from a cross-sectional survey to investigate the association between serum carotenoids and type 2 diabetes. Design: Study participants were adults aged ≥25 y (n = 1597) from 6 randomly selected cities and towns in Queensland, Australia. Study examinations conducted between October and December 2000 included fasting plasma glucose, an oral-glucose-tolerance test, and measurement of the serum concentrations of 5 carotenoid compounds. Results: Mean 2-h postload plasma glucose and fasting insulin concentrations decreased significantly with increasing quintiles of the 5 serum carotenoids: α-carotene, β-carotene, β-cryptoxanthin, lutein/zeaxanthin, and lycopene. Geometric mean concentrations of all serum carotenoids decreased (all decreases were significant except that of lycopene) with declining glucose tolerance status. β-Carotene had the greatest decrease, to geometric means of 0.59, 0.50, and 0.42 µmol/L in persons with normal glucose tolerance, impaired glucose metabolism, and type 2 diabetes, respectively (P < 0.01 for linear trend), after control for potential confounders. Conclusions: Serum carotenoids are inversely associated with type 2 diabetes and impaired glucose metabolism. Randomized trials of diets high in carotenoid-rich vegetables and fruit are needed to confirm these results and those from other observational studies. Such evidence would have very important implications for the prevention of diabetes.

Relevance: 100.00%

Abstract:

A concept has been developed where characteristic load cycles of longwall shields can describe most of the interaction between a longwall support and the roof. A characteristic load cycle is the change in support pressure with time from setting the support against the roof to the next release and movement of the support. The concept has been validated through the back-analysis of more than 500 000 individual load cycles in five longwall panels at four mines and seven geotechnical domains. The validation process depended upon the development of new software capable of both handling the large quantity of data emanating from a modern longwall and accurately delineating load cycles. Existing software was found not to be capable of delineating load cycles to a sufficient accuracy. Load-cycle analysis can now be used quantitatively to assess the adequacy of support capacity and the appropriateness of set pressure for the conditions under which a longwall is being operated. When linked to a description of geotechnical conditions, this has allowed the development of a database for support selection for greenfield sites. For existing sites, the load-cycle characteristic concept allows for a diagnosis of strata-support problem areas, enabling changes to be made to set pressure and mining strategies to manage better, or avoid, strata control problems. With further development of the software, there is the prospect of developing a system that is able to respond to changes in strata-support interaction in real time.
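The load-cycle concept above hinges on delineating cycles from leg-pressure records, from setting the support against the roof to the next release. The validated software is not described in detail, so the sketch below is only a plausible simplification: it splits a pressure time series at sudden drops larger than an assumed release threshold.

```python
import numpy as np

def delineate_load_cycles(time, pressure, release_drop=5.0):
    """Split a leg-pressure time series into load cycles, taking a sudden
    pressure drop larger than release_drop (same units as pressure) as the
    release/advance of the support. A simplified illustration only, not the
    validated software described above."""
    time = np.asarray(time, dtype=float)
    pressure = np.asarray(pressure, dtype=float)
    dp = np.diff(pressure)
    release_idx = np.where(dp < -release_drop)[0] + 1     # samples where the leg is released
    bounds = np.concatenate(([0], release_idx, [len(pressure)]))
    cycles = []
    for start, end in zip(bounds[:-1], bounds[1:]):
        if end - start > 1:
            cycles.append({
                "t_start": time[start],
                "t_end": time[end - 1],
                "set_pressure": pressure[start],      # pressure when the support is set
                "final_pressure": pressure[end - 1],  # pressure just before release
            })
    return cycles
```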

Relevance: 100.00%

Abstract:

The internationally accepted Wolfson Heat Treatment Centre Engineering Group test was used to evaluate the cooling characteristics of the most popular commercial polymer quenchants: polyalkylene glycols, polyvinylpyrrolidones and polyacrylates. Prototype solutions containing poly(ethyloxazoline) were also examined. Each class of polymer was capable of providing a wide range of cooling rates depending on the product formulation, concentration, temperature, agitation, ageing and contamination. Cooling rates for synthetic quenchants were generally intermediate between those of water and oil. Control techniques, drag-out losses and the response to quenching in terms of hardness and residual stress for a plain carbon steel were also considered. A laboratory-scale method for providing a controllable level of forced convection was developed. Test reproducibility was improved by positioning the preheated Wolfson probe 25 mm above the geometric centre of a 25 mm diameter orifice through which the quenchant was pumped at a velocity of 0.5 m/s. On examination, all polymer quenchants were found to operate by the same fundamental mechanism, associated with their viscosity and their ability to form an insulating polymer-rich film. The nature of this film, which formed at the vapour/liquid interface during boiling, was dependent on the polymer's solubility characteristics. High molecular weight polymers and high concentration solutions produced thicker, more stable insulating films. Agitation produced thinner, more uniform films. Higher molecular weight polymers were more susceptible to degradation, and increased cooling rates, with usage. Polyvinylpyrrolidones can be cross-linked, resulting in erratic performance, whilst the anionic character of polyacrylates can lead to control problems. Volatile contaminants tend to decrease the rate of cooling and salts to increase it. Drag-out increases upon raising the molecular weight of the polymer and its solution viscosity. Kinematic viscosity measurements are more effective than refractometer readings for concentration control, although a quench test is the most satisfactory process control method.

Relevance: 100.00%

Abstract:

The paper presents a multicriteria decision support system, called MultiDecision-2, which consists of two independent parts: the MKA-2 subsystem and the MKO-2 subsystem. The MultiDecision-2 software system supports decision makers (DMs) in solving different problems of multicriteria analysis and linear (continuous and integer) multicriteria optimization problems. The two subsystems MKA-2 and MKO-2 of MultiDecision-2 are briefly described in the paper in terms of the class of problems being solved, the system structure, the operation of the interface modules for input data entry and for the information about the DM's local preferences, and the operation of the interface modules for visualization of the current and final solutions.
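MKO-2's internal algorithms are not detailed in the abstract, so the sketch below shows only one generic way to handle a linear multicriteria optimization problem: a weighted-sum scalarization of several linear criteria solved as a single LP with scipy. All data, weights and names are hypothetical and not necessarily what the system uses.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_sum_solution(C, weights, A_ub, b_ub, bounds=None):
    """Scalarize a linear multicriteria problem min {Cx} (one row of C per
    criterion) into a single LP using DM-supplied weights, then solve it.
    A generic technique, not necessarily the one implemented in MKO-2."""
    c = np.asarray(weights) @ np.asarray(C)          # combined objective
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x, res.fun

# usage: two criteria over two variables, subject to x1 + x2 <= 4, x >= 0
C = [[1.0, 2.0],      # criterion 1
     [2.0, 1.0]]      # criterion 2
x, f = weighted_sum_solution(C, weights=[0.5, 0.5],
                             A_ub=[[1.0, 1.0]], b_ub=[4.0],
                             bounds=[(0, None), (0, None)])
print(x, f)
```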

Relevance: 100.00%

Abstract:

Mathematics Subject Classification: 35J05, 35J25, 35C15, 47H50, 47G30

Relevance: 100.00%

Abstract:

Mathematics Subject Classification 2010: 35M10, 35R11, 26A33, 33C05, 33E12, 33C20.