983 results for pseudo-dynamic solver
Abstract:
The objective of this dissertation is to improve the dynamic simulation of fluid power circuits. A fluid power circuit is a typical way to implement power transmission in mobile working machines, e.g. cranes, excavators, etc. Dynamic simulation is an essential tool in developing controllability and energy-efficient solutions for mobile machines. Efficient dynamic simulation is the basic requirement for real-time simulation. In the real-time simulation of fluid power circuits, numerical problems arise from the software and methods used for modelling and integration. A simulation model of a fluid power circuit is typically created using differential and algebraic equations. Efficient numerical methods are required since the differential equations must be solved in real time. Unfortunately, simulation software packages offer only a limited selection of numerical solvers. Numerical problems cause noise in the results, which in many cases leads the simulation run to fail. Mathematically, fluid power circuit models are stiff systems of ordinary differential equations. The numerical solution of stiff systems can be improved by two alternative approaches. The first is to develop numerical solvers suitable for solving stiff systems. The second is to decrease the model stiffness itself by introducing models and algorithms that either decrease the highest eigenvalues or neglect them by introducing steady-state solutions of the stiff parts of the models. The thesis proposes novel methods using the latter approach. The study aims to develop practical methods usable in the dynamic simulation of fluid power circuits using explicit fixed-step integration algorithms. In this thesis, two mechanisms which make the system stiff are studied. These are the pressure drop approaching zero in the turbulent orifice model and the volume approaching zero in the equation of pressure build-up. These are the critical areas for which alternative methods for modelling and numerical simulation are proposed. Generally, in hydraulic power transmission systems the orifice flow is clearly in the turbulent area. The flow becomes laminar as the pressure drop over the orifice approaches zero only in rare situations, e.g. when a valve is closed, when an actuator is driven against an end stop, or when an external force makes the actuator switch its direction during operation. This means that, in terms of accuracy, the description of laminar flow is not necessary. Unfortunately, when a purely turbulent description of the orifice is used, numerical problems occur when the pressure drop approaches zero, since the first derivative of flow with respect to pressure drop then approaches infinity. Furthermore, the second derivative becomes discontinuous, which causes numerical noise and a vanishingly small integration step when a variable-step integrator is used. A numerically efficient model for the orifice flow is proposed using a cubic spline function to describe the flow in the laminar and transition areas. Parameters for the cubic spline function are selected such that its first derivative is equal to the first derivative of the purely turbulent orifice flow model at the boundary. In the dynamic simulation of fluid power circuits, a trade-off exists between accuracy and calculation speed. This trade-off is investigated for the two-regime orifice flow model. Especially inside many types of valves, as well as between them, there exist very small volumes.
The integration of pressures in small fluid volumes causes numerical problems in fluid power circuit simulation. Particularly in real-time simulation, these numerical problems are a great weakness. The system stiffness approaches infinity as the fluid volume approaches zero. If fixed-step explicit algorithms for solving ordinary differential equations (ODEs) are used, numerical stability is easily lost when integrating pressures in small volumes. To solve the problem caused by small fluid volumes, a pseudo-dynamic solver is proposed. Instead of integrating the pressure in a small volume, the pressure is solved as a steady-state pressure created in a separate cascade loop by numerical integration. The hydraulic capacitance V/B_e of the parts of the circuit whose pressures are solved by the pseudo-dynamic method should be orders of magnitude smaller than that of the parts whose pressures are integrated. The key advantage of this novel method is that the numerical problems caused by the small volumes are completely avoided. Also, the method is freely applicable regardless of the integration routine applied. A further strength of both above-mentioned methods is that they are suited for use together with the semi-empirical modelling method, which does not necessarily require any geometrical data of the valves and actuators to be modelled. In this modelling method, most of the needed component information can be taken from the manufacturer's nominal graphs. This thesis introduces the methods and shows several numerical examples to demonstrate how the proposed methods improve the dynamic simulation of various hydraulic circuits.
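A minimal sketch of the two-regime orifice model described above, assuming a signed square-root turbulent law Q = K·sign(Δp)·√|Δp| and a single transition pressure p_tr below which an odd cubic spline is used; the names and default values are illustrative, not taken from the thesis:

```python
import numpy as np

def orifice_flow(dp, K=1.0e-7, p_tr=2.0e5):
    """Two-regime orifice flow with a cubic-spline low-pressure branch.

    K    : turbulent flow coefficient in Q = K*sign(dp)*sqrt(|dp|)  (illustrative value)
    p_tr : transition pressure below which the spline branch is used (illustrative value)

    The odd cubic a*dp + b*dp**3 is chosen so that its value and first derivative
    match the turbulent curve at |dp| = p_tr, which removes the infinite slope of
    sqrt(|dp|) at dp = 0.
    """
    a = 5.0 * K / (4.0 * np.sqrt(p_tr))     # value and slope matching at p_tr
    b = -K / (4.0 * p_tr ** 2.5)
    dp = np.asarray(dp, dtype=float)
    turbulent = K * np.sign(dp) * np.sqrt(np.abs(dp))
    spline = a * dp + b * dp ** 3
    return np.where(np.abs(dp) <= p_tr, spline, turbulent)
```

Matching both the value and the first derivative of the spline to the turbulent curve at ±p_tr leaves the flow with a finite slope at Δp = 0, which is what removes the stiffness near zero pressure drop.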
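The pseudo-dynamic pressure solution for a very small volume can be pictured as an inner relaxation (cascade) loop that, at every main integration step, drives the net flow into the volume to zero instead of integrating its pressure. The sketch below is one reading of that idea; the gain, tolerance and iteration limit are illustrative assumptions rather than the thesis's actual algorithm:

```python
def pseudo_dynamic_pressure(net_flow, p_init, gain=1.0e9, tol=1.0e-9, max_iter=50):
    """Steady-state pressure of a very small volume via an inner cascade loop.

    Instead of integrating dp/dt = (B_e/V) * Q_net(p) with the main fixed-step
    solver (which becomes stiff as V -> 0), relax the pressure until the net
    flow into the volume vanishes.  gain, tol and max_iter are illustrative
    tuning values, not taken from the thesis.
    """
    p = p_init
    for _ in range(max_iter):
        q = net_flow(p)          # net flow into the small volume at pressure p
        if abs(q) < tol:
            break
        p += gain * q            # relaxation step standing in for the cascade loop
    return p

# At every main integration step one would call, for example:
#   p_small = pseudo_dynamic_pressure(lambda p: q_in(p) - q_out(p), p_small)
# and use p_small in the flow equations of the neighbouring, integrated volumes.
```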
Abstract:
The stepped and excessively slow execution of pseudo-dynamic tests has been found to be the source of some errors arising from strain-rate effects and stress relaxation. In order to control those errors, a new continuous test method which allows the selection of a more suitable time-scale factor in the response is proposed in this work. By dimensional analysis, such a scaled-time response is obtained theoretically by augmenting the inertial and damping properties of the structure; for this purpose we propose the use of servo-controlled hydraulic pistons that produce active mass and damping, while still using equipment similar to that required in a pseudo-dynamic test. The results of the successful implementation of this technique for a simple specimen are shown here.
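A sketch of the dimensional argument behind the scaled-time response, using λ for the time-scale factor; the notation is ours, not the paper's:

```latex
% Prototype equation of motion and stretched test time \tau = \lambda t:
\[
  M\ddot{x}(t) + C\dot{x}(t) + r\bigl(x(t)\bigr) = f(t),
  \qquad \tilde{x}(\tau) = x(\tau/\lambda).
\]
% Since d\tilde{x}/d\tau = \dot{x}/\lambda and d^{2}\tilde{x}/d\tau^{2} = \ddot{x}/\lambda^{2},
\[
  (\lambda^{2} M)\,\frac{d^{2}\tilde{x}}{d\tau^{2}}
  + (\lambda C)\,\frac{d\tilde{x}}{d\tau}
  + r(\tilde{x}) = f(\tau/\lambda),
\]
% i.e. a specimen whose mass is augmented by \lambda^{2} and whose damping is
% augmented by \lambda, driven by the time-stretched excitation, reproduces the
% prototype response on the slowed time axis.
```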
Abstract:
The Mobile Mesh Network based In-Transit Visibility (MMN-ITV) system provides global real-time tracking capability for logistics systems. In-transit containers form a multi-hop mesh network to forward the tracking information to nearby sinks, which further deliver the information to the remote control center via satellite. The fundamental challenge for the MMN-ITV system is the energy constraint of the battery-operated containers. Coupled with the unique mobility pattern, the cross-MMN behavior, and the large area spanned, this makes it necessary to investigate energy-efficient communication in the MMN-ITV system thoroughly. First, this dissertation models energy-efficient routing under the unique pattern of the cross-MMN behavior. A new modeling approach, the pseudo-dynamic modeling approach, is proposed to measure the energy efficiency of routing methods in the presence of the cross-MMN behavior. With this approach, it is identified that shortest-path routing and load-balanced routing are energy-efficient in mobile networks and static networks, respectively. For the MMN-ITV system with both mobile and static MMNs, an energy-efficient routing method, energy-threshold routing, is proposed to achieve the best trade-off between them. Secondly, due to the cross-MMN behavior, neighbor discovery is executed frequently to help new containers join the MMN and hence consumes a similar amount of energy to the data communication itself. By exploiting the unique pattern of the cross-MMN behavior, this dissertation proposes energy-efficient neighbor-discovery wake-up schedules that save up to 60% of the energy used for neighbor discovery. Inter-vehicle communication based on Vehicular Ad Hoc Networks (VANETs) is increasingly believed to enhance traffic safety and transportation management at low cost. The end-to-end delay is critical for time-sensitive safety applications in VANETs and can be a decisive performance metric for them. This dissertation presents a complete analytical model to evaluate the end-to-end delay against the transmission range and the packet arrival rate. This model shows a significant end-to-end delay increase from non-saturated networks to saturated networks. It hence suggests that distributed power control and admission control protocols for VANETs should aim at improving the real-time capacity (the maximum packet generation rate without causing saturation), instead of the delay itself. Based on the above model, it is determined that adopting a uniform transmission range for every vehicle may hinder delay performance improvement, since it does not allow short path lengths and low interference to coexist. Clusters are proposed to configure non-uniform transmission ranges for the vehicles. Analysis and simulation confirm that such a configuration can enhance the real-time capacity. In addition, it provides an improved trade-off between the end-to-end delay and the network capacity. A distributed clustering protocol with minimum message overhead is proposed, which achieves low convergence time.
Abstract:
The Pseudo-Dynamic Test Method (PDTM) is currently being developed as an alternative to shaking-table testing of large-size models. However, the stepped, slow execution of this type of test has been found to be the source of important errors arising from stress relaxation. A new continuous test method, which allows the selection of a suitable time-scale factor in the response in order to control these errors, is proposed here. Such a scaled-time response is theoretically obtained by simply augmenting the mass of the structure, for which some practical solutions are proposed.
Abstract:
This paper analyses earthquake data from the perspective of dynamical systems and the Pseudo Phase Plane representation. The seismic data are collected from the Bulletin of the International Seismological Centre. The geological events are characterised by their magnitude and geographical location and described by means of time series of sequences of Dirac impulses. Fifty groups of data series are considered, according to the Flinn-Engdahl seismic regions of the Earth. For each region, Pearson's correlation coefficient is used to find the optimal time delay for reconstructing the Pseudo Phase Plane. The Pseudo Phase Plane plots are then analysed and characterised.
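A minimal sketch of the delay selection and plane reconstruction, assuming the delay is chosen where the absolute Pearson correlation between the series and its delayed copy is smallest; the paper states only that Pearson's coefficient is used, so this selection rule is an illustrative assumption:

```python
import numpy as np

def ppp_delay(x, max_delay=200):
    """Choose the Pseudo Phase Plane time delay from Pearson's correlation.

    The selection rule (smallest absolute correlation over the candidate
    delays) is an illustrative assumption.
    """
    x = np.asarray(x, dtype=float)
    best_d, best_r = 1, np.inf
    for d in range(1, min(max_delay, len(x) - 2) + 1):
        r = np.corrcoef(x[:-d], x[d:])[0, 1]   # Pearson correlation at delay d
        if abs(r) < best_r:
            best_d, best_r = d, abs(r)
    return best_d

def pseudo_phase_plane(x, d):
    """Return the (x(t), x(t + d)) pairs that form the Pseudo Phase Plane plot."""
    x = np.asarray(x, dtype=float)
    return np.column_stack((x[:-d], x[d:]))
```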
Abstract:
A frequency-domain method for nonlinear analysis of structural systems with viscous, hysteretic, nonproportional and frequency-dependent damping is presented. The nonlinear effects and the nonproportional damping are taken into account through pseudo-force terms. The uncoupled modal-coordinate equations are solved iteratively. The treatment of initial conditions in the frequency domain, which is necessary for solving the uncoupled equations, is addressed first.
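A hedged sketch of how a pseudo-force iteration of this kind is usually written; the symbols and the splitting are ours, not necessarily the paper's exact formulation:

```latex
% Move the nonlinear and nonproportional terms to the right-hand side,
\[
  M\ddot{x} + C_p\dot{x} + Kx = f(t) + p\bigl(x,\dot{x},t\bigr),
\]
% where C_p is the proportional part of the damping and p(\cdot) collects the
% nonlinear, hysteretic and nonproportional contributions.  In modal
% coordinates the left-hand side uncouples, and in the frequency domain the
% k-th iteration for mode j reads
\[
  \bigl(-\omega^{2} m_j + i\omega c_j + k_j\bigr)\, X_j^{(k)}(\omega)
  = F_j(\omega) + P_j^{(k-1)}(\omega),
\]
% with the pseudo-force spectrum P_j^{(k-1)} evaluated from the previous
% iterate and the loop repeated until the modal amplitudes converge.
```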
Abstract:
Sensors and actuators based on piezoelectric plates have shown increasing demand in the field of smart structures, including the development of actuators for cooling and fluid-pumping applications and transducers for novel energy-harvesting devices. This project involves the development of a topology optimization formulation for the dynamic design of piezoelectric laminated plates aimed at piezoelectric sensor, actuator and energy-harvesting applications. It distributes piezoelectric material over a metallic plate in order to achieve a desired dynamic behavior with specified resonance frequencies, modes, and an enhanced electromechanical coupling factor (EMCC). The finite element model employs a piezoelectric plate element based on the MITC formulation, which is reliable, efficient and avoids the shear-locking problem. The topology optimization formulation is based on the PEMAP-P model combined with the RAMP model, where the design variables are the pseudo-densities that describe the amount of piezoelectric material in each finite element and its polarization sign. The design problem is formulated to simultaneously design an eigenshape, i.e., to maximize or minimize vibration amplitudes at certain points of the structure in a given eigenmode, while tuning the eigenvalue to a desired value and maximizing its EMCC, so that the energy conversion is maximized for that mode. The optimization problem is solved using sequential linear programming. Through this formulation, a design with enhanced energy conversion in the low-frequency spectrum is obtained by minimizing a set of the first eigenvalues, shaping their corresponding eigenshapes and maximizing their EMCCs, which can be considered an approach to the design of energy-harvesting devices. The implementation of the topology optimization algorithm and some results are presented to illustrate the method.
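An illustrative sketch of how the pseudo-density ρ and the polarization variable p typically enter such an interpolation; the specific forms below combine a RAMP-type function with a polarization sign factor and are assumptions, not the paper's exact PEMAP-P/RAMP expressions:

```latex
% Illustrative material interpolation (forms and symbols are assumptions):
\[
  c^{E}(\rho) = c^{E}_{\mathrm{metal}}
    + \frac{\rho}{1 + q(1-\rho)}\,\bigl(c^{E}_{\mathrm{piezo}} - c^{E}_{\mathrm{metal}}\bigr),
  \qquad
  e(\rho, p) = \frac{\rho}{1 + q(1-\rho)}\,(2p-1)\,e^{0},
\]
% where \rho \in [0,1] is the pseudo-density of piezoelectric material in an
% element, p \in [0,1] encodes the polarization sign through the factor
% (2p-1), and q > 0 is the RAMP penalization parameter.
```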
Abstract:
Dynamic experiments in a nonadiabatic packed bed were carried out to evaluate the response to disturbances in wall temperature and inlet airflow rate and temperature. A two-dimensional, pseudo-homogeneous, axially dispersed plug-flow model was numerically solved and used to interpret the results. The model parameters were fitted in distinct stages: effective radial thermal conductivity (K_r) and wall heat transfer coefficient (h_w) were estimated from steady-state data and the characteristic packed bed time constant (τ) from transient data. A new correlation for the K_r in packed beds of cylindrical particles was proposed. It was experimentally proved that temperature measurements using radially inserted thermocouples and a ring-shaped sensor were not distorted by heat conduction across the thermocouple or by the thermal inertia effect of the temperature sensors.
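For reference, the usual form of a transient two-dimensional pseudo-homogeneous, axially dispersed plug-flow energy balance with a wall heat-transfer boundary condition is sketched below; the notation is ours and the exact groupings used in the paper may differ:

```latex
% Transient pseudo-homogeneous energy balance (notation ours):
\[
  (\rho c_p)_{\mathrm{bed}}\,\frac{\partial T}{\partial t}
  + \rho_f\, c_{p,f}\, u\,\frac{\partial T}{\partial z}
  = K_r\!\left(\frac{\partial^{2} T}{\partial r^{2}}
      + \frac{1}{r}\frac{\partial T}{\partial r}\right)
  + K_{ax}\,\frac{\partial^{2} T}{\partial z^{2}},
\]
\[
  -K_r\,\frac{\partial T}{\partial r}\bigg|_{r=R} = h_w\,\bigl(T|_{r=R} - T_w\bigr),
\]
% with K_r and h_w fitted from steady-state data and the bed time constant
% (entering through the accumulation term) fitted from the transient runs.
```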
Abstract:
Granule impact deformation has long been recognised as important in determining whether or not two colliding granules will coalesce. Work in the last 10 years has highlighted that viscous effects are significant in granulation. The relative strengths of different formulations can vary with strain rate. Therefore, traditional strength measurements made under pseudo-static conditions give no indication, even qualitatively, of how materials will behave at high strain rates, and hence are actually misleading when used to model granule coalescence. This means that new standard methods need to be developed for determining the strain rates encountered by granules inside industrial equipment and also for measuring the mechanical properties of granules at these strain rates. The constitutive equations used in theoretical models of granule coalescence also need to be extended to include strain-rate-dependent components.
Abstract:
The usual high cost of commercial codes, and some technical limitations, clearly limit the use of numerical modelling tools in both industry and academia. Consequently, the number of companies that use numerical codes is limited, and a lot of effort is put into the development and maintenance of in-house, academia-based codes. Having in mind the potential of using numerical modelling tools as a design aid for both products and processes, different research teams have been contributing to the development of open-source codes/libraries. In this framework, any individual can take advantage of the available code capabilities and/or implement additional features based on his or her specific needs. These types of codes are usually developed by large communities, which provide improvements and new features in their specific fields of research, thus significantly speeding up the code development process. Among others, the OpenFOAM® multi-physics computational library, developed by a very large and dynamic community, nowadays comprises several features usually only available in its commercial counterparts, e.g. dynamic meshes, a large diversity of complex physical models, parallelization, and multiphase models, to name just a few. This computational library is developed in C++ and makes use of most of the language's capabilities to facilitate the implementation of new functionalities. Concerning the field of computational rheology, OpenFOAM® solvers were recently developed to deal with the most relevant differential viscoelastic rheological models, and stabilization techniques are currently being verified. This work describes the implementation of a new solver in the OpenFOAM® library, able to cope with integral viscoelastic models based on the deformation field method. The implemented solver is verified through comparison of the predicted results with analytical solutions, with results published in the literature, and by using the Method of Manufactured Solutions.
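A compact, zero-dimensional sketch of the deformation field method for an integral (Lodge-type) viscoelastic model, kept deliberately simple: deformation fields attached to discrete past reference times are evolved with the upper-convected derivative, and the memory integral is evaluated by quadrature over them. The memory function, the parameter values and the crude time discretisation are illustrative assumptions; the actual solver described in the paper works on full three-dimensional OpenFOAM® fields with convection.

```python
import numpy as np

def stress_deformation_field(L_of_t, t_end, dt, G=1.0, lam=1.0, n_fields=40):
    """Zero-dimensional sketch of the deformation field method.

    Finger tensors B(t, t_k') attached to discrete past reference times t_k'
    are evolved via dB/dt = L.B + B.L^T (homogeneous flow, no convection), and
    the Lodge-type integral tau(t) = int m(t - t') (B(t, t') - I) dt' is
    approximated by quadrature over the tracked fields.  The memory function
    m(s) = (G/lam) exp(-s/lam) is illustrative; deformation history before
    t = 0 is neglected in this sketch.
    """
    I = np.eye(3)
    fields = []                              # list of [birth_time, B] pairs
    spawn_every = t_end / n_fields
    t, next_spawn, history = 0.0, 0.0, []
    while t < t_end:
        L = L_of_t(t)
        if t >= next_spawn:                  # new reference time: B(t, t' = t) = I
            fields.append([t, I.copy()])
            next_spawn += spawn_every
        for f in fields:                     # upper-convected evolution of each field
            B = f[1]
            f[1] = B + dt * (L @ B + B @ L.T)
        tau = sum(spawn_every * (G / lam) * np.exp(-(t - tb) / lam) * (B - I)
                  for tb, B in fields)       # quadrature of the memory integral
        history.append((t, tau))
        t += dt
    return history

# Example: start-up of simple shear at unit shear rate
# L_shear = lambda t: np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
# history = stress_deformation_field(L_shear, t_end=5.0, dt=0.01)
```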
Abstract:
PURPOSE: The aim of our study was to describe the clinical presentation of an unusual evanescent, exudative, choroidal pseudo-tumor with acute painful onset, and propose a pathogenesis. METHODS: We carried out a retrospective, observational study using the case series of three patients presenting with an evanescent, exudative, choroidal pseudo-tumor with acute painful onset. Ultra-widefield fluorescein and indocyanine green angiography (ICGA) using the Heidelberg Retina Angiograph and the Staurenghi 230 SLO Retina Lens were used to propose a pathogenesis of this unusual entity. RESULTS: In all three cases, acute ocular pain led to discovery of an exudative, partially hemorrhagic choroidal mass (thickness 2.4 mm-4.1 mm on ultrasound) that quickly regressed within weeks. In the subacute phase, all patients showed choroidal circulation abnormalities on dynamic wide-field ICGA in the affected quadrant, with delayed arterio-venous filling in two patients, and a poorly-defined vortex vein in the third. The choroidal circulation abnormalities resolved within 8-12 weeks, simultaneously with the spontaneous resolution of the choroidal pseudo-tumor. The findings evoked a self-resolving vortex vein occlusion in the corresponding quadrants with acute, painful choroidal exudation. CONCLUSIONS: An evanescent, exudative, hemorrhagic choroidal pseudo-tumor with acute painful onset may be caused by a vortex vein occlusion. Future patients need to be studied with ICGA in the acute phase to confirm this hypothesis.
Abstract:
Abstract based on that of the publication
Abstract:
Many of the new emerging Internet applications, such as TV over the Internet, radio over the Internet, and multi-point video streaming, among others, have the following resource requirements: consumed bandwidth, end-to-end delay, packet loss rate, etc. It is therefore necessary to formulate a proposal that specifies and provides, for this type of application, the resources needed for proper operation. In this thesis, we propose a multi-objective traffic engineering scheme that uses different distribution trees for many multicast flows. In this case, we use a multi-path approach for each egress node, thereby obtaining a multi-tree approach and, through it, creating different multicast trees. Moreover, our proposal determines the fraction in which the traffic is split across the multiple trees. The proposal can be applied in MPLS networks by establishing explicit routes for multicast events. In the first instance, the objective is to combine the following weighted objectives into an aggregated metric: maximum link utilization, hop count, total consumed bandwidth, and total end-to-end delay. We have formulated this multi-objective function (the MHDB-S model), and the results obtained show that several weighted objectives are reduced and the maximum link utilization is minimized. The problem is NP-hard; therefore, an algorithm is proposed to optimize the different objectives. The behaviour obtained with this algorithm is similar to that obtained with the model. Normally, during a multicast transmission the egress nodes may leave or join the tree, and for this reason this thesis also proposes a multi-objective traffic engineering scheme using different trees for dynamic multicast groups (in which the egress nodes can change during the lifetime of the connection). If a multicast tree is recomputed from scratch, it may consume considerable CPU time and, in addition, all communications using that multicast tree are temporarily interrupted. To alleviate these drawbacks, we propose an optimization model (the dynamic MHDB-D model) that reuses the previously computed multicast trees (from the static MHDB-S model) while adding new egress nodes. Using the weighted-sum method to solve the analytical model is not necessarily correct, because the solution space may be non-convex and, for this reason, some solutions may not be found. In addition, other types of objectives have been considered in other research works. For the reasons mentioned above, a new model called GMM is proposed, and to solve it a new algorithm based on Multi-Objective Evolutionary Algorithms is proposed. This algorithm is inspired by the Strength Pareto Evolutionary Algorithm (SPEA). To address the dynamic case with this generalized model, we have proposed a new dynamic model and a computational solution using probabilistic Breadth First Search (BFS). Finally, to evaluate the proposed optimization scheme, we ran different tests and simulations.
The main contributions of this thesis are the taxonomy, the multi-objective optimization models for the static and dynamic cases of multicast transmission (MHDB-S and MHDB-D), and the algorithms that provide computational solutions to these models; and, finally, the generalized models, also for the static and dynamic cases (GMM and dynamic GMM), together with the computational proposals to solve them using MOEA and probabilistic BFS.
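An illustrative weighted-sum form of the aggregated metric described above, combining the four stated objectives; the symbols and normalisation are ours, not the exact MHDB-S formulation:

```latex
% Illustrative weighted-sum aggregation of the four stated objectives:
\[
  \min_{f}\;
  w_1\,\alpha_{\max}
  + w_2 \sum_{T}\sum_{(i,j)\in T} h_{ij}
  + w_3 \sum_{(i,j)} b_{ij}
  + w_4 \sum_{T} d_{T},
  \qquad \sum_{k} w_k = 1,
\]
% where \alpha_{\max} is the maximum link utilization, h_{ij} counts the hops
% used by multicast tree T, b_{ij} is the bandwidth consumed on link (i,j),
% d_T is the end-to-end delay of tree T, and the minimization is over the
% traffic-split fractions f subject to flow-conservation constraints.
```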
Abstract:
Bloom filters are a data structure for storing data in compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents the yes-no Bloom filter, a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, which is another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that it recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, with the constraint that it must recognise no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Considering the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed, making use of a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best in comparison with a number of heuristics as well as the CPLEX built-in solver (B&B), and it is therefore what can be recommended for use in yes-no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
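A minimal sketch of the yes-no Bloom filter query path described above; the filter sizes, hash scheme and class name are illustrative, and the careful selection of which false positives to place in the no-filter (the ILP/ADP part of the paper) is left out:

```python
import hashlib

class YesNoBloomFilter:
    """Sketch of a yes-no Bloom filter: a standard 'yes' filter plus a 'no'
    filter that stores selected false positives of the 'yes' filter.  Sizes
    and the hash scheme are illustrative; the paper chooses which false
    positives to store via an ILP / ADP optimisation."""

    def __init__(self, m_yes=1024, m_no=256, k=4):
        self.m_yes, self.m_no, self.k = m_yes, m_no, k
        self.yes = bytearray(m_yes)
        self.no = bytearray(m_no)

    def _positions(self, item, m):
        digest = hashlib.sha256(item.encode()).digest()
        return [int.from_bytes(digest[4 * i:4 * i + 4], "big") % m
                for i in range(self.k)]

    def add(self, item):
        """Insert a true member into the yes-filter."""
        for pos in self._positions(item, self.m_yes):
            self.yes[pos] = 1

    def add_false_positive(self, item):
        """Record a known false positive of the yes-filter in the no-filter."""
        for pos in self._positions(item, self.m_no):
            self.no[pos] = 1

    def __contains__(self, item):
        in_yes = all(self.yes[p] for p in self._positions(item, self.m_yes))
        in_no = all(self.no[p] for p in self._positions(item, self.m_no))
        return in_yes and not in_no          # the no-filter vetoes the yes-filter
```

An item is reported as a member only if all of its yes-filter bits are set and not all of its no-filter bits are set; this is why the false positives placed in the no-filter must be chosen so that no true member is accidentally vetoed, which is precisely the constraint enforced by the ILP/ADP selection in the paper.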