891 results for Nonlinear dynamic analysis
Abstract:
Diseases are believed to arise from dysregulation of biological systems (pathways) perturbed by environmental triggers. A biological system as a whole is not just the sum of its components; rather, it is an ever-changing, complex, dynamic system that responds over time to internal and external perturbations. In the past, biologists have mainly focused on studying either the functions of isolated genes or the steady states of small biological pathways. However, it is system dynamics that play an essential role in giving rise to the cellular functions whose failure causes disease, such as growth, differentiation, division, and apoptosis. Biological phenomena of the entire organism are determined not only by steady-state characteristics of the biological systems, but also by their intrinsic dynamic properties, including stability, transient response, and controllability, which determine how the systems maintain their functions and performance under a broad range of random internal and external perturbations. As a proof of principle, we examine signal transduction pathways and genetic regulatory pathways as biological systems. We employ state-space equations widely used in systems science to model biological systems, and use expectation-maximization (EM) algorithms and Kalman filtering to estimate the parameters in the models. We apply the developed state-space models to human fibroblasts obtained from the autoimmune fibrosing disease scleroderma, and then perform dynamic analysis of a partial TGF-beta pathway in both normal and scleroderma fibroblasts stimulated by silica. We find that the TGF-beta pathway under silica perturbation shows significant differences in dynamic properties between normal and scleroderma fibroblasts. Our findings may open a new avenue for exploring the functions of cells and the mechanisms operative in disease development.
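For readers unfamiliar with the estimation machinery mentioned above, the following is a minimal sketch of the Kalman-filter recursion for a linear-Gaussian state-space model. The matrices, dimensions, and toy data are illustrative assumptions, not the paper's actual TGF-beta pathway model, and the EM step (re-estimating the matrices from smoothed statistics) is omitted.

```python
# Minimal Kalman filter for x_{t+1} = A x_t + w_t,  y_t = C x_t + v_t
# (sketch under assumed dimensions; not the paper's pathway model).
import numpy as np

def kalman_filter(y, A, C, Q, R, x0, P0):
    """Return filtered state means for observations y (T x m)."""
    x, P = x0, P0
    means = []
    for yt in y:
        # Predict step: propagate mean and covariance through the dynamics
        x = A @ x
        P = A @ P @ A.T + Q
        # Update step: correct with the new observation
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (yt - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
        means.append(x.copy())
    return np.array(means)

# Toy usage with a 2-state, 1-observation system (hypothetical numbers)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.1]])
y = np.random.randn(50, 1) * 0.3
states = kalman_filter(y, A, C, Q, R, x0=np.zeros(2), P0=np.eye(2))
```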
Abstract:
In patients diagnosed with pharmaco-resistant epilepsy, the cerebral areas responsible for seizure generation can be delineated by implanting intracranial electrodes. The identification of the epileptogenic zone (EZ) is based on visual inspection of the intracranial electroencephalogram (IEEG) performed by highly qualified neurophysiologists. New computer-based quantitative EEG analyses have been developed in collaboration with the signal analysis community to expedite EZ detection. The aim of the present report is to compare different signal analysis approaches developed in four European laboratories working in close collaboration with four European epilepsy centers. Computer-based signal analysis methods were retrospectively applied to IEEG recordings performed in four patients undergoing pre-surgical exploration for pharmaco-resistant epilepsy. The four methods elaborated by the different teams to identify the EZ are based on frequency analysis, on nonlinear signal analysis, on connectivity measures, or on statistical parametric mapping of epileptogenicity indices. All methods converge on the identification of the EZ in patients who present with fast activity at seizure onset. When traditional visual inspection was not successful in detecting the EZ on IEEG, the different signal analysis methods produced highly discordant results. Quantitative analysis of IEEG recordings complements clinical evaluation by contributing to the study of epileptogenic networks during seizures. We demonstrate that the sensitivity of different computer-based methods for detecting the EZ, relative to visual EEG inspection, depends on the specific seizure pattern.
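As a flavor of the frequency-analysis family of approaches compared here, the sketch below computes the ratio of high-frequency to low-frequency power in a signal, a quantity that tends to rise when fast activity appears at seizure onset. The band edges and the synthetic test signal are assumptions; none of the four laboratories' actual methods is reproduced.

```python
# Toy high/low band-power ratio on a synthetic 1-channel signal.
import numpy as np

def band_power_ratio(x, fs, low=(4.0, 12.0), high=(30.0, 90.0)):
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    p_low = psd[(freqs >= low[0]) & (freqs < low[1])].sum()
    p_high = psd[(freqs >= high[0]) & (freqs < high[1])].sum()
    return p_high / p_low

fs = 256.0
t = np.arange(0, 4.0, 1.0 / fs)
background = np.sin(2 * np.pi * 8 * t)                 # theta-band rhythm
onset = background + 0.8 * np.sin(2 * np.pi * 60 * t)  # added fast activity
print(band_power_ratio(background, fs), band_power_ratio(onset, fs))
```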
Resumo:
The dynamic effects of high-speed trains on viaducts are important issues for the design of the structures, as well as for the consideration of safe running conditions for the trains. In this work we start by reviewing the relevance of some basic design aspects. The significance of impact factor envelopes for moving loads is considered first. Resonance which may be achieved for high-speed trains requires dynamic analysis, for which some key aspects are discussed. The relevance of performing a longitudinal distribution of axle loads, the number of modes taken in analysis, and the consideration of vehicle-structure interaction are discussed with representative examples. The lateral dynamic effects of running trains on bridges is of importance for laterally compliant viaducts, such as some very tall structures erected in new high-speed lines. The relevance of this study is mainly for the safety of the traffic, considering both internal actions such as the hunting motion as well as external actions such as wind or earthquakes [1]. These studies require three-dimensional dynamic coupled vehicle-bridge models, and consideration of wheel to rail contact, a phenomenon which is complex and costly to model in detail. We describe here a fully nonlinear coupled model, described in absolute coordinates and incorporated into a commercial finite element framework [2]. The wheel-rail contact has been considered using a FastSim algorithm which provides a compromise between accuracy and computational cost, and captures the main nonlinear response of the contact interface. Two applications are presented, firstly to a vehicle subject to a strong wind gust traversing a bridge, showing the relevance of the nonlinear wheel-rail contact model as well as the dynamic interaction between bridge and vehicle. The second application is to a real HS viaduct with a long continuous deck and tall piers and high lateral compliance [3]. The results show the safety of the traffic as well as the importance of considering features such as track alignment irregularities.
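The resonance condition for moving loads mentioned above admits a simple back-of-the-envelope check: resonance can occur when the passing frequency of regularly spaced axle groups, v/d, matches a fraction of the span's natural frequency, giving critical speeds v_i = f0·d/i. The span frequency and coach length below are assumed example values, not data from the viaducts studied in the paper.

```python
# Critical speeds for moving-load resonance on a simply supported span.
f0 = 4.2   # first bending frequency of the span [Hz] (assumed)
d = 26.4   # regular coach length of the train [m] (assumed)

for i in range(1, 5):
    v = f0 * d / i  # resonance when passing frequency v/d equals f0/i
    print(f"i={i}: v = {v:.1f} m/s = {3.6 * v:.0f} km/h")
```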
Abstract:
For the past 20 years, dynamic analysis of shells has been one of the most fascinating fields for research. Using the new light materials, building engineers soon discovered that the consequent reduction of gravity forces produced not only the desired freedom of shape but also the emergence of environmental loads as the primary design factor; loads which present strong random properties and marked dynamic influence. On the other hand, technological advances in the aeronautical and astronautical fields placed engineers in front of shell structures of nonconventional shape that had to sustain substantially dynamic loads. The response to the increasingly challenging problems of the last two decades has been very bright; new forms, new materials, and new methods of analysis have arisen in the design of offshore platforms, nuclear vessels, spacecraft, etc. Thanks to the intensity of those years, we have at our disposal a coherent and homogeneous body of knowledge which enables us to face problems of a complexity inconceivable when IASS was founded. The open-minded approach to classical problems and the impact of the computer are probably important factors in the renaissance we have enjoyed in these years, and good proof of this are the papers presented at previous IASS meetings, as well as those we are going to consider in this one. Particularly striking is the great number of papers based on mathematical modeling, in contrast with the scarcity of those treating laboratory experiments on physical models. The universal entry of the computer into almost every phase of our lives, and the cost of physical models, are perhaps the reasons for this lack of experimental methods. Nevertheless, experiments continue to offer useful results, such as those obtained with the shaking table, in which the computer plays an essential role both in the application of loads and in the real-time processing of control data. Plates 1 and 2 record the papers presented under the dynamics heading; 40% of them are from Japan, in good correlation with the prominence that Japanese research has traditionally shown in this area. It is also interesting to find old friends such as Professors Tanaka, Nishimura, and Kostem, who presented valuable papers at previous IASS conferences. As we can see, there are papers representative of all tendencies, even purely analytical ones! Rather than discussing them in detail, which can be done after the authors' presentations, I think we can comment on the general pattern of the dynamic approach, as summarized in Plate 3.
Abstract:
The design of a nuclear power plant has to follow a number of regulations aimed at limiting the risks inherent in this type of installation. The goal is to prevent, and to limit the consequences of, any possible incident that might threaten the public or the environment. To verify that the safety requirements are met, a safety assessment process is followed. Safety analysis is a key component of a safety assessment, and it incorporates both probabilistic and deterministic approaches. The deterministic approach attempts to ensure that the various situations, and in particular the accidents, considered plausible have been taken into account, and that the monitoring systems and the engineered safety and safeguard systems will be capable of ensuring the safety goals. Probabilistic safety analysis, on the other hand, tries to demonstrate that the safety requirements are met for potential accidents both within and beyond the design basis, thus identifying vulnerabilities not necessarily accessible through deterministic safety analysis alone. Probabilistic safety assessment (PSA) methodology is widely used in the nuclear industry and is especially effective for comprehensive assessment of the measures needed to prevent accidents of small probability but severe consequences. Still, the trend towards risk-informed regulation (RIR) has demanded a more extended use of risk assessment techniques, with a significant need to further extend the scope and quality of PSA. This is where the theory of stimulated dynamics (TSD) intervenes, as it is the mathematical foundation of the integrated safety assessment (ISA) methodology developed by the Modelling and Simulation (MOSI) branch of the CSN (Consejo de Seguridad Nuclear). This methodology attempts to extend classical PSA by including accident dynamic analysis, an assessment of the damage associated with the transients, and a computation of the damage frequency. The application of the ISA methodology requires a computational framework called SCAIS (Simulation Code System for Integrated Safety Assessment). SCAIS supports accident dynamic analysis through simulation of nuclear accident sequences and operating procedures; furthermore, it includes probabilistic quantification of fault trees and sequences, and integration and statistical treatment of risk metrics. SCAIS relies on intensive use of code-coupling techniques to join typical thermal-hydraulic analysis, severe accident, and probability calculation codes. The integration of accident simulation into the risk assessment process, which requires the use of complex nuclear plant models, is what makes the methodology so powerful, yet it comes at the cost of an enormous increase in complexity. As that complexity is primarily concentrated in the accident simulation codes, the question arises of whether it is possible to reduce the number of required simulations; this is the focus of the present work. This document presents the work done on the investigation of more efficient techniques applied to the risk assessment process within the ISA methodology. Such techniques therefore have the primary goal of decreasing the number of simulations needed for an adequate estimation of the damage probability. As the methodology and tools are relatively recent, little work has been done along this line of investigation, making it a difficult but necessary task, and because of time limitations the scope of the work had to be reduced.
Therefore, some assumptions were made in order to work in simplified scenarios best suited for an initial approximation to the problem. The following section explains in detail the process followed to design and test the developed techniques. The next section then introduces the general concepts and formulae of the TSD theory, which are at the core of the risk assessment process. Afterwards, a description of the simulation framework requirements and design is given, followed by an introduction to the developed techniques, giving full detail of their mathematical background and procedures. Later, the test case is described and the results of applying the techniques are shown. Finally, the conclusions are presented and future lines of work are outlined.
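To make the cost problem concrete, the sketch below estimates a damage-exceedance probability by crude Monte Carlo over accident sequences. The expensive dynamic simulation is replaced by a cheap stand-in so the sketch runs; the model, threshold, and sample size are placeholders, not the SCAIS/TSD machinery itself.

```python
# Crude Monte Carlo estimate of P(damage > threshold) over sequences.
import random

def simulate_sequence(rng):
    """Stand-in for an expensive plant simulation: returns a damage variable."""
    operator_delay = rng.expovariate(1.0 / 300.0)  # seconds (hypothetical)
    return operator_delay                          # 'damage' grows with delay

def damage_probability(threshold, n_runs, seed=0):
    rng = random.Random(seed)
    hits = sum(simulate_sequence(rng) > threshold for _ in range(n_runs))
    return hits / n_runs

# For rare events the estimator's relative error scales like
# 1/sqrt(n_runs * p), which is what motivates techniques that reduce
# the number of required simulations.
print(damage_probability(threshold=1500.0, n_runs=20000))
```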
Abstract:
The authors present a charge/flux formulation of the equations of memristive circuits which seemingly shows that the memristor should not be considered a dynamic circuit element. Here, it is shown that this approach implicitly reduces the dynamic analysis to a certain subset of the state space in such a way that the dynamic contribution of memristors is hidden. This reduction might entail a substantial loss of information regarding, e.g., the local stability properties of the circuit. Two examples illustrate this. It is concluded that the memristor, even with its unconventional features, must be considered a dynamic element.
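The point at issue can be illustrated with the well-known HP-style linear dopant-drift model, in which the resistance depends on an internal state w that evolves with the current history; that state equation is exactly what makes the memristor a dynamic element. Parameter values are illustrative assumptions, and this is not the charge/flux formulation criticized in the comment.

```python
# HP-style memristor model: R depends on internal state w, dw/dt ∝ i.
import math

Ron, Roff, D, mu = 100.0, 16e3, 10e-9, 1e-14  # assumed device parameters
w, dt = 0.5 * D, 1e-5                          # initial state, time step

for step in range(100000):                     # one period of a 1 Hz drive
    t = step * dt
    v = math.sin(2 * math.pi * 1.0 * t)        # sinusoidal drive (assumed)
    R = Ron * (w / D) + Roff * (1 - w / D)     # state-dependent memristance
    i = v / R
    w += mu * Ron / D * i * dt                 # state equation: dw/dt ∝ i
    w = min(max(w, 0.0), D)                    # keep state inside the device

print(f"final memristance: {Ron * (w / D) + Roff * (1 - w / D):.1f} ohms")
```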
Abstract:
This thesis proposes a complete thermo-mechanical formulation for the nonlinear dynamic simulation of flexible multibody systems based on meshfree methods. The approach is founded on three main pillars: the total Lagrangian formulation for continua, the Bubnov-Galerkin discretization, and meshfree shape functions. Meshfree methods are characterized by the definition of a set of shape functions on overlapping domains, together with a background grid for integrating the discrete balance equations. Two types of shape functions have been chosen as representatives of the interpolating (Radial Basis Functions) and approximating (Moving Least Squares) families. Their formulation has been adapted to use compatible parameters, and their lack of predefined connectivity has been exploited to interconnect multiple domains automatically, allowing the use of non-conforming background meshes. A generalized formulation of constraints, joints, and contacts is proposed, valid for rigid and flexible solids, the latter discretized using either finite elements (FEM) or meshfree methods. The greatest advantage of this approach is that it makes the domain completely independent of the joints and external actions applied to each solid, allowing them to be defined even outside the boundary. At the same time, the number of constraint equations needed to define realistic joints is minimized. The various validations, examples, and detailed comparisons show that the proposed approach is generic and extensible to a large number of systems. In particular, comparisons with FEM indicate a significant reduction of the error for the same number of nodes, in mechanical, thermal, and coupled thermo-mechanical simulations alike. For equal error, the numerical efficiency of meshfree methods exceeds that of FEM the coarser the discretization. Finally, the formulation is applied to a real design problem concerning the maintenance of massive structures inside a fusion reactor, demonstrating its viability for the analysis of real problems and showing its potential for real-time simulation of nonlinear systems.
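One meshfree ingredient named above, Radial Basis Function interpolation on scattered nodes, can be sketched in a few lines. The Gaussian kernel, its shape parameter, and the 1-D test function are illustrative assumptions; the thesis' actual shape functions, domain overlap handling, and background integration are not reproduced.

```python
# RBF interpolation on scattered 1-D nodes with a Gaussian kernel.
import numpy as np

def rbf_matrix(x_a, x_b, c=0.5):
    """Gaussian RBF kernel matrix between two node sets."""
    r = np.abs(x_a[:, None] - x_b[None, :])
    return np.exp(-(r / c) ** 2)

nodes = np.linspace(0.0, 1.0, 11)             # scattered support nodes
u_nodes = np.sin(2 * np.pi * nodes)           # field sampled at the nodes

weights = np.linalg.solve(rbf_matrix(nodes, nodes), u_nodes)

x_eval = np.linspace(0.0, 1.0, 101)
u_eval = rbf_matrix(x_eval, nodes) @ weights  # interpolated field
print(np.max(np.abs(u_eval - np.sin(2 * np.pi * x_eval))))  # max error
```

Because RBF shape functions interpolate, the reconstructed field passes exactly through the nodal values, which is the property that distinguishes them from the approximating Moving Least Squares family also used in the thesis.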
Abstract:
In the present paper, the endogenous theory of time preference is extended to analyze those processes of capital accumulation and changes in environmental quality that are dynamically optimum with respect to the intertemporal preference ordering of the representative individual of the society in question. The analysis is carried out within the conceptual framework of the dynamic analysis of environmental quality, as developed by a number of economists for the specific cases of the fisheries and forestry commons. The duality principles on intertemporal preference ordering and capital accumulation are extended to the situation where processes of capital accumulation are subject to the Penrose effect, which exhibits a marginal decrease in the effect of investment in private and social overhead capital upon the rate at which capital is accumulated. The dynamically optimum time-path of economic activities is characterized by the proportionality of two systems of imputed, or efficient, prices: one associated with the given intertemporal ordering and another associated with the processes of accumulation of private and social overhead capital. It is shown, in particular, that the dynamic optimality of the processes of capital accumulation involving both private and social overhead capital is characterized by conditions identical with those involving private capital alone, with the role of social overhead capital exhibited only indirectly.
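Stated schematically, and in assumed notation, the class of problems the abstract describes combines endogenous (Uzawa-type) discounting with a Penrose-type accumulation constraint; the following is a sketch of the setup, not the paper's exact formulation.

```latex
% Endogenous time preference: the accumulated discount \Delta grows at a
% rate that depends on current utility.
\max_{c(\cdot),\,I(\cdot)} \int_0^\infty u\bigl(c(t),E(t)\bigr)\,e^{-\Delta(t)}\,dt,
\qquad \dot{\Delta}(t) = \delta\bigl(u(c(t),E(t))\bigr),
% Penrose effect: diminishing marginal effect of investment on accumulation.
\qquad \dot{K}(t) = \varphi\!\left(\frac{I(t)}{K(t)}\right)K(t),
\quad \varphi' > 0,\ \varphi'' < 0.
```

On this reading, the proportionality of the two imputed price systems corresponds to the first-order conditions linking the costate variable of the preference side (associated with Δ) to the costate of the accumulation side (associated with K).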
Abstract:
Experimental dynamic analysis has been widely researched as a tool for assessing the integrity of reinforced concrete structures. There are damage identification techniques based on modal properties such as resonance frequencies, mode shapes, modal curvatures, and damping. There are also techniques based on the nonlinearity of the dynamic response which, despite their great potential for damage detection, have been little explored in recent years. This work aims to assess the structural integrity of reinforced concrete beams through the behaviour of their dynamic response. Dynamic tests were performed on two reinforced concrete beams 3.5 m long, 25 cm wide, and 35 cm high, with identical reinforcement ratios but configured with steel bars of different diameters, 2 ϕ 16 mm and 8 ϕ 8 mm, respectively. These beams, initially undamaged, were subjected to loading and unloading cycles of increasing intensity until rupture of the element. After each cycle, the dynamic properties were evaluated experimentally, using random-signal and transient-signal excitation techniques, respectively, in order to determine parameters that indicate the gradual deterioration of the element. In these dynamic tests, different excitation force amplitudes were applied. It was found that increasing the amplitude of the dynamic excitation force caused reductions of 1.1% and 2.4% in the resonance frequency values, associated with the random and transient excitations, respectively, and a nonlinear behaviour of the damping ratios associated with the random excitations, while they grew linearly with the transient excitations. It was further observed that the resonance frequency values decrease with the reduction in mechanical stiffness, which diminishes as the level of cracking induced in the models increases. The damping ratio values, after each cycle, behaved nonlinearly and assumed different values depending on the excitation technique employed. It is believed that this nonlinearity is related to the damage caused to the element by the structural loading and, consequently, to how energy dissipation takes place in the initiation, configuration, and propagation of cracks in reinforced concrete elements.
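The modal parameters discussed above, resonance frequency and damping ratio, can be read off a frequency response function via the classical half-power (−3 dB) bandwidth rule, ζ ≈ (f2 − f1)/(2 f_peak). The single-degree-of-freedom response below is synthetic with assumed values; it is not the beams' measured data.

```python
# Half-power bandwidth estimate of damping from a synthetic SDOF FRF.
import numpy as np

fn, zeta_true = 40.0, 0.02                    # assumed natural freq / damping
f = np.linspace(30.0, 50.0, 20001)
r = f / fn
H = 1.0 / np.sqrt((1 - r**2) ** 2 + (2 * zeta_true * r) ** 2)  # SDOF FRF

i_pk = np.argmax(H)
half_power = H[i_pk] / np.sqrt(2.0)
above = np.where(H >= half_power)[0]
f1, f2 = f[above[0]], f[above[-1]]            # half-power band edges
print(f"f_res = {f[i_pk]:.2f} Hz, zeta = {(f2 - f1) / (2 * f[i_pk]):.4f}")
```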
Abstract:
This research focuses on the design and verification of inter-organizational controls. Instead of looking at a documentary procedure, which is the flow of documents and data among the parties, the research examines the underlying deontic purpose of the procedure, the so-called deontic process, and identifies control requirements to secure this purpose. The vision of the research is a formal theory for streamlining bureaucracy in business and government procedures. Underpinning most inter-organizational procedures are deontic relations, which are about the rights and obligations of the parties. When all parties trust each other, they are willing to fulfill their obligations and honor the counterparties' rights; thus controls may not be needed. The challenge is in cases where trust may not be assumed. In these cases, the parties need to rely on explicit controls to reduce their exposure to the risk of opportunism. However, at present there is no analytic approach or technique to determine which controls are needed for a given contracting or governance situation. The research proposes a formal method for deriving inter-organizational control requirements based on static analysis of deontic relations and dynamic analysis of deontic changes. The formal method takes a deontic process model of an inter-organizational transaction and certain domain knowledge as inputs to automatically generate control requirements that a documentary procedure needs to satisfy in order to limit fraud potentials. The deliverables of the research include a formal representation, namely Deontic Petri Nets, which combines multiple modal logics and Petri nets for modeling deontic processes; a set of control principles that represents an initial formal theory on the relationships between deontic processes and documentary procedures; and a working prototype that uses model checking techniques to identify fraud potentials in a deontic process and generate control requirements to limit them. Fourteen scenarios of two well-known international payment procedures -- cash in advance and documentary credit -- have been used to test the prototype. The results showed that all control requirements stipulated in these procedures could be derived automatically.
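The modeling substrate can be illustrated with a toy Petri net whose transitions stand for deontic events in a payment procedure. The net, the place names, and the "obligation pending" reading are hypothetical illustrations; the dissertation's Deontic Petri Nets additionally attach modal-logic annotations, which are not reproduced here.

```python
# Toy Petri net: transitions consume input places and produce output places.
net = {
    # transition: (input places consumed, output places produced)
    "ship_goods":  ({"contract_signed"}, {"goods_in_transit", "payment_due"}),
    "pay_invoice": ({"payment_due"},     {"payment_settled"}),
}
marking = {"contract_signed"}  # initial marking: contract exists

def fire(transition):
    """Fire a transition if enabled; returns the new marking."""
    inputs, outputs = net[transition]
    if not inputs <= marking:
        raise ValueError(f"{transition} not enabled at {marking}")
    marking.difference_update(inputs)
    marking.update(outputs)
    return marking

fire("ship_goods")             # seller performs; buyer's obligation arises
print("obligation pending:", "payment_due" in marking)
fire("pay_invoice")            # buyer discharges the obligation
print("obligation pending:", "payment_due" in marking)
```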
Abstract:
The maintenance and evolution of software systems has become a rather critical task over recent years due to the diversity and high demand of features, devices, and users. Understanding and analyzing how new changes impact the quality attributes of the architecture of such systems is an essential prerequisite to avoid the deterioration of their quality during evolution. This thesis proposes an automated approach for analyzing the variation of the performance quality attribute in terms of execution time (response time). It is implemented by a framework that adopts dynamic analysis and software repository mining techniques to provide an automated way of revealing potential sources, commits and issues, of performance variation in scenarios during the evolution of software systems. The approach defines four phases: (i) preparation, choosing the scenarios and preparing the target releases; (ii) dynamic analysis, determining the performance of scenarios and methods by computing their execution times; (iii) variation analysis, processing and comparing the dynamic analysis results for different releases; and (iv) repository mining, identifying issues and commits associated with the detected performance variation. Empirical studies were carried out to evaluate the approach from different perspectives. An exploratory study analyzed the feasibility of applying the approach to systems from different domains to automatically identify source-code elements with performance variation and the changes that affected those elements during an evolution. This study analyzed three systems: (i) SIGAA, a web system for academic management; (ii) ArgoUML, a UML modeling tool; and (iii) Netty, a framework for network applications. Another study performed an evolutionary analysis by applying the approach to multiple releases of Netty and of the web frameworks Wicket and Jetty. In this study, 21 releases were analyzed (seven from each system), totaling 57 scenarios. In summary, 14 scenarios with significant performance variation were found for Netty, 13 for Wicket, and 9 for Jetty. Additionally, feedback was obtained from eight developers of these systems through an online questionnaire. Finally, in the last study, a performance regression model was developed to indicate the properties of commits that are most likely to cause performance degradation. Overall, 997 commits were mined, of which 103 were retrieved from degraded source-code elements and 19 from optimized ones, while 875 had no impact on execution time. The number of days before the release and the day of the week proved to be the most relevant variables of performance-degrading commits in our model. The area under the Receiver Operating Characteristic (ROC) curve of the regression model is 60%, which means that using the model to decide whether or not a commit will cause degradation is 10% better than a random decision.
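The core comparison in the variation-analysis phase described above can be sketched as follows: given execution-time samples of one scenario measured on two releases, decide whether the difference is significant. The samples here are synthetic stand-ins; the thesis' actual instrumentation, scenarios, and statistical test may differ.

```python
# Flagging a potential performance degradation between two releases.
import random
import statistics

random.seed(1)
old_release = [random.gauss(120.0, 8.0) for _ in range(30)]  # ms (synthetic)
new_release = [random.gauss(131.0, 8.0) for _ in range(30)]  # ms (synthetic)

# Welch-style t statistic computed by hand to keep the sketch dependency-free.
m1, m2 = statistics.mean(old_release), statistics.mean(new_release)
v1, v2 = statistics.variance(old_release), statistics.variance(new_release)
t = (m2 - m1) / ((v1 / len(old_release) + v2 / len(new_release)) ** 0.5)
print(f"mean delta = {m2 - m1:+.1f} ms, t = {t:.2f}")
print("flag as potential degradation" if t > 2.0 else "no significant change")
```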
Abstract:
Combining the advantages of parallel mechanisms and compliant mechanisms, a compliant parallel mechanism with two rotational DOFs (degrees of freedom) is designed to meet the requirements of a lightweight and compact pan-tilt platform. First, two commonly used design methods, direct substitution and FACT (Freedom and Constraint Topology), are applied to design the configuration of the pan-tilt system, and the similarities and differences of the two design alternatives are compared. Then, inverse kinematic analysis of the candidate mechanism is carried out using the pseudo-rigid-body model (PRBM), and the Jacobian of its differential kinematics is further derived to help the designer perform dynamic analysis of the 8R compliant mechanism. In addition, the maximum stress occurring within the mechanism's workspace is checked by finite element analysis. Finally, a method to determine the joint damping of the flexure hinges is presented, aimed at exploring the effect of joint damping on actuator selection and real-time control. To the authors' knowledge, almost no existing literature addresses this issue.
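The differential-kinematics step mentioned above can be sketched with a Jacobian obtained by finite differences from a forward-kinematics map. The 2-DOF pan-tilt map below is a generic spherical pointing model with assumed geometry, not the paper's 8R compliant mechanism or its PRBM.

```python
# Finite-difference Jacobian of a pan-tilt forward-kinematics map.
import numpy as np

def forward_kinematics(q, r=1.0):
    """Pointing direction of a pan-tilt axis: q = [pan, tilt] in radians."""
    pan, tilt = q
    return r * np.array([
        np.cos(tilt) * np.cos(pan),
        np.cos(tilt) * np.sin(pan),
        np.sin(tilt),
    ])

def numerical_jacobian(f, q, h=1e-6):
    J = np.zeros((len(f(q)), len(q)))
    for j in range(len(q)):
        dq = np.zeros_like(q)
        dq[j] = h
        J[:, j] = (f(q + dq) - f(q - dq)) / (2 * h)  # central difference
    return J

q = np.array([0.3, 0.2])
print(numerical_jacobian(forward_kinematics, q))
```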
Abstract:
Modern software applications are becoming more dependent on database management systems (DBMSs). DBMSs are usually used as black boxes by software developers. For example, Object-Relational Mapping (ORM) is one of the most popular database abstraction approaches that developers use nowadays. Using ORM, objects in Object-Oriented languages are mapped to records in the database, and object manipulations are automatically translated to SQL queries. As a result of such conceptual abstraction, developers do not need deep knowledge of databases; however, all too often this abstraction leads to inefficient and incorrect database access code. Thus, this thesis proposes a series of approaches to improve the performance of database-centric software applications that are implemented using ORM. Our approaches focus on troubleshooting and detecting inefficient database accesses (i.e., performance problems) in the source code, and we rank the detected problems based on their severity. We first conduct an empirical study on the maintenance of ORM code in both open source and industrial applications. We find that ORM performance-related configurations are rarely tuned in practice, and there is a need for tools that can help improve/tune the performance of ORM-based applications. Thus, we propose approaches along two dimensions to help developers improve the performance of ORM-based applications: 1) helping developers write more performant ORM code; and 2) helping developers configure ORM configurations. To provide tooling support to developers, we first propose static analysis approaches to detect performance anti-patterns in the source code. We automatically rank the detected anti-pattern instances according to their performance impacts. Our study finds that by resolving the detected anti-patterns, the application performance can be improved by 34% on average. We then discuss our experience and lessons learned when integrating our anti-pattern detection tool into industrial practice. We hope our experience can help improve the industrial adoption of future research tools. However, as static analysis approaches are prone to false positives and lack runtime information, we also propose dynamic analysis approaches to further help developers improve the performance of their database access code. We propose automated approaches to detect redundant data access anti-patterns in the database access code, and our study finds that resolving such redundant data access anti-patterns can improve application performance by an average of 17%. Finally, we propose an automated approach to tune performance-related ORM configurations using both static and dynamic analysis. Our study shows that our approach can help improve application throughput by 27--138%. Through our case studies on real-world applications, we show that all of our proposed approaches can provide valuable support to developers and help improve application performance significantly.
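One redundant-data-access anti-pattern mentioned above is the same SELECT issued repeatedly within a single request, as happens with per-object ORM loading ("N+1 queries"). The sketch below flags such repetitions in a query log; the log is a fabricated illustration, while the thesis' detectors work on real traces of ORM-generated SQL.

```python
# Flag query shapes repeated within one request's SQL log.
from collections import Counter

request_query_log = [
    "SELECT * FROM users WHERE id = ?",
    "SELECT * FROM orders WHERE user_id = ?",
    "SELECT * FROM orders WHERE user_id = ?",
    "SELECT * FROM orders WHERE user_id = ?",
]

def redundant_accesses(log, threshold=2):
    """Return query shapes repeated at least `threshold` times in one request."""
    counts = Counter(log)
    return {q: n for q, n in counts.items() if n >= threshold}

for query, n in redundant_accesses(request_query_log).items():
    print(f"{n}x {query}  -> candidate for eager loading / caching")
```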