960 results for Nonlinear dynamic analysis


Relevance:

30.00%

Publisher:

Abstract:

Rapid ongoing evolution of multiprocessors will lead to systems with hundreds of processing cores integrated in a single chip. An emerging challenge is the implementation of reliable and efficient interconnections between these cores as well as the other components of the system. Network-on-Chip is an interconnection approach intended to solve the performance bottleneck caused by traditional, poorly scalable communication structures such as buses. However, a large on-chip network raises issues related to, for instance, congestion and system control. Additionally, faults can cause problems in multiprocessor systems: they can be transient faults, permanent manufacturing faults, or faults that appear due to aging. To solve the emerging traffic management and controllability issues, and to maintain system operation regardless of faults, a monitoring system is needed. The monitoring system should be dynamically applicable to various purposes and it should fully cover the system under observation. In a large multiprocessor the distances between components can be relatively long; the system should therefore be designed so that the amount of energy-inefficient long-distance communication is minimized. This thesis presents a dynamically clustered distributed monitoring structure. The monitoring is distributed so that no centralized control is required for basic tasks such as traffic management and task mapping. To enable extensive analysis of different Network-on-Chip architectures, an in-house SystemC-based simulation environment was implemented. It allows transaction-level analysis without time-consuming circuit-level implementations during the early design phases of novel architectures and features. The presented analysis shows that the dynamically clustered monitoring structure can be efficiently utilized for traffic management in faulty and congested Network-on-Chip-based multiprocessor systems. The monitoring structure can also be successfully applied for task mapping purposes. Furthermore, the analysis shows that the presented in-house simulation environment is a flexible and practical tool for extensive Network-on-Chip architecture analysis.
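
The thesis' SystemC environment itself is not part of this abstract; as a loose, hypothetical illustration of what "dynamically clustered, distributed monitoring" can mean in practice, the following Python sketch keeps a per-cluster congestion counter on a small 2D mesh with XY routing and uses it, together with a fault list, for a greedy task-mapping decision. All names, sizes and the heuristic are assumptions, not the thesis' design.

```python
# Hypothetical sketch: 4x4 mesh NoC, 2x2 monitoring clusters, greedy task mapping.
import random

MESH = 4                       # 4x4 mesh of cores
CLUSTER = 2                    # cores grouped into 2x2 monitoring clusters

def cluster_of(x, y):
    return (x // CLUSTER, y // CLUSTER)

# per-cluster monitoring state: accumulated traffic load and known faulty cores
load = {cluster_of(x, y): 0 for x in range(MESH) for y in range(MESH)}
faulty = {(1, 1)}              # e.g. one permanently faulty core

def xy_hops(src, dst):
    # deterministic XY routing: hop count equals the Manhattan distance
    return abs(src[0] - dst[0]) + abs(src[1] - dst[1])

def record_traffic(src, dst):
    # simplification: charge the traffic load to the source and destination clusters
    hops = xy_hops(src, dst)
    load[cluster_of(*src)] += hops
    load[cluster_of(*dst)] += hops

def map_task(partner):
    # greedy mapping: fault-free core in the least-loaded cluster,
    # ties broken by distance to the communication partner
    candidates = [(x, y) for x in range(MESH) for y in range(MESH)
                  if (x, y) not in faulty]
    return min(candidates,
               key=lambda c: (load[cluster_of(*c)], xy_hops(partner, c)))

random.seed(0)
for _ in range(200):           # synthetic background traffic
    record_traffic((random.randrange(MESH), random.randrange(MESH)),
                   (random.randrange(MESH), random.randrange(MESH)))
print("new task mapped to core", map_task((0, 0)))
```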

Relevance:

30.00%

Publisher:

Abstract:

A three-dimensional nonlinear viscoelastic constitutive model for solid propellant is developed. In their earlier work, the authors developed an isotropic constitutive model and verified it for the one-dimensional case; in the present work, the validity of the model is extended to three-dimensional cases. Large deformation, dewetting and cyclic loading effects are treated as the main sources of the nonlinear behavior of the solid propellant. A viscoelastic dewetting criterion is used, and the softening of the solid propellant due to dewetting is treated through a decrease in modulus. The nonlinearities during cyclic loading are accounted for by functions of the octahedral shear strain measure. The constitutive equation is implemented into a finite element code for the analysis of propellant grains. The commercial finite element package ABAQUS is used for the analysis, and the model is introduced into the code through a user subroutine. The model is evaluated under different loading conditions and the predicted values are in good agreement with the measured ones. The resulting model is applied to analyze a solid propellant grain under a thermal cycling load.
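
A minimal one-dimensional sketch of the ingredients named above (Prony-series viscoelasticity plus a softening factor standing in for dewetting) is given below; it is not the authors' three-dimensional ABAQUS user subroutine, and the material constants and the softening law are hypothetical.

```python
# 1D generalized-Maxwell (Prony series) stress update with a simple
# strain-driven softening factor standing in for dewetting.
import numpy as np

E_inf, E_i = 2.0, np.array([4.0, 1.5])      # MPa, long-term + Prony moduli
tau_i = np.array([0.5, 10.0])               # s, relaxation times
eps_dw, d_max = 0.05, 0.6                   # dewetting onset strain, max damage

def softening(eps_max):
    """Modulus reduction once the maximum strain measure exceeds the onset value."""
    return 1.0 - d_max * max(0.0, eps_max - eps_dw) / (1.0 + abs(eps_max))

def stress_history(t, eps):
    h = np.zeros_like(tau_i)                # internal (hereditary) variables
    sig, eps_max = [], 0.0
    for n in range(1, len(t)):
        dt, deps = t[n] - t[n - 1], eps[n] - eps[n - 1]
        eps_max = max(eps_max, abs(eps[n]))
        a = np.exp(-dt / tau_i)
        h = a * h + E_i * tau_i / dt * (1 - a) * deps   # recursive convolution
        sig.append(softening(eps_max) * (E_inf * eps[n] + h.sum()))
    return np.array(sig)

t = np.linspace(0, 40, 2001)
eps = 0.08 * np.sin(0.5 * t)                # cyclic strain input
print("peak stress (MPa):", stress_history(t, eps).max().round(3))
```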

Relevance:

30.00%

Publisher:

Abstract:

One of the main complexities in the simulation of the nonlinear dynamics of rigid bodies consists in properly describing the finite rotations that they may undergo. It is well known that, to avoid singularities in the representation of the SO(3) rotation group, at least four parameters must be used. However, it is computationally expensive to use a four-parameter representation since, as only three of the parameters are independent, constraint equations must be introduced in the model, leading to differential-algebraic equations instead of ordinary differential ones. Three-parameter representations are numerically more efficient. Therefore, the objective of this paper is to evaluate numerically the influence of the parametrization and its singularities on the simulation of the dynamics of a rigid body. This is done through the analysis of a heavy top with a fixed point, using two three-parameter systems: Euler angles and the rotation vector. Theoretical results were used to guide the numerical simulation and to ensure that all possible cases were analyzed. The two parametrizations were compared using several integrators. The results show that Euler angles lead to faster integration than the rotation vector. A singular case of the Euler angles, in which the representation approaches a theoretical singular point, was analyzed in detail. It is shown that, contrary to what might be expected, 1) the numerical integration is very efficient, even more so than in any other case, and 2) in spite of the uncertainty in the Euler angles themselves, the body motion is well represented.
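
For context, a short sketch of the heavy symmetric top integrated directly in 3-1-3 Euler angles is given below; the 1/sin^2(theta) factor in the precession rate is the parametrization singularity the paper investigates. The inertia values, mass and initial conditions are made up, and the integrator is simply SciPy's default Runge-Kutta scheme, not the integrators compared in the paper.

```python
# Heavy symmetric top with a fixed point in 3-1-3 Euler angles,
# using the conserved momenta p_phi and p_psi.
import numpy as np
from scipy.integrate import solve_ivp

I1, I3 = 1.0, 0.5          # kg m^2, transverse and axial moments of inertia
m, g, l = 1.0, 9.81, 0.3   # mass, gravity, distance pivot -> centre of mass

theta0, theta_dot0 = 0.5, 0.0
phi_dot0, psi_dot0 = 0.5, 20.0                            # slow precession, fast spin
p_psi = I3 * (psi_dot0 + phi_dot0 * np.cos(theta0))       # conserved momenta
p_phi = I1 * phi_dot0 * np.sin(theta0) ** 2 + p_psi * np.cos(theta0)

def rhs(t, y):
    theta, theta_dot, phi, psi = y
    s, c = np.sin(theta), np.cos(theta)
    phi_dot = (p_phi - p_psi * c) / (I1 * s ** 2)         # singular as sin(theta) -> 0
    psi_dot = p_psi / I3 - phi_dot * c
    theta_ddot = (I1 * phi_dot ** 2 * c * s - p_psi * phi_dot * s
                  + m * g * l * s) / I1
    return [theta_dot, theta_ddot, phi_dot, psi_dot]

sol = solve_ivp(rhs, (0.0, 10.0), [theta0, theta_dot0, 0.0, 0.0],
                max_step=0.01, rtol=1e-9)
theta = sol.y[0]
print(f"nutation range: {np.degrees(theta.min()):.1f} to {np.degrees(theta.max()):.1f} deg")
```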

Relevance:

30.00%

Publisher:

Abstract:

Chaotic dynamical systems exhibit trajectories in their phase space that converge to a strange attractor. The strangeness of the chaotic attractor is associated with its dimension, which is noninteger. This contribution presents an overview of the main definitions of dimension, discussing their evaluation from time series by employing the correlation and the generalized dimensions. The investigation is applied to the nonlinear pendulum, where signals are generated by numerical integration of the mathematical model and a single variable of the system is selected as a time series. In order to simulate experimental data sets, random noise is introduced into the time series. State space reconstruction and the determination of attractor dimensions are carried out for both periodic and chaotic signals. Results obtained from the time series analyses are compared with a reference value obtained from the analysis of the mathematical model, and the noise sensitivity is estimated. This procedure allows one to identify the best techniques to be applied in the analysis of experimental data.
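
As a concrete, heavily simplified illustration of the time-series route described above, the sketch below delay-embeds a scalar signal and estimates the correlation dimension from the slope of a Grassberger-Procaccia correlation sum; the noisy sine used as input is a stand-in for the pendulum signals, and the embedding parameters and radii are arbitrary choices.

```python
# Delay embedding + Grassberger-Procaccia correlation sum; the log-log slope
# estimates the correlation dimension D2.
import numpy as np

def delay_embed(x, dim, tau):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def correlation_sum(points, r):
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    n = len(points)
    return (np.sum(d < r) - n) / (n * (n - 1))            # exclude self-pairs

rng = np.random.default_rng(0)
t = np.linspace(0, 200, 4000)
x = np.sin(t) + 0.01 * rng.standard_normal(t.size)        # "experimental" series

emb = delay_embed(x[::4], dim=3, tau=5)                   # subsample to keep it small
radii = np.logspace(-1.2, -0.2, 8)
C = np.array([correlation_sum(emb, r) for r in radii])
D2 = np.polyfit(np.log(radii), np.log(C), 1)[0]           # slope ~ correlation dimension
print(f"estimated correlation dimension: {D2:.2f} (a periodic orbit should give ~1)")
```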

Relevance:

30.00%

Publisher:

Abstract:

Chaotic behaviour is one of the hardest problems that can arise in nonlinear dynamical systems with severe nonlinearities. It makes the system's responses unpredictable and noise-like, and in some applications it should be avoided. One approach to detecting chaotic behaviour is to find the Lyapunov exponent by examining the dynamical equation of the system, which requires a model of the system. The goal of this study is the diagnosis of chaotic behaviour purely by exploring the data (signal), without using any dynamical model of the system. In this work two methods are tested on time series data collected from the sensors of an AMB (Active Magnetic Bearing) system. The first method finds the largest Lyapunov exponent by the Rosenstein method. The second method is the 0-1 test for identifying chaotic behaviour. These two methods are used to detect whether the data are chaotic. The Rosenstein method requires the minimum embedding dimension, which is found using the Cao method. The Cao method does not give just the minimum embedding dimension: it also gives the order of the nonlinear dynamical equation of the system and shows how the system's signals are corrupted by noise. At the end of this research, a runs test is introduced to show that the data are not excessively noisy.
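
The 0-1 test mentioned above is compact enough to sketch directly. In the Gottwald-Melbourne formulation, the signal drives two translation variables whose mean-square displacement grows linearly for chaotic data, giving a growth-rate indicator K near 1 (near 0 for regular data). The sketch below applies it to a chaotic logistic-map series standing in for the AMB sensor data; all tuning choices are illustrative.

```python
# 0-1 test for chaos, correlation method, median over random c values.
import numpy as np

def zero_one_test(x, n_c=50, rng=np.random.default_rng(1)):
    x = np.asarray(x, dtype=float)
    N = len(x)
    Ks = []
    for c in rng.uniform(np.pi / 5, 4 * np.pi / 5, n_c):   # avoid resonant c values
        j = np.arange(1, N + 1)
        p = np.cumsum(x * np.cos(j * c))                    # translation variables
        q = np.cumsum(x * np.sin(j * c))
        n_max = N // 10
        M = np.array([np.mean((p[n:] - p[:-n]) ** 2 + (q[n:] - q[:-n]) ** 2)
                      for n in range(1, n_max)])            # mean-square displacement
        n = np.arange(1, n_max)
        Ks.append(np.corrcoef(n, M)[0, 1])                  # growth rate K_c
    return np.median(Ks)

x = np.empty(3000)
x[0] = 0.4
for i in range(1, len(x)):
    x[i] = 3.97 * x[i - 1] * (1 - x[i - 1])                 # chaotic logistic map
print(f"0-1 test K = {zero_one_test(x):.2f} (close to 1 -> chaotic)")
```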

Relevance:

30.00%

Publisher:

Abstract:

In this paper the local dynamical behavior of a slewing flexible structure is analyzed considering nonlinear curvature. The dynamics of the original (nonlinear) governing equations of motion are reduced to the center manifold in the neighborhood of an equilibrium solution in order to study the local stability of the system. At this critical point a Hopf bifurcation occurs. In this region, one can find values of the control parameter (the structural damping coefficient) for which the system is unstable and values for which system stability is assured (periodic motion). This local analysis of the system reduced to the center manifold establishes the stable/unstable behavior of the original system around a known solution.
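
As a toy illustration of the kind of local stability question addressed above, the sketch below sweeps a damping coefficient in a hypothetical two-state linearization and reports where a complex-conjugate eigenvalue pair crosses the imaginary axis (the Hopf condition); the matrix is a stand-in, not the slewing-structure model reduced to the center manifold in the paper.

```python
# Sweep the damping coefficient of a hypothetical linearization and locate the
# Hopf-type crossing of the imaginary axis.
import numpy as np

k, mu = 4.0, 0.3          # hypothetical stiffness and destabilizing coefficient

def jacobian(c):
    # linearization about the equilibrium for damping coefficient c
    return np.array([[0.0, 1.0],
                     [-k, mu - c]])

for c in np.linspace(0.0, 0.6, 13):
    lam = np.linalg.eigvals(jacobian(c))
    leading = lam[np.argmax(lam.real)]
    tag = "unstable" if leading.real > 1e-9 else "stable"
    print(f"c = {c:4.2f}  Re(lambda) = {leading.real:+.3f}  "
          f"Im(lambda) = {leading.imag:+.3f}  -> {tag}")
# Re(lambda) changes sign at c = mu while Im(lambda) != 0, i.e. a Hopf-type
# crossing; the nonlinear (center-manifold) analysis then decides the stability
# of the resulting periodic motion.
```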

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents a one-dimensional, semi-empirical dynamic model for the simulation and analysis of a calcium looping process for post-combustion CO2 capture. Reduction of greenhouse gas emissions from fossil fuel power production requires rapid actions, including the development of efficient carbon capture and sequestration technologies. The development of new carbon capture technologies can be expedited by using modelling tools: techno-economic evaluation of new capture processes can be done quickly and cost-effectively with computational models before building expensive pilot plants. Post-combustion calcium looping is a developing carbon capture process which utilizes fluidized bed technology with lime as a sorbent. The main objective of this work was to analyse the technological feasibility of the calcium looping process at different scales with a computational model. A one-dimensional dynamic model was applied to the calcium looping process, simulating the behaviour of the interconnected circulating fluidized bed reactors. The model couples fundamental mass and energy balance solvers with semi-empirical models describing solid behaviour in a circulating fluidized bed and the chemical reactions occurring in the calcium loop. In addition, fluidized bed combustion, heat transfer and core-wall layer effects were modelled. The calcium looping model framework was successfully applied to a 30 kWth laboratory scale unit and a 1.7 MWth pilot scale unit, and was used to design a conceptual 250 MWth industrial scale unit. Valuable information was gathered on the behaviour of the small laboratory scale device. In addition, the interconnected behaviour of the pilot plant reactors and the effect of solid fluidization on the thermal and carbon dioxide balances of the system were analysed. The scale-up study provided practical information on the thermal design of an industrial sized unit, the selection of particle size, and operability in different load scenarios.
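
A drastically reduced sketch of the kind of balance such a model solves is given below: a single well-mixed carbonator bed whose carbonate conversion evolves under a first-order carbonation rate and solid exchange with the calciner. All rate constants, flows and the maximum conversion are invented, and the real model is one-dimensional and far more detailed.

```python
# Well-mixed carbonator: carbonate conversion X from carbonation vs. solid renewal.
import numpy as np
from scipy.integrate import solve_ivp

n_ca = 5.0e4        # mol Ca held in the carbonator bed
F_ca = 600.0        # mol/s sorbent circulation (arrives fully calcined, X = 0)
F_co2 = 120.0       # mol/s CO2 entering with the flue gas
k_s = 0.12          # 1/s apparent carbonation rate constant
y_co2 = 0.15        # CO2 mole fraction seen by the solids (kept constant here)
X_max = 0.25        # maximum carbonate conversion of the aged sorbent

def dXdt(t, X):
    r_carb = k_s * (X_max - X[0]) * y_co2          # mol CO2 / (mol Ca s)
    return [r_carb - F_ca * X[0] / n_ca]           # reaction vs. solid renewal

sol = solve_ivp(dXdt, (0.0, 300.0), [0.0], max_step=1.0)
X_ss = sol.y[0, -1]
capture = n_ca * k_s * (X_max - X_ss) * y_co2 / F_co2
print(f"steady-state conversion X = {X_ss:.3f}, CO2 capture efficiency = {capture:.2f}")
```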

Relevance:

30.00%

Publisher:

Abstract:

Positron emission tomography (PET) using 18F-FDG plays a vital role in the diagnosis and treatment planning of cancer. However, the most widely used radiotracer, 18F-FDG, is not specific to tumours and can also accumulate in inflammatory lesions as well as in normal physiologically active tissues, making diagnosis and treatment planning complicated for physicians. Malignant, inflammatory and normal tissues are known to have different pathways for glucose metabolism, which could be evident from the different characteristics of the time activity curves obtained with a dynamic PET acquisition protocol. Therefore, we aimed to develop new image analysis methods for PET scans of the head and neck region which could differentiate between inflammation, tumour and normal tissues using this functional information within the radiotracer uptake areas. We derived different dynamic features from the time activity curves of the voxels in these areas and compared them with the widely used static parameter, SUV, using the Gaussian mixture model algorithm as well as the K-means algorithm, in order to assess their effectiveness in discriminating metabolically different areas. Moreover, we also correlated the dynamic features with other clinical metrics obtained independently of PET imaging. The results show that some of the developed features can be useful in differentiating tumour tissues from inflammatory regions, and some dynamic features also show positive correlations with clinical metrics. If these proposed methods are further explored, they could prove useful in reducing false positive tumour detections and in developing real-world applications for tumour diagnosis and contouring.
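
As a hedged illustration of the feature-and-clustering idea, the sketch below computes a few simple dynamic features (late-frame slope, area under the curve, time to peak) from synthetic time-activity curves and clusters them with a Gaussian mixture model; the three simulated tissue classes and all parameters are invented and are not the study's PET data or its actual feature definitions.

```python
# Synthetic time-activity curves -> simple dynamic features -> GMM clustering.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
t = np.linspace(1, 60, 24)                       # minutes, 24 dynamic frames

def tac(peak, rise, washout, n):
    """Simple gamma-variate-like uptake curves with additive noise."""
    base = peak * (t / rise) * np.exp(1 - t / rise) * np.exp(-washout * t)
    return base + 0.05 * peak * rng.standard_normal((n, t.size))

curves = np.vstack([tac(5.0, 60.0, 0.000, 50),   # "tumour-like": still accumulating
                    tac(4.0, 10.0, 0.020, 50),   # "inflammation-like": early washout
                    tac(1.5, 15.0, 0.005, 50)])  # "normal-like": low uptake

def features(c):
    late_slope = np.polyfit(t[-8:], c[:, -8:].T, 1)[0]                 # late-frame trend
    auc = np.sum(0.5 * (c[:, 1:] + c[:, :-1]) * np.diff(t), axis=1)    # area under curve
    t_peak = t[np.argmax(c, axis=1)]                                   # time to peak
    return np.column_stack([late_slope, auc, t_peak])

X = features(curves)
labels = GaussianMixture(n_components=3, random_state=0).fit_predict(X)
for k in range(3):
    print(f"true class {k}: cluster counts {np.bincount(labels[50 * k:50 * (k + 1)], minlength=3)}")
```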

Relevance:

30.00%

Publisher:

Abstract:

This Master's thesis is dedicated to the investigation and testing of conventional and nonconventional Kramers-Kronig relations on simulated and experimentally measured spectra, for both linear and nonlinear optical spectral data. Particular attention is paid to a new method of obtaining the complex refractive index from a transmittance spectrum without direct information on the sample thickness. The latter method is coupled with terahertz time-domain spectroscopy, and Kramers-Kronig analysis is applied to test the validity of the resulting complex refractive index. In this research the precision of the data inversion is evaluated by the root-mean-square error. The methods are tested over different spectral ranges, and their future implementation is considered.
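
A small numerical example of a conventional Kramers-Kronig check is sketched below: the real part of a Lorentzian susceptibility is reconstructed from its imaginary part with a crude principal-value quadrature and compared with the analytic result through the root-mean-square error. The spectral parameters are arbitrary and the data are synthetic, not the terahertz measurements of the thesis.

```python
# Kramers-Kronig reconstruction of Re(chi) from Im(chi) for a Lorentzian oscillator.
import numpy as np

w = np.linspace(0.01, 10.0, 4000)             # angular frequency grid
w0, gamma = 3.0, 0.4                          # Lorentzian resonance parameters
den = (w0**2 - w**2) ** 2 + (gamma * w) ** 2
chi_im = gamma * w / den                      # "measured" absorptive part
chi_re_exact = (w0**2 - w**2) / den

def kk_real(w, chi_im):
    """Re chi(w_i) = (2/pi) PV int_0^inf w' Im chi(w') / (w'^2 - w_i^2) dw'."""
    dw = w[1] - w[0]
    out = np.empty_like(w)
    for i, wi in enumerate(w):
        denom = w**2 - wi**2
        denom[i] = np.inf                     # drop the singular grid point (crude PV)
        out[i] = 2.0 / np.pi * np.sum(w * chi_im / denom) * dw
    return out

chi_re_kk = kk_real(w, chi_im)
inner = slice(200, -200)                      # ignore truncation-affected edges
rmse = np.sqrt(np.mean((chi_re_kk[inner] - chi_re_exact[inner]) ** 2))
print(f"RMS error of the KK reconstruction: {rmse:.2e}")
```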

Relevance:

30.00%

Publisher:

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the currently popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field; digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of in a conventional programming language makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then an as-small-as-possible set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect the scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
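
As a toy illustration of treating scheduling as a state-space (model-checking-style) search, the sketch below explores the reachable buffer states of a three-actor dataflow graph with fixed token rates and bounded FIFOs, breadth-first, until it finds a firing sequence that returns the buffers to their initial state, i.e. a static schedule for one iteration. Real RVC-CAL actors have data-dependent firing rules, which is precisely what forces the quasi-static treatment in the thesis; the actors, rates and capacity used here are made up.

```python
# Breadth-first search over buffer states of a tiny fixed-rate dataflow graph.
from collections import deque

# edges: (producer, consumer, tokens produced per firing, tokens consumed per firing)
EDGES = [("src", "filt", 2, 3), ("filt", "sink", 1, 2)]
ACTORS = ["src", "filt", "sink"]
CAPACITY = 8                                   # bounded FIFO queues

def fire(state, actor):
    """Return the buffer state after firing `actor`, or None if it cannot fire."""
    state = list(state)
    for i, (p, c, prod, cons) in enumerate(EDGES):
        if c == actor and state[i] < cons:     # not enough input tokens
            return None
        if p == actor and state[i] + prod > CAPACITY:   # output queue would overflow
            return None
    for i, (p, c, prod, cons) in enumerate(EDGES):
        if c == actor:
            state[i] -= cons
        if p == actor:
            state[i] += prod
    return tuple(state)

initial = (0, 0)
queue, seen = deque([(initial, [])]), {initial}
while queue:
    state, schedule = queue.popleft()
    if schedule and state == initial:          # back to start: one-iteration schedule
        print("static schedule for one iteration:", schedule)
        break
    for actor in ACTORS:
        nxt = fire(state, actor)
        if nxt is not None and (nxt not in seen or nxt == initial):
            seen.add(nxt)
            queue.append((nxt, schedule + [actor]))
```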

Relevance:

30.00%

Publisher:

Abstract:

Due to increasing total construction and transportation costs and the difficulties associated with handling massive structural components or assemblies, there has recently been growing financial pressure to reduce structural weight. Furthermore, advances in material technology, coupled with continuing advances in design tools and techniques, have encouraged engineers to vary and combine materials, offering new opportunities to reduce the weight of mechanical structures. These new lower-mass systems, however, are more susceptible to inherent imbalances, a weakness that can result in higher shock and harmonic resonances and leads to poor structural dynamic performance. The objective of this thesis is the modeling of layered sheet steel elements to accurately predict their dynamic performance. During the development of the layered sheet steel model, the numerical modeling approach, Finite Element Analysis, and Experimental Modal Analysis are applied in building a modal model of the layered sheet steel elements. Furthermore, to gain a better understanding of the dynamic behavior of layered sheet steel, several binding methods have been studied to understand and demonstrate how a binding method affects the dynamic behavior of layered sheet steel elements compared to a single homogeneous steel plate. Based on the developed layered sheet steel model, the dynamic behavior of a lightweight wheel structure, to be used as the stator structure of an outer-rotor Direct-Drive Permanent Magnet Synchronous Generator designed for high-power wind turbines, is studied.
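
A small numerical sketch of the modal-model idea is given below: the natural frequencies of a lumped-parameter stack of sheets follow from the generalized eigenvalue problem K·phi = omega^2·M·phi, with an interlayer spring standing in for the binding method. All stiffness and mass values are hypothetical, and the model is far coarser than the finite element and experimental modal models of the thesis.

```python
# Natural frequencies of a lumped layered-sheet model from K*phi = w^2 * M*phi.
import numpy as np
from scipy.linalg import eigh

n_layers, m = 4, 1.2              # lumped mass per layer (kg)
k_layer = 5.0e5                   # N/m, stiffness anchoring each sheet to the fixture
k_bind = 2.0e4                    # N/m, stiffness of the binding between sheets

M = m * np.eye(n_layers)
K = np.zeros((n_layers, n_layers))
for i in range(n_layers):
    K[i, i] += k_layer            # each sheet anchored to the fixture
    if i + 1 < n_layers:          # binding couples neighbouring sheets
        K[i, i] += k_bind
        K[i + 1, i + 1] += k_bind
        K[i, i + 1] -= k_bind
        K[i + 1, i] -= k_bind

w2, modes = eigh(K, M)            # generalized symmetric eigenvalue problem
freqs = np.sqrt(w2) / (2 * np.pi)
print("natural frequencies (Hz):", np.round(freqs, 1))
# A stiffer binding (larger k_bind) pushes the relative-motion modes up in
# frequency, which is the kind of effect the modal tests are used to quantify.
```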

Relevance:

30.00%

Publisher:

Abstract:

Emerging markets have come to play a significant role in the world, not only due to their strong economic growth but also because they have been able to foster an increasing number of innovative, high-technology-oriented firms. However, as the markets continue to change and develop, many companies in emerging markets still struggle with their competitiveness and innovativeness. To improve competitive capabilities, many scholars have come to favor interfirm cooperation, which is perceived to help companies access new knowledge and complementary resources and, by so doing, enables them to catch up quickly with Western competitors. Regardless of numerous attempts by strategic management scholars, the research field remains very fragmented and lacks understanding of how and when interfirm cooperation contributes to firm performance and competitiveness in emerging markets. Furthermore, the reasons why interfirm R&D sometimes succeeds but fails at other times frequently remain unidentified. This thesis combines the extant literature on competitive and cooperative strategy, dynamic capabilities, and R&D cooperation while studying interfirm R&D relationships in and between Russian manufacturing companies. Employing primary survey data, the thesis presents numerous novel findings regarding the effect of R&D cooperation and of different types of R&D partner on firms' exploration and exploitation performance. Utilizing a competitive strategy framework enables these effects to be explained in more detail, and especially why interfirm cooperation, regardless of its potential, has had only a modest effect on the general competitiveness of emerging market firms. This thesis contributes especially to the strategic management literature and presents a more holistic perspective on the usefulness of cooperative strategy in emerging markets. It provides a framework through which it is possible to assess the potential impacts of different R&D cooperation partners and to clarify the causal relationships between cooperation, performance, and long-term competitiveness.

Relevance:

30.00%

Publisher:

Abstract:

In today's global industrial service business, markets are dynamic and finding new ways of creating value for customers has become more and more challenging. Customer orientation is needed because of the demanding after-sales business, which is both quickly changing and stochastic in nature: in after-sales business, customers require fast and reliable service for their spare part needs. The objective of this thesis is to clarify this challenging after-sales business environment and to find ways to increase customer satisfaction via a balanced measurement system that helps to identify possible targets for reducing order cycle times in the Spare Part Supply business line of Outotec (Filters), a large metal and mineral company. In the case study, internal documents, data and numerical calculations, together with qualitative interviews with people in key roles of the Spare Part Supply organization, are used to analyze the performance of the different processes of the spare parts delivery function. The chosen performance measurement tool is the Balanced Scorecard, slightly modified to better suit a lead time study from the customer's perspective. The findings show that many processes in spare parts supply face different kinds of challenges in achieving the desired lead time levels, and that the problems of these processes tend to accumulate. The findings also show that putting effort into supply-side challenges and into the visibility of information flows should give the best results.

Relevance:

30.00%

Publisher:

Abstract:

The objective of the present study was to establish a method for quantitative analysis of von Willebrand factor (vWF) multimeric composition using a mathematical framework based on curve fitting. Plasma vWF multimers from 15 healthy subjects and 13 patients with advanced pulmonary vascular disease were analyzed by Western immunoblotting followed by luminography. Quantitative analysis of luminographs was carried out by calculating the relative densities of low, intermediate and high molecular weight fractions using laser densitometry. For each densitometric peak (representing a given fraction of vWF multimers) a mean area value was obtained using data from all group subjects (patients and normal individuals) and plotted against the distance between the peak and IgM (950 kDa). Curves were constructed for each group using nonlinear fitting. Results indicated that highly accurate curves could be obtained for healthy controls and patients, with respective coefficients of determination (r²) of 0.9898 and 0.9778. Differences were observed between patients and normal subjects regarding curve shape, coefficients and the region of highest protein concentration. We conclude that the method provides accurate quantitative information on the composition of vWF multimers and may be useful for comparisons between groups and possibly treatments.
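
The abstract does not state the fitted functional form, so the sketch below uses a hypothetical single-exponential model on made-up densitometric data purely to illustrate the curve-fitting step and the computation of the coefficient of determination r².

```python
# Nonlinear least-squares fit of peak area vs. migration distance, with r^2.
import numpy as np
from scipy.optimize import curve_fit

# distance of each multimer peak from the IgM (950 kDa) marker, arbitrary units,
# and the corresponding mean densitometric peak areas (synthetic example data)
distance = np.array([2., 6., 10., 14., 18., 22., 26., 30., 34., 38.])
area = np.array([9.1, 7.6, 6.0, 4.9, 3.8, 3.1, 2.4, 2.0, 1.5, 1.3])

def model(d, a, b, c):
    return a * np.exp(-b * d) + c            # hypothetical decay of peak area

params, _ = curve_fit(model, distance, area, p0=(10.0, 0.05, 0.5))
pred = model(distance, *params)
ss_res = np.sum((area - pred) ** 2)
ss_tot = np.sum((area - np.mean(area)) ** 2)
r2 = 1.0 - ss_res / ss_tot
print("fitted parameters:", np.round(params, 3))
print(f"coefficient of determination r^2 = {r2:.4f}")
```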

Relevance:

30.00%

Publisher:

Abstract:

This study aimed to examine the time course of endothelial function after a single handgrip exercise session combined with blood flow restriction in healthy young men. Nine participants (28±5.8 years) completed a single session of bilateral dynamic handgrip exercise (20 min with 60% of the maximum voluntary contraction). To induce blood flow restriction, a cuff was placed 2 cm below the antecubital fossa in the experimental arm. This cuff was inflated to 80 mmHg before initiation of exercise and maintained through the duration of the protocol. The experimental arm and control arm were randomly selected for all subjects. Brachial artery flow-mediated dilation (FMD) and blood flow velocity profiles were assessed using Doppler ultrasonography before initiation of the exercise, and at 15 and 60 min after its cessation. Blood flow velocity profiles were also assessed during exercise. There was a significant increase in FMD 15 min after exercise in the control arm compared with before exercise (64.09%±16.59%, P=0.001), but there was no change in the experimental arm (-12.48%±12.64%, P=0.252). FMD values at 15 min post-exercise were significantly higher for the control arm in comparison to the experimental arm (P=0.004). FMD returned to near baseline values at 60 min after exercise, with no significant difference between arms (P=0.424). A single handgrip exercise bout provoked an acute increase in FMD 15 min after exercise, returning to near baseline values at 60 min. This response was blunted by the addition of an inflated pneumatic cuff to the exercising arm.
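
For reference, the flow-mediated dilation value analysed above is a simple relative change in brachial artery diameter; the sketch below shows that computation with invented diameters, not the study's measurements.

```python
# Flow-mediated dilation as the relative increase in arterial diameter.
def fmd_percent(baseline_diameter_mm, peak_diameter_mm):
    """FMD (%) = (peak - baseline) / baseline * 100."""
    return (peak_diameter_mm - baseline_diameter_mm) / baseline_diameter_mm * 100.0

baseline, peak = 4.10, 4.39       # mm, hypothetical brachial artery diameters
print(f"FMD = {fmd_percent(baseline, peak):.1f} %")
```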