960 results for Misspecification, Sign restrictions, Shock identification, Model validation.
Abstract:
The aim of this thesis was to study innovation and organisational innovation capability, the background factors of innovation capability, and the management of the front end of the innovation process (Fuzzy Front End, FFE) and of the decision-making that takes place there. A further aim was to design an operating model for the front end of the innovation process to clarify activities at the beginning of the process, and to give action proposals and recommendations. The theoretical part of the study was carried out as a literature review. The empirical part was conducted as a case analysis in the form of personnel interviews and action research in a company. Operating models have been identified for the front end of the innovation process that clarify and streamline its phases. The phases are opportunity identification, opportunity analysis, idea generation, idea selection, and concept and technology development. Alongside the innovation process runs a decision-making process, for which clear decision points and criteria for proceeding in the process are identified. At different stages, both internal stakeholders, such as personnel, and external stakeholders, such as customers, suppliers and network partners, participate in the innovation and decision-making process. In addition, the operation of the innovation process is affected by management support and commitment, the participants' capacity for creativity, and other background factors of innovation capability. All these factors must be taken into account when designing a model for the front end of the innovation process. The study was carried out for the needs of a company in the telecommunications industry. The company has an employee suggestion scheme in place, but it is not felt to provide enough ideas for the company's product development needs. The innovation potential of the company's personnel is considerable, and the company wants to exploit it better by designing a standardised operating model, suited to the company's use, that guides front-end activities and involves personnel and other partners, such as customers.
As action proposals and recommendations, an operating model for managing the front end of the innovation process is presented. The proposed model defines the phases, methods, decision-making and responsibilities. The operating model is designed so that it can be combined with the model already in use in the company for the back end of the innovation process, i.e. for carrying out product development projects.
Abstract:
Life cycle costing (LCC) practices are spreading from the military and construction sectors to a wider range of industries. Suppliers as well as customers are demanding comprehensive cost knowledge that includes all relevant cost elements through the life cycle of products. The problem of total cost visibility is being acknowledged, and the performance of suppliers is evaluated not just by the low acquisition costs of their products, but by the total value provided through the lifetime of their offerings. The main purpose of this thesis is to provide the case company with a better understanding of product cost structure. Moreover, a comprehensive theoretical body serves as a guideline or methodology for the further LCC process. The research includes a constructive analysis of LCC-related concepts and features as well as an overview of life cycle support services in the manufacturing industry. The case study aims to review the existing LCC practices within the case company and provide suggestions for improvement. It includes the identification of the most relevant life cycle cost elements, the development of a cost breakdown structure, and a generic cost model for data collection. Moreover, certain cost-effective suggestions are provided as well. This research should support decision-making processes, the assessment of the economic viability of products, financial planning, sales and other processes within the case company.
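The core of any such generic cost model is discounting every cost element to a single present-value figure. The sketch below is a minimal illustration of that idea, not the thesis's actual cost breakdown structure; the element names and figures are invented for the example.

```python
# Hypothetical sketch of a life cycle cost (LCC) calculation: acquisition,
# recurring operation/maintenance, and disposal costs are discounted to a
# net present value. Element names and values are illustrative only.

def life_cycle_cost(acquisition, yearly_costs, disposal, years, rate):
    """Net present value of all cost elements over the product life."""
    lcc = acquisition                              # paid up front, year 0
    for t in range(1, years + 1):
        lcc += yearly_costs / (1 + rate) ** t      # operation + maintenance
    lcc += disposal / (1 + rate) ** years          # end-of-life cost
    return lcc

# Example: 100 k acquisition, 10 k/year O&M over 10 years, 5 k disposal,
# 6 % discount rate.
total = life_cycle_cost(100_000, 10_000, 5_000, 10, 0.06)
print(round(total))
```

The point of such a model in supplier evaluation is that a product with a low acquisition cost can still have the higher life cycle cost once recurring elements are discounted and summed.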
Abstract:
Spermatogenesis, i.e. sperm production in the seminiferous tubules of the testis, is a complex process that takes over one month to complete. The life-long ability to produce sperm ultimately lies in a small population of undifferentiated cells, called spermatogonial stem cells (SSCs). These cells give rise to differentiating spermatogonia, which are committed to maturing into spermatozoa. SSCs represent a heterogeneous population of cells, and many aspects of their basic biology are still unknown. Understanding the mechanisms behind the cell fate decisions of these cells is important for gaining more insight into the causes of infertility and testis cancer. In addition, an interesting new aspect is the use of testis-derived stem cells in regenerative medicine. Our data demonstrated that the adult mouse testis houses a population of Nanog-expressing spermatogonia. Based on mRNA and protein analysis, these cells are enriched in stage XII of the mouse seminiferous epithelial cycle. The cells derived from this stage have the highest capacity to give rise to ES cell-like cells, which express Oct4 and Nanog. These cells are under tight non-GDNF regulation, but their fate can be dictated by activating p21 signalling. Comparative studies suggested that these cells are regulated like ES cells. Taken together, these data imply that pluripotent cells are present in the adult mammalian testis. CIP2A (cancerous inhibitor of PP2A) has been associated with tumour aggressiveness and poor prognosis. In the testis it is expressed by the descendants of stem cells, i.e. the spermatogonial progenitor cells. Our data suggest that CIP2A acts upstream of PLZF and is needed for quantitatively normal spermatogenesis. The classification of CIP2A as a cancer/testis gene makes it an attractive target for cancer therapy.
A study on the CIP2A-deficient mouse model demonstrates that systemic inhibition of CIP2A does not severely interfere with growth and development or with tissue or organ function, except for the spermatogenic output. These data demonstrate that CIP2A is required for quantitatively normal spermatogenesis. Hedgehog (Hh) signalling is involved in the development and maintenance of many different tissues and organs. According to our data, Hh signalling is active at many different levels during rat spermatogenesis: in spermatogonia, spermatocytes and late elongating spermatids. The localization of Suppressor of Fused (SuFu), the negative regulator of the pathway, specifically in early elongating spermatids suggests that Hh signalling needs to be shut down in these cells. Introduction of a Hh signalling inhibitor resulted in an increase in germ cell apoptosis. Follicle-stimulating hormone (FSH) and inhibition of receptor tyrosine kinases resulted in down-regulation of Hh signalling. These data show that Hh signalling is under endocrine and paracrine control and that it promotes germ cell survival.
Abstract:
To obtain the desired accuracy of a robot, two techniques are available. The first option is to make the robot match the nominal mathematical model: the manufacturing and assembly tolerances of every part would be kept extremely tight so that all of the various parameters match the “design” or “nominal” values as closely as possible. This method can satisfy most accuracy requirements, but the cost increases dramatically as the accuracy requirement increases. Alternatively, a more cost-effective solution is to build a manipulator with relaxed manufacturing and assembly tolerances and to compensate for the actual errors of the robot by modifying the mathematical model in the controller. This is the essence of robot calibration. Simply put, robot calibration is the process of defining an appropriate error model and then identifying the various parameter errors that make the error model match the robot as closely as possible. This work focuses on the kinematic calibration of a 10-degree-of-freedom (DOF) redundant serial-parallel hybrid robot. The robot consists of a 4-DOF serial mechanism and a 6-DOF hexapod parallel manipulator. The redundant 4-DOF serial structure is used to enlarge the workspace, and the 6-DOF hexapod manipulator provides high load capacity and stiffness for the whole structure. The main objective of the study is to develop a suitable calibration method to improve the accuracy of the redundant serial-parallel hybrid robot. To this end, a Denavit–Hartenberg (DH) hybrid error model and a Product-of-Exponentials (POE) error model are developed for error modeling of the proposed robot. Furthermore, two global optimization methods, the differential evolution (DE) algorithm and the Markov chain Monte Carlo (MCMC) algorithm, are employed to identify the parameter errors of the derived error model.
A measurement method based on a 3-2-1 wire-based pose estimation system is proposed and implemented in a SolidWorks environment to simulate real experimental validation. Numerical simulations and SolidWorks prototype-model validations are carried out on the hybrid robot to verify the effectiveness, accuracy and robustness of the calibration algorithms.
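The identification step described above can be illustrated on a deliberately simple stand-in: a 2-link planar arm whose link lengths carry unknown manufacturing errors, identified by a minimal differential evolution loop that minimizes the residual between measured and modeled end-effector positions. This is only a sketch of the DE idea under toy assumptions; the thesis applies DE (and MCMC) to full DH/POE error models of the 10-DOF hybrid robot.

```python
import math, random

random.seed(0)

# Toy stand-in for the calibration problem: identify the actual link
# lengths of a 2-link planar arm (nominal 1.0 m each) from simulated
# end-effector measurements. Values are illustrative only.

def fk(lengths, q):
    """Forward kinematics of a planar 2-link arm."""
    l1, l2 = lengths
    x = l1 * math.cos(q[0]) + l2 * math.cos(q[0] + q[1])
    y = l1 * math.sin(q[0]) + l2 * math.sin(q[0] + q[1])
    return x, y

TRUE = (1.02, 0.97)                              # actual (unknown) lengths
POSES = [(0.3, 0.5), (1.1, -0.4), (-0.7, 0.9), (0.2, 1.3)]
MEAS = [fk(TRUE, q) for q in POSES]              # simulated measurements

def cost(lengths):
    """Sum of squared position residuals over all measured poses."""
    return sum((fk(lengths, q)[0] - mx) ** 2 + (fk(lengths, q)[1] - my) ** 2
               for q, (mx, my) in zip(POSES, MEAS))

def differential_evolution(cost, bounds, np_=20, f=0.7, cr=0.9, gens=200):
    """Minimal DE/rand/1 with binomial crossover."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [ai + f * (bi - ci) if random.random() < cr else xi
                     for xi, ai, bi, ci in zip(pop[i], a, b, c)]
            if cost(trial) < cost(pop[i]):        # greedy selection
                pop[i] = trial
    return min(pop, key=cost)

best = differential_evolution(cost, [(0.8, 1.2), (0.8, 1.2)])
print(best)  # should converge near the true lengths (1.02, 0.97)
```

The same pattern scales to the real problem: the decision variables become the DH/POE parameter errors, and the residual is evaluated against poses from the wire-based measurement system.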
Abstract:
Particle Image Velocimetry (PIV) is an optical measuring technique for obtaining velocity information about a flow of interest. With PIV it is possible to obtain two- or three-dimensional velocity vector fields over a measurement area instead of at a single point in the flow. The measured flow can be either liquid or gas. PIV is nowadays widely applied to flow field studies. Here, PIV is needed to obtain validation data for the Computational Fluid Dynamics programs that have been used to model blowdown experiments in the PPOOLEX test facility at Lappeenranta University of Technology. In this thesis, PIV and its theoretical background are presented. All the subsystems that can be considered part of a PIV system are also presented in detail. Emphasis is placed on the mathematics behind the image evaluation. The work also included the selection and successful testing of a PIV system, as well as the planning of its installation in the PPOOLEX facility. Already in the preliminary testing, PIV was found to be a good addition to the measuring equipment of the Nuclear Safety Research Unit of LUT. The installation in the PPOOLEX facility was successful even though it involved many restrictions. All parts of the PIV system worked and were found to be appropriate for the planned use. The results and observations presented in this thesis are a good background for further PIV use.
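The mathematics behind the image evaluation reduces to cross-correlating pairs of interrogation windows: the location of the correlation peak gives the most probable particle displacement between exposures. The sketch below shows the idea with a naive direct correlation on a synthetic single-particle image; production PIV codes use FFT-based correlation and sub-pixel peak fitting, which are omitted here.

```python
# Minimal sketch of the PIV evaluation step: direct cross-correlation of
# two interrogation windows; the correlation peak gives the displacement.
# Window contents are synthetic and illustrative.

def cross_correlate(a, b, max_shift):
    """Return the (dx, dy) shift of window b relative to a that
    maximises the correlation sum."""
    n = len(a)
    best, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            s = 0.0
            for y in range(n):
                for x in range(n):
                    ys, xs = y + dy, x + dx
                    if 0 <= ys < n and 0 <= xs < n:
                        s += a[y][x] * b[ys][xs]
            if best is None or s > best:
                best, best_shift = s, (dx, dy)
    return best_shift

# A single bright "particle" at (x=2, y=2) moves to (x=4, y=3) between
# the two exposures.
frame1 = [[0] * 8 for _ in range(8)]; frame1[2][2] = 1
frame2 = [[0] * 8 for _ in range(8)]; frame2[3][4] = 1
print(cross_correlate(frame1, frame2, 3))  # → (2, 1)
```

Dividing the full image into many such windows and repeating this search yields the velocity vector field once each displacement is divided by the inter-frame time.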
Abstract:
Technological capabilities are built to support different types of collaboration, which justifies a broad look at how activity environments are influenced by technology. Technology as an enabler can be addressed from perspectives other than the merely technological. A dynamic, evolving environment is at once interesting and challenging. As a multinational collaboration environment, maritime surveillance is a good example of a time-critical and evolving environment where technological solutions enable new ways of collaboration. The inspiration to use the maritime environment as the baseline for understanding the challenges in creating and maintaining an adequate level of situational awareness derives from the complexity of the collaboration and information sharing environment, whose elements need to be taken into account when analyzing criticalities related to decision making. Situational awareness is an important element supporting decision making, and the challenges related to it can also be observed in the maritime environment. This dissertation describes the structures and factors involved in this complex setting, found in the case studies, that should be taken into account when trying to understand how these elements affect the activities. The dissertation focuses on the gray area between a life-threatening situation and normal everyday activities. The multinational experimentation series case studies, MNE5 and MNE6, made it possible to observe situations that were not life-threatening for the participants themselves but were not basic everyday activities either. These case studies provided a unique opportunity to see situations where gaining situational awareness and decision making are challenged by time-critical crisis situations. Unfortunately, organizations do not normally draw on their everyday work to prepare for possible emerging crisis situations.
This dissertation focuses on creating a conceptual model and a concept that support organizations, also outside the maritime community, in improving their ability to gain situational awareness, from individual training all the way to changes in organizational structures, aiming at better support for decision making from the individual level to the highest decision-making level. Quick changes and unpredictability are a reality in organizations, and organizations cannot control all the factors that affect their functioning. Since we cannot be prepared for everything or predict every crisis, individual activities inside teams and as part of organizations need to be supported with guidance, tools and training that support acting in challenging situations. In fact, the ideology of the conceptual model lies precisely in the aim of not controlling everything beforehand, but of supporting organizations with concrete procedures that help individuals react in different, unpredictable situations, instead of focusing on traditional risk prevention and management. Technological capabilities are not automatically solutions to functional challenges; this is why it is justified to broaden the observation of the problem area beyond the technological perspective. This dissertation demonstrates that it is possible to support collaboration in a multinational environment with technological solutions, but this requires recognizing technological limitations and accepting the possible restrictions related to technological innovations. Technology should not be considered a value per se; the value of technology should be defined according to how it supports the activities, including evaluation of the strategic and operational environment, identification of organizational elements, and taking into account the social factors and their challenges.
Then we are one step closer to providing technological solutions that support the actual activities by taking into account the variables of the activity environment in question. The multidisciplinary approach to the information sharing and collaboration framework derives especially from the complexity of decision making and of building situational awareness, since these are not built in a vacuum but in an organizational framework, by people working with technological capabilities enabled by organizational structures. The case studies introduced were related to the maritime environment, but according to the research results it is valid to argue that, based on the lessons learned, it is possible to create and further develop the conceptual model, and to create a general concept to support a wider range of organizations in their attempt to gain a better level of situational awareness (SA) and to support decision making. To demonstrate the versatile usage of the developed concept, I have introduced the case study findings to the health care environment and reflected the elements identified in a trauma center against the created concept. The main contribution of this work is the presented situational awareness concept, created with respect to the NATO concept structure. This has been done to tackle the challenge of collaboration by focusing on situational awareness in the information sharing context, providing a theoretical ground and an understanding of how these issues should be approached and how these elements can be generalized and used to support activities in other environments as well. This dissertation research has been an evolving, multi-year process reflecting and affecting the presented case studies, and the learning experience from the case studies has also affected the goals and research questions of the dissertation.
This venture has been written from a retrospective viewpoint, following the ideology of process modeling and design rationale, to show the reader how the entire journey took place and what the critical milestones were that affected the end result, the conceptual model. Support in a challenging information sharing framework can be provided with the right combination of tools, procedures and individual effort. Starting from war technology, this dissertation presents a new approach and a possibility for organizations to create a better level of awareness and to improve their decision-making capabilities with that combination of tools, procedures and individual effort.
Abstract:
Permanent magnet generators (PMGs) represent cutting-edge technology in modern wind turbines. Their efficiency remains high (over 90%) at partial loads. To improve machine efficiency even further, every aspect of machine losses has to be analyzed. Additional losses are often given as a certain percentage without any detailed information about the actual calculation process; meanwhile, there are many design-dependent losses that affect the total amount of additional losses and have to be taken into consideration. Additional losses are most often eddy current losses in different parts of the machine, and these are usually difficult to calculate in the design process. In this doctoral thesis, some additional losses are identified and modeled, and suggestions on how to minimize them are given. Iron losses can differ significantly between the measured no-load values and the loss values under load. In addition, with embedded-magnet rotors, the quadrature-axis armature reaction adds losses to the stator iron by altering the harmonic content of the flux. It was therefore confirmed that in salient-pole machines, to minimize the losses and the loss difference between no-load and load operation, the flux density has to be kept below 1.5 T in the stator yoke, which is the traditional guideline for machine designers. Eddy current losses may also occur in the end-winding area and in the support structure of the machine, that is, in the finger plate and the clamping ring. With construction steel, these losses account for 0.08% of the input power of the machine; they can be reduced almost to zero by using nonmagnetic stainless steel. In addition, the machine housing may be subject to eddy current losses if the flux density exceeds 1.5 T in the stator yoke. Winding losses can rise rapidly when high frequencies and 10–15 mm high conductors are used. In general, minimizing the winding losses is simple.
For example, it can be done by dividing the conductor into transposed subconductors, although this comes at the expense of an increase in the DC resistance. In this doctoral thesis, a new method is presented to minimize the winding losses by applying a litz wire with noninsulated strands. The construction is the same as in a normal litz wire, but the insulation between the subconductors is left out. The idea is that the connection between strands is kept weak to prevent harmful eddy currents from flowing. Moreover, the analytical solution for calculating the AC resistance factor of the litz wire is supplemented by including the end-winding resistance. A simple measurement device was developed to measure the AC resistance of the windings. In the case of a litz wire with originally noninsulated strands, vacuum pressure impregnation (VPI) is used to insulate the subconductors. In one of the two cases studied, the VPI affected the AC resistance factor, but in the other case it did not have any effect; more research is therefore needed to determine the effect of VPI on litz wire with noninsulated strands. An empirical model is developed to calculate the AC resistance factor of a single-layer form-wound winding. The model includes the end-winding length and the number of strands and turns. The end winding carries both the circulating current (eddy currents that travel through the whole winding between parallel strands) and the main current, and the end-winding length also affects the total AC resistance factor.
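To see why 10–15 mm high conductors suffer at high frequency, the classical Dowell formula for the AC resistance factor k_AC = R_AC/R_DC of an m-layer winding is a useful reference point. The thesis develops its own empirical model for form-wound litz windings; the sketch below only illustrates the underlying skin- and proximity-effect behavior, with illustrative values.

```python
import math

# Hedged sketch: Dowell's formula for the AC resistance factor of an
# m-layer winding with conductor height h. Illustrative values only;
# the thesis's empirical model for litz windings differs from this.

MU0 = 4e-7 * math.pi      # vacuum permeability, H/m
RHO_CU = 1.72e-8          # copper resistivity at 20 C, ohm*m

def skin_depth(freq, rho=RHO_CU):
    """Skin depth delta = sqrt(rho / (pi * f * mu0))."""
    return math.sqrt(rho / (math.pi * freq * MU0))

def dowell_factor(h, freq, layers, rho=RHO_CU):
    """AC resistance factor k_AC = R_AC / R_DC per Dowell."""
    d = h / skin_depth(freq, rho)                  # penetration ratio
    skin = (math.sinh(2 * d) + math.sin(2 * d)) / \
           (math.cosh(2 * d) - math.cos(2 * d))    # skin-effect term
    prox = (math.sinh(d) - math.sin(d)) / \
           (math.cosh(d) + math.cos(d))            # proximity-effect term
    return d * (skin + (2 * (layers ** 2 - 1) / 3) * prox)

# A 10 mm high solid conductor at 100 Hz, single layer: the conductor is
# already thicker than the ~6.6 mm skin depth, so k_AC is well above 1.
print(round(dowell_factor(0.010, 100, 1), 2))
```

Subdividing the conductor into transposed or litz strands shrinks h per strand, driving the penetration ratio, and hence k_AC, back toward 1, which is exactly the trade-off against the increased DC resistance discussed above.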
Abstract:
A three-degree-of-freedom model of the dynamic mass at the middle of a test sample, resembling a Stockbridge neutraliser, is introduced. This model is used to identify what is here called the equivalent complex cross-section flexural stiffness (ECFS) of the beam element that is part of the whole test sample. Once identified, the ECFS gives the effective cross-section flexural stiffness of the beam as well as its effective damping, measured as the loss factor of an equivalent viscoelastic beam. The beam element of the test sample may be of any complexity, such as a segment of stranded cable of the ACSR type. These data are important parameters for the design of overhead power transmission lines and other cable structures. A cost function is defined and used in the identification of the ECFS. An experiment designed to measure the dynamic masses of two test samples is described. Experimental and identified results are presented and discussed.
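The identification idea, fitting a complex stiffness k(1 + iη) by minimising a cost function between measured and modelled frequency response, can be sketched on a one-degree-of-freedom analogue. This is not the thesis's 3-DOF dynamic-mass model; the mass, frequencies and grid below are synthetic and purely illustrative.

```python
# Hedged 1-DOF analogue of the ECFS identification: the complex
# stiffness k*(1 + i*eta) is recovered by minimising a cost function
# between "measured" and modelled receptance. All values are synthetic.

MASS = 2.0                                 # assumed lumped mass, kg

def receptance(k, eta, w, m=MASS):
    """Frequency response H(w) = 1 / (k(1 + i*eta) - m*w^2)."""
    return 1.0 / (k * (1 + 1j * eta) - m * w ** 2)

TRUE_K, TRUE_ETA = 5.0e4, 0.08             # "unknown" stiffness, loss factor
FREQS = [50.0, 100.0, 150.0, 200.0, 250.0] # rad/s
MEASURED = [receptance(TRUE_K, TRUE_ETA, w) for w in FREQS]

def cost(k, eta):
    """Sum of squared residuals between model and measurement."""
    return sum(abs(receptance(k, eta, w) - h) ** 2
               for w, h in zip(FREQS, MEASURED))

# Coarse grid search over stiffness and loss factor; a gradient-based
# minimiser would be used on real (noisy) data.
best = min(((k, e) for k in range(40000, 60001, 500)
                   for e in [i / 100 for i in range(1, 21)]),
           key=lambda p: cost(*p))
print(best)  # the cost minimum lies at the true (k, eta) pair
```

With noise-free synthetic data the cost minimum coincides exactly with the true pair; on measured dynamic masses the same cost function yields the best-fitting equivalent stiffness and loss factor.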
Abstract:
A web service is a software system that provides a machine-processable interface to other machines over the network using different Internet protocols. Web services are increasingly used in industry to automate different tasks and offer services to a wider audience. The REST architectural style aims at producing scalable and extensible web services using technologies that play well with the existing tools and infrastructure of the web. It provides a uniform set of operations that can be used to invoke a CRUD interface (create, retrieve, update and delete) of a web service. The stateless behavior of the service interface requires that every request to a resource be independent of the previous ones, which facilitates scalability. Automated systems, e.g., hotel reservation systems, provide advanced scenarios for stateful services that require a certain sequence of requests to be followed in order to fulfill the service goals. Designing and developing such services for advanced scenarios under REST constraints requires rigorous approaches capable of creating web services that can be trusted for their behavior. Systems that can be trusted for their behavior can be termed dependable systems. This thesis presents an integrated design, analysis and validation approach that helps the service developer create dependable and stateful REST web services. The main contribution of this thesis is a novel model-driven methodology to design behavioral REST web service interfaces and their compositions. The behavioral interfaces provide information on what methods can be invoked on a service and on the pre- and post-conditions of these methods. The methodology uses the Unified Modeling Language (UML) as the modeling language, which has a wide user base and mature tools that are continuously evolving.
We have used the UML class diagram and the UML state machine diagram with additional design constraints to provide resource and behavioral models, respectively, for designing REST web service interfaces. These service design models serve as a specification document, and the information presented in them has manifold applications. The service design models also contain information about the time and domain requirements of the service, which can help in requirement traceability, an important part of our approach. Requirement traceability helps in capturing faults in the design models and in other elements of the software development environment by tracing the unfulfilled requirements of the service back and forth. Information about service actors is also included in the design models; it is required for authenticating service requests by authorized actors, since not all types of users have access to all the resources. In addition, by following our design approach, the service developer can ensure that the designed web service interfaces are REST compliant. The second contribution of this thesis is the consistency analysis of the behavioral REST interfaces. To overcome the inconsistency problem and design errors in our service models, we have used semantic technologies. The REST interfaces are represented in the Web Ontology Language (OWL 2), which can be part of the semantic web. These interfaces are checked with OWL 2 reasoners for unsatisfiable concepts, which would result in implementations that fail. This work is fully automated thanks to the implemented translation tool and the existing OWL 2 reasoners. The third contribution of this thesis is the verification and validation of REST web services. We have used model checking techniques with the UPPAAL model checker for this purpose.
Timed automata are generated from the UML-based service design models with our transformation tool and are verified for basic characteristics such as deadlock freedom, liveness, reachability and safety. The implementation of a web service is tested using a black-box testing approach. Test cases are generated from the UPPAAL timed automata, and using the online testing tool UPPAAL TRON, the service implementation is validated at runtime against its specifications. Requirement traceability is also addressed in our validation approach, with which we can see which service goals are met and trace the unfulfilled service goals back to faults in the design models. A final contribution of the thesis is an implementation of behavioral REST interfaces and service monitors from the service design models. The partial code generation tool creates code skeletons of REST web services with method pre- and post-conditions. The pre-conditions of methods constrain the user to invoke the stateful REST service under the right conditions, and the post-conditions constrain the service developer to implement the right functionality. The details of the methods can be inserted manually by the developer as required. We do not target complete automation because we focus only on the interface aspects of the web service. The applicability of the approach is demonstrated with a pedagogical example of a hotel room booking service and a relatively complex worked example of a holiday booking service taken from an industrial context. The former presents a simple explanation of the approach, and the latter shows how stateful and timed web services offering complex scenarios and involving other web services can be constructed using our approach.
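The flavour of the generated code skeletons, stateful resource methods guarded by pre- and post-conditions, can be sketched as below. The resource, state names and HTTP status codes are invented for illustration around the hotel booking example; the actual generated skeletons and their contract notation are specific to the thesis's tooling.

```python
# Hedged sketch of a stateful REST resource whose methods carry pre- and
# post-condition checks, in the spirit of the generated code skeletons.
# Resource, states and status codes are illustrative assumptions.

class BookingResource:
    """Lifecycle of a hypothetical booking: AVAILABLE -> RESERVED -> PAID."""

    def __init__(self):
        self.state = "AVAILABLE"

    def put_reservation(self):          # e.g. PUT /booking/{id}/reservation
        assert self.state == "AVAILABLE", "precondition: room must be free"
        self.state = "RESERVED"
        assert self.state == "RESERVED", "postcondition: reservation recorded"
        return 201                      # HTTP Created

    def put_payment(self):              # e.g. PUT /booking/{id}/payment
        assert self.state == "RESERVED", "precondition: reserve before paying"
        self.state = "PAID"
        assert self.state == "PAID", "postcondition: payment recorded"
        return 200                      # HTTP OK

booking = BookingResource()
print(booking.put_reservation(), booking.put_payment())  # → 201 200
```

The precondition makes an out-of-order request (paying before reserving) fail fast at the interface, which is precisely the behavioral contract that the UML state machine specifies and the model checker verifies.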
Abstract:
This thesis was carried out as a case study of the company YIT in order to clarify the severest risks for the company and to build a method for project portfolio evaluation. The target organization creates new living environments by constructing residential buildings, business premises, infrastructure and entire areas, worth EUR 1.9 billion in 2013. The company has noted that project portfolio management needs more information about the structure of the project portfolio and the possible influence of a market shock situation. The major risks for the company were evaluated by interviewing the executive staff, and at the same time the most appropriate risk metrics were considered. Sales risk was estimated to have the biggest impact on the company's business at the moment. Therefore, a project portfolio evaluation model was created, and three different scenarios for the company's future were constructed in order to identify the scale of a possible market shock situation. The created model was tested with public, descriptive figures of YIT in a one-year-long market shock, and the impact on different metrics was evaluated. The study was conducted using a constructive research methodology. The results indicate that the company has a notable sales risk in certain sections of its business portfolio.
Abstract:
Thalidomide has been shown to selectively inhibit TNF-α production in vitro by lipopolysaccharide (LPS)-stimulated monocytes. TNF-α has been shown to play a pivotal role in the pathophysiology of endotoxic shock. Using a mouse model of LPS-induced shock, we investigated the effects of thalidomide on the production of TNF-α and other cytokines and on animal survival. After injection of 100-350 µg LPS into mice, cytokines including TNF-α, IL-6, IL-10, IL-1β, GM-CSF and IFN-γ were measured in the serum. Administration of 200 mg/kg thalidomide to mice before LPS challenge modified the profile of LPS-induced cytokine secretion. Serum TNF-α levels were reduced by 93%, in a dose-dependent manner, and TNF-α mRNA expression in the spleens of mice was reduced by 70%. Serum IL-6 levels were also inhibited by 50%. Thalidomide induced a two-fold increase in serum IL-10 levels. Thalidomide treatment did not interfere with the production of GM-CSF, IL-1β or IFN-γ. The LD50 of LPS in this model was increased by thalidomide pre-treatment from 150 µg to 300 µg in 72 h. Thus, at otherwise lethal doses of LPS, thalidomide treatment was found to protect animals from death.
Abstract:
The dissertation proposes two control strategies, covering trajectory planning and vibration suppression, for a kinematically redundant serial-parallel robot machine, with the aim of attaining satisfactory machining performance. For a given prescribed trajectory of the robot's end-effector in Cartesian space, a set of trajectories in the robot's joint space is generated based on the best stiffness performance of the robot along the prescribed trajectory. To construct the required system-wide analytical stiffness model for the serial-parallel robot machine, a variant of the virtual joint method (VJM) is proposed in the dissertation. The modified method is an evolution of Gosselin's lumped model that can account for the deformations of a flexible link in more directions. The effectiveness of this VJM variant is validated by comparing the computed stiffness results of a flexible link with those of a matrix structural analysis (MSA) method. The comparison shows that the numerical results from both methods on an individual flexible beam are almost identical, which, in some sense, provides mutual validation. The most prominent advantage of the presented VJM variant over the MSA method is that it can be applied to a flexible structure system with complicated kinematics formed by flexible serial links and joints. Moreover, by combining the VJM variant with the virtual work principle, a system-wide analytical stiffness model can easily be obtained for mechanisms with both serial and parallel kinematics. In the dissertation, a system-wide stiffness model of a kinematically redundant serial-parallel robot machine is constructed by integrating the VJM variant and the virtual work principle, and numerical results of its stiffness performance are reported.
For a kinematically redundant robot, to generate a set of feasible joint trajectories for a prescribed trajectory of its end-effector, the dissertation takes the system-wide stiffness performance as the constraint in the joint trajectory planning. For a prescribed location of the end-effector, the robot permits an infinite number of inverse solutions, which consequently yield infinitely many possible stiffness profiles. Therefore, a differential evolution (DE) algorithm, in which the positions of the redundant joints in the kinematics are taken as input variables, was employed to search for the best stiffness performance of the robot. Numerical results of the generated joint trajectories are given for a kinematically redundant serial-parallel robot machine, the IWR (Intersector Welding/Cutting Robot), for a particular prescribed end-effector trajectory. The numerical results show that the joint trajectories generated by the stiffness optimization are feasible for realization in the control system, since they are acceptably smooth. The results imply that the stiffness performance of the robot machine varies smoothly with respect to the kinematic configuration in the neighbourhood of its best stiffness performance. To suppress the vibration of the robot machine caused by the varying cutting force during the machining process, the dissertation proposes a feedforward control strategy constructed from the derived inverse dynamics model of the target system. The effectiveness of applying such feedforward control to vibration suppression has been validated on a parallel manipulator in a software environment. An experimental study of this feedforward control is also included in the dissertation. The difficulty of modelling the actual system, owing to unknown components in its dynamics, is noted.
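The DE search over redundant joint positions can be illustrated with a minimal sketch. The objective below is a toy stand-in for the system-wide stiffness index (the smallest eigenvalue of a configuration-dependent matrix); the population size, differential weight, and matrix entries are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def stiffness_index(r):
    """Toy stand-in for the robot's stiffness measure at a redundant joint
    position r (larger = stiffer); the real objective would come from the
    system-wide stiffness model described in the text."""
    K = np.array([[3.0 + np.cos(r), 0.5 * np.sin(r)],
                  [0.5 * np.sin(r), 2.0 + np.sin(r) ** 2]])
    return np.linalg.eigvalsh(K)[0]          # smallest eigenvalue of K

# Minimal DE/rand/1 over one redundant variable in [-pi, pi]; with a single
# decision variable the trial vector is simply the mutant vector.
pop = rng.uniform(-np.pi, np.pi, size=20)    # candidate joint positions
fit = np.array([stiffness_index(x) for x in pop])
F = 0.7                                      # differential weight
for _ in range(100):
    for i in range(len(pop)):
        a, b, c = pop[rng.choice(len(pop), size=3, replace=False)]
        trial = np.clip(a + F * (b - c), -np.pi, np.pi)
        f = stiffness_index(trial)
        if f > fit[i]:                       # greedy selection (maximizing)
            pop[i], fit[i] = trial, f
best_r = pop[np.argmax(fit)]                 # stiffest redundant configuration
```

In the dissertation's setting this search would be repeated at each point of the prescribed end-effector trajectory, producing the redundant-joint trajectory alongside the inverse-kinematics solution.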
As a solution, a back-propagation (BP) neural network is proposed for identifying the unknown components of the dynamics model of the target system. To train such a BP neural network, a modified Levenberg-Marquardt algorithm that can utilize an experimental input-output data set of the entire dynamic system is introduced in the dissertation. The BP neural network and the modified Levenberg-Marquardt algorithm are validated, respectively, by a sinusoidal output approximation, a second-order system parameter estimation, and a friction model estimation of a parallel manipulator, representing three different application aspects of the method.
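A small BP network of the kind described can be sketched on the sinusoidal-approximation validation case. Note the substitution: the dissertation trains with a modified Levenberg-Marquardt algorithm, whereas this sketch uses plain batch gradient descent for brevity; the network size, learning rate, and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Training data for the sinusoidal-output approximation check.
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

# One hidden layer of tanh units (sizes chosen arbitrarily for the sketch).
H = 16
W1 = rng.normal(0.0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(5000):
    # Forward pass.
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    # Back-propagation of the mean-squared error through both layers.
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)       # tanh derivative
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    # Plain gradient-descent update (the dissertation uses Levenberg-Marquardt).
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
```

Levenberg-Marquardt would replace the last update with a damped Gauss-Newton step built from the Jacobian of the residuals, which typically converges in far fewer iterations on small networks like this one.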
Abstract:
The objective of the present study was to characterize the heart rate (HR) patterns of healthy males using the autoregressive integrated moving average (ARIMA) model over a power range assumed to correspond to the anaerobic threshold (AT) during discontinuous dynamic exercise tests (DDET). Nine young (22.3 ± 1.57 years) and nine middle-aged (MA) volunteers (43.2 ± 3.53 years) performed three DDET on a cycle ergometer. Protocol I: DDET in steps with progressive power increases of 10 W; protocol II: DDET using the same power values as protocol I, but applied randomly; protocol III: continuous dynamic exercise protocol with ventilatory and metabolic measurements (10 W/min ramp power), for the measurement of ventilatory AT. HR was recorded and stored beat-to-beat during DDET, and analyzed using the ARIMA model (protocols I and II). The DDET experiments showed that the median physical exercise workloads at which AT occurred were similar for protocols I and II, i.e., AT occurred between 75 W (116 bpm) and 85 W (116 bpm) for the young group and between 60 W (96 bpm) and 75 W (107 bpm) for the MA group in protocols I and II, respectively; in two MA volunteers the ventilatory AT occurred at 90 W (108 bpm) and 95 W (111 bpm). This corresponded to the same power values of the positive trend in HR responses. The change in HR response using ARIMA models at submaximal dynamic exercise powers proved to be a promising approach for detecting AT in normal volunteers.
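The ARIMA modelling idea can be shown in miniature. The series below is synthetic, an integrated AR(1), i.e. ARIMA(1,1,0) with coefficient 0.6, standing in for a beat-to-beat HR recording; it is not the study's data, and the fit is done by hand (difference once, then least-squares AR(1)) rather than with a statistics package:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for a beat-to-beat HR series: an ARIMA(1,1,0) process,
# i.e. an integrated AR(1) with phi = 0.6 (illustrative, not study data).
n = 400
w = np.zeros(n)
for t in range(1, n):
    w[t] = 0.6 * w[t - 1] + rng.normal(0.0, 1.0)
hr = 100.0 + np.cumsum(w)            # the integration ("I") step of ARIMA

# Fit ARIMA(1,1,0) by hand: difference once, then estimate the AR(1)
# coefficient by least squares on the differenced series.
d = np.diff(hr)
phi_hat = float(d[1:] @ d[:-1] / (d[:-1] @ d[:-1]))

# One-step-ahead forecast of the next HR value.
hr_next = hr[-1] + phi_hat * d[-1]
```

Detecting the AT as in the study amounts to locating where the fitted model's trend component turns persistently positive as workload steps up, rather than forecasting a single next beat.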
Abstract:
Time series analysis can be categorized into three approaches: classical, Box-Jenkins, and state space. The classical approach provides the foundation for the analysis; the Box-Jenkins approach improves on the classical approach and deals with stationary time series; and the state space approach allows time-variant factors and covers a broader range of time series analysis. This thesis focuses on the parameter identifiability of different parameter estimation methods, such as LSQ, Yule-Walker, and MLE, which are used in the above approaches. The Kalman filter and smoothing techniques are also integrated with the state space approach and the MLE method to estimate parameters that are allowed to change over time. Parameter estimation is carried out by repeated estimation combined with MCMC, inspecting how well the different estimation methods can identify the optimal model parameters. Identification is examined in both a probabilistic and a general sense, and the results are compared in order to study and present identifiability in a more informative way.
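Of the estimation methods named above, Yule-Walker is the most compact to illustrate: it solves a Toeplitz system built from sample autocovariances. The AR(2) coefficients 0.5 and -0.3 below are arbitrary illustrative choices, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a stationary AR(2) process with known (illustrative) coefficients.
true_phi = np.array([0.5, -0.3])
n = 5000
x = np.zeros(n)
for t in range(2, n):
    x[t] = true_phi[0] * x[t - 1] + true_phi[1] * x[t - 2] + rng.normal()

def acovf(x, lag):
    """Biased sample autocovariance of x at the given lag."""
    xc = x - x.mean()
    return float(xc[lag:] @ xc[:len(xc) - lag] / len(xc))

# Yule-Walker equations for AR(2): solve R phi = r, where R is the
# Toeplitz matrix of autocovariances at lags 0..1 and r holds lags 1..2.
r = np.array([acovf(x, k) for k in range(3)])
R = np.array([[r[0], r[1]],
              [r[1], r[0]]])
phi_hat = np.linalg.solve(R, r[1:])
```

With enough data the estimates recover the generating coefficients closely; identifiability questions of the kind the thesis studies arise when different parameter values produce nearly the same autocovariance structure, so the system above becomes ill-conditioned.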