982 results for Reference Model
Abstract:
The present study was conducted on two different servo systems. In the first part, a servo-hydraulic system was identified and then controlled by a fuzzy gain-scheduling controller. In the second part, an electromagnetic linear motor was studied, using a neural network to suppress mechanical vibration and an adaptive backstepping controller for position tracking of a reference model. The research methods are described as follows. Electro-Hydraulic Servo Systems (EHSS) are commonly used in industry. These systems are nonlinear in nature and their dynamic equations contain several unknown parameters. System identification is a prerequisite for the analysis of a dynamic system. Differential Evolution (DE) is one of the most promising novel evolutionary algorithms for solving global optimization problems. In this study, the DE algorithm is proposed for handling nonlinear constraint functions with boundary limits on the variables, in order to find the best parameters of a servo-hydraulic system with a flexible load. DE provides fast convergence and accurate solutions regardless of the initial parameter values. The control of hydraulic servo systems has been the focus of intense research over the past decades. These systems are nonlinear in nature and generally difficult to control, since using the same gains under changing system parameters will cause overshoot or even loss of stability. The highly nonlinear behaviour of these devices makes them ideal subjects for applying different types of sophisticated controllers. The study is concerned with positioning control of a flexible-load servo-hydraulic system tracking a second-order reference model, using fuzzy gain scheduling. To compensate for the lack of damping in the hydraulic system, acceleration feedback was used. For comparison, a P controller with feed-forward acceleration and different gains in extension and retraction was used. The design procedure for the controller and the experimental results are discussed. The results suggest that the fuzzy gain-scheduling controller decreases the position reference tracking error. The second part of the research was carried out on a Permanent Magnet Linear Synchronous Motor (PMLSM). A recurrent neural network compensator for suppressing mechanical vibration in a PMLSM with a flexible load is studied. The linear motor is controlled by a conventional PI velocity controller, and the vibration of the flexible mechanism is suppressed by a hybrid recurrent neural network. The differential evolution strategy and the Kalman filter are used to avoid the local minimum problem and to estimate the states of the system, respectively. The proposed control method is first designed using a nonlinear simulation model built in Matlab Simulink and then implemented on a practical test rig. The proposed method works satisfactorily and successfully suppresses the vibration. In the last part of the research, a nonlinear load control method is developed and implemented for a PMLSM with a flexible load. The purpose of the controller is to drive the flexible load to the desired position reference as fast as possible and without excessive oscillation. The control method is based on an adaptive backstepping algorithm whose stability is ensured by the Lyapunov stability theorem. The system states needed by the controller are estimated using a Kalman filter.
The proposed controller is implemented and tested in a linear motor test drive and responses are presented.
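To illustrate the identification step described above, the following is a minimal sketch of a Differential Evolution loop fitting unknown model parameters within boundary limits. The objective function, bounds and DE settings (population size, F, CR) are hypothetical placeholders, not the thesis's actual values.

import numpy as np

def differential_evolution(cost, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=0):
    # bounds: numpy array of (low, high) pairs, one per model parameter
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    costs = np.array([cost(p) for p in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # mutation: combine three distinct randomly chosen members
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)  # respect boundary limits
            # crossover: mix mutant and current member component-wise
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            # greedy selection: keep the trial only if it improves the cost
            c_trial = cost(trial)
            if c_trial < costs[i]:
                pop[i], costs[i] = trial, c_trial
    return pop[np.argmin(costs)], costs.min()

# Hypothetical use: cost(p) would simulate the servo-hydraulic model with
# parameter vector p and return the norm of the error against measured data.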
Abstract:
New services are the most important thing that customers expect from a new technology. They are the main reason why customers are willing to pay for a new technology and to use it. Therefore, the new service architecture brought by the new network is important for the success of the whole project. This document focuses on the service architecture of third-generation mobile networks, whose reference model is described. The network services are introduced and described. Implementation-related issues are discussed. The WIN concept required for the US market is described, together with a description of its implementation. Finally, the handling of billing data for Pre-Paid subscribers in the WIN concept in a recovery situation is described.
Abstract:
A large amount of data for inconspicuous taxa is stored in natural history collections; however, this information is often neglected in studies of biodiversity patterns. Here, we evaluate the performance of direct interpolation of museum collection data, equivalent to the traditional approach used in bryophyte conservation planning, and of stacked species distribution models (S-SDMs) in producing reliable reconstructions of species richness patterns, given that the differences between these methods have been insufficiently evaluated for inconspicuous taxa. Our objective was to test whether species distribution models produce better inferences of species richness than simply selecting the areas with the highest species numbers. As model species, we selected Iberian species of the genus Grimmia (Bryophyta), and we used four well-collected areas to compare and validate the following models: 1) four Maxent richness models, each generated without the data from one of the four areas, and a reference model created using all of the data, and 2) four richness models obtained through direct spatial interpolation, each generated without the data from one area, and a reference model created with all of the data. The correlations between the partial and reference Maxent models were higher in all cases (0.45 to 0.99), whereas the correlations between the spatial interpolation models were negative and weak (-0.3 to -0.06). Our results demonstrate for the first time that S-SDMs offer a useful tool for identifying detailed richness patterns for inconspicuous taxa such as bryophytes and for improving incomplete distributions by assessing the potential richness of under-surveyed areas, filling major gaps in the available data. In addition, the proposed strategy would enhance the value of the vast number of specimens housed in biological collections.
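A minimal sketch of the S-SDM validation idea, stacking per-species suitability maps into a richness surface and correlating a partial model against the reference, is given below. The suitability arrays are simulated stand-ins, not the paper's Maxent outputs, and the noise added to mimic a partial model is purely illustrative.

import numpy as np
from scipy.stats import spearmanr

def stack_richness(suitability):
    # suitability: (n_species, n_cells) habitat-suitability maps (e.g. Maxent
    # output); summing over species yields an S-SDM potential richness surface
    return suitability.sum(axis=0)

# Hypothetical example: a "reference" model fitted on all records and a
# "partial" model fitted after withholding one well-collected area.
rng = np.random.default_rng(1)
reference_maps = rng.random((40, 1000))                       # 40 species, 1000 cells
partial_maps = np.clip(reference_maps
                       + rng.normal(0, 0.1, reference_maps.shape), 0, 1)

r, _ = spearmanr(stack_richness(partial_maps), stack_richness(reference_maps))
print(f"partial-vs-reference richness correlation: {r:.2f}")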
Abstract:
This thesis was done as part of the MASTO research project, whose purpose is to develop an adaptive reference model for software testing. The work was carried out as a statistical study using the survey method. In the study, 31 organizational units from around Finland that develop applications of medium criticality were interviewed. The hypotheses of the study concerned the dependence of quality on the software development method, customer participation, standard compliance, the customer relationship, business orientation, criticality, trust, and the level of testing. The hypotheses were tested for correlation with quality through correlation and regression analysis. In addition, the study surveyed what software development practices, methods and tools were used in the organizational units, problems and improvement suggestions related to software testing, the most significant ways for the customer to influence software quality, and the greatest benefits and drawbacks of outsourcing software development or testing. The study found that quality correlated positively and statistically significantly with the level of testing, standard compliance, customer participation in the planning phase, customer participation in steering, trust, and one sub-question related to the customer relationship. Based on the regression analysis, a regression equation was formed in which quality was found to depend positively on standard compliance, customer participation in the planning phase, and trust.
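For illustration, a minimal sketch of the correlation-and-regression step described above. The variable names, coefficient values and generated scores are hypothetical placeholders, not the survey's actual responses.

import numpy as np
import statsmodels.api as sm

# Hypothetical Likert-style scores from n organizational units
rng = np.random.default_rng(0)
n = 31
X = rng.integers(1, 6, size=(n, 3)).astype(float)  # standard compliance,
                                                   # participation in planning, trust
quality = X @ np.array([0.4, 0.3, 0.2]) + rng.normal(0, 0.5, n)

# Pearson correlation of each explanatory variable with quality
for name, col in zip(["standard", "participation", "trust"], X.T):
    print(name, np.corrcoef(col, quality)[0, 1])

# Regression equation: quality as a linear function of the three predictors
model = sm.OLS(quality, sm.add_constant(X)).fit()
print(model.params)  # intercept and coefficients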
Abstract:
Technology scaling has proceeded into dimensions in which the reliability of manufactured devices is becoming endangered. The reliability decrease is a consequence of physical limitations, the relative increase of variations, and decreasing noise margins, among others. A promising solution for bringing the reliability of circuits back to the desired level is the use of design methods which introduce tolerance against possible faults in an integrated circuit. This thesis studies and presents fault tolerance methods for the network-on-chip (NoC), a design paradigm targeted at very large systems-on-chip. In a NoC, resources such as processors and memories are connected to a communication network, comparable to the Internet. Fault tolerance in such a system can be achieved at many abstraction levels. The thesis studies the origin of faults in modern technologies and explains the classification into transient, intermittent and permanent faults. A survey of fault tolerance methods is presented to demonstrate the diversity of available methods. Networks-on-chip are approached through their main design choices: the selection of a topology, routing protocol, and flow control method. Fault tolerance methods for NoCs are studied at different layers of the OSI reference model. The data link layer provides a reliable communication link over a physical channel. Error control coding is an efficient fault tolerance method at this abstraction level, especially against transient faults. Error control coding methods suitable for on-chip communication are studied and their implementations presented. Error control coding loses its effectiveness in the presence of intermittent and permanent faults, so other solutions against them are presented. The introduction of spare wires and split transmissions is shown to provide good tolerance against intermittent and permanent errors, and their combination with error control coding is illustrated. At the network layer, positioned above the data link layer, fault tolerance can be achieved through the design of fault tolerant network topologies and routing algorithms. Both of these approaches are presented in the thesis, together with realizations in both categories. The thesis concludes that an optimal fault tolerance solution contains carefully co-designed elements from different abstraction levels.
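As a concrete example of error control coding of the kind discussed above, here is a minimal Hamming(7,4) single-error-correcting encoder/decoder sketch. A real NoC data-link implementation would be realized in hardware; this software model only demonstrates the coding principle, and the flipped-bit example is illustrative.

import numpy as np

# Generator and parity-check matrices of the systematic Hamming(7,4) code (GF(2))
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(data4):
    return (data4 @ G) % 2

def decode(word7):
    syndrome = (H @ word7) % 2
    if syndrome.any():                       # nonzero syndrome: locate and flip the bit
        col = np.where((H.T == syndrome).all(axis=1))[0][0]
        word7 = word7.copy()
        word7[col] ^= 1
    return word7[:4]                         # systematic code: data bits come first

flit = np.array([1, 0, 1, 1])
tx = encode(flit)
tx[5] ^= 1                                   # a transient fault flips one wire
assert (decode(tx) == flit).all()            # the single-bit error is corrected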
Abstract:
The aim of this bachelor’s thesis is to make a plan for the assessment of the adaptive reference model of software testing, which is based on the ISO/IEC 29119 testing standard. The assessment is not carried out in the scope of this thesis but later, in the related MASTO project. The ISO/IEC 29119 testing standard and the ISO/IEC 15504 process assessment standard are explained, and a literature review of problems in software testing practices is conducted. Based on this background information, a plan for the assessment is made. The plan assesses the reference model from two perspectives: first the capability of the testing process described by the reference model is assessed, and then the practical usefulness of the model.
Abstract:
The junction temperatures of an inverter's IGBT module cannot be measured directly, so a real-time thermal model is needed to estimate them. The goal of this thesis is to develop for this purpose a solution implemented in C that is sufficiently accurate and at the same time as computationally efficient as possible. The software implementation must also suit different module types and, when needed, take into account the mutual heating effect of the chips within the same module. Based on a literature review, a model based on a thermal impedance matrix is selected from the existing thermal models as the basis of the practical implementation. An s-domain simulation model of the thermal impedance matrix is built in Simulink and used as a reference, among other things for verifying the accuracy of the implementation. The thermal model needs information about the inverter losses, so different alternatives for the loss calculation are examined. The development of the thermal model from the s-domain model into finished C code is described in detail. First, the s-domain model is discretized into the z-domain. The z-domain transfer functions are in turn converted into first-order difference equations. The multirate thermal model developed in this work is obtained by distributing the first-order difference equations across different time levels for execution, according to the update rate required by the term each equation describes. At best, such an implementation can consume less than one fifth of the clock cycles of a straightforward single-rate implementation. The accuracy of the implementation is good. The execution times required by the implementation were tested on a Texas Instruments TMS320C6727 processor (300 MHz). Computing the example model was determined to consume only 0.4% of the processor's clock cycles when the inverter operates at a 5 kHz switching frequency. The accuracy of the implementation and its low computational demand make it possible to use the thermal model for thermal protection and to integrate it into the system already running on the processor.
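A minimal sketch of the discretization step described above: one element of the thermal impedance, a Foster-network term Z(s) = R/(1 + s*tau), is discretized (here with backward Euler) into a first-order difference equation and evaluated in a loop. The R, tau, time-step and power values are illustrative assumptions, not the thesis's module data.

import math

# One Foster-network term of a thermal impedance: Z(s) = R / (1 + s*tau).
# Backward-Euler discretization with step Ts gives the first-order difference
# equation  T[k] = a*T[k-1] + b*P[k],  with a = tau/(tau+Ts), b = R*Ts/(tau+Ts).
R, tau = 0.05, 0.2        # K/W and s (hypothetical junction-to-case values)
Ts = 1e-3                 # update period of this time level, s

a = tau / (tau + Ts)
b = R * Ts / (tau + Ts)

T = 0.0                   # temperature rise contributed by this term, K
for k in range(5000):     # 5 s of simulated operation
    P = 100.0             # instantaneous power loss from the loss model, W
    T = a * T + b * P     # one multiply-accumulate per update; equally cheap in C
print(f"temperature rise after 5 s: {T:.2f} K (steady state R*P = {R*100:.2f} K)")

In the multirate scheme, slow terms (large tau) would run this update at a lower rate than fast terms, which is where the clock-cycle savings come from.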
Abstract:
ABSTRACT Soy harvest coincides with seasons of short dry matter supply for ruminant feeding in most Brazilian soy-growing areas. Agricultural machinery-producing companies must have market perception, observing new opportunities and developing equipment to meet customers’ needs. This paper aims to design, build, and test a device to collect the soybean crop residue discharged from the combine cleaning mechanism, which consists mainly of vegetable straw (chaff), while the other plant parts (stems) continue to be deposited onto the ground. For the equipment design, we used the architectural design methodology proposed in the reference model for the agricultural machinery development process. The equipment was designed and built following the proposed methodology, then installed and put into operation on a John Deere 1165 combine. After initial testing and a few adjustments, the device showed satisfactory chaff-collecting performance. The equipment consists of a screw conveyor assembled transversely to the combine and a centrifugal fan assembled on its side. The collected chaff is dumped into a trailer towed by a tractor.
Abstract:
ABSTRACT This study presents the design process of agricultural machines and implements by means of a reference model, formulated with the purpose of explaining the development activities of new products, serving as a guideline for coaching human resources and for assisting in formalizing the process in small and medium-sized businesses (SMB), i.e. those with up to 500 employees. The methodology included process modeling, carried out through case studies in SMBs, and the study of reference models in the literature. The modeling formalism was based on the IDEF0 standard, which identifies the dimensions required for detailing the model: input information, activities, tasks, knowledge domains, mechanisms, controls, and the information produced. These dimensions were organized in spreadsheets and graphs. As a result, a reference model with 27 activities and 71 tasks was obtained, distributed over the four phases of the design process. The model was evaluated by the companies participating in the case studies and by experts, who concluded that it explains the actions needed to develop new products in SMBs.
Abstract:
The interaction mean free path between neutrons and TRISO particles is simulated using scripts written in MATLAB, in order to address the error that grows with increasing packing factor in the reactor physics code Serpent. Neutron movement is tracked both in an unbounded and in a bounded space. Depending on the program, the track is calculated in one of three ways: directly, using the position vectors of the neutrons and the surface equations of all the fuel particles; by dividing the space into multiple subspaces, each containing a fraction of the total number of particles, and selecting the particles from those subspaces the neutron passes through; or by selecting the particles that lie within an infinite cylinder formed around the movement axis of the neutron. The estimate for the mean free path from the current analytical model, based on an exponential distribution and utilized by Serpent, is used as the reference result. The results from the implicit model in Serpent imply too long a mean free path at high packing factors. The results obtained here support this observation: with a packing factor of 17%, they produce a mean free path approximately 2.46% shorter than that of the reference model. This is further supported by the packing factor experienced by the neutrons, which the simulation determined to be 17.29%. It was also observed that neutrons leaving from the surfaces of the fuel particles, in contrast to those starting inside the moderator, do not follow the exponential distribution. The current model, as it is, is thus not valid for determining the free path lengths of the neutrons.
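For reference, a minimal sketch of the analytical model used as the benchmark above: free path lengths sampled from an exponential distribution whose mean is the reciprocal of a macroscopic cross section. The cross-section value is an arbitrary placeholder, not a Serpent input.

import numpy as np

# Analytical reference model: free path lengths are exponentially distributed,
# p(l) = Sigma * exp(-Sigma * l), with mean free path 1/Sigma.
rng = np.random.default_rng(0)
Sigma = 2.0                      # macroscopic cross section, 1/cm (placeholder)
paths = rng.exponential(1.0 / Sigma, size=1_000_000)

print(f"sampled mean free path: {paths.mean():.4f} cm (expected {1/Sigma:.4f})")
# An explicit tracking simulation, as in the thesis, would instead ray-trace
# each neutron against the TRISO particle surfaces and compare the resulting
# mean free path against this exponential-model estimate.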
Abstract:
Gasification of biomass is an efficient process for producing liquid fuels, heat and electricity. It is interesting especially for the Nordic countries, where raw material for the processes is readily available. The thermal reactions of light hydrocarbons are a major challenge for industrial applications: at elevated temperatures, light hydrocarbons react spontaneously to form compounds of higher molecular weight. In this thesis, this phenomenon was studied through a literature survey, experimental work and a modeling effort. The literature survey revealed that the change in tar composition is likely caused by kinetic entropy. The surface material is deemed to be an important factor in the reactivity of the system. The experimental results were in accordance with previous publications on the subject. The novelty of the experimental work lies in the time interval used for the measurements, combined with an industrially relevant temperature interval. The modeling covered the screening of possible numerical approaches, the testing of optimization methods, and kinetic modeling. No significant numerical issues were observed, so the calculation routines used are adequate for the task. Evolutionary algorithms gave better performance and a better fit than conventional iterative methods such as the Simplex and Levenberg-Marquardt methods. Three models were fitted to the experimental data. The LLNL model was used as a reference model against which the two other models were compared. A compact model which included all the observed species was developed. The parameter estimation performed on that model gave a slightly worse fit to the experimental data than the LLNL model, but the difference was barely significant. The third model concentrated on the decomposition of hydrocarbons and included a theoretical description of the formation of a carbon layer on the reactor walls. Its fit to the experimental data was extremely good. Based on the simulation results and the literature findings, it is likely that the surface coverage of carbonaceous deposits is a major factor in the thermal reactions.
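A minimal sketch of the kind of kinetic parameter estimation described above, fitting a single first-order decomposition rate constant to concentration data with an evolutionary optimizer. The data, the rate law and the use of SciPy's differential_evolution are illustrative assumptions, not the thesis's actual models or routines.

import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical data: concentration of a light hydrocarbon decaying at constant T
t = np.linspace(0.0, 10.0, 20)                      # residence time, s
c_obs = 1.0 * np.exp(-0.35 * t) + np.random.default_rng(2).normal(0, 0.01, t.size)

def residual_ss(params):
    k, = params
    c_model = 1.0 * np.exp(-k * t)                  # first-order rate law
    return np.sum((c_model - c_obs) ** 2)           # sum of squared residuals

# Evolutionary global search, as favoured over Simplex/Levenberg-Marquardt above
result = differential_evolution(residual_ss, bounds=[(0.0, 5.0)], seed=1)
print(f"estimated rate constant k = {result.x[0]:.3f} 1/s")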
Abstract:
This paper develops a general stochastic framework and an equilibrium asset pricing model that make clear how attitudes towards intertemporal substitution and risk matter for option pricing. In particular, we show under which statistical conditions option pricing formulas are not preference-free, in other words, when preferences are not hidden in the stock and bond prices as they are in the standard Black and Scholes (BS) or Hull and White (HW) pricing formulas. The dependence of option prices on preference parameters comes from several instantaneous causality effects such as the so-called leverage effect. We also emphasize that the most standard asset pricing models (CAPM for the stock and BS or HW preference-free option pricing) are valid under the same stochastic setting (typically the absence of leverage effect), regardless of preference parameter values. Even though we propose a general non-preference-free option pricing formula, we always keep in mind that the BS formula is dominant both as a theoretical reference model and as a tool for practitioners. Another contribution of the paper is to characterize why the BS formula is such a benchmark. We show that, as soon as we are ready to accept a basic property of option prices, namely their homogeneity of degree one with respect to the pair formed by the underlying stock price and the strike price, the necessary statistical hypotheses for homogeneity provide BS-shaped option prices in equilibrium. This BS-shaped option-pricing formula allows us to derive interesting characterizations of the volatility smile, that is, the pattern of BS implicit volatilities as a function of the option moneyness. First, the asymmetry of the smile is shown to be equivalent to a particular form of asymmetry of the equivalent martingale measure. Second, this asymmetry appears precisely when there is either a premium on an instantaneous interest rate risk or on a generalized leverage effect or both, in other words, whenever the option pricing formula is not preference-free. Therefore, the main conclusion of our analysis for practitioners should be that an asymmetric smile is indicative of the relevance of preference parameters to price options.
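For reference, the Black and Scholes formula that serves as the benchmark above, together with the homogeneity property the argument rests on; these are standard results restated here, not expressions taken from the paper. With stock price S, strike K, constant volatility \sigma, riskless rate r, time to maturity \tau and standard normal cdf N:

C(S,K) = S\,N(d_1) - K e^{-r\tau} N(d_2), \qquad
d_{1,2} = \frac{\ln(S/K) + (r \pm \sigma^2/2)\,\tau}{\sigma\sqrt{\tau}},

and homogeneity of degree one in the pair (S, K):

C(\lambda S, \lambda K) = \lambda\, C(S,K) \quad \text{for all } \lambda > 0.

The paper's point is that once this homogeneity is accepted, the statistical hypotheses it requires already force BS-shaped option prices in equilibrium.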
Abstract:
The shoulder is an articular complex formed by the thorax, the clavicle, the scapula and the humerus. While the orientations and positions of these segments make the shoulder difficult to study, a thorough understanding of their interrelation remains clinically important. Thus, a new model of the upper limb is developed and presented. The joint kinematics of 15 healthy subjects was collected and reconstructed using the model. It proves to be generally less variable and more easily interpretable than the reference model. In parallel, the use of simplifications borrowed from 2D in the computation of 3D range of motion is criticized. However, exceptional cases where these simplifications do apply are identified and proven. They thus offer a possible avenue for further improvement of the models without compromising their validity.
Abstract:
The Dirichlet family owes its privileged status within simplex distributions to its ease of interpretation and good mathematical properties. In particular, we recall fundamental properties for the analysis of compositional data, such as closure under amalgamation and subcomposition. From a probabilistic point of view, it is characterised (uniquely) by a variety of independence relationships, which makes it indisputably the reference model for expressing the non-trivial idea of substantial independence for compositions. Indeed, its well-known inadequacy as a general model for compositional data stems from this independence structure together with the poverty of its parametrisation. In this paper a new class of distributions (called the Flexible Dirichlet), capable of handling various dependence structures and containing the Dirichlet as a special case, is presented. The new model exhibits a considerably richer parametrisation which, for example, allows the means and (part of) the variance-covariance matrix to be modelled separately. Moreover, the model preserves some of the good mathematical properties of the Dirichlet, i.e. closure under amalgamation and subcomposition, with new parameters simply related to the parent composition parameters. Furthermore, the joint and conditional distributions of subcompositions and relative totals can be expressed as simple mixtures of two Flexible Dirichlet distributions. The basis generating the Flexible Dirichlet, though retaining compositional invariance, shows a dependence structure which allows various forms of partitional dependence to be contemplated by the model (e.g. non-neutrality, subcompositional dependence and subcompositional non-invariance), independence cases being identified by suitable parameter configurations. In particular, within this model substantial independence among subsets of components of the composition naturally occurs when the subsets have a Dirichlet distribution.
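For reference, the density of the Dirichlet distribution that the Flexible Dirichlet generalizes (a standard result, not taken from the paper): for a composition x = (x_1, ..., x_D) on the simplex with parameter vector \alpha,

f(x;\alpha) = \frac{\Gamma(\alpha_1 + \cdots + \alpha_D)}{\Gamma(\alpha_1)\cdots\Gamma(\alpha_D)} \prod_{i=1}^{D} x_i^{\alpha_i - 1},
\qquad x_i > 0, \; \sum_{i=1}^{D} x_i = 1.

The single vector \alpha determines both the means and the whole covariance structure, which is precisely the poverty of parametrisation that the richer Flexible Dirichlet parametrisation relaxes.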