962 results for Two variable oregonator model
Abstract:
Control of an industrial robot is mainly a problem of dynamics: it includes non-linearities, uncertainties and external perturbations that should be considered in the design of control laws. In this work, two control strategies, one based on variable structure controllers (VSC) and one on a PD control algorithm, are compared with respect to tracking errors in the presence of friction. The controllers' performance is evaluated by adding a static friction model. Simulations and experimental results show that it is possible to reduce tracking errors by using a model-based friction compensation scheme. A SCARA robot is used to illustrate the conclusions of this paper.
Abstract:
The human immunoglobulin lambda variable locus (IGLV) maps to chromosome 22, band q11.1-q11.2. The 30 functional germline V-lambda genes sequenced until now have been subgrouped into 10 families (Vl1 to Vl10). The number of Vl genes has been estimated at approximately 70. This locus is formed by three gene clusters (VA, VB and VC) that encompass the variable coding genes (V) responsible for the synthesis of lambda-type Ig light chains, and the Jl-Cl cluster with the joining segments and the constant genes. Recently the entire variable lambda gene locus was mapped by contig methodology and its one-megabase DNA completely sequenced. All the known functional V-lambda genes and pseudogenes were located. We screened a human genomic DNA cosmid library and isolated a clone with an insert of 37 kb (cosmid 8.3) encompassing four functional genes (IGLV7S1, IGLV1S1, IGLV1S2 and IGLV5a), a pseudogene (VlA) and a vestigial sequence (vg1), in order to study in detail the positions of the restriction sites surrounding the Vl genes. We generated a high-resolution restriction map, locating 31 restriction sites in 37 kb of the VB cluster, a region rich in functional Vl genes. This mapping information opens the perspective for further RFLP and sequencing studies.
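For readers interested in the computational side of restriction mapping, the following minimal sketch locates recognition sites of two common enzymes in a DNA string. The EcoRI and BamHI motifs are standard, but the enzyme set and example sequence are illustrative and unrelated to the enzymes actually used for the cosmid 8.3 map.

```python
# Recognition sequences for two common restriction enzymes (standard motifs).
ENZYMES = {"EcoRI": "GAATTC", "BamHI": "GGATCC"}

def restriction_sites(seq, enzymes=ENZYMES):
    """Return 0-based positions of each enzyme's recognition site in seq."""
    sites = {}
    for name, motif in enzymes.items():
        positions, start = [], 0
        while (i := seq.find(motif, start)) != -1:
            positions.append(i)
            start = i + 1  # allow overlapping occurrences
        sites[name] = positions
    return sites
```

A real mapping workflow would combine predicted sites like these with the fragment sizes observed on gels.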
Abstract:
Early stimulation has been shown to produce long-lasting effects in many species. Prenatal exposure to some strong stressors may affect development of the nervous system, leading to behavioral impairment in adult life. The purpose of the present work was to study the postnatal harmful effects of exposure to variable mild stresses in rats during pregnancy. Female Holtzman rats were subjected daily to one session of chronic variable stress (CVS) during pregnancy (prenatal stress; PS group). Control pregnant rats (C group) were undisturbed. The pups of PS and C dams were weighed and separated into two groups 48 h after delivery. One group was maintained with their own dams (PS group, N = 70; C group, N = 36), while the other PS pups were cross-fostered with C dams (PSF group, N = 47) and the other C pups were cross-fostered with PS dams (CF group, N = 58). Pups were undisturbed until weaning (postnatal day 28). The male offspring underwent motor activity tests (day 28), enriched environment tests (day 37) and social interaction tests (day 42) in an animal activity monitor. Body weight was recorded on days 2, 28 and 60. The PS pups showed lower birth weight than C pups (Duncan's test, P<0.05). The PS pups suckling with their stressed mothers displayed greater preweaning mortality (C: 23%, PS: 60%; χ2 test, P<0.05) and lower body weight than controls at days 28 and 60 (Duncan's test, P<0.05 and P<0.01, respectively). The PS, PSF and CF groups showed lower motor activity scores than controls when tested at day 28 (Duncan's test, P<0.01 for the PS group and P<0.05 for the CF and PSF groups). In the enriched environment test performed on day 37, between-group differences in total motor activity were not detected; however, the PS, CF and PSF groups displayed less exploration time than controls (Duncan's test, P<0.05). Only the PS group showed impaired motor activity and impaired social behavior at day 42 (Duncan's test, P<0.05).
Thus, CVS treatment during gestation plus suckling with a previously stressed mother caused long-lasting physical and behavioral changes in rats. Cross-fostering PS-exposed pups to a dam that had not been subjected to stress counteracted most of the harmful effects of the treatment. It is probable that prenatal stress plus suckling from a previously stressed mother can induce long-lasting changes in the neurotransmitter systems involved in emotional regulation. Further experiments using neurochemical and pharmacological approaches would be of interest in this model.
Abstract:
To compare the sensitivity of dipyridamole, dobutamine and pacing stress echocardiography for the detection of myocardial ischemia, we produced a physiologically significant stenosis in the left circumflex artery of 14 open-chest dogs (range: 50 to 89% reduction in luminal diameter). In each study, dobutamine (5 to 40 µg kg⁻¹ min⁻¹ in 3-min stages) and pacing (20 bpm increments, each 2 min, up to 260 bpm) were performed in random order, followed by dipyridamole (up to 0.84 mg/kg over 10 min). The positivity of the stress echocardiography tests was quantitatively determined by a significant (P<0.05) reduction of, or failure to increase, absolute and percent systolic wall thickening in the wall supplied by the stenotic artery, as compared to the opposite wall (areas related to the left anterior descending artery). Systolic and diastolic frozen images were analyzed off-line by two blinded observers in the control and stress conditions. The results showed that 1) the sensitivity of the dobutamine, dipyridamole and pacing stress tests was 57, 57 and 36%, respectively; 2) in animals with positive tests, the mean percent change of wall thickening in left ventricular ischemic segments was larger in the pacing (-19 ± 11%) and dipyridamole (-18 ± 16%) tests than in the dobutamine test (-9 ± 6%) (P = 0.05), but a similar mean reduction of wall thickening was observed when this variable was normalized to a control left ventricular segment (area related to the left anterior descending artery) (pacing: -16 ± 7%; dipyridamole: -25 ± 16%; dobutamine: -26 ± 10%; not significant); and 3) a significant correlation was observed between the magnitude of coronary stenosis and the left ventricular segmental dysfunction induced by ischemia in dogs with positive stress tests. We conclude that the dobutamine and dipyridamole stress tests showed identical sensitivities for the detection of myocardial ischemia in this one-vessel disease animal model with a wide range of left circumflex artery stenosis.
The pacing stress test was less sensitive, but the difference was not statistically significant. The magnitude of segmental left ventricular dysfunction induced by ischemia was similar in all stress tests evaluated.
Abstract:
Concentrated solar power (CSP) is a renewable energy technology that could contribute to overcoming global problems related to pollution emissions and increasing energy demand. CSP utilizes solar irradiation, which is a variable source of energy. In order to use CSP technology in energy production and reliably operate a solar field including a thermal energy storage system, dynamic simulation tools are needed to study the dynamics of the solar field, to optimize production and to develop control systems. The objective of this Master's thesis is to compare different concentrated solar power technologies and to configure a dynamic solar field model of one selected CSP field design in the dynamic simulation program Apros, owned by VTT and Fortum. The configured model is based on German Novatec Solar's linear Fresnel reflector design. Solar collector components, including dimensions and performance calculation, were developed, as well as a simple solar field control system. The preliminary simulation results of two simulation cases under clear sky conditions were good: the desired, stable superheated steam conditions were maintained in both cases, while, as expected, the amount of steam produced was reduced in the case with lower irradiation conditions. As a result of the model development process, it can be concluded that the configured model works successfully and that Apros is a very capable and flexible tool for configuring new solar field models and control systems and for simulating solar field dynamic behaviour.
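The collector performance calculation mentioned above can be illustrated by a minimal steady-state heat balance. The optical efficiency and loss coefficient below are assumed round numbers for illustration, not Novatec's design data or Apros's actual component models.

```python
def collector_heat_output(dni, area, eta_opt=0.6, u_loss=0.2,
                          t_fluid=400.0, t_amb=25.0):
    """Steady-state thermal power [W] of a line-focus collector:
    optically absorbed power minus a linear heat loss to ambient.
    dni: direct normal irradiance [W/m^2]; area: aperture area [m^2];
    eta_opt, u_loss [W/(m^2 K)]: illustrative assumptions."""
    q_abs = eta_opt * dni * area                  # absorbed solar power
    q_loss = u_loss * area * (t_fluid - t_amb)    # heat loss to ambient
    return max(q_abs - q_loss, 0.0)
```

Lowering `dni` reduces the output, mirroring the reduced steam production reported for the lower-irradiation simulation case.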
Abstract:
One hundred and seventy-two subjects participated in this quantitative, correlational survey, which tested Hackman and Oldham's Job Characteristics Model in an educational setting. Subjects were Teaching Masters, Chairmen and Deans from an Ontario community college. The data were collected via a mailed questionnaire covering all variables of the model. Several reliable, valid instruments were used to test the variables. Data analysis through Pearson correlation and stepwise multiple regression revealed that core job characteristics predicted certain critical psychological states and that these critical psychological states, in turn, were able to predict various personal and work outcomes, but not absenteeism. The context variable, Satisfaction with Co-workers, was the only consistent moderating variable between core characteristics and critical psychological states; however, individual employee differences did moderate the relationship between critical psychological states and all of the personal and work outcomes except Internal Work Motivation. Two other moderator variables, Satisfaction with Context and Growth Need Strength, demonstrated an ability to predict the outcome General Job Satisfaction. The research suggests that this model may be used for job design and redesign purposes within the community college setting.
Abstract:
Latent variable models in finance originate both from asset pricing theory and from time series analysis. These two strands of literature appeal to two different concepts of latent structures, both of which are useful for reducing the dimension of a statistical model specified for a multivariate time series of asset prices. In the CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns, given a small number of state variables. In this paper, we use the concept of the Stochastic Discount Factor (SDF), or pricing kernel, as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of some state variables that summarize their dynamics. In beta pricing models, it is often said that only factorial risk is compensated, since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, conditional independence between the contemporaneous returns of a large number of assets, given a small number of factors, as in standard factor analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates and other state variables. We address the general issue of econometric specifications of dynamic asset pricing models, which cover the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals.
We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role relative to the validity of standard CAPM-like stock pricing and preference-free option pricing.
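The SDF/beta-pricing link described above can be summarized compactly in standard textbook notation; this is a generic formulation for orientation, not the paper's exact notation.

```latex
% Pricing restriction for any gross return R_{i,t+1}:
E_t\!\left[ m_{t+1}\, R_{i,t+1} \right] = 1
% SDF spanned by K factors, with coefficients driven by state variables s_t:
\qquad m_{t+1} = \sum_{k=1}^{K} \lambda_k(s_t)\, F_{k,t+1}
% Implied conditional beta pricing relation
% (\beta_{ik,t}: conditional factor loadings, \nu_{k,t}: factor risk premia):
\qquad E_t\!\left[ R_{i,t+1} \right] - r_{f,t}
      = \sum_{k=1}^{K} \beta_{ik,t}\, \nu_{k,t}
```

The first equation is the cross-sectional (pricing-kernel) restriction; the state dependence of the coefficients λ_k(s_t) is where the longitudinal, state-space structure enters.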
Abstract:
Research report
Abstract:
Everywhere, millions of immigrants must learn to interact with a new culture (acculturation) and to identify with it (identification). However, there is an important debate about the relationship between acculturation and identification. Some researchers consider them to be identical concepts; others argue that a directional link unites them (i.e., identification leads to acculturation, or acculturation leads to identification). Yet no study has investigated the nature and direction of their relationship. To clarify these questions, three theoretical models tested the relationship between acculturation, identification and two variables central to immigration: being forced to immigrate and value incoherence. In the first model, the immigration variables simultaneously predicted acculturation and identification. The second model proposed that the immigration variables lead to identification, which leads to acculturation. The third model instead specified that the immigration variables predict acculturation, which predicts identification. The first model implies that acculturation and identification are the same concept, whereas the second and third stipulate that they are different (and specify the direction of their relationship). These models were compared in order to examine the existence and direction of the link between acculturation and identification. In the first study, 146 Latin American immigrants answered a questionnaire. Path analyses supported the third model, indicating that acculturation leads to identification and, therefore, that they are distinct concepts. The results were confirmed in a second study in which 15 Latin American immigrants completed a semi-structured interview. Theoretical and practical implications are discussed.
Abstract:
Financial assets are often modeled by stochastic differential equations (SDEs). These equations can describe the behaviour of the asset and, in some cases, of certain model parameters as well. For example, the Heston (1993) model, which belongs to the class of stochastic volatility models, describes the behaviour of the asset and of its variance. The Heston model is very attractive because it admits semi-analytical formulas for certain derivatives, as well as a certain degree of realism. However, most simulation algorithms for this model run into problems when the Feller (1951) condition is not satisfied. In this thesis, we introduce three new simulation algorithms for the Heston model. These algorithms aim to speed up the well-known Broadie and Kaya (2006) algorithm; to do so, we use, among other tools, Markov chain Monte Carlo (MCMC) methods and approximations. In the first algorithm, we modify the second step of the Broadie-Kaya method in order to speed it up: instead of using a second-order Newton method with the inversion approach, we use the Metropolis-Hastings algorithm (see Hastings (1970)). The second algorithm improves on the first. Instead of using the true density of the integrated variance, we use the Smith (2007) approximation. This improvement reduces the dimension of the characteristic equation and speeds up the algorithm. Our last algorithm is not based on an MCMC method, but it still targets the second step of the Broadie and Kaya (2006) method. To achieve this, we use a gamma random variable whose moments are matched to those of the true time-integrated variance random variable. According to Stewart et al. (2007), a convolution of gamma random variables (which closely resembles the representation given by Glasserman and Kim (2008) when the time step is small) can be approximated by a single gamma random variable.
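The moment-matching step used in the third algorithm can be sketched as follows. This only shows how a gamma variate with prescribed first two moments is generated; it does not compute the Heston model's actual conditional moments of the integrated variance, which the algorithm would supply.

```python
import numpy as np

def gamma_moment_match(mean, var, size, rng):
    """Draws from a gamma random variable whose first two moments match
    (mean, var): shape k = mean^2 / var, scale theta = var / mean."""
    k = mean ** 2 / var
    theta = var / mean
    return rng.gamma(k, theta, size)
```

In the actual scheme, `mean` and `var` would be the conditional mean and variance of the time-integrated variance given the variance endpoints, and the draw replaces the expensive exact sampling in Broadie-Kaya's second step.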
Abstract:
Doctoral essay presented to the Faculté des arts et des sciences in partial fulfilment of the requirements for the degree of Doctor of Psychology (D.Psy.), clinical psychology option.
Abstract:
I provide choice-theoretic foundations for a simple two-stage model, called transitive shortlist methods, where choices are made sequentially by applying a pair of transitive preferences (or rationales) to eliminate inferior alternatives. Despite its simplicity, the model accommodates a wide range of choice phenomena including the status quo bias, framing, homophily, compromise, and limited willpower. I establish that the model can be succinctly characterized in terms of some well-documented context effects in choice. I also show that the underlying rationales are straightforward to determine from readily observable reversals in choice. Finally, I highlight the usefulness of these results in a variety of applications.
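A minimal sketch of the two-stage procedure, under the assumption that the two rationales are given explicitly as sets of strict preference pairs (the paper's axiomatic characterization and revealed-preference results are not reproduced here):

```python
def shortlist_choice(menu, p1, p2):
    """Two-stage shortlisting: keep the p1-maximal alternatives, then the
    p2-maximal ones among those. p1 and p2 are strict preference relations
    given as sets of (better, worse) pairs, assumed transitive."""
    def maximal(items, pref):
        # keep x if no surviving y strictly beats it
        return [x for x in items if not any((y, x) in pref for y in items)]
    return maximal(maximal(list(menu), p1), p2)
```

For example, if the first rationale eliminates c (beaten by a) and the second then eliminates a (beaten by b), the choice from {a, b, c} is b even though a survives the first stage.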
Abstract:
Nature is full of phenomena which we call "chaotic", the weather being a prime example. What we mean by this is that we cannot predict it to any significant accuracy, either because the system is inherently complex, or because some of the governing factors are not deterministic. However, during recent years it has become clear that random behaviour can occur even in very simple systems with very few degrees of freedom, without any need for complexity or indeterminacy. The discovery that chaos can be generated even by systems having completely deterministic rules - often models of natural phenomena - has stimulated a lot of research interest recently. Not that this chaos has no underlying order, but it is of a subtle kind that has taken a great deal of ingenuity to unravel. In the present thesis, the author introduces a new nonlinear model, a 'modulated' logistic map, and analyses it from the viewpoint of 'deterministic chaos'.
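One plausible reading of a 'modulated' logistic map, purely for illustration (the thesis's exact definition may differ), is to let the control parameter of one logistic map be driven by a second logistic map:

```python
def modulated_logistic(x0=0.3, y0=0.2, a=4.0, r_lo=3.6, r_hi=4.0, n=1000):
    """Iterate x_{k+1} = r_k x_k (1 - x_k) where the parameter r_k is itself
    generated by a logistic map y and rescaled into [r_lo, r_hi].
    This form is an illustrative assumption, not the thesis's definition."""
    x, y, orbit = x0, y0, []
    for _ in range(n):
        y = a * y * (1 - y)            # modulating logistic map in [0, 1]
        r = r_lo + (r_hi - r_lo) * y   # time-varying control parameter
        x = r * x * (1 - x)            # modulated logistic map
        orbit.append(x)
    return orbit
```

Since r_k ≤ 4 and x stays in [0, 1], the orbit remains bounded while the effective parameter wanders through the chaotic regime of the ordinary logistic map.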
Abstract:
In this paper a method of copy detection in short Malayalam text passages is proposed. Given two passages, one as the source text and the other as the suspect text, it is determined whether the second passage is a plagiarized version of the source text. An algorithm for plagiarism detection using the n-gram model for word retrieval is developed, and trigrams were found to be the best model for comparing Malayalam text. Based on the probability and resemblance measures calculated from the n-gram comparison, the text is categorized against a threshold. Texts are compared by variable-length n-gram (n = {2, 3, 4}) comparisons. The experiments show that the trigram model gives acceptable average performance at affordable cost in terms of complexity.
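A minimal sketch of this kind of n-gram resemblance check, using word trigrams and a Jaccard-style score; the paper's exact probability and resemblance formulas, its threshold value, and its Malayalam-specific processing are not reproduced here.

```python
def ngrams(text, n=3):
    """Set of word n-grams of a text (simple whitespace tokenization)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def resemblance(source, suspect, n=3):
    """Jaccard resemblance between the two texts' n-gram sets."""
    a, b = ngrams(source, n), ngrams(suspect, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def is_plagiarized(source, suspect, n=3, threshold=0.3):
    """Flag the suspect text when resemblance meets an assumed threshold."""
    return resemblance(source, suspect, n) >= threshold
```

Varying `n` over {2, 3, 4} reproduces the variable-length comparison described above; larger n makes matches rarer but more specific.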
Abstract:
The study of variable stars is an important topic in modern astrophysics. With the advent of powerful telescopes and high-resolution CCDs, variable star data are accumulating on the order of petabytes. This huge amount of data requires many automated methods as well as human experts. This thesis is devoted to the analysis of variable star astronomical time series data and hence belongs to the interdisciplinary field of Astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and has various causes. In some cases the variation is due to internal thermonuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes, such as eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospherical stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star.
One way to identify the type of a variable star and classify it is for an expert to visually inspect the phased light curve. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters such as period, amplitude and phase, as well as some derived parameters. Of these, the period is the most important, since wrong periods can lead to sparse light curves and misleading information. Time series analysis is a method of applying mathematical and statistical tests to data in order to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of large gaps. For ground-based observations this is due to daily varying daylight and weather conditions, while observations from space may suffer from the impact of cosmic ray particles. Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data, even though their primary intention is not variable star observation.
The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model, Gaussian or otherwise). Many of the parametric methods are based on variations of the discrete Fourier transform, like the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them can fully recover the true periods. Wrong period detection has several causes, such as power leakage to other frequencies, which is due to the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification".
It will be beneficial for the variable star astronomical community if basic parameters such as period, amplitude and phase are obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases like the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
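As an illustration of one of the non-parametric methods named above, the following is a minimal sketch of Stellingwerf-style Phase Dispersion Minimisation applied to an unevenly sampled synthetic light curve; the fixed equal-width binning and the coarse trial-period grid are simplifications of the published method.

```python
import numpy as np

def pdm_theta(t, mag, period, nbins=10):
    """PDM statistic: pooled within-bin variance of the phased light curve
    divided by the overall variance. Lower values indicate a better period."""
    phase = (t / period) % 1.0
    bins = np.clip((phase * nbins).astype(int), 0, nbins - 1)
    num = cnt = 0
    for b in range(nbins):
        m = mag[bins == b]
        if m.size > 1:
            num += m.var() * m.size
            cnt += m.size
    return (num / cnt) / mag.var()

def best_period(t, mag, trial_periods, nbins=10):
    """Return the trial period minimising the PDM statistic."""
    thetas = [pdm_theta(t, mag, p, nbins) for p in trial_periods]
    return trial_periods[int(np.argmin(thetas))]
```

At the true period, the phased light curve is tight within each bin, so the within-bin variance (and hence theta) is small; at a wrong period, the folded points scatter and theta approaches one.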