925 results for Free-space method
Abstract:
Blood pressure (BP) profiles were monitored in nine free-ranging sloths (Bradypus variegatus) by coupling one common carotid artery to a BP telemetry transmitter. Animals moved freely in an isolated, temperature-controlled room (24°C) with 12/12-h artificial light-dark cycles, and behaviors were observed during resting, eating and moving. Systolic (SBP) and diastolic (DBP) blood pressures were sampled for 1 min every 15 min for 24 h. The BP rhythm over 24 h was analyzed by the cosinor method and the mesor, amplitude, acrophase and percent rhythm were calculated. A total of 764 measurements were made in the light cycle and 721 in the dark cycle. Twenty-four-hour values (mean ± SD) were obtained for SBP (121 ± 22 mmHg), DBP (86 ± 17 mmHg), mean BP (MBP, 98 ± 18 mmHg) and heart rate (73 ± 16 bpm). The SBP, DBP and MBP were significantly higher (unpaired Student t-test) during the light period (125 ± 21, 88 ± 15 and 100 ± 17 mmHg, respectively) than during the dark period (120 ± 21, 85 ± 17 and 97 ± 17 mmHg, respectively), and the acrophase occurred between 16:00 and 17:45 h. This circadian variation is similar to that observed in cats, dogs and marmosets. The BP decreased during "behavioral sleep" (MBP down from 110 ± 19 to 90 ± 19 mmHg between 21:00 and 8:00 h). Both feeding and moving induced an increase in MBP (96 ± 17 to 119 ± 17 mmHg at 17:00 h and 97 ± 19 to 105 ± 12 mmHg at 15:00 h, respectively). The results show that conscious sloths present biphasic circadian fluctuations in BP levels, which are higher during the light period and are mainly synchronized with feeding.
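The cosinor analysis used above amounts to least-squares fitting of a single 24-h harmonic, Y(t) = M + A·cos(2πt/24 + φ), from which mesor, amplitude, acrophase and percent rhythm all follow. A minimal sketch of such a fit (synthetic data; the function name and sampling values are illustrative, not from the study):

```python
import numpy as np

def cosinor_fit(t_hours, y, period=24.0):
    """Least-squares fit of y(t) = mesor + amplitude*cos(2*pi*t/period + acrophase).

    Rewrites the cosine as the linear model y = M + b*cos(wt) + c*sin(wt),
    solves by ordinary least squares, then recovers amplitude and acrophase.
    """
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t_hours),
                         np.cos(w * t_hours),
                         np.sin(w * t_hours)])
    (mesor, b, c), *_ = np.linalg.lstsq(X, y, rcond=None)
    amplitude = np.hypot(b, c)
    acrophase = np.arctan2(-c, b)   # radians; peak time = -acrophase/w mod period
    # percent rhythm: share of variance explained by the fitted rhythm
    fitted = X @ np.array([mesor, b, c])
    percent_rhythm = 100 * (1 - np.sum((y - fitted)**2) / np.sum((y - y.mean())**2))
    return mesor, amplitude, acrophase, percent_rhythm

# Synthetic MBP-like data sampled every 15 min over 24 h, peaking near 17:00 h
t = np.arange(0, 24, 0.25)
rng = np.random.default_rng(0)
y = 98 + 5 * np.cos(2 * np.pi * (t - 17) / 24) + rng.normal(0, 3, t.size)
print(cosinor_fit(t, y))
```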
Abstract:
The objective of the present study was to determine the levels of amino acids in maternal plasma, the placental intervillous space and the fetal umbilical vein, in order to identify similarities and differences in amino acid levels in these compartments of 15 term newborns from normal pregnancies and deliveries. All amino acids, except tryptophan, were present in at least 186% higher concentrations in the intervillous space than in maternal venous blood, with the difference being statistically significant. This result contradicted the initial hypothesis of the study that plasma amino acid levels in the placental intervillous space should be similar to those of maternal plasma. When the maternal venous compartment was compared with the umbilical vein, we observed values 103% higher on the fetal side, which is compatible with currently accepted mechanisms of active amino acid transport. Amino acid levels in the placental intervillous space were similar to the values of the umbilical vein, except for proline, glycine and aspartic acid, whose levels were significantly higher than fetal umbilical vein levels (on average 107% higher). The elevated levels in the intervillous space are compatible with syncytiotrophoblast activity, which maintains high concentrations of free amino acids inside syncytiotrophoblast cells, permitting asymmetric efflux or active transport from the trophoblast cells to the blood in the intervillous space. The plasma amino acid levels in the umbilical vein of term newborns can probably be used as a standard of local normality for clinical studies of amino acid profiles.
Abstract:
Our objective was to determine whether anthropometric measurements of the midarm (MA) could identify subjects with whole-body fat-free mass (FFM) depletion. Fifty-five patients (31% females; age: 64.6 ± 9.3 years) with mild to very severe chronic obstructive pulmonary disease (COPD), 18 smokers without COPD (39% females; age: 49.0 ± 7.3 years) and 23 controls who had never smoked (57% females; age: 48.2 ± 9.6 years) were evaluated. Spirometry, muscle strength and MA circumference were measured. MA muscle area was estimated by anthropometry and MA cross-sectional area by computerized tomography (CT) scan. Bioelectrical impedance was used as the reference method for FFM. MA circumference and MA muscle area correlated with FFM and with biceps and triceps strength. Receiver operating characteristic curve analysis showed that the MA circumference and MA muscle area cut-off points had sensitivity and specificity >82% for discriminating FFM-depleted subjects. CT scan measurements did not provide improved sensitivity or specificity. For all groups, there was no statistically significant difference between MA muscle area [35.2 (29.3-45.0) cm²] and MA cross-sectional area values [36.4 (28.5-43.3) cm²], and the linear correlation coefficient between tests was r = 0.77 (P < 0.001). However, Bland-Altman plots revealed wide 95% limits of agreement (-14.7 to 15.0 cm²) between anthropometric and CT scan measurements. Anthropometric MA measurements may provide useful information for identifying subjects with whole-body FFM depletion. This is a low-cost technique and can be used in a wider patient population to identify those likely to benefit from a complete body composition evaluation.
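The Bland-Altman agreement analysis reported above (mean bias with 95% limits of agreement) reduces to a few lines of computation; a sketch with hypothetical paired measurements:

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between two paired methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# e.g. anthropometric MA muscle area vs. CT cross-sectional area
# (hypothetical values, cm^2)
anthro = [35.2, 29.3, 45.0, 38.1, 31.4]
ct     = [36.4, 28.5, 43.3, 40.0, 30.2]
bias, (lo, hi) = bland_altman(anthro, ct)
print(f"bias = {bias:.2f} cm^2, 95% LoA = ({lo:.2f}, {hi:.2f}) cm^2")
```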
Abstract:
The objectives of the present study were to describe and compare the body composition variables determined by bioelectrical impedance (BIA) and the deuterium dilution method (DDM), to identify possible correlations and agreement between the two methods, and to construct a linear regression model including anthropometric measures. Obese adolescents were evaluated by anthropometric measures, and body composition was assessed by BIA and DDM. Forty obese adolescents were included in the study. Comparison of the mean values of fat mass (FM; kg), fat-free mass (FFM; kg) and total body water (TBW; %) determined by DDM and by BIA revealed significant differences. BIA overestimated FFM and TBW and underestimated FM. When compared with data provided by DDM, the BIA data presented a significant correlation with FFM (r = 0.89; P < 0.001), FM (r = 0.93; P < 0.001) and TBW (r = 0.62; P < 0.001). The Bland-Altman plot showed no agreement for FFM, FM or TBW between data provided by BIA and DDM. The linear regression models proposed in our study for FFM, FM and TBW were well fitted: FFM obtained by DDM = 0.842 × FFM obtained by BIA; FM obtained by DDM = 0.855 × FM obtained by BIA + 0.152 × weight (kg); TBW obtained by DDM = 0.813 × TBW obtained by BIA. The body composition results of obese adolescents determined by DDM can thus be predicted from the measures provided by BIA through a regression equation.
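Applying the reported conversion equations to turn BIA readings into DDM-equivalent values is a one-line computation each; a sketch using the coefficients quoted above (input values are hypothetical):

```python
def ddm_from_bia(ffm_bia, fm_bia, tbw_bia, weight_kg):
    """Predict DDM body-composition values from BIA measures using the
    regression coefficients reported in the abstract."""
    ffm_ddm = 0.842 * ffm_bia
    fm_ddm = 0.855 * fm_bia + 0.152 * weight_kg
    tbw_ddm = 0.813 * tbw_bia
    return ffm_ddm, fm_ddm, tbw_ddm

# Hypothetical obese adolescent: FFM 52 kg, FM 38 kg, TBW 55% by BIA, weight 90 kg
print(ddm_from_bia(52.0, 38.0, 55.0, 90.0))
```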
Abstract:
Object detection is a fundamental task of computer vision that is utilized as a core part of a number of industrial and scientific applications, for example, in robotics, where objects need to be correctly detected and localized prior to being grasped and manipulated. Existing object detectors vary in (i) the amount of supervision they need for training, (ii) the type of learning method adopted (generative or discriminative) and (iii) the amount of spatial information used in the object model (model-free, using no spatial information in the object model, or model-based, with an explicit spatial model of the object). Although some existing methods report good performance in the detection of certain objects, the results tend to be application specific, and no universal method has been found that clearly outperforms all others in all areas. This work proposes a novel generative part-based object detector. The generative learning procedure of the developed method allows learning from positive examples only. The detector is based on finding semantically meaningful parts of the object (i.e. a part detector) that can provide information beyond object location, for example, pose. The object class model, i.e. the appearance of the object parts and their spatial variance (constellation), is explicitly modelled in a fully probabilistic manner. The appearance is based on bio-inspired complex-valued Gabor features that are transformed into part probabilities by an unsupervised Gaussian Mixture Model (GMM). The proposed novel randomized GMM enables learning from only a few training examples. The probabilistic spatial model of the part configurations is constructed with a mixture of 2D Gaussians. The appearance of the parts of the object is learned in an object canonical space that removes geometric variations from the part appearance model. Robustness to pose variations is achieved by object pose quantization, which is more efficient than the previously used scale and orientation shifts in the Gabor feature space. Performance of the resulting generative object detector is characterized by high recall with low precision, i.e. the generative detector produces a large number of false positive detections. Thus a discriminative classifier is used to prune false positive candidate detections produced by the generative detector, improving its precision while keeping recall high. Using only a small number of positive examples, the developed object detector performs comparably to state-of-the-art discriminative methods.
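The appearance-modelling step described above, transforming feature descriptors into part probabilities with an unsupervised GMM, can be sketched with a standard mixture implementation (scikit-learn here; the thesis's randomized GMM initialization is not reproduced, and the feature dimensions are assumptions):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for complex-valued Gabor responses, flattened to real vectors
# (number of samples and dimensions are assumptions)
features = rng.normal(size=(500, 8))

# Unsupervised GMM over feature vectors; each component plays the role of one "part"
gmm = GaussianMixture(n_components=5, covariance_type="full",
                      random_state=0).fit(features)

# Posterior responsibilities = per-part probabilities for a new descriptor
new_descriptor = rng.normal(size=(1, 8))
part_probs = gmm.predict_proba(new_descriptor)
print(part_probs.round(3))  # sums to 1 across the 5 parts
```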
Abstract:
This thesis introduces an extension of Chomsky's context-free grammars equipped with operators for referring to left and right contexts of strings. The new model is called grammar with contexts. The semantics of these grammars are given in two equivalent ways: by language equations and by logical deduction, where a grammar is understood as a logic for the recursive definition of syntax. The motivation for grammars with contexts comes from an extensive example that completely defines the syntax and static semantics of a simple typed programming language. Grammars with contexts maintain the most important practical properties of context-free grammars, including a variant of the Chomsky normal form. For grammars with one-sided contexts (that is, either left or right), there is a cubic-time tabular parsing algorithm, applicable to an arbitrary grammar. The time complexity of this algorithm can be improved to quadratic, provided that the grammar is unambiguous, that is, it allows only one parse for every string it defines. A tabular parsing algorithm for grammars with two-sided contexts has fourth-power time complexity. For these grammars there is a recognition algorithm that uses a linear amount of space. For certain subclasses of grammars with contexts there are low-degree polynomial parsing algorithms. One of them is an extension of classical recursive descent for context-free grammars; the version for grammars with contexts still works in linear time like its prototype. Another algorithm, with time complexity varying from linear to cubic depending on the particular grammar, adapts deterministic LR parsing to the new model. If all context operators in a grammar define regular languages, then such a grammar can be transformed to an equivalent grammar without context operators at all. This allows one to represent the syntax of languages in a more succinct way by utilizing context specifications. Linear grammars with contexts turned out to be non-trivial already over a one-letter alphabet. This fact leads to some undecidability results for this family of grammars.
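For orientation, the cubic-time tabular algorithm mentioned above generalizes the classical CYK procedure for context-free grammars in Chomsky normal form; a minimal sketch of that context-free prototype (the context operators themselves are not implemented here):

```python
def cyk(word, unary, binary, start):
    """Classical O(n^3) CYK recognizer for a CFG in Chomsky normal form.

    unary:  dict terminal -> set of nonterminals A with a rule A -> a
    binary: dict (B, C)   -> set of nonterminals A with a rule A -> B C
    """
    n = len(word)
    # table[i][j] = set of nonterminals deriving word[i:j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, a in enumerate(word):
        table[i][i] = set(unary.get(a, ()))
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            for k in range(i, j):
                for B in table[i][k]:
                    for C in table[k + 1][j]:
                        table[i][j] |= binary.get((B, C), set())
    return start in table[0][n - 1]

# Toy grammar S -> A B, A -> 'a', B -> 'b' recognizes "ab"
print(cyk("ab", {"a": {"A"}, "b": {"B"}}, {("A", "B"): {"S"}}, "S"))
```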
Abstract:
Two flavors of cookies (savory and peppery) were developed containing a mixture of plants, namely "guaraná" (Paullinia cupana) and "catuaba" (Anemopaegma mirandum). A test of acceptance and buying intention was applied to 48 consumers using a 9-point structured hedonic scale. Afterwards, the proximate (centesimal) composition of these cookies was determined, as well as their total contents of copper, iron and zinc, by flame atomic absorption spectrometry. Sensory tests indicated that the cookies were well accepted, with potential for sensory improvement. The amount of fiber in the samples, 3 g/100 g, surpassed expectations, since the product was not designed to be a source of this nutrient. The total amounts of copper (0.41 mg·100 g⁻¹), iron (4.50 mg·100 g⁻¹) and zinc (1.32 mg·100 g⁻¹) were considered good. The cookies produced can therefore be considered good sources of fiber, copper, iron and zinc. Furthermore, they are suitable for people affected by celiac disease because they lack gluten, and they also present functional properties. In addition, the medicinal plants used are considered energizing.
Abstract:
This thesis deals with layout planning for the Oilon Oy factory as two production units operating in separate buildings are consolidated under one roof. The units are Oilon Home Oy, located in Hollola, and Oilon Industry Oy, located at the head office in Lahti. The consolidation seeks savings in logistics costs between the production units and in property rental and maintenance costs, while at the same time making in-plant material handling more efficient. Oilon Oy has operated at the Lahti head office for over 50 years, and over its history the production has undergone several smaller changes that have left the current layout with much to be desired. To fit both production units into the Lahti site, the necessary space must be created there for the existing operations and for the production lines arriving from the Hollola unit. The thesis begins with a literature review of how production has developed globally in recent history, in order to better understand the reasons that led to Oilon Oy's current situation. Since no new production space is to be built for the production lines arriving from Hollola, space is to be freed up at the Lahti factory by making warehousing more efficient and simplifying its logistics. For this reason, the most common storage solutions are compared, and ways of making material handling more efficient in the warehouse and in the production cells are examined. Improving warehousing alone is not enough to free up the required space, however, so two new warehouse halls will also be built during the layout change. The new layout plan takes into account the current and future space requirements of each cell and arranges the cells so that the workforce can be used flexibly to manufacture different products. The research methods are thematic interviews, used to determine the needs of the employees, and a literature review covering Lean production, kanban, 5S and value stream mapping. By applying these, an efficiently functioning whole is created that realizes the targeted savings.
Abstract:
Discourse in the provincial education system that includes Aboriginal peoples is a convoluted, one-sided affair. This has contributed to the limited academic success of Aboriginal secondary students in the provincial school system. The Office of the Auditor General (2004) announced a 27- to 28-year gap in academic success compared to non-Aboriginal students (p. I). Both Aboriginal and non-Aboriginal stakeholders are frustrated and confused with the lack of support for long-term solutions to address academic success for Aboriginal students. The boundaries in education that exist between the dominant society of Canada and Aboriginal peoples are hindering the development of ethical space in which to negotiate and apply "concrete arguments and concepts" (Ermine, 2000, p. 140) for 'best' solutions across the cultural divide. Recent literature suggests a gap in knowledge to address this cultural divide. This study reveals that racism is still prevalent and that the problem lies in the fallacy of Euro-Western pedagogical beliefs. There is a need to design ethical space that will assist the transformation of cross-cultural relations in education for the inclusion of Aboriginal voices and content. I submit that ethical space involves both physical and abstract space. This report is a qualitative, exploratory, single case study of one northern Ontario secondary school attended by First Nations and Métis peoples, who comprise 35% of the school population. Twenty-six stakeholders volunteered to participate in six interviews. The volunteers in this study are Aboriginal and non-Aboriginal; the Aboriginal participants are from two First Nations and the Métis. It is an Aboriginally designed and delivered study that a) describes an Aboriginally designed research method to gather data across cultural divides in a secondary school, b) reviews the Tri-Council Policy Statement, Section 6 (TCPS), regarding 'good practices' in ethical research involving Aboriginal peoples, and c) summarizes stakeholder perspectives on the 'best educational environment' for one secondary school.
Abstract:
The Zubarev equation of motion method has been applied to an anharmonic crystal to O(λ⁴). All possible decoupling schemes have been interpreted in order to determine finite-temperature expressions for the one-phonon Green's function (and self-energy) to O(λ⁴) for a crystal in which every atom is on a site of inversion symmetry. In order to provide a check of these results, the Helmholtz free energy expressions derived from the self-energy expressions have been shown to agree in the high-temperature limit with the results obtained from the diagrammatic method. Expressions for the correlation functions that are related to the mean square displacement have been derived to O(λ⁴) in the high-temperature limit.
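For reference, the double-time retarded Green's function on which the Zubarev method rests, and its equation of motion, read in standard notation:

```latex
% Retarded double-time Green's function (Zubarev notation)
\langle\langle A(t);\, B(t') \rangle\rangle
  = -\,i\,\theta(t - t')\,\big\langle [A(t),\, B(t')] \big\rangle ,
\qquad
% Equation of motion after Fourier transforming in t - t'
\omega\,\langle\langle A;\, B \rangle\rangle_{\omega}
  = \big\langle [A,\, B] \big\rangle
  + \langle\langle [A, H];\, B \rangle\rangle_{\omega} .
```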
Abstract:
Four problems of physical interest have been solved in this thesis using the path integral formalism. Using the trigonometric expansion method of Burton and de Borde (1955), we found the kernel for two interacting one-dimensional oscillators. The result is the same as one would obtain using a normal coordinate transformation. We next introduced the method of Papadopoulos (1969), which is a systematic perturbation-type method specifically geared to finding the partition function Z, or equivalently, the Helmholtz free energy F, of a system of interacting oscillators. We applied this method to the next three problems considered. First, by summing the perturbation expansion, we found F for a system of N interacting Einstein oscillators. The result obtained is the same as the usual result obtained by Shukla and Muller (1972). Next, we found F to O(λ⁴), where λ is the usual Van Hove ordering parameter. The results obtained are the same as those of Shukla and Cowley (1971), who used a diagrammatic procedure and did the necessary sums in Fourier space; we performed the work in temperature space. Finally, slightly modifying the method of Papadopoulos, we found the finite-temperature expressions for the Debye-Waller factor in Bravais lattices, to O(λ²) and O(|K|⁴), where K is the scattering vector. The high-temperature limits of the expressions obtained here are in complete agreement with the classical results of Maradudin and Flinn (1963).
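The central quantities throughout are the standard path-integral expressions for the partition function and the Helmholtz free energy:

```latex
% Partition function as a Euclidean path integral over periodic paths
Z = \operatorname{Tr} e^{-\beta H}
  = \oint_{x(0) = x(\beta\hbar)} \mathcal{D}x(\tau)\,
    \exp\!\left[ -\frac{1}{\hbar} \int_0^{\beta\hbar}
    \left( \tfrac{m}{2}\dot{x}^2 + V(x) \right) d\tau \right],
\qquad
F = -\frac{1}{\beta} \ln Z , \quad \beta = \frac{1}{k_B T} .
```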
Abstract:
Drawing on a growing literature on the interconnection of queer theory, sexuality and space, this thesis critically assesses the development, implementation and impact of a campus-based Positive Space Campaign aimed at raising the visibility and number of respectful, supportive, educational and welcoming spaces for lesbian, gay, bi, trans, two-spirited, queer and questioning (LGBTQ) students, staff and faculty. The analysis, based on participatory action research (PAR), interrogates the extent to which the Positive Space Campaign challenges heteronormativity on campus. I contend that the Campaign, in its attempt to challenge dominant notions of sex, gender and sexuality, disrupts heterosexual space. Further, in considering the meanings of 'queer', I examine the extent to which Positive Space Campaigns may be 'queering' space by contributing to an 'imagined' campus space free of sexual and gender-based discrimination. The case study contributes to queer theory, the literature on sexuality and space, the literature on queer organizing in educational spaces, and broader queer organizing efforts in Canada.
Abstract:
The present set of experiments was designed to investigate the organization and refinement of young children's face space. Past research has demonstrated that adults encode individual faces in reference to a distinct face prototype that represents the average of all faces ever encountered. The prototype is not a static abstracted norm but rather a malleable face average that is continuously updated by experience (Valentine, 1991); for example, following prolonged viewing of faces with compressed features (a technique referred to as adaptation), adults rate similarly distorted faces as more normal and more attractive (simple attractiveness aftereffects). Recent studies have shown that adults possess category-specific face prototypes (e.g., based on race or sex). After viewing faces from two categories (e.g., Caucasian/Chinese) that are distorted in opposite directions, adults' attractiveness ratings simultaneously shift in opposite directions (opposing aftereffects). The current series of studies used a child-friendly method to examine whether, like adults, 5- and 8-year-old children show evidence of category-contingent opposing aftereffects. Participants were shown a computerized storybook in which Caucasian and Chinese children's faces were distorted in opposite directions (expanded and compressed). Both before and after adaptation (i.e., reading the storybook), participants judged the normality/attractiveness of a small number of expanded, compressed, and undistorted Caucasian and Chinese faces. The method was first validated by testing adults (Experiment 1) and was then refined in order to test 8-year-old (Experiment 2) and 5-year-old (Experiment 4a) children. Five-year-olds (our youngest age group) were also tested in a simple aftereffects paradigm (Experiment 3) and with male and female faces distorted in opposite directions (Experiment 4b). The current research is the first to demonstrate evidence of simple attractiveness aftereffects in children as young as 5, thereby indicating that, similar to adults, 5-year-olds utilize norm-based coding. Furthermore, this research provides evidence of race-contingent opposing aftereffects in both 5- and 8-year-olds; however, the opposing aftereffects demonstrated by 5-year-olds were driven largely by simple aftereffects for Caucasian faces. The lack of simple aftereffects for Chinese faces in 5-year-olds may reflect young children's limited experience with other-race faces and suggests that children's face space undergoes a period of increasing differentiation over time with respect to race. Lastly, we found no evidence of sex-contingent opposing aftereffects in 5-year-olds, which suggests that young children do not rely on a fully adult-like face space even for highly salient face categories (i.e., male/female) with which they have comparable levels of experience.
Abstract:
Nowadays, land-use/land-cover (LULC) maps at a regional scale are usually generated from satellite images of moderate resolution (between 10 m and 30 m). The National Land Cover Database in the United States and the CORINE (Coordination of Information on the Environment) Land Cover program in Europe, both based on LANDSAT images, are representative examples. However, these maps quickly become obsolete, especially in dynamic environments such as megacities and metropolitan territories. For many applications, these maps must be updated on an annual basis. Since 2007, the USGS has provided free access to ortho-rectified LANDSAT images, both archived (since 1984) and recently acquired. Without a doubt, such image availability will stimulate research on fast and efficient methods and techniques for continuous monitoring of LULC changes from medium-resolution images. This research aimed to evaluate the potential of such medium-resolution satellite images for obtaining information on LULC changes at a regional scale in the case of the Montreal Metropolitan Community (CMM), a typical North American metropolis. Previous studies have shown that the results of automatic change detection depend on several factors: 1) the characteristics of the images (spatial resolution, spectral bands, etc.); 2) the change detection method itself; and 3) the complexity of the studied environment. In the studied area, except for the downtown core and commercial arteries, land uses (industrial, commercial, residential, etc.) are well delimited. This study therefore focused on the other factors that can affect the results, namely image characteristics and change detection methods. We used LANDSAT TM/ETM+ images at 30 m spatial resolution with six spectral bands, as well as ASTER VNIR images at 15 m spatial resolution with three spectral bands, to evaluate the impact of image characteristics on change detection results. Regarding the change detection method, we decided to compare two types of automatic techniques: (1) techniques providing information mainly on the location of changes and (2) techniques providing information on both the location and the type of change ("from-to" classes). The main conclusions of this research are as follows. Change detection techniques such as image differencing or change vector analysis applied to multi-temporal LANDSAT images provide an accurate picture of where change has occurred, in a fast and efficient way. They can therefore be integrated into a continuous monitoring system for rapid assessment of the volume of changes. The change maps can also serve as a guide for the acquisition of high spatial resolution images if detailed identification of the type of change is needed.

Change detection techniques such as principal component analysis and post-classification comparison applied to multi-temporal LANDSAT images provide a relatively accurate picture of "from-to" classes, but at a very general thematic level (for example, built-up to green space and vice versa, woodland to bare soil and vice versa, etc.). The ASTER VNIR images, with better spatial resolution but fewer spectral bands than LANDSAT, do not offer a more detailed thematic level (for example, woodland to commercial or industrial space). The results indicate that future research on change detection in urban environments should focus on changes in vegetation cover, since medium-resolution images are very sensitive to changes in this type of cover. Maps indicating the location and type of vegetation cover changes are in themselves very useful for applications such as environmental monitoring or urban hydrology. They can also serve as indicators of land-use change. Techniques such as change vector analysis or vegetation indices are used for this purpose.
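Change vector analysis, one of the location-only techniques evaluated above, reduces in its simplest form to the per-pixel magnitude of the spectral difference between two co-registered dates; a sketch (band count, image size and threshold are assumptions):

```python
import numpy as np

def change_vector_magnitude(img_t1, img_t2):
    """Per-pixel magnitude of the spectral change vector between two
    co-registered, radiometrically normalized images of shape (bands, rows, cols)."""
    diff = img_t2.astype(float) - img_t1.astype(float)
    return np.sqrt((diff ** 2).sum(axis=0))

# Hypothetical 6-band LANDSAT-like scenes, 100 x 100 pixels
rng = np.random.default_rng(0)
t1 = rng.integers(0, 255, size=(6, 100, 100))
t2 = t1.copy()
t2[:, 40:60, 40:60] += 80          # simulate a changed patch

magnitude = change_vector_magnitude(t1, t2)
changed = magnitude > 100          # threshold chosen empirically in practice
print(changed.sum(), "pixels flagged as changed")
```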
Abstract:
My thesis consists of three chapters related to the estimation of state-space and stochastic volatility models. In the first chapter, we develop a computationally efficient procedure for state smoothing in linear Gaussian state-space models. We show how to exploit the particular structure of state-space models to draw the latent states efficiently. We analyze the computational efficiency of methods based on the Kalman filter, the Cholesky factor algorithm, and our new method, using operation counts and computational experiments. We show that for many important cases our method is more efficient. The gains are particularly large when the dimension of the observed variables is large or when repeated draws of the states are needed for the same parameter values. As an application, we consider a multivariate Poisson model with time-varying intensities, which is used to analyze transaction count data in financial markets. In the second chapter, we propose a new technique for analyzing multivariate stochastic volatility models. The proposed method is based on efficiently drawing the volatility from its conditional density given the parameters and the data. Our methodology applies to models with several types of cross-sectional dependence. We can model time-varying conditional correlation matrices by incorporating factors into the returns equation, where the factors are independent stochastic volatility processes. We can incorporate copulas to allow conditional dependence of returns given volatility, permitting different Student-t marginals with asset-specific degrees of freedom to capture the heterogeneity of returns. We draw the volatility as a block in the time dimension and one series at a time in the cross-sectional dimension. We apply the method introduced by McCausland (2012) to obtain a good approximation of the conditional posterior distribution of the volatility of one return given the volatilities of the other returns, the parameters, and the dynamic correlations. The model is evaluated using real data for ten exchange rates. We report results for univariate stochastic volatility models and two multivariate models. In the third chapter, we assess the information contributed by realized volatility measures to the estimation and forecasting of volatility when prices are measured with and without error. We use stochastic volatility models and take the viewpoint of an investor for whom volatility is an unknown latent variable and realized volatility is a sample quantity that contains information about it. We employ Bayesian Markov chain Monte Carlo methods to estimate the models, which allow the formulation not only of posterior densities of the volatility but also of predictive densities of future volatility. We compare volatility forecasts, and the hit rates of forecasts, that do and do not use the information contained in realized volatility.

This approach differs from those in the existing empirical literature in that the latter are most often limited to documenting the ability of realized volatility to forecast itself. We present empirical applications using daily returns of stock indices and exchange rates. The various competing models are applied to the second half of 2008, a notable period in the recent financial crisis.
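The Kalman filter that serves as a computational baseline in the first chapter can be sketched for the simplest case, a univariate local-level model (all parameter values hypothetical; the thesis's own smoothing method is not reproduced here):

```python
import numpy as np

def kalman_filter(y, sigma_state, sigma_obs, a0=0.0, p0=1e6):
    """Kalman filter for the local-level model
        y_t = alpha_t + eps_t,   alpha_{t+1} = alpha_t + eta_t,
    returning filtered state means and variances."""
    a, p = a0, p0                            # predicted mean/variance of alpha_1
    means, variances = [], []
    for obs in y:
        f = p + sigma_obs**2                 # prediction-error variance
        k = p / f                            # Kalman gain
        a = a + k * (obs - a)                # filtered mean
        p = p * (1 - k)                      # filtered variance
        means.append(a)
        variances.append(p)
        p = p + sigma_state**2               # predict ahead (identity transition)
    return np.array(means), np.array(variances)

rng = np.random.default_rng(0)
alpha = np.cumsum(rng.normal(0, 0.1, 200))   # latent state (random walk)
y = alpha + rng.normal(0, 0.5, 200)          # noisy observations
m, v = kalman_filter(y, sigma_state=0.1, sigma_obs=0.5)
print(m[-5:].round(3))
```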