939 results for Time components


Relevance: 30.00%

Abstract:

This study analyzed the use of two viticultural practices, crop level (half crop, HC, and full crop, FC) and hang time, and their impact on the composition of four grape cultivars (Pinot gris, Riesling, Cabernet Franc and Cabernet Sauvignon) from the Niagara Region, and on wine volatile composition determined by GC-MS. It was hypothesized that keeping a full crop with a longer hang time would have a greater impact on wine quality than reducing the crop level. In all cultivars, a reduction in crop level reduced yield, clusters per vine and crop load, and increased Brix. Extended hang time also increased Brix, owing to desiccation. Climatic conditions at harvest influenced the hang time effects. GC-MS analysis detected 30 volatile components in the wines, with differing odour activity values. Harvest time had a greater impact than crop reduction on almost all compounds.

Relevance: 30.00%

Abstract:

We provide a theoretical framework to explain the empirical finding that the estimated betas are sensitive to the sampling interval even when using continuously compounded returns. We suppose that stock prices have both permanent and transitory components. The permanent component is a standard geometric Brownian motion while the transitory component is a stationary Ornstein-Uhlenbeck process. The discrete time representation of the beta depends on the sampling interval and two components labelled "permanent and transitory betas". We show that if no transitory component is present in stock prices, then no sampling interval effect occurs. However, the presence of a transitory component implies that the beta is an increasing (decreasing) function of the sampling interval for more (less) risky assets. In our framework, assets are labelled risky if their "permanent beta" is greater than their "transitory beta" and vice versa for less risky assets. Simulations show that our theoretical results provide good approximations for the means and standard deviations of estimated betas in small samples. Our results can be perceived as indirect evidence for the presence of a transitory component in stock prices, as proposed by Fama and French (1988) and Poterba and Summers (1988).
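
As a rough numerical illustration of this sampling-interval effect (a sketch under assumed parameter values, not the paper's calibration), the simulation below gives the asset and the market a shared permanent Brownian component and a shared transitory Ornstein-Uhlenbeck component, then re-estimates the OLS beta from continuously compounded returns at several sampling intervals; with a permanent beta larger than the transitory beta, the estimated beta rises as the interval lengthens.

```python
# Illustrative simulation only (parameter values are assumptions, not the paper's):
# asset and market log prices share a permanent Brownian component and a transitory
# Ornstein-Uhlenbeck component; OLS betas are re-estimated at several sampling intervals.
import numpy as np

rng = np.random.default_rng(0)

dt, n_steps = 1 / 252, 252 * 200        # daily grid, long sample to reduce noise
sigma_w = 0.15                          # volatility of the permanent component
kappa, sigma_u = 6.0, 0.15              # OU mean-reversion speed and volatility
beta_perm, beta_trans = 1.5, 0.5        # "permanent" and "transitory" betas (risky asset)

dW = sigma_w * np.sqrt(dt) * rng.standard_normal(n_steps)
W = np.cumsum(dW)                       # permanent component (Brownian motion)
U = np.zeros(n_steps)                   # transitory component (Ornstein-Uhlenbeck)
for t in range(1, n_steps):
    U[t] = U[t - 1] - kappa * U[t - 1] * dt + sigma_u * np.sqrt(dt) * rng.standard_normal()

market = W + U                          # market log price
asset = beta_perm * W + beta_trans * U  # asset log price (idiosyncratic noise omitted)

for h in (1, 5, 21, 63, 252):           # sampling interval in trading days
    rm = np.diff(market[::h])           # continuously compounded market returns
    ra = np.diff(asset[::h])            # continuously compounded asset returns
    cov = np.cov(ra, rm)
    print(f"interval = {h:3d} days   estimated beta = {cov[0, 1] / cov[1, 1]:.2f}")
```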

Relevance: 30.00%

Abstract:

Tobacco smoke is an extremely complex aerosol made up of thousands of compounds distributed between the particulate phase and the vapour phase. The toxicological effects of this smoke have been shown to be associated with compounds from both phases. Several biologically active compounds have been identified in tobacco smoke; however, no studies have demonstrated the relationship between the biological responses obtained from in vitro or in vivo tests and the compounds present in whole tobacco smoke. The aim of the present research was to develop reliable and robust smoke-fractionation methods using analytical separation and detection techniques combined with in vitro toxicological assays. An earlier study by our collaborators, which examined the combustion products of twelve major tobacco compounds, showed that chlorogenic acid was the most cytotoxic compound according to the in vitro micronucleus test. In this study, a preparative liquid chromatography method was therefore developed to fractionate the combustion products of chlorogenic acid. The fractions of the chlorogenic acid combustion products were then tested, and the compounds responsible for the toxicity of chlorogenic acid were identified. The compound in the sub-fraction responsible for most of the cytotoxicity was identified as catechol, which was confirmed by liquid chromatography/time-of-flight mass spectrometry. Recent studies have demonstrated the toxicological effects of whole tobacco smoke and the specific involvement of the vapour phase, so our work subsequently focused mainly on the analysis of whole smoke. The Borgwaldt RM20S® smoking machine, used with the British American Tobacco cell exposure chambers, allows cells to be exposed in vitro to different concentrations of whole tobacco smoke. In vitro biological assays have a high degree of variability, so all other sources of variability must be taken into account in order to assess the toxicological endpoint of these assays accurately; however, the reliability of the machine's smoke generation had never been evaluated until now. We therefore determined the reliability of smoke generation and dilution (RSD between 0.7 and 12%) by quantifying two reference gases (CH4 by flame ionization detection and CO by infrared absorption) and one particulate-phase compound, solanesol (by high-performance liquid chromatography). Next, the relationship between dose and dilution for the vapour-phase compounds found in the cell exposure chamber was characterized using a new extraction technique, headspace stir bar sorptive extraction (HSSE), coupled to liquid chromatography/mass spectrometry. The repeatability of the method gave RSD values between 10 and 13% for five of the reference compounds identified in the vapour phase of cigarette smoke.
The maximum area under the curve was obtained using the following experimental conditions: an exposure/desorption time interval of 10 ± 0.5 min, a desorption temperature of 200°C for 2 min, and a cryofocussing temperature of −75°C. The precision of the smoke dilution is linear and depends on analyte abundance and concentration (RSD of 6.2 to 17.2%), with amounts of 6 to 450 ng for the reference compounds. These results demonstrate that the Borgwaldt RM20S® smoking machine is a reliable tool for generating and delivering cigarette smoke to in vitro cell cultures in a repeatable and linear manner. Our approach consists of developing a methodology for working with a single tobacco compound that can subsequently be applied to more complex samples, for example the vapour phase of cigarette smoke. The methodology developed here could potentially serve as a standardization method for instrument evaluation or product identification in the tobacco industry.

Relevance: 30.00%

Abstract:

The thesis covers various aspects of modelling and analysis of finite-mean time series with symmetric stable distributed innovations. Time series analysis based on Box-Jenkins methods, where the models are linear and the errors are Gaussian, is the most popular approach. We highlight the limitations of classical time series analysis tools, explore some generalized tools, and organize the approach in parallel with the classical setup. The thesis mainly studies estimation and prediction in a signal-plus-noise model, where the signal and the noise are assumed to follow models with symmetric stable innovations.

The thesis starts with some motivating examples and application areas of alpha-stable time series models. Classical time series analysis and the corresponding theory based on finite-variance models are discussed extensively in the second chapter, which also surveys existing theory and methods for infinite-variance models. The third chapter presents a linear filtering method for computing the filter weights assigned to the observations when estimating an unobserved signal in a general noisy environment. Here both the signal and the noise are considered stationary processes with infinite-variance innovations. Semi-infinite, doubly infinite and asymmetric signal-extraction filters are derived based on a minimum dispersion criterion. Finite-length filters based on Kalman-Levy filters are developed and the pattern of the filter weights is identified. Simulation studies show that the proposed methods are competent in signal extraction for processes with infinite variance.

Parameter estimation of autoregressive signals observed in a symmetric stable noise environment is discussed in the fourth chapter. Higher-order Yule-Walker type estimation using the auto-covariation function is applied, and the methods are illustrated by simulation and by an application to sea surface temperature data. The number of Yule-Walker equations is increased and an ordinary least squares estimate of the autoregressive parameters is proposed. The singularity problem of the auto-covariation matrix is addressed, and a modified version of the generalized Yule-Walker method is derived using singular value decomposition. The fifth chapter introduces the partial covariation function as a tool for stable time series analysis where the covariance or partial covariance is ill defined. Asymptotic results for the partial auto-covariation are studied, and its application to model identification of stable autoregressive models is discussed. The Durbin-Levinson algorithm is generalized to include infinite-variance models in terms of the partial auto-covariation function, and a new information criterion is introduced for consistent order estimation of stable autoregressive models.

Chapter six explores the application of the techniques discussed in the previous chapters to signal processing. Frequency estimation of a sinusoidal signal observed in a symmetric stable noisy environment is discussed in this context. A parametric spectrum analysis and a frequency estimate based on the power transfer function are introduced; the estimate of the power transfer function is obtained using the modified generalized Yule-Walker approach. Another important problem in statistical signal processing is identifying the number of sinusoidal components in an observed signal, for which a modified version of the proposed information criterion is used.
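
As a rough illustration of parameter estimation under infinite-variance innovations (a simplified sketch, not the thesis's higher-order Yule-Walker procedure), the code below simulates an AR(1) process driven by symmetric alpha-stable noise and estimates the autoregressive coefficient with a covariation-style fractional lower-order moment ratio; the parameter values and the choice p = 1 are assumptions.

```python
# Illustrative sketch only: AR(1) with symmetric alpha-stable innovations, estimated
# with a covariation-style fractional lower-order moment (FLOM) ratio.  This is a
# simplified stand-in for the generalized Yule-Walker approach described in the thesis.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(1)
alpha, phi, n = 1.7, 0.6, 10000            # stability index, AR coefficient, sample size

eps = levy_stable.rvs(alpha, 0.0, size=n, random_state=rng)   # SaS innovations
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

# FLOM / covariation-style ratio with p < alpha (here p = 1):
#   sum_t x_t * sign(x_{t-1}) |x_{t-1}|^(p-1)  /  sum_t |x_{t-1}|^p
p = 1.0
num = np.sum(x[1:] * np.sign(x[:-1]) * np.abs(x[:-1]) ** (p - 1.0))
den = np.sum(np.abs(x[:-1]) ** p)
print(f"true phi = {phi},  FLOM estimate = {num / den:.3f}")
```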

Relevance: 30.00%

Abstract:

In this thesis, certain continuous-time inventory problems with positive service time under local purchase guided by an N/T-policy are analysed. In most of the cases analysed, we arrive at a stochastic decomposition of the system states; that is, the joint distribution of the system states is obtained as the product of the marginal distributions of its components. The thesis is divided into five chapters.

Relevance: 30.00%

Abstract:

This study investigated the relationship between higher education and the requirements of the world of work, with an emphasis on the effect of problem-based learning (PBL) on graduates' competencies. Implementing the full PBL method is costly (Albanese & Mitchell, 1993; Berkson, 1993; Finucane, Shannon, & McGrath, 2009). However, implementing PBL in a less than curriculum-wide mode is more achievable in a broader context (Albanese, 2000): higher education institutions implement only a few PBL components in the curriculum, or a teacher implements a few PBL components at the course level. For this kind of implementation there is a need to identify PBL components and their effects on particular educational outputs (Hmelo-Silver, 2004; Newman, 2003). So far, however, there has been little research on this topic. The main aims of this study were: (1) to identify individual PBL components, manifested in the development of a valid and reliable PBL implementation questionnaire, and (2) to determine the effect of each identified PBL component on specific graduates' competencies. The analysis was based on quantitative data collected in a survey of medicine graduates of Gadjah Mada University, Indonesia; a total of 225 graduates responded. The results of confirmatory factor analysis (CFA) showed that all individual constructs of PBL and graduates' competencies had acceptable goodness-of-fit (GOF). Additionally, the factor loadings (standardized loading estimates), AVEs (average variance extracted), CRs (construct reliability), and ASVs (average shared squared variance) provided evidence of convergent and discriminant validity. All values indicated valid and reliable measurements. The investigation of the effects of PBL showed that each PBL component had specific effects on graduates' competencies. Interpersonal competencies were affected by the Student-centred learning (β = .137; p < .05) and Small group components (β = .078; p < .05). Problem as stimulus affected Leadership (β = .182; p < .01). Real-world problems affected Personal and organisational competencies (β = .140; p < .01) and Interpersonal competencies (β = .114; p < .05). Teacher as facilitator affected Leadership (β = .142; p < .05). Self-directed learning affected Field-related competencies (β = .080; p < .05). These results can help higher education institutions and educators make informed choices about the implementation of PBL components, so that they can meet their educational goals while working within limited resources. This study seeks to improve on prior studies' research methods in four major ways: (1) by identifying PBL components based on theory and empirical data; (2) by using latent variables in structural equation modelling instead of using a single variable as a proxy for a construct; (3) by using CFA to validate the latent structure of the measurement, thus providing better evidence of validity; and (4) by using graduate survey data, which is suitable for analysing PBL effects in the framework of the relationship between higher education and the world of work.
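
As a minimal sketch of the kind of latent-variable model described above (the study does not specify its software, and the item and file names below are hypothetical), a CFA-plus-structural model could be written with the semopy package roughly as follows:

```python
# Minimal SEM sketch (hypothetical item names and data file; not the study's actual
# model), using the semopy package for a measurement model plus one structural path.
import pandas as pd
from semopy import Model

model_desc = """
# measurement model: latent constructs measured by questionnaire items
StudentCentred =~ scl_item1 + scl_item2 + scl_item3
Interpersonal  =~ int_item1 + int_item2 + int_item3

# structural model: effect of the PBL component on the competency
Interpersonal ~ StudentCentred
"""

data = pd.read_csv("graduate_survey.csv")   # hypothetical survey data
model = Model(model_desc)
model.fit(data)
print(model.inspect())                      # loadings, path coefficients, p-values
```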

Relevance: 30.00%

Abstract:

As exploration of our solar system and outer space moves into the future, spacecraft are being developed to venture on increasingly challenging missions with bold objectives. The spacecraft tasked with completing these missions are becoming progressively more complex, which increases the potential for mission failure due to hardware malfunctions and unexpected spacecraft behavior. A solution to this problem lies in the development of an advanced fault management system. Fault management enables a spacecraft to respond to failures and take repair actions so that it may continue its mission. The two main approaches developed for spacecraft fault management have been rule-based and model-based systems. Rules map sensor information to system behaviors, thus achieving fast response times and making the actions of the fault management system explicit. These rules are developed by having a human reason through the interactions between spacecraft components, a process limited by the number of interactions a human can reason about correctly. In the model-based approach, the human provides component models, and the fault management system reasons automatically about system-wide interactions and complex fault combinations. This approach improves correctness and makes the underlying system models explicit, whereas they are implicit in the rule-based approach. We propose a fault detection engine, Compiled Mode Estimation (CME), that unifies the strengths of the rule-based and model-based approaches. CME uses a compiled model to determine spacecraft behavior more accurately. Reasoning related to fault detection is compiled off-line into a set of concurrent, localized diagnostic rules, which are then combined on-line with sensor information to reconstruct the diagnosis of the system. These rules enable a human to inspect the diagnostic consequences of CME. Additionally, CME is capable of reasoning through component interactions automatically while still providing fast and correct responses. The implementation of this engine has been tested against the NEAR spacecraft's advanced rule-based system, detecting failures beyond those covered by the rules. This evolution in fault detection will enable future missions to explore the furthest reaches of the solar system without the burden of human intervention to repair failed components.
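
A toy illustration of the general idea of precompiled, localized diagnostic rules combined on-line with sensor readings (a simplification for intuition only, not an implementation of CME; all component names and thresholds are invented):

```python
# Toy illustration only: combining precompiled, localized diagnostic rules with live
# sensor readings into a system-level diagnosis.  Component names and thresholds are
# hypothetical; this is not Compiled Mode Estimation itself.
from typing import Callable, Dict

# Each "compiled rule" inspects one component's sensors and reports a local diagnosis.
Rules = Dict[str, Callable[[Dict[str, float]], str]]

compiled_rules: Rules = {
    "thruster": lambda s: "stuck-closed" if s["thrust"] < 0.1 and s["valve_cmd"] > 0.5 else "nominal",
    "battery": lambda s: "degraded" if s["voltage"] < 24.0 else "nominal",
    "star_tracker": lambda s: "misaligned" if abs(s["attitude_error"]) > 0.05 else "nominal",
}

def diagnose(sensors: Dict[str, Dict[str, float]], rules: Rules) -> Dict[str, str]:
    """Combine the local rule outputs into a system-level diagnosis."""
    return {component: rule(sensors[component]) for component, rule in rules.items()}

readings = {
    "thruster": {"thrust": 0.02, "valve_cmd": 1.0},
    "battery": {"voltage": 27.1},
    "star_tracker": {"attitude_error": 0.01},
}
print(diagnose(readings, compiled_rules))   # e.g. {'thruster': 'stuck-closed', ...}
```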

Relevance: 30.00%

Abstract:

In this report, a face recognition system capable of detecting and recognizing frontal and rotated faces was developed. Two face recognition methods focusing on pose invariance are presented and evaluated: the whole-face approach and the component-based approach. The main challenge of this project is to develop a system able to identify faces under different viewing angles in real time. The development of such a system will enhance the capability and robustness of current face recognition technology. The whole-face approach recognizes faces by classifying a single feature vector consisting of the gray values of the whole face image. The component-based approach first locates the facial components and extracts them; these components are normalized and combined into a single feature vector for classification. The Support Vector Machine (SVM) is used as the classifier for both approaches. Extensive tests of robustness against pose changes are performed on a database that includes faces rotated up to about 40 degrees in depth. The component-based approach clearly outperforms the whole-face approach on all tests. Although this approach is proven to be more reliable, it is still too slow for real-time applications, which is why a real-time face recognition system using the whole-face approach is implemented to recognize people in color video sequences.
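
A minimal sketch of the whole-face idea (not the report's actual system): each face image is flattened into a single gray-value feature vector and classified with an SVM. The data, image size and SVM settings below are placeholder assumptions.

```python
# Minimal whole-face classification sketch: flatten each grayscale image into one
# feature vector and train an SVM.  The synthetic "images" stand in for real face crops.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, h, w = 200, 40, 40
images = rng.random((n, h, w))          # placeholder for real face images
labels = rng.integers(0, 5, size=n)     # 5 hypothetical identities

X = images.reshape(n, h * w)            # whole-face approach: one flat gray-value vector
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = SVC(kernel="linear", C=1.0)       # linear SVM classifier
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```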

Relevance: 30.00%

Abstract:

It is well known that regression analyses involving compositional data need special attention because the data are not of full rank. For a regression analysis where both the dependent and the independent variable are compositional, we propose a transformation of the components emphasizing their roles as dependent and independent variables. A simple linear regression can then be performed on the transformed components. The regression line can be depicted in a ternary diagram, facilitating the interpretation of the analysis in terms of components. An example with time budgets illustrates the method and its graphical features.
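
As an illustration of this kind of analysis (using the additive log-ratio transform as a common stand-in, not necessarily the transformation proposed here, and synthetic data), a simple linear regression between two transformed compositions might look as follows:

```python
# Illustrative sketch only: regress one composition on another after a log-ratio
# transformation.  The additive log-ratio (alr) used here is a common choice and not
# necessarily the transformation proposed in the paper; the data are synthetic.
import numpy as np

rng = np.random.default_rng(2)

def alr(parts):
    """Additive log-ratio transform of a 3-part composition, last part as reference."""
    parts = parts / parts.sum(axis=1, keepdims=True)   # close to the unit simplex
    return np.log(parts[:, :-1] / parts[:, -1:])

# Synthetic 3-part compositions for the independent (X) and dependent (Y) variables,
# e.g. time budgets split across three activities.
X = rng.dirichlet([4.0, 3.0, 2.0], size=100)
Y = rng.dirichlet([2.0, 3.0, 4.0], size=100)

x, y = alr(X), alr(Y)

# Ordinary least squares of each transformed Y coordinate on the transformed X.
design = np.column_stack([np.ones(len(x)), x])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
print("regression coefficients (per alr coordinate of Y):\n", coef)
```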

Relevance: 30.00%

Abstract:

Male and female homosexual orientation has a substantial prevalence in humans and can be explained by determinants at various levels: biological, genetic, psychological, social and cultural. However, biological and genetic evidence has provided the main hypotheses tested in scientific research worldwide. This article aims to review research studies on the genetic and biological evidence for the determinants of homosexual orientation. A literature review was conducted using the MedLine/PubMed database and Google Scholar. Papers and books in Portuguese and English were searched using the following keywords: sexual orientation, sexual behavior, homosexuality, developmental biology and genetics. Papers from the last 22 years were selected. Five main theories about biological components were found: (1) fraternal birth order; (2) brain androgenization and the 2D:4D ratio; (3) brain activation by pheromones; and (4) epigenetic inheritance. Four theories about genetic components were found: (1) genetic polymorphism; (2) the pattern of X-linked inheritance; (3) monozygotic twins; and (4) sexually antagonistic selection. It is concluded that considerable scientific evidence has been gathered over time to explain some of the biological and genetic components of homosexuality, especially in males. However, there is still no definitive explanation of the determinants of homosexual orientation.

Relevance: 30.00%

Abstract:

One of the major uncertainties in the ability to predict future climate change, and hence its impacts, is the lack of knowledge of the earth's climate sensitivity. Here, data from the 1985-96 Earth Radiation Budget Experiment (ERBE) are combined with surface temperature change information and estimates of radiative forcing to diagnose the climate sensitivity. Importantly, the estimate is completely independent of climate model results. A climate feedback parameter of 2.3 +/- 1.4 W m-2 K-1 is found. This corresponds to a 1.0-4.1-K range for the equilibrium warming due to a doubling of carbon dioxide (assuming Gaussian errors in observable parameters, which is approximately equivalent to a uniform "prior" in the feedback parameter). The uncertainty range is due to a combination of the short time period for the analysis and uncertainties in the surface temperature and radiative forcing time series, mostly the former. Radiative forcings may not all be fully accounted for; however, an argument is presented that the estimate of climate sensitivity is still likely to be representative of longer-term climate change. The methodology can be used to (1) retrieve the shortwave and longwave components of climate feedback and (2) suggest clear-sky and cloud feedback terms. There is preliminary evidence of a neutral or even negative longwave feedback in the observations, suggesting that current climate models may not be representing some processes correctly if they give a net positive longwave feedback.
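
To make the link between the feedback parameter and the quoted warming range concrete, a back-of-the-envelope conversion uses ΔT2x = F2x/λ with a CO2-doubling forcing of about 3.7 W m-2 (the forcing value is an assumption, not stated in the abstract):

```python
# Back-of-the-envelope check (assumes a CO2-doubling forcing of ~3.7 W m^-2, which is
# not stated in the abstract): convert the feedback parameter range into warming.
F_2x = 3.7                      # radiative forcing of doubled CO2, W m^-2 (assumed)
lam, lam_err = 2.3, 1.4         # climate feedback parameter +/- uncertainty, W m^-2 K^-1

dT_low = F_2x / (lam + lam_err)     # strongest feedback -> least warming
dT_best = F_2x / lam
dT_high = F_2x / (lam - lam_err)    # weakest feedback -> most warming
print(f"equilibrium warming for 2xCO2: {dT_low:.1f}-{dT_high:.1f} K (best estimate {dT_best:.1f} K)")
```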

Relevance: 30.00%

Abstract:

Aims: To investigate the changes in the surface properties of Lactobacillus rhamnosus GG during growth, and to relate them to the ability of the Lactobacillus cells to adhere to Caco-2 cells. Methods and Results: Lactobacillus rhamnosus GG was grown in complex medium, and cell samples were taken at four time points and freeze-dried. Untreated and trypsin-treated freeze-dried samples were analysed for their composition using SDS-PAGE and Fourier transform infrared spectroscopy (FTIR), for their hydrophobicity and zeta potential, and for their ability to adhere to Caco-2 cells. The results suggested that in the case of the early exponential phase samples (4 and 8 h), the net surface properties, i.e. hydrophobicity and charge, were determined to a large extent by anionic hydrophilic components, whereas in the case of the stationary phase samples (13 and 26 h), hydrophobic proteins seemed to play the biggest role. Considerable differences were also observed in the ability of the different samples to adhere to Caco-2 cells; maximum adhesion was observed for the early stationary phase sample (13 h). The results suggested that the adhesion to Caco-2 cells was influenced by both proteins and non-proteinaceous compounds present on the surface of the Lactobacillus cells. Conclusion: The surface properties of Lact. rhamnosus GG changed during growth, which in turn affected the ability of the Lactobacillus cells to adhere to Caco-2 cells. Significance and Impact of the Study: The levels of adhesion of Lactobacillus cells to Caco-2 cells were influenced by the growth time and reflected changes on the bacterial surface. This study provides critical information on the physicochemical factors that influence bacterial adhesion to intestinal cells.

Relevance: 30.00%

Abstract:

The in vitro antioxidant activity and the protective effect against human low-density lipoprotein oxidation of coffees prepared using different degrees of roasting were evaluated. Coffees with the highest amount of brown pigments (dark coffee) showed the highest peroxyl radical scavenging activity. These coffees also protected human low-density lipoprotein (LDL) against oxidation, although green coffee extracts showed more protection. In a different experiment, coffee extracts were incubated with human plasma prior to isolation of LDL particles. This showed, for the first time, that incubation of plasma with dark, but not green, coffee extracts protected the LDL against oxidation by copper or by the thermolabile azo compound AAPH. Antioxidants in the dark coffee extracts must therefore have become associated with the LDL particles. Brown compounds, especially those derived from the Maillard reaction, are the compounds most likely to be responsible for this activity.

Relevance: 30.00%

Abstract:

Epidemiological data suggest that those who consume a diet rich in quercetin-containing foods may have a reduced risk of CVD. Furthermore, in vitro and ex vivo studies have observed the inhibition of collagen-induced platelet activation by quercetin. The aim of the present study was to investigate the possible inhibitory effects of quercetin ingestion from a dietary source on collagen-stimulated platelet aggregation and signalling. A double-blind randomised cross-over pilot study was undertaken. Subjects ingested a soup containing either a high or a low amount of quercetin. Plasma quercetin concentrations and platelet aggregation and signalling were assessed after soup ingestion. The high-quercetin soup contained 69 mg total quercetin compared with the low-quercetin soup containing 5 mg total quercetin. Plasma quercetin concentrations were significantly higher after high-quercetin soup ingestion than after low-quercetin soup ingestion and peaked at 2.59 (SEM 0.42) μmol/l. Collagen-stimulated (0.5 μg/ml) platelet aggregation was inhibited after ingestion of the high-quercetin soup in a time-dependent manner. Collagen-stimulated tyrosine phosphorylation of a key component of the collagen-signalling pathway via glycoprotein VI, Syk, was significantly inhibited by ingestion of the high-quercetin soup. The inhibition of Syk tyrosine phosphorylation was correlated with the area under the curve for the high-quercetin plasma profile. In conclusion, the ingestion of quercetin from a dietary source of onion soup could inhibit some aspects of collagen-stimulated platelet aggregation and signalling ex vivo. This further substantiates the epidemiological data suggesting that those who preferentially consume high amounts of quercetin-containing foods have a reduced risk of thrombosis and potential CVD risk.

Relevance: 30.00%

Abstract:

Accelerated failure time models with a shared random component are described and are used to evaluate the effect of explanatory factors and different transplant centres on survival times following kidney transplantation. Different combinations of the distribution of the random effects and the baseline hazard function are considered, and the fit of such models to the transplant data is critically assessed. A mixture model that combines short- and long-term components of a hazard function is then developed, which provides a more flexible model for the hazard function. The model can incorporate different explanatory variables and random effects in each component. The model is straightforward to fit using standard statistical software, and is shown to be a good fit to the transplant data. Copyright (C) 2004 John Wiley & Sons, Ltd.
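
A minimal sketch of an accelerated failure time fit (a basic Weibull AFT without the shared random component described above, using the lifelines package; the data file and column names are hypothetical):

```python
# Minimal Weibull AFT sketch with the lifelines package.  This omits the shared random
# (frailty) component of the paper's model; file and column names are assumptions.
import pandas as pd
from lifelines import WeibullAFTFitter

# Hypothetical columns: survival time, event indicator, and two covariates.
df = pd.read_csv("kidney_transplant.csv")

aft = WeibullAFTFitter()
aft.fit(df[["time", "event", "age", "diabetic"]], duration_col="time", event_col="event")
aft.print_summary()   # acceleration factors for each covariate

# A shared random effect per transplant centre (a frailty term) would be layered on top
# of this baseline model, e.g. via a mixed-effects or frailty survival model.
```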