924 results for non-stationary loads
Abstract:
Most panel unit root tests are designed to test the joint null hypothesis of a unit root for each individual series in a panel. After a rejection, it will often be of interest to identify which series can be deemed to be stationary and which series can be deemed nonstationary. Researchers will sometimes carry out this classification on the basis of n individual (univariate) unit root tests based on some ad hoc significance level. In this paper, we demonstrate how to use the false discovery rate (FDR) in evaluating I(1)/I(0) classifications based on individual unit root tests when the size of the cross section (n) and time series (T) dimensions are large. We report results from a simulation experiment and illustrate the methods on two data sets.
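As a sketch of how such an FDR-based classification might look, the following applies the classical Benjamini-Hochberg step-up procedure to a vector of per-series unit root p-values. The p-values and the choice of the BH rule here are illustrative assumptions, not necessarily the exact procedure of the paper.

```python
import numpy as np

def bh_fdr_classify(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: reject H0 (unit root) for
    series whose ordered p-value falls at or below the adaptive cutoff."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    thresholds = q * (np.arange(1, n + 1) / n)
    below = p[order] <= thresholds
    reject = np.zeros(n, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest index meeting the bound
        reject[order[: k + 1]] = True
    return reject  # True -> classified I(0) (stationary), False -> I(1)

# Hypothetical p-values from n = 6 univariate unit root tests
pvals = [0.001, 0.004, 0.30, 0.65, 0.02, 0.92]
print(bh_fdr_classify(pvals, q=0.05))
```

Unlike an ad hoc fixed significance level, the cutoff adapts to the whole vector of p-values, which controls the expected fraction of series wrongly classified as stationary.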
Different statistical procedures for detecting non-stationarity in precipitation series
Abstract:
This thesis aims to determine whether the summer convective precipitation simulated by the Canadian Regional Climate Model (CRCM) is stationary over time. To answer this question, we propose a frequentist statistical methodology and a Bayesian one. For the frequentist approach, we used standard quality control as well as the CUSUM to determine whether the mean has increased over the years. For the Bayesian approach, we compared the posterior distribution of precipitation over time. To do so, we modeled the posterior density of a given period and compared it to the posterior density of another period further removed in time. To make the comparison, we used a statistic based on the Hellinger distance, the J-divergence and the L2 norm. Throughout this thesis, we used the ARL (average run length) to calibrate and to compare each of our tools; a large part of the thesis is therefore dedicated to the study of the ARL. Once our tools were properly calibrated, we used simulations to compare them. Finally, we analyzed the CRCM data to determine whether they are stationary or not.
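As an illustrative sketch of the frequentist tool mentioned above, the following implements a one-sided tabular CUSUM for detecting an upward mean shift. The reference value k and decision threshold h (here in standard-deviation units) are assumed design choices; in practice they would be calibrated through the ARL as the thesis describes.

```python
def cusum_upper(x, target, k=0.5, h=5.0):
    """One-sided (upper) CUSUM on observations with assumed unit variance.
    k is the reference value, h the decision threshold (sigma units).
    Returns the first index where the statistic exceeds h, or None."""
    s = 0.0
    for i, xi in enumerate(x):
        s = max(0.0, s + (xi - target) - k)
        if s > h:
            return i
    return None

# Deterministic toy series: mean shifts from 0 to 2 at index 100
x = [0.0] * 100 + [2.0] * 100
print(cusum_upper(x, target=0.0))  # -> 103, a few steps after the shift
```

The statistic accumulates only excesses above target + k, so small fluctuations reset to zero while a sustained shift triggers an alarm quickly.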
Abstract:
The objective of this thesis is to present multivariate time series models involving random vectors whose every component is non-negative. We consider the vMEM models (vector multiplicative error models with non-negative errors) presented by Cipollini, Engle and Gallo (2006) and Cipollini and Gallo (2010). These models are a generalization to the multivariate case of the MEM models introduced by Engle (2002), and find applications notably with financial time series. vMEM models make it possible to model time series involving asset volumes, durations and conditional variances, to cite only these applications. It is also possible to carry out joint modeling and to study the dynamics present among the time series forming the system under study. In order to model multivariate time series with non-negative components, several specifications of the vector error term have been proposed in the literature. A first approach considers random vectors whose error-term distribution is such that each component is non-negative. However, finding a sufficiently flexible multivariate distribution defined on the positive support is rather difficult, at least for the applications cited above. As noted by Cipollini, Engle and Gallo (2006), one possible candidate is a multivariate gamma distribution, which however imposes severe restrictions on the contemporaneous correlations between the variables. Given these limited possibilities, one possible approach is to use copula theory. Under this approach, marginal distributions (margins) with non-negative supports can be specified, and a copula function accounts for the dependence between the components.
One possible estimation technique is the maximum likelihood method. An alternative is the generalized method of moments (GMM). The latter has the advantage of being semi-parametric in the sense that, unlike the approach imposing a multivariate law, it is not necessary to specify a multivariate distribution for the error term. In general, the estimation of vMEM models is complicated: existing algorithms must cope with the large number of parameters and the elaborate nature of the likelihood function, and in the case of GMM estimation the system to be solved also requires solvers for non-linear systems. In this thesis, considerable effort was devoted to developing computer code (in the R language) to estimate the various parameters of the model. In the first chapter, we define stationary processes, autoregressive processes, autoregressive conditionally heteroscedastic (ARCH) processes and generalized ARCH (GARCH) processes. We also present the ACD duration models and the MEM models. In the second chapter, we present the copula theory needed for our work, in the framework of the vector multiplicative error models (vMEM), and discuss possible estimation methods. In the third chapter, we discuss simulation results for several estimation methods. In the last chapter, applications to financial series are presented. The R code is provided in an appendix. A conclusion completes the thesis.
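To illustrate the copula construction described above, the sketch below draws non-negative innovation vectors with gamma margins coupled by a Gaussian copula. The shape/scale values and the correlation are arbitrary assumptions; the margins are scaled so each component has unit mean, a common vMEM normalization.

```python
import numpy as np
from scipy import stats

def gaussian_copula_gamma(n, shapes, scales, rho, seed=0):
    """Draw non-negative error vectors with gamma margins coupled by a
    Gaussian copula -- one way to build the vMEM innovation vector."""
    rng = np.random.default_rng(seed)
    d = len(shapes)
    corr = np.full((d, d), rho)
    np.fill_diagonal(corr, 1.0)
    z = rng.multivariate_normal(np.zeros(d), corr, size=n)
    u = stats.norm.cdf(z)                       # uniform margins
    eps = np.column_stack([
        stats.gamma.ppf(u[:, j], a=shapes[j], scale=scales[j])
        for j in range(d)
    ])
    return eps

# shape * scale = 1 for each margin, so E[eps_j] = 1
eps = gaussian_copula_gamma(10_000, shapes=[2.0, 3.0], scales=[0.5, 1 / 3], rho=0.6)
print(eps.min() >= 0, eps.mean(axis=0).round(2))
```

In contrast to a multivariate gamma, the copula decouples the choice of margins from the dependence structure, so the contemporaneous correlation is not restricted by the marginal parameters.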
Abstract:
Critical loads are the basis for policies controlling emissions of acidic substances in Europe and elsewhere. They are assessed by several elaborate and ingenious models, each of which requires many parameters and has to be applied on a spatially distributed basis. Often the values of the input parameters are poorly known, calling into question the validity of the calculated critical loads. This paper attempts to quantify the uncertainty in the critical loads due to this "parameter uncertainty", using examples from the UK. Models used for calculating critical loads for deposition of acidity and nitrogen in forest and heathland ecosystems were tested at four contrasting sites. Uncertainty was assessed by Monte Carlo methods. Each input parameter or variable was assigned a value, range and distribution in as objective a fashion as possible. Each model was run 5000 times at each site using parameters sampled from these input distributions, and output distributions of various critical load parameters were calculated. The results were surprising. Confidence limits of the calculated critical loads were typically considerably narrower than those of most of the input parameters. This may be due to a "compensation of errors" mechanism. The range of possible critical load values at a given site is however rather wide, and the tails of the distributions are typically long. The deposition reductions required for a high level of confidence that the critical load is not exceeded are thus likely to be large. The implication for pollutant regulation is that requiring a high probability of non-exceedance is likely to carry high costs. The relative contribution of the input variables to critical load uncertainty varied from site to site: any input variable could be important, and thus it was not possible to identify variables as likely targets for research into narrowing uncertainties.
Sites where a number of good measurements of input parameters were available had lower uncertainties, so use of in situ measurement could be a valuable way of reducing critical load uncertainty at particularly valuable or disputed sites. From a restricted number of samples, uncertainties in heathland critical loads appear comparable to those of coniferous forest, and nutrient nitrogen critical loads to those of acidity. It was important to include correlations between input variables in the Monte Carlo analysis, but choice of statistical distribution type was of lesser importance. Overall, the analysis provided objective support for the continued use of critical loads in policy development. (c) 2007 Elsevier B.V. All rights reserved.
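The Monte Carlo mechanics described above can be sketched as follows. The simple mass-balance formula and the input distributions here are invented placeholders; only the sampling-and-percentile workflow mirrors the paper's setup of 5000 runs per site.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 5000  # model runs per site, as in the paper

# Hypothetical input distributions (units: keq ha^-1 yr^-1); the real models
# use many more parameters -- this only shows the Monte Carlo machinery.
bc_w   = rng.lognormal(mean=np.log(0.5), sigma=0.3, size=N)  # base-cation weathering
bc_dep = rng.normal(0.3, 0.05, size=N)                       # base-cation deposition
anc_le = rng.normal(0.2, 0.04, size=N)                       # critical ANC leaching

cl = bc_w + bc_dep - anc_le          # toy mass-balance critical load
lo, med, hi = np.percentile(cl, [5, 50, 95])
print(f"critical load: median {med:.2f}, 90% interval [{lo:.2f}, {hi:.2f}]")
```

Correlations between inputs, which the paper found important, would be introduced by sampling the inputs jointly (e.g. from a multivariate distribution) rather than independently as above.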
Abstract:
We analyze the large time behavior of a stochastic model for the lay down of fibers on a moving conveyor belt in the production process of nonwovens. It is shown that under weak conditions this degenerate diffusion process has a unique invariant distribution and is even geometrically ergodic. This generalizes results from previous works [M. Grothaus and A. Klar, SIAM J. Math. Anal., 40 (2008), pp. 968–983; J. Dolbeault et al., arXiv:1201.2156] concerning the case of a stationary conveyor belt, in which the situation of a moving conveyor belt has been left open.
Abstract:
Global communication requirements and load imbalance of some parallel data mining algorithms are the major obstacles to exploiting the computational power of large-scale systems. This work investigates how non-uniform data distributions can be exploited to remove the global communication requirement and to reduce the communication cost in parallel data mining algorithms and, in particular, in the k-means algorithm for cluster analysis. In the straightforward parallel formulation of the k-means algorithm, data and computation loads are uniformly distributed over the processing nodes. This approach has excellent load balancing characteristics that may suggest it could scale up to large and extreme-scale parallel computing systems. However, at each iteration step the algorithm requires a global reduction operation, which hinders the scalability of the approach. This work studies a different parallel formulation of the algorithm in which the requirement of global communication is removed, while maintaining the same deterministic nature of the centralised algorithm. The proposed approach exploits a non-uniform data distribution which can either be found in real-world distributed applications or be induced by means of multi-dimensional binary search trees. The approach can also be extended to accommodate an approximation error which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested in a parallel computing system with 64 processors and in simulations with 1024 processing elements.
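For concreteness, the following sketches one iteration of the conventional parallel k-means the paper starts from: each "node" computes partial centroid sums and counts, and a global reduction (an MPI_Allreduce in a real system) merges them. The data, chunking and centroid seeds are toy assumptions.

```python
import numpy as np

def kmeans_step_distributed(chunks, centroids):
    """One k-means iteration over data split across 'nodes'. Each node
    computes partial sums/counts; the global reduction merges them --
    the communication step the paper seeks to remove."""
    k, d = centroids.shape
    sums = np.zeros((k, d))
    counts = np.zeros(k)
    for chunk in chunks:                      # each chunk lives on one node
        dist = np.linalg.norm(chunk[:, None, :] - centroids[None], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):                    # node-local partial reduction
            sel = chunk[labels == j]
            sums[j] += sel.sum(axis=0)
            counts[j] += len(sel)
    # global reduction: merge all partial results, then update centroids
    nonempty = counts > 0
    centroids = centroids.copy()
    centroids[nonempty] = sums[nonempty] / counts[nonempty, None]
    return centroids

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
chunks = np.array_split(data, 4)              # pretend 4 processing nodes
c = np.array([[1.0, 1.0], [4.0, 4.0]])
for _ in range(5):
    c = kmeans_step_distributed(chunks, c)
print(c.round(1))
```

The result is deterministic and identical to centralised k-means; the cost the paper targets is that every node must participate in the merge at every iteration.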
Abstract:
Demand response is believed by some to become a major contributor towards system balancing in future electricity networks. Shifting or reducing demand at critical moments can reduce the need for generation capacity, help with the integration of renewables, support more efficient system operation and thereby potentially lead to cost and carbon reductions for the entire energy system. In this paper we review the nature of the response resource of consumers from different non-domestic sectors in the UK, based on extensive half hourly demand profiles and observed demand responses. We further explore the potential to increase the demand response capacity through changes in the regulatory and market environment. The analysis suggests that present demand response measures tend to stimulate stand-by generation capacity in preference to load shifting and we propose that extended response times may favour load based demand response, especially in sectors with significant thermal loads.
Abstract:
Background and Aims: We have reported that the adverse effects on flow-mediated dilation (FMD) of an acute elevation of non-esterified fatty acids (NEFA) rich in saturated fat (SFA) are reversed following the addition of long-chain (LC) n-3 polyunsaturated fatty acids (PUFA), and hypothesised that these effects may be mediated through alterations in insulin signalling pathways. In a subgroup, we explored the effects of raised NEFA enriched with SFA, with or without LC n-3 PUFA, on whole body insulin sensitivity (SI) and the responsiveness of the endothelium to insulin infusion. Methods and Results: Thirty adults (mean age 27.8 y, BMI 23.2 kg/m2) consumed oral fat loads on separate occasions with continuous heparin infusion to elevate NEFA between 60 and 390 min. For the final 150 min, a hyperinsulinaemic-euglycaemic clamp was performed, whilst FMD and circulating markers of endothelial function were measured at baseline, pre-clamp (240 min) and post-clamp (390 min). NEFA elevation during the SFA-rich drinks was associated with impaired FMD (P=0.027), whilst SFA+LC n-3 PUFA improved FMD at 240 min (P=0.003). In males, insulin infusion attenuated the increase in FMD with SFA+LC n-3 PUFA (P=0.049), with SI 10% greater with SFA+LC n-3 PUFA than with SFA (P=0.041). Conclusion: This study provides evidence that NEFA composition during acute elevation influences both FMD and SI, with some indication of a difference by gender. However, our findings are not consistent with the hypothesis that the effects of fatty acids on endothelial function and SI operate through a common pathway. Trial registered at clinicaltrials.gov, NCT01351324.
Abstract:
The cold shock response in bacteria involves the expression of low-molecular weight cold shock proteins (CSPs) containing a nucleic acid-binding cold shock domain (CSD), which are known to destabilize secondary structures on mRNAs, facilitating translation at low temperatures. Caulobacter crescentus cspA and cspB are induced upon cold shock, while cspC and cspD are induced during stationary phase. In this work, we determined a new coding sequence for the cspC gene, revealing that it encodes a protein containing two CSDs. The phenotypes of C. crescentus csp mutants were analyzed, and we found that cspC is important for cells to maintain viability during extended periods in stationary phase. Also, cspC and cspCD strains presented altered morphology, with frequent non-viable filamentous cells, and cspCD also showed pronounced cell death at late stationary phase. In contrast, the cspAB mutant presented increased viability in this phase, which is accompanied by altered expression of both cspC and cspD, but the triple cspABD mutant loses this characteristic. Taken together, our results suggest that there is a hierarchy of importance among the csp genes regarding stationary phase viability, which is probably achieved by a finely tuned balance of the levels of these proteins.
Abstract:
We extend the renormalization operator introduced in [A. de Carvalho, M. Martens and M. Lyubich. Renormalization in the Henon family, I: universality but non-rigidity. J. Stat. Phys. 121(5/6) (2005), 611-669] from period-doubling Henon-like maps to Henon-like maps with arbitrary stationary combinatorics. We show that the renormalization picture also holds in this case if the maps are taken to be strongly dissipative. We study infinitely renormalizable maps F and show that they have an invariant Cantor set O on which F acts like a p-adic adding machine for some p > 1. We then show, as for the period-doubling case in the work of de Carvalho, Martens and Lyubich [Renormalization in the Henon family, I: universality but non-rigidity. J. Stat. Phys. 121(5/6) (2005), 611-669], that the sequence of renormalizations has a universal form, but that the invariant Cantor set O is non-rigid. We also show that O cannot possess a continuous invariant line field.
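As a minimal illustration of the combinatorial model, the p-adic adding machine (odometer) is the "add one with carry" map on base-p digit sequences, least-significant digit first. On three binary digits it cycles through all eight states, a finite shadow of the minimality the map exhibits on the invariant Cantor set.

```python
def adding_machine(x, p):
    """One step of the p-adic adding machine: add 1 with carry to a
    sequence of base-p digits, least-significant digit first."""
    x = list(x)
    for i in range(len(x)):
        if x[i] < p - 1:
            x[i] += 1
            return x
        x[i] = 0          # digit rolls over, carry to the next position
    return x              # all digits rolled over back to zero

orbit = [[0, 0, 0]]
for _ in range(7):
    orbit.append(adding_machine(orbit[-1], p=2))
print(orbit)   # visits all 8 states of the 3-digit 2-adic odometer
```

On the full space of infinite digit sequences this map has no periodic orbits and every orbit is dense, which is the dynamical behavior attributed to F on O.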
Abstract:
Fiber reinforced polymer composites have been widely applied in the aeronautical field. However, composite processing using unlocked molds should be avoided in view of the tight requirements and also due to possible environmental contamination. To produce high performance structural frames meeting aeronautical reproducibility and low cost criteria, the Brazilian industry has shown interest in investigating the resin transfer molding (RTM) process, a closed-mold pressure injection system which allows faster gel and cure times. Due to the anisotropic and non-homogeneous characteristics of fibrous composites, their fatigue behavior is a complex phenomenon quite different from that of metallic materials, and is crucial to investigate for aeronautical applications. Fatigue sub-scale specimens of intermediate-modulus carbon fiber non-crimp multi-axial reinforcement and a mono-component epoxy system were produced according to ASTM D 3039. Axial fatigue tests were carried out according to ASTM D 3479, using a sinusoidal load of 10 Hz frequency and load ratio R = 0.1. A high fatigue life interval was observed for the NCF/RTM6 composites. Weibull statistical analysis was applied to describe the failure probability of the materials under cyclic loads, and fracture patterns were observed by scanning electron microscopy. (C) 2010 Published by Elsevier Ltd.
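One common way to apply Weibull statistics to fatigue scatter is median-rank regression on the cycles-to-failure data. The sketch below uses Bernard's approximation for the median ranks and hypothetical lives; the paper's actual data and fitting method may differ.

```python
import numpy as np

def weibull_fit(lives):
    """Median-rank regression for the two-parameter Weibull distribution,
    a standard way to describe fatigue-life scatter under cyclic loads."""
    x = np.sort(np.asarray(lives, dtype=float))
    n = x.size
    ranks = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # Bernard's approximation
    X = np.log(x)
    Y = np.log(-np.log(1.0 - ranks))                  # linearized Weibull CDF
    beta, intercept = np.polyfit(X, Y, 1)             # slope = shape parameter
    eta = np.exp(-intercept / beta)                   # scale (characteristic life)
    return beta, eta

lives = [95e3, 120e3, 140e3, 160e3, 185e3, 220e3]     # hypothetical cycles to failure
beta, eta = weibull_fit(lives)
print(f"shape beta = {beta:.2f}, characteristic life eta = {eta:.0f} cycles")
```

The fitted shape parameter beta quantifies the scatter (larger beta means less scatter), and eta is the life at which 63.2% of specimens are expected to have failed.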
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Objective. To determine the influence of cement thickness and ceramic/cement bonding on stresses and failure of CAD/CAM crowns, using both multi-physics finite element analysis and monotonic testing. Methods. Axially symmetric FEA models were created for stress analysis of a stylized monolithic crown having resin cement thicknesses from 50 to 500 µm under occlusal loading. The ceramic-cement interface was modeled as bonded or not bonded (cement-dentin as bonded). Cement polymerization shrinkage was simulated as a thermal contraction. Loads necessary to reach stresses for radial cracking from the intaglio surface were calculated by FEA. Experimentally, feldspathic CAD/CAM crowns based on the FEA model were machined with different occlusal cementation spaces, etched and cemented to dentin analogs. Non-bonding of the etched ceramic was achieved using a thin layer of poly(dimethylsiloxane). Crowns were loaded to failure at 5 N/s, with radial cracks detected acoustically. Results. Failure loads depended on the bonding condition and the cement thickness for both FEA and physical testing. Average fracture loads for bonded crowns were 673.5 N at 50 µm cement thickness and 300.6 N at 500 µm. FEA stresses due to polymerization shrinkage increased with the cement thickness, overwhelming the protective effect of bonding, as was also seen experimentally. At 50 µm cement thickness, bonded crowns withstood at least twice the load before failure of non-bonded crowns. Significance. Occlusal "fit" can have structural implications for CAD/CAM crowns; pre-cementation spaces around 50-100 µm are recommended from this study. Bonding benefits were lost at thicknesses approaching 450-500 µm due to polymerization shrinkage stresses. (C) 2012 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Abstract:
In this work, the problem of transporting loads (on platforms or suspended by cables) is considered. The system in question consists of a mono-rail system and was modeled as an inverted pendulum, car and motor; the equations of motion were obtained through the Lagrange equations. The model considers the interaction between the motor and the system dynamics for several motor powers; that is, the case studied is a non-ideal periodic problem. The dynamics of the non-ideal periodic problem were analyzed qualitatively by comparing numerically obtained stability diagrams for several motor torque constants. Furthermore, a quantitative analysis of the problem was made through the Floquet multipliers. Finally, the non-ideal problem was controlled. The method used for the analysis and control of non-ideal periodic systems is based on the Chebyshev polynomial expansion, the Picard iterative method and the Lyapunov-Floquet transformation (L-F transformation). This method was presented recently in [3-9].
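The quantitative tool mentioned above, the Floquet multipliers, can be computed for any linear periodic system by integrating the fundamental matrix over one period and taking the eigenvalues of the resulting monodromy matrix; the system is stable when all multipliers lie inside the unit circle. The sketch below does this for a Mathieu-type equation as a stand-in for the linearized pendulum dynamics; the parameter values are illustrative, chosen inside the principal parametric resonance tongue.

```python
import numpy as np

def floquet_multipliers(delta, eps, steps=2000):
    """Floquet multipliers of x'' + (delta + eps*cos t) x = 0: integrate
    the fundamental matrix over one period (2*pi) with RK4 and take the
    eigenvalues of the monodromy matrix."""
    T = 2 * np.pi
    h = T / steps

    def rhs(t, M):
        A = np.array([[0.0, 1.0], [-(delta + eps * np.cos(t)), 0.0]])
        return A @ M

    M = np.eye(2)
    t = 0.0
    for _ in range(steps):
        k1 = rhs(t, M)
        k2 = rhs(t + h / 2, M + h / 2 * k1)
        k3 = rhs(t + h / 2, M + h / 2 * k2)
        k4 = rhs(t + h, M + h * k3)
        M = M + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return np.linalg.eigvals(M)

# delta = 1/4 sits at the center of the principal resonance tongue, so one
# multiplier leaves the unit circle (parametric instability)
mu = floquet_multipliers(delta=0.25, eps=0.1)
print(np.abs(mu))
```

Because the system is trace-free, the product of the multipliers is 1, which gives a useful check on the numerical integration.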
Abstract:
Standard test methods (e.g. ASTM, DIN) for materials characterization in general, and for fatigue in particular, do not contemplate specimens with complex geometries, nor the combination of axial and in-plane bending loads, in their methodologies. The present study refers to some patents and to new, non-standardized configurations of specimens, together with a device developed to induce combined axial and bending forces resulting from the axial loads applied by any test equipment (dynamic or monotonic) possessing such a limitation, towards obtaining more realistic results on the fatigue behavior, or even the basic mechanical properties, of geometrically complex structures. Motivated by a specific and geometrically complex aeronautic structure (a motor-cradle), non-standardized welded tubular specimens made from AISI 4130 steel were fatigue-tested at room temperature, using a constant-amplitude sinusoidal load of 20 Hz frequency and load ratio R = 0.1, with and without the above-mentioned auxiliary fatigue apparatus. The results showed the apparatus was efficient in introducing a higher stress concentration factor at the welded specimen joints, consequently reducing the fatigue strength compared to the other conditions. From the obtained results it is possible to infer that, with small modifications, the proposed apparatus will be capable of testing a great variety of specimen configurations, such as square tubes and plates with welded or melted junctions, as well as other materials such as aluminum, titanium, composites, polymers and plastics. © 2009 Bentham Science Publishers Ltd.