986 results for Small-error approximation
Abstract:
This thesis develops bootstrap methods for the factor models that have been widely used to generate forecasts since the pioneering work of Stock and Watson (2002) on diffusion indices. These models accommodate a large number of macroeconomic and financial variables as predictors, a useful feature for incorporating the diverse information available to economic agents. The thesis therefore proposes econometric tools that improve inference in factor models using latent factors extracted from a large panel of observed predictors. It is divided into three complementary chapters, the first two in collaboration with Sílvia Gonçalves and Benoit Perron.

In the first chapter, we study how bootstrap methods can be used for inference in models that forecast h periods into the future. To this end, we examine bootstrap inference in a factor-augmented regression setting where the errors may be autocorrelated. We generalize the results of Gonçalves and Perron (2014) and propose and justify two residual-based approaches: the block wild bootstrap and the dependent wild bootstrap. Our simulations show improved coverage rates for confidence intervals of the estimated coefficients under these approaches, compared with asymptotic theory and the wild bootstrap, when the regression errors are serially correlated.

The second chapter proposes bootstrap methods for constructing prediction intervals that relax the assumption of normally distributed innovations. We propose bootstrap prediction intervals for an observation h periods into the future and for its conditional mean, assuming the forecasts are made using a set of factors extracted from a large panel of variables. Because we treat these factors as latent, our forecasts depend on both the estimated factors and the estimated regression coefficients. Under regularity conditions, Bai and Ng (2006) proposed asymptotic intervals under the assumption of Gaussian innovations. The bootstrap allows us to relax this assumption and to construct prediction intervals that are valid under more general conditions. Moreover, even under Gaussianity, the bootstrap yields more accurate intervals when the cross-sectional dimension is relatively small, because it accounts for the bias of the ordinary least squares estimator, as shown in a recent study by Gonçalves and Perron (2014).

In the third chapter, we suggest consistent selection procedures for factor-augmented regressions in finite samples. We first show that the usual cross-validation method is inconsistent, but that its generalization, leave-d-out cross-validation, selects the smallest set of estimated factors spanning the space generated by the true factors. The second criterion whose validity we establish generalizes the bootstrap approximation of Shao (1996) to factor-augmented regressions. Simulations show an improved probability of parsimoniously selecting the estimated factors compared with available selection methods. The empirical application revisits the relationship between macroeconomic and financial factors and excess returns on the US stock market. Among the factors estimated from a large panel of US macroeconomic and financial data, the factors strongly correlated with interest rate spreads and the Fama-French factors have good predictive power for excess returns.
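A minimal sketch, in Python, of the residual-based block wild bootstrap idea described above (the data, block length, and regressors are hypothetical; the thesis's actual procedures operate on factor-augmented regressions with estimated factors): residuals are multiplied by one external random draw per block, preserving serial correlation within blocks, and the regression is re-estimated on each bootstrap sample.

    import numpy as np

    def block_wild_bootstrap(y, X, block_len=4, B=999, seed=None):
        """Residual-based block wild bootstrap for a linear regression."""
        rng = np.random.default_rng(seed)
        T = len(y)
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta
        betas = np.empty((B, X.shape[1]))
        n_blocks = -(-T // block_len)  # ceiling division
        for b in range(B):
            # one N(0,1) draw per block of consecutive residuals,
            # preserving serial correlation within each block
            eta = np.repeat(rng.standard_normal(n_blocks), block_len)[:T]
            y_star = X @ beta + resid * eta
            betas[b] = np.linalg.lstsq(X, y_star, rcond=None)[0]
        return beta, betas

    # hypothetical usage: percentile interval for the second coefficient
    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(200), rng.standard_normal((200, 2))])
    y = X @ np.array([0.5, 1.0, -0.3]) + rng.standard_normal(200)
    beta_hat, beta_star = block_wild_bootstrap(y, X, seed=1)
    lo, hi = np.percentile(beta_star[:, 1], [2.5, 97.5])

Percentile intervals computed from the bootstrap draws are the kind of confidence intervals whose coverage the simulations evaluate.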
Abstract:
This article is concerned with the numerical detection of bifurcation points of nonlinear partial differential equations as some parameter of interest is varied. In particular, we study in detail the numerical approximation of the Bratu problem, based on exploiting the symmetric version of the interior penalty discontinuous Galerkin finite element method. A framework for a posteriori control of the discretization error in the computed critical parameter value is developed based upon the application of the dual weighted residual (DWR) approach. Numerical experiments are presented to highlight the practical performance of the proposed a posteriori error estimator.
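For reference, the Bratu problem is the semilinear elliptic equation $-\Delta u = \lambda e^{u}$ in $\Omega$ with $u = 0$ on $\partial\Omega$: for $0 < \lambda < \lambda^{*}$ there are two solutions, exactly one at the fold point $\lambda = \lambda^{*}$, and none beyond it; in one dimension on $(0,1)$, $\lambda^{*} \approx 3.5138$. The a posteriori estimator developed here controls the discretization error in the computed critical value $\lambda^{*}$.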
Abstract:
This lecture course covers the theory of so-called duality-based a posteriori error estimation of DG finite element methods. In particular, we formulate consistent and adjoint consistent DG methods for the numerical approximation of both the compressible Euler and Navier-Stokes equations; in the latter case, the viscous terms are discretized based on employing an interior penalty method. By exploiting a duality argument, adjoint-based a posteriori error indicators will be established. Moreover, application of these computable bounds within automatic adaptive finite element algorithms will be developed. Here, a variety of isotropic and anisotropic adaptive strategies, as well as $hp$-mesh refinement will be investigated.
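In generic form, the duality argument works as follows: for a target functional $J(\cdot)$ and a consistent scheme with weak residual $R(u_h; \cdot)$, the solution $z$ of the adjoint problem yields the error representation $J(u) - J(u_h) = R(u_h; z - z_h)$ for any discrete function $z_h$, by Galerkin orthogonality (up to a higher-order remainder for nonlinear problems). Replacing $z$ by a computed approximation on an enriched space gives the computable, element-wise indicators that drive the adaptive algorithms.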
Abstract:
In this work, we propose an inexpensive laboratory practice for an introductory physics laboratory course in any science or engineering program. The practice was very well received by our students. A smartphone (iOS, Android, or Windows) is used together with mini magnets (similar to those used on refrigerator doors), a 20 cm school ruler, a sheet of paper, and a free app that measures magnetic fields using the smartphone's magnetic field sensor, or magnetometer. The apps we have used are: Magnetometer (iOS), Magnetometer Metal Detector, and Physics Toolbox Magnetometer (Android). Nothing else is needed; the cost of the practice is zero. The main purpose of the practice is for students to determine how the x-component of the magnetic field produced by different magnets (including ring magnets and sphere magnets) depends on distance. We find that the field falls off as $x^{-3}$, in full agreement with the theoretical analysis. The secondary objective is to apply the least-squares fitting technique to obtain this exponent and the magnetic moment of the magnets, with the corresponding absolute errors.
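A minimal sketch, in Python, of the least-squares step (the measurements below are hypothetical): on the magnet axis the dipole field is $B_x = \mu_0 m / (2\pi x^{3})$, so a straight-line fit of $\ln B$ against $\ln x$ gives the exponent as the slope and the magnetic moment from the intercept.

    import numpy as np

    mu0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

    # hypothetical readings: distance x (m) from the ruler, field B_x (T) from the app
    x = np.array([0.04, 0.05, 0.06, 0.08, 0.10, 0.12])
    B = np.array([3.1e-5, 1.6e-5, 9.3e-6, 3.9e-6, 2.0e-6, 1.2e-6])

    # log B = log(mu0*m/(2*pi)) + n*log x, so fit a straight line in log-log space
    (slope, intercept), cov = np.polyfit(np.log(x), np.log(B), 1, cov=True)
    slope_err, intercept_err = np.sqrt(np.diag(cov))

    m = 2 * np.pi * np.exp(intercept) / mu0  # magnetic moment, A*m^2
    m_err = m * intercept_err                # propagated absolute error

    print(f"exponent = {slope:.2f} +/- {slope_err:.2f} (theory: -3)")
    print(f"magnetic moment = {m:.4f} +/- {m_err:.4f} A*m^2")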
Abstract:
New materials for OLED applications with low singlet–triplet energy splitting have recently been synthesized to allow the conversion of triplet into singlet excitons (which emit light) via a Thermally Activated Delayed Fluorescence (TADF) process, which involves excited states with a non-negligible amount of Charge Transfer (CT). Accurately modeling these states with Time-Dependent Density Functional Theory (TD-DFT), the most widely used method owing to its favorable trade-off between accuracy and computational cost, is however particularly challenging. We carefully address this issue here by considering materials with small (large) singlet–triplet gaps acting as emitters (hosts) in OLEDs, and by comparing the accuracy of TD-DFT with that of the corresponding Tamm-Dancoff Approximation (TDA), which is found to greatly reduce error bars with respect to experiments thanks to better estimates of the lowest singlet–triplet transition. Finally, we quantitatively correlate the singlet–triplet splitting values with the extent of CT, using a simple metric extracted from calculations with double-hybrid functionals that might be applied in further molecular engineering studies.
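The underlying physics: to first order, the singlet–triplet splitting of a given excitation is twice the exchange integral between the hole and electron densities, $\Delta E_{ST} = E(S_1) - E(T_1) \approx 2K$. Pronounced CT character reduces the hole-electron overlap and hence $K$, which is why small-gap TADF emitters are also strongly CT-like and why a CT metric can be expected to correlate with the splitting.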
Abstract:
Virtually every sector of business and industry that uses computing, including financial analysis, search engines, and electronic commerce, incorporates Big Data analysis into its business model. Sophisticated clustering algorithms are popular for deducing the nature of data by assigning labels to unlabeled data. We address two main challenges in Big Data. First, by definition, the volume of Big Data is too large to be loaded into a computer's memory (the threshold depends on the computer used or available, but there is always a data set too large for any computer). Second, in real-time applications, the velocity of new incoming data prevents historical data from being stored and future data from being accessed. We therefore propose our Streaming Kernel Fuzzy c-Means (stKFCM) algorithm, which significantly reduces both computational complexity and space complexity. The proposed stKFCM requires only O(n²) memory, where n is the (predetermined) size of the data subset (or chunk) at each time step, which makes the algorithm truly scalable (n can be chosen based on the available memory). Furthermore, only 2n² elements of the full N × N kernel matrix (where N ≫ n) need to be calculated at each time step, reducing both the time spent producing kernel elements and the complexity of the FCM step. Empirical results show that stKFCM, even with relatively small n, can provide clustering performance as accurate as that of kernel fuzzy c-means run on the entire data set, while achieving a significant speedup.
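A minimal sketch, in Python, of the kernel fuzzy c-means update that such a streaming scheme applies to each incoming chunk (this is the generic KFCM step on a precomputed chunk kernel matrix, not the full stKFCM algorithm, which additionally passes weighted summary points between chunks):

    import numpy as np

    def kernel_fcm(K, c, m=2.0, n_iter=100, tol=1e-6, seed=None):
        """Kernel fuzzy c-means on a precomputed n x n kernel matrix K."""
        rng = np.random.default_rng(seed)
        n = K.shape[0]
        U = rng.random((c, n))
        U /= U.sum(axis=0)                    # memberships sum to 1 per point
        for _ in range(n_iter):
            W = U ** m                        # fuzzified memberships, c x n
            s = W.sum(axis=1, keepdims=True)  # per-cluster normalizers, c x 1
            # squared feature-space distance from each point to each implicit center
            d2 = (np.diag(K)[None, :]
                  - 2 * (W @ K) / s
                  + ((W @ K) * W).sum(axis=1, keepdims=True) / s ** 2)
            d2 = np.maximum(d2, 1e-12)
            inv = d2 ** (-1.0 / (m - 1.0))
            U_new = inv / inv.sum(axis=0)     # standard FCM membership update
            if np.abs(U_new - U).max() < tol:
                return U_new
            U = U_new
        return U

    # hypothetical usage on one data chunk with an RBF kernel
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 2))
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    U = kernel_fcm(np.exp(-sq / 2.0), c=3, seed=1)

Because everything is expressed through the n × n chunk kernel matrix, memory stays O(n²) regardless of the full data size N.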
Abstract:
This paper estimates the small open economy model with financial frictions of Bejarano and Charry (2014) for the Colombian economy using Bayesian estimation techniques. Additionally, I compute the welfare gains of incorporating an optimal response to credit spreads into an augmented Taylor rule. The main result is that reacting to credit spreads does not imply significant welfare gains unless the volatility of economic disturbances increases, as in the disruption implied by a financial crisis; otherwise, its impact on the macroeconomic variables is negligible.
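In generic notation (illustrative, not necessarily the paper's), a spread-augmented rule of this kind takes the form $i_t = \rho\, i_{t-1} + (1-\rho)(\phi_\pi \pi_t + \phi_y \hat{y}_t) + \phi_s s_t$, where $s_t$ is the credit spread; the welfare exercise asks whether an optimally chosen $\phi_s \neq 0$ improves on the standard rule with $\phi_s = 0$.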
Abstract:
The aim of this study was to analyze the frequency of epidermal growth factor receptor (EGFR) mutations in Brazilian non-small cell lung cancer (NSCLC) patients and to correlate these mutations with benefit from platinum-based chemotherapy. Our cohort consisted of prospectively enrolled patients with NSCLC who received chemotherapy (platinum derivatives plus paclitaxel) at the [UNICAMP], Brazil. EGFR exons 18-21 were analyzed in tumor-derived DNA. Fifty patients were included in the study (25 with adenocarcinoma). EGFR mutations were identified in 6/50 (12%) NSCLCs and in 6/25 (24%) adenocarcinomas, representing the frequency of EGFR mutations in a mostly self-reported White (82.0%) southeastern Brazilian NSCLC population. Patients whose NSCLCs harbored EGFR exon 19 deletions or the exon 21 L858R mutation had a higher chance of response to platinum-paclitaxel (OR 9.67 [95% CI 1.03-90.41], p = 0.047). We report a frequency of EGFR activating mutations in a typical southeastern Brazilian NSCLC population similar to that of other countries with predominantly Western European ancestry. EGFR mutations appear to be predictive of response to platinum-paclitaxel, and additional studies are needed to confirm or refute this relationship.
Abstract:
The purpose of this study was to compare the behavior of full-term small-for-gestational-age (SGA) infants with that of full-term appropriate-for-gestational-age (AGA) infants in the first year of life. We prospectively evaluated 68 infants at the 2nd month, 67 at the 6th month, and 69 at the 12th month. The Bayley Scales of Infant Development-II were used, with emphasis on the Behavior Rating Scale (BRS). The groups were similar on the item interest in test materials and stimuli; there was a trend toward differences in the items negative affect, hypersensitivity to test materials, and adaptation to change in test materials. The mean raw score was significantly lower in the SGA group for the items predominant state, lability of state of arousal, positive affect, soothability when upset, energy, exploration of objects and surroundings, and orientation toward examiner. A lower BRS score was associated with the SGA group at the 2nd month.
Abstract:
The reconstruction of the external ear to correct congenital deformities or repair damage following trauma remains a significant challenge in reconstructive surgery. Previously, we developed a novel approach to create scaffold-free, tissue-engineered elastic cartilage constructs directly from a small population of donor cells. Although the developed constructs adopted the structural appearance of native auricular cartilage, they displayed limited expression and poor localization of elastin. In the present study, the effect of growth factor supplementation (insulin, IGF-1, or TGF-β1) was investigated as a means to stimulate elastogenesis and improve overall tissue formation. Using rabbit auricular chondrocytes, bioreactor-cultivated constructs supplemented with either insulin or IGF-1 displayed increased deposition of cartilaginous ECM, improved mechanical properties, and thicknesses comparable to native auricular cartilage after 4 weeks of growth. Similarly, growth factor supplementation resulted in increased expression and improved localization of elastin, primarily restricted to the cartilaginous region of the tissue construct. Additional studies were conducted to determine whether scaffold-free engineered auricular cartilage constructs could be developed in the 3D shape of the external ear. Isolated auricular chondrocytes were grown in rapid-prototyped tissue culture molds with insulin or IGF-1 supplementation during bioreactor cultivation. Using this approach, the developed tissue constructs were flexible and had a 3D shape in very good agreement with the culture mold (average error <400 µm). While scaffold-free engineered auricular cartilage constructs can thus be created with both the appropriate tissue structure and the 3D shape of the external ear, future studies will be aimed at assessing potential changes in construct shape and properties after subcutaneous implantation.
Abstract:
Prey size is an important factor in food consumption. In studies of feeding ecology, prey items are usually measured individually using calipers or ocular micrometers. Among amphibians and reptiles, there are species that feed on large numbers of small prey items (e.g. ants, termites). This high intake makes it difficult to estimate the prey sizes consumed by these animals. We addressed this problem by developing and evaluating a procedure for subsampling the stomach contents of such predators in order to estimate prey size. Specifically, we developed a protocol based on a bootstrap procedure to obtain a subsample with a precision error of at most 5%, with a confidence level of at least 95%. This guideline should reduce the sampling effort, facilitate future studies on the feeding habits of amphibians and reptiles, and provide a means of obtaining precise estimates of prey size.
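A minimal sketch, in Python, of a bootstrap calibration of this kind (the measurements and the exact stopping rule are hypothetical; the paper's protocol may differ in detail): for increasing candidate subsample sizes, resample from the measured items and accept the smallest size whose mean prey length falls within 5% of the full-sample mean in at least 95% of replicates.

    import numpy as np

    def min_subsample_size(prey_lengths, rel_err=0.05, conf=0.95, B=2000, seed=0):
        """Smallest subsample size whose bootstrap mean stays within
        rel_err of the full-sample mean with probability >= conf."""
        rng = np.random.default_rng(seed)
        x = np.asarray(prey_lengths, dtype=float)
        target = x.mean()
        for k in range(5, len(x) + 1):
            # draw B subsamples of size k and check the relative error
            means = rng.choice(x, size=(B, k), replace=True).mean(axis=1)
            if (np.abs(means - target) <= rel_err * target).mean() >= conf:
                return k
        return len(x)

    # hypothetical stomach contents: 300 ant body lengths (mm)
    lengths = np.random.default_rng(1).gamma(shape=9.0, scale=0.5, size=300)
    print(min_subsample_size(lengths))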
Abstract:
In Brazil, pedestrian-induced vibration on footbridges has been studied since the early 1990s for concrete and steel footbridges; however, there are no recorded studies of this kind for timber footbridges. The Brazilian code ABNT NBR 7190 (1997) gives design requirements only for static loads on timber footbridges, without considering the serviceability limit state associated with pedestrian-induced vibrations. The aim of this work is to perform theoretical dynamic, numerical, and experimental analyses of simply-supported timber footbridges, using a small-scale model developed from a 24 m span, 2 m wide timber footbridge with two main timber beams. Span and width were scaled down (1:4) to 6 m and 0.5 m, respectively. Among the conclusions reached herein, it is emphasized that Euler-Bernoulli beam theory is suitable for calculating the first vertical and lateral natural frequencies of simply-supported timber footbridges; however, special attention should be given to the evaluation of lateral bending stiffness, as it leads to conservative values.
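For reference, the Euler-Bernoulli estimate for a simply-supported span gives the $n$-th natural frequency as $f_n = \frac{n^2 \pi}{2L^2}\sqrt{EI/\bar{m}}$, where $L$ is the span, $EI$ the relevant (vertical or lateral) bending stiffness, and $\bar{m}$ the mass per unit length; the direct dependence of the lateral frequency on $EI$ is why the evaluation of lateral bending stiffness deserves the special attention noted above.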
Abstract:
This work is part of a line of research, in progress since 2000, whose main objective is to measure small dynamic displacements using L1 GPS receivers. A very sensitive way to detect millimetric periodic displacements is the Phase Residual Method (PRM), which is based on frequency-domain analysis of the phase residuals resulting from L1 double-difference static processing of data from two satellites at nearly orthogonal elevation angles. In this article, we propose obtaining the phase residuals directly from the raw phase observable collected on a short baseline over a limited time span, instead of obtaining the residual file from standard GPS processing programs, which do not always allow the desired satellites to be chosen. To improve the ability to detect millimetric oscillations, two filtering techniques are introduced: autocorrelation, which reduces phase noise with random time behavior, and a running mean, which separates low-frequency from high-frequency phase sources. Two trials were carried out to verify the proposed method and filtering techniques: one simulates a 2.5 mm vertical antenna displacement, and the second uses GPS data collected during a bridge load test. The results show good consistency in detecting millimetric oscillations.
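A minimal sketch, in Python, of the two filtering steps described above (the sampling rate, window length, and signal are hypothetical): a running mean splits the phase residuals into low- and high-frequency parts, and the autocorrelation of the high-frequency part suppresses random phase noise relative to any periodic displacement, whose frequency can then be read off the spectrum.

    import numpy as np

    def running_mean(x, window):
        """Centered moving average: the low-frequency part of the series."""
        return np.convolve(x, np.ones(window) / window, mode='same')

    def dominant_frequency(phase_residuals, fs, window=51):
        # running mean splits the residuals into trend and high-frequency parts
        high = phase_residuals - running_mean(phase_residuals, window)
        # autocorrelation attenuates random phase noise relative to
        # any periodic displacement signal
        ac = np.correlate(high, high, mode='full')[len(high) - 1:]
        ac /= ac[0]
        spec = np.abs(np.fft.rfft(ac))
        freqs = np.fft.rfftfreq(len(ac), d=1.0 / fs)
        return freqs[1 + np.argmax(spec[1:])]  # skip the DC bin

    # hypothetical use: 1 Hz phase residuals (m) with a 2 mm, 0.1 Hz oscillation
    t = np.arange(1800.0)
    resid = (0.002 * np.sin(2 * np.pi * 0.1 * t)
             + 0.001 * np.random.default_rng(2).standard_normal(t.size))
    print(dominant_frequency(resid, fs=1.0))  # ~0.1 Hz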