991 results for Box-Jenkins method


Relevance: 30.00%

Abstract:

The display tray holds the specimens over a thin cotton layer glued to thick paper attached to the CD holder tray. Although only a temporary storage method, it is a good alternative compared to other layer models. It offers low cost, protection of specimens, minimal or no damage, and good visibility through its cover.

Relevance: 30.00%

Abstract:

Bridge deck deterioration due to the corrosive effect of deicers on reinforcing steel is a major problem facing many agencies. Cathodic protection is one method used to prevent reinforcing steel corrosion: the application of a direct current to the embedded reinforcing steel and a sacrificial anode protects the steel from corrosion. This 1992 project involved placing an Elgard titanium anode mesh cathodic protection system on a bridge deck. The anode was fastened to the deck after the Class A repair work, and the overlay was placed using the Iowa Low Slump Dense Concrete System. The system was initially set at 1 mA/sq ft.

Relevance: 30.00%

Abstract:

When decommissioning a nuclear facility, it is important to be able to estimate the activity levels of potentially radioactive samples and compare them with the clearance values defined by regulatory authorities. This paper presents a method of calibrating a clearance box monitor based on practical experimental measurements and Monte Carlo simulations. Adjusting the simulation against experimental data obtained with a simple point source permits the computation of absolute calibration factors for more complex geometries with an accuracy of slightly more than 20%. The uncertainty of the calibration factor can be improved to about 10% when the simulation is used relatively, in direct comparison with a measurement performed in the same geometry but with another nuclide. The simulation can also be used to validate the experimental calibration procedure when the sample is assumed to be homogeneous but the calibration factor is derived from a plate phantom. For more realistic geometries, such as a small gravel dumpster, Monte Carlo simulation shows that the calibration factor obtained with a larger homogeneous phantom is correct to within about 20%, if sample density is taken as the influencing parameter. Finally, simulation can be used to estimate the effect of a contamination hotspot. The research supporting this paper shows that, if the sample is assumed to be homogeneously contaminated, activity could be largely underestimated in the event of a centrally-located hotspot and overestimated for a peripherally-located hotspot. This demonstrates the usefulness of being able to complement experimental methods with Monte Carlo simulations in order to estimate calibration factors that cannot be directly measured because of a lack of available material or specific geometries.
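The hotspot effect described above can be illustrated with a toy Monte Carlo sketch (not the paper's simulation): using a simple exponential attenuation model, a gamma emitted under more cover reaches the detector less often, so a central hotspot looks less active than a homogeneous source. The box size and attenuation coefficient below are illustrative assumptions only.

```python
import math
import random

def detected_fraction(depth_cm, mu=0.2, n=10_000, seed=0):
    """Monte Carlo estimate of the fraction of gammas escaping `depth_cm`
    of material between source and detector, using a simple exponential
    attenuation model exp(-mu * depth)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if rng.random() < math.exp(-mu * depth_cm))
    return hits / n

# Hypothetical 30 cm sample box, mu = 0.2 per cm (illustrative values only).
central = detected_fraction(15.0)      # hotspot at the centre: 15 cm of cover
peripheral = detected_fraction(1.0)    # hotspot near the surface: 1 cm of cover
homogeneous = detected_fraction(7.5)   # crude stand-in for a uniform source

# A calibration factor derived for a homogeneous sample therefore
# underestimates a central hotspot and overestimates a peripheral one:
print(central < homogeneous < peripheral)  # True
```

This reproduces only the qualitative ordering the abstract reports; the real calibration uses full transport simulation, not a single-path attenuation model.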

Relevance: 30.00%

Abstract:

The paper presents a method of analyzing rigid frames using conjugate beam theory. The development of the method is given along with an example. The method has been used to write a computer program for the analysis of twin box culverts, which may be analyzed under any fill height and any of the standard truck loadings. The wall and slab thicknesses are increased by the program as necessary. The final results are the steel requirements for both moment and shear, and the slab and wall thicknesses.

Relevance: 30.00%

Abstract:

The thesis covers various aspects of modeling and analysis of finite-mean time series with symmetric stable distributed innovations. Time series analysis based on Box-Jenkins methods is the most popular approach, in which the models are linear and the errors are Gaussian. We highlight the limitations of classical time series analysis tools, explore some generalized tools, and organize the approach parallel to the classical setup. The thesis mainly studies the estimation and prediction of the signal-plus-noise model, where the signal and the noise are assumed to follow models with symmetric stable innovations.

The thesis starts with some motivating examples and application areas of alpha-stable time series models. Classical time series analysis and the corresponding theories based on finite-variance models are extensively discussed in the second chapter, which also surveys the existing theories and methods for infinite-variance models. The third chapter presents a linear filtering method for computing the filter weights assigned to the observations when estimating an unobserved signal in a general noisy environment, where both the signal and the noise are stationary processes with infinite-variance innovations. Semi-infinite, doubly infinite, and asymmetric signal extraction filters are derived based on a minimum dispersion criterion. Finite-length filters based on Kalman-Levy filters are developed, and the pattern of the filter weights is identified. Simulation studies show that the proposed methods are competent in signal extraction for processes with infinite variance.

Parameter estimation of autoregressive signals observed in a symmetric stable noise environment is discussed in the fourth chapter, using higher-order Yule-Walker-type estimation based on the auto-covariation function; the methods are exemplified by simulation and by application to sea surface temperature data. The number of Yule-Walker equations is increased, and an ordinary least squares estimate of the autoregressive parameters is proposed. The singularity problem of the auto-covariation matrix is addressed, and a modified version of the generalized Yule-Walker method is derived using singular value decomposition.

The fifth chapter introduces the partial covariation function as a tool for stable time series analysis, where the covariance or partial covariance is ill defined. Asymptotic results for the partial auto-covariation are studied, and its application to model identification of stable autoregressive models is discussed. The Durbin-Levinson algorithm is generalized to include infinite-variance models in terms of the partial auto-covariation function, and a new information criterion for consistent order estimation of stable autoregressive models is introduced.

Chapter six explores the application of these techniques in signal processing, in particular frequency estimation of sinusoidal signals observed in a symmetric stable noisy environment. A parametric spectrum analysis and a frequency estimate using the power transfer function are introduced, with the power transfer function estimated via the modified generalized Yule-Walker approach. Another important problem in statistical signal processing is identifying the number of sinusoidal components in an observed signal; a modified version of the proposed information criterion is used for this purpose.
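As a point of reference for the Yule-Walker-type estimators discussed above, here is a minimal sketch of the classical finite-variance Yule-Walker method for an AR(p) model, built from sample autocovariances. It is the Gaussian-noise analogue of the covariation-based estimators in the thesis, not the thesis's method itself; the AR(2) coefficients below are illustrative.

```python
import numpy as np

def yule_walker(x, p):
    """Estimate AR(p) coefficients by solving the Yule-Walker equations
    R phi = gamma[1:], where R[i, j] = gamma(|i - j|) is built from
    sample autocovariances (finite-variance setting)."""
    x = np.asarray(x) - np.mean(x)
    n = len(x)
    gamma = np.array([x[: n - k] @ x[k:] / n for k in range(p + 1)])
    R = np.array([[gamma[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, gamma[1:])

# Simulate an AR(2) process x_t = 0.5 x_{t-1} - 0.3 x_{t-2} + e_t
# with Gaussian innovations, then recover the coefficients.
rng = np.random.default_rng(0)
n = 20_000
e = rng.standard_normal(n)
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + e[t]

phi = yule_walker(x, 2)
print(np.round(phi, 2))  # close to [0.5, -0.3]
```

For stable innovations the autocovariance is undefined, which is precisely why the thesis replaces it with the auto-covariation in these equations.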

Relevance: 30.00%

Abstract:

A method is presented for determining the time to first division of individual bacterial cells growing on agar media. Bacteria were inoculated onto agar-coated slides and viewed by phase-contrast microscopy. Digital images of the growing bacteria were captured at intervals and the time to first division estimated by calculating the "box area ratio". This is the area of the smallest rectangle that can be drawn around an object, divided by the area of the object itself. The box area ratios of cells were found to increase suddenly during growth at a time that correlated with cell division as estimated by visual inspection of the digital images. This was caused by a change in the orientation of the two daughter cells that occurred when sufficient flexibility arose at their point of attachment. This method was used successfully to generate lag time distributions for populations of Escherichia coli, Listeria monocytogenes and Pseudomonas aeruginosa, but did not work with the coccoid organism Staphylococcus aureus. This method provides an objective measure of the time to first cell division, whilst automation of the data processing allows a large number of cells to be examined per experiment.
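The box area ratio defined above is simple to compute on a binary image mask. The sketch below uses an axis-aligned bounding rectangle for simplicity (the paper's smallest enclosing rectangle may be rotated), with tiny synthetic masks standing in for the microscopy images.

```python
import numpy as np

def box_area_ratio(mask):
    """Box area ratio of a binary object mask: area of the smallest
    axis-aligned bounding rectangle divided by the area of the object."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    box_area = (r1 - r0 + 1) * (c1 - c0 + 1)
    return box_area / mask.sum()

# A straight rod-shaped "cell": the bounding box fits tightly, ratio = 1.
rod = np.zeros((7, 7), dtype=bool)
rod[3, 1:6] = True
print(box_area_ratio(rod))   # 1.0

# Two daughter "cells" bent at an angle after division: the bounding box
# now encloses empty space, so the ratio jumps - the division signal.
bent = np.zeros((7, 7), dtype=bool)
bent[3, 1:4] = True   # first daughter, horizontal
bent[0:3, 3] = True   # second daughter, vertical
print(box_area_ratio(bent))  # 2.0
```

The sudden jump in the ratio when the daughter cells reorient is what the authors use to timestamp first division automatically.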

Relevance: 30.00%

Abstract:

A method for linearly constrained optimization that modifies and generalizes recent box-constrained optimization algorithms is introduced. The new algorithm is based on a relaxed form of Spectral Projected Gradient iterations. Intercalated with these projected steps, internal iterations restricted to faces of the polytope are performed, which enhance the efficiency of the algorithm. Convergence proofs are given, and numerical experiments are included and discussed. Software supporting this paper is available through the Tango Project web page: http://www.ime.usp.br/~egbirgin/tango/.
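For the box-constrained case that these projected steps build on, a minimal Spectral Projected Gradient sketch looks as follows. This is an assumption-laden simplification, not the paper's algorithm: it keeps only the projection onto the box and the Barzilai-Borwein spectral steplength, omitting the nonmonotone line search and the face iterations.

```python
import numpy as np

def spg_box(grad, x0, lo, hi, iters=100):
    """Minimal Spectral Projected Gradient sketch for min f(x), lo <= x <= hi:
    take a gradient step with the Barzilai-Borwein spectral steplength and
    project it onto the box (line-search safeguards omitted)."""
    x = np.clip(np.asarray(x0, float), lo, hi)
    g, lam = grad(x), 1.0
    for _ in range(iters):
        x_new = np.clip(x - lam * g, lo, hi)             # projected step
        s, y = x_new - x, grad(x_new) - g
        if np.linalg.norm(s) < 1e-10:                    # fixed point: stop
            break
        lam = s @ s / (s @ y) if s @ y > 1e-12 else 1.0  # BB steplength
        x, g = x_new, grad(x_new)
    return x

# Hypothetical test problem: min ||x - c||^2 over the box [0, 1]^3; the
# minimizer is the projection of c onto the box, here (1, 0, 0.4).
c = np.array([2.0, -1.0, 0.4])
x_star = spg_box(lambda x: 2 * (x - c), np.zeros(3), 0.0, 1.0)
print(np.round(x_star, 6))  # -> approximately [1, 0, 0.4]
```

The full method wraps these projected steps with internal face iterations, which is where the paper's efficiency gains come from.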

Relevance: 30.00%

Abstract:

Global optimization seeks a minimum or maximum of a multimodal function over a discrete or continuous domain. In this paper, we propose a hybrid heuristic, based on the CGRASP and GENCAN methods, for finding approximate solutions to continuous global optimization problems subject to box constraints. Experimental results illustrate the relative effectiveness of CGRASP-GENCAN on a set of benchmark multimodal test functions.

Relevance: 30.00%

Abstract:

A Nonlinear Programming algorithm that converges to second-order stationary points is introduced in this paper. The main tool is a second-order negative-curvature method for box-constrained minimization of a certain class of functions that do not possess continuous second derivatives. This method is used to define an Augmented Lagrangian algorithm of PHR (Powell-Hestenes-Rockafellar) type. Convergence proofs under weak constraint qualifications are given. Numerical examples showing that the new method converges to second-order stationary points in situations in which first-order methods fail are exhibited.
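To make the PHR structure concrete, here is a minimal sketch of the basic Augmented Lagrangian multiplier loop for a single equality constraint, with a crude gradient-descent inner solver. It shows only the outer scheme the paper builds on; the paper's contribution, the second-order negative-curvature inner method for nonsmooth functions, is not represented here, and the test problem and all parameters are illustrative assumptions.

```python
import numpy as np

def phr_equality(f_grad, h, h_grad, x0, rho=10.0, outer=20, inner=500, lr=0.01):
    """Augmented Lagrangian loop for one equality constraint h(x) = 0:
    minimise f(x) + lam*h(x) + (rho/2)*h(x)^2 over x, then apply the
    first-order multiplier update lam += rho * h(x)."""
    x, lam = np.asarray(x0, float), 0.0
    for _ in range(outer):
        for _ in range(inner):  # crude gradient-descent inner solver
            g = f_grad(x) + (lam + rho * h(x)) * h_grad(x)
            x = x - lr * g
        lam += rho * h(x)       # multiplier update
    return x, lam

# Hypothetical problem: min x1^2 + x2^2  s.t.  x1 + x2 = 1.
# KKT solution: x = (0.5, 0.5) with multiplier lam = -1.
f_grad = lambda x: 2 * x
h = lambda x: x[0] + x[1] - 1
h_grad = lambda x: np.array([1.0, 1.0])
x, lam = phr_equality(f_grad, h, h_grad, [0.0, 0.0])
print(np.round(x, 3), round(lam, 3))
```

In the paper this outer scheme is paired with a box-constrained inner solver that exploits negative curvature, which is what yields convergence to second-order stationary points where first-order methods stall.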

Relevance: 30.00%

Abstract:

Augmented Lagrangian methods for large-scale optimization usually require efficient algorithms for minimization with box constraints. On the other hand, active-set box-constraint methods employ unconstrained optimization algorithms for minimization inside the faces of the box. Several approaches may be employed for computing internal search directions in the large-scale case. In this paper a minimal-memory quasi-Newton approach with secant preconditioners is proposed, taking into account the structure of Augmented Lagrangians that come from the popular Powell-Hestenes-Rockafellar scheme. A combined algorithm, which uses the quasi-Newton formula or a truncated-Newton procedure depending on the presence of active constraints in the penalty-Lagrangian function, is also suggested. Numerical experiments using the CUTE collection are presented.

Relevance: 30.00%

Abstract:

The negative-dimensional integration method is a technique that can be applied with success in usual covariant gauge calculations. We consider three two-loop diagrams: the scalar massless non-planar double box with six propagators, and the scalar pentabox in two cases, one where six virtual particles have the same mass and one where all of them are massless. Our results are given in terms of hypergeometric functions of the Mandelstam variables, and hold for arbitrary exponents of the propagators and dimension D.

Relevance: 30.00%

Abstract:

We present a measurement of the top quark mass with the matrix element method in the lepton+jets final state. As the energy scale for calorimeter jets represents the dominant source of systematic uncertainty, the matrix element likelihood is extended by an additional parameter, defined as a global multiplicative factor applied to the standard energy scale. The top quark mass is obtained from a fit that yields the combined statistical and systematic jet energy scale uncertainty. Using a data set of 0.4 fb^-1 taken with the D0 experiment at Run II of the Fermilab Tevatron Collider, the mass of the top quark measured using topological information is m_top(lepton+jets, topo) = 169.2 +5.0/-7.4 (stat+JES) +1.5/-1.4 (syst) GeV, and when information about identified b jets is included, m_top(lepton+jets, b-tag) = 170.3 +4.1/-4.5 (stat+JES) +1.2/-1.8 (syst) GeV. The measurements yield a jet energy scale consistent with the reference scale.

Relevance: 30.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 30.00%

Abstract:

OBJECTIVE: To evaluate the influence of cavity design and photocuring method on the marginal seal of resin composite restorations. METHOD AND MATERIALS: Seventy-two bovine teeth were divided into 2 groups: group 1 received box-type cavity preparations, and group 2 received plate-type preparations. Each group was divided into 3 subgroups. After etching and bonding, Z250 resin composite (3M Espe) was applied in 2 equal increments and cured with 1 of 3 techniques: (1) conventional curing for 30 seconds at 650 mW/cm2; (2) 2-step photocuring, in which the first step was performed 14 mm from the restoration for 10 seconds at 180 mW/cm2 and the second step in direct contact for 20 seconds at 650 mW/cm2; or (3) progressive curing using Jetlite 4000 (J. Morita) for 8 seconds at 125 mW/cm2, followed by 22 seconds ramping from 125 mW/cm2 up to 500 mW/cm2. The specimens were thermocycled for 500 cycles and then submitted to dye penetration with a 50% silver nitrate solution. Microleakage was assessed using a stereomicroscope. Data were analyzed using analysis of variance and the Tukey test (5% level of significance). RESULTS: A statistically significant difference was found between groups when the interaction between photocuring method and cavity preparation was considered (P = .029). CONCLUSIONS: No type of cavity preparation or photocuring method prevented microleakage. The plate-type preparation showed the worst dye penetration when the conventional and progressive photocuring methods were used. The best results were found using 2-step photocuring with the plate-type preparation.

Relevance: 30.00%

Abstract:

The Box-Cox transformation is a technique widely used to make the probability distribution of time series data approximately normal, which helps statistical and neural models produce more accurate forecasts. However, it introduces a bias when the transformation is reversed on the predicted data. The statistical methods for performing a bias-free reversion necessarily assume Gaussianity of the transformed data distribution, which is rare in real-world time series. The aim of this study was therefore to provide an effective method of removing the bias when the reversion of the Box-Cox transformation is executed. The developed method is based on a focused time-lagged feedforward neural network, which does not require any assumption about the transformed data distribution. To evaluate the performance of the proposed method, numerical simulations were conducted and the Mean Absolute Percentage Error, the Theil Inequality Index, and the Signal-to-Noise Ratio of 20-step-ahead forecasts of 40 time series were compared; the results indicate that the proposed reversion method is valid and justifies new studies.
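The reversion bias discussed above is easy to demonstrate for the lambda = 0 case of the Box-Cox transformation (the log transform): naively exponentiating a forecast of the transformed mean recovers the median, not the mean. The sketch below shows the bias and the closed-form Gaussian correction that the paper's neural-network approach is meant to replace for non-Gaussian cases; the lognormal parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Lognormal "time series" values: the log transform (Box-Cox, lambda = 0)
# makes them exactly Gaussian with mean 1.0 and sigma 0.5.
z_mu, z_sigma = 1.0, 0.5
y = rng.lognormal(mean=z_mu, sigma=z_sigma, size=100_000)

# A forecast in the transformed domain typically targets E[log y].
z_hat = np.log(y).mean()

# Naive reversion exp(z_hat) estimates the *median* of y, not the mean:
naive = np.exp(z_hat)    # ~ e^mu, about 2.72 here
true_mean = y.mean()     # ~ e^(mu + sigma^2/2), about 3.08 here

# Under Gaussianity the bias has a closed-form correction:
corrected = np.exp(z_hat + np.log(y).var() / 2)

print(round(naive, 2), round(true_mean, 2), round(corrected, 2))
```

The naive reversion systematically undershoots the mean, and the correction term depends on the Gaussianity assumption holding in the transformed domain, which motivates the distribution-free neural reversion proposed in the paper.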