917 results for Generalized Least Squares Estimation
Abstract:
In 1957, the Iowa State Highway Commission, with financial assistance from the aluminum industry, constructed a 220-ft (67-m) long, four-span continuous, aluminum girder bridge to carry traffic on Clive Road (86th Street) over Interstate 80 near Des Moines, Iowa. The bridge had four welded I-shaped girders that were fabricated in pairs with welded diaphragms between an exterior and an interior girder; the interior diaphragms between the girder pairs were bolted to girder brackets. A composite, reinforced concrete deck served as the roadway surface. The bridge, which had performed successfully for about 35 years of service, was removed in the fall of 1993 to make way for an interchange at the same location. Prior to the demolition, load tests were conducted to monitor girder and diaphragm bending strains and deflections in the northern end span. Fatigue testing of the aluminum girders removed from the end spans was conducted by applying constant-amplitude cyclic loads. These tests established the fatigue strength of an existing welded flange-splice detail and of added welded flange cover plate and horizontal web-plate attachment details. This part, Part 2, of the final report focuses on the fatigue tests of the aluminum girder sections that were removed from the bridge and on the analysis of the experimental data to establish the fatigue strength of full-size specimens. Seventeen fatigue fractures classified as Category E weld details developed in the seven girder test specimens. Linear regression analyses of the fatigue test results established both nominal and experimental stress-range versus load-cycle relationships (SN curves) for the fatigue strength of fillet-welded connections. The nominal-strength SN curve obtained in this research essentially matched the SN curve for Category E aluminum weldments given in the AASHTO LRFD specifications.
All of the Category E fatigue fractures that developed in the girder test specimens satisfied the allowable SN relationship specified by the fatigue provisions of the Aluminum Association. The lower-bound strength line, set at two standard deviations below the least squares regression line through the fatigue fracture data points, agreed well with the Aluminum Association SN curve. The experimental tests of this research have provided additional information on the behavioral characteristics of full-size aluminum members and have confirmed that aluminum has the strength properties needed for highway bridge girders.
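As a sketch of the regression procedure described above: fatigue data are conventionally fitted by linear regression in log-log space, and a lower-bound design line is set two standard deviations below the mean regression line. The stress/cycle values below are synthetic, for illustration only; they are not the bridge test data.

```python
import numpy as np

# Synthetic fatigue data (hypothetical stress ranges S in MPa and
# cycles to failure N) -- NOT the bridge test data.
S = np.array([70.0, 60.0, 50.0, 45.0, 40.0, 35.0, 30.0])
N = np.array([1.2e5, 2.1e5, 4.5e5, 6.0e5, 9.5e5, 1.6e6, 2.8e6])

# SN curves are conventionally linear in log-log space:
# log10(N) = a + b * log10(S), with slope b < 0.
x, y = np.log10(S), np.log10(N)
b, a = np.polyfit(x, y, 1)

# Scatter of the data about the least-squares regression line.
resid = y - (a + b * x)
s = resid.std(ddof=2)        # two fitted parameters

# Lower-bound strength line: two standard deviations below the mean line.
a_lower = a - 2.0 * s

def cycles_mean(stress):
    """Mean-regression estimate of cycles to failure at a stress range."""
    return 10.0 ** (a + b * np.log10(stress))

def cycles_lower(stress):
    """Lower-bound (mean minus two sigma) estimate of cycles to failure."""
    return 10.0 ** (a_lower + b * np.log10(stress))
```

The lower-bound curve is parallel to the mean curve in log-log space and always predicts fewer cycles at a given stress range, which is what a design allowable requires.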
Abstract:
The objective of this work was to parameterize and evaluate the DSSAT/Canegro model for five Brazilian sugarcane varieties. The parameterization was based on biometric and growth data for the varieties CTC 4, CTC 7, CTC 20, RB 86-7515, and RB 83-5486, obtained at five Brazilian sites. A local sensitivity analysis was performed for the main parameters. Model parameterization was carried out using the generalized likelihood uncertainty estimation (GLUE) technique. To evaluate the predictions, the coefficient of determination (R²), Willmott's D index, and the root mean square error (RMSE) were used as statistical indicators. The CTC varieties showed D indices between 0.870 and 0.944 for leaf area index, stalk height, tillering, and sucrose content. The variety RB 83-5486 showed similar results for sucrose content and stalk fresh mass, whereas the variety RB 86-7515 showed values between 0.665 and 0.873 for the evaluated variables.
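The statistical indicators mentioned above (R², Willmott's D index, and RMSE) can be computed directly; a minimal sketch with hypothetical observed/simulated values, not the sugarcane data:

```python
import numpy as np

def rmse(obs, sim):
    """Root mean square error."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def r2(obs, sim):
    """Coefficient of determination of simulated against observed values."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    ss_res = np.sum((obs - sim) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

def willmott_d(obs, sim):
    """Willmott's index of agreement D (1 = perfect agreement)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    num = np.sum((sim - obs) ** 2)
    den = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return float(1.0 - num / den)

# Hypothetical observed vs. simulated stalk heights (m), illustration only.
obs = [0.5, 1.1, 1.8, 2.4, 2.9]
sim = [0.6, 1.0, 1.7, 2.5, 3.1]
```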
Abstract:
BACKGROUND AND PURPOSE: Knowledge of cerebral blood flow (CBF) alterations in cases of acute stroke could be valuable in the early management of these cases. Among imaging techniques affording evaluation of cerebral perfusion, perfusion CT involves sequential acquisition of cerebral CT sections in an axial mode during the IV administration of iodinated contrast material, and is thus very easy to perform in emergency settings. Perfusion CT values of CBF have proved to be accurate in animals, and perfusion CT affords plausible values in humans. The purpose of this study was to validate perfusion CT studies of CBF against the results provided by stable xenon CT, which have been reported to be accurate, and to evaluate acquisition and processing modalities of the CT data, notably the possible deconvolution methods and the selection of the reference artery. METHODS: Twelve stable xenon CT and perfusion CT cerebral examinations were performed within an interval of a few minutes in patients with various cerebrovascular diseases. CBF maps were obtained from the perfusion CT data by deconvolution using singular value decomposition and least mean square methods. The CBF values were compared with the stable xenon CT results in multiple regions of interest through linear regression analysis and bilateral t tests for matched variables. RESULTS: Linear regression analysis showed good correlation between perfusion CT and stable xenon CT CBF values (singular value decomposition method: R(2) = 0.79, slope = 0.87; least mean square method: R(2) = 0.67, slope = 0.83). Bilateral t tests for matched variables did not identify a significant difference between the two imaging methods (P > .1), and the two deconvolution methods were equivalent (P > .1). The choice of the reference artery is a major concern and has a strong influence on the final perfusion CT CBF map.
CONCLUSION: Perfusion CT studies of CBF achieved with adequate acquisition parameters and processing lead to accurate and reliable results.
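A minimal sketch of singular-value-decomposition deconvolution as used in perfusion analysis: the tissue curve is modeled as the convolution of the arterial input function (AIF) with a flow-scaled residue function, and the convolution matrix is inverted with small singular values truncated. The AIF, residue function, and flow value below are synthetic; with noisy clinical data a much larger cutoff is needed.

```python
import numpy as np

def svd_deconvolve(aif, tissue, dt, rel_cutoff=0.2):
    """Recover the flow-scaled residue function F*R(t) by deconvolving the
    tissue curve with the arterial input function (AIF), stabilising the
    inversion by truncating small singular values."""
    n = len(aif)
    # Lower-triangular Toeplitz matrix representing discrete convolution.
    A = np.zeros((n, n))
    for i in range(n):
        A[i, : i + 1] = aif[i::-1]
    A *= dt
    U, s, Vt = np.linalg.svd(A)
    # Zero out singular values below a fraction of the largest one.
    s_inv = np.where(s > rel_cutoff * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ tissue))

# Synthetic, noise-free example (arbitrary units): exponential AIF,
# exponential residue function scaled by a known "flow" value.
dt = 1.0
t = np.arange(40) * dt
aif = np.exp(-t / 5.0)
flow = 0.6
residue = flow * np.exp(-t / 8.0)
tissue = dt * np.convolve(aif, residue)[: len(t)]  # forward model

# Noise-free, so a tiny cutoff suffices; noisy data needs a larger one.
fr = svd_deconvolve(aif, tissue, dt, rel_cutoff=1e-6)
cbf_est = fr.max()   # the flow estimate is the peak of the recovered curve
```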
Abstract:
The aim of this thesis is to examine the weak-form efficiency of the Russian, Slovak, Czech, Romanian, Bulgarian, Hungarian, and Polish stock markets. The study is quantitative, and daily index closing values were collected from the Datastream database, from each exchange's first trading day until the end of August 2006. To strengthen the analysis, the data were examined over the full period as well as over two subperiods. Market efficiency was tested with four statistical methods, including an autocorrelation test and the nonparametric runs test. A further aim is to determine whether a day-of-the-week anomaly exists in these markets; its presence is tested with ordinary least squares (OLS) regression. A day-of-the-week anomaly is found in all of the markets listed above except the Czech market. Significant positive or negative autocorrelation is found in all the markets, and the Ljung-Box test likewise indicates inefficiency in all markets over the full period. On the basis of the runs test, the random walk hypothesis is rejected for all markets except Slovakia, at least over the full period and the first subperiod. Moreover, the data are not normally distributed for any index or period. These findings indicate that the markets in question are not weak-form efficient.
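The nonparametric runs test mentioned above can be sketched as a Wald-Wolfowitz runs test about the median: too few runs signal positive dependence and reject randomness. The series below are simulated, not the index data.

```python
import numpy as np

def runs_test_z(x):
    """Wald-Wolfowitz runs test about the median. Under the random-walk
    null the sign sequence is random and z is approximately N(0, 1);
    a large |z| rejects randomness (few runs => positive dependence)."""
    x = np.asarray(x, float)
    above = x > np.median(x)
    n1 = int(above.sum())
    n2 = len(x) - n1
    runs = 1 + int(np.sum(above[1:] != above[:-1]))  # sign changes + 1
    n = n1 + n2
    mean_runs = 2.0 * n1 * n2 / n + 1.0
    var_runs = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n ** 2 * (n - 1.0))
    return (runs - mean_runs) / np.sqrt(var_runs)

# Simulated series: iid returns should pass, while a strongly
# persistent (cumulated) series should fail with a very negative z.
rng = np.random.default_rng(0)
z_iid = runs_test_z(rng.normal(size=2000))
z_persistent = runs_test_z(np.cumsum(rng.normal(size=2000)))
```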
Abstract:
The purpose of this study is to examine whether calendar anomalies exist in the Russian stock market. The study focuses on the Halloween, month-of-the-year, turn-of-the-month, day-of-the-week, and holiday anomalies. The RTS (Russian Trading System) index is used as the research data. The study period begins on 1 September 1995 and ends on 31 December 2005, giving 2,584 observations in total. Ordinary least squares (OLS) regression is used as the research method. The results show that the Halloween, turn-of-the-month, and day-of-the-week anomalies are present in the Russian stock market, whereas the month-of-the-year and holiday anomalies are not. The results also show that most of the anomalies are more pronounced today than during the first years of the Russian stock market. On the basis of these results, the Russian stock market cannot yet be considered efficient.
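The OLS approach to a day-of-the-week anomaly is typically a regression of daily returns on weekday dummies; without an intercept, each coefficient is simply that weekday's mean return. The returns below are simulated with an injected Monday effect (all numbers hypothetical; only the sample size matches the 2,584 observations above).

```python
import numpy as np

def weekday_mean_returns(returns, weekdays):
    """OLS of returns on five weekday dummies (no intercept); each
    coefficient is then simply that weekday's mean return."""
    returns = np.asarray(returns, float)
    X = np.zeros((len(returns), 5))
    X[np.arange(len(returns)), weekdays] = 1.0   # weekday coded 0..4
    coef, *_ = np.linalg.lstsq(X, returns, rcond=None)
    return coef

# Simulated daily returns with an injected Monday effect.
rng = np.random.default_rng(1)
n = 2584
weekdays = np.arange(n) % 5
returns = rng.normal(0.0005, 0.02, size=n)
returns[weekdays == 0] -= 0.004                  # hypothetical Monday effect

coef = weekday_mean_returns(returns, weekdays)
```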
Abstract:
The aim of this thesis is to examine the efficiency of the Chinese stock markets and the validity of the random walk hypothesis, and also to determine whether a day-of-the-week anomaly exists in these markets. The data consist of the daily logarithmic returns of the Shanghai Stock Exchange A-share, B-share, and composite indices and of the Shenzhen composite index for 21 February 1992 to 30 December 2005, as well as of the Shenzhen Stock Exchange A-share and B-share indices for 5 October 1992 to 30 December 2005. Four statistical methods are used, including the autocorrelation test, the nonparametric runs test, the variance ratio test, and the augmented Dickey-Fuller unit root test. The presence of a day-of-the-week anomaly is tested with ordinary least squares (OLS) regression. The tests are run on the full sample as well as on three separate subperiods. The empirical results of this thesis support earlier findings of inefficiency in the Chinese stock markets. With the exception of the unit root test results, the random walk hypothesis is rejected for both Chinese stock markets on the basis of the autocorrelation, runs, and variance ratio tests. The results show that on both exchanges the behavior of the B-share indices has deviated considerably more from the random walk hypothesis than that of the A-share indices. Except for the B-share markets, the efficiency of both Chinese stock markets also appeared to improve after the 2001 market boom. The results further show that a day-of-the-week anomaly is present on the Shanghai Stock Exchange over the full sample period, but not on the Shenzhen Stock Exchange.
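The variance ratio test can be sketched as a Lo-MacKinlay-style statistic: the variance of overlapping q-period returns divided by q times the one-period variance, which should be close to 1 under a random walk. The random-walk and autocorrelated series below are simulated, not the exchange data.

```python
import numpy as np

def variance_ratio(returns, q):
    """Lo-MacKinlay-style variance ratio: variance of overlapping
    q-period returns divided by q times the one-period variance.
    Close to 1 under a random walk; above 1 for positive serial
    correlation, below 1 for mean reversion."""
    r = np.asarray(returns, float)
    r = r - r.mean()
    var1 = np.mean(r ** 2)
    rq = np.convolve(r, np.ones(q), mode="valid")  # overlapping q-sums
    return float(np.mean(rq ** 2) / (q * var1))

# Simulated returns: random-walk increments versus an AR(1) series
# with autoregressive coefficient 0.4.
rng = np.random.default_rng(2)
rw = rng.normal(size=5000)
ar = np.empty(5000)
ar[0] = rng.normal()
for i in range(1, 5000):
    ar[i] = 0.4 * ar[i - 1] + rng.normal()

vr_rw = variance_ratio(rw, 5)
vr_ar = variance_ratio(ar, 5)
```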
Abstract:
For many years, numerous studies of stock markets have reported temporal regularities in stock prices that cannot be explained by market fundamentals. These so-called calendar anomalies typically occur at temporal turning points, such as the turn of the year, month, or week. Breaks in trading routines, such as public holidays, have also been found to cause anomalies. The aim of this study was to examine whether calendar anomalies observed in stock markets are also present in the Nordic electricity market. The anomalies examined were the day-of-the-week, month-of-the-year, turn-of-the-month, and holiday anomalies. In addition, the behavior of returns around option expiration dates was examined. Instead of individual products, the analyses were carried out on yearly products constructed from seasonal and quarterly products. Testing used ordinary least squares regression, taking into account the effects of heteroskedasticity, autocorrelation, and multicollinearity. In addition to the calendar variables alone, the tests were run with regression models that included the spot price, the emission allowance price, and/or precipitation forecasts as additional explanatory variables. The study period covered the years 1998-2006.
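One standard way to account for heteroskedasticity and autocorrelation in such OLS tests is a Newey-West (HAC) covariance estimator; a hand-rolled Bartlett-kernel sketch on simulated data (the return series, dummy, and lag choice below are hypothetical, not the electricity-market specification):

```python
import numpy as np

def ols_newey_west(X, y, lags):
    """OLS point estimates with Newey-West (Bartlett-kernel) standard
    errors, robust to heteroskedasticity and autocorrelation up to
    the given lag."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta
    Xu = X * u[:, None]                     # score contributions x_t * u_t
    S = Xu.T @ Xu / n
    for l in range(1, lags + 1):
        w = 1.0 - l / (lags + 1.0)          # Bartlett weight
        G = Xu[l:].T @ Xu[:-l] / n
        S += w * (G + G.T)
    Q_inv = np.linalg.inv(X.T @ X / n)
    cov = Q_inv @ S @ Q_inv / n             # sandwich covariance
    return beta, np.sqrt(np.diag(cov))

# Simulated example: returns on an intercept and a "Monday" dummy.
rng = np.random.default_rng(3)
n = 1000
monday = (np.arange(n) % 5 == 0).astype(float)
y = 0.001 - 0.003 * monday + rng.normal(0.0, 0.01, size=n)
X = np.column_stack([np.ones(n), monday])
beta, se = ols_newey_west(X, y, lags=5)
```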
Abstract:
There has been a lack of quick, simple, and reliable methods for determining nanoparticle size. The size of hydrophobic (CdSe) and hydrophilic (CdSe/ZnS) quantum dots was investigated by using the position of the maximum of the corresponding fluorescence spectrum. Fluorescence spectroscopy was found to be a simple and reliable method for estimating the size of both quantum dot types. For a given solution, the homogeneity of the quantum dot sizes is reflected in the relationship between the fluorescence maximum position (FMP) and the quantum dot size. This methodology can be extended to other fluorescent nanoparticles. Applying evolving factor analysis and multivariate curve resolution-alternating least squares (MCR-ALS) to decompose a series of quantum dot fluorescence spectra, recorded by a specific measuring procedure, reveals the number of quantum dot fractions having different diameters. The size of the quantum dots in a particular fraction is defined by the FMP of the corresponding component in the decomposed spectrum. These results show that combining fluorescence with an appropriate statistical method for decomposing the emission spectra of nanoparticles can provide a quick and trustworthy screening of the size inhomogeneity of their solutions.
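A bare-bones sketch of the MCR-ALS idea: a matrix of mixture spectra is factored into non-negative concentration profiles and component spectra by alternating least squares (here with simple clipping rather than a full NNLS solver). The Gaussian emission bands below are synthetic stand-ins for quantum dot fractions with different fluorescence maximum positions.

```python
import numpy as np

def mcr_als(D, n_components, n_iter=200, seed=0):
    """Bare-bones MCR-ALS: factor a matrix of mixture spectra D
    (samples x wavelengths) into non-negative concentration profiles C
    and component spectra S, D ~= C @ S, by alternating least squares
    with clipping to enforce non-negativity."""
    rng = np.random.default_rng(seed)
    C = rng.random((D.shape[0], n_components))
    for _ in range(n_iter):
        S = np.linalg.lstsq(C, D, rcond=None)[0].clip(min=0)
        C = np.linalg.lstsq(S.T, D.T, rcond=None)[0].T.clip(min=0)
    return C, S

# Synthetic stand-in for quantum dot fractions: two Gaussian emission
# bands with different maximum positions, mixed in known proportions.
wl = np.linspace(400.0, 700.0, 150)
band = lambda mu, sig: np.exp(-0.5 * ((wl - mu) / sig) ** 2)
S_true = np.vstack([band(500.0, 20.0), band(580.0, 25.0)])
C_true = np.array([[1.0, 0.1], [0.6, 0.5], [0.2, 1.0], [0.8, 0.8]])
D = C_true @ S_true

C_est, S_est = mcr_als(D, 2)
recon_error = np.linalg.norm(D - C_est @ S_est) / np.linalg.norm(D)
```

The FMP of each resolved fraction would then be read off as the wavelength of the maximum of each row of `S_est`.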
Abstract:
We present the first density model of Stromboli volcano (Aeolian Islands, Italy), obtained by simultaneously inverting land-based (543) and sea-surface (327) relative gravity data. Modern positioning technology, a 1 × 1 m digital elevation model, and a 15 × 15 m bathymetric model made it possible to obtain a detailed 3-D density model through an iteratively reweighted smoothness-constrained least-squares inversion that explained the land-based gravity data to 0.09 mGal and the sea-surface data to 5 mGal. Our inverse formulation avoids introducing any assumptions about density magnitudes. At 125 m depth below the land surface, the inferred mean density of the island is 2380 kg m⁻³, with corresponding 2.5 and 97.5 percentiles of 2200 and 2530 kg m⁻³. This density range covers the rock densities of new and previously published samples of the Paleostromboli I, Vancori, Neostromboli, and San Bartolo lava flows. High-density anomalies in the central and southern parts of the island can be related to two main degassing faults crossing the island (N41 and NM) that are interpreted as preferential regions of dyke intrusion. In addition, two low-density anomalies are found in the northeastern part and in the summit area of the island. These anomalies appear to be geographically related to past paroxysmal explosive phreato-magmatic events that played important roles in the evolution of Stromboli Island by forming the Scari caldera and the Neostromboli crater, respectively.
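The smoothness-constrained least-squares idea (without the iterative reweighting used in the study) can be sketched as a Tikhonov problem with a first-difference roughness operator: minimise the data misfit plus a penalty on model roughness. The kernel and model below are one-dimensional toy stand-ins for the gravity problem, with all values illustrative.

```python
import numpy as np

def smooth_lsq(G, d, lam):
    """Smoothness-constrained least squares: minimise
    ||G m - d||^2 + lam * ||L m||^2 with a first-difference roughness
    operator L, solved as a single stacked least-squares problem."""
    n = G.shape[1]
    L = np.diff(np.eye(n), axis=0)            # (n-1, n) roughness operator
    A = np.vstack([G, np.sqrt(lam) * L])
    b = np.concatenate([d, np.zeros(n - 1)])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

# Toy stand-in: a smooth density anomaly observed through a smoothing
# kernel, with noisy synthetic data.
rng = np.random.default_rng(4)
n = 60
x = np.linspace(0.0, 1.0, n)
m_true = np.exp(-((x - 0.4) / 0.1) ** 2)
G = 1.0 / (1.0 + 50.0 * (x[:, None] - x[None, :]) ** 2)
d = G @ m_true + rng.normal(0.0, 0.05, size=n)

m_rough = smooth_lsq(G, d, lam=1e-8)          # nearly unregularised
m_smooth = smooth_lsq(G, d, lam=1e-1)         # roughness penalised
```

Increasing `lam` trades data fit for smoothness; the iteratively reweighted variant in the study additionally re-scales the penalty between iterations.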
Abstract:
Recent advances in machine learning increasingly enable the automatic construction of various kinds of computer-assisted methods that have been difficult or laborious to program by human experts. Tasks requiring such tools arise in many areas, here especially in the fields of bioinformatics and natural language processing. Machine learning methods may not work satisfactorily if they are not appropriately tailored to the task in question, but their learning performance can often be improved by taking advantage of deeper insight into the application domain or the learning problem at hand. This thesis considers how kernel-based learning algorithms can incorporate this kind of prior knowledge of the task in question to advantage. Moreover, computationally efficient algorithms for training the learning machines for specific tasks are presented. In kernel-based learning methods, prior knowledge is often incorporated by designing appropriate kernel functions. Another well-known approach is to develop cost functions that fit the task under consideration. For disambiguation tasks in natural language, we develop kernel functions that take account of positional information and the mutual similarities of words, and we show that the use of this information significantly improves the disambiguation performance of the learning machine. Further, we design a new cost function that is better suited to the task of information retrieval, and to ranking problems in general, than the cost functions designed for regression and classification. We also consider other applications of kernel-based learning algorithms, such as text categorization and pattern recognition in differential display. We develop computationally efficient algorithms for training the considered learning machines with the proposed kernel functions.
We also design a fast cross-validation algorithm for regularized least-squares learning algorithms. Further, an efficient version of the regularized least-squares algorithm that can be used together with the new cost function for preference learning and ranking tasks is proposed. In summary, we demonstrate that the incorporation of prior knowledge is possible and beneficial, and that novel advanced kernels and cost functions can be used efficiently in algorithms.
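Fast cross-validation for regularized least squares rests on a closed-form leave-one-out identity: for a linear smoother the LOO residual is e_i / (1 - h_ii), so no refitting is required. A sketch of this standard ridge/RLS shortcut (not necessarily the thesis's exact algorithm), checked against brute-force leave-one-out:

```python
import numpy as np

def ridge_loocv_fast(X, y, lam):
    """Closed-form leave-one-out error for regularized least squares:
    the LOO residual is e_i / (1 - h_ii), so no refitting is needed."""
    k = X.shape[1]
    A_inv = np.linalg.inv(X.T @ X + lam * np.eye(k))
    beta = A_inv @ (X.T @ y)
    h = np.sum((X @ A_inv) * X, axis=1)       # diagonal of the hat matrix
    loo_resid = (y - X @ beta) / (1.0 - h)
    return float(np.mean(loo_resid ** 2))

def ridge_loocv_slow(X, y, lam):
    """Brute-force leave-one-out, refitting n times, for checking."""
    n, k = X.shape
    errs = []
    for i in range(n):
        m = np.arange(n) != i
        b = np.linalg.solve(X[m].T @ X[m] + lam * np.eye(k), X[m].T @ y[m])
        errs.append((y[i] - X[i] @ b) ** 2)
    return float(np.mean(errs))

# Synthetic regression problem (illustrative only).
rng = np.random.default_rng(5)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0.0, 0.1, size=40)

fast = ridge_loocv_fast(X, y, 1.0)
slow = ridge_loocv_slow(X, y, 1.0)
```

The identity is exact for ridge-type estimators (Sherman-Morrison on the regularized normal equations), which is why the fast and brute-force values agree to machine precision.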
Abstract:
This paper examines the role of assortative mating in intergenerational economic mobility in Spain. Sons and daughters usually marry individuals with similar characteristics, which may lower mobility. Our empirical strategy employs the two-sample two-stage least squares (TS2SLS) estimator to estimate the intergenerational income elasticity in the absence of data on two generations residing in the same household. Our findings suggest that assortative mating plays an important role in the intergenerational transmission process: on average, about 50 percent of the covariance between parents' income and the child's family income can be accounted for by the person the child is married to.
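A sketch of the two-sample two-stage least squares idea: the first stage (parent income on observable characteristics) is estimated in one sample and used to impute parent income in the other sample, where only the child's income is observed. The education-based first stage and all coefficients below are hypothetical, purely for illustration.

```python
import numpy as np

def ts2sls(Z_parents, y_parents, Z_children, y_children):
    """Two-sample two-stage least squares: fit the first stage
    (parent income on characteristics Z) in one sample, impute parent
    income for the other sample, then run the second stage."""
    pi, *_ = np.linalg.lstsq(Z_parents, y_parents, rcond=None)
    x_hat = Z_children @ pi                   # imputed parent income
    X = np.column_stack([np.ones(len(x_hat)), x_hat])
    beta, *_ = np.linalg.lstsq(X, y_children, rcond=None)
    return beta[1]                            # intergenerational elasticity

# Simulated illustration: parent log income depends on education;
# the true intergenerational elasticity is set to 0.4.
rng = np.random.default_rng(6)
n = 5000
educ_p = rng.normal(12.0, 3.0, size=n)
y_p = 0.08 * educ_p + rng.normal(0.0, 0.3, size=n)
Z_p = np.column_stack([np.ones(n), educ_p])

educ_c = rng.normal(12.0, 3.0, size=n)
y_parent_unobs = 0.08 * educ_c + rng.normal(0.0, 0.3, size=n)  # unobserved
y_c = 0.4 * y_parent_unobs + rng.normal(0.0, 0.3, size=n)
Z_c = np.column_stack([np.ones(n), educ_c])

elasticity = ts2sls(Z_p, y_p, Z_c, y_c)
```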
Abstract:
The determination of zirconium-hafnium mixtures is one of the most difficult problems in analytical chemistry, owing to the close similarity of their chemical properties. The spectrophotometric determination proposed by Yagodin et al. has seen few practical applications because of the significant spectral interference in the 200-220 nm region. In this work we propose the use of a multivariate calibration method, partial least squares (PLS), for the colorimetric determination of these mixtures. Using PLS and 16 calibration mixtures, we obtained a model that permits the determination of zirconium and hafnium with relative errors of about 1-2% and 10-20%, respectively. With conventional univariate calibration, the relative error is about 10-25% for zirconium and above 57% for hafnium.
Abstract:
The aim of this work is to present a tutorial on multivariate calibration, a tool that is nowadays needed in most laboratories but is very often misused. The basic concepts of preprocessing, principal component analysis (PCA), principal component regression (PCR), and partial least squares (PLS) are given. The two basic steps of any calibration procedure, model building and validation, are fully discussed. For the validation step, the concepts of cross-validation (to determine the number of factors to be used in the model) and of leverage and studentized residuals (to detect outliers) are presented. The whole calibration procedure is illustrated using spectra recorded for ternary mixtures of 2,4,6-trinitrophenolate, 2,4-dinitrophenolate, and 2,5-dinitrophenolate, followed by the prediction of the concentrations of these three chemical species during a diffusion experiment through a hydrophobic liquid membrane. MATLAB software is used for the numerical calculations, and most of the commands for the analysis are provided so that a non-specialist can follow the analysis step by step.
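A minimal PLS1 (NIPALS) sketch, in Python rather than the MATLAB used in the tutorial: latent factors are extracted from mean-centred spectra and converted into regression coefficients in the original variable space. The ternary-mixture "spectra" below are synthetic Gaussian bands, not the tutorial's data.

```python
import numpy as np

def pls1_fit(X, y, n_factors):
    """Minimal PLS1 (NIPALS): extract n_factors latent variables from
    mean-centred X and y and return regression coefficients B plus the
    centring terms, so that y ~ (X - x_mean) @ B + y_mean."""
    X = np.asarray(X, float).copy()
    y = np.asarray(y, float).copy()
    x_mean, y_mean = X.mean(axis=0), y.mean()
    X -= x_mean
    y = y - y_mean
    W, P, q = [], [], []
    for _ in range(n_factors):
        w = X.T @ y
        w /= np.linalg.norm(w)                # weight vector
        t = X @ w                             # scores
        tt = t @ t
        p = X.T @ t / tt                      # X loadings
        qk = (y @ t) / tt                     # y loading
        X -= np.outer(t, p)                   # deflate X
        y = y - qk * t                        # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.inv(P.T @ W) @ q        # coefficients, original space
    return B, x_mean, y_mean

def pls1_predict(Xnew, B, x_mean, y_mean):
    return (np.asarray(Xnew, float) - x_mean) @ B + y_mean

# Synthetic ternary-mixture "spectra": linear combinations of three
# Gaussian component bands plus a little noise (illustrative only).
rng = np.random.default_rng(7)
wl = np.linspace(0.0, 1.0, 80)
pure = np.vstack([np.exp(-((wl - c) / 0.08) ** 2) for c in (0.3, 0.5, 0.7)])
conc = rng.random((30, 3))
spectra = conc @ pure + rng.normal(0.0, 0.001, size=(30, 80))

B, xm, ym = pls1_fit(spectra, conc[:, 0], n_factors=3)
pred = pls1_predict(spectra, B, xm, ym)
rmsec = float(np.sqrt(np.mean((pred - conc[:, 0]) ** 2)))
```

In a real calibration the number of factors would be chosen by cross-validation, as the tutorial describes, rather than fixed at three.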
Abstract:
A genetic algorithm was used for variable selection in the simultaneous determination of mixtures of glucose, maltose, and fructose by mid-infrared spectroscopy. Different models, using partial least squares (PLS) and multiple linear regression (MLR) with and without data preprocessing, were compared. The results showed that a simpler model, multiple linear regression with variable selection by a genetic algorithm, produces results comparable to those of more complex methods (partial least squares). The relative errors obtained with the best model were around 3% for the sugar determinations, which is acceptable for this kind of analysis.
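A toy sketch of genetic-algorithm variable selection for MLR: chromosomes are binary masks over the candidate variables, and fitness is the leave-one-out PRESS of the regression built on the selected columns. The data are simulated, with only the first three of ten variables informative; population size, generations, and mutation rate are arbitrary choices.

```python
import numpy as np

def mlr_press_rmse(X, y):
    """Leave-one-out PRESS RMSE of an MLR model with intercept, using
    the hat-matrix shortcut e_i / (1 - h_ii)."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    h = np.diag(A @ np.linalg.pinv(A))
    e = (y - A @ beta) / (1.0 - h)
    return float(np.sqrt(np.mean(e ** 2)))

def ga_select(X, y, pop=30, gens=40, p_mut=0.05, seed=8):
    """Toy genetic algorithm for variable selection: chromosomes are
    binary masks over the columns of X; fitness is the LOO PRESS RMSE
    of the MLR model built on the selected columns."""
    rng = np.random.default_rng(seed)
    n_var = X.shape[1]
    popn = rng.random((pop, n_var)) < 0.5

    def fitness(mask):
        return mlr_press_rmse(X[:, mask], y) if mask.any() else np.inf

    for _ in range(gens):
        scores = np.array([fitness(m) for m in popn])
        popn = popn[np.argsort(scores)]
        elite = popn[: pop // 2]                       # survivors
        pa = elite[rng.integers(0, len(elite), size=pop - len(elite))]
        pb = elite[rng.integers(0, len(elite), size=pop - len(elite))]
        children = np.where(rng.random(pa.shape) < 0.5, pa, pb)  # crossover
        children ^= rng.random(children.shape) < p_mut           # mutation
        popn = np.vstack([elite, children])
    scores = np.array([fitness(m) for m in popn])
    return popn[np.argmin(scores)]

# Simulated data: only the first 3 of 10 variables carry signal.
rng = np.random.default_rng(9)
X = rng.normal(size=(60, 10))
y = X[:, 0] + 0.8 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0.0, 0.1, size=60)
best_mask = ga_select(X, y)
```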