917 results for Least-squares technique


Relevance:

90.00%

Publisher:

Abstract:

Methods of dynamic modelling and analysis of structures, for example the finite element method, are well developed. However, it is generally agreed that accurate modelling of complex structures is difficult, and for critical applications it is necessary to validate or update the theoretical models using data measured from actual structures. Techniques for identifying the parameters of linear dynamic models from vibration test data have attracted considerable interest recently; however, no method has gained general acceptance, owing to a number of difficulties. These difficulties are mainly due to (i) the incomplete number of vibration modes that can be excited and measured, (ii) the incomplete number of coordinates that can be measured, (iii) inaccuracy in the experimental data, and (iv) inaccuracy in the model structure. This thesis reports on a new approach to updating the parameters of a finite element model as well as of a lumped-parameter model with a diagonal mass matrix. The structure and its theoretical model are equally perturbed by adding mass or stiffness, and the incomplete set of eigen-data is measured. The parameters are then identified by iterative updating of the initial estimates, via sensitivity analysis, using eigenvalues or both eigenvalues and eigenvectors of the structure before and after perturbation. It is shown that, with a suitable choice of the perturbing coordinates, exact parameters can be identified if the data and the model structure are exact. The theoretical basis of the technique is presented. To cope with measurement errors and possible inaccuracies in the model structure, a well-known Bayesian approach is used to minimize the least-squares difference between the updated and the initial parameters. The eigen-data of the structure with added mass or stiffness are also determined from the frequency response data of the unmodified structure by a structural modification technique; thus, mass or stiffness does not have to be added physically.
The mass-stiffness addition technique is demonstrated by simulation examples and laboratory experiments on beams and an H-frame.

Relevance:

90.00%

Publisher:

Abstract:

Substantial altimetry datasets collected by different satellites have only become available during the past five years, but the future will bring a variety of new altimetry missions, both parallel and consecutive in time. The characteristics of each produced dataset vary with the different orbital heights and inclinations of the spacecraft, as well as with the technical properties of the radar instrument. An integral analysis of datasets with different properties offers advantages both in terms of data quantity and data quality. This thesis is concerned with the development of the means for such integral analysis, in particular for dynamic solutions in which precise orbits for the satellites are computed simultaneously. The first half of the thesis discusses the theory and numerical implementation of dynamic multi-satellite altimetry analysis. The most important aspect of this analysis is the application of dual-satellite altimetry crossover points as a bi-directional tracking data type in simultaneous orbit solutions. The central problem is that the spatial and temporal distributions of the crossovers conflict with the time-organised nature of traditional solution methods. Their application to the adjustment of the orbits of both satellites involved in a dual crossover therefore requires several fundamental changes to the classical least-squares prediction/correction methods. The second part of the thesis applies the developed numerical techniques to the problems of precise orbit computation and gravity field adjustment, using the altimetry datasets of ERS-1 and TOPEX/Poseidon. Although the two datasets can be considered less compatible than those of planned future satellite missions, the obtained results adequately illustrate the merits of a simultaneous solution technique.
In particular, the geographically correlated orbit error is partially observable from a dataset consisting of crossover differences between two sufficiently different altimetry datasets, while being unobservable from the analysis of the altimetry data of either satellite individually. This error signal, which has a substantial gravity-induced component, can be employed advantageously in simultaneous solutions for the two satellites in which the harmonic coefficients of the gravity field model are also estimated.
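A toy sketch of the joint-solution idea, under invented assumptions (radial orbit error modeled as a bias plus once-per-revolution terms for each satellite; all rates and coefficients are made up): dual-satellite crossover differences observe e1(t1) − e2(t2), so one least-squares solution recovers the periodic terms of both satellites and the bias difference, while the individual biases stay unobservable (their common part lies in the null space, so the minimum-norm solution splits it evenly).

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed radial orbit error per satellite: bias + once-per-rev terms.
w1, w2 = 2 * np.pi / 6000.0, 2 * np.pi / 6700.0    # invented orbital rates (rad/s)
p_true = np.array([0.8, 0.30, -0.20, -0.4, 0.15, 0.25])  # a1,b1,c1, a2,b2,c2

t1 = rng.uniform(0, 86400, 500)   # epoch of each crossover on satellite 1's arc
t2 = rng.uniform(0, 86400, 500)   # corresponding epoch on satellite 2's arc

# Each dual crossover difference observes e1(t1) - e2(t2).
A = np.column_stack([np.ones_like(t1), np.cos(w1 * t1), np.sin(w1 * t1),
                     -np.ones_like(t2), -np.cos(w2 * t2), -np.sin(w2 * t2)])
d = A @ p_true + 0.01 * rng.standard_normal(t1.size)

p_hat, *_ = np.linalg.lstsq(A, d, rcond=None)   # minimum-norm joint solution
```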

Relevance:

90.00%

Publisher:

Abstract:

This paper provides the most fully comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two non-linear techniques, namely, recurrent neural networks and kernel recursive least-squares regression, techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite-memory predictor. The two methodologies compete to find the best-fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were non-linear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation.
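A hedged sketch of a kernel least-squares autoregressive predictor versus the random-walk benchmark (a batch ridge-regularized solve rather than the paper's recursive KRLS, and an invented nonlinear series in place of inflation data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented nonlinear autoregressive series standing in for inflation data.
n = 300
y = np.zeros(n)
y[0] = 0.4
for t in range(1, n):
    y[t] = np.sin(2.5 * y[t - 1]) + 0.01 * rng.standard_normal()

X = y[:-1].reshape(-1, 1)      # predict y[t] from y[t-1]
target = y[1:]

def rbf(A, B, gamma=10.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Batch (ridge-regularized) kernel least-squares fit; recursive KRLS
# arrives at the same kind of solution one sample at a time.
lam = 1e-4
K = rbf(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), target)

fit = K @ alpha                # in-sample kernel predictions
naive = X[:, 0]                # random-walk benchmark: predict y[t] = y[t-1]
```

On this toy series the kernel fit captures the nonlinear map while the random walk cannot, which is the flavor of comparison the paper runs (out of sample, and on real data).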

Relevance:

90.00%

Publisher:

Abstract:

Purpose: To determine whether curve-fitting analysis of the ranked segment distributions of topographic optic nerve head (ONH) parameters, derived using the Heidelberg Retina Tomograph (HRT), provides a more effective statistical descriptor to differentiate the normal from the glaucomatous ONH. Methods: The sample comprised 22 normal control subjects (mean age 66.9 years; S.D. 7.8) and 22 glaucoma patients (mean age 72.1 years; S.D. 6.9) confirmed by reproducible visual field defects on the Humphrey Field Analyser. Three 10° images of the ONH were obtained using the HRT. The mean topography image was determined, and the HRT software was used to calculate the rim volume, rim area to disc area ratio, normalised rim area to disc area ratio and retinal nerve fibre cross-sectional area for each patient at 10° sectoral intervals. The values were ranked in descending order, and each ranked-segment curve of ordered values was fitted using the least-squares method. Results: There was no difference in disc area between the groups. The group mean cup-disc area ratio was significantly lower in the normal group (0.204 ± 0.16) than in the glaucoma group (0.533 ± 0.083) (p < 0.001). The visual field indices, mean deviation and corrected pattern S.D., were significantly greater (p < 0.001) in the glaucoma group (-9.09 dB ± 3.3 and 7.91 ± 3.4, respectively) than in the normal group (-0.15 dB ± 0.9 and 0.95 dB ± 0.8, respectively). Univariate linear regression provided the best overall fit to the ranked segment data. The equation parameters of the regression line manually applied to the normalised rim area-disc area and the rim area-disc area ratio data correctly classified 100% of normal subjects and glaucoma patients. In this study sample, the regression analysis of ranked segment parameters was more effective than conventional ranked segment analysis, in which glaucoma patients were misclassified in approximately 50% of cases.
Further investigation in larger samples will enable the calculation of confidence intervals for normality. These reference standards will then need to be investigated in an independent sample to fully validate the technique. Conclusions: Using a curve-fitting approach to fit ranked segment curves retains information relating to the topographic nature of neural loss. Such methodology appears to overcome some of the deficiencies of conventional ranked segment analysis and, subject to validation in larger-scale studies, may potentially be of clinical utility for detecting and monitoring glaucomatous damage. © 2007 The College of Optometrists.
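A minimal sketch of the ranked-segment curve-fitting idea (synthetic sectoral values, not HRT data): the 36 sectoral values at 10° intervals are ranked in descending order, the ranked curve is fitted by least squares, and the fitted line's slope and intercept serve as descriptors.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented sectoral values at 10° intervals (36 sectors per ONH).
normal_eye = rng.normal(1.6, 0.1, 36)     # hypothetical rim-area values
glaucoma_eye = rng.normal(1.0, 0.3, 36)   # lower mean, more focal variability

def ranked_fit(values):
    ranked = np.sort(values)[::-1]        # descending ranked-segment curve
    slope, intercept = np.polyfit(np.arange(values.size), ranked, 1)
    return slope, intercept

s_norm, i_norm = ranked_fit(normal_eye)
s_glau, i_glau = ranked_fit(glaucoma_eye)
```

The steeper negative slope of the glaucomatous curve reflects focal neural loss spread unevenly across sectors, which is the topographic information the paper argues the fit retains.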

Relevance:

90.00%

Publisher:

Abstract:

Objective: The aim of this study was to design a novel experimental approach to investigate the morphological characteristics of auditory cortical responses elicited by rapidly changing synthesized speech sounds. Methods: Six sound-evoked magnetoencephalographic (MEG) responses were measured to a synthesized train of speech sounds using the vowels /e/ and /u/ in 17 normal-hearing young adults. Responses were measured to: (i) the onset of the speech train; (ii) an F0 increment; (iii) an F0 decrement; (iv) an F2 decrement; (v) an F2 increment; and (vi) the offset of the speech train, using short (jittered around 135 ms) and long (1500 ms) stimulus onset asynchronies (SOAs). The least squares (LS) deconvolution technique was used to disentangle the overlapping MEG responses in the short SOA condition only. Results: Comparison between the morphology of the recovered cortical responses in the short and long SOA conditions showed high similarity, suggesting that the LS deconvolution technique was successful in disentangling the MEG waveforms. Waveform latencies and amplitudes were different for the two SOA conditions and were influenced by the spectro-temporal properties of the sound sequence. The magnetic acoustic change complex (mACC) for the short SOA condition showed significantly lower amplitudes and shorter latencies compared to the long SOA condition. The F0 transition showed a larger reduction in amplitude from long to short SOA than the F2 transition. Lateralization of the cortical responses was observed under some stimulus conditions and appeared to be associated with the spectro-temporal properties of the acoustic stimulus. Conclusions: The LS deconvolution technique provides a new tool to study the properties of the auditory cortical response to rapidly changing sound stimuli.
The presence of the cortical auditory evoked responses for rapid transitions of synthesized speech stimuli suggests that the temporal code is preserved at the level of the auditory cortex. Further, the reduced amplitudes and shorter latencies might reflect intrinsic properties of the cortical neurons in response to rapidly presented sounds. Significance: This is the first demonstration of the separation of overlapping cortical responses to rapidly changing speech sounds, and it offers a potential new biomarker of discrimination of rapid sound transitions.
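The core of LS deconvolution can be sketched as follows (synthetic single-channel data with an invented response waveform, not MEG recordings): overlapping responses at jittered onsets are written as y = A·r, where each column of A marks one latency of the response at every onset, and r is recovered by least squares. The onset jitter is what makes A well conditioned.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 1000                                  # assumed sampling rate (Hz)
L = 300                                    # unknown response length: 300 ms
samp = np.arange(L)
true_resp = np.sin(2 * np.pi * 8 * samp / fs) * np.exp(-samp / 60)

# Jittered short-SOA onsets (~135 ms) -> heavily overlapping responses.
onsets = np.cumsum(rng.integers(110, 160, 40))
N = onsets[-1] + L
y = np.zeros(N)
for o in onsets:
    y[o:o + L] += true_resp                # superposition of responses
y += 0.02 * rng.standard_normal(N)         # measurement noise

# Least-squares deconvolution: solve y = A @ r for the response r.
A = np.zeros((N, L))
for o in onsets:
    A[o:o + L, :] += np.eye(L)
r_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Without jitter (constant SOA shorter than the response), A becomes rank deficient and the overlapping responses cannot be separated, which is why the short-SOA condition uses jittered onsets.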

Relevance:

90.00%

Publisher:

Abstract:

This paper provides the most fully comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two nonlinear techniques, namely, recurrent neural networks and kernel recursive least-squares regression, techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite-memory predictor. The two methodologies compete to find the best-fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were nonlinear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation. Beyond its economic findings, our study is in the tradition of physicists' long-standing interest in the interconnections among statistical mechanics, neural networks, and related nonparametric statistical methods, and suggests potential avenues of extension for such studies. © 2010 Elsevier B.V. All rights reserved.

Relevance:

90.00%

Publisher:

Abstract:

The Internet is present in every step of trip planning, and constant technological advances have brought major changes to the tourism industry. This is noticeable in the growing number of people who share their travel experiences on the Internet. This study aimed to analyze the factors that influence the use of Online Travel Reviews (OTRs) in choosing an accommodation. An investigation was conducted into the comments available on the Internet about tourism products and services, specifically accommodations. The research sought to understand the influencing factors of OTRs in the Brazilian context through the Technology Acceptance Model, Motivational Theory, Similarity, and Trustworthiness. The methodology was a descriptive-exploratory study with a quantitative approach and bibliographic research. The study used a Structural Equation Modeling technique, Partial Least Squares (PLS), to test and evaluate the proposed research model. Data collection was performed with 308 guests hosted in five hotels in Ponta Negra (Natal/RN) who had used OTRs in choosing an accommodation. The research tested fifteen hypotheses, of which nine were confirmed and six rejected. The results showed that guests have a positive attitude toward, and the intention to use, OTRs when choosing an accommodation.

Relevance:

90.00%

Publisher:

Abstract:

Based on the quantitative analysis of diatom assemblages preserved in 274 surface sediment samples recovered in the Pacific, Atlantic and western Indian sectors of the Southern Ocean, we have defined a new reference database for the quantitative estimation of late-middle Pleistocene Antarctic sea ice fields using the transfer function technique. Detrended Canonical Analysis (DCA) of the diatom data set points to a unimodal distribution of the diatom assemblages. Canonical Correspondence Analysis (CCA) indicates that winter sea ice (WSI) and also summer sea surface temperature (SSST) represent the most prominent environmental variables controlling the spatial species distribution. To test the applicability of transfer functions for sea ice reconstruction in terms of concentration and occurrence probability, we applied four different methods, the Imbrie and Kipp Method (IKM), the Modern Analog Technique (MAT), Weighted Averaging (WA), and Weighted Averaging Partial Least Squares (WAPLS), using logarithm-transformed diatom data and satellite-derived (1981-2010) sea ice data as a reference. The best performance for IKM was obtained using a subset of 172 samples with 28 diatom taxa/taxa groups, quadratic regression and a three-factor model (IKM-D172/28/3q), resulting in root mean square errors of prediction (RMSEP) of 7.27% and 11.4% for WSI and summer sea ice (SSI) concentration, respectively. MAT estimates were calculated with different numbers of analogs (4, 6) using a 274-sample/28-taxa reference data set (MAT-D274/28/4an, -6an), resulting in RMSEPs ranging from 5.52% (4an) to 5.91% (6an) for WSI and from 8.93% (4an) to 9.05% (6an) for SSI. WA and WAPLS performed less well with the D274 data set than MAT, achieving WSI concentration RMSEPs of 9.91% with WA and 11.29% with WAPLS, which recommends the use of IKM and MAT.
The application of IKM and MAT to the surface sediment data revealed strong relations to the satellite-derived winter and summer sea ice fields. Sea ice reconstructions performed on an Atlantic and a Pacific Southern Ocean sediment core, both documenting sea ice variability over the past 150,000 years (MIS 1 - MIS 6), resulted in similar glacial/interglacial trends of the IKM- and MAT-based sea-ice estimates. On average, however, the IKM estimates display smaller WSI and slightly higher SSI concentrations and probabilities, at lower variability, than MAT. This pattern results from the different estimation techniques: IKM integrates the WSI and SSI signals into a single factor assemblage, whereas MAT selects specific individual samples and thus stays closer to the original diatom database and the variability it contains. In contrast to the estimation of WSI, reconstruction of past SSI variability remains weaker. Combined with the diatom-based estimates, the abundance and flux patterns of biogenic opal represent an additional indication of the WSI and SSI extent.
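A stripped-down sketch of the Modern Analog Technique (invented taxa and sea-ice values, with the squared chord distance as the assemblage dissimilarity): the WSI estimate for a sample is the mean WSI of its k most similar reference assemblages.

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented reference set: relative abundances of 5 diatom taxa in 100
# surface samples, each tied to a known winter sea ice (WSI) concentration.
n, taxa = 100, 5
wsi = rng.uniform(0, 100, n)
counts = np.abs(rng.normal(1.0, 0.2, (n, taxa)))
counts[:, 0] += wsi / 25.0              # assumed sea-ice-affiliated taxon
counts[:, 1] += (100 - wsi) / 25.0      # assumed open-water taxon
ref = counts / counts.sum(1, keepdims=True)

def squared_chord(a, B):
    """Squared chord distance, a standard dissimilarity for assemblages."""
    return ((np.sqrt(a) - np.sqrt(B)) ** 2).sum(1)

def mat_estimate(sample, ref, env, k=4):
    """MAT: average the environment over the k most similar analogs."""
    d = squared_chord(sample, ref)
    return env[np.argsort(d)[:k]].mean()

# In-set check (each sample finds itself plus 3 analogs; optimistic).
ests = np.array([mat_estimate(ref[i], ref, wsi) for i in range(n)])
```

Because MAT averages over a few selected samples rather than projecting onto factor assemblages (as IKM does), its estimates stay close to the reference data and retain more of its variability, which matches the contrast drawn in the abstract.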

Relevance:

90.00%

Publisher:

Abstract:

Medical imaging technology and applications are continuously evolving, dealing with images of increasing spatial and temporal resolution, which allow easier and more accurate medical diagnosis. However, this increase in resolution demands a growing amount of data to be stored and transmitted. Despite the high coding efficiency achieved by the most recent image and video coding standards in lossy compression, they are not well suited for quality-critical medical image compression, where either near-lossless or lossless coding is required. In this dissertation, two different approaches to improve the lossless coding of volumetric medical images, such as Magnetic Resonance and Computed Tomography, were studied and implemented on top of the latest standard, High Efficiency Video Coding (HEVC). In the first approach, the use of geometric transformations to perform inter-slice prediction was investigated. In the second approach, a pixel-wise prediction technique, based on least-squares prediction, that exploits inter-slice redundancy was proposed to extend the current HEVC lossless tools. Experimental results show a bitrate reduction of between 45% and 49% compared with DICOM-recommended encoders, and of 13.7% compared with standard HEVC.
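The inter-slice least-squares prediction idea can be sketched as follows (synthetic slices, not the dissertation's HEVC integration): each pixel is predicted by a least-squares-trained linear combination of a causal context, here the left and upper neighbours in the current slice plus the co-located pixel in the previous slice; only the small residual then needs to be coded losslessly.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two correlated "slices" standing in for adjacent CT/MR slices.
prev_slice = rng.integers(0, 255, (64, 64)).astype(float)
cur_slice = 0.9 * prev_slice + 10 + 2.0 * rng.standard_normal((64, 64))

def context(i, j):
    """Causal context: left, up, co-located previous-slice pixel, bias."""
    return np.array([cur_slice[i, j - 1], cur_slice[i - 1, j],
                     prev_slice[i, j], 1.0])

X, t = [], []
for i in range(1, 64):
    for j in range(1, 64):
        X.append(context(i, j))
        t.append(cur_slice[i, j])
X, t = np.array(X), np.array(t)

w, *_ = np.linalg.lstsq(X, t, rcond=None)   # least-squares predictor weights
residual = t - X @ w                        # what a lossless coder would encode
```

On these synthetic slices the learned weights give most of the weight to the co-located previous-slice pixel, illustrating the inter-slice redundancy the proposed technique exploits.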

Relevance:

90.00%

Publisher:

Abstract:

The Indian monsoon system is an important climate feature of the northern Indian Ocean. Small variations in the wind and precipitation patterns have a fundamental influence on the societal, agricultural, and economic development of India and its neighboring countries. To understand current trends, sensitivity to forcing, or natural variation, records beyond the instrumental period are needed. However, high-resolution archives of past winter monsoon variability are scarce. One potential archive of such records is the marine sediment deposited on the continental slope in the NE Arabian Sea, an area where present-day conditions are dominated by the winter monsoon. In this region, winter monsoon conditions lead to distinctive changes in surface water properties, affecting marine plankton communities that are deposited in the sediment. Using planktic foraminifera as a sensitive and well-preserved plankton group, we first characterize the response of their species distribution to environmental gradients from a dataset of surface sediment samples in the tropical and sub-tropical Indian Ocean. Transfer functions for quantitative paleoenvironmental reconstruction were applied to a decadal-scale record of assemblage counts from the Pakistan Margin spanning the last 2000 years. The reconstructed temperature record reveals an intensification of winter monsoon intensity near the year 100 CE. Prior to this transition, winter temperatures were >1.5°C warmer than today. Conditions similar to the present seem to have been established after 450 CE, interrupted by a singular event near 950 CE with warmer temperatures and accordingly a weak winter monsoon. Frequency analysis revealed significant 75-, 40-, and 37-year cycles, which are known from decadal- to centennial-scale records of Indian summer monsoon variability and interpreted as solar irradiance forcing.
Our first independent record of Indian winter monsoon activity confirms that the winter and summer monsoons were modulated on the same frequency bands, and thus indicates that both monsoon systems are likely controlled by the same driving force.

Relevance:

90.00%

Publisher:

Abstract:

This work outlines the theoretical advantages of multivariate methods for biomechanical data, validates the proposed methods, and presents new clinical findings relating to knee osteoarthritis that were made possible by this approach. New techniques were based on existing multivariate approaches, Partial Least Squares (PLS) and Non-negative Matrix Factorization (NMF), and validated using existing data sets. The new techniques developed, PCA-PLS-LDA (Principal Component Analysis – Partial Least Squares – Linear Discriminant Analysis), PCA-PLS-MLR (Principal Component Analysis – Partial Least Squares – Multiple Linear Regression) and Waveform Similarity (based on NMF), were designed to address the challenging characteristics of biomechanical data: variability and correlation. As a result, these new structure-seeking techniques revealed new clinical findings. The first relates to the relationship between pain, radiographic severity and mechanics. Simultaneous analysis of pain and radiographic severity outcomes, a first in biomechanics, revealed that the knee adduction moment's relationship to radiographic features is mediated by pain in subjects with moderate osteoarthritis. The second clinical finding quantified the importance of neuromuscular patterns in brace effectiveness for patients with knee osteoarthritis. I found that brace effectiveness was more related to the patient's unbraced neuromuscular patterns than to mechanics, and that these neuromuscular patterns were more complicated than simply increased overall muscle activity, as previously thought.
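A minimal one-component PLS sketch (invented gait-like waveforms, not the thesis's data): unlike PCA, the PLS weight vector is supervised — it is proportional to the covariance between the waveform points and the outcome, so the extracted score is the waveform feature most predictive of the outcome.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy data: 40 subjects x 101 time-normalized waveform points, with an
# outcome driven by one latent waveform feature (an assumed sine shape).
n, p = 40, 101
t = np.linspace(0, 1, p)
latent = rng.standard_normal(n)
X = np.outer(latent, np.sin(2 * np.pi * t)) + 0.3 * rng.standard_normal((n, p))
y = 2.0 * latent + 0.2 * rng.standard_normal(n)

# One-component PLS: weight vector maximizes covariance of score with y.
Xc, yc = X - X.mean(0), y - y.mean()
w = Xc.T @ yc
w /= np.linalg.norm(w)
score = Xc @ w                        # subject scores on the PLS component
b = (score @ yc) / (score @ score)    # regress y on the score
y_hat = y.mean() + b * score
```

Handling many correlated waveform points through a few supervised components is exactly the variability-and-correlation problem the PCA-PLS pipelines above are built around; the full techniques add PCA preprocessing and LDA/MLR back ends.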

Relevance:

90.00%

Publisher:

Abstract:

When a company decides to invest in a project, it must obtain the resources needed to make the investment. The alternatives are using the firm's internal resources or obtaining external resources through debt contracts and the issuance of shares. Decisions involving the composition of internal resources, debt and shares in the total resources used to finance the activities of a company concern the choice of its capital structure. Although there are studies in finance on the debt determinants of firms, the issue of capital structure is still controversial. This work sought to identify the predominant factors that determine the capital structure of Brazilian publicly traded, non-financial firms. A quantitative approach was used, applying the statistical technique of multiple linear regression on panel data. Estimates were made by the method of ordinary least squares with a fixed-effects model. About 116 companies were selected to participate in this research, covering the period from 2003 to 2007. The variables and hypotheses tested in this study were built based on theories of capital structure and on empirical research. Results indicate that variables such as risk, size, composition of assets and firm growth influence indebtedness. The profitability variable was not relevant to the composition of indebtedness of the companies analyzed. However, analyzing only long-term debt, the conclusion is that the relevant variables are the size of firms and, especially, the composition of their assets (tangibility). In this sense, the smaller the size of the undertaking, or the greater the representation of fixed assets in total assets, the greater its propensity to long-term debt. Furthermore, this research could not identify a predominant theory to explain the capital structure of Brazilian firms.
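The estimation strategy above (pooled OLS with firm fixed effects) can be sketched with invented panel data: the within transformation demeans each variable by firm, removing the firm-specific effect, and ordinary least squares on the demeaned data recovers the slope coefficients.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy panel: 50 firms x 5 years; leverage depends on size and tangibility
# plus an unobserved firm-specific fixed effect (all values invented).
firms, years = 50, 5
firm_id = np.repeat(np.arange(firms), years)
size = rng.standard_normal(firms * years)
tang = rng.standard_normal(firms * years)
alpha = rng.standard_normal(firms)[firm_id]          # fixed effects
lev = 0.5 * size + 0.3 * tang + alpha + 0.1 * rng.standard_normal(firms * years)

def demean(v):
    """Within transformation: subtract each firm's mean."""
    means = np.bincount(firm_id, v) / np.bincount(firm_id)
    return v - means[firm_id]

Xw = np.column_stack([demean(size), demean(tang)])
yw = demean(lev)
beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)   # fixed-effects OLS estimates
```

Pooled OLS without the demeaning step would be biased whenever the fixed effects correlate with the regressors, which is why the fixed-effects model is the standard choice for firm panels like this one.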

Relevance:

90.00%

Publisher:

Abstract:

In recent decades the public sector has come under pressure to improve its performance, and the use of Information Technology (IT) has been a tool increasingly employed to reach that goal. It has thus become an important issue for public organizations, particularly institutions of higher education, to determine which factors influence the acceptance and use of technology, impacting the success of its implementation and the desired organizational results. The Technology Acceptance Model (TAM) was used as the basis for this study and rests on the constructs perceived usefulness and perceived ease of use. However, in the case of integrated management systems, given the complexity of their implementation, organizational factors were added to seek a fuller explanation of the acceptance of such systems. Five constructs related to critical success factors in implementing ERP systems were therefore added to the TAM: support of top management, communication, training, cooperation, and technological complexity (BUENO and SALMERON, 2008). Based on the foregoing, the following research problem is posed: which factors influence the acceptance and use of the SIE academic module at the Federal University of Pará, from the perception of its teacher and technician users? The purpose of this study was to identify the influence of organizational factors and behavioral antecedents on the behavioral intention to use the SIE academic module at UFPA from the perspective of teacher and technical users. This is applied, exploratory and descriptive research, quantitative in nature, implemented through a survey; data collection occurred through a structured questionnaire applied to a sample of 229 teachers and 30 technical and administrative staff. Data analysis was carried out through descriptive statistics and structural equation modeling with the partial least squares (PLS) technique.
The measurement model was assessed first, verifying reliability and convergent and discriminant validity for all indicators and constructs. The structural model was then analyzed using the bootstrap resampling technique. In assessing statistical significance, all hypotheses were supported. The coefficient of determination (R²) was high or average in five of the six endogenous variables, and the model explains 47.3% of the variation in behavioral intention. It is noteworthy that, among the antecedents of behavioral intention (BI) analyzed in this study, perceived usefulness is the variable with the greatest effect on behavioral intention, followed by ease of use (PEU) and attitude (AT). Among the organizational aspects (critical success factors) studied, technological complexity (TC) and training (ERT) had the greatest effect on behavioral intention to use, although these effects were lower than those produced by the behavioral factors (originating from TAM). Further, the support of senior management (TMS) showed, among all variables, the smallest effect on the intention to use (BI), followed by communication (COM) and cooperation (CO), which exert a low effect on behavioral intention (BI). Therefore, as in other studies on the TAM, the constructs were adequate for the present research. Thus, the study contributed evidence that the Technology Acceptance Model can be applied to predict the acceptance of integrated management systems, even in public institutions. Keywords: Technology

Relevance:

90.00%

Publisher:

Abstract:

There is a great deal of evidence showing that education is extremely important in many economic and social dimensions. In Brazil, education is a right guaranteed by the Federal Constitution; however, in Brazilian legislation the right to the three stages of basic education (kindergarten, elementary and high school) is better promoted and supported than the right to education at the college level. According to educational census data (INEP, 2009), 78% of all enrolments in college education are in private institutions, while the reverse is found in high school: 84% of all matriculations are in public schools, which reveals a contradiction in admission to the universities. The Brazilian scenario shows that public universities mostly receive students who performed better and were prepared in private elementary and high schools, while private universities serve students who received their basic education in public schools, which are characterized as low quality. These facts have led researchers to investigate the possible determinants of student performance on standardized tests, such as the Brazilian Vestibular exam, to guide the development of policies aimed at equal access to college education. Seeking inspiration in North American models of affirmative action, some Brazilian public universities have introduced quota policies to enable and facilitate the entry of "minorities" (blacks, pardos, natives, people of low income and public school students) to free college education. At the Federal University of Rio Grande do Norte (UFRN), the first incentives for candidates from public schools emerged in 2006 and have been improved and expanded over the last 7 years. This study aimed to analyse and discuss the Argument of Inclusion (AI), the affirmative action policy that provides additional scoring for students from public schools.
Using an extensive database, the Ordinary Least Squares (OLS) technique was applied, as well as quantile regression, with the personal, socioeconomic and educational characteristics of the candidates as control variables, for the 2010 Vestibular exam of the Federal University of Rio Grande do Norte (UFRN). The results demonstrate the importance of this incentive system, as well as the magnitude of the other variables.
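The methodological pairing above rests on a simple fact: squared-error loss is minimized by the mean (hence OLS), while the pinball (check) loss at level τ is minimized by the τ-th quantile (hence quantile regression can trace effects across the whole score distribution, not just its center). A scalar sketch with invented scores:

```python
import numpy as np

def pinball(q, y, tau):
    """Pinball (check) loss of predicting the constant q for sample y."""
    e = y - q
    return np.mean(np.maximum(tau * e, (tau - 1) * e))

rng = np.random.default_rng(8)
scores = rng.normal(500, 100, 2000)     # hypothetical exam scores
tau = 0.25
q_star = np.quantile(scores, tau)       # the tau-th sample quantile
```

The sample quantile attains the minimum of the empirical pinball loss, which is the defining property quantile regression generalizes to conditional quantiles with covariates.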

Relevance:

90.00%

Publisher:

Abstract:

This study presents a proposal for speed servomechanisms using induction motors without mechanical sensors (sensorless). Techniques for estimating the rotor speed are proposed and compared, analyzing their performance under different conditions of speed and load. To select the control technique, an analysis of the technical literature on the main control and speed estimation methods was first performed, covering their characteristics and limitations. The proposed technique for the sensorless speed servo with an induction motor uses indirect field-oriented control (IFOC), composed of four controllers of the proportional-integral (PI) type: a rotor flux controller, a speed controller and current controllers on the direct and quadrature axes. As the main focus of the work is on the speed control loop, the recursive least squares (RLS) algorithm was implemented in Matlab for the identification of mechanical parameters, such as the moment of inertia and the friction coefficient. Thus, the gains of the outer speed-loop controller can be self-adjusted to compensate for any changes in the mechanical parameters. The speed estimation techniques analyzed are: MRAS based on rotor fluxes, MRAS based on back EMF, MRAS based on instantaneous reactive power, slip estimation, phase-locked loop (PLL) and sliding mode. A sliding-mode speed estimation proposal is presented, in which a change is made to the rotor flux observer structure. To evaluate the techniques, theoretical analyses are performed in the Matlab simulation environment and on an experimental electrical machine drive platform. The DSP TMS320F28069 was used for the experimental implementation of the speed estimation techniques and for checking their performance over a wide speed range, including load insertion. Based on this analysis, closed-loop sensorless speed control was implemented in the IFOC structure. The results demonstrated the real possibility of replacing mechanical sensors with the estimation techniques proposed and analyzed.
Among these, the PLL-based estimator demonstrated the best performance under various conditions, while the sliding-mode technique showed good estimation capability in steady state and robustness to parametric variations.
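The RLS identification of the mechanical parameters can be sketched as follows (invented motor constants, Python in place of the study's Matlab implementation). Discretizing J·dω/dt + B·ω = T with step Ts gives ω[k+1] = a·ω[k] + b·T[k] with a = 1 − Ts·B/J and b = Ts/J; RLS estimates (a, b) online, from which the inertia J and friction B are recovered to retune the speed loop.

```python
import numpy as np

Ts, J_true, B_true = 1e-3, 0.02, 0.005     # assumed sample time, inertia, friction
a_true, b_true = 1 - Ts * B_true / J_true, Ts / J_true

rng = np.random.default_rng(9)
N = 2000
T = rng.uniform(-1, 1, N)                  # persistently exciting torque input
w = np.zeros(N + 1)
for k in range(N):
    w[k + 1] = a_true * w[k] + b_true * T[k] + 1e-4 * rng.standard_normal()

# Recursive least squares with forgetting factor.
theta = np.zeros(2)                        # estimate of [a, b]
P = 1e3 * np.eye(2)                        # inverse-information matrix
lam = 0.999
for k in range(N):
    phi = np.array([w[k], T[k]])           # regressor
    gain = P @ phi / (lam + phi @ P @ phi)
    theta += gain * (w[k + 1] - phi @ theta)
    P = (P - np.outer(gain, phi) @ P) / lam

J_hat = Ts / theta[1]                      # recover physical parameters
B_hat = (1 - theta[0]) * J_hat / Ts
```

The forgetting factor lets the estimates track slow changes in inertia or friction, which is what enables the self-adjustment of the outer speed-loop gains described above.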