918 results for least squares matching
Abstract:
(1)H HR-MAS NMR spectroscopy was applied to apple tissue samples deriving from 3 different cultivars. The NMR data were statistically evaluated by analysis of variance (ANOVA), principal component analysis (PCA), and partial least-squares-discriminant analysis (PLS-DA). The intra-apple variability of the compounds was found to be significantly lower than the inter-apple variability within one cultivar. A clear separation of the three different apple cultivars could be obtained by multivariate analysis. Direct comparison of the NMR spectra obtained from apple tissue (with HR-MAS) and juice (with liquid-state HR NMR) showed distinct differences in some metabolites, which are probably due to changes induced by juice preparation. This preliminary study demonstrates the feasibility of (1)H HR-MAS NMR in combination with multivariate analysis as a tool for future chemometric studies applied to intact fruit tissues, e.g. for investigating compositional changes due to physiological disorders, specific growth or storage conditions.
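As a rough illustration of the multivariate step described above (and not the authors' actual pipeline), a minimal PCA sketch on a synthetic matrix of binned NMR intensities might look as follows; the matrix X and the cultivar labels are hypothetical stand-ins.

```python
# Minimal PCA sketch for binned NMR spectra (illustrative only, synthetic data).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 200))             # 30 apple samples x 200 spectral bins (synthetic stand-in)
cultivar = np.repeat(["A", "B", "C"], 10)  # hypothetical cultivar labels

X_scaled = StandardScaler().fit_transform(X)      # mean-center and scale each bin
scores = PCA(n_components=2).fit_transform(X_scaled)

for c in np.unique(cultivar):
    centroid = scores[cultivar == c].mean(axis=0)
    print(c, centroid)   # separation of cultivars would show up as distinct score clusters
```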
Abstract:
Objective: This article seeks to explain the puzzle of why incumbents spend so much on campaigns despite most research finding that their spending has almost no effect on voters. Methods: The article uses ordinary least squares, instrumental variables, and fixed-effects regression to estimate the impact of incumbent spending on election outcomes. The estimation includes an interaction term between incumbent and challenger spending to allow the effect of incumbent spending to depend on the level of challenger spending. Results: The estimation provides strong evidence that spending by the incumbent has a larger positive impact on votes received the more money the challenger spends. Conclusion: Campaign spending by incumbents is most valuable in the races where the incumbent faces a serious challenge. Raising large sums of money to be used in close races is thus a rational choice by incumbents.
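A minimal sketch of the kind of interaction specification described in the Methods, assuming synthetic data and hypothetical variable names (inc_spend, chal_spend, inc_vote) rather than the article's actual dataset:

```python
# Illustrative OLS with an interaction term between incumbent and challenger spending.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "inc_spend": rng.gamma(2.0, 1.0, n),    # incumbent spending (synthetic)
    "chal_spend": rng.gamma(2.0, 1.0, n),   # challenger spending (synthetic)
})
# Synthetic vote share in which incumbent spending matters more when the challenger spends more.
df["inc_vote"] = (60 - 4 * df["chal_spend"] + 1 * df["inc_spend"]
                  + 1.5 * df["inc_spend"] * df["chal_spend"]
                  + rng.normal(0, 2, n))

model = smf.ols("inc_vote ~ inc_spend * chal_spend", data=df).fit()
print(model.params)   # the inc_spend:chal_spend coefficient captures the conditional effect
```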
Abstract:
Over the recent years chirped-pulse, Fourier-transform microwave (CP-FTMW) spectrometers have changed the scope of rotational spectroscopy. The broad frequency and large dynamic range make possible structural determinations in molecular systems of increasingly larger size from measurements of heavy atom (13C, 15N, 18O) isotopes recorded in natural abundance in the same spectrum as that of the parent isotopic species. The design of a broadband spectrometer operating in the 2–8 GHz frequency range with further improvements in sensitivity is presented. The current CP-FTMW spectrometer performance is benchmarked in the analyses of the rotational spectrum of the water heptamer, (H2O)7, in both 2–8 GHz and 6–18 GHz frequency ranges. Two isomers of the water heptamer have been observed in a pulsed supersonic molecular expansion. High level ab initio structural searches were performed to provide plausible low-energy candidates which were directly compared with accurate structures provided from broadband rotational spectra. The full substitution structure of the most stable species has been obtained through the analysis of all possible singly-substituted isotopologues (H218O and HDO), and a least-squares rm(1) geometry of the oxygen framework determined from 16 different isotopic species compares with the calculated O–O equilibrium distances at the 0.01 Å level.
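The rm(1) structure fit reported above is far more involved than anything that fits here; purely as a toy illustration of least-squares geometry fitting to isotopic rotational constants, the sketch below fits a single bond length of a hypothetical diatomic with scipy, using synthetic "observed" constants.

```python
# Toy least-squares fit of one bond length to isotopologue rotational constants.
import numpy as np
from scipy.optimize import least_squares

H = 6.62607015e-34      # Planck constant, J s
AMU = 1.66053907e-27    # kg per atomic mass unit

def b_mhz(r_angstrom, mu_amu):
    """Rigid-rotor rotational constant B = h / (8 pi^2 mu r^2), returned in MHz."""
    mu = mu_amu * AMU
    r = r_angstrom * 1e-10
    return H / (8 * np.pi**2 * mu * r**2) / 1e6

# Hypothetical "observed" constants for two isotopologues of a diatomic (synthetic data).
mu_obs = np.array([6.856, 7.172])                        # reduced masses in amu (e.g. 12C16O, 13C16O)
b_obs = b_mhz(1.128, mu_obs) + np.array([0.5, -0.3])     # small synthetic "experimental" perturbations

fit = least_squares(lambda p: b_mhz(p[0], mu_obs) - b_obs, x0=[1.0])
print("fitted r =", fit.x[0], "angstrom")                # recovers ~1.128 A from the two constants
```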
Abstract:
Classical liquid-state high-resolution (HR) NMR spectroscopy has proved a powerful tool in the metabonomic analysis of liquid food samples like fruit juices. In this paper the application of (1)H high-resolution magic angle spinning (HR-MAS) NMR spectroscopy to apple tissue is presented probing its potential for metabonomic studies. The (1)H HR-MAS NMR spectra are discussed in terms of the chemical composition of apple tissue and compared to liquid-state NMR spectra of apple juice. Differences indicate that specific metabolic changes are induced by juice preparation. The feasibility of HR-MAS NMR-based multivariate analysis is demonstrated by a study distinguishing three different apple cultivars by principal component analysis (PCA). Preliminary results are shown from subsequent studies comparing three different cultivation methods by means of PCA and partial least squares discriminant analysis (PLS-DA) of the HR-MAS NMR data. The compounds responsible for discriminating organically grown apples are discussed. Finally, an outlook of our ongoing work is given including a longitudinal study on apples.
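A common recipe for the PLS-DA step mentioned above is PLS regression on one-hot class labels; the sketch below shows that recipe on synthetic spectra with hypothetical cultivation-method labels, not the authors' data or software.

```python
# Sketch of PLS-DA: PLS regression on one-hot encoded class labels (a common recipe).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 150))           # synthetic spectra: 60 samples x 150 bins
y = np.repeat([0, 1, 2], 20)             # three hypothetical cultivation methods
Y = np.eye(3)[y]                         # one-hot dummy matrix for PLS-DA

pls = PLSRegression(n_components=2).fit(X, Y)
pred = pls.predict(X).argmax(axis=1)     # assign each sample to the highest-scoring class
print("training accuracy:", (pred == y).mean())
# pls.x_loadings_ indicates which spectral regions (metabolites) drive the discrimination.
```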
Abstract:
This thesis examines two panel data sets of 48 states from 1981 to 2009 and utilizes ordinary least squares (OLS) and fixed effects models to explore the relationship between rural Interstate speed limits and fatality rates and whether rural Interstate speed limits affect non-Interstate safety. The models provide evidence that rural Interstate speed limits higher than 55 MPH lead to higher fatality rates on rural Interstates, though this effect is somewhat tempered by reductions in fatality rates on roads other than rural Interstates. These results provide some, but not unanimous, support for the traffic diversion hypothesis that rural Interstate speed limit increases lead to decreases in fatality rates on other roads. To the author’s knowledge, this paper is the first econometric study to differentiate between the effects of 70 MPH speed limits and speed limits above 70 MPH on fatality rates using a multi-state data set. Considering both rural Interstates and other roads, rural Interstate speed limit increases above 55 MPH are responsible for 39,700 net fatalities, 4.1 percent of total fatalities from 1987, the year limits were first raised, to 2009.
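A minimal sketch of a fixed-effects specification of the kind used in the thesis, with state and year dummies on synthetic panel data; variable names and numbers are illustrative only.

```python
# Illustrative fixed-effects regression: state and year dummies absorb panel heterogeneity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
states, years = 48, list(range(1981, 2010))
df = pd.DataFrame([(s, y) for s in range(states) for y in years], columns=["state", "year"])
df["limit_over_55"] = rng.integers(0, 2, len(df))      # 1 if rural Interstate limit > 55 MPH (synthetic)
df["fatality_rate"] = 1.5 + 0.2 * df["limit_over_55"] + rng.normal(0, 0.3, len(df))

fe = smf.ols("fatality_rate ~ limit_over_55 + C(state) + C(year)", data=df).fit()
print(fe.params["limit_over_55"])   # effect of higher limits net of state and year fixed effects
```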
Abstract:
Carbon dioxide (CO2) has been of recent interest due to the issue of greenhouse cooling in the upper atmosphere by species such as CO2 and NO. In the Earth’s upper atmosphere, between altitudes of 75 and 110 km, a collisional energy exchange occurs between CO2 and atomic oxygen, which promotes a population of ground-state CO2 to the bending excited state. The relaxation of CO2 following this excitation is characterized by spontaneous emission at 15 μm. Most of this energy is emitted away from Earth. Due to the low density in the upper atmosphere, most of this energy is not reabsorbed and thus escapes into space, leading to a local cooling effect in the upper atmosphere. To determine the efficiency of the CO2–O atom collisional energy exchange, transient diode laser absorption spectroscopy was used to monitor the population of the first vibrationally excited state, 13CO2(0110) or ν2, as a function of time. The rate coefficient, kO(ν2), for the vibrational relaxation of 13CO2(ν2) by O was determined by fitting laboratory measurements using a home-written linear least squares algorithm. The rate coefficient, kO(ν2), for the vibrational relaxation of 13CO2(ν2) by atomic oxygen at room temperature was determined to be (1.6 ± 0.3) x 10-12 cm3 s-1, which is within the uncertainty of the rate coefficient previously found in this group for 12CO2(ν2) relaxation. The cold temperature kO(ν2) values were determined to be: (2.1 ± 0.8) x 10-12 cm3 s-1 at Tfinal = 274 K, (1.8 ± 0.3) x 10-12 cm3 s-1 at Tfinal = 239 K, (2 ± 1) x 10-12 cm3 s-1 at Tfinal = 208 K, and (1.7 ± 0.3) x 10-12 cm3 s-1 at Tfinal = 186 K. These data did not show a definitive negative temperature dependence comparable to that found for 12CO2 previously.
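The laboratory analysis itself is not reproduced here; as a stand-in for the final linear least-squares step (extracting kO from the linear dependence of the measured relaxation rate on the O-atom density), a sketch with synthetic numbers could look like this:

```python
# Stand-in for the fitting step: the measured relaxation rate rises linearly with [O],
# and the slope of a linear least-squares fit gives the rate coefficient k_O (cm^3 s^-1).
import numpy as np

o_density = np.array([0.5, 1.0, 2.0, 4.0, 6.0]) * 1e14    # [O] in cm^-3 (synthetic values)
k_true = 1.6e-12                                          # cm^3 s^-1, order of magnitude reported above
k_obs = 200.0 + k_true * o_density + np.random.default_rng(4).normal(0, 20, 5)  # decay rates, s^-1

slope, intercept = np.polyfit(o_density, k_obs, 1)        # ordinary linear least squares
print(f"k_O ~ {slope:.2e} cm^3 s^-1")                     # recovers roughly 1.6e-12
```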
Abstract:
2D-3D registration of pre-operative 3D volumetric data with a series of calibrated and undistorted intra-operative 2D projection images has shown great potential in CT-based surgical navigation because it obviates the invasive procedure of the conventional registration methods. In this study, a recently introduced spline-based multi-resolution 2D-3D image registration algorithm has been adapted together with a novel least-squares normalized pattern intensity (LSNPI) similarity measure for image guided minimally invasive spine surgery. A phantom and a cadaver together with their respective ground truths were specially designed to experimentally assess possible factors that may affect the robustness, accuracy, or efficiency of the registration. Our experiments have shown that it is feasible for the assessed 2D-3D registration algorithm to achieve sub-millimeter accuracy in a realistic setup in less than one minute.
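The paper's least-squares normalized pattern intensity (LSNPI) measure is not reproduced here; as a loosely related illustration only, the sketch below evaluates the classical pattern-intensity idea on a crudely normalized difference image between a DRR and a fluoroscopic image (both synthetic arrays).

```python
# Loose illustration of a pattern-intensity-style similarity on a normalized difference image.
import numpy as np

def pattern_intensity(fluoro, drr, sigma=10.0, radius=3):
    """Sum of sigma^2 / (sigma^2 + (D(x) - D(y))^2) over neighbours y of each pixel x,
    where D is a crudely intensity-normalized difference image; higher means more similar."""
    d = fluoro / fluoro.mean() - drr / drr.mean()
    total = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            shifted = np.roll(np.roll(d, dy, axis=0), dx, axis=1)   # neighbour offset (wraps at edges)
            total += np.sum(sigma**2 / (sigma**2 + (d - shifted) ** 2))
    return total

rng = np.random.default_rng(5)
drr = rng.random((64, 64))
print(pattern_intensity(drr + 0.05 * rng.random((64, 64)), drr))    # similar images give a larger value
```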
Abstract:
The early detection of subjects with probable Alzheimer's disease (AD) is crucial for the effective application of treatment strategies. Here we explored the ability of a multitude of linear and non-linear classification algorithms to discriminate between the electroencephalograms (EEGs) of patients with varying degrees of AD and their age-matched control subjects. Absolute and relative spectral power, distribution of spectral power, and measures of spatial synchronization were calculated from recordings of resting eyes-closed continuous EEGs of 45 healthy controls, 116 patients with mild AD and 81 patients with moderate AD, recruited in two different centers (Stockholm, New York). The applied classification algorithms were: principal component linear discriminant analysis (PC LDA), partial least squares LDA (PLS LDA), principal component logistic regression (PC LR), partial least squares logistic regression (PLS LR), bagging, random forest, support vector machines (SVM) and feed-forward neural network. Based on 10-fold cross-validation runs, it could be demonstrated that, even though modern computer-intensive classification algorithms such as random forests, SVM and neural networks show a slight superiority, the more classical classification algorithms performed nearly equally well. Using random forest classification, a considerable sensitivity of up to 85% and a specificity of 78% were reached even for the test involving only mild AD patients, whereas for the comparison of moderate AD vs. controls, using SVM and neural networks, values of 89% and 88% for sensitivity and specificity were achieved. Such a remarkable performance proves the value of these classification algorithms for clinical diagnostics.
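A minimal sketch of the 10-fold cross-validated comparison described above, using scikit-learn stand-ins for a few of the listed classifiers on synthetic EEG-style features (the real feature extraction and patient data are not reproduced):

```python
# Sketch of a 10-fold cross-validated comparison of several classifiers on EEG-style features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(160, 40))        # synthetic spectral-power / synchronization features
y = rng.integers(0, 2, 160)           # 0 = control, 1 = AD (synthetic labels)

models = {
    "PC-LDA": make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis()),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)     # 10-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```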
Abstract:
Generalized linear mixed models (GLMMs) provide an elegant framework for the analysis of correlated data. Because the likelihood does not have a closed form, GLMMs are often fit by computational procedures like penalized quasi-likelihood (PQL). Special cases of these models are generalized linear models (GLMs), which are often fit using algorithms like iterative weighted least squares (IWLS). High computational costs and memory space constraints often make it difficult to apply these iterative procedures to data sets with a very large number of cases. This paper proposes a computationally efficient strategy based on the Gauss-Seidel algorithm that iteratively fits sub-models of the GLMM to subsetted versions of the data. Additional gains in efficiency are achieved for Poisson models, commonly used in disease mapping problems, because of their special collapsibility property, which allows data reduction through summaries. Convergence of the proposed iterative procedure is guaranteed for canonical link functions. The strategy is applied to investigate the relationship between ischemic heart disease, socioeconomic status and age/gender category in New South Wales, Australia, based on outcome data consisting of approximately 33 million records. A simulation study demonstrates the algorithm's reliability in analyzing a data set with 12 million records for a (non-collapsible) logistic regression model.
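For readers unfamiliar with IWLS, a compact numpy sketch of the iteration for a logistic GLM with the canonical logit link is given below; the paper's Gauss-Seidel sub-model strategy itself is not reproduced.

```python
# IWLS (iterative weighted least squares) for a logistic GLM with the canonical logit link.
import numpy as np

def iwls_logistic(X, y, n_iter=25):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))      # inverse logit (fitted probabilities)
        w = mu * (1.0 - mu)                  # IWLS weights
        z = eta + (y - mu) / w               # working response
        XtW = X.T * w                        # X' W
        beta = np.linalg.solve(XtW @ X, XtW @ z)   # weighted least-squares update
    return beta

rng = np.random.default_rng(7)
X = np.column_stack([np.ones(2000), rng.normal(size=(2000, 2))])
true_beta = np.array([-0.5, 1.0, -2.0])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_beta)))
print(iwls_logistic(X, y))                   # close to true_beta
```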
Abstract:
Recurrent event data are largely characterized by the rate function, but smoothing techniques for estimating the rate function have never been rigorously developed or studied in the statistical literature. This paper considers the moment and least squares methods for estimating the rate function from recurrent event data. With an independent censoring assumption on the recurrent event process, we study statistical properties of the proposed estimators and propose bootstrap procedures for the bandwidth selection and for the approximation of confidence intervals in the estimation of the occurrence rate function. It is identified that the moment method without resmoothing via a smaller bandwidth will produce a curve with nicks occurring at the censoring times, whereas there is no such problem with the least squares method. Furthermore, the asymptotic variance of the least squares estimator is shown to be smaller under regularity conditions. However, in the implementation of the bootstrap procedures, the moment method is computationally more efficient than the least squares method because the former approach uses condensed bootstrap data. The performance of the proposed procedures is studied through Monte Carlo simulations and an epidemiological example on intravenous drug users.
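As a rough stand-in for the smoothing idea (not the paper's exact moment or least squares estimators), the sketch below computes a kernel-smoothed occurrence rate from synthetic recurrent event times, dividing by the size of the risk set at each grid point.

```python
# Rough kernel-smoothed occurrence-rate estimate for recurrent event data (illustrative only).
import numpy as np

def smooth_rate(event_times, censor_times, grid, bandwidth):
    """Gaussian-kernel estimate of the event rate at each grid point, divided by the
    number of subjects still under follow-up there (risk set size)."""
    all_events = np.concatenate(event_times)
    censor_times = np.asarray(censor_times)
    rate = np.zeros_like(grid, dtype=float)
    for i, t in enumerate(grid):
        kernel = np.exp(-0.5 * ((all_events - t) / bandwidth) ** 2) / (bandwidth * np.sqrt(2 * np.pi))
        at_risk = np.sum(censor_times >= t)
        rate[i] = kernel.sum() / max(at_risk, 1)
    return rate

rng = np.random.default_rng(8)
censor = rng.uniform(5, 10, size=50)                                        # per-subject censoring times
events = [np.sort(rng.uniform(0, c, rng.poisson(2 * c))) for c in censor]   # ~2 events per time unit
grid = np.linspace(0, 10, 50)
print(smooth_rate(events, censor, grid, bandwidth=0.5)[:5])                 # hovers around 2 early on
```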
Abstract:
The similarity measure is one of the main factors that affect the accuracy of intensity-based 2D/3D registration of X-ray fluoroscopy to CT images. Information theory has been used to derive similarity measures for image registration, leading to the introduction of mutual information, an accurate similarity measure for multi-modal and mono-modal image registration tasks. However, it is known that the standard mutual information measure only takes intensity values into account without considering spatial information, and its robustness is questionable. Previous attempts to incorporate spatial information into mutual information either require computing the entropy of higher-dimensional probability distributions or are not robust to outliers. In this paper, we show how to incorporate spatial information into mutual information without suffering from these problems. Using a variational approximation derived from the Kullback-Leibler bound, spatial information can be effectively incorporated into mutual information via energy minimization. The resulting similarity measure has a least-squares form and can be effectively minimized by a multi-resolution Levenberg-Marquardt optimizer. Experimental results are presented on datasets of two applications: (a) intra-operative patient pose estimation from a few (e.g. 2) calibrated fluoroscopic images, and (b) post-operative cup alignment estimation from a single X-ray radiograph with gonadal shielding.
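For reference, the baseline quantity discussed above, standard (spatially blind) mutual information, can be computed from a joint intensity histogram as sketched below; the paper's spatially informed, least-squares-form measure is not reproduced.

```python
# Standard mutual information from a joint intensity histogram (the baseline measure discussed above).
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_xy = joint / joint.sum()                 # joint intensity distribution
    p_x = p_xy.sum(axis=1, keepdims=True)      # marginals
    p_y = p_xy.sum(axis=0, keepdims=True)
    nonzero = p_xy > 0
    return np.sum(p_xy[nonzero] * np.log(p_xy[nonzero] / (p_x @ p_y)[nonzero]))

rng = np.random.default_rng(9)
a = rng.random((128, 128))
print(mutual_information(a, a))                        # self-MI is high
print(mutual_information(a, rng.random((128, 128))))   # near zero for independent images
```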
Abstract:
Metals price risk management is a key issue related to financial risk in metal markets because of uncertainty of commodity price fluctuation, exchange rate, interest rate changes and huge price risk either to metals’ producers or consumers. Thus, it has been taken into account by all participants in metal markets including metals’ producers, consumers, merchants, banks, investment funds, speculators, traders and so on. Managing price risk provides stable income for both metals’ producers and consumers, so it increases the chance that a firm will invest in attractive projects. The purpose of this research is to evaluate risk management strategies in the copper market. The main tools and strategies of price risk management are hedging and other derivatives such as futures contracts, swaps and options contracts. Hedging is a transaction designed to reduce or eliminate price risk. Derivatives are financial instruments whose returns are derived from other financial instruments, and they are commonly used for managing financial risks. Although derivatives have been around in some form for centuries, their growth has accelerated rapidly during the last 20 years. Nowadays, they are widely used by financial institutions, corporations, professional investors, and individuals. This project is focused on the over-the-counter (OTC) market and its products such as exotic options, particularly Asian options. The first part of the project is a description of basic derivatives and risk management strategies. In addition, this part discusses basic concepts of spot and futures (forward) markets, benefits and costs of risk management, and risks and rewards of positions in the derivative markets. The second part considers valuations of commodity derivatives. In this part, the options pricing model DerivaGem is applied to Asian call and put options on London Metal Exchange (LME) copper because it is important to understand how Asian options are valued and to compare theoretical values of the options with their market observed values. Predicting future trends of copper prices is important and would be essential to manage market price risk successfully. Therefore, the third part is a discussion about econometric commodity models. Based on this literature review, the fourth part of the project reports the construction and testing of an econometric model designed to forecast the monthly average price of copper on the LME. More specifically, this part aims at showing how LME copper prices can be explained by means of a simultaneous equation structural model (two-stage least squares regression) connecting supply and demand variables. A simultaneous econometric model for the copper industry is built:

\[
\begin{cases}
Q_t^{D} = e^{-5.0485}\, P_{t-1}^{-0.1868}\, \mathrm{GDP}_t^{\,1.7151}\, e^{0.0158\,\mathrm{IP}_t} \\
Q_t^{S} = e^{-3.0785}\, P_{t-1}^{\,0.5960}\, T_t^{\,0.1408}\, P_{\mathrm{OIL}(t)}^{-0.1559}\, \mathrm{USDI}_t^{\,1.2432}\, \mathrm{LIBOR}_{t-6}^{-0.0561} \\
Q_t^{D} = Q_t^{S}
\end{cases}
\]

\[
P_{t-1}^{\mathrm{CU}} = e^{-2.5165}\, \mathrm{GDP}_t^{\,2.1910}\, e^{0.0202\,\mathrm{IP}_t}\, T_t^{-0.1799}\, P_{\mathrm{OIL}(t)}^{\,0.1991}\, \mathrm{USDI}_t^{-1.5881}\, \mathrm{LIBOR}_{t-6}^{\,0.0717}
\]

where Q_t^D and Q_t^S are world demand for and supply of copper at time t, respectively. P_(t-1) is the lagged price of copper, which is the focus of the analysis in this part. GDP_t is world gross domestic product at time t, which represents aggregate economic activity. In addition, industrial production should be considered here, so global industrial production growth, noted as IP_t, is included in the model.
T_t is the time variable, which is a useful proxy for technological change. A proxy variable for the cost of energy in producing copper is the price of oil at time t, noted as P_OIL(t). USDI_t is the U.S. dollar index variable at time t, which is an important variable for explaining copper supply and copper prices. Finally, LIBOR_(t-6) is the 6-month lagged one-year London Interbank Offered Rate of interest. Although the model can be applicable to different base metals' industries, omitted exogenous variables, such as the price of a substitute or a combined variable related to the prices of substitutes, have not been considered in this study. Based on this econometric model and using a Monte Carlo simulation analysis, the probabilities that the monthly average copper prices in 2006 and 2007 will be greater than a specific strike price of an option are determined. The final part evaluates risk management strategies, including options strategies, metal swaps and simple options, in relation to the simulation results. Basic options strategies such as bull spreads, bear spreads and butterfly spreads, created by using both call and put options in 2006 and 2007, are evaluated. Consequently, each risk management strategy in 2006 and 2007 is analyzed based on the day of data and the price prediction model. As a result, applications stemming from this project include valuing Asian options, developing a copper price prediction model, forecasting and planning, and decision making for price risk management in the copper market.
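A minimal two-stage least squares sketch in log-linear form, on synthetic data with hypothetical variable names that only echo the model above (the thesis's actual series and estimates are not reproduced):

```python
# Minimal two-stage least squares sketch (log-linear demand with an endogenous price).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
n = 300
df = pd.DataFrame({
    "log_gdp": rng.normal(0, 1, n),      # demand shifter (exogenous)
    "log_oil": rng.normal(0, 1, n),      # supply-side cost shifter, used as instrument
})
# Synthetic equilibrium log price and log demand (price is correlated with the demand error).
demand_err = rng.normal(0, 0.2, n)
df["log_price"] = 0.8 * df["log_oil"] + 0.5 * df["log_gdp"] + 0.5 * demand_err + rng.normal(0, 0.1, n)
df["log_qd"] = -0.2 * df["log_price"] + 1.7 * df["log_gdp"] + demand_err

# Stage 1: project the endogenous price on the exogenous variables and instruments.
df["price_hat"] = smf.ols("log_price ~ log_oil + log_gdp", data=df).fit().fittedvalues
# Stage 2: demand equation with the fitted (instrumented) price.
# Note: coefficients match 2SLS, but these naive second-stage standard errors are not the 2SLS ones.
second = smf.ols("log_qd ~ price_hat + log_gdp", data=df).fit()
print(second.params)   # price elasticity estimate, purged of simultaneity bias
```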
Abstract:
Heroin prices are a reflection of supply and demand, and similar to any other market, profits motivate participation. The intent of this research is to examine how changes in Afghan opium production caused by political conflict, together with government policies, affect Europe’s heroin market. If the Taliban remain in power, or a new Afghan government is formed, the changes will affect the heroin market in Europe to a certain degree. In the heroin market, the degree of change is dependent on many socioeconomic forces such as law enforcement, corruption, and proximity to Afghanistan. An econometric model that examines the degree of these socioeconomic effects has not been applied to the heroin trade in Afghanistan before. This research uses a two-stage least squares econometric model to reveal the supply and demand of heroin in 36 different countries from the Middle East to Western Europe in 2008. An application of the two-stage least squares model to the heroin market in Europe will attempt to predict the socioeconomic consequences of Afghan opium production.
Abstract:
INTRODUCTION: Ultra-high-field whole-body systems (7.0 T) have a high potential for future human in vivo magnetic resonance imaging (MRI). In musculoskeletal MRI, biochemical imaging of articular cartilage may benefit, in particular. Delayed gadolinium-enhanced MRI of cartilage (dGEMRIC) and T2 mapping have shown potential at 3.0 T. Although dGEMRIC allows the determination of the glycosaminoglycan content of articular cartilage, T2 mapping is a promising tool for the evaluation of water and collagen content. In addition, the evaluation of zonal variation, based on tissue anisotropy, provides an indicator of the nature of cartilage, i.e., hyaline or hyaline-like articular cartilage. Thus, the aim of our study was to show the feasibility of in vivo dGEMRIC, and T2 and T2* relaxation measurements, at 7.0 T MRI; and to evaluate the potential of T2 and T2* measurements in an initial patient study after matrix-associated autologous chondrocyte transplantation (MACT) in the knee. MATERIALS AND METHODS: MRI was performed on a whole-body 7.0 T MR scanner using a dedicated circular polarization knee coil. The protocol consisted of an inversion recovery sequence for dGEMRIC, a multiecho spin-echo sequence for standard T2 mapping, a gradient-echo sequence for T2* mapping and a morphologic PD SPACE sequence. Twelve healthy volunteers (mean age, 26.7 +/- 3.4 years) and 4 patients (mean age, 38.0 +/- 14.0 years) were enrolled 29.5 +/- 15.1 months after MACT. For dGEMRIC, 5 healthy volunteers (mean age, 32.4 +/- 11.2 years) were included. T1 maps were calculated using a nonlinear, 2-parameter, least squares fit analysis. Using a region-of-interest analysis, mean cartilage relaxation rate was determined as T1 (0) for precontrast measurements and T1 (Gd) for postcontrast gadopentetate dimeglumine [Gd-DTPA(2-)] measurements. T2 and T2* maps were obtained using a pixelwise, monoexponential, non-negative least squares fit analysis; region-of-interest analysis was carried out for deep and superficial cartilage aspects. Statistical evaluation was performed by analyses of variance. RESULTS: Mean T1 (dGEMRIC) values for healthy volunteers showed slightly different results for femoral [T1 (0): 1259 +/- 277 ms; T1 (Gd): 683 +/- 141 ms] compared with tibial cartilage [T1 (0): 1093 +/- 281 ms; T1 (Gd): 769 +/- 150 ms]. Global mean T2 relaxation for healthy volunteers showed comparable results for femoral (T2: 56.3 +/- 15.2 ms; T2*: 19.7 +/- 6.4 ms) and patellar (T2: 54.6 +/- 13.0 ms; T2*: 19.6 +/- 5.2 ms) cartilage, but lower values for tibial cartilage (T2: 43.6 +/- 8.5 ms; T2*: 16.6 +/- 5.6 ms). All healthy cartilage sites showed a significant increase from deep to superficial cartilage (P < 0.001). Within healthy cartilage sites in MACT patients, adequate values could be found for T2 (56.6 +/- 13.2 ms) and T2* (18.6 +/- 5.3 ms), which also showed a significant stratification. Within cartilage repair tissue, global mean values showed no difference, with 55.9 +/- 4.9 ms for T2 and 16.2 +/- 6.3 ms for T2*. However, zonal assessment showed only a slight and not significant increase from deep to superficial cartilage (T2: P = 0.174; T2*: P = 0.150). CONCLUSION: In vivo T1 dGEMRIC assessment in healthy cartilage, and T2 and T2* mapping in healthy and reparative articular cartilage, seem to be possible at 7.0 T MRI. For T2 and T2*, zonal variation of articular cartilage could also be evaluated at 7.0 T.
This zonal assessment of deep and superficial cartilage aspects shows promising results for the differentiation of healthy and affected articular cartilage. In future studies, optimized protocol selection, and sophisticated coil technology, together with increased signal at ultra-high-field MRI, may lead to advanced biochemical cartilage imaging.
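The pixelwise monoexponential fit described in the Methods can be sketched for a single pixel's multi-echo signal with scipy's curve_fit; the echo times and signal values below are synthetic.

```python
# Monoexponential T2 fit for one pixel's multi-echo signal: S(TE) = S0 * exp(-TE / T2).
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(te, s0, t2):
    return s0 * np.exp(-te / t2)

te = np.arange(10, 90, 10, dtype=float)          # echo times in ms (synthetic)
rng = np.random.default_rng(11)
signal = mono_exp(te, 1000.0, 55.0) + rng.normal(0, 5, te.size)   # ~55 ms T2, as in healthy cartilage

popt, _ = curve_fit(mono_exp, te, signal, p0=[signal[0], 40.0])   # two-parameter least-squares fit
print(f"fitted T2 = {popt[1]:.1f} ms")
# A T1 map from inversion-recovery data would use an analogous per-pixel nonlinear fit.
```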