918 results for quasi-least squares
Abstract:
BACKGROUND Functional brain images such as Single-Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) have been widely used to guide clinicians in Alzheimer's Disease (AD) diagnosis. However, the subjectivity involved in their evaluation has favoured the development of Computer Aided Diagnosis (CAD) systems. METHODS A novel combination of feature extraction techniques is proposed to improve the diagnosis of AD. Firstly, Regions of Interest (ROIs) are selected by means of a t-test carried out on 3D Normalised Mean Square Error (NMSE) features restricted to lie within a predefined brain activation mask. In order to address the small sample-size problem, the dimension of the feature space is further reduced by Large Margin Nearest Neighbours using a rectangular matrix (LMNN-RECT), Principal Component Analysis (PCA) or Partial Least Squares (PLS) (the latter two also analysed with an LMNN transformation). Regarding the classifiers, kernel Support Vector Machines (SVMs) and LMNN using Euclidean, Mahalanobis and Energy-based metrics were compared. RESULTS Several experiments were conducted to evaluate the proposed LMNN-based feature extraction algorithms and their benefits as: i) a linear transformation of the PLS- or PCA-reduced data, ii) a feature reduction technique, and iii) a classifier (with Euclidean, Mahalanobis or Energy-based methodology). The system was evaluated by means of k-fold cross-validation, yielding accuracy, sensitivity and specificity values of 92.78%, 91.07% and 95.12% (for SPECT) and 90.67%, 88% and 93.33% (for PET), respectively, when the NMSE-PLS-LMNN feature extraction method was used in combination with an SVM classifier, thus outperforming recently reported baseline methods. CONCLUSIONS All the proposed methods turned out to be valid solutions for the presented problem. One advance is the robustness of the LMNN algorithm, which not only provides a higher separation rate between the classes but also, in combination with NMSE and PLS, makes the variation of this rate more stable. Another advance is their generalization ability, since the experiments were performed on two image modalities (SPECT and PET).
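As an illustration of this kind of pipeline, the following is a minimal sketch on synthetic data: PLS latent scores are used as reduced features and fed to an SVM, evaluated with stratified k-fold cross-validation. The NMSE/ROI selection and LMNN steps of the paper are omitted, and the helper class, data shapes and labels are all hypothetical placeholders.

```python
# Sketch only: PLS-score feature reduction + SVM with k-fold CV on synthetic data.
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

class PLSFeatures(BaseEstimator, TransformerMixin):
    """Use PLS latent scores as reduced features (hypothetical helper)."""
    def __init__(self, n_components=10):
        self.n_components = n_components
    def fit(self, X, y):
        self.pls_ = PLSRegression(n_components=self.n_components).fit(X, y)
        return self
    def transform(self, X):
        return self.pls_.transform(X)

rng = np.random.default_rng(0)
X = rng.normal(size=(97, 500))      # placeholder voxel-intensity features per subject
y = rng.integers(0, 2, size=97)     # placeholder labels: 0 = control, 1 = AD

pipe = Pipeline([("pls", PLSFeatures(n_components=10)), ("svm", SVC(kernel="rbf", C=1.0))])
acc, sen, spe = [], [], []
for tr, te in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    pred = pipe.fit(X[tr], y[tr]).predict(X[te])
    tp = np.sum((pred == 1) & (y[te] == 1)); tn = np.sum((pred == 0) & (y[te] == 0))
    fp = np.sum((pred == 1) & (y[te] == 0)); fn = np.sum((pred == 0) & (y[te] == 1))
    acc.append((tp + tn) / len(te)); sen.append(tp / max(tp + fn, 1)); spe.append(tn / max(tn + fp, 1))
print(f"accuracy={np.mean(acc):.3f}  sensitivity={np.mean(sen):.3f}  specificity={np.mean(spe):.3f}")
```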
Abstract:
Body mass and body condition are often tightly linked to animal health and fitness in the wild and thus are key measures for ecophysiologists and behavioral ecologists. In some animals, such as large seabird species, obtaining indexes of structural size is relatively easy, whereas measuring body mass under specific field circumstances may be more of a challenge. Here, we suggest an alternative, easily measurable, and reliable surrogate of body mass in field studies: body girth. Using 234 free-living king penguins (Aptenodytes patagonicus) at various stages of molt and breeding, we measured body girth under the flippers, body mass, and bill and flipper length. We found that body girth was strongly and positively related to body mass in both molting (R² = 0.91) and breeding (R² = 0.73) birds, with the mean error around our predictions being 6.4%. Body girth appeared to be a reliable proxy for body mass because the relationship did not vary according to year and experimenter, bird sex, or stage within breeding groups. Body girth was, however, a weak proxy for body mass in birds at the end of molt, probably because most of those birds had reached a critical depletion of energy stores. Body condition indexes established from ordinary least squares regressions of either body girth or body mass on structural size were highly correlated (r_s = 0.91), suggesting that body girth was as good as body mass for establishing body condition indexes in king penguins. Body girth may prove a useful proxy for body mass for estimating body condition in field investigations and could likely provide similar information in other penguins and large animals that may be difficult to weigh in the wild.
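A rough sketch of the regression-based approach described above, on simulated measurements (all values and variable names are hypothetical): body mass is regressed on body girth by OLS, and condition indexes are taken as residuals of mass or girth regressed on structural size.

```python
# Sketch only: OLS of body mass on body girth, and residual-based condition indexes.
import numpy as np
import statsmodels.api as sm
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 234
flipper = rng.normal(340, 10, n)                                 # structural size (mm), simulated
girth = rng.normal(66, 6, n)                                     # body girth (cm), simulated
mass = 0.18 * girth + 0.005 * flipper + rng.normal(0, 0.6, n)    # body mass (kg), simulated

# Predictive relationship between girth and mass
girth_fit = sm.OLS(mass, sm.add_constant(girth)).fit()
print(f"R2 = {girth_fit.rsquared:.2f}")

# Condition indexes: residuals from OLS of mass (or girth) on structural size
ci_mass = sm.OLS(mass, sm.add_constant(flipper)).fit().resid
ci_girth = sm.OLS(girth, sm.add_constant(flipper)).fit().resid
rho, _ = spearmanr(ci_mass, ci_girth)
print(f"Spearman correlation between condition indexes: {rho:.2f}")
```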
Abstract:
Lisdexamfetamine dimesylate (LDX) is a long-acting, prodrug stimulant therapy for patients with attention-deficit/hyperactivity disorder (ADHD). This randomized placebo-controlled trial of an optimized daily dose of LDX (30, 50 or 70 mg) was conducted in children and adolescents (aged 6-17 years) with ADHD. To evaluate the efficacy of LDX throughout the day, symptoms and behaviors of ADHD were evaluated using an abbreviated version of the Conners' Parent Rating Scale-Revised (CPRS-R) at 1000, 1400 and 1800 hours following early morning dosing (0700 hours). Osmotic-release oral system methylphenidate (OROS-MPH) was included as a reference treatment, but the study was not designed to support a statistical comparison between LDX and OROS-MPH. The full analysis set comprised 317 patients (LDX, n = 104; placebo, n = 106; OROS-MPH, n = 107). At baseline, CPRS-R total scores were similar across treatment groups. At endpoint, differences (active treatment - placebo) in least squares (LS) mean change from baseline CPRS-R total scores were statistically significant (P < 0.001) throughout the day for LDX (effect sizes: 1000 hours, 1.42; 1400 hours, 1.41; 1800 hours, 1.30) and OROS-MPH (effect sizes: 1000 hours, 1.04; 1400 hours, 0.98; 1800 hours, 0.92). Differences in LS mean change from baseline to endpoint were statistically significant (P < 0.001) for both active treatments in all four subscales of the CPRS-R (ADHD index, oppositional, hyperactivity and cognitive). In conclusion, improvements relative to placebo in ADHD-related symptoms and behaviors in children and adolescents receiving a single morning dose of LDX or OROS-MPH were maintained throughout the day and were ongoing at the last measurement in the evening (1800 hours).
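The least-squares mean analysis mentioned above can be sketched roughly as follows, assuming simulated scores and a simple ANCOVA-style model (treatment indicator plus baseline covariate). This is not the trial's actual statistical model, and the effect-size definition is a generic standardized difference.

```python
# Sketch only: least-squares mean difference (active - placebo) from an ANCOVA-type model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 100
df = pd.DataFrame({"ldx": np.repeat([0, 1], n),            # 0 = placebo, 1 = active (simulated)
                   "baseline": rng.normal(40, 8, 2 * n)})
df["change"] = -3.0 - 12.0 * df["ldx"] + 0.3 * (df["baseline"] - 40) + rng.normal(0, 8, 2 * n)

model = smf.ols("change ~ ldx + baseline", data=df).fit()
ls_diff = model.params["ldx"]                              # placebo-adjusted LS mean difference
effect_size = ls_diff / np.sqrt(model.mse_resid)
print(f"LS mean difference = {ls_diff:.1f}, effect size = {effect_size:.2f}")
```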
Abstract:
The main contribution of this work is to show that the absorptive capacity of economies changes depending on whether the country is the leader or a follower. We also consider other variables such as internal R&D, external R&D, the development of the financial system, and institutions. To this end, we first test for the presence of a unit root and then establish a cointegration relationship between the variables in the model, so that long-run conclusions can be drawn. Finally, to estimate the model we use an econometric technique that combines the traditional treatment of panel data with cointegration techniques: Dynamic Ordinary Least Squares (DOLS). This technique overcomes the limitations of OLS, whose distribution is generally non-standard because of a finite-sample bias (caused either by the endogeneity of the explanatory variables or by serial correlation in the disturbance). Using a panel covering 8 OECD countries over 1973-2004 for the business sector, we obtain several results, among which we highlight that internal R&D, external R&D, the technological frontier, absorptive capacity and the development of institutions have a positive impact on the level of total factor productivity (TFP), whereas the development of the financial system has a negative impact. Keywords: sources of R&D, technological frontier, absorptive capacity, unit roots, cointegration, DOLS.
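A minimal sketch of the DOLS idea for a single series, with synthetic I(1) regressors standing in for the R&D and TFP variables: the level equation is augmented with leads and lags of the differenced regressors before applying OLS, which is how DOLS corrects for endogeneity and serial correlation.

```python
# Sketch only: Dynamic OLS for a single synthetic series (leads/lags of differenced regressors).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
T = 64
rd_int = np.cumsum(rng.normal(size=T))            # internal R&D stock, simulated I(1) series
rd_ext = np.cumsum(rng.normal(size=T))            # external R&D stock, simulated I(1) series
tfp = 0.5 * rd_int + 0.3 * rd_ext + rng.normal(scale=0.5, size=T)

df = pd.DataFrame({"tfp": tfp, "rd_int": rd_int, "rd_ext": rd_ext})
p = 2                                             # number of leads and lags
for var in ("rd_int", "rd_ext"):
    d = df[var].diff()
    for k in range(-p, p + 1):
        df[f"d_{var}_{k}"] = d.shift(-k)          # k < 0: lags, k > 0: leads

df = df.dropna()
dols = sm.OLS(df["tfp"], sm.add_constant(df.drop(columns="tfp"))).fit()
print(dols.params[["rd_int", "rd_ext"]])          # long-run cointegrating coefficients
```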
Abstract:
The research reported here investigates the antecedents of the intention to use home broker systems from the perspective of stock market investors. To this end, a theoretical model was developed and research hypotheses were proposed, drawing on theories of information systems acceptance, diffusion of innovation, trust in virtual environments, and user satisfaction. The proposed model and the research hypotheses were tested using structural equation modeling based on Partial Least Squares (PLS), applied to 152 valid questionnaires collected through a web survey of Brazilian stock market investors. Compatibility, perceived usefulness and perceived ease of use were identified as statistically significant antecedents of user satisfaction with the home broker system, which in turn had a statistically significant effect on the intention to use the system. The academic and managerial implications of the study are also presented, along with its limitations and a research agenda for this important area of knowledge.
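A rough sketch of the structural part of such a model, using simple composite scores and OLS paths with bootstrap resampling as a stand-in for a full PLS structural equation estimation; the constructs, indicator items and path coefficients below are simulated placeholders, not the study's data.

```python
# Sketch only: composite scores + OLS paths + bootstrap, as a stand-in for PLS-SEM.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 152
scores = {}
for name in ("compatibility", "usefulness", "ease_of_use"):
    latent = rng.normal(size=n)
    items = latent[:, None] + 0.5 * rng.normal(size=(n, 3))   # three simulated indicators
    scores[name] = items.mean(axis=1)                          # composite construct score
df = pd.DataFrame(scores)
df["satisfaction"] = (0.4 * df["compatibility"] + 0.3 * df["usefulness"]
                      + 0.2 * df["ease_of_use"] + 0.5 * rng.normal(size=n))
df["intention"] = 0.6 * df["satisfaction"] + 0.5 * rng.normal(size=n)

# Structural paths estimated by least squares
path1 = sm.OLS(df["satisfaction"],
               sm.add_constant(df[["compatibility", "usefulness", "ease_of_use"]])).fit()
path2 = sm.OLS(df["intention"], sm.add_constant(df["satisfaction"])).fit()

# Bootstrap confidence interval for the satisfaction -> intention path
boot = []
for _ in range(500):
    idx = rng.integers(0, n, n)
    yb = df["intention"].to_numpy()[idx]
    Xb = sm.add_constant(df["satisfaction"].to_numpy()[idx])
    boot.append(sm.OLS(yb, Xb).fit().params[1])
print(path1.params, path2.params["satisfaction"], np.percentile(boot, [2.5, 97.5]))
```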
Abstract:
Accumulation of fat in the liver increases the risk of developing fibrosis and cirrhosis and is associated with development of the metabolic syndrome. Here, to identify genes or gene pathways that may underlie the genetic susceptibility to fat accumulation in the liver, we studied A/J and C57BL/6 mice, which are resistant and sensitive to diet-induced hepatosteatosis and obesity, respectively. We performed comparative transcriptomic and lipidomic analysis of the livers of both strains of mice fed a high-fat diet for 2, 10, and 30 days. We found that resistance to steatosis in A/J mice was associated with the following: (i) a coordinated up-regulation of 10 genes controlling peroxisome biogenesis and β-oxidation; (ii) an increased expression of the elongase Elovl5 and the desaturases Fads1 and Fads2. In agreement with these observations, peroxisomal β-oxidation was increased in livers of A/J mice, and lipidomic analysis showed increased concentrations of long chain fatty acid-containing triglycerides, arachidonic acid-containing lysophosphatidylcholine, and 2-arachidonylglycerol, a cannabinoid receptor agonist. We found that the anti-inflammatory CB2 receptor was the main hepatic cannabinoid receptor and was highly expressed in Kupffer cells. We further found that A/J mice had a lower pro-inflammatory state, as determined by lower plasma levels of IL-1β and granulocyte-CSF and reduced hepatic expression of their mRNAs, which were found only in Kupffer cells. This suggests that increased 2-arachidonylglycerol production may limit Kupffer cell activity. Collectively, our data suggest that genetic variations in the expression of peroxisomal β-oxidation genes and of genes controlling the production of an anti-inflammatory lipid may underlie the differential susceptibility to diet-induced hepatic steatosis and pro-inflammatory state.
Abstract:
For a wide range of environmental, hydrological, and engineering applications there is a fast-growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove to be inadequate in complex environments. For a typical crosshole georadar survey the potential improvement in resolution when using waveform-based approaches instead of ray-based approaches is in the range of one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While in exploration seismology waveform tomographic imaging has become well established over the past two decades, it is still comparatively underdeveloped in the georadar domain despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes to synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments. To this end, the general motivation of my thesis is the evaluation of the robustness and limitations of waveform inversion algorithms for crosshole georadar data in order to apply such schemes to a wide range of real-world problems. One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which is unknown in reality. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem. Therefore, accurate knowledge of the source wavelet is critically important for successful application of such schemes. Relatively small differences in the estimated source wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source wavelet estimation technique is simple yet effective, and is able to provide remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and electrical conductivity as well as significant ambient noise in the recorded data.
Furthermore, our tests also indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when directly incorporating the wavelet estimation into the inverse problem. Another critical issue with crosshole georadar waveform inversion schemes that clearly needs to be investigated is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters. This is crucial because, in reality, these parameters are known to be frequency-dependent and complex, and recorded georadar data may therefore show significant dispersive behaviour. In particular, in the presence of water, there is a wide body of evidence showing that the dielectric permittivity can be significantly frequency-dependent over the GPR frequency range, owing to a variety of relaxation processes. The second part of my thesis is therefore dedicated to evaluating the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source wavelet estimation procedure, which is partially able to account for the frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and has the ability to provide adequate tomographic reconstructions.
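The deconvolution-based source-wavelet estimation discussed above can be sketched, under strong simplifications, as a frequency-domain division of observed traces by synthetic responses with water-level regularization, averaged over traces. The FDTD forward modelling is replaced here by a placeholder circular convolution, and all parameter values are illustrative rather than taken from the thesis.

```python
# Sketch only: frequency-domain source-wavelet estimation by regularized deconvolution.
import numpy as np

def estimate_wavelet(observed, greens, eps=1e-2):
    """observed, greens: (n_traces, n_samples) arrays; returns an averaged wavelet estimate."""
    Obs = np.fft.rfft(observed, axis=1)
    Grn = np.fft.rfft(greens, axis=1)
    water = eps * np.max(np.abs(Grn), axis=1, keepdims=True)     # water-level regularization
    W = Obs * np.conj(Grn) / (np.abs(Grn) ** 2 + water ** 2)
    return np.fft.irfft(W.mean(axis=0), n=observed.shape[1])

# Synthetic demo: a known wavelet circularly convolved with random "impulse responses"
rng = np.random.default_rng(5)
n_tr, n_t = 20, 256
t = np.arange(n_t) - 40.0
a = 0.05
true_wavelet = (1 - 2 * (a * t) ** 2) * np.exp(-(a * t) ** 2)    # Ricker-like pulse
greens = rng.normal(size=(n_tr, n_t)) * np.exp(-np.arange(n_t) / 60.0)
observed = np.fft.irfft(np.fft.rfft(greens, axis=1) * np.fft.rfft(true_wavelet), n=n_t, axis=1)
observed += 0.01 * rng.normal(size=observed.shape)

est = estimate_wavelet(observed, greens)
print(f"correlation with true wavelet: {np.corrcoef(est, true_wavelet)[0, 1]:.3f}")
```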
Abstract:
Geoelectrical techniques are widely used to monitor groundwater processes, yet surprisingly few studies have considered audio (AMT) and radio (RMT) magnetotellurics for such purposes. In this numerical investigation, we analyze to what extent inversion results based on AMT and RMT monitoring data can be improved by (1) time-lapse difference inversion; (2) incorporation of statistical information about the expected model update (i.e., the model regularization is based on a geostatistical model); (3) using alternative model norms to quantify temporal changes (i.e., approximations of l1 and Cauchy norms using iteratively reweighted least squares); and (4) constraining model updates to predefined ranges (i.e., using Lagrange multipliers to allow only increases or only decreases of electrical resistivity with respect to background conditions). To do so, we consider a simple illustrative model and a more realistic test case related to seawater intrusion. The results are encouraging and show significant improvements when using time-lapse difference inversion with non-l2 model norms. Artifacts that may arise when imposing compactness of regions with temporal changes can be suppressed through inequality constraints to yield models without oscillations outside the true region of temporal changes. Based on these results, we recommend approximate l1-norm solutions, as they can resolve both sharp and smooth interfaces within the same model.
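A minimal sketch of the iteratively reweighted least-squares (IRLS) approximation of an l1 model norm, demonstrated on a generic linear inverse problem; the forward operator, data, and regularization weight below are synthetic placeholders rather than AMT/RMT quantities.

```python
# Sketch only: IRLS approximation of an l1 model-update norm in a linear inversion.
import numpy as np

rng = np.random.default_rng(6)
n_data, n_model = 80, 120
G = rng.normal(size=(n_data, n_model))        # placeholder linearized forward operator
m_true = np.zeros(n_model)
m_true[50:60] = 1.0                           # sharp, compact temporal change
d = G @ m_true + 0.05 * rng.normal(size=n_data)

lam, eps = 5.0, 1e-3
m = np.zeros(n_model)
for _ in range(20):
    w = 1.0 / np.sqrt(m ** 2 + eps ** 2)      # reweighting: quadratic penalty approximates sum(|m_i|)
    m = np.linalg.solve(G.T @ G + lam * np.diag(w), G.T @ d)
print(np.round(m[45:65], 2))                  # recovered update concentrates in the true region
```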
Abstract:
The Maximum Capture problem (MAXCAP) is a decision model that addresses the issue of location in a competitive environment. This paper presents a new approach to determine which store attributes (other than distance) should be included in the new Market Capture Models and how they ought to be reflected using the Multiplicative Competitive Interaction model. The methodology involves the design and development of a survey, and the application of factor analysis and ordinary least squares. The methodology has been applied to the supermarket sector in two different scenarios: Milton Keynes (Great Britain) and Barcelona (Spain).
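A minimal sketch of estimating a Multiplicative Competitive Interaction model with the standard log-centering transformation and OLS; the store attributes, shares and exponents below are simulated, and the survey and factor-analysis stage of the paper is omitted.

```python
# Sketch only: MCI model estimated via log-centering + OLS on simulated attributes.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n_stores, n_attr = 12, 3
A = rng.lognormal(mean=0.0, sigma=0.3, size=(n_stores, n_attr))   # attributes, e.g. size, service, distance
beta_true = np.array([1.2, 0.8, -1.5])                            # negative exponent for the distance-like attribute
util = np.prod(A ** beta_true, axis=1) * rng.lognormal(0.0, 0.1, n_stores)
share = util / util.sum()                                         # observed market shares

# Log-centering transformation: deviations from the (log) mean across stores
y = np.log(share) - np.log(share).mean()
X = np.log(A) - np.log(A).mean(axis=0)
mci = sm.OLS(y, X).fit()                                          # no intercept after centering
print(mci.params)                                                 # estimated MCI exponents
```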
Abstract:
We construct a weighted Euclidean distance that approximates any distance or dissimilarity measure between individuals that is based on a rectangular cases-by-variables data matrix. In contrast to regular multidimensional scaling methods for dissimilarity data, the method leads to biplots of individuals and variables while preserving all the good properties of dimension-reduction methods that are based on the singular-value decomposition. The main benefits are the decomposition of variance into components along principal axes, which provide the numerical diagnostics known as contributions, and the estimation of nonnegative weights for each variable. The idea is inspired by the distance functions used in correspondence analysis and in principal component analysis of standardized data, where the normalizations inherent in the distances can be considered as differential weighting of the variables. In weighted Euclidean biplots we allow these weights to be unknown parameters, which are estimated from the data to maximize the fit to the chosen distances or dissimilarities. These weights are estimated using a majorization algorithm. Once this extra weight-estimation step is accomplished, the procedure follows the classical path in decomposing the matrix and displaying its rows and columns in biplots.
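A rough sketch of the weight-estimation idea, using a generic bounded least-squares optimizer in place of the majorization algorithm described above; the data, the target dissimilarities and the number of variables are synthetic placeholders.

```python
# Sketch only: fitting nonnegative variable weights so weighted Euclidean distances
# match a target dissimilarity matrix, then an SVD biplot with those weights.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist

rng = np.random.default_rng(8)
X = rng.normal(size=(30, 5))                                  # cases-by-variables data
target = pdist(X * np.array([2.0, 1.0, 0.5, 1.5, 0.1]))       # "observed" dissimilarities

def stress(w):
    d = pdist(X * np.sqrt(np.maximum(w, 0.0)))                # weighted Euclidean distances
    return np.sum((d - target) ** 2)

res = minimize(stress, x0=np.ones(5), bounds=[(0.0, None)] * 5, method="L-BFGS-B")
w_hat = res.x
print(np.round(w_hat, 2))                                     # estimated variable weights

Xw = (X - X.mean(axis=0)) * np.sqrt(w_hat)                    # weighted, centered matrix
U, s, Vt = np.linalg.svd(Xw, full_matrices=False)
case_coords, variable_coords = U[:, :2] * s[:2], Vt[:2].T     # biplot coordinates
```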
Abstract:
Counterfeit pharmaceutical products have become a widespread problem in the last decade. Various analytical techniques have been applied to discriminate between genuine and counterfeit products. Among these, near-infrared (NIR) and Raman spectroscopy have provided promising results. The present study offers a methodology to provide more valuable information to organisations engaged in the fight against counterfeiting of medicines. A database was established by analyzing counterfeits of a particular pharmaceutical product using NIR and Raman spectroscopy. Unsupervised chemometric techniques (i.e. principal component analysis, PCA, and hierarchical cluster analysis, HCA) were implemented to identify the classes within the datasets. Gas chromatography coupled to mass spectrometry (GC-MS) and Fourier transform infrared spectroscopy (FT-IR) were used to determine the number of different chemical profiles within the counterfeits. A comparison with the classes established by NIR and Raman spectroscopy made it possible to evaluate the discriminating power provided by these techniques. Supervised classifiers (i.e. k-Nearest Neighbors, Partial Least Squares Discriminant Analysis, Probabilistic Neural Networks and Counterpropagation Artificial Neural Networks) were applied to the acquired NIR and Raman spectra and the results were compared with those provided by the unsupervised classifiers. The retained strategy for routine applications, founded on the classes identified by NIR and Raman spectroscopy, uses a classification algorithm based on distance measures and Receiver Operating Characteristic (ROC) curves. The model is able to compare the spectrum of a new counterfeit with those of previously analyzed products and to determine whether a new specimen belongs to one of the existing classes, consequently making it possible to establish a link with other counterfeits in the database.
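A minimal sketch of this kind of chemometric workflow on simulated "spectra": PCA for an unsupervised overview, then kNN and a simple PLS-DA (PLS regression on one-hot class membership) as supervised classifiers; the distance- and ROC-based routine classifier of the study is not reproduced here.

```python
# Sketch only: PCA overview + kNN and PLS-DA classification of simulated "spectra".
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer

rng = np.random.default_rng(9)
n_per_class, n_channels, n_classes = 30, 200, 3
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_channels)) for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

scores = PCA(n_components=2).fit_transform(X_tr)              # unsupervised overview (scores plot)

knn = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
print("kNN accuracy:", knn.score(X_te, y_te))

# PLS-DA: PLS regression on one-hot class membership, class = largest predicted response
Y_tr = LabelBinarizer().fit_transform(y_tr)
pls = PLSRegression(n_components=5).fit(X_tr, Y_tr)
print("PLS-DA accuracy:", np.mean(pls.predict(X_te).argmax(axis=1) == y_te))
```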
Abstract:
Time-lapse geophysical measurements are widely used to monitor the movement of water and solutes through the subsurface. Yet commonly used deterministic least squares inversions typically suffer from relatively poor mass recovery, spread overestimation, and a limited ability to appropriately estimate nonlinear model uncertainty. We describe herein a novel inversion methodology designed to reconstruct the three-dimensional distribution of a tracer anomaly from geophysical data and provide consistent uncertainty estimates using Markov chain Monte Carlo simulation. Posterior sampling is made tractable by using a lower-dimensional model space related both to the Legendre moments of the plume and to predefined morphological constraints. Benchmark results using cross-hole ground-penetrating radar travel-time measurements during two synthetic water tracer application experiments involving increasingly complex plume geometries show that the proposed method not only conserves mass but also provides better estimates of plume morphology and posterior model uncertainty than deterministic inversion results.
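A heavily simplified sketch of sampling a low-dimensional anomaly parameterization with a Metropolis algorithm: a 1-D Gaussian anomaly and a placeholder forward model stand in for the Legendre-moment parameterization and GPR travel-time physics of the paper.

```python
# Sketch only: Metropolis sampling of a low-dimensional anomaly parameterization.
import numpy as np

rng = np.random.default_rng(10)
x = np.linspace(0.0, 10.0, 60)

def forward(theta):
    centre, width, amp = theta
    return amp * np.exp(-0.5 * ((x - centre) / width) ** 2)   # placeholder forward response

sigma = 0.05
d_obs = forward(np.array([4.0, 1.2, 2.0])) + sigma * rng.normal(size=x.size)

def log_like(theta):
    if theta[1] <= 0 or theta[2] <= 0:
        return -np.inf
    return -0.5 * np.sum(((d_obs - forward(theta)) / sigma) ** 2)

theta, ll = np.array([5.0, 1.0, 1.0]), -np.inf
samples = []
for i in range(20000):
    prop = theta + 0.05 * rng.normal(size=3)
    ll_prop = log_like(prop)
    if np.log(rng.random()) < ll_prop - ll:                   # Metropolis acceptance rule
        theta, ll = prop, ll_prop
    if i >= 5000:                                             # discard burn-in
        samples.append(theta.copy())
samples = np.array(samples)
print("posterior mean:", samples.mean(axis=0), "posterior std:", samples.std(axis=0))
```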
Abstract:
The analysis of multiexponential decays is challenging because of their complex nature. When analyzing these signals, not only the parameters, but also the orders of the models, have to be estimated. We present an improved spectroscopic technique specially suited for this purpose. The proposed algorithm combines an iterative linear filter with an iterative deconvolution method. A thorough analysis of the noise effect is presented. The performance is tested with synthetic and experimental data.
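As a baseline alternative to the authors' algorithm (which is not reproduced here), the following sketch fits a multiexponential decay by ordinary nonlinear least squares for several candidate model orders and compares the residuals, illustrating why both the parameters and the model order must be estimated.

```python
# Sketch only: multiexponential fit by nonlinear least squares for several model orders.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(11)
t = np.linspace(0.0, 5.0, 400)
signal = 2.0 * np.exp(-t / 0.3) + 1.0 * np.exp(-t / 1.5) + 0.01 * rng.normal(size=t.size)

def multi_exp(t, *p):
    amps, taus = p[0::2], p[1::2]
    return sum(a * np.exp(-t / tau) for a, tau in zip(amps, taus))

for order in (1, 2, 3):
    p0 = [1.0, 0.2, 1.0, 1.0, 1.0, 3.0][: 2 * order]          # (amplitude, time-constant) pairs
    popt, _ = curve_fit(multi_exp, t, signal, p0=p0, bounds=(1e-6, np.inf))
    rss = np.sum((signal - multi_exp(t, *popt)) ** 2)
    print(f"order {order}: residual sum of squares = {rss:.4f}")
```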
Abstract:
Whereas numerical modeling using finite-element methods (FEM) can provide the transient temperature distribution in a component with sufficient accuracy, the development of compact dynamic thermal models that can be used for electrothermal simulation is of the utmost importance. While in most cases single power sources are considered, here we focus on the simultaneous presence of multiple sources. The thermal model takes the form of a thermal impedance matrix containing the thermal impedance transfer functions between two arbitrary ports. Each individual transfer-function element is obtained from the analysis of the temperature transient at one node after a power step at another node. Different options for multiexponential transient analysis are detailed and compared. Among the options explored, small thermal models can be obtained by constrained nonlinear least squares (NLSQ) methods if the order is selected properly using validation signals. The methods are applied to the extraction of dynamic compact thermal models for a new ultrathin chip stack technology (UTCS).
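A minimal sketch of fitting one thermal-impedance element: the step response is approximated by a sum of R_i(1 - exp(-t/tau_i)) terms using bounded nonlinear least squares on synthetic data; the order selection against validation signals described above is not shown, and all values are illustrative.

```python
# Sketch only: bounded nonlinear least-squares fit of a thermal step response.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(12)
t = np.logspace(-4, 1, 200)                                          # time points (s)
zth = 0.8 * (1 - np.exp(-t / 1e-3)) + 1.5 * (1 - np.exp(-t / 0.2))   # synthetic Zth(t) (K/W)
zth += 0.005 * rng.normal(size=t.size)

def residuals(p):
    R, tau = p[0::2], p[1::2]
    model = np.sum(R[:, None] * (1 - np.exp(-t[None, :] / tau[:, None])), axis=0)
    return model - zth

p0 = np.array([1.0, 1e-3, 1.0, 1e-1])                                # [R1, tau1, R2, tau2] starting guess
fit = least_squares(residuals, p0, bounds=(1e-9, np.inf))            # keep resistances and time constants positive
print(np.round(fit.x, 4))
```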
Abstract:
Distance-based regression is a prediction method consisting of two steps: from the distances between observations we obtain latent variables, which then become the regressors in an ordinary least squares linear model. The distances are computed from the original predictors using a suitable dissimilarity function. Since the regressors are, in general, nonlinearly related to the response, selecting them with the usual F-test is not possible. In this work we propose a solution to this predictor selection problem by defining generalized test statistics and adapting a nonparametric bootstrap method to estimate their p-values. A numerical example with automobile insurance data is included.
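A minimal sketch of the two-step procedure, assuming Euclidean distances on synthetic data as the dissimilarity function: the dissimilarity matrix is converted to principal coordinates (latent variables) and those are used as OLS regressors. The generalized tests and bootstrap p-values of the paper are not reproduced.

```python
# Sketch only: dissimilarities -> principal coordinates -> OLS regression.
import numpy as np
import statsmodels.api as sm
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(13)
n = 100
Z = rng.normal(size=(n, 4))                                   # original predictors
y = Z[:, 0] ** 2 + Z[:, 1] + 0.3 * rng.normal(size=n)         # nonlinear relation with the response

D = squareform(pdist(Z))                                      # dissimilarity matrix (Euclidean here)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J                                   # double-centered (Gower) matrix
eigval, eigvec = np.linalg.eigh(B)
order = np.argsort(eigval)[::-1]
k = 5                                                         # latent variables kept as regressors
X_latent = eigvec[:, order[:k]] * np.sqrt(np.maximum(eigval[order[:k]], 0.0))

fit = sm.OLS(y, sm.add_constant(X_latent)).fit()
print(f"R2 = {fit.rsquared:.2f}")
```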