920 results for Least-Squares prediction


Relevance: 80.00%

Abstract:

Lisdexamfetamine dimesylate (LDX) is a long-acting, prodrug stimulant therapy for patients with attention-deficit/hyperactivity disorder (ADHD). This randomized placebo-controlled trial of an optimized daily dose of LDX (30, 50 or 70 mg) was conducted in children and adolescents (aged 6-17 years) with ADHD. To evaluate the efficacy of LDX throughout the day, symptoms and behaviors of ADHD were evaluated using an abbreviated version of the Conners' Parent Rating Scale-Revised (CPRS-R) at 1000, 1400 and 1800 hours following early morning dosing (0700 hours). Osmotic-release oral system methylphenidate (OROS-MPH) was included as a reference treatment, but the study was not designed to support a statistical comparison between LDX and OROS-MPH. The full analysis set comprised 317 patients (LDX, n = 104; placebo, n = 106; OROS-MPH, n = 107). At baseline, CPRS-R total scores were similar across treatment groups. At endpoint, differences (active treatment - placebo) in least squares (LS) mean change from baseline CPRS-R total scores were statistically significant (P < 0.001) throughout the day for LDX (effect sizes: 1000 hours, 1.42; 1400 hours, 1.41; 1800 hours, 1.30) and OROS-MPH (effect sizes: 1000 hours, 1.04; 1400 hours, 0.98; 1800 hours, 0.92). Differences in LS mean change from baseline to endpoint were statistically significant (P < 0.001) for both active treatments in all four subscales of the CPRS-R (ADHD index, oppositional, hyperactivity and cognitive). In conclusion, improvements relative to placebo in ADHD-related symptoms and behaviors in children and adolescents receiving a single morning dose of LDX or OROS-MPH were maintained throughout the day and were ongoing at the last measurement in the evening (1800 hours).
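As a loose illustration of the statistics named in this abstract (least-squares mean change and effect sizes), the sketch below fits an ANCOVA-style OLS model to simulated trial data; the column names, the simulated values, and the effect-size convention (adjusted difference divided by residual SD) are assumptions for illustration only, not taken from the study.

```python
# Minimal sketch: placebo-adjusted least-squares (LS) mean difference and a
# Cohen's-d-style effect size from an ANCOVA-type OLS model on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 100
df = pd.DataFrame({
    "arm": np.repeat(["placebo", "active"], n),
    "baseline": rng.normal(60, 10, 2 * n),
})
# Simulated change-from-baseline scores (active arm improves more).
df["active"] = (df["arm"] == "active").astype(int)
df["change"] = (-5 - 10 * df["active"]
                - 0.2 * (df["baseline"] - 60) + rng.normal(0, 8, 2 * n))

# ANCOVA: change ~ treatment indicator + baseline covariate.
fit = smf.ols("change ~ active + baseline", data=df).fit()
ls_mean_diff = fit.params["active"]                    # adjusted (LS-mean) difference
effect_size = ls_mean_diff / np.sqrt(fit.mse_resid)    # one common convention

print(f"LS mean difference (active - placebo): {ls_mean_diff:.2f}")
print(f"Effect size: {effect_size:.2f}")
```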

Relevance: 80.00%

Abstract:

The main contribution of this work is to show that the absorptive capacity of economies changes depending on whether the country is the leader or a follower. Other variables are also considered, such as internal R&D, external R&D, the development of the financial system and institutions. To this end, we first test for the presence of a unit root and then establish a cointegration relationship between the variables involved in the model, so that long-run conclusions can be drawn. Finally, to estimate the model, we use an econometric technique that combines the traditional treatment of panel data with cointegration techniques: Dynamic Ordinary Least Squares (DOLS). This technique addresses the limitations of OLS, whose distribution is generally non-standard because of a finite-sample bias (caused either by the endogeneity of the explanatory variables or by serial correlation in the disturbance). Using a panel of 8 OECD countries covering 1973-2004 for the business sector, we find, among other results, that internal R&D, external R&D, the technological frontier, absorptive capacity and the development of institutions have a positive impact on the level of TFP, whereas the development of the financial system has a negative impact. Keywords: sources of R&D, technological frontier, absorptive capacity, unit roots, cointegration, DOLS.
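A minimal single-equation sketch of the DOLS idea referenced above, assuming simulated I(1) series and an arbitrary lead/lag order of 2: the dependent variable is regressed on the regressor plus leads and lags of its first differences, so the long-run coefficient is estimated free of the finite-sample endogeneity bias of plain OLS. A panel DOLS would pool such regressions across countries.

```python
# Single-equation DOLS sketch on simulated cointegrated series (illustrative only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T, p = 200, 2                       # sample length, number of leads/lags
x = np.cumsum(rng.normal(size=T))   # I(1) regressor (e.g. an R&D stock)
y = 0.5 * x + rng.normal(size=T)    # cointegrated, long-run coefficient 0.5

dx = np.diff(x, prepend=x[0])
# Regressor matrix: x_t plus Delta-x_{t-p}, ..., Delta-x_{t+p}.
cols = [x] + [np.roll(dx, k) for k in range(-p, p + 1)]
X = np.column_stack(cols)[p:-p]     # drop edges contaminated by np.roll wrap-around
Y = y[p:-p]

res = sm.OLS(Y, sm.add_constant(X)).fit()
print("DOLS long-run coefficient on x:", round(res.params[1], 3))
```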

Relevance: 80.00%

Abstract:

The research reported here investigates the antecedents of the intention to use home broker systems from the perspective of stock market investors. To this end, a theoretical model was developed and research hypotheses were proposed, drawing on theories of information systems acceptance, diffusion of innovation, trust in virtual environments, and user satisfaction. Using structural equation techniques based on Partial Least Squares (PLS), applied to 152 valid questionnaires collected through a web survey of Brazilian stock market investors, the proposed model and the research hypotheses were tested. Compatibility, perceived usefulness, and perceived ease of use were identified as statistically significant antecedents of user satisfaction with the home broker system, which in turn had a statistically significant effect on the intention to use the system. The academic and managerial implications of the work are also presented, along with its limitations and a research agenda for this important area of knowledge.
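The study estimates its model with PLS path modeling; the sketch below is a deliberately simplified stand-in (composite construct scores plus OLS paths with a bootstrap), not the PLS algorithm itself, and all construct and item values are simulated assumptions.

```python
# Simplified stand-in for PLS path modeling: composite construct scores,
# OLS structural paths, and bootstrapped standard errors (simulated data).
import numpy as np

rng = np.random.default_rng(2)
n = 152                                           # sample size as in the study
ease = rng.normal(size=(n, 3)).mean(axis=1)       # perceived ease-of-use items
useful = rng.normal(size=(n, 3)).mean(axis=1)     # perceived usefulness items
compat = rng.normal(size=(n, 3)).mean(axis=1)     # compatibility items
satisf = 0.4 * ease + 0.3 * useful + 0.3 * compat + rng.normal(0, 0.5, n)
intention = 0.6 * satisf + rng.normal(0, 0.5, n)

def ols_coefs(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0][1:]

X = np.column_stack([ease, useful, compat])
paths = ols_coefs(X, satisf)
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)                   # bootstrap resample
    boot.append(ols_coefs(X[idx], satisf[idx]))
boot = np.array(boot)

print("paths to satisfaction (ease, useful, compat):", paths.round(2))
print("bootstrap SEs:", boot.std(axis=0).round(2))
print("path satisfaction -> intention:", ols_coefs(satisf[:, None], intention).round(2))
```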

Relevance: 80.00%

Abstract:

Accumulation of fat in the liver increases the risk to develop fibrosis and cirrhosis and is associated with development of the metabolic syndrome. Here, to identify genes or gene pathways that may underlie the genetic susceptibility to fat accumulation in liver, we studied A/J and C57Bl/6 mice that are resistant and sensitive to diet-induced hepatosteatosis and obesity, respectively. We performed comparative transcriptomic and lipidomic analysis of the livers of both strains of mice fed a high fat diet for 2, 10, and 30 days. We found that resistance to steatosis in A/J mice was associated with the following: (i) a coordinated up-regulation of 10 genes controlling peroxisome biogenesis and β-oxidation; (ii) increased expression of the elongase Elovl5 and the desaturases Fads1 and Fads2. In agreement with these observations, peroxisomal β-oxidation was increased in livers of A/J mice, and lipidomic analysis showed increased concentrations of long chain fatty acid-containing triglycerides, arachidonic acid-containing lysophosphatidylcholine, and 2-arachidonylglycerol, a cannabinoid receptor agonist. We found that the anti-inflammatory CB2 receptor was the main hepatic cannabinoid receptor, which was highly expressed in Kupffer cells. We further found that A/J mice had a lower pro-inflammatory state as determined by lower plasma levels of IL-1β and granulocyte-CSF and reduced hepatic expression of their mRNAs, which were found only in Kupffer cells. This suggests that increased 2-arachidonylglycerol production may limit Kupffer cell activity. Collectively, our data suggest that genetic variations in the expression of peroxisomal β-oxidation genes and of genes controlling the production of an anti-inflammatory lipid may underlie the differential susceptibility to diet-induced hepatic steatosis and pro-inflammatory state.

Relevance: 80.00%

Abstract:

For a wide range of environmental, hydrological, and engineering applications there is a fast-growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove inadequate in complex environments. For a typical crosshole georadar survey, the potential improvement in resolution when using waveform-based approaches instead of ray-based approaches is about one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While waveform tomographic imaging has become well established in exploration seismology over the past two decades, it is still comparatively underdeveloped in the georadar domain despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes to synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments. The general motivation of my thesis is therefore to evaluate the robustness and limitations of waveform inversion algorithms for crosshole georadar data in order to apply such schemes to a wide range of real-world problems.

One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which is unknown in reality. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem. Therefore, accurate knowledge of the source wavelet is critically important for the successful application of such schemes. Relatively small differences in the estimated source wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source wavelet estimation technique is simple yet effective, and is able to provide remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and electrical conductivity, as well as significant ambient noise in the recorded data. Furthermore, our tests also indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when the wavelet estimation is incorporated directly into the inverse problem.

Another critical issue with crosshole georadar waveform inversion schemes that clearly needs to be investigated is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters. This is crucial because, in reality, these parameters are known to be frequency-dependent and complex, and recorded georadar data may therefore show significant dispersive behaviour. In particular, in the presence of water, there is a wide body of evidence showing that the dielectric permittivity can be significantly frequency-dependent over the GPR frequency range, due to a variety of relaxation processes. The second part of my thesis is therefore dedicated to evaluating the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source wavelet estimation procedure, which is partially able to account for the frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and is able to provide adequate tomographic reconstructions.
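A minimal 1-D sketch of the deconvolution step underlying source-wavelet estimation of the kind described above: here the earth impulse responses are known synthetically (standing in for the simulated wavefield of a forward model), and the wavelet is recovered by water-level deconvolution averaged over traces. The signal parameters and regularization level are illustrative assumptions, and the full scheme would alternate this step with waveform inversion iterations.

```python
# 1-D water-level deconvolution sketch for source-wavelet estimation.
import numpy as np

rng = np.random.default_rng(3)
nt, ntrace = 256, 20
t = np.arange(nt) * 1e-9                          # 1 ns sampling (illustrative)

f0 = 100e6                                        # 100 MHz centre frequency
w_true = np.exp(-((t - 30e-9) * 2 * f0) ** 2) * np.cos(2 * np.pi * f0 * (t - 30e-9))

# Earth impulse responses (sparse reflectivity spikes), one per trace.
G = np.zeros((ntrace, nt))
for i in range(ntrace):
    G[i, rng.integers(10, nt - 10, 3)] = rng.normal(size=3)

# Observed traces: circular convolution of impulse response and wavelet, plus noise.
Gf = np.fft.rfft(G, axis=1)
D = np.fft.irfft(Gf * np.fft.rfft(w_true), nt, axis=1)
D += 0.01 * rng.normal(size=D.shape)

# Water-level deconvolution, averaged over traces:
#   W(f) ~ mean_i [ D_i(f) G_i*(f) / (|G_i(f)|^2 + eps) ]
Df = np.fft.rfft(D, axis=1)
eps = 1e-2 * (np.abs(Gf) ** 2).max()
w_est = np.fft.irfft((Df * Gf.conj() / (np.abs(Gf) ** 2 + eps)).mean(axis=0), nt)

print("correlation with true wavelet:", round(np.corrcoef(w_est, w_true)[0, 1], 3))
```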

Relevance: 80.00%

Abstract:

Geoelectrical techniques are widely used to monitor groundwater processes, yet surprisingly few studies have considered audio (AMT) and radio (RMT) magnetotellurics for such purposes. In this numerical investigation, we analyze to what extent inversion results based on AMT and RMT monitoring data can be improved by (1) time-lapse difference inversion; (2) incorporation of statistical information about the expected model update (i.e., the model regularization is based on a geostatistical model); (3) using alternative model norms to quantify temporal changes (i.e., approximations of l1 and Cauchy norms using iteratively reweighted least squares); and (4) constraining model updates to predefined ranges (i.e., using Lagrange multipliers to allow only either increases or decreases of electrical resistivity with respect to background conditions). To do so, we consider a simple illustrative model and a more realistic test case related to seawater intrusion. The results are encouraging and show significant improvements when using time-lapse difference inversion with non-l2 model norms. Artifacts that may arise when imposing compactness of regions with temporal changes can be suppressed through inequality constraints to yield models without oscillations outside the true region of temporal changes. Based on these results, we recommend approximate l1-norm solutions as they can resolve both sharp and smooth interfaces within the same model.
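A minimal sketch of the iteratively reweighted least-squares (IRLS) idea mentioned in item (3): an l1-type penalty on the model is approximated by repeatedly solving weighted least-squares problems. The linear forward operator and the sparse "temporal change" model are toy values, not the AMT/RMT problem itself.

```python
# IRLS sketch: approximate an l1 penalty via reweighted least squares (toy example).
import numpy as np

rng = np.random.default_rng(4)
nd, nm = 80, 50
A = rng.normal(size=(nd, nm))                 # toy linear forward operator
m_true = np.zeros(nm)
m_true[[5, 20, 35]] = [1.0, -2.0, 1.5]        # sparse "temporal change" model
d = A @ m_true + 0.05 * rng.normal(size=nd)

lam, eps = 0.1, 1e-6
m = np.linalg.lstsq(A, d, rcond=None)[0]      # l2 starting model
for _ in range(20):
    # l1 penalty sum|m_j| approximated by m^T R m with R = diag(1/(|m_j|+eps))
    R = np.diag(1.0 / (np.abs(m) + eps))
    m = np.linalg.solve(A.T @ A + lam * R, A.T @ d)

print("nonzero entries recovered at indices:", np.flatnonzero(np.abs(m) > 0.1))
```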

Relevance: 80.00%

Abstract:

The Maximum Capture problem (MAXCAP) is a decision model that addresses the issue of location in a competitive environment. This paper presents a new approach to determine which store attributes (other than distance) should be included in the new Market Capture Models and how they ought to be reflected using the Multiplicative Competitive Interaction model. The methodology involves the design and development of a survey and the application of factor analysis and ordinary least squares. The methodology has been applied to the supermarket sector in two different scenarios: Milton Keynes (Great Britain) and Barcelona (Spain).
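The Multiplicative Competitive Interaction model can be linearized by the standard log-centering transformation and then estimated by ordinary least squares; the sketch below shows that step on simulated data, with the attribute names and coefficient values being illustrative assumptions rather than the paper's survey results.

```python
# MCI attribute coefficients via log-centering transformation + OLS (toy data).
import numpy as np

rng = np.random.default_rng(5)
zones, stores = 30, 5
betas = np.array([1.2, -2.0, 0.8])                           # e.g. size, distance, image

A = rng.lognormal(size=(zones, stores, 3))                   # positive store attributes
U = np.prod(A ** betas, axis=2)                              # utility of store j in zone i
P = U / U.sum(axis=1, keepdims=True)                         # MCI market shares

# Log-centering: log(P_ij / gm_j(P_i.)) = sum_k beta_k log(A_ijk / gm_j(A_i.k))
gm = lambda x: np.exp(np.log(x).mean(axis=1, keepdims=True))  # geometric mean over stores
y = np.log(P / gm(P)).ravel()
X = np.log(A / gm(A)).reshape(-1, 3)

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print("estimated attribute coefficients:", beta_hat.round(2))
```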

Relevance: 80.00%

Abstract:

We construct a weighted Euclidean distance that approximates any distance or dissimilarity measure between individuals that is based on a rectangular cases-by-variables data matrix. In contrast to regular multidimensional scaling methods for dissimilarity data, the method leads to biplots of individuals and variables while preserving all the good properties of dimension-reduction methods that are based on the singular-value decomposition. The main benefits are the decomposition of variance into components along principal axes, which provide the numerical diagnostics known as contributions, and the estimation of nonnegative weights for each variable. The idea is inspired by the distance functions used in correspondence analysis and in principal component analysis of standardized data, where the normalizations inherent in the distances can be considered as differential weighting of the variables. In weighted Euclidean biplots we allow these weights to be unknown parameters, which are estimated from the data to maximize the fit to the chosen distances or dissimilarities. These weights are estimated using a majorization algorithm. Once this extra weight-estimation step is accomplished, the procedure follows the classical path in decomposing the matrix and displaying its rows and columns in biplots.
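The paper estimates the nonnegative variable weights with a majorization algorithm; as a simpler hedged sketch of the same fitting idea, the example below estimates weights so that weighted squared Euclidean distances match given squared dissimilarities, using nonnegative least squares over all pairs. This works on squared quantities and is a simplification, not the authors' algorithm.

```python
# Fit nonnegative variable weights so that sum_k w_k (x_ik - x_jk)^2 matches
# target squared dissimilarities (simplified stand-in, toy data).
import numpy as np
from itertools import combinations
from scipy.optimize import nnls

rng = np.random.default_rng(6)
n, p = 40, 6
X = rng.normal(size=(n, p))
w_true = np.array([2.0, 1.0, 0.5, 0.0, 3.0, 1.5])

pairs = list(combinations(range(n), 2))
diff2 = np.array([(X[i] - X[j]) ** 2 for i, j in pairs])        # (n_pairs, p)
delta2 = diff2 @ w_true + 0.05 * rng.normal(size=len(pairs))    # squared dissimilarities

w_hat, _ = nnls(diff2, delta2)
print("estimated variable weights:", w_hat.round(2))
```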

Relevance: 80.00%

Abstract:

Counterfeit pharmaceutical products have become a widespread problem in the last decade. Various analytical techniques have been applied to discriminate between genuine and counterfeit products. Among these, near-infrared (NIR) and Raman spectroscopy have provided promising results. The present study offers a methodology that provides more valuable information for organisations engaged in the fight against counterfeiting of medicines. A database was established by analyzing counterfeits of a particular pharmaceutical product using NIR and Raman spectroscopy. Unsupervised chemometric techniques (i.e., principal component analysis - PCA and hierarchical cluster analysis - HCA) were implemented to identify the classes within the datasets. Gas chromatography coupled to mass spectrometry (GC-MS) and Fourier transform infrared spectroscopy (FT-IR) were used to determine the number of different chemical profiles within the counterfeits. A comparison with the classes established by NIR and Raman spectroscopy made it possible to evaluate the discriminating power provided by these techniques. Supervised classifiers (i.e., k-Nearest Neighbors, Partial Least Squares Discriminant Analysis, Probabilistic Neural Networks and Counterpropagation Artificial Neural Networks) were applied to the acquired NIR and Raman spectra, and the results were compared to those provided by the unsupervised classifiers. The retained strategy for routine applications, based on the classes identified by NIR and Raman spectroscopy, uses a classification algorithm based on distance measures and Receiver Operating Characteristic (ROC) curves. The model is able to compare the spectrum of a new counterfeit with those of previously analyzed products and to determine whether a new specimen belongs to one of the existing classes, consequently making it possible to establish a link with other counterfeits in the database.
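A minimal sketch of one step in this kind of chemometric pipeline, assuming simulated spectra: PCA for dimension reduction followed by a k-Nearest-Neighbours classifier evaluated by cross-validation. The real NIR/Raman pre-processing (baseline correction, normalisation) and the ROC-based routine strategy of the study are not reproduced here.

```python
# PCA + kNN classification of simulated spectra with cross-validation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
n_per_class, n_wavelengths, n_classes = 30, 200, 3
wl = np.linspace(0, 1, n_wavelengths)
X, y = [], []
for c in range(n_classes):                        # each "chemical class" = shifted peak
    peak = np.exp(-((wl - 0.3 - 0.15 * c) / 0.05) ** 2)
    X.append(peak + 0.05 * rng.normal(size=(n_per_class, n_wavelengths)))
    y += [c] * n_per_class
X, y = np.vstack(X), np.array(y)

clf = make_pipeline(PCA(n_components=5), KNeighborsClassifier(n_neighbors=3))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(3))
```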

Relevance: 80.00%

Abstract:

Time-lapse geophysical measurements are widely used to monitor the movement of water and solutes through the subsurface. Yet commonly used deterministic least squares inversions typically suffer from relatively poor mass recovery, spread overestimation, and limited ability to appropriately estimate nonlinear model uncertainty. We describe herein a novel inversion methodology designed to reconstruct the three-dimensional distribution of a tracer anomaly from geophysical data and provide consistent uncertainty estimates using Markov chain Monte Carlo simulation. Posterior sampling is made tractable by using a lower-dimensional model space related both to the Legendre moments of the plume and to predefined morphological constraints. Benchmark results using cross-hole ground-penetrating radar travel time measurements during two synthetic water tracer application experiments involving increasingly complex plume geometries show that the proposed method not only conserves mass but also provides better estimates of plume morphology and posterior model uncertainty than deterministic inversion results.
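As a toy illustration of the Markov chain Monte Carlo idea in a low-dimensional model space, the sketch below samples a 1-D Gaussian anomaly (centre, width, amplitude) conditioned on noisy synthetic travel-time-like data with a Metropolis sampler. The real study uses Legendre moments, 3-D geometry, and cross-hole GPR physics; everything here is an assumption for illustration.

```python
# Toy Metropolis-Hastings sampler for a low-dimensional anomaly parameterisation.
import numpy as np

rng = np.random.default_rng(8)
z = np.linspace(0, 10, 50)                         # depths of horizontal "rays"

def forward(theta):                                # travel-time perturbation per ray
    c, w, a = theta
    return a * np.exp(-0.5 * ((z - c) / w) ** 2)

theta_true = np.array([4.0, 1.0, 2.0])
sigma = 0.1
data = forward(theta_true) + sigma * rng.normal(size=z.size)

def log_post(theta):
    if not (0 < theta[0] < 10 and 0.1 < theta[1] < 5 and 0 < theta[2] < 10):
        return -np.inf                             # uniform prior bounds
    return -0.5 * np.sum((data - forward(theta)) ** 2) / sigma**2

theta, samples = np.array([5.0, 2.0, 1.0]), []
lp = log_post(theta)
for _ in range(20000):
    prop = theta + 0.05 * rng.normal(size=3)       # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:        # Metropolis acceptance rule
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples[5000:])                 # discard burn-in
print("posterior mean (centre, width, amplitude):", samples.mean(axis=0).round(2))
```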

Relevance: 80.00%

Abstract:

The analysis of multiexponential decays is challenging because of their complex nature. When analyzing these signals, not only the parameters, but also the orders of the models, have to be estimated. We present an improved spectroscopic technique specially suited for this purpose. The proposed algorithm combines an iterative linear filter with an iterative deconvolution method. A thorough analysis of the noise effect is presented. The performance is tested with synthetic and experimental data.
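A minimal sketch of the underlying task, not the iterative filter/deconvolution scheme of the paper: fitting a bi-exponential decay to noisy data with plain nonlinear least squares, assuming a known model order of two.

```python
# Fit y(t) = A1*exp(-t/tau1) + A2*exp(-t/tau2) to noisy data (assumed order = 2).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(9)
t = np.linspace(0, 10, 400)
y = 3.0 * np.exp(-t / 0.5) + 1.0 * np.exp(-t / 3.0) + 0.02 * rng.normal(size=t.size)

def biexp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

p0 = [1, 1, 1, 5]                                   # rough initial guess
popt, pcov = curve_fit(biexp, t, y, p0=p0)
print("amplitudes:", popt[[0, 2]].round(2), "time constants:", popt[[1, 3]].round(2))
```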

Relevance: 80.00%

Abstract:

Whereas numerical modeling using finite-element methods (FEM) can provide the transient temperature distribution in a component with sufficient accuracy, the development of compact dynamic thermal models that can be used for electrothermal simulation is of the utmost importance. While in most cases single power sources are considered, here we focus on the simultaneous presence of multiple sources. The thermal model takes the form of a thermal impedance matrix containing the thermal impedance transfer functions between two arbitrary ports. Each individual transfer function element is obtained from the analysis of the temperature transient at one node after a power step at another node. Different options for multiexponential transient analysis are detailed and compared. Among the options explored, small thermal models can be obtained by constrained nonlinear least squares (NLSQ) methods if the order is selected properly using validation signals. The methods are applied to the extraction of dynamic compact thermal models for a new ultrathin chip stack technology (UTCS).
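A sketch of what such a constrained NLSQ fit of one impedance-matrix element might look like, assuming a Foster-type sum of exponential terms, a model order of two, and illustrative parameter values; the real extraction uses validation signals to select the order.

```python
# Bound-constrained nonlinear least squares fit of a thermal step response
# Z(t) = sum_k R_k * (1 - exp(-t / tau_k)), order and values illustrative.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(10)
t = np.logspace(-5, 1, 200)                          # seconds
R_true, tau_true = np.array([0.5, 1.5]), np.array([1e-3, 0.2])
z = (R_true * (1 - np.exp(-t[:, None] / tau_true))).sum(axis=1)
z += 0.005 * rng.normal(size=t.size)

def resid(p, order=2):
    R, tau = p[:order], p[order:]
    return (R * (1 - np.exp(-t[:, None] / tau))).sum(axis=1) - z

p0 = np.array([1.0, 1.0, 1e-2, 1e-1])                # [R1, R2, tau1, tau2]
fit = least_squares(resid, p0, bounds=(1e-6, np.inf))  # constrained NLSQ
print("R:", fit.x[:2].round(3), "tau:", fit.x[2:].round(4))
```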

Relevance: 80.00%

Abstract:

This article presents an experimental study of the classification ability of several classifiers for multi-class classification of cannabis seedlings. As the cultivation of drug-type cannabis is forbidden in Switzerland, law enforcement authorities regularly ask forensic laboratories to determine the chemotype of a seized cannabis plant and then to conclude whether the plantation is legal or not. This classification is mainly performed when the plant is mature, as required by the EU official protocol, and the classification of cannabis seedlings is therefore a time-consuming and costly procedure. A previous study by the authors investigated this problem [1] and showed that it is possible to differentiate between drug type (illegal) and fibre type (legal) cannabis at an early stage of growth using gas chromatography interfaced with mass spectrometry (GC-MS), based on the relative proportions of eight major leaf compounds. The aims of the present work are, on the one hand, to continue the former work and to optimize the methodology for the discrimination of drug- and fibre-type cannabis developed in the previous study and, on the other hand, to investigate the possibility of predicting illegal cannabis varieties. Seven classifiers for differentiating between cannabis seedlings are evaluated in this paper, namely Linear Discriminant Analysis (LDA), Partial Least Squares Discriminant Analysis (PLS-DA), Nearest Neighbour Classification (NNC), Learning Vector Quantization (LVQ), Radial Basis Function Support Vector Machines (RBF SVMs), Random Forest (RF) and Artificial Neural Networks (ANN). The performance of each method was assessed using the same analytical dataset, which consists of 861 samples split into drug- and fibre-type cannabis, with drug-type cannabis being made up of 12 varieties (i.e. 12 classes). The results show that linear classifiers are not able to manage the distribution of classes, in which some overlap areas exist, for both classification problems. Unlike linear classifiers, NNC and RBF SVMs best differentiate cannabis samples for both 2-class and 12-class classifications, with average classification results up to 99% and 98%, respectively. Furthermore, RBF SVMs correctly classified the independent validation set, which consists of cannabis plants coming from police seizures, as drug-type cannabis. In forensic casework this study shows that discrimination between cannabis samples at an early stage of growth is possible with fairly high classification performance for discriminating between cannabis chemotypes or between drug-type cannabis varieties.
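A minimal sketch of comparing several of the classifier families named above with cross-validation in scikit-learn; the eight-feature dataset standing in for the GC-MS leaf-compound proportions is simulated, and LVQ, PLS-DA, and ANN are omitted for brevity.

```python
# Cross-validated comparison of a few classifiers on simulated 8-feature data.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                           n_classes=2, class_sep=1.0, random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "NNC (kNN)": KNeighborsClassifier(n_neighbors=5),
    "RBF SVM": SVC(kernel="rbf", gamma="scale"),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:>13}: {acc:.3f}")
```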

Relevance: 80.00%

Abstract:

The application of spectroscopic techniques that use infrared radiation (NIRS - Near Infrared Spectroscopy and DRIFTS - Diffuse Reflectance Fourier Transformed Spectroscopy) to the inorganic analysis of soil has been proposed since the 1970s, yet routinely implemented methods remain rare in Brazil to this day. This is due to the difficulty of building calibration models, using multivariate statistical methods, from real soil samples of complex composition that varies geographically and with management. The objectives of this work were therefore to build NIRS and DRIFTS calibration models for the quantification of the clay and sand fractions in soil samples of different classes - Latossolo Vermelho (predominant), Nitossolo, Argissolo Vermelho and Neossolo Quartzarênico - and to evaluate which of these two techniques is more suitable for this purpose, as well as the effect of sample grouping and of the selection of spectral variables on the quality of these models. To this end, reference values obtained by the densimeter method, which is widely used in soil analysis laboratories, were correlated with NIRS and DRIFTS absorbance values using the statistical tool PLS (Partial Least Squares), yielding high coefficients of determination (R²) of 0.95, 0.90 and 0.91 for clay, silt and sand, respectively, in external validation. This confirms the applicability of spectroscopic techniques to soil particle-size analysis for agricultural purposes. Grouping the samples by location and selecting spectral variables had little influence on model quality. The spectroscopic technique best suited for this purpose was DRIFTS.
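A minimal sketch of a PLS calibration of the kind described, assuming simulated NIR-like spectra and a simulated reference property in place of densimeter measurements: scikit-learn's PLSRegression is fitted on a calibration set and evaluated by external-validation R².

```python
# PLS calibration of a soil property against simulated spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(11)
n, n_wl = 200, 150
loadings = rng.normal(size=(3, n_wl))               # latent spectral components
scores = rng.normal(size=(n, 3))
spectra = scores @ loadings + 0.1 * rng.normal(size=(n, n_wl))
clay = 30 + 10 * scores[:, 0] - 5 * scores[:, 1] + rng.normal(0, 2, n)  # % clay (simulated)

X_cal, X_val, y_cal, y_val = train_test_split(spectra, clay, random_state=0)
pls = PLSRegression(n_components=5).fit(X_cal, y_cal)
r2 = r2_score(y_val, pls.predict(X_val).ravel())
print("external-validation R2:", round(r2, 3))
```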

Relevance: 80.00%

Abstract:

The cichlids of East Africa are renowned as one of the most spectacular examples of adaptive radiation. They provide a unique opportunity to investigate the relationships between ecology, morphological diversity, and phylogeny in producing such remarkable diversity. Nevertheless, the parameters of the adaptive radiations of these fish have not been satisfactorily quantified yet. Lake Tanganyika possesses all of the major lineages of East African cichlid fish, so by using geometric morphometrics and comparative analyses of ecology and morphology, in an explicitly phylogenetic context, we quantify the role of ecology in driving adaptive speciation. We used geometric morphometric methods to describe the body shape of over 1000 specimens of East African cichlid fish, with a focus on the Lake Tanganyika species assemblage, which is composed of more than 200 endemic species. The main differences in shape concern the length of the whole body and the relative sizes of the head and caudal peduncle. We investigated the influence of phylogeny on similarity of shape using both distance-based and variance partitioning methods, finding that phylogenetic inertia exerts little influence on overall body shape. Therefore, we quantified the relative effect of major ecological traits on shape using phylogenetic generalized least squares and disparity analyses. These analyses conclude that body shape is most strongly predicted by feeding preferences (i.e., trophic niches) and the water depths at which species occur. Furthermore, the morphological disparity within tribes indicates that even though the morphological diversification associated with explosive speciation has happened in only a few tribes of the Tanganyikan assemblage, the potential to evolve diverse morphologies exists in all tribes. Quantitative data support the existence of extensive parallelism in several independent adaptive radiations in Lake Tanganyika. Notably, Tanganyikan mouthbrooders belonging to the C-lineage and the substrate spawning Lamprologini have evolved a multitude of different shapes from elongated and Lamprologus-like hypothetical ancestors. Together, these data demonstrate strong support for the adaptive character of East African cichlid radiations.
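A minimal sketch of the phylogenetic generalized least squares (PGLS) idea used above: a generalized least-squares regression whose error covariance is a Brownian-motion phylogenetic covariance matrix (entries given by shared branch lengths). The four-species tree, covariance values, and trait data below are toy assumptions, not the cichlid dataset.

```python
# PGLS sketch: GLS regression of a shape score on water depth with a
# Brownian-motion phylogenetic covariance matrix C (toy values).
import numpy as np

# Toy 4-species covariance for a tree ((sp1, sp2), (sp3, sp4)) of unit depth.
C = np.array([[1.0, 0.6, 0.0, 0.0],
              [0.6, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.3],
              [0.0, 0.0, 0.3, 1.0]])

depth = np.array([10.0, 12.0, 50.0, 60.0])          # water depth per species (m)
shape = np.array([0.20, 0.22, 0.45, 0.50])          # e.g. a body-shape score

X = np.column_stack([np.ones(4), depth])
Ci = np.linalg.inv(C)
beta = np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ shape)   # GLS estimator
print("PGLS intercept and slope on depth:", beta.round(4))
```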