882 results for "the least squares distance method"
Abstract:
We want to shed some light on the development of person mobility by analysing the repeated cross-sectional data of the four National Travel Surveys (NTS) conducted in Germany since the mid-1970s. The driving forces mentioned above operate on different levels of the system that generates the spatial behaviour we observe: travel demand is derived from the needs and desires of individuals to participate in spatially separated activities. Individuals organise their lives in an interactive process within the context they live in, using the given infrastructure. Essential determinants of their demand are the individual's socio-demographic characteristics, but the opportunities and constraints defined by the household and the environment are also relevant for the behaviour that can ultimately be realised. To fully capture the context that determines individual behaviour, the (nested) hierarchy of persons within households within spatial settings has to be considered. The data we use for our analysis contain information on all three of these levels. With the analysis of these micro-data we attempt to improve our understanding of the macro-level developments summarised above. In addition, we investigate the predictive power of a few classic socio-demographic variables for the daily travel distance of individuals in the four NTS data sets, with a focus on the evolution of this predictive power. The additional task of correctly measuring distances travelled by means of the NTS is complicated by the fact that, although these surveys measure the same variables, different sampling designs and data collection procedures were used. The aim of the analysis is therefore also to detect variables whose control corrects for the known measurement error, as a prerequisite for applying appropriate models to better understand the development of individual travel behaviour in a multilevel context.
This task is complicated by the fact that variables describing survey procedures and outcomes are only provided with the 2002 data set (see Infas and DIW Berlin, 2003).
Abstract:
Lisdexamfetamine dimesylate (LDX) is a long-acting, prodrug stimulant therapy for patients with attention-deficit/hyperactivity disorder (ADHD). This randomized placebo-controlled trial of an optimized daily dose of LDX (30, 50 or 70 mg) was conducted in children and adolescents (aged 6-17 years) with ADHD. To evaluate the efficacy of LDX throughout the day, symptoms and behaviors of ADHD were evaluated using an abbreviated version of the Conners' Parent Rating Scale-Revised (CPRS-R) at 1000, 1400 and 1800 hours following early morning dosing (0700 hours). Osmotic-release oral system methylphenidate (OROS-MPH) was included as a reference treatment, but the study was not designed to support a statistical comparison between LDX and OROS-MPH. The full analysis set comprised 317 patients (LDX, n = 104; placebo, n = 106; OROS-MPH, n = 107). At baseline, CPRS-R total scores were similar across treatment groups. At endpoint, differences (active treatment - placebo) in least squares (LS) mean change from baseline CPRS-R total scores were statistically significant (P < 0.001) throughout the day for LDX (effect sizes: 1000 hours, 1.42; 1400 hours, 1.41; 1800 hours, 1.30) and OROS-MPH (effect sizes: 1000 hours, 1.04; 1400 hours, 0.98; 1800 hours, 0.92). Differences in LS mean change from baseline to endpoint were statistically significant (P < 0.001) for both active treatments in all four subscales of the CPRS-R (ADHD index, oppositional, hyperactivity and cognitive). In conclusion, improvements relative to placebo in ADHD-related symptoms and behaviors in children and adolescents receiving a single morning dose of LDX or OROS-MPH were maintained throughout the day and were ongoing at the last measurement in the evening (1800 hours).
Abstract:
Laboratory safety data are routinely collected in clinical studies for safety monitoring and assessment. We have developed a truncated robust multivariate outlier detection method for identifying subjects with clinically relevant abnormal laboratory measurements. The proposed method can be applied to historical clinical data to establish a multivariate decision boundary that can then be used for future clinical trial laboratory safety data monitoring and assessment. Simulations demonstrate that the proposed method has the ability to detect relevant outliers while automatically excluding irrelevant outliers. Two examples from actual clinical studies are used to illustrate the use of this method for identifying clinically relevant outliers.
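The abstract does not reproduce the method's details, but the core idea behind multivariate outlier detection, flagging subjects whose laboratory values lie far from the bulk of the data in Mahalanobis distance, can be sketched on synthetic data. This is a generic illustration, not the authors' truncated robust procedure; the median-based center and the chi-squared cutoff are assumptions made for the sketch.

```python
import numpy as np

def mahalanobis_d2(X, center, cov):
    """Squared Mahalanobis distance of each row of X from `center`."""
    diff = X - center
    sol = np.linalg.solve(cov, diff.T)
    return np.einsum('ij,ji->i', diff, sol)

def flag_outliers(X, threshold):
    """Flag rows whose Mahalanobis distance exceeds `threshold`.

    Robustness is only approximated here by using the coordinate-wise
    median as the center; the covariance is the classical estimate and
    the paper's truncation step is not reproduced."""
    center = np.median(X, axis=0)
    cov = np.cov(X, rowvar=False)
    return mahalanobis_d2(X, center, cov) > threshold

# 100 unremarkable bivariate "lab values" plus one gross outlier.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(100, 2)), [[10.0, 10.0]]])
flags = flag_outliers(X, threshold=9.21)  # chi-squared(2) 99% cutoff
```

In practice the decision boundary would be calibrated on historical clinical data, as the abstract describes, rather than on a fixed chi-squared quantile.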
Abstract:
For a wide range of environmental, hydrological, and engineering applications there is a fast-growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove to be inadequate in complex environments. For a typical crosshole georadar survey, the potential improvement in resolution when using waveform-based approaches instead of ray-based approaches is in the range of one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While waveform tomographic imaging has become well established in exploration seismology over the past two decades, it is still comparatively underdeveloped in the georadar domain despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes to synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments.
To this end, the general motivation of my thesis is the evaluation of the robustness and limitations of waveform inversion algorithms for crosshole georadar data, in order to apply such schemes to a wide range of real-world problems. One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which is unknown in reality. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem. Therefore, accurate knowledge of the source wavelet is critically important for the successful application of such schemes. Relatively small differences in the estimated source wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source wavelet estimation technique is simple yet effective, and is able to provide remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and electrical conductivity, as well as significant ambient noise in the recorded data. Furthermore, our tests also indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when directly incorporating the wavelet estimation into the inverse problem. Another critical issue with crosshole georadar waveform inversion schemes that clearly needs to be investigated is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters.
This is crucial since in reality, these parameters are known to be frequency-dependent and complex and thus recorded georadar data may show significant dispersive behaviour. In particular, in the presence of water, there is a wide body of evidence showing that the dielectric permittivity can be significantly frequency dependent over the GPR frequency range, due to a variety of relaxation processes. The second part of my thesis is therefore dedicated to the evaluation of the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source wavelet estimation procedure that is partially able to account for the frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and has the ability to provide adequate tomographic reconstructions.
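The iterative deconvolution used for source-wavelet estimation is not specified in detail in this summary; as a minimal stand-in, a generic frequency-domain (water-level) deconvolution of an observed trace by a modelled Green's function conveys the basic operation of recovering a wavelet by stabilised spectral division. The function name, the toy signals, and the water-level parameter below are illustrative assumptions, not the thesis's algorithm.

```python
import numpy as np

def waterlevel_deconvolve(observed, greens, water=1e-2):
    """Estimate a source wavelet by spectral division of an observed
    trace by a modelled Green's function; a water level clips spectral
    notches in the denominator to stabilise the division."""
    O = np.fft.rfft(observed)
    G = np.fft.rfft(greens)
    denom = (np.conj(G) * G).real          # |G|^2, real and non-negative
    level = water * denom.max()
    W = O * np.conj(G) / np.maximum(denom, level)
    return np.fft.irfft(W, n=len(observed))

# Toy check: two-spike Green's function, damped-sine "unknown" wavelet.
g = np.zeros(64)
g[5], g[9] = 1.0, -0.5
t = np.arange(64)
w_true = np.exp(-0.3 * t) * np.sin(0.8 * t)
obs = np.fft.irfft(np.fft.rfft(g) * np.fft.rfft(w_true), n=64)  # circular convolution
w_est = waterlevel_deconvolve(obs, g, water=1e-6)
```

With noise-free data and a denominator that never hits the water level, the recovery is essentially exact; real data would need a larger water level and, as in the thesis, iteration against updated model responses.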
Abstract:
The Maximum Capture problem (MAXCAP) is a decision model that addresses the issue of location in a competitive environment. This paper presents a new approach to determining which store attributes (other than distance) should be included in the new Market Capture Models and how they ought to be reflected using the Multiplicative Competitive Interaction model. The methodology involves the design and development of a survey, and the application of factor analysis and ordinary least squares. The methodology has been applied to the supermarket sector in two different scenarios: Milton Keynes (Great Britain) and Barcelona (Spain).
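The Multiplicative Competitive Interaction model sets a store's market share proportional to a product of attribute powers, share_i ∝ Π_k A_ik^β_k, and the standard log-centering transformation turns this into a linear model estimable by ordinary least squares. A minimal sketch on synthetic, noise-free data (all names and numbers invented, not the paper's survey data):

```python
import numpy as np

# MCI: share_i is proportional to prod_k attrs[i, k] ** beta[k].
# Taking logs and centering over the stores in the choice set removes
# the unknown normalising constant, leaving a linear model for OLS.
rng = np.random.default_rng(1)
n_stores, n_attrs = 40, 3
true_beta = np.array([1.5, -0.8, 0.4])
attrs = rng.uniform(0.5, 2.0, size=(n_stores, n_attrs))
util = (np.log(attrs) * true_beta).sum(axis=1)
share = np.exp(util) / np.exp(util).sum()

# Log-center both sides over the choice set, then solve by OLS.
y = np.log(share) - np.log(share).mean()
X = np.log(attrs) - np.log(attrs).mean(axis=0)
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With noise-free shares the coefficients are recovered exactly; with survey data the same regression is simply fit in the least-squares sense.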
Abstract:
We continue the development of a method for the selection of a bandwidth or a number of design parameters in density estimation. We provide explicit non-asymptotic density-free inequalities that relate the $L_1$ error of the selected estimate with that of the best possible estimate, and study in particular the connection between the richness of the class of density estimates and the performance bound. For example, our method allows one to pick the bandwidth and kernel order in the kernel estimate simultaneously and still assure that for {\it all densities}, the $L_1$ error of the corresponding kernel estimate is not larger than about three times the error of the estimate with the optimal smoothing factor and kernel plus a constant times $\sqrt{\log n/n}$, where $n$ is the sample size, and the constant only depends on the complexity of the family of kernels used in the estimate. Further applications include multivariate kernel estimates, transformed kernel estimates, and variable kernel estimates.
Abstract:
Counterfeit pharmaceutical products have become a widespread problem in the last decade. Various analytical techniques have been applied to discriminate between genuine and counterfeit products. Among these, near-infrared (NIR) and Raman spectroscopy have provided promising results. The present study offers a methodology for providing more valuable information to organisations engaged in the fight against the counterfeiting of medicines. A database was established by analyzing counterfeits of a particular pharmaceutical product using NIR and Raman spectroscopy. Unsupervised chemometric techniques (i.e. principal component analysis - PCA and hierarchical cluster analysis - HCA) were implemented to identify the classes within the datasets. Gas chromatography coupled to mass spectrometry (GC-MS) and Fourier transform infrared spectroscopy (FT-IR) were used to determine the number of different chemical profiles among the counterfeits. A comparison with the classes established by NIR and Raman spectroscopy made it possible to evaluate the discriminating power provided by these techniques. Supervised classifiers (i.e. k-Nearest Neighbors, Partial Least Squares Discriminant Analysis, Probabilistic Neural Networks and Counterpropagation Artificial Neural Networks) were applied to the acquired NIR and Raman spectra, and the results were compared to those provided by the unsupervised classifiers. The retained strategy for routine applications, founded on the classes identified by NIR and Raman spectroscopy, uses a classification algorithm based on distance measures and Receiver Operating Characteristic (ROC) curves. The model is able to compare the spectrum of a new counterfeit with those of previously analyzed products and to determine whether a new specimen belongs to one of the existing classes, consequently allowing a link to be established with other counterfeits in the database.
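As a minimal illustration of the unsupervised step, principal component analysis of mean-centered spectra can be carried out with a singular value decomposition; projecting the spectra onto the leading components often reveals class structure at a glance. This is a generic sketch on synthetic "spectra", not the study's actual chemometric pipeline:

```python
import numpy as np

def pca_scores(spectra, n_components=2):
    """Project mean-centered spectra (rows) onto their leading
    principal components, computed via SVD."""
    Xc = spectra - spectra.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Two synthetic "classes": a common band shape, with the second class
# carrying an additional sloping baseline (a stand-in for a different
# chemical profile).
rng = np.random.default_rng(2)
base = np.sin(np.linspace(0, 6, 80))
slope = np.linspace(0, 1, 80)
spectra = np.vstack(
    [base + 0.01 * rng.normal(size=80) for _ in range(10)]
    + [base + slope + 0.01 * rng.normal(size=80) for _ in range(10)]
)
scores = pca_scores(spectra, n_components=1)
```

In a real workflow the scores would be inspected visually (or passed to HCA) to decide how many classes the database contains.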
Abstract:
Time-lapse geophysical measurements are widely used to monitor the movement of water and solutes through the subsurface. Yet commonly used deterministic least squares inversions typically suffer from relatively poor mass recovery, spread overestimation, and a limited ability to appropriately estimate nonlinear model uncertainty. We describe herein a novel inversion methodology designed to reconstruct the three-dimensional distribution of a tracer anomaly from geophysical data and provide consistent uncertainty estimates using Markov chain Monte Carlo simulation. Posterior sampling is made tractable by using a lower-dimensional model space related both to the Legendre moments of the plume and to predefined morphological constraints. Benchmark results using cross-hole ground-penetrating radar travel time measurements during two synthetic water tracer application experiments involving increasingly complex plume geometries show that the proposed method not only conserves mass but also provides better estimates of plume morphology and posterior model uncertainty than deterministic inversion results.
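Posterior sampling over a low-dimensional parameterisation of the kind described here is commonly done with a random-walk Metropolis-type sampler. The following generic sketch (not the authors' implementation) shows the accept/reject mechanics on a toy one-dimensional standard-normal target; in the paper's setting `log_post` would score the fit of simulated travel times to data given the plume's Legendre-moment parameters.

```python
import numpy as np

def metropolis(log_post, x0, step, n_samples, seed=0):
    """Random-walk Metropolis sampler: propose a Gaussian step, accept
    with probability min(1, posterior ratio), otherwise stay put."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    out = np.empty((n_samples, x.size))
    for i in range(n_samples):
        prop = x + step * rng.normal(size=x.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        out[i] = x
    return out

# Toy target: standard normal, log density -x^2/2 up to a constant.
samples = metropolis(lambda x: -0.5 * (x ** 2).sum(),
                     x0=[3.0], step=1.0, n_samples=20000)
```

After discarding a burn-in, the sample mean and spread approximate the posterior's; the same machinery yields the uncertainty estimates the abstract refers to.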
Abstract:
The analysis of multiexponential decays is challenging because of their complex nature. When analyzing these signals, not only the parameters, but also the orders of the models, have to be estimated. We present an improved spectroscopic technique specially suited for this purpose. The proposed algorithm combines an iterative linear filter with an iterative deconvolution method. A thorough analysis of the noise effect is presented. The performance is tested with synthetic and experimental data.
Abstract:
Soil water properties are related to crop growth and environmental aspects and are influenced by the degree of soil compaction. The objective of this study was to determine the water infiltration and hydraulic conductivity of saturated soil under field conditions in terms of the compaction degree of two Oxisols under a no-tillage (NT). Two commercial fields were studied in the state of Rio Grande do Sul, Brazil: one a Haplortox after 14 years under NT; the other a Hapludox after seven years under NT. Maps (50 x 30 m) of the levels of mechanical penetration resistance (PR) were drawn based on the kriging method, differentiating three compaction degrees (CD): high, intermediate and low. In each CD area, the infiltration rate (initial and steady-state) and cumulative water infiltration were measured using concentric rings, with six replications, and the saturated hydraulic conductivity (K(θs)) was determined using the Guelph permeameter. Statistical evaluation was performed based on a randomized design, using the least significant difference (LSD) test and regression analysis. The steady-state infiltration rate was not influenced by the compaction degree, with mean values of 3 and 0.39 cm h-1 in the Haplortox and the Hapludox, respectively. In the Haplortox, saturated soil hydraulic conductivity was 26.76 cm h-1 at a low CD and 9.18 cm h-1 at a high CD, whereas in the Hapludox, this value was 5.16 cm h-1 and 1.19 cm h-1 for the low and high CD, respectively. The compaction degree did not affect the initial and steady-state water infiltration rate, nor the cumulative water infiltration for either soil type, although the values were higher for the Haplortox than the Hapludox.
Abstract:
The agricultural potential of Latosols of the Brazilian Cerrado region is high, but when intensively cultivated under inappropriate management systems, the porosity can be seriously reduced, leading to rapid soil degradation. Consequently, accelerated erosion and sedimentation of springs and creeks have been observed. Therefore, the objective of this study was to evaluate structural changes of Latosols in Rio Verde, Goiás, based on the Least Limiting Water Range (LLWR), and relationships between LLWR and other physical properties. Soil samples were collected from the B horizons of five oxidic Latosols representing the textural variability of the Latosols of the Cerrado biome. LLWR and other soil physical properties were determined at various soil compaction degrees induced by uniaxial compression. Soil compaction caused effects varying from enhanced plant growth due to higher water retention, to severe restriction of edaphic functions. Also, inverse relationships were observed between clay content and bulk density values (Bd) under different structural conditions. Bd values corresponding to critical soil macroporosity (BdcMAC) were more restrictive to a sustainable use of the studied Latosols than the critical Bd corresponding to LLWR (BdcLLWR). The high tolerable compression potential of these oxidic Latosols was related to the high aeration porosity associated to the granular structure.
Abstract:
This article presents an experimental study of the classification ability of several classifiers for multi-class classification of cannabis seedlings. As the cultivation of drug-type cannabis is forbidden in Switzerland, law enforcement authorities regularly ask forensic laboratories to determine the chemotype of a seized cannabis plant and then to conclude whether the plantation is legal or not. This classification is mainly performed when the plant is mature, as required by the EU official protocol, which makes the classification of cannabis seedlings a time-consuming and costly procedure. A previous study by the authors investigated this problem [1] and showed that it is possible to differentiate between drug-type (illegal) and fibre-type (legal) cannabis at an early stage of growth using gas chromatography interfaced with mass spectrometry (GC-MS), based on the relative proportions of eight major leaf compounds. The aims of the present work are, on the one hand, to continue the former work and to optimize the methodology for the discrimination of drug- and fibre-type cannabis developed in the previous study and, on the other hand, to investigate the possibility of predicting illegal cannabis varieties. Seven classifiers for differentiating between cannabis seedlings are evaluated in this paper, namely Linear Discriminant Analysis (LDA), Partial Least Squares Discriminant Analysis (PLS-DA), Nearest Neighbour Classification (NNC), Learning Vector Quantization (LVQ), Radial Basis Function Support Vector Machines (RBF SVMs), Random Forest (RF) and Artificial Neural Networks (ANN). The performance of each method was assessed using the same analytical dataset, which consists of 861 samples split into drug- and fibre-type cannabis, with the drug-type cannabis being made up of 12 varieties (i.e. 12 classes). The results show that linear classifiers are not able to manage the distribution of classes, in which some overlap areas exist, for both classification problems.
Unlike linear classifiers, NNC and RBF SVMs best differentiate cannabis samples for both 2-class and 12-class classification, with average classification results of up to 99% and 98%, respectively. Furthermore, RBF SVMs correctly classified the independent validation set, consisting of cannabis plants coming from police seizures, as drug-type cannabis. For forensic casework, this study shows that the discrimination between cannabis samples at an early stage of growth is possible with fairly high classification performance, whether discriminating between cannabis chemotypes or between drug-type cannabis varieties.
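Of the evaluated classifiers, nearest-neighbour classification (NNC) is the simplest to sketch: each sample is assigned the majority label among its k nearest training samples. The following minimal implementation runs on synthetic two-class points (illustrative only; the study's features are GC-MS leaf-compound proportions, not these toy coordinates):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test sample by majority vote among its k nearest
    training samples under Euclidean distance."""
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    nearest = np.argsort(d2, axis=1)[:, :k]
    votes = y_train[nearest]
    return np.array([np.bincount(v).argmax() for v in votes])

# Two well-separated synthetic classes and two query points.
rng = np.random.default_rng(3)
X_train = np.vstack([rng.normal(0.0, 0.3, size=(20, 2)),
                     rng.normal(3.0, 0.3, size=(20, 2))])
y_train = np.array([0] * 20 + [1] * 20)
X_test = np.array([[0.1, -0.1], [2.9, 3.2]])
pred = knn_predict(X_train, y_train, X_test, k=5)
```

The same vote-counting idea extends directly to the 12-class variety problem, since `np.bincount` handles any number of integer labels.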
Abstract:
Many of the most interesting questions ecologists ask lead to analyses of spatial data. Yet, perhaps confused by the large number of statistical models and fitting methods available, many ecologists seem to believe this is best left to specialists. Here, we describe the issues that need consideration when analysing spatial data and illustrate these using simulation studies. Our comparative analysis involves using methods including generalized least squares, spatial filters, wavelet revised models, conditional autoregressive models and generalized additive mixed models to estimate regression coefficients from synthetic but realistic data sets, including some which violate standard regression assumptions. We assess the performance of each method using two measures, together with statistical error rates for model selection. Methods that performed well included the generalized least squares family of models and a Bayesian implementation of the conditional autoregressive model. Ordinary least squares also performed adequately in the absence of model selection, but had poorly controlled Type I error rates and so did not show the improvements in performance under model selection seen with the above methods. Removing large-scale spatial trends in the response led to poor performance. These are empirical results; hence extrapolation of these findings to other situations should be performed cautiously. Nevertheless, our simulation-based approach provides much stronger evidence for comparative analysis than assessments based on single or small numbers of data sets, and should be considered a necessary foundation for statements of this type in future.
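The generalized least squares estimator discussed here differs from ordinary least squares only in that it whitens the data by the error covariance before solving, β = (XᵀΣ⁻¹X)⁻¹XᵀΣ⁻¹y. A minimal sketch, assuming a known exponential spatial correlation structure for the errors (synthetic, noise-free data, so the recovery is exact):

```python
import numpy as np

def gls(X, y, Sigma):
    """Generalized least squares: whiten predictors and response by the
    Cholesky factor of the error covariance, then solve OLS on the
    whitened problem."""
    L = np.linalg.cholesky(Sigma)
    Xw = np.linalg.solve(L, X)
    yw = np.linalg.solve(L, y)
    beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return beta

# Synthetic 1-D transect with exponentially correlated errors.
rng = np.random.default_rng(4)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([2.0, -1.0])
idx = np.arange(n)
Sigma = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 5.0)  # AR(1)-like correlation
y = X @ beta_true          # noise-free, for an exact check
beta_hat = gls(X, y, Sigma)
```

In real analyses Σ is unknown and its correlation parameters are estimated jointly with β (e.g. by maximum likelihood), which is where the practical difficulty the authors describe comes in.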
Abstract:
Intensity-modulated radiotherapy (IMRT) treatment plan verification by comparison with measured data requires having access to the linear accelerator and is time consuming. In this paper, we propose a method for monitor unit (MU) calculation and plan comparison for step and shoot IMRT based on the Monte Carlo code EGSnrc/BEAMnrc. The beamlets of an IMRT treatment plan are individually simulated using Monte Carlo and converted into absorbed dose to water per MU. The dose of the whole treatment can be expressed through a linear matrix equation of the MU and dose per MU of every beamlet. Due to the positivity of the absorbed dose and MU values, this equation is solved for the MU values using a non-negative least-squares fit optimization algorithm (NNLS). The Monte Carlo plan is formed by multiplying the Monte Carlo absorbed dose to water per MU with the Monte Carlo/NNLS MU. Several treatment plan localizations calculated with a commercial treatment planning system (TPS) are compared with the proposed method for validation. The Monte Carlo/NNLS MUs are close to the ones calculated by the TPS and lead to a treatment dose distribution which is clinically equivalent to the one calculated by the TPS. This procedure can be used as an IMRT QA and further development could allow this technique to be used for other radiotherapy techniques like tomotherapy or volumetric modulated arc therapy.
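The MU values above are obtained by a non-negative least-squares fit. Production code would normally call an established NNLS solver (e.g. the Lawson-Hanson algorithm), but the constraint handling can be sketched with a simple projected-gradient loop on synthetic numbers; the matrix entries below are invented stand-ins for per-beamlet dose-per-MU values, not real dosimetric data.

```python
import numpy as np

def nnls_pg(A, b, n_iter=5000):
    """Minimise ||A x - b||^2 subject to x >= 0 by projected gradient
    descent: take a gradient step, then clip negatives to zero. A simple
    stand-in for a production NNLS solver."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = np.maximum(0.0, x - step * grad)
    return x

# Synthetic system: 20 dose points, 5 beamlets, non-negative "MUs".
rng = np.random.default_rng(5)
A = rng.uniform(0.1, 1.0, size=(20, 5))   # made-up dose-per-MU entries
x_true = np.array([1.0, 0.0, 2.0, 0.5, 0.0])
b = A @ x_true                             # target dose distribution
x_hat = nnls_pg(A, b)
```

Because dose and MU are physically non-negative, the clipping step is what distinguishes this fit from an ordinary least-squares solve.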
Abstract:
PURPOSE: To compare different techniques for positive contrast imaging of susceptibility markers with MRI for three-dimensional visualization. As several different techniques have been reported, the choice of the most suitable method depends on its properties with regard to the amount of positive contrast and the desired background suppression, as well as on other imaging constraints needed for a specific application. MATERIALS AND METHODS: Six different positive contrast techniques were investigated for their ability to image a single susceptibility marker in vitro at 3 Tesla. The white marker method (WM), susceptibility gradient mapping (SGM), inversion recovery with on-resonant water suppression (IRON), frequency selective excitation (FSX), fast low flip-angle positive contrast SSFP (FLAPS), and iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL) were implemented and investigated. RESULTS: The different methods were compared with respect to the volume of positive contrast, the product of volume and signal intensity, imaging time, and the level of background suppression. Quantitative results are provided, and the strengths and weaknesses of the different approaches are discussed. CONCLUSION: The appropriate choice of positive contrast imaging technique depends on the desired level of background suppression, acquisition speed, and robustness against artifacts, for which in vitro comparative data are now available.