32 results for root-mean-square radius
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Surface topography and light scattering were measured on 15 samples ranging from those with smooth surfaces to others with ground surfaces. The measurement techniques included an atomic force microscope, mechanical and optical profilers, a confocal laser scanning microscope, angle-resolved scattering, and total scattering. The samples included polished and ground fused silica, silicon carbide, sapphire, electroplated gold, and diamond-turned brass. The measurement instruments and techniques had different surface spatial wavelength band limits, so the measured roughnesses were not directly comparable. Two-dimensional power spectral density (PSD) functions were calculated from the digitized measurement data, and we obtained rms roughnesses by integrating areas under the PSD curves between fixed upper and lower band limits. In this way, roughnesses measured with different instruments and techniques could be directly compared. Although small differences between measurement techniques remained in the calculated roughnesses, these could mostly be explained by surface topographical features, such as isolated particles, that affected the instruments in different ways.
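As an illustration of the band-limiting step described above, here is a minimal sketch (hypothetical PSD and band limits, one-dimensional for brevity whereas the paper uses two-dimensional PSDs) of obtaining an rms roughness by integrating a PSD between fixed spatial-frequency limits so that different instruments become comparable:

```python
import numpy as np

# Minimal sketch: rms roughness from a one-sided 1-D PSD integrated between
# fixed band limits, sigma = sqrt( integral of S(f) df over [f_lo, f_hi] ).
# All values are illustrative, not taken from the paper.
def rms_from_psd(freqs, psd, f_lo, f_hi):
    m = (freqs >= f_lo) & (freqs <= f_hi)                    # common band only
    f, s = freqs[m], psd[m]
    variance = np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(f))   # trapezoid rule
    return np.sqrt(variance)

f = np.logspace(-2, 2, 500)   # spatial frequency (1/um), hypothetical range
S = 1e-3 / f**2               # a fractal-like PSD (nm^2 um), hypothetical
print(rms_from_psd(f, S, 0.1, 10.0))  # rms roughness (nm) within the band
```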
Abstract:
By means of classical Itô calculus, we decompose option prices as the sum of the classical Black-Scholes formula, with volatility parameter equal to the root-mean-square future average volatility, plus a term due to correlation and a term due to the volatility of the volatility. This decomposition allows us to develop first- and second-order approximation formulas for option prices and implied volatilities in the Heston volatility framework, as well as to study their accuracy. Numerical examples are given.
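As a sketch of the decomposition being described (our notation, not the paper's), with v_t the root-mean-square future average volatility:

```latex
% Sketch under assumed notation: S_t is the asset price, sigma_u the
% instantaneous volatility, and BS the classical Black-Scholes formula.
V_t \;=\; \mathrm{BS}(t, S_t, v_t)
      \;+\; \text{(correlation term)}
      \;+\; \text{(vol-of-vol term)},
\qquad
v_t \;=\; \sqrt{\frac{1}{T-t}\int_t^{T}\sigma_u^{2}\,du}.
```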
Abstract:
This paper investigates the comparative performance of five small area estimators. We use Monte Carlo simulation in the context of both theoretical and empirical populations. In addition to the direct and indirect estimators, we consider the optimal composite estimator with population weights, and two composite estimators with estimated weights: one that assumes homogeneity of within-area variance and squared bias, and another that uses area-specific estimates of variance and squared bias. It is found that among the feasible estimators, the best choice is the one that uses area-specific estimates of variance and squared bias.
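For reference, a standard textbook form of the composite estimator under comparison is sketched below (our notation; the errors of the two components are assumed uncorrelated), with MSE = variance + squared bias; the two feasible variants differ in whether the MSEs are pooled across areas or estimated area by area:

```latex
\hat{\theta}^{\,C}_a = w_a\,\hat{\theta}^{\,\mathrm{dir}}_a
                       + (1-w_a)\,\hat{\theta}^{\,\mathrm{ind}}_a,
\qquad
w_a^{*} = \frac{\mathrm{MSE}\big(\hat{\theta}^{\,\mathrm{ind}}_a\big)}
               {\mathrm{MSE}\big(\hat{\theta}^{\,\mathrm{dir}}_a\big)
                + \mathrm{MSE}\big(\hat{\theta}^{\,\mathrm{ind}}_a\big)}.
```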
Abstract:
A comparative performance analysis of four geolocation methods in terms of their theoretical root mean square positioning errors is provided. Comparison is established in two different ways: strict and average. In the strict type, methods are examined for a particular geometric configuration of base stations (BSs) with respect to the mobile position, which determines a given noise profile affecting the respective time-of-arrival (TOA) or time-difference-of-arrival (TDOA) estimates. In the average type, methods are evaluated in terms of the expected covariance matrix of the position error over an ensemble of random geometries, so that the comparison is geometry independent. Exact semianalytical equations and associated lower bounds (depending solely on the noise profile) are obtained for the average covariance matrix of the position error in terms of the so-called information matrix specific to each geolocation method. Statistical channel models inferred from field trials are used to define realistic prior probabilities for the random geometries. A final evaluation provides extensive results relating the expected position error to channel model parameters and the number of base stations.
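A minimal sketch (hypothetical geometry and noise values) of how such a theoretical rms position error can be computed for TOA ranging from an information matrix, here the Fisher information under Gaussian ranging errors:

```python
import numpy as np

# Sketch: rms position error from the Fisher information matrix
# J = sum_i (1/sigma_i^2) u_i u_i^T, where u_i is the unit vector from
# base station i to the mobile and sigma_i the ranging noise std.
# Geometry and noise values below are purely illustrative.
def rms_position_error(bs, mobile, sigmas):
    J = np.zeros((2, 2))
    for b, s in zip(bs, sigmas):
        u = (mobile - b) / np.linalg.norm(mobile - b)  # line-of-sight direction
        J += np.outer(u, u) / s**2
    # The CRLB covariance is J^{-1}; the rms error is sqrt of its trace.
    return np.sqrt(np.trace(np.linalg.inv(J)))

bs = np.array([[0.0, 0.0], [1000.0, 0.0], [500.0, 900.0]])  # base stations (m)
mobile = np.array([400.0, 300.0])
print(rms_position_error(bs, mobile, sigmas=[30.0, 30.0, 30.0]))  # meters
```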
Abstract:
In this letter, we obtain the Maximum Likelihood Estimator of position in the framework of Global Navigation Satellite Systems. This theoretical result is the basis of a completely different approach to the positioning problem, in contrast to the conventional two-step position estimation, which consists of estimating the synchronization parameters of the in-view satellites and then performing a position estimation with that information. To the authors' knowledge, this is a novel approach that copes with signal fading and mitigates multipath and jamming interference. In addition, the concept of Position-based Synchronization is introduced, which states that synchronization parameters can be recovered from a user position estimate. We provide computer simulation results showing the robustness of the proposed approach in fading multipath channels. The Root Mean Square Error performance of the proposed algorithm is compared to that achieved with state-of-the-art synchronization techniques. A Sequential Monte Carlo based method is used to deal with the multivariate optimization problem resulting from the ML solution in an iterative way.
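A toy sketch of the contrast being drawn (all geometry and noise values hypothetical): rather than estimating each satellite's delay separately and then solving for position, one can write the delays as a function of a candidate position and clock bias and fit them jointly. The paper does this at the signal level with a Sequential Monte Carlo optimizer, which this simple least-squares toy does not reproduce.

```python
import numpy as np
from scipy.optimize import least_squares

sats = np.array([[20e6, 0, 20e6], [0, 20e6, 20e6],          # hypothetical
                 [-20e6, 5e6, 20e6], [5e6, -20e6, 20e6]])   # satellite pos (m)
p_true, b_true = np.array([1e5, 2e5, 0.0]), 300.0           # position, bias (m)
rng = np.random.default_rng(0)
obs = np.linalg.norm(sats - p_true, axis=1) + b_true + rng.normal(0, 1.0, 4)

def residuals(x):
    # Ranges expressed directly as a function of position and clock bias,
    # so the fit happens in the position domain rather than per satellite.
    p, b = x[:3], x[3]
    return np.linalg.norm(sats - p, axis=1) + b - obs

sol = least_squares(residuals, x0=np.zeros(4))
print(sol.x[:3], sol.x[3])   # recovers ~p_true and ~b_true
```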
Abstract:
A new quantitative inference model for environmental reconstruction (transfer function), based for the first time on the simultaneous analysis of multiple species groups, has been developed. Quantitative reconstructions based on palaeoecological transfer functions provide a powerful tool for addressing questions of environmental change in a wide range of environments, from oceans to mountain lakes, and over a range of timescales, from decades to millions of years. Much progress has been made in the development of inferences based on multiple proxies, but usually these have been considered separately, and the different numeric reconstructions compared and reconciled post hoc. This paper presents a new method to combine information from multiple biological groups at the reconstruction stage. The aim of the multigroup work was to test the potential of the new approach to make improved inferences of past environmental change by improving upon current reconstruction methodologies. The taxonomic groups analysed include diatoms, chironomids and chrysophyte cysts. We test the new methodology using two cold-environment training sets, namely mountain lakes from the Pyrenees and the Alps. The use of multiple groups, as opposed to single groups, was found to increase the reconstruction skill only slightly, as measured by the root mean square error of prediction (leave-one-out cross-validation), in the case of alkalinity, dissolved inorganic carbon and altitude (a surrogate for air temperature), but not for pH or dissolved CO2. Reasons why the improvement was less than might have been anticipated are discussed. These can include the different life-forms, environmental responses and reaction times of the groups under study.
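A minimal sketch (synthetic data, and a toy linear transfer function rather than the weighted-averaging models typical of this literature) of the leave-one-out RMSEP used as the skill measure:

```python
import numpy as np

# Root mean square error of prediction (RMSEP) under leave-one-out
# cross-validation for a toy linear transfer function mapping species
# scores X to an environmental variable y. Data are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))                       # e.g. ordination axes
y = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.2]) + rng.normal(scale=0.3, size=60)

preds = np.empty_like(y)
for i in range(len(y)):
    keep = np.arange(len(y)) != i                  # leave sample i out
    A = np.c_[np.ones(keep.sum()), X[keep]]
    beta, *_ = np.linalg.lstsq(A, y[keep], rcond=None)
    preds[i] = np.r_[1.0, X[i]] @ beta             # predict the held-out lake

rmsep = np.sqrt(np.mean((y - preds) ** 2))
print(f"RMSEP = {rmsep:.3f}")
```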
Abstract:
Purpose: There is a lack of studies on tourism demand forecasting that use non-linear models. The aim of this paper is to introduce consumer expectations into time-series models in order to analyse their usefulness for forecasting tourism demand. Design/methodology/approach: The paper focuses on forecasting tourism demand in Catalonia for the four main visitor markets (France, the UK, Germany and Italy), combining qualitative information with quantitative models: autoregressive (AR), autoregressive integrated moving average (ARIMA), self-exciting threshold autoregression (SETAR) and Markov switching regime (MKTAR) models. The forecasting performance of the different models is evaluated for different time horizons (one, two, three, six and 12 months). Findings: Although some differences are found between the results obtained for the different countries, when comparing forecasting accuracy, ARIMA and Markov switching regime models outperform the rest. In all cases, forecasts of arrivals show lower root mean square errors (RMSE) than forecasts of overnight stays. Models with consumer expectations do not outperform benchmark models. These results hold for all time horizons analysed. Research limitations/implications: This study encourages the use of qualitative information and more advanced econometric techniques to improve tourism demand forecasting. Originality/value: This is the first study on tourism demand focusing specifically on Catalonia. To date, there have been no studies on tourism demand forecasting that use non-linear models such as SETAR and MKTAR models. This paper fills this gap and analyses forecasting performance at a regional level. Keywords: Tourism, Forecasting, Consumers, Spain, Demand management. Paper type: Research paper
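A minimal sketch (toy numbers, hypothetical forecasts) of the RMSE comparison used to rank the competing models at a given horizon:

```python
import numpy as np

# Compare forecasting models by RMSE against held-out observations.
# Model names match the abstract; the numbers are purely illustrative.
def rmse(actual, pred):
    return np.sqrt(np.mean((np.asarray(actual) - np.asarray(pred)) ** 2))

actual = [100, 103, 108, 105, 110, 115]            # held-out arrivals (toy)
forecasts = {
    "ARIMA": [101, 104, 107, 106, 111, 113],
    "SETAR": [99, 101, 110, 103, 108, 118],
    "MKTAR": [100, 105, 109, 104, 112, 114],
}
for name, pred in forecasts.items():
    print(f"{name}: RMSE = {rmse(actual, pred):.2f}")
```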
Abstract:
Does Independent Component Analysis (ICA) denature EEG signals? We applied ICA to two groups of subjects (mild Alzheimer patients and control subjects). The aim of this study was to examine whether or not the ICA method can reduce both group differences and within-subject variability. We found that ICA diminished the leave-one-out root mean square error (RMSE) of validation (from 0.32 to 0.28), indicative of the reduction of group difference. More interestingly, ICA reduced the inter-subject variability within each group (σ = 2.54 in the δ range before ICA, σ = 1.56 after; Bartlett p = 0.046 after Bonferroni correction). Additionally, we present a method to limit the impact of human error (≈ 13.8%, with 75.6% inter-cleaner agreement) during ICA cleaning and reduce human bias. These findings suggest a novel usefulness of ICA in clinical EEG in Alzheimer's disease for the reduction of subject variability.
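A hedged sketch of the generic ICA cleaning step (toy data and scikit-learn's FastICA, not the paper's pipeline): unmix the channels into components, zero those judged artifactual, and remix. Deciding which components are artifacts is the human step whose error the paper quantifies and tries to limit.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy "EEG": non-Gaussian sources mixed into channels. Real data would be
# samples x channels recordings; everything here is illustrative.
rng = np.random.default_rng(1)
sources = rng.laplace(size=(1000, 8))
X = sources @ rng.normal(size=(8, 8))

ica = FastICA(n_components=8, random_state=0, max_iter=1000)
S = ica.fit_transform(X)                      # estimated components
S[:, [0, 3]] = 0.0                            # zero components marked as artifacts
X_clean = S @ ica.mixing_.T + ica.mean_       # back-project to channel space
print(X_clean.shape)
```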
Abstract:
The semiclassical Wigner-Kirkwood ħ expansion method is used to calculate shell corrections for spherical and deformed nuclei. The expansion is carried out up to fourth order in ħ. A systematic study of Wigner-Kirkwood averaged energies is presented as a function of the deformation degrees of freedom. The shell corrections, along with the pairing energies obtained by using the Lipkin-Nogami scheme, are used in the microscopic-macroscopic approach to calculate binding energies. The macroscopic part is obtained from a liquid drop formula with six adjustable parameters. Considering a set of 367 spherical nuclei, the liquid drop parameters are adjusted to reproduce the experimental binding energies, which yields a root mean square (rms) deviation of 630 keV. It is shown that the proposed approach is indeed promising for the prediction of nuclear masses.
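The quoted figure of merit is, on the usual reading (our gloss, not an equation from the paper), the unweighted rms deviation between theoretical and experimental binding energies over the fitted set:

```latex
\sigma_{\mathrm{rms}}
  = \sqrt{\frac{1}{N}\sum_{i=1}^{N}
      \left(B_i^{\mathrm{exp}} - B_i^{\mathrm{th}}\right)^{2}},
\qquad N = 367, \quad \sigma_{\mathrm{rms}} \approx 630\ \mathrm{keV}.
```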
BioSuper: A web tool for the superimposition of biomolecules and assemblies with rotational symmetry
Abstract:
Background Most of the proteins in the Protein Data Bank (PDB) are oligomeric complexes consisting of two or more subunits that associate by rotational or helical symmetries. Despite the myriad of superimposition tools in the literature, we could not find any able to account for rotational symmetry and display the graphical results in the web browser. Results BioSuper is a free web server that superimposes and calculates the root mean square deviation (RMSD) of protein complexes displaying rotational symmetry. To the best of our knowledge, BioSuper is the first tool of its kind that provides immediate interactive visualization of the graphical results in the browser, biomolecule generator capabilities, different levels of atom selection, sequence-dependent and structure-based superimposition types, and is the only web tool that takes into account the equivalence of atoms in side chains displaying symmetry ambiguity. BioSuper uses ICM program functionality as a core for the superimpositions and displays the results as text, HTML tables and 3D interactive molecular objects that can be visualized in the browser or in Android and iOS platforms with a free plugin. Conclusions BioSuper is a fast and functional tool that allows for pairwise superimposition of proteins and assemblies displaying rotational symmetry. The web server was created after our own frustration when attempting to superimpose flexible oligomers. We strongly believe that its user-friendly and functional design will be of great interest for structural and computational biologists who need to superimpose oligomeric proteins (or any protein). BioSuper web server is freely available to all users at http://ablab.ucsd.edu/BioSuper.
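A minimal sketch of the core operation behind such superimposition, the Kabsch algorithm followed by the RMSD (BioSuper itself relies on ICM and additionally handles side-chain symmetry ambiguity, which is omitted here):

```python
import numpy as np

# Optimal rigid-body superposition of two already-paired coordinate sets
# via the Kabsch algorithm (SVD), then the RMSD. Toy data only.
def kabsch_rmsd(P, Q):
    """Superimpose P onto Q (both N x 3, atoms paired) and return RMSD."""
    P0, Q0 = P - P.mean(0), Q - Q.mean(0)       # center both coordinate sets
    U, _, Vt = np.linalg.svd(P0.T @ Q0)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt         # optimal proper rotation
    diff = P0 @ R - Q0
    return np.sqrt((diff ** 2).sum() / len(P))

P = np.random.default_rng(2).normal(size=(50, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
print(kabsch_rmsd(P @ Rz.T + 5.0, P))           # ~0 for a pure rigid motion
```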
Abstract:
The CORNISH project is the highest resolution radio continuum survey of the Galactic plane to date. It is the 5 GHz radio continuum part of a series of multi-wavelength surveys that focus on the northern GLIMPSE region (10° < l < 65°), observed by the Spitzer satellite in the mid-infrared. Observations with the Very Large Array in B and BnA configurations have yielded a 1.5″ resolution Stokes I map with a root mean square noise level better than 0.4 mJy beam⁻¹. Here we describe the data-processing methods and data characteristics, and present a new, uniform catalog of compact radio emission. This includes an implementation of automatic deconvolution that provides much more reliable imaging than standard CLEANing. A rigorous investigation of the noise characteristics and reliability of source detection has been carried out. We show that the survey is optimized to detect emission on size scales up to 14″ and that for unresolved sources the catalog is more than 90% complete at a flux density of 3.9 mJy. We have detected 3062 sources above a 7σ detection limit and present their ensemble properties. The catalog is highly reliable away from regions containing poorly sampled extended emission, which comprise less than 2% of the survey area. Imaging problems have been mitigated by down-weighting the shortest spacings, and potential artifacts were flagged via a rigorous manual inspection with reference to the Spitzer infrared data. We present images of the most common source types found: H II regions, planetary nebulae, and radio galaxies. The CORNISH data and catalog are available online at http://cornish.leeds.ac.uk.
Abstract:
The author studies the error and complexity of the discrete random walk Monte Carlo technique for radiosity, using both the shooting and gathering methods. The author shows that the shooting method exhibits a lower complexity than the gathering one and, under some constraints, has linear complexity. This is an improvement over a previous result that pointed to an O(n log n) complexity. The author gives and compares three unbiased estimators for each method, and obtains closed forms and bounds for their variances. The author also bounds the expected value of the mean square error (MSE). Some of the results obtained are also shown.
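For context, the bounds discussed rest on the standard decomposition of the MSE (a textbook identity, not a result specific to this paper); for the unbiased estimators considered, the MSE is pure variance and decays as 1/N over N independent random walks:

```latex
\mathrm{MSE}(\hat\theta)
  = \mathbb{E}\big[(\hat\theta-\theta)^2\big]
  = \operatorname{Var}(\hat\theta) + \operatorname{Bias}(\hat\theta)^2,
\qquad
\mathrm{MSE}\Big(\tfrac{1}{N}\sum_{k=1}^{N}\hat\theta_k\Big)
  = \frac{\operatorname{Var}(\hat\theta)}{N}
  \quad (\text{unbiased, i.i.d.}).
```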
Abstract:
The mutual information of independent parallel Gaussian-noise channels is maximized, under an average power constraint, by independent Gaussian inputs whose power is allocated according to the waterfilling policy. In practice, discrete signalling constellations with limited peak-to-average ratios (m-PSK, m-QAM, etc.) are used in lieu of the ideal Gaussian signals. This paper gives the power allocation policy that maximizes the mutual information over parallel channels with arbitrary input distributions. Such a policy admits a graphical interpretation, referred to as mercury/waterfilling, which generalizes the waterfilling solution and retains some of its intuition. The relationship between the mutual information of Gaussian channels and the nonlinear minimum mean-square error proves key to solving the power allocation problem.
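A minimal sketch of classic waterfilling, the Gaussian-input special case that mercury/waterfilling generalizes (the mercury correction requires each input's MMSE function and is not reproduced here):

```python
import numpy as np

# Classic waterfilling: channels with gains g_i receive powers
# p_i = max(0, mu - 1/g_i), with the water level mu chosen by bisection
# so that sum(p_i) equals the total power budget P.
def waterfilling(gains, P, iters=100):
    gains = np.asarray(gains, dtype=float)
    lo, hi = 0.0, P + 1.0 / gains.min()       # bracket for the water level
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - 1.0 / gains)
        lo, hi = (mu, hi) if p.sum() < P else (lo, mu)
    return np.maximum(0.0, mu - 1.0 / gains)

g = np.array([2.0, 1.0, 0.25])                # illustrative channel gains
print(waterfilling(g, P=3.0))                 # strongest channel gets most power
```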
Abstract:
We study the minimum mean square error (MMSE) and the multiuser efficiency η of large dynamic multiple-access communication systems in which optimal multiuser detection is performed at the receiver, as the number and identities of active users are allowed to change at each transmission time. The system dynamics are ruled by a Markov model describing the evolution of the channel occupancy, and a large-system analysis is performed as the number of observations grows large. Starting from the equivalent scalar channel and the fixed-point equation tying the multiuser efficiency to the MMSE, we extend the analysis to the case of a dynamic channel and derive lower and upper bounds for the MMSE (and, thus, for η as well) that hold in the limit of large signal-to-noise ratios and increasingly large observation time T.
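For orientation, one common static-channel form of the fixed-point equation referred to above is shown below (notation assumed; the paper's contribution is extending such a relation to the Markov-dynamic channel and bounding its solutions):

```latex
% Assumed notation: beta is the system load, gamma the SNR, and mmse(.)
% the MMSE of the corresponding single-user scalar channel.
\frac{1}{\eta} \;=\; 1 \;+\; \beta\,\gamma\,\mathrm{mmse}(\eta\,\gamma).
```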
Abstract:
In this article we propose using small area estimators to improve the estimates of both the small and large area parameters. When the objective is to estimate parameters at both levels accurately, optimality is achieved by a mixed sample design of fixed and proportional allocations. In the mixed sample design, once a sample size has been determined, one fraction of it is distributed proportionally among the different small areas while the rest is evenly distributed among them. We use Monte Carlo simulations to assess the performance of the direct estimator and two composite covariate-free small area estimators, for different sample sizes and different sample distributions. Performance is measured in terms of the Mean Squared Error (MSE) of both small and large area parameters. It is found that the adoption of small area composite estimators opens the possibility of 1) reducing sample size when precision is given, or 2) improving precision for a given sample size.
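A minimal sketch (illustrative sizes, naive rounding) of the mixed allocation described above: a fraction lam of the total sample is allocated proportionally to area sizes and the remainder is split evenly across areas.

```python
import numpy as np

# Mixed sample design: proportional share plus equal share per small area.
# Sizes, total sample and lam are hypothetical; rounding is deliberately naive.
def mixed_allocation(area_sizes, n_total, lam):
    area_sizes = np.asarray(area_sizes, dtype=float)
    prop = lam * n_total * area_sizes / area_sizes.sum()   # proportional part
    even = (1.0 - lam) * n_total / len(area_sizes)          # equal part
    return np.rint(prop + even).astype(int)

print(mixed_allocation([5000, 2000, 500, 100], n_total=400, lam=0.5))
```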