98 results for Meshfree particle methods
Abstract:
The problem of jointly estimating the number, the identities, and the data of active users in a time-varying multiuser environment was examined in a companion paper (IEEE Trans. Information Theory, vol. 53, no. 9, September 2007), at whose core was the use of the theory of finite random sets on countable spaces. Here we extend that theory to encompass the more general problem of estimating unknown continuous parameters of the active-user signals. This problem is solved here by applying the theory of random finite sets constructed on hybrid spaces. We do so by deriving Bayesian recursions that describe the evolution with time of a posteriori densities of the unknown parameters and data. Unlike in the above-cited paper, wherein one could evaluate the exact multiuser set posterior density, here the continuous-parameter Bayesian recursions do not admit closed-form expressions. To circumvent this difficulty, we develop numerical approximations for the receivers that are based on Sequential Monte Carlo (SMC) methods ("particle filtering"). Simulation results, referring to a code-division multiple-access (CDMA) system, are presented to illustrate the theory.
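The abstract does not include an algorithm listing, but the SMC machinery it invokes can be illustrated with a generic bootstrap particle filter. The sketch below is a minimal scalar random-walk example in Python; it is not the random-finite-set receiver of the paper, and the observation model and noise parameters are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_particle_filter(observations, n_particles=500,
                              sigma_state=0.5, sigma_obs=1.0):
    """Generic bootstrap (SIR) particle filter for a scalar state.
    Hypothetical random-walk dynamics and Gaussian observation noise;
    only meant to show the propagate / weight / resample cycle."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        # propagate particles through the (assumed) state dynamics
        particles = particles + rng.normal(0.0, sigma_state, n_particles)
        # weight each particle by the likelihood of the observation
        w = np.exp(-0.5 * ((y - particles) / sigma_obs) ** 2)
        w /= w.sum()
        # multinomial resampling
        particles = rng.choice(particles, size=n_particles, p=w)
        estimates.append(particles.mean())
    return np.array(estimates)

# hypothetical noisy observations of a slowly drifting parameter
truth = np.cumsum(rng.normal(0, 0.3, 50))
obs = truth + rng.normal(0, 1.0, 50)
print(bootstrap_particle_filter(obs)[-5:])
```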
Abstract:
The paper contrasts empirically the results of alternative methods for estimating the value and the depreciation of mineral resources. The historical data of Mexico and Venezuela, covering the period from the 1920s to the 1980s, are used to contrast the results of several methods. These are the present value, the net price method, the user cost method and the imputed income method. The paper establishes that the net price and the user cost are not competing methods as such, but alternative adjustments to different scenarios of closed and open economies. The results prove that the biases of the methods, as commonly described in the theoretical literature, only hold under the most restricted scenario of constant rents over time. It is argued that the difference between what is expected to happen and what actually did happen is for the most part due to a missing variable, namely technological change. This is an important caveat to the recommendations made based on these models.
Abstract:
Consider the problem of testing k hypotheses simultaneously. In this paper, we discuss finite and large sample theory of stepdown methods that provide control of the familywise error rate (FWE). In order to improve upon the Bonferroni method or Holm's (1979) stepdown method, Westfall and Young (1993) make effective use of resampling to construct stepdown methods that implicitly estimate the dependence structure of the test statistics. However, their methods depend on an assumption called subset pivotality. The goal of this paper is to construct general stepdown methods that do not require such an assumption. In order to accomplish this, we take a close look at what makes stepdown procedures work, and a key component is a monotonicity requirement of critical values. By imposing such monotonicity on estimated critical values (which is not an assumption on the model but an assumption on the method), it is demonstrated that the problem of constructing a valid multiple test procedure which controls the FWE can be reduced to the problem of constructing a single test which controls the usual probability of a Type 1 error. This reduction allows us to draw upon an enormous resampling literature as a general means of test construction.
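For reference, the classical Holm (1979) stepdown procedure named in the abstract can be written in a few lines; its critical values alpha/(k - i + 1) are exactly the monotone sequence that the resampling-based stepdown methods of the paper aim to improve upon by estimating the dependence of the test statistics. A minimal Python sketch with made-up p-values:

```python
import numpy as np

def holm_stepdown(pvalues, alpha=0.05):
    """Holm (1979) stepdown procedure controlling the familywise error rate.
    Step through the ordered p-values, comparing p_(i) with alpha/(k - i + 1),
    and stop at the first non-rejection."""
    p = np.asarray(pvalues, dtype=float)
    k = len(p)
    order = np.argsort(p)
    reject = np.zeros(k, dtype=bool)
    for step, idx in enumerate(order):
        if p[idx] <= alpha / (k - step):   # critical values shrink monotonically
            reject[idx] = True
        else:
            break                          # stop at the first non-rejection
    return reject

print(holm_stepdown([0.001, 0.02, 0.04, 0.30], alpha=0.05))
# -> [ True False False False ]
```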
Abstract:
Many multivariate methods that are apparently distinct can be linked by introducing one or more parameters in their definition. Methods that can be linked in this way are correspondence analysis, unweighted or weighted logratio analysis (the latter also known as "spectral mapping"), nonsymmetric correspondence analysis, principal component analysis (with and without logarithmic transformation of the data) and multidimensional scaling. In this presentation I will show how several of these methods, which are frequently used in compositional data analysis, may be linked through parametrizations such as power transformations, linear transformations and convex linear combinations. Since the methods of interest here all lead to visual maps of data, a "movie" can be made where the linking parameter is allowed to vary in small steps: the results are recalculated "frame by frame" and one can see the smooth change from one method to another. Several of these "movies" will be shown, giving a deeper insight into the similarities and differences between these methods.
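One way to picture the kind of power-transformation parametrization described above is the Box-Cox-style family, which tends to the logarithm as the parameter goes to zero and thus moves smoothly from untransformed toward log-transformed (logratio-style) data. The snippet below is only an illustrative parameter sweep of such a "movie"; the exact parametrization used in the presentation may differ.

```python
import numpy as np

def power_transform(x, lam):
    """Box-Cox-style power family: (x**lam - 1) / lam, with the log limit at lam -> 0."""
    x = np.asarray(x, dtype=float)
    if abs(lam) < 1e-12:
        return np.log(x)
    return (x**lam - 1.0) / lam

x = np.array([0.5, 1.0, 2.0, 5.0])
for lam in (1.0, 0.5, 0.1, 0.01, 0.0):   # successive "frames" of the movie
    print(lam, np.round(power_transform(x, lam), 4))
```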
Abstract:
We develop a general error analysis framework for the Monte Carlo simulation of densities for functionals in Wiener space. We also study variance reduction methods with the help of Malliavin derivatives. For this, we give some general heuristic principles which are applied to diffusion processes. A comparison with kernel density estimates is made.
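As a point of comparison for the kernel-density baseline mentioned in the last sentence, a crude Monte Carlo plus Gaussian kernel density estimate of the law of a diffusion endpoint can be sketched as follows. The toy geometric-Brownian-motion model and all parameters are assumptions for illustration, and no Malliavin-based variance reduction is included.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

def simulate_endpoints(n_paths=20_000, n_steps=100, T=1.0,
                       mu=0.05, sigma=0.2, x0=1.0):
    """Euler scheme for dX = mu*X dt + sigma*X dW; returns samples of X_T."""
    dt = T / n_steps
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        x = x + mu * x * dt + sigma * x * dw
    return x

samples = simulate_endpoints()
kde = gaussian_kde(samples)              # bandwidth chosen by Scott's rule
print(kde(np.array([0.9, 1.0, 1.1])))    # estimated density at a few points
```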
Abstract:
We study the BPE (Brownian particle equation) model of the Burgers equation presented in the preceding article [6]. More precisely, we are interested in establishing the existence and uniqueness properties of solutions using probabilistic techniques.
Abstract:
Two concentration methods for the fast and routine determination of caffeine (using HPLC-UV detection) in surface water and wastewater are evaluated. Both methods are based on solid-phase extraction (SPE) concentration with octadecyl silica sorbents. A common "offline" SPE procedure shows that quantitative recovery of caffeine is obtained with 2 mL of a methanol-water elution mixture containing at least 60% methanol. The method detection limit is 0.1 μg L−1 when percolating 1 L samples through the cartridge. The development of an "online" SPE method based on a mini-SPE column, containing 100 mg of the same sorbent, directly connected to the HPLC system allows the method detection limit to be decreased to 10 ng L−1 with a sample volume of 100 mL. The "offline" SPE method is applied to the analysis of caffeine in wastewater samples, whereas the "online" method is used for analysis in natural waters from streams receiving significant water intakes from local wastewater treatment plants.
Abstract:
The aim of this work was to determine whether the filters used in microirrigation systems can remove potentially emitter-clogging particles. The particle size and volume distributions of different effluents and their filtrates were established, and the efficiency of the removal of these particles and total suspended solids by screen, disc and sand filters was determined. In most of the effluents and filtrates, the number of particles with a diameter > 20 μm was minimal. By analysing the particle volume distribution it was found that particles larger than the disc and screen filter pores appeared in the filtrates. However, the sand filter was able to retain particles larger than its pore size. The filtration efficiency depended more on the type of effluent than on the filter. It was also found that the particle size distribution followed a power law. Analysis of the β exponents showed that the filters did not significantly modify the particle size distribution of the effluents.
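The β exponent of a power-law particle size distribution of the kind mentioned at the end of the abstract is typically estimated by a straight-line fit in log-log coordinates. The following sketch uses made-up diameters and counts, not the effluent data of the study.

```python
import numpy as np

# Hypothetical size classes (micrometres) and particle counts per class
diameters = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 15.0, 20.0])
counts    = np.array([5200.0, 1400.0, 620.0, 340.0, 210.0, 90.0, 40.0])

# Fit log(counts) = log(a) - beta * log(d), i.e. n(d) ~ a * d**(-beta)
slope, log_a = np.polyfit(np.log(diameters), np.log(counts), 1)
beta = -slope
print("estimated beta exponent:", round(beta, 2))
```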
Abstract:
Within the scope of the European project Hydroptimet, INTERREG IIIB-MEDOCC programme, a limited area model (LAM) intercomparison of intense events that caused severe damage to people and territory is performed. As the comparison is limited to single case studies, the work is not meant to provide a measure of the different models' skill, but to identify the key model factors useful for giving a good forecast of this kind of meteorological phenomenon. This work focuses on the Spanish flash-flood event also known as the "Montserrat-2000" event. The study is performed using forecast data from seven operational LAMs, placed at the partners' disposal via the Hydroptimet ftp site, and observed data from the Catalonia rain gauge network. To improve the event analysis, satellite rainfall estimates have also been considered. For the statistical evaluation of quantitative precipitation forecasts (QPFs), several non-parametric skill scores based on contingency tables have been used. Furthermore, for each model run it has been possible to identify the Catalonia regions affected by misses and false alarms using the contingency table elements. Moreover, the standard "eyeball" analysis of forecast and observed precipitation fields has been supported by the use of a state-of-the-art diagnostic method, the contiguous rain area (CRA) analysis. This method makes it possible to quantify the spatial shift in the forecast error and to identify the error sources that affected each model's forecast. High-resolution modelling and domain size seem to play a key role in providing a skilful forecast. Further work is needed to support this statement, including verification using a wider observational data set.
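The non-parametric skill scores referred to above are derived from a 2x2 contingency table of forecast versus observed rain exceedance (hits, misses, false alarms, correct negatives). A minimal example with hypothetical counts, not the Montserrat-2000 verification figures:

```python
# Standard 2x2 contingency-table scores used to verify quantitative
# precipitation forecasts against a rain-gauge threshold.
hits, misses, false_alarms, correct_negatives = 42, 18, 25, 315

pod  = hits / (hits + misses)                    # probability of detection
far  = false_alarms / (hits + false_alarms)      # false alarm ratio
csi  = hits / (hits + misses + false_alarms)     # critical success index (threat score)
bias = (hits + false_alarms) / (hits + misses)   # frequency bias

print(f"POD={pod:.2f}  FAR={far:.2f}  CSI={csi:.2f}  BIAS={bias:.2f}")
```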
Abstract:
Usual image fusion methods inject features from a high spatial resolution panchromatic sensor into every low spatial resolution multispectral band, trying to preserve spectral signatures and improve spatial resolution to that of the panchromatic sensor. The objective is to obtain the image that would be observed by a sensor with the same spectral response (i.e., spectral sensitivity and quantum efficiency) as the multispectral sensors and the spatial resolution of the panchromatic sensor. However, in these methods, features from electromagnetic spectrum regions not covered by the multispectral sensors are injected into them, and the physical spectral responses of the sensors are not considered during this process. This produces some undesirable effects, such as over-injection of spatial detail and slightly modified spectral signatures in some features. The authors present a technique which takes into account the physical electromagnetic spectrum responses of the sensors during the fusion process, producing images closer to the image obtained by the ideal sensor than those obtained by usual wavelet-based image fusion methods. This technique is used to define a new wavelet-based image fusion method.
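A bare-bones version of the wavelet-substitution idea behind such fusion schemes can be sketched with PyWavelets: decompose the panchromatic image, replace its coarse approximation with the co-registered, appropriately sized multispectral band, and reconstruct. This is only a generic illustration with random arrays and power-of-two sizes; it is not the sensor-response-aware method proposed by the authors.

```python
import numpy as np
import pywt

def wavelet_substitution_fusion(pan, ms_band, wavelet="haar", levels=2):
    """Replace the coarsest approximation of the panchromatic decomposition
    with the low-resolution multispectral band, keeping the panchromatic
    detail coefficients, then reconstruct at full resolution."""
    coeffs = pywt.wavedec2(pan, wavelet, level=levels)
    # coeffs[0] is the coarsest approximation; with 'haar' and power-of-two
    # sizes its shape matches a band that is 2**levels times smaller.
    assert coeffs[0].shape == ms_band.shape, "band must match approximation size"
    coeffs[0] = ms_band.astype(float)
    return pywt.waverec2(coeffs, wavelet)

# hypothetical data: 256x256 panchromatic image, 64x64 multispectral band
pan = np.random.rand(256, 256)
ms = np.random.rand(64, 64)
fused = wavelet_substitution_fusion(pan, ms)
print(fused.shape)  # (256, 256)
```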
Abstract:
Context. It has been proposed that the origin of the very high-energy photons emitted from high-mass X-ray binaries with jet-like features, so-called microquasars (MQs), is related to hadronic interactions between relativistic protons in the jet and cold protons of the stellar wind. Leptonic secondary emission should be calculated in a complete hadronic model that includes the effects of pairs from charged pion decays inside the jets and the emission from pairs generated by gamma-ray absorption in the photosphere of the system. Aims. We aim at predicting the broadband spectrum from a general hadronic microquasar model, taking into account the emission from secondaries created by charged pion decay inside the jet. Methods. The particle energy distribution for secondary leptons injected along the jets is consistently derived taking the energy losses into account. The spectral energy distribution resulting from these leptons is calculated after assuming different values of the magnetic field inside the jets. We also compute the spectrum of the gamma-rays produced by neutral pion decay and processed by electromagnetic cascades under the stellar photon field. Results. We show that the secondary emission can dominate the spectral energy distribution at low energies (~1 MeV). At high energies, the production spectrum can be significantly distorted by the effect of electromagnetic cascades. These effects are phase-dependent, and some variability modulated by the orbital period is predicted.
Abstract:
The action of botulinum neurotoxin on acetylcholine release, and on the structural changes at the presynaptic membrane associated with transmitter release, was studied by using a subcellular fraction of cholinergic nerve terminals (synaptosomes) isolated from the Torpedo electric organ. Acetylcholine and ATP release were continuously monitored by chemiluminescent methods. To capture the membrane morphological changes, the quick-freezing method was applied. Our results show that botulinum neurotoxin inhibits the release of acetylcholine from these isolated nerve terminals in a dose-dependent manner, whereas ATP release is not affected. The maximal inhibition (70%) is achieved at neurotoxin concentrations as low as 125 pM with an incubation time of 6 min. This effect is not linked to an alteration of the integrity of the synaptosomes since, after poisoning by botulinum neurotoxin type A, they show an unmodified occluded lactate dehydrogenase activity. Moreover, the membrane potential is not altered by the toxin with respect to the control, either in the resting condition or after potassium depolarization. In addition to inhibiting acetylcholine release, botulinum neurotoxin blocks the rearrangement of the presynaptic intramembrane particles induced by potassium stimulation. The action of botulinum neurotoxin suggests that the intramembrane particle rearrangement is related to the acetylcholine secretion induced by potassium stimulation in synaptosomes isolated from the electric organ of Torpedo marmorata.
Abstract:
Bulk and single-particle properties of hot hyperonic matter are studied within the Brueckner-Hartree-Fock approximation extended to finite temperature. The bare interaction in the nucleon sector is the Argonne V18 potential supplemented with an effective three-body force to reproduce the saturation properties of nuclear matter. The modern Nijmegen NSC97e potential is employed for the hyperon-nucleon and hyperon-hyperon interactions. The effect of temperature on the in-medium effective interaction is found to be, in general, very small, and the single-particle potentials differ by at most 25% for temperatures in the range from 0 to 60 MeV. The bulk properties of infinite baryonic matter, either isospin-symmetric nuclear matter or a beta-stable composition that includes a nonzero fraction of hyperons, are obtained. It is found that the presence of hyperons can modify the thermodynamical properties of the system in a non-negligible way.