996 results for Sparse Point Sampling
Abstract:
We consider the problem of developing efficient sampling schemes for multiband sparse signals. Previous results on multicoset sampling implementations that lead to universal sampling patterns (which guarantee perfect reconstruction) are based on a set of appropriately interleaved analog-to-digital converters, all of them operating at the same sampling frequency. In this paper we propose an alternative multirate synchronous implementation of multicoset codes; that is, all the analog-to-digital converters in the sampling scheme operate at different sampling frequencies, without the need to introduce any delays. The interleaving is achieved through the use of different rates, whose sum is significantly lower than the Nyquist rate of the multiband signal. To obtain universal patterns, the sampling matrix is formulated and analyzed. Appropriate choices of the parameters, namely the block length and the sampling rates, are also proposed.
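Illustrative sketch (not taken from the paper): on a Nyquist-rate grid, a classical multicoset pattern keeps only the samples at positions m*L + c_i, so each branch runs at 1/L of the Nyquist rate and the aggregate rate is p/L of Nyquist. The block length, coset offsets and test signal below are arbitrary assumptions, and the multirate realization the paper proposes (different ADC rates, no delays) is not reproduced here.

import numpy as np

# Classical multicoset sampling on a Nyquist-rate grid (illustrative values).
fs = 1000.0            # assumed Nyquist rate (Hz)
L = 10                 # block length
cosets = [0, 3, 7]     # p = 3 active cosets -> aggregate rate 0.3 * fs

t = np.arange(0, 1, 1 / fs)                   # Nyquist-rate time grid
x = np.cos(2 * np.pi * 55 * t) + 0.5 * np.sin(2 * np.pi * 310 * t)

branches = {}
for c in cosets:
    idx = np.arange(c, len(x), L)             # every L-th sample, offset by c
    branches[c] = (t[idx], x[idx])            # one low-rate branch

total_rate = len(cosets) * fs / L
print(f"aggregate sampling rate: {total_rate:.0f} Hz "
      f"({len(cosets)}/{L} of Nyquist)")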
Abstract:
Many problems in digital communications involve wideband radio signals. As the most recent example, the impressive advances in Cognitive Radio systems make the development of sampling schemes for wideband radio signals with spectral holes even more necessary. This is equivalent to considering a sparse multiband signal in the framework of Compressive Sampling theory. Starting from previous results on multicoset sampling and recent advances in compressive sampling, we analyze the matrix involved in the corresponding reconstruction equation and define a new method for the design of universal multicoset codes, that is, codes guaranteeing perfect reconstruction of the sparse multiband signal.
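A minimal sketch of the kind of matrix analysis mentioned above, under the common formulation in which the multicoset measurement matrix is a row selection of the L-point DFT matrix. The universality criterion used here (every p-column submatrix has full rank) and the small parameters are illustrative assumptions, not necessarily the authors' design method.

import numpy as np
from itertools import combinations

def pattern_matrix(cosets, L):
    # p x L partial DFT matrix: rows indexed by coset offsets.
    k = np.arange(L)
    C = np.asarray(cosets)[:, None]
    return np.exp(2j * np.pi * C * k / L)

def is_universal(cosets, L):
    # Brute-force check (small L only): every p-column submatrix must have
    # full column rank for the pattern to guarantee reconstruction.
    A = pattern_matrix(cosets, L)
    p = len(cosets)
    for cols in combinations(range(L), p):
        if np.linalg.matrix_rank(A[:, cols]) < p:
            return False
    return True

print(is_universal([0, 1, 2], 7))   # consecutive cosets give Vandermonde submatrices: True
print(is_universal([0, 2, 4], 8))   # repeated columns make this pattern fail: False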
Abstract:
Long-reach passive optical networks (LR-PONs) are being proposed as a means of enabling ubiquitous fiber-to-the-home (FTTH) by massive sharing of network resources, thereby reducing per-customer costs to affordable levels. In this paper, we analyze the chain solutions for LR-PON deployment in urban and rural areas at 100-Gb/s point-to-point transmission using dual-polarization quaternary phase-shift keying (DP-QPSK) modulation. The numerical analysis shows that, with appropriate finite impulse response (FIR) filter designs, 100-Gb/s transmission can be achieved with at least a 512-way split and up to 160 km total distance. This is sufficient for many of the optical paths in a practical situation, whether for a point-to-point link from one LR-PON to another LR-PON through the optical switch at the metro nodes, or across a core light path through the core network without regeneration.
Abstract:
Consider a discrete, locally finite subset Γ of R^d and the complete graph (Γ, E), with vertices Γ and edges E. We consider Gibbs measures on the set of subgraphs with vertices Γ and edges E' ⊆ E. The Gibbs interaction acts between open edges having a vertex in common. We study percolation properties of the Gibbs distribution of the graph ensemble. The main results concern percolation properties of the open edges in two cases: (a) when Γ is sampled from a homogeneous Poisson process; and (b) for a fixed Γ with sufficiently sparse points. © 2010 American Institute of Physics. [doi:10.1063/1.3514605]
Abstract:
Using NONMEM, the population pharmacokinetics of perhexiline were studied in 88 patients (34 F, 54 M) who were being treated for refractory angina. Their mean +/- SD (range) age was 75 +/- 9.9 years (46-92), and the length of perhexiline treatment was 56 +/- 77 weeks (0.3-416). The sampling time after a dose was 14.1 +/- 21.4 hours (0.5-200), and the perhexiline plasma concentrations were 0.39 +/- 0.32 mg/L (0.03-1.56). A one-compartment model with first-order absorption was fitted to the data using the first-order (FO) approximation. The best model contained 2 subpopulations (obtained via the $MIXTURE subroutine) of 77 subjects (subgroup A) and 11 subjects (subgroup B) that had typical values for clearance (CL/F) of 21.8 L/h and 2.06 L/h, respectively. The volumes of distribution (V/F) were 1470 L and 260 L, respectively, which suggested a reduction in presystemic metabolism in subgroup B. The interindividual variability (CV%) was modeled logarithmically and for CL/F ranged from 69.1% (subgroup A) to 86.3% (subgroup B). The interindividual variability in V/F was 111%. The residual variability unexplained by the population model was 28.2%. These results confirm and extend the existing pharmacokinetic data on perhexiline, especially the bimodal distribution of CL/F manifested via an inherited deficiency in hepatic and extrahepatic CYP2D6 activity.
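For reference, a minimal sketch of the fitted structural model (one compartment, first-order absorption), using the reported typical CL/F and V/F values for the two subpopulations. The dose and the absorption rate constant ka are not given in the abstract and are assumed here purely for illustration.

import numpy as np

def conc(t, dose, cl_f, v_f, ka):
    # One-compartment model with first-order absorption after a single oral dose.
    ke = cl_f / v_f                       # elimination rate constant (1/h)
    return (dose * ka) / (v_f * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(0, 48, 97)                # hours after a single dose
dose = 100.0                              # mg, illustrative only
ka = 1.0                                  # 1/h, assumed (not reported in the abstract)

c_a = conc(t, dose, cl_f=21.8, v_f=1470.0, ka=ka)   # subgroup A typical values
c_b = conc(t, dose, cl_f=2.06, v_f=260.0, ka=ka)    # subgroup B typical values

print(f"C(24 h): subgroup A = {c_a[48]:.3f} mg/L, subgroup B = {c_b[48]:.3f} mg/L")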
Abstract:
Purpose - The purpose of this paper is to discuss the linear solution of equality-constrained problems by using the Frontal solution method without explicit assembling.
Design/methodology/approach - Re-written frontal solution method with a priori pivot and front sequence. OpenMP parallelization, nearly linear (in elimination and substitution) up to 40 threads. Constraints enforced at the local assembling stage.
Findings - When compared with both standard sparse solvers and classical frontal implementations, memory requirements and code size are significantly reduced.
Research limitations/implications - Large, non-linear problems with constraints typically make use of the Newton method with Lagrange multipliers. In the context of the solution of problems with a large number of constraints, the matrix transformation methods (MTM) are often more cost-effective. The paper presents a complete solution, with topological ordering, for this problem.
Practical implications - A complete software package in Fortran 2003 is described. Examples of clique-based problems are shown, with large systems solved in core.
Social implications - More realistic non-linear problems can be solved with this Frontal code at the core of the Newton method.
Originality/value - Use of topological ordering of constraints. A priori pivot and front sequences. No need for symbolic assembling. Constraints treated at the core of the Frontal solver. Use of OpenMP in the main Frontal loop, now quantified. Availability of software.
Abstract:
13th International Conference on Autonomous Robot Systems (Robotica), 2013, Lisboa
Abstract:
This study aims to provide a passive sampling approach that can be routinely used to investigate polychlorinated biphenyl (PCB) sources in rivers. The approach consists of deploying low-density polyethylene (LDPE) strips downstream and upstream of potential PCB sources as well as in their water discharges. Concentrations of indicator PCBs (iPCBs) absorbed in samplers (Cs) from upstream and downstream sites are compared with each other to reveal increases in PCB levels. Cs measured in water discharges are used to determine whether the released amounts of PCBs are compatible with the increases revealed in the river. As water velocity can vary greatly along a river stretch and influences the uptake at each site in a different way, differences in velocity have to be taken into account to interpret Cs correctly. LDPE strips were exposed to velocities between 1.6 and 37 cm s-1 using a channel system built in the field. Relationships between velocity and Cs were established for each iPCB to determine the expected change in Cs due to velocity variations. For PCBs 28 and 52, this change does not exceed a factor of 2 for velocity variations in the range from 1.6 to 100 cm s-1 (extrapolated data above 37 cm s-1). For PCBs 101, 138, 153 and 180, this change only exceeds a factor of 2 in the case of large velocity variations. The approach was applied in the Swiss river Venoge, first to conduct a primary investigation of potential PCB sources and then to conduct thorough investigations of two suspected sources.
Abstract:
Volumetric soil water content (θ) can be evaluated in the field by direct or indirect methods. Among the direct methods, the gravimetric method is regarded as highly reliable and thus often preferred. Its main disadvantages are that sampling and laboratory procedures are labor intensive, and that the method is destructive, which makes resampling of the same point impossible. Recently, the time domain reflectometry (TDR) technique has become a widely used indirect, non-destructive method to evaluate θ. In this study, evaluations of the apparent dielectric number of soils (ε) and samplings for the gravimetric determination of the volumetric soil water content (θGrav) were carried out at four sites of a Xanthic Ferralsol in Manaus, Brazil. With the obtained ε values, θ was estimated using empirical equations (θTDR) and compared with θGrav derived from disturbed and undisturbed samples. The main objective of this study was the comparison of θTDR estimates from horizontally as well as vertically inserted probes with the θGrav values determined from disturbed and undisturbed samples. Results showed that θTDR estimates from vertically inserted probes and the average of horizontally measured layers differed only slightly and insignificantly. However, significant differences were found between the θTDR estimates of different equations and between disturbed and undisturbed samples in the θGrav determinations. The use of the theoretical Knight et al. model, which permits an evaluation of the soil volume assessed by TDR probes, is also discussed. It was concluded that the TDR technique, when properly calibrated, permits in situ, non-destructive measurements of θ in Xanthic Ferralsols with an accuracy similar to that of the gravimetric method.
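A minimal sketch of how ε is converted to θTDR with an empirical calibration. The Topp et al. (1980) polynomial used below is a common default and is assumed here for illustration only; the study compared several equations that need not coincide with it.

# Convert the apparent dielectric number (epsilon) measured by TDR to
# volumetric water content (theta) with the Topp et al. (1980) calibration.
def theta_topp(eps):
    return -5.3e-2 + 2.92e-2 * eps - 5.5e-4 * eps**2 + 4.3e-6 * eps**3

for eps in (10.0, 20.0, 30.0):
    print(f"epsilon = {eps:4.1f}  ->  theta = {theta_topp(eps):.3f} m3/m3")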
Abstract:
Because data on rare species usually are sparse, it is important to have efficient ways to sample additional data. Traditional sampling approaches are of limited value for rare species because a very large proportion of randomly chosen sampling sites are unlikely to shelter the species. For these species, spatial predictions from niche-based distribution models can be used to stratify the sampling and increase sampling efficiency. New data sampled are then used to improve the initial model. Applying this approach repeatedly is an adaptive process that may allow increasing the number of new occurrences found. We illustrate the approach with a case study of a rare and endangered plant species in Switzerland and a simulation experiment. Our field survey confirmed that the method helps in the discovery of new populations of the target species in remote areas where the predicted habitat suitability is high. In our simulations the model-based approach provided a significant improvement (by a factor of 1.8 to 4 times, depending on the measure) over simple random sampling. In terms of cost this approach may save up to 70% of the time spent in the field.
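A minimal sketch of the model-based stratified sampling idea, assuming hypothetical habitat-suitability scores and an arbitrary allocation of field visits per stratum; neither the strata nor the effort levels are those of the Swiss case study.

import numpy as np

# Candidate sites are binned by the habitat suitability predicted by a
# distribution model, and field effort is concentrated in the high-suitability
# strata.  All values below are illustrative placeholders.
rng = np.random.default_rng(0)
suitability = rng.beta(1, 5, size=10_000)          # stand-in model output per site

bins = [0.0, 0.25, 0.5, 0.75, 1.0]                  # suitability strata
strata = np.digitize(suitability, bins[1:-1])       # stratum index 0..3 per site
effort = [5, 15, 30, 50]                            # visits allocated per stratum

sample = []
for s, n in enumerate(effort):
    members = np.flatnonzero(strata == s)
    n = min(n, members.size)                         # stratum may hold few sites
    sample.extend(rng.choice(members, size=n, replace=False))

print(f"{len(sample)} sites selected; "
      f"{np.mean(suitability[sample] > 0.5):.0%} lie in the two highest strata")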
Abstract:
Geophysical tomography captures the spatial distribution of the underlying geophysical property at a relatively high resolution, but the tomographic images tend to be blurred representations of reality and generally fail to reproduce sharp interfaces. Such models may cause significant bias when taken as a basis for predictive flow and transport modeling and are unsuitable for uncertainty assessment. We present a methodology in which tomograms are used to condition multiple-point statistics (MPS) simulations. A large set of geologically reasonable facies realizations and their corresponding synthetically calculated cross-hole radar tomograms are used as a training image. The training image is scanned with a direct sampling algorithm for patterns in the conditioning tomogram, while accounting for the spatially varying resolution of the tomograms. In a post-processing step, only those conditional simulations that predicted the radar traveltimes within the expected data error levels are accepted. The methodology is demonstrated on a two-facies example featuring channels and an aquifer analog of alluvial sedimentary structures with five facies. For both cases, MPS simulations exhibit the sharp interfaces and the geological patterns found in the training image. Compared to unconditioned MPS simulations, the uncertainty in transport predictions is markedly decreased for simulations conditioned to tomograms. As an improvement to other approaches relying on classical smoothness-constrained geophysical tomography, the proposed method allows for: (1) reproduction of sharp interfaces, (2) incorporation of realistic geological constraints and (3) generation of multiple realizations that enables uncertainty assessment.
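A toy sketch of the direct sampling idea used to scan the training image, assuming a binary facies image and a simple mismatch count; the actual algorithm, including the conditioning on the tomogram and its spatially varying resolution, is considerably more involved.

import numpy as np

# For each simulation cell (visited in random order), the pattern of already
# simulated neighbours is compared against randomly scanned locations of the
# training image, and the first close-enough location donates its facies value.
rng = np.random.default_rng(1)

ti = (rng.random((60, 60)) < 0.3).astype(int)      # stand-in training image
sim = np.full((30, 30), -1)                         # -1 = not yet simulated

offsets = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
cells = [(i, j) for i in range(30) for j in range(30)]
rng.shuffle(cells)

for i, j in cells:
    known = [(di, dj, sim[i + di, j + dj]) for di, dj in offsets
             if 0 <= i + di < 30 and 0 <= j + dj < 30 and sim[i + di, j + dj] >= 0]
    best_val, best_dist = ti[rng.integers(60), rng.integers(60)], np.inf
    for _ in range(200):                            # random scan of the training image
        ci, cj = rng.integers(1, 59), rng.integers(1, 59)
        d = sum(ti[ci + di, cj + dj] != v for di, dj, v in known)
        if d < best_dist:
            best_val, best_dist = ti[ci, cj], d
        if d == 0:                                  # exact pattern match: stop scanning
            break
    sim[i, j] = best_val

print("simulated facies proportions:", np.bincount(sim.ravel()) / sim.size)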
Abstract:
Bioanalytical data from a bioequivalence study were used to develop limited-sampling strategy (LSS) models for estimating the area under the plasma concentration versus time curve (AUC) and the peak plasma concentration (Cmax) of 4-methylaminoantipyrine (MAA), an active metabolite of dipyrone. Twelve healthy adult male volunteers received single 600 mg oral doses of dipyrone in two formulations at a 7-day interval in a randomized, crossover protocol. Plasma concentrations of MAA (N = 336), measured by HPLC, were used to develop LSS models. Linear regression analysis and a "jack-knife" validation procedure revealed that the AUC0-∞ and the Cmax of MAA can be accurately predicted (R² > 0.95, bias < 1.5%, precision between 3.1 and 8.3%) by LSS models based on two sampling times. Validation tests indicate that the most informative 2-point LSS models developed for one formulation provide good estimates (R² > 0.85) of the AUC0-∞ or Cmax for the other formulation. LSS models based on three sampling points (1.5, 4 and 24 h), but using different coefficients for AUC0-∞ and Cmax, predicted the individual values of both parameters for the enrolled volunteers (R² > 0.88, bias = -0.65 and -0.37%, precision = 4.3 and 7.4%) as well as for plasma concentration data sets generated by simulation (R² > 0.88, bias = -1.9 and 8.5%, precision = 5.2 and 8.7%). Bioequivalence assessment of the dipyrone formulations based on the 90% confidence interval of log-transformed AUC0-∞ and Cmax provided similar results when either the best-estimated or the LSS-derived metrics were used.
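A minimal sketch of how a two-point LSS model is obtained by regressing the reference AUC on concentrations at two fixed sampling times. The concentration data below are synthetic, so the fitted coefficients are illustrative and not those reported in the study.

import numpy as np

# Fit AUC0-inf as a linear function of the concentrations at two fixed times.
rng = np.random.default_rng(2)

n = 12                                             # subjects
c_1h5 = rng.lognormal(mean=1.5, sigma=0.3, size=n) # concentration at 1.5 h (synthetic)
c_4h = c_1h5 * rng.uniform(0.7, 0.9, size=n)       # concentration at 4 h (synthetic)
auc_ref = 3.0 * c_1h5 + 9.0 * c_4h + rng.normal(0, 1.0, size=n)  # "reference" AUC

X = np.column_stack([np.ones(n), c_1h5, c_4h])
coef, *_ = np.linalg.lstsq(X, auc_ref, rcond=None)
pred = X @ coef

r2 = 1 - np.sum((auc_ref - pred) ** 2) / np.sum((auc_ref - auc_ref.mean()) ** 2)
print("intercept, b(1.5 h), b(4 h):", np.round(coef, 2), " R^2 =", round(r2, 3))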
Abstract:
Nowadays, the problem of solving sparse linear systems over the field GF(2) remains a challenge. The popular approach is to improve existing methods such as the block Lanczos method (the Montgomery method) and the Wiedemann-Coppersmith method. Both of these methods are considered in the thesis in detail, with modifications and computational estimates for each step. The thesis identifies the most computationally demanding parts of these methods and discusses how to improve the computations from a software point of view. The research provides an implementation of an accelerated binary matrix operations library, which helps make the main steps of the Montgomery and Wiedemann-Coppersmith methods faster.
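A minimal sketch of the word-level trick such libraries rely on: packing GF(2) rows into integers so that a dot product becomes an AND followed by a parity count. This illustrates the general technique only and is not the thesis implementation.

# GF(2) matrix-vector product with bit-packed rows: each row is stored as an
# integer, and a dot product over GF(2) is the parity of (row AND vector).
def gf2_matvec(rows, v):
    out = 0
    for i, r in enumerate(rows):
        bit = bin(r & v).count("1") & 1        # parity of the masked row
        out |= bit << i
    return out

A = [0b1011, 0b0110, 0b1101]                   # 3x4 matrix over GF(2), one int per row
x = 0b1010                                     # vector (bit k = entry k)
print(bin(gf2_matvec(A, x)))                   # -> 0b110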
Abstract:
Supervised learning of large-scale hierarchical networks is currently enjoying spectacular success. Despite this excitement, unsupervised learning remains, according to many researchers, a key element of Artificial Intelligence, in which agents must learn from a potentially limited amount of data. This thesis follows that line of thought and addresses various research topics related to the density estimation problem through Boltzmann machines (BMs), probabilistic graphical models at the heart of deep learning. Our contributions touch on sampling, partition function estimation, optimization, and the learning of invariant representations. The thesis begins by presenting a new adaptive sampling algorithm that automatically adjusts the temperature of the simulated Markov chains in order to maintain a high convergence speed throughout training. When used in the context of stochastic maximum likelihood (SML) learning, our algorithm yields increased robustness to the choice of learning rate as well as faster convergence. Our results are presented for BMs, but the method is general and applicable to the training of any probabilistic model that relies on Markov chain sampling. While the maximum-likelihood gradient can be approximated by sampling, evaluating the log-likelihood requires an estimate of the partition function. In contrast to traditional approaches that treat a given model as a black box, we propose instead to exploit the dynamics of learning by estimating the successive changes in the log-partition function incurred at each parameter update. The estimation problem is reformulated as an inference problem similar to Kalman filtering, but on a two-dimensional graph whose dimensions correspond to the time axis and the temperature parameter. On the optimization front, we also present an algorithm for efficiently applying the natural gradient to Boltzmann machines with thousands of units. Until now, its adoption has been limited by its high computational cost and memory requirements. Our algorithm, Metric-Free Natural Gradient (MFNG), avoids explicitly computing the Fisher information matrix (and its inverse) by exploiting a linear solver combined with an efficient matrix-vector product. The algorithm is promising: in terms of the number of function evaluations, MFNG converges faster than SML. Unfortunately, its implementation remains inefficient in wall-clock time. This work also explores the mechanisms underlying the learning of invariant representations. To this end, we use the family of spike-and-slab restricted Boltzmann machines (ssRBM), which we modify so as to model sparse binary distributions. The binary latent variables of the ssRBM can be made invariant to a vector subspace by associating with each of them a vector of continuous latent variables (called "slabs"). This translates into increased invariance of the representation and better classification rates when few labeled data are available.
We close this thesis with an ambitious topic: learning representations that can separate the factors of variation present in the input signal. We propose a solution based on a bilinear ssRBM (with two groups of latent factors) and formulate the problem as one of pooling over complementary vector subspaces.
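A minimal sketch of stochastic maximum likelihood (SML) training for a binary restricted Boltzmann machine, with a temperature parameter on the persistent negative chains. The adaptive rule the thesis proposes for tuning that temperature during learning is not reproduced, and all sizes and data below are placeholders.

import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_vis, n_hid, n_chains, lr = 20, 10, 32, 0.01
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)
neg_v = rng.integers(0, 2, (n_chains, n_vis)).astype(float)   # persistent chains

def gibbs_step(v, temperature=1.0):
    # One block-Gibbs sweep; dividing the pre-activations by the temperature
    # amounts to scaling the model energy that drives the negative chains.
    h = (rng.random((v.shape[0], n_hid)) < sigmoid((v @ W + b_h) / temperature)).astype(float)
    return (rng.random((v.shape[0], n_vis)) < sigmoid((h @ W.T + b_v) / temperature)).astype(float)

data = rng.integers(0, 2, (64, n_vis)).astype(float)          # stand-in training batch
for step in range(100):
    pos_h = sigmoid(data @ W + b_h)                           # positive phase (mean-field h)
    neg_v = gibbs_step(neg_v, temperature=1.0)                # advance persistent chains
    neg_h = sigmoid(neg_v @ W + b_h)                          # negative-phase statistics
    W += lr * (data.T @ pos_h / len(data) - neg_v.T @ neg_h / n_chains)
    b_v += lr * (data.mean(0) - neg_v.mean(0))
    b_h += lr * (pos_h.mean(0) - neg_h.mean(0))

print("||W|| after 100 SML updates:", round(float(np.linalg.norm(W)), 3))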