929 results for non-uniform scale perturbation finite difference scheme
Abstract:
GraphChi is the first reported disk-based graph engine that can efficiently handle billion-scale graphs on a single PC. It can execute several advanced data mining, graph mining, and machine learning algorithms on very large graphs. With its novel parallel sliding windows (PSW) technique for loading subgraphs from disk into memory to update vertices and edges, it achieves processing performance close to, and sometimes better than, that of mainstream distributed graph engines. The GraphChi authors noted, however, that memory is not effectively utilized on large datasets, which leads to suboptimal computational performance. In this paper, motivated by the concept of 'pin' from TurboGraph and 'ghost' from GraphLab, we propose a new memory utilization mode for GraphChi, called Part-in-memory mode, to improve the performance of GraphChi algorithms. The main idea is to pin a fixed part of the data in memory for the whole computing process. Part-in-memory mode was implemented with only about 40 additional lines of code on top of the original GraphChi engine. Extensive experiments were performed with large real datasets (including the Twitter graph with 1.4 billion edges). The preliminary results show that the Part-in-memory memory management approach reduces GraphChi running time by up to 60% for the PageRank algorithm. Interestingly, a larger portion of data pinned in memory does not always lead to better performance when the whole dataset cannot fit in memory: there exists an optimal portion of data to keep in memory for the best computational performance.
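For illustration, a minimal Python sketch of the pinning idea (the names Shard, update, and pin_fraction are hypothetical, not the GraphChi API): a fixed prefix of the shards stays resident in RAM for the whole run, while the remaining shards are streamed from disk each iteration.

class Shard:
    """Stand-in for an on-disk GraphChi shard."""
    def __init__(self, edges):
        self.edges = edges              # pretend this lives on disk
    def load(self):
        return list(self.edges)        # simulated disk read
    def write_back(self, edges):
        self.edges = edges             # simulated disk write

def run_part_in_memory(shards, update, iterations, pin_fraction=0.5):
    n_pinned = int(len(shards) * pin_fraction)
    pinned = [s.load() for s in shards[:n_pinned]]    # resident for the whole run
    for _ in range(iterations):
        for i, shard in enumerate(shards):
            sub = pinned[i] if i < n_pinned else shard.load()
            update(sub)                               # e.g. one PageRank sweep
            if i >= n_pinned:
                shard.write_back(sub)                 # only streamed shards hit disk
    for i in range(n_pinned):                         # flush pinned shards once
        shards[i].write_back(pinned[i])

# Example: pin half of 4 shards; sweeping pin_fraction would trace out the
# optimal pinned portion the abstract reports when the dataset exceeds memory.
shards = [Shard([(0, 1), (1, 2)]) for _ in range(4)]
run_part_in_memory(shards, update=lambda sub: None, iterations=3)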
Abstract:
Mathematics Subject Classification: 26A33, 74B20, 74D10, 74L15
Abstract:
We investigate return-to-zero (RZ) to non-return-to-zero (NRZ) format conversion by means of linear time-invariant system theory. It is shown that the problem of converting a random RZ stream to an NRZ stream can be reduced to constructing an appropriate transfer function for the linear filter. This approach is then used to propose a novel, optimally designed single fiber Bragg grating (FBG) filter scheme for RZ-OOK/DPSK/DQPSK to NRZ-OOK/DPSK/DQPSK format conversion. The spectral response of the FBG is designed according to the algebraic difference between the optical spectra of isolated NRZ and RZ pulses, and the filter order is optimized for the maximum Q-factor of the output NRZ signals. Experimental results as well as simulations show that such an optimally designed FBG can successfully perform RZ-OOK/DPSK/DQPSK to NRZ-OOK/DPSK/DQPSK format conversion.
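As a sketch of the design principle (our notation, not the paper's): if an isolated RZ pulse r(t) and the target NRZ pulse s(t) have spectra R(ω) and S(ω), a linear time-invariant converter needs the transfer function

\[
H(\omega)\;=\;\frac{S(\omega)}{R(\omega)},
\qquad
|H(\omega)|_{\mathrm{dB}}\;=\;|S(\omega)|_{\mathrm{dB}}-|R(\omega)|_{\mathrm{dB}},
\]

so on a logarithmic scale the required FBG spectral response is the algebraic difference between the NRZ and RZ pulse spectra, which is the design rule stated above.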
Abstract:
Given an n-ary k-valued function f, gap(f) denotes the essential arity gap of f: the minimal number of essential variables in f that become fictive when any two distinct essential variables in f are identified. In the present paper we study the properties of symmetric functions with non-trivial arity gap (2 ≤ gap(f)). We prove several results concerning the decomposition of symmetric functions with non-trivial arity gap into their minors or subfunctions. We show that all non-empty sets of essential variables in symmetric functions with non-trivial arity gap are separable. ACM Computing Classification System (1998): G.2.0.
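In symbols (a standard formalization consistent with the abstract's wording), writing ess(g) for the number of essential variables of g and f_{x_i = x_j} for the minor obtained by identifying x_i with x_j:

\[
\mathrm{gap}(f)\;=\;\mathrm{ess}(f)\;-\;\max_{\substack{i\neq j\\ x_i,\,x_j\ \text{essential in}\ f}}\ \mathrm{ess}\bigl(f_{x_i=x_j}\bigr),
\]

so a non-trivial arity gap, gap(f) ≥ 2, means that identifying any two essential variables makes at least two essential variables fictive at once.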
Abstract:
A novel multichannel carrier-suppressed return-to-zero (CSRZ) to non-return-to-zero (NRZ) format conversion scheme based on a single custom-designed fiber Bragg grating (FBG) with comb spectra is proposed. The spectral response of each channel is designed according to the algebraic difference between the CSRZ and NRZ spectral outlines. Tailored group delays are introduced to minimize the maximum refractive index modulation. Numerical results show that four-channel 200-GHz-spaced CSRZ signals at 40 Gbit/s can be converted into NRZ signals with high Q-factor and wide-range robustness. It is shown that the proposed FBG is robust to deviations of bandwidth and central wavelength detuning. Another important merit of this scheme is that pattern effects are efficiently reduced owing to the well-designed spectral response.
Abstract:
This study investigated the proposition density, sentence and clause type usage, and non-finite verbal usage in two college textbooks. The teaching implications are presented.
Abstract:
Detection canines represent the fastest and most versatile means of illicit material detection. At its simplest, this research endeavor is the improvement of detection canines through training, training aids, and calibration. This study focuses on developing a universal calibration compound with which all detection canines, regardless of detection substance, can be tested daily to ensure that they are working within acceptable parameters. Surrogate continuation aids (SCAs) were developed for peroxide-based explosives, along with the validation of the SCAs already developed within the International Forensic Research Institute (IFRI) prototype surrogate explosives kit. Storage parameters of the SCAs were evaluated to give recommendations to the detection canine community on the training aid storage solution that best minimizes the likelihood of contamination. Two commonly used and accepted detection canine imprinting methods were also evaluated for the speed with which the canine is trained and for their reliability. As a result of this study, SCAs have been developed for explosive detection canine use covering peroxide-based explosives, TNT-based explosives, nitroglycerin-based explosives, tagged explosives, plasticized explosives, and smokeless powders. Through the use of these surrogate continuation aids, a more uniform and reliable system of training can be implemented in the field than is currently used today. By examining the storage parameters of the SCAs, an ideal storage system was developed using three levels of containment to reduce possible contamination. The developed calibration compound will ease growing concerns over the legality and reliability of detection canine use by detailing the daily working parameters of the canine, allowing the Daubert rules of evidence admissibility to be applied. Through canine field testing, it has been shown that the IFRI SCAs outperform other commercially available training aids on the market. Additionally, of the imprinting methods tested, no difference was found in the speed with which the canines are trained or in their reliability to detect illicit materials. Therefore, if the recommendations from this study are followed, the detection canine community will benefit greatly from the use of scientifically validated training techniques and training aids.
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even after the huge increases in n seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is thus of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation; the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference beyond the standard setting of multivariate continuous data that has been the major focus of previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and characterize their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
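For concreteness, a latent class (PARAFAC) factorization of the probability mass function can be written in generic notation (ours, not necessarily the chapter's) as

\[
P(y_1=c_1,\dots,y_p=c_p)\;=\;\sum_{h=1}^{k}\nu_h\prod_{j=1}^{p}\lambda^{(j)}_{h c_j},
\qquad \nu_h\ge 0,\quad \textstyle\sum_h\nu_h=1,
\]

where the smallest such k is the nonnegative rank of the probability tensor; the results in Chapter 2 relate this rank to the support (the sparsity pattern) of a log-linear model for the same table, and the collapsed Tucker class interpolates between this PARAFAC form and a Tucker decomposition.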
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we give a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
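One standard reading of the "optimal Gaussian approximation" (stated generically here; the chapter's exact criterion may differ) is the Kullback-Leibler projection of the posterior p onto the Gaussian family \(\mathcal{G}\),

\[
\hat q\;=\;\operatorname*{arg\,min}_{q\in\mathcal G}\;\mathrm{KL}\bigl(p\,\|\,q\bigr)
\;=\;\mathcal N\bigl(\mathbb E_p[\theta],\ \mathrm{Cov}_p[\theta]\bigr),
\]

i.e. the moment-matched Gaussian, since minimizing KL(p‖q) over Gaussians matches the posterior mean and covariance exactly.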
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
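The flavor of the tradeoff can be sketched as follows (a generic bias-variance decomposition, not the chapter's exact statement): if an approximate kernel with accuracy parameter ε has per-step cost c(ε) and induces bias b(ε) in the stationary expectation of interest, then under a fixed computational budget B the mean squared error of the ergodic average behaves like

\[
\mathrm{MSE}(\varepsilon)\;\approx\;b(\varepsilon)^2\;+\;\frac{\sigma^2}{B/c(\varepsilon)},
\]

with σ² the asymptotic variance of the chain, so a cruder kernel (larger b) can still win when its cheaper steps (smaller c) buy enough extra effective samples; Chapter 6 makes this precise under ergodicity of the exact kernel and control of the kernel approximation error.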
Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution of the parameters of generalized linear models. The truncated normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size, up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly, whereas Hamiltonian Monte Carlo and a type of independence-chain Metropolis algorithm show good mixing on the same dataset.
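A self-contained Python sketch of the truncated normal data augmentation sampler for an intercept-only probit model (a textbook Albert-Chib construction, not the thesis's code) that exhibits the slow mixing described: with n large and few successes, the chain for the intercept shows very high autocorrelation.

import numpy as np
from scipy.stats import truncnorm

def probit_da_sampler(y, n_iter=5000, tau2=100.0, seed=0):
    """Gibbs sampler for y_i = 1(z_i > 0), z_i ~ N(beta, 1), beta ~ N(0, tau2)."""
    rng = np.random.default_rng(seed)
    n, beta, draws = len(y), 0.0, []
    for _ in range(n_iter):
        # z | beta, y: N(beta, 1) truncated to (0, inf) if y=1, (-inf, 0] if y=0
        lo = np.where(y == 1, -beta, -np.inf)    # bounds standardized about beta
        hi = np.where(y == 1, np.inf, -beta)
        z = beta + truncnorm.rvs(lo, hi, random_state=rng)
        # beta | z: conjugate Gaussian update under the N(0, tau2) prior
        prec = n + 1.0 / tau2
        beta = rng.normal(z.sum() / prec, prec ** -0.5)
        draws.append(beta)
    return np.array(draws)

# Rare-event regime: n = 10000 with only 10 successes; expect very high
# autocorrelation and a small effective sample size in `chain`.
y = np.zeros(10000, dtype=int)
y[:10] = 1
chain = probit_da_sampler(y)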
Abstract:
The focus of this work is to develop and employ numerical methods that provide characterization of granular microstructures, dynamic fragmentation of brittle materials, and dynamic fracture of three-dimensional bodies.
We first propose the fabric tensor formalism to describe the structure and evolution of lithium-ion electrode microstructure during the calendering process. Fabric tensors are directional measures of particulate assemblies based on inter-particle connectivity, relating to the structural and transport properties of the electrode. Applying this technique to X-ray computed tomography of cathode microstructure, we show that fabric tensors capture the evolution of the inter-particle contact distribution and are therefore good measures of the internal state of, and electronic transport within, the electrode.
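A common second-order fabric tensor consistent with this description (our notation) is the average outer product of the unit contact normals n^(c) over the N_c inter-particle contacts extracted from the tomography,

\[
\Phi_{ij}\;=\;\frac{1}{N_c}\sum_{c=1}^{N_c} n_i^{(c)}\,n_j^{(c)},
\]

whose eigenvalues and eigenvectors summarize the intensity and principal orientations of the contact distribution, and whose evolution under calendering tracks the densification of the electrode.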
We then shift focus to the development and analysis of fracture models within finite element simulations. A difficult problem to characterize in the realm of fracture modeling is that of fragmentation, wherein brittle materials subjected to a uniform tensile loading break apart into a large number of smaller pieces. We explore the effect of numerical precision in the results of dynamic fragmentation simulations using the cohesive element approach on a one-dimensional domain. By introducing random and non-random field variations, we discern that round-off error plays a significant role in establishing a mesh-convergent solution for uniform fragmentation problems. Further, by using differing magnitudes of randomized material properties and mesh discretizations, we find that employing randomness can improve convergence behavior and provide a computational savings.
The Thick Level-Set model is implemented to describe brittle media undergoing dynamic fragmentation as an alternative to the cohesive element approach. This non-local damage model features a level-set function that defines the extent and severity of degradation and uses a length scale to limit the damage gradient. In terms of energy dissipated by fracture and mean fragment size, we find that the proposed model reproduces the rate-dependent observations of analytical approaches, cohesive element simulations, and experimental studies.
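As a sketch of the regularization (a common profile from the Thick Level-Set literature; the exact ramp used here may differ): with φ the signed distance to the damage front, so that |∇φ| = 1, the damage field is a prescribed ramp of the level set,

\[
d(\phi)=
\begin{cases}
0, & \phi\le 0,\\
\phi/\ell_c, & 0<\phi<\ell_c,\\
1, & \phi\ge \ell_c,
\end{cases}
\qquad\Longrightarrow\qquad
\lVert\nabla d\rVert\le 1/\ell_c,
\]

so the length scale ℓ_c caps the damage gradient, and fully broken material (d = 1) appears only a distance ℓ_c behind the front.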
Lastly, the Thick Level-Set model is implemented in three dimensions to describe the dynamic failure of brittle media, such as the active material particles in the battery cathode during manufacturing. The proposed model matches expected behavior from physical experiments, analytical approaches, and numerical models, and mesh convergence is established. We find that the use of an asymmetrical damage model to represent tensile damage is important to producing the expected results for brittle fracture problems.
The impact of this work is that designers of lithium-ion battery components can employ the numerical methods presented herein to analyze the evolving electrode microstructure during manufacturing, operational, and extraordinary loadings. This allows for enhanced designs and manufacturing methods that advance the state of battery technology. Further, these numerical tools have applicability in a broad range of fields, from geotechnical analysis to ice-sheet modeling to armor design to hydraulic fracturing.
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Light confinement and control of an optical field have numerous applications in telecommunications for optical signal processing. When the wavelength of the electromagnetic field is on the order of the period of a photonic microstructure, the field undergoes reflection, refraction, and coherent scattering. This produces photonic bandgaps: forbidden frequency regions, or spectral stop bands, where light cannot exist. Dielectric perturbations that break the perfect periodicity of these structures produce what is analogous to an impurity state in the bandgap of a semiconductor. The defect modes that exist at discrete frequencies within the photonic bandgap are spatially localized about the cavity-defects in the photonic crystal. In this thesis the properties of two tight-binding approximations (TBAs) are investigated in one-dimensional and two-dimensional coupled-cavity photonic crystal structures. We require an efficient and simple approach that ensures the continuity of the electromagnetic field across dielectric interfaces in complex structures. We therefore develop E- and D-TBAs to calculate the modes in finite 1D and 2D two-defect coupled-cavity photonic crystal structures; in the E-TBA (D-TBA) we expand the coupled-cavity E-modes in terms of the individual E-modes (D-modes). We investigate the dependence of the defect modes, their frequencies, and quality factors on the relative placement of the defects in the photonic crystal structures. We then elucidate the differences between the two TBA formulations and describe the conditions under which each formulation may be more robust when encountering a dielectric perturbation. Our 1D analysis showed that the 1D modes were sensitive to the structure geometry. The antisymmetric D-mode amplitudes show that the D-TBA did not capture the correct (tangential E-field) boundary conditions; however, the D-TBA did not yield significantly poorer results than the E-TBA. Our 2D analysis reveals that the E- and D-TBAs produce nearly identical mode profiles for every structure, and plots of the relative difference between the E- and D-mode amplitudes show that the D-TBA did capture the correct (normal E-field) boundary conditions. We found that the 2D TBA coupled-cavity mode calculations were 125-150 times faster than a finite-difference time-domain (FDTD) calculation for the same two-defect photonic crystal structure. Notwithstanding this efficiency, the appropriateness of either TBA was found to depend on the geometry of the structure and on the mode(s), i.e. whether or not the mode has a large normal or tangential component.
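Schematically (our notation), the two expansions differ only in which single-cavity field serves as the basis:

\[
\vec E_{\mathrm{CC}}(\vec r)\;\approx\;\sum_i a_i\,\vec E_i(\vec r)
\quad\text{(E-TBA)},
\qquad
\vec D_{\mathrm{CC}}(\vec r)\;\approx\;\sum_i b_i\,\vec D_i(\vec r)
\;=\;\sum_i b_i\,\epsilon_i(\vec r)\,\vec E_i(\vec r)
\quad\text{(D-TBA)},
\]

where E_i and D_i are the modes of the i-th isolated defect with dielectric profile ε_i. Because the normal component of D and the tangential component of E are the fields that remain continuous across dielectric interfaces, the two bases build in different boundary conditions, which is exactly what the 1D and 2D comparisons above probe.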
Abstract:
Global niobium production is presently dominated by three operations: Araxá and Catalão (Brazil) and Niobec (Canada). Although Brazil accounts for over 90% of the world's niobium production, a number of high-grade niobium deposits exist worldwide. The advancement of these deposits depends largely on the development of operable beneficiation flowsheets. Pyrochlore, the primary niobium mineral, is typically upgraded by flotation with amine collectors at acidic pH following a complicated flowsheet with significant losses of niobium. This research compares the typical two-stage flotation flowsheet to a direct flotation process (i.e. elimination of gangue pre-flotation) with the objective of circuit simplification. In addition, the use of a chelating reagent (benzohydroxamic acid, BHA) was studied as an alternative collector for fine-grained, highly disseminated pyrochlore. For the amine-based reagent system, results showed that while the two approaches were comparable at the laboratory scale, when scaled up to the pilot level the direct flotation process suffered from circuit instability: high quantities of dissolved calcium in the process water, due to stream recirculation and fine calcite dissolution, ultimately depressed the pyrochlore. This scale-up issue was not observed in pilot plant operation of the two-stage flotation process, as a portion of the highly reactive carbonate minerals was removed prior to acid addition. A statistical model was developed for batch flotation using BHA on a carbonatite ore (0.25% Nb2O5) that could not be effectively upgraded using the conventional amine reagent scheme. Results showed that it was possible to produce a concentrate containing 1.54% Nb2O5 with 93% Nb recovery in ~15% of the original mass. Fundamental studies undertaken included FT-IR and XPS, which showed adsorption of both the protonated amine and the neutral amine onto the surface of the pyrochlore (possibly at niobium sites, as indicated by detected shifts in the Nb 3d binding energy). The results suggest that the preferential flotation of pyrochlore over quartz with amines at low pH can be attributed to a difference in critical hemimicelle concentration (CHC) values for the two minerals. BHA was found to adsorb on pyrochlore surfaces by a mechanism similar to that of alkyl hydroxamic acid. It is hoped that this work will assist in improving the operability of existing pyrochlore flotation circuits and help promote the development of niobium deposits globally. Future studies should focus on specific gangue mineral depressants and on the inadvertent activation phenomena related to BHA flotation of gangue minerals.
Abstract:
Since core-collapse supernova simulations still struggle to produce robust neutrino-driven explosions in 3D, it has been proposed that asphericities caused by convection in the progenitor might facilitate shock revival by boosting the activity of non-radial hydrodynamic instabilities in the post-shock region. We investigate this scenario in depth using 42 relativistic 2D simulations with multigroup neutrino transport to examine the effects of velocity and density perturbations in the progenitor for different perturbation geometries that obey fundamental physical constraints (like the anelastic condition). As a framework for analysing our results, we introduce semi-empirical scaling laws relating neutrino heating, average turbulent velocities in the gain region, and the shock deformation in the saturation limit of non-radial instabilities. The squared turbulent Mach number, ⟨Ma²⟩, reflects the violence of aspherical motions in the gain layer, and explosive runaway occurs for ⟨Ma²⟩ ≳ 0.3, corresponding to a reduction of the critical neutrino luminosity by ∼25 per cent compared to 1D. In the light of this theory, progenitor asphericities aid shock revival mainly by creating anisotropic mass flux on to the shock: differential infall efficiently converts velocity perturbations in the progenitor into density perturbations δρ/ρ at the shock of the order of the initial convective Mach number Ma_prog. The anisotropic mass flux and ram pressure deform the shock and thereby amplify post-shock turbulence. Large-scale (ℓ = 2, ℓ = 1) modes prove most conducive to shock revival, whereas small-scale perturbations require unrealistically high convective Mach numbers. Initial density perturbations in the progenitor are only of the order of Ma_prog² and therefore play a subdominant role.
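In compact form, the relations quoted above read:

\[
\left.\frac{\delta\rho}{\rho}\right|_{\mathrm{shock}}\sim \mathrm{Ma}_{\mathrm{prog}},
\qquad
\left.\frac{\delta\rho}{\rho}\right|_{\mathrm{prog}}\sim \mathrm{Ma}_{\mathrm{prog}}^{2},
\qquad
\langle \mathrm{Ma}^{2}\rangle\;\gtrsim\;0.3\;\Rightarrow\;\text{runaway},
\]

with the runaway criterion corresponding to a ∼25 per cent reduction of the critical neutrino luminosity relative to 1D.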