875 results for Spectral Difference Method
Abstract:
In this dissertation I draw a connection between quantum adiabatic optimization, spectral graph theory, heat diffusion, and sub-stochastic processes through the operators that govern these processes and their associated spectra. In particular, we study Hamiltonians which have recently become known as ``stoquastic'' or, equivalently, the generators of sub-stochastic processes. The operators corresponding to these Hamiltonians are of interest in all of the settings mentioned above. I predominantly explore the connection between the spectral gap of an operator, i.e., the difference between its two lowest energies, and certain equilibrium behavior. In the context of adiabatic optimization, this corresponds to the likelihood of solving the optimization problem of interest. I will provide an instance of an optimization problem that is easy to solve classically but leaves open the possibility of being difficult adiabatically. Aside from this concrete example, the work in this dissertation is predominantly mathematical, and we focus on bounding the spectral gap. Our primary tool for doing so is spectral graph theory, which provides the most natural approach to this task: we simply consider Dirichlet eigenvalues of subgraphs of host graphs. I will derive tight bounds for the gap of one-dimensional, hypercube, and general convex subgraphs. The techniques used will also adapt methods recently used by Andrews and Clutterbuck to prove the long-standing ``Fundamental Gap Conjecture''.
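To make the Dirichlet-eigenvalue idea concrete, here is a minimal sketch (illustrative Python with NumPy, not code from the dissertation) that computes the spectral gap of the Dirichlet Laplacian on a one-dimensional path subgraph; removing the boundary vertices of the host path imposes the Dirichlet condition, and the gap decays like 1/n².

```python
import numpy as np

def dirichlet_gap(n):
    """Spectral gap of the Dirichlet Laplacian on a path with n interior vertices.

    Deleting the two boundary vertices of a host path imposes Dirichlet
    conditions, so every interior vertex keeps degree 2 in the Laplacian.
    """
    lap = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    evals = np.linalg.eigvalsh(lap)
    return evals[1] - evals[0]   # difference of the two lowest Dirichlet eigenvalues

# Exact eigenvalues are 2 - 2*cos(k*pi/(n+1)), so the gap shrinks like 3*pi^2/n^2.
for n in (10, 50, 200):
    print(n, dirichlet_gap(n))
```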
Abstract:
A new type of space debris was recently discovered by Schildknecht in near-geosynchronous orbit (GEO). These objects were later identified as exhibiting properties associated with High Area-to-Mass Ratio (HAMR) objects. Based on their brightness magnitudes (light curves), high rotation rates and composition properties (albedo, amounts of specular and diffuse reflection, colour, etc.), these objects are thought to be multilayer insulation (MLI). Observations have shown that this debris type is very sensitive to environmental disturbances, particularly solar radiation pressure, because its shape is easily deformed, leading to changes in the area-to-mass ratio (AMR) over time. This thesis proposes a simple, effective flexible model of the thin, deformable membrane, built with two different methods. Firstly, the debris is modelled with Finite Element Analysis (FEA) using Bernoulli-Euler beam theory (the “Bernoulli model”). The Bernoulli model is constructed from beam elements consisting of two nodes, each with six degrees of freedom (DoF). The mass of the membrane is distributed over the beam elements. Secondly, the debris is modelled using multibody dynamics theory (the “Multibody model”) as a series of lumped masses connected through flexible joints, representing the flexibility of the membrane itself. The mass of the membrane, albeit low, is taken into account through the lumped masses at the joints. The dynamic equations for the masses, including the constraints defined by the connecting rigid rods, are derived using fundamental Newtonian mechanics. The physical properties required by both flexible models (membrane density, reflectivity, composition, etc.) are assumed to be those of multilayer insulation. Both flexible membrane models are then propagated together with classical orbital and attitude equations of motion near the GEO region to predict the orbital evolution under the perturbations of solar radiation pressure, the Earth’s gravity field, luni-solar gravitational fields and the self-shadowing effect. The results are then compared to those of two rigid-body models (cannonball and flat rigid plate). Compared with a rigid model, the orbital-element evolutions of the flexible models show differences in inclination and secular eccentricity evolution, rapid irregular attitude motion, and an unstable cross-sectional area due to deformation over time. Monte Carlo simulations varying the initial attitude dynamics and deformation angle are then investigated and compared with the rigid models over 100 days. The simulations show that different initial conditions produce unique orbital motions, significantly different from the orbital motions of both rigid models. Furthermore, this thesis presents a methodology to determine the dynamic material properties of thin membranes and validates the deformation of the multibody model with real MLI materials. Experiments are performed in a high-vacuum chamber (10⁻⁴ mbar) replicating the space environment. A thin membrane is hinged at one end but free at the other. The first experiment, a free-vibration test, determines the damping coefficient and natural frequency of the thin membrane. In this test, the membrane is allowed to fall freely in the chamber, with its motion tracked and captured in high-speed video frames. A Kalman filter technique is implemented in the tracking algorithm to reduce noise and increase the tracking accuracy of the oscillating motion.
The second experiment, a forced-motion test, is performed to determine the deformation characteristics of the object. A high-power spotlight (500-2000 W) is used to illuminate the MLI, and the displacements are measured by means of a high-resolution laser sensor. Finite Element Analysis (FEA) and multibody dynamics models of the experimental setups are used to validate the flexible models by comparison with the experimentally measured displacements and natural frequencies.
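For the free-vibration test, the damping coefficient and natural frequency can be estimated from the tracked oscillation by the classical logarithmic-decrement method. The sketch below is an illustration of that step only (Python with NumPy/SciPy; all names are assumptions, and the thesis pipeline additionally uses Kalman-filtered video tracking):

```python
import numpy as np
from scipy.signal import find_peaks

def damping_from_free_decay(t, x):
    """Estimate natural frequency and damping ratio from a free-decay trace."""
    peaks, _ = find_peaks(x)
    t_p, x_p = t[peaks], x[peaks]
    # Logarithmic decrement from successive positive peak amplitudes.
    delta = np.mean(np.log(x_p[:-1] / x_p[1:]))
    zeta = delta / np.sqrt(4 * np.pi**2 + delta**2)   # damping ratio
    f_d = 1.0 / np.mean(np.diff(t_p))                 # damped frequency [Hz]
    f_n = f_d / np.sqrt(1 - zeta**2)                  # natural frequency [Hz]
    return f_n, zeta

# Synthetic check: a 2 Hz oscillation with 2% damping is recovered.
t = np.linspace(0, 10, 5000)
x = np.exp(-0.02 * 2 * np.pi * 2 * t) * np.cos(2 * np.pi * 2 * t)
print(damping_from_free_decay(t, x))
```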
Abstract:
The focus of this research is to explore applications of the finite-difference formulation based on the latency insertion method (LIM) to the analysis of circuit interconnects. Special attention is devoted to the issues that arise in very large networks, such as on-chip signal and power distribution networks. We demonstrate that the LIM has the power and flexibility to handle the various types of analysis required at different stages of circuit design. The LIM is particularly suitable for simulations of very large-scale linear networks and can significantly outperform conventional circuit solvers (such as SPICE).
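To illustrate the leapfrog structure that makes the LIM attractive on large linear networks, the following sketch (hypothetical Python with arbitrary element values, not the authors' implementation) alternately updates node voltages and branch currents on a uniform LC ladder:

```python
import numpy as np

# Minimal LIM-style leapfrog update on a uniform LC ladder (illustrative values).
# Each branch carries a series inductance; each node has a shunt capacitance.
n_nodes, n_steps = 200, 2000
L, C = 1e-9, 1e-12                     # per-segment inductance [H], capacitance [F]
dt = 0.5 * np.sqrt(L * C)              # time step below the LC stability limit

v = np.zeros(n_nodes)                  # node voltages, sampled at integer steps
i = np.zeros(n_nodes - 1)              # branch currents, sampled at half steps

for n in range(n_steps):
    v[0] = np.exp(-(((n * dt) - 1e-10) / 2e-11) ** 2)   # Gaussian pulse drives node 0
    i += (dt / L) * (v[:-1] - v[1:])                    # half step: inductor branches
    v[1:-1] += (dt / C) * (i[:-1] - i[1:])              # full step: interior nodes
    v[-1] += (dt / C) * i[-1]                           # open-circuited far end
```

Because each update touches only a node's immediate neighbours, the cost per time step scales linearly with network size, which is the property that lets the LIM outperform matrix-factorization solvers on very large networks.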
Abstract:
Determining effective hydraulic, thermal, mechanical and electrical properties of porous materials by means of classical physical experiments is often time-consuming and expensive. Accurate numerical calculation of material properties is therefore of increasing interest in geophysical, manufacturing, bio-mechanical and environmental applications, among other fields. Characteristic material properties (e.g. intrinsic permeability, thermal conductivity and elastic moduli) depend on morphological details at the pore scale, such as the shape and size of pores and pore throats or cracks. To obtain reliable predictions of these properties it is necessary to perform numerical analyses on sufficiently large unit cells. Such representative volume elements require optimized numerical simulation techniques. Current state-of-the-art simulation tools for calculating effective permeabilities of porous materials are based on various methods, e.g. lattice Boltzmann, finite volume or explicit-jump Stokes methods. All of these approaches are still limited in the maximum size of the simulation domain. In response to these deficits of the well-established methods, we propose an efficient and reliable numerical method that calculates intrinsic permeabilities directly from voxel-based data obtained from 3D imaging techniques such as X-ray microtomography. We present a modelling framework based on a parallel finite-difference solver that allows large domains to be computed with relatively low computing requirements (i.e. desktop computers). The presented method is validated on a diverse selection of materials, yielding accurate results over a range of porosities wider than those previously reported. Ongoing work includes the estimation of other effective properties of porous media.
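As a much-simplified illustration of the voxel-based finite-difference workflow, the sketch below (Python with NumPy; all names are assumptions) relaxes a scalar potential on the segmented pore space and reports a flux-based conductivity proxy. A real intrinsic-permeability solver resolves Stokes flow on the pore geometry; this analogue only shows how voxel data, no-flux solid walls and a pinned pressure drop fit together on a finite-difference grid:

```python
import numpy as np

def effective_conductivity(pore, n_sweeps=2000):
    """Toy voxel finite-difference relaxation (scalar diffusion analogue).

    pore : 3D boolean array from segmented microtomography, True = pore voxel.
    A unit potential drop is applied along axis 0; solid voxels act as
    no-flux walls (they are simply excluded from the neighbour averages).
    """
    pore = pore.astype(bool)
    p = np.linspace(1.0, 0.0, pore.shape[0])[:, None, None] * np.ones(pore.shape)
    for _ in range(n_sweeps):
        # Jacobi sweep averaging only over pore neighbours (Neumann at solid walls).
        nbr_sum = sum(np.roll(np.where(pore, p, 0.0), s, a)
                      for a in (0, 1, 2) for s in (1, -1))
        nbr_cnt = sum(np.roll(pore.astype(float), s, a)
                      for a in (0, 1, 2) for s in (1, -1))
        p = np.where(pore & (nbr_cnt > 0), nbr_sum / np.maximum(nbr_cnt, 1.0), p)
        p[0], p[-1] = 1.0, 0.0                    # pinned inlet/outlet slices
    both = pore[0] & pore[1]
    return (p[0] - p[1])[both].sum()              # flux proxy through the inlet face
```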
Abstract:
In design and manufacturing, mesh segmentation is required for FACE construction in boundary representation (BRep), which in turn is central to feature-based design, machining, parametric CAD and reverse engineering, among others. Although mesh segmentation is dictated by both geometry and topology, this article focuses on the topological aspect (the graph spectrum), as we consider that this tool has not been fully exploited. We preprocess the mesh to obtain an edge-length-homogeneous triangle set and compute its graph Laplacian. We then produce a monotonically increasing permutation of the Fiedler vector (the second eigenvector of the graph Laplacian) to encode the connectivity among part-feature submeshes. Within the permuted vector, discontinuities larger than a threshold (interactively set by a human) determine the partition of the original mesh. We present tests of our method on large complex meshes, with results that mostly conform to the BRep FACE partition. The achieved segmentations properly locate most manufacturing features, although human interaction is required to avoid over-segmentation. Future work includes an iterative application of this algorithm to progressively sever features from the mesh left by previous submesh removals.
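A minimal sketch of the spectral step (illustrative Python with NumPy/SciPy, not the authors' code) computes the Fiedler vector, sorts it, and cuts at discontinuities larger than the interactively chosen threshold:

```python
import numpy as np
from scipy.sparse import csgraph
from scipy.sparse.linalg import eigsh

def fiedler_segments(adjacency, threshold):
    """Segment a mesh graph by jumps in the sorted Fiedler vector.

    adjacency : sparse symmetric vertex-adjacency matrix of the
                edge-length-homogeneous triangle set.
    threshold : minimum discontinuity that starts a new segment
                (set interactively in the article's pipeline).
    """
    lap = csgraph.laplacian(adjacency.astype(float))
    # Two smallest eigenpairs of the graph Laplacian; the eigenvector of
    # the second-smallest eigenvalue is the Fiedler vector.
    _, vecs = eigsh(lap, k=2, which='SM')
    fiedler = vecs[:, 1]
    order = np.argsort(fiedler)                    # monotonic permutation
    jumps = np.diff(fiedler[order]) > threshold    # large discontinuities
    labels = np.empty(adjacency.shape[0], dtype=int)
    labels[order] = np.concatenate(([0], np.cumsum(jumps)))
    return labels                                  # segment id per vertex
```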
Abstract:
Experimental geophysical fluid dynamics often examines regimes of fluid flow infeasible for computer simulations. Velocimetry of the zonal flows present in these regimes brings many challenges when the fluid is opaque and vigorously rotating; spherical Couette flows with molten metals are one such example. The fine structure of the acoustic spectrum can be related to the fluid’s velocity field, and inverse spectral methods can be used to predict and, with sufficient acoustic data, mathematically reconstruct the velocity field. The methods are to some extent inherited from helioseismology. This work develops a Finite Element Method suited to matching the geometries of experimental setups, as well as to modelling the acoustics based on that geometry and the zonal flows within it. As an application, this work uses the 60-cm setup Dynamo 3.5 at the University of Maryland Nonlinear Dynamics Laboratory. Additionally, results obtained using a small acoustic data set from recent experiments in air are provided.
Abstract:
Past and recent observations have shown that local site conditions significantly affect the behavior of seismic waves and their destructive potential during earthquakes. Seismic microzonation studies have therefore become crucial for seismic hazard assessment, providing local soil characteristics that help to evaluate possible seismic effects. Among the different methods used for estimating soil characteristics, those based on ambient noise measurements, such as the H/V technique, have become a cheap, non-invasive and successful way of evaluating soil properties across a study area. In this work, ambient noise measurements were taken at 240 sites around the Doon Valley, India, in order to characterize the sediment deposits. First, H/V analysis was carried out to estimate the resonant frequencies along the valley. Subsequently, some of these H/V results were inverted, using the neighborhood algorithm and the available geotechnical information, to estimate the S-wave velocity profiles at the studied sites. Using all this information, we characterized the sedimentary deposits in different areas of the Doon Valley, providing the resonant frequency, the soil thickness, the mean S-wave velocity of the sediments, and the mean S-wave velocity in the uppermost 30 m.
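For reference, the core of the H/V technique is the ratio of the mean horizontal to the vertical Fourier amplitude spectrum of the ambient-noise record. A minimal single-window sketch (illustrative Python with NumPy; real processing averages many windows and smooths the spectra) is:

```python
import numpy as np

def hv_curve(ns, ew, ud, fs, nfft=4096):
    """Single-window H/V spectral ratio from three-component ambient noise.

    ns, ew, ud : equal-length north-south, east-west and vertical records.
    fs         : sampling rate in Hz.
    """
    win = np.hanning(len(ud))

    def amp(x):
        return np.abs(np.fft.rfft((x - x.mean()) * win, nfft))

    horiz = np.sqrt((amp(ns) ** 2 + amp(ew) ** 2) / 2.0)   # mean horizontal spectrum
    hv = horiz / np.maximum(amp(ud), 1e-12)                # guard the DC bin
    freq = np.fft.rfftfreq(nfft, d=1.0 / fs)
    return freq, hv   # the resonant frequency is the peak of hv(freq)
```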
Abstract:
Purpose: To evaluate the comparative efficiency of graphite furnace atomic absorption spectrometry (GFAAS) and hydride generation atomic absorption spectrometry (HGAAS) for trace analysis of arsenic (As) in natural herbal products (NHPs). Method: Arsenic analysis in NHPs and a standard reference material was conducted using atomic absorption spectrometry (AAS), namely HGAAS and GFAAS. The samples were digested with HNO3–H2O2 in a ratio of 4:1 using microwave-assisted acid digestion. The methods were validated with the aid of the standard reference material 1515 Apple Leaves (SRM) from NIST. Results: Mean recoveries for three different NHP samples ranged from 89.3 to 91.4 % using HGAAS and from 91.7 to 93.0 % using GFAAS. The difference between the two methods was insignificant for samples A (p = 0.5), B (p = 0.4) and C (p = 0.88). Relative standard deviation (RSD), i.e., precision, was 2.5 - 6.5 % and 2.3 - 6.7 % for the HGAAS and GFAAS techniques, respectively. Recovery of arsenic in the SRM was 98 % by GFAAS and 102 % by HGAAS. Conclusion: GFAAS demonstrates acceptable levels of precision and accuracy. Both techniques possess comparable accuracy and repeatability. Thus, either method is recommended as an alternative approach for trace analysis of arsenic in natural herbal products.
Abstract:
The brain is a network spanning multiple scales, from subcellular to macroscopic. In this thesis I present four projects studying brain networks at different levels of abstraction. The first involves determining a functional connectivity network based on neural spike trains and using a graph-theoretical method to cluster groups of neurons into putative cell assemblies. In the second project I model neural networks at a microscopic level. Using different clustered wiring schemes, I show that almost identical spatiotemporal activity patterns can be observed, demonstrating that there is a broad neuro-architectural basis for attaining structured spatiotemporal dynamics. Remarkably, irrespective of the precise topological mechanism, this behavior can be predicted by examining the spectral properties of the synaptic weight matrix. The third project introduces, via two circuit architectures, a new paradigm for feedforward processing in which inhibitory neurons play a complex and pivotal role in governing information flow in cortical network models. Finally, I analyze axonal projections in sleep-deprived mice using data collected as part of the Allen Institute's Mesoscopic Connectivity Atlas. After normalizing for experimental variability, the results indicate that there is no single explanatory difference in the mesoscale network between control and sleep-deprived mice. Using machine learning techniques, however, animals could be classified at levels significantly above chance. This reveals that intricate changes in connectivity do occur due to chronic sleep deprivation.
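The spectral claim of the second project can be illustrated with a toy example (Python with NumPy; sizes, connection probabilities and weights are arbitrary choices, not the thesis's network parameters): a clustered connection rule produces eigenvalue outliers beyond the bulk of the weight matrix's spectrum, and such outliers are what carry the structured dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 10                        # neurons, clusters
labels = rng.integers(0, k, n)
same = labels[:, None] == labels[None, :]
p = np.where(same, 0.25, 0.05)         # denser wiring inside clusters
W = (rng.random((n, n)) < p) * (1.0 / np.sqrt(n))   # clustered synaptic weights
eig = np.linalg.eigvals(W)
# The largest magnitudes separate from the random bulk: one global outlier
# plus roughly k-1 cluster outliers, the spectral signature of the clustering.
print(np.sort(np.abs(eig))[-k:])
```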
Abstract:
We present an advanced method for achieving natural modifications when applying a pitch-shifting process to a singing voice by modifying the spectral envelope of the audio excerpt. To this end, an all-pole spectral envelope model has been selected to describe the global variations of the spectral envelope with changes in pitch. We performed a pitch-shifting process on several sustained vowels, with and without the envelope processing, and compared both by means of a survey open to volunteers on our website.
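A minimal sketch of the all-pole envelope estimation (autocorrelation-method linear prediction in Python with NumPy/SciPy; an illustration of the model class, not the authors' implementation):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def allpole_envelope(frame, order=24, nfft=2048):
    """All-pole (LPC) spectral envelope of one windowed audio frame.

    Autocorrelation method: solve the Yule-Walker normal equations for the
    predictor coefficients, then evaluate gain / |A(e^{jw})| on an FFT grid.
    """
    x = frame * np.hanning(len(frame))
    r = np.correlate(x, x, 'full')[len(x) - 1:len(x) + order]   # lags 0..order
    a = solve_toeplitz((r[:-1], r[:-1]), r[1:])   # predictor coefficients
    err = r[0] - np.dot(a, r[1:])                 # residual (squared gain) energy
    A = np.fft.rfft(np.concatenate(([1.0], -a)), nfft)
    return np.sqrt(err) / np.abs(A)               # envelope magnitude per bin
```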
Abstract:
With recent advances in remote sensing processing technology, it has become more feasible to begin analysis of the enormous historical archive of remotely sensed data. This historical data provides valuable information on a wide variety of topics that can influence the lives of millions of people if processed correctly and in a timely manner. One such field of benefit is landslide mapping and inventory, which provides a historical reference to those who live near high-risk areas so that future disasters may be avoided. In order to properly map landslides remotely, an optimum method must first be determined. Historically, mapping has been attempted using pixel-based methods such as unsupervised and supervised classification. These methods are limited in that they characterize an image only spectrally, based on single-pixel values. This yields results prone to false positives, often without meaningful objects being created. Recently, several reliable methods of Object Oriented Analysis (OOA) have been developed which utilize a full range of spectral, spatial, textural, and contextual parameters to delineate regions of interest. A comparison of these two approaches on a historical dataset of the landslide-affected city of San Juan La Laguna, Guatemala, demonstrated the benefits of OOA methods over unsupervised classification. Overall accuracies of 96.5% and 94.3% and F-scores of 84.3% and 77.9% were achieved for the OOA and unsupervised classification methods, respectively. The larger difference in F-score results from the low precision of unsupervised classification, caused by poor false-positive removal, the greatest shortcoming of that method.
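The sensitivity of the F-score to precision explains why the gap is wider there than in overall accuracy: with precision P and recall R,

```latex
F = \frac{2PR}{P + R},
```

so the false positives left by unsupervised classification depress P, and hence F, even while overall accuracy stays high.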
Abstract:
AIMS: In the UK, people tend to have poor knowledge of government guidelines for alcohol use, and lack the motivation and skills required to use them to monitor their drinking. The study aim was to determine whether using glasses marked with such guidelines would improve knowledge and attitudes, increase the frequency of counting units and lower alcohol intake. METHODS: A total of 450 adults in the UK participated in an intervention vs control study with 1-month follow-up. The intervention group was encouraged to use glasses, supplied by the researchers, that indicated the unit content of drinks of different strengths and volumes and stated the intake guidelines. Data were collected online. Further, more in-depth interviews with 13 intervention-group participants enquired into their experiences of using the glasses. RESULTS: Analyses adjusted for baseline variables showed that the intervention improved the following: knowledge of unit-based guidelines, ability to estimate the unit content of drinks, attitudes toward the guidelines and frequency of counting unit intake. However, there was no significant difference in alcohol consumption between the groups at follow-up. Interviews suggested that the glasses encouraged people to think about their drinking and to discuss alcohol with other people. The design of the glasses did not appeal to everyone, and their initial impact did not always persist. CONCLUSION: Use of unit-marked glasses led to changes in people's reported use of unit-based guidelines to monitor their drinking but, in the short term, no change in consumption. Qualitative data suggested that the glasses could have an impact at the individual level (on knowledge and attitudes) and at a broader level (by prompting discussion of alcohol use).
Abstract:
A dedicated algorithm for sparse spectral representation of music sound is presented. The goal is to enable the representation of a music signal as a linear superposition of as few spectral components as possible, without affecting the quality of the reproduction. A representation of this nature is said to be sparse. In the present context, sparsity is accomplished by greedy selection of the spectral components from an overcomplete set called a dictionary. The proposed algorithm is tailored to be applied with trigonometric dictionaries. Its distinctive feature is that it avoids the need to construct the whole dictionary explicitly, by implementing the required operations via the fast Fourier transform. The achieved sparsity is theoretically equivalent to that rendered by the orthogonal matching pursuit (OMP) method. The contribution of the proposed dedicated implementation is to extend the applicability of the standard OMP algorithm by reducing its storage and computational demands. The suitability of the approach for producing sparse spectral representations is illustrated by comparison with the traditional method, in the line of the short-time Fourier transform, which involves only the corresponding orthonormal trigonometric basis.
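A simplified sketch of matrix-free OMP over a redundant trigonometric dictionary (Python with NumPy/SciPy; the 2x DCT+DST union and all names are illustrative assumptions standing in for the paper's FFT-based implementation): correlations with every atom come from fast transforms, and only the few selected atoms are ever materialised.

```python
import numpy as np
from scipy.fft import dct, idct, dst, idst

def omp_trig(signal, n_atoms):
    """Matrix-free OMP over a 2x-overcomplete DCT+DST dictionary."""
    n = len(signal)
    residual = signal.astype(float)
    picked, atoms, coef = [], [], np.zeros(0)
    for _ in range(n_atoms):
        # Correlations with all 2n atoms via fast transforms (no dictionary matrix).
        corr = np.concatenate([dct(residual, norm='ortho'),
                               dst(residual, norm='ortho')])
        k = int(np.argmax(np.abs(corr)))
        if k in picked:               # numerical guard; OMP never reselects exactly
            break
        picked.append(k)
        e = np.zeros(n)
        e[k % n] = 1.0
        atoms.append(idct(e, norm='ortho') if k < n else idst(e, norm='ortho'))
        A = np.stack(atoms, axis=1)   # only the selected atoms are built
        coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
        residual = signal - A @ coef  # orthogonal-projection residual update
    return picked, coef, residual
```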
Abstract:
Background: Human female orgasm remains a vexed question in the field, although there is credible evidence of cryptic female choice that shares many hallmarks with orgasm in other species. Our initial goal was to produce a proof of concept for allowing females to study an aspect of infertility in a home setting, specifically by aligning the study of human infertility and increased fertility with the study of other mammalian fertility. In the latter case, oxytocin-mediated sperm retention mechanisms seem to be at work in terms of ultimate function (differential sperm retention), while the proximate mechanism (rapid transport or cervical tenting) remains unresolved. Method: A repeated-measures design using an easily taught technique in a natural setting was used. Participants were a small (n=6), non-representative sample of females. A sperm simulant was introduced in combination with an orgasm-producing technique using a vibrator/home massager and other easily supplied materials. Results: The (simulated) sperm flowback was measured using a technique suitable for a home setting. There was a significant difference in simulant retention between the orgasm (M=4.08, SD=0.17) and non-orgasm (M=3.30, SD=0.22) conditions; t(5)=7.02, p=0.001, Cohen’s d=3.97, effect size r=0.89. This indicates a very large effect size. Conclusions: This method could allow females to test, in a home setting with minimal training, an aspect of sexual response that has been linked to lowered fertility. It needs to be replicated with a larger sample size.
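For reference, the reported effect size follows directly from the condition means and standard deviations via the pooled-standard-deviation form of Cohen's d:

```latex
d = \frac{M_1 - M_2}{\sqrt{(SD_1^2 + SD_2^2)/2}}
  = \frac{4.08 - 3.30}{\sqrt{(0.17^2 + 0.22^2)/2}} \approx 3.97,
```

which by the usual benchmarks (0.8 already counting as large) is a very large effect.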
Abstract:
The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on the time delay interferometry (TDI) observables, which have to be generated before any weak-signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises. This is possible because of the multiple occurrences of the same noises in the different raw data streams. Originally, these observables were generated manually, starting with LISA as a simple stationary array and then adjusting to incorporate the antenna's motion. However, none of the observables survived the flexing of the arms, in that they did not lead to cancellation with the same structure. The principal component approach, presented by Romano and Woan, is another way of handling these noises; it simplifies the data analysis by removing the need to create the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it takes advantage of the correlations that they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which appears in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that an eigendecomposition of this matrix produces two distinct sets of eigenvalues, distinguished by the absence of laser frequency noise from one set. Transforming the raw data using the corresponding eigenvectors also produces data free from the laser frequency noises. This result led to the idea that the principal components may actually be time delay interferometry observables, since they produce the same outcome, that is, data free from laser frequency noise. The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that data analysis using them is equivalent to that using the traditional observables, and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. To test the connection between the principal components and the TDI observables, a 10×10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed unequal arm lengths and stationary noises with equal variances for each noise type. The results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables, and therefore analysis using principal components should give the same results as analysis using the traditional observables. This was proven by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables.
This method fails if the eigenvalues that are free from laser frequency noises are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are the phase-locking, the arm lengths and the noise variances. Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which appear in the covariance matrix; in our toy-model investigations this did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix is destroyed, which affects any computational methods that take advantage of this structure. Separating the two sets of data for the analysis was not necessary, because the laser frequency noises are very large compared to the photodetector noises, which resulted in a significant reduction of the data containing them after the matrix inversion. In the frequency domain, the power spectral density matrices were block diagonal, which simplified the computation of the eigenvalues by allowing it to be done separately for each block. In general, the results showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and non-stationarity do not show up, because of the summation in the Fourier transform.
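The splitting of the covariance eigenvalues can be reproduced in a toy model (Python with NumPy; the channel count, mixing and variances are illustrative, not LISA values): a large noise common to several readings dominates one eigen-direction, and projecting onto the remaining eigenvectors removes it, analogous to forming TDI-like combinations.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_channels = 100000, 4
laser = 1e3 * rng.normal(size=n_samples)        # large laser noise, common to all readings
mix = rng.normal(size=n_channels)               # how it enters each raw reading
data = laser[:, None] * mix + rng.normal(size=(n_samples, n_channels))
cov = np.cov(data, rowvar=False)
vals, vecs = np.linalg.eigh(cov)                # eigenvalues in ascending order
print(vals)                                     # one huge eigenvalue; the rest O(1)
# Projecting onto the small-eigenvalue eigenvectors cancels the common laser noise.
clean = data @ vecs[:, :-1]
print(clean.std(axis=0))                        # O(1): laser-noise-free combinations
```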