212 results for Gaussian quadrature formulas
Abstract:
In this paper we propose a novel approach to multi-action recognition that performs joint segmentation and classification. This approach models each action with a Gaussian mixture trained on robust low-dimensional action features. Segmentation is achieved by performing classification on overlapping temporal windows, which are then merged to produce the final result. This approach is considerably less complicated than previous methods which use dynamic programming or computationally expensive hidden Markov models (HMMs). Initial experiments on a stitched version of the KTH dataset show that the proposed approach achieves an accuracy of 78.3%, outperforming a recent HMM-based approach which obtained 71.2%.
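The windowed classify-then-merge scheme the abstract describes can be sketched roughly as follows; all names, window sizes, and the merge rule are illustrative assumptions, not the authors' code:

```python
# Illustrative sketch: per-action GMM likelihood scoring over overlapping
# temporal windows, followed by a simple merge of adjacent labels.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_action_models(features_per_action, n_components=8):
    """Fit one GMM per action class on its low-dimensional features."""
    models = {}
    for action, feats in features_per_action.items():
        gmm = GaussianMixture(n_components=n_components, covariance_type='diag')
        models[action] = gmm.fit(feats)
    return models

def segment(sequence, models, win=64, hop=16):
    """Classify overlapping windows, then merge runs of identical labels."""
    labels = []
    for start in range(0, len(sequence) - win + 1, hop):
        window = sequence[start:start + win]
        # Average per-frame log-likelihood under each action model
        scores = {a: m.score(window) for a, m in models.items()}
        labels.append(max(scores, key=scores.get))
    # Merge consecutive windows sharing a label into (start, end, label) segments
    segments, cur = [], None
    for i, lab in enumerate(labels):
        if cur is not None and cur[2] == lab:
            cur[1] = i * hop + win
        else:
            cur = [i * hop, i * hop + win, lab]
            segments.append(cur)
    return [tuple(s) for s in segments]
```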
Abstract:
We consider online prediction problems where the loss between the prediction and the outcome is measured by the squared Euclidean distance and its generalization, the squared Mahalanobis distance. We derive the minimax solutions for the cases where the prediction and action spaces are the simplex (a setup sometimes called the Brier game) and the ℓ2 ball (a setup related to Gaussian density estimation). We show that in both cases the value of each sub-game is a quadratic function of a simple statistic of the state, with coefficients that can be efficiently computed using an explicit recurrence relation. The resulting deterministic minimax strategy and randomized maximin strategy are linear functions of the statistic.
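Schematically, the structural claim in this abstract has the following shape; the statistic s_t, the coefficients, and the strategy matrix are placeholders, since the abstract does not give the explicit recurrence:

```latex
% Schematic form only: A_t, b_t, c_t follow an explicit (unstated) recurrence.
V_t(s_t) \;=\; s_t^{\mathsf T} A_t\, s_t + b_t^{\mathsf T} s_t + c_t
\quad \text{(quadratic value of the sub-game)},
\qquad
\hat{y}_t^{\,*} \;=\; M_t\, s_t
\quad \text{(linear minimax strategy)}
```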
Abstract:
In this paper, we derive a new nonlinear two-sided space-fractional diffusion equation with variable coefficients from the fractional Fick’s law. A semi-implicit difference method (SIDM) for this equation is proposed. The stability and convergence of the SIDM are discussed. For the implementation, we develop a fast, accurate iterative method for the SIDM by decomposing the dense coefficient matrix into a combination of Toeplitz-like matrices. This fast iterative method significantly reduces the storage requirement from O(n²) to O(n) and the computational cost from O(n³) to O(n log n), where n is the number of grid points. The method retains the same accuracy as the underlying SIDM solved with Gaussian elimination. Finally, some numerical results are shown to verify the accuracy and efficiency of the new method.
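The O(n log n) cost rests on a standard fact: a Toeplitz matrix-vector product can be computed with FFTs after circulant embedding, without ever forming the dense matrix. A generic sketch of that kernel (not the paper's full SIDM solver) follows:

```python
# Toeplitz matrix-vector product via circulant embedding and FFT:
# O(n log n) time, O(n) memory. Generic numerical linear algebra.
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    """Multiply the Toeplitz matrix T (given by its first column and first
    row) by x without forming T explicitly."""
    n = len(x)
    # Embed T in a 2n x 2n circulant whose first column is
    # [first_col, 0, reversed tail of first_row].
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    # Circulant matrices are diagonalised by the FFT.
    eig = np.fft.fft(c)
    y = np.fft.ifft(eig * np.fft.fft(x, 2 * n))
    return y[:n].real

# Example: tridiagonal Toeplitz [2, -1] acting on [1, 2, 3, 4]
col = np.array([2.0, -1.0, 0.0, 0.0])
row = np.array([2.0, -1.0, 0.0, 0.0])
x = np.array([1.0, 2.0, 3.0, 4.0])
print(toeplitz_matvec(col, row, x))  # -> [0. 0. 0. 5.]
```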
Abstract:
This paper presents a combined experimental and numerical study on the behaviour of both circular and square concrete-filled steel tube (CFT) stub columns under local compression. Twelve circular and eight square CFT stub columns were tested to study their bearing capacity and the key influential parameters. A 3D finite element model was established for simulation and parametric study to investigate the structural behaviour of the stub columns. The numerical results agreed well with the experimental results. In addition, analytical formulas were proposed to calculate the load bearing capacity of CFT stub columns under local compression.
Abstract:
The efficient computation of matrix function vector products has become an important area of research in recent times, driven in particular by two important applications: the numerical solution of fractional partial differential equations and the integration of large systems of ordinary differential equations. In this work we consider a problem that combines these two applications, in the form of a numerical solution algorithm for fractional reaction-diffusion equations that, after spatial discretisation, is advanced in time using the exponential Euler method. We focus on the efficient implementation of the algorithm on Graphics Processing Units (GPUs), as we wish to make use of the increased computational power available with this hardware. We compute the matrix function vector products using the contour integration method in [N. Hale, N. Higham, and L. Trefethen. Computing A^α, log(A), and related matrix functions by contour integrals. SIAM J. Numer. Anal., 46(5):2505–2523, 2008]. Multiple levels of preconditioning are applied to reduce the GPU memory footprint and to further accelerate convergence. We also derive an error bound for the convergence of the contour integral method that allows us to pre-determine the appropriate number of quadrature points. Results are presented that demonstrate the effectiveness of the method for large two-dimensional problems, showing a speedup of more than an order of magnitude compared to a CPU-only implementation.
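A heavily simplified sketch of the contour-integral idea follows: plain trapezoidal quadrature of the Cauchy integral on a circle enclosing the spectrum. The cited Hale-Higham-Trefethen method uses a conformally mapped contour that converges far faster, and the paper adds preconditioning and GPU acceleration; none of that is reproduced here.

```python
# f(A) @ b ~ (1/2*pi*i) * contour integral of f(z) (zI - A)^{-1} b dz,
# discretised by the trapezoidal rule on a circle around spec(A).
import numpy as np

def matfun_times_vector(A, b, f, center, radius, n_quad=64):
    n = A.shape[0]
    result = np.zeros(n, dtype=complex)
    for k in range(n_quad):
        theta = 2.0 * np.pi * k / n_quad
        z = center + radius * np.exp(1j * theta)
        # One resolvent solve per quadrature point: the dominant cost,
        # which the paper accelerates with multiple levels of preconditioning
        x = np.linalg.solve(z * np.eye(n) - A, b.astype(complex))
        result += f(z) * radius * np.exp(1j * theta) * x
    return (result / n_quad).real

# Example: A^{1/2} b for a small SPD matrix; the contour stays in the
# right half-plane so the square-root branch cut is avoided.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
approx = matfun_times_vector(A, b, np.sqrt, center=3.5, radius=2.0)
```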
Abstract:
In an estuary, mixing and dispersion result from a combination of large-scale advection and small-scale turbulence, which are complex to estimate. Predictions of scalar transport and mixing are often inferred and rarely accurate, due to inadequate understanding of the contributions of these different scales to estuarine recirculation. A multi-device field study was conducted in a small sub-tropical estuary under neap tide conditions with near-zero freshwater discharge for about 48 hours. During the study, acoustic Doppler velocimeters (ADV) were sampled at high frequency (50 Hz), while an acoustic Doppler current profiler (ADCP) and global positioning system (GPS)-tracked drifters were used to obtain lower-frequency spatial distributions of the flow parameters within the estuary. The velocity measurements were complemented with continuous measurements of water depth, conductivity, temperature and other physicochemical parameters. Thorough quality control was carried out by applying relevant error-removal filters to each data set to remove spurious data. A triple decomposition (TD) technique was introduced to assess the contributions of tides, resonance and ‘true’ turbulence in the flow field. The time series of mean flow measurements for both the ADCP and the drifters were consistent with those of the mean ADV data when sampled within a similar spatial domain. The tidal-scale fluctuations of velocity and water level were used to examine the response of the estuary to tidal inertial currents. The channel exhibited a mixed-type wave with a typical phase lag between 0.035π and 0.116π. A striking feature of the ADV velocity data was the slow fluctuations, which exhibited large amplitudes of up to 50% of the tidal amplitude, particularly in slack waters. Such slow fluctuations were simultaneously observed in a number of physicochemical properties of the channel. The ensuing turbulence field showed some degree of anisotropy. For all ADV units, the horizontal turbulence ratio ranged between 0.4 and 0.9 and decreased towards the bed, while the vertical turbulence ratio was on average unity at z = 0.32 m and approximately 0.5 for the upper ADV (z = 0.55 m). The statistical analysis suggested that the ebb-phase turbulence field was dominated by eddies that evolved from ejection-type processes, while that of the flood phase contained mixed eddies with a significant amount related to sweep-type processes. Over 65% of the skewness values fell within the range expected of a finite Gaussian distribution, and the bulk of the excess kurtosis values (over 70%) fell within the range of -0.5 to +2. The TD technique described herein allowed the characterisation of a broader temporal scale of fluctuations of the high-frequency data sampled within the duration of a few tidal cycles. The study characterises the ranges of fluctuation required for accurate modelling of shallow-water dispersion and mixing in a sub-tropical estuary.
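As a rough illustration of a triple decomposition of this kind, the following sketch splits a velocity record into tidal, slow-fluctuation, and turbulent parts with moving-average filters; the cutoff windows are placeholders, not the study's values, and the study's actual TD technique is not reproduced here:

```python
# Minimal triple decomposition sketch: u = tide + slow + turbulence.
import numpy as np

def moving_average(u, window):
    kernel = np.ones(window) / window
    return np.convolve(u, kernel, mode='same')

def triple_decompose(u, fs, tidal_cutoff_s=3600.0, slow_cutoff_s=60.0):
    """u: velocity series sampled at fs Hz; cutoff windows must be shorter
    than the record. Returns (tide, slow, turb) components."""
    tide = moving_average(u, int(tidal_cutoff_s * fs))        # tidal-scale trend
    slow = moving_average(u - tide, int(slow_cutoff_s * fs))  # slow fluctuations
    turb = u - tide - slow                                    # residual 'true' turbulence
    return tide, slow, turb
```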
Abstract:
This paper proposes the addition of a weighted median Fisher discriminator (WMFD) projection prior to length-normalised Gaussian probabilistic linear discriminant analysis (GPLDA) modelling in order to compensate for additional session variation. In limited microphone data conditions, a linear-weighted approach is introduced to increase the influence of the microphone speech dataset. The linear-weighted WMFD-projected GPLDA system shows improvements in EER and DCF values over the pooled LDA- and WMFD-projected GPLDA systems in the interview-interview condition, as WMFD projection extracts more speaker-discriminant information from a limited number of sessions per speaker, and the linear-weighted GPLDA approach estimates reliable model parameters with limited microphone data.
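The front end this abstract assumes, a discriminant projection followed by length normalisation before PLDA scoring, can be sketched as follows; generic LDA stands in for the WMFD projection, which is not reproduced here:

```python
# Sketch of the projection + length-normalisation front end (generic LDA
# as a stand-in for WMFD; dimensions are placeholders).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def project_and_length_normalise(ivectors, speaker_labels, dim=200):
    # n_components must not exceed min(n_speakers - 1, feature_dim)
    lda = LinearDiscriminantAnalysis(n_components=dim)
    projected = lda.fit_transform(ivectors, speaker_labels)
    # Length normalisation maps each vector onto the unit sphere so the
    # Gaussian PLDA assumptions fit the data better
    norms = np.linalg.norm(projected, axis=1, keepdims=True)
    return projected / norms
```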
Abstract:
We propose a novel technique for conducting robust voice activity detection (VAD) in high-noise recordings. We use Gaussian mixture modeling (GMM) to train two generic models: speech and non-speech. We then score smaller segments of a given (unseen) recording against each of these GMMs to obtain two respective likelihood scores for each segment. These scores are used to compute a dissimilarity measure between pairs of segments and to carry out complete-linkage clustering of the segments into speech and non-speech clusters. We compare the accuracy of our method against state-of-the-art and standardised VAD techniques to demonstrate an absolute improvement of 15% in half-total error rate (HTER) over the best-performing baseline system across the QUT-NOISE-TIMIT database. We then apply our approach to the Audio-Visual Database of American English (AVDBAE) to demonstrate the performance of our algorithm using visual, audio-visual or a proposed fusion of these features.
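The scoring-and-clustering pipeline can be sketched as follows; the GMM sizes, the Euclidean dissimilarity in score space, and the cluster-assignment rule are assumptions rather than the paper's exact choices:

```python
# Sketch: two generic GMMs, per-segment likelihood scores, then
# complete-linkage clustering into speech / non-speech.
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def gmm_cluster_vad(speech_feats, nonspeech_feats, segments, n_components=32):
    """segments: list of 2D frame-feature arrays. Returns a boolean mask,
    True where a segment is labelled speech."""
    speech_gmm = GaussianMixture(n_components=n_components).fit(speech_feats)
    nonspeech_gmm = GaussianMixture(n_components=n_components).fit(nonspeech_feats)
    # Two average log-likelihoods per segment form the clustering space
    scores = np.array([[speech_gmm.score(s), nonspeech_gmm.score(s)]
                       for s in segments])
    # Complete-linkage clustering into exactly two clusters
    labels = fcluster(linkage(pdist(scores), method='complete'),
                      t=2, criterion='maxclust')
    # Call the cluster with the higher mean speech-model score 'speech'
    speech_cluster = max((1, 2), key=lambda c: scores[labels == c, 0].mean())
    return labels == speech_cluster
```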
Abstract:
We extended genetic linkage analysis - an analysis widely used in quantitative genetics - to 3D images to analyze single-gene effects on brain fiber architecture. We collected 4 Tesla diffusion tensor images (DTI) and genotype data from 258 healthy adult twins and their non-twin siblings. After high-dimensional fluid registration, at each voxel we estimated the genetic linkage between the single nucleotide polymorphism (SNP) Val66Met (dbSNP number rs6265) of the BDNF (brain-derived neurotrophic factor) gene and fractional anisotropy (FA) derived from each subject's DTI scan, by fitting structural equation models (SEM) from quantitative genetics. We also examined how image filtering affects the effect sizes for genetic linkage by examining how the overall significance of voxelwise effects varied with respect to the full width at half maximum (FWHM) of the Gaussian smoothing applied to the FA images. Raw FA maps with no smoothing yielded the greatest sensitivity to detect gene effects when corrected for multiple comparisons using the false discovery rate (FDR) procedure. The BDNF polymorphism significantly contributed to the variation in FA in the posterior cingulate gyrus, where it accounted for around 90-95% of the total variance in FA. Our study generated the first maps to visualize the effect of the BDNF gene on brain fiber integrity, suggesting that common genetic variants may strongly determine white matter integrity.
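The FDR correction referred to here is presumably the standard Benjamini-Hochberg procedure; a minimal sketch over a map of voxelwise p-values:

```python
# Benjamini-Hochberg FDR threshold for a voxelwise p-value map.
import numpy as np

def fdr_threshold(pvals, q=0.05):
    """Return the largest p-value threshold controlling the false discovery
    rate at level q, or None if no voxel survives."""
    p = np.sort(np.ravel(pvals))
    m = p.size
    # Largest k with p_(k) <= q * k / m defines the cutoff
    below = p <= q * np.arange(1, m + 1) / m
    return p[below].max() if below.any() else None
```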
Abstract:
Diffusion weighted magnetic resonance (MR) imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitized gradients along a minimum of 6 directions, second-order tensors can be computed to model dominant diffusion processes. However, conventional DTI is not sufficient to resolve crossing fiber tracts. Recently, a number of high-angular resolution schemes with greater than 6 gradient directions have been employed to address this issue. In this paper, we introduce the Tensor Distribution Function (TDF), a probability function defined on the space of symmetric positive definite matrices. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF, and the TDF that optimally describes the observed data is solved for using the calculus of variations. Once this optimal TDF is determined, the diffusion orientation distribution function (ODF) can easily be computed by analytic integration of the resulting displacement probability function.
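Schematically, the mixture model underlying the TDF can be written as below; the notation is illustrative and may differ from the paper's, with b the diffusion weighting, q the gradient direction, and P(D) the TDF over symmetric positive definite tensors D:

```latex
% Diffusion signal as an ensemble of Gaussian diffusion processes
% weighted by the tensor distribution function P(D).
S(\mathbf{q}) \;=\; S_0 \int_{D \succ 0} P(D)\,
    \exp\!\left(-b\,\mathbf{q}^{\mathsf T} D\,\mathbf{q}\right)\, dD
```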
Abstract:
Background: Magnetic resonance diffusion tensor imaging (DTI) shows promise in the early detection of microstructural pathophysiological changes in the brain. Objectives: To measure microstructural differences in the brains of participants with amnestic mild cognitive impairment (MCI) compared with an age-matched control group using an optimised DTI technique with fully automated image analysis tools and to investigate the correlation between diffusivity measurements and neuropsychological performance scores across groups. Methods: 34 participants (17 participants with MCI, 17 healthy elderly adults) underwent magnetic resonance imaging (MRI)-based DTI. To control for the effects of anatomical variation, diffusion images of all participants were registered to standard anatomical space. Significant statistical differences in diffusivity measurements between the two groups were determined on a pixel-by-pixel basis using Gaussian random field theory. Results: Significantly raised mean diffusivity measurements (p<0.001) were observed in the left and right entorhinal cortices (BA28), posterior occipital-parietal cortex (BA18 and BA19), right parietal supramarginal gyrus (BA40) and right frontal precentral gyri (BA4 and BA6) in participants with MCI. With respect to fractional anisotropy, participants with MCI had significantly reduced measurements (p<0.001) in the limbic parahippocampal subgyral white matter, right thalamus and left posterior cingulate. Pearson's correlation coefficients calculated across all participants showed significant correlations between neuropsychological assessment scores and regional measurements of mean diffusivity and fractional anisotropy. Conclusions: DTI-based diffusivity measures may offer a sensitive method of detecting subtle microstructural brain changes associated with preclinical Alzheimer's disease.
Abstract:
Fusing data from multiple sensing modalities, e.g. laser and radar, is a promising approach to achieve resilient perception in challenging environmental conditions. However, this may lead to ‘catastrophic fusion’ in the presence of inconsistent data, i.e. when the sensors do not detect the same target due to distinct attenuation properties. It is often difficult to discriminate consistent from inconsistent data across sensing modalities using local spatial information alone. In this paper we present a novel consistency test based on the log marginal likelihood of a Gaussian process model that evaluates data from range sensors in a relative manner. A new data point is deemed to be consistent if the model statistically improves as a result of its fusion. This approach avoids the need for absolute spatial distance threshold parameters as required by previous work. We report results from object reconstruction with both synthetic and experimental data that demonstrate an improvement in reconstruction quality, particularly in cases where data points are inconsistent yet spatially proximal.
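A minimal sketch of such a test, assuming a standard GP regressor (scikit-learn) as the surface model; the kernel choice and the per-point normalisation used to compare models of different sizes are our assumptions, not the paper's definitions:

```python
# Accept a new range measurement only if fusing it improves the GP's
# (per-point) log marginal likelihood.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def is_consistent(X, y, x_new, y_new):
    kernel = 1.0 * RBF() + WhiteKernel()
    gp_before = GaussianProcessRegressor(kernel=kernel).fit(X, y)
    X_aug = np.vstack([X, x_new])
    y_aug = np.append(y, y_new)
    gp_after = GaussianProcessRegressor(kernel=kernel).fit(X_aug, y_aug)
    # Normalise by the number of points so the two likelihoods are comparable
    return (gp_after.log_marginal_likelihood_value_ / len(y_aug)
            >= gp_before.log_marginal_likelihood_value_ / len(y))
```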
Abstract:
Diffusion weighted magnetic resonance imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitized gradients along a minimum of six directions, second-order tensors (represented by three-by-three positive definite matrices) can be computed to model dominant diffusion processes. However, conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g., crossing fiber tracts. Recently, a number of high-angular resolution schemes with more than six gradient directions have been employed to address this issue. In this article, we introduce the tensor distribution function (TDF), a probability function defined on the space of symmetric positive definite matrices. Using the calculus of variations, we solve for the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the orientation distribution function (ODF) can easily be computed by analytic integration of the resulting displacement probability function. Moreover, a tensor orientation distribution function (TOD) may also be derived from the TDF, allowing for the estimation of principal fiber directions and their corresponding eigenvalues.
Abstract:
Back in 1995, Peter Drahos wrote a futuristic article called ‘Information feudalism in the information society’. It took the form of an imagined history of the information society in the year 2015. Drahos provided a pessimistic vision of the future, in which the information age was ruled by the private owners of intellectual property. He ended with the bleak, Hobbesian image: "It is unimaginable that the information society of the 21st century could be like this. And yet if abstract objects fall out of the intellectual commons and are enclosed by private owners, private, arbitrary, unchecked global power will become a part of life in the information society. A world in which seed rights, algorithms, DNA, and chemical formulas are owned by a few, a world in which information flows can be coordinated by information-media barons, might indeed be information feudalism (p. 222)." This science fiction assumed that a small number of states would dominate the emerging international regulatory order set up under the World Trade Organization. In Information Feudalism: Who Owns the Knowledge Economy?, Peter Drahos and his collaborator John Braithwaite reprise and expand upon the themes first developed in that article. The authors contend: "Information feudalism is a regime of property rights that is not economically efficient, and does not get the balance right between rewarding innovation and diffusing it. Like feudalism, it rewards guilds instead of inventive individual citizens. It makes democratic citizens trespassers on knowledge that should be the common heritage of humankind, their educational birthright. Ironically, information feudalism, by dismantling the publicness of knowledge, will eventually rob the knowledge economy of much of its productivity (p. 219)." Drahos and Braithwaite emphasise that the title Information Feudalism is not intended to be taken at face value by literal-minded readers, and crudely equated with medieval feudalism. Rather, the title serves as a suggestive metaphor. It designates the transfer of knowledge from the intellectual commons to private corporations under the regime of intellectual property.
Abstract:
Multinational financial institutions (MNFIs) play a significant role in financing the activities of their clients in developing nations. Consistent with the ‘follow-the-customer’ phenomenon which explains financial institution expansion, these entities are increasingly profiting from activities associated with this growing market. However, not only are MNFIs persistent users of tax havens, but they also, more than other industries, have the opportunity to reduce tax through transfer pricing measures. This paper establishes a case for an industry-specific adoption of unitary taxation with formulary apportionment as a viable alternative to the current regime. In doing so, it considers the practicalities of implementing this by examining both definitional issues and possible formulas for MNFIs. This paper argues that, while there would be implementation difficulties to overcome, the current domestic models of formulary apportionment provide important guidance as to how the unitary business and business activities of MNFIs should be defined, as well as the factors that should be included in an allocation formula, and the appropriate weighting. This paper concludes that unitary taxation with formulary apportionment is a viable industry-specific alternative for MNFIs.
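For readers unfamiliar with the mechanics, formulary apportionment allocates a group's global profit across jurisdictions by a weighted formula over activity factors; a generic three-factor form is shown below, where the factors and weights appropriate for MNFIs are precisely what the paper examines:

```latex
% Generic formulary apportionment: jurisdiction i's share of the
% group's global profit, with weights summing to one.
\text{profit}_i \;=\; \Pi_{\text{global}} \times
\left(
  w_1 \frac{\text{assets}_i}{\text{assets}_{\text{total}}} +
  w_2 \frac{\text{payroll}_i}{\text{payroll}_{\text{total}}} +
  w_3 \frac{\text{sales}_i}{\text{sales}_{\text{total}}}
\right),
\qquad w_1 + w_2 + w_3 = 1
```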