171 results for Sparsity
Abstract:
This article reports on the influence of magnetization damping on dynamic hysteresis loops in single-domain particles with uniaxial anisotropy. The approach is based on the Néel-Brown theory and on the hierarchy of differential recurrence relations that follows from averaging over the realizations of the stochastic Landau-Lifshitz equation. A new method of solution is proposed, in which the resulting system of differential equations is solved directly using optimized algorithms that exploit its sparsity. All parameters involved in uniaxial systems are treated in detail, with particular attention given to the frequency dependence. It is shown that in the ferromagnetic resonance region, novel phenomena are observed even for moderately low values of the damping. The hysteresis loops assume remarkably unusual shapes, accompanied by a pronounced reduction of their heights. It is also demonstrated that these features persist for randomly oriented ensembles and, moreover, are approximately independent of temperature and particle size. (C) 2012 American Institute of Physics. [doi:10.1063/1.3684629]
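The numerical core of such an approach can be pictured with a toy model: a differential recurrence hierarchy couples each averaged moment only to its neighbours, so the system matrix is banded, and a sparse representation plus a sparse Jacobian is all a stiff ODE solver needs. The sketch below uses illustrative couplings and truncation order, not the paper's actual magnetization equations.

```python
# A minimal sketch (not the paper's equations): a hierarchy of recurrence
# relations d c_n/dt = a_n c_{n-1} + b_n c_n + d_n c_{n+1} is a tridiagonal
# linear ODE system, so storing it sparsely and passing the solver a sparse
# Jacobian exploits the sparsity directly.
import numpy as np
from scipy import sparse
from scipy.integrate import solve_ivp

N = 200                               # truncation order of the hierarchy
n = np.arange(N)
a = 0.5 * np.ones(N - 1)              # sub-diagonal couplings (illustrative)
b = -(1.0 + 0.01 * n)                 # diagonal relaxation terms (illustrative)
d = 0.5 * np.ones(N - 1)              # super-diagonal couplings (illustrative)
A = sparse.diags([a, b, d], offsets=[-1, 0, 1], format="csc")

def rhs(t, c):
    return A @ c                      # sparse matrix-vector product, O(N)

c0 = np.zeros(N); c0[0] = 1.0         # initial condition on the first moment
sol = solve_ivp(rhs, (0.0, 10.0), c0, method="BDF", jac=A)  # sparse Jacobian
print(sol.y[:3, -1])                  # first few averaged moments at t = 10
```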
Abstract:
We consider a recently proposed finite-element space that consists of piecewise affine functions with discontinuities across a smooth given interface Γ (a curve in two dimensions, a surface in three dimensions). Contrary to existing extended finite-element methodologies, the space is a variant of the standard conforming $P_1$ space that can be implemented element by element. Further, it neither introduces new unknowns nor deteriorates the sparsity structure. It is proved that, for $u$ arbitrary in $H^2(\Omega_1 \cup \Omega_2)$, the interpolant $\mathcal{I}_h u$ defined by this new space satisfies $\|u - \mathcal{I}_h u\|_{L^2(\Omega)} \le C\,h^2\,\|u\|_{H^2(\Omega_1 \cup \Omega_2)}$, where $h$ is the mesh size, $\Omega$ is the domain, $\Omega_1$ and $\Omega_2$ are the subdomains into which Γ splits it, $C$ is a constant independent of $h$ and $u$, and standard notation has been adopted for the function spaces. This result proves the good approximation properties of the finite-element space as compared to any space consisting of functions that are continuous across Γ, which would yield an error in the $L^2(\Omega)$-norm of order $h^{1/2}$. These properties make this space especially attractive for approximating the pressure in problems with surface tension or other immersed interfaces that lead to discontinuities in the pressure field. Furthermore, the result still holds for interfaces that end within the domain, as happens for example in cracked domains.
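The $h^{1/2}$ rate quoted for continuous spaces follows from a short argument worth recording; a sketch, assuming a quasi-uniform mesh and an $O(1)$ jump $[u]_\Gamma$ of $u$ across Γ:

```latex
% Sketch of the standard lower-bound argument. Assumptions: u jumps by an
% O(1) amount across \Gamma, v_h is continuous and piecewise affine on a
% quasi-uniform mesh. In the strip S_h of elements cut by \Gamma (width
% O(h), so measure |S_h| = O(h)), a continuous v_h must pass through the
% jump, hence the pointwise error is O(1) on a set of measure O(h):
\[
  \|u - v_h\|_{L^2(\Omega)}^2
  \;\ge\; \int_{S_h} |u - v_h|^2 \, dx
  \;\gtrsim\; [u]_\Gamma^2 \, |S_h|
  \;=\; O(h),
\]
% so \|u - v_h\|_{L^2(\Omega)} \gtrsim h^{1/2} for every continuous v_h,
% while the discontinuous space above recovers the optimal O(h^2) rate.
```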
Abstract:
The classical optimal (in the Frobenius sense) diagonal preconditioner for large sparse linear systems Ax = b is generalized and improved. The new proposed approximate-inverse preconditioner N is based on the minimization of the Frobenius norm of the residual matrix AM − I, where M runs over a certain linear subspace of n × n real matrices defined by a prescribed sparsity pattern. The number of nonzero entries of the n × n preconditioning matrix N is less than or equal to 2n, and n of them are selected at the optimal positions in each of the n columns of N. All theoretical results are justified in detail…
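The column-wise structure of this minimization is easy to make concrete: since $\|AM - I\|_F^2 = \sum_j \|Am_j - e_j\|_2^2$, each column of the preconditioner is an independent small least-squares problem over its prescribed nonzero positions. The sketch below uses an illustrative fixed pattern of two entries per column, not the paper's optimal-position selection rule.

```python
# A minimal sketch of Frobenius-norm approximate-inverse construction.
# min_M ||A M - I||_F^2 decouples column-wise: for column j, minimize
# ||A[:, S_j] m - e_j||_2 over the allowed nonzero positions S_j.
import numpy as np

def spai(A, pattern):
    """pattern[j] = list of row indices allowed to be nonzero in column j."""
    n = A.shape[0]
    M, I = np.zeros((n, n)), np.eye(n)
    for j in range(n):
        S = pattern[j]
        m, *_ = np.linalg.lstsq(A[:, S], I[:, j], rcond=None)
        M[S, j] = m                    # scatter the small solution into column j
    return M

rng = np.random.default_rng(0)
n = 8
A = np.eye(n) + 0.3 * rng.standard_normal((n, n))
pattern = [[j, (j + 1) % n] for j in range(n)]   # 2n nonzeros in total
M = spai(A, pattern)
print(np.linalg.norm(A @ M - np.eye(n), "fro"))  # residual of the preconditioned system
```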
Abstract:
Machine learning comprises a series of techniques for the automatic extraction of meaningful information from large collections of noisy data. In many real-world applications, data is naturally represented in structured form. Since traditional methods in machine learning deal with vectorial information, they require an a priori preprocessing step. Among the learning techniques for dealing with structured data, kernel methods are recognized as having a strong theoretical background and as being effective approaches. They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain, the kernel function. Designing fast and good kernel functions is a challenging problem. In the case of tree-structured data, two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures of the dataset are completely dissimilar to one another. In those cases the classifier has too little information to make correct predictions on unseen data; in fact, it tends to produce a discriminating function that behaves like the nearest-neighbour rule. Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset tree kernels, when they are applied to datasets with node labels belonging to a large domain. A second drawback of tree kernels is the time complexity required in both the learning and classification phases. Such complexity can sometimes prevent the kernels from being applied in scenarios involving large amounts of data. This thesis proposes three contributions towards resolving these issues. The first contribution aims at creating kernel functions that adapt to the statistical properties of the dataset, thus reducing sparsity with respect to traditional tree kernel functions. Specifically, we propose to encode the input trees with an algorithm able to project the data onto a lower-dimensional space, with the property that similar structures are mapped to similar representations. By building kernel functions on the lower-dimensional representation, we are able to perform inexact matchings between different inputs in the original space. The second contribution is a novel kernel function based on the convolution kernel framework. A convolution kernel measures the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure. The kernel function we propose is, instead, especially focused on this aspect. The third contribution is devoted to reducing the computational burden of calculating a kernel function between a tree and a forest of trees, which is a typical operation in the classification phase and, for some algorithms, also in the learning phase. We propose a general methodology applicable to convolution kernels, and we show an instantiation of our technique for kernels such as the subtree and subset tree kernels. In those cases, Directed Acyclic Graphs can be used to compactly represent shared substructures across different trees, reducing the computational burden and storage requirements.
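For concreteness, the following is a minimal sketch of the classic subset-tree kernel recursion of Collins and Duffy, the kind of convolution tree kernel whose sparsity and cost the thesis targets. The tuple-based tree encoding and the simplified treatment of leaves are illustrative choices, not the thesis's implementation, and the naive O(|T1|·|T2|) evaluation omits the DAG-sharing speedups discussed above.

```python
# A naive subset-tree kernel sketch. Trees are (label, child, child, ...)
# tuples. C(n1, n2) counts (with decay lam) the tree fragments rooted at
# the two nodes that match; the kernel sums C over all node pairs.
from functools import lru_cache

def nodes(t):
    yield t
    for c in t[1:]:
        yield from nodes(c)

def sst_kernel(t1, t2, lam=0.4):
    @lru_cache(maxsize=None)
    def C(n1, n2):
        # zero unless the nodes share label and child labels (same "production")
        if n1[0] != n2[0] or [c[0] for c in n1[1:]] != [c[0] for c in n2[1:]]:
            return 0.0
        if len(n1) == 1:                       # matching leaves (simplified)
            return lam
        prod = lam
        for c1, c2 in zip(n1[1:], n2[1:]):     # same arity, by the check above
            prod *= 1.0 + C(c1, c2)
        return prod
    return sum(C(a, b) for a in nodes(t1) for b in nodes(t2))

t1 = ("S", ("NP", ("D",), ("N",)), ("VP", ("V",)))
t2 = ("S", ("NP", ("D",), ("N",)), ("VP", ("V",), ("NP", ("D",), ("N",))))
print(sst_kernel(t1, t2))
```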
Abstract:
The weathering of Fe-bearing minerals under extraterrestrial conditions was investigated by Mössbauer (MB) spectroscopy to gain insights into the role of water on the planet Mars. The NASA Mars Exploration Rovers Spirit and Opportunity each carry a miniaturized Mössbauer spectrometer MIMOS II for the in situ investigation of Martian soils and rocks as part of their payload. The MER flight instruments had to be modified in order to work over the Martian diurnal temperature range (180 K – 290 K) and within the unique electronic environment of the rovers. The modification required special calibration procedures. The integration time necessary to obtain a good-quality Mössbauer spectrum with the MIMOS II flight instruments was reduced by 30% through the design of a new collimator. The in situ investigation of rocks along the rover Spirit's traverse in Gusev crater revealed weakly altered olivine basalt on the plains and pervasively altered basalt in the Columbia Hills. Correlation plots of primary Fe-bearing minerals identified by MB spectroscopy, such as olivine, versus secondary Fe-bearing phases, such as nanophase Fe oxides, showed that olivine is the mineral primarily involved in weathering reactions. This argues for a reduced availability of water. The identification of the Fe-oxyhydroxide goethite in the Columbia Hills is unequivocal evidence for aqueous weathering processes there. Experiments in which mineral powders were exposed to components of the Martian atmosphere showed that interaction with the atmosphere alone, in the absence of liquid water, is sufficient to oxidize Martian surface materials. The fine-grained dust suspended in the Martian atmosphere may have been altered solely by gas-solid reactions. Fresh and altered specimens of Martian meteorites were investigated with MIMOS II. The study of Martian meteorites in the lab helped to identify Bounce Rock as the first rock on Mars similar in composition to the basaltic shergottites, a subgroup of the Martian meteorites. The field of astrobiology includes the study of the origin, evolution and distribution of life in the universe. Water is a prerequisite for life. The MER Mössbauer spectrometers identified aqueous minerals such as jarosite and goethite. The identification of jarosite was crucial for evaluating the habitability of Opportunity's landing site at Meridiani Planum during the formation of the sedimentary outcrop rocks, because jarosite puts strong constraints on pH levels. The identification of olivine in rocks and soils on the Gusev crater plains provides evidence for the sparsity of water under current conditions on Mars. Ratios of Fe2+/Fe3+ were obtained with Mössbauer spectroscopy from basaltic glass samples that had been exposed at a deep-sea hydrothermal vent; the ratios were used as a measure of the potential energy available to a microbial community. Samples from Mars-analogue field sites on Earth exhibiting morphological biosignatures were also investigated.
Abstract:
This thesis presents several data processing and compression techniques capable of addressing the strict requirements of wireless sensor networks. After a general overview of sensor networks, the energy problem is introduced, classifying the different energy-reduction approaches according to the subsystem they try to optimize. To manage the complexity brought by these techniques, a quick overview of the most common middlewares for WSNs is given, describing in detail SPINE2, a framework for data processing in the node environment. The focus then shifts to in-network aggregation techniques, used to reduce the data sent by the network nodes and thereby prolong the network lifetime as much as possible. Among the several techniques, the most promising approach is Compressive Sensing (CS). To investigate this technique, a practical implementation of the algorithm is compared against a simpler aggregation scheme, deriving a mixed algorithm able to successfully reduce the power consumption. The analysis then moves from compression implemented on single nodes to CS for signal ensembles, exploiting the correlations among sensors and nodes to improve compression and reconstruction quality. The two main techniques for signal ensembles, Distributed CS (DCS) and Kronecker CS (KCS), are introduced and compared against a common set of data gathered from real deployments. The best trade-off between reconstruction quality and power consumption is then investigated. The use of CS is also addressed when the signal of interest is sampled at a sub-Nyquist rate, evaluating the reconstruction performance. Finally, group-sparsity CS (GS-CS) is compared to another well-known technique for the reconstruction of signals from a highly sub-sampled version. These two frameworks are again compared against a real data set, and an insightful analysis of the trade-off between reconstruction quality and lifetime is given.
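As a concrete point of reference for the compression schemes compared above, the following sketch recovers a sparse signal from sub-Nyquist random measurements with a textbook Orthogonal Matching Pursuit; it is illustrative only and does not reproduce the thesis's DCS, KCS, or GS-CS pipelines.

```python
# A minimal compressive-sensing sketch: recover a k-sparse signal x from
# m << n random measurements y = Phi @ x via Orthogonal Matching Pursuit.
import numpy as np

def omp(Phi, y, k):
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))     # best-matching atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef            # re-fit on the support
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
n, m, k = 256, 64, 5
x = np.zeros(n); x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)           # random sensing matrix
x_hat = omp(Phi, Phi @ x, k)
print(np.linalg.norm(x - x_hat))                         # ~0 with high probability
```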
Abstract:
Computing the weighted geometric mean of large sparse matrices is an operation that tends to become rapidly intractable as the size of the matrices involved grows. However, if we are not interested in computing the matrix function itself, but just its product with a vector, the problem becomes simpler, and there is a chance to solve it even when the matrix mean itself would be impossible to compute. Our interest is motivated by the fact that this calculation has practical applications related to the preconditioning of certain operators arising in the domain decomposition of elliptic problems. In this thesis, we explore how such a computation can be performed efficiently. First, we exploit the properties of the weighted geometric mean and find several equivalent ways to express it through real powers of a matrix. Hence, we focus our attention on matrix powers and examine how well-known techniques can be adapted to the solution of the problem at hand. In particular, we consider two broad families of approaches for the computation of f(A) v, namely quadrature formulae and Krylov subspace methods, and generalize them to the pencil case f(A\B) v. Finally, we provide an extensive experimental evaluation of the proposed algorithms and assess how convergence speed and execution time are influenced by characteristics of the input matrices. Our results suggest that a few elements have some bearing on the performance and that, although there is no best choice in general, knowing the conditioning and the sparsity of the arguments beforehand can considerably help in choosing the best strategy to tackle the problem.
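One identity behind this reduction can be stated compactly: for symmetric positive definite A and B, the weighted geometric mean satisfies A #_t B = A (A⁻¹B)^t, so its action on a vector is the action of a real matrix power. The dense sketch below (small illustrative matrices, with SciPy's fractional_matrix_power standing in for the quadrature and Krylov methods developed in the thesis) shows the reference computation the scalable algorithms approximate.

```python
# A dense reference sketch of (A #_t B) v = A (A^{-1} B)^t v for SPD A, B.
# Only viable for small matrices; large sparse problems require the
# quadrature or Krylov approaches for f(A\B) v instead.
import numpy as np
from scipy.linalg import fractional_matrix_power, solve

def geometric_mean_times_vector(A, B, v, t=0.5):
    X = solve(A, B)                          # X = A \ B, the pencil argument
    return A @ (fractional_matrix_power(X, t) @ v)

rng = np.random.default_rng(2)
n = 6
Q = rng.standard_normal((n, n)); A = Q @ Q.T + n * np.eye(n)   # SPD test matrices
Q = rng.standard_normal((n, n)); B = Q @ Q.T + n * np.eye(n)
v = rng.standard_normal(n)
print(geometric_mean_times_vector(A, B, v))
```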
Abstract:
Spectrum sensing is currently one of the most challenging design problems in cognitive radio. A robust spectrum sensing technique is important for implementing practical dynamic spectrum access in noisy environments with interference uncertainty. In addition, it is desirable to minimize the sensing time while meeting the stringent cognitive radio application requirements. To cope with this challenge, cyclic spectrum sensing techniques have been proposed. However, such techniques require very high sampling rates in the wideband regime and are thus costly in hardware implementation and power consumption. In this thesis, the concept of compressed sensing is applied to circumvent this problem by utilizing the sparsity of the two-dimensional cyclic spectrum. Compressive sampling is used to reduce the sampling rate, and a recovery method is developed for reconstructing the sparse cyclic spectrum from the compressed samples. The reconstruction solution exploits the sparsity structure in the two-dimensional cyclic-spectrum domain, which differs from conventional compressed sensing techniques for vector-form sparse signals. The entire wideband cyclic spectrum is reconstructed from sub-Nyquist-rate samples for the simultaneous detection of multiple signal sources. After cyclic spectrum recovery, two methods are proposed for making spectral occupancy decisions from the recovered cyclic spectrum: a band-by-band multi-cycle detector that works for all modulation schemes, and a fast and simple thresholding method that works for Binary Phase Shift Keying (BPSK) signals only. In addition, a method for recovering the power spectrum of stationary signals is developed as a special case. Simulation results demonstrate that the proposed spectrum sensing algorithms can significantly reduce the sampling rate without sacrificing performance. The robustness of the algorithms to the noise uncertainty of the wireless channel is also shown.
Abstract:
This paper introduces an area- and power-efficient approach for compressive recording of cortical signals used in an implantable system prior to transmission. Recent research on compressive sensing has shown promising results for sub-Nyquist sampling of sparse biological signals. Still, any large-scale implementation of this technique faces critical issues caused by the increased hardware intensity. The cost of implementing compressive sensing in a multichannel system in terms of area usage can be significantly higher than a conventional data acquisition system without compression. To tackle this issue, a new multichannel compressive sensing scheme which exploits the spatial sparsity of the signals recorded from the electrodes of the sensor array is proposed. The analysis shows that using this method, the power efficiency is preserved to a great extent while the area overhead is significantly reduced, resulting in an improved power-area product. The proposed circuit architecture is implemented in a UMC 0.18 µm CMOS technology. Extensive performance analysis and design optimization have been done, resulting in a low-noise, compact and power-efficient implementation. The results of simulations and subsequent reconstructions show the possibility of recovering fourfold compressed intracranial EEG signals with an SNR as high as 21.8 dB, while consuming 10.5 µW of power within an effective area of 250 µm × 250 µm per channel.
Abstract:
We present a novel approach to the reconstruction of depth from light-field data. Our method uses dictionary representations and group-sparsity constraints to derive a convex formulation. Although our solution results in an increase of the problem dimensionality, we keep the numerical complexity at bay by restricting the space of solutions and by exploiting an efficient primal-dual formulation. Comparisons with state-of-the-art techniques, on both synthetic and real data, show promising performance.
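A primal-dual solver for such group-sparsity constraints revolves around one simple ingredient: the proximal operator of the mixed ℓ2,1 penalty, which shrinks each group of coefficients jointly and zeroes entire groups at once. A minimal sketch follows, with an illustrative grouping and threshold rather than the paper's dictionary model.

```python
# Proximal operator of the group-sparsity (l2,1) penalty: each group is
# shrunk jointly, so whole groups can be set to zero in one step.
import numpy as np

def prox_group_l21(x, groups, tau):
    """groups: list of index arrays partitioning x; tau: shrinkage threshold."""
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > tau:
            out[g] = (1.0 - tau / norm) * x[g]   # joint shrinkage of the group
    return out

x = np.array([3.0, 4.0, 0.1, -0.2, 1.0, -1.0])
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
print(prox_group_l21(x, groups, tau=1.0))
# group [0, 1] survives (norm 5); group [2, 3] is zeroed (norm ~0.22)
```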
Abstract:
In this work we devise two novel algorithms for blind deconvolution based on a family of logarithmic image priors. In contrast to recent approaches, we consider a minimalistic formulation of the blind deconvolution problem with only two energy terms: a least-squares term for the data fidelity and an image prior based on a lower-bounded logarithm of the norm of the image gradients. We show that this energy formulation is sufficient to achieve the state of the art in blind deconvolution with a good margin over previous methods. Much of the performance is due to the chosen prior. On the one hand, this prior is very effective in favoring sparsity of the image gradients. On the other hand, it is non-convex, so solution methods that can deal effectively with local minima of the energy become necessary. We devise two iterative minimization algorithms that at each iteration solve convex problems: one obtained via the primal-dual approach and one via majorization-minimization. While the former is computationally efficient, the latter achieves state-of-the-art performance on a public dataset.
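The majorization-minimization route can be illustrated in a few lines: because a lower-bounded logarithm is concave in the squared gradient magnitude, linearizing it at the current iterate gives a quadratic upper bound, so each MM step reduces to a weighted least-squares solve. The toy below applies this idea to 1-D denoising rather than the paper's blind-deconvolution energy; lam and eps are illustrative values.

```python
# A toy MM (iteratively reweighted least squares) loop for the prior
# sum_i log(eps + (Du)_i^2): linearizing the log at the current iterate
# yields weights w_i = 1 / (eps + (Du)_i^2), and each step solves the
# resulting weighted quadratic problem exactly.
import numpy as np

def mm_log_prior_denoise(f, lam=2.0, eps=1e-3, iters=30):
    n = len(f)
    D = np.diff(np.eye(n), axis=0)               # forward-difference operator
    u = f.copy()
    for _ in range(iters):
        g2 = (D @ u) ** 2                        # squared gradients at the iterate
        W = np.diag(1.0 / (eps + g2))            # MM weights from the linearization
        u = np.linalg.solve(np.eye(n) + lam * D.T @ W @ D, f)
    return u

rng = np.random.default_rng(3)
clean = np.repeat([0.0, 1.0, -0.5], 40)          # piecewise-constant test signal
noisy = clean + 0.1 * rng.standard_normal(clean.size)
print(np.abs(mm_log_prior_denoise(noisy) - clean).mean())
```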
Abstract:
BACKGROUND Panic disorder is characterised by the presence of recurrent unexpected panic attacks, discrete periods of fear or anxiety that have a rapid onset and include symptoms such as racing heart, chest pain, sweating and shaking. Panic disorder is common in the general population, with a lifetime prevalence of 1% to 4%. A previous Cochrane meta-analysis suggested that psychological therapy (either alone or combined with pharmacotherapy) can be chosen as a first-line treatment for panic disorder with or without agoraphobia. However, it is not yet clear whether certain psychological therapies can be considered superior to others. In order to answer this question, in this review we performed a network meta-analysis (NMA), in which we compared eight different forms of psychological therapy and three forms of a control condition. OBJECTIVES To assess the comparative efficacy and acceptability of different psychological therapies and different control conditions for panic disorder, with or without agoraphobia, in adults. SEARCH METHODS We conducted the main searches in the CCDANCTR electronic databases (studies and references registers), all years to 16 March 2015. We conducted complementary searches in PubMed and trials registries. Supplementary searches included reference lists of included studies, citation indexes, personal communication to the authors of all included studies and grey literature searches in OpenSIGLE. We applied no restrictions on date, language or publication status. SELECTION CRITERIA We included all relevant randomised controlled trials (RCTs) focusing on adults with a formal diagnosis of panic disorder with or without agoraphobia. We considered the following psychological therapies: psychoeducation (PE), supportive psychotherapy (SP), physiological therapies (PT), behaviour therapy (BT), cognitive therapy (CT), cognitive behaviour therapy (CBT), third-wave CBT (3W) and psychodynamic therapies (PD). We included both individual and group formats. Therapies had to be administered face-to-face. The comparator interventions considered for this review were: no treatment (NT), wait list (WL) and attention/psychological placebo (APP). For this review we considered four short-term (ST) outcomes (ST-remission, ST-response, ST-dropouts, ST-improvement on a continuous scale) and one long-term (LT) outcome (LT-remission/response). DATA COLLECTION AND ANALYSIS As a first step, we conducted a systematic search of all relevant papers according to the inclusion criteria. For each outcome, we then constructed a treatment network in order to clarify the extent to which each type of therapy and each comparison had been investigated in the available literature. Then, for each available comparison, we conducted a random-effects meta-analysis. Subsequently, we performed a network meta-analysis in order to synthesise the available direct evidence with indirect evidence, and to obtain an overall effect size estimate for each possible pair of therapies in the network. Finally, we calculated a probabilistic ranking of the different psychological therapies and control conditions for each outcome. MAIN RESULTS We identified 1432 references; after screening, we included 60 studies in the final qualitative analyses. Among these, 54 (including 3021 patients) were also included in the quantitative analyses. 
With respect to the analyses for the first of our primary outcomes (short-term remission), the most studied of the included psychological therapies was CBT (32 studies), followed by BT (12 studies), PT (10 studies), CT (three studies), SP (three studies) and PD (two studies). The quality of the evidence for the entire network was found to be low for all outcomes. The quality of the evidence for CBT vs NT, CBT vs SP and CBT vs PD was low to very low, depending on the outcome. The majority of the included studies were at unclear risk of bias with regard to the randomisation process. We found almost half of the included studies to be at high risk of attrition bias and detection bias. We also found selective outcome reporting bias to be present and we strongly suspected publication bias. Finally, we found almost half of the included studies to be at high risk of researcher allegiance bias. Overall the networks appeared to be well connected, but were generally underpowered to detect any important disagreement between direct and indirect evidence. The results showed the superiority of psychological therapies over the WL condition, although this finding was amplified by evident small-study effects (SSE). The NMAs for ST-remission, ST-response and ST-improvement on a continuous scale showed well-replicated evidence in favour of CBT, as well as some sparse but relevant evidence in favour of PD and SP, over other therapies. In terms of ST-dropouts, PD and 3W showed better tolerability than other psychological therapies in the short term. In the long term, CBT and PD showed the highest level of remission/response, suggesting that the effects of these two treatments may be more stable than those of other psychological therapies. However, all the mentioned differences among active treatments must be interpreted with the caveat that in most cases the effect sizes were small and/or the results were imprecise. AUTHORS' CONCLUSIONS There is no high-quality, unequivocal evidence to support one psychological therapy over the others for the treatment of panic disorder with or without agoraphobia in adults. However, the results show that CBT - the most extensively studied of the included psychological therapies - was often superior to other therapies, although the effect size was small and the level of precision was often insufficient or clinically irrelevant. In the only two studies available that explored PD, this treatment showed promising results, although further research is needed in order to better explore the relative efficacy of PD with respect to CBT. Furthermore, PD appeared to be the best tolerated (in terms of ST-dropouts) among psychological treatments. Unexpectedly, we found some evidence in support of the possible viability of non-specific supportive psychotherapy for the treatment of panic disorder; however, the results concerning SP should be interpreted cautiously because of the sparsity of evidence regarding this treatment and, as in the case of PD, further research is needed to explore this issue. Behaviour therapy did not appear to be a valid alternative to CBT as a first-line treatment for patients with panic disorder with or without agoraphobia.
Abstract:
The application of quantitative and semiquantitative methods to assemblage data from dinoflagellate cysts shows potential for interpreting past environments, both in terms of paleotemperature estimates and in recognizing water masses and circulation patterns. Estimates of winter sea-surface temperature (WSST) were produced using the Impagidinium Index (II) method and by applying a winter-temperature transfer function (TFw). Estimates of summer sea-surface temperature (SSST) were produced using a summer-temperature transfer function (TFs), two methods based on a temperature-distribution chart (ACT and ACTpo), and a method based on the ratio of gonyaulacoid to protoperidinioid specimens (G:P). WSST estimates from the II and TFw methods are in close agreement except where Impagidinium species are sparse. SSST estimates from TFs are more variable. The value of the G:P ratio for the Pliocene data in this paper is limited by the apparent sparsity of protoperidinioids, which results in monotonous SSST estimates of 14-26°C. The ACT methods show two biases for the Pliocene data set: taxonomic substitution may force 'matches' yielding incorrect temperature estimates, and the method is highly sensitive to the end-points of species distributions. Dinocyst assemblage data were applied to reconstruct Pliocene sea-surface temperatures between 3.5 and 2.5 Ma from DSDP Hole 552A and ODP Holes 646B and 642B, which are presently located beneath cold and cool-temperate waters north of 56°N. Our initial results suggest that at 3.0 Ma, WSSTs were a few degrees Celsius warmer than at present and that there was a somewhat reduced north-south temperature gradient. For all three sites, it is likely that SSSTs were also warmer, but by an unknown, perhaps large, amount. Past oceanic circulation in the North Atlantic was probably different from the present.
Abstract:
The Southern Westerly Winds (SWW) exert a crucial influence over the world ocean and climate. Nevertheless, a comprehensive understanding of the Holocene temporal and spatial evolution of the SWW remains a significant challenge due to the sparsity of high-resolution marine archives and appropriate SWW proxies. Here, we present a north-south transect of high-resolution planktonic foraminiferal oxygen isotope records from the western South Atlantic. Our proxy records reveal Holocene migrations of the Brazil-Malvinas Confluence (BMC), a feature highly sensitive to changes in the position and strength of the northern portion of the SWW. Through the tight coupling of the BMC position to the large-scale wind field, the records allow a quantitative reconstruction of Holocene latitudinal displacements of the SWW across the South Atlantic. Our data reveal a gradual poleward movement of the SWW by about 1-1.5° from the early to the mid-Holocene. Afterwards, variability in the SWW is dominated by millennial-scale displacements on the order of 1° of latitude, with no recognizable longer-term trend. These findings are confronted with results from a state-of-the-art transient Holocene climate simulation using a comprehensive coupled atmosphere-ocean general circulation model. Proxy-inferred and modeled SWW shifts agree qualitatively, but the model underestimates both the orbitally forced multi-millennial and the internal millennial SWW variability by almost an order of magnitude. The underestimated natural variability implies substantial uncertainty in model projections of future SWW shifts.
Abstract:
Blind deconvolution consists in the estimation of a sharp image and a blur kernel from an observed blurry image. Because the blur model admits several solutions, it is necessary to devise an image prior that favors the true blur kernel and sharp image. Many successful image priors enforce the sparsity of the sharp image gradients. Ideally the L0 “norm” is the best choice for promoting sparsity, but because it is computationally intractable, some methods have used a logarithmic approximation. In this work we also study a logarithmic image prior. We show empirically how well the prior suits the blind deconvolution problem. Our analysis confirms experimentally the hypothesis that a prior need not model natural image statistics to correctly estimate the blur kernel. Furthermore, we show that a simple Maximum a Posteriori formulation is enough to achieve state-of-the-art results. To minimize this formulation we devise two iterative minimization algorithms that cope with the non-convexity of the logarithmic prior: one obtained via the primal-dual approach and one via majorization-minimization.