893 results for wavelet transforms
Abstract:
We study the scaling properties and Kraichnan–Leith–Batchelor (KLB) theory of forced inverse cascades in generalized two-dimensional (2D) fluids (α-turbulence models) simulated at resolution 8192 × 8192. We consider α=1 (surface quasigeostrophic flow), α=2 (2D Euler flow) and α=3. The forcing scale is well resolved, a direct cascade is present and there is no large-scale dissipation. Coherent vortices spanning a range of sizes, most larger than the forcing scale, are present for both α=1 and α=2. The active scalar field for α=3 contains comparatively few and small vortices. The energy spectral slopes in the inverse cascade are steeper than the KLB prediction −(7−α)/3 in all three systems. Since we stop the simulations well before the cascades have reached the domain scale, vortex formation and spectral steepening are not due to condensation effects; nor are they caused by large-scale dissipation, which is absent. One- and two-point p.d.f.s, hyperflatness factors and structure functions indicate that the inverse cascades are intermittent and non-Gaussian over much of the inertial range for α=1 and α=2, while the α=3 inverse cascade is much closer to Gaussian and non-intermittent. For α=3 the steep spectrum is close to that associated with enstrophy equipartition. Continuous wavelet analysis shows approximate KLB scaling ℰ(k) ∝ k^(−2) (α=1) and ℰ(k) ∝ k^(−5/3) (α=2) in the interstitial regions between the coherent vortices. Our results demonstrate that coherent vortex formation (α=1 and α=2) and non-realizability (α=3) cause 2D inverse cascades to deviate from the KLB predictions, but that the flow between the vortices exhibits KLB scaling and non-intermittent statistics for α=1 and α=2.
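For reference, a brief worked evaluation (in LaTeX) of the KLB slope formula quoted above for the three cases studied; the values −2 and −5/3 are the slopes the wavelet analysis recovers between the vortices:

\mathcal{E}(k) \propto k^{-(7-\alpha)/3}:\quad
\alpha = 1:\; -(7-1)/3 = -2 \;\text{(surface quasigeostrophic)},\qquad
\alpha = 2:\; -(7-2)/3 = -5/3 \;\text{(2D Euler)},\qquad
\alpha = 3:\; -(7-3)/3 = -4/3.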
Abstract:
Philosophy has tended to regard poetry primarily in terms of truth and falsity, assuming that its business is to state or describe states of affairs. Speech act theory transforms philosophical debate by regarding poetry in terms of action, showing that its business is primarily to do things. The proposal can sharpen our understanding of types of poetry; examples of the ‘Chaucer-Type’ and its variants demonstrate this. Objections to the proposal can be divided into those that relate to the agent of actions associated with a poem, those that relate to the actions themselves, and those that relate to the things done. These objections can be answered. A significant consequence of the proposal is that it gives prominence to issues of responsibility and commitment. This prominence brings philosophical debate usefully into line with contemporary poetry, whose concern with such issues is manifest in characteristic forms of anxiety.
Abstract:
A dipeptide with a long fatty acid chain at its N-terminus gives hydrogels in phosphate buffer in the pH range 7.0–8.5. The hydrogel with a gelator concentration of 0.45% (w/v) at pH 7.46 (physiological pH) provides a very good platform to study dynamic changes within a supramolecular framework, as it exhibits a remarkable change in its appearance with time. Interestingly, the first-formed transparent hydrogel gradually transforms into a turbid gel within 2 days. These two forms of the hydrogel have been thoroughly investigated by using small-angle X-ray scattering (SAXS), powder X-ray diffraction (PXRD), field emission scanning electron microscopic (FE-SEM) and high-resolution transmission electron microscopic (HR-TEM) imaging, FT-IR and rheometric analyses. The SAXS and low-angle PXRD studies substantiate different packing arrangements of the gelator molecules in these two different gel states (the freshly prepared and the aged hydrogel). Moreover, rheological studies of these two gels reveal that the aged gel is stiffer than the freshly prepared gel.
Abstract:
Contamination of the electroencephalogram (EEG) by artifacts greatly reduces the quality of the recorded signals. There is a need for automated artifact removal methods. However, such methods are rarely evaluated against one another via rigorous criteria, with results often presented based upon visual inspection alone. This work presents a comparative study of automatic methods for removing blink, electrocardiographic, and electromyographic artifacts from the EEG. Three methods are considered: wavelet-, blind source separation (BSS)-, and multivariate singular spectrum analysis (MSSA)-based correction. These are applied to data sets containing mixtures of artifacts. Metrics are devised to measure the performance of each method. The BSS method is seen to be the best approach for artifacts of high signal-to-noise ratio (SNR). By contrast, MSSA performs well at low SNRs, but at the expense of a large number of false positive corrections.
Abstract:
A fully automated and online artifact removal method for the electroencephalogram (EEG) is developed for use in brain-computer interfacing. The method (FORCe) is based upon a novel combination of wavelet decomposition, independent component analysis, and thresholding. FORCe is able to operate on a small channel set during online EEG acquisition and does not require additional signals (e.g. electrooculogram signals). Evaluation of FORCe is performed offline on EEG recorded from 13 BCI particpants with cerebral palsy (CP) and online with three healthy participants. The method outperforms the state-of the-art automated artifact removal methods Lagged auto-mutual information clustering (LAMIC) and Fully automated statistical thresholding (FASTER), and is able to remove a wide range of artifact types including blink, electromyogram (EMG), and electrooculogram (EOG) artifacts.
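As an illustration of the wavelet-decomposition-and-thresholding idea mentioned above, the following is a minimal Python sketch that clips large detail coefficients of a single EEG channel; it assumes the PyWavelets package, and the wavelet, decomposition level and threshold rule are illustrative choices, not the FORCe pipeline (which additionally uses ICA and its own thresholding scheme):

import numpy as np
import pywt

def wavelet_artifact_attenuation(eeg_channel, wavelet="db4", level=5):
    # Decompose the channel into approximation and detail coefficients.
    coeffs = pywt.wavedec(eeg_channel, wavelet, level=level)
    # Noise level estimated from the finest detail scale (robust MAD estimate),
    # turned into a universal-style threshold.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(eeg_channel)))
    # Clip detail coefficients whose magnitude exceeds the threshold, which
    # attenuates large transients (e.g. blinks); the approximation is kept.
    cleaned = [coeffs[0]] + [np.clip(c, -thr, thr) for c in coeffs[1:]]
    return pywt.waverec(cleaned, wavelet)[: len(eeg_channel)]

# Example: a blink-like transient superimposed on background noise.
fs = 256
t = np.arange(0, 4, 1.0 / fs)
eeg = np.random.randn(t.size) + 40.0 * np.exp(-((t - 2.0) ** 2) / 0.01)
cleaned = wavelet_artifact_attenuation(eeg)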
Abstract:
We utilized an ecosystem process model (SIPNET, simplified photosynthesis and evapotranspiration model) to estimate carbon fluxes of gross primary productivity and total ecosystem respiration of a high-elevation coniferous forest. The data assimilation routine incorporated aggregated twice-daily measurements of the net ecosystem exchange of CO2 (NEE) and satellite-based reflectance measurements of the fraction of absorbed photosynthetically active radiation (fAPAR) on an eight-day timescale. From these data we conducted a data assimilation experiment with fifteen different combinations of available data using twice-daily NEE, aggregated annual NEE, eight-day fAPAR, and average annual fAPAR. Model parameters were conditioned on three years of NEE and fAPAR data, and results were evaluated to determine the information content of the different combinations of data streams. Across the data assimilation experiments conducted, model selection metrics such as the Bayesian Information Criterion and the Deviance Information Criterion reached their minimum values when assimilating average annual fAPAR and twice-daily NEE data. Wavelet coherence analyses showed higher correlations between measured and modeled fAPAR on longer timescales, ranging from 9 to 12 months. There were strong correlations between measured and modeled NEE (coefficient of determination R² = 0.86), but correlations between measured and modeled eight-day fAPAR were quite poor (R² = −0.94). We conclude that this inability to determine fAPAR on an eight-day timescale would improve if radiative transfer through the plant canopy were taken into account. Modeled fluxes when assimilating average annual fAPAR and annual NEE were comparable to the corresponding results when assimilating twice-daily NEE, albeit with greater uncertainty. Our results support the conclusion that, for this coniferous forest, twice-daily NEE data are a critical measurement stream for the data assimilation. The results from this modeling exercise indicate that, for this coniferous forest, annual averages of satellite-based fAPAR measurements paired with annual NEE estimates may provide spatial detail on components of ecosystem carbon fluxes in the vicinity of eddy covariance towers. Inclusion of other independent data streams in the assimilation will also reduce uncertainty on modeled values.
Abstract:
Public–private partnerships (PPPs) are new in Russia and represent project implementation in progress. The government is actively pursuing PPP deployment in sectors such as transportation and urban infrastructure, and at all levels including federal, regional and especially local. Despite the lack of pertinent laws and regulations, the PPP public policy quickly transforms into a policy paradigm that provides simplified concepts and solutions and intensifies partnership development. The article delineates an emerging model of Russia’s PPP policy paradigm, whose structure includes the shared understanding of the need for long-term collaboration between the public sector and business, a changing set of government responsibilities that imply an increasing private provision of public services, and new institutional capacities. This article critically appraises the principal dynamics that contribute to an emerging PPP policy paradigm, namely the broad government treatment of the meaning of a partnership and of a contractual PPP; a liberal PPP approval process that lacks clear guidelines and consistency across regions; excessive emphasis on positive PPP externalities and neglect of drawbacks; and unjustifiably extensive government financial support to PPPs. Whilst a paradigm appears to be useful specifically for the policy purpose of PPP expansion, it may also mask inefficiencies such as higher prices of public services and greater government risks.
Abstract:
The goal of this work is the efficient solution of the heat equation with Dirichlet or Neumann boundary conditions using the Boundary Element Method (BEM). Efficiently solving the heat equation is useful, as it is a simple model problem for other types of parabolic problems. In complicated spatial domains, as often found in engineering, BEM can be beneficial since only the boundary of the domain has to be discretised. This makes BEM easier to apply than domain methods such as finite elements and finite differences, which are conventionally combined with time-stepping schemes to solve this problem. The contribution of this work is to further decrease the complexity of solving the heat equation, leading both to speed gains (in CPU time) and to smaller memory requirements for solving the same problem. To do this we combine the complexity gains of boundary reduction by integral equation formulations with a discretisation using wavelet bases. This reduces the total work to O(h
Abstract:
Nickel cyanide is a layered material showing markedly anisotropic behaviour. High-pressure neutron diffraction measurements show that at pressures up to 20.1 kbar, compressibility is much higher in the direction perpendicular to the layers, c, than in the plane of the strongly chemically bonded metal-cyanide sheets. Detailed examination of the behaviour of the tetragonal lattice parameters, a and c, as a function of pressure reveals regions in which large changes in slope occur, for example in c(P) at 1 kbar. The experimental pressure dependence of the volume data is fitted to give a bulk modulus, B0, of 1050 (20) kbar over the pressure range 0–1 kbar, and of 124 (2) kbar over the range 1–20.1 kbar. Raman spectroscopy measurements yield additional information on how the structure and bonding in the Ni(CN)2 layers change with pressure and show that a phase change occurs at about 1 kbar. The new high-pressure phase, PII, has ordered cyanide groups with sheets of D4h symmetry containing Ni(CN)4 and Ni(NC)4 groups. The Raman spectrum of phase PII closely resembles that of the related layered compound Cu1/2Ni1/2(CN)2, which has previously been shown to contain ordered C≡N groups. The phase change, PI to PII, is also observed in inelastic neutron scattering studies, which show significant changes occurring in the phonon spectra as the pressure is raised from 0.3 to 1.5 kbar. These changes reflect the large reduction in the interlayer spacing which occurs as Phase PI transforms to Phase PII and the consequent increase in difficulty for out-of-plane atomic motions. Unlike other cyanide materials, e.g. Zn(CN)2 and Ag3Co(CN)6, which show amorphization and/or decomposition at much lower pressures (~100 kbar), Ni(CN)2 can be recovered after pressurising to 200 kbar, albeit in a more ordered form.
Abstract:
Inspired by the commercial desires of global brands and retailers to access the lucrative green consumer market, carbon is increasingly being counted and made knowable at the mundane sites of everyday production and consumption, from the carbon footprint of a plastic kitchen fork to that of an online bank account. Despite the challenges of counting and making commensurable the global warming impact of a myriad of biophysical and societal activities, this desire to communicate a product or service's carbon footprint has sparked complicated carbon calculative practices and enrolled actors at literally every node of multi-scaled and vastly complex global supply chains. Against this landscape, this paper critically analyzes the counting practices that create the ‘e’ in ‘CO2e’. It is shown that central to these practices are a series of tools, models and databases which, building upon previous work (Eden, 2012; Star and Griesemer, 1989), we conceptualize here as ‘boundary objects’. By enrolling everyday actors from farmers to consumers, these objects abstract and stabilize greenhouse gas emissions from their messy material and social contexts into units of CO2e which can then be translated along a product's supply chain, thereby establishing a new currency of ‘everyday supply chain carbon’. However, in making all greenhouse gas-related practices commensurable, and in enrolling and stabilizing the transfer of information between multiple actors, these objects oversee a process of simplification reliant upon, and subject to, a multiplicity of approximations, assumptions, errors, discrepancies and/or omissions. Further, the outcomes of these tools are subject to the politicized and commercial agendas of the worlds they attempt to link, with each boundary actor inscribing different meanings to a product's carbon footprint in accordance with their specific subjectivities, commercial desires and epistemic framings. It is therefore shown that how a boundary object transforms greenhouse gas emissions into units of CO2e is the outcome of distinct ideologies regarding ‘what’ a product's carbon footprint is and how it should be made legible. These politicized decisions, in turn, inform specific reduction activities and ultimately advance distinct, specific and increasingly durable transition pathways to a low carbon society.
Abstract:
This continuing study of intragroup light in compact groups of galaxies aims to establish new constraints on models of the formation and evolution of galaxy groups, especially of compact groups, which are a key part in the evolution of larger structures such as clusters. In this paper we present three additional groups (HCG 15, 35 and 51) using deep wide-field B- and R-band images observed with the LAICA camera at the 3.5-m telescope at the Calar Alto observatory (CAHA). This instrument provides us with very stable flat-fielding, a mandatory condition for reliably measuring intragroup diffuse light. The images were analysed with the OV_WAV package, a wavelet technique that allows us to uncover the intragroup component in an unprecedented way. We have detected that 19, 15 and 26 per cent of the total light of HCG 15, 35 and 51, respectively, are in the diffuse component, with colours that are compatible with old stellar populations and with mean surface brightness that can be as low as 28.4 B mag arcsec⁻². Dynamical masses, crossing times and mass-to-light ratios were recalculated using the new group parameters. Tidal features were also analysed using the wavelet technique.
Abstract:
Astronomy has evolved almost exclusively through the use of spectroscopic and imaging techniques, operated separately. With the development of modern technologies, it is possible to obtain data cubes in which both techniques are combined simultaneously, producing images with spectral resolution. Extracting information from them can be quite complex, and hence the development of new methods of data analysis is desirable. We present a method for the analysis of data cubes (data from single-field observations, containing two spatial dimensions and one spectral dimension) that uses Principal Component Analysis (PCA) to express the data in a form of reduced dimensionality, facilitating efficient information extraction from very large data sets. PCA transforms a system of correlated coordinates into a system of uncorrelated coordinates ordered by principal components of decreasing variance. The new coordinates are referred to as eigenvectors, and the projections of the data on to these coordinates produce images we will call tomograms. The association of the tomograms (images) with the eigenvectors (spectra) is important for the interpretation of both. The eigenvectors are mutually orthogonal, and this information is fundamental for their handling and interpretation. When the data cube shows objects that present uncorrelated physical phenomena, the eigenvectors' orthogonality may be instrumental in separating and identifying them. By handling eigenvectors and tomograms, one can enhance features, extract noise, compress data, extract spectra, etc. We applied the method, for illustration purposes only, to the central region of the low-ionization nuclear emission region (LINER) galaxy NGC 4736, and demonstrate that it has a type 1 active nucleus, not known before. Furthermore, we show that it is displaced from the centre of its stellar bulge.
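A minimal numerical sketch of the PCA step described above, assuming the data cube is a NumPy array of shape (ny, nx, nλ); the function and variable names are illustrative, not the published code:

import numpy as np

def pca_tomography(cube):
    # Each spaxel's spectrum becomes one row of the data matrix.
    ny, nx, nlam = cube.shape
    data = cube.reshape(ny * nx, nlam).astype(float)
    data -= data.mean(axis=0)                     # subtract the mean spectrum
    cov = data.T @ data / (data.shape[0] - 1)     # (nlam, nlam) covariance matrix
    eigval, eigvec = np.linalg.eigh(cov)          # orthogonal eigenvectors (eigenspectra)
    order = np.argsort(eigval)[::-1]              # order by decreasing variance
    eigval, eigvec = eigval[order], eigvec[:, order]
    tomograms = (data @ eigvec).reshape(ny, nx, nlam)  # one image per eigenvector
    return eigval, eigvec, tomograms

# Example: a toy cube with 50 x 50 spaxels and 200 spectral channels.
cube = np.random.rand(50, 50, 200)
eigval, eigvec, tomograms = pca_tomography(cube)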
Abstract:
In this work we introduce a new hierarchical surface decomposition method for multiscale analysis of surface meshes. In contrast to other multiresolution methods, our approach relies on spectral properties of the surface to build a binary hierarchical decomposition. Namely, we utilize the first nontrivial eigenfunction of the Laplace-Beltrami operator to recursively decompose the surface. For this reason we coin our surface decomposition the Fiedler tree. Using the Fiedler tree ensures a number of attractive properties, including: mesh-independent decomposition, well-formed and nearly equi-areal surface patches, and noise robustness. We show how the evenly distributed patches can be exploited for generating multiresolution high quality uniform meshes. Additionally, our decomposition permits a natural means for carrying out wavelet methods, resulting in an intuitive method for producing feature-sensitive meshes at multiple scales.
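A minimal sketch of the recursive spectral bisection that such a decomposition is built on, using the graph Laplacian of the mesh's vertex adjacency as a stand-in for the discretised Laplace-Beltrami operator; SciPy is assumed and the helper names are illustrative:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def fiedler_split(vertices, adjacency):
    # Restrict the adjacency matrix to the current patch and form its graph Laplacian.
    sub = adjacency[vertices, :][:, vertices]
    degrees = np.asarray(sub.sum(axis=1)).ravel()
    laplacian = (sp.diags(degrees) - sub).asfptype()
    # Second-smallest eigenpair of the Laplacian: its eigenvector is the Fiedler vector,
    # a discrete analogue of the first nontrivial Laplace-Beltrami eigenfunction.
    vals, vecs = spla.eigsh(laplacian, k=2, sigma=-1e-6, which="LM")
    fiedler = vecs[:, np.argsort(vals)[1]]
    # Bisect the patch by the sign of the Fiedler vector.
    return vertices[fiedler < 0], vertices[fiedler >= 0]

def fiedler_tree(vertices, adjacency, min_size=64):
    # Recursively bisect until patches are small enough; returns the leaf patches.
    if len(vertices) <= min_size:
        return [vertices]
    left, right = fiedler_split(vertices, adjacency)
    if len(left) == 0 or len(right) == 0:  # guard against a degenerate split
        return [vertices]
    return fiedler_tree(left, adjacency, min_size) + fiedler_tree(right, adjacency, min_size)

# Usage: adjacency is a symmetric scipy.sparse CSR matrix over all mesh vertices.
# patches = fiedler_tree(np.arange(adjacency.shape[0]), adjacency)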
Abstract:
Texture is one of the most important visual attributes for image analysis. It has been widely used in image analysis and pattern recognition. A partially self-avoiding deterministic walk has recently been proposed as an approach for texture analysis, with promising results. This approach uses walkers (called tourists) to exploit the gray-scale image contexts at several levels. Here, we present an approach to generate graphs from the trajectories produced by the tourist walks. The generated graphs embody important characteristics related to tourist transitivity in the image. Statistical measures of position (the mean degree) and dispersion (the entropy of vertices with the same degree), computed from these graphs, are used as texture descriptors. A comparison with traditional texture analysis methods is performed to illustrate the high performance of this novel approach.
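A simplified Python sketch of the pipeline the abstract describes, assuming NumPy; the neighbourhood, memory handling and walk length are illustrative simplifications of the published tourist-walk method, as are the function names:

import numpy as np

def tourist_walk_edges(image, memory=2, max_steps=50):
    # Start one partially self-avoiding deterministic walk at every pixel:
    # the walker moves to the 8-connected neighbour with the most similar
    # gray level that was not visited in the last `memory` steps, and each
    # consecutive pair of visited pixels contributes an undirected edge.
    h, w = image.shape
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
    edges = set()
    for start in range(h * w):
        y, x = divmod(start, w)
        visited = [(y, x)]
        for _ in range(max_steps):
            candidates = [(y + dy, x + dx) for dy, dx in offsets
                          if 0 <= y + dy < h and 0 <= x + dx < w
                          and (y + dy, x + dx) not in visited[-memory:]]
            if not candidates:
                break
            ny_, nx_ = min(candidates, key=lambda p: abs(int(image[p]) - int(image[y, x])))
            edges.add(frozenset({y * w + x, ny_ * w + nx_}))
            y, x = ny_, nx_
            visited.append((y, x))
    return edges

def degree_descriptors(edges, n_vertices):
    # Texture descriptors: mean vertex degree and entropy of the degree distribution.
    degree = np.zeros(n_vertices, dtype=int)
    for edge in edges:
        for v in edge:
            degree[v] += 1
    counts = np.bincount(degree)
    p = counts[counts > 0] / counts.sum()
    return degree.mean(), float(-np.sum(p * np.log2(p)))

# Example on a small random 8-bit texture patch.
image = np.random.randint(0, 256, size=(32, 32), dtype=np.uint8)
mean_degree, degree_entropy = degree_descriptors(tourist_walk_edges(image), image.size)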
Abstract:
Bose systems, subject to the action of external random potentials, are considered. For describing the system properties under the action of spatially random potentials of arbitrary strength, the stochastic mean-field approximation is employed. When the strength of disorder increases, the extended Bose-Einstein condensate fragments into spatially disconnected regions, forming a granular condensate. Increasing the strength of disorder even more transforms the granular condensate into the normal glass. The influence of time-dependent external potentials is also discussed. Rapidly varying temporal potentials, to some extent, imitate the action of spatially random potentials. In particular, a strong time-alternating potential can induce the appearance of a nonequilibrium granular condensate.