885 results for wavelet entropy
Abstract:
The aim of this study was to model and analyze how users move in a virtual environment and to explore the experiential dimensions connected with different ways of moving. Due to the lack of previous research on this subject, this was an exploratory study. The study also aimed to identify the different ways in which users move in virtual environments and the background variables connected to them. It was hypothesized that fluent movement in virtual environments is connected to high presence, skill and challenge assessments. The test participants (n = 68) were mostly highly educated young adults. A virtual environment was built using a CAVE-type virtual reality interface. The task was to search for objects that do not belong in a normal house. The participants' movement in the virtual house was recorded on a computer. Movement was modelled using a cluster analysis of information-entropy-based movement measurements, acceleration, number of stops and time spent stationary. The experiential dimensions were measured using the EVEQ questionnaire. We were able to identify four different ways of moving in virtual environments. With respect to background variables, the four groups differed only in the amount of weekly computer usage. However, fluent movement in virtual environments was connected to a high sense of presence. Furthermore, participants who moved fluently in the environment assessed their skills as high and regarded the use of the virtual environment as challenging. The results indicate that different ways of moving affect how people experience virtual environments. Consequently, the participants' assessments of their skills and the level of challenge have an impact on the affective evaluation of the situation at hand. Entropy measures have not previously been applied to the study of movement, and the role of movement in the experiential dimensions of virtual environments is an unexplored subject. The movement analysis method introduced here is applicable to other research problems. Finally, this study expands our knowledge of the special characteristics connected with the experiential dimensions of virtual environments.
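As a rough illustration of the kind of entropy-based movement profiling the abstract describes, the sketch below assumes each trajectory is logged as a sequence of 2-D positions; the feature choices, thresholds and synthetic data are illustrative, not taken from the study. It computes a heading-entropy feature together with mean acceleration, stop counts and stationary time, then clusters participants into four groups with k-means.

```python
import numpy as np
from sklearn.cluster import KMeans

def movement_features(xy, dt=0.1, stop_thresh=0.05, n_bins=16):
    """Entropy-, acceleration- and stop-based features for one trajectory.

    xy: (T, 2) array of positions sampled every dt seconds.
    """
    v = np.diff(xy, axis=0) / dt                  # velocity vectors
    speed = np.linalg.norm(v, axis=1)
    accel = np.diff(speed) / dt                   # scalar acceleration

    # Shannon entropy of the heading distribution: low entropy suggests
    # fluent, directed movement; high entropy suggests erratic wandering.
    headings = np.arctan2(v[:, 1], v[:, 0])
    p, _ = np.histogram(headings, bins=n_bins, range=(-np.pi, np.pi))
    p = p / p.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))

    stopped = speed < stop_thresh
    n_stops = np.sum(np.diff(stopped.astype(int)) == 1)  # moving -> stopped
    t_stationary = stopped.sum() * dt

    return [entropy, np.abs(accel).mean(), n_stops, t_stationary]

# One feature row per participant, then four movement styles via k-means.
rng = np.random.default_rng(0)
trajectories = [np.cumsum(rng.normal(size=(600, 2)), axis=0) for _ in range(68)]
X = np.array([movement_features(t) for t in trajectories])
X = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize before clustering
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
```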
Abstract:
Wavenumber-frequency spectral analysis of different atmospheric variables has been carried out using 25 years of data. The area considered is the tropical belt from 25 degrees S to 25 degrees N. A combined FFT-wavelet analysis method has been used for this purpose. The variables considered are outgoing longwave radiation (OLR), 850 hPa divergence, zonal and meridional winds at the 850, 500 and 200 hPa levels, sea level pressure and 850 hPa geopotential height. It is shown that the spectra of the different variables have some common properties, but each variable also has a few features different from the rest. While the Kelvin mode is prominent in OLR and zonal winds, it is not clearly observed in the pressure and geopotential height fields; the latter two have a dominant wavenumber-zero mode not seen in the other variables except the meridional wind at 200 hPa and the 850 hPa divergence. The different dominant modes in the tropics show significant variations on sub-seasonal time scales.
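A minimal sketch of the FFT half of such an analysis (the paper's combined method additionally applies a wavelet transform in time, which is not shown here): a 2-D FFT over a hypothetical time-longitude field separates variance into zonal wavenumber and frequency, where eastward-propagating modes such as the Kelvin wave appear at one sign of the wavenumber-frequency product and westward modes at the other. The synthetic data and grid sizes are illustrative.

```python
import numpy as np

# Hypothetical OLR anomaly field: daily time steps x longitude points,
# for one tropical latitude band.
nt, nlon = 25 * 365, 144                 # 25 years of daily data, 2.5-degree grid
rng = np.random.default_rng(1)
olr = rng.normal(size=(nt, nlon))

# 2-D FFT: longitude -> zonal wavenumber k, time -> frequency f.
spec = np.fft.fft2(olr)
power = np.abs(np.fft.fftshift(spec)) ** 2
freqs = np.fft.fftshift(np.fft.fftfreq(nt, d=1.0))            # cycles per day
wavenums = np.fft.fftshift(np.fft.fftfreq(nlon, d=1.0)) * nlon  # zonal wavenumber

# Find the dominant (k, f) mode outside the mean (k = f = 0) component.
power[nt // 2, nlon // 2] = 0
i, j = np.unravel_index(np.argmax(power), power.shape)
print(f"dominant mode: wavenumber {wavenums[j]:.0f}, frequency {freqs[i]:.4f} cpd")
```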
Abstract:
The quality of species distribution models (SDMs) relies to a large degree on the quality of the input data, from bioclimatic indices to environmental and habitat descriptors (Austin, 2002). Recent reviews of SDM techniques have sought to optimize predictive performance (e.g. Elith et al., 2006). In general, SDMs employ one of three approaches to variable selection. The simplest approach relies on the expert to select the variables, as in environmental niche models (Nix, 1986) or a generalized linear model without variable selection (Miller and Franklin, 2002). A second approach explicitly incorporates variable selection into model fitting, which allows examination of particular combinations of variables. Examples include generalized linear or additive models with variable selection (Hastie et al., 2002), or classification trees with complexity-based or model-based pruning (Breiman et al., 1984; Zeileis, 2008). A third approach uses model averaging to summarize the overall contribution of a variable, without considering particular combinations. Examples include neural networks, boosted or bagged regression trees, and Maximum Entropy, as compared in Elith et al. (2006). Typically, users of SDMs will either consider a small number of variable sets, via the first approach, or else supply all of the candidate variables (often numbering more than a hundred) to the second or third approach. Bayesian SDMs exist, with several methods for eliciting and encoding priors on model parameters (see the review in Low Choy et al., 2010). However, few methods have been published for informative variable selection; one example is Bayesian trees (O'Leary, 2008). Here we report an elicitation protocol that helps make explicit a priori expert judgements on the quality of candidate variables. This protocol can be flexibly applied to any of the three approaches to variable selection described above, Bayesian or otherwise. We demonstrate how this information can be obtained and then used to guide variable selection in classical or machine-learning SDMs, or to define priors within Bayesian SDMs.
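One way elicited quality scores could steer classical variable selection (this is a generic sketch, not the elicitation protocol the abstract reports): rescale each candidate variable by its expert weight before fitting an L1-penalized model, in the style of the adaptive lasso. The shared penalty then falls more lightly on well-regarded variables, steering selection toward them without forbidding the others. All names and data below are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical setup: presence/absence y, candidate variables X, and
# expert scores in (0, 1] rating each variable's a priori quality.
rng = np.random.default_rng(2)
n, p = 500, 12
X = rng.normal(size=(n, p))
y = (X[:, 0] - 0.5 * X[:, 3] + rng.normal(size=n) > 0).astype(int)
expert_weight = rng.uniform(0.2, 1.0, size=p)   # elicited, not estimated

# Adaptive-lasso-style trick: scaling column j up by its expert weight
# shrinks the effective L1 penalty on that variable's effect.
X_scaled = X * expert_weight
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X_scaled, y)
selected = np.flatnonzero(model.coef_[0] != 0)
print("selected variable indices:", selected)
```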
Abstract:
A novel method is proposed to treat the problem of the random resistance of a strictly one-dimensional conductor with static disorder. For the probability distribution of the transfer matrix R of the conductor we propose a distribution of maximum information entropy, constrained by the following physical requirements: (1) flux conservation, (2) time-reversal invariance, and (3) scaling with the length of the conductor of the two lowest cumulants of ω, where R = exp(iω⃗ · Ĵ). The preliminary results discussed in the text are in qualitative agreement with those obtained by sophisticated microscopic theories.
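For context, the generic form of a maximum-information-entropy distribution under expectation constraints is standard; the sketch below states it in LaTeX. In the paper, the flux-conservation and time-reversal requirements and the cumulant scaling fix the constraint functions f_k; the specific choices are not reproduced here.

```latex
% Maximizing S[p] = -\int p(\omega)\,\ln p(\omega)\, d\omega subject to
% normalization and the constraints \langle f_k(\omega)\rangle = c_k
% yields the exponential-family form
p(\omega) = \frac{1}{Z(\boldsymbol{\lambda})}
  \exp\!\Big(-\sum_k \lambda_k f_k(\omega)\Big),
\qquad
Z(\boldsymbol{\lambda}) = \int \exp\!\Big(-\sum_k \lambda_k f_k(\omega)\Big)\, d\omega,
```

with the Lagrange multipliers λ_k fixed by the constraint values c_k.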
Abstract:
The fluorescence of N-dansylgalactosamine [N-(5-dimethylaminonaphthalene-1-sulphonyl)galactosamine] was enhanced 11-fold, with a 25 nm blue-shift in the emission maximum, upon binding to soya-bean agglutinin (SBA). This change was used to determine the association constants and thermodynamic parameters for this interaction. The association constant of 1.51 × 10^6 M^-1 at 20 degrees C indicated very strong binding, which is mainly due to a relatively small entropy value, as revealed by the thermodynamic parameters: ΔG = -34.7 kJ·mol^-1, ΔH = -37.9 kJ·mol^-1 and ΔS = -10.9 J·mol^-1·K^-1. The specific binding of this sugar to SBA shows that the lectin can accommodate a large hydrophobic substituent on the C-2 of galactose. Binding of non-fluorescent ligands, studied by monitoring the fluorescence changes when they are added to a mixture of SBA and N-dansylgalactosamine, indicates that a hydrophobic substituent at the anomeric position increases the affinity of the interaction. The C-6 hydroxy group also stabilizes the binding considerably. The kinetics of binding of N-dansylgalactosamine to SBA, studied by stopped-flow spectrofluorimetry, are consistent with a single-step mechanism and yielded k+1 = 2.4 × 10^5 M^-1·s^-1 and k-1 = 0.2 s^-1 at 20 degrees C. The activation parameters indicate an enthalpically controlled association process.
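The reported thermodynamic parameters are internally consistent, which a short arithmetic check makes explicit: ΔG follows from the association constant via ΔG = -RT ln Ka, and independently from ΔG = ΔH - TΔS.

```python
import numpy as np

R, T = 8.314, 293.15                      # J mol^-1 K^-1; 20 degrees C
Ka = 1.51e6                               # M^-1, reported association constant

dG = -R * T * np.log(Ka) / 1000           # kJ mol^-1
print(f"dG from Ka: {dG:.1f} kJ/mol")     # ~ -34.7, as reported

dH, dS = -37.9, -10.9e-3                  # kJ mol^-1; kJ mol^-1 K^-1
print(f"dH - T*dS:  {dH - T * dS:.1f} kJ/mol")  # ~ -34.7, consistent
```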
Abstract:
The effect of heating rate on the melting and crystallization of polyamide fibres has been examined using the differential scanning calorimetry (DSC) technique. The peak temperatures for melting (Tm) and crystallization (Tk) are suppressed with increasing heating rate, which has been explained on the basis of chain orientation. The heats of melting (ΔHm) and crystallization (ΔHk) have been measured. ΔHm vs. Tm shows a nonlinear dependence, which has been explained on the basis of entropy change. The quantitative difference between the ΔHm and ΔHk values has been explained on the basis of orientation and degradation of the polymer.
Abstract:
Given the limited resources available for weed management, a strategic approach is required to give the best bang for your buck. The current study incorporates: (1) a model-ensemble approach to identify areas of uncertainty and commonality regarding a species' invasive potential, (2) the current distribution of the invasive species, and (3) the connectivity of systems, to identify target regions and focus efforts for more effective management. Uncertainty in the prediction of suitable habitat for H. amplexicaulis (the study species) in Australia was addressed in an ensemble-forecasting approach comparing distributional scenarios from four models (CLIMATCH; CLIMEX; boosted regression trees [BRT]; maximum entropy [Maxent]). Models were built using subsets of occurrence and environmental data. Catchment risk was determined by incorporating habitat suitability, the current abundance and distribution of H. amplexicaulis, and catchment connectivity. Our results indicate geographic differences between the predictions of the different approaches. Despite these differences, a number of catchments in northern, central, and southern Australia were identified as at high risk of invasion or further spread by all models, suggesting they should be given priority for the management of H. amplexicaulis. The study also highlighted the utility of ensemble approaches in identifying areas of uncertainty and commonality regarding a species' invasive potential.
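A minimal sketch of the ensemble summary step, assuming each model's habitat-suitability surface has been exported onto a common grid (the arrays and thresholds below are illustrative stand-ins, not the study's data): rescale each model's output so the models are comparable, then read agreement off the per-cell mean and disagreement off the per-cell spread.

```python
import numpy as np

# Hypothetical suitability surfaces from four SDMs over the same grid,
# standing in for CLIMATCH, CLIMEX, BRT and Maxent outputs.
rng = np.random.default_rng(3)
grid = (200, 300)
preds = rng.uniform(size=(4,) + grid)

# Rescale each model to [0, 1] so the four surfaces are comparable.
mins = preds.min(axis=(1, 2), keepdims=True)
maxs = preds.max(axis=(1, 2), keepdims=True)
scaled = (preds - mins) / (maxs - mins)

consensus = scaled.mean(axis=0)    # commonality: where models agree on suitability
uncertainty = scaled.std(axis=0)   # disagreement among models
high_risk = (consensus > 0.75) & (uncertainty < 0.1)
print(f"{high_risk.sum()} cells flagged as high-consensus, low-uncertainty")
```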
Abstract:
Dengue dynamics are driven by complex interactions between hosts, vectors and viruses that are influenced by environmental and climatic factors. Several studies have examined the role of the El Niño Southern Oscillation (ENSO) in dengue incidence. However, the role of the Indian Ocean Dipole (IOD), a coupled ocean-atmosphere phenomenon in the Indian Ocean that controls summer monsoon rainfall in the Indian region, remains unexplored. Here, we examined the effects of ENSO and the IOD on dengue incidence in Bangladesh. According to the wavelet coherence analysis, there was a very weak association between ENSO, the IOD and dengue incidence, but a highly significant coherence between dengue incidence and local climate variables (temperature and rainfall). However, a distributed lag nonlinear model (DLNM) revealed that the associations between dengue incidence and ENSO or the IOD were comparatively stronger after adjustment for local climate variables, seasonality and trend. The estimated effects were nonlinear for both ENSO and the IOD, with higher relative risks at higher ENSO and IOD values. The weak association between ENSO, the IOD and dengue incidence might be driven by the stronger effects of local climate variables such as temperature and rainfall. Further research is required to disentangle these effects.
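As a simplified stand-in for the distributed-lag step (a full DLNM, as in the R dlnm package, uses a nonlinear cross-basis over exposure and lag; the sketch below uses plain linear lags in a Poisson regression, with entirely synthetic data and illustrative variable names): regress case counts on lagged ENSO terms while adjusting for local climate, seasonality and trend.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical monthly series: dengue case counts plus ENSO index,
# temperature and rainfall covariates.
rng = np.random.default_rng(4)
n = 240
enso = rng.normal(size=n)
temp = 25 + 3 * np.sin(2 * np.pi * np.arange(n) / 12) + rng.normal(size=n)
rain = rng.gamma(2.0, 50.0, size=n)
cases = rng.poisson(20, size=n)

max_lag = 6
lags = np.column_stack([np.roll(enso, k) for k in range(max_lag + 1)])
month = np.arange(n)
season = np.column_stack([np.sin(2 * np.pi * month / 12),
                          np.cos(2 * np.pi * month / 12)])

X = sm.add_constant(np.column_stack([lags, temp, rain, season, month]))
X, y = X[max_lag:], cases[max_lag:]       # drop rows with wrapped lag values
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(fit.params[1:max_lag + 2])          # ENSO coefficients at lags 0..6
```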
Abstract:
Inadvertent climate modification has led to an increase in urban temperatures compared to the surrounding rural areas. The main reason for the temperature rise is the altered partitioning of input net radiation into heat storage and sensible and latent heat fluxes, in addition to the anthropogenic heat flux. The heat storage flux and anthropogenic heat flux have not yet been determined for Helsinki, and they are not directly measurable. By contrast, the turbulent fluxes of sensible and latent heat, in addition to net radiation, can be measured, and the anthropogenic heat flux together with the heat storage flux can be solved for as a residual. As a result, all inaccuracies in the determination of the energy balance components propagate into the residual term, and special attention must be paid to the accurate determination of the components. One source of error in the turbulent fluxes is the attenuation of fluctuations at high frequencies, which can be accounted for by high-frequency spectral corrections. The aim of this study is twofold: to assess the relevance of high-frequency corrections to water vapor fluxes and to assess the temporal variation of the energy fluxes. Turbulent fluxes of sensible and latent heat have been measured at the SMEAR III station, Helsinki, since December 2005 using the eddy covariance technique. In addition, net radiation measurements have been ongoing since July 2007. The calculation methods used in this study consist of widely accepted eddy covariance post-processing methods in addition to Fourier and wavelet analysis. The high-frequency spectral correction using the traditional transfer function method is highly dependent on relative humidity and has an 11% effect on the latent heat flux. This method is based on an assumption of spectral similarity, which is shown not to be valid. A new correction method using wavelet analysis is therefore introduced, and it seems to account for the high-frequency variation deficit. Nevertheless, the resulting wavelet correction remains minimal in contrast to the traditional transfer function correction. The energy fluxes exhibit behavior characteristic of urban environments: the energy input is channeled into sensible heat, as the latent heat flux is restricted by water availability. The monthly mean residual of the energy balance ranges from 30 W m^-2 in summer to -35 W m^-2 in winter, implying heat storage in the ground during summer. Furthermore, the anthropogenic heat flux is estimated to be 50 W m^-2 during winter, when residential heating is important.
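A minimal sketch of the transfer-function correction idea: the measured cospectrum is the true cospectrum attenuated by a low-pass transfer function H(f), so the flux is corrected by the ratio of the unattenuated to the attenuated cospectral integral. The cospectral shape, peak frequency and response time below are generic illustrative choices; in practice they depend on stability, wind speed and measurement height.

```python
import numpy as np

f = np.logspace(-4, 1, 4000)                 # frequency (Hz)
fm = 0.08                                    # cospectral peak frequency (Hz)
co = (1 / fm) / (1 + (f / fm) ** 2)          # idealized cospectrum Co(f)

tau = 0.3                                    # effective sensor response time (s)
H = 1 / (1 + (2 * np.pi * f * tau) ** 2)     # first-order low-pass filter

def integrate(y, x):
    """Trapezoidal rule, to stay independent of NumPy version changes."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

cf = integrate(co, f) / integrate(co * H, f)  # correction factor > 1
print(f"correction factor: {cf:.3f}")
# corrected flux = measured flux * cf
```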
Abstract:
An Al-10.98 pct Si-4.9 pct Ni ternary eutectic alloy was unidirectionally solidified at growth rates from 1.39 μm/sec to 6.95 μm/sec. Binary Al-Ni and Al-Si eutectics prepared from metals of the same purity were also solidified under similar conditions to characterize growth under the conditions of the present study. The NiAl3 phase appeared as fibers in the binary Al-Ni eutectic, and silicon appeared as irregular plates in the binary Al-Si eutectic. However, in the ternary Al-Si-Ni eutectic alloy both the NiAl3 and silicon phases appeared as irregular plates dispersed in the α-Al phase, without any regular repetitive arrangement. The size and spacing of the NiAl3 and Si platelets in cone-shaped colonies decreased with an increase in the growth rate of the ternary eutectic. Examination of specimens quenched during unidirectional solidification indicated that the ternary eutectic grows with a non-planar interface, with both the Si and NiAl3 phases protruding into the liquid. It is concluded that it will be difficult to grow regular ternary eutectic structures even if only one phase has a high entropy of melting. The tensile strength and modulus of the unidirectionally solidified Al-Si-Ni eutectic were lower than those of chill-cast alloys of the same composition, and decreased with a decrease in growth rate. The tensile modulus and strength of the ternary Al-Si-Ni eutectic alloys were greater than those of the binary Al-Si eutectic alloy under similar growth conditions, both in the chill-cast and in the unidirectionally solidified conditions.
Abstract:
Volatility is central to options pricing and risk management. It reflects the uncertainty of investors and the inherent instability of the economy. Time series methods are among the most widely applied scientific methods for analyzing and predicting volatility. Very frequently sampled data contain much valuable information about the different elements of volatility and may ultimately reveal the reasons for time-varying volatility. The use of such ultra-high-frequency data is common to all three essays of the dissertation. The dissertation belongs to the field of financial econometrics. The first essay uses wavelet methods to study the time-varying behavior of scaling laws and long memory in the five-minute volatility series of Nokia on the Helsinki Stock Exchange around the burst of the IT bubble. The essay is motivated by earlier findings which suggest that different scaling laws may apply at intraday time scales and at larger time scales, implying that the so-called annualized volatility depends on the data sampling frequency. The empirical results confirm the appearance of time-varying long memory and different scaling laws that, for a significant part, can be attributed to investor irrationality and to an intraday volatility periodicity called the New York effect. The findings have potentially important consequences for options pricing and risk management, which commonly assume constant memory and scaling. The second essay investigates modelling the duration between trades in stock markets. Durations convey information about investor intentions and provide an alternative view of volatility. Generalizations of standard autoregressive conditional duration (ACD) models are developed to meet needs observed in previous applications of the standard models. According to the empirical results, based on data for actively traded stocks on the New York Stock Exchange and the Helsinki Stock Exchange, the proposed generalization clearly outperforms the standard models and also performs well in comparison to another recently proposed alternative to the standard models. The distribution used to derive the generalization may also prove valuable in other areas of risk management. The third essay studies empirically the effect of decimalization on volatility and market microstructure noise. Decimalization refers to the change from fractional to decimal pricing, and it was carried out on the New York Stock Exchange in January 2001. The methods used here are more accurate than in earlier studies and put more weight on market microstructure. The main result is that decimalization decreased observed volatility by reducing noise variance, especially for the most actively traded stocks. The results aid risk management and market mechanism design.
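To make the scaling-law idea concrete (a generic sketch on synthetic returns, not the dissertation's wavelet estimator): aggregate returns over horizons k and regress log E|r_k| on log k. A slope of 0.5 is the square-root (Brownian) scaling law; a slope that differs, or that drifts over time, is the kind of deviation the first essay attributes to long memory and intraday periodicity.

```python
import numpy as np

rng = np.random.default_rng(5)
r = rng.standard_t(df=4, size=5 * 390 * 252) * 1e-4   # synthetic 1-min returns

horizons = np.array([1, 5, 15, 30, 60, 120, 390])     # minutes
mean_abs = []
for k in horizons:
    n = r.size // k
    rk = r[:n * k].reshape(n, k).sum(axis=1)          # k-minute returns
    mean_abs.append(np.abs(rk).mean())

# Scaling law E|r_k| ~ k^H: estimate H from a log-log regression.
slope, _ = np.polyfit(np.log(horizons), np.log(mean_abs), 1)
print(f"scaling exponent: {slope:.3f}")               # ~0.5 for i.i.d. returns
```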
Abstract:
A new method for the decomposition of composite signals is presented. It is shown that the high-frequency portion of the composite signal spectrum carries information on the echo structure. The proposed technique does not assume the shape of the basic wavelet and does not place any restrictions on the amplitudes and arrival times of the echoes in the composite signal. In the absence of noise, any desired resolution can be obtained. The effects of the sampling rate and the frequency window function on echo resolution are discussed. A voiced speech segment is considered as an example of a composite signal to demonstrate the application of the decomposition technique.
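The underlying observation is that an echo imprints a periodic ripple on the signal spectrum whose period encodes the echo delay. The classical cepstral baseline below illustrates this (it is not the abstract's method, which avoids assuming the basic wavelet's shape): the delay of a synthetic echo shows up as a peak in the cepstrum.

```python
import numpy as np

fs = 8000                                     # sampling rate (Hz)
t = np.arange(2048) / fs
pulse = np.exp(-((t - 0.01) * 2000) ** 2) * np.cos(2 * np.pi * 500 * t)

delay = int(0.015 * fs)                       # echo arrives 15 ms later
sig = pulse.copy()
sig[delay:] += 0.6 * pulse[:-delay]           # composite = pulse + scaled echo

# The echo multiplies the spectrum by a cos ripple; taking log |spectrum|
# turns that into an additive periodic term, visible as a cepstral peak.
spectrum = np.abs(np.fft.rfft(sig))
cepstrum = np.fft.irfft(np.log(spectrum + 1e-12))
peak = np.argmax(cepstrum[20:len(cepstrum) // 2]) + 20  # skip low quefrencies
print(f"estimated delay: {peak / fs * 1000:.1f} ms")    # ~15 ms
```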
Abstract:
We study the properties of walls of marginal stability for BPS decays in a class of N = 2 theories. These theories arise in N = 2 string compactifications obtained as freely acting orbifolds of N = 4 theories; such theories include the STU model and the FHSV model. The cross sections of these walls for a generic decay in the axion-dilaton plane reduce to lines or circles. From the continuity properties of walls of marginal stability we show that the central charges of BPS states do not vanish in the interior of the moduli space. Given a charge vector of a BPS state corresponding to a large black hole in these theories, we show that all walls of marginal stability intersect at the same point in the lower half of the axion-dilaton plane. We isolate a class of decays whose walls of marginal stability always lie in a region bounded by walls formed by decays to small black holes. This enables us to isolate a region in moduli space in which no decays within this class occur. We then study entropy enigma decays for such models and show that for generic values of the moduli, that is, when the moduli are of order one compared to the charges, entropy enigma decays do not occur in these models.
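For reference, the standard definition of a wall of marginal stability (generic to N = 2 theories; nothing here is specific to the models above): for a decay Γ → Γ₁ + Γ₂, the wall is the locus in moduli space where the central-charge phases align,

```latex
W(\Gamma_1,\Gamma_2) =
\big\{\, t \;:\; \arg Z(\Gamma_1;t) = \arg Z(\Gamma_2;t) \,\big\},
\qquad \text{equivalently} \quad
\operatorname{Im}\!\big( Z(\Gamma_1;t)\,\overline{Z(\Gamma_2;t)} \big) = 0,
```

on which |Z(Γ₁ + Γ₂)| = |Z(Γ₁)| + |Z(Γ₂)|, so the bound state becomes marginally unstable against the decay.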
Abstract:
Multiresolution synthetic aperture radar (SAR) image formation has been proven to be beneficial in a variety of applications, such as improved imaging and target detection as well as speckle reduction. SAR signal processing, traditionally carried out in the Fourier domain, has inherent limitations in the context of image formation at hierarchical scales. We present a generalized approach to the formation of multiresolution SAR images using a biorthogonal shift-invariant discrete wavelet transform (SIDWT) in both the range and azimuth directions. Particularly in azimuth, the inherent subband decomposition property of the wavelet packet transform is introduced to produce multiscale complex matched filtering without involving any approximations. This generalized approach also includes the formulation of multilook processing within the discrete wavelet transform (DWT) paradigm. The efficiency of the algorithm when executed in parallel form to generate hierarchical-scale SAR images is shown. Analytical results and sample imagery of diffuse backscatter are presented to validate the method.
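A minimal sketch of the shift-invariant decomposition ingredient using PyWavelets' stationary wavelet transform (a full SAR processor would apply this within range and azimuth matched filtering; the complex range line below is synthetic, and the real/imaginary split is needed because pywt.swt operates on real arrays):

```python
import numpy as np
import pywt

rng = np.random.default_rng(6)
range_line = rng.normal(size=1024) + 1j * rng.normal(size=1024)

# Undecimated (stationary) DWT with a biorthogonal wavelet: every band
# keeps the full signal length, so detail coefficients at all scales
# stay sample-aligned -- the shift-invariance that keeps multiresolution
# image formation consistent across scales.
level = 3
coeffs_re = pywt.swt(range_line.real, "bior2.2", level=level)
coeffs_im = pywt.swt(range_line.imag, "bior2.2", level=level)

for b, ((ca_r, cd_r), (ca_i, cd_i)) in enumerate(zip(coeffs_re, coeffs_im), 1):
    band = cd_r + 1j * cd_i                 # complex detail band
    print(f"band {b}: length {band.size}")  # 1024 at every scale
```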
Abstract:
Glycosaminoglycans (GAGs) are complex highly charged linear polysaccharides that have a variety of roles in biological processes. We report the first use of molecular dynamics (MD) free energy calculations using the MM/PBSA method to investigate the binding of GAGs to protein molecules, namely the platelet endothelial cell adhesion molecule 1 (PECAM-1) and annexin A2. Calculations of the free energy of the binding of heparin fragments of different sizes reveal the existence of a region of low GAG-binding affinity in domains 5-6 of PECAM-1 and a region of high affinity in domains 2-3, consistent with experimental data and ligand-protein docking studies. A conformational hinge movement between domains 2 and 3 was observed, which allows the binding of heparin fragments of increasing size (pentasaccharides to octasaccharides) with an increasingly higher binding affinity. Similar simulations of the binding of a heparin fragment to annexin A2 reveal the optimization of electrostatic and hydrogen bonding interactions with the protein and protein-bound calcium ions. In general, these free energy calculations reveal that the binding of heparin to protein surfaces is dominated by strong electrostatic interactions for longer fragments, with equally important contributions from van der Waals interactions and vibrational entropy changes, against a large unfavorable desolvation penalty due to the high charge density of these molecules.
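The MM/PBSA decomposition the abstract relies on sums ensemble-averaged molecular-mechanics terms, continuum (Poisson-Boltzmann plus surface-area) solvation terms and an entropy estimate. The arithmetic sketch below uses entirely illustrative numbers (not the paper's results) to show the balance the abstract describes: strong favorable electrostatics and van der Waals terms set against a large desolvation penalty and an entropy loss.

```python
# All values in kcal/mol; illustrative only.
dE_elec  = -310.0   # Coulomb term: strong for highly charged heparin
dE_vdw   =  -45.0   # van der Waals term
dG_polar = +320.0   # PB desolvation penalty opposing binding
dG_np    =   -6.0   # nonpolar (surface-area) solvation term
TdS      =  -15.0   # T * dS: vibrational/configurational entropy loss

# dG_bind = dE_MM + dG_solv - T*dS
dG_bind = dE_elec + dE_vdw + dG_polar + dG_np - TdS
print(f"dG_bind = {dG_bind:.1f} kcal/mol")   # -26.0: net favorable binding
```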