69 results for Empirical Mode Decomposition, vibration-based analysis, damage detection, signal decomposition
Abstract:
Empirical mode decomposition (EMD) is a data-driven method used to decompose data into oscillatory components. This paper examines to what extent the EMD algorithm, as defined, might be sensitive to the numerical data format. Two key issues with EMD are its stability and computational speed. This paper shows that, for a given signal, there is no significant difference between results obtained with single-precision (binary32) and double-precision (binary64) floating-point arithmetic. This implies that there is no benefit in increasing floating-point precision when performing EMD on devices optimised for the single-precision format, such as graphical processing units (GPUs).
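A minimal sketch of the kind of comparison described, using the third-party PyEMD package (installed as "EMD-signal") rather than the paper's own implementation. Casting the input to binary32 and back only emulates a reduced-precision input; the paper presumably runs the full sifting in each precision, so this is illustrative only.

```python
import numpy as np
from PyEMD import EMD  # third-party package, installed via "pip install EMD-signal"

# Synthetic test signal: two tones plus a slow trend.
t = np.linspace(0, 1, 4096)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t) + 0.2 * t

# Decompose the same signal stored in binary64 and (truncated to) binary32.
imfs64 = EMD().emd(signal.astype(np.float64))
imfs32 = EMD().emd(signal.astype(np.float32).astype(np.float64))

# Compare the IMFs that both decompositions share.
k = min(len(imfs64), len(imfs32))
for i in range(k):
    rel_err = np.linalg.norm(imfs64[i] - imfs32[i]) / np.linalg.norm(imfs64[i])
    print(f"IMF {i}: relative difference {rel_err:.2e}")
```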
Abstract:
The existing dual-rate blind linear detectors, which operate in either the low-rate (LR) or the high-rate (HR) mode, are not strictly blind in the HR mode and lack theoretical analysis. This paper proposes subspace-based LR and HR blind linear detectors, i.e., blind decorrelating detectors (BDD) and blind MMSE detectors (BMMSED), for synchronous DS/CDMA systems. To detect an LR data bit in the HR mode, an effective weighting strategy is proposed. Theoretical analyses of the performance of the proposed detectors are carried out. It is proved that the bit-error-rate performance of the LR-BDD is superior to that of the HR-BDD and that the near-far resistance of the LR blind linear detectors outperforms that of their HR counterparts. The extension to asynchronous systems is also described. Simulation results show that the adaptive dual-rate BMMSED outperform the corresponding non-blind dual-rate decorrelators proposed by Saquib, Yates and Mandayam (see Wireless Personal Communications, vol. 9, p.197-216, 1998).
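For orientation, here is a minimal sketch of a conventional (non-blind) linear MMSE detector for a synchronous, single-rate CDMA model, with made-up spreading codes, amplitudes and noise level. The paper's subspace-based blind detectors estimate the required statistics from the received signal and handle dual-rate traffic, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(0)

K, N = 4, 16                       # users, spreading gain (chips per bit)
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)  # spreading codes (columns)
A = np.diag([1.0, 2.0, 4.0, 8.0])  # received amplitudes (a near-far scenario)
sigma = 0.3                        # noise standard deviation

b = rng.choice([-1.0, 1.0], size=K)              # transmitted bits
r = S @ A @ b + sigma * rng.standard_normal(N)   # received chip vector

# Linear MMSE filter for user 0: w = (S A^2 S^T + sigma^2 I)^{-1} s_0
R = S @ A @ A @ S.T + sigma**2 * np.eye(N)
w = np.linalg.solve(R, S[:, 0])
b0_hat = np.sign(w @ r)
print("user 0 sent:", b[0], "detected:", b0_hat)
```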
Abstract:
One of the primary goals of the Center for Integrated Space Weather Modeling (CISM) effort is to assess and improve prediction of the solar wind conditions in near‐Earth space, arising from both quasi‐steady and transient structures. We compare 8 years of L1 in situ observations to predictions of the solar wind speed made by the Wang‐Sheeley‐Arge (WSA) empirical model. The mean‐square error (MSE) between the observations and the model predictions is used to reach a number of useful conclusions: there is no systematic lag in the WSA predictions, the MSE is found to be highest at solar minimum and lowest during the rise to solar maximum, and the optimal lead time for 1 AU solar wind speed predictions is found to be 3 days. However, MSE is shown to frequently be an inadequate “figure of merit” for assessing solar wind speed predictions. A complementary, event‐based analysis technique is developed in which high‐speed enhancements (HSEs) are systematically selected and associated from observed and model time series. The WSA model is validated using comparisons of the number of hit, missed, and false HSEs, along with the timing and speed magnitude errors between the forecasted and observed events. Morphological differences between the different HSE populations are investigated to aid interpretation of the results and improvements to the model. Finally, by defining discrete events in the time series, model predictions from above and below the ecliptic plane can be used to estimate an uncertainty in the predicted HSE arrival times.
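As an illustration of the systematic-lag check via MSE described above, here is a minimal sketch on synthetic stand-in series (not WSA output); the lag scan and the MSE definition are generic assumptions, not the paper's exact procedure.

```python
import numpy as np

def mse_at_lag(observed, predicted, lag):
    """Mean-square error after shifting the prediction by `lag` samples."""
    if lag > 0:
        o, p = observed[lag:], predicted[:-lag]
    elif lag < 0:
        o, p = observed[:lag], predicted[-lag:]
    else:
        o, p = observed, predicted
    return np.mean((o - p) ** 2)

# Hypothetical daily solar wind speed series (km/s) and a stand-in "model" series.
rng = np.random.default_rng(1)
observed = 400 + 100 * np.abs(np.sin(np.arange(365) / 20)) + 20 * rng.standard_normal(365)
predicted = observed + 30 * rng.standard_normal(365)

# Scan lags to check for a systematic offset between model and data.
lags = range(-5, 6)
best = min(lags, key=lambda L: mse_at_lag(observed, predicted, L))
print("MSE-minimising lag (days):", best)
```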
Abstract:
Although Mar del Plata is the most important city on the Atlantic coast of Argentina, the mosquitoes inhabiting this area are almost uncharacterized. To improve our knowledge of their distribution, we sampled specimens from natural populations. After morphological identification based on taxonomic keys, DNA sequences of the small ribosomal subunit (18S rDNA) and cytochrome c oxidase I (COI) genes were obtained from native species, and a phylogenetic analysis of these sequences was performed. Fourteen species from the genera Uranotaenia, Culex, Ochlerotatus and Psorophora were found and identified. Our 18S rDNA- and COI-based analysis recovers relationships among groups at the supra-species level that are in concordance with mosquito taxonomy. The introduction and spread of vectors, and of the diseases carried by them, have not been documented in Mar del Plata, but some of the species found in this study have been reported as pathogen vectors.
Abstract:
Observations of turbulent fluxes of momentum, heat and moisture from low-level aircraft data are presented. Fluxes are calculated using the eddy covariance technique from flight legs typically ∼40 m above the sea surface. Over 400 runs of 2 min (∼12 km) from 26 flights are evaluated. Flight legs are mainly from around the British Isles although a small number are from around Iceland and Norway. Sea-surface temperature (SST) observations from two on-board sensors (the ARIES interferometer and a Heimann radiometer) and a satellite-based analysis (OSTIA) are used to determine an improved SST estimate. Most of the observations are from moderate to strong wind speed conditions, the latter being a regime short of validation data for the bulk flux algorithms that are necessary for numerical weather prediction and climate models. Observations from both statically stable and unstable atmospheric boundary-layer conditions are presented. There is a particular focus on several flights made as part of the DIAMET (Diabatic influence on mesoscale structures in extratropical storms) project. Observed neutral exchange coefficients are in the same range as previous studies, although higher for the momentum coefficient, and are broadly consistent with the COARE 3.0 bulk flux algorithm, as well as the surface exchange schemes used in the ECMWF and Met Office models. Examining the results as a function of aircraft heading shows higher fluxes and exchange coefficients in the across-wind direction, compared to along-wind (although this comparison is limited by the relatively small number of along-wind legs). A multi-resolution spectral decomposition technique demonstrates a lengthening of spatial scales in along-wind variances in along-wind legs, implying the boundary-layer eddies are elongated in the along-wind direction. The along-wind runs may not be able to adequately capture the full range of turbulent exchange that is occurring because elongation places the largest eddies outside of the run length.
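A minimal sketch of the eddy covariance calculation underlying such flux estimates, on made-up numbers; fluctuations are taken about the run mean here, whereas the study may apply detrending and additional corrections.

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Kinematic flux <w'c'> from simultaneous vertical-wind and scalar samples."""
    w_prime = w - np.mean(w)   # turbulent fluctuation of vertical wind (m/s)
    c_prime = c - np.mean(c)   # fluctuation of the transported quantity
    return np.mean(w_prime * c_prime)

# Hypothetical 2-minute run sampled at 32 Hz.
rng = np.random.default_rng(2)
n = 32 * 120
w = 0.4 * rng.standard_normal(n)                          # vertical wind (m/s)
theta = 288.0 + 0.2 * rng.standard_normal(n) + 0.1 * w    # temperature (K), correlated with w

print("kinematic heat flux <w'T'> =", eddy_covariance_flux(w, theta), "K m/s")
```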
Abstract:
A traditional method of validating the performance of a flood model when remotely sensed data of the flood extent are available is to compare the predicted flood extent to that observed. The performance measure employed often uses areal pattern-matching to assess the degree to which the two extents overlap. Recently, remote sensing of flood extents using synthetic aperture radar (SAR) and airborne scanning laser altimetry (LIDAR) has made the synoptic measurement of water surface elevations along flood waterlines more straightforward, and this has emphasised the possibility of using alternative performance measures based on height. This paper considers the advantages that can accrue from using a performance measure based on waterline elevations rather than one based on areal patterns of wet and dry pixels. The two measures were compared for their ability to estimate flood inundation uncertainty maps from a set of model runs carried out to span the acceptable model parameter range in a GLUE-based analysis. A 1 in 5-year flood on the Thames in 1992 was used as a test event. As is typical for UK floods, only a single SAR image of observed flood extent was available for model calibration and validation. A simple implementation of a two-dimensional flood model (LISFLOOD-FP) was used to generate model flood extents for comparison with that observed. The performance measure based on height differences of corresponding points along the observed and modelled waterlines was found to be significantly more sensitive to the channel friction parameter than the measure based on areal patterns of flood extent. The former was able to restrict the parameter range of acceptable model runs and hence reduce the number of runs necessary to generate an inundation uncertainty map. A result of this was that there was less uncertainty in the final flood risk map. The uncertainty analysis included the effects of uncertainties in the observed flood extent as well as in model parameters. The height-based measure was found to be more sensitive when increased heighting accuracy was achieved by requiring that observed waterline heights varied slowly along the reach. The technique allows for the decomposition of the reach into sections, with different effective channel friction parameters used in different sections, which in this case resulted in lower r.m.s. height differences between observed and modelled waterlines than those achieved by runs using a single friction parameter for the whole reach. However, a validation of the modelled inundation uncertainty using the calibration event showed a significant difference between the uncertainty map and the observed flood extent. While this was true for both measures, the difference was especially significant for the height-based one. This is likely to be due to the conceptually simple flood inundation model and the coarse application resolution employed in this case. The increased sensitivity of the height-based measure may lead to an increased onus being placed on the model developer in the production of a valid model.
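To make the two kinds of performance measure concrete, here is a minimal sketch, on toy data, of an areal pattern-matching score and a height-based r.m.s. difference along the waterline; the exact measures and weighting used in the GLUE analysis may differ.

```python
import numpy as np

def areal_fit(observed_wet, modelled_wet):
    """Pattern-matching measure: wet-area intersection over union on boolean wet/dry grids."""
    inter = np.logical_and(observed_wet, modelled_wet).sum()
    union = np.logical_or(observed_wet, modelled_wet).sum()
    return inter / union

def waterline_height_rmse(observed_z, modelled_z):
    """RMS difference (m) between observed and modelled water-surface elevations
    at corresponding waterline points."""
    return np.sqrt(np.mean((np.asarray(observed_z) - np.asarray(modelled_z)) ** 2))

# Toy example: a 4x4 grid of wet/dry pixels and five waterline points.
obs = np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]], dtype=bool)
mod = np.array([[1, 1, 1, 0], [1, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]], dtype=bool)
print("areal fit:", areal_fit(obs, mod))
print("height RMSE:", waterline_height_rmse([10.2, 10.1, 9.9, 9.8, 9.7],
                                             [10.3, 10.0, 9.9, 9.9, 9.6]))
```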
Abstract:
A new heuristic for the Steiner Minimal Tree problem is presented here. The method described is based on the detection of particular sets of nodes in networks, the “Hot Spot” sets, which are used to obtain better approximations of the optimal solutions. An algorithm is also proposed which is capable of improving the solutions obtained by classical heuristics, by means of a stirring process of the nodes in solution trees. Classical heuristics and an enumerative method are used as comparison terms in the experimental analysis, which demonstrates the effectiveness of the heuristic discussed in this paper.
Abstract:
A new heuristic for the Steiner minimal tree problem is presented. The method described is based on the detection of particular sets of nodes in networks, the “hot spot” sets, which are used to obtain better approximations of the optimal solutions. An algorithm is also proposed which is capable of improving the solutions obtained by classical heuristics, by means of a stirring process of the nodes in solution trees. Classical heuristics and an enumerative method are used as comparison terms in the experimental analysis, which demonstrates the capability of the heuristic discussed in this paper.
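The “hot spot” heuristic itself cannot be reconstructed from the abstract, but a classical baseline of the kind it is compared against can be sketched; the example below uses the metric-closure/minimum-spanning-tree approximation available in NetworkX on a made-up weighted network.

```python
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

# A small weighted network and a set of terminals to be connected.
G = nx.Graph()
edges = [("a", "b", 2), ("b", "c", 2), ("a", "c", 5),
         ("b", "d", 1), ("c", "e", 3), ("d", "e", 2), ("d", "f", 4)]
G.add_weighted_edges_from(edges)
terminals = ["a", "e", "f"]

# Classical approximation, usable as a starting solution for an
# improvement step such as the one described in the abstract.
T = steiner_tree(G, terminals, weight="weight")
print(sorted(T.edges(data="weight")))
print("total weight:", sum(w for _, _, w in T.edges(data="weight")))
```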
Abstract:
We introduce a procedure for association-based analysis of nuclear families that allows for dichotomous and more general measurements of phenotype and inclusion of covariate information. Standard generalized linear models are used to relate phenotype and its predictors. Our test procedure, based on the likelihood ratio, unifies the estimation of all parameters through the likelihood itself and yields maximum likelihood estimates of the genetic relative risk and interaction parameters. Our method has advantages in modelling the covariate and gene-covariate interaction terms over recently proposed conditional score tests that include covariate information via a two-stage modelling approach. We apply our method in a study of human systemic lupus erythematosus and the C-reactive protein that includes sex as a covariate.
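A minimal sketch of the likelihood-ratio machinery described, using statsmodels GLMs on simulated data; it ignores the family structure and conditioning that the actual test requires, and the variable names and effect sizes are illustrative assumptions only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2

# Hypothetical data: affection status, genotype score (0/1/2 risk alleles),
# a covariate (sex) and their interaction.
rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "genotype": rng.integers(0, 3, n),
    "sex": rng.integers(0, 2, n),
})
logit = -1.0 + 0.5 * df["genotype"] + 0.3 * df["sex"] + 0.2 * df["genotype"] * df["sex"]
df["affected"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

def fit(formula):
    return sm.GLM.from_formula(formula, data=df,
                               family=sm.families.Binomial()).fit()

full = fit("affected ~ genotype + sex + genotype:sex")
reduced = fit("affected ~ sex")            # no genetic effect or interaction

lr = 2 * (full.llf - reduced.llf)          # likelihood-ratio statistic
dof = full.df_model - reduced.df_model
print("LR =", lr, " p =", chi2.sf(lr, dof))
```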
Abstract:
DIGE is a protein labelling and separation technique allowing quantitative proteomics of two or more samples by optical fluorescence detection of differentially labelled proteins that are electrophoretically separated on the same gel. DIGE is an alternative to quantitation by MS-based methodologies and can circumvent their analytical limitations in areas such as intact protein analysis, (linear) detection over a wide range of protein abundances and, theoretically, applications where extreme sensitivity is needed. Thus, in quantitative proteomics DIGE is usually complementary to MS-based quantitation and has some distinct advantages. This review describes the basics of DIGE and its unique properties and compares it to MS-based methods in quantitative protein expression analysis.
Abstract:
Different types of mental activity are utilised as an input in Brain-Computer Interface (BCI) systems. One such activity type is based on Event-Related Potentials (ERPs). The characteristics of ERPs are not visible in single trials, thus averaging over a number of trials is necessary before the signals become usable. An improvement in ERP-based BCI operation and system usability could be obtained if the use of single-trial ERP data was possible. The method of Independent Component Analysis (ICA) can be utilised to separate single-trial recordings of ERP data into components that correspond to ERP characteristics, background electroencephalogram (EEG) activity and other components of non-cerebral origin. Choice of specific components and their use to reconstruct “denoised” single-trial data could improve the signal quality, thus allowing the successful use of single-trial data without the need for averaging. This paper assesses single-trial ERP signals reconstructed using a selection of estimated components from the application of ICA on the raw ERP data. Signal improvement is measured using contrast-to-noise measures. It was found that such analysis improves the signal quality in all single trials.
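A minimal sketch of the ICA select-and-reconstruct idea on simulated data, using scikit-learn's FastICA; the component-selection rule (correlation with a template waveform) and the data are assumptions for illustration, and the paper's contrast-to-noise evaluation is not reproduced.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical single-trial data: n_samples time points x n_channels.
rng = np.random.default_rng(4)
n_channels, n_samples = 8, 256
t = np.arange(n_samples) / 256.0
erp = np.exp(-((t - 0.3) ** 2) / 0.002)            # a crude P300-like bump
X = rng.standard_normal((n_samples, n_channels))    # background "EEG"
X[:, :3] += erp[:, None]                            # the bump projects onto 3 channels

ica = FastICA(n_components=n_channels, random_state=0)
S = ica.fit_transform(X)     # estimated sources, shape (n_samples, n_components)

# Keep only the component most correlated with the expected ERP shape,
# zero the rest, and reconstruct a "denoised" single trial.
scores = [abs(np.corrcoef(S[:, k], erp)[0, 1]) for k in range(S.shape[1])]
keep = int(np.argmax(scores))
S_clean = np.zeros_like(S)
S_clean[:, keep] = S[:, keep]
X_denoised = ica.inverse_transform(S_clean)
print("kept component", keep, "with |corr| =", round(scores[keep], 3))
```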
Abstract:
Two different ways of performing low-energy electron diffraction (LEED) structure determinations for the p(2 x 2) structure of oxygen on Ni{111} are compared: a conventional LEED-IV structure analysis using integer- and fractional-order IV-curves collected at normal incidence, and an analysis using only integer-order IV-curves collected at three different angles of incidence. A clear discrimination between different adsorption sites can be achieved by the latter approach as well as by the first, and the best-fit structures of both analyses are within each other's error bars (all less than 0.1 angstrom). The conventional analysis is more sensitive to the adsorbate coordinates and the lateral parameters of the substrate atoms, whereas the integer-order-based analysis is more sensitive to the vertical coordinates of the substrate atoms. Adsorbate-related contributions to the intensities of integer-order diffraction spots are independent of the state of long-range order in the adsorbate layer. These results show, therefore, that for lattice-gas disordered adsorbate layers, for which only integer-order spots are observed, similar accuracy and reliability can be achieved as for ordered adsorbate layers, provided the data set is large enough.
Abstract:
This study proposes a utility-based framework for the determination of optimal hedge ratios (OHRs) that can allow for the impact of higher moments on hedging decisions. We examine the entire hyperbolic absolute risk aversion family of utilities, which includes quadratic, logarithmic, power, and exponential utility functions. We find that for both moderate and large spot (commodity) exposures, the performance of out-of-sample hedges constructed allowing for nonzero higher moments is better than the performance of the simpler OLS hedge ratio. The picture is, however, not uniform across our seven spot commodities, as there is one instance (cotton) for which the modeling of higher moments decreases welfare out-of-sample relative to the simpler OLS. We support our empirical findings by a theoretical analysis of optimal hedging decisions, and we uncover a novel link between OHRs and the minimax hedge ratio, that is, the ratio which minimizes the largest loss of the hedged position.
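The OLS benchmark referred to is the minimum-variance hedge ratio, i.e. the slope of a regression of spot returns on futures returns; a minimal sketch on simulated returns follows (the utility-based OHRs allowing for higher moments are not reproduced here).

```python
import numpy as np

def ols_hedge_ratio(spot_returns, futures_returns):
    """Minimum-variance (OLS) hedge ratio: slope of spot returns on futures returns."""
    s = np.asarray(spot_returns)
    f = np.asarray(futures_returns)
    return np.cov(s, f, ddof=1)[0, 1] / np.var(f, ddof=1)

# Hypothetical weekly returns for a commodity and its futures contract.
rng = np.random.default_rng(5)
f = 0.02 * rng.standard_normal(260)
s = 0.9 * f + 0.01 * rng.standard_normal(260)   # spot tracks futures imperfectly

h = ols_hedge_ratio(s, f)
hedged = s - h * f                               # return of the hedged position per unit spot
print("OLS hedge ratio:", round(h, 3), " hedged-return variance:", np.var(hedged, ddof=1))
```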
Abstract:
Risk and uncertainty are, to say the least, poorly considered by most individuals involved in real estate analysis, in both development and investment appraisal. Surveyors continue to express 'uncertainty' about the value (risk) of using relatively objective methods of analysis to account for these factors. These methods attempt to identify the risk elements more explicitly; conventionally this is done by deriving probability distributions for the uncontrolled variables in the system. A suggested 'new' way of "being able to express our uncertainty or slight vagueness about some of the qualitative judgements and not entirely certain data required in the course of the problem..." is the application of fuzzy logic. This paper discusses and demonstrates the terminology and methodology of fuzzy analysis. In particular, it compares these procedures with those used in 'conventional' risk analysis approaches and critically investigates whether a fuzzy approach offers an alternative to the use of probability-based analysis for dealing with aspects of risk and uncertainty in real estate analysis.
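As a minimal illustration of how a vague judgement can be encoded with a fuzzy membership function rather than a probability distribution, the sketch below uses a triangular fuzzy number; the example judgement and its bounds are assumptions, not taken from the paper.

```python
def triangular_membership(x, a, b, c):
    """Membership grade of x in the triangular fuzzy number (a, b, c):
    0 outside [a, c], rising linearly to 1 at the most plausible value b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical judgement: "rental growth is around 3%, certainly between 1% and 6%".
for growth in (1.0, 2.0, 3.0, 4.5, 6.0):
    print(growth, "->", round(triangular_membership(growth, 1.0, 3.0, 6.0), 2))
```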
Abstract:
We develop a new governance perspective on port–hinterland linkages and related port impacts. Many stakeholders in a port's hinterland now demand tangible economic benefits from port activities, as a precondition for supporting port expansion and infrastructural investments. We use a governance lens to assess this farsighted contracting challenge. We find that most contemporary economic impact assessments of port investment projects pay scant attention to the contractual relationship challenges in port–hinterland relationships. In contrast, we focus explicitly on the spatial distribution of such impacts and the related contractual relationship issues facing port authorities or port users and their stakeholders in the port hinterland. We introduce a new concept, the Port Hinterland Impact (PHI) matrix, which focuses explicitly on the spatial distribution of port impacts and related contractual relationship challenges. The PHI matrix offers insight into port impacts using two dimensions: logistics dedicatedness, as an expression of Williamsonian asset specificity in the sphere of logistics contractual relationships, and geographic reach, with a longer reach typically reflecting the need for more complex contracting to overcome 'distance' challenges with external stakeholders. We use the PHI matrix in our empirical, governance-based analysis of contractual relationships between the port authorities in Antwerp and Zeebrugge, and their respective stakeholders.