932 results for Current decomposition


Relevance: 20.00%

Abstract:

This paper discusses ECG classification after parametrizing the ECG waveforms in the wavelet domain. The aim of the work is to develop an accurate classification algorithm that can be used to diagnose cardiac beat abnormalities detected using a mobile platform such as a smartphone. Continuous-time recurrent neural network classifiers are considered for this task. Records from the European ST-T Database are decomposed in the wavelet domain using discrete wavelet transform (DWT) filter banks, and the resulting DWT coefficients are filtered and used as inputs for training the neural network classifier. Advantages of the proposed methodology are the reduced memory requirement for the signals, which is of relevance to mobile applications, as well as an improvement in the generalization ability of the neural network due to the more parsimonious representation of the signal at its inputs.
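The kind of multi-level DWT decomposition described above can be sketched with the Haar filter pair, the simplest orthogonal wavelet (the abstract does not fix a wavelet family, so this choice and all names below are illustrative):

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: returns (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # high-pass (detail)
    return a, d

def haar_decompose(x, levels):
    """Multi-level decomposition: repeatedly split the approximation band."""
    coeffs = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        coeffs.append(d)
    coeffs.append(a)
    return coeffs

# A 256-sample signal reduced to 3 detail bands plus a coarse approximation:
signal = np.sin(np.linspace(0, 8 * np.pi, 256))
bands = haar_decompose(signal, 3)
print([len(b) for b in bands])  # [128, 64, 32, 32]
```

The halving of coefficient counts at each level is what gives the memory saving the paper mentions: keeping only the coarser bands yields a compact representation suitable for a mobile device.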

Relevance: 20.00%

Abstract:

This paper discusses ECG signal classification after parametrizing the ECG waveforms in the wavelet domain. Signal decomposition using perfect reconstruction quadrature mirror filter banks can provide a very parsimonious representation of ECG signals. In the current work, the filter parameters are adjusted by a numerical optimization algorithm in order to minimize a cost function associated with the filter cut-off sharpness. The goal is to achieve a better compromise between frequency selectivity and time resolution at each decomposition level than standard orthogonal filter banks such as those of the Daubechies and Coiflet families. Our aim is to optimally decompose the signals in the wavelet domain so that they can subsequently be used as inputs for training a neural network classifier.
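The optimization idea can be sketched with the classical one-parameter family of length-4 orthonormal filters, which gives perfect reconstruction for every parameter value (theta = pi/3 recovers the Daubechies D4 filter). The stopband-energy cost and grid search below are simplified stand-ins for the paper's cut-off-sharpness cost and numerical optimizer, not its actual method:

```python
import numpy as np

def length4_orthogonal_filter(theta):
    """One-parameter family of length-4 orthonormal low-pass filters.

    Every theta yields a perfect-reconstruction orthogonal two-channel
    bank; theta = pi/3 recovers the Daubechies D4 filter.
    """
    c, s = np.cos(theta), np.sin(theta)
    return np.array([1 - c + s, 1 + c + s, 1 + c - s, 1 - c - s]) / (2 * np.sqrt(2))

def stopband_energy(h, edge=0.6 * np.pi, n=512):
    """Cost: mean squared magnitude of the low-pass response in the stopband."""
    w = np.linspace(edge, np.pi, n)
    H = np.exp(-1j * np.outer(w, np.arange(len(h)))) @ h
    return np.mean(np.abs(H) ** 2)

# Crude numerical optimisation: grid search over the single filter angle.
thetas = np.linspace(0.0, np.pi, 2001)
costs = np.array([stopband_energy(length4_orthogonal_filter(t)) for t in thetas])
best = thetas[int(np.argmin(costs))]
print(best)
```

Because the parametrization is orthonormal by construction, the optimizer can move freely in theta without ever violating the perfect-reconstruction constraint, which is the key convenience of such parametrized filter banks.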

Relevance: 20.00%

Abstract:

The passage of an electric current through graphite or few-layer graphene can result in a striking structural transformation, but there is disagreement about the precise nature of this process. Some workers have interpreted the phenomenon in terms of the sublimation and edge reconstruction of essentially flat graphitic structures. An alternative explanation is that the transformation actually involves a change from a flat to a three-dimensional structure. Here we describe detailed studies of carbon produced by the passage of a current through graphite which provide strong evidence that the transformed carbon is indeed three-dimensional. The evidence comes primarily from images obtained in the scanning transmission electron microscope using the technique of high-angle annular dark-field imaging, and from a detailed analysis of electron energy loss spectra. We discuss the possible mechanism of the transformation, and consider potential applications of “three-dimensional bilayer graphene”.

Relevance: 20.00%

Abstract:

We discuss substorm observations made near 2100 magnetic local time (MLT) on March 7, 1991, in a collaborative study involving data from the European Incoherent Scatter radar, all-sky camera data, and magnetometer data from the Tromsø Auroral Observatory, the U.K. Sub-Auroral Magnetometer Network (SAMNET) and the IMAGE magnetometer chain. We conclude that for the substorm studied a plasmoid was not pinched off until at least 10 min after onset at the local time of the observations (2100 MLT) and that the main substorm electrojet expanded westward over this local time 14 min after onset. In the late growth phase/early expansion phase, we observed southward drifting arcs probably moving faster than the background plasma. Similar southward moving arcs in the recovery phase moved at a speed which does not appear to be significantly different from the measured plasma flow speed. We discuss these data in terms of the “Kiruna conjecture” and classical “near-Earth neutral line” paradigms, since the data show features of both models of substorm development. We suggest that longitudinal variation in behavior may reconcile the differences between the two models in the case of this substorm.

Relevance: 20.00%

Abstract:

In contrast to prior studies showing a positive lapse-rate feedback associated with the Arctic inversion, Boé et al. reported that strong present-day Arctic temperature inversions are associated with stronger negative longwave feedbacks and thus reduced Arctic amplification in the model ensemble from phase 3 of the Coupled Model Intercomparison Project (CMIP3). A permutation test reveals that the relation between longwave feedbacks and inversion strength is an artifact of statistical self-correlation and that shortwave feedbacks have a stronger correlation with intermodel spread. The present comment concludes that the conventional understanding of a positive lapse-rate feedback associated with the Arctic inversion is consistent with the CMIP3 model ensemble.

Relevance: 20.00%

Abstract:

The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on each core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and non-standard task-to-core mappings can dramatically alter performance. Discovering the best configuration, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
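The benchmark-plus-interpolation methodology can be sketched as follows. All timing numbers are invented placeholders, not measurements from the Cray XE6, and the flat halo-exchange cost stands in for the paper's separate communication model:

```python
import numpy as np

# Hypothetical benchmark table (illustrative numbers): measured time per step
# (ms) of the compute kernel at several local domain sizes, under two
# deployment scenarios.
sizes = np.array([64, 128, 256, 512, 1024])         # local domain edge length
t_packed = np.array([0.9, 3.2, 12.5, 51.0, 210.0])  # fully populated nodes
t_spread = np.array([0.7, 2.4, 9.1, 36.5, 148.0])   # under-populated nodes

def predict_kernel_time(local_size, mapping="packed"):
    """Interpolate between measured benchmarks for an unmeasured size."""
    table = t_packed if mapping == "packed" else t_spread
    return np.interp(local_size, sizes, table)

def predict_step_time(global_n, ntasks, mapping="packed", halo_ms=0.3):
    """Compute + communication: interpolated loop time plus a flat
    halo-exchange cost (a real model would benchmark the exchange too)."""
    local = global_n / np.sqrt(ntasks)  # square domain decomposition assumed
    return predict_kernel_time(local, mapping) + halo_ms

# Compare deployment scenarios for a 2048^2 grid on 64 tasks:
print(predict_step_time(2048, 64, "packed"))  # 12.8
print(predict_step_time(2048, 64, "spread"))  # 9.4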

Relevance: 20.00%

Abstract:

The detection of physiological signals from the motor system (electromyographic signals) is being used in clinical practice to guide the therapist towards a more precise and accurate diagnosis of motor disorders. In this context, the decomposition of EMG (electromyographic) signals, which includes the identification and classification of the MUAPs (Motor Unit Action Potentials) in an EMG signal, is very important in helping the therapist evaluate motor disorders. EMG decomposition is a complex task because EMG features depend on the electrode type (needle or surface), its placement relative to the muscle, the contraction level and the health of the neuromuscular system. To date, most research on EMG decomposition has used EMG signals acquired by needle electrodes, owing to their advantages for processing this type of signal. Relatively little research, however, has been conducted on surface EMG signals. This article therefore aims to contribute to clinical practice by presenting a technique that permits the decomposition of surface EMG signals via Hidden Markov Models, supported by differential evolution and spectral clustering techniques. The developed system presented coherent results in: (1) identifying the number of Motor Units active in the EMG signal; (2) presenting the morphological patterns of the MUAPs in the EMG signal; and (3) identifying the firing sequence of the Motor Units. The model proposed in this work is an advance in the research area of surface EMG signal decomposition.
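The paper's pipeline (HMMs, differential evolution, spectral clustering) is not reproduced here, but the front end common to most decomposition schemes, spike detection followed by template assignment, can be sketched with a simple correlation-based clustering. All thresholds, window lengths and MUAP shapes are illustrative:

```python
import numpy as np

def detect_spikes(emg, thresh, win=20):
    """Collect fixed-length windows around upward threshold crossings
    (candidate MUAPs), with a refractory gap to avoid double detection."""
    idx = np.flatnonzero((emg[1:] >= thresh) & (emg[:-1] < thresh)) + 1
    spikes, last = [], -win
    for i in idx:
        if i - last >= win and win <= i < len(emg) - win:
            spikes.append(emg[i - win:i + win])
            last = i
    return np.array(spikes)

def assign_templates(spikes, r_min=0.9):
    """Greedy correlation clustering: each cluster centre is a MUAP template."""
    templates, labels = [], []
    for s in spikes:
        rs = [np.corrcoef(s, t)[0, 1] for t in templates]
        if rs and max(rs) >= r_min:
            k = int(np.argmax(rs))
            templates[k] = (templates[k] + s) / 2  # running template refinement
        else:
            templates.append(s.copy())
            k = len(templates) - 1
        labels.append(k)
    return templates, labels

# Synthetic demo: two hypothetical MUAP shapes firing alternately.
x = np.arange(40.0)
muap_a = np.exp(-(x - 20) ** 2 / 80.0)                # broad monophasic
muap_b = (0.8 * np.exp(-(x - 13) ** 2 / 10.0)
          - 0.8 * np.exp(-(x - 27) ** 2 / 10.0))      # sharp biphasic
emg = np.zeros(1000)
for p, shape in [(100, muap_a), (250, muap_b), (400, muap_a),
                 (550, muap_b), (700, muap_a)]:
    emg[p:p + 40] += shape
spikes = detect_spikes(emg, 0.5)
templates, labels = assign_templates(spikes)
print(len(templates), labels)  # 2 motor units, firing sequence [0, 1, 0, 1, 0]
```

The three outputs listed in the abstract map directly onto this sketch: the number of templates estimates the number of active Motor Units, the templates themselves are the morphological patterns, and the label sequence is the firing order.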

Relevance: 20.00%

Abstract:

Introduction: Resistance to anticoagulants in Norway rats (Rattus norvegicus) and house mice (Mus domesticus) has been studied in the UK since the early 1960s. In no other country in the world is our understanding of resistance phenomena so extensive and profound. Almost every aspect of resistance in the key rodent target species has been examined in laboratory and field trials and results obtained by independent researchers have been published. It is the principal purpose of this document to present a short synopsis of this information. More recently, however, the development of genetic techniques has provided a definitive means of detection of resistant genotypes among pest rodent populations. Preliminary information from a number of such surveys will also be presented. Resistance in Norway rats: A total of nine different anticoagulant resistance mutations (single nucleotide polymorphisms or SNPs) are found among Norway rats in the UK. No other country worldwide has so many different forms of Norway rat resistance. Among these nine SNPs, five are known to confer on rats that carry them a significant degree of resistance to anticoagulant rodenticides. These mutations are: L128Q, Y139S, L120Q, Y139C and Y139F. The latter three mutations confer, to varying degrees, practical resistance to bromadiolone and difenacoum, the two second-generation anticoagulants in predominant use in the UK. It is the recommendation of RRAG that bromadiolone and difenacoum should not be used against rats carrying the L120Q, Y139C and Y139F mutations because this will promote the spread of resistance and jeopardise the long-term efficacy of anticoagulants. Brodifacoum, flocoumafen and difethialone are effective against these three genotypes but cannot presently be used because of the regulatory restriction that they can only be applied against rats that are living and feeding predominantly indoors.
Our understanding of the geographical distribution of Norway rat resistance is incomplete but is rapidly increasing. In particular, the mapping of the focus of L120Q Norway rat resistance in central-southern England by DNA sequencing is well advanced. We now know that rats carrying this resistance mutation are present across a large part of the counties of Hampshire, Berkshire and Wiltshire, and the resistance spreads into Avon, Oxfordshire and Surrey. It is also found, perhaps as outlier foci, in south-west Scotland and East Sussex. L120Q is currently the most severe form of anticoagulant resistance found in Norway rats and is prevalent over a considerable part of central-southern England. A second form of advanced Norway rat resistance is conferred by the Y139C mutation. This is noteworthy because it occurs in at least four different foci that are widely geographically dispersed, namely in Dumfries and Galloway, Gloucestershire, Yorkshire and Norfolk. Once again, bromadiolone and difenacoum are not recommended for use against rats carrying this genotype, and a concern of RRAG is that continued applications of resisted active substances may result in Y139C becoming more or less ubiquitous across much of the UK. Another type of advanced resistance, the Y139F mutation, is present in Kent and Sussex. This means that Norway rats carrying some degree of resistance to bromadiolone and difenacoum are now found from the south coast of Kent, west into the city of Bristol, to Yorkshire in the north-east and to the south-west of Scotland. This difficult situation can only deteriorate further where these three genotypes exist and resisted anticoagulants are predominantly used against them. Resistance in house mice: Resistance in the house mouse is not so well understood, but the presence in the UK of two resistant genotypes, L128S and Y139C, is confirmed.
House mice are naturally tolerant to anticoagulants, and such is the nature of this tolerance, and the presence of genetic resistance, that house mice resistant to the first-generation anticoagulants are considered to be widespread in the UK. Consequently, baits containing warfarin, sodium warfarin, chlorophacinone and coumatetralyl are not approved for use against mice. This regulatory position is endorsed by RRAG. Baits containing brodifacoum, flocoumafen and difethialone are effective against house mice and may be applied in practice because house mouse infestations are predominantly indoors. There are some reports of resistance among mice in some areas to the second-generation anticoagulant bromadiolone, while difenacoum remains largely efficacious. Alternatives to anticoagulants: The use of habitat manipulation, that is the removal of harbourage, denial of the availability of food and the prevention of ingress to structures, is an essential component of sustainable rodent pest management. All are of importance in the management of resistant rodents and have the advantage of not selecting for resistant genotypes. The use of these techniques may be particularly valuable in preventing the build-up of rat infestations. However, none can be used to remove any sizeable extant rat infestation, and for practical reasons their use against house mice is problematic. Few alternative chemical interventions are available in the European Union because of the removal from the market of zinc phosphide, calciferol and bromethalin. Our virtually complete reliance on the use of anticoagulants for the chemical control of rodents in the UK, and more widely in the EU, calls for improved schemes for resistance management. Of course, these might involve the use of alternatives to anticoagulant rodenticides. Also important is an increasing knowledge of the distribution of resistance mutations in rats and mice and the use of only fully effective anticoagulants against them.

Relevance: 20.00%

Abstract:

Empirical Mode Decomposition is presented as an alternative to traditional analysis methods for decomposing geomagnetic time series into spectral components. Important comments on the algorithm and its variations are given. Using this technique, planetary wave modes of 5-, 10-, and 16-day mean periods can be extracted from magnetic field components of three different stations in Germany. In a second step, the amplitude modulation functions of these wave modes can be shown to contain a significant contribution from solar cycle variation through correlation with smoothed sunspot numbers. Additionally, the data indicate connections with geomagnetic jerk occurrences, supported by a second data set providing reconstructed near-Earth magnetic field values for 150 years. Since jerks are usually attributed to internal dynamo processes within the Earth's outer core, the question of which phenomenon is influencing which will be briefly discussed here.
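The sifting procedure at the heart of EMD can be sketched as follows. Genuine EMD uses cubic-spline envelopes and an adaptive stopping criterion; this dependency-free sketch substitutes linear envelopes and a fixed sift count, so it is an illustration of the idea rather than a faithful implementation:

```python
import numpy as np

def local_extrema(x):
    """Indices of interior local maxima and minima."""
    d = np.diff(x)
    maxima = np.flatnonzero((d[:-1] > 0) & (d[1:] < 0)) + 1
    minima = np.flatnonzero((d[:-1] < 0) & (d[1:] > 0)) + 1
    return maxima, minima

def sift_imf(x, n_sift=10):
    """Extract one intrinsic mode function (IMF) by repeated sifting."""
    h = np.asarray(x, dtype=float).copy()
    t = np.arange(len(h))
    for _ in range(n_sift):
        mx, mn = local_extrema(h)
        if len(mx) < 2 or len(mn) < 2:
            break  # not enough extrema to build envelopes
        upper = np.interp(t, mx, h[mx])   # linear upper envelope
        lower = np.interp(t, mn, h[mn])   # linear lower envelope
        h -= (upper + lower) / 2.0        # subtract the local envelope mean
    return h

def emd(x, n_imfs=3):
    """Peel off IMFs from fastest to slowest; what is left is the trend."""
    residue = np.asarray(x, dtype=float).copy()
    imfs = []
    for _ in range(n_imfs):
        imf = sift_imf(residue)
        imfs.append(imf)
        residue = residue - imf
    return imfs, residue

# Demo: a fast and a slow tone; the first IMF should capture the fast tone,
# in the same way the paper separates wave modes of different mean periods.
t = np.linspace(0, 1, 512, endpoint=False)
x = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 1 * t)
imfs, residue = emd(x, n_imfs=2)
```

Because each IMF is simply subtracted from the running residue, the IMFs plus the final residue reconstruct the original series exactly, which is what makes the decomposition suitable for attributing variance to distinct period bands.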

Relevance: 20.00%

Abstract:

Rhizoremediation is a bioremediation technique whereby enhanced microbial degradation of organic contaminants occurs within the plant root zone (rhizosphere). It is considered an effective and affordable ‘green technology’ for remediating soils contaminated with petroleum hydrocarbons (PHCs). This paper critically reviews the potential role of root exuded compounds in rhizoremediation, with emphasis on commonly exuded low molecular weight aliphatic organic acid anions (carboxylates). The extent to which remediation is achieved shows wide disparity among plant species. Therefore, plant selection is crucial for the advancement and widespread adoption of this technology. Root exudation is speculated to be one of the predominant factors leading to microbial changes in the rhizosphere and thus the potential driver behind enhanced petroleum biodegradation. Carboxylates can form a significant component of the root exudate mixture and are hypothesised to enhance petroleum biodegradation by: i) providing an easily degradable energy source; ii) increasing phosphorus supply; and/or iii) enhancing the contaminant bioavailability. These differing hypotheses, which are not mutually exclusive, require further investigation to progress our understanding of plant–microbe interactions with the aim to improve plant species selection and the efficacy of rhizoremediation.

Relevance: 20.00%

Abstract:

A dead mammal (i.e. cadaver) is a high quality resource (narrow carbon:nitrogen ratio, high water content) that releases an intense, localised pulse of carbon and nutrients into the soil upon decomposition. Despite the fact that as much as 5,000 kg of cadaver can be introduced to a square kilometre of terrestrial ecosystem each year, cadaver decomposition remains a neglected microsere. Here we review the processes associated with the introduction of cadaver-derived carbon and nutrients into soil from forensic and ecological settings to show that cadaver decomposition can have a greater, albeit localised, effect on belowground ecology than plant and faecal resources. Cadaveric materials are rapidly introduced to belowground floral and faunal communities, which results in the formation of a highly concentrated island of fertility, or cadaver decomposition island (CDI). CDIs are associated with increased soil microbial biomass, microbial activity (C mineralisation) and nematode abundance. Each CDI is an ephemeral natural disturbance that, in addition to releasing energy and nutrients to the wider ecosystem, acts as a hub by receiving these materials in the form of dead insects, exuvia and puparia, faecal matter (from scavengers, grazers and predators) and feathers (from avian scavengers and predators). As such, CDIs contribute to landscape heterogeneity. Furthermore, CDIs are a specialised habitat for a number of flies, beetles and pioneer vegetation, which enhances biodiversity in terrestrial ecosystems.