124 results for VOLTERRA FILTERS
Abstract:
An algorithm based only on the impedance cardiogram (ICG) recorded through two defibrillation pads, using its strongest frequency component and amplitude, could be incorporated into a defibrillator to detect circulatory arrest and reduce delays in starting cardiopulmonary resuscitation (CPR). Frequency analysis of the ICG signal is carried out by integer filters on a sample-by-sample basis. These filters are simpler, computationally lighter and more versatile than the FFT. Although less accurate, this alternative approach is preferred because the limited processing capacity of such devices could compromise the real-time usability of the FFT. The two techniques were compared on a data set comprising 13 cases of cardiac arrest and 6 normal controls. The best filters were refined on this training set, and an algorithm for the detection of cardiac arrest was trained on a wider data set. The algorithm was finally tested on a validation set. The ICG was recorded in 132 cardiac arrest patients (53 training, 79 validation) and 97 controls (47 training, 50 validation): the diagnostic algorithm indicated cardiac arrest with a sensitivity of 81.1% (77.6-84.3) and specificity of 97.1% (96.7-97.4) for the validation set (95% confidence intervals). Automated defibrillators with integrated ICG analysis have the potential to improve emergency care by lay persons, enabling more rapid and appropriate initiation of CPR; when combined with ECG analysis, they could improve the detection of cardiac arrest.
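The abstract does not give the filter coefficients, but the idea of sample-by-sample frequency analysis with integer arithmetic can be sketched as follows. This is a minimal illustration, assuming a generic bank of second-order resonators with coefficients quantised to integers over a power-of-two scale; the sampling rate, candidate frequency bands, pole radius and envelope smoothing are all hypothetical and are not the filters used in the study.

```python
# Minimal sketch, not the study's filters: a bank of second-order resonators
# whose coefficients are rounded to integers over a power-of-two scale, run
# sample by sample to track the strongest frequency component of a signal.
import numpy as np

FS = 100.0                      # assumed ICG sampling rate (Hz)
SCALE = 256                     # coefficients stored as integers / 256
BANDS = [0.8, 1.5, 2.5, 4.0]    # hypothetical candidate frequencies (Hz)
R = 0.98                        # pole radius controlling resonator bandwidth

class IntegerResonator:
    """y[n] = x[n] + (a1*y[n-1] + a2*y[n-2]) / SCALE with integer a1, a2."""
    def __init__(self, f0, fs):
        w = 2.0 * np.pi * f0 / fs
        self.a1 = int(round(2.0 * R * np.cos(w) * SCALE))    # integer coefficient
        self.a2 = int(round(-R * R * SCALE))                 # integer coefficient
        self.y1 = self.y2 = 0.0
        self.env = 0.0                                       # smoothed output magnitude

    def step(self, x):
        y = x + (self.a1 * self.y1 + self.a2 * self.y2) / SCALE
        self.y2, self.y1 = self.y1, y
        self.env = 0.99 * self.env + 0.01 * abs(y)           # envelope tracker
        return self.env

bank = [IntegerResonator(f, FS) for f in BANDS]

def strongest_component(sample):
    """Process one sample; return the currently dominant band and its envelope."""
    envs = [r.step(sample) for r in bank]
    i = int(np.argmax(envs))
    return BANDS[i], envs[i]

# Example: a 1.5 Hz test tone should dominate after a short settling time.
t = np.arange(0, 10, 1.0 / FS)
for s in np.sin(2 * np.pi * 1.5 * t):
    f_dom, amp = strongest_component(s)
print(f_dom)   # -> 1.5
```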
Abstract:
In this paper, we propose a novel finite impulse response (FIR) filter design methodology that reduces the number of operations, with the motivation of reducing power consumption and enhancing performance. The novelty of our approach lies in generating filter coefficients that conform to a given low-power architecture while meeting the given filter specifications. The proposed algorithm is formulated as a mixed integer linear programming problem that minimizes the Chebyshev error and synthesizes coefficients drawn from a pre-specified alphabet. The modified coefficients can be used for low-power VLSI implementation of vector scaling operations such as FIR filtering using a computation sharing multiplier (CSHM). Simulations in 0.25 μm technology show that the CSHM FIR filter architecture can yield a 55% power and 34% speed improvement compared to carry-save multiplier (CSAM) based filters.
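As a concrete picture of the objective being minimized, the sketch below evaluates the Chebyshev (worst-case) weighted error of a candidate FIR coefficient set against an ideal low-pass response on a frequency grid. The band edges, weighting and dyadic-fraction coefficient alphabet are illustrative assumptions; the paper's method would hand this objective, plus the alphabet constraints on each tap, to a MILP solver rather than evaluate a fixed candidate.

```python
# Sketch of the objective only, not the paper's algorithm: evaluate the
# Chebyshev (worst-case) weighted error of a candidate FIR coefficient set
# against an ideal low-pass response on a dense frequency grid.
import numpy as np

def chebyshev_error(h, omegas, desired, weights):
    """max over the grid of |W(w) * (H(e^jw) - D(w))|."""
    n = np.arange(len(h))
    H = np.array([np.sum(h * np.exp(-1j * w * n)) for w in omegas])
    return np.max(weights * np.abs(H - desired))

# Assumed specification: low-pass, passband edge 0.3*pi, stopband edge 0.4*pi.
omegas = np.linspace(0, np.pi, 512)
desired = (omegas <= 0.3 * np.pi).astype(float)
weights = ((omegas <= 0.3 * np.pi) | (omegas >= 0.4 * np.pi)).astype(float)

# Candidate 11-tap filter: ideal low-pass taps rounded to a 1/16-step alphabet,
# emulating the cheap-multiplier coefficient constraint (illustrative only).
h = np.array([-1, -1, 1, 2, 4, 5, 4, 2, 1, -1, -1]) / 16.0
print(chebyshev_error(h, omegas, desired, weights))
```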
Abstract:
The 2010 Eyjafjallajökull eruption lasted 39 days and had four distinct phases, of which the first and third (14–18 April and 5–6 May) were the most intense. Most of this period was dominated by winds with a northerly component that carried tephra toward Europe, where it was deposited in a number of locations and was sampled by rain gauges or buckets, surface swabs, sticky-tape samples and air filtering. In the UK, tephra was collected from each of Phases 1–3, with a combined range of latitudes spanning the length of the country. The modal grain size of tephra in the rain gauge samples was 25 μm, but the largest grains were 100 μm in diameter and highly vesicular. The mass loading was equivalent to 8–218 shards cm⁻², which is comparable to tephra layers from much larger past eruptions. Falling tephra was collected on sticky tape in the English Midlands on 19–21 April (Phase 2), and was dominated by aggregate clasts (mean diameter 85 μm, component grains <10 μm). SEM-EDS spectra for aggregate grains contained an extra peak for sulphur when compared to control samples from the volcano, indicating that they were cemented by sulphur-rich minerals, e.g. gypsum (CaSO4·2H2O). Air quality monitoring stations did not record fluctuations in hourly PM10 concentrations outside the normal range of variability during the eruption, but there was a small increase in the 24-hour running mean concentration from 21–24 April (Phase 2). Deposition of tephra from Phase 2 in the UK indicates that transport of tephra from Iceland is possible even for small eruption plumes, given suitable wind conditions. The presence of relatively coarse grains adds uncertainty to concentration estimates from air quality sensors, which are most sensitive to grain sizes <10 μm. Elsewhere, tephra was collected from roofs and vehicles in the Faroe Islands (mean grain size 40 μm, with 100 μm grains common), from rainwater in Bergen, Norway (23–91 μm), and from air filters in Budapest, Hungary (2–6 μm). A map is presented summarizing these and other recently published examples of distal tephra deposition from the Eyjafjallajökull eruption. It demonstrates that most tephra deposited on mainland Europe was produced in the highly explosive Phase 1 and was carried there in 2–3 days.
Abstract:
A rapid design methodology for orthonormal wavelet transform cores has been developed. This methodology is based on a generic, scalable architecture utilising time-interleaved coefficients for the wavelet transform filters. The architecture has been captured in VHDL and parameterised in terms of wavelet family, wavelet type, data word length and coefficient word length. The control circuit is embedded within the cores and allows them to be cascaded without any interface glue logic for any desired level of decomposition. Case studies for stand-alone and cascaded silicon cores, for single and multi-stage wavelet analysis respectively, are reported. The design time needed to produce the silicon layout of a wavelet-based system has been reduced to typically less than a day. The cores are comparable in area and performance to handcrafted designs. The designs are portable across a range of foundries and are also applicable to FPGA and PLD implementations.
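For reference, the computation one analysis stage of such a core performs is a single orthonormal wavelet decomposition step: filter with the low-pass/high-pass quadrature mirror pair and downsample by two. The floating-point sketch below uses the Daubechies-4 wavelet and ignores the fixed word lengths and time-interleaving of the hardware; cascading the function on the approximation output mirrors cascading the cores for multi-stage analysis.

```python
# Floating-point sketch of one analysis stage (Daubechies-4), ignoring the
# fixed word lengths and time-interleaving used by the hardware cores.
import numpy as np

s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))  # low-pass (scaling) filter
g = np.array([h[3], -h[2], h[1], -h[0]])                           # high-pass (wavelet) filter

def dwt_stage(x):
    """One decomposition level: return (approximation, detail) at half the rate."""
    a = np.convolve(x, h[::-1])[len(h) - 1::2]   # low-pass filter, then decimate by 2
    d = np.convolve(x, g[::-1])[len(h) - 1::2]   # high-pass filter, then decimate by 2
    return a, d

x = np.random.default_rng(0).standard_normal(64)
a1, d1 = dwt_stage(x)      # first stage (one core)
a2, d2 = dwt_stage(a1)     # cascaded second stage, as when cores are chained
print(len(a1), len(d1), len(a2))   # -> 32 32 16
```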
Abstract:
Suction is an important stress variable that is required for reliable predictions of the likely performance of unsaturated soils. The axis translation technique is the best-established method of measuring or controlling suction; however, its success is heavily dependent on the rating of the high air entry filter (HAF) and how it is incorporated into the testing system. This paper reports some basic experiments in which samples of unsaturated kaolin were brought to saturation in stages using 5 bar and 15 bar HAFs. The results show that water equilibrium in unsaturated soils is greatly affected by the rating of the filters. The findings also suggest that flow through unsaturated soils is not necessarily governed by the one-dimensional consolidation theory developed for saturated soils, which may be attributed to the bimodal pore size distribution of unsaturated soils.
Abstract:
Industrial chemicals, antimicrobials, drugs and personal care products have been reported as global pollutants which enter the food chain. Some of them have also been classified as endocrine disruptors based on the results of various studies employing a number of in vitro/in vivo tests. The present study employed a mammalian reporter gene assay to assess the effects of known and emerging contaminants on estrogen nuclear receptor transactivation. Out of fifty-nine compounds assessed, estrogen receptor agonistic activity was observed for parabens (n = 3), UV filters (n = 6), phthalates (n = 4) and a metabolite, and pyrethroids (n = 9) and their metabolites (n = 3). Two compounds were estrogen receptor antagonists, while some of the agonists enhanced the 17β-estradiol-mediated response. This study reports five new compounds (pyrethroids and their metabolites) possessing estrogen agonist activity and highlights for the first time that pyrethroid metabolites are of particular concern, showing much greater estrogenic activity than their parent compounds.
Abstract:
A novel approach to the modelling of passive intermodulation (PIM) generation in passive components with distributed weak nonlinearities is outlined. Based upon the formalism of X-parameters, it provides a unified framework for the co-design of antenna beamforming networks, filters, combiners, phase shifters and other passive and active devices containing nonlinearities at the RF front-end. The effects of discontinuities and complex circuit layouts can be efficiently evaluated with the aid of equivalent networks of the canonical nonlinear elements. The main concepts are illustrated by examples of numerical simulations of PIM generation in transmission lines and by comparison with measurement results.
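For readers unfamiliar with PIM, the sketch below shows the textbook mechanism in its simplest form: a two-tone carrier passed through a weak memoryless cubic nonlinearity produces third-order products at 2f1 − f2 and 2f2 − f1 that fall back into the receive band. This is not the paper's X-parameter framework or its distributed model; the polynomial coefficients and tone frequencies are arbitrary illustrative values.

```python
# Textbook two-tone sketch of third-order PIM, not the paper's X-parameter
# model: a weak memoryless cubic nonlinearity y = a1*x + a3*x**3 produces
# products at 2*f1 - f2 and 2*f2 - f1. Coefficients and tones are arbitrary.
import numpy as np

fs = 1.0e9                        # sample rate (Hz), assumed
t = np.arange(0, 2e-5, 1.0 / fs)  # 20 us of signal
f1, f2 = 20e6, 23e6               # two transmit tones (Hz), assumed
x = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)

a1, a3 = 1.0, 1e-3                # weak cubic nonlinearity (illustrative)
y = a1 * x + a3 * x**3

# Locate the third-order intermodulation products in the output spectrum.
Y = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1.0 / fs)
for f_pim in (2 * f1 - f2, 2 * f2 - f1):
    k = int(np.argmin(np.abs(freqs - f_pim)))
    print(f"{f_pim / 1e6:.0f} MHz: {20 * np.log10(Y[k] + 1e-15):.1f} dB")
```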
Abstract:
Cao et al. reported a possible progenitor detection for the Type Ib supernova iPTF13bvn for the first time. We find that the progenitor is in fact brighter than the magnitudes previously reported, by approximately 0.7-0.2 mag, with a larger error in the bluer filters. We compare our new magnitudes to our large set of binary evolution models and find that many binary models with initial masses in the range of 10-20 M⊙ match this new photometry and other constraints derived from analysing the supernova. In addition, these lower mass stars retain more helium at the end of the model evolution, indicating that they are likely to be observed as Type Ib supernovae rather than as their more massive Wolf-Rayet counterparts. We are able to rule out typical Wolf-Rayet models as the progenitor because their ejecta masses are too high and they do not fit the observed SED unless they have a massive companion that is the observed source at the supernova location. Therefore, only late-time observations of the location will truly confirm whether the progenitor was a helium giant and not a Wolf-Rayet star.
Abstract:
It is well known that the absolute magnitudes (H) in the MPCORB and ASTORB orbital element catalogs suffer from a systematic offset. Jurić et al. (2002) found a 0.4 mag offset in the SDSS data, and detailed light curve studies of WISE asteroids by Pravec et al. (2012) revealed size-dependent offsets of up to 0.5 mag. The offsets are thought to be caused by systematic errors introduced by earlier surveys using different photometric catalogs and filters. The next generation of asteroid surveys provides an order of magnitude more asteroids and well-defined, calibrated magnitudes. The Pan-STARRS 1 telescope (PS1) has observed hundreds of thousands of asteroids, submitted more than 2 million detections to the Minor Planet Center (MPC) and discovered almost 300 NEOs since the beginning of operations in late 2010. We transformed the observed apparent magnitudes of PS1-detected asteroids from the gP1, rP1, iP1, yP1, zP1 and wP1 bands into the Johnson photometric system by assuming the mean S- and C-type asteroid colors (Fitzsimmons 2011 - personal communication, Schlafly et al. 2012, Magnier et al. 2012 - in preparation) and calculated the absolute magnitude (H) in the V-band and its uncertainty (Bowell et al., 1989) for more than 200,000 known asteroids having on average 6.7 detections per object. The H error with respect to the MPCORB catalog revealed a mean offset of -0.49 ± 0.30 mag, in good agreement with published values. We will also discuss the statistical and systematic errors in H and the slope parameter G.
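The reduction from apparent V magnitude to absolute magnitude follows the standard (H, G) system of Bowell et al. (1989), which the abstract cites. The sketch below implements that relation; the input values are made up for illustration, and the prior transformation from the PS1 bands to V (using assumed S/C-type colours) is not shown.

```python
# Standard (H, G) reduction (Bowell et al. 1989); the example inputs are
# made up, and the PS1-band-to-V colour transformation is not shown.
import numpy as np

def absolute_magnitude_H(V, r_au, delta_au, alpha_deg, G=0.15):
    """Absolute magnitude from apparent V, heliocentric/geocentric distances (AU)
    and solar phase angle, using the two-parameter (H, G) phase function."""
    a = np.radians(alpha_deg)
    phi1 = np.exp(-3.33 * np.tan(a / 2.0) ** 0.63)
    phi2 = np.exp(-1.87 * np.tan(a / 2.0) ** 1.22)
    # Reduce to unit distances, then remove the phase-angle darkening.
    return V - 5.0 * np.log10(r_au * delta_au) + 2.5 * np.log10((1 - G) * phi1 + G * phi2)

print(absolute_magnitude_H(V=18.3, r_au=2.1, delta_au=1.3, alpha_deg=15.0))  # ~15.3
```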
Abstract:
Context: Near-Earth asteroid-comet transition object 107P/(4015) Wilson-Harrington is a possible target of the joint European Space Agency (ESA) and Japanese Aerospace Exploration Agency (JAXA) Marco Polo sample return mission. Physical studies of this object are relevant to this mission, and also to understanding its asteroidal or cometary nature. Aims: Our aim is to obtain significant new constraints on the surface thermal properties of this object. Methods: We present mid-infrared photometry in two filters (16 and 22 μm) obtained with NASA's Spitzer Space Telescope on February 12, 2007, and results from the application of the Near-Earth Asteroid Thermal Model (NEATM). We obtained high S/N in both mid-IR bands, allowing accurate measurement of the thermal emission. Results: We obtain a well-constrained beaming parameter (η = 1.39 ± 0.26), a diameter D = 3.46 ± 0.32 km, and a geometric albedo pV = 0.059 ± 0.011. We obtain similar results when we apply this best-fitting thermal model to the single-band mid-IR photometry reported by Campins et al. (1995, P&SS, 43, 733), Kraemer et al. (2005, AJ, 130, 2363) and Reach et al. (2007, Icarus, 191, 298). Conclusions: The albedo of 4015 Wilson-Harrington is low, consistent with those of comet nuclei and primitive C-, P- and D-type asteroids. We establish a rough lower limit of 60 J m⁻² s⁻⁰·⁵ K⁻¹ for the thermal inertia of Wilson-Harrington when it is at r = 1 AU, which is slightly above the limit of 30 J m⁻² s⁻⁰·⁵ K⁻¹ derived by Groussin et al. (2009, Icarus, 199, 568) for the thermal inertia of the nucleus of comet 22P/Kopff.
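Two standard relations sit behind results like these: the diameter-albedo-absolute-magnitude relation D = 1329 pV^(-1/2) 10^(-H/5) km, and the NEATM subsolar temperature, which scales the absorbed flux by the beaming parameter η. The sketch below evaluates both; the absolute magnitude H and the emissivity are illustrative assumptions, and only pV and η are taken from the abstract.

```python
# Two standard relations; H and the emissivity below are assumptions, while
# pV and eta are the values quoted in the abstract.
import numpy as np

SOLAR_CONST = 1361.0   # W m^-2 at 1 AU
SIGMA = 5.670e-8       # Stefan-Boltzmann constant (W m^-2 K^-4)

def diameter_km(H, p_v):
    """D = 1329 / sqrt(pV) * 10**(-H/5), in kilometres."""
    return 1329.0 / np.sqrt(p_v) * 10.0 ** (-H / 5.0)

def neatm_subsolar_temperature(p_v, eta, r_au, G=0.15, emissivity=0.9):
    """NEATM subsolar temperature with Bond albedo approximated as q(G) * pV."""
    q = 0.290 + 0.684 * G                         # phase integral in the (H, G) system
    absorbed = SOLAR_CONST * (1.0 - q * p_v) / r_au**2
    return (absorbed / (eta * emissivity * SIGMA)) ** 0.25

print(diameter_km(H=16.0, p_v=0.059))                             # ~3.5 km
print(neatm_subsolar_temperature(p_v=0.059, eta=1.39, r_au=1.0))  # ~370 K
```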
Abstract:
We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time series in the four Pan-STARRS1 photometric bands gP1, rP1, iP1, and zP1. We use three deterministic light-curve models to fit BL transients: a Gaussian, a Gamma distribution, and an analytic supernova (SN) model, and one stochastic light-curve model, the Ornstein-Uhlenbeck process, to fit variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm to these statistics to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the squared distances from the cluster centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SVs and BL transients occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image-difference detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL and 2262 SV, with a purity of 95.00% for AGNs and 90.97% for SNe based on our verification sets. We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets to define a robust photometric sample of 1233 AGNs and 812 SNe. With these two samples, we characterize their variability and host galaxy properties, and identify simple photometric priors that would enable their real-time identification in future wide-field synoptic surveys.
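The model-selection step can be illustrated compactly: fit a competing light-curve model to one band's difference-flux time series and score it with the corrected Akaike information criterion (AICc). The sketch below fits only the Gaussian burst model against a trivial constant model on synthetic data; the real pipeline also fits the Gamma, analytic SN and Ornstein-Uhlenbeck models, computes leave-one-out cross-validation likelihoods, and clusters the per-band statistics with K-means.

```python
# Sketch of the per-band model scoring only: fit a Gaussian burst model to a
# synthetic difference-flux series and compare AICc against a constant model.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_burst(t, amp, t0, width):
    return amp * np.exp(-0.5 * ((t - t0) / width) ** 2)

def aicc(residuals, n_params):
    """AICc under iid Gaussian errors: n*ln(RSS/n) + 2k + 2k(k+1)/(n-k-1)."""
    n = len(residuals)
    rss = float(np.sum(residuals ** 2))
    return n * np.log(rss / n) + 2 * n_params + 2 * n_params * (n_params + 1) / (n - n_params - 1)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 100.0, 60)                              # observation epochs (days)
flux = gaussian_burst(t, 3.0, 40.0, 8.0) + rng.normal(0.0, 0.3, t.size)

popt, _ = curve_fit(gaussian_burst, t, flux, p0=[1.0, 50.0, 10.0])
score_burst = aicc(flux - gaussian_burst(t, *popt), n_params=3)
score_const = aicc(flux - flux.mean(), n_params=1)           # trivial "no burst" model
print(score_burst < score_const)   # True: the burst model is preferred here
```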
Abstract:
Context. The Public European Southern Observatory Spectroscopic Survey of Transient Objects (PESSTO) began as a public spectroscopic survey in April 2012. PESSTO classifies transients from publicly available sources and wide-field surveys, and selects science targets for detailed spectroscopic and photometric follow-up. PESSTO runs for nine months of the year, January - April and August - December inclusive, and typically has allocations of 10 nights per month.
Aims. We describe the data reduction strategy and data products that are publicly available through the ESO archive as the Spectroscopic Survey data release 1 (SSDR1).
Methods. PESSTO uses the New Technology Telescope with the instruments EFOSC2 and SOFI to provide optical and NIR spectroscopy and imaging. We target supernovae and optical transients brighter than 20.5 mag for classification. Science targets are selected for follow-up based on the PESSTO science goal of extending knowledge of the extremes of the supernova population. We use standard EFOSC2 set-ups providing spectra with resolutions of 13-18 Å between 3345-9995 Å. A subset of the brighter science targets are selected for SOFI spectroscopy with the blue and red grisms (0.935-2.53 μm and resolutions 23-33 Å) and imaging with broadband JHKs filters.
Results. This first data release (SSDR1) contains flux-calibrated spectra from the first year (April 2012-2013). A total of 221 confirmed supernovae were classified, and we released calibrated optical spectra and classifications publicly within 24 h of the data being taken (via WISeREP). The data in SSDR1 supersede those previously released spectra. They have more reliable and quantifiable flux calibrations, correction for telluric absorption, and are made available in standard ESO Phase 3 formats. We estimate the absolute accuracy of the flux calibrations for EFOSC2 across the whole survey in SSDR1 to be typically ∼15%, although a number of spectra will have less reliable absolute flux calibration because of weather and slit losses. Acquisition images for each spectrum are available which, in principle, can allow the user to refine the absolute flux calibration. The standard NIR reduction process does not produce high-accuracy absolute spectrophotometry, but synthetic photometry with the accompanying JHKs imaging can improve this, as sketched below. Whenever possible, reduced SOFI images are provided to allow this.
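The synthetic-photometry rescaling works by integrating the reduced spectrum through a broadband transmission curve, comparing the resulting synthetic magnitude with the magnitude measured from the accompanying image, and scaling the spectrum by the difference. The sketch below is only illustrative: the Gaussian J-band-like transmission curve, the zero-point flux, the spectrum and the image magnitude are placeholders, not the real SOFI filter curve or PESSTO calibration.

```python
# Placeholder synthetic-photometry rescaling: the filter curve, zero point,
# spectrum and image magnitude are invented, not the SOFI/PESSTO calibration.
import numpy as np

def synthetic_mag(wave_um, flux, trans_wave_um, trans, zero_point_flux):
    """Photon-weighted synthetic magnitude of a spectrum through a filter curve."""
    T = np.interp(wave_um, trans_wave_um, trans, left=0.0, right=0.0)
    mean_flux = np.sum(flux * T * wave_um) / np.sum(T * wave_um)  # uniform grid assumed
    return -2.5 * np.log10(mean_flux / zero_point_flux)

# Placeholder spectrum and a J-band-like Gaussian transmission curve.
wave = np.linspace(0.9, 2.5, 2000)                  # wavelength (microns)
spec = 3.0e-16 * (wave / 1.25) ** -2                # flux density (arbitrary units, made up)
trans_wave = np.linspace(1.1, 1.4, 100)
trans = np.exp(-0.5 * ((trans_wave - 1.25) / 0.085) ** 2)

m_synth = synthetic_mag(wave, spec, trans_wave, trans, zero_point_flux=3.1e-10)  # assumed zero point
m_image = 15.30                                     # magnitude from the JHKs image (assumed)
spec_rescaled = spec * 10.0 ** (-0.4 * (m_image - m_synth))   # tie the spectrum to the imaging
print(round(m_synth, 2), round(m_image - m_synth, 2))
```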
Conclusions. Future data releases will focus on improving the automated flux calibration of the data products. The rapid turnaround between discovery and classification, and access to reliable pipeline-processed data products, has enabled early science papers within the first few months of the survey.
Abstract:
Multiple table lookup architectures in Software Defined Networking (SDN) open the door for exciting new network applications. The development of the OpenFlow protocol supported the SDN paradigm; however, the first version of the OpenFlow protocol specified a single table lookup model, with associated constraints on flow entry numbers and search capabilities. With the introduction of multiple table lookup in OpenFlow v1.1, flexible and efficient search to support SDN application innovation became possible. However, implementing multiple table lookup in hardware to meet high performance requirements is non-trivial. One possible approach involves the use of multi-dimensional lookup algorithms. High lookup performance can be achieved by using embedded memory for flow entry storage. A detailed study of OpenFlow flow filters for multi-dimensional lookup is presented in this paper. Based on a proposed multiple table lookup architecture, the memory consumption and update performance using parallel single-field searches are evaluated. The results demonstrate an efficient multi-table lookup implementation with minimal memory usage.
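As a toy illustration of the parallel single-field search idea, the sketch below indexes each header field separately (exact-match dictionaries standing in for per-field hardware search engines), intersects the per-field candidate sets, and returns the highest-priority matching flow entry. The two fields, the wildcard handling and the entries are invented for the example; real OpenFlow tables also support prefix and arbitrary-mask matching and chain multiple tables via goto-table instructions, which this sketch ignores.

```python
# Toy parallel single-field lookup: per-field exact-match indexes, set
# intersection, then highest-priority match. Not the paper's architecture.
from collections import defaultdict

FIELDS = ("ip_dst", "tcp_dst")

class FlowTable:
    def __init__(self):
        self.priority = {}                                   # entry id -> priority
        self.index = {f: defaultdict(set) for f in FIELDS}   # per-field indexes

    def add(self, entry_id, priority, **match):
        self.priority[entry_id] = priority
        for field in FIELDS:
            # '*' models a wildcarded field: indexed under a catch-all key.
            self.index[field][match.get(field, "*")].add(entry_id)

    def lookup(self, packet):
        candidates = None
        for field in FIELDS:                                 # parallel in hardware
            hits = self.index[field][packet[field]] | self.index[field]["*"]
            candidates = hits if candidates is None else candidates & hits
        if not candidates:
            return None                                      # table miss
        return max(candidates, key=lambda e: self.priority[e])

table = FlowTable()
table.add("fwd_web", priority=10, ip_dst="10.0.0.2", tcp_dst=80)
table.add("drop_all", priority=1)                            # fully wildcarded entry
print(table.lookup({"ip_dst": "10.0.0.2", "tcp_dst": 80}))   # -> fwd_web
print(table.lookup({"ip_dst": "10.0.0.9", "tcp_dst": 22}))   # -> drop_all
```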
Abstract:
Background:
Prolonged mechanical ventilation is associated with a longer intensive care unit (ICU) length of stay and higher mortality. Consequently, methods to improve ventilator weaning processes have been sought. Two recent Cochrane systematic reviews in ICU adult and paediatric populations concluded that protocols can be effective in reducing the duration of mechanical ventilation, but there was significant heterogeneity in study findings. Growing awareness of the benefits of understanding the contextual factors impacting on effectiveness has encouraged the integration of qualitative evidence syntheses with effectiveness reviews, which has delivered important insights into the reasons underpinning (differential) effectiveness of healthcare interventions.
Objectives:
1. To locate, appraise and synthesize qualitative evidence concerning the barriers and facilitators of the use of protocols for weaning critically-ill adults and children from mechanical ventilation;
2. To integrate this synthesis with two Cochrane effectiveness reviews of protocolized weaning to help explain observed heterogeneity by identifying contextual factors that impact on the use of protocols for weaning critically-ill adults and children from mechanical ventilation;
3. To use the integrated body of evidence to suggest the circumstances in which weaning protocols are most likely to be used.
Search methods:
We used a range of search terms identified with the help of the SPICE (Setting, Perspective, Intervention, Comparison, Evaluation) mnemonic. Where available, we used appropriate methodological filters for specific databases. We searched the following databases: Ovid MEDLINE, Embase, OVID, PsycINFO, CINAHL Plus, EBSCOHost, Web of Science Core Collection, ASSIA, IBSS, Sociological Abstracts, ProQuest and LILACS on the 26th February 2015. In addition, we searched: the grey literature; the websites of professional associations for relevant publications; and the reference lists of all publications reviewed. We also contacted authors of the trials included in the effectiveness reviews as well as of studies (potentially) included in the qualitative synthesis, conducted citation searches of the publications reporting these studies, and contacted content experts.
We reran the search on 3rd July 2016 and found three studies, which are awaiting classification.
Selection criteria:
We included qualitative studies that described: the circumstances in which protocols are designed, implemented or used, or both, and the views and experiences of healthcare professionals either involved in the design, implementation or use of weaning protocols or involved in the weaning of critically-ill adults and children from mechanical ventilation not using protocols. We included studies that: reflected on any aspect of the use of protocols, explored contextual factors relevant to the development, implementation or use of weaning protocols, and reported contextual phenomena and outcomes identified as relevant to the effectiveness of protocolized weaning from mechanical ventilation.
Data collection and analysis:
At each stage, two review authors undertook designated tasks, with the results shared amongst the wider team for discussion and final development. We independently reviewed all retrieved titles, abstracts and full papers for inclusion, and independently extracted selected data from included studies. We used the findings of the included studies to develop a new set of analytic themes focused on the barriers and facilitators to the use of protocols, and further refined them to produce a set of summary statements. We used the Confidence in the Evidence from Reviews of Qualitative Research (CERQual) framework to arrive at a final assessment of the overall confidence of the evidence used in the synthesis. We included all studies but undertook two sensitivity analyses to determine how the removal of certain bodies of evidence impacted on the content and confidence of the synthesis. We deployed a logic model to integrate the findings of the qualitative evidence synthesis with those of the Cochrane effectiveness reviews.
Main results:
We included 11 studies in our synthesis, involving 267 participants (one study did not report the number of participants). Five more studies are awaiting classification and will be dealt with when we update the review.
The quality of the evidence was mixed; of the 35 summary statements, we assessed 17 as ‘low’, 13 as ‘moderate’ and five as ‘high’ confidence. Our synthesis produced nine analytical themes, which report potential barriers and facilitators to the use of protocols. The themes are: the need for continual staff training and development; clinical experience as this promotes felt and perceived competence and confidence to wean; the vulnerability of weaning to disparate interprofessional working; an understanding of protocols as militating against a necessary proactivity in clinical practice; perceived nursing scope of practice and professional risk; ICU structure and processes of care; the ability of protocols to act as a prompt for shared care and consistency in weaning practice; maximizing the use of protocols through visibility and ease of implementation; and the ability of protocols to act as a framework for communication with parents.
Authors' conclusions:
There is a clear need for weaning protocols to take account of the social and cultural environment in which they are to be implemented. Irrespective of its inherent strengths, a protocol will not be used if it does not accommodate these complexities. In terms of protocol development, comprehensive interprofessional input will help to ensure broad-based understanding and a sense of ‘ownership’. In terms of implementation, all relevant ICU staff will benefit from general weaning as well as protocol-specific training; not only will this help secure a relevant clinical knowledge base and operational understanding, but will also demonstrate to others that this knowledge and understanding is in place. In order to maximize relevance and acceptability, protocols should be designed with the patient profile and requirements of the target ICU in mind. Predictably, an under-resourced ICU will impact adversely on protocol implementation, as staff will prioritize management of acutely deteriorating and critically-ill patients.
Abstract:
A system of software and hardware that combines signal processing with contact microphones, using normally inaudible body sounds (heartbeat/pulse, respiration, and internal sounds from the vocal tract that can be heard internally by the performer but not externally by others) to drive resonant filters. Performance at SARC Sonic Lab, Belfast, 19 February 2015, in collaboration with Birgit Ulher.
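As a minimal sketch of the kind of resonant filter such body signals might drive, the code below runs a contact-microphone stand-in (sparse noise impulses) through a small bank of two-pole resonators, each of which rings at one frequency. The sample rate, tuning frequencies, pole radius and excitation are all hypothetical and are not details of the actual performance system.

```python
# Minimal sketch: sparse impulses (standing in for contact-microphone body
# sounds) exciting a bank of two-pole resonant filters. All values invented.
import numpy as np

FS = 44100
FREQS = [110.0, 220.0, 330.0]    # hypothetical resonance frequencies (Hz)
R = 0.999                        # pole radius: closer to 1 -> longer ring

def resonate(x, f0, fs=FS, r=R):
    """Two-pole resonator: y[n] = b0*x[n] + 2r*cos(w0)*y[n-1] - r^2*y[n-2]."""
    w0 = 2 * np.pi * f0 / fs
    b0 = (1 - r * r) * np.sin(w0)        # rough peak-gain normalisation
    y = np.zeros_like(x)
    y1 = y2 = 0.0
    for n, xn in enumerate(x):
        y[n] = b0 * xn + 2 * r * np.cos(w0) * y1 - r * r * y2
        y2, y1 = y1, y[n]
    return y

# Stand-in for a contact-microphone excitation: sparse noise "thumps".
rng = np.random.default_rng(0)
mic = rng.standard_normal(FS) * (rng.random(FS) < 0.001)    # one second of sparse impulses
out = sum(resonate(mic, f) for f in FREQS)                  # mix of ringing bands
print(out.shape)
```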