995 results for Filters methods


Relevance:

30.00%

Publisher:

Abstract:

The cooled infrared filters and dichroic beam splitters manufactured for the Mid-Infrared Instrument are key optical components for the selection and isolation of wavelengths in the study of the astrophysical properties of stars, galaxies, and planetary objects. We describe the spectral design and manufacture of the precision cooled filter coatings for the spectrometer (7 K) and imager (9 K). Details of the design methods used to achieve the spectral requirements, the selection of thin-film materials, the deposition technique, and testing are presented together with the optical layout of the instrument. (C) 2008 Optical Society of America.

Relevance:

30.00%

Publisher:

Abstract:

This paper describes the spectral design and manufacture of the narrow bandpass filters and 6-18 µm broadband antireflection coatings for the 21-channel NASA EOS-AURA High Resolution Dynamics Limb Sounder (HIRDLS). A method of combining the measured spectral characteristics of each filter and antireflection coating with the spectral response of the other optical elements in the instrument, to obtain a predicted system throughput response, is presented. The design methods used to define the filter and coating spectral requirements, the choice of filter materials, the multilayer designs, and the deposition techniques are discussed.
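
The combination step described above reduces, in essence, to multiplying the measured spectral response of every element in a channel on a common wavelength grid. The sketch below is illustrative only; it is not the HIRDLS processing code, and the function name, common grid, and resampling step are assumptions:

import numpy as np

# Minimal sketch: predicted system throughput as the product of the measured
# spectral responses of all optical elements in a channel (bandpass filter,
# antireflection coatings, mirrors, detector, ...), resampled onto a common grid.
def system_throughput(wavelength_grid, components):
    # components: iterable of (wavelength, response) measurement pairs
    throughput = np.ones_like(wavelength_grid, dtype=float)
    for wavelength, response in components:
        throughput *= np.interp(wavelength_grid, wavelength, response)
    return throughput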

Relevance:

30.00%

Publisher:

Abstract:

The High Resolution Dynamics Limb Sounder is described, with particular reference to the atmospheric measurements to be made and the rationale behind the measurement strategy. The demands this strategy places on the filters to be used in the instrument, and the designs to which this leads, are described. A second set of filters at an intermediate image plane, used to reduce "Ghost Imaging", is discussed together with their required spectral properties. A method is described for combining the spectral characteristics of the primary and secondary filters in each channel with the spectral response of the detectors and other optical elements to obtain the system spectral response, weighted appropriately for the Planck function and atmospheric limb absorption. This method is used to demonstrate whether the out-of-band spectral blocking requirement for a channel is met, and an example calculation shows how the blocking is built up for a representative channel. Finally, the techniques used to produce filters of the necessary sub-millimetre sizes are discussed, together with the testing methods and procedures used to assess environmental durability and establish space-flight quality.
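
A Planck-weighted blocking check of this kind can be sketched numerically as follows. This is not the HIRDLS analysis code; the scene temperature, the in-band window, and the simple numerical integration are assumptions made purely for illustration:

import numpy as np

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
KB = 1.381e-23  # Boltzmann constant (J/K)

def planck(wavelength_m, temperature_k):
    # Black-body spectral radiance at the given wavelength and temperature.
    return (2.0 * H * C**2 / wavelength_m**5) / np.expm1(H * C / (wavelength_m * KB * temperature_k))

def out_of_band_fraction(wavelength_m, system_response, in_band, temperature_k=250.0):
    # Ratio of Planck-weighted out-of-band signal to in-band signal; the blocking
    # requirement is met when this ratio falls below the specified limit.
    weight = system_response * planck(wavelength_m, temperature_k)
    lo, hi = in_band
    in_mask = (wavelength_m >= lo) & (wavelength_m <= hi)
    d_lambda = np.gradient(wavelength_m)
    total = np.sum(weight * d_lambda)
    in_signal = np.sum(weight[in_mask] * d_lambda[in_mask])
    return (total - in_signal) / in_signal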

Relevance:

30.00%

Publisher:

Abstract:

In this study, we compare two different cyclone-tracking algorithms to detect North Atlantic polar lows, which are very intense mesoscale cyclones. Both approaches include spatial filtering, detection, tracking, and constraints specific to polar lows. The first method uses digital bandpass-filtered mean sea level pressure (MSLP) fields in the spatial range of 200–600 km and is especially designed for polar lows. The second method also uses a bandpass filter but is based on the discrete cosine transform (DCT) and can be applied to MSLP and vorticity fields. The latter was originally designed for cyclones in general and has been adapted to polar lows for this study. Both algorithms are applied to the same regional climate model output fields from October 1993 to September 1995, produced by dynamical downscaling of the NCEP/NCAR reanalysis data. Comparisons between the two methods show that different filters lead to different numbers and locations of tracks. The DCT is more precise in scale separation than the digital filter, and the results of this study suggest that it is better suited to the bandpass filtering of MSLP fields. The detection and tracking parts also influence the number of tracks, although less critically. After a selection process that applies criteria to identify tracks of potential polar lows, differences between the two methods are still visible, though the major systems are identified by both.
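
The DCT-based bandpass step can be illustrated with a short sketch. This is not the implementation used in the study; the grid spacing, the mode-to-wavelength mapping convention, and the hard spectral cut-off are assumptions for illustration only:

import numpy as np
from scipy.fft import dctn, idctn

def dct_bandpass(field, dx_km, wl_min_km=200.0, wl_max_km=600.0):
    # Bandpass-filter a 2-D MSLP (or vorticity) field by zeroing DCT modes
    # whose equivalent wavelength falls outside [wl_min_km, wl_max_km].
    ny, nx = field.shape
    coeffs = dctn(field, norm="ortho")

    # Spatial frequency (cycles per km) associated with each DCT mode.
    fy = np.arange(ny)[:, None] / (2.0 * ny * dx_km)
    fx = np.arange(nx)[None, :] / (2.0 * nx * dx_km)
    freq = np.sqrt(fy**2 + fx**2)
    with np.errstate(divide="ignore"):
        wavelength_km = np.where(freq > 0.0, 1.0 / freq, np.inf)

    keep = (wavelength_km >= wl_min_km) & (wavelength_km <= wl_max_km)
    return idctn(coeffs * keep, norm="ortho")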

Relevance:

30.00%

Publisher:

Abstract:

Bloom filters are a data structure for storing data in a compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents a yes-no Bloom filter, which is a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, which is another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that it recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognise no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Exploiting the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed that uses a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best in comparison with a number of heuristics as well as the CPLEX built-in solver (B&B), and it is what can be recommended for use in yes-no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
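
The query rule of a yes-no Bloom filter is simple to state in code. The sketch below is a minimal illustration only; the bit-array sizes, hash scheme, and class names are assumptions, and it omits the optimized selection of false positives that the paper is actually about:

from hashlib import sha256

class BloomFilter:
    # Plain Bloom filter with k hash functions derived from salted SHA-256 digests.
    def __init__(self, num_bits, num_hashes):
        self.num_bits, self.num_hashes = num_bits, num_hashes
        self.bits = bytearray(num_bits)

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

class YesNoBloomFilter:
    # Accept an item only if the yes-filter matches and the no-filter does not,
    # so false positives stored in the no-filter are rejected.
    def __init__(self, yes_filter, no_filter):
        self.yes, self.no = yes_filter, no_filter

    def __contains__(self, item):
        return item in self.yes and item not in self.no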

Relevance:

30.00%

Publisher:

Abstract:

In theory, our research questions should drive our choice of method. In practice, we know this is not always the case. At various stages of the research process different factors may apply to restrict the choice of research method. These filters might include a series of inter-related factors such as the political context of the research, the disciplinary affiliation of the researchers, the research setting and peer-review. We suggest that as researchers conduct research and encounter the various filters they come to know the methods that are more likely to survive the filtering process. In future projects they may favour these methods. Public health problems and research questions may increasingly be framed in the terms that can be addressed by a restricted array of methods. Innovative proposals - where new methods are applied to old problems, old methods to new areas of inquiry and high-quality interdisciplinary research - may be unlikely to survive the processes of filtering. This may skew the public health knowledge base, limiting public health action. We argue that we must begin to investigate the process of research. We need to document how and why particular methods are chosen to investigate particular sets of public health problems. This will help us understand how we know what we know in public health and help us plan how we may more appropriately draw upon a range of research methods.

Relevance:

30.00%

Publisher:

Abstract:

Background: The aim of the ACE-Obesity study was to determine the economic credentials of interventions which aim to prevent unhealthy weight gain in children and adolescents. We have reported elsewhere on the modelled effectiveness of 13 obesity prevention interventions in children. In this paper, we report on the cost results and associated methods, together with the innovative approach to priority setting that underpins the ACE-Obesity study.

Methods: The Assessing Cost-Effectiveness (ACE) approach combines technical rigour with 'due process' to facilitate evidence-based policy analysis. Technical rigour was achieved through the use of standardised evaluation methods, a research team that assembled the best available evidence, and extensive uncertainty analysis. Cost estimates were based on pathway analysis, with resource usage estimated for the interventions and their 'current practice' comparator, as well as associated cost offsets. Due process was achieved through involvement of stakeholders, consensus decisions informed by briefing papers, and 2nd stage filter analysis that captures broader factors that influence policy judgements in addition to cost-effectiveness results. The 2nd stage filters agreed by stakeholders were 'equity', 'strength of the evidence', 'feasibility of implementation', 'acceptability to stakeholders', 'sustainability' and 'potential for side-effects'.

Results: The intervention costs varied considerably, both in absolute terms (from cost saving [6 interventions] to in excess of AUD50m per annum) and when expressed as a 'cost per child' estimate (from <AUD1.0 [reduction of TV advertising of high fat foods/high sugar drinks] to >AUD31,000 [laparoscopic adjustable gastric banding for morbidly obese adolescents]). High costs per child reflected cost structure, target population and/or under-utilisation.

Conclusions: The use of consistent methods enables valid comparison of potential intervention costs and cost offsets for each of the interventions. ACE-Obesity informs policy-makers about cost-effectiveness, health impact, affordability and 2nd stage filters for important options for preventing unhealthy weight gain in children. In related articles, the cost-effectiveness results and 2nd stage filter considerations for each intervention assessed will be presented and analysed.

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents two novel algorithms for blind channel equalization (BCE) and blind source separation (BSS). Besides these, a general framework for global convergence analysis is proposed. Finally, the open problem of equalising a non-irreducible system is answered by the algorithm proposed in this thesis.

Relevance:

30.00%

Publisher:

Abstract:

Recently, many scholars have made use of fusions of filters to enhance the performance of spam filtering. In the past several years, a lot of effort has been devoted to different ensemble methods to achieve better performance. In practice, how to select appropriate ensemble methods for spam filtering remains an unsolved problem. In this paper, we investigate this problem by designing a framework to compare the performance of various ensemble methods. This is helpful for researchers seeking to fight spam email more effectively in applied systems. The experimental results indicate that online methods perform well on accuracy, while off-line batch methods are strongly influenced by the size of the data set. When a large data set is involved, the performance of off-line batch methods is not on par with online methods, and within the framework of online methods, a parallel ensemble performs better when only complex filters are used.
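
As a point of reference only (the paper's framework, filter set, and combination rules are not specified here), a parallel ensemble of spam filters can be sketched as a weighted-majority vote with an online weight update:

# Illustrative sketch of a parallel online ensemble of spam filters combined by
# weighted majority voting; the filters, weights, and update rule are assumptions,
# not the ensemble methods evaluated in the paper.

def ensemble_classify(filters, weights, message):
    # Each filter returns True for spam; filters vote +1 (spam) or -1 (ham).
    score = sum(w * (1.0 if f(message) else -1.0) for f, w in zip(filters, weights))
    return score > 0.0

def ensemble_update(filters, weights, message, is_spam, beta=0.9):
    # Online weighted-majority update: shrink the weight of every filter
    # that misclassified the labelled message.
    return [w * (beta if f(message) != is_spam else 1.0)
            for f, w in zip(filters, weights)]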

Relevance:

30.00%

Publisher:

Abstract:

We present a comparative evaluation of state-of-the-art algorithms for detecting pedestrians in low frame rate and low resolution footage acquired by mobile sensors. Four approaches are compared: a) the Histogram of Oriented Gradient (HoG) approach [1]; b) a new histogram feature formed by the weighted sum of both the gradient magnitude and the responses of a set of elongated Gaussian filters [2] corresponding to the quantised orientation, called the Histogram of Oriented Gradient Banks (HoGB) approach; c) the codebook-based HoG feature with a branch-and-bound (efficient subwindow search) algorithm [3]; and d) the codebook-based HoGB approach. Results show that the HoG-based detector achieves the highest performance in terms of true positive detections, the HoGB approach has the lowest false positive rate whilst maintaining a comparable true positive rate to the HoG, and the codebook approaches allow computationally efficient detection.
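
For orientation, the baseline HoG pipeline in such comparisons typically couples a dense HoG descriptor with a linear SVM over sliding windows. The sketch below is not the evaluated implementation; the window size, HoG parameters, and training interface are assumed, common defaults:

import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_descriptor(window):
    # window: 128x64 grey-scale pedestrian-sized image patch.
    return hog(window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def train_detector(pos_windows, neg_windows):
    # Linear SVM over HoG descriptors of positive and negative training patches
    # (both arguments are lists of image patches).
    X = np.array([hog_descriptor(w) for w in pos_windows + neg_windows])
    y = np.array([1] * len(pos_windows) + [0] * len(neg_windows))
    return LinearSVC(C=0.01).fit(X, y)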

Relevance:

30.00%

Publisher:

Abstract:

The idea of matching the resources spent in the acquisition and encoding of natural signals strictly to their intrinsic information content has driven nearly a decade of research under the name of compressed sensing. In this doctoral dissertation we develop some extensions and improvements upon this technique's foundations, by modifying the random sensing matrices on which the signals of interest are projected to achieve different objectives. Firstly, we propose two methods for the adaptation of sensing matrix ensembles to the second-order moments of natural signals. These techniques leverage the maximisation of different proxies for the quantity of information acquired by compressed sensing, and are efficiently applied in the encoding of electrocardiographic tracks with minimum-complexity digital hardware. Secondly, we focus on the possibility of using compressed sensing as a method to provide a partial, yet cryptanalysis-resistant form of encryption; in this context, we show how a random matrix generation strategy with a controlled amount of perturbations can be used to distinguish between multiple user classes with different quality of access to the encrypted information content. Finally, we explore the application of compressed sensing in the design of a multispectral imager, by implementing an optical scheme that entails a coded aperture array and Fabry-Pérot spectral filters. The signal recoveries obtained by processing real-world measurements show promising results that leave room for improvement in the sensing-matrix calibration of the devised imager.
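
As background to the compressed sensing setting the dissertation builds on, a minimal acquisition-and-recovery sketch (with an unadapted Gaussian sensing matrix and orthogonal matching pursuit; the sizes and sparsity level are arbitrary assumptions) looks like this:

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                     # signal length, measurements, sparsity (assumed)

# k-sparse test signal.
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

# Random Gaussian sensing matrix and compressed measurements y = Phi @ x.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

# Sparse recovery by orthogonal matching pursuit.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi, y)
x_hat = omp.coef_
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))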

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: To demonstrate the feasibility of direct angioscopic visualization of an optional inferior vena cava (IVC) filter in situ and during retrieval. MATERIALS AND METHODS: Angioscopy was used for direct visualization of optional IVC filters in six sheep. Cavograms were obtained before the filters were retrieved. After successful filter retrieval, segmental IVC perfusion was performed to evaluate filter retrieval-related damage to the IVC wall. To this end, all branch vessels were ligated before the IVC segment was flushed with normal saline solution until it was fully distended. Then, the inflow was terminated and the IVC segment observed for deflation. Subsequently, the IVC was harvested en bloc, dissected, and inspected macroscopically. RESULTS: The visibility of IVC filters at angioscopy was excellent. During the retrieval procedure, filter collapse and retraction into the sheath were clearly demonstrated. Angioscopy provided additional information to that obtained with cavography, demonstrating adherent material in three filters. Three filters in place for more than 2 months could not be retrieved because the filter legs were incorporated into the IVC wall. After filter retrieval, there was no perforation at segmental IVC perfusion. At macroscopic inspection of the IVC lumen, a small piece of detached endothelium was found in one animal. CONCLUSION: Angioscopy enabled the direct evaluation of optional IVC filters in situ and during retrieval. Compared with cavography, angioscopy provided additional information about the filter in situ and the retrieval procedure. Future applications of this technique could include studies of filter migration, compression, and clot-trapping efficacy.

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: To determine the incidence of venous thromboembolism (VTE) after removal of retrievable inferior vena cava (IVC) filters. MATERIALS AND METHODS: A retrospective study was conducted of 67 patients who underwent 72 consecutive filter retrievals at a single institution. Data collected included VTE status at the time of filter placement, anticoagulant medications at the time of filter retrieval and afterward, new or recurrent VTE after filter removal, and insertion of subsequent filters. Patient questionnaires were completed in 50 cases, and chart review was performed for all patients. RESULTS: At the time of filter placement, 30 patients had documented VTE, 19 had a history of treated VTE, and 23 were at risk for but had neither previous nor present VTE. Mean duration of follow-up after filter removal was 20.6 ± 10.9 months. A total of 52 patients (57 filters) received anticoagulation and/or antiplatelet medications after filter removal. There were two documented episodes of recurrent deep vein thrombosis (2.8% of filters removed), both in patients who had VTE at the time of filter placement and underwent therapeutic anticoagulation at the time of filter removal. One of these patients (1.4% of filters removed) also experienced pulmonary embolism. Of the 23 patients without VTE when the filter was placed, none developed VTE after filter removal. Four patients (5.5% of filters removed) required subsequent permanent filters: three for complications of anticoagulation and one for failure of anticoagulation. CONCLUSIONS: VTE was rare after removal of IVC filters, but was most likely to occur in patients who had VTE at the time of filter placement.

Relevance:

30.00%

Publisher:

Abstract:

Transformer protection is one of the most challenging applications within the power system protective relay field. Transformers with a capacity rating exceeding 10 MVA are usually protected using differential current relays. Transformers are an aging and vulnerable bottleneck in the present power grid; therefore, quick fault detection and corresponding transformer de-energization are the key elements in minimizing transformer damage. Present differential current relays are based on digital signal processing (DSP). They combine DSP phasor estimation and protective-logic-based decision making. The limitations of existing DSP-based differential current relays must be identified to determine the best protection options for sensitive and quick fault detection. The development, implementation, and evaluation of a DSP differential current relay is detailed. The overall goal is to make fault detection faster without compromising secure and safe transformer operation. A detailed background on the DSP differential current relay is provided. Then different DSP phasor estimation filters are implemented and evaluated based on their ability to extract the desired frequency components from the measured current signal quickly and accurately. The main focus of the phasor estimation evaluation is to identify the difference between non-recursive and recursive filtering methods. Then the protective logic of the DSP differential current relay is implemented and the required settings made in accordance with the transformer application. Finally, the DSP differential current relay is evaluated using available transformer models within the ATP simulation environment. Recursive filtering methods were found to have a significant advantage over non-recursive filtering methods when evaluated individually and when applied in the DSP differential relay. Recursive filtering methods can be up to 50% faster than non-recursive methods, but can cause a false trip due to overshoot if speed is the only objective. The relay sensitivity, however, is independent of the filtering method and depends on the settings of the relay's differential characteristic (pickup threshold and percent slope).
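
To make the recursive/non-recursive distinction concrete, a common full-cycle DFT phasor estimator can be written either way. The sketch below is illustrative only, not the relay implementation described above; the samples-per-cycle count and the particular recursive update convention are assumptions:

import numpy as np

N = 16                                   # samples per power-frequency cycle (assumed)

def phasor_nonrecursive(window):
    # Full recomputation: fundamental-frequency phasor from the latest N samples.
    n = np.arange(N)
    return (2.0 / N) * np.sum(window * np.exp(-2j * np.pi * n / N))

def phasor_recursive(prev_phasor, new_sample, old_sample, k):
    # One common recursive full-cycle DFT update: when sample k arrives and the
    # sample taken N steps earlier leaves the window, only their difference is used.
    return prev_phasor + (2.0 / N) * (new_sample - old_sample) * np.exp(-2j * np.pi * k / N)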

Relevance:

30.00%

Publisher:

Abstract:

The authors describe the design, fabrication, and testing of a passive wireless sensor platform utilizing low-cost commercial surface acoustic wave filters and sensors. Polyimide and polyethylene terephthalate sheets are used as substrates to create a flexible sensor tag that can be applied to curved surfaces. A microfabricated antenna is integrated on the substrate in order to create a compact form factor. The sensor tags are fabricated using 315 MHz surface acoustic wave filters and photodiodes and tested with the aid of a fiber-coupled tungsten lamp. Microwave energy transmitted from a network analyzer is used to interrogate the sensor tag. Due to an electrical impedance mismatch at the SAW filter and sensor, energy is reflected at the sensor load and reradiated from the integrated antenna. By selecting sensors that change electrical impedance based on environmental conditions, the sensor state can be inferred through measurement of the reflected energy profile. Testing has shown that a calibrated system utilizing this type of sensor tag can detect distinct light levels wirelessly and passively. The authors also demonstrate simultaneous operation of two tags with different center passbands that detect light. Ranging tests show that the sensor tags can operate at a distance of at least 3.6 m.
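
The impedance-mismatch mechanism can be quantified with the standard voltage reflection coefficient. The short calculation below is illustrative only; the 50-ohm system impedance and the example load impedances are assumptions, not values from the paper:

def reflection_coefficient(z_load, z0=50.0):
    # Voltage reflection coefficient at a load mismatched to the system impedance z0;
    # |Gamma|^2 is the fraction of incident power reflected back toward the antenna.
    return (z_load - z0) / (z_load + z0)

for z_load in (200.0, 75.0, 50.0):       # hypothetical sensor impedances (ohms)
    gamma = reflection_coefficient(z_load)
    print(f"Z_load = {z_load:5.1f} ohm -> reflected power fraction {abs(gamma)**2:.3f}")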