995 results for Filters methods
Abstract:
Constrained nonlinear optimization problems are usually solved using penalty or barrier methods combined with unconstrained optimization methods. Another alternative for solving constrained nonlinear optimization problems is the filters method. Filters methods, introduced by Fletcher and Leyffer in 2002, have been widely used in several areas of constrained nonlinear optimization. These methods treat the optimization problem as a bi-objective problem that attempts to minimize the objective function and a continuous function that aggregates the constraint violation functions. Audet and Dennis presented the first filters method for derivative-free nonlinear programming, based on pattern search methods. Motivated by this work, we have developed a new direct search method, based on simplex methods, for general constrained optimization that combines the features of the simplex method and the filters method. This work presents a new variant of these methods which combines the filters method with other direct search methods, and some alternatives to aggregate the constraint violation functions are proposed.
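For illustration, the following minimal Python sketch shows the core acceptance rule common to filter methods, assuming each filter entry is a pair (h, f) of aggregated constraint violation and objective value; the function names and sample values are illustrative and not taken from the paper.

# Minimal sketch of the filter acceptance rule: a trial point, summarized by the
# pair (h, f) of aggregated constraint violation and objective value, is accepted
# only if no stored pair dominates it. Names and sample values are illustrative.

def dominates(a, b):
    """Pair a = (h_a, f_a) dominates b = (h_b, f_b) if it is no worse in both."""
    return a[0] <= b[0] and a[1] <= b[1]

def acceptable(trial, filter_pairs):
    """A trial pair is acceptable when no filter entry dominates it."""
    return not any(dominates(entry, trial) for entry in filter_pairs)

def add_to_filter(trial, filter_pairs):
    """Insert an accepted pair and drop entries it now dominates."""
    kept = [entry for entry in filter_pairs if not dominates(trial, entry)]
    kept.append(trial)
    return kept

# Example: start from an empty filter and test three trial pairs (h, f).
flt = []
for trial in [(0.5, 3.0), (0.6, 4.0), (0.1, 2.5)]:
    if acceptable(trial, flt):
        flt = add_to_filter(trial, flt)
print(flt)  # [(0.1, 2.5)]: the second pair was rejected, the third replaced the first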
Abstract:
PURPOSE: We describe the results of a preliminary prospective study using different recently developed temporary and retrievable inferior vena cava (IVC) filters. METHODS: Fifty temporary IVC filters (Gunther, Gunther Tulip, Antheor) were inserted in 47 patients when the required period of protection against pulmonary embolism (PE) was estimated to be less than 2 weeks. The indications were documented deep vein thrombosis (DVT) and temporary contraindications for anticoagulation, a high risk for PE, and PE despite DVT prophylaxis. RESULTS: Filters were removed 1-12 days after placement and nine (18%) had captured thrombi. Complications were one PE during and after removal of a filter, two minor filter migrations, and one IVC thrombosis. CONCLUSION: Temporary filters are effective in trapping clots and protecting against PE, and the complication rate does not exceed that of permanent filters. They are an alternative when protection from PE is required temporarily, and should be considered in patients with a normal life expectancy.
Abstract:
Multicast is one method of transferring information in IPv4-based communication; the other methods are unicast and broadcast. Multicast is based on the group concept, where data is sent from one point to a group of receivers, which saves bandwidth remarkably. Group members express an interest in receiving data by using the Internet Group Management Protocol, and traffic is received only by those receivers who want it. The most common multicast applications are media streaming, surveillance and data collection applications. There are many data security methods to protect unicast communication, which is the most common transfer method on the Internet. Popular data security methods are encryption, authentication, access control and firewalls. Characteristics of multicast such as dynamic membership mean that these data security mechanisms cannot be used to protect multicast traffic. Nowadays the protection of multicast traffic is possible via traffic restrictions, where traffic is allowed to propagate only to certain areas. One way to implement this is packet filters. The methods tested in this thesis are MVR, IGMP Filtering and access control lists, which worked as expected. These methods restrict the propagation of multicast but are laborious to configure on a large scale. There are also a few manufacturer-specific products that make it possible to encrypt multicast traffic. These separate products are expensive and mainly intended to protect video transmissions via satellite. Research on multicast security has been going on for several years, and the security methods resulting from this research are getting ready. An IETF working group called MSEC is standardizing these security methods. The target of this working group is to standardize data security protocols for multicast during 2004.
Abstract:
Aerosol samples were collected at a pasture site in the Amazon Basin as part of the project LBA-SMOCC-2002 (Large-Scale Biosphere-Atmosphere Experiment in Amazonia - Smoke Aerosols, Clouds, Rainfall and Climate: Aerosols from Biomass Burning Perturb Global and Regional Climate). Sampling was conducted during the late dry season, when the aerosol composition was dominated by biomass burning emissions, especially in the submicron fraction. A 13-stage Dekati low-pressure impactor (DLPI) was used to collect particles with nominal aerodynamic diameters (D_p) ranging from 0.03 to 10 µm. Gravimetric analyses of the DLPI substrates and filters were performed to obtain aerosol mass concentrations. The concentrations of total, apparent elemental, and organic carbon (TC, EC_a, and OC) were determined using thermal and thermal-optical analysis (TOA) methods. A light transmission method (LTM) was used to determine the concentration of equivalent black carbon (BC_e), or the absorbing fraction at 880 nm, for the size-resolved samples. During the dry period, due to the pervasive presence of fires in the region upwind of the sampling site, concentrations of fine aerosols (D_p < 2.5 µm: average 59.8 µg m⁻³) were higher than those of coarse aerosols (D_p > 2.5 µm: 4.1 µg m⁻³). Carbonaceous matter, estimated as the sum of the particulate organic matter (i.e., OC x 1.8) plus BC_e, comprised more than 90% of the total aerosol mass. Concentrations of EC_a (estimated by thermal analysis with a correction for charring) and BC_e (estimated by LTM) averaged 5.2 ± 1.3 and 3.1 ± 0.8 µg m⁻³, respectively. The determination of EC was improved by extracting water-soluble organic material from the samples, which reduced the average light absorption Ångström exponent of particles in the size range of 0.1 to 1.0 µm from >2.0 to approximately 1.2. The size-resolved BC_e measured by the LTM showed a clear maximum between 0.4 and 0.6 µm in diameter. The concentrations of OC and BC_e varied diurnally during the dry period, and this variation is related to diurnal changes in boundary layer thickness and in fire frequency.
Abstract:
In this paper, we propose an approach to the transient and steady-state analysis of the affine combination of one fast and one slow adaptive filters. The theoretical models are based on expressions for the excess mean-square error (EMSE) and cross-EMSE of the component filters, which allows their application to different combinations of algorithms, such as least mean-squares (LMS), normalized LMS (NLMS), and constant modulus algorithm (CMA), considering white or colored inputs and stationary or nonstationary environments. Since the desired universal behavior of the combination depends on the correct estimation of the mixing parameter at every instant, its adaptation is also taken into account in the transient analysis. Furthermore, we propose normalized algorithms for the adaptation of the mixing parameter that exhibit good performance. Good agreement between analysis and simulation results is always observed.
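As an illustration of the kind of structure analyzed here, the sketch below combines a fast and a slow LMS filter through an affine mixing parameter adapted by a stochastic-gradient rule; the system-identification setup, step sizes and signals are assumptions made for the example, not the configurations studied in the paper.

# Sketch of an affine combination of a fast and a slow LMS filter in a simple
# system-identification setup; all constants and signals are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, M = 5000, 8                      # samples, filter length
w_true = rng.standard_normal(M)     # unknown plant
x = rng.standard_normal(N + M)
d = np.array([w_true @ x[n:n+M][::-1] for n in range(N)]) + 0.01 * rng.standard_normal(N)

w1 = np.zeros(M); mu1 = 0.05        # fast LMS
w2 = np.zeros(M); mu2 = 0.005       # slow LMS
lam, mu_lam = 0.5, 0.1              # affine mixing parameter and its step size

for n in range(N):
    u = x[n:n+M][::-1]              # regressor (most recent sample first)
    y1, y2 = w1 @ u, w2 @ u
    e1, e2 = d[n] - y1, d[n] - y2
    w1 += mu1 * e1 * u              # independent LMS updates
    w2 += mu2 * e2 * u
    y = lam * y1 + (1.0 - lam) * y2 # affine combination (lam is unconstrained)
    e = d[n] - y
    lam += mu_lam * e * (y1 - y2)   # stochastic-gradient update of the mixer

print("final mixing parameter:", round(lam, 3))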
Abstract:
As is well known, Hessian-based adaptive filters (such as the recursive least-squares algorithm (RLS) for supervised adaptive filtering, or the Shalvi-Weinstein algorithm (SWA) for blind equalization) converge much faster than gradient-based algorithms [such as the least-mean-squares algorithm (LMS) or the constant-modulus algorithm (CMA)]. However, when the problem is tracking a time-variant filter, the issue is not so clear-cut: there are environments for which each family presents better performance. Given this, we propose the use of a convex combination of algorithms of different families to obtain an algorithm with superior tracking capability. We show the potential of this combination and provide a unified theoretical model for the steady-state excess mean-square error for convex combinations of gradient- and Hessian-based algorithms, assuming a random-walk model for the parameter variations. The proposed model is valid for algorithms of the same or different families, and for supervised (LMS and RLS) or blind (CMA and SWA) algorithms.
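The sketch below illustrates a convex combination of a gradient-based filter (LMS) and a Hessian-based filter (RLS) using the common sigmoid parameterization of the mixing weight; the stationary setup and constants are illustrative assumptions and do not reproduce the tracking scenarios analyzed in the paper.

# Sketch of a convex combination of LMS (gradient-based) and RLS (Hessian-based)
# filters with a sigmoid-mapped mixing weight; setup and constants are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N, M = 3000, 4
w_true = rng.standard_normal(M)
x = rng.standard_normal(N + M)
d = np.array([w_true @ x[n:n+M][::-1] for n in range(N)]) + 0.01 * rng.standard_normal(N)

w_lms = np.zeros(M); mu = 0.05
w_rls = np.zeros(M); P = 100.0 * np.eye(M); beta = 0.995   # RLS forgetting factor
a, mu_a = 0.0, 1.0                                         # lam = sigmoid(a)

for n in range(N):
    u = x[n:n+M][::-1]
    y1, y2 = w_lms @ u, w_rls @ u
    e1, e2 = d[n] - y1, d[n] - y2
    w_lms += mu * e1 * u                                   # LMS (gradient) update
    k = P @ u / (beta + u @ P @ u)                         # RLS gain
    w_rls += k * e2
    P = (P - np.outer(k, u @ P)) / beta                    # RLS inverse-correlation update
    lam = 1.0 / (1.0 + np.exp(-a))                         # convex mixing weight in (0, 1)
    e = d[n] - (lam * y1 + (1.0 - lam) * y2)
    a += mu_a * e * (y1 - y2) * lam * (1.0 - lam)          # gradient step on the auxiliary variable

print("final mixing weight:", round(1.0 / (1.0 + np.exp(-a)), 3))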
Abstract:
Current shrimp pond management practices generally result in elevated concentrations of nutrients, suspended solids, bacteria and phytoplankton compared with the influent water. Concerns about adverse environmental impacts caused by discharging pond effluent directly into adjacent waterways have prompted the search for cost-effective methods of effluent treatment. One potential method of effluent treatment is the use of ponds or raceways stocked with plants or animals that act as natural biofilters by removing waste nutrients. In addition to improving effluent water quality prior to discharge, the use of natural biofilters provides a method for capturing otherwise wasted nutrients. This study examined the potential of the native oyster, Saccostrea commercialis (Iredale and Roughley) and macroalgae, Gracilaria edulis (Gmelin) Silva to improve effluent water quality from a commercial Penaeus japonicus (Bate) shrimp farm. A system of raceways was constructed to permit recirculation of the effluent through the oysters to maximize the filtration of bacteria, phytoplankton and total suspended solids. A series of experiments was conducted to test the ability of oysters and macroalgae to improve effluent water quality in a flow-through system compared with a recirculating system. In the flow-through system, oysters reduced the concentration of bacteria to 35% of the initial concentration, chlorophyll a to 39%, total particulates (2.28-35.2 µm) to 29%, total nitrogen to 66% and total phosphorus to 56%. Under the recirculating flow regime, the ability of the oysters to improve water quality was significantly enhanced. After four circuits, total bacterial numbers were reduced to 12%, chlorophyll a to 4%, and total suspended solids to 16%. Efforts to increase biofiltration by adding additional layers of oyster trays and macroalgae-filled mesh bags resulted in fouling of the lower layers causing the death of oysters and senescence of macroalgae. Supplementary laboratory experiments were designed to examine the effects of high effluent concentrations of suspended particulates on the growth and condition of oysters and macroalgae. The results demonstrated that high concentrations of particulates inhibited growth and reduced the condition of oysters and macroalgae. Allowing the effluent to settle before biofiltration improved growth and reduced signs of stress in the oysters and macroalgae. A settling time of 6 h reduced particulates to a level that prevented fouling of the oysters and macroalgae.
Abstract:
In this work, 14 primary schools in the city of Lisbon, Portugal, followed a questionnaire of the ISAAC (International Study of Asthma and Allergies in Childhood) program in 2009/2010. The questionnaire contained questions to identify children with respiratory diseases (wheeze, asthma and rhinitis). Total particulate matter (TPM) was passively collected inside two classrooms of each of the 14 primary schools. Two types of filter matrices were used to collect TPM: Millipore (Isopore™) polycarbonate and quartz. Three campaigns were selected for the measurement of TPM: spring, autumn and winter. The main difference between the two types of filters was that the mass of collected particles was higher on quartz filters than on polycarbonate filters, even though their correlation was excellent. The highest TPM depositions occurred between October 2009 and March 2010, when related to the proportion of rhinitis. Rhinitis was found to be related to TPM when the data were grouped seasonally and averaged over all the schools. For the data of 2006/2007, the seasonal variation was found to be related to outdoor particle deposition (below 10 μm).
Abstract:
Constrained nonlinear optimization problems can be solved using penalty or barrier functions. This strategy, based on solving unconstrained problems obtained from the original problem, has shown to be effective, particularly when used with direct search methods. An alternative for solving such problems is the filters method. The filters method, introduced by Fletcher and Leyffer in 2002, has been widely used to solve problems of the type mentioned above. These methods use a strategy different from barrier or penalty functions: the latter define a new function that combines the objective function and the constraints, while the filters method treats the optimization problem as a bi-objective problem that minimizes the objective function and a function that aggregates the constraints. Motivated by the work of Audet and Dennis in 2004, which used the filters method with derivative-free algorithms, the authors developed works in which other direct search methods were used, combining their potential with the filters method. More recently, a new variant of these methods was presented, in which some alternative ways of aggregating the constraints for the construction of filters were proposed. This paper presents a variant of the filters method, more robust than the previous ones, that has been implemented with a safeguard procedure in which the values of the objective function and of the constraints are interlinked and not treated completely independently.
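As a small illustration of aggregating constraint violations into a single measure h(x) for use in a filter, the sketch below uses three generic norms; these are common choices and not necessarily the specific alternatives proposed in the paper.

# Sketch of generic ways to aggregate inequality-constraint violations g_i(x) <= 0
# into a single measure h(x); the norms shown are illustrative choices.
import numpy as np

def violations(g_values):
    """Componentwise violation of inequality constraints g_i(x) <= 0."""
    return np.maximum(0.0, np.asarray(g_values, dtype=float))

def h_l1(g_values):   return violations(g_values).sum()                 # sum of violations
def h_l2(g_values):   return np.linalg.norm(violations(g_values))       # Euclidean norm
def h_linf(g_values): return np.max(violations(g_values), initial=0.0)  # worst violation

g = [-0.2, 0.3, 1.1]   # example constraint values at a trial point
print(h_l1(g), h_l2(g), h_linf(g))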
Abstract:
Feature selection is a central problem in machine learning and pattern recognition. On large datasets (in terms of dimension and/or number of instances), using search-based or wrapper techniques can be computationally prohibitive. Moreover, many filter methods based on relevance/redundancy assessment also take a prohibitively long time on high-dimensional datasets. In this paper, we propose efficient unsupervised and supervised feature selection/ranking filters for high-dimensional datasets. These methods use low-complexity relevance and redundancy criteria, applicable to supervised, semi-supervised, and unsupervised learning, and are able to act as pre-processors for computationally intensive methods, focusing their attention on smaller subsets of promising features. The experimental results, with up to 10^5 features, show the time efficiency of our methods, with lower generalization error than state-of-the-art techniques, while being dramatically simpler and faster.
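As an illustration of the general relevance/redundancy filter idea (not the specific low-complexity criteria proposed in the paper), the following sketch greedily ranks features by their correlation with the target, penalized by their average correlation with already-selected features.

# Sketch of a generic supervised relevance-redundancy ranking filter;
# criteria and constants are illustrative, not those of the paper.
import numpy as np

def rank_features(X, y, k):
    """Return the indices of k features, most promising first."""
    Xs = (X - X.mean(0)) / (X.std(0) + 1e-12)
    ys = (y - y.mean()) / (y.std() + 1e-12)
    relevance = np.abs(Xs.T @ ys) / len(y)          # |corr(feature, target)|
    corr = np.abs(Xs.T @ Xs) / len(y)               # |corr(feature, feature)|
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        redundancy = corr[:, selected].mean(axis=1)
        score = relevance - redundancy
        score[selected] = -np.inf                   # do not reselect
        selected.append(int(np.argmax(score)))
    return selected

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 50))
y = X[:, 3] + 0.5 * X[:, 7] + 0.1 * rng.standard_normal(200)
print(rank_features(X, y, 5))                       # features 3 and 7 should rank high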
Abstract:
Oscillations have been increasingly recognized as a core property of neural responses that contribute to spontaneous, induced, and evoked activities within and between individual neurons and neural ensembles. They are considered as a prominent mechanism for information processing within and communication between brain areas. More recently, it has been proposed that interactions between periodic components at different frequencies, known as cross-frequency couplings, may support the integration of neuronal oscillations at different temporal and spatial scales. The present study details methods based on an adaptive frequency tracking approach that improve the quantification and statistical analysis of oscillatory components and cross-frequency couplings. This approach allows for time-varying instantaneous frequency, which is particularly important when measuring phase interactions between components. We compared this adaptive approach to traditional band-pass filters in their measurement of phase-amplitude and phase-phase cross-frequency couplings. Evaluations were performed with synthetic signals and EEG data recorded from healthy humans performing an illusory contour discrimination task. First, the synthetic signals in conjunction with Monte Carlo simulations highlighted two desirable features of the proposed algorithm vs. classical filter-bank approaches: resilience to broad-band noise and oscillatory interference. Second, the analyses with real EEG signals revealed statistically more robust effects (i.e. improved sensitivity) when using an adaptive frequency tracking framework, particularly when identifying phase-amplitude couplings. This was further confirmed after generating surrogate signals from the real EEG data. Adaptive frequency tracking appears to improve the measurements of cross-frequency couplings through precise extraction of neuronal oscillations.
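For reference, the sketch below implements the classical band-pass-plus-Hilbert estimate of phase-amplitude coupling that adaptive frequency tracking is compared against; the frequency bands, synthetic signal and mean-vector-length index are illustrative choices, not the study's pipeline.

# Sketch of the classical band-pass filter approach to phase-amplitude coupling:
# extract a low-frequency phase and a high-frequency envelope via Hilbert
# transforms and measure coupling as the mean-vector length.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0
t = np.arange(0, 10, 1 / fs)
phase_drive = 2 * np.pi * 6 * t                       # 6 Hz "theta" component
signal = np.cos(phase_drive) + (1 + np.cos(phase_drive)) * 0.3 * np.cos(2 * np.pi * 60 * t)
signal += 0.2 * np.random.default_rng(3).standard_normal(t.size)

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(signal, 4, 8, fs)))        # low-frequency phase
amplitude = np.abs(hilbert(bandpass(signal, 50, 70, fs)))    # high-frequency envelope

# Mean-vector-length modulation index (larger means stronger coupling).
mvl = np.abs(np.mean(amplitude * np.exp(1j * phase)))
print("phase-amplitude coupling (MVL):", round(float(mvl), 4))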
Abstract:
The advent of retrievable caval filters was a game changer in the sense that the previously irreversible act of implanting a medical device into the main venous blood stream of the body, which required careful evaluation of the pros and cons prior to execution, suddenly became a "reversible" procedure in which potential hazards in the late future of the patient lost most of their weight at the time of decision making. This review was designed to assess the rate of success with late retrieval of so-called retrievable caval filters in order to get some indication of a reasonable implant duration with respect to relatively "easy" implant removal with conventional means, i.e., catheters, hooks and lassos. A PubMed search (www.pubmed.gov) was performed with the search term "cava filter retrieval after 30 days clinical", and 20 reports between 1994 and 2013 dealing with late retrieval of caval filters were identified, covering approximately 7,000 devices with 600 removed filters. The maximal implant duration reported is 2,599 days, and the maximal implant duration of removed filters is also 2,599 days. The maximal duration reported with standard retrieval techniques, i.e., catheter, hook and/or lasso, is 475 days, whereas retrievals after this period required more sophisticated techniques, including lasers. The maximal implant duration for series with 100% retrieval is 84 days, which is equivalent to 12 weeks or almost 3 months. We conclude that retrievable caval filters often become permanent despite the initial decision of temporary use. However, such "forgotten" retrievable devices can still be removed with a great chance of success up to three months after implantation. Conventional percutaneous removal techniques may be sufficient up to sixteen months after implantation, whereas more sophisticated catheter techniques have been shown to be successful up to 83 months, or more than seven years, of implant duration. Tilting, migrating, or misplaced devices should be removed early on, and replaced if indicated with a device which is both efficient and retrievable.
Abstract:
The noise power spectrum (NPS) is the reference metric for understanding the noise content in computed tomography (CT) images. To evaluate the noise properties of clinical multidetector CT (MDCT) scanners, local 2D and 3D NPSs were computed for different acquisition and reconstruction parameters. A 64- and a 128-slice MDCT scanner were employed. Measurements were performed on a water phantom in axial and helical acquisition modes. The CT dose index was identical for both installations. The influence of parameters such as the pitch, the reconstruction filter (soft, standard and bone) and the reconstruction algorithm (filtered back-projection (FBP), adaptive statistical iterative reconstruction (ASIR)) was investigated. Images were also reconstructed in the coronal plane using a reformat process. Then 2D and 3D NPS methods were computed. In axial acquisition mode, the 2D axial NPS showed an important magnitude variation as a function of the z-direction when measured at the phantom center. In helical mode, a directional dependency with a lobular shape was observed while the magnitude of the NPS was kept constant. Important effects of the reconstruction filter, pitch and reconstruction algorithm were observed in the 3D NPS results for both MDCTs. With ASIR, a reduction of the NPS magnitude and a shift of the NPS peak to the low-frequency range were visible. The 2D coronal NPS obtained from the reformatted images was impacted by the interpolation when compared to the 2D coronal NPS obtained from 3D measurements. The noise properties of volumes measured on last-generation MDCTs were studied using the local 3D NPS metric. However, the impact of noise non-stationarity may need further investigation.
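The sketch below outlines one common way to estimate a local 2D NPS from ROIs taken in a uniform phantom region; the normalization convention and the white-noise stand-in for repeated CT noise images are assumptions made for illustration.

# Sketch of a local 2D noise power spectrum (NPS) estimate from uniform-phantom
# ROIs: detrend each ROI, average squared DFT magnitudes, scale by pixel area.
import numpy as np

def nps_2d(rois, pixel_size_mm):
    """rois: array of shape (n_roi, ny, nx) taken from a uniform region."""
    n_roi, ny, nx = rois.shape
    accum = np.zeros((ny, nx))
    for roi in rois:
        detrended = roi - roi.mean()                 # simple zero-order detrend
        accum += np.abs(np.fft.fft2(detrended)) ** 2
    nps = accum / n_roi * (pixel_size_mm ** 2) / (nx * ny)
    return np.fft.fftshift(nps)                      # zero frequency at the center

# Example with white noise standing in for repeated CT noise images.
rng = np.random.default_rng(4)
rois = 10.0 * rng.standard_normal((50, 64, 64))      # 50 ROIs of 64 x 64 pixels
nps = nps_2d(rois, pixel_size_mm=0.5)
df = 1.0 / (64 * 0.5)                                # frequency bin width in 1/mm
print("NPS integral (should be close to the noise variance, 100):",
      round(float(nps.sum() * df * df), 1))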
Abstract:
Objectives: The purpose of this study was to analyze the debris captured in the distal protection filters used during carotid artery stenting (CAS). Background: CAS is an option available to high-risk patients requiring revascularization. Filters are suggested for optimal stroke prevention during CAS. Methods: From May 2005 to June 2007, filters from 59 asymptomatic patients who underwent CAS were collected and sent to a specialized laboratory for light-microscope and histological analysis. Peri- and postprocedural outcomes were assessed during 1-year follow-up. Results: On the basis of biomedical imaging of the filter debris, the captured material could not be identified as embolized particles from the carotid plaque. On histological analysis the debris consisted mainly of red blood cell aggregates and/or platelets, occasionally accompanied by granulocytes. We found no consistent histological evidence of embolized particles originating from atherosclerotic plaques. Post-procedure, three neurological events were reported: two (3.4%) transient ischemic attacks (TIA) and one (1.7%) ipsilateral minor stroke. Conclusion: The filters used during CAS in asymptomatic patients planned for cardiac surgery often remained empty. These findings may be explained by assuming that asymptomatic patients feature a different atherosclerotic plaque composition or stabilization through antiplatelet medication. Larger, randomized trials are clearly warranted, especially in the asymptomatic population. © 2012 Wiley Periodicals, Inc.
Abstract:
The aim of this work was to select an appropriate digital filter for a servo application and to filter the noise from the measurement devices. A low-pass filter attenuates the high-frequency noise beyond the specified cut-off frequency. Digital low-pass filters with both IIR and FIR responses were designed and experimentally compared to understand their characteristics from the corresponding step responses of the system. The Kaiser windowing and equiripple methods were selected for the FIR response, whereas the Butterworth, Chebyshev, inverse Chebyshev and elliptic methods were designed for the IIR case. Limitations in digital filter design for a servo system were analysed. In particular, the dynamic influence of each designed filter on the control stability of the electrical servo drive was observed. The criterion for the selection of parameters in designing digital filters for servo systems was studied. Control system dynamics was given significant importance, and the use of FIR and IIR responses in different situations was compared to justify the selection of a suitable response in each case. The software used in the filter design was MATLAB/Simulink® and dSPACE's DSP application. A speed-controlled permanent magnet linear synchronous motor was used in the experimental work.
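For illustration, the sketch below designs one filter of each kind compared in the thesis, an IIR Butterworth and an FIR Kaiser-window low-pass, using SciPy in place of the MATLAB tooling mentioned above; the sampling rate, cut-off and attenuation values are assumptions, not those of the servo drive.

# Sketch of an IIR Butterworth and an FIR Kaiser-window low-pass design;
# all frequencies and specifications are illustrative.
import numpy as np
from scipy.signal import butter, kaiserord, firwin, lfilter

fs = 10_000.0            # sampling rate of the (hypothetical) speed measurement, Hz
fc = 500.0               # low-pass cut-off frequency, Hz

# IIR: 4th-order Butterworth (maximally flat passband, nonlinear phase).
b_iir, a_iir = butter(4, fc / (fs / 2))

# FIR: Kaiser-window design for 60 dB stopband attenuation and a 200 Hz
# transition band (linear phase, but longer group delay).
numtaps, beta = kaiserord(ripple=60.0, width=200.0 / (fs / 2))
b_fir = firwin(numtaps, fc / (fs / 2), window=("kaiser", beta))

# Filter a noisy test signal with both designs.
t = np.arange(0, 0.1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.default_rng(5).standard_normal(t.size)
y_iir = lfilter(b_iir, a_iir, x)
y_fir = lfilter(b_fir, [1.0], x)
print("IIR order: 4, FIR taps:", numtaps, " FIR group delay (samples):", (numtaps - 1) // 2)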