Abstract:
Currently, infrared filters for astronomical telescopes and satellite radiometers are based on multilayer thin-film stacks of alternating high- and low-refractive-index materials. However, the choice of suitable layer materials is limited, and this constrains the filter performance that can be achieved. The ability to design materials with arbitrary refractive index allows filter performance to be greatly improved, but it also increases the complexity of design. Here a differential algorithm was used for the optimised design of filters with arbitrary refractive indices; materials were then designed to these specifications as mono-materials with sub-wavelength structures using Bruggeman's effective-medium approximation (EMA).
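Bruggeman's EMA relates the effective permittivity of a sub-wavelength-structured mono-material to its fill fraction. A minimal sketch of the idea, solved by bisection; the indices and fill fraction below are illustrative, not the paper's designs:

```python
import math

def bruggeman_eps(eps1, eps2, f1):
    """Effective permittivity of a two-component mixture from
    Bruggeman's effective-medium approximation, solved by bisection.
    f1 is the volume fraction of component 1."""
    def g(eps_e):
        # Bruggeman condition: volume-weighted polarizabilities sum to zero
        return (f1 * (eps1 - eps_e) / (eps1 + 2.0 * eps_e)
                + (1.0 - f1) * (eps2 - eps_e) / (eps2 + 2.0 * eps_e))
    lo, hi = min(eps1, eps2), max(eps1, eps2)
    for _ in range(200):  # bisect: the root lies between the two permittivities
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Example: a Ge-like host (n = 4.0) with 40% air voids (n = 1.0)
n_eff = math.sqrt(bruggeman_eps(4.0 ** 2, 1.0 ** 2, 0.6))
```

Sweeping the fill fraction then yields a continuum of intermediate effective indices, which is what enables the arbitrary-index filter designs described above.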
Abstract:
The Earth Cloud, Aerosol and Radiation Explorer (EarthCARE) mission's Multispectral Imager (MSI) is a radiometric instrument designed to image atmospheric cloud cover and cloud-top surface temperature from a sun-synchronous low Earth orbit. The MSI forms part of a suite of four instruments supporting the European Space Agency Living Planet mission on board the EarthCARE satellite payload, to be launched in 2016, whose synergy will be used to construct three-dimensional scenes, textures and temperatures of atmospheric clouds and aerosols. The MSI contains seven channels: four solar channels to measure visible and short-wave infrared wavelengths, and three channels to measure infrared thermal emission. In this paper, we describe the optical layout of the infrared instrument channels, the thin-film multilayer designs, the coating deposition method and the spectral system throughput for the bandpass interference filters, dichroic beam splitters, lenses and mirror coatings that discriminate wavelengths at 8.8, 10.8 and 12.0 µm. The rationale for the selection of thin-film materials, the spectral measurement technique and the environmental testing performance are also presented.
Abstract:
Satellite-based (e.g., Synthetic Aperture Radar [SAR]) water level observations (WLOs) of the floodplain can be sequentially assimilated into a hydrodynamic model to decrease forecast uncertainty. This has the potential to keep the forecast on track, thus providing an Earth Observation (EO) based flood forecast system. However, the operational applicability of such a system for floods developed over river networks requires further testing. One of the promising techniques for assimilation in this field is the family of ensemble Kalman filters (EnKF). These filters use a limited-size ensemble representation of the forecast error covariance matrix. This representation tends to develop spurious correlations as the forecast-assimilation cycle proceeds, which is a further complication for dealing with floods in either urban areas or river junctions in rural environments. Here we evaluate the assimilation of WLOs obtained from a sequence of real SAR overpasses (the X-band COSMO-SkyMed constellation) in a case study. We show that a direct application of a global Ensemble Transform Kalman Filter (ETKF) suffers from filter divergence caused by spurious correlations. However, a spatially-based filter localization substantially moderates the development of the forecast error covariance matrix, directly improving the forecast and also making it possible to further benefit from a simultaneous online inflow error estimation and correction. Additionally, we propose and evaluate a novel along-network metric for filter localization, which is physically meaningful for the flood-over-a-network problem. Using this metric, we further evaluate the simultaneous estimation of channel friction and spatially-variable channel bathymetry, for which the filter seems able to converge simultaneously to sensible values. Results also indicate that friction is a second-order effect in flood inundation models applied to gradually varied flow in large rivers.
The study is not conclusive regarding whether in an operational situation the simultaneous estimation of friction and bathymetry helps the current forecast. Overall, the results indicate the feasibility of stand-alone EO-based operational flood forecasting.
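Filter localization as described above tapers each ensemble covariance entry with the distance between the two state variables. A standard choice, sketched here, is the Gaspari-Cohn fifth-order compactly supported function; this is not necessarily the authors' exact scheme, and their along-network metric would simply replace the Euclidean distance `d`:

```python
def gaspari_cohn(d, c):
    """Gaspari-Cohn fifth-order compactly supported taper.
    d: distance between two state variables; c: localization half-width.
    Returns 1 at d = 0, decays to 0 at d = 2c, and is 0 beyond."""
    r = abs(d) / c
    if r <= 1.0:
        return 1.0 - 5.0 / 3.0 * r ** 2 + 5.0 / 8.0 * r ** 3 \
               + 0.5 * r ** 4 - 0.25 * r ** 5
    if r <= 2.0:
        return (4.0 - 5.0 * r + 5.0 / 3.0 * r ** 2 + 5.0 / 8.0 * r ** 3
                - 0.5 * r ** 4 + 1.0 / 12.0 * r ** 5 - 2.0 / (3.0 * r))
    return 0.0

# Tapered covariance between two points 5 km apart, half-width 10 km
# (0.8 stands in for a raw sample-covariance entry)
P_tapered = gaspari_cohn(5.0, 10.0) * 0.8
```

Applying this factor elementwise (a Schur product with the sample covariance) suppresses the spurious long-range correlations that otherwise cause filter divergence.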
Abstract:
The feedback mechanism used in a brain-computer interface (BCI) forms an integral part of the closed-loop learning process required for successful operation of a BCI. However, ultimate success of the BCI may be dependent upon the modality of the feedback used. This study explores the use of music tempo as a feedback mechanism in BCI and compares it to the more commonly used visual feedback mechanism. Three different feedback modalities are compared for a kinaesthetic motor imagery BCI: visual, auditory via music tempo, and a combined visual and auditory feedback modality. Visual feedback is provided via the position, on the y-axis, of a moving ball. In the music feedback condition, the tempo of a piece of continuously generated music is dynamically adjusted via a novel music-generation method. All the feedback mechanisms allowed users to learn to control the BCI. However, users were not able to maintain as stable control with the music tempo feedback condition as they could in the visual feedback and combined conditions. Additionally, the combined condition exhibited significantly less inter-user variability, suggesting that multi-modal feedback may lead to more robust results. Finally, common spatial patterns are used to identify participant-specific spatial filters for each of the feedback modalities. The mean optimal spatial filter obtained for the music feedback condition is observed to be more diffuse and weaker than the mean spatial filters obtained for the visual and combined feedback conditions.
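The common spatial patterns step mentioned above amounts to a generalized eigenproblem between the two class covariance matrices. The study's filters are multi-channel; as a sketch only, the two-channel case can be solved analytically (all matrices and values below are illustrative):

```python
def csp_2channel(C1, C2):
    """Common spatial patterns for two channels: solve the 2x2
    generalized eigenproblem C1 w = lam (C1 + C2) w analytically.
    C1, C2: symmetric 2x2 class covariances as [[a, b], [b, c]].
    Returns (lam, w): the larger eigenvalue and the unit filter w
    that maximizes the class-1 share of the variance."""
    a11, a12, a22 = C1[0][0], C1[0][1], C1[1][1]
    s11, s12, s22 = a11 + C2[0][0], a12 + C2[0][1], a22 + C2[1][1]
    # det(C1 - lam * S) = 0  ->  quadratic A lam^2 - B lam + C = 0
    A = s11 * s22 - s12 ** 2
    B = a11 * s22 + a22 * s11 - 2.0 * a12 * s12
    C = a11 * a22 - a12 ** 2
    lam = (B + (B ** 2 - 4.0 * A * C) ** 0.5) / (2.0 * A)
    # The filter is a null vector of M = C1 - lam * S
    m11, m12, m22 = a11 - lam * s11, a12 - lam * s12, a22 - lam * s22
    w = (m12, -m11) if abs(m11) + abs(m12) > 1e-12 else (m22, -m12)
    norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
    return lam, (w[0] / norm, w[1] / norm)
```

For example, if class 1 has variance concentrated in channel 1 and class 2 in channel 2, the filter aligns with channel 1, which is the behaviour the participant-specific spatial filters exploit.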
Abstract:
FeM2X4 spinels, where M is a transition metal and X is oxygen or sulfur, are candidate materials for spin filters, one of the key devices in spintronics. We present here a computational study of the inversion thermodynamics and the electronic structure of these (thio)spinels for M = Cr, Mn, Co, Ni, using calculations based on the density functional theory with on-site Hubbard corrections (DFT+U). The analysis of the configurational free energies shows that different behaviour is expected for the equilibrium cation distributions in these structures: FeCr2X4 and FeMn2S4 are fully normal, FeNi2X4 and FeCo2S4 are intermediate, and FeCo2O4 and FeMn2O4 are fully inverted. We have analyzed the role played by the size of the ions and by the crystal field stabilization effects in determining the equilibrium inversion degree. We also discuss how the electronic and magnetic structure of these spinels is modified by the degree of inversion, assuming that this could be varied from the equilibrium value. We have obtained electronic densities of states for the completely normal and completely inverse cation distribution of each compound. FeCr2X4, FeMn2X4, FeCo2O4 and FeNi2O4 are half-metals in the ferrimagnetic state when Fe is in tetrahedral positions. When M is filling the tetrahedral positions, the Cr-containing compounds and FeMn2O4 are half-metallic systems, while the Co and Ni spinels are insulators. The Co and Ni sulfide counterparts are metallic for any inversion degree together with the inverse FeMn2S4. Our calculations suggest that the spin filtering properties of the FeM2X4 (thio)spinels could be modified via the control of the cation distribution through variations in the synthesis conditions.
Abstract:
In general, particle filters need large numbers of model runs in order to avoid filter degeneracy in high-dimensional systems. The recently proposed, fully nonlinear equivalent-weights particle filter overcomes this requirement by replacing the standard model transition density with two different proposal transition densities. The first proposal density is used to relax all particles towards the high-probability regions of state space as defined by the observations. The crucial second proposal density is then used to ensure that the majority of particles have equivalent weights at observation time. Here, the performance of the scheme is explored in a 65 500-dimensional simplified ocean model. The success of the equivalent-weights particle filter in matching the true model state is shown using the mean of just 32 particles in twin experiments. It is of particular significance that this remains true even as the number and spatial variability of the observations are changed. The results from rank histograms are less easy to interpret and can be influenced considerably by the parameter values used. This article also explores the sensitivity of the performance of the scheme to the chosen parameter values, and the effect of using different model error parameters in the truth compared with the ensemble model runs.
Abstract:
This paper reports the first derived thermo-optical properties for vacuum-deposited infrared thin films embedded in multilayers. These properties were extracted from the temperature dependence of manufactured narrow bandpass filters across the 4-17 µm mid-infrared wavelength region. Using a repository of spaceflight multi-cavity bandpass filters, the thermo-optical expansion coefficients of PbTe and ZnSe were determined across an elevated temperature range of 20-160 °C. Embedded ZnSe films showed thermo-optical properties similar to reported bulk values, whilst the embedded PbTe films, of lower optical density, deviate from reference literature values. Detailed knowledge of the derived coefficients is essential to the multilayer design of temperature-invariant narrow bandpass filters for use in non-cooled infrared detection systems. We further present the manufacture of the first reported temperature-invariant multi-cavity narrow bandpass filter utilizing PbS chalcogenide layer material.
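The coefficients above are derived from how the measured passband centre drifts with temperature. The basic fitting step can be sketched as an ordinary least-squares slope normalised by the centre wavelength; the numbers below are synthetic, not the paper's data:

```python
def thermo_optic_coefficient(temps_C, centres_um):
    """Least-squares slope of centre wavelength vs temperature,
    normalised by the mean centre wavelength: an effective
    (1/lambda) * d(lambda)/dT coefficient in 1/K."""
    n = len(temps_C)
    t_bar = sum(temps_C) / n
    l_bar = sum(centres_um) / n
    num = sum((t - t_bar) * (l - l_bar)
              for t, l in zip(temps_C, centres_um))
    den = sum((t - t_bar) ** 2 for t in temps_C)
    return (num / den) / l_bar

# Synthetic example: a 10.8 um passband drifting linearly at
# 5e-5 per kelvin over the 20-160 C range used in the paper
temps = [20.0, 55.0, 90.0, 125.0, 160.0]
centres = [10.8 * (1.0 + 5e-5 * (t - 20.0)) for t in temps]
coeff = thermo_optic_coefficient(temps, centres)
```

A filter is temperature-invariant when the layer materials are combined so that these effective coefficients cancel across the stack.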
Abstract:
A truly variance-minimizing filter is introduced and its performance is demonstrated with the Korteweg-de Vries (KdV) equation and with a multilayer quasigeostrophic model of the ocean area around South Africa. It is recalled that Kalman-like filters are not variance minimizing for nonlinear model dynamics and that four-dimensional variational data assimilation (4DVAR)-like methods relying on perfect model dynamics have difficulty with providing error estimates. The new method does not have these drawbacks. In fact, it combines advantages from both methods in that it does provide error estimates while automatically having balanced states after analysis, without extra computations. It is based on ensemble or Monte Carlo integrations to simulate the probability density of the model evolution. When observations are available, the so-called importance resampling algorithm is applied. From Bayes's theorem it follows that each ensemble member receives a new weight dependent on its "distance" to the observations. Because the weights are strongly varying, a resampling of the ensemble is necessary. This resampling is done such that members with high weights are duplicated according to their weights, while low-weight members are largely ignored. In passing, it is noted that data assimilation is not an inverse problem by nature, although it can be formulated that way. Also, it is shown that the posterior variance can be larger than the prior if the usual Gaussian framework is set aside. However, in the examples presented here, the entropy of the probability densities is decreasing. The application to the ocean area around South Africa, governed by strongly nonlinear dynamics, shows that the method is working satisfactorily. The strong and weak points of the method are discussed and possible improvements are proposed.
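The importance-resampling step described above, duplicating high-weight members and dropping low-weight ones, can be sketched as follows (this uses systematic resampling, one common low-variance variant, not necessarily the exact scheme of the paper):

```python
import random

def importance_resample(particles, weights, rng=None):
    """Systematic resampling: members are duplicated roughly in
    proportion to their weight; low-weight members are dropped.
    Returns an equally weighted ensemble of the same size."""
    rng = rng or random.Random()
    n = len(particles)
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:          # cumulative distribution of the weights
        acc += w / total
        cdf.append(acc)
    u0 = rng.random() / n      # single random offset, then a regular grid
    out, j = [], 0
    for i in range(n):
        u = u0 + i / n
        while j < n - 1 and cdf[j] < u:
            j += 1
        out.append(particles[j])
    return out
```

With weights (0.7, 0.1, 0.1, 0.1), the first member is typically duplicated two or three times while the others are thinned, which is exactly the behaviour the abstract describes.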
Abstract:
A smoother introduced earlier by van Leeuwen and Evensen is applied to a problem in which real observations are used in an area with strongly nonlinear dynamics. The derivation is new, but it resembles an earlier derivation by van Leeuwen and Evensen. Again a Bayesian view is taken, in which the prior probability density of the model and the probability density of the observations are combined to form a posterior density. The mean and the covariance of this density give the variance-minimizing model evolution and its errors. The assumption is made that the prior probability density is a Gaussian, leading to a linear update equation. Critical evaluation shows when the assumption is justified. This also sheds light on why Kalman filters, in which the same approximation is made, work for nonlinear models. By reference to the derivation, the impact of model and observational biases on the equations is discussed, and it is shown that Bayes's formulation can still be used. A practical advantage of the ensemble smoother is that no adjoint equations have to be integrated and that error estimates are easily obtained. The present application shows that for process studies a smoother will give superior results compared to a filter, not only owing to the smooth transitions at observation points, but also because the origin of features can be followed back in time. Also, its preference over a strong-constraint method is highlighted. Furthermore, it is argued that the proposed smoother is more efficient than gradient descent methods or than the representer method when error estimates are taken into account.
Abstract:
Nonlinear data assimilation is high on the agenda in all fields of the geosciences: with ever-increasing model resolution, the inclusion of more physical (biological, etc.) processes, and more complex observation operators, the data-assimilation problem becomes more and more nonlinear. The suitability of particle filters for solving the nonlinear data-assimilation problem in high-dimensional geophysical problems is discussed. Several existing and new schemes are presented, and it is shown that at least one of them, the Equivalent-Weights Particle Filter, does indeed beat the curse of dimensionality and provides a way forward for nonlinear data assimilation in high-dimensional systems.
Abstract:
The Bloom filter is a space-efficient randomized data structure for representing a set and supporting membership queries. Bloom filters intrinsically allow false positives. However, the space savings they offer outweigh this disadvantage if the false-positive rate is kept sufficiently low. Inspired by the recent application of the Bloom filter in a novel multicast forwarding fabric, this paper proposes a variant of the Bloom filter, the optihash. The optihash optimizes the false-positive rate at the stage of Bloom filter formation, using the same amount of space at the cost of slightly more processing than the classic Bloom filter. Bloom filters are often used in situations where a fixed amount of space is a primary constraint. We present the optihash as a good alternative to Bloom filters, since the amount of space is the same and the improvement in false positives can justify the additional processing. Specifically, we show via simulations and numerical analysis that using the optihash the occurrence of false positives can be reduced and controlled at the cost of a small amount of additional processing. The simulations are carried out for in-packet forwarding. In this framework, the Bloom filter is used as a compact link/route identifier and is placed in the packet header to encode the route. At each node, the Bloom filter is queried for membership in order to make forwarding decisions. A false positive in the forwarding decision translates into packets forwarded along an unintended outgoing link. By using the optihash, false positives can be reduced. The optimization processing is carried out in an entity termed the Topology Manager, which is part of the control plane of the multicast forwarding fabric. This processing is only carried out on a per-session basis, not for every packet.
The aim of this paper is to present the optihash and evaluate its false-positive performance via simulations, in order to measure the influence of different parameters on the false-positive rate. The false-positive rate of the optihash is then compared with the false-positive probability of the classic Bloom filter.
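The classic Bloom filter that the optihash improves on can be sketched as follows; the salted-SHA-256 hashing scheme here is an illustrative choice, not the paper's construction:

```python
import hashlib

class BloomFilter:
    """Classic Bloom filter: an m-bit array and k hash functions.
    With n items inserted, the false-positive rate is roughly
    (1 - exp(-k * n / m)) ** k; false negatives are impossible."""
    def __init__(self, m_bits, k_hashes):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray((m_bits + 7) // 8)

    def _positions(self, item):
        # Derive k bit indices from salted SHA-256 digests
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] >> (p % 8) & 1
                   for p in self._positions(item))
```

In the in-packet forwarding setting described above, `add` would encode each outgoing link of the route, and a node would test its links with `in`; any false positive sends a packet down an unintended link, which is the quantity the optihash minimizes at filter-formation time.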
Abstract:
Objectives: We investigated the effects of chronic exposure (2 months) to ambient levels of particulate matter (PM) on the development of protease-induced emphysema and pulmonary remodeling in mice. Methods: Balb/c mice received a nasal drop of either papain or normal saline and were kept in two exposure chambers situated in an area with high traffic density. One of them received ambient air and the other had filters for PM. Results: Mean concentrations of PM10 were 2.68 ± 0.38 and 33.86 ± 2.09 µg/m³ in the filtered and ambient air chambers, respectively (p < 0.001). After 2 months of exposure, lungs from papain-treated mice kept in the chamber with ambient air presented greater values of mean linear intercept, an increase in the density of collagen fibers in alveolar septa, and an increase in the expression of 8-isoprostane (p = 0.002, p < 0.05 and p = 0.002, respectively, compared to papain-treated mice kept in the chamber with filtered air). We did not observe significant differences between these two groups in the density of macrophages or in the number of cells expressing matrix metalloproteinase-12. There were no significant differences in saline-treated mice kept in the two chambers. Conclusions: We conclude that exposure to urban levels of PM worsens protease-induced emphysema and increases pulmonary remodeling. We suggest that an increase in oxidative stress induced by PM exposure influences this response. These pulmonary effects of PM were observed only in mice with emphysema.
Abstract:
Cosmic shear requires high precision measurement of galaxy shapes in the presence of the observational point spread function (PSF) that smears out the image. The PSF must therefore be known for each galaxy to a high accuracy. However, for several reasons, the PSF is usually wavelength dependent; therefore, the differences between the spectral energy distribution of the observed objects introduce further complexity. In this paper, we investigate the effect of the wavelength dependence of the PSF, focusing on instruments in which the PSF size is dominated by the diffraction limit of the telescope and which use broad-band filters for shape measurement. We first calculate biases on cosmological parameter estimation from cosmic shear when the stellar PSF is used uncorrected. Using realistic galaxy and star spectral energy distributions and populations and a simple three-component circular PSF, we find that the colour dependence must be taken into account for the next generation of telescopes. We then consider two different methods for removing the effect: (i) the use of stars of the same colour as the galaxies and (ii) estimation of the galaxy spectral energy distribution using multiple colours and using a telescope model for the PSF. We find that both of these methods correct the effect to levels below the tolerances required for per cent level measurements of dark energy parameters. Comparison of the two methods favours the template-fitting method because its efficiency is less dependent on galaxy redshift than the broad-band colour method and takes full advantage of deeper photometry.
Abstract:
A particle filter method is presented for the discrete-time filtering problem with nonlinear Itô stochastic ordinary differential equations (SODEs) with additive noise, supposed to be analytically integrable as a function of the underlying vector Wiener process and time. The Diffusion Kernel Filter is arrived at by a parametrization of small noise-driven state fluctuations within branches of prediction and a local use of this parametrization in the Bootstrap Filter. The method applies for small noise and short prediction steps. With explicit numerical integrators, the operations count in the Diffusion Kernel Filter is shown to be smaller than in the Bootstrap Filter whenever the initial state for the prediction step has sufficiently few moments. The established parametrization is a dual formula for the analysis of sensitivity to Gaussian initial perturbations and the analysis of sensitivity to noise perturbations in deterministic models, showing in particular how the stability of a deterministic dynamics is modeled by noise on short times and how the diffusion matrix of an SODE should be modeled (i.e. defined) for a Gaussian-initial deterministic problem to be cast into an SODE problem. From it, a novel definition of prediction may be proposed that coincides with the deterministic path within the branch of prediction whose information entropy at the end of the prediction step is closest to the average information entropy over all branches. Tests are made with the Lorenz-63 equations, showing good results both for the filter and the definition of prediction.
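The Bootstrap Filter used as the baseline above is a plain predict-weight-resample cycle. A scalar sketch, assuming direct observation of the state with Gaussian noise (the Diffusion Kernel Filter's parametrization itself is not reproduced here):

```python
import math
import random

def bootstrap_step(particles, obs, obs_std, model_step, rng):
    """One predict-weight-resample cycle of the Bootstrap Filter."""
    # Predict: push every particle through the stochastic model
    pred = [model_step(x, rng) for x in particles]
    # Weight: Gaussian observation likelihood (state observed directly)
    w = [math.exp(-0.5 * ((obs - x) / obs_std) ** 2) for x in pred]
    total = sum(w)
    w = [wi / total for wi in w]
    # Resample: multinomial draw proportional to the weights
    return rng.choices(pred, weights=w, k=len(pred))
```

For a small-noise random walk `model_step = lambda x, r: x + r.gauss(0.0, 0.1)`, a few cycles with a fixed observation pull a widely spread ensemble toward the observed value; the Diffusion Kernel Filter's saving comes from replacing the per-particle model runs in the predict step with the local fluctuation parametrization.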
Abstract:
This paper proposes a filter-based algorithm for feature selection. The filter is based on partitioning the set of features into clusters. The number of clusters, and consequently the cardinality of the subset of selected features, is automatically estimated from data. The computational complexity of the proposed algorithm is also investigated. A variant of this filter that considers feature-class correlations is also proposed for classification problems. Empirical results involving ten datasets illustrate the performance of the developed algorithm, which in general has obtained competitive results in terms of classification accuracy when compared to state-of-the-art algorithms that find clusters of features. We show that, if computational efficiency is an important issue, then the proposed filter may be preferred over its counterparts, thus becoming eligible to join a pool of feature selection algorithms to be used in practice. As an additional contribution of this work, a theoretical framework is used to formally analyze some properties of feature selection methods that rely on finding clusters of features.
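A filter in this family can be sketched with a greedy correlation-clustering rule: features that correlate strongly with an already-kept representative join its cluster and are discarded. Note that the paper's algorithm estimates the number of clusters automatically, whereas this illustrative version uses a fixed threshold:

```python
def cluster_filter(features, threshold=0.9):
    """Greedy correlation-clustering feature filter: scan features in
    order and keep one only if its absolute Pearson correlation with
    every kept representative is below `threshold`."""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a)
        vb = sum((y - mb) ** 2 for y in b)
        return cov / (va * vb) ** 0.5 if va > 0 and vb > 0 else 0.0

    selected = []
    for name, values in features.items():
        if all(abs(corr(values, features[rep])) < threshold
               for rep in selected):
            selected.append(name)
    return selected
```

For instance, a feature that is an exact rescaling of an earlier one (correlation 1) is absorbed into that cluster, while an uncorrelated feature starts a new cluster and is kept; the number of selected features falls out of the clustering rather than being set in advance.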