120 results for Photocatalytic filters
Abstract:
A potential problem with the Ensemble Kalman Filter is the implicit Gaussian assumption at analysis times. Here we explore the performance of a recently proposed fully nonlinear particle filter, in which the Gaussian assumption is not made, on a high-dimensional but simplified ocean model. The model simulates the evolution of the vorticity field in time, described by the barotropic vorticity equation, in a highly nonlinear flow regime. While common knowledge is that particle filters are inefficient and need large numbers of model runs to avoid degeneracy, the newly developed particle filter needs only of the order of 10-100 particles on large-scale problems. The crucial new ingredient is that the proposal density can be used not only to ensure that all particles end up in high-probability regions of state space as defined by the observations, but also to ensure that most of the particles have similar weights. Using identical-twin experiments we found that the ensemble mean follows the truth reliably, and the difference from the truth is captured by the ensemble spread. A rank histogram is used to show that the truth run is indistinguishable from any of the particles, demonstrating the statistical consistency of the method.
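The rank-histogram consistency check used in this abstract can be sketched generically: at each verification time, count how many ensemble members fall below the truth; a flat histogram over the possible ranks indicates the truth is statistically indistinguishable from the particles. This is a minimal illustration, not the authors' implementation; the `rank_histogram` helper and the toy Gaussian ensemble are assumptions for demonstration.

```python
import numpy as np

def rank_histogram(truth, ensembles):
    """Histogram of the truth's rank within the sorted ensemble.

    truth:     array of shape (n_times,)
    ensembles: array of shape (n_times, n_members)
    Returns counts over the n_members + 1 possible ranks; a roughly
    flat histogram suggests statistical consistency of the ensemble.
    """
    truth = np.asarray(truth)
    ensembles = np.asarray(ensembles)
    # Rank = number of members strictly below the truth at each time
    ranks = (ensembles < truth[:, None]).sum(axis=1)
    n_members = ensembles.shape[1]
    return np.bincount(ranks, minlength=n_members + 1)

# Toy consistent case: truth drawn from the same distribution as members
rng = np.random.default_rng(0)
ens = rng.normal(size=(5000, 9))
tru = rng.normal(size=5000)
hist = rank_histogram(tru, ens)  # 10 bins, each near 5000 / 10 = 500
```

With a biased or under-dispersive ensemble the histogram would instead slope or form a U shape.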
Abstract:
Particle filters are fully non-linear data assimilation techniques that aim to represent the probability distribution of the model state given the observations (the posterior) by a number of particles. In high-dimensional geophysical applications the number of particles required by the sequential importance resampling (SIR) particle filter to capture the high-probability region of the posterior is too large to make it usable. However, particle filters can be formulated using proposal densities, which give greater freedom in how particles are sampled and allow for a much smaller number of particles. Here a particle filter is presented which uses the proposal density to ensure that all particles end up in the high-probability region of the posterior probability density function. This gives rise to the possibility of non-linear data assimilation in high-dimensional systems. The particle filter formulation is compared to the optimal proposal density particle filter and the implicit particle filter, both of which also utilise a proposal density. We show that when observations are available every time step, both schemes become degenerate when the number of independent observations is large, unlike the new scheme. The sensitivity of the new scheme to its parameter values is explored theoretically and demonstrated using the Lorenz (1963) model.
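The role of the proposal density described here can be sketched in one scalar step: particles are drawn from a proposal q(x | x_prev, y) nudged toward the observation rather than from the model transition p(x | x_prev), and the importance weight w ∝ p(y | x) p(x | x_prev) / q(x | x_prev, y) compensates for the change of sampling density. This is a generic illustration under Gaussian assumptions, not the scheme proposed in the paper; the particular proposal mean and variances below are arbitrary choices for demonstration.

```python
import numpy as np

def gauss_logpdf(x, mean, var):
    """Log density of a univariate Gaussian."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def proposal_step(x_prev, y_obs, var_model, var_obs, rng):
    """One particle-filter step using an observation-informed proposal.

    Returns new particles and normalized importance weights
        w  proportional to  p(y | x) p(x | x_prev) / q(x | x_prev, y).
    """
    # Arbitrary illustrative proposal: pull particles halfway to the obs
    q_mean = 0.5 * x_prev + 0.5 * y_obs
    q_var = 0.5 * var_model
    x = q_mean + np.sqrt(q_var) * rng.standard_normal(x_prev.shape)

    log_w = (gauss_logpdf(y_obs, x, var_obs)        # likelihood p(y | x)
             + gauss_logpdf(x, x_prev, var_model)   # transition p(x | x_prev)
             - gauss_logpdf(x, q_mean, q_var))      # proposal q(x | x_prev, y)
    w = np.exp(log_w - log_w.max())                 # stabilized exponentiation
    return x, w / w.sum()

rng = np.random.default_rng(1)
particles = rng.standard_normal(50)
x_new, weights = proposal_step(particles, y_obs=1.0,
                               var_model=1.0, var_obs=0.5, rng=rng)
```

Without the subtracted proposal term the weights would be biased; with it, any proposal that covers the posterior support yields a consistent estimator.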
Abstract:
Currently, infrared filters for astronomical telescopes and satellite radiometers are based on multilayer thin-film stacks of alternating high- and low-refractive-index materials. However, the choice of suitable layer materials is limited, and this places limitations on the filter performance that can be achieved. The ability to design materials with arbitrary refractive index allows filter performance to be greatly increased, but also increases the complexity of design. Here a differential algorithm was used to optimise the design of filters with arbitrary refractive indices, and materials were then designed to these specifications as mono-materials with sub-wavelength structures using Bruggeman's effective material approximation (EMA).
Abstract:
A pyridyl-functionalized diiron dithiolate complex, [μ-(4-pyCH2−NMI-S2)Fe2(CO)6] (3; py = pyridine, NMI = naphthalene monoimide), was synthesized and fully characterized. In the presence of zinc tetraphenylporphyrin (ZnTPP), a self-assembled 3·ZnTPP complex was readily formed in CH2Cl2 by coordination of the pyridyl nitrogen to the porphyrin zinc center. Ultrafast photoinduced electron transfer from excited ZnTPP to complex 3 in the supramolecular assembly was observed in real time by monitoring the ν(CO) and ν(CO)NMI spectral changes with femtosecond time-resolved infrared (TRIR) spectroscopy. We have confirmed that photoinduced charge separation produced the monoreduced species by comparing the time-resolved IR spectra with the conventional IR spectra of 3•− generated by reversible electrochemical reduction. The lifetimes for the charge separation and charge recombination processes were found to be τCS = 40 ± 3 ps and τCR = 205 ± 14 ps, respectively. The charge recombination is much slower than that in an analogous covalent complex, demonstrating the potential of a supramolecular approach to extend the lifetime of the charge-separated state in photocatalytic complexes. The observed vibrational frequency shifts provide a very sensitive probe of the delocalization of the electron-spin density over the different parts of the Fe2S2 complex. The TR and spectro-electrochemical IR spectra, electron paramagnetic resonance spectra, and density functional theory calculations all show that the spin density in 3•− is delocalized over the diiron core and the NMI bridge. This delocalization explains why the complex exhibits low catalytic dihydrogen production even though it features very efficient photoinduced electron transfer. The ultrafast porphyrin-to-NMIS2−Fe2(CO)6 photoinduced electron transfer is the first reported example of a supramolecular Fe2S2-hydrogenase model studied by femtosecond TRIR spectroscopy.
Our results show that TRIR spectroscopy is a powerful tool to investigate photoinduced electron transfer in potential dihydrogen-producing catalytic complexes, and thereby a way to optimize their performance through rational approaches.
Abstract:
The Earth Cloud, Aerosol and Radiation Explorer (EarthCARE) mission Multispectral Imager (MSI) is a radiometric instrument designed to image atmospheric cloud cover and cloud-top surface temperature from a sun-synchronous low Earth orbit. The MSI forms part of a suite of four instruments supporting the European Space Agency Living Planet mission on board the EarthCARE satellite payload, to be launched in 2016, whose synergy will be used to construct three-dimensional scenes, textures and temperatures of atmospheric clouds and aerosols. The MSI instrument contains seven channels: four solar channels to measure visible and short-wave infrared wavelengths, and three channels to measure infrared thermal emission. In this paper, we describe the optical layout of the infrared instrument channels, thin-film multilayer designs, the coating deposition method and the spectral system throughput for the bandpass interference filters, dichroic beam splitters, lenses and mirror coatings that discriminate wavelengths at 8.8, 10.8 and 12.0 µm. The rationale for the selection of thin-film materials, the spectral measurement technique, and environmental testing performance are also presented.
Abstract:
Satellite-based (e.g., Synthetic Aperture Radar [SAR]) water level observations (WLOs) of the floodplain can be sequentially assimilated into a hydrodynamic model to decrease forecast uncertainty. This has the potential to keep the forecast on track, thereby providing an Earth Observation (EO) based flood forecast system. However, the operational applicability of such a system to floods developed over river networks requires further testing. One of the promising techniques for assimilation in this field is the family of ensemble Kalman filters (EnKF). These filters use a limited-size ensemble representation of the forecast error covariance matrix. This representation tends to develop spurious correlations as the forecast-assimilation cycle proceeds, which is a further complication for dealing with floods in either urban areas or river junctions in rural environments. Here we evaluate the assimilation of WLOs obtained from a sequence of real SAR overpasses (the X-band COSMO-SkyMed constellation) in a case study. We show that a direct application of a global Ensemble Transform Kalman Filter (ETKF) suffers from filter divergence caused by spurious correlations. However, a spatially-based filter localization substantially moderates the development of the forecast error covariance matrix, directly improving the forecast and also making it possible to further benefit from a simultaneous online inflow error estimation and correction. Additionally, we propose and evaluate a novel along-network metric for filter localization, which is physically meaningful for the flood-over-a-network problem. Using this metric, we further evaluate the simultaneous estimation of channel friction and spatially-variable channel bathymetry, for which the filter seems able to converge simultaneously to sensible values. Results also indicate that friction is a second-order effect in flood inundation models applied to gradually varied flow in large rivers.
The study is not conclusive regarding whether, in an operational situation, the simultaneous estimation of friction and bathymetry improves the current forecast. Overall, the results indicate the feasibility of stand-alone EO-based operational flood forecasting.
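The localization step described above can be sketched as an elementwise (Schur) product of the ensemble covariance with a distance-based taper that damps spurious long-range correlations. This is a minimal sketch, assuming a simple exponential taper and Euclidean distances; operational systems often use the compactly supported Gaspari-Cohn function, and the paper's along-network metric would replace the Euclidean distance below.

```python
import numpy as np

def localize_covariance(P, coords, length_scale):
    """Damp long-range terms of an ensemble covariance matrix P via a
    Schur (elementwise) product with a distance-based taper.

    P:      (n, n) covariance estimated from a small ensemble
    coords: (n, d) spatial locations of the state-vector elements
    """
    coords = np.asarray(coords, dtype=float)
    # Pairwise Euclidean distances between state locations
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    taper = np.exp(-(d / length_scale) ** 2)  # 1 on the diagonal
    return P * taper

# Toy example: three gauge locations along a reach; the distant pair's
# sample correlation is assumed spurious and gets strongly damped.
P = np.array([[1.0, 0.8, 0.7],
              [0.8, 1.0, 0.8],
              [0.7, 0.8, 1.0]])
coords = [[0.0, 0.0], [1.0, 0.0], [5.0, 0.0]]
P_loc = localize_covariance(P, coords, length_scale=2.0)
```

Variances (the diagonal) are untouched, while covariances between widely separated locations shrink toward zero, which is what prevents the filter divergence reported for the global ETKF.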
Abstract:
The feedback mechanism used in a brain-computer interface (BCI) forms an integral part of the closed-loop learning process required for successful operation of a BCI. However, ultimate success of the BCI may be dependent upon the modality of the feedback used. This study explores the use of music tempo as a feedback mechanism in BCI and compares it to the more commonly used visual feedback mechanism. Three different feedback modalities are compared for a kinaesthetic motor imagery BCI: visual, auditory via music tempo, and a combined visual and auditory feedback modality. Visual feedback is provided via the position, on the y-axis, of a moving ball. In the music feedback condition, the tempo of a piece of continuously generated music is dynamically adjusted via a novel music-generation method. All the feedback mechanisms allowed users to learn to control the BCI. However, users were not able to maintain as stable control with the music tempo feedback condition as they could in the visual feedback and combined conditions. Additionally, the combined condition exhibited significantly less inter-user variability, suggesting that multi-modal feedback may lead to more robust results. Finally, common spatial patterns are used to identify participant-specific spatial filters for each of the feedback modalities. The mean optimal spatial filter obtained for the music feedback condition is observed to be more diffuse and weaker than the mean spatial filters obtained for the visual and combined feedback conditions.
Abstract:
FeM2X4 spinels, where M is a transition metal and X is oxygen or sulfur, are candidate materials for spin filters, one of the key devices in spintronics. We present here a computational study of the inversion thermodynamics and the electronic structure of these (thio)spinels for M = Cr, Mn, Co, Ni, using calculations based on the density functional theory with on-site Hubbard corrections (DFT+U). The analysis of the configurational free energies shows that different behaviour is expected for the equilibrium cation distributions in these structures: FeCr2X4 and FeMn2S4 are fully normal, FeNi2X4 and FeCo2S4 are intermediate, and FeCo2O4 and FeMn2O4 are fully inverted. We have analyzed the role played by the size of the ions and by the crystal field stabilization effects in determining the equilibrium inversion degree. We also discuss how the electronic and magnetic structure of these spinels is modified by the degree of inversion, assuming that this could be varied from the equilibrium value. We have obtained electronic densities of states for the completely normal and completely inverse cation distribution of each compound. FeCr2X4, FeMn2X4, FeCo2O4 and FeNi2O4 are half-metals in the ferrimagnetic state when Fe is in tetrahedral positions. When M is filling the tetrahedral positions, the Cr-containing compounds and FeMn2O4 are half-metallic systems, while the Co and Ni spinels are insulators. The Co and Ni sulfide counterparts are metallic for any inversion degree together with the inverse FeMn2S4. Our calculations suggest that the spin filtering properties of the FeM2X4 (thio)spinels could be modified via the control of the cation distribution through variations in the synthesis conditions.
Abstract:
Modification of graphene to open a robust gap in its electronic spectrum is essential for its use in field effect transistors and photochemistry applications. Inspired by recent experimental success in the preparation of homogeneous alloys of graphene and boron nitride (BN), we consider here engineering the electronic structure and bandgap of C2xB1−xN1−x alloys via both compositional and configurational modification. We start from the BN end-member, which already has a large bandgap, and then show that (a) the bandgap can in principle be reduced to about 2 eV with moderate substitution of C (x < 0.25); and (b) the electronic structure of C2xB1−xN1−x can be further tuned not only with composition x, but also with the configuration adopted by C substituents in the BN matrix. Our analysis, based on accurate screened hybrid functional calculations, provides a clear understanding of the correlation found between the bandgap and the level of aggregation of C atoms: the bandgap decreases most when the C atoms are maximally isolated, and increases with aggregation of C atoms due to the formation of bonding and anti-bonding bands associated with hybridization of occupied and empty defect states. We determine the location of valence and conduction band edges relative to vacuum and discuss the implications on the potential use of 2D C2xB1−xN1−x alloys in photocatalytic applications. Finally, we assess the thermodynamic limitations on the formation of these alloys using a cluster expansion model derived from first-principles.
Abstract:
In general, particle filters need large numbers of model runs in order to avoid filter degeneracy in high-dimensional systems. The recently proposed, fully nonlinear equivalent-weights particle filter overcomes this requirement by replacing the standard model transition density with two different proposal transition densities. The first proposal density is used to relax all particles towards the high-probability regions of state space as defined by the observations. The crucial second proposal density is then used to ensure that the majority of particles have equivalent weights at observation time. Here, the performance of the scheme in a high-dimensional (65 500-dimensional) simplified ocean model is explored. The success of the equivalent-weights particle filter in matching the true model state is shown using the mean of just 32 particles in twin experiments. It is of particular significance that this remains true even as the number and spatial variability of the observations are changed. The results from rank histograms are less easy to interpret and can be influenced considerably by the parameter values used. This article also explores the sensitivity of the performance of the scheme to the chosen parameter values, and the effect of using different model error parameters in the truth compared with the ensemble model runs.
Abstract:
This paper reports the first derived thermo-optical properties for vacuum-deposited infrared thin films embedded in multilayers. These properties were extracted from the temperature dependence of manufactured narrow bandpass filters across the 4-17 µm mid-infrared wavelength region. Using a repository of spaceflight multi-cavity bandpass filters, the thermo-optical expansion coefficients of PbTe and ZnSe were determined across an elevated temperature range of 20-160 °C. Embedded ZnSe films showed thermo-optical properties similar to reported bulk values, whilst the embedded PbTe films, of lower optical density, deviate from reference literature sources. Detailed knowledge of the derived coefficients is essential to the multilayer design of temperature-invariant narrow bandpass filters for use in non-cooled infrared detection systems. We further present the manufacture of the first reported temperature-invariant multi-cavity narrow bandpass filter utilizing PbS chalcogenide layer material.
Abstract:
A truly variance-minimizing filter is introduced and its performance is demonstrated with the Korteweg-de Vries (KdV) equation and with a multilayer quasigeostrophic model of the ocean area around South Africa. It is recalled that Kalman-like filters are not variance minimizing for nonlinear model dynamics and that four-dimensional variational data assimilation (4DVAR)-like methods relying on perfect model dynamics have difficulty with providing error estimates. The new method does not have these drawbacks. In fact, it combines advantages from both methods in that it does provide error estimates while automatically having balanced states after analysis, without extra computations. It is based on ensemble or Monte Carlo integrations to simulate the probability density of the model evolution. When observations are available, the so-called importance resampling algorithm is applied. From Bayes's theorem it follows that each ensemble member receives a new weight dependent on its 'distance' to the observations. Because the weights are strongly varying, a resampling of the ensemble is necessary. This resampling is done such that members with high weights are duplicated according to their weights, while low-weight members are largely ignored. In passing, it is noted that data assimilation is not an inverse problem by nature, although it can be formulated that way. Also, it is shown that the posterior variance can be larger than the prior if the usual Gaussian framework is set aside. However, in the examples presented here, the entropy of the probability densities is decreasing. The application to the ocean area around South Africa, governed by strongly nonlinear dynamics, shows that the method is working satisfactorily. The strong and weak points of the method are discussed and possible improvements are proposed.
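The importance resampling step described in this abstract, in which high-weight members are duplicated roughly in proportion to their weights and low-weight members are dropped, can be sketched as follows. Systematic resampling is one common variant, used here purely for illustration; the abstract does not specify which resampling scheme the paper employs.

```python
import numpy as np

def importance_resample(weights, rng):
    """Return ensemble indices after systematic resampling.

    A single uniform offset plus evenly spaced points are mapped through
    the cumulative weights, so member i is copied approximately
    n * w_i times. After resampling, all members carry equal weight 1/n.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                        # normalize, as after Bayes update
    n = len(w)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(w), positions)

rng = np.random.default_rng(2)
weights = [0.02, 0.9, 0.05, 0.03]        # one member close to the obs
idx = importance_resample(weights, rng)  # member 1 appears multiple times
```

The dominant member (weight 0.9) is duplicated while the near-zero-weight members are largely ignored, exactly the duplicate/discard behaviour the abstract describes.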
Abstract:
A smoother introduced earlier by van Leeuwen and Evensen is applied to a problem in which real observations are used in an area with strongly nonlinear dynamics. The derivation is new, but it resembles an earlier derivation by van Leeuwen and Evensen. Again a Bayesian view is taken, in which the prior probability density of the model and the probability density of the observations are combined to form a posterior density. The mean and the covariance of this density give the variance-minimizing model evolution and its errors. The assumption is made that the prior probability density is a Gaussian, leading to a linear update equation. Critical evaluation shows when the assumption is justified. This also sheds light on why Kalman filters, in which the same approximation is made, work for nonlinear models. By reference to the derivation, the impact of model and observational biases on the equations is discussed, and it is shown that Bayes's formulation can still be used. A practical advantage of the ensemble smoother is that no adjoint equations have to be integrated and that error estimates are easily obtained. The present application shows that for process studies a smoother will give superior results compared to a filter, not only owing to the smooth transitions at observation points, but also because the origin of features can be followed back in time. Also its preference over a strong-constraint method is highlighted. Furthermore, it is argued that the proposed smoother is more efficient than gradient descent methods or the representer method when error estimates are taken into account.
Abstract:
Nonlinear data assimilation is high on the agenda in all fields of the geosciences: with ever-increasing model resolution, the inclusion of more physical (biological, etc.) processes, and more complex observation operators, the data-assimilation problem becomes more and more nonlinear. The suitability of particle filters to solve the nonlinear data assimilation problem in high-dimensional geophysical problems is discussed. Several existing and new schemes are presented, and it is shown that at least one of them, the Equivalent-Weights Particle Filter, does indeed beat the curse of dimensionality and provides a way forward to solve the problem of nonlinear data assimilation in high-dimensional systems.
Abstract:
The Bloom filter is a space-efficient randomized data structure for representing a set and supporting membership queries. Bloom filters intrinsically allow false positives; however, the space savings they offer outweigh this disadvantage if the false positive rate is kept sufficiently low. Inspired by the recent application of the Bloom filter in a novel multicast forwarding fabric, this paper proposes a variant of the Bloom filter, the optihash. The optihash optimizes the false positive rate at the stage of Bloom filter formation, using the same amount of space at the cost of slightly more processing than the classic Bloom filter. Bloom filters are often used in situations where a fixed amount of space is a primary constraint. We present the optihash as a good alternative to Bloom filters, since the amount of space is the same and the improvement in false positives can justify the additional processing. Specifically, we show via simulations and numerical analysis that using the optihash the occurrence of false positives can be reduced and controlled at the cost of a small amount of additional processing. The simulations are carried out for in-packet forwarding. In this framework, the Bloom filter is used as a compact link/route identifier and is placed in the packet header to encode the route. At each node, the Bloom filter is queried for membership in order to make forwarding decisions. A false positive in the forwarding decision translates into packets forwarded along an unintended outgoing link. By using the optihash, false positives can be reduced. The optimization processing is carried out in an entity termed the Topology Manager, which is part of the control plane of the multicast forwarding fabric. This processing is carried out only on a per-session basis, not for every packet.
The aim of this paper is to present the optihash and evaluate its false positive performance via simulations, in order to measure the influence of different parameters on the false positive rate. The false positive rate of the optihash is then compared with the false positive probability of the classic Bloom filter.
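The baseline structure that the optihash improves upon can be sketched as follows: a classic Bloom filter with a bit array of size m and k hash functions, used here (as in the paper's forwarding scenario) to encode a set of link identifiers. This is a minimal sketch of the classic filter only, not of the optihash; the SHA-256-derived hash functions and the parameters m = 256, k = 4 are illustrative choices.

```python
import hashlib

class BloomFilter:
    """Classic Bloom filter: an m-bit array with k hash functions.

    Membership queries may return false positives (all k probed bits
    happen to be set by other items) but never false negatives.
    """

    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _indexes(self, item):
        # k independent hash functions simulated by salting SHA-256
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for j in self._indexes(item):
            self.bits[j] = True

    def __contains__(self, item):
        return all(self.bits[j] for j in self._indexes(item))

# Encode a route as a set of link identifiers, as in in-packet forwarding
bf = BloomFilter(m=256, k=4)
for link in ["A-B", "B-C", "C-D"]:
    bf.add(link)
```

For n inserted items the expected false positive probability is approximately (1 - e^(-kn/m))^k; the optihash targets exactly this quantity at filter-formation time while keeping m unchanged.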