Abstract:
In principle the global mean geostrophic surface circulation of the ocean can be diagnosed by subtracting a geoid from a mean sea surface (MSS). However, because the resulting mean dynamic topography (MDT) is approximately two orders of magnitude smaller than either of the constituent surfaces, and because the geoid is most naturally expressed as a spectral model while the MSS is a gridded product, in practice complications arise. Two algorithms for combining MSS and satellite-derived geoid data to determine the ocean’s MDT are considered in this paper: a pointwise approach, whereby the gridded geoid height field is subtracted from the gridded MSS; and a spectral approach, whereby the spherical harmonic coefficients of the geoid are subtracted from an equivalent set of coefficients representing the MSS, from which the gridded MDT is then obtained. The essential difference is that with the latter approach the MSS is truncated, a form of filtering, just as with the geoid. This ensures that errors of omission resulting from the truncation of the geoid, which are small in comparison to the geoid but large in comparison to the MDT, are matched, and therefore negated, by similar errors of omission in the MSS. The MDTs produced by both methods require additional filtering. However, the spectral MDT requires less filtering to remove noise, and therefore it retains more oceanographic information than its pointwise equivalent. The spectral method also results in a more realistic MDT at coastlines.

1. Introduction

An important challenge in oceanography is the accurate determination of the ocean’s time-mean dynamic topography (MDT). If this can be achieved with sufficient accuracy for combination with the time-dependent component of the dynamic topography, obtainable from altimetric data, then the resulting sum (i.e., the absolute dynamic topography) will give an accurate picture of surface geostrophic currents and ocean transports.
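The cancellation of omission errors can be illustrated with a one-dimensional Fourier analogue (a sketch with synthetic fields, not real geoid or MSS data; the truncation wavenumber and amplitudes are illustrative assumptions):

```python
import numpy as np

n = 512
x = np.arange(n)
# Synthetic 'geoid' with a red spectrum, plus an MDT two orders smaller.
geoid = sum(np.cos(2 * np.pi * m * x / n + m) / m**2 for m in range(1, 200))
mdt_true = 0.01 * np.cos(2 * np.pi * 5 * x / n)
mss = geoid + mdt_true                       # mean sea surface

def truncate(field, L):
    """Spectral truncation at wavenumber L (1-D analogue of truncating
    a spherical harmonic expansion at degree L)."""
    F = np.fft.rfft(field)
    F[L + 1:] = 0
    return np.fft.irfft(F, n)

L = 60
geoid_L = truncate(geoid, L)                 # band-limited geoid model

mdt_pointwise = mss - geoid_L                # full MSS minus truncated geoid
mdt_spectral = truncate(mss, L) - geoid_L    # truncate the MSS first

err_pw = np.sqrt(np.mean((mdt_pointwise - mdt_true) ** 2))
err_sp = np.sqrt(np.mean((mdt_spectral - mdt_true) ** 2))
# The geoid's omission error (wavenumbers > L) contaminates the pointwise
# MDT, but cancels against the matching MSS omission error in the
# spectral difference.
```

In this toy setup the spectral MDT recovers the truth almost exactly, while the pointwise MDT carries the geoid's high-wavenumber omission error, which is a sizeable fraction of the MDT signal itself.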
Abstract:
Accurate estimation of the soil water balance (SWB) is important for a number of applications (e.g. environmental, meteorological, agronomical and hydrological). The objective of this study was to develop and test techniques for the estimation of soil water fluxes and SWB components (particularly infiltration, evaporation and drainage below the root zone) from soil water records. The work presented here is based on profile soil moisture data measured using dielectric methods, at 30-min resolution, at an experimental site with different vegetation covers (barley, sunflower and bare soil). Estimates of infiltration were derived by assuming that observed gains in the soil profile water content during rainfall were due to infiltration. Inaccuracies related to diurnal fluctuations present in the dielectric-based soil water records were resolved by filtering the data with adequate threshold values. Inconsistencies caused by the redistribution of water after rain events were corrected by allowing for a redistribution period before computing water gains. Estimates of evaporation and drainage were derived from water losses above and below the deepest zero flux plane (ZFP), respectively. The evaporation estimates for the sunflower field were compared to evaporation data obtained with an eddy covariance (EC) system located elsewhere in the field. The EC estimate of total evaporation for the growing season was about 25% larger than that derived from the soil water records. This was consistent with differences in crop growth (based on direct measurements of biomass, and field mapping of vegetation using laser altimetry) between the EC footprint and the area of the field used for soil moisture monitoring.
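The gain-thresholding idea can be sketched in a few lines (the threshold, layer geometry and time step are illustrative assumptions, not the paper's values):

```python
import numpy as np

def infiltration_from_storage(theta, dz_mm, gain_threshold_mm=0.5):
    """Estimate infiltration (mm) as the sum of profile water-storage
    increases, keeping only gains above a noise threshold.

    theta: array (n_times, n_layers) of volumetric water content;
    dz_mm: layer thickness in mm. Gains below `gain_threshold_mm` per
    time step are treated as diurnal/dielectric noise and discarded.
    """
    storage = theta.sum(axis=1) * dz_mm      # total profile water (mm)
    gains = np.diff(storage)
    return gains[gains > gain_threshold_mm].sum()

# Synthetic example: 4 layers of 100 mm, one rain event at steps 50-52,
# each step adding 0.005 m3/m3 to every layer (2 mm of storage per step).
theta = np.full((100, 4), 0.20)
theta[50:] += 0.005
theta[51:] += 0.005
theta[52:] += 0.005
total_infiltration = infiltration_from_storage(theta, 100.0)
```

The redistribution correction described in the abstract would, in this sketch, amount to delaying the `np.diff` comparison until some time after the event rather than differencing consecutive steps.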
Abstract:
Observations from the High Resolution Dynamics Limb Sounder (HIRDLS) instrument on NASA's Aura satellite are used to quantify gravity wave momentum fluxes in the middle atmosphere. The period around the 2006 Arctic sudden stratospheric warming (SSW) is investigated, during which a substantial elevation of the stratopause occurred. Analysis of the HIRDLS results, together with analysis of European Centre for Medium-Range Weather Forecasts zonal winds, provides direct evidence of wind filtering of the gravity wave spectrum during this period. This confirms previous hypotheses from model studies and further contributes to our understanding of the effects of gravity wave driving on the winter polar stratopause.
Abstract:
Reports the factor-filtering and primality-testing of Mersenne Numbers Mp for p < 100000, the latter using the ICL 'DAP' Distributed Array Processor.
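The two stages named in the report, factor filtering followed by a deterministic primality test, can be sketched with standard number theory (this is the textbook Lucas-Lehmer procedure, not the ICL DAP implementation):

```python
def trial_factor(p, k_max=100000):
    """Factor filter: any prime factor q of M_p = 2**p - 1 (p an odd
    prime) has the form q = 2*k*p + 1 with q = +/-1 (mod 8). Returns
    the smallest such factor found below sqrt(M_p), else None."""
    m = (1 << p) - 1
    for k in range(1, k_max + 1):
        q = 2 * k * p + 1
        if q * q > m:
            break
        if q % 8 in (1, 7) and pow(2, p, q) == 1:
            return q
    return None

def lucas_lehmer(p):
    """Deterministic Lucas-Lehmer primality test for M_p (odd prime p):
    s_0 = 4, s_i = s_{i-1}^2 - 2 mod M_p; M_p is prime iff s_{p-2} = 0."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0
```

The filter is what makes the search tractable: candidates failing the `2kp + 1` and mod-8 conditions never need a modular exponentiation, and exponents surviving the filter go on to the (far more expensive) Lucas-Lehmer test.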
Abstract:
Airborne scanning laser altimetry (LiDAR) is an important new data source for river flood modelling. LiDAR can give dense and accurate DTMs of floodplains for use as model bathymetry. Spatial resolutions of 0.5m or less are possible, with a height accuracy of 0.15m. LiDAR gives a Digital Surface Model (DSM), so vegetation removal software (e.g. TERRASCAN) must be used to obtain a DTM. An example used to illustrate the current state of the art will be the LiDAR data provided by the EA, which has been processed by their in-house software to convert the raw data to a ground DTM and separate vegetation height map. Their method distinguishes trees from buildings on the basis of object size. EA data products include the DTM with or without buildings removed, a vegetation height map, a DTM with bridges removed, etc. Most vegetation removal software ignores short vegetation less than say 1m high. We have attempted to extend vegetation height measurement to short vegetation using local height texture. Typically most of a floodplain may be covered in such vegetation. The idea is to assign friction coefficients depending on local vegetation height, so that friction is spatially varying. This obviates the need to calibrate a global floodplain friction coefficient. It’s not clear at present if the method is useful, but it’s worth testing further. The LiDAR DTM is usually determined by looking for local minima in the raw data, then interpolating between these to form a space-filling height surface. This is a low pass filtering operation, in which objects of high spatial frequency such as buildings, river embankments and walls may be incorrectly classed as vegetation. The problem is particularly acute in urban areas. A solution may be to apply pattern recognition techniques to LiDAR height data fused with other data types such as LiDAR intensity or multispectral CASI data. 
We are attempting to use digital map data (Mastermap structured topography data) to help to distinguish buildings from trees, and roads from areas of short vegetation. The problems involved in doing this will be discussed. A related problem of how best to merge historic river cross-section data with a LiDAR DTM will also be considered. LiDAR data may also be used to help generate a finite element mesh. In rural areas we have decomposed a floodplain mesh according to taller vegetation features such as hedges and trees, so that e.g. hedge elements can be assigned higher friction coefficients than those in adjacent fields. We are attempting to extend this approach to urban areas, so that the mesh is decomposed in the vicinity of buildings, roads, etc., as well as trees and hedges. A dominant points algorithm is used to identify points of high curvature on a building or road, which act as initial nodes in the meshing process. A difficulty is that the resulting mesh may contain a very large number of nodes. However, the mesh generated may be useful to allow a high-resolution FE model to act as a benchmark for a more practical lower-resolution model. A further problem discussed will be how best to exploit data redundancy due to the high resolution of the LiDAR compared to that of a typical flood model. Problems occur if features have dimensions smaller than the model cell size: e.g. for a 5m-wide embankment within a raster grid model with 15m cell size, the maximum height of the embankment locally could be assigned to each cell covering the embankment. But how could a 5m-wide ditch be represented? Again, this redundancy has been exploited to improve wetting/drying algorithms using the sub-grid-scale LiDAR heights within finite elements at the waterline.
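The failure mode described above, where narrow raised features are stripped out by the local-minima ground filter, is easy to reproduce in one dimension (an illustrative sketch, not the EA's in-house algorithm):

```python
import numpy as np

def ground_from_local_minima(dsm, window=5):
    """Crude 1-D version of local-minima ground filtering: each output
    cell is the minimum DSM height within a moving window. This is a
    low-pass operation, so features narrower than the window (walls,
    embankments) are removed along with vegetation."""
    half = window // 2
    padded = np.pad(dsm, half, mode="edge")
    return np.array([padded[i:i + window].min() for i in range(dsm.size)])

# Flat 10 m floodplain with a 2-cell-wide, 3 m-high embankment.
dsm = np.full(50, 10.0)
dsm[20:22] += 3.0
dtm = ground_from_local_minima(dsm)
# Every window covering the embankment also contains a ground cell,
# so the embankment vanishes from the derived 'ground' surface.
```

This is exactly why the embankment is misclassified as vegetation: any feature narrower than the filter window cannot survive the minimum operation, whatever its true nature.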
Abstract:
During the past 15 years, a number of initiatives have been undertaken at national level to develop ocean forecasting systems operating at regional and/or global scales. The co-ordination between these efforts has been organized internationally through the Global Ocean Data Assimilation Experiment (GODAE). The French MERCATOR project is one of the leading participants in GODAE. The MERCATOR systems routinely assimilate a variety of observations such as multi-satellite altimeter data, sea-surface temperature and in situ temperature and salinity profiles, focusing on high-resolution scales of the ocean dynamics. The assimilation strategy in MERCATOR is based on a hierarchy of methods of increasing sophistication including optimal interpolation, Kalman filtering and variational methods, which are progressively deployed through the Système d'Assimilation MERCATOR (SAM) series. SAM-1 is based on a reduced-order optimal interpolation which can be operated using ‘altimetry-only’ or ‘multi-data’ set-ups; it relies on the concept of separability, assuming that the correlations can be separated into a product of horizontal and vertical contributions. The second release, SAM-2, is being developed to include new features from the singular evolutive extended Kalman (SEEK) filter, such as three-dimensional, multivariate error modes and adaptivity schemes. The third one, SAM-3, considers variational methods such as the incremental four-dimensional variational algorithm. Most operational forecasting systems evaluated during GODAE are based on least-squares statistical estimation assuming Gaussian errors. In the framework of the EU MERSEA (Marine EnviRonment and Security for the European Area) project, research is being conducted to prepare the next-generation operational ocean monitoring and forecasting systems. The research effort will explore nonlinear assimilation formulations to overcome limitations of the current systems.
This paper provides an overview of the developments conducted in MERSEA with the SEEK filter, the Ensemble Kalman filter and the sequential importance re-sampling filter.
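Of the sequential methods mentioned, the Ensemble Kalman filter has the most compact update step. A minimal stochastic (perturbed-observation) analysis step might look like this (a generic textbook form with a toy two-variable state, not the SAM/SEEK implementation):

```python
import numpy as np

def enkf_analysis(ens, H, y, obs_std, rng):
    """Stochastic EnKF analysis step.
    ens: (n_state, n_members) forecast ensemble; H: (n_obs, n_state)
    observation operator; y: (n_obs,) observations."""
    n_obs, n_mem = H.shape[0], ens.shape[1]
    Xp = ens - ens.mean(axis=1, keepdims=True)        # state anomalies
    Yp = H @ Xp                                       # obs-space anomalies
    PfHT = Xp @ Yp.T / (n_mem - 1)                    # Pf H^T
    S = Yp @ Yp.T / (n_mem - 1) + obs_std**2 * np.eye(n_obs)
    K = PfHT @ np.linalg.inv(S)                       # Kalman gain
    Y = y[:, None] + obs_std * rng.standard_normal((n_obs, n_mem))
    return ens + K @ (Y - H @ ens)                    # updated ensemble

rng = np.random.default_rng(0)
ens = rng.standard_normal((2, 500))      # prior: zero mean, unit spread
H = np.array([[1.0, 0.0]])               # observe first component only
y = np.array([1.0])                      # accurate observation (std 0.1)
post = enkf_analysis(ens, H, y, 0.1, rng)
```

The ensemble mean of the observed component is pulled almost entirely onto the observation, and its spread collapses accordingly; the unobserved component is corrected only through the (here negligible) sampled cross-covariance.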
Abstract:
Constructing biodiversity richness maps from Environmental Niche Models (ENMs) of thousands of species is time-consuming. A separate species occurrence data pre-processing phase enables the experimenter to control test AUC score variance due to species dataset size. Besides removing duplicate occurrences and points with missing environmental data, we discuss the need for coordinate precision, wide dispersion, temporal and synonymity filters. After species data filtering, the final task of a pre-processing phase should be the automatic generation of species occurrence datasets which can then be directly ‘plugged in’ to the ENM. A software application capable of carrying out all these tasks will be a valuable time-saver, particularly for large-scale biodiversity studies.
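A minimal version of such a filtering pipeline could look as follows (field names and threshold values are illustrative assumptions, not the paper's specification; the dispersion and synonymity filters are omitted for brevity):

```python
def preprocess_occurrences(records, min_precision=0.01, min_year=1970):
    """Apply basic occurrence filters: drop duplicate coordinates,
    records with missing environmental data, suspiciously coarse
    (integer-rounded) coordinates, and records older than `min_year`."""
    seen, kept = set(), []
    for r in records:
        key = (r["lat"], r["lon"])
        if key in seen:                          # duplicate occurrence
            continue
        if r.get("env") is None:                 # missing environmental data
            continue
        # coordinate-precision filter: integer-rounded points are suspect
        if (abs(r["lat"] - round(r["lat"])) < min_precision and
                abs(r["lon"] - round(r["lon"])) < min_precision):
            continue
        if r["year"] < min_year:                 # temporal filter
            continue
        seen.add(key)
        kept.append(r)
    return kept

records = [
    {"lat": 10.123, "lon": 20.456, "env": 1.0, "year": 2000},
    {"lat": 10.123, "lon": 20.456, "env": 1.0, "year": 2001},  # duplicate
    {"lat": 11.0, "lon": 21.0, "env": 2.0, "year": 2000},      # coarse coords
    {"lat": 12.345, "lon": 22.1, "env": None, "year": 2000},   # no env data
    {"lat": 13.3, "lon": 23.3, "env": 3.0, "year": 1950},      # too old
]
kept = preprocess_occurrences(records)
```

The output of such a function is exactly the kind of cleaned dataset the abstract proposes to "plug in" to an ENM directly.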
Abstract:
Preface. Iron is considered to be a minor element employed, in a variety of forms, by nearly all living organisms. In some cases, it is utilised in large quantities, for instance for the formation of magnetosomes within magnetotactic bacteria or during use of iron as a respiratory donor or acceptor by iron oxidising or reducing bacteria. However, in most cases the role of iron is restricted to its use as a cofactor or prosthetic group assisting the biological activity of many different types of protein. The key metabolic processes that are dependent on iron as a cofactor are numerous; they include respiration, light harvesting, nitrogen fixation, the Krebs cycle, redox stress resistance, amino acid synthesis and oxygen transport. Indeed, it is clear that Life in its current form would be impossible in the absence of iron. One of the main reasons for the reliance of Life upon this metal is the ability of iron to exist in multiple redox states, in particular the relatively stable ferrous (Fe2+) and ferric (Fe3+) forms. The availability of these stable oxidation states allows iron to engage in redox reactions over a wide range of midpoint potentials, depending on the coordination environment, making it an extremely adaptable mediator of electron exchange processes. Iron is also one of the most common elements within the Earth’s crust (5% abundance) and thus is considered to have been readily available when Life evolved on our early, anaerobic planet. However, as oxygen accumulated (the ‘Great oxidation event’) within the atmosphere some 2.4 billion years ago, and as the oceans became less acidic, the iron within primordial oceans was converted from its soluble reduced form to its weakly-soluble oxidised ferric form, which precipitated (~1.8 billion years ago) to form the ‘banded iron formations’ (BIFs) observed today in Precambrian sedimentary rocks around the world. 
These BIFs provide a geological record marking a transition point away from the ancient anaerobic world towards modern aerobic Earth. They also indicate a period over which the bio-availability of iron shifted from abundance to limitation, a condition that extends to the modern day. Thus, it is considered likely that the vast majority of extant organisms face the common problem of securing sufficient iron from their environment – a problem that Life on Earth has had to cope with for some 2 billion years. This struggle for iron is exemplified by the competition for this metal amongst co-habiting microorganisms, which resort to stealing (pirating) each other's iron supplies! The reliance of micro-organisms upon iron can be disadvantageous to them, and to our innate immune system it represents a chink in the microbial armour, offering an opportunity that can be exploited to ward off pathogenic invaders. In order to infect body tissues and cause disease, pathogens must secure all their iron from the host. To fight such infections, the host specifically withdraws available iron through the action of various iron depleting processes (e.g. the release of lactoferrin and lipocalin-2) – this represents an important strategy in our defence against disease. However, pathogens are frequently able to deploy iron acquisition systems that target host iron sources such as transferrin, lactoferrin and hemoproteins, and thus counteract the iron-withdrawal approaches of the host. Inactivation of such host-targeting iron-uptake systems often attenuates the pathogenicity of the invading microbe, illustrating the importance of ‘the battle for iron’ in the infection process. The role of iron sequestration systems in facilitating microbial infections has been a major driving force in research aimed at unravelling the complexities of microbial iron transport processes. But also, the intricacy of such systems offers a challenge that stimulates the curiosity.
One such challenge is to understand how balanced levels of free iron within the cytosol are achieved in a way that avoids toxicity whilst providing sufficient levels for metabolic purposes – this is a requirement that all organisms have to meet. Although the systems involved in achieving this balance can be highly variable amongst different microorganisms, the overall strategy is common. On a coarse level, the homeostatic control of cellular iron is maintained through strict control of the uptake, storage and utilisation of available iron, and is co-ordinated by integrated iron-regulatory networks. However, much yet remains to be discovered concerning the fine details of these different iron regulatory processes. As already indicated, perhaps the most difficult task in maintaining iron homeostasis is simply the procurement of sufficient iron from external sources. The importance of this problem is demonstrated by the plethora of distinct iron transporters often found within a single bacterium, each targeting different forms (complex or redox state) of iron or a different environmental condition. Thus, microbes devote considerable cellular resource to securing iron from their surroundings, reflecting how successful acquisition of iron can be crucial in the competition for survival. The aim of this book is to provide the reader with an overview of iron transport processes within a range of microorganisms and to provide an indication of how microbial iron levels are controlled. This aim is promoted through the inclusion of expert reviews on several well studied examples that illustrate the current state of play concerning our comprehension of how iron is translocated into the bacterial (or fungal) cell and how iron homeostasis is controlled within microbes. The first two chapters (1-2) consider the general properties of microbial iron-chelating compounds (known as ‘siderophores’), and the mechanisms used by bacteria to acquire haem and utilise it as an iron source.
The following twelve chapters (3-14) focus on specific types of microorganism that are of key interest, covering both an array of pathogens for humans, animals and plants (e.g. species of Bordetella, Shigella, Erwinia, Vibrio, Aeromonas, Francisella, Campylobacter and Staphylococci, and EHEC) as well as a number of prominent non-pathogens (e.g. the rhizobia, E. coli K-12, Bacteroides spp., cyanobacteria, Bacillus spp. and yeasts). The chapters relay the common themes in microbial iron uptake approaches (e.g. the use of siderophores, TonB-dependent transporters, and ABC transport systems), but also highlight many distinctions (such as the use of different types of iron regulator and the impact of the presence/absence of a cell wall) in the strategies employed. We hope that those both within and outside the field will find this book useful, stimulating and interesting. We intend that it will provide a source for reference that will assist relevant researchers and provide an entry point for those initiating their studies within this subject. Finally, it is important that we acknowledge and thank wholeheartedly the many contributors who have provided the 14 excellent chapters from which this book is composed. Without their considerable efforts, this book, and the understanding that it relays, would not have been possible. Simon C Andrews and Pierre Cornelis
Abstract:
Capturing the pattern of structural change is a relevant task in applied demand analysis, as consumer preferences may vary significantly over time. Filtering and smoothing techniques have recently played an increasingly relevant role. A dynamic Almost Ideal Demand System with random walk parameters is estimated in order to detect modifications in consumer habits and preferences, as well as changes in the behavioural response to prices and income. Systemwise estimation, consistent with the underlying constraints from economic theory, is achieved through the EM algorithm. The proposed model is applied to UK aggregate consumption of alcohol and tobacco, using quarterly data from 1963 to 2003. Increased alcohol consumption is explained by a preference shift, addictive behaviour and a lower price elasticity. The dynamic and time-varying specification is consistent with the theoretical requirements imposed at each sample point.
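The core of such a model is a Kalman filter over random-walk parameters. A scalar sketch of the idea (a single regressor with fixed, assumed noise variances, rather than the paper's full demand system with EM-estimated variances):

```python
import numpy as np

def tv_coeff_filter(y, x, q=0.01, r=1.0):
    """Kalman filter for y_t = x_t * beta_t + e_t with a random-walk
    coefficient beta_t = beta_{t-1} + n_t. q and r are the assumed
    state and observation noise variances."""
    beta, P = 0.0, 10.0              # diffuse-ish initial state
    path = []
    for yt, xt in zip(y, x):
        P += q                        # predict: random-walk coefficient
        S = xt * P * xt + r           # innovation variance
        K = P * xt / S                # Kalman gain
        beta += K * (yt - xt * beta)  # update coefficient
        P *= (1.0 - K * xt)
        path.append(beta)
    return np.array(path)

# Synthetic preference shift: the true coefficient drifts from 1 to 2.
beta_true = np.linspace(1.0, 2.0, 200)
path = tv_coeff_filter(beta_true, np.ones(200))
```

The filtered path tracks the drifting coefficient with a small steady-state lag; in the paper's setting the same machinery is run jointly across the demand system's equations, with theory-implied constraints imposed at each sample point.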
Abstract:
Light patterns have less effect on numbers of eggs laid by current stocks than on those of forty years ago, but the principles have not changed. Ovarian activity is stimulated by increasing photoperiods and suppressed by decreasing photoperiods. The light pattern used during rearing can still have large effects on age at 50% lay, even for modern stocks. Early sexual maturity maximises egg numbers but gives smaller eggs. Late maturity maximises egg size at the expense of numbers. The relationship between egg output (g/hen d) and age at first egg is curvilinear, with maximum yield occurring in flocks maturing in about the centre of their potential range. Fancy patterns of increasing daylength after maturity are probably not justified. A flock held on a constant 14h day will lay as many eggs as one given step up lighting. Intermittent lighting saves about 5% of feed consumption with no loss of output, provided that the feed has adequate amino acid content to allow for the reduced feed intake. Producers with light-proof laying houses should be taking advantage of intermittent lighting. The recommended light intensity for laying houses is still 10 lx, although the physiological threshold for response to changes in photoperiod is closer to 2 lx. Very dim (0.05 lx) light filtering into blacked out houses will not stimulate the hypothalamic receptors responsible for photo-sexual responses, but may affect the bird's biological clock, which can alter its response to a constant short photoperiod. Feed intake shows a curvilinear dependence on environmental temperature. At temperatures below the panting threshold, performance can be maintained by adjusting the feed so as to maintain an adequate intake of critical amino acids. Above the panting threshold, the hen is unable to take in enough energy to maintain normal output. There is no dietary modification which can effectively offset this problem. 
Diurnally cycling temperatures result in feed intake and egg production equivalent to that observed under a constant temperature equal to the mean of the cycle. When the poultry house is cooler at night than by day, it helps to provide light so that the birds can feed during the cooler part of the cycle.
Abstract:
A fully automated procedure to extract and to image local fibre orientation in biological tissues from scanning X-ray diffraction is presented. The preferred chitin fibre orientation in the flow sensing system of crickets is determined with high spatial resolution by applying synchrotron radiation based X-ray microbeam diffraction in conjunction with advanced sample sectioning using a UV micro-laser. The data analysis is based on an automated detection of azimuthal diffraction maxima after 2D convolution filtering (smoothing) of the 2D diffraction patterns. Under the assumption of crystallographic fibre symmetry around the morphological fibre axis, the evaluation method allows mapping the three-dimensional orientation of the fibre axes in space. The resulting two-dimensional maps of the local fibre orientations - together with the complex shape of the flow sensing system - may be useful for a better understanding of the mechanical optimization of such tissues.
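The maxima-detection step can be sketched with synthetic data (numpy only; for brevity this smooths the 1-D azimuthal profile by convolution rather than 2-D convolution filtering the full pattern, and the geometry is invented):

```python
import numpy as np

def fibre_orientation(pattern, centre, n_bins=180, smooth=5):
    """Estimate the preferred fibre azimuth (degrees, modulo 180) from
    a diffraction pattern: bin intensity by azimuth about the beam
    centre, smooth the profile with a box kernel (circularly), and
    locate the maximum."""
    ny, nx = pattern.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    phi = np.degrees(np.arctan2(yy - centre[0], xx - centre[1])) % 180.0
    bins = (phi * n_bins / 180.0).astype(int) % n_bins
    profile = np.bincount(bins.ravel(), weights=pattern.ravel(),
                          minlength=n_bins)
    pad = smooth // 2
    ext = np.concatenate([profile[-pad:], profile, profile[:pad]])
    smoothed = np.convolve(ext, np.ones(smooth) / smooth, mode="valid")
    return smoothed.argmax() * 180.0 / n_bins

# Synthetic pattern: two diffraction spots along the 45-degree azimuth.
img = np.zeros((101, 101))
img[68:73, 68:73] = 1.0
img[28:33, 28:33] = 1.0
angle = fibre_orientation(img, (50, 50))
```

Smoothing before taking the maximum is what makes the detection robust to pixel noise; the paper's further step, recovering three-dimensional fibre axes, additionally assumes crystallographic fibre symmetry about the morphological axis.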
Abstract:
There are still major challenges in the area of automatic indexing and retrieval of digital data. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieval of such data based on the semantic content rather than keywords. To enable intelligent web interactions or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. Research has been ongoing for a few years in the field of ontological engineering with the aim of using ontologies to add knowledge to information. In this paper we describe the architecture of a system designed to automatically and intelligently index huge repositories of special effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval.
Abstract:
Modal filtering is based on the capability of single-mode waveguides to transmit only one complex amplitude function, thus eliminating virtually any perturbation of the interfering wavefronts and making very high rejection ratios possible in a nulling interferometer. In the present paper we focus on the progress of Integrated Optics in the thermal infrared [6-20 mu m] range, one of the two candidate technologies for the fabrication of modal filters, together with fiber optics. In conclusion of the European Space Agency's (ESA) "Integrated Optics for Darwin" activity, etched layers of chalcogenide material deposited on chalcogenide glass substrates were selected among four candidates as the technology with the best potential to simultaneously meet the filtering efficiency, absolute and spectral transmission, and beam coupling requirements. ESA's new "Integrated Optics" activity started in mid-2007 with the purpose of improving the technology until compliant prototypes can be manufactured and validated, expectedly by the end of 2009. The present paper aims at introducing the project and the components' requirements and functions. The selected materials and preliminary designs, as well as the experimental validation logic and test benches, are presented. More details are provided on the progress of the main technology: vacuum deposition in the co-evaporation mode and subsequent etching of chalcogenide layers. In addition, preliminary investigations of an alternative technology based on burying a chalcogenide optical fiber core into a chalcogenide substrate are presented. Specific developments of anti-reflective solutions designed for the mitigation of Fresnel losses at the input and output surfaces of the components are also introduced.
Abstract:
The 3D reconstruction of a Golgi-stained dendritic tree from a serial stack of images captured with a transmitted light bright-field microscope is investigated. Modifications to the bootstrap filter are discussed such that the tree structure may be estimated recursively as a series of connected segments. The tracking performance of the bootstrap particle filter is compared against Differential Evolution, an evolutionary global optimisation method, both in terms of robustness and accuracy. It is found that the particle filtering approach is significantly more robust and accurate for the data considered.
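A minimal bootstrap (particle) filter captures the propagate-weight-resample loop underlying the method; the sketch below tracks a toy 1-D random-walk state observed in Gaussian noise, a stand-in for tracking connected dendrite segments through an image stack (model and parameters are illustrative assumptions):

```python
import numpy as np

def bootstrap_filter(observations, n_particles=1000, proc_std=0.1,
                     obs_std=0.5, rng=None):
    """Bootstrap particle filter: propagate particles through the
    process model, weight them by the observation likelihood, then
    resample in proportion to the weights."""
    if rng is None:
        rng = np.random.default_rng(0)
    particles = rng.standard_normal(n_particles)      # prior samples
    means = []
    for y in observations:
        # Propagate through the random-walk process model.
        particles = particles + proc_std * rng.standard_normal(n_particles)
        # Weight by the Gaussian observation likelihood.
        w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        w /= w.sum()
        # Resample (this is what distinguishes the bootstrap filter).
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
        means.append(particles.mean())
    return np.array(means)

# Constant true state of 2.0 observed repeatedly: the posterior mean
# should converge towards it.
est = bootstrap_filter([2.0] * 30)
```

In the paper's setting the "state" is a connected tree segment and the likelihood comes from the bright-field image data, but the recursion has the same three-step structure.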
Abstract:
In this paper we consider a cooperative communication system where some a priori information of wireless channels is available at the transmitter. Several opportunistic relaying strategies are developed to fully utilize the available channel information. Then an explicit expression of the outage probability is developed for each proposed cooperative scheme as well as the diversity-multiplexing tradeoff by using order statistics. Our analytical results show that the more channel information available at the transmitter, the better performance a cooperative system can achieve. When the exact values of the source-relay channels are available, the performance loss at low SNR can be effectively suppressed. When the source node has the access to the source-relay and relay-destination channels, the full diversity can be achieved by costing only one extra channel used for relaying transmission, and an optimal diversity-multiplexing tradeoff can be achieved d(r) = (N + 1)(1 - 2r), where N is the number of all possible relaying nodes.
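The benefit of opportunistic relay selection can be checked by Monte Carlo simulation (an illustrative decode-and-forward model over Rayleigh fading, not the paper's exact protocol or its closed-form expressions):

```python
import numpy as np

def outage_prob(n_relays, snr=10.0, rate=1.0, trials=200000, rng=None):
    """Monte Carlo outage probability of opportunistic relaying: the
    relay maximising min(|h_sr|^2, |h_rd|^2) is selected, and an outage
    occurs when the half-duplex capacity 0.5*log2(1 + snr*g) falls
    below the target rate (bits/s/Hz)."""
    if rng is None:
        rng = np.random.default_rng(1)
    h_sr = rng.exponential(size=(trials, n_relays))   # |h|^2, unit mean
    h_rd = rng.exponential(size=(trials, n_relays))
    g = np.minimum(h_sr, h_rd).max(axis=1)            # best relay bottleneck
    capacity = 0.5 * np.log2(1.0 + snr * g)
    return float((capacity < rate).mean())

p1 = outage_prob(1)
p3 = outage_prob(3)
```

With more candidate relays the best bottleneck channel improves, so the outage probability drops sharply, which is the diversity gain the abstract quantifies through order statistics.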