946 results for Covering Number
Abstract:
Predation by house cats (Felis catus) is one of the largest human-related sources of mortality for wild birds in the United States and elsewhere, and has been implicated in extinctions and population declines of several species. However, relatively little is known about this topic in Canada. The objectives of this study were to provide plausible estimates for the number of birds killed by house cats in Canada, identify information that would help improve those estimates, and identify species potentially vulnerable to population impacts. In total, cats are estimated to kill between 100 and 350 million birds per year in Canada (> 95% of estimates were in this range), with the majority likely to be killed by feral cats. This range of estimates is based on surveys indicating that Canadians own about 8.5 million pet cats, a rough approximation of 1.4 to 4.2 million feral cats, and literature values of predation rates from studies conducted elsewhere. Reliability of the total kill estimate would be improved most by better knowledge of feral cat numbers and diet in Canada, though any data on birds killed by cats in Canada would be helpful. These estimates suggest that 2-7% of birds in southern Canada are killed by cats per year. Even at the low end, predation by house cats is probably the largest human-related source of bird mortality in Canada. Many species of birds are potentially vulnerable to at least local population impacts in southern Canada, by virtue of nesting or feeding on or near ground level, and habitat choices that bring them into contact with human-dominated landscapes where cats are abundant. Because cat predation is likely to remain a primary source of bird mortality in Canada for some time, this issue needs more scientific attention in Canada.
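The range quoted above comes from combining cat population figures with per-cat predation rates. A minimal Monte Carlo sketch of that kind of calculation is below; the population figures are taken from the abstract, but the outdoor fraction and the per-cat kill rates are illustrative assumptions, not values from the study.

```python
import random

def simulate_kills(n_draws=10_000, seed=42):
    """Monte Carlo draws of annual bird kills by cats in Canada.

    Population figures follow the abstract; the outdoor fraction and
    per-cat kill rates are illustrative assumptions only."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_draws):
        pet_cats = 8.5e6                        # owned cats (from the abstract)
        outdoor_frac = rng.uniform(0.4, 0.7)    # assumed: share with outdoor access
        pet_rate = rng.uniform(2, 14)           # assumed birds/outdoor cat/year
        feral_cats = rng.uniform(1.4e6, 4.2e6)  # feral range (from the abstract)
        feral_rate = rng.uniform(20, 60)        # assumed birds/feral cat/year
        totals.append(pet_cats * outdoor_frac * pet_rate
                      + feral_cats * feral_rate)
    return totals
```

The spread of the simulated totals, rather than any single point value, is what yields a range of the kind quoted (100 to 350 million), with the feral term dominating under most draws.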
Abstract:
Temperature and ozone observations from the Microwave Limb Sounder (MLS) on the EOS Aura satellite are used to study equatorial wave activity in the autumn of 2005. In contrast to previous observations for the same season in other years, the temperature anomalies in the middle and lower tropical stratosphere are found to be characterized by a strong wave-like eastward progression with zonal wave number equal to 3. Extended empirical orthogonal function (EOF) analysis reveals that the wave 3 components detected in the temperature anomalies correspond to a slow Kelvin wave with a period of 8 days and a phase speed of 19 m/s. Fluctuations associated with this Kelvin wave mode are also apparent in ozone profiles. Moreover, as expected from linear theory, the ozone fluctuations observed in the lower stratosphere are in phase with the temperature perturbations, and peak around 20–30 hPa where the mean ozone mixing ratios have the steepest vertical gradient. A search for other Kelvin wave modes has also been made using both the MLS observations and the analyses from one experiment in which MLS ozone profiles are assimilated into the European Centre for Medium-Range Weather Forecasts (ECMWF) data assimilation system via a 6-hourly 3D-Var scheme. Our results show that the characteristics of the wave activity detected in the ECMWF temperature and ozone analyses are in good agreement with MLS data.
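An EOF analysis of the kind described identifies a propagating wave as a pair of EOFs in quadrature, each carrying roughly half the variance. A small synthetic illustration (not the MLS data; the field below is an idealized eastward-propagating wavenumber-3 wave with an 8-day period):

```python
import numpy as np

# Idealized anomaly field: zonal wavenumber-3 wave with an 8-day period,
# sampled daily at 72 longitudes (synthetic stand-in, not MLS data)
nlon, nt = 72, 120
lon = np.linspace(0, 2 * np.pi, nlon, endpoint=False)
t = np.arange(nt)
field = np.cos(3 * lon[None, :] - 2 * np.pi * t[:, None] / 8.0)  # (time, lon)

# EOF analysis via SVD of the time-mean-removed anomaly matrix
anom = field - field.mean(axis=0)
u, s, vt = np.linalg.svd(anom, full_matrices=False)
var_frac = s**2 / np.sum(s**2)  # variance fraction per EOF
```

Because the wave propagates, its standing cosine and sine components appear as two EOFs of near-equal variance; a single stationary oscillation would instead load onto one EOF.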
Condition number estimates for combined potential boundary integral operators in acoustic scattering
Abstract:
We study the classical combined field integral equation formulations for time-harmonic acoustic scattering by a sound-soft bounded obstacle, namely the indirect formulation due to Brakhage-Werner/Leis/Panic, and the direct formulation associated with the names of Burton and Miller. We obtain lower and upper bounds on the condition numbers for these formulations, emphasising dependence on the frequency, the geometry of the scatterer, and the coupling parameter. Of independent interest, we also obtain upper and lower bounds on the norms of two oscillatory integral operators, namely the classical acoustic single- and double-layer potential operators.
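For orientation, the indirect (Brakhage-Werner) combined field operator and the condition number bounded in such work take the standard forms (the notation below is the conventional one, with S_k and D_k the single- and double-layer operators and eta the coupling parameter; it is not copied from the paper itself):

```latex
A_{k,\eta} \;=\; \tfrac{1}{2}I + D_k - \mathrm{i}\,\eta\, S_k,
\qquad
\operatorname{cond} A_{k,\eta} \;=\; \|A_{k,\eta}\| \, \|A_{k,\eta}^{-1}\|,
```

so that upper and lower bounds on the norms of the layer-potential operators translate directly into bounds on the condition number.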
Abstract:
There is a clear need for financial protection in the construction industry, both to guarantee satisfactory completion of construction projects and to guard against non-payment. However, the cost of financial protection is often felt to be disproportionately high, with unnecessary overlap between different measures. The Reading Construction Forum has commissioned and steered research which is published in this report in an effort to bring the problem out into the open and to clarify the various options open to the various parties and stakeholders in the construction process. "Financial Protection in the UK Building Industry" is the first definitive report on the subject, offering an accurate and simple guide that all levels within the construction industry can understand. This accessible new guide considers the problem of financial protection and clearly lays out the alternative solutions. It looks by turn at the client, the main contractor, and the sub-contractor, discussing which financial protection options are available to each of them, and considers the pros and cons of each option. The cost of each type of financial protection is weighed against the amount of protection provided and the risks involved. The book concludes with guidance for consultants, emphasising relevant points to consider when advising clients and contractors about which type of financial protection to choose. "Financial Protection in the UK Building Industry" was researched through a literature search, collection of statistical and financial data, and discussions with clients, contractors, sub-contractors and consultants. This investigation has shown that the direct costs of implementing financial protection measures are marginal, and that wider adoption of payment protection would create a more equitable situation between contracting parties. This guide will enable anyone in the construction industry to consider all the options, and determine what is the best solution for them.
"Reading Construction Forum Financial Protection for the UK Building Industry" was compiled by the University of Reading, funded by the Reading Construction Forum. The Forum has recently commissioned and steered a number of high-profile reports covering important aspects of the construction industry. Members of the Forum include major companies which are concerned with achieving high quality in the design, construction and use of commercial, retail and industrial buildings. All are committed to change and innovation in the British and European construction industries.
Abstract:
A suite of climate change indices derived from daily temperature and precipitation data, with a primary focus on extreme events, was computed and analyzed. By setting an exact formula for each index and using specially designed software, analyses done in different countries have been combined seamlessly. This has enabled the presentation of the most up-to-date and comprehensive global picture of trends in extreme temperature and precipitation indices, using results from a number of workshops held in data-sparse regions and high-quality station data supplied by numerous scientists worldwide. Seasonal and annual indices for the period 1951-2003 were gridded. Trends in the gridded fields were computed and tested for statistical significance. Results showed widespread significant changes in temperature extremes associated with warming, especially for those indices derived from daily minimum temperature. Over 70% of the global land area sampled showed a significant decrease in the annual occurrence of cold nights and a significant increase in the annual occurrence of warm nights. Some regions experienced a more than doubling of these indices. This implies a positive shift in the distribution of daily minimum temperature throughout the globe. Daily maximum temperature indices showed similar changes but with smaller magnitudes. Precipitation changes showed a widespread and significant increase, but the changes are much less spatially coherent compared with temperature change. Probability distributions of indices derived from approximately 200 temperature and 600 precipitation stations, with near-complete data for 1901-2003 and covering a very large region of the Northern Hemisphere midlatitudes (and parts of Australia for precipitation), were analyzed for the periods 1901-1950, 1951-1978 and 1979-2003. Results indicate a significant warming throughout the 20th century.
Differences in the distributions of temperature indices are particularly pronounced between the most recent two periods and for those indices related to minimum temperature. An analysis of those indices for which seasonal time series are available shows that these changes occur in all seasons, although they are generally least pronounced for September to November. Precipitation indices show a tendency toward wetter conditions throughout the 20th century.
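Indices of this family, such as the annual occurrence of warm nights, are simple exceedance counts over a base-period percentile, with a least-squares trend fitted afterwards. A self-contained sketch on synthetic data (the 0.02 degC/yr drift and all distributional parameters are invented for illustration, not taken from the analysis):

```python
import numpy as np

def warm_nights_index(tmin_daily, base_q90):
    """Percentage of days whose Tmin exceeds the base-period 90th percentile."""
    return 100.0 * np.mean(tmin_daily > base_q90)

rng = np.random.default_rng(0)
years = np.arange(1951, 2004)

# Base period defines the percentile threshold
base = rng.normal(15.0, 3.0, 365 * 30)
q90 = np.quantile(base, 0.9)

# Synthetic warming: Tmin drifts upward by an assumed 0.02 degC/yr
idx = [warm_nights_index(rng.normal(15.0 + 0.02 * i, 3.0, 365), q90)
       for i in range(len(years))]

# Least-squares trend in the index (percentage points per year)
slope, _ = np.polyfit(years, np.array(idx), 1)
```

A stationary climate would keep the index near 10% by construction; the upward drift in Tmin shifts mass past the fixed base-period threshold, giving a positive trend.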
Abstract:
Airborne scanning laser altimetry (LiDAR) is an important new data source for river flood modelling. LiDAR can give dense and accurate DTMs of floodplains for use as model bathymetry. Spatial resolutions of 0.5m or less are possible, with a height accuracy of 0.15m. LiDAR gives a Digital Surface Model (DSM), so vegetation removal software (e.g. TERRASCAN) must be used to obtain a DTM. An example used to illustrate the current state of the art will be the LiDAR data provided by the EA, which has been processed by their in-house software to convert the raw data to a ground DTM and separate vegetation height map. Their method distinguishes trees from buildings on the basis of object size. EA data products include the DTM with or without buildings removed, a vegetation height map, a DTM with bridges removed, etc. Most vegetation removal software ignores short vegetation less than say 1m high. We have attempted to extend vegetation height measurement to short vegetation using local height texture. Typically most of a floodplain may be covered in such vegetation. The idea is to assign friction coefficients depending on local vegetation height, so that friction is spatially varying. This obviates the need to calibrate a global floodplain friction coefficient. It’s not clear at present if the method is useful, but it’s worth testing further. The LiDAR DTM is usually determined by looking for local minima in the raw data, then interpolating between these to form a space-filling height surface. This is a low pass filtering operation, in which objects of high spatial frequency such as buildings, river embankments and walls may be incorrectly classed as vegetation. The problem is particularly acute in urban areas. A solution may be to apply pattern recognition techniques to LiDAR height data fused with other data types such as LiDAR intensity or multispectral CASI data. 
We are attempting to use digital map data (Mastermap structured topography data) to help distinguish buildings from trees, and roads from areas of short vegetation. The problems involved in doing this will be discussed. A related problem of how best to merge historic river cross-section data with a LiDAR DTM will also be considered. LiDAR data may also be used to help generate a finite element mesh. In rural areas we have decomposed a floodplain mesh according to taller vegetation features such as hedges and trees, so that e.g. hedge elements can be assigned higher friction coefficients than those in adjacent fields. We are attempting to extend this approach to urban areas, so that the mesh is decomposed in the vicinity of buildings, roads, etc. as well as trees and hedges. A dominant points algorithm is used to identify points of high curvature on a building or road, which act as initial nodes in the meshing process. A difficulty is that the resulting mesh may contain a very large number of nodes. However, the mesh generated may be useful in allowing a high-resolution FE model to act as a benchmark for a more practical lower-resolution model. A further problem discussed will be how best to exploit data redundancy due to the high resolution of the LiDAR compared to that of a typical flood model. Problems occur if features have dimensions smaller than the model cell size: e.g. for a 5m-wide embankment within a raster grid model with 15m cell size, the maximum height of the embankment locally could be assigned to each cell covering the embankment. But how could a 5m-wide ditch be represented? Again, this redundancy has been exploited to improve wetting/drying algorithms using the sub-grid-scale LiDAR heights within finite elements at the waterline.
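The local-minima filtering described above can be sketched in a few lines. This toy version (block size and heights are arbitrary, and real pipelines interpolate between the minima rather than broadcasting them) also shows the low-pass behaviour noted in the abstract: any object narrower than the window is stripped into the "vegetation" layer, which is exactly what happens to narrow embankments and walls.

```python
import numpy as np

def ground_dtm(dsm, block=4):
    """Crude DTM: local minima over block x block windows, broadcast back
    to full resolution (real pipelines interpolate between the minima)."""
    ny, nx = (dsm.shape[0] // block) * block, (dsm.shape[1] // block) * block
    blocks = dsm[:ny, :nx].reshape(ny // block, block, nx // block, block)
    minima = blocks.min(axis=(1, 3))
    return np.repeat(np.repeat(minima, block, axis=0), block, axis=1)

# Flat ground at 10 m with a 2-cell-wide, 3 m-tall "hedge" across the middle
dsm = np.full((16, 16), 10.0)
dsm[7:9, :] += 3.0
dtm = ground_dtm(dsm)
veg_height = dsm - dtm  # the narrow hedge ends up in the vegetation layer
```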
Abstract:
A parallel hardware random number generator for use with a VLSI genetic algorithm processing device is proposed. The design uses a systolic array of mixed congruential random number generators. The generators are constantly reseeded with the outputs of the preceding generators to avoid significant biasing of the randomness of the array, which would result in longer times for the algorithm to converge to a solution.

1 Introduction

In recent years there has been a growing interest in developing hardware genetic algorithm devices [1, 2, 3]. A genetic algorithm (GA) is a stochastic search and optimization technique which attempts to capture the power of natural selection by evolving a population of candidate solutions through a process of selection and reproduction [4]. In keeping with the evolutionary analogy, the solutions are called chromosomes, with each chromosome containing a number of genes. Chromosomes are commonly simple binary strings, the bits being the genes.
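The reseeding scheme described can be modelled in software. This Python sketch (the multiplier and increment are the well-known Numerical Recipes 32-bit LCG constants, chosen here purely for illustration and not taken from the paper's hardware design) shows one cell feeding its output to the next on every clock tick:

```python
class MixedCongruential:
    """Mixed congruential generator: x' = (a*x + c) mod m.

    Constants are Numerical Recipes' 32-bit LCG parameters, used here
    for illustration only (not from the paper's design)."""
    def __init__(self, seed, a=1664525, c=1013904223, m=2**32):
        self.a, self.c, self.m = a, c, m
        self.state = seed % m

    def next(self):
        self.state = (self.a * self.state + self.c) % self.m
        return self.state

def systolic_step(cells):
    """One clock tick of the array: every cell emits a value, then each
    cell after the first is reseeded with its predecessor's output."""
    outputs = [cell.next() for cell in cells]
    for i in range(1, len(cells)):
        cells[i].state = outputs[i - 1]
    return outputs
```

In hardware each cell would be a multiplier/adder stage; the point of the cross-reseeding is to break up the correlated, lock-stepped streams that independently running congruential generators with related seeds would otherwise produce.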
Abstract:
None of the current surveillance streams monitoring the presence of scrapie in Great Britain provide a comprehensive and unbiased estimate of the prevalence of the disease at the holding level. Previous work to estimate the under-ascertainment adjusted prevalence of scrapie in Great Britain applied multiple-list capture–recapture methods. The enforcement of new control measures on scrapie-affected holdings in 2004 has stopped the overlap between surveillance sources and, hence, the application of multiple-list capture–recapture models. Alternative methods, still within the capture–recapture methodology, relying on repeated entries in one single list have been suggested for these situations. In this article, we apply one-list capture–recapture approaches to data held on the Scrapie Notifications Database to estimate the undetected population of scrapie-affected holdings with clinical disease in Great Britain for the years 2002, 2003, and 2004. To do so, we develop a new diagnostic tool for indication of heterogeneity as well as a new understanding of the Zelterman and Chao lower bound estimators to account for potential unobserved heterogeneity. We demonstrate that the Zelterman estimator can be viewed as a maximum likelihood estimator for a special, locally truncated Poisson likelihood equivalent to a binomial likelihood. This understanding allows the extension of the Zelterman approach by means of logistic regression to include observed heterogeneity in the form of covariates, in the case studied here the holding size and country of origin. Our results confirm the presence of substantial unobserved heterogeneity, supporting the application of our two estimators. The total scrapie-affected holding population in Great Britain is around 300 holdings per year. None of the covariates appear to inform the model significantly.
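The Zelterman estimator uses only the counts of units seen exactly once (f1) and exactly twice (f2) in the single list, which is what makes it robust to the unobserved heterogeneity discussed above: the Poisson rate is estimated locally from the low counts as lambda = 2*f2/f1, and the observed total is inflated by the estimated detection probability. A minimal sketch (the frequency data below are made up, not figures from the Scrapie Notifications Database):

```python
import math

def zelterman(freqs):
    """Zelterman lower-bound estimate of total population size.

    freqs maps k -> number of units observed exactly k times (k >= 1).
    Only f1 and f2 enter the rate estimate, so higher counts (and any
    heterogeneity among them) do not distort the result."""
    f1, f2 = freqs[1], freqs.get(2, 0)
    n = sum(freqs.values())            # units actually observed
    lam = 2.0 * f2 / f1                # locally truncated Poisson rate
    return n / (1.0 - math.exp(-lam))  # inflate by detection probability

# Hypothetical notification counts (NOT the Scrapie Notifications Database)
example = {1: 50, 2: 10, 3: 2}
```

With these made-up counts, 62 holdings are observed and the estimator inflates that to roughly 188, the gap being the estimated undetected population.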
Abstract:
Preface. Iron is considered to be a minor element employed, in a variety of forms, by nearly all living organisms. In some cases, it is utilised in large quantities, for instance for the formation of magnetosomes within magnetotactic bacteria or during use of iron as a respiratory donor or acceptor by iron oxidising or reducing bacteria. However, in most cases the role of iron is restricted to its use as a cofactor or prosthetic group assisting the biological activity of many different types of protein. The key metabolic processes that are dependent on iron as a cofactor are numerous; they include respiration, light harvesting, nitrogen fixation, the Krebs cycle, redox stress resistance, amino acid synthesis and oxygen transport. Indeed, it is clear that Life in its current form would be impossible in the absence of iron. One of the main reasons for the reliance of Life upon this metal is the ability of iron to exist in multiple redox states, in particular the relatively stable ferrous (Fe2+) and ferric (Fe3+) forms. The availability of these stable oxidation states allows iron to engage in redox reactions over a wide range of midpoint potentials, depending on the coordination environment, making it an extremely adaptable mediator of electron exchange processes. Iron is also one of the most common elements within the Earth’s crust (5% abundance) and thus is considered to have been readily available when Life evolved on our early, anaerobic planet. However, as oxygen accumulated (the ‘Great oxidation event’) within the atmosphere some 2.4 billion years ago, and as the oceans became less acidic, the iron within primordial oceans was converted from its soluble reduced form to its weakly-soluble oxidised ferric form, which precipitated (~1.8 billion years ago) to form the ‘banded iron formations’ (BIFs) observed today in Precambrian sedimentary rocks around the world. 
These BIFs provide a geological record marking a transition point away from the ancient anaerobic world towards the modern aerobic Earth. They also indicate a period over which the bio-availability of iron shifted from abundance to limitation, a condition that extends to the modern day. Thus, it is considered likely that the vast majority of extant organisms face the common problem of securing sufficient iron from their environment – a problem that Life on Earth has had to cope with for some 2 billion years. This struggle for iron is exemplified by the competition for this metal amongst co-habiting microorganisms, who resort to stealing (pirating) each other's iron supplies! The reliance of micro-organisms upon iron can be disadvantageous to them, and to our innate immune system it represents a chink in the microbial armour, offering an opportunity that can be exploited to ward off pathogenic invaders. In order to infect body tissues and cause disease, pathogens must secure all their iron from the host. To fight such infections, the host specifically withdraws available iron through the action of various iron-depleting processes (e.g. the release of lactoferrin and lipocalin-2) – this represents an important strategy in our defence against disease. However, pathogens are frequently able to deploy iron acquisition systems that target host iron sources such as transferrin, lactoferrin and hemoproteins, and thus counteract the iron-withdrawal approaches of the host. Inactivation of such host-targeting iron-uptake systems often attenuates the pathogenicity of the invading microbe, illustrating the importance of ‘the battle for iron’ in the infection process. The role of iron sequestration systems in facilitating microbial infections has been a major driving force in research aimed at unravelling the complexities of microbial iron transport processes. But the intricacy of such systems also offers a challenge that stimulates the curiosity.
One such challenge is to understand how balanced levels of free iron within the cytosol are achieved in a way that avoids toxicity whilst providing sufficient levels for metabolic purposes – this is a requirement that all organisms have to meet. Although the systems involved in achieving this balance can be highly variable amongst different microorganisms, the overall strategy is common. On a coarse level, the homeostatic control of cellular iron is maintained through strict control of the uptake, storage and utilisation of available iron, and is co-ordinated by integrated iron-regulatory networks. However, much remains to be discovered concerning the fine details of these different iron regulatory processes. As already indicated, perhaps the most difficult task in maintaining iron homeostasis is simply the procurement of sufficient iron from external sources. The importance of this problem is demonstrated by the plethora of distinct iron transporters often found within a single bacterium, each targeting different forms (complex or redox state) of iron or a different environmental condition. Thus, microbes devote considerable cellular resource to securing iron from their surroundings, reflecting how successful acquisition of iron can be crucial in the competition for survival. The aim of this book is to provide the reader with an overview of iron transport processes within a range of microorganisms and to provide an indication of how microbial iron levels are controlled. This aim is promoted through the inclusion of expert reviews on several well-studied examples that illustrate the current state of play concerning our comprehension of how iron is translocated into the bacterial (or fungal) cell and how iron homeostasis is controlled within microbes. The first two chapters (1-2) consider the general properties of microbial iron-chelating compounds (known as ‘siderophores’), and the mechanisms used by bacteria to acquire haem and utilise it as an iron source.
The following twelve chapters (3-14) focus on specific types of microorganism that are of key interest, covering both an array of pathogens of humans, animals and plants (e.g. species of Bordetella, Shigella, Erwinia, Vibrio, Aeromonas, Francisella, Campylobacter and Staphylococcus, and EHEC) as well as a number of prominent non-pathogens (e.g. the rhizobia, E. coli K-12, Bacteroides spp., cyanobacteria, Bacillus spp. and yeasts). The chapters relay the common themes in microbial iron uptake approaches (e.g. the use of siderophores, TonB-dependent transporters, and ABC transport systems), but also highlight many distinctions (such as the use of different types of iron regulator and the impact of the presence/absence of a cell wall) in the strategies employed. We hope that those both within and outside the field will find this book useful, stimulating and interesting. We intend that it will provide a source for reference that will assist relevant researchers and provide an entry point for those initiating their studies within this subject. Finally, it is important that we acknowledge and thank wholeheartedly the many contributors who have provided the 14 excellent chapters from which this book is composed. Without their considerable efforts, this book, and the understanding that it relays, would not have been possible. Simon C Andrews and Pierre Cornelis
Abstract:
The effects of irrigation and nitrogen (N) fertilizer on Hagberg falling number (HFN), specific weight (SW) and blackpoint (BP) of winter wheat (Triticum aestivum L.) were investigated. Mains water (+50 and +100 mm month⁻¹, containing 44 mg NO₃⁻ litre⁻¹ and 28 mg SO₄²⁻ litre⁻¹) was applied with trickle irrigation during winter (17 January-17 March), spring (21 March-20 May) or summer (24 May-23 July). In 1999/2000 these treatments were factorially combined with three N levels (0, 200 and 400 kg N ha⁻¹), applied to cv Hereward. In 2000/01 the 400 kg N ha⁻¹ treatment was replaced with cv Malacca given 200 kg N ha⁻¹. Irrigation increased grain yield, mostly by increasing grain numbers when applied in winter and spring, and by increasing mean grain weight when applied in summer. Nitrogen increased grain numbers and SW, and reduced BP in both years. Nitrogen increased HFN in 1999/2000 and reduced HFN in 2000/01. Effects of irrigation on HFN, SW and BP were smaller and inconsistent over year and nitrogen level. Irrigation interacted with N on mean grain weight: negatively for winter and spring irrigation, and positively for summer irrigation. Ten variables derived from digital image analysis of harvested grain were included with mean grain weight in a principal components analysis. The first principal component ('size') was negatively related to HFN (in two years) and BP (one year), and positively related to SW (two years). Treatment effects on dimensions of harvested grain could not explain all of the effects on HFN, BP and SW, but the results were consistent with the hypothesis that water and nutrient availability, even when they were affected early in the season, could influence final grain quality if they influenced grain numbers and size. © 2004 Society of Chemical Industry
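The principal components step reported here, extracting a latent 'size' axis from correlated grain measurements, can be sketched as follows. The data below are synthetic; the number of variables, loadings and sample size are arbitrary stand-ins for the image-analysis measurements, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
size = rng.normal(0.0, 1.0, n)  # latent grain-'size' factor

# Four image-derived measurements (e.g. length, width, area, weight proxy),
# each loading on the latent size factor plus independent noise
X = np.column_stack([0.9 * size + rng.normal(0.0, 0.3, n) for _ in range(4)])

# Standardize, then PCA via SVD; PC1 recovers the shared 'size' axis
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
u, s, vt = np.linalg.svd(Xs, full_matrices=False)
pc1 = Xs @ vt[0]
var_frac = s[0]**2 / np.sum(s**2)  # variance explained by PC1
```

Because the measurements share one latent factor, PC1 captures most of the variance and tracks the factor closely; scores on this axis can then be correlated with quality traits such as HFN or SW, as in the analysis described.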