967 results for Congestion Window (cwnd)
Abstract:
High prevalence of anthelmintic-resistant gastrointestinal nematodes (GIN) in goats has increased pressure to find effective, alternative non-synthetic control methods, one of which is adding forage of the high condensed tannin (CT) legume sericea lespedeza (SL; Lespedeza cuneata) to the animal's diet. Previous work has demonstrated good efficacy of dried SL (hay, pellets) against small ruminant GIN, but information is lacking on consumption of fresh SL, particularly during the late summer–autumn period in the southern USA when perennial warm-season grass pastures are often low in quality. A study was designed to determine the effects of autumn (September–November) consumption of fresh SL forage, grass pasture (predominantly bermudagrass, BG; Cynodon dactylon), or a combination of SL + BG forage by young goats [intact male Spanish kids, 9 months old (20.7 ± 1.1 kg), n = 10/treatment group] on their GIN infection status. Three forage paddocks (0.40 ha) were set up at the Fort Valley State University Agricultural Research Station (Fort Valley, GA) for an 8-week trial. The goats in each paddock were supplemented with a commercial feed pellet at 0.45 kg/head/d for the first 4 weeks of the trial, and 0.27 kg/head/d for the final 4 weeks. Forage samples taken at the start of the trial were analyzed for crude protein (CP), neutral detergent fiber (NDF), and acid detergent fiber (ADF) content, and a separate set of SL samples was analyzed for CT in leaves, stems, and whole plant using the benzyl mercaptan thiolysis method. Animal weights were taken at the start and end of the trial, and fecal and blood samples were collected weekly for determination of fecal egg counts (FEC) and packed cell volume (PCV), respectively. Adult GIN were recovered from the abomasum and small intestines of all goats at the end of the experiment for counting and speciation.
The CP levels were highest for SL forage, intermediate for SL + BG, and lowest for BG forage samples, while NDF and ADF values were the opposite, with the highest levels in BG and lowest in SL forage samples. Sericea lespedeza leaves had more CT than stems (16.0 g vs. 3.3 g/100 g dry weight), a slightly higher percentage of PDs (98% vs. 94%, respectively), and polymers of larger mean degrees of polymerization (42 vs. 18, respectively). There were no differences in average daily gain or blood PCV between the treatment groups, but SL goats had lower FEC (P < 0.05) than the BG or SL + BG forage goats throughout most of the trial. The SL + BG goats had lower FEC than the BG forage animals by the end of the trial (week 8, P < 0.05). The SL goats had lower numbers (P < 0.05) of male Haemonchus contortus and tended to have fewer female (P < 0.10) and total (P < 0.07) H. contortus compared with the BG goats. The predominant GIN in all the goats was Trichostrongylus colubriformis (73% of total GIN). As a low-input forage with activity against pathogenic GIN (H. contortus), SL has the potential to reduce producers’ dependence upon synthetic anthelmintics and also to fill the autumn ‘window’ in good-quality fresh forages for goat grazing in the southern USA.
Abstract:
Giant planets helped to shape the conditions we see in the Solar System today, and they account for more than 99% of the mass of the Sun’s planetary system. They can be subdivided into the Ice Giants (Uranus and Neptune) and the Gas Giants (Jupiter and Saturn), which differ from each other in a number of fundamental ways. Uranus, in particular, is the most challenging to our understanding of planetary formation and evolution, with its large obliquity, low self-luminosity, highly asymmetrical internal field, and puzzling internal structure. Uranus also has a rich planetary system, consisting of inner natural satellites and a complex ring system, five major natural icy satellites, a system of irregular moons with varied dynamical histories, and a highly asymmetrical magnetosphere. Voyager 2 is the only spacecraft to have explored Uranus, with a flyby in 1986, and no mission is currently planned to this enigmatic system. However, a mission to the uranian system would open a new window on the origin and evolution of the Solar System and would provide crucial information on a wide variety of physicochemical processes in our Solar System. These have clear implications for understanding exoplanetary systems. In this paper we describe the science case for an orbital mission to Uranus with an atmospheric entry probe to sample the composition and atmospheric physics in Uranus’ atmosphere. The characteristics of such an orbiter and a strawman scientific payload are described, and we discuss the technical challenges for such a mission. This paper is based on a white paper submitted to the European Space Agency’s call for science themes for its large-class mission programme in 2013.
Abstract:
Summary Reasons for performing study: Metabonomics is emerging as a powerful tool for disease screening and investigating mammalian metabolism. This study aims to create a metabolic framework by producing a preliminary reference guide for the normal equine metabolic milieu. Objectives: To metabolically profile plasma, urine and faecal water from healthy racehorses using high resolution 1H-NMR spectroscopy and to provide a list of dominant metabolites present in each biofluid for the benefit of future research in this area. Study design: This study was performed using seven Thoroughbreds in race training at a single time-point. Urine and faecal samples were collected non-invasively and plasma was obtained from samples taken for routine clinical chemistry purposes. Methods: Biofluids were analysed using 1H-NMR spectroscopy. Metabolite assignment was achieved via a range of 1D and 2D experiments. Results: A total of 102 metabolites were assigned across the three biological matrices. A core metabonome of 14 metabolites was ubiquitous across all biofluids. All biological matrices provided a unique window on different aspects of systemic metabolism. Urine was the most populated metabolite matrix with 65 identified metabolites, 39 of which were unique to this biological compartment. A number of these were related to gut microbial host co-metabolism. Faecal samples were the most metabolically variable between animals; acetate was responsible for the majority (28%) of this variation. Short chain fatty acids were the predominant features identified within this biofluid by 1H-NMR spectroscopy. Conclusions: Metabonomics provides a platform for investigating complex and dynamic interactions between the host and its consortium of gut microbes and has the potential to uncover markers for health and disease in a variety of biofluids.
Inherent variation in faecal extracts, together with the relative abundance of microbial-mammalian metabolites in urine and the invasive nature of plasma sampling, suggests that urine is the most appropriate biofluid for the purposes of metabonomic analysis.
Abstract:
We make use of the Skyrme effective nuclear interaction within the time-dependent Hartree-Fock framework to assess the effect of inclusion of the tensor terms of the Skyrme interaction on the fusion window of the 16O–16O reaction. We find that the lower fusion threshold, around the barrier, is quite insensitive to these details of the force, but the higher threshold, above which the nuclei pass through each other, changes by several MeV between different tensor parametrisations. The results suggest that eventually fusion properties may become part of the evaluation or fitting process for effective nuclear interactions.
Implication of methodological uncertainties for mid-Holocene sea surface temperature reconstructions
Abstract:
We present and examine a multi-sensor global compilation of mid-Holocene (MH) sea surface temperatures (SST), based on Mg/Ca and alkenone palaeothermometry and reconstructions obtained using planktonic foraminifera and organic-walled dinoflagellate cyst census counts. We assess the uncertainties originating from using different methodologies and evaluate the potential of MH SST reconstructions as a benchmark for climate-model simulations. The comparison between different analytical approaches (time frame, baseline climate) shows the choice of time window for the MH has a negligible effect on the reconstructed SST pattern, but the choice of baseline climate affects both the magnitude and spatial pattern of the reconstructed SSTs. Comparison of the SST reconstructions made using different sensors shows significant discrepancies at a regional scale, with uncertainties often exceeding the reconstructed SST anomaly. Apparent patterns in SST may largely be a reflection of the use of different sensors in different regions. Overall, the uncertainties associated with the SST reconstructions are generally larger than the MH anomalies. Thus, the SST data currently available cannot serve as a target for benchmarking model simulations. Further evaluations of potential subsurface and/or seasonal artifacts that may obscure the MH SST reconstructions are urgently needed to provide reliable benchmarks for model evaluations.
Abstract:
Recent advancements in wireless communication technologies and automobiles have enabled the evolution of the Intelligent Transport System (ITS), which addresses various vehicular traffic issues such as traffic congestion, information dissemination, and accidents. The Vehicular Ad-hoc Network (VANET), a distinctive class of Mobile Ad-hoc Network (MANET), is an integral component of ITS in which moving vehicles are connected and communicate wirelessly. Wireless communication technologies play a vital role in supporting both Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication in VANETs. This paper surveys some of the key vehicular wireless access technology standards, such as 802.11p, the P1609 protocols, cellular systems, CALM, MBWA, WiMAX, microwave, Bluetooth and ZigBee, which serve as a base for supporting both safety and non-safety applications. It also analyses and compares the wireless standards using various parameters such as bandwidth, ease of use, upfront cost, maintenance, accessibility, signal coverage, signal interference and security. Finally, it discusses some of the issues associated with interoperability among those protocols.
Abstract:
We systematically compare the performance of ETKF-4DVAR, 4DVAR-BEN and 4DENVAR with respect to two traditional methods (4DVAR and ETKF) and an ensemble transform Kalman smoother (ETKS) on the Lorenz 1963 model. We specifically investigated this performance with increasing nonlinearity and used a quasi-static variational assimilation algorithm as a comparison. Using the analysis root mean square error (RMSE) as a metric, these methods have been compared considering (1) assimilation window length and observation interval size and (2) ensemble size, to investigate the influence of hybrid background error covariance matrices and nonlinearity on the performance of the methods. For short assimilation windows with close to linear dynamics, all hybrid methods show an improvement in RMSE compared to the traditional methods. For long assimilation window lengths in which nonlinear dynamics are substantial, the variational framework can have difficulties finding the global minimum of the cost function, so we explore a quasi-static variational assimilation (QSVA) framework. Of the hybrid methods, it is seen that under certain parameters, hybrid methods which do not use a climatological background error covariance do not need QSVA to perform accurately. Generally, results show that the ETKS and hybrid methods that do not use a climatological background error covariance matrix with QSVA outperform all other methods due to the full flow dependency of the background error covariance matrix, which also allows for the most nonlinearity.
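The Lorenz 1963 model used as the testbed above can be reproduced with a few lines of code. The following is a generic sketch of the model with its classic parameters (sigma = 10, rho = 28, beta = 8/3) integrated with a fourth-order Runge-Kutta scheme; it is not the authors' assimilation code, only the chaotic toy model on which such experiments run.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz 1963 equations."""
    x, y, z = state
    return np.array([sigma * (y - x),
                     x * (rho - z) - y,
                     x * y - beta * z])

def rk4_step(f, state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate a trajectory; with these parameters the flow settles
# onto the bounded Lorenz attractor.
dt, n_steps = 0.01, 2000
traj = np.empty((n_steps + 1, 3))
traj[0] = np.array([1.0, 1.0, 1.0])
for i in range(n_steps):
    traj[i + 1] = rk4_step(lorenz63, traj[i], dt)
```

Because nearby trajectories of this system diverge exponentially, lengthening the assimilation window makes the cost function increasingly non-quadratic, which is what motivates the QSVA comparison in the paper.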
Abstract:
The core processing step of the noise-reduction median filter technique is to find the median within a window of integers. A four-step procedure is proposed to compute the running median of the last N W-bit integers in a stream, showing area and time benefits. The method slices integers into B-bit groups using a pipeline of W/B blocks. From the method, an architecture is developed that gives a designer the flexibility to exchange area gains for a faster frequency of operation, or vice versa, by adjusting the N, W and B parameter values. Gains in area of around 40%, or in frequency of operation of around 20%, are observed in FPGA circuit implementations compared to the latest methods in the literature.
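The operation being accelerated, the running median over the last N stream values, can be stated as a plain software reference. The sketch below keeps a sorted copy of the window and reads the middle element; it illustrates the functional behaviour only, not the paper's four-step B-bit-sliced hardware pipeline.

```python
from bisect import insort, bisect_left
from collections import deque

def running_median(stream, n):
    """Yield the median of the last n values of an integer stream.

    Software reference only; the hardware method in the paper
    instead slices each W-bit integer into B-bit groups.
    """
    window = deque()  # values in arrival order
    ordered = []      # the same values kept sorted
    for x in stream:
        window.append(x)
        insort(ordered, x)
        if len(window) > n:
            old = window.popleft()
            ordered.pop(bisect_left(ordered, old))
        mid = len(ordered) // 2
        if len(ordered) % 2:
            yield ordered[mid]
        else:
            yield (ordered[mid - 1] + ordered[mid]) / 2

print(list(running_median([9, 1, 8, 2, 7], 3)))  # [9, 5.0, 8, 2, 7]
```

Each output is the median of at most the last three inputs, which is exactly the per-window computation the FPGA architecture parallelises.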
Abstract:
The EU Water Framework Directive (WFD) requires that the ecological and chemical status of water bodies in Europe should be assessed, and action taken where possible to ensure that at least "good" quality is attained in each case by 2015. This paper is concerned with the accuracy and precision with which chemical status in rivers can be measured given certain sampling strategies, and how this can be improved. High-frequency (hourly) chemical data from four rivers in southern England were subsampled to simulate different sampling strategies for four parameters used for WFD classification: dissolved phosphorus, dissolved oxygen, pH and water temperature. These data sub-sets were then used to calculate the WFD classification for each site. Monthly sampling was less precise than weekly sampling, but the effect on WFD classification depended on the closeness of the range of concentrations to the class boundaries. In some cases, monthly sampling for a year could result in the same water body being assigned to three or four of the WFD classes with 95% confidence, due to random sampling effects, whereas with weekly sampling this was one or two classes for the same cases. In the most extreme case, the same water body could have been assigned to any of the five WFD quality classes. Weekly sampling considerably reduces the uncertainties compared to monthly sampling. The width of the weekly sampled confidence intervals was about 33% that of the monthly for P species and pH, about 50% for dissolved oxygen, and about 67% for water temperature. For water temperature, which is assessed as the 98th percentile in the UK, monthly sampling biases the mean downwards by about 1 °C compared to the true value, due to problems of assessing high percentiles with limited data. Low-frequency measurements will generally be unsuitable for assessing standards expressed as high percentiles. 
Confining sampling to the working week compared to all 7 days made little difference, but a modest improvement in precision could be obtained by sampling at the same time of day within a 3 h time window, and this is recommended. For parameters with a strong diel variation, such as dissolved oxygen, the value obtained, and thus possibly the WFD classification, can depend markedly on when in the cycle the sample was taken. Specifying this in the sampling regime would be a straightforward way to improve precision, but there needs to be agreement about how best to characterise risk in different types of river. These results suggest that in some cases it will be difficult to assign accurate WFD chemical classes or to detect likely trends using current sampling regimes, even for these largely groundwater-fed rivers. A more critical approach to sampling is needed to ensure that management actions are appropriate and supported by data.
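The downward bias that sparse, fixed-clock-time sampling imposes on a high percentile of a parameter with strong diel variation can be demonstrated on synthetic data. The sketch below uses an invented hourly water-temperature series with a daily cycle (all numbers are hypothetical, chosen for illustration, and do not come from the study's rivers): weekly and monthly samples drawn at the same hour of day each miss the daily peak, so the estimated 98th percentile falls well below the true value.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical hourly water temperature for one year: a 12 degC mean,
# a 4 degC diel cycle, and small measurement noise.
hours = np.arange(365 * 24)
hourly = 12 + 4 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.5, hours.size)

# True 98th percentile from the full hourly record.
p98_true = np.percentile(hourly, 98)

# Subsample at the same clock time (hour 0, a trough of the diel cycle):
# one sample per week, and roughly one sample per month.
p98_weekly = np.percentile(hourly[:: 24 * 7], 98)
p98_monthly = np.percentile(hourly[:: 24 * 30], 98)

print(p98_true, p98_weekly, p98_monthly)  # sparse estimates sit well below the truth
```

Sampling at a different fixed hour would shift the bias up or down, which is why the abstract recommends both a consistent 3 h time window and agreement on where in the diel cycle to sample.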
Abstract:
Senescence represents the final developmental act of the leaf, during which the leaf cell is dismantled in a coordinated manner to remobilize nutrients and to secure reproductive success. The process of senescence provides the plant with phenotypic plasticity to help it adapt to adverse environmental conditions. Here, we provide a comprehensive overview of the factors and mechanisms that control the onset of senescence. We explain how the competence to senesce is established during leaf development, as depicted by the senescence window model. We also discuss the mechanisms by which phytohormones and environmental stresses control senescence, as well as the impact of source-sink relationships on plant yield and stress tolerance. In addition, we discuss the role of senescence as a strategy for stress adaptation and how crop production and food quality could benefit from engineering or breeding crops with altered onset of senescence.
Abstract:
We propose a geoadditive negative binomial model (Geo-NB-GAM) for regional count data that allows us to address simultaneously some important methodological issues, such as spatial clustering, nonlinearities, and overdispersion. This model is applied to the study of location determinants of inward greenfield investments that occurred during 2003–2007 in 249 European regions. After presenting the data set and showing the presence of overdispersion and spatial clustering, we review the theoretical framework that motivates the choice of the location determinants included in the empirical model, and we highlight some reasons why the relationship between some of the covariates and the dependent variable might be nonlinear. The subsequent section first describes the solutions proposed by previous literature to tackle spatial clustering, nonlinearities, and overdispersion, and then presents the Geo-NB-GAM. The empirical analysis shows the good performance of Geo-NB-GAM. Notably, the inclusion of a geoadditive component (a smooth spatial trend surface) permits us to control for spatial unobserved heterogeneity that induces spatial clustering. Allowing for nonlinearities reveals, in keeping with theoretical predictions, that the positive effect of agglomeration economies fades as the density of economic activities reaches some threshold value. However, no matter how dense the economic activity becomes, our results suggest that congestion costs never overcome positive agglomeration externalities.
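The overdispersion that motivates the negative binomial component can be seen in a few lines: for negative binomial counts the variance exceeds the mean, whereas Poisson counts are equidispersed. The parameters below are hypothetical, picked only to make the contrast visible; this is not the paper's Geo-NB-GAM, just the distributional fact it builds on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Negative binomial counts: mean = r(1-p)/p, variance = r(1-p)/p**2,
# so the variance exceeds the mean whenever p < 1 (overdispersion).
r, p = 5, 0.2
nb = rng.negative_binomial(r, p, size=100_000)

# Poisson counts with the same mean are equidispersed (variance ~ mean),
# which is why a Poisson model understates uncertainty for such data.
pois = rng.poisson(nb.mean(), size=100_000)

print(nb.mean(), nb.var())      # roughly 20 and 100
print(pois.mean(), pois.var())  # both roughly 20
```

When regional count data show this pattern, a Poisson regression yields overconfident standard errors, which is the methodological issue the geoadditive negative binomial specification addresses alongside spatial clustering and nonlinearity.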
Abstract:
In many lower-income countries, the establishment of marine protected areas (MPAs) involves significant opportunity costs for artisanal fishers, reflected in changes in how they allocate their labor in response to the MPA. The resource economics literature rarely addresses such labor allocation decisions of artisanal fishers and how, in turn, these contribute to the impact of MPAs on fish stocks, yield, and income. This paper develops a spatial bio-economic model of a fishery adjacent to a village of people who allocate their labor between fishing and on-shore wage opportunities to establish a spatial Nash equilibrium at a steady-state fish stock in response to various locations for no-take-zone MPAs and managed-access MPAs. Villagers’ fishing location decisions are based on distance costs, fishing returns, and wages. Here, the MPA location determines its impact on fish stocks, fish yield, and villager income due to distance costs, congestion, and fish dispersal. Incorporating wage labor opportunities into the framework allows examination of the MPA’s impact on rural incomes, with results showing that win-wins between yield and stocks occur in very different MPA locations than do win-wins between income and stocks. Similarly, villagers in a high-wage setting face a lower burden from MPAs than do those in low-wage settings. Motivated by issues of central importance in Tanzania and Costa Rica, we impose various policies on this fishery – location-specific no-take zones, increasing on-shore wages, and restricting MPA access to a subset of villagers – to analyze the impact of an MPA on fish stocks and rural incomes in such settings.
Abstract:
4-Dimensional Variational Data Assimilation (4DVAR) assimilates observations through the minimisation of a least-squares objective function, which is constrained by the model flow. We refer to 4DVAR as strong-constraint 4DVAR (sc4DVAR) in this thesis as it assumes the model is perfect. Relaxing this assumption gives rise to weak-constraint 4DVAR (wc4DVAR), leading to a different minimisation problem with more degrees of freedom. We consider two wc4DVAR formulations in this thesis, the model error formulation and the state estimation formulation. The 4DVAR objective function is traditionally solved using gradient-based iterative methods. The principal method used in Numerical Weather Prediction today is the Gauss-Newton approach. This method introduces a linearised `inner-loop' objective function, which upon convergence, updates the solution of the non-linear `outer-loop' objective function. This requires many evaluations of the objective function and its gradient, which emphasises the importance of the Hessian. The eigenvalues and eigenvectors of the Hessian provide insight into the degree of convexity of the objective function, while also indicating the difficulty one may encounter while iteratively solving 4DVAR. The condition number of the Hessian is an appropriate measure for the sensitivity of the problem to input data. The condition number can also indicate the rate of convergence and solution accuracy of the minimisation algorithm. This thesis investigates the sensitivity of the solution process minimising both wc4DVAR objective functions to the internal assimilation parameters composing the problem. We gain insight into these sensitivities by bounding the condition number of the Hessians of both objective functions. We also precondition the model error objective function and show improved convergence. We show that both formulations' sensitivities are related to error variance balance, assimilation window length and correlation length-scales using the bounds.
We further demonstrate this through numerical experiments on the condition number and data assimilation experiments using linear and non-linear chaotic toy models.
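The central quantity in the abstract, the condition number of a Hessian, and the effect of preconditioning on it can be illustrated with a toy symmetric positive-definite matrix. The matrix and the Jacobi (diagonal) scaling below are generic stand-ins, not the thesis's wc4DVAR Hessians or its actual preconditioner.

```python
import numpy as np

# A toy symmetric positive-definite "Hessian" with badly scaled
# diagonal entries (hypothetical numbers, not a 4DVAR Hessian).
H = np.diag([1.0, 100.0, 10_000.0])
H += 0.1 * (np.ones((3, 3)) - np.eye(3))  # small symmetric off-diagonal coupling

# For SPD matrices the condition number is the ratio of the largest
# to the smallest eigenvalue; large values mean slow, inaccurate
# convergence for gradient-based solvers.
cond_raw = np.linalg.cond(H)

# Jacobi (diagonal) preconditioning: symmetric scaling so that the
# diagonal of the transformed matrix becomes 1.
S = np.diag(1.0 / np.sqrt(np.diag(H)))
H_pre = S @ H @ S
cond_pre = np.linalg.cond(H_pre)

print(cond_raw, cond_pre)  # conditioning improves by orders of magnitude
```

In the same spirit, the thesis bounds the condition numbers of the wc4DVAR Hessians in terms of error variance balance, window length and correlation length-scales, and preconditions the model error formulation to speed up convergence.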
Abstract:
Among the more peculiar literary papyri uncovered in the past century are numerous bilingual texts of Virgil and Cicero, with the Latin original and a Greek translation arranged in distinctive narrow columns. These materials, variously classified as texts with translations or as glossaries, were evidently used by Greek-speaking students when they first started to read Latin literature. They thus provide a unique window into the experience of the first of many groups of non-native Latin speakers to struggle with reading the classics of Latin literature.