20 results for simplification
in CentAUR: Central Archive University of Reading - UK
Abstract:
The requirement to rapidly and efficiently evaluate ruminant feedstuffs places increased emphasis on in vitro systems. However, despite the developmental work undertaken and widespread application of such techniques, little attention has been paid to the incubation medium. Considerable research using in vitro systems is conducted in resource-poor developing countries that often have difficulties associated with technical expertise, sourcing chemicals and/or funding to cover analytical and equipment costs. Such limitations have, to date, restricted vital feed evaluation programmes in these regions. This paper examines the function and relevance of the buffer, nutrient, and reducing solution components within current in vitro media, with the aim of identifying where simplification can be achieved. The review, supported by experimental work, identified no requirement to change the carbonate or phosphate salts, which comprise the main buffer components. The inclusion of microminerals provided few additional nutrients over those already supplied by the rumen fluid and substrate, and so may be omitted. Nitrogen associated with the inoculum was insufficient to support degradation and a level of 25 mg N/g substrate is recommended. A sulphur inclusion level of 4-5 mg S/g substrate is proposed, with S levels lowered through omission of sodium sulphide and replacement of magnesium sulphate with magnesium chloride. It was confirmed that a highly reduced medium was not required, provided that anaerobic conditions were rapidly established. This allows sodium sulphide, part of the reducing solution, to be omitted. Further, as gassing with CO2 directly influences the quantity of gas released, it is recommended that minimum CO2 levels be used and that gas flow and duration, together with the volume of medium treated, are detailed in experimental procedures.
It is considered that these simplifications will improve safety and reduce costs and problems associated with sourcing components, while maintaining analytical precision. (c) 2005 Elsevier B.V. All rights reserved.
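The recommended inclusion levels above translate into simple per-batch arithmetic. A minimal sketch (the 4.5 mg S/g figure is an assumed midpoint of the 4-5 mg range given in the abstract; the 0.5 g sample mass is purely illustrative):

```python
def required_nutrients(substrate_g, n_per_g=25.0, s_per_g=4.5):
    """Return (mg N, mg S) needed for a given substrate mass, using the
    inclusion levels recommended in the abstract: 25 mg N/g substrate and
    4-5 mg S/g substrate (midpoint 4.5 assumed here)."""
    return substrate_g * n_per_g, substrate_g * s_per_g

# Hypothetical 0.5 g in vitro substrate sample.
n_mg, s_mg = required_nutrients(0.5)
```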
Abstract:
Indicators are commonly recommended as tools for assessing the attainment of development, and the current vogue is for aggregating a number of indicators together into a single index. It is claimed that such indices of development help facilitate maximum impact in policy terms by appealing to those who may not necessarily have technical expertise in data collection, analysis and interpretation. In order to help counter criticisms of over-simplification, those advocating such indices also suggest that the raw data be provided so as to allow disaggregation into component parts and hence facilitate a more subtle interpretation if a reader so desires. This paper examines the problems involved with interpreting indices of development by focusing on the United Nations Development Programme's (UNDP) Human Development Index (HDI) published each year in the Human Development Reports (HDRs). The HDI was intended to provide an alternative to the more economically based indices, such as GDP, commonly used within neo-liberal development agendas. The paper explores the use of the HDI as a gauge of human development by making comparisons between two major political and economic communities in Africa (ECOWAS and SADC). While the HDI did help highlight important changes in human development over the 10-year period examined, it is concluded that the HDI and its components are difficult to interpret as methodologies have changed significantly and the 'averaging' nature of the HDI could hide information unless care is taken. The paper discusses the applicability of alternative models to the HDI such as the more neo-populist centred methods commonly advocated for indicators of sustainable development. (C) 2003 Elsevier Ltd. All rights reserved.
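The 'averaging' problem the paper raises is easy to illustrate. A minimal sketch of the HDI-style aggregation, assuming the post-2010 geometric-mean form (earlier HDRs used an arithmetic mean, which is exactly the kind of methodology change the paper flags); the min/max goalposts and input values are hypothetical:

```python
def dim_index(value, lo, hi):
    """Normalize a raw indicator onto [0, 1] between fixed goalposts."""
    return (value - lo) / (hi - lo)

def hdi(health, education, income):
    """Geometric mean of three dimension indices, as in HDRs since 2010.
    Note how one very low component is partially masked in the aggregate."""
    return (health * education * income) ** (1.0 / 3.0)

# Two hypothetical countries with the same HDI but very different profiles.
balanced = hdi(0.7, 0.7, 0.7)
skewed = hdi(0.9, 0.9, 0.7 ** 3 / 0.81)  # identical geometric mean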
Abstract:
The paper presents a design for a hardware genetic algorithm which uses a pipeline of systolic arrays. These arrays have been designed using systolic synthesis techniques which involve expressing the algorithm as a set of uniform recurrence relations. The final design divorces the fitness function evaluation from the hardware and can process chromosomes of different lengths, giving the design a generic quality. The paper demonstrates the design methodology by progressively re-writing a simple genetic algorithm, expressed in C code, into a form from which systolic structures can be deduced. This paper extends previous work by introducing a simplification to a previous systolic design for the genetic algorithm. The simplification results in the removal of 2N² + 4N cells and reduces the time complexity by 3N + 1 cycles.
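The stated savings are a function of the chromosome length N and can be tabulated directly. A small sketch using only the counts quoted in the abstract:

```python
def cells_removed(n):
    """Systolic cells eliminated by the simplification: 2N^2 + 4N."""
    return 2 * n * n + 4 * n

def cycles_saved(n):
    """Reduction in time complexity: 3N + 1 cycles."""
    return 3 * n + 1

# Savings for a hypothetical 32-bit chromosome.
cells, cycles = cells_removed(32), cycles_saved(32)
```

The quadratic cell saving dominates quickly: at N = 32 the design sheds 2176 cells for a 97-cycle reduction.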
Abstract:
1. Management of lowland mesotrophic grasslands in north-west Europe often makes use of inorganic fertilizers, high stocking densities and silage-based forage systems to maximize productivity. The impact of these practices has resulted in a simplification of the plant community combined with wide-scale declines in the species richness of grassland invertebrates. We aim to identify how field margin management can be used to promote invertebrate diversity across a suite of functionally diverse taxa (beetles, planthoppers, true bugs, butterflies, bumblebees and spiders). 2. Using an information theoretic approach we identify the impacts of management (cattle grazing, cutting and inorganic fertilizer) and plant community composition (forb species richness, grass species richness and sward architecture) on invertebrate species richness and body size. As many of these management practices are common to grassland systems throughout the world, understanding invertebrate responses to them is important for the maintenance of biodiversity. 3. Sward architecture was identified as the primary factor promoting increased species richness of both predatory and phytophagous trophic levels, as well as being positively correlated with mean body size. In all cases phytophagous invertebrate species richness was positively correlated with measures of plant species richness. 4. The direct effects of management practices appear to be comparatively weak, suggesting that their impacts are indirect and mediated through the continuous measures of plant community structure, such as sward architecture or plant species richness. 5. Synthesis and applications. By partitioning field margins from the remainder of the field, economically viable intensive grassland management can be combined with extensive management aimed at promoting native biodiversity.
The absence of inorganic fertilizer, combined with a reduction in the intensity of both cutting and grazing regimes, promotes floral species richness and sward architectural complexity. By increasing sward architecture the total biomass of invertebrates also increased (by c. 60% across the range of sward architectural measures seen in this study), increasing food available for higher trophic levels, such as birds and mammals.
Abstract:
Microscopic endoparasites belonging to the Phylum Myxozoa provide a striking example of how much there is still to be learned about the diversity of the Metazoa. Recent research on myxozoans has provided new insights into evolution within the Bilateria, revealing unparalleled levels of morphological simplification associated with parasitism, a home for an orphan worm, and a hypothesis of the endosymbiotic origin(s) for extrusible intracellular organelles in myxozoans and cnidarians. In addition, discovery of the source of a devastating disease of salmonid fish has enabled researchers to identify two ancient clades within the Myxozoa, and has exposed new mysteries concerning myxozoan life-cycle evolution and evolutionary diversification. This plethora of new insights exemplifies the fundamental value of studying obscure organisms.
Abstract:
Population subdivision complicates analysis of molecular variation. Even if neutrality is assumed, three evolutionary forces need to be considered: migration, mutation, and drift. Simplification can be achieved by assuming that the process of migration among and drift within subpopulations is occurring fast compared to mutation and drift in the entire population. This allows a two-step approach in the analysis: (i) analysis of population subdivision and (ii) analysis of molecular variation in the migrant pool. We model population subdivision using an infinite island model, where we allow the migration/drift parameter Theta to vary among populations. Thus, central and peripheral populations can be differentiated. For inference of Theta, we use a coalescence approach, implemented via a Markov chain Monte Carlo (MCMC) integration method that allows estimation of allele frequencies in the migrant pool. The second step of this approach (analysis of molecular variation in the migrant pool) uses the estimated allele frequencies in the migrant pool for the study of molecular variation. We apply this method to a Drosophila ananassae sequence data set. We find little indication of isolation by distance, but large differences in the migration parameter among populations. The population as a whole seems to be expanding. A population from Bogor (Java, Indonesia) shows the highest variation and seems closest to the species center.
Abstract:
Adaptive filters used in code division multiple access (CDMA) receivers to counter interference have been formulated both with and without the assumption of training symbols being transmitted. They are known as training-based and blind detectors respectively. We show that the convergence behaviour of the blind minimum-output-energy (MOE) detector can be quite easily derived, unlike what was implied by the procedure outlined in a previous paper. The simplification results from the observation that the correlation matrix determining convergence performance can be made symmetric, after which many standard results from the literature on least mean square (LMS) filters apply immediately.
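The simplification rests on a standard LMS result: once the correlation matrix is symmetric, mean convergence is governed by its eigenvalue spread, with step size bounded by 0 < mu < 2/lambda_max. A toy sketch for a 2x2 symmetric matrix (the matrix entries are illustrative, not taken from the paper):

```python
import math

def sym2x2_eigs(a, b, c):
    """Eigenvalues of the symmetric matrix [[a, b], [b, c]]."""
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(tr * tr / 4.0 - det)
    return tr / 2.0 - disc, tr / 2.0 + disc

def lms_step_bound(lambda_max):
    """Classical LMS mean-convergence bound: 0 < mu < 2 / lambda_max."""
    return 2.0 / lambda_max

lo, hi = sym2x2_eigs(2.0, 0.5, 1.0)
mu_max = lms_step_bound(hi)
```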
Abstract:
DISOPE is a technique for solving optimal control problems where there are differences in structure and parameter values between reality and the model employed in the computations. The model-reality differences can also allow for deliberate simplification of model characteristics and performance indices in order to facilitate the solution of the optimal control problem. The technique was developed originally in continuous time and later extended to discrete time. The main property of the procedure is that by iterating on appropriately modified model-based problems the correct optimal solution is achieved in spite of the model-reality differences. Algorithms have been developed in both continuous and discrete time for a general nonlinear optimal control problem with terminal weighting, bounded controls and terminal constraints. The aim of this paper is to show how the DISOPE technique can aid receding horizon optimal control computation in nonlinear model predictive control.
Abstract:
Treating algebraic symbols as objects (e.g. "'a' means 'apple'") is a means of introducing elementary simplification of algebra, but causes problems further on. This current school-based research included an examination of texts still in use in the mathematics department, and interviews with mathematics teachers, year 7 pupils and then year 10 pupils, asking them how they would explain "3a + 2a = 5a" to year 7 pupils. Results included the notion that the 'algebra as object' analogy can be found in textbooks in current usage, including those recently published. Teachers knew that they were not 'supposed' to use the analogy but were not always clear why, and nevertheless described teaching methods consistent with an 'algebra as object' approach. Year 7 pupils did not explicitly refer to 'algebra as object', although some of their responses could be so interpreted. In the main, year 10 pupils used 'algebra as object' to explain simplification of algebra, with some complicated attempts to get round its limitations. Further research would look to establish whether the appearance of 'algebra as object' in pupils' thinking between years 7 and 10 is consistent and, if so, where it arises. There are also implications for ongoing teacher training, offering alternatives to introducing such simplification.
Abstract:
The time taken to consider development proposals within the English planning system continues to provoke great policy concern despite a decade of inquiry and policy change. The results of an extensive site-based survey and hedonic modelling exercise across 45 local authorities are reported here. The analysis reveals a slow, uncertain system. It identifies planning delay as a serious problem for housing supply and its ability to respond to increases in demand. Only a relatively limited set of factors seem relevant in explaining differences in times and the results suggest that 80% of councils’ performances are statistically indistinguishable from each other. These findings question the policy emphasis put on rankings of local authorities, though some influence from local politics is apparent. Development control is consistently a lengthy and uncertain process due to its complexity. Therefore, success in lowering planning delay is only likely through radical simplification.
Abstract:
This paper studies the effects of increasing formality via tax reduction and simplification schemes on micro-firm performance. It uses the 1997 Brazilian SIMPLES program. We develop a simple theoretical model to show that SIMPLES has an impact only on a segment of the micro-firm population, for which the effect of formality on firm performance can be identified, and that can be analyzed along the single-dimensional quantiles of the conditional firm revenues. To estimate the effect of formality, we use an econometric approach that compares eligible and non-eligible firms, born before and after SIMPLES, in a local interval around the introduction of SIMPLES. We use an estimator that combines both quantile regression and the regression discontinuity identification strategy. The empirical results corroborate the positive effect of formality on micro-firms' performance and produce a clear characterization of who benefits from these programs.
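The identification strategy compares firms born just before and just after the reform. A crude sketch of the regression-discontinuity idea on synthetic data, using local means rather than the quantile-regression estimator the paper combines it with; all numbers below (cutoff, bandwidth, treatment effect of +2) are fabricated for illustration:

```python
import random

def rd_estimate(running, outcomes, cutoff, bandwidth):
    """Sharp RD estimate: difference in mean outcomes in a narrow
    window just above vs. just below the cutoff."""
    above = [y for x, y in zip(running, outcomes)
             if cutoff <= x < cutoff + bandwidth]
    below = [y for x, y in zip(running, outcomes)
             if cutoff - bandwidth <= x < cutoff]
    return sum(above) / len(above) - sum(below) / len(below)

random.seed(0)
# Synthetic firms: running variable is birth date relative to the reform;
# firms born after (x >= 0) receive a hypothetical +2 formality effect.
x = [random.uniform(-1, 1) for _ in range(2000)]
y = [0.5 * xi + (2.0 if xi >= 0 else 0.0) + random.gauss(0, 0.5) for xi in x]
effect = rd_estimate(x, y, cutoff=0.0, bandwidth=0.2)
```

Narrowing the bandwidth trades bias (from the slope in the running variable) against variance (fewer firms in the window), the usual RD tension.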
Abstract:
The use of discounted cash flow (DCF) methods in investment valuation and appraisal is argued by many academics as being rational and more rigorous than the traditional capitalisation model. However, those advocates of DCF should be cautious in their claims for rationality. The various DCF models all rely upon an all-encompassing equated yield (IRR) within the calculation. This paper will argue that this is a simplification of the risk perception which the investor places on the income profile from property. In determining the long-term capital value of a property, an 'average' DCF method will produce the 'correct' price; however, the individual short-term values of each cash flow may differ significantly. In the UK property market today, where we are facing a period in which prices are not expected to rise generally at the same rate or with such persistence as hitherto, investors and tenants are increasingly concerned with the downside implications of rental growth, and investors may indeed be interested in trading property over a shorter investment horizon than they had originally planned. The purpose of this paper is therefore to bring to the analysis a rigorous framework which can be used to analyse the constituent cash flows within the freehold valuation. We show that the arbitrage analysis lends itself to segregating the capital value of the cash flows in a way which is more appropriate for financial investors.
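The single equated yield the paper questions can be made concrete. A minimal sketch of a conventional DCF valuation discounting every year's cash flow at one IRR (the 5-year income of 100 per annum at a 10% yield is a hypothetical example, not data from the paper):

```python
def dcf_value(cash_flows, rate):
    """Present value of year-end cash flows discounted at a single
    all-encompassing equated yield -- the simplification at issue."""
    return sum(cf / (1.0 + rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

def per_period_values(cash_flows, rate):
    """Segregated PV of each individual cash flow, so short-term
    values can be inspected separately from the aggregate."""
    return [cf / (1.0 + rate) ** t
            for t, cf in enumerate(cash_flows, start=1)]

# Hypothetical 5-year rental income of 100 a year at a 10% equated yield.
pv = dcf_value([100] * 5, 0.10)
```

Segregating the per-period values shows what the single-yield aggregate hides: each year's cash flow carries a different effective price.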
Abstract:
Multi-gas approaches to climate change policies require a metric establishing ‘equivalences’ among emissions of various species. Climate scientists and economists have proposed four kinds of such metrics and debated their relative merits. We present a unifying framework that clarifies the relationships among them. We show, as have previous authors, that the global warming potential (GWP), used in international law to compare emissions of greenhouse gases, is a special case of the global damage potential (GDP), assuming (1) a finite time horizon, (2) a zero discount rate, (3) constant atmospheric concentrations, and (4) impacts that are proportional to radiative forcing. Both the GWP and GDP follow naturally from a cost–benefit framing of the climate change issue. We show that the global temperature change potential (GTP) is a special case of the global cost potential (GCP), assuming a (slight) fall in the global temperature after the target is reached. We show how the four metrics should be generalized if there are intertemporal spillovers in abatement costs, distinguishing between private (e.g., capital stock turnover) and public (e.g., induced technological change) spillovers. Both the GTP and GCP follow naturally from a cost-effectiveness framing of the climate change issue. We also argue that if (1) damages are zero below a threshold and (2) infinitely large above a threshold, then cost-effectiveness analysis and cost–benefit analysis lead to identical results. Therefore, the GCP is a special case of the GDP. The UN Framework Convention on Climate Change uses the GWP, a simplified cost–benefit concept. The UNFCCC is framed around the ultimate goal of stabilizing greenhouse gas concentrations. Once a stabilization target has been agreed under the convention, implementation is clearly a cost-effectiveness problem. It would therefore be more consistent to use the GCP or its simplification, the GTP.
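The GWP special case described above (finite horizon, zero discounting, impacts proportional to forcing) reduces to a ratio of integrated forcings. A toy sketch assuming single-exponential decay for both gases (real CO2 follows a multi-exponential impulse response, so the reference-gas treatment here is a deliberate simplification; all parameters are placeholders):

```python
import math

def agwp(radiative_efficiency, lifetime, horizon):
    """Absolute GWP for single-exponential decay: the integral of
    a * exp(-t/tau) from 0 to H, i.e. a * tau * (1 - exp(-H/tau))."""
    return radiative_efficiency * lifetime * (1.0 - math.exp(-horizon / lifetime))

def gwp(a_gas, tau_gas, a_ref, tau_ref, horizon):
    """GWP over a finite horizon with zero discounting: the ratio of
    the gas's integrated forcing to the reference gas's."""
    return agwp(a_gas, tau_gas, horizon) / agwp(a_ref, tau_ref, horizon)
```

The horizon dependence is visible immediately: a short-lived gas's GWP falls as the horizon lengthens, which is why the choice of time horizon is itself a value judgment.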
Abstract:
The complete details of our calculation of the NLO QCD corrections to heavy flavor photo- and hadroproduction with longitudinally polarized initial states are presented. The main motivation for investigating these processes is the determination of the polarized gluon density at the COMPASS and RHIC experiments, respectively, in the near future. All methods used in the computation are extensively documented, providing a self-contained introduction to this type of calculation. Some of the tools employed may also be of general interest, e.g., the series expansion of hypergeometric functions. The relevant parton level results are collected and plotted in the form of scaling functions. However, the simplification of the obtained gluon-gluon virtual contributions has not yet been completed. Thus NLO phenomenological predictions are only given in the case of photoproduction. The theoretical uncertainties of these predictions, in particular with respect to the heavy quark mass, are carefully considered. It is also shown that transverse momentum cuts can considerably enhance the measured production asymmetries. Finally, unpolarized heavy quark production is reviewed in order to derive conditions for a successful interpretation of future spin-dependent experimental data.
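The series expansion of hypergeometric functions mentioned as a tool of general interest is, at its core, the Gauss series. A minimal numerical sketch of the truncated series for 2F1(a, b; c; z), valid for |z| < 1 (this illustrates the function itself, not the symbolic epsilon-expansion machinery used in the calculation):

```python
import math

def hyp2f1(a, b, c, z, terms=60):
    """Truncated Gauss series: sum over n of
    (a)_n (b)_n / (c)_n * z^n / n!, built term by term."""
    total, term = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
    return total
```

A known closed form provides a check: 2F1(1, 1; 2; z) = -ln(1 - z)/z.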