24 results for Simplification of Ontologies
in CentAUR: Central Archive University of Reading - UK
Abstract:
The requirement to rapidly and efficiently evaluate ruminant feedstuffs places increased emphasis on in vitro systems. However, despite the developmental work undertaken and widespread application of such techniques, little attention has been paid to the incubation medium. Considerable research using in vitro systems is conducted in resource-poor developing countries that often have difficulties associated with technical expertise, sourcing chemicals and/or funding to cover analytical and equipment costs. Such limitations have, to date, restricted vital feed evaluation programmes in these regions. This paper examines the function and relevance of the buffer, nutrient, and reducing solution components within current in vitro media, with the aim of identifying where simplification can be achieved. The review, supported by experimental work, identified no requirement to change the carbonate or phosphate salts, which comprise the main buffer components. The inclusion of microminerals provided few additional nutrients over those already supplied by the rumen fluid and substrate, and so may be omitted. Nitrogen associated with the inoculum was insufficient to support degradation and a level of 25 mg N/g substrate is recommended. A sulphur inclusion level of 4-5 mg S/g substrate is proposed, with S levels lowered through omission of sodium sulphide and replacement of magnesium sulphate with magnesium chloride. It was confirmed that a highly reduced medium was not required, provided that anaerobic conditions were rapidly established. This allows sodium sulphide, part of the reducing solution, to be omitted. Further, as gassing with CO2 directly influences the quantity of gas released, it is recommended that minimum CO2 levels be used and that gas flow and duration, together with the volume of medium treated, are detailed in experimental procedures. It is considered that these simplifications will improve safety and reduce costs and problems associated with sourcing components, while maintaining analytical precision. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
Currently many ontologies are available for addressing different domains. However, it is not always possible to deploy such ontologies to support collaborative working, so that their full potential can be exploited to implement intelligent cooperative applications capable of reasoning over a network of context-specific ontologies. The main problem arises from the fact that presently ontologies are created in an isolated way to address specific needs. However, we foresee the need for a network of ontologies which will support the next generation of intelligent applications/devices and the vision of Ambient Intelligence. The main objective of this paper is to motivate the design of a networked ontology (Meta) model which formalises ways of connecting available ontologies so that they are easy to search, to characterise and to maintain. The aim is to make explicit the virtual and implicit network of ontologies serving the Semantic Web.
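The networked ontology model the abstract motivates is not specified here, but the basic idea of making links between independently built ontologies explicit can be sketched in a few lines of rdflib. The URIs and the use of owl:imports below are illustrative assumptions, not the paper's (Meta) model.

```python
# A minimal sketch, assuming rdflib is installed: two context-specific ontologies
# connected by an explicit owl:imports link, one small edge of the "virtual and
# implicit network of ontologies" made explicit. The URIs are invented.
from rdflib import Graph, URIRef
from rdflib.namespace import OWL, RDF

devices = URIRef("http://example.org/ontologies/devices")          # hypothetical ontology URI
context = URIRef("http://example.org/ontologies/ambient-context")  # hypothetical ontology URI

g = Graph()
g.add((devices, RDF.type, OWL.Ontology))
g.add((context, RDF.type, OWL.Ontology))
g.add((devices, OWL.imports, context))   # record the connection so it can be searched

print(g.serialize(format="turtle"))
```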
Abstract:
1. Management of lowland mesotrophic grasslands in north-west Europe often makes use of inorganic fertilizers, high stocking densities and silage-based forage systems to maximize productivity. The impact of these practices has been a simplification of the plant community combined with wide-scale declines in the species richness of grassland invertebrates. We aim to identify how field margin management can be used to promote invertebrate diversity across a suite of functionally diverse taxa (beetles, planthoppers, true bugs, butterflies, bumblebees and spiders). 2. Using an information theoretic approach we identify the impacts of management (cattle grazing, cutting and inorganic fertilizer) and plant community composition (forb species richness, grass species richness and sward architecture) on invertebrate species richness and body size. As many of these management practices are common to grassland systems throughout the world, understanding invertebrate responses to them is important for the maintenance of biodiversity. 3. Sward architecture was identified as the primary factor promoting increased species richness of both predatory and phytophagous trophic levels, as well as being positively correlated with mean body size. In all cases phytophagous invertebrate species richness was positively correlated with measures of plant species richness. 4. The direct effects of management practices appear to be comparatively weak, suggesting that their impacts are indirect and mediated through the continuous measures of plant community structure, such as sward architecture or plant species richness. 5. Synthesis and applications. By partitioning field margins from the remainder of the field, economically viable intensive grassland management can be combined with extensive management aimed at promoting native biodiversity. The absence of inorganic fertilizer, combined with a reduction in the intensity of both cutting and grazing regimes, promotes floral species richness and sward architectural complexity. By increasing sward architecture the total biomass of invertebrates also increased (by c. 60% across the range of sward architectural measures seen in this study), increasing the food available for higher trophic levels, such as birds and mammals.
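For readers unfamiliar with the information theoretic approach mentioned in point 2, the sketch below shows the underlying arithmetic of AIC-based model comparison on invented data; it is not the authors' analysis, and the variable names are placeholders.

```python
# Toy AIC comparison of two candidate predictors of invertebrate richness.
# Data are simulated; AIC is computed from a least-squares fit as n*ln(RSS/n) + 2k.
import numpy as np

rng = np.random.default_rng(0)
sward = rng.uniform(0, 1, 100)                        # stand-in for sward architecture
forbs = rng.uniform(0, 1, 100)                        # stand-in for forb species richness
richness = 5 + 8 * sward + rng.normal(0, 1, 100)      # simulated invertebrate richness

def aic(y, X):
    """AIC of an ordinary least-squares fit of y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n, k = X.shape
    return n * np.log(rss / n) + 2 * k

ones = np.ones(100)
print("sward model AIC:", round(aic(richness, np.column_stack([ones, sward])), 1))
print("forb model AIC: ", round(aic(richness, np.column_stack([ones, forbs])), 1))
# The model with the lower AIC is the better supported one (here, the sward model).
```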
Abstract:
The complete details of our calculation of the NLO QCD corrections to heavy flavor photo- and hadroproduction with longitudinally polarized initial states are presented. The main motivation for investigating these processes is the determination of the polarized gluon density at the COMPASS and RHIC experiments, respectively, in the near future. All methods used in the computation are extensively documented, providing a self-contained introduction to this type of calculation. Some of the tools employed may also be of general interest, e.g., the series expansion of hypergeometric functions. The relevant parton level results are collected and plotted in the form of scaling functions. However, the simplification of the gluon-gluon virtual contributions obtained has not yet been completed. Thus NLO phenomenological predictions are given only for the case of photoproduction. The theoretical uncertainties of these predictions, in particular with respect to the heavy quark mass, are carefully considered. It is also shown that transverse momentum cuts can considerably enhance the measured production asymmetries. Finally, unpolarized heavy quark production is reviewed in order to derive conditions for a successful interpretation of future spin-dependent experimental data.
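The series expansion of hypergeometric functions mentioned as a tool of general interest can be illustrated numerically; the sketch below sums the defining series of the Gauss function 2F1 using Pochhammer symbols and checks it against mpmath's reference implementation. The helper name and parameter values are illustrative, not taken from the calculation.

```python
# Truncated series for 2F1(a, b; c; z): sum_n (a)_n (b)_n / ((c)_n n!) z^n,
# compared with mpmath's hyp2f1 as an independent reference.
from mpmath import mp, mpf, rf, factorial, hyp2f1   # rf(x, n) is the Pochhammer symbol

def hyp2f1_series(a, b, c, z, terms=40):
    total = mpf(0)
    for n in range(terms):
        total += rf(a, n) * rf(b, n) / (rf(c, n) * factorial(n)) * mpf(z) ** n
    return total

mp.dps = 30                                         # work to 30 significant digits
print(hyp2f1_series(0.5, 1.5, 2.5, 0.3))            # truncated series value
print(hyp2f1(0.5, 1.5, 2.5, 0.3))                   # reference value
```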
Abstract:
Automatic generation of classification rules has been an increasingly popular technique in commercial applications such as Big Data analytics, rule based expert systems and decision making systems. However, a principal problem that arises with most methods for generation of classification rules is the overfitting of training data. When Big Data is dealt with, this may result in the generation of a large number of complex rules. This may not only increase computational cost but also lower the accuracy in predicting further unseen instances. This has led to the necessity of developing pruning methods for the simplification of rules. In addition, classification rules are used further to make predictions after the completion of their generation. Where efficiency is concerned, it is desirable to find the first rule that fires as quickly as possible when searching through a rule set. Thus a suitable structure is required to represent the rule set effectively. In this chapter, the authors introduce a unified framework for construction of rule based classification systems consisting of three operations on Big Data: rule generation, rule simplification and rule representation. The authors also review some existing methods and techniques used for each of the three operations and highlight their limitations. They introduce some novel methods and techniques developed by them recently. These methods and techniques are also discussed in comparison to existing ones with respect to efficient processing of Big Data.
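As an illustration of the rule representation and "first rule that fires" prediction step described above, the sketch below stores a rule set as an ordered list and returns the class of the first matching rule; the rules, attribute names and default class are hypothetical, and this is not the authors' representation structure.

```python
# Rules as (conditions, class) pairs; prediction stops at the first rule that fires.
from typing import Any

Rule = tuple[dict[str, Any], str]        # ({attribute: required value}, predicted class)

rules: list[Rule] = [
    ({"outlook": "sunny", "humidity": "high"}, "no"),
    ({"outlook": "overcast"}, "yes"),
    ({"outlook": "rainy", "windy": False}, "yes"),
]

def predict(instance: dict[str, Any], rules: list[Rule], default: str = "no") -> str:
    """Return the class of the first rule whose conditions all match the instance."""
    for conditions, label in rules:
        if all(instance.get(attr) == value for attr, value in conditions.items()):
            return label                 # first firing rule decides the prediction
    return default                       # no rule fired, fall back to the default class

print(predict({"outlook": "sunny", "humidity": "high", "windy": True}, rules))  # -> no
```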
Abstract:
The storage and processing capacity realised by computing has led to an explosion of data retention. We now reach the point of information overload and must begin to use computers to process more complex information. In particular, the proposition of the Semantic Web has given structure to this problem, but has yet to be realised practically. The largest of its problems is that of ontology construction; without a suitable automatic method most will have to be encoded by hand. In this paper we discuss the current methods for semi- and fully automatic construction and their current shortcomings. In particular we pay attention to the application of ontologies to products and to the practical application of the ontologies.
Abstract:
DISOPE is a technique for solving optimal control problems where there are differences in structure and parameter values between reality and the model employed in the computations. The model-reality differences can also allow for deliberate simplification of model characteristics and performance indices in order to facilitate the solution of the optimal control problem. The technique was developed originally in continuous time and later extended to discrete time. The main property of the procedure is that, by iterating on appropriately modified model-based problems, the correct optimal solution is achieved in spite of the model-reality differences. Algorithms have been developed in both continuous and discrete time for a general nonlinear optimal control problem with terminal weighting, bounded controls and terminal constraints. The aim of this paper is to show how the DISOPE technique can aid receding horizon optimal control computation in nonlinear model predictive control.
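The receding horizon computation that the paper addresses can be summarised in a short sketch: at each step a model-based finite-horizon problem is solved, only the first control is applied, and the problem is re-solved from the new state. The dynamics, cost weights and use of scipy's general-purpose optimiser below are assumptions for illustration; this is not the DISOPE algorithm itself, which additionally corrects for model-reality differences.

```python
# Minimal receding horizon (MPC) loop for an assumed scalar nonlinear model.
import numpy as np
from scipy.optimize import minimize

def model(x, u):
    return 0.9 * x + 0.1 * np.tanh(u)        # assumed discrete-time nonlinear dynamics

def horizon_cost(u_seq, x0, q=1.0, r=0.1):
    """Sum of stage costs q*x^2 + r*u^2 over the prediction horizon."""
    x, cost = x0, 0.0
    for u in u_seq:
        x = model(x, u)
        cost += q * x**2 + r * u**2
    return cost

x, horizon, n_steps = 2.0, 10, 20
for _ in range(n_steps):
    res = minimize(horizon_cost, np.zeros(horizon), args=(x,))  # model-based problem
    x = model(x, res.x[0])                   # apply only the first control, then re-solve
print(f"state after {n_steps} steps: {x:.4f}")
```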
Abstract:
Treating algebraic symbols as objects (e.g. “‘a’ means ‘apple’”) is a means of introducing elementary simplification of algebra, but causes problems further on. This current school-based research included an examination of texts still in use in the mathematics department, and interviews with mathematics teachers, year 7 pupils and then year 10 pupils, asking them how they would explain “3a + 2a = 5a” to year 7 pupils. Results included the finding that the ‘algebra as object’ analogy can be found in textbooks in current usage, including those recently published. Teachers knew that they were not ‘supposed’ to use the analogy but were not always clear why, nevertheless stating methods of teaching consistent with an ‘algebra as object’ approach. Year 7 pupils did not explicitly refer to ‘algebra as object’, although some of their responses could be so interpreted. In the main, year 10 pupils used ‘algebra as object’ to explain simplification of algebra, with some complicated attempts to get round the limitations. Further research would look to establish whether the appearance of ‘algebra as object’ in pupils’ thinking between year 7 and year 10 is consistent and, if so, where it arises. There are also implications for ongoing teacher training concerning alternatives to introducing simplification in this way.
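As a small aside, a computer algebra system reaches the same result without any 'object' analogy, simply by collecting like terms; the sketch below uses sympy and is not part of the study.

```python
# 'a' is an abstract symbol, not an apple: like terms are collected symbolically.
from sympy import symbols

a = symbols('a')
print(3*a + 2*a)   # -> 5*a
```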
Abstract:
The use of discounted cash flow (DCF) methods in investment valuation and appraisal is argued by many academics to be rational and more rigorous than the traditional capitalisation model. However, those advocates of DCF should be cautious in their claims for rationality. The various DCF models all rely upon an all-encompassing equated yield (IRR) within the calculation. This paper will argue that this is a simplification of the risk perception which the investor places on the income profile from property. In determining the long-term capital value of a property, an 'average' DCF method will produce the 'correct' price; however, the individual short-term values of each cash flow may differ significantly. In the UK property market today, where we are facing a period in which prices are not expected to rise generally at the same rate or with such persistence as hitherto, investors and tenants are increasingly concerned with the downside implications of rental growth, and investors may indeed be interested in trading property over a shorter investment horizon than they had originally planned. The purpose of this paper is therefore to bring to the analysis a rigorous framework which can be used to analyse the constituent cash flows within the freehold valuation. We show that the arbitrage analysis lends itself to segregating the capital value of the cash flows in a way which is more appropriate for financial investors.
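The point about a single equated yield versus segregated cash flows comes down to simple arithmetic, sketched below with entirely hypothetical figures and rates; it is not the paper's worked example.

```python
# Present value of the same cash flows under one equated yield (IRR) versus
# a separate discount rate for each year's cash flow.
cash_flows = [10_000, 10_000, 10_000, 210_000]      # assumed rents plus sale proceeds

def present_value(cash_flows, rates):
    """Discount year-1..n cash flows at a single rate or a per-year list of rates."""
    if isinstance(rates, (int, float)):
        rates = [rates] * len(cash_flows)
    return sum(cf / (1 + r) ** (t + 1) for t, (cf, r) in enumerate(zip(cash_flows, rates)))

print(f"Single equated yield (8%): {present_value(cash_flows, 0.08):,.0f}")
print(f"Segregated rates (5-9%):   {present_value(cash_flows, [0.05, 0.06, 0.07, 0.09]):,.0f}")
# The two approaches can give different capital values and, more importantly,
# attribute value differently across the individual cash flows.
```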
Abstract:
There are still major challenges in the area of automatic indexing and retrieval of digital data. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieval of such data based on the semantic content rather than keywords. To enable intelligent web interactions or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. Research has been ongoing for a few years in the field of ontological engineering with the aim of using ontologies to add knowledge to information. In this paper we describe the architecture of a system designed to automatically and intelligently index huge repositories of special effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval.
Abstract:
Indicators are commonly recommended as tools for assessing the attainment of development, and the current vogue is for aggregating a number of indicators together into a single index. It is claimed that such indices of development help facilitate maximum impact in policy terms by appealing to those who may not necessarily have technical expertise in data collection, analysis and interpretation. In order to help counter criticisms of over-simplification, those advocating such indices also suggest that the raw data be provided so as to allow disaggregation into component parts and hence facilitate a more subtle interpretation if a reader so desires. This paper examines the problems involved with interpreting indices of development by focusing on the United Nations Development Programme's (UNDP) Human Development Index (HDI) published each year in the Human Development Reports (HDRs). The HDI was intended to provide an alternative to the more economically based indices, such as GDP, commonly used within neo-liberal development agendas. The paper explores the use of the HDI as a gauge of human development by making comparisons between two major political and economic communities in Africa (ECOWAS and SADC). While the HDI did help highlight important changes in human development over 10 years, it is concluded that the HDI and its components are difficult to interpret, as methodologies have changed significantly and the 'averaging' nature of the HDI could hide information unless care is taken. The paper discusses the applicability of alternative models to the HDI, such as the more neo-populist centred methods commonly advocated for indicators of sustainable development. (C) 2003 Elsevier Ltd. All rights reserved.
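For readers unfamiliar with how a composite index of this kind is built, the sketch below rescales each component against fixed goalposts and averages the resulting indices. The goalposts, component values and use of a simple arithmetic mean are illustrative only; the UNDP's methodology has changed over the years, which is part of the interpretive difficulty the paper discusses.

```python
# Generic composite index: rescale components to 0-1, then average them.
def component_index(value, minimum, maximum):
    """Rescale a raw indicator to a 0-1 index against fixed goalposts."""
    return (value - minimum) / (maximum - minimum)

life = component_index(55.0, 25.0, 85.0)      # illustrative life expectancy goalposts
education = component_index(0.60, 0.0, 1.0)   # already-normalised education measure
income = component_index(0.45, 0.0, 1.0)      # already-normalised income measure

hdi_like = (life + education + income) / 3    # simple averaging hides component detail
print(f"Composite index: {hdi_like:.3f}")
```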
Abstract:
The paper presents a design for a hardware genetic algorithm which uses a pipeline of systolic arrays. These arrays have been designed using systolic synthesis techniques which involve expressing the algorithm as a set of uniform recurrence relations. The final design divorces the fitness function evaluation from the hardware and can process chromosomes of different lengths, giving the design a generic quality. The paper demonstrates the design methodology by progressively re-writing a simple genetic algorithm, expressed in C code, into a form from which systolic structures can be deduced. This paper extends previous work by introducing a simplification to a previous systolic design for the genetic algorithm. The simplification results in the removal of 2N² + 4N cells and reduces the time complexity by 3N + 1 cycles.
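The simple genetic algorithm that such a design starts from can be written in a few lines; the Python sketch below (fitness function, operators and parameters all illustrative) stands in for the C code the paper rewrites, and says nothing about the systolic mapping itself.

```python
# A simple generational GA: bit-string chromosomes, tournament selection,
# one-point crossover and bit-flip mutation, with an illustrative OneMax fitness.
import random

CHROM_LEN, POP_SIZE, GENERATIONS = 16, 30, 50

def fitness(chrom):
    return sum(chrom)                              # OneMax: count the 1 bits

def tournament(pop):
    return max(random.sample(pop, 2), key=fitness) # binary tournament selection

def crossover(p1, p2):
    cut = random.randrange(1, CHROM_LEN)           # one-point crossover
    return p1[:cut] + p2[cut:]

def mutate(chrom, rate=0.02):
    return [bit ^ 1 if random.random() < rate else bit for bit in chrom]

pop = [[random.randint(0, 1) for _ in range(CHROM_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(POP_SIZE)]
print("best fitness:", fitness(max(pop, key=fitness)))
```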
Abstract:
Population subdivision complicates analysis of molecular variation. Even if neutrality is assumed, three evolutionary forces need to be considered: migration, mutation, and drift. Simplification can be achieved by assuming that the process of migration among and drift within subpopulations is occurring fast compared to mutation and drift in the entire population. This allows a two-step approach in the analysis: (i) analysis of population subdivision and (ii) analysis of molecular variation in the migrant pool. We model population subdivision using an infinite island model, where we allow the migration/drift parameter Theta to vary among populations. Thus, central and peripheral populations can be differentiated. For inference of Theta, we use a coalescence approach, implemented via a Markov chain Monte Carlo (MCMC) integration method that allows estimation of allele frequencies in the migrant pool. The second step of this approach (analysis of molecular variation in the migrant pool) uses the estimated allele frequencies in the migrant pool for the study of molecular variation. We apply this method to a Drosophila ananassae sequence data set. We find little indication of isolation by distance, but large differences in the migration parameter among populations. The population as a whole seems to be expanding. A population from Bogor (Java, Indonesia) shows the highest variation and seems closest to the species center.
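The roles of migration and drift in the island model can be illustrated with a forward simulation; the sketch below is a generic toy (arbitrary parameter values, binomial drift, a common migrant pool), not the coalescent MCMC method used in the paper.

```python
# Allele frequencies in several demes exchanging migrants with a common pool:
# low-migration demes drift further from the pool than high-migration ones.
import numpy as np

rng = np.random.default_rng(1)
n_pops, pop_size, generations = 5, 200, 500
m = np.array([0.001, 0.005, 0.01, 0.05, 0.1])    # per-deme migration rates (illustrative)
freq = np.full(n_pops, 0.5)                      # starting allele frequency in every deme

for _ in range(generations):
    migrant_pool = freq.mean()                                    # composition of migrants
    expected = (1 - m) * freq + m * migrant_pool                  # effect of migration
    freq = rng.binomial(2 * pop_size, expected) / (2 * pop_size)  # drift within each deme

print(np.round(freq, 3))
```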