850 results for "representation reformulation"
Abstract:
We develop an approach for obtaining a sparse representation of Gaussian Process (GP) models in order to overcome the limitations that large data sets impose on GPs. The method combines a Bayesian online algorithm with a sequential construction of a relevant subsample of the data which fully specifies the prediction of the model. Experimental results on toy examples and on large real-world data sets indicate the efficiency of the approach.
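To give a concrete flavour of the subset-based idea, here is a minimal Python sketch of GP regression that predicts from a greedily chosen subsample rather than the full data set. The RBF kernel, the max-min selection rule and all parameter values are illustrative assumptions, not the authors' algorithm, which constructs its subsample via Bayesian online updates.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def sparse_gp_predict(X, y, X_star, m=20, noise=0.1):
    """GP prediction using only a greedily chosen subsample of m points.

    The greedy max-min rule below is an illustrative stand-in for the
    paper's sequential construction of a relevant subsample.
    """
    idx = [0]
    for _ in range(1, min(m, len(X))):
        d = ((X[:, None, :] - X[idx][None, :, :]) ** 2).sum(-1).min(axis=1)
        idx.append(int(d.argmax()))  # farthest point from the current set
    Xs, ys = X[idx], y[idx]
    K = rbf_kernel(Xs, Xs) + noise**2 * np.eye(len(idx))
    K_star = rbf_kernel(X_star, Xs)
    alpha = np.linalg.solve(K, ys)
    mean = K_star @ alpha                        # predictive mean
    v = np.linalg.solve(K, K_star.T)
    var = rbf_kernel(X_star, X_star).diagonal() - (K_star * v.T).sum(axis=1)
    return mean, var

# Toy example: a noisy sine, with 500 points summarised by 20 basis points.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
mean, var = sparse_gp_predict(X, y, np.linspace(-3, 3, 50)[:, None])
```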
Abstract:
Graphic depiction is an established method for academics to present concepts about theories of innovation. These expressions have been adopted by policy-makers, the media and businesses. However, there has been little research on the extent of their usage or their effectiveness outside academia. In addition, innovation theorists have neglected this area of study, even though the communication of information about innovation is acknowledged as a major determinant of success for corporate enterprise. The thesis explores some major themes in the theories of innovation and compares how graphics are used to represent them. It examines the contribution of visual sociology and graphic theory to an investigation of a sample of graphics. The methodological focus is a modified content analysis. The following expressions are explored: check lists, matrices, maps and mapping in the management of innovation; models, flow charts, organisational charts and networks in the innovation process; and curves and cycles in the representation of performance and progress. The main conclusion is that academia leads the way in usage as well as novelty. The graphic message is shifting from prescription to description. The computerisation of graphics has created a major role for the information designer. It is recommended that the use of graphic representation of innovation be increased in all domains, though it is conceded that its content and execution also need to improve. Education of graphic 'producers', 'intermediaries' and 'consumers' will play a part in this, as will greater exploration of diversity, novelty and convention. Work has begun to tackle this, and suggestions for future research are made.
Abstract:
This paper explores the representation of the first African World Cup in the British and South African press. Drawing on the output of a variety of media outlets between 2004, when South Africa was awarded the right to host the 2010 event, and the culmination of the tournament in July 2010, this paper contends that a range of representations of Africa have been put forward by the British and South African media. These can be interpreted as alarmist, sensationalist and even racist in certain extreme instances, and hypernationalist and overly defensive in other cases.
Abstract:
A content analysis examined the way majorities and minorities are represented in the British press. An analysis of the headlines of five British newspapers over a period of five years revealed that the words ‘majority’ and ‘minority’ appeared 658 times. Majority headlines were the more frequent (66%) and were more likely to emphasize the numerical size of the majority, to link majority status with political groups, to be described with positive evaluations, and to cover political issues. By contrast, minority headlines were less frequent (34%), more likely to link minority status with ethnic groups and other social issues, and less likely to be described with positive evaluations. The implications of examining how real-life majorities and minorities are represented for our understanding of experimental research are discussed.
Abstract:
A multi-chromosome GA (Multi-GA) was developed, based upon concepts from the natural world, allowing improved flexibility in a number of areas including representation, genetic operators, their parameter rates, and real-world multi-dimensional applications. A series of experiments compared the performance of the Multi-GA to that of a traditional GA on a number of recognised and increasingly complex test optimisation surfaces, with promising results. Further experiments demonstrated the Multi-GA's flexibility through the use of non-binary chromosome representations and its applicability to dynamic parameterisation. A number of alternative and new methods of dynamic parameterisation were investigated, in addition to a new non-binary 'Quotient crossover' mechanism. Finally, the Multi-GA was applied to two real-world problems, demonstrating its ability to handle mixed-type chromosomes within an individual, the limited use of a chromosome-level fitness function, the introduction of new genetic operators for structural self-adaptation, and its viability as a serious real-world analysis tool. The first problem involved optimum placement of computers within a building, allowing the Multi-GA to use multiple chromosomes with different type representations and different operators in a single individual. The second problem, commonly associated with Geographical Information Systems (GIS), required a spatial analysis to locate the optimum number and distribution of retail sites over two different population grids. In applying the Multi-GA, two new genetic operators (addition and deletion) were developed and explored, resulting in the definition of a mechanism for self-modification of genetic material within the Multi-GA structure and a study of this behaviour.
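A minimal Python sketch of the multi-chromosome idea follows: each chromosome carries its own representation, operators and rates, and recombination proceeds chromosome by chromosome. The class names, operators and rates are invented for illustration; this is the concept, not the thesis's implementation.

```python
import random

class Chromosome:
    """One chromosome with its own representation and operator rates."""
    def __init__(self, genes, mutate, crossover, mutation_rate):
        self.genes = genes
        self.mutate = mutate          # per-chromosome mutation operator
        self.crossover = crossover    # per-chromosome crossover operator
        self.mutation_rate = mutation_rate

class MultiIndividual:
    """An individual carrying mixed-type chromosomes."""
    def __init__(self, chromosomes):
        self.chromosomes = chromosomes

    def mutate(self):
        for c in self.chromosomes:
            if random.random() < c.mutation_rate:
                c.mutate(c.genes)

    def crossover(self, other):
        """Recombine chromosome-by-chromosome with the matching operator."""
        children = []
        for a, b in zip(self.chromosomes, other.chromosomes):
            genes = a.crossover(a.genes, b.genes)
            children.append(Chromosome(genes, a.mutate, a.crossover,
                                       a.mutation_rate))
        return MultiIndividual(children)

# Mixed representations in one individual: a binary string and a real vector.
def flip(genes):        # binary mutation: toggle one bit in place
    i = random.randrange(len(genes)); genes[i] ^= 1
def perturb(genes):     # real-valued mutation: Gaussian nudge in place
    i = random.randrange(len(genes)); genes[i] += random.gauss(0, 0.1)
def one_point(a, b):    # one-point crossover shared by both types
    p = random.randrange(1, len(a)); return a[:p] + b[p:]

ind1 = MultiIndividual([Chromosome([0, 1] * 4, flip, one_point, 0.2),
                        Chromosome([0.5, -1.2, 3.0], perturb, one_point, 0.2)])
ind2 = MultiIndividual([Chromosome([1, 0] * 4, flip, one_point, 0.2),
                        Chromosome([2.0, 0.1, -0.7], perturb, one_point, 0.2)])
child = ind1.crossover(ind2)
child.mutate()
```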
Abstract:
Since much knowledge is tacit, eliciting knowledge is a common bottleneck during the development of knowledge-based systems. Visual interactive simulation (VIS) has been proposed as a means of eliciting experts' decision-making by getting them to interact with a visual simulation of the real system in which they work. In order to explore the effectiveness and efficiency of VIS-based knowledge elicitation, an experiment was carried out with decision-makers in a Ford Motor Company engine assembly plant. The model properties under investigation were the level of visual representation (2-dimensional, 2½-dimensional and 3-dimensional) and the model parameter settings (unadjusted, and adjusted to represent more uncommon and extreme situations). The conclusion from the experiment is that a 2-dimensional representation with adjusted parameter settings provides the most effective simulation-based means of eliciting knowledge, at least for the case modelled.
Abstract:
The past two decades have witnessed growing political disaffection and a widening mass/elite disjuncture in France, reflected in opinion polls, rising abstentionism, electoral volatility and fragmentation, with sustained voting against incumbent governments. Though the electoral system has preserved the duopoly of the mainstream coalitions, they have suffered loss of public confidence and swings in electoral support. Stable parliamentary majorities conceal a political landscape of assorted anti-system parties and growing support for far right and far left. The picture is paradoxical: the French express alienation from political parties yet relate positively to their political institutions; they berate national politicians but retain strong bonds with those elected locally; they appear increasingly disengaged from politics yet forms of ‘direct democracy’ are finding new vigour. While the electoral, attitudinal and systemic factors reviewed here may not signal a crisis of democracy, they point to serious problems of political representation in contemporary France.
Abstract:
In a certain automobile factory, batch-painting of the body types in colours is controlled by an allocation system. This tries to balance production with orders whilst making optimally-sized batches of colours. Sequences of cars entering painting cannot be optimised for easy selection of colour and batch size. 'Over-production' is not allowed, in order to reduce buffer stocks of unsold vehicles. Paint quality is degraded by random effects. This thesis describes a toolkit which supports IKBS in an object-centred formalism. The intended domain of use for the toolkit is flexible manufacturing. A sizeable application program was developed, using the toolkit, to test the validity of the IKBS approach in solving the real manufacturing problem above, for which an existing conventional program was already being used. A detailed statistical analysis of the operating circumstances of the program was made to evaluate the likely need for the more flexible type of program for which the toolkit was intended. The IKBS program captures the many disparate and conflicting constraints in the scheduling knowledge and emulates the behaviour of the program installed in the factory. In the factory system, many possible newly-discovered heuristics would be awkward to represent, and many new extensions would be impossible to make. The representation scheme is capable of admitting changes to the knowledge, relying on the inherent encapsulating properties of object-centred programming to protect and isolate data. The object-centred scheme is supported by an enhancement of the 'C' programming language and runs under BSD 4.2 UNIX. The structuring technique, using objects, provides a mechanism for separating the control of expression of rule-based knowledge from the knowledge itself, allowing explicit 'contexts' within which appropriate expression of knowledge can be done. Facilities are provided for acquisition of knowledge in a consistent manner.
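The separation of rule-based knowledge from its control via explicit contexts can be gestured at with a short sketch, given here in Python for brevity (the original toolkit was an enhancement of 'C'). All class and rule names are hypothetical, chosen only to echo the paint-batching domain.

```python
class Rule:
    """A unit of scheduling knowledge: a guard plus an action."""
    def __init__(self, name, condition, action):
        self.name, self.condition, self.action = name, condition, action

class Context:
    """An explicit context controlling which knowledge may be expressed."""
    def __init__(self, name, rules):
        self.name, self.rules = name, rules

    def run(self, batch):
        for rule in self.rules:
            if rule.condition(batch):
                rule.action(batch)

class PaintBatch:
    """Encapsulated state for one paint batch; data isolated in the object."""
    def __init__(self, colour, size, max_size):
        self.colour, self.size, self.max_size = colour, size, max_size
        self.closed = False

# Knowledge is held as rules, separate from the contexts that control it;
# adding a newly discovered heuristic means adding a Rule, not rewriting control.
close_when_full = Rule("close-when-full",
                       lambda b: b.size >= b.max_size,
                       lambda b: setattr(b, "closed", True))

normal_running = Context("normal-running", [close_when_full])

batch = PaintBatch(colour="red", size=12, max_size=12)
normal_running.run(batch)
print(batch.closed)   # True: the batch has reached its target size
```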
Abstract:
This thesis describes a novel connectionist machine utilizing induction by a Hilbert hypercube representation. This representation offers a number of distinct advantages, which are described. We construct a theoretical and practical learning machine which lies in an area of overlap between three disciplines (neural nets, machine learning and knowledge acquisition), hence it is referred to as a "coalesced" machine. To this unifying aspect is added the various advantages of its orthogonal lattice structure, as against less structured nets. We discuss the case for such a fundamental and low-level empirical learning tool, and the assumptions behind the machine are clearly outlined. Our theory of an orthogonal lattice structure, the Hilbert hypercube of an n-dimensional space using a complemented distributed lattice as a basis for supervised learning, is derived from first principles on clearly laid-out scientific principles. The resulting "subhypercube theory" was implemented in a development machine which was then used to test the theoretical predictions, again under strict scientific guidelines. The scope, advantages and limitations of this machine were tested in a series of experiments. Novel and seminal properties of the machine include: the "metrical", deterministic and global nature of its search; complete convergence, invariably producing minimum polynomial solutions for both disjuncts and conjuncts even with moderate levels of noise present; a learning engine which is mathematically analysable in depth, based upon the "complexity range" of the function concerned; a strong bias towards the simplest possible globally (rather than locally) derived "balanced" explanation of the data; the ability to cope with variables in the network; and new ways of reducing the exponential explosion. Performance issues were addressed, and comparative studies with other learning machines indicate that our novel approach has definite value and should be further researched.
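As a rough illustration of induction over a hypercube lattice, the following Python sketch generalises positive Boolean examples to the smallest lattice interval covering them, using bitwise meet and join on vertices of the n-cube. This interval-hull rule is a simplified stand-in assumed for illustration, not the thesis's subhypercube theory.

```python
from itertools import product

def meet(a, b):   # bitwise AND: greatest lower bound in the Boolean lattice
    return tuple(x & y for x, y in zip(a, b))

def join(a, b):   # bitwise OR: least upper bound in the Boolean lattice
    return tuple(x | y for x, y in zip(a, b))

def interval_hull(examples):
    """Smallest lattice interval [lo, hi] covering all positive examples.

    A vertex v of the n-cube is covered iff lo <= v <= hi componentwise;
    a crude stand-in for a globally derived 'balanced' explanation.
    """
    lo = hi = examples[0]
    for e in examples[1:]:
        lo, hi = meet(lo, e), join(hi, e)
    return lo, hi

def covers(lo, hi, v):
    return all(l <= x <= h for l, x, h in zip(lo, v, hi))

# Positive examples as vertices of the 4-dimensional hypercube.
positives = [(1, 0, 1, 0), (1, 1, 1, 0), (1, 0, 1, 1)]
lo, hi = interval_hull(positives)
covered = [v for v in product((0, 1), repeat=4) if covers(lo, hi, v)]
print(lo, hi, len(covered))   # (1,0,1,0) (1,1,1,1) -> 4 vertices covered
```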