915 results for Estoppel by representation
Levinasian ethics and the representation of the other in international and cross-cultural management
Abstract:
In this paper, we seek to further the discussion, problematization and critique of west/east identity relations in ICM studies by considering the ethics of the relationship – an issue never far beneath the surface in discussions of Orientalism. In particular, we seek both to examine and to question the ethics of representation in relation to a critique of what has come to be known as international and cross-cultural management (ICM). To pursue such a discussion, we draw specifically on the ethical elaborations of Emmanuel Levinas as well as his chief interlocutors Jacques Derrida and Zygmunt Bauman. The value of this discussion, we propose, is that Levinas offers a philosophy that holds as its central concept the relationship between the self and Other as the primary ethical and pre-ontological relation. Levinas’ philosophy provides a means of extending the post-colonial critique of ICM, and ICM provides a context in which Levinasian ethics can be brought to bear on a significant issue in contemporary business and management.
Abstract:
To determine functional dependencies between two proteins, both represented as 3D structures, it is an essential condition that they have one or more matching structural regions called patches. As 3D protein structures are large, complex and constantly evolving, it is computationally expensive and very time-consuming to identify the possible locations and sizes of patches for a given protein against a large protein database. In this paper, we address a vector-space-based representation for protein structures, where a patch is formed by the vectors within a region. Based on our previous work, a compact representation of the patch, named the patch signature, is applied here. A similarity measure for two patches is then derived from their signatures. To achieve fast patch matching in large protein databases, a match-and-expand strategy is proposed. Given a query patch, a set of small k-sized matching patches, called candidate patches, is generated in the match stage. The candidate patches are further filtered by enlarging k in the expand stage. Our extensive experimental results demonstrate encouraging performance on this biologically critical but previously computationally prohibitive problem.
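The match-and-expand strategy can be sketched as follows. This is a minimal illustration, not the authors' actual method: the signature here (a sorted list of pairwise distances between a patch's 3D points), the use of a patch's first k points as its k-sized sub-patch, and the names `patch_signature`, `similarity` and `match_and_expand` are all assumptions made for the example.

```python
# Hedged sketch of a match-and-expand patch search, assuming a
# pairwise-distance signature; illustrative only.
import math
from itertools import combinations

def patch_signature(patch):
    """Compact signature: sorted pairwise distances between the patch's 3D points."""
    return sorted(math.dist(a, b) for a, b in combinations(patch, 2))

def similarity(sig_a, sig_b):
    """Similarity of two equally sized signatures (higher is closer)."""
    return -sum(abs(x - y) for x, y in zip(sig_a, sig_b))

def match_and_expand(query, db_patches, k=3, threshold=-1.0):
    # Match stage: compare cheap k-sized sub-patch signatures.
    q_sig = patch_signature(query[:k])
    candidates = [p for p in db_patches
                  if similarity(patch_signature(p[:k]), q_sig) > threshold]
    # Expand stage: enlarge k (here to the full query size) and re-filter.
    full_sig = patch_signature(query)
    return [p for p in candidates
            if len(p) >= len(query)
            and similarity(patch_signature(p[:len(query)]), full_sig) > threshold]
```

The point of the two stages is that the cheap k-sized comparison prunes most of the database before the more expensive full-size comparison runs.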
Abstract:
Current database technologies do not support contextualised representations of multi-dimensional narratives. This paper outlines a new approach to this problem using a multi-dimensional database served in a 3D game environment. Preliminary results indicate it is a particularly efficient method for the types of contextualised narratives used by Australian Aboriginal peoples to tell their stories about their traditional landscapes and knowledge practices. We discuss the development of a tool that complements rather than supplants direct experience of these traditional knowledge practices.
Abstract:
States or state sequences in neural network models are made to represent concepts from applications. This paper motivates, introduces and discusses a formalism for denoting such representations; a representation for representations. The formalism is illustrated by using it to discuss the representation of variable binding and inference abstractly, and then to present four specific representations. One of these is an apparently novel hybrid of phasic and tensor-product representations which retains the desirable properties of each.
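One of the representations mentioned above, the tensor-product scheme, can be illustrated concretely. The sketch below is a standard textbook-style example of tensor-product variable binding, not the paper's hybrid formalism: roles (variables) are bound to fillers (values) by an outer product, bindings are superposed by addition, and unbinding with an orthonormal role vector recovers the filler. All vector names and sizes are assumptions.

```python
# Illustrative tensor-product variable binding; roles and fillers are assumed.
def outer(role, filler):
    # Bind a role vector to a filler vector via the outer product.
    return [[r * f for f in filler] for r in role]

def superpose(a, b):
    # Superpose two bindings by elementwise addition.
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def unbind(memory, role):
    # With orthonormal role vectors, the inner product recovers the filler exactly.
    return [sum(r * memory[i][j] for i, r in enumerate(role))
            for j in range(len(memory[0]))]

r1, r2 = [1.0, 0.0], [0.0, 1.0]   # orthonormal roles (variables)
f1, f2 = [2.0, 3.0], [5.0, 7.0]   # fillers (values)
memory = superpose(outer(r1, f1), outer(r2, f2))
assert unbind(memory, r1) == f1 and unbind(memory, r2) == f2
```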
Abstract:
We develop an approach for a sparse representation for Gaussian Process (GP) models in order to overcome the limitations of GPs caused by large data sets. The method is based on a combination of a Bayesian online algorithm together with a sequential construction of a relevant subsample of the data which fully specifies the prediction of the model. Experimental results on toy examples and large real-world datasets indicate the efficiency of the approach.
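The core idea of an online, sequentially constructed subsample can be sketched as below. This is a deliberate simplification of the paper's method: instead of the Bayesian projection-based score, novelty is measured here as kernel distance to the points already kept, and the function names, threshold and RBF kernel are assumptions for illustration.

```python
# Hedged sketch: online selection of a relevant subsample under an RBF kernel.
# The novelty criterion is a simplification, not the authors' Bayesian score.
import math

def rbf(x, y, length_scale=1.0):
    return math.exp(-((x - y) ** 2) / (2 * length_scale ** 2))

def select_subsample(stream, tol=0.5):
    """Process points online; keep only those poorly covered by the subset so far."""
    basis = []
    for x in stream:
        # Coverage: best kernel similarity to any point already in the basis.
        coverage = max((rbf(x, b) for b in basis), default=0.0)
        if 1.0 - coverage > tol:
            basis.append(x)
    return basis

data = [0.0, 0.05, 0.1, 3.0, 3.02, 6.0]
subset = select_subsample(data)   # near-duplicates are dropped
```

Because each incoming point is tested against the current basis only, the subset that carries the prediction stays small even as the stream grows.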
Abstract:
Graphic depiction is an established method for academics to present concepts about theories of innovation. These expressions have been adopted by policy-makers, the media and businesses. However, there has been little research on the extent of their usage or effectiveness outside academia. In addition, innovation theorists have ignored this area of study, despite the communication of information about innovation being acknowledged as a major determinant of success for corporate enterprise. The thesis explores some major themes in the theories of innovation and compares how graphics are used to represent them. The thesis examines the contribution of visual sociology and graphic theory to an investigation of a sample of graphics. The methodological focus is a modified content analysis. The following expressions are explored: check lists, matrices, maps and mapping in the management of innovation; models, flow charts, organisational charts and networks in the innovation process; and curves and cycles in the representation of performance and progress. The main conclusion is that academia is leading the way in usage as well as novelty. The graphic message is switching from prescription to description. The computerisation of graphics has created a major role for the information designer. It is recommended that use of the graphic representation of innovation should be increased in all domains, though it is conceded that its content and execution need to improve, too. Education of graphic 'producers', 'intermediaries' and 'consumers' will play a part in this, as will greater exploration of diversity, novelty and convention. Work has begun to tackle this and suggestions for future research are made.
Abstract:
This paper explores the representation of the first African World Cup in the British and South African press. Drawing on the output of a variety of media outlets between 2004, when South Africa was awarded the right to host the 2010 event, and the culmination of the tournament in July 2010, this paper contends that a range of representations of Africa have been put forward by the British and South African media. These can be interpreted as alarmist, sensationalist and even racist in certain extreme instances, and hypernationalist and overly defensive in other cases. © 2012 Copyright Taylor and Francis Group, LLC.
Abstract:
A content analysis examined the way majorities and minorities are represented in the British press. An analysis of the headlines of five British newspapers, over a period of five years, revealed that the words ‘majority’ and ‘minority’ appeared 658 times. Majority headlines were more frequent (66%), more likely to emphasize the numerical size of the majority, to link majority status with political groups, to be described with positive evaluations, and to cover political issues. By contrast, minority headlines were less frequent (34%), more likely to link minority status with ethnic groups and other social issues, and less likely to be described with positive evaluations. The implications of examining how real-life majorities and minorities are represented for our understanding of experimental research are discussed.
Abstract:
Since much knowledge is tacit, eliciting knowledge is a common bottleneck during the development of knowledge-based systems. Visual interactive simulation (VIS) has been proposed as a means for eliciting experts’ decision-making by getting them to interact with a visual simulation of the real system in which they work. In order to explore the effectiveness and efficiency of VIS based knowledge elicitation, an experiment has been carried out with decision-makers in a Ford Motor Company engine assembly plant. The model properties under investigation were the level of visual representation (2-dimensional, 2½-dimensional and 3-dimensional) and the model parameter settings (unadjusted and adjusted to represent more uncommon and extreme situations). The conclusion from the experiment is that using a 2-dimensional representation with adjusted parameter settings provides the better simulation-based means for eliciting knowledge, at least for the case modelled.
Abstract:
In a certain automobile factory, batch-painting of the body types in colours is controlled by an allocation system. This tries to balance production with orders, whilst making optimally-sized batches of colours. Sequences of cars entering painting cannot be optimised for easy selection of colour and batch size. `Over-production' is not allowed, in order to reduce buffer stocks of unsold vehicles. Paint quality is degraded by random effects. This thesis describes a toolkit which supports intelligent knowledge-based systems (IKBS) in an object-centred formalism. The intended domain of use for the toolkit is flexible manufacturing. A sizeable application program was developed, using the toolkit, to test the validity of the IKBS approach in solving the real manufacturing problem above, for which an existing conventional program was already being used. A detailed statistical analysis of the operating circumstances of the program was made to evaluate the likely need for the more flexible type of program for which the toolkit was intended. The IKBS program captures the many disparate and conflicting constraints in the scheduling knowledge and emulates the behaviour of the program installed in the factory. In the factory system, many possible, newly-discovered heuristics would be awkward to represent and many new extensions would be impossible to make. The representation scheme is capable of admitting changes to the knowledge, relying on the inherent encapsulating properties of object-centred programming to protect and isolate data. The object-centred scheme is supported by an enhancement of the `C' programming language and runs under BSD 4.2 UNIX. The structuring technique, using objects, provides a mechanism for separating control of the expression of rule-based knowledge from the knowledge itself, allowing explicit `contexts' within which appropriate expression of knowledge can be done. Facilities are provided for acquisition of knowledge in a consistent manner.
Abstract:
This research was conducted at the Space Research and Technology Centre of the European Space Agency (ESA) at Noordwijk in the Netherlands. ESA is an international organisation that brings together a range of scientists, engineers and managers from 14 European member states. The motivation for the work was to enable decision-makers, in a culturally and technologically diverse organisation, to share information for the purpose of making decisions that are well informed about the risk-related aspects of the situations they seek to address. The research examined the use of decision support system (DSS) technology to facilitate decision-making of this type. This involved identifying the technology available and its application to risk management. Decision-making is a complex activity that does not lend itself to exact measurement or precise understanding at a detailed level. In view of this, a prototype DSS was developed through which to understand the practical issues to be accommodated and to evaluate alternative approaches to supporting decision-making of this type. The problem of measuring the effect upon the quality of decisions has been approached through expert evaluation of the software developed. The practical orientation of this work was informed by a review of the relevant literature in decision-making, risk management, decision support and information technology. Communication and information technology unite the major themes of this work. This allows correlation of the interests of the research with European public policy. The principles of communication were also considered in the topic of information visualisation - this emerging technology exploits flexible modes of human-computer interaction (HCI) to improve the cognition of complex data. Risk management is itself an area characterised by complexity, and risk visualisation is advocated for application in this field of endeavour.
The thesis provides recommendations for future work in the fields of decision-making, DSS technology and risk management.
Abstract:
Following Andersen's (1986, 1991) study of untutored anglophone learners of Spanish, aspectual features have been at the centre of hypotheses on the development of past verbal morphology in language acquisition. The Primacy of Aspect Hypothesis (PAH) claims that the association of any verb category (Aktionsart) with any aspect (perfective or imperfective) constitutes the endpoint of acquisition. However, its predictions rely on the observation of a limited number of untutored learners at the early stages of their acquisition, and have yet to be confirmed in other settings. The aim of the present thesis is to evaluate the explanatory power of the PAH in respect of the acquisition of French past tenses, an aspect of the language which constitutes a serious stumbling block for foreign learners, even those at the highest levels of proficiency (Coppieters 1987). The present research applies the PAH to the production of 61 anglophone 'advanced learners' (as defined in Bartning 1997) in a tutored environment. In so doing, it tests concurrent explanations, including the influence of the input, the influence of chunking, and the hypothesis of cyclic development. Finally, it discusses the cotextual and contextual factors that still provoke what Andersen (1991) terms "non-native glitches" at the final stage, as predicted by the PAH. The first part of the thesis provides the theoretical background to the corpus analysis. It opens with a diachronic presentation of the French past tense system, focusing on present areas of competition and developments that emphasize the complexity of the system to be acquired. The concepts of time, grammatical aspect and lexical aspect (Aktionsart) are introduced and discussed in the second chapter, and a distinctive formal representation of the French past tenses is offered in the third chapter. The second part of the thesis is devoted to a corpus analysis.
The data-gathering procedures and the choice of tasks (oral and written film narratives based on Modern Times, cloze tests and acceptability judgement tests) are described and justified in the research methodology chapter. The research design was shaped by previous studies and consequently allows comparison with these. The second chapter is devoted to the narrative analysis and the third to the grammatical tasks. This section closes with a summary of findings and a comparison with previous results. The conclusion addresses the initial research questions in the light of both theory and practice. It shows that the PAH fails to account for the complex phenomenon of past tense development in the acquisitional settings under study, as it adopts a local (the verb phrase) and linear (steady progression towards native usage) approach. It is thus suggested that past tense acquisition instead follows a pendular development, as learners reformulate their learning hypotheses and become increasingly able to shift from local to global cues and so to integrate the influence of cotext and context in their tense choice.
Abstract:
Web document cluster analysis plays an important role in information retrieval by organizing large amounts of documents into a small number of meaningful clusters. Traditional web document clustering is based on the Vector Space Model (VSM), which takes into account only two-level (document and term) knowledge granularity but ignores the bridging paragraph granularity. However, this two-level granularity may lead to unsatisfactory clustering results with “false correlation”. In order to deal with this problem, a Hierarchical Representation Model with Multi-granularity (HRMM), which consists of a five-layer representation of data and a two-phase clustering process, is proposed based on granular computing and article structure theory. To deal with the zero-valued similarity problem resulting from the sparse term-paragraph matrix, an ontology-based strategy and a tolerance-rough-set-based strategy are introduced into HRMM. By using granular computing, structural knowledge hidden in documents can be captured more efficiently and effectively in HRMM, and thus web document clusters of higher quality can be generated. Extensive experiments show that HRMM, HRMM with the tolerance-rough-set strategy, and HRMM with ontology all significantly outperform VSM and a representative non-VSM-based algorithm, WFP, in terms of the F-Score.
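The "false correlation" problem that motivates the paragraph granularity can be shown with a toy example. This sketch is not HRMM itself: the set-based Jaccard similarity and the function names are assumptions chosen to make the contrast between document-level and paragraph-level comparison visible.

```python
# Hedged illustration of "false correlation": two documents share all terms at
# the document level, yet no single pair of paragraphs discusses the same topic.
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def doc_level_sim(doc_a, doc_b):
    # VSM-style two-level view: flatten paragraphs into one bag of terms.
    return jaccard(set().union(*doc_a), set().union(*doc_b))

def paragraph_level_sim(doc_a, doc_b):
    # Intermediate paragraph granularity: best-matching paragraph pair.
    return max(jaccard(pa, pb) for pa in doc_a for pb in doc_b)

# Each document is a list of paragraphs; each paragraph is a term set.
doc_a = [{"granular", "computing"}, {"web", "cluster"}]
doc_b = [{"granular", "web"}, {"computing", "cluster"}]
```

Here `doc_level_sim` reports the documents as identical (similarity 1.0), while `paragraph_level_sim` reveals that no paragraph pair overlaps by more than a third, which is the kind of mismatch the paragraph layer is meant to expose.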
Abstract:
This article characterizes key weaknesses in the ability of current digital libraries to support scholarly inquiry, and as a way to address these, proposes computational services grounded in semiformal models of the naturalistic argumentation commonly found in research literatures. It is argued that a design priority is to balance formal expressiveness with usability, making it critical to coevolve the modeling scheme with appropriate user interfaces for argument construction and analysis. We specify the requirements for an argument modeling scheme for use by untrained researchers and describe the resulting ontology, contrasting it with other domain modeling and semantic web approaches, before discussing passive and intelligent user interfaces designed to support analysts in the construction, navigation, and analysis of scholarly argument structures in a Web-based environment. © 2007 Wiley Periodicals, Inc. Int J Int Syst 22: 17–47, 2007.
Abstract:
In this thesis we present an overview of sparse approximations of grey-level images. The sparse representations are realized by classic, Matching Pursuit (MP) based, greedy selection strategies. One such technique, termed Orthogonal Matching Pursuit (OMP), is shown to be suitable for producing sparse approximations of images if they are processed in small blocks. When the blocks are enlarged, the proposed Self Projected Matching Pursuit (SPMP) algorithm successfully renders results equivalent to OMP. A simple coding algorithm is then proposed to store these sparse approximations. This is shown, under certain conditions, to be competitive with the JPEG2000 image compression standard. An application termed image folding, which partially secures the approximated images, is then proposed. This is extended to produce a self-contained folded image, containing all the information required to perform image recovery. Finally, a modified OMP selection technique is applied to produce sparse approximations of Red Green Blue (RGB) images. These RGB approximations are then folded with the self-contained approach.
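The greedy MP-based selection underlying these sparse approximations can be sketched for a tiny 1-D signal. This is plain Matching Pursuit rather than the thesis's OMP or SPMP variants (it deflates the residual without re-orthogonalising the selected atoms), and the two-sample dictionary and stopping rule are illustrative assumptions.

```python
# Hedged sketch of Matching Pursuit on a 1-D signal; dictionary is assumed.
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def matching_pursuit(signal, dictionary, n_atoms=2):
    """Greedily pick the atom most correlated with the residual, then deflate."""
    residual = list(signal)
    approx = [0.0] * len(signal)
    for _ in range(n_atoms):
        # Atoms are assumed unit-norm, so the inner product is the coefficient.
        coeffs = [dot(residual, atom) for atom in dictionary]
        best = max(range(len(dictionary)), key=lambda i: abs(coeffs[i]))
        c = coeffs[best]
        for j, a in enumerate(dictionary[best]):
            approx[j] += c * a
            residual[j] -= c * a
    return approx, residual

# Unit-norm atoms: the standard basis plus one diagonal atom.
d = [[1.0, 0.0], [0.0, 1.0], [1 / math.sqrt(2), 1 / math.sqrt(2)]]
approx, residual = matching_pursuit([3.0, 3.0], d, n_atoms=1)
```

With the signal `[3.0, 3.0]`, the diagonal atom is the most correlated, so a single greedy step already drives the residual to (numerically) zero; OMP would additionally re-fit the coefficients of all selected atoms at each step.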