62 results for Knowledge Information Objects
Abstract:
Before the emergence of coordination of production by firms, manufacturers and merchants traded in markets with asymmetric information. Evidence suggests that the practical knowledge thus gained by these agents was well in advance of that of contemporary political economists and anticipated twentieth-century developments in the economics of information. Charles Babbage, who regarded merchants and manufacturers as the chief sources of reliable economic data, drew on this knowledge, as revealed in the evidence presented by manufacturers and merchants to House of Commons select committees, to make an important pioneering contribution to the theory of production and exchange with information asymmetries.
Abstract:
In the area of child care policy and practice, the benefits for children who are separated from their birth parents of maintaining some form of connection with their family of origin are now widely accepted. The arguments in support of this are found mainly in research concerning adoption and stem from four inter-related themes: children's rights to know of their heritage and background; parents' rights to information about the well-being of their children; the benefits of having knowledge about origins; and concerns about the impact of not knowing. The effects on the developing identities of those who, for various reasons, are unlikely ever to know the details of their birth parent(s) are an under-researched issue. Karen Winter and Olivia Cohen use a case study to illustrate some of the gaps in knowledge in this area. They argue that there is much to be learnt from the development of research projects which focus on the accounts of children and young people, from a wide range of care arrangements, regarding identity issues where they have no connection with, or knowledge about, their birth parent(s).
Abstract:
This paper evaluates how long-term records could and should be utilized in conservation policy and practice. Traditionally, there has been extremely limited use of long-term ecological records (greater than 50 years) in biodiversity conservation. There are a number of reasons why such records tend to be discounted, including a perception of poor resolution in both time and space, and the inaccessibility of long temporal records to non-specialists. Probably more important, however, is the perception that even if suitable temporal records are available, their role is purely descriptive, simply demonstrating what has occurred before in Earth’s history, and of little use in the actual practice of conservation. This paper asks why this is the case and whether there is a place for the temporal record in conservation management. Key conservation initiatives related to extinctions, identification of regions of greatest diversity/threat, climate change and biological invasions are addressed. Examples of how a temporal record can add information of direct practical applicability to these issues are highlighted. These include (i) the identification of species at the end of their evolutionary lifespan and therefore most at risk of extinction, (ii) the setting of realistic goals and targets for conservation ‘hotspots’, and (iii) the identification of various management tools for the maintenance/restoration of a desired biological state. For climate change conservation strategies, the use of long-term ecological records in testing the predictive power of species envelope models is highlighted, along with the potential of fossil records to examine the impact of sea-level rise. It is also argued that a long-term perspective is essential for the management of biological invasions, not least in determining when an invasive is not an invasive.
The paper concludes that the inclusion of a long-term ecological perspective can often provide a more scientifically defensible basis for conservation decisions than one based only on contemporary records. The pivotal issue of this paper is not whether long-term records are of interest to conservation biologists, but how they can actually be utilized in conservation practice and policy.
Abstract:
Recently, several belief negotiation models have been introduced to deal with the problem of belief merging. A negotiation model usually consists of two functions: a negotiation function and a weakening function. The negotiation function chooses the weakest sources, which then weaken their point of view using the weakening function. However, the currently available belief negotiation models are based on classical logic, which makes it difficult to define weakening functions. In this paper, we define a prioritized belief negotiation model in the framework of possibilistic logic. The priority between formulae provides us with important information for deciding which beliefs should be discarded. The problem of merging uncertain information from different sources is then solved in two steps. First, beliefs in the original knowledge bases are weakened to resolve inconsistencies among them; this step is based on a prioritized belief negotiation model. Second, the knowledge bases obtained in the first step are combined using a conjunctive operator, which may have a reinforcement effect in possibilistic logic.
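The two-step scheme described above (weaken to resolve inconsistency, then combine conjunctively) can be sketched in miniature. The representation below, with bases as maps from propositional literals to necessity degrees and a product-based conjunctive operator, is a simplified illustration under invented values, not the paper's exact model:

```python
# Simplified sketch: possibilistic bases as {literal: necessity degree},
# where literals are strings like "p" and "~p".

def inconsistency_degree(b1, b2):
    """Largest level at which the union of the two bases contains both a
    literal and its negation."""
    union = {}
    for base in (b1, b2):
        for f, a in base.items():
            union[f] = max(union.get(f, 0.0), a)
    inc = 0.0
    for f, a in union.items():
        neg = f[1:] if f.startswith("~") else "~" + f
        if neg in union:
            inc = max(inc, min(a, union[neg]))
    return inc

def weaken(base, inc):
    """Step 1: discard formulae whose necessity does not exceed the
    inconsistency degree (a standard possibilistic cut)."""
    return {f: a for f, a in base.items() if a > inc}

def conjunctive_merge(b1, b2):
    """Step 2: product-based conjunctive combination with reinforcement:
    two sources agreeing on f yield a higher necessity than either alone."""
    out = dict(b1)
    for f, a in b2.items():
        out[f] = 1 - (1 - out[f]) * (1 - a) if f in out else a
    return out

b1 = {"p": 0.8, "q": 0.6}
b2 = {"~p": 0.4, "q": 0.5}
inc = inconsistency_degree(b1, b2)                      # min(0.8, 0.4) = 0.4
merged = conjunctive_merge(weaken(b1, inc), weaken(b2, inc))
```

Here the weaker source drops `~p`, and the shared belief `q` is reinforced from 0.6 and 0.5 to 0.8.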
Abstract:
Implementing time analysis methods quickly and accurately in the era of digital manufacturing has become a significant challenge for aerospace manufacturers hoping to build and maintain a competitive advantage. This paper proposes a structure-oriented, knowledge-based approach for intelligent time analysis of aircraft assembly processes within a digital manufacturing framework. A knowledge system is developed so that design knowledge can be intelligently retrieved to implement assembly time analysis automatically. A time estimation method based on MOST (Maynard Operation Sequence Technique) is reviewed and employed. Knowledge capture, transfer and storage within the digital manufacturing environment are extensively discussed. Configured plan types, GUIs and functional modules are designed and developed for the automated time analysis. An exemplar study using an aircraft panel assembly from a regional jet is also presented. Although the method currently focuses on aircraft assembly, it can also be well utilized in other industry sectors, such as transportation, automotive and shipbuilding. The main contribution of the work is a methodology that facilitates the integration of time analysis with design and manufacturing using a digital manufacturing platform solution.
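As a rough illustration of the MOST idea mentioned above (not the paper's implementation): in basic MOST, an activity's time in TMU is the sum of the sequence-model index values multiplied by ten, with 1 TMU = 0.036 s. The index values below are hypothetical:

```python
TMU_SECONDS = 0.036  # standard conversion: 1 TMU = 0.036 seconds

def most_general_move(indices):
    """Basic MOST General Move (sequence model A B G  A B P  A): the
    activity time in TMU is the sum of the index values times ten."""
    return sum(indices) * 10  # TMU

# Hypothetical step from an assembly plan: A1 B0 G1  A1 B0 P3  A0
tmu = most_general_move([1, 0, 1, 1, 0, 3, 0])   # 60 TMU
seconds = tmu * TMU_SECONDS                      # 2.16 s
```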
Abstract:
Across the UK, recent policy developments have focused on improved information sharing and inter-agency cooperation. Professional non-reporting of child maltreatment concerns has been consistently highlighted as a problem in a range of countries, and the research literature indicates that this can happen for a variety of reasons. Characteristics such as the type of abuse and the threshold of evidence available are key factors, as are concerns that reporting will damage the professional-client relationship. Professional discipline can also impact on willingness to report, as can personal beliefs about abuse, attitudes towards child protection services and experiences of court processes. Research examining the role of organisational factors in information sharing and reporting emphasises the importance of training, and there are some positive indications that training can increase professional awareness of reporting processes and requirements and help to increase knowledge of child abuse and its symptoms. Nonetheless, this is a complex issue, and the need for training to go beyond simple awareness raising is recognised. In order to tackle non-reporting in a meaningful way, childcare professionals need access to ongoing multidisciplinary training which is specifically tailored to address the range of different factors which impact on reporting attitudes and behaviours.
Abstract:
Accurate estimates of the time-to-contact (TTC) of approaching objects are crucial for survival. We used an ecologically valid driving simulation to compare and contrast the neural substrates of egocentric (head-on approach) and allocentric (lateral approach) TTC tasks in a fully factorial, event-related fMRI design. Compared to colour control tasks, both egocentric and allocentric TTC tasks activated left ventral premotor cortex/frontal operculum and inferior parietal cortex, the same areas that have previously been implicated in temporal attentional orienting. Despite differences in visual and cognitive demands, both TTC and temporal orienting paradigms encourage the use of temporally predictive information to guide behaviour, suggesting these areas may form a core network for temporal prediction. We also demonstrated that the temporal derivative of the perceptual index tau (tau-dot) held predictive value for making collision judgements and varied inversely with activity in primary visual cortex (V1). Specifically, V1 activity increased with the increasing likelihood of reporting a collision, suggesting top-down attentional modulation of early visual processing areas as a function of subjective collision. Finally, egocentric viewpoints provoked a response bias for reporting collisions, rather than no-collisions, reflecting increased caution for head-on approaches. Associated increases in SMA activity suggest motor preparation mechanisms were engaged, despite the perceptual nature of the task.
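The perceptual quantities involved can be illustrated with a minimal sketch: first-order time-to-contact (tau) as remaining distance over closing speed, and tau-dot as its finite-difference temporal derivative (for a constant-speed approach, tau-dot is -1). This is a generic illustration, not the study's stimulus computation:

```python
def tau(distance, closing_speed):
    """First-order time-to-contact: remaining distance over closing speed."""
    return distance / closing_speed

def tau_dot(d0, d1, v0, v1, dt):
    """Finite-difference estimate of tau's temporal derivative (tau-dot)
    from two samples taken dt seconds apart."""
    return (tau(d1, v1) - tau(d0, v0)) / dt

ttc = tau(10.0, 5.0)                        # 2.0 s to contact
td = tau_dot(10.0, 9.5, 5.0, 5.0, 0.1)      # ~ -1 for a constant-speed approach
```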
Abstract:
The importance and use of text extraction from camera-based coloured scene images is rapidly increasing. Text within a camera-grabbed image can contain a large amount of metadata about that scene. Such metadata can be useful for identification, indexing and retrieval purposes. While the segmentation and recognition of text from document images is quite successful, detection of coloured scene text remains a challenge for camera-based images. Common problems for text extraction from camera-based images are the lack of prior knowledge of text features such as colour, font, size and orientation, as well as of the location of the probable text regions. In this paper, we document the development of a fully automatic and robust text segmentation technique that can be used for any type of camera-grabbed frame, be it a single image or video. A new algorithm is proposed which can overcome the current problems of text segmentation. The algorithm exploits text appearance in terms of colour and spatial distribution. When the new text extraction technique was tested on a variety of camera-based images, it was found to outperform existing techniques. The proposed technique also overcomes problems that can arise from an unconstrained complex background. The novelty of the work arises from the fact that this is the first time that colour and spatial information are used simultaneously for the purpose of text extraction.
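The colour-plus-spatial idea can be caricatured in a few lines: bucket pixels by coarse colour, split each bucket into spatial connected components, and keep components large enough to be text candidates. This toy sketch is not the proposed algorithm, and the thresholds are invented:

```python
def colour_bucket(rgb, step=64):
    """Coarse colour quantisation: pixels in the same bucket are treated
    as the same colour."""
    return tuple(c // step for c in rgb)

def components(pixels):
    """4-connected components over a set of (x, y) pixel coordinates."""
    remaining, comps = set(pixels), []
    while remaining:
        seed = remaining.pop()
        comp, stack = {seed}, [seed]
        while stack:
            x, y = stack.pop()
            for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if n in remaining:
                    remaining.remove(n)
                    comp.add(n)
                    stack.append(n)
        comps.append(comp)
    return comps

def candidate_text_regions(image, min_size=3):
    """image: dict mapping (x, y) -> (r, g, b). Groups pixels by coarse
    colour, then keeps spatially connected groups above a size threshold."""
    by_colour = {}
    for xy, rgb in image.items():
        by_colour.setdefault(colour_bucket(rgb), set()).add(xy)
    regions = []
    for pixels in by_colour.values():
        regions += [c for c in components(pixels) if len(c) >= min_size]
    return regions

# Tiny synthetic image: a dark horizontal run (text-like), an isolated
# dark speck, and two background pixels.
image = {(x, 0): (0, 0, 0) for x in range(4)}
image[(10, 10)] = (0, 0, 0)
image[(0, 1)] = image[(1, 1)] = (255, 255, 255)
regions = candidate_text_regions(image)   # only the 4-pixel run survives
```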
Abstract:
Ligand prediction has been driven by a fundamental desire to understand more about how biomolecules recognize their ligands and by the commercial imperative to develop new drugs. Most currently available software systems are complex and time-consuming to use. Therefore, developing simple and efficient tools to perform initial screening of interesting compounds is appealing. In this paper, we introduce a tool for very rapid screening of likely ligands (either substrates or inhibitors) based on reasoning with imprecise probabilistic knowledge elicited from past experiments. Probabilistic knowledge is input to the system via a user-friendly interface showing a base compound structure. A prediction of whether a particular compound is a substrate is queried against the acquired probabilistic knowledge base, and a probability is returned as an indication of the prediction. This tool will be particularly useful in situations where a number of similar compounds have been screened experimentally, but information is not available for all possible members of that group of compounds. We use two case studies to demonstrate how to use the tool.
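A toy illustration of interval-probability screening (not the actual tool): elicited knowledge is stored as probability intervals per structural feature, and a query crudely conjoins the intervals of a compound's matching features under an assumed independence. The feature names and intervals are hypothetical:

```python
# Hypothetical elicited knowledge: P(substrate | feature) as an interval,
# reflecting imprecision in the past experimental evidence.
knowledge = {"hydroxyl_at_R1": (0.7, 0.9), "methyl_at_R2": (0.5, 0.8)}

def combine(intervals):
    """Multiply lower and upper bounds pairwise, assuming the features are
    independent -- a deliberately crude stand-in for proper
    imprecise-probability propagation."""
    lo, hi = 1.0, 1.0
    for a, b in intervals:
        lo *= a
        hi *= b
    return (lo, hi)

def predict(features):
    """Return a probability interval for 'compound is a substrate' from
    the features of the queried compound that appear in the knowledge base."""
    return combine([knowledge[f] for f in features if f in knowledge])

lo, hi = predict(["hydroxyl_at_R1", "methyl_at_R2"])   # (0.35, 0.72)
```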
Abstract:
In this paper we investigate the relationship between two prioritized knowledge bases by measuring both the conflict and the agreement between them. First of all, a quantity of conflict and two quantities of agreement are defined. The former is shown to be a generalization of the well-known Dalal distance, which is the Hamming distance between two interpretations. The latter are, respectively, a quantity of strong agreement, which measures the amount of information on which two belief bases “totally” agree, and a quantity of weak agreement, which measures the amount of information that is believed by one source but is unknown to the other. All three quantity measures are based on the weighted prime implicant, which represents beliefs in a prioritized belief base. We then define a degree of conflict and two degrees of agreement based on our quantity of conflict and quantities of agreement. We also consider the impact of these measures on belief merging and information source ordering.
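The Dalal distance underlying the conflict measure, together with a crude quantity of conflict built on it, can be sketched as follows (a simplified stand-in for the paper's weighted-prime-implicant machinery):

```python
def dalal_distance(w1, w2):
    """Hamming distance between two interpretations, given as dicts
    mapping atoms to truth values (both defined on the same atoms)."""
    return sum(1 for atom in w1 if w1[atom] != w2[atom])

def conflict(models1, models2):
    """Toy quantity of conflict: the minimal Dalal distance between any
    model of one base and any model of the other; 0 means the two bases
    are jointly consistent."""
    return min(dalal_distance(w1, w2) for w1 in models1 for w2 in models2)

d = dalal_distance({"p": True, "q": False}, {"p": False, "q": False})  # 1
```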
Abstract:
A method is discussed for measuring the acoustic impedance of tubular objects that gives accurate results for a wide range of frequencies. The apparatus that is employed is similar to that used in many previously developed methods; it consists of a cylindrical measurement duct fitted with several microphones, of which two are active in each measurement session, and a driver at one of its ends. The object under study is fitted at the other end. The impedance of the object is determined from the microphone signals obtained during excitation of the air inside the duct by the driver, and from three coefficients that are pre-determined using four calibration measurements with closed cylindrical tubes. The calibration procedure is based on the simple mathematical relationships between the impedances of the calibration tubes, and does not require knowledge of the propagation constant. Measurements with a cylindrical tube yield an estimate of the attenuation constant for plane waves, which is found to differ from the theoretical prediction by less than 1.4% in the frequency range 1 kHz-20 kHz. Impedance measurements of objects with abrupt changes in diameter are found to be in good agreement with multimodal theory.
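For orientation, the classical (uncalibrated) two-microphone transfer-function method that this approach refines can be sketched as follows; the self-check synthesizes microphone pressures for a known reflection coefficient and recovers it. The distance conventions (microphones at x1 and x1 - s from the object) are assumptions of this sketch, not the paper's calibrated three-coefficient procedure:

```python
import cmath

def reflection_from_transfer(H12, k, s, x1):
    """Plane-wave reflection coefficient at the object from the complex
    transfer function H12 = p2/p1 of two microphones located x1 and
    x1 - s from the object (textbook two-microphone method)."""
    R = (H12 - cmath.exp(-1j * k * s)) / (cmath.exp(1j * k * s) - H12)
    return R * cmath.exp(2j * k * x1)

def impedance(R, rho_c=1.0):
    """Acoustic impedance at the object from its reflection coefficient,
    with rho_c the characteristic impedance of the medium."""
    return rho_c * (1 + R) / (1 - R)

# Self-check: synthesize the two microphone pressures for a known
# reflection coefficient and verify that it is recovered.
k, s, x1 = 36.0, 0.05, 0.3          # wavenumber, mic spacing, mic-1 distance
R_true = 0.4 + 0.1j
p = lambda x: cmath.exp(1j * k * x) + R_true * cmath.exp(-1j * k * x)
H12 = p(x1 - s) / p(x1)
R_est = reflection_from_transfer(H12, k, s, x1)
```

Note that this textbook formulation assumes a known (lossless or theoretically modelled) propagation constant, which is precisely the requirement the paper's calibration procedure removes.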
Abstract:
Purpose – This paper explores the factors which determine the degree of knowledge transfer in inter-firm new product development projects. We test a theoretical model exploring how inter-firm knowledge transfer is enabled or hindered by a buyer’s learning intent, the degree of supplier protectiveness, inter-firm knowledge ambiguity, and absorptive capacity. Design/methodology/approach – A sample of 153 R&D-intensive manufacturing firms in the UK automotive, aerospace, pharmaceutical, electrical, chemical, and general manufacturing industries was used to test the framework. Two-step structural equation modeling in AMOS 7.0 was used to analyse the data. Findings – Our results indicate that a buyer’s learning intent increases inter-firm knowledge transfer, but also acts as an incentive for suppliers to protect their knowledge. Such defensive measures increase the degree of inter-firm knowledge ambiguity, encouraging buyer firms to invest in absorptive capacity as a means to interpret supplier knowledge, which in turn also increases the degree of knowledge transfer. Practical implications – Our paper illustrates the effects of focusing on acquiring, rather than accessing, supplier technological knowledge. We show that an overt learning strategy can be detrimental to knowledge transfer between buyer and supplier, as suppliers react by restricting the flow of information. Organisations are encouraged to consider this dynamic when engaging in multi-organisational new product development projects. Originality/value – This paper examines the dynamics of knowledge transfer within inter-firm NPD projects, showing how transfer is influenced by the buyer firm’s learning intention, the supplier’s response, and the characteristics of the relationship and of the knowledge to be transferred.