839 results for Knowledge-based industry


Relevance:

80.00%

Publisher:

Abstract:

In present-day knowledge societies, political decisions are often justified on the basis of scientific expertise. Traditionally, a linear relation between knowledge production and application was postulated, which would lead, with more and better science, to better policies. Empirical studies in Science and Technology Studies have essentially demolished this idea. However, it is still powerful, not least among practitioners working in fields where decision making is based on large doses of expert knowledge. Based on conceptual work in the field of Science and Technology Studies (STS), I shall examine two cases of global environmental governance: ozone layer protection and global climate change. I will argue that hybridization and purification are important for two major forms of scientific expertise. One is delivered through scientific advocacy (by individual scientists or groups of scientists), the other through expert committees, i.e. institutionalized forms of collecting and communicating expertise to decision makers. Based on this analysis, lessons will be drawn, also with regard to the stalling efforts at establishing an international forestry regime.

Relevance:

80.00%

Publisher:

Abstract:

In practical terms, any result obtained using an ordered weighted averaging (OWA) operator depends heavily on the method used to determine the weighting vector. Several approaches for obtaining the associated weights have been suggested in the literature, none of which takes the preference of alternatives into account. This paper presents a method for determining the OWA weights when the preferences of alternatives across all the criteria are considered. An example is given to illustrate the method, and an application to an internet search engine shows the use of this new OWA operator.
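
As an illustration of the aggregation step the abstract refers to (standard OWA: sort the arguments, then apply the weight vector by position), here is a minimal Python sketch. The weights used are illustrative only; the paper's preference-based method for deriving them is not reproduced.

```python
import numpy as np

def owa(values, weights):
    """Standard OWA aggregation: weights are applied to the values
    after they have been sorted in descending order."""
    values = np.sort(np.asarray(values, dtype=float))[::-1]
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0) and len(values) == len(weights)
    return float(np.dot(weights, values))

# Aggregate relevance scores of one alternative across four criteria
print(owa([0.7, 0.4, 0.9, 0.6], [0.4, 0.3, 0.2, 0.1]))  # 0.9*0.4 + 0.7*0.3 + 0.6*0.2 + 0.4*0.1 = 0.73
```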

Relevance:

80.00%

Publisher:

Abstract:

Tonal, textural and contextual properties are used in the manual photointerpretation of remotely sensed data. This study used these three attributes to produce a lithological map of semi-arid northwest Argentina by semi-automatic computer classification of remotely sensed data. Three types of satellite data were investigated: LANDSAT MSS, TM and SIR-A imagery. Supervised classification using tonal features alone produced poor results: LANDSAT MSS gave classification accuracies in the range of 40 to 60%, while accuracies of 50 to 70% were achieved using LANDSAT TM data. The addition of SIR-A data increased the classification accuracy. The improved accuracy of TM over MSS is due to the better discrimination of geological materials afforded by the middle infrared bands of the TM sensor. The maximum likelihood classifier consistently produced classification accuracies 10 to 15% higher than either the minimum distance to means or the decision tree classifier; this improved accuracy was obtained at the cost of greatly increased processing time. A new type of classifier, the spectral shape classifier, which is computationally as fast as the minimum distance to means classifier, is described. However, the results for this classifier were disappointing, being lower in most cases than those of the minimum distance or decision tree procedures. The classification results using tonal features alone were felt to be unacceptably poor, so textural attributes were investigated. Texture is an important attribute used by photogeologists to discriminate lithology. In the case of TM data, texture measures were found to increase the classification accuracy by up to 15%; in the case of LANDSAT MSS data, however, texture measures did not provide any significant increase in accuracy. For TM data it was found that second-order texture, especially the SGLDM-based measures, produced the highest classification accuracy. Contextual post-processing was found to increase classification accuracy and to improve the visual appearance of the classified output by removing isolated misclassified pixels, which tend to clutter classified images. Simple contextual features such as mode filters were found to outperform more complex features such as gravitational filters or minimal area replacement methods; generally, the larger the filter, the greater the increase in accuracy. Production rules were used to build a knowledge-based system which used tonal and textural features to identify sedimentary lithologies in each of the two test sites. The knowledge-based system was able to identify six out of ten lithologies correctly.
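
As an illustration of the contextual post-processing described (mode filtering of a classified image to remove isolated misclassified pixels), a minimal Python sketch follows; it is not the study's original implementation, and the toy class map is invented.

```python
import numpy as np
from scipy import ndimage

def mode_filter(class_map: np.ndarray, size: int = 3) -> np.ndarray:
    """Replace each pixel's class label with the most frequent label
    in its size x size neighbourhood (simple contextual smoothing)."""
    def local_mode(values):
        return np.bincount(values.astype(int)).argmax()
    return ndimage.generic_filter(class_map, local_mode, size=size, mode="nearest")

# Toy 5x5 classified image: the single '3' is an isolated misclassification
classified = np.array([
    [1, 1, 1, 2, 2],
    [1, 3, 1, 2, 2],
    [1, 1, 1, 2, 2],
    [1, 1, 2, 2, 2],
    [1, 1, 2, 2, 2],
])
print(mode_filter(classified, size=3))
```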

Relevance:

80.00%

Publisher:

Abstract:

This study was concerned with the computer automation of land evaluation. This is a broad subject with many issues to be resolved, so the study concentrated on three key problems: knowledge-based programming; the integration of spatial information from remote sensing and other sources; and the inclusion of socio-economic information in the land evaluation analysis. Land evaluation and land use planning were considered in the context of overseas projects in the developing world. Knowledge-based systems were found to provide significant advantages over conventional programming techniques for some aspects of the land evaluation process. Declarative languages, in particular Prolog, were ideally suited to the integration of social information, which changes with every situation. Rule-based expert system shells were also found to be suitable for this role, including knowledge acquisition at the interview stage. All the expert system shells examined suffered from severe constraints on problem size, but newer products now overcome this. Inductive expert system shells were useful as a guide to knowledge gaps and possible relationships, but the number of examples required was unrealistic for typical land use planning situations. The accuracy of classified satellite imagery was significantly enhanced by integrating spatial information on soil distribution for the Thailand data: estimates of the rice-producing area were substantially improved (a 30% change in area) by the addition of soil information. Image processing work on Mozambique showed that satellite remote sensing is a useful tool for stratifying vegetation cover at provincial level to identify key development areas, but its full utility cannot be realised on typical planning projects without treating it as part of a complete spatial information system.
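
The declarative, rule-based style of land evaluation described above can be illustrated with a minimal sketch. The thesis used Prolog; the snippet below re-expresses the same idea in Python, and the crops, attributes, thresholds and suitability classes are hypothetical.

```python
# Hypothetical declarative land-evaluation rules: (crop, condition, suitability class)
RULES = [
    ("rice",  lambda lu: lu["soil"] == "alluvial" and lu["rainfall_mm"] >= 1200, "S1"),
    ("rice",  lambda lu: lu["soil"] == "alluvial" and lu["rainfall_mm"] >= 800,  "S2"),
    ("maize", lambda lu: lu["slope_pct"] <= 8 and lu["drainage"] == "good",      "S1"),
]

def evaluate(land_unit: dict, crop: str) -> str:
    """Return the first matching suitability class, else 'N' (not suitable)."""
    for rule_crop, condition, suitability in RULES:
        if rule_crop == crop and condition(land_unit):
            return suitability
    return "N"

unit = {"soil": "alluvial", "rainfall_mm": 1000, "slope_pct": 3, "drainage": "good"}
print(evaluate(unit, "rice"))   # -> "S2"
print(evaluate(unit, "maize"))  # -> "S1"
```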

Relevance:

80.00%

Publisher:

Abstract:

The research described here concerns the development of metrics and models to support the development of hybrid (conventional/knowledge-based) integrated systems. The thesis argues that, although it is well known that estimating the cost, duration and quality of information systems is a difficult task, it is far from clear what sorts of tools and techniques would adequately support a project manager in estimating these properties. A literature review shows that metrics (measurements) and estimating tools have been developed for conventional systems since the 1960s, while there has been very little research on metrics for knowledge-based systems (KBSs). Furthermore, although there are a number of theoretical problems with many of the 'classic' metrics developed for conventional systems, it also appears that the tools which such metrics can be used to develop are not widely used by project managers. A survey of large UK companies confirmed this continuing state of affairs. Before any useful tools could be developed, therefore, it was important to find out why project managers were not already using these tools. By characterising those companies that use software cost estimating (SCE) tools against those which could but do not, it was possible to recognise the involvement of the client/customer in the process of estimation. Pursuing this point, a model of the early estimating and planning stages (the EEPS model) was developed to test exactly where estimating takes place. The EEPS model suggests that estimating could take place either before a fully developed plan has been produced, or while this plan is being produced. If it were the former, then SCE tools would be particularly useful, since there is very little other data from which to produce an estimate. A second survey, however, indicated that project managers see estimating as essentially the latter, at which point project management tools are available to support the process. It would seem, therefore, that SCE tools are not being used because project management tools are being used instead. The issue is not with the method of developing an estimating model or tool, but with the way in which "an estimate" is intimately tied to an understanding of what tasks are being planned; current SCE tools are perceived by project managers as targeting the wrong point of estimation. A model (called TABATHA) is then presented which describes how an estimating tool based on an analysis of tasks would fit into the planning stage. The issue of whether metrics can be usefully developed for hybrid systems (which also contain KBS components) is tested by extending a number of 'classic' program size and structure metrics to a KBS language, Prolog. Measurements of lines of code, Halstead's operators/operands, McCabe's cyclomatic complexity, Henry and Kafura's data flow fan-in/out and post-release reported errors were taken for a set of 80 commercially developed LPA Prolog programs. By redefining the metric counts for Prolog, it was found that estimates of program size and error-proneness comparable to the best conventional studies are possible. This suggests that metrics can be usefully applied to KBS languages such as Prolog, and thus that the development of metrics and models to support the development of hybrid information systems is both feasible and useful.
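
To illustrate the kind of 'classic' size metrics mentioned (here Halstead's measures), a minimal Python sketch assuming the operators and operands of a program have already been counted; the token counts below are hypothetical Prolog-style examples and this is not the thesis's tooling.

```python
import math
from collections import Counter

def halstead(operators: Counter, operands: Counter) -> dict:
    """Compute the basic Halstead measures from operator/operand counts."""
    n1, n2 = len(operators), len(operands)                     # distinct operators/operands
    N1, N2 = sum(operators.values()), sum(operands.values())   # total occurrences
    vocabulary, length = n1 + n2, N1 + N2
    volume = length * math.log2(vocabulary)
    difficulty = (n1 / 2) * (N2 / n2)
    return {"vocabulary": vocabulary, "length": length, "volume": volume,
            "difficulty": difficulty, "effort": volume * difficulty}

# Hypothetical counts for a small Prolog predicate
operators = Counter({":-": 3, ",": 5, "is": 2, "=": 1})
operands  = Counter({"X": 4, "Y": 3, "Acc": 2, "0": 1})
print(halstead(operators, operands))
```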

Relevance:

80.00%

Publisher:

Abstract:

Lean is usually associated with the 'operations' of a manufacturing enterprise; however, there is a growing awareness that these principles may be transferred readily to other functions and sectors. The application to knowledge-based activities such as engineering design is of particular relevance to UK plc. Hence, the purpose of this study has been to establish the state of the art, in terms of the adoption of Lean in new product development, by carrying out a systematic review of the literature. The authors' findings confirm the view that Lean can be applied beneficially away from the factory; that an understanding and definition of value is key to success; that a set-based (or Toyota methodology) approach to design is favoured, together with the strong leadership of a chief engineer; and that successful implementation requires organization-wide changes to systems, practices, and behaviour. On this basis it is felt that this review paper provides a useful platform for further research on this topic.

Relevance:

80.00%

Publisher:

Abstract:

Objectives: The creation of more high-growth firms continues to be a key component of enterprise policy throughout the countries of the OECD. In the UK, the developing enterprise policy framework highlights the importance of supporting businesses with growth potential. The difficulty, of course, lies in the ability of those delivering business support policies to identify accurately those businesses, especially at start-up, which will benefit from interventions and experience enhanced growth performance. This paper has a core objective of presenting new data on the number of high-growth firms in the UK and providing an assessment of their economic significance. Approach: This paper uses a specially created longitudinal firm-level database based on the Inter-Departmental Business Register (IDBR) held by the Office for National Statistics (ONS) for all private sector businesses in the UK for the period 1997-2008 to investigate the share of high-growth firms (including a sub-set of start-ups more commonly referred to as gazelles) in successive cohorts of start-ups. We apply OECD definitions of high growth and gazelles to this database and are able to quantify for the first time their number (disaggregated by sector, region and size) and importance (employment and sales). Prior Work: What is lacking at the core of this policy focus, however, is any comprehensive statistical analysis of the scale and nature of high-growth firms in cohorts of new and established businesses. The evidence base in response to the question "Why do high-growth firms matter?" is surprisingly weak. Important work in this area has been initiated by Bartelsman et al. (2003), Hoffman and Jünge (2006) and Henrekson and Johansson (2009), but to date work in the UK has been limited (BERR, 2008b). Results: We report that there were approximately 11,500 high-growth firms in the UK in both 2005 and 2008. The share of high-growth start-ups in the UK in 2005 (6.3%) was, contrary to the widely held perception in policy circles, higher than in the United States (5.2%). Of particular interest in the analysis are the growth trajectories (patterns of growth) of these firms, as well as the extent to which they are restricted to technology-based or knowledge-based sectors. Implications and Value: Using hitherto unused population data, we answer a fundamental research and policy question on the number and scale of high-growth firms in the UK. We draw the conclusion that this 'rare' event does not readily lend itself to policy intervention, on the grounds that the significant effort needed to identify such businesses ex ante would appear unjustified even if it were possible.
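
For reference, the OECD definition applied in the paper can be sketched in code: high-growth firms are those with average annualised growth above 20% per year over a three-year period and at least 10 employees at the start of the period, with gazelles the subset up to five years old. The sketch below is a simplified illustration using employment only; the function and field names are assumptions, not the paper's code.

```python
def is_high_growth(emp_start: int, emp_end: int, years: int = 3,
                   threshold: float = 0.20, min_start_size: int = 10) -> bool:
    """OECD-style test: annualised employment growth above the threshold,
    with at least min_start_size employees at the start of the period."""
    if emp_start < min_start_size:
        return False
    annualised_growth = (emp_end / emp_start) ** (1 / years) - 1
    return annualised_growth > threshold

def is_gazelle(emp_start: int, emp_end: int, firm_age_years: int) -> bool:
    """Gazelles: high-growth firms up to five years old."""
    return firm_age_years <= 5 and is_high_growth(emp_start, emp_end)

print(is_high_growth(12, 25))                 # ~27.7% p.a. -> True
print(is_gazelle(12, 25, firm_age_years=4))   # True
```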

Relevance:

80.00%

Publisher:

Abstract:

Purpose: The paper aims to explore the nature and purpose of higher education (HE) in the twenty-first century, focussing on how it can help fashion a green knowledge-based economy by developing approaches to learning and teaching that are social, networked and ecologically sensitive. Design/methodology/approach: The paper presents a discursive analysis of the skills and knowledge requirements of an emerging green knowledge-based economy, drawing on a range of policy-focussed and academic research literature. Findings: The business opportunities that are emerging as a more sustainable world is developed require the knowledge and skills that can capture them and move them forward, but in a complex and uncertain world learning needs to be non-linear, creative and emergent. Practical implications: Sustainable learning and the attributes graduates will need to exhibit are prefigured in the activities and learning characterising the work and play facilitated by new media technologies. Social implications: Greater emphasis is required on higher learning understood as the capability to learn, adapt and direct sustainable change; this requires interprofessional co-operation that must utilise the potential of new media technologies to enhance social learning and collective intelligence. Originality/value: The practical relationship between low-carbon economic development, social sustainability and HE learning is based on both normative criteria and actual and emerging projections of economic, technological and skills needs.

Relevance:

80.00%

Publisher:

Abstract:

Selecting the best alternative in group decision making is the subject of many recent studies. The most popular method proposed for ranking the alternatives is based on the distance of each alternative from the ideal alternative. The ideal alternative may never exist, however, so the ranking results are biased towards the ideal point. The main aim of this study is to calculate a fuzzy ideal point that is more realistic than the crisp ideal point. In addition, Data Envelopment Analysis (DEA) has recently been used to find the optimum weights for ranking the alternatives. This paper proposes a four-stage approach based on DEA in a fuzzy environment to aggregate preference rankings. An application to a preferential voting system shows how the new model can be applied to rank a set of alternatives. Two further examples indicate the advantage of the proposed method over some other suggested methods.
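
As a simple illustration of the distance-to-the-ideal ranking idea the abstract starts from, here is a TOPSIS-style closeness sketch in Python; it is not the paper's four-stage fuzzy DEA model, and the scores and weights are invented.

```python
import numpy as np

def rank_by_closeness(matrix: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """matrix: alternatives x criteria (benefit criteria); weights sum to 1.
    Returns alternative indices ordered from best to worst."""
    norm = matrix / np.linalg.norm(matrix, axis=0)            # vector-normalise columns
    weighted = norm * weights
    ideal, anti_ideal = weighted.max(axis=0), weighted.min(axis=0)
    d_plus = np.linalg.norm(weighted - ideal, axis=1)         # distance to ideal point
    d_minus = np.linalg.norm(weighted - anti_ideal, axis=1)   # distance to anti-ideal
    closeness = d_minus / (d_plus + d_minus)
    return np.argsort(-closeness)

scores = np.array([[7.0, 9.0, 6.0],
                   [8.0, 6.0, 8.0],
                   [9.0, 7.0, 5.0]])
print(rank_by_closeness(scores, np.array([0.5, 0.3, 0.2])))
```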

Relevance:

80.00%

Publisher:

Abstract:

Purpose: The purpose of this paper is to review the literature which focuses on four major higher education decision problems: resource allocation; performance measurement; budgeting; and scheduling. Design/methodology/approach: Related articles appearing in international journals from 1996 to 2005 are gathered and analyzed so that the following three questions can be answered: "What kind of decision problems were paid most attention to?"; "Were multiple criteria decision-making techniques prevalently adopted?"; and "What are the inadequacies of these approaches?" Findings: Based on the inadequacies, some improvements and possible future work are recommended, and a comprehensive resource allocation model is developed taking account of these factors. Finally, a new knowledge-based goal programming technique which integrates some operations of the analytic hierarchy process is proposed to tackle the model intelligently. Originality/value: Higher education has faced the problem of budget cuts or constrained budgets for the past 30 years. Managing the process of the higher education system is, therefore, a crucial and urgent task for the decision makers of universities in order to improve their performance or competitiveness. © Emerald Group Publishing Limited.
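
As an illustration of the analytic hierarchy process step mentioned in the findings, here is a minimal sketch deriving criterion weights from a pairwise-comparison matrix via the geometric-mean approximation; the comparison values and criteria are hypothetical, and this is not the paper's full knowledge-based goal programming model.

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Approximate AHP priority weights as normalised row geometric means."""
    geo_mean = pairwise.prod(axis=1) ** (1.0 / pairwise.shape[1])
    return geo_mean / geo_mean.sum()

# Hypothetical comparison of three budget criteria: teaching, research, estates
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(ahp_weights(A).round(3))   # ~[0.648, 0.230, 0.122]
```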

Relevance:

80.00%

Publisher:

Abstract:

This paper explores the process of developing a principled approach for translating a model of mental-health risk expertise into a probabilistic graphical structure. The Galatean Risk Screening Tool [1] is a psychological model for mental health risk assessment based on fuzzy sets. This paper details how the knowledge encapsulated in the psychological model was used to develop the structure of the probability graph by exploiting the semantics of the clinical expertise. These semantics are formalised by a detailed specification for an XML structure used to represent the expertise. The component parts were then mapped to equivalent probabilistic graphical structures such as Bayesian Belief Nets and Markov Random Fields to produce a composite chain graph that provides a probabilistic classification of risk expertise to complement the expert clinical judgements. © Springer-Verlag 2010.
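
A hedged, hypothetical sketch of the general idea of mapping an XML description of risk expertise onto the edge list of a directed graphical structure; the tag names below are invented for illustration and are not the GRiST XML specification.

```python
import xml.etree.ElementTree as ET

XML = """
<risk name="suicide">
  <concept name="hopelessness">
    <datum name="expressed-hopelessness"/>
    <datum name="loss-of-future-plans"/>
  </concept>
  <concept name="social-isolation">
    <datum name="lives-alone"/>
  </concept>
</risk>
"""

def xml_to_edges(xml_text: str) -> list:
    """Turn the XML nesting into directed edges: each child node
    becomes a parent of its enclosing node in the graph."""
    root = ET.fromstring(xml_text)
    edges = []
    def walk(node):
        for child in node:
            edges.append((child.get("name"), node.get("name")))
            walk(child)
    walk(root)
    return edges

print(xml_to_edges(XML))
```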

Relevance:

80.00%

Publisher:

Abstract:

Radio Frequency Identification (RFID) has been identified as a crucial technology for the modern 21st century knowledge-based economy. Some businesses have realised the benefits of RFID adoption through improvements in operational efficiency, additional cost savings, and opportunities for higher revenues. RFID research in warehousing operations has been less prominent than in other application domains. To investigate how RFID technology has had an impact on warehousing, a comprehensive analysis of research findings available in articles retrieved through leading scientific article databases has been conducted. Articles from 1995 to 2010 have been reviewed and analysed with respect to warehouse operations, RFID application domains, benefits achieved and obstacles encountered. Four discussion topics are presented covering RFID in warehousing, focusing on its applications, perceived benefits, obstacles to its adoption and future trends. This is aimed at elucidating the current state of RFID in the warehouse and providing insights for researchers to establish new research agendas and for practitioners to consider and assess the adoption of RFID in warehousing functions. © 2013 Elsevier B.V.

Relevance:

80.00%

Publisher:

Abstract:

A formal model of natural language processing in knowledge-based information systems is considered, and the components realizing the functions of the proposed formal model are described.

Relevance:

80.00%

Publisher:

Abstract:

The paper presents a case study of geo-monitoring a region, consisting of the capture and encoding of human expertise into a knowledge-based system. Once the maps have been processed, data patterns are detected using knowledge-based agents for harvest prognosis.
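
A hedged, hypothetical sketch of what a simple rule-based agent for harvest prognosis might look like; the attributes, thresholds and prognosis classes are invented and are not taken from the case study.

```python
def harvest_prognosis(region: dict) -> str:
    """Classify a region's harvest outlook from map-derived features
    using simple hand-coded rules (illustrative only)."""
    if region["soil_moisture"] >= 0.6 and region["vegetation_index"] >= 0.7:
        return "good"
    if region["soil_moisture"] >= 0.4 or region["vegetation_index"] >= 0.5:
        return "average"
    return "poor"

print(harvest_prognosis({"soil_moisture": 0.65, "vegetation_index": 0.72}))  # good
```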

Relevance:

80.00%

Publisher:

Abstract:

Fuzzy data envelopment analysis (DEA) models have emerged as a class of DEA models that account for imprecise inputs and outputs of decision making units (DMUs). Although several approaches for solving fuzzy DEA models have been developed, they have drawbacks ranging from insufficient discrimination power to simplistic numerical examples that handle only triangular or symmetrical fuzzy numbers. To address these drawbacks, this paper proposes using the concept of expected value in the generalized DEA (GDEA) model. This allows the unification of three models - the fuzzy expected CCR, fuzzy expected BCC, and fuzzy expected FDH models - all of which can handle both symmetrical and asymmetrical fuzzy numbers. We also explore the role of the fuzzy GDEA model as a ranking method and compare it to existing super-efficiency evaluation models. Our proposed model is always feasible, whereas infeasibility problems remain in certain cases under existing super-efficiency models. To illustrate the performance of the proposed method, it is first tested using two established numerical examples and compared with the results obtained from alternative methods. A third example, on energy dependency among 23 European Union (EU) member countries, is then used to validate and describe the efficacy of our approach under asymmetric fuzzy numbers.
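
To illustrate the expected-value idea, one common definition takes the expected value of a trapezoidal fuzzy number (a, b, c, d) as (a + b + c + d)/4, which also covers asymmetric triangular numbers (b = c). The sketch below defuzzifies an input this way and solves a crisp input-oriented CCR model; it is an assumption-laden illustration, not the paper's GDEA formulation, and the DMU data are invented.

```python
import numpy as np
from scipy.optimize import linprog

def expected_value(a: float, b: float, c: float, d: float) -> float:
    """Expected value of a trapezoidal fuzzy number (a, b, c, d)."""
    return (a + b + c + d) / 4.0

def ccr_efficiency(inputs: np.ndarray, outputs: np.ndarray, k: int) -> float:
    """Input-oriented CCR efficiency of DMU k on crisp data (envelopment form).
    inputs, outputs: rows are DMUs, columns are input/output factors."""
    n = inputs.shape[0]
    # decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta
    c_obj = np.r_[1.0, np.zeros(n)]
    A_ub = np.vstack([
        np.c_[-inputs[k], inputs.T],                    # sum(lambda*x) <= theta * x_k
        np.c_[np.zeros(outputs.shape[1]), -outputs.T],  # sum(lambda*y) >= y_k
    ])
    b_ub = np.r_[np.zeros(inputs.shape[1]), -outputs[k]]
    res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

# One asymmetric triangular fuzzy input (2, 3, 3, 5) defuzzified to 3.25
inputs = np.array([[expected_value(2, 3, 3, 5)], [4.0], [6.0]])
outputs = np.array([[2.0], [3.0], [4.0]])
print(round(ccr_efficiency(inputs, outputs, k=0), 3))   # ~0.821
```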