892 results for Database, Image Retrieval, Browsing, Semantic Concept
Abstract:
The theme Spirituality in the corporate world: connections between religious praxis and professional life belongs to the field of Religious Studies, specifically to the area of religious praxis in society, and was researched according to the historical-critical method, using bibliographic and documentary research as the data-collection methodology. The study set out to examine two hypotheses: first, that spirituality in the corporate world answers the spiritual needs of contemporary society, replacing institutionalised religious expressions; and second, that spirituality in the corporate world is a concept allied to religious praxis for human promotion in the market and in society. The concepts of spirituality in Paul Tillich and of praxis in Casiano Floristán were used as the theoretical references for the study. The dissertation is structured in three chapters. The first defines and relates the terms that determine the semantic field of spirituality: spirituality and religion, religious and secular, sacred and profane. The second records the origins of the movement, identifying its period of flourishing, its main actors, and its scientific, ideological, religious, economic and philosophical context, and above all lists the different meanings the term spirituality acquires in the corporate world. This chapter clarifies the social and historical factors that made possible the insertion of spirituality into the workplace and the inclusion of the theme in the corporate world. Finally, the third and last chapter analyses three possible uses of the concept of spirituality in the corporate world: spirituality in the service of religion (proselytism), of capital (performance), and of the human being (praxis), thus enabling the reader to broaden their knowledge of the subject of this dissertation. (AU)
Abstract:
We present a vision and a proposal for using Semantic Web technologies in the organic food industry. This is a very knowledge-intensive industry at every step, from the producer, to the caterer or restaurateur, through to the consumer. There is a crucial need for a concept of environmental audit which would allow the various stakeholders to know the full environmental impact of their economic choices. This is a different and parallel form of knowledge to that of price. Semantic Web technologies can be used effectively for the calculation and transfer of this type of knowledge (together with other forms of multimedia data), which could contribute considerably to the commercial and educational impact of the organic food industry. We outline how this could be achieved, as our essential objective is to show how advanced technologies could be used to both reduce ecological impact and increase public awareness.
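As a concrete illustration of how such audit knowledge might be represented and aggregated (a minimal sketch, not taken from the paper), the following Python fragment uses rdflib; the namespace, predicate names, supply-chain steps and figures are all illustrative assumptions:

    # Sketch of an environmental-audit graph; vocabulary and numbers are
    # illustrative assumptions, not terms defined in the paper.
    from rdflib import Graph, Namespace, Literal

    EX = Namespace("http://example.org/ecoaudit#")  # hypothetical vocabulary
    g = Graph()

    # Each supply-chain step records its own carbon cost (kg CO2, assumed units).
    steps = [("farm", 1.2), ("transport", 0.8), ("restaurant", 0.3)]
    product = EX["organicTomatoBatch42"]
    for name, kg_co2 in steps:
        g.add((product, EX.hasStep, EX[name]))
        g.add((EX[name], EX.carbonKg, Literal(kg_co2)))

    # The audit a consumer would see: total impact aggregated across the chain.
    total = sum(float(g.value(s, EX.carbonKg))
                for s in g.objects(product, EX.hasStep))
    print(f"Total carbon impact: {total:.1f} kg CO2")

Because the data live in an RDF graph rather than a flat price list, the same triples could be queried, extended and exchanged by producers, caterers and consumers alike.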
Abstract:
Photo annotation is a resource-intensive task, yet is increasingly essential as image archives and personal photo collections grow in size. There is an inherent conflict in the process of describing and archiving personal experiences, because casual users are generally unwilling to expend large amounts of effort on creating the annotations which are required to organise their collections so that they can make best use of them. This paper describes the Photocopain system, a semi-automatic image annotation system which combines information about the context in which a photograph was captured with information from other readily available sources in order to generate outline annotations for that photograph that the user may further extend or amend.
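A minimal sketch of the general idea (capture context plus an external source yielding outline annotations), assuming Pillow for EXIF access; lookup_place is a hypothetical stand-in for a gazetteer service, and the tag names are assumptions rather than Photocopain's actual schema:

    # Sketch: combine capture context (EXIF) with an external source to
    # produce outline annotations the user may extend or amend.
    from PIL import Image, ExifTags

    def lookup_place(gps_info):
        # Placeholder: a real system would reverse-geocode via a gazetteer.
        return "unknown place" if gps_info is None else "near recorded GPS fix"

    def outline_annotations(path):
        exif_raw = Image.open(path).getexif()
        exif = {ExifTags.TAGS.get(k, k): v for k, v in exif_raw.items()}
        return {
            "taken": exif.get("DateTime"),               # capture context
            "camera": exif.get("Model"),
            "place": lookup_place(exif.get("GPSInfo")),  # external source
        }

    # e.g. outline_annotations("holiday.jpg")
    # -> {'taken': '2006:07:01 14:03:22', 'camera': ..., 'place': ...}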
Abstract:
Derivational morphology proposes meaningful connections between words and is largely unrepresented in lexical databases. This thesis presents a project to enrich a lexical database with morphological links and to evaluate their contribution to disambiguation. A lexical database with sense distinctions was required. WordNet was chosen because of its free availability and widespread use. Its suitability was assessed through critical evaluation with respect to specifications and criticisms, using a transparent, extensible model. The identification of serious shortcomings suggested a portable enrichment methodology, applicable to alternative resources. Although 40% of the most frequent words are prepositions, they have been largely ignored by computational linguists, so addition of prepositions was also required. The preferred approach to morphological enrichment was to infer relations from phenomena discovered algorithmically. Both existing databases and existing algorithms can capture regular morphological relations, but cannot capture exceptions correctly; neither provides any semantic information. Some morphological analysis algorithms are subject to the fallacy that morphological analysis can be performed simply by segmentation. Morphological rules, grounded in observation and etymology, govern associations between and attachment of suffixes and contribute to defining the meaning of morphological relationships. Specifying character substitutions circumvents the segmentation fallacy. Morphological rules are prone to undergeneration, minimised through a variable lexical validity requirement, and overgeneration, minimised by rule reformulation and restricting monosyllabic output. Rules take into account the morphology of ancestor languages through co-occurrences of morphological patterns. Multiple rules applicable to an input suffix need their precedence established. The resistance of prefixations to segmentation has been addressed by identifying linking vowel exceptions and irregular prefixes. The automatic affix discovery algorithm applies heuristics to identify meaningful affixes and is combined with morphological rules into a hybrid model, fed only with empirical data, collected without supervision. Further algorithms apply the rules optimally to automatically pre-identified suffixes and break words into their component morphemes. To handle exceptions, stoplists were created in response to initial errors and fed back into the model through iterative development, leading to 100% precision, contestable only on lexicographic criteria. Stoplist length is minimised by special treatment of monosyllables and reformulation of rules. 96% of words and phrases are analysed. 218,802 directed derivational links have been encoded in the lexicon rather than the wordnet component of the model because the lexicon provides the optimal clustering of word senses. Both links and analyser are portable to an alternative lexicon. The evaluation uses the extended gloss overlaps disambiguation algorithm. The enriched model outperformed WordNet in terms of recall without loss of precision. Failure of all experiments to outperform disambiguation by frequency reflects on WordNet sense distinctions.
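To make the substitution-versus-segmentation point concrete, here is a minimal Python sketch of one suffix-substitution rule with a lexical validity check; the rule and the tiny lexicon are illustrative, not the thesis's actual data:

    # A substitution rule ("-ation" -> "-ate") avoids the segmentation
    # fallacy (creation -> create, not "creat"), and a lexical validity
    # requirement limits overgeneration.
    LEXICON = {"create", "creation", "relate", "relation", "nation"}

    def apply_rule(word, suffix="ation", replacement="ate"):
        if not word.endswith(suffix):
            return None
        base = word[: -len(suffix)] + replacement  # substitution, not a cut
        return base if base in LEXICON else None   # lexical validity check

    print(apply_rule("creation"))  # 'create'
    print(apply_rule("relation"))  # 'relate'
    print(apply_rule("nation"))    # None: 'nate' fails the validity check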
Abstract:
Owing to the rise in the volume of literature, problems arise in the retrieval of required information. Various retrieval strategies have been proposed, but most of them are not flexible enough for their users. Specifically, most of these systems assume that users know exactly what they are looking for before approaching the system, and that users are able to precisely express their information needs according to laid-down specifications. A retrieval program, THOMAS, has however been described which aims at satisfying incompletely-defined user needs through a man-machine dialogue that does not require any rigid queries. Unlike most systems, THOMAS attempts to satisfy the user's needs from a model which it builds of the user's area of interest. This model is a subset of the program's "world model", a database in the form of a network whose nodes represent concepts. Since various concepts have various degrees of similarity and association, this thesis contends that instead of models which assume equal levels of similarity between concepts, the links between the concepts should have values assigned to them to indicate the degree of similarity between the concepts. Furthermore, the world model of the system should be structured such that concepts which are related to one another are clustered together, so that a user interaction would involve only the relevant clusters rather than the entire database, such clusters being determined by the system, not the user. This thesis also attempts to link the design work with the current notion in psychology centred on the use of the computer to simulate human cognitive processes. In this case, an attempt has been made to model a dialogue between two people: the information seeker and the information expert. The system, called THOMAS-II, has been implemented and found to require less effort from the user than THOMAS.
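A minimal Python sketch of the weighted-link idea: links carry similarity values, and the system selects the cluster relevant to a seed concept rather than the whole database. Concept names, weights and the threshold are illustrative assumptions:

    # World model with weighted links; cluster = concepts reachable from
    # the seed through links at or above a similarity threshold.
    WORLD_MODEL = {
        ("retrieval", "indexing"): 0.9,
        ("retrieval", "browsing"): 0.7,
        ("indexing", "thesaurus"): 0.6,
        ("browsing", "cookery"): 0.1,   # weak association: excluded below
    }

    def relevant_cluster(seed, threshold=0.5):
        cluster, frontier = {seed}, [seed]
        while frontier:
            node = frontier.pop()
            for (a, b), w in WORLD_MODEL.items():
                if w >= threshold and node in (a, b):
                    other = b if node == a else a
                    if other not in cluster:
                        cluster.add(other)
                        frontier.append(other)
        return cluster

    print(relevant_cluster("retrieval"))  # excludes 'cookery'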
Abstract:
The topic of this thesis is the development of knowledge-based statistical software. The shortcomings of conventional statistical packages are discussed to illustrate the need to develop software which is able to exhibit a greater degree of statistical expertise, thereby reducing the misuse of statistical methods by those not well versed in the art of statistical analysis. Some of the issues involved in the development of knowledge-based software are presented, and a review is given of some of the systems that have been developed so far. The majority of these have moved away from conventional architectures by adopting what can be termed an expert-systems approach. The thesis then proposes an approach which is based upon the concept of semantic modelling. By representing some of the semantic meaning of data, it is conceived that a system could examine a request to apply a statistical technique and check whether the use of the chosen technique was semantically sound, i.e. whether the results obtained would be meaningful. Current systems, in contrast, can only perform what can be considered syntactic checks. The prototype system that has been implemented to explore the feasibility of such an approach is presented; it has been designed as an enhanced variant of a conventional-style statistical package. This involved developing a semantic data model to represent some of the statistically relevant knowledge about data, and identifying sets of requirements that should be met for the application of the statistical techniques to be valid. The areas of statistics covered in the prototype are measures of association and tests of location.
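To show the difference between a syntactic and a semantic check, here is a minimal Python sketch: a technique is only admitted if the variables' measurement scales permit it. The scale names and the requirements table are illustrative assumptions, not the thesis's model:

    # Semantic check: does the measurement scale of each variable permit
    # the requested technique? A purely syntactic check would only verify
    # that numeric arrays of the right shape were supplied.
    from dataclasses import dataclass

    @dataclass
    class Variable:
        name: str
        scale: str      # 'nominal', 'ordinal', 'interval', or 'ratio'
        values: list

    REQUIREMENTS = {
        "pearson_correlation": {"interval", "ratio"},
        "chi_squared_association": {"nominal", "ordinal", "interval", "ratio"},
    }

    def semantically_sound(technique, *variables):
        allowed = REQUIREMENTS[technique]
        return all(v.scale in allowed for v in variables)

    colour = Variable("colour", "nominal", ["red", "blue"])
    height = Variable("height", "ratio", [1.6, 1.8])
    print(semantically_sound("pearson_correlation", colour, height))  # False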
Abstract:
An implementation of a Lexical Functional Grammar (LFG) natural language front-end to a database is presented, and its capabilities demonstrated by reference to a set of queries used in the Chat-80 system. The potential of LFG for such applications is explored. Other grammars previously used for this purpose are briefly reviewed and contrasted with LFG. The basic LFG formalism is fully described, both as to its syntax and semantics, and the deficiencies of the latter for database access applications are shown. Other current LFG implementations are reviewed and contrasted with the LFG implementation developed here specifically for database access. The implementation described here allows a natural language interface to a specific Prolog database to be produced from a set of grammar rule and lexical specifications in an LFG-like notation. In addition to this, the interface system uses a simple database description to compile metadata about the database for later use in planning the execution of queries. Extensions to LFG's semantic component are shown to be necessary to produce a satisfactory functional analysis and semantic output for querying a database. A diverse set of natural language constructs is analysed using LFG, and the derivation of Prolog queries from the F-structure output of LFG is illustrated. The functional description produced from LFG is proposed as sufficient for resolving many problems of quantification and attachment.
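A heavily simplified Python sketch of the F-structure-to-Prolog step for a Chat-80-style geography question ("Which countries border France?"); the F-structure shape and the predicate names are illustrative assumptions, not the thesis's notation:

    # Derive a Prolog query from a toy F-structure. Real F-structures and
    # the thesis's translation procedure are far richer than this.
    f_structure = {
        "pred": "border",
        "subj": {"pred": "country", "spec": "which", "var": "X"},
        "obj": {"pred": "france"},
    }

    def to_prolog(f):
        subj, obj = f["subj"], f["obj"]
        goals = [f"{subj['pred']}({subj['var']})",
                 f"{f['pred']}({subj['var']}, {obj['pred']})"]
        return ", ".join(goals) + "."

    print(to_prolog(f_structure))  # country(X), border(X, france).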
Abstract:
The aim of this Interdisciplinary Higher Degrees project was the development of a high-speed method of photometrically testing vehicle headlamps, based on the use of image processing techniques, for Lucas Electrical Limited. Photometric testing involves measuring the illuminance produced by a lamp at certain points in its beam distribution. Headlamp performance is best represented by an iso-lux diagram, showing illuminance contours, produced from a two-dimensional array of data. Conventionally, the tens of thousands of measurements required are made using a single stationary photodetector and a two-dimensional mechanical scanning system which enables a lamp's horizontal and vertical orientation relative to the photodetector to be changed. Even using motorised scanning and computerised data-logging, the data acquisition time for a typical iso-lux test is about twenty minutes. A detailed study was made of the concept of using a video camera and a digital image processing system to scan and measure a lamp's beam without the need for the time-consuming mechanical movement. Although the concept was shown to be theoretically feasible, and a prototype system designed, it could not be implemented because of the technical limitations of commercially-available equipment. An alternative high-speed approach was developed, however, and a second prototype system designed. The proposed arrangement again uses an image processing system, but in conjunction with a one-dimensional array of photodetectors and a one-dimensional mechanical scanning system in place of a video camera. This system can be implemented using commercially-available equipment and, although not entirely eliminating the need for mechanical movement, greatly reduces the amount required, resulting in a predicted data acquisition time of about twenty seconds for a typical iso-lux test. As a consequence of the work undertaken, the company initiated an £80,000 programme to implement the system proposed by the author.
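A minimal Python sketch of the final step, producing an iso-lux diagram from a two-dimensional illuminance array such as would be assembled row by row from the one-dimensional photodetector scan; the beam model here is synthetic, not measured data:

    # Draw illuminance contours (an iso-lux diagram) from a 2-D array.
    import numpy as np
    import matplotlib.pyplot as plt

    h = np.linspace(-20, 20, 200)     # horizontal angle (degrees, assumed)
    v = np.linspace(-10, 10, 100)     # vertical angle (degrees, assumed)
    H, V = np.meshgrid(h, v)
    lux = 1000 * np.exp(-(H**2 / 50 + V**2 / 10))  # synthetic beam shape

    plt.contour(H, V, lux, levels=[50, 100, 250, 500, 900])
    plt.xlabel("horizontal angle (deg)")
    plt.ylabel("vertical angle (deg)")
    plt.title("Iso-lux contours (synthetic data)")
    plt.show()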
Abstract:
The World Wide Web provides plentiful content for Web-based learning, but its hyperlink-based architecture connects Web resources for free browsing rather than for effective learning. To support effective learning, an e-learning system should be able to discover and make use of the semantic communities and the emerging semantic relations in a dynamic complex network of learning resources. Previous graph-based community discovery approaches are limited in their ability to discover semantic communities. This paper first suggests the Semantic Link Network (SLN), a loosely coupled semantic data model that can semantically link resources and derive implicit semantic links according to a set of relational reasoning rules. By studying the intrinsic relationship between semantic communities and the semantic space of SLN, approaches to discovering reasoning-constraint, rule-constraint, and classification-constraint semantic communities are proposed. Further, the approaches, principles, and strategies for discovering emerging semantics in dynamic SLNs are studied. The basic laws of semantic link network motion are revealed for the first time. An e-learning environment incorporating the proposed approaches, principles, and strategies to support effective discovery and learning is suggested.
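A minimal Python sketch of deriving implicit links by applying relational reasoning rules to a fixed point; the rule table and link types are illustrative, not the paper's full rule set:

    # Close a set of typed links under composition rules such as
    # partOf . partOf -> partOf, deriving implicit semantic links.
    RULES = {("partOf", "partOf"): "partOf",
             ("instanceOf", "subtypeOf"): "instanceOf"}

    links = {("lesson1", "partOf", "unit1"),
             ("unit1", "partOf", "course1"),
             ("course1", "instanceOf", "webCourse"),
             ("webCourse", "subtypeOf", "course")}

    changed = True
    while changed:                       # iterate to a fixed point
        changed = False
        for (a, r1, b) in list(links):
            for (b2, r2, c) in list(links):
                if b == b2 and (r1, r2) in RULES:
                    derived = (a, RULES[(r1, r2)], c)
                    if derived not in links:
                        links.add(derived)
                        changed = True

    print(("lesson1", "partOf", "course1") in links)  # True: implicit link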
Abstract:
Humans consciously and subconsciously establish various links, form semantic images and reason in the mind, learn linking effects and rules, select linked individuals to interact with, and form closed loops through links while co-experiencing multiple spaces over a lifetime. Machines are limited in these abilities, although various graph-based models have been used to link resources in the cyber space. The following are fundamental limitations of machine intelligence: (1) machines know few links and rules in the physical space, physiological space, psychological space, socio space and mental space, so it is not realistic to expect machines to discover laws and solve problems in these spaces; and (2) machines can only process pre-designed algorithms and data structures in the cyber space. They are limited in their ability to go beyond the cyber space, to learn linking rules, to know the effect of linking, and to explain computing results according to physical, physiological, psychological and socio laws. Linking the various spaces will create a complex space: the Cyber-Physical-Physiological-Psychological-Socio-Mental Environment (CP3SME). Diverse spaces will emerge, evolve, compete and cooperate with each other to extend machine intelligence and human intelligence. From a multi-disciplinary perspective, this paper reviews previous ideas on various links, introduces the concept of the cyber-physical society, proposes the ideal of the CP3SME including its definition, characteristics and multi-disciplinary revolution, and explores the methodology of linking through spaces for cyber-physical-socio intelligence. The methodology includes new models, principles, mechanisms, scientific issues and philosophical explanation. The CP3SME aims at an ideal environment for humans to live and work in. Exploration will go beyond previous ideals of intelligence and computing.
Abstract:
This paper presents our Semantic Web portal infrastructure, which focuses on how to enhance knowledge access in traditional Web portals by gathering and exploiting semantic metadata. Special attention is paid to three important issues that affect the performance of knowledge access: i) high quality metadata acquisition, which concerns how to ensure high quality while gathering semantic metadata from heterogeneous data sources; ii) semantic search, which addresses how to meet the information querying needs of ordinary end users who are not necessarily familiar with the problem domain or the supported query language; and iii) semantic browsing, which concerns how to help users understand and explore the problem domain.
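A minimal Python sketch of the semantic-search idea: plain keywords from an end user are mapped onto ontology concepts before querying, so no familiarity with a formal query language is required. The keyword-to-concept table and concept names are illustrative assumptions:

    # Map free-text keywords to ontology concepts; a real portal would
    # expand the result into a formal query over its metadata store.
    CONCEPT_INDEX = {
        "paper": "ex:Publication", "article": "ex:Publication",
        "person": "ex:Researcher", "author": "ex:Researcher",
    }

    def keywords_to_concepts(text):
        words = [w.rstrip("s") for w in text.lower().split()]  # crude stemming
        return {CONCEPT_INDEX[w] for w in words if w in CONCEPT_INDEX}

    print(keywords_to_concepts("articles by this author"))
    # {'ex:Publication', 'ex:Researcher'}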
Abstract:
Term dependence is a natural consequence of language use. Its successful representation has been a long-standing goal for Information Retrieval research. We present a methodology for the construction of a concept hierarchy that takes into account the three basic dimensions of term dependence. We also introduce a document evaluation function that allows the use of the concept hierarchy as a user profile for Information Filtering. Initial experimental results indicate that this is a promising approach for incorporating term dependence in the way documents are filtered.
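A minimal Python sketch of using a concept hierarchy as a filtering profile: a document's score sums the weights of matched profile concepts, with partial credit propagated from a concept's parent. The weights, terms and scoring form are illustrative assumptions, not the paper's actual evaluation function:

    # Score a document against a weighted concept hierarchy (user profile).
    PROFILE = {                      # concept -> (parent, weight)
        "information retrieval": (None, 1.0),
        "indexing": ("information retrieval", 0.8),
        "stemming": ("indexing", 0.6),
    }

    def score(document):
        text, s = document.lower(), 0.0
        for concept, (parent, w) in PROFILE.items():
            if concept in text:
                s += w
                if parent:           # credit the broader, dependent concept
                    s += 0.5 * PROFILE[parent][1]
        return s

    print(score("A stemming algorithm for document indexing"))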
Abstract:
Image collections are ever growing and hence visual information is becoming more and more important. Moreover, the classical paradigm of taking pictures has changed, first with the spread of digital cameras and, more recently, with mobile devices equipped with integrated cameras. Clearly, these image repositories need to be managed, and tools for effectively and efficiently searching image databases are highly sought after, especially on mobile devices where more and more images are being stored. In this paper, we present an image browsing system for interactive exploration of image collections on mobile devices. Images are arranged so that visually similar images are grouped together while large image repositories become accessible through a hierarchical, browsable tree structure, arranged on a hexagonal lattice. The developed system provides an intuitive and fast interface for navigating through image databases using a variety of touch gestures.
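A minimal Python sketch of the grouping idea behind such a tree: images (here, synthetic colour histograms) are partitioned by visual similarity and each group is summarised by a representative image, giving one level of a browsable hierarchy. The feature, cluster count and method are assumptions; the paper's system additionally arranges nodes on a hexagonal lattice:

    # One level of a similarity-based browsing tree over synthetic features.
    import numpy as np

    rng = np.random.default_rng(0)
    histograms = rng.random((60, 16))        # 60 images, 16-bin histograms

    def one_level(feats, k=7):               # k cells per level (assumed)
        centres = feats[rng.choice(len(feats), k, replace=False)]
        groups = np.argmin(
            np.linalg.norm(feats[:, None] - centres[None], axis=2), axis=1)
        # Representative per cell = image closest to the cell's mean feature.
        reps = [int(np.argmin(np.linalg.norm(
                    feats - feats[groups == c].mean(0), axis=1)))
                for c in range(k)]
        return groups, reps

    groups, reps = one_level(histograms)
    print("representative images per cell:", reps)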
Abstract:
The target of no-reference (NR) image quality assessment (IQA) is to establish a computational model to predict the visual quality of an image. The existing prominent method is based on natural scene statistics (NSS). It uses the joint and marginal distributions of wavelet coefficients for IQA. However, this method is only applicable to JPEG2000-compressed images. Since the wavelet transform fails to capture the directional information of images, an improved NSS model is established using contourlets. In this paper, the contourlet transform is applied to model the NSS of images, and the relationship between contourlet coefficients is represented by their joint distribution. The statistics of contourlet coefficients are able to indicate variation in image quality. In addition, an image-dependent threshold is adopted to reduce the effect of image content on the statistical model. Finally, image quality is evaluated by combining the extracted features in each subband nonlinearly. Our algorithm is trained and tested on the LIVE database II. Experimental results demonstrate that the proposed algorithm is superior to the conventional NSS model and can be applied to different distortions.
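A minimal Python sketch of the general NSS feature pipeline. The paper uses the contourlet transform, for which no standard library is assumed here, so PyWavelets stands in to show the shape of the approach: decompose the image, gather per-subband coefficient statistics, then feed the features to a learned quality model:

    # NSS-style feature extraction (wavelets as a stand-in for contourlets).
    import numpy as np
    import pywt

    image = np.random.default_rng(1).random((64, 64))  # stand-in image
    coeffs = pywt.wavedec2(image, "db2", level=2)

    features = []
    for level in coeffs[1:]:             # detail subbands at each level
        for band in level:
            c = band.ravel()
            features += [float(c.var()), float(np.mean(np.abs(c)))]

    # A real model would learn a nonlinear mapping from these features to
    # a quality score (the paper trains and tests on LIVE database II).
    print(len(features), "NSS features extracted")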
Abstract:
The purpose of this work is the development of the database of a distributed information measurement and control system that implements methods of optical spectroscopy for research in plasma physics and atomic collisions, and provides remote access to information and hardware resources within Intranet/Internet networks. The database is based on the Oracle9i database management system. The client software was written in the Java language and developed using the Model-View-Controller architecture, which separates application data from the graphical presentation components and input-processing logic. The following graphical presentations were implemented: measurement of the radiation spectra of beam and plasma objects, excitation functions for inelastic collisions of heavy particles, and analysis of data acquired in preceding experiments. The graphical clients interact with the database in the following ways: browsing information on experiments of a certain type, searching for data by various criteria, and inserting information about preceding experiments.
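A minimal Python sketch of the criteria-based search pattern such clients use; the schema, column names and criteria are assumptions, and sqlite3 stands in here for the Oracle9i backend, since the parameterised-query pattern is the same:

    # Criteria-based search over experiment records (illustrative schema).
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE experiment
                   (id INTEGER PRIMARY KEY, kind TEXT, object TEXT, date TEXT)""")
    con.executemany(
        "INSERT INTO experiment (kind, object, date) VALUES (?, ?, ?)",
        [("spectrum", "plasma", "2004-05-01"),
         ("excitation", "He+ beam", "2004-06-12")])

    # Parameterised search with user-supplied criteria:
    rows = con.execute(
        "SELECT id, kind, object FROM experiment WHERE kind = ? AND date >= ?",
        ("spectrum", "2004-01-01")).fetchall()
    print(rows)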