914 results for pacs: information retrieval techniques
Abstract:
In the seventh edition, the book has been updated and revised to reflect changes in the market, the development of appraisal methods and the subsequent changes in professional practice. The initial overview in Part I of the book, The Economic and Legal Framework, has been revised to show the present position. Changes in appraisal techniques based on the research of the authors have been incorporated in Part II on Investment Valuation. Revisions have also been made, again based on the research activities of the authors, in Part III, which examines Investment Appraisal. The book serves a number of purposes. First, it provides a critical examination of valuation techniques, with particular reference to the investment method of valuation. Second, it supplies practising valuers and appraisers with more effective data, information and techniques to enable them to carry out their valuations, appraisals and negotiations in an increasingly competitive field. Finally, it provides assistance to students and academics in understanding the context of and a range of approaches to the valuation and appraisal of property investments. This book has been a key text in property investment appraisal for more than 30 years and has sold many thousands of copies globally to academics, students and practitioners.
Abstract:
The present paper reports the results of a study aiming to describe the attitudes of teachers in adult continuing education in the Autonomous Community of Andalusia (Spain) towards the use and integration of information and communication technologies (ICT) in the educational centres they work in, while identifying those factors that favour the development of good practice. It is a mixed-methods descriptive study, and information collection techniques included a questionnaire and in-depth interviews. A total of 172 teachers in adult education were surveyed, as well as 18 head teachers and coordinators. For questionnaire validation the expert judgment technique was used, with the experts selected by the «expert competence coefficient» or «K coefficient» procedure. To improve its psychometric properties, construct validity was determined by means of Varimax factor analysis and maximum likelihood extraction (two factors were extracted). Reliability was established by Cronbach's alpha (0.88). The interview guide was also validated by this group of experts. Results point out, on one hand, that teachers hold positive attitudes towards ICT regarding both ICT's role in professional development and its ease of use and access. On the other hand, among the most important factors for ICT-supported good educational practices is ICT's capacity to favour personalised work.
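The reliability figure reported above can be reproduced with the standard Cronbach's alpha formula. A minimal sketch in Python, using made-up Likert-scale responses rather than the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative data (not the study's): 6 respondents, 4 Likert items.
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])
alpha = cronbach_alpha(scores)
```

Values of alpha above roughly 0.7 are conventionally read as acceptable internal consistency, so the study's 0.88 indicates a reliable instrument.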
Abstract:
Latent semantic indexing (LSI) is a popular technique used in information retrieval (IR) applications. This paper presents a novel evaluation strategy based on the use of image processing tools. The authors evaluate the use of the discrete cosine transform (DCT) and Cohen-Daubechies-Feauveau 9/7 (CDF 9/7) wavelet transform as a pre-processing step for the singular value decomposition (SVD) step of the LSI system. In addition, the effect of different threshold types on the search results is examined. The results show that accuracy can be increased by applying both transforms as a pre-processing step, with better performance for the hard-threshold function. The choice of the best threshold value is a key factor in the transform process. This paper also describes the most effective structure for the database to facilitate efficient searching in the LSI system.
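The kind of pipeline evaluated here, a transform applied to the term-document matrix with hard thresholding of small coefficients before the usual SVD step of LSI, can be sketched as follows. The toy matrix, query and threshold value (0.5) are illustrative choices of ours, not the paper's:

```python
import numpy as np
from scipy.fft import dct, idct

# Toy term-document matrix (terms x docs); values are term counts.
A = np.array([
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
], dtype=float)

# Pre-processing step: DCT along the term axis, hard-threshold small
# coefficients, then invert back to the term-document domain.
C = dct(A, axis=0, norm="ortho")
C[np.abs(C) < 0.5] = 0.0          # hard threshold: zero small coefficients
A_dn = idct(C, axis=0, norm="ortho")

# Standard LSI step: truncated SVD of the (denoised) matrix.
U, s, Vt = np.linalg.svd(A_dn, full_matrices=False)
k = 2
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]

# Project a query into the k-dimensional concept space and rank documents.
q = np.array([1, 0, 1, 0, 0], dtype=float)   # query over the same terms
q_lsi = q @ Uk / sk                          # query in concept space
doc_lsi = Vtk.T                              # docs in concept space
sims = doc_lsi @ q_lsi / (
    np.linalg.norm(doc_lsi, axis=1) * np.linalg.norm(q_lsi) + 1e-12)
ranking = np.argsort(-sims)                  # documents, best match first
```

The CDF 9/7 wavelet variant would replace the `dct`/`idct` pair with a wavelet analysis/synthesis pair; the thresholding and SVD steps stay the same.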
Abstract:
Face recognition with unknown, partial distortion and occlusion is a practical problem, and has a wide range of applications, including security and multimedia information retrieval. The authors present a new approach to face recognition subject to unknown, partial distortion and occlusion. The new approach is based on a probabilistic decision-based neural network, enhanced by a statistical method called the posterior union model (PUM). PUM is an approach for ignoring severely mismatched local features and focusing the recognition mainly on the reliable local features. It thereby improves the robustness while assuming no prior information about the corruption. We call the new approach the posterior union decision-based neural network (PUDBNN). The new PUDBNN model has been evaluated on three face image databases (XM2VTS, AT&T and AR) using testing images subjected to various types of simulated and realistic partial distortion and occlusion. The new system has been compared to other approaches and has demonstrated improved performance.
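The core intuition, ignoring severely mismatched local features rather than letting them dominate the match score, can be illustrated with a much simplified stand-in for PUM. The actual model combines posteriors over feature subsets probabilistically; the keep-best-k rule and all numbers below are ours, for illustration only:

```python
import numpy as np

def union_score(feature_logps: np.ndarray, keep: int) -> float:
    """Score a face by the sum of its `keep` best-matching local features,
    ignoring the rest; a crude stand-in for the posterior union idea."""
    best = np.sort(feature_logps)[-keep:]   # ascending sort, take largest
    return float(best.sum())

# Illustrative log-probabilities of 6 local features for two candidate
# identities; features 5-6 of candidate A are "occluded" (badly mismatched).
cand_a = np.array([-1.0, -1.2, -0.9, -1.1, -9.0, -8.5])
cand_b = np.array([-2.0, -2.1, -1.9, -2.2, -2.0, -2.1])

full_a, full_b = cand_a.sum(), cand_b.sum()   # naive: sum all features
pum_a = union_score(cand_a, keep=4)           # robust: ignore worst two
pum_b = union_score(cand_b, keep=4)
```

Summing all features lets the two occluded features of candidate A drag its score below candidate B's; keeping only the reliable features restores the correct decision, which is the robustness property the abstract describes.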
Abstract:
Latent semantic indexing (LSI) is a technique used for intelligent information retrieval (IR). It can be used as an alternative to traditional keyword matching IR and is attractive in this respect because of its ability to overcome problems with synonymy and polysemy. This study investigates various aspects of LSI: the effect of the Haar wavelet transform (HWT) as a preprocessing step for the singular value decomposition (SVD) in the key stage of the LSI process; and the effect of different threshold types in the HWT on the search results. The developed method allows the visualisation and processing of the term document matrix, generated in the LSI process, using HWT. The results have shown that precision can be increased by applying the HWT as a preprocessing step, with better results for hard thresholding than soft thresholding, whereas standard SVD-based LSI remains the most effective way of searching in terms of recall value.
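A single-level Haar transform with hard and soft thresholding, as applied here to columns of the term-document matrix, can be sketched in a few lines. The column values and the threshold (0.5) are illustrative:

```python
import numpy as np

def haar_1d(x):
    """Single-level Haar transform: (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # pairwise averages (scaled)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # pairwise differences (scaled)
    return a, d

def inv_haar_1d(a, d):
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def hard_threshold(d, t):
    # keep coefficients at or above t, zero the rest
    return np.where(np.abs(d) >= t, d, 0.0)

def soft_threshold(d, t):
    # shrink all coefficients towards zero by t
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

col = np.array([3.0, 3.1, 0.0, 0.1, 5.0, 4.9, 1.0, 1.2])  # one TDM column
a, d = haar_1d(col)
col_hard = inv_haar_1d(a, hard_threshold(d, t=0.5))
col_soft = inv_haar_1d(a, soft_threshold(d, t=0.5))
```

With no thresholding the transform round-trips exactly; with thresholding, small pairwise differences are discarded, which is the denoising effect the study exploits before the SVD step.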
Abstract:
Context: The development of a consolidated knowledge base for social work requires rigorous approaches to identifying relevant research. Method: The quality of 10 databases and a web search engine was appraised by systematically searching for research articles on resilience and burnout in child protection social workers. Results: Applied Social Sciences Index and Abstracts, Social Services Abstracts and Social Sciences Citation Index (SSCI) had the greatest sensitivity, each retrieving more than double the number of relevant articles retrieved by any other database. PsycINFO and Cumulative Index to Nursing and Allied Health (CINAHL) had the highest precision. Google Scholar had modest sensitivity and good precision in relation to the first 100 items. SSCI, Google Scholar, Medline and CINAHL retrieved the highest numbers of hits not retrieved by any other database. Conclusion: A range of databases is required for even modestly comprehensive searching. Advanced database searching methods are being developed, but the profession requires greater standardization of terminology to assist in information retrieval.
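The two appraisal measures used here reduce to simple ratios against a gold-standard set of relevant articles. A minimal sketch with invented counts (not the study's figures):

```python
def sensitivity(relevant_retrieved: int, total_relevant: int) -> float:
    """Share of all known relevant articles that the database retrieved."""
    return relevant_retrieved / total_relevant

def precision(relevant_retrieved: int, total_retrieved: int) -> float:
    """Share of the database's hits that were actually relevant."""
    return relevant_retrieved / total_retrieved

# Illustrative counts: 50 relevant articles known across all sources.
gold_standard = 50
# db -> (relevant hits, total hits); both databases are hypothetical.
hits = {"DatabaseA": (40, 120), "DatabaseB": (18, 25)}
scores = {db: (sensitivity(rel, gold_standard), precision(rel, ret))
          for db, (rel, ret) in hits.items()}
```

The trade-off in the abstract falls out directly: a broad database can have high sensitivity but low precision (many irrelevant hits), while a narrow one retrieves fewer relevant articles but with a cleaner hit list.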
Abstract:
Decision making is an important element throughout the life-cycle of large-scale projects. Decisions are critical as they have a direct impact upon the success/outcome of a project and are affected by many factors, including the certainty and precision of information. In this paper we present an evidential reasoning framework which applies Dempster-Shafer Theory and its variant Dezert-Smarandache Theory to aid decision makers in making decisions where the knowledge available may be imprecise, conflicting and uncertain. This conceptual framework is novel in that natural language based information extraction techniques are utilized in the extraction and estimation of beliefs from diverse textual information sources, rather than assuming these estimations as already given. Furthermore, we describe an algorithm to define a set of maximal consistent subsets before fusion occurs in the reasoning framework. This is important as inconsistencies between subsets may produce results which are incorrect/adverse in the decision making process. The proposed framework can be applied to problems involving material selection, and a use case from the engineering domain is presented to illustrate the approach.
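Dempster's rule of combination, the fusion step at the core of Dempster-Shafer reasoning, can be sketched directly. The mass functions for the two "sources" below are invented for illustration; in the paper the belief estimates come from text extraction:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two basic mass assignments, each a dict
    mapping frozenset-of-hypotheses -> mass.  Mass landing on an empty
    intersection is conflict and is renormalised away."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are fully incompatible")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two hypothetical textual sources assessing candidate materials.
steel, alu = frozenset({"steel"}), frozenset({"aluminium"})
both = steel | alu                    # ignorance: either could be right
m1 = {steel: 0.6, both: 0.4}          # source 1 leans towards steel
m2 = {alu: 0.3, both: 0.7}            # source 2 weakly favours aluminium
m = dempster_combine(m1, m2)          # fused mass assignment
```

The renormalisation step is exactly why the paper's maximal-consistent-subset algorithm matters: combining highly conflicting sources directly can produce counter-intuitive fused beliefs.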
Abstract:
The Supreme Court of the United States in Feist v. Rural (Feist, 1991) specified that compilations or databases, and other works, must have a minimal degree of creativity to be copyrightable. The significance and global diffusion of the decision are only matched by the difficulties it has posed for interpretation. The judgment does not specify what is to be understood by creativity, although it does give a full account of the negative of creativity, as ‘so mechanical or routine as to require no creativity whatsoever’ (Feist, 1991, p.362). The negative of creativity as highly mechanical has particularly diffused globally.
A recent interpretation has correlated ‘so mechanical’ (Feist, 1991) with an automatic mechanical procedure or computational process, using a rigorous exegesis fully to correlate the two uses of mechanical. The negative of creativity is then understood as an automatic computation and as a highly routine process. Creativity itself is conversely understood as non-computational activity, above a certain level of routinicity (Warner, 2013).
The distinction between the negative of creativity and creativity is strongly analogous to an independently developed distinction between forms of mental labour, between semantic and syntactic labour. Semantic labour is understood as human labour motivated by considerations of meaning and syntactic labour as concerned solely with patterns. Semantic labour is distinctively human while syntactic labour can be directly humanly conducted or delegated to machine, as an automatic computational process (Warner, 2005; 2010, pp.33-41).
The value of the analogy is to greatly increase the intersubjective scope of the distinction between semantic and syntactic mental labour. The global diffusion of the standard for extreme absence of copyrightability embodied in the judgment also indicates the possibility that the distinction fully captures the current transformation in the distribution of mental labour, where syntactic tasks which were previously humanly performed are now increasingly conducted by machine.
The paper has substantive and methodological relevance to the conference themes. Substantively, it is concerned with human creativity, with rationality as not reducible to computation, and has relevance to the language myth, through its indirect endorsement of a non-computable or not mechanical semantics. These themes are supported by the underlying idea of technology as a human construction. Methodologically, it is rooted in the humanities and conducts critical thinking through exegesis and empirically tested theoretical development.
References
Feist. (1991). Feist Publications, Inc. v. Rural Tel. Service Co., Inc. 499 U.S. 340.
Warner, J. (2005). Labor in information systems. Annual Review of Information Science and Technology, 39, 551-573.
Warner, J. (2010). Human Information Retrieval (History and Foundations of Information Science Series). Cambridge, MA: MIT Press.
Warner, J. (2013). Creativity for Feist. Journal of the American Society for Information Science and Technology, 64(6), 1173-1192.
Abstract:
Textual problem-solution repositories are available today in various forms, most commonly as problem-solution pairs from community question answering systems. Modern search engines that operate on the web can suggest possible completions in real-time for users as they type in queries. We study the problem of generating intelligent query suggestions for users of customized search systems that enable querying over problem-solution repositories. Due to the small scale and specialized nature of such systems, we often do not have the luxury of depending on query logs for finding query suggestions. We propose a retrieval model for generating query suggestions for search on a set of problem-solution pairs. We harness the problem-solution partition inherent in such repositories to improve upon traditional query suggestion mechanisms designed for systems that search over general textual corpora. We evaluate our technique over real problem-solution datasets and illustrate that our technique provides large and statistically significant improvements.
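One simple way to exploit a problem-solution partition is to weight completion candidates drawn from problem texts above those from solutions, on the assumption (ours, for illustration; this is not the paper's retrieval model) that user queries read more like problems than like solutions:

```python
from collections import Counter

def suggest(prefix, repo, top_k=3, problem_weight=2.0):
    """Rank completion terms for `prefix` over a problem-solution
    repository, counting problem-side occurrences more heavily."""
    scores = Counter()
    for problem, solution in repo:
        for w in problem.lower().split():
            if w.startswith(prefix):
                scores[w] += problem_weight   # problem terms count double
        for w in solution.lower().split():
            if w.startswith(prefix):
                scores[w] += 1.0              # solution terms count once
    return [w for w, _ in scores.most_common(top_k)]

# A tiny hypothetical repository of (problem, solution) pairs.
repo = [
    ("printer driver fails to install", "reinstall the driver package"),
    ("printer prints blank pages", "replace the ink cartridge"),
    ("wifi drops after sleep", "disable power saving on the adapter"),
]
completions = suggest("pri", repo)
```

No query log is needed: the suggestion vocabulary comes entirely from the repository itself, which matches the small-scale setting the abstract describes.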
Abstract:
We consider the problem of linking web search queries to entities from a knowledge base such as Wikipedia. Such linking enables converting a user’s web search session to a footprint in the knowledge base that could be used to enrich the user profile. Traditional methods for entity linking have been directed towards finding entity mentions in text documents such as news reports, each of which is possibly linked to multiple entities, enabling the usage of measures like entity set coherence. Since web search queries are very small text fragments, such criteria that rely on the existence of a multitude of mentions do not work too well on them. We propose a three-phase method for linking web search queries to Wikipedia entities. The first phase does IR-style scoring of entities against the search query to narrow down to a subset of entities, which are expanded using hyperlink information in the second phase to a larger set. Lastly, we use a graph traversal approach to identify the top entities to link the query to. Through an empirical evaluation on real-world web search queries, we illustrate that our methods significantly enhance the linking accuracy over state-of-the-art methods.
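The three phases can be mocked up on a toy knowledge base. The entities, descriptions, link graph and scoring rules below are all invented; the paper's actual IR scoring and graph traversal are more sophisticated:

```python
def link_query(query, entities, links, seed_k=2, top_k=2):
    """Toy three-phase linker: (1) term-overlap scoring picks seed
    entities, (2) seeds are expanded over a hyperlink graph, (3) the
    expanded set is ranked by links into it from the seeds."""
    q_terms = set(query.lower().split())
    # Phase 1: IR-style scoring against entity descriptions.
    scored = sorted(entities,
                    key=lambda e: -len(q_terms & set(entities[e].lower().split())))
    seeds = scored[:seed_k]
    # Phase 2: expand the seeds with their hyperlink neighbours.
    candidates = set(seeds)
    for s in seeds:
        candidates |= set(links.get(s, ()))
    # Phase 3: simple graph score: count edges from seeds (+1 if a seed).
    def graph_score(e):
        return sum(1 for s in seeds if e in links.get(s, ())) + (e in seeds)
    return sorted(candidates, key=lambda e: -graph_score(e))[:top_k]

# Hypothetical mini knowledge base with hyperlink structure.
entities = {
    "Python_(language)": "python programming language",
    "Monty_Python": "python british comedy group",
    "Guido_van_Rossum": "creator of the python language",
}
links = {
    "Python_(language)": ["Guido_van_Rossum"],
    "Guido_van_Rossum": ["Python_(language)"],
    "Monty_Python": [],
}
top = link_query("python language tutorial", entities, links)
```

Even on this toy graph, the hyperlink phase is what disambiguates: "Monty_Python" matches the query term "python" but is unconnected to the other candidates, so it drops out in the graph phase.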
Abstract:
Final report submitted for the degree of Master in Pre-School Education and Teaching in the 1st Cycle of Basic Education. Supervisor: Mestre Helena Luís. Co-supervisor: Professora Doutora Gracinda Hamido.
Abstract:
Doctoral thesis, Education (Evaluation in Education), Universidade de Lisboa, Instituto de Educação, 2014.
Abstract:
Air quality is an increasing concern of the European Union, local authorities, scientists and, most of all, inhabitants, who are becoming more aware of the quality of the surrounding environment. Bioaerosols may consist of various elements, the most important being pollen grains, fungal spores, bacteria and viruses. More than 100 genera of fungal spores have been identified as potential allergens that cause an immunological response in susceptible individuals. Alternaria and Cladosporium have been recognised as the most important fungal species responsible for respiratory tract diseases, such as asthma, eczema, rhinitis and chronic sinusitis. While a lot of attention has been given to these fungal species, only a limited number of studies can be found on Didymella and Ganoderma, although their allergenic properties have been proved clinically. Monitoring of allergenic fungal spore concentrations in the air is therefore very important, particularly in densely populated areas such as Worcester, UK. In this thesis a five-year spore data set is presented, collected using a 7-day volumetric spore trap and analysed with the aid of light microscopy, statistical tests and geographic information system techniques. Although the Kruskal-Wallis test detected statistically significant differences between annual concentrations of all examined fungal spore types, specific patterns in their distribution were also found. Alternaria spores were present in the air from mid-May/mid-June until September-October, with the peak occurring in August. Cladosporium sporulated between mid-May and October, with the maximum concentration recorded in July. Didymella spores were seen from June/July up to September, while peaks occurred in August. Ganoderma produced spores for 6 months (May-October), and the maximum concentration was found in September.
With respect to diurnal fluctuations, Alternaria peaked between 22:00 and 23:00, Cladosporium between 13:00 and 15:00, Didymella between 04:00-05:00 and 22:00-23:00, and Ganoderma from 03:00 to 06:00. Spatial analysis showed that the sources of all fungal species were located in England, and there was no evidence of long-distance transport from the continent. The maximum concentration of spores occurred several hours after the approximate time of spore release from the crops. This was in agreement with the diurnal profiles of spore concentration recorded in Worcester, UK. Spores of Alternaria, Didymella and Ganoderma revealed a regional origin, in contrast to Cladosporium, whose sources were situated locally. Hence, the weather conditions registered locally did not exhibit strong statistically significant correlations with fungal spore concentrations. This also had an impact on the performance of the forecasting models. The best model was obtained for Cladosporium, with an accuracy of 66%.
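The Kruskal-Wallis comparison of annual concentrations can be reproduced in outline with SciPy. The gamma-distributed "yearly" samples below are synthetic stand-ins for the Worcester counts, chosen only because spore concentrations are skewed and non-negative:

```python
import numpy as np
from scipy.stats import kruskal

# Synthetic daily spore concentrations (spores/m^3) for three years;
# illustrative data, not the thesis's five-year Worcester record.
rng = np.random.default_rng(42)
year_1 = rng.gamma(shape=2.0, scale=50.0, size=100)
year_2 = rng.gamma(shape=2.0, scale=80.0, size=100)
year_3 = rng.gamma(shape=2.0, scale=120.0, size=100)

# Kruskal-Wallis: a rank-based test, so no normality assumption is needed,
# which suits skewed concentration data.
h_stat, p_value = kruskal(year_1, year_2, year_3)
significant = p_value < 0.05   # reject equal distributions across years
```

A rank-based test is the natural choice here precisely because aerobiological counts rarely satisfy the normality assumption of ANOVA.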