938 results for triangulation of documents
Abstract:
"Answer to the strictures of Mr. T. Falconer": and "Mr. Falconer's reply to Mr. Greenhow's answer, with Mr. Greenhow's rejoinder".
Abstract:
In the global strategy for the preservation of genetic resources of farm animals, the implementation of information technology is of great importance. In this regard, platform-independent information tools and approaches for data exchange are needed in order to obtain aggregate values for the regions and countries in which a given breed is distributed. The current paper presents an XML-based solution for data exchange in the management of genetic resources of small populations of farm animals. The exchanged documents have specific requirements that follow from the goal of the data analysis. Three main types of documents are distinguished and their XML formats are discussed. A DTD and an XML Schema are suggested for each type, and examples of XML documents are also given.
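As a rough illustration of the kind of exchange document the abstract describes, the sketch below builds a hypothetical breed-population record with Python's standard library. The element names and values are assumptions for illustration only, not the schema proposed in the paper.

```python
# Minimal sketch (assumption: element names are illustrative, not the paper's schema).
# Builds one hypothetical breed-population record of the kind an XML-based
# exchange format for farm-animal genetic resources might contain.
import xml.etree.ElementTree as ET

record = ET.Element("breed_record")
ET.SubElement(record, "breed").text = "ExampleBreed"
ET.SubElement(record, "country").text = "XX"
ET.SubElement(record, "region").text = "ExampleRegion"
population = ET.SubElement(record, "population")
ET.SubElement(population, "females").text = "420"
ET.SubElement(population, "males").text = "35"

# Serialize for exchange; a receiving system could validate this against a DTD
# or XML Schema before aggregating values per region or country.
print(ET.tostring(record, encoding="unicode"))
```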
Abstract:
Collection of documents on the College of Medicine's planning process in preparation for the LMCE's site visit.
Abstract:
In support of research in the debate concerning its relevance to hospitality academics and practitioners, the author discusses how the philosophy of science shapes approaches to research, including a brief summary of empiricism and the importance of triangulating research orientations. Criticism of research in the hospitality literature often focuses on the lack of an apparent philosophy-of-science perspective and on how this perspective affects the way scholars conduct and interpret research. The Validity Network Schema (VNS) presents a triangulation model for evaluating research progress in a discipline by providing a mechanism for integrating academic and practitioner research studies.
Abstract:
ISO 9000 is a family of international standards for quality management, applicable to companies of all sizes, whether public or private. ISO 9000 quality management systems address the human, administrative and operational sides of a company. By integrating these three aspects, the organization takes full advantage of all its resources, achieving results more efficiently and reducing administrative and operating expenses. With globalization and the opening of markets, this has become a competitive advantage, giving customers, subcontractors, personnel and other stakeholders greater confidence and evidence that the organization is committed to establishing, maintaining and improving acceptable levels of quality in its products and services. Another advantage of quality systems is the clear definition of policies and functions: staff are deployed according to their abilities and focus on real customer needs. It should be mentioned that, to achieve these benefits, the organization's management must be committed to the development of its quality system and must allocate the financial and human resources to do so. These resources are minimal compared with the benefits that can be achieved.
Abstract:
Extensible Markup Language (XML) has emerged as a medium for interoperability over the Internet. As the number of documents published in the form of XML increases, there is a need for selective dissemination of XML documents based on user interests. In the proposed technique, a combination of Adaptive Genetic Algorithms and a multi-class Support Vector Machine (SVM) is used to learn a user model. Based on feedback from the users, the system automatically adapts to the user's preferences and interests. The user model and a similarity metric are used for selective dissemination of a continuous stream of XML documents. Experimental evaluations performed over a wide range of XML documents indicate that the proposed approach significantly improves the performance of the selective dissemination task with respect to accuracy and efficiency.
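A minimal sketch of the classification step described above, assuming scikit-learn and simple bag-of-words features over XML text; the adaptive genetic algorithm that the abstract uses to tune the user model is omitted, and the documents, labels and feature choice are illustrative rather than the authors' pipeline.

```python
# Sketch of the SVM part of a selective-dissemination pipeline (assumptions:
# scikit-learn, bag-of-words features over XML text; the adaptive GA that the
# abstract uses to tune the user model is not reproduced here).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

# Hypothetical training data: XML document text paired with user-interest classes.
docs = [
    "<doc><topic>football</topic><body>match report scores</body></doc>",
    "<doc><topic>finance</topic><body>quarterly earnings stocks</body></doc>",
    "<doc><topic>weather</topic><body>storm warning forecast</body></doc>",
]
labels = ["sports", "business", "weather"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)
model = SVC(kernel="linear")  # multi-class handled internally (one-vs-one)
model.fit(X, labels)

# Route an incoming document from the stream according to its predicted class.
incoming = "<doc><topic>cup final</topic><body>goal scored in the match</body></doc>"
print(model.predict(vectorizer.transform([incoming])))
```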
Abstract:
XML has emerged as a medium for interoperability over the Internet. As the number of documents published in the form of XML increases, there is a need for selective dissemination of XML documents based on user interests. In the proposed technique, a combination of a Self Adaptive Migration Model Genetic Algorithm (SAMGA) [5] and a multi-class Support Vector Machine (SVM) is used to learn a user model. Based on feedback from the users, the system automatically adapts to the user's preferences and interests. The user model and a similarity metric are used for selective dissemination of a continuous stream of XML documents. Experimental evaluations performed over a wide range of XML documents indicate that the proposed approach significantly improves the performance of the selective dissemination task with respect to accuracy and efficiency.
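Since this variant swaps the plain adaptive GA for a migration-model genetic algorithm, the sketch below illustrates the migration idea generically: two subpopulations evolve independently and periodically exchange their best individuals. The toy fitness function, rates and sizes are placeholders and do not reproduce the paper's SAMGA/SVM pipeline.

```python
# Generic illustration of a migration-model (island) GA: two subpopulations
# evolve a bit string toward all ones and exchange their best members every
# few generations. All parameters are illustrative placeholders.
import random

GENOME, POP, GENS, MIGRATE_EVERY = 20, 12, 30, 5

def fitness(ind):
    return sum(ind)  # count of 1-bits: a stand-in objective

def evolve(pop):
    """One generation: tournament selection, uniform crossover, bit-flip mutation."""
    def pick():
        return max(random.sample(pop, 3), key=fitness)
    nxt = []
    for _ in range(len(pop)):
        a, b = pick(), pick()
        child = [random.choice(pair) for pair in zip(a, b)]        # uniform crossover
        child = [bit ^ (random.random() < 0.05) for bit in child]  # bit-flip mutation
        nxt.append(child)
    return nxt

islands = [[[random.randint(0, 1) for _ in range(GENOME)] for _ in range(POP)]
           for _ in range(2)]
for gen in range(1, GENS + 1):
    islands = [evolve(p) for p in islands]
    if gen % MIGRATE_EVERY == 0:                                   # migration step
        best = [max(p, key=fitness) for p in islands]
        islands[0][0], islands[1][0] = best[1], best[0]            # exchange best members

print([fitness(max(p, key=fitness)) for p in islands])
```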
Abstract:
This research proposes a method for extracting technology intelligence (TI) systematically from a large set of document data. To do this, the internal and external sources in the form of documents, which might be valuable for TI, are first identified. Then the existing techniques and software systems applicable to document analysis are examined. Finally, based on the reviews, a document-mining framework designed for TI is suggested and guidelines for software selection are proposed. The research output is expected to support intelligence operatives in finding suitable techniques and software systems for getting value from document-mining and thus facilitate effective knowledge management. Copyright © 2012 Inderscience Enterprises Ltd.
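For context, the sketch below shows one elementary document-mining technique of the kind such a framework would catalogue: ranking terms per document by TF-IDF. The library choice (scikit-learn) and the example documents are assumptions; the paper itself surveys techniques and software rather than prescribing this one.

```python
# Illustration of one elementary document-mining step a TI framework might
# catalogue: ranking terms per document by TF-IDF (assumption: scikit-learn;
# the documents below are placeholders, not real intelligence sources).
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "patent filing describes a solid-state battery electrolyte",
    "competitor press release announces a new battery production line",
    "conference paper evaluates electrolyte additives for fast charging",
]
vec = TfidfVectorizer(stop_words="english")
scores = vec.fit_transform(documents)
terms = vec.get_feature_names_out()

for i, doc in enumerate(documents):
    row = scores[i].toarray().ravel()
    top = sorted(zip(row, terms), reverse=True)[:3]
    print(f"doc {i}: {[t for _, t in top]}")
```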
Abstract:
This article studies and reproduces a group of documents that includes some handwritten and typed texts most likely authored by Rubén Darío, along with others whose attribution to Darío can easily be contested. These documents seem to have originated during the years of Mundial Magazine (1912-1914), and beyond the interest of their probably unpublished nature, they also show the cooperation between Darío and his collaborators in the preparation of his original manuscripts just before they were sent to the publishers.
Abstract:
This article considers the issue of low levels of motivation for foreign language learning in England by exploring how language learning is conceptualised by different key voices in that country through the examination of written data: policy documents and reports on the UK's language needs, curriculum documents, and press articles. The extent to which this conceptualisation has changed over time is explored, through the consideration of documents from two time points, before and after a change in government in the UK. The study uses corpus analysis methods in this exploration. The picture that emerges is a complex one regarding how the 'problems' and 'solutions' surrounding language learning in that context are presented in public discourse. This, we conclude, has implications for the likely success of measures adopted to increase language learning uptake in that context.
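The corpus-analysis comparison the study performs at scale can be sketched in miniature as below: relative frequencies of selected terms in texts from two time points. The snippets and terms here are invented placeholders, not the study's data or its full method.

```python
# Sketch of a simple corpus-analysis comparison: relative frequency of selected
# terms in policy texts from two time points (snippets and terms are invented
# placeholders, not the study's corpora).
from collections import Counter
import re

corpus_before = "languages are vital for the economy and for trade skills"
corpus_after = "language learning builds cultural understanding and character"

def rel_freq(text, terms):
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(tokens)
    return {t: counts[t] / len(tokens) for t in terms}

terms = ["economy", "trade", "cultural", "character"]
print("before:", rel_freq(corpus_before, terms))
print("after: ", rel_freq(corpus_after, terms))
```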
Abstract:
Even though the digital processing of documents is increasingly widespread in industry, printed documents are still largely in use. In order to process the contents of printed documents electronically, information must be extracted from digital images of documents. When dealing with complex documents, in which the contents of different regions and fields can be highly heterogeneous with respect to layout, printing quality and the use of fonts and typing standards, reconstructing the contents of documents from digital images can be a difficult problem. In this article we present an efficient solution to this problem, in which the semantic contents of fields in a complex document are extracted from a digital image.
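A minimal sketch of the general task of field extraction from a document image, assuming the Pillow and pytesseract libraries and a field bounding box known in advance; the article's own, more elaborate approach is not spelled out in the abstract and is not reproduced here.

```python
# Minimal sketch: extract the text of one field from a document image by
# cropping a known region and running OCR on it (assumptions: Pillow and
# pytesseract are available, and the field's bounding box is known in advance).
from PIL import Image
import pytesseract

def extract_field(image_path, box):
    """Crop the field region (left, top, right, bottom) and OCR its contents."""
    page = Image.open(image_path)
    field = page.crop(box)
    return pytesseract.image_to_string(field).strip()

# Hypothetical usage: read an invoice-number field from a scanned page.
# print(extract_field("scanned_page.png", (50, 100, 400, 140)))
```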
Abstract:
In this paper we review the novel meccano method. We summarize the main stages (subdivision, mapping, optimization) of this automatic tetrahedral mesh generation technique and focus the study on complex genus-zero solids. In this case, our procedure requires only a surface triangulation of the solid. A crucial consequence of our method is the volume parametrization of the solid to a cube. Using this result, we construct volume T-meshes for isogeometric analysis. The efficiency of the proposed technique is shown with several examples. A comparison between the meccano method and standard mesh generation techniques is introduced. …
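For reference, the meccano parametric domain here is a cube, and a standard way to seed a tetrahedral mesh of a cube is the six-tetrahedra (Kuhn/Freudenthal) split sketched below. This is only one generic ingredient shown under that assumption; the meccano method itself adds the subdivision, mapping and optimization stages described above.

```python
# Sketch of one generic ingredient: splitting the unit cube (the meccano
# parametric domain) into six tetrahedra along its main diagonal
# (Kuhn/Freudenthal decomposition). The meccano method would go on to refine
# such a mesh and map its vertices onto the solid via the volume parametrization.
from itertools import permutations

def unit_cube_tets():
    """Each axis permutation gives one tetrahedron from (0,0,0) to (1,1,1)."""
    tets = []
    for axes in permutations(range(3)):
        v = [0, 0, 0]
        tet = [tuple(v)]
        for a in axes:          # walk along cube edges in this axis order
            v[a] = 1
            tet.append(tuple(v))
        tets.append(tet)
    return tets

def volume(t):
    """Tetrahedron volume via the scalar triple product of its edge vectors."""
    (x0, y0, z0), p1, p2, p3 = t
    a = (p1[0] - x0, p1[1] - y0, p1[2] - z0)
    b = (p2[0] - x0, p2[1] - y0, p2[2] - z0)
    c = (p3[0] - x0, p3[1] - y0, p3[2] - z0)
    det = (a[0] * (b[1] * c[2] - b[2] * c[1])
           - a[1] * (b[0] * c[2] - b[2] * c[0])
           + a[2] * (b[0] * c[1] - b[1] * c[0]))
    return abs(det) / 6.0

tets = unit_cube_tets()
print(len(tets), sum(volume(t) for t in tets))  # 6 tetrahedra filling volume 1.0
```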
Abstract:
The study of digital competence remains an issue of interest for both the scientific community and the supranational political agenda. This study uses the Delphi method to validate the design of a questionnaire to determine the perceived importance of digital competence in higher education. The questionnaire was constructed from different framework documents in digital competence standards (NETS, ACLR, UNESCO). The triangulation of non-parametric techniques made it possible to consolidate the results obtained through the Delphi panel, the suitability of which was highlighted through the expert competence index (K). The resulting questionnaire emerges as a good tool for undertaking future national and international studies on digital competence in higher education.
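As context for the expert competence index mentioned above, a common formulation in the Delphi literature takes K as the mean of a self-assessed knowledge coefficient kc and an argumentation coefficient ka, often with a cut-off around 0.8 for high competence. Both the formula and the threshold are assumptions about standard practice, not values taken from this study; the sketch below applies them to invented experts.

```python
# Sketch of a common formulation of the expert competence index used to screen
# Delphi panellists: K = (kc + ka) / 2, where kc is a self-assessed knowledge
# coefficient and ka an argumentation coefficient, both in [0, 1]. The values
# and the 0.8 cut-off are illustrative assumptions, not the study's data.
experts = {
    "expert_1": {"kc": 0.9, "ka": 0.8},
    "expert_2": {"kc": 0.6, "ka": 0.7},
    "expert_3": {"kc": 1.0, "ka": 0.9},
}

for name, c in experts.items():
    k = (c["kc"] + c["ka"]) / 2
    label = "high competence" if k >= 0.8 else "excluded"
    print(f"{name}: K = {k:.2f} ({label})")
```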
Abstract:
Mode of access: Internet.