968 results for STANDARDIZATION
Abstract:
Recently, user tagging systems have grown in popularity on the web. The tagging process is quite simple for ordinary users, which contributes to its popularity. However, free vocabulary suffers from a lack of standardization and semantic ambiguity. It is possible to capture the semantics of user tagging in some form of ontology, but applying the resulting ontology to recommendation making has been less successful. In this paper we discuss our approach to learning a domain ontology from user tagging information and apply the extracted tag ontology in a pilot tag recommendation experiment. The initial results show that using the tag ontology to re-rank the recommended tags improves the accuracy of the tag recommendation.
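As a rough illustration of the re-ranking step (the paper's actual scoring is not reproduced here), the following Python sketch blends a recommender's base score with a tag's semantic closeness to the user's existing tags, assuming the ontology is available as a map from each tag to its set of ancestor concepts; the Jaccard measure and the weight `alpha` are illustrative choices.

```python
# Illustrative sketch only: ontology given as {tag: set_of_ancestor_concepts};
# the paper's actual scoring function is not reproduced.

def jaccard(a, b):
    """Jaccard overlap of two ancestor sets (0.0 when both are empty)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def rerank_tags(candidates, user_tags, ancestors, alpha=0.5):
    """Re-rank (tag, base_score) pairs by blending the recommender's base
    score with the tag's best semantic overlap with the user's own tags."""
    def semantic(tag):
        anc = ancestors.get(tag, set())
        return max((jaccard(anc, ancestors.get(t, set())) for t in user_tags),
                   default=0.0)
    return sorted(candidates,
                  key=lambda ts: alpha * ts[1] + (1 - alpha) * semantic(ts[0]),
                  reverse=True)

# "jaguar" shares the ancestry of the user's tag "lion", so it overtakes
# "car" despite a lower base score.
ancestors = {"jaguar": {"cat", "animal"}, "lion": {"cat", "animal"},
             "car": {"vehicle"}}
print(rerank_tags([("car", 0.6), ("jaguar", 0.5)], ["lion"], ancestors))
```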
Abstract:
As the service-oriented architecture paradigm has become ever more popular, different standardization efforts have been proposed by various consortia to enable interaction among heterogeneous environments through this paradigm. This chapter will overview the most prevalent of these SOA efforts. It will first show how technical services can be described, how they can interact with each other, and how they can be discovered by users. Next, the chapter will present different standards that facilitate service composition and the design of service-oriented environments in light of a universal understanding of service orientation. The chapter will conclude with a summary and a discussion of the limitations of the reviewed standards with respect to their ability to describe service properties. This paves the way for the next chapters, where the USDL standard, which aims to lift such limitations, will be presented.
Abstract:
With the emergence of Web 2.0, Web users can classify Web items of interest by using tags. Tags reflect users' understanding of the items collected under each tag. Exploring user tagging behavior provides a promising way to understand users' information needs. However, a free and relatively uncontrolled vocabulary has its drawbacks: lack of standardization and semantic ambiguity. Moreover, the relationships among tags have scarcely been explored, even though these rich relationships could provide valuable information for better understanding users. In this paper, we propose a novel approach to construct a tag ontology, based on the widely used general ontology WordNet, that captures the semantics and the structural relationships of tags. Tag ambiguity is a challenging problem that must be dealt with in order to construct a high-quality tag ontology. We propose strategies to find the semantic meanings of tags and a strategy to disambiguate the semantics of tags based on the opinion of WordNet lexicographers. To evaluate the usefulness of the constructed tag ontology, we apply it in a tag recommendation experiment. We believe this is the first application of a tag ontology to recommendation making. The initial results show that using the tag ontology to re-rank the recommended tags improves the accuracy of the tag recommendation.
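The disambiguation based on "the opinion of WordNet lexicographers" suggests using WordNet's lexicographer files, which assign each sense a coarse category such as noun.animal. Below is a minimal sketch with NLTK's WordNet interface, assuming a simple majority vote over the senses of co-occurring tags rather than the paper's exact strategy.

```python
# Hedged sketch: prefer the sense whose lexicographer file (e.g.
# 'noun.animal') is most common among the senses of co-occurring tags.
# Requires the WordNet data: nltk.download('wordnet')
from collections import Counter
from nltk.corpus import wordnet as wn

def disambiguate(tag, context_tags):
    """Pick a WordNet synset for `tag` whose lexicographer category best
    matches the categories voted for by its co-occurring tags."""
    votes = Counter(s.lexname() for t in context_tags for s in wn.synsets(t))
    senses = wn.synsets(tag)
    if not senses:
        return None
    # Ties fall back to WordNet's first (most frequent) sense.
    return max(senses, key=lambda s: votes.get(s.lexname(), 0))

# 'bass' next to fishing-related tags resolves to a fish-related sense
# rather than a musical one (exact synset depends on the WordNet version).
print(disambiguate("bass", ["fishing", "trout", "river"]))
```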
Abstract:
A key issue in the economic development and performance of organizations is the existence of standards. Their definition and control are sources of power, and it is important to understand the concept behind them, as it gives standards their direction and their legitimacy, and to explore how they are represented and applied. The difficulties posed by classical micro-economics in establishing a theory of standardization compatible with its fundamental axioms are acknowledged. We propose to reconsider the problem from the opposite perspective, questioning this theoretical base and reformulating the assumptions about the independent and autonomous decisions taken by actors. The Theory of Conventions will offer us a theoretical framework and tools enabling us to understand the systemic dimension and dynamic structure of standards, which will be seen as a special case of conventions. This work aims to provide a sound basis for, and promote greater awareness in, the development of global project management standards. It also aims to emphasize that social construction is not a matter of copyright but a matter of open minds, collective cognitive processes, and freedom for the common wealth.
Abstract:
Digital human models (DHM) have evolved into useful tools for ergonomic workplace design and product development, and are found in various industries and in education. The DHM systems that dominate the market were developed for specific purposes and differ significantly, which is not only reflected in incompatible DHM simulation results but also provokes misunderstanding of how DHM simulations relate to real-world problems. While DHM developers are constrained by uncertainty about user needs and a lack of standards for model data, users are confined to one specific product and cannot exchange results or upgrade to another DHM system, as their previous results would be rendered worthless. Furthermore, the origin and validity of anthropometric and biomechanical data are not transparent to the user. The lack of standardisation in DHM systems has become a major roadblock to further system development, affecting all stakeholders in the DHM industry. Evidently, a framework for standardising digital human models is necessary to overcome the current obstructions.
Abstract:
With the growth in the number and sophistication of widely available services, there is a new urgency for comprehensive service descriptions that take into account both technical and business aspects. Recent years have seen a number of best-of-breed service description efforts focusing on specific aspects of services. The Handbook of Service Description provides state-of-the-art insights into these. The main parts of the book provide the most detailed documentation of the Unified Service Description Language (USDL) to date. USDL has been developed across several research institutes and publicly funded projects in Europe and Australia, and is currently extending to the Americas as part of a standardization push through W3C. The scope of services extends across IT and business, i.e., the socio-technical sense of services scaled to business networks. In this respect, purely human, purely automated, and mixed human/automated services were considered, whose boundary of cognizance is made available through the tasks of service provisioning, discovery, access, and delivery. Taken together, the Handbook of Service Description provides a comprehensive reference for a wide-reaching audience, including researchers, practitioners, managers, and students who aspire to learn about or to create a deeper scientific foundation for service description and its methodological aspects.
Abstract:
This paper analyses the attempted installation of the 1990 Australian Education Council commissioned report 'Teacher Education in Australia' (the Ebbeck Report), a document which proposed a radical reformulation and relative standardization of the content and structure of initial teacher education in Australia. The paper draws on Michel Foucault's concept of 'governmentality' to examine the discursive and technological dimensions of this programme of political rule. The paper makes apparent the 'microphysics of power' that were generated within, particularly, the Queensland educational community in the attempt to operationalise this report. Analysing educational policy from the perspective of 'government', the paper contends, directs attention to the conditions of operation of policy practices and reveals the dependence of educational policy on particular technical conditions of existence, routines and rituals of bureaucracy, forms of expertise and intellectual technologies, and the enlistment of agencies and authorities both within and outside the boundaries of the state.
Abstract:
Approximate clone detection is the process of identifying similar process fragments in business process model collections. The tool presented in this paper can efficiently cluster approximate clones in large process model repositories. Once a repository is clustered, users can filter and browse the clusters using different filtering parameters. Our tool can also visualize clusters in 2D space, allowing a better understanding of clusters and their member fragments. This demonstration will be useful for researchers and practitioners working on large process model repositories, where process standardization is a critical task for increasing consistency and reducing the complexity of the repository.
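To give a flavour of approximate clone clustering (the tool's actual fragment similarity measure is not reproduced here), the sketch below models fragments as edge sets and groups them single-link, via union-find, whenever pairwise similarity exceeds a threshold.

```python
# Illustrative sketch only: fragments as sets of labelled edges, clustered
# single-link over a Jaccard-similarity threshold using union-find.

def similarity(f1, f2):
    """Toy fragment similarity: overlap of edge sets."""
    return len(f1 & f2) / len(f1 | f2)

def cluster_clones(fragments, threshold=0.7):
    """Group fragments whose pairwise similarity reaches `threshold`."""
    parent = list(range(len(fragments)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i in range(len(fragments)):
        for j in range(i + 1, len(fragments)):
            if similarity(fragments[i], fragments[j]) >= threshold:
                parent[find(i)] = find(j)   # merge the two clusters
    clusters = {}
    for i in range(len(fragments)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

frags = [{"a->b", "b->c"}, {"a->b", "b->c", "c->d"}, {"x->y"}]
print(cluster_clones(frags, threshold=0.6))  # [[0, 1], [2]]
```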
Abstract:
Due to the explosive growth of the Web, the domain of Web personalization has gained great momentum in both research and commercial areas. Among the most popular Web personalization systems are recommender systems, and choosing which user information to use for profiling is crucial to them. In Web 2.0, user tagging systems are one facility that helps users organize Web resources of interest. Exploring user tagging behavior provides a promising way to understand users' information needs, since tags are given directly by users. However, the free and relatively uncontrolled vocabulary means that user-defined tags lack standardization and are semantically ambiguous. Also, the rich relationships among tags need to be explored, since they could provide valuable information for better understanding users. In this paper, we propose a novel approach for learning a tag ontology, based on the widely used lexical database WordNet, that captures the semantics and the structural relationships of tags. We present personalization strategies to disambiguate the semantics of tags by combining the opinion of WordNet lexicographers with users' tagging behavior. To personalize further, users are clustered to generate a more accurate ontology for each particular group of users. To evaluate the usefulness of the tag ontology, we use it in a pilot tag recommendation experiment, exploiting its semantic information to improve recommendation performance. The initial results show that the personalized information improves the accuracy of the tag recommendation.
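The user-clustering step can be pictured with a small sketch: users represented as tag-frequency vectors and grouped with k-means, so that a more specific ontology can then be derived per cluster. The vectorization, the toy data, and the choice of k below are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch of clustering users by their tag usage with scikit-learn.
from sklearn.cluster import KMeans
from sklearn.feature_extraction import DictVectorizer

# Toy data: each user is a map from tag to usage count (illustrative only).
users = {
    "u1": {"python": 5, "django": 3},
    "u2": {"python": 4, "flask": 2},
    "u3": {"baking": 6, "bread": 2},
}
vec = DictVectorizer(sparse=False)
X = vec.fit_transform(users.values())      # rows align with the users above
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for user, label in zip(users, labels):
    print(user, "-> cluster", label)       # u1 and u2 together; u3 apart
```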
Abstract:
For the evaluation, design, and planning of traffic facilities and measures, traffic simulation packages are the de facto tools for consultants, policy makers, and researchers. However, the available commercial simulation packages do not always offer the desired workflow and flexibility for academic research. In many cases, researchers resort to designing and building their own dedicated models, without an intrinsic incentive (or the practical means) to make the results available in the public domain. To make matters worse, a substantial part of these efforts goes into rebuilding basic functionality and, in many respects, reinventing the wheel. This problem affects not only the research community but the entire traffic simulation community, and it frustrates the development of traffic simulation in general. To address this problem, this paper describes an open-source approach, OpenTraffic, which is being developed as a collaborative effort between the Queensland University of Technology, Australia; the National Institute of Informatics, Tokyo; and the Technical University of Delft, the Netherlands. The OpenTraffic simulation framework enables academics from different geographic areas and disciplines within the traffic domain to work together and contribute to a specific topic of interest, ranging from travel choice behavior to car following, and from response to intelligent transportation systems to activity planning. The modular approach enables users of the software to focus on their area of interest, while other functional modules can be regarded as black boxes. Specific attention is paid to the standardization of data inputs and outputs for traffic simulations; such standardization will allow data to be shared with many existing commercial simulation packages.
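The modular idea can be illustrated with a pluggable car-following module behind a small, standardized interface. The class and parameter names below are assumptions for illustration only (OpenTraffic's real APIs are not reproduced); the physics is the well-known Intelligent Driver Model (IDM).

```python
# Sketch: a car-following model as a black-box module with a standardized
# input (speeds and gap) and output (acceleration in m/s^2).
import math

class IDMCarFollowing:
    """Pluggable car-following module implementing the Intelligent
    Driver Model; interface names are illustrative assumptions."""
    def __init__(self, v0=33.3, T=1.5, a=1.0, b=2.0, s0=2.0):
        # v0: desired speed (m/s), T: time headway (s), a: max acceleration,
        # b: comfortable deceleration, s0: minimum standstill gap (m).
        self.v0, self.T, self.a, self.b, self.s0 = v0, T, a, b, s0

    def acceleration(self, v, v_lead, gap):
        dv = v - v_lead                          # approach rate to leader
        s_star = self.s0 + v * self.T + v * dv / (2 * math.sqrt(self.a * self.b))
        return self.a * (1 - (v / self.v0) ** 4 - (max(s_star, 0.0) / gap) ** 2)

model = IDMCarFollowing()
print(model.acceleration(v=25.0, v_lead=20.0, gap=30.0))  # negative: braking
```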
Abstract:
"This essay is a critique of the scientific and policy rationales for transnational standardization. It analyzes two examples of policy export: early childhood standards in one of North America’s oldest Indigenous communities and the ongoing development of international standards for university teaching. It examines calls for American education to look to Finland, Canada, and Singapore for models of reform and innovation, focusing on the complex historical, cultural, and political settlements at work in these countries. The author addresses two affiliated challenges: first, the possibility of a principled understanding of evidence and policy in cultural and political-economic context, and second, the possibility of a mediative educational science that might guide policy formation." -- EDUCATIONAL RESEARCHER November 2011 vol. 40 no. 8 367-377
Abstract:
In 1963, the National Institutes of Health (NIH) first issued guidelines for animal housing and husbandry. The most recent 2010 revision emphasizes animal care “in ways judged to be scientifically, technically, and humanely appropriate” (National Institutes of Health, 2010, p. XIII). The goal of these guidelines is to ensure humanitarian treatment of animals and to optimize the quality of research. Although these animal care guidelines cover a substantial amount of information regarding animal housing and husbandry, researchers generally do not report all these variables (see Table 1). The importance of housing and husbandry conditions with respect to standardization across different research laboratories has been debated previously (Crabbe et al., 1999; Van Der Staay and Steckler, 2002; Wahlsten et al., 2003; Wolfer et al., 2004; Van Der Staay, 2006; Richter et al., 2010, 2011). This paper focuses on several animal husbandry and housing issues that are particularly relevant to stress responses in rats, including transportation, handling, cage changing, housing conditions, light levels and the light–dark cycle. We argue that these key animal housing and husbandry variables should be reported in greater detail in an effort to raise awareness about extraneous experimental variables, especially those that have the potential to interact with the stress response.
Abstract:
Process-Aware Information Systems (PAISs) support the execution of operational processes that involve people, resources, and software applications on the basis of process models. Process models describe vast, often infinite, numbers of process instances, i.e., workflows supported by the systems. With the increasing adoption of PAISs, large process model repositories have emerged in companies and public organizations. These repositories constitute significant information resources. Accurate and efficient retrieval of process models and/or process instances from such repositories is interesting for multiple reasons, e.g., searching for similar models/instances, filtering, reuse, standardization, process compliance checking, and verification of formal properties. This paper proposes a technique for indexing process models that relies on their alternative representations, called untanglings. We show how untanglings can be used to retrieve process models based on the process instances they specify, via a solution to the total executability problem. Experiments with industrial process models show that the proposed retrieval approach is up to three orders of magnitude faster than the state of the art.
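The untangling-based index itself is specific to the paper, but the retrieval pattern it accelerates can be sketched generically: an inverted index from activity labels to models pre-filters candidates before any expensive executability check on the full model. All names and structures below are illustrative only.

```python
# Generic pre-filtering sketch, not the paper's untangling index.
from collections import defaultdict

class ProcessModelIndex:
    def __init__(self):
        self._by_activity = defaultdict(set)   # activity label -> model ids

    def add(self, model_id, activities):
        for act in activities:
            self._by_activity[act].add(model_id)

    def candidates(self, trace):
        """Models containing every activity of `trace`; only these need
        the (expensive) check of whether they can actually execute it."""
        sets = [self._by_activity.get(act, set()) for act in set(trace)]
        return set.intersection(*sets) if sets else set()

idx = ProcessModelIndex()
idx.add("order2cash", {"receive order", "ship", "invoice"})
idx.add("procure2pay", {"receive invoice", "pay"})
print(idx.candidates(["receive order", "invoice"]))  # {'order2cash'}
```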
Abstract:
Thalidomide is an anti-angiogenic agent currently used to treat patients with malignant cachexia or multiple myeloma. Lenalidomide (CC-5013) is an immunomodulatory thalidomide analogue licensed in the United States of America (USA) for the treatment of a subtype of myelodysplastic syndrome. This two-centre, open-label phase I study evaluated dose-limiting toxicities in 55 patients with malignant solid tumours refractory to standard chemotherapies. Lenalidomide capsules were taken once daily for 12 weeks according to one of the following three schedules: (I) 25 mg daily for the first 7 d, with the daily dose increased by 25 mg each week up to a maximum daily dose of 150 mg; (II) 25 mg daily for 21 d followed by a 7-d rest period, with the 4-week cycle repeated for 3 cycles; (III) 10 mg daily continuously. Twenty-six patients completed the study period. Two patients experienced a grade 3 hypersensitivity rash. Four patients in cohort I and 4 patients in cohort II suffered grade 3 or 4 neutropenia. In 2 patients with predisposing medical factors, grade 3 cardiac dysrhythmia was recorded. Grade 1 neurotoxicity was detected in 6 patients. One complete and two partial radiological responses were measured by computed tomography scanning; 8 patients had stable disease after 12 weeks of treatment. Fifteen patients remained on treatment as named patients; 1 with metastatic melanoma remains in clinical remission 3.5 years from trial entry. This study indicates the tolerability and potential clinical efficacy of lenalidomide in patients with advanced solid tumours who have previously received multi-modality treatment. Depending on the extent of myelosuppressive pre-treatment, dose schedules (II) or (III) are advocated for large-scale trials of long-term administration.
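Dose schedule (I) is simple escalation arithmetic, which a small sketch makes explicit: 25 mg daily in week 1, the daily dose increased by 25 mg each week, capped at 150 mg. The function and parameter names are illustrative, not part of the study protocol.

```python
# Worked sketch of escalation schedule (I) from the abstract.
def schedule_one_dose(day, start=25, step=25, cap=150):
    """Daily dose in mg on a given 1-indexed study day."""
    week = (day - 1) // 7                # 0-indexed week number
    return min(start + step * week, cap)

for day in (1, 8, 36, 84):
    print(f"day {day:>2}: {schedule_one_dose(day)} mg")
# day  1: 25 mg; day  8: 50 mg; day 36: 150 mg; day 84: 150 mg (capped)
```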