40 results for acceptance


Relevance: 10.00%

Abstract:

"The Protection of Traditional Knowledge Associated with Genetic Resources: The Role of Databases and Registers" ABSTRACT Yovana Reyes Tagle The misappropriation of TK has sparked a search for national and international laws to govern the use of indigenous peoples knowledge and protection against its commercial exploitation. There is a widespread perception that biopiracy or illegal access to genetic resources and associated traditional knowledge (TK) continues despite national and regional efforts to address this concern. The purpose of this research is to address the question of how documentation of TK through databases and registers could protect TK, in light of indigenous peoples increasing demands to control their knowledge and benefit from its use. Throughout the international debate over the protection of TK, various options have been brought up and discussed. At its core, the discussion over the legal protection of TK comes down to these issues: 1) The doctrinal question: What is protection of TK? 2) The methodological question: How can protection of TK be achieved? 3) The legal question: What should be protected? And 4) The policy questions: Who has rights and how should they be implemented? What kind of rights should indigenous peoples have over their TK? What are the central concerns the TK databases want to solve? The acceptance of TK databases and registers may bring with it both opportunities and dangers. How can the rights of indigenous peoples over their documented knowledge be assured? Documentation of TK was envisaged as a means to protect TK, but there are concerns about how documented TK can be protected from misappropriation. The methodology used in this research seeks to contribute to the understanding of the protection of TK. The steps taken in this research attempt to describe and to explain a) what has been done to protect TK through databases and registers, b) how this protection is taking place, and c) why the establishment of TK databases can or cannot be useful for the protection of TK. The selected case studies (Peru and Venezuela) seek to illustrate the complexity and multidisciplinary nature of the establishment of TK databases, which entail not only legal but also political, socio-economic and cultural issues. The study offers some conclusions and recommendations that have emerged after reviewing the national experiences, international instruments, work of international organizations, and indigenous peoples perspectives. This thesis concludes that if TK is to be protected from disclosure and unauthorized use, confidential databases are required. Finally, the TK database strategy needs to be strengthened by the legal protection of the TK itself.

Relevance: 10.00%

Abstract:

Intention-based models have been one of the main theoretical orientations in research on the implementation of information and communication technology (ICT). According to these models, actual behavior can be predicted from the intention towards that behavior: if the intention to use a technology is high, the probability of actual ICT usage increases. The purpose of this study was to find out which factors explain vocational teachers' intention to use ICT in their teaching. In addition, teachers of media and information sciences and teachers of welfare and health were compared. The study also explored how regularly teachers applied ICT and how strong their intention to apply the technology was. This Master's thesis is a quantitative study, and the data was collected using an e-mail survey and an electronic form. The instruments were based on the decomposed theory of planned behavior. The research group consisted of 22 schools of media and information sciences and 20 schools of welfare and health. The data consisted of 231 vocational teachers: 57 teachers worked in media and information sciences and 174 in welfare and health. The data was analyzed using the Mann-Whitney U-test, factor analysis and regression analysis. In addition, categorized results were compared with a previous study. In this study, the intention to use ICT in teaching was explained by the teachers' attitudes and skills and by the attitudes of their work community. However, the environment in which ICT was used, i.e. the technical environment, economic resources and time, did not explain the intention. The results did not directly support any of the intention-based models, but they could be interpreted as congruent with the technology acceptance model. The majority of the teachers used ICT at least weekly, and they had a strong intention to continue doing so in the future. The study also revealed that teachers with a critical attitude towards ICT were more common among the teachers of welfare and health. According to the results of this study, it is not possible to state that ICT is unsuited to any particular profession, because every group that included teachers critical of ICT also included teachers with a positive attitude towards it.
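
As a purely illustrative aside, the sketch below shows the kind of analysis the abstract describes: an ordinary least squares regression of intention on attitude, skill and work-community factors, plus a Mann-Whitney U comparison between two teacher groups. The toy data, variable names and model specification are assumptions made here for illustration; they are not the study's data or its exact models.

```python
# Illustrative sketch only: regress ICT-use intention on three explanatory
# factors and compare two teacher groups with a Mann-Whitney U-test.
# The simulated data and variable names are hypothetical.
import numpy as np
import statsmodels.api as sm
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
n = 60
attitude = rng.normal(size=n)      # teacher's own attitude towards ICT
skills = rng.normal(size=n)        # self-reported ICT skills
community = rng.normal(size=n)     # attitude of the work community
intention = 0.5 * attitude + 0.3 * skills + 0.2 * community + rng.normal(scale=0.5, size=n)

# Regression: which factors explain the intention to use ICT?
X = sm.add_constant(np.column_stack([attitude, skills, community]))
model = sm.OLS(intention, X).fit()
print(model.params)                # estimated coefficients

# Group comparison (e.g. media teachers vs. welfare-and-health teachers)
group_a, group_b = intention[:30], intention[30:]
print(mannwhitneyu(group_a, group_b))
```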

Relevance: 10.00%

Abstract:

When authors of scholarly articles decide where to submit their manuscripts for peer review and eventual publication, they often base their choice of journal on very incomplete information about how well the journals serve the authors' purposes of informing others about their research and advancing their academic careers. The purpose of this study was to develop and test a new method for benchmarking scientific journals that provides more information to prospective authors. The method estimates a number of journal parameters, including readership, scientific prestige, time from submission to publication, acceptance rate and the service provided by the journal during the review and publication process. The method uses data directly obtainable from the web, data that can be calculated from such data, data obtained from publishers and editors, and data obtained through author surveys; it has been tested on three different sets of journals, each from a different discipline. We found a number of problems with the different data acquisition methods, which limit the extent to which the method can be used. Publishers and editors are reluctant to disclose important information they have at hand (e.g. journal circulation, web downloads, acceptance rates). Some important parameters (for instance average time from submission to publication, or the regional spread of authorship) can be calculated, but this requires quite a lot of work. It can also be difficult to get reasonable response rates to author surveys. All in all, we believe that the method we propose, which takes a service-to-authors perspective as the basis for benchmarking scientific journals, is useful and can provide information that is valuable to prospective authors in selected scientific disciplines.
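
As an illustration of the kind of calculation such benchmarking involves, the sketch below computes two of the parameters mentioned above, acceptance rate and average time from submission to publication, from a hypothetical set of per-manuscript records. The record layout and field names are assumptions made here; they are not taken from the study.

```python
# Hypothetical per-manuscript records for one journal (fields are assumed).
from datetime import date

manuscripts = [
    {"submitted": date(2004, 1, 10), "published": date(2004, 8, 2),  "accepted": True},
    {"submitted": date(2004, 2, 5),  "published": None,              "accepted": False},
    {"submitted": date(2004, 3, 20), "published": date(2004, 11, 1), "accepted": True},
]

def acceptance_rate(records):
    """Share of submissions that were eventually accepted."""
    return sum(r["accepted"] for r in records) / len(records)

def mean_submission_to_publication_months(records):
    """Average submission-to-publication delay, in months, over published papers."""
    delays = [(r["published"] - r["submitted"]).days / 30.44
              for r in records if r["published"] is not None]
    return sum(delays) / len(delays)

print(f"acceptance rate: {acceptance_rate(manuscripts):.0%}")
print(f"mean delay: {mean_submission_to_publication_months(manuscripts):.1f} months")
```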

Relevance: 10.00%

Abstract:

Introduction: This case study is based on experiences with the Electronic Journal of Information Technology in Construction (ITcon), founded in 1995. Development: The journal is an example of a particular category of open access journals which use neither author charges nor subscriptions to finance their operations, relying largely on unpaid voluntary work in the spirit of the open source movement. After some initial struggle, the journal has survived its first decade and is now established as one of half a dozen peer-reviewed journals in its field. Operations: The journal publishes articles as they become ready, but creates virtual issues through alerting messages to subscribers. It has also started to publish special issues, since this helps in attracting submissions and in sharing the workload of review management. From the start the journal adopted a rather traditional layout for its articles. After the first few years the HTML version was dropped, and papers are now published only in PDF format. Performance: The journal has recently been benchmarked against the competing journals in its field. Its acceptance rate of 53% is slightly higher than, and its average turnaround time of seven months almost a year faster than, those of the journals in the sample for which data could be obtained. The server log files for the past three years have also been studied. Conclusions: Our overall experience demonstrates that it is possible to publish this type of OA journal, with a yearly publishing volume equal to that of a quarterly journal and involving the processing of some fifty submissions a year, using a networked volunteer-based organization.

Relevance: 10.00%

Abstract:

This article reports on a cross-sectional case study of a large construction project in which electronic document management (EDM) was used. Attitudes towards EDM were investigated from the perspective of individual end users. Responses from a survey were combined with data from system usage log files to obtain an overview of the attitudes prevalent in different user segments of the total population of 334 users. The survey was followed by semi-structured interviews with representative users. A strong majority of users from all segments of the project group considered EDM a valuable aid in their work processes, despite certain functional limitations of the system used and the complexity of the information mass. Based on the study, a model describing the key factors affecting end-user EDM adoption is proposed. The model draws on insights from earlier studies of EDM-enabled projects, on theoretical frameworks on technology acceptance and information systems success, and on the insights gained from the case study.

Relevance: 10.00%

Abstract:

Open access is a new model for the publishing of scientific journals, enabled by the Internet, in which the published articles are freely available for anyone to read. During the 1990s hundreds of individual open access journals were founded by groups of academics, supported by grants and unpaid voluntary work. During the last five years other types of open access journals, funded by author charges, have started to emerge, and established publishers have also started to experiment with different variations of open access. This article reports on the experiences of one open access journal (the Electronic Journal of Information Technology in Construction, ITcon) over its ten-year history. In addition to a straightforward account of the lessons learned, the journal is benchmarked against a number of competitors in the same research area, and its development is put into the larger perspective of changes in scholarly publishing. The main findings are that a journal publishing around 20-30 articles per year, equivalent to a typical quarterly journal, can sustainably be produced using an open-source-like production model, and that the journal outperforms its competitors in some respects, such as speed of publication, availability of the results and a balanced global distribution of authorship, while being on a par with them in most other respects. The key statistics for ITcon are: acceptance rate 55%; average speed of publication 6-7 months; 801 subscribers to e-mail alerts; an average of 21 downloads by human readers per paper per month.

Relevance: 10.00%

Abstract:

In this study it is argued that the view of alliance creation presented in the current academic literature is limited, and that using a learning approach helps to explain the dynamic nature of alliance creation. The cases in this study suggest that a wealth of inefficiency elements can be found in alliance creation. These elements can further be divided into categories, which help explain the dynamics of alliance creation. The categories, combined with two models put forward by the study, suggest that inefficiency can be avoided through learning during the creation process. Some elements are especially central to this argument: first, the elements related to the clarity and acceptance of the company's strategy, the potential lack of an alliance strategy, and the elements related to changes in the strategic context; second, the elements related to the length of the alliance creation process and the problems a long process entails. It is further suggested that the different inefficiency elements may create a situation where the alliance creation process is followed sequentially and successfully to the end, but where the inefficiencies mean that the results are not aligned with the strategic intent. The proposed solution is to monitor and assess the risk of inefficiency elements during the alliance creation process. The learning that occurs during the alliance creation process as a result of this monitoring can then lead to realignments in the process. This study proposes a model to mitigate the risk related to the inefficiencies. The model emphasizes creating an understanding of the other alliance partner's business, creating a shared vision, using pilot cooperation and building trust within the process. An analytical approach to assessing the benefits of trust is also central in this view. The alliance creation approach suggested by this study, which emphasizes trust and pilot cooperation, is further critically reviewed against contracting as a way to create alliances.

Relevance: 10.00%

Abstract:

Customer value has been identified as the reason for customers to patronize a firm and as one of the fundamental blocks upon which market exchanges are built. Despite the importance of customer value, it is often poorly defined, or seems to refer to different phenomena. This dissertation contributes to the current marketing literature by subjecting the value concept to a critical investigation and by clarifying its conceptual foundation. Based on the literature review, it is proposed that customer value can be divided into two separate but interrelated aspects: value creation processes and value outcome determination. This means that, on the one hand, it is possible to examine the activities through which value is created, and on the other hand, to investigate how customers determine the value outcomes they receive. The results further show that customers may determine value in four different ways: value as a benefit/sacrifice ratio, value as experience outcomes, value as means-end chains, and value as phenomenological. In value as a benefit/sacrifice ratio, customers are expected to calculate the ratio between service benefits (e.g. ease of use) and sacrifices (e.g. price). In value as experience outcomes, customers are suggested to experience multiple value components, such as functional, emotional, or social value. Customer value as means-end chains in turn models value in terms of the relationships between service characteristics, use value, and desirable ends (e.g. social acceptance). Finally, value as phenomenological proposes that value emerges from lived, holistic experiences. The empirical papers investigate customer value in e-services, including online health care and mobile services, and show how value in e-services stems from process and content quality, the use context, and the service combination that a customer uses. In conclusion, marketers should understand that different value definitions generate different types of understanding of customer value. In addition, it is clear that studying value from several perspectives is useful, as it enables a richer understanding of value for the different actors. Finally, the interconnectedness between value creation and value determination is surprisingly little researched, and this dissertation proposes initial steps towards understanding the relationship between the two.

Relevance: 10.00%

Abstract:

Hypertexts are digital texts characterized by interactive hyperlinking and a fragmented textual organization. Increasingly prominent since the early 1990s, hypertexts have become a common text type both on the Internet and in a variety of other digital contexts. Although studied widely in disciplines like hypertext theory and media studies, formal linguistic approaches to hypertext remain relatively rare. This study examines coherence negotiation in hypertext, with particular reference to hypertext fiction. Coherence, or the quality of making sense, is a fundamental property of textness. Proceeding from the premise that coherence is a subjectively evaluated property rather than an objective quality arising directly from textual cues, the study focuses on the processes through which readers interact with hyperlinks and negotiate continuity between hypertextual fragments. The study begins with a typological discussion of textuality and an overview of the historical and technological precedents of modern hypertexts. Then, making use of text linguistic, discourse analytical, pragmatic, and narratological approaches to textual coherence, the study takes established models developed for analyzing and describing conventional texts and examines their applicability to hypertext. Primary data derived from a collection of hyperfictions is used throughout to illustrate the mechanisms in practice. Hypertextual coherence negotiation is shown to require the ability to operate cognitively between local and global coherence by processing lexical cohesion, discourse topical continuities, inferences and implications, and shifting cognitive frames. The main conclusion of the study is that the style of reading required by hypertextuality fosters a new paradigm of coherence. Defined as fuzzy coherence, this new approach to textual sense-making is predicated on an acceptance of the coherence challenges readers experience when the act of reading comes to involve repeated encounters with referentially imprecise hyperlinks and discourse topical shifts. A practical application of fuzzy coherence is shown to be at work in the way coherence is actively manipulated in hypertext narratives.

Relevance: 10.00%

Abstract:

We propose to compress weighted graphs (networks), motivated by the observation that large networks of social, biological, or other relations can be complex to handle and visualize. In the process, also known as graph simplification, nodes and (unweighted) edges are grouped into supernodes and superedges, respectively, to obtain a smaller graph. We propose models and algorithms for weighted graphs. The interpretation (i.e. decompression) of a compressed weighted graph is that a pair of original nodes is connected by an edge if their supernodes are connected by one, and that the weight of an edge is approximated by the weight of the corresponding superedge. The compression problem then consists of choosing supernodes, superedges, and superedge weights so that the approximation error is minimized while the amount of compression is maximized. In this paper, we formulate this task as the 'simple weighted graph compression problem'. We then propose a much wider class of tasks under the name of the 'generalized weighted graph compression problem'. The generalized task extends the optimization to preserve longer-range connectivities between nodes, not just individual edge weights. We study the properties of these problems and propose a range of algorithms to solve them, with different balances between complexity and quality of the result. We evaluate the problems and algorithms experimentally on real networks. The results indicate that weighted graphs can be compressed efficiently with relatively little compression error.
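
To make the setup concrete, here is a minimal sketch of the simple weighted graph compression idea under stated assumptions: the grouping of nodes into supernodes is taken as given (choosing it is the actual optimization problem), each superedge weight is set to the mean of the original edge weights it summarizes, and the approximation error is measured only over edges present in the original graph. The function and variable names are ours, not the paper's.

```python
# Minimal sketch of "simple weighted graph compression": given a weighted graph
# and a fixed node-to-supernode grouping, set each superedge weight to the mean
# of the original edge weights it replaces and measure the approximation error.
from collections import defaultdict

def compress(edges, grouping):
    """edges: {(u, v): weight}; grouping: {node: supernode}."""
    buckets = defaultdict(list)
    for (u, v), w in edges.items():
        key = tuple(sorted((grouping[u], grouping[v])))
        buckets[key].append(w)
    # Superedge weight = mean of the original weights it summarizes.
    return {key: sum(ws) / len(ws) for key, ws in buckets.items()}

def approximation_error(edges, grouping, superedges):
    """Sum of squared differences between original and decompressed weights
    (only edges present in the original graph are compared here)."""
    err = 0.0
    for (u, v), w in edges.items():
        key = tuple(sorted((grouping[u], grouping[v])))
        err += (w - superedges[key]) ** 2
    return err

if __name__ == "__main__":
    # Hypothetical toy graph and grouping, for illustration only.
    edges = {("a", "b"): 1.0, ("a", "c"): 0.8, ("b", "d"): 0.2, ("c", "d"): 0.4}
    grouping = {"a": "S1", "b": "S1", "c": "S2", "d": "S2"}
    superedges = compress(edges, grouping)
    print(superedges)                                     # superedge weights
    print(approximation_error(edges, grouping, superedges))
```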