9 results for Collaborative network
in CentAUR: Central Archive University of Reading - UK
Abstract:
Background and aims: GP-TCM is the first EU-funded Coordination Action consortium dedicated to traditional Chinese medicine (TCM) research. This paper aims to summarise the objectives, structure and activities of the consortium and introduces the position of the consortium regarding good practice, priorities, challenges and opportunities in TCM research. Serving as the introductory paper for the GP-TCM Journal of Ethnopharmacology special issue, this paper describes the roadmap of this special issue and reports how the main outputs of the ten GP-TCM work packages are integrated and have led to consortium-wide conclusions. Materials and methods: Literature studies, opinion polls and discussions among consortium members and stakeholders. Results: By January 2012, through three years of team building, the GP-TCM consortium had grown into a large collaborative network involving ∼200 scientists from 24 countries and 107 institutions. Consortium members had worked closely to address good practice issues related to various aspects of Chinese herbal medicine (CHM) and acupuncture research, the focus of this Journal of Ethnopharmacology special issue, leading to state-of-the-art reports, guidelines and consensus on the application of omics technologies in TCM research. In addition, through an online survey open to GP-TCM members and non-members, we polled opinions on grand priorities, challenges and opportunities in TCM research. Based on the poll, although consortium members and non-members had diverse opinions on the major challenges in the field, both groups agreed that high-quality efficacy/effectiveness and mechanistic studies are grand priorities and that the TCM legacy in general, and its management of chronic diseases in particular, represent grand opportunities. Consortium members cast their votes of confidence in omics and systems biology approaches to TCM research and believed that quality and pharmacovigilance of TCM products are not only grand priorities but also grand challenges. Non-members, however, gave priority to integrative medicine, expressed concern about the impact of the regulation of TCM practitioners and emphasised intersectoral collaborations in funding TCM research, especially clinical trials. Conclusions: The GP-TCM consortium made great efforts to address some fundamental issues in TCM research, including developing guidelines, as well as identifying priorities, challenges and opportunities. These consortium guidelines and consensus will need dissemination, validation and further development through continued interregional, interdisciplinary and intersectoral collaborations. To promote this, a new consortium, known as the GP-TCM Research Association, is being established to succeed the three-year fixed-term FP7 GP-TCM consortium and will be officially launched at the Final GP-TCM Congress in Leiden, the Netherlands, in April 2012.
Abstract:
This review provides an overview of the main scientific outputs of a network (Action) supported by the European Cooperation in Science and Technology (COST) in the field of animal science, namely the COST Action Feed for Health (FA0802). The main aims of the Action were: to develop an integrated and collaborative network of research groups focusing on the roles of feed and animal nutrition in improving animal wellbeing, as well as the quality, safety and wholesomeness of human foods of animal origin; and to examine consumer concerns and perceptions regarding livestock production systems. The COST Action Feed for Health addressed these scientific topics over the last four years. From a practical point of view, three main fields of scientific achievement can be identified: feed and animal nutrition; the quality and functionality of foods of animal origin; and consumer perceptions. Finally, the present paper aims to provide new ideas and solutions to a range of issues associated with modern livestock production systems.
Abstract:
The aim of this research is to show how literary playtexts can evoke the multisensory trends prevalent in 21st-century theatre. In order to do so, it explores a range of practical forms and theoretical contexts for creating participatory, site-specific and immersive theatre. With reference to literary theory, specifically to semiotics, reader-response theory, postmodernism and deconstruction, it attempts to revise dramatic theory established by Aristotle’s Poetics. Considering Gertrude Stein’s essay, Plays (1935), and relevant trends in theatre and performance, shaped by space, technology and the ever-changing role of the audience member, a postdramatic poetics emerges from which to analyze the plays of Mac Wellman and Suzan-Lori Parks. Distinguishing the two textual lives of a play as the performance playtext and the literary playtext, it examines the conventions of the printed literary playtext, with reference to models of practice that radicalize the play form, including works by Mabou Mines, The Living Theatre and Fiona Templeton. The arguments of this practice-led Ph.D. developed out of direct engagement with the practice project, which explores the multisensory potential of written language when combined with hypermedia. The written thesis traces the development process of a new play, Rumi High, which is presented digitally as a ‘hyper(play)text,’ accessible through the Internet at www.RumiHigh.org. Here, ‘playwrighting’ practice is expanded spatially, collaboratively and textually. Plays are built, designed and crafted with many layers of meaning that explore both linguistic and graphic modes of poetic expression. The hyper(play)text of Rumi High establishes playwrighting practice as curatorial, where performance and literary playtexts are in a reciprocal relationship. This thesis argues that digital writing and reading spaces enable new approaches to expressing the many languages of performance, while expanding the collaborative network that produces the work. It questions how participatory forms of immersive and site-specific theatre can be presented as interactive literary playtexts, which enable the reader to have a multisensory experience. Through a reflection on process and an evaluation of the practice project, this thesis problematizes notions of authorship and text.
Abstract:
Would a research assistant – one that can search for ideas related to those you are working on, network with others (but only share the things you have chosen to share), doesn’t need coffee and might even, one day, appear to be conscious – help you get your work done? Would it help your students learn? There is a body of work showing that digital learning assistants can benefit learners, and it has been suggested that adaptive, caring agents are more beneficial still. Would a conscious agent be more caring, more adaptive, and better able to deal with changes in its learning partner’s life?

Suppose we allow the system to dynamically model the user, so that it can predict what is needed next and how effective a particular intervention will be. Given that the system is essentially doing the same things as the user, why not design it so that it can try to model itself in the same way? This should mimic a primitive self-awareness.

People develop their personalities, their identities, through interacting with others. It takes years for a human to develop a full sense of self, and nobody should expect a prototypical conscious computer system to develop any faster. How can we provide a computer system with enough social contact to enable it to learn about itself and others? We can make it part of a network – not just chatting with other computers about computer ‘stuff’, but involved in real human activity, exposed to ‘raw meaning’: the developing folksonomies coming out of the learning activities of humans, whether they are traditional students or lifelong learners (a term which should encompass everyone).

Humans have complex psyches, comprising multiple strands of identity which manifest as different roles in the communities of which they are part – so why not design our system the same way? It would have multiple internal modes of operation, each capable of being reflected onto the outside world in the form of roles – as a mentor, a research assistant, maybe even as a friend. But in order to work with a human for long enough to have a chance of developing the sort of rich behaviours we associate with people, the system needs to function in a practical and helpful role. It is unlikely to get a free ride from many people (other than its developer!), so it needs to perform a useful role, and do so securely, respecting the privacy of its partner. Can we create a system which learns to be more human whilst helping people learn?
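To make the reflexive-modelling idea concrete, here is a minimal sketch, assuming a simple exponential-moving-average estimate of per-topic success; the PartnerModel class and every name in it are hypothetical illustrations, not part of any system described in the abstract. The same class is instantiated twice: once over the human partner, and once, reflexively, over the agent's own roles.

# A minimal sketch of the reflexive-modelling idea: the same model class is
# used twice, once to model the human learner and once by the agent to model
# itself. All names here are hypothetical illustrations.

from collections import defaultdict

class PartnerModel:
    """Tracks per-topic success estimates via an exponential moving average."""

    def __init__(self, learning_rate=0.2):
        self.learning_rate = learning_rate
        self.mastery = defaultdict(lambda: 0.5)  # prior: complete uncertainty

    def observe(self, topic, succeeded):
        """Update the estimate for `topic` from one observed outcome."""
        outcome = 1.0 if succeeded else 0.0
        self.mastery[topic] += self.learning_rate * (outcome - self.mastery[topic])

    def predict_benefit(self, topic):
        """Predicted benefit of intervening on `topic`: highest in the
        mid-range, where the partner is neither lost nor already fluent."""
        m = self.mastery[topic]
        return 4.0 * m * (1.0 - m)  # peaks at m = 0.5

    def next_intervention(self, topics):
        return max(topics, key=self.predict_benefit)

# The agent models its human partner...
user_model = PartnerModel()
user_model.observe("recursion", succeeded=False)
user_model.observe("recursion", succeeded=True)

# ...and, reflexively, applies the same machinery to its own behaviour,
# treating each of its roles (mentor, research assistant, ...) as a "topic".
self_model = PartnerModel()
self_model.observe("mentor", succeeded=True)
self_model.observe("research_assistant", succeeded=False)

print(user_model.next_intervention(["recursion", "iteration"]))
print(self_model.next_intervention(["mentor", "research_assistant"]))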
Abstract:
There are three key driving forces behind the development of Internet Content Management Systems (CMS) - a desire to manage the explosion of content, a desire to provide structure and meaning to content in order to make it accessible, and a desire to work collaboratively to manipulate content in some meaningful way. Yet the traditional CMS has been unable to meet the last of these requirements, often failing to provide sufficient tools for collaboration in a distributed context. Peer-to-Peer (P2P) systems are networks in which every node is an equal participant (whether transmitting data, exchanging content, or invoking services) and there is an absence of any centralised administrative or coordinating authorities. P2P systems are inherently more scalable than equivalent client-server implementations as they tend to use resources at the edge of the network much more effectively. This paper details the rationale and design of a P2P middleware for collaborative content management.
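As a rough illustration of the decentralised pattern such middleware builds on, the sketch below implements gossip-style content flooding with duplicate suppression between equal peers; the Peer class and its methods are assumptions for illustration, not the middleware design presented in the paper.

# A minimal sketch of peer-to-peer content propagation: every node runs the
# same code (no central authority), and content updates spread by flooding
# with duplicate suppression. Names are illustrative assumptions.

import uuid

class Peer:
    def __init__(self, name):
        self.name = name
        self.neighbours = []      # directly connected, equal peers
        self.content = {}         # document id -> latest version seen
        self.seen = set()         # message ids already handled (stops loops)

    def connect(self, other):
        # Symmetric link: neither side acts as a server.
        self.neighbours.append(other)
        other.neighbours.append(self)

    def publish(self, doc_id, body):
        self.receive(uuid.uuid4().hex, doc_id, body)

    def receive(self, msg_id, doc_id, body):
        if msg_id in self.seen:
            return                # already flooded through this node
        self.seen.add(msg_id)
        self.content[doc_id] = body
        for peer in self.neighbours:
            peer.receive(msg_id, doc_id, body)

a, b, c = Peer("a"), Peer("b"), Peer("c")
a.connect(b); b.connect(c); c.connect(a)   # a small mesh with a cycle
a.publish("readme", "v1: collaborative draft")
print(c.content["readme"])                 # the update reached every peer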
Abstract:
The development of large-scale virtual reality and simulation systems has been driven mostly by the DIS and HLA standards community. A number of issues are coming to light about the applicability of these standards, in their present state, to the support of general multi-user VR systems. This paper pinpoints four issues that must be readdressed before large-scale virtual reality systems become accessible to a larger commercial and public domain: a reduction in the effects of network delays; scalable causal event delivery; update control; and scalable reliable communication. Each of these issues is tackled through a common theme: combining wall-clock and causal time-related entity behaviour, knowledge of network delays, and prediction of entity behaviour, which together overcome many of the effects of network delay.
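The delay-masking theme can be illustrated with a standard DIS-style dead-reckoning extrapolation, sketched below under the assumption of constant-velocity entities; the names are illustrative and the paper's own scheme is not reproduced here.

# A minimal sketch of delay masking: a receiver extrapolates a remote
# entity's state from its last timestamped update, using wall-clock time
# plus an estimate of the clock offset, which folds the network delay in.

import time

class RemoteEntity:
    def __init__(self):
        self.position = (0.0, 0.0)
        self.velocity = (0.0, 0.0)
        self.sent_at = 0.0        # sender's wall-clock timestamp

    def apply_update(self, position, velocity, sent_at):
        self.position, self.velocity, self.sent_at = position, velocity, sent_at

    def predicted_position(self, now, clock_offset=0.0):
        # Age of the update = local wall clock (corrected toward the
        # sender's clock) minus the send time.
        age = (now - clock_offset) - self.sent_at
        px, py = self.position
        vx, vy = self.velocity
        return (px + vx * age, py + vy * age)

entity = RemoteEntity()
entity.apply_update(position=(10.0, 0.0), velocity=(2.0, 0.0),
                    sent_at=time.time() - 0.15)
print(entity.predicted_position(now=time.time()))  # ~ (10.3, 0.0) after 150 ms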
Abstract:
This article is the guest editors' introduction to a special issue on using social network research in the field of Human Resource Management (HRM). The goals of the special issue are: (1) to draw attention to the points of integration between the two fields; (2) to showcase research that applies social network perspectives and methodology to issues relevant to HRM; and (3) to identify common challenges where future collaborative efforts could contribute to advancements in both fields.