978 results for Collaborative Networked Organisations
Abstract:
The emergence of the open-source paradigm and the demands of social movements have permeated the ways in which today's cultural institutions are organized. This article analyzes the birth of a new critical and cooperative spatiality and how it is transforming current modes of cultural research and production. It centers on the potential for establishing the new means of cooperation being tested in what are defined as collaborative artistic laboratories. These are hybrid spaces of research and creation, based on networked and cooperative structures, that produce a new socio-technical body and force us to reconsider the traditional organic conditions of the productive scenarios of knowledge and artistic practice.
Abstract:
Purpose – This paper explores the factors that determine the degree of knowledge transfer in inter-firm new product development projects. We test a theoretical model of how inter-firm knowledge transfer is enabled or hindered by a buyer's learning intent, the degree of supplier protectiveness, inter-firm knowledge ambiguity, and absorptive capacity.
Design/methodology/approach – A sample of 153 R&D-intensive manufacturing firms in the UK automotive, aerospace, pharmaceutical, electrical, chemical, and general manufacturing industries was used to test the framework. Two-step structural equation modelling in AMOS 7.0 was used to analyse the data.
Findings – Our results indicate that a buyer's learning intent increases inter-firm knowledge transfer, but also acts as an incentive for suppliers to protect their knowledge. Such defensive measures increase the degree of inter-firm knowledge ambiguity, encouraging buyer firms to invest in absorptive capacity as a means to interpret supplier knowledge, which in turn increases the degree of knowledge transfer.
Practical implications – Our paper illustrates the effects of focusing on acquiring, rather than accessing, supplier technological knowledge. We show that an overt learning strategy can be detrimental to knowledge transfer between buyer and supplier, as suppliers react by restricting the flow of information. Organisations are encouraged to consider this dynamic when engaging in multi-organisational new product development projects.
Originality/value – This paper examines the dynamics of knowledge transfer within inter-firm NPD projects, showing how transfer is influenced by the buyer firm's learning intention, the supplier's response, and the characteristics of the relationship and of the knowledge to be transferred.
Abstract:
The aim of this paper is to reflect on how conceptions of networked learning have changed, particularly in relation to educational practices and uses of technology, and on how these changing conceptions can nurture new ideas of networked learning that sustain multiple and diverse communities of practice in institutional settings. Our work is framed using two theoretical frameworks: Giddens's (1984) structuration theory and Callon and Latour's (1981) actor-network theory, as critiqued by Fox (2005) in relation to networked learning. We use these frameworks to analyse and critique the ideas of networked learning embodied in two cases. We investigate three questions: (a) the role of individual agency in the development of networked learning; (b) the impact of technological developments on approaches to supporting students within institutional infrastructures; and (c) the design of networked learning that incorporates Web 2.0 practices to sustain multiple communities and foster engagement with knowledge in new ways. We take an interpretivist approach, drawing on experiential knowledge of the Masters programme in Networked Collaborative Learning and of the decision-making process of designing the virtual graduate schools. At this early stage, we have limited empirical data on the student experience of networked learning in current and earlier projects. Our findings indicate that the use of two different theoretical frameworks provided an essential tool in illuminating, situating and informing the process of designing networked learning that supports multiple and diverse communities of practice in institutional settings. These frameworks have also helped us to analyse our existing projects as case studies, and to problematize and begin to understand both the challenges we face in facilitating the participation of research students in networked learning communities of practice and the barriers to that participation.
We have also found that this process of theorizing has given us a way of reconceptualizing communities of practice within research settings that has the potential to lead to new ideas of networked learning.
Abstract:
The development of new products or processes involves the creation, re-creation and integration of conceptual models from the related scientific and technical domains. Particularly in the context of collaborative networks of organisations (CNOs) (e.g., a multi-partner international project), such developments can be seriously hindered by conceptual misunderstandings and misalignments, resulting, for example, from participants with different backgrounds or organisational cultures. The research described in this article addresses this problem by proposing a method, and supporting tools, for the collaborative development of shared conceptualisations in the context of a collaborative network of organisations. The theoretical model is based on a socio-semantic perspective, while the method is inspired by conceptual integration theory from the field of cognitive semantics. The modelling environment is built upon a semantic wiki platform. Much of the article is devoted to a case study, conducted using action research, in which an informal ontology was developed in the context of a European R&D project. The case study results validated the logical structure of the method and showed its utility.
Abstract:
The Web can be characterized in many ways, one of its dominant traits being its highly evolving nature. Although relatively young, it is already in its second generation – people speak of Web 2.0 – and some already foresee Web 3.0. This evolution is not only technological but also cultural, changing users' relationship to this digital universe and to the information found there. The flagship technologies of Web 2.0 – blogs, RSS feeds, wikis, etc. – offer Web users the possibility of moving from a passive role of observer to an active role of creator. The Web we encounter today is thus more participatory, dynamic and collaborative. Organisations must therefore seriously consider not only the potential of these new environments to support their activities, but also the new cyberculture they foster among their employees, clients and partners. Web 2.0 platforms reinforce the potential that organisations already perceive in Web information systems at several levels, such as information sharing, increased competitiveness, and improved client relations. Documentary communities can, like other types of organisations, benefit from the tools of this participatory Web and from the new collaborative culture that flows from it. For some time now, libraries have been actively examining these questions, and the archival community is following suit – "towards an Archives 2.0?", some will ask. This article examines the potential of Web 2.0 for organisations in general, and for the archival community in particular. We first define Web 2.0 and detail its key technologies and concepts.
These clarifications will then help in understanding the possible contribution of Web 2.0 in an organisational context. Finally, examples of the use of Web 2.0 by the archival community conclude this reflection on Web 2.0, organisations and archives.
Abstract:
This paper discusses the issues that arise in multi-organisational collaborative groups (MOCGs) in the public sector, and how a technology-based group support system (GSS) could assist individuals within these groups. MOCGs are commonly used in the public sector to find solutions to multifaceted social problems. Finding solutions to such problems is difficult because their scope lies outside the boundary of any single government agency. The standard approach to solving them is collaborative, involving a diverse range of stakeholders. Collaborative working can be advantageous, but it also introduces its own pressures. Conflicts can arise from the multiple contexts and goals of group members and of the organisations they represent. Trust, communication and a shared interface are crucial to making significant progress, and a GSS could support these elements.
Abstract:
We present a conceptual architecture for a Group Support System (GSS) to facilitate Multi-Organisational Collaborative Groups (MOCGs) initiated by local government and including external organisations of various types. MOCGs consist of individuals from several organisations that have agreed to work together to solve a problem, in the expectation that more can be achieved working in harmony than separately; work is done interdependently, rather than independently in diverse directions. Local government, faced with solving complex social problems, deploys MOCGs to enable solutions across organisational, functional, professional and juridical boundaries, by involving statutory, voluntary, community, not-for-profit and private organisations. This is no silver bullet, however, as it introduces new pressures. Each member organisation has its own goals, operating context and particular approaches, which can be expressed as its norms and business processes. Organisations working together must find ways of eliminating differences, or of mitigating their impact, in order to reduce the risks of collaborative inertia and conflict. A GSS is an electronic collaboration system that facilitates group working and can offer assistance to MOCGs. However, since many existing GSSs have been developed primarily for single-organisation collaborative groups, there are some difficulties peculiar to MOCGs, and others that MOCGs experience to a greater extent: a diversity of primary organisational goals among members; different funding models and other pressures; more significant differences, both technological and in use, among members' other information systems; and greater variation in acceptable approaches to solving problems.
In this paper, we analyse the requirements of MOCGs led by local government agencies, leading to a conceptual architecture for an e-government GSS that captures the relationships between 'goal', 'context', 'norm', and 'business process'. Our models capture the dynamics of the circumstances surrounding each individual representing an organisation in a MOCG along with the dynamics of the MOCG itself as a separate community.
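A minimal sketch of how the 'goal', 'context', 'norm' and 'business process' relationships might be modelled as data structures. All class names, fields, and example organisations below are illustrative assumptions, not the paper's actual architecture:

```python
from dataclasses import dataclass

@dataclass
class Norm:
    description: str

@dataclass
class BusinessProcess:
    name: str
    norms: list  # norms the process must respect

@dataclass
class MemberOrganisation:
    name: str
    goals: list      # the organisation's own goals
    context: str     # its operating context
    processes: list  # its business processes

@dataclass
class MOCG:
    shared_goal: str
    members: list

# A toy two-member group: overlap in norms hints at lower conflict risk.
council = MemberOrganisation(
    "City Council", ["reduce homelessness"], "statutory",
    [BusinessProcess("case review", [Norm("data-protection compliance")])])
charity = MemberOrganisation(
    "Shelter Trust", ["house rough sleepers"], "voluntary",
    [BusinessProcess("outreach", [Norm("data-protection compliance")])])
group = MOCG("joint homelessness strategy", [council, charity])

# Norms every member already observes: candidates for shared ground rules.
norm_sets = [{n.description for p in m.processes for n in p.norms}
             for m in group.members]
shared_norms = set.intersection(*norm_sets)
print(shared_norms)
```

Capturing each member's goals and context separately from the group's shared goal mirrors the paper's point that the MOCG is a community in its own right, distinct from its members.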
Abstract:
Grid portals are increasingly used to provide uniform access to grid infrastructure. This paper describes how the P-GRADE Grid Portal can be used in a collaborative manner to facilitate group work and support the notion of Virtual Organisations. We describe the development issues involved in the construction of a collaborative portal, including ensuring a consistent view of a collaborative workflow among participants and managing proxy credentials so that separate nodes of the workflow can be submitted to different grids.
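The credential-management idea — each workflow node submitted under the proxy credential of the grid it targets, so one collaborative workflow can span several grids — can be sketched as follows. This is an illustrative sketch, not P-GRADE's actual API; the grid names, proxy strings, and `submit` helper are placeholders:

```python
# One collaborative workflow whose nodes target different grids.
workflow = [
    {"node": "preprocess", "grid": "UK-NGS"},
    {"node": "simulate",   "grid": "EGEE"},
    {"node": "visualise",  "grid": "UK-NGS"},
]

# One short-lived proxy credential per grid (placeholder values).
proxies = {"UK-NGS": "proxy-ngs-1234", "EGEE": "proxy-egee-5678"}

def submit(node, grid, proxy):
    # A real portal would delegate the proxy to the grid middleware here.
    return f"submitted {node} to {grid} using {proxy}"

log = [submit(n["node"], n["grid"], proxies[n["grid"]]) for n in workflow]
for line in log:
    print(line)
```

Keeping credentials per grid rather than per user is what lets the portal fan a single workflow out across administrative domains.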
Abstract:
In response to evidence of insect pollinator declines, organisations in many sectors, including the food and farming industry, are investing in pollinator conservation. They are keen to ensure that their efforts use the best available science. We convened a group of 32 ‘conservation practitioners’ with an active interest in pollinators and 16 insect pollinator scientists. The conservation practitioners include representatives from UK industry (including retail), environmental non-government organisations and nature conservation agencies. We collaboratively developed a long list of 246 knowledge needs relating to conservation of wild insect pollinators in the UK. We refined and selected the most important knowledge needs, through a three-stage process of voting and scoring, including discussions of each need at a workshop. We present the top 35 knowledge needs as scored by conservation practitioners or scientists. We find general agreement in priorities identified by these two groups. The priority knowledge needs will structure ongoing work to make science accessible to practitioners, and help to guide future science policy and funding. Understanding the economic benefits of crop pollination, basic pollinator ecology and impacts of pesticides on wild pollinators emerge strongly as priorities, as well as a need to monitor floral resources in the landscape.
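The scoring-and-ranking step described above can be sketched as a toy computation. The knowledge needs and scores below are invented; each need carries a (practitioner score, scientist score) pair, averaged for ranking:

```python
# Invented (practitioner_score, scientist_score) pairs per knowledge need.
needs = {
    "economic benefits of crop pollination": (9.1, 8.7),
    "basic pollinator ecology":              (8.8, 8.9),
    "pesticide impacts on wild pollinators": (8.5, 8.9),
    "floral resource monitoring":            (7.9, 8.2),
    "public engagement methods":             (6.0, 5.5),
}

# Rank by the mean of the two group scores, highest first.
ranked = sorted(needs, key=lambda n: -sum(needs[n]) / 2)
top3 = ranked[:3]
print(top3)
```

Comparing each group's ranking separately (rather than only the combined mean) is how agreement between practitioners and scientists, as reported above, could be checked.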
Abstract:
Tagging supports the retrieval and categorization of online content, depending on users' tag choices. A number of models of tagging behaviour have been proposed to identify factors considered to affect taggers, such as users' tagging history. In this paper, we use semiotic analysis and activity theory to study the effect the system designer has on tagging behaviour. Our framework shows the components that comprise a tagging system and how they interact to direct tagging behaviour. We analysed two collaborative tagging systems, CiteULike and Delicious, by applying the framework to their components. Using datasets from both systems, we found that 35% of CiteULike users did not provide tags, compared to only 0.1% of Delicious users. This difference was directly linked to the type of tools the system designer provides to support tagging.
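The measurement reported above — the share of users who never supplied a tag — can be sketched as a toy computation; the post lists below are invented sample data, not the CiteULike or Delicious datasets:

```python
# Invented (user, tags) posts for two hypothetical tagging systems.
citeulike_posts = [
    ("u1", ["networks"]), ("u2", []), ("u3", []),
    ("u4", ["tagging"]),  ("u5", []),
]
delicious_posts = [
    ("u1", ["web"]), ("u2", ["folksonomy"]), ("u3", ["gss"]),
]

def untagged_share(posts):
    """Fraction of distinct users who never attached a tag to any post."""
    tagged = {}
    for user, tags in posts:
        tagged.setdefault(user, False)
        if tags:
            tagged[user] = True
    return sum(1 for t in tagged.values() if not t) / len(tagged)

print(untagged_share(citeulike_posts))  # 0.6 in this toy sample
```

A user counts as tagging if any of their posts carries at least one tag, which matches the user-level (rather than post-level) percentages quoted above.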
Abstract:
This paper extends our previous study on pragmatic interoperability assessment for process alignment. We conduct four case studies in industrial companies and hospitals to gather viewpoints on the concerns that arise when conducting process alignment in a collaborative working environment, using interviews, observation, and documentation. The collected results are first summarised into three layers based on our previously developed pragmatic assessment model, then transformed into the elements that constitute the proposed method; finally, based on the summarised results, we propose a method for assessing pragmatic interoperability for process alignment in a collaborative working environment. The method has two parts: one gives all the elements of pragmatic interoperability that should be considered when addressing process alignment in a collaborative working environment, and the other is a supplementary method for dealing with technical concerns.
Abstract:
During the second half of the nineteenth century, fraternal and benevolent associations of numerous descriptions grew and prospered in mining communities everywhere. They played an important, but neglected, role in assisting transatlantic migration and movement between mining districts, as well as in building social capital within emerging mining communities. They helped to build bridges between different ethnic communities, provided conduits between labour and management, and networked miners into the non-mining community. Their influence spread beyond the adult males who made up most of their membership to their wives and families, and provided levels of social and economic support otherwise unobtainable at that time. Of course, the influence of these organisations could also be divisive where certain groups or religions were excluded, and they may have worked to exacerbate, as much as ameliorate, the problems of community development. This paper examines some of these issues by looking particularly at the role of Freemasonry and Oddfellowry in Cornwall, Calumet, and Nevada City between 1860 and 1900. Work on fraternity in the Keweenaw was undertaken in Houghton some years ago with a grant from the Copper Country Archive, and has since been continued by privately funded research in California and other Western mining states. Some British aspects of this research can be found in my article on mining industrial relations in Labour History Review, April 2006.
Abstract:
Cultural content on the Web is available in various domains (cultural objects, datasets, geospatial data, moving images, scholarly texts and visual resources), concerns various topics, is written in different languages, is targeted at both laymen and experts, and is provided by different communities (libraries, archives, museums and the information industry) and individuals (Figure 1). The integration of information technologies and cultural heritage content on the Web is expected to have an impact on everyday life from the point of view of institutions, communities and individuals. In particular, collaborative environments can recreate 3D navigable worlds that offer new insights into our cultural heritage (Chan 2007). The main barrier, however, is finding and relating cultural heritage information, both for end-users of cultural content and for the organisations and communities managing and producing it. In this paper, we explore several visualisation techniques for supporting cultural interfaces, where the role of metadata is essential for supporting search and communication among end-users (Figure 2). A conceptual framework was developed to integrate the data, purpose, technology, impact, and form components of a collaborative environment. Our preliminary results show that collaborative environments can help with cultural heritage information sharing and communication tasks because of the way in which they provide a visual context to end-users. They can be regarded as distributed virtual reality systems that offer graphically realised, potentially infinite, digital information landscapes. Moreover, collaborative environments also provide a new way of interaction between an end-user and a cultural heritage dataset. Finally, the visualisation of a dataset's metadata plays an important role in helping end-users in their search for heritage content on the Web.
Abstract:
Abstract: The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One application that has attracted researchers' attention is target localization, in which the nodes of the network try to estimate the position of an unknown target that lies within the network's coverage area. Particularly challenging is the problem of estimating the target's position from the received signal strength indicator (RSSI), owing to the nonlinear relationship between the measured signal and the true position of the target. Many existing approaches suffer either from high computational complexity (e.g., particle filters) or from lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to find a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore very suitable for implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the found solution. Under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a distributed, consensus-based version of the Gauss-Newton method. Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting.
While some approaches use a special kind of node (called a sink node) for data harvesting and forwarding to the outside world, there are scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate to form a virtual antenna array that can exploit the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases so that their signals add constructively at the receiver. One of the inconveniences associated with collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity. This may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we find distributed algorithms that converge to the optimal centralized beamformer. While in the first part we consider only battery depletion due to communications beamforming, we extend the model to account for more realistic scenarios by introducing an additional random energy consumption. We show how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem from the energy-efficiency perspective, the network's lifetime is significantly improved.
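The refinement step mentioned above — a local search around an initial estimate — can be sketched with a centralized Gauss-Newton iteration under the standard log-distance RSSI model. This is an illustrative toy, not the thesis's distributed consensus implementation; the anchor layout, transmit power `P0`, and path-loss exponent `gamma` are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 7.0])
P0, gamma = -40.0, 2.0  # assumed-known model parameters (invented)

def model(x):
    # Log-distance path loss: RSSI_i = P0 - 10*gamma*log10(||x - a_i||)
    d = np.linalg.norm(anchors - x, axis=1)
    return P0 - 10.0 * gamma * np.log10(d)

# Noisy RSSI measurements at the four anchor nodes.
rssi = model(target) + 0.1 * rng.standard_normal(len(anchors))

x = np.array([5.0, 5.0])  # coarse initial estimate
for _ in range(20):
    d = np.linalg.norm(anchors - x, axis=1)
    r = rssi - (P0 - 10.0 * gamma * np.log10(d))  # residuals
    # Jacobian of the model wrt x: grad of -10*gamma*log10(||x - a_i||)
    J = (-10.0 * gamma / np.log(10.0)) * (x - anchors) / d[:, None] ** 2
    x = x + np.linalg.lstsq(J, r, rcond=None)[0]  # Gauss-Newton step

print(np.round(x, 2))
```

Under Gaussian measurement noise this nonlinear least-squares iteration is a local search on the likelihood; the thesis's contribution is performing the equivalent computation via consensus among the nodes rather than at a fusion centre.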
Resumen: The proliferation of wireless sensor networks, together with the wide variety of possible related applications, has motivated the development of the tools and algorithms needed for cooperative processing in distributed systems. One of the applications that has attracted the most interest in the scientific community is localization, where the set of nodes in the network tries to estimate the position of a target located within its coverage area. The localization problem is especially challenging when received signal strength (RSSI) levels are used as the measurement for localization. The main drawback lies in the fact that the received signal level does not follow a linear relationship with the target's position. Many current solutions to the RSSI localization problem rely on complex centralized schemes such as particle filters, while others rely on much simpler schemes with lower accuracy. Moreover, in many cases the strategies are centralized, which makes them impractical for implementation in sensor networks. From a practical and implementation point of view, it is convenient, for certain scenarios and applications, to develop alternatives that offer a trade-off between complexity and accuracy. Along these lines, instead of directly addressing the problem of estimating the target's position under the maximum likelihood criterion, we propose a suboptimal formulation of the problem that is analytically more tractable and offers the advantage of allowing the localization problem to be solved in a fully distributed way, making it an attractive solution in the context of wireless sensor networks. To this end, distributed processing tools such as consensus algorithms and convex optimization in distributed systems are used.
For applications requiring greater accuracy, we propose a strategy consisting of the local optimization of the likelihood function around the initially obtained estimate. This optimization can be performed in a decentralized way using a consensus-based version of the Gauss-Newton method, provided we assume independence of the measurement noise at the different nodes. Regardless of the underlying application of the sensor network, a mechanism is needed to collect the data generated by the network. One way to do this is through the use of one or more special nodes, called sink nodes, which act as information-collection centres and are equipped with additional hardware allowing them to interact with the outside of the network. The main disadvantage of this strategy is that such nodes become bottlenecks in terms of traffic and computational capacity. As an alternative, cooperative beamforming techniques can be used, so that the network as a whole can be seen as a single virtual multi-antenna system and can therefore exploit the benefits offered by multi-antenna communications. To this end, the different nodes of the network synchronize their transmissions so that constructive interference is produced at the receiver. However, current techniques are based on average and asymptotic results, valid when the number of nodes is very large. For a specific configuration, control over the radiation pattern is lost, causing possible interference to coexisting systems or expending more power than required. Energy efficiency is a capital issue in wireless sensor networks, since the nodes are battery-powered.
It is therefore very important to preserve the battery, avoiding unnecessary replacements and the consequent increase in costs. Under these considerations, we propose a beamforming scheme that maximizes the useful lifetime of the network, understood as the maximum time the network can remain operational while guaranteeing quality of service (QoS) requirements that allow reliable decoding of the received signal at the base station. Distributed algorithms that converge to the centralized solution are also proposed. Initially, the only cause of energy consumption considered is communication with the base station. This energy consumption model is then modified to take into account other forms of energy consumption derived from processes inherent to the operation of the network, such as data acquisition and processing, local communications between nodes, etc. This additional energy consumption is modelled as a random variable at each node. We thus move to a probabilistic scenario that generalizes the deterministic case, and we provide conditions under which the problem can be solved efficiently. We show that the network lifetime improves significantly under the proposed energy-efficiency criterion.
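The phase-synchronisation idea behind collaborative beamforming can be illustrated numerically: each node pre-compensates its channel phase so that all contributions add constructively at the receiver. A minimal sketch with invented unit-gain channels, not the thesis's lifetime-maximizing beamformer:

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes = 8
# Random unit-gain complex channels, node -> base station (invented).
h = np.exp(1j * rng.uniform(0, 2 * np.pi, n_nodes))

# Without synchronisation the phasors add incoherently.
unsync = np.abs(np.sum(h))
# Each node applies weight conj(h_k)/|h_k|, cancelling its channel phase,
# so the received amplitude becomes sum(|h_k|) = n_nodes here.
sync = np.abs(np.sum(h * np.conj(h) / np.abs(h)))

print(round(unsync, 2), round(sync, 2))  # aligned sum equals n_nodes = 8
```

The thesis goes further: instead of pure phase alignment, node weights are chosen from local battery and channel information so that the QoS constraint at the base station is met while network lifetime is maximized.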