728 results for Collaborative Networked Organizations
Abstract:
Integrated manufacturing constitutes a complex system made of heterogeneous information and control subsystems. Those subsystems are not designed for cooperation: typically each subsystem automates specific processes and establishes a closed application domain, so it is very difficult to integrate it with other subsystems in order to respond to the needed process dynamics. Furthermore, to cope with ever-growing market competition and demands, manufacturing/enterprise systems must increase their responsiveness based on up-to-date knowledge and in-time data gathered from the diverse information and control systems. These demands have created new challenges for the manufacturing sector, and even bigger challenges for collaborative manufacturing. The growing complexity of information and communication technologies, when coping with innovative business services based on collaborative contributions from multiple stakeholders, requires novel and multidisciplinary approaches. Service orientation is a strategic approach for dealing with such complexity and with the various stakeholders' information systems. Services, or more precisely the autonomous computational agents implementing the services, provide an architectural pattern able to cope with the needs of integrated and distributed collaborative solutions. This paper proposes a service-oriented framework aiming to support a virtual organizations breeding environment, which is the basis for establishing short- or long-term goal-oriented virtual organizations. A key element is the notion of integrated business services, where customers receive value developed through the contribution of a network of companies.
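To illustrate the architectural pattern this abstract describes, here is a minimal sketch assuming a simple in-process registry; all names (PartnerService, IntegratedBusinessService, the example providers) are hypothetical illustrations, not the paper's actual framework. An integrated business service delegates each required capability to the virtual organization partner that provides it:

```python
# Sketch of service composition in a virtual organization (VO).
# All names here are hypothetical, not the framework of the paper.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class PartnerService:
    """An autonomous service exposed by one VO member."""
    provider: str
    capability: str
    execute: Callable[[str], str]  # takes a request, returns a partial result


class IntegratedBusinessService:
    """Composes partner services so the customer sees one integrated service."""

    def __init__(self) -> None:
        self._registry: Dict[str, PartnerService] = {}

    def register(self, service: PartnerService) -> None:
        self._registry[service.capability] = service

    def fulfil(self, request: str, required: List[str]) -> List[str]:
        # Each required capability is delegated to the partner providing it.
        missing = [c for c in required if c not in self._registry]
        if missing:
            raise LookupError(f"no VO partner offers: {missing}")
        return [self._registry[c].execute(request) for c in required]


if __name__ == "__main__":
    vo = IntegratedBusinessService()
    vo.register(PartnerService("MachineCo", "machining",
                               lambda r: f"machined parts for {r}"))
    vo.register(PartnerService("LogistiCo", "delivery",
                               lambda r: f"delivery plan for {r}"))
    print(vo.fulfil("order-42", ["machining", "delivery"]))
```

In a real deployment the registry would be populated from the breeding environment's member directory, and the services would be remote endpoints rather than local callables.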
Abstract:
Most definitions of virtual enterprise (VE) incorporate the idea of extended and collaborative outsourcing to suppliers and subcontractors in order to achieve a competitive response to market demands (Webster, Sugden, & Tayles, 2004). As suggested by several authors (Browne & Zhang, 1999; Byrne, 1993; Camarinha-Matos & Afsarmanesh, 1999; Cunha, Putnik, & Ávila, 2000; Davidow & Malone, 1992; Preiss, Goldman, & Nagel, 1996), a VE consists of a network of independent enterprises (resource providers) with reconfiguration capability in useful time, permanently aligned with market requirements, created to profit from a specific market opportunity, and where each participant contributes its best practices and core competencies to the success and competitiveness of the structure as a whole. Even during the operation phase of the VE, the configuration can change to assure business alignment with market demands, reflected in the identification of reconfiguration opportunities and the continuous readjustment or reconfiguration of the VE network, to meet unexpected situations or to maintain permanent competitiveness and maximum performance (Cunha & Putnik, 2002, 2005a, 2005b).
Abstract:
Dissertation presented at the Faculty of Sciences and Technology of the New University of Lisbon to obtain the degree of Doctor in Electrical Engineering, specialty in Robotics and Integrated Manufacturing
Abstract:
The idea underlying the COINE Research and Development Project is to enable people to tell their own stories. COINE aims to provide the tools needed to create, in a structured way, an environment based on the World Wide Web that allows content to be shared. The Project's results will help the development of standards for the structured deposit and retrieval of digital resources in distributed networked environments. The COINE Project started in March 2002 and ended in August 2004. We are currently in WorkPackage 5, where we are building the System, the software and the interfaces. COINE aims to cover the widest possible range of potential users, from cultural heritage organizations and institutions of any size (mainly libraries, archives and museums) to individuals of any age without ICT skills, or small groups of citizens. Users will not only use COINE as a search tool, but will also contribute their own content.
Abstract:
BACKGROUND Challenges exist in the clinical diagnosis of drug-induced liver injury (DILI) and in obtaining information on hepatotoxicity in humans. OBJECTIVE (i) To develop a unified list that combines drugs incriminated in well-vetted or adjudicated DILI cases from many recognized sources and drugs that have been subjected to serious regulatory actions due to hepatotoxicity; and (ii) to supplement the drug list with data on reporting frequencies of liver events in the WHO individual case safety report database (VigiBase). DATA SOURCES AND EXTRACTION (i) Drugs identified as causes of DILI at three major DILI registries; (ii) drugs identified as causes of drug-induced acute liver failure (ALF) in six different data sources, including major ALF registries and previously published ALF studies; and (iii) drugs subjected to serious governmental regulatory actions due to their hepatotoxicity in Europe or the US were collected. The reporting frequency of adverse events was determined using VigiBase, computed as the Empirical Bayes Geometric Mean (EBGM) with 90% confidence interval for two customized terms, 'overall liver injury' and 'ALF'. An EBGM of ≥2 was considered a disproportionate increase in reporting frequency. The identified drugs were then characterized in terms of regional divergence, published case reports, serious regulatory actions, and reporting frequency of 'overall liver injury' and 'ALF' calculated from VigiBase. DATA SYNTHESIS After excluding herbs, supplements and alternative medicines, a total of 385 individual drugs were identified; 319 drugs were identified in the three DILI registries, 107 from the six ALF registries (or studies) and 47 drugs were subjected to suspension or withdrawal in the US or Europe due to their hepatotoxicity. The identified drugs varied significantly between Spain, the US and Sweden. Of the 319 drugs identified in the DILI registries of adjudicated cases, 93.4% were found in published case reports, 1.9% were suspended or withdrawn due to hepatotoxicity and 25.7% were also identified in the ALF registries/studies. In VigiBase, 30.4% of the 319 drugs were associated with a disproportionately higher reporting frequency of 'overall liver injury' and 83.1% were associated with at least one reported case of ALF. CONCLUSIONS This newly developed list of drugs associated with hepatotoxicity and the multifaceted analysis of hepatotoxicity will aid causality assessment and clinical diagnosis of DILI and will provide a basis for further characterization of hepatotoxicity.
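For context on the screening measure, here is a minimal sketch of the observed-to-expected reporting ratio that disproportionality measures such as EBGM are built on; the actual EBGM additionally shrinks this ratio toward 1 using the gamma-Poisson (MGPS) empirical Bayes model, and the counts below are invented for illustration:

```python
# Simplified observed/expected disproportionality from the margins of
# a 2x2 table of spontaneous reports. The real EBGM applies empirical
# Bayes shrinkage (MGPS gamma-Poisson model); this unshrunk ratio
# (the relative reporting ratio, RRR) only illustrates the idea.
def relative_reporting_ratio(n11: int, drug_total: int,
                             event_total: int, grand_total: int) -> float:
    """n11: reports of the drug-event pair; the rest are margin totals."""
    expected = drug_total * event_total / grand_total
    return n11 / expected


if __name__ == "__main__":
    # Hypothetical counts: 40 liver-injury reports for a drug with 2,000
    # reports overall, against 50,000 liver-injury reports among
    # 10,000,000 reports in the whole database.
    rrr = relative_reporting_ratio(40, 2_000, 50_000, 10_000_000)
    print(f"RRR = {rrr:.1f}")  # 4.0 -> above the >=2 screening threshold
```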
Abstract:
CSCL applications are complex distributed systems that pose special requirements towards achieving success in educational settings. Flexible and efficient design of collaborative activities by educators is a key precondition for providing tailorable CSCL systems, capable of adapting to the needs of each particular learning environment. Furthermore, some parts of those CSCL systems should be reused as often as possible in order to reduce development costs. In addition, it may be necessary to employ special hardware devices or computational resources that reside in other organizations, or even exceed the possibilities of one specific organization. Therefore, the proposal of this paper is twofold: collecting collaborative learning designs (scripting) provided by educators, based on well-known best practices (collaborative learning flow patterns) in a standard way (IMS-LD), in order to guide the tailoring of CSCL systems by selecting and integrating reusable CSCL software units; and implementing those units in the form of grid services offered by third-party providers. More specifically, this paper outlines a grid-based CSCL system having these features and illustrates its potential scope and applicability by means of a sample collaborative learning scenario.
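As a toy illustration of the first half of the proposal (using a declared learning flow to select reusable units), here is a sketch assuming a hypothetical service catalogue; a real system would parse full IMS-LD XML documents and discover grid services dynamically, rather than use the hard-coded names and URLs below:

```python
# Illustrative sketch (not the paper's system): selecting reusable CSCL
# service units by matching the activities declared in a collaborative
# learning flow pattern. All catalogue entries are hypothetical.
from typing import Dict, List

# Reusable CSCL units offered by third-party (grid) providers.
SERVICE_CATALOGUE: Dict[str, str] = {
    "discussion": "https://gridprovider.example/services/forum",
    "shared-whiteboard": "https://gridprovider.example/services/whiteboard",
    "voting": "https://gridprovider.example/services/poll",
}


def services_for_pattern(activities: List[str]) -> Dict[str, str]:
    """Map each activity of a learning-flow pattern to a service endpoint."""
    unmatched = [a for a in activities if a not in SERVICE_CATALOGUE]
    if unmatched:
        raise LookupError(f"no reusable unit for activities: {unmatched}")
    return {a: SERVICE_CATALOGUE[a] for a in activities}


if __name__ == "__main__":
    # A simplified Jigsaw-like flow: expert-group discussion, then a
    # whiteboard session in mixed groups, then a closing vote.
    jigsaw = ["discussion", "shared-whiteboard", "voting"]
    for activity, endpoint in services_for_pattern(jigsaw).items():
        print(f"{activity:18s} -> {endpoint}")
```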
Abstract:
The objective of this research was to study the role of key individuals in the facilitation of technology-enabled bottom-up innovation in the context of large organizations. The development of innovation was followed from the point of view of the individual actor (key individual) in two cases, through three levels: individual, team and organization, using knowledge creation and innovation models. The study provides a theoretical synthesis and framework through which the study is driven. The results indicate that in bottom-up initiated innovations the role of key individuals is still crucial, but innovation today is a collective effort involving several entrepreneurial key individuals: the innovator, the user champion and the organizational sponsor, whose collaboration and developing interaction drive innovation further. Teamwork is functional and fluent, but it encounters great problems in interaction with the organization. Large organizations should develop their practices and ability to react to emerging bottom-up initiatives, in order to embed innovation in the organization and attain sustainable innovation. In addition, bottom-up initiated innovations are demonstrations of people's knowing and tacit knowledge, and are therefore a means of renewing an organization.
Abstract:
The objective of the thesis is to enhance understanding of the management of the front end phases of the innovation process in a networked environment. The thesis approaches the front end of innovation from three perspectives: the strategy, processes and systems of innovation. The purpose of using different perspectives is to provide an extensive systemic view of the front end and to uncover the complex nature of innovation management. The context of the research is the networked operating environment of firms. The unit of analysis is the firm itself or its innovation processes, which means that this research approaches innovation networks from the point of view of a firm. The strategy perspective of the thesis emphasises the importance of purposeful innovation management, the innovation strategy of firms. The role of innovation processes is critical in carrying out innovation strategies in practice, supporting the development of organizational routines for innovation, and driving the strategic renewal of companies. From the systems perspective, the primary focus of the thesis is on idea management systems, which are defined as a part of innovation management systems and, for this thesis, as any working combination of methodology and tools (manual or IT-supported) that enhances the management of innovations within their early phases. The main contributions of the thesis are the managerial frameworks developed for managing the front end of innovation, which purposefully “wire” the front end of innovation into the strategy and business processes of a firm. The thesis contributes to modern innovation management by connecting the internal and external collaboration networks as foundational elements for successful management of the early phases of innovation processes in a dynamic environment. The innovation capability of a firm is largely defined by its ability to rely on and make use of internal and external collaboration already during the front end activities, which by definition include opportunity identification and analysis, idea generation, proliferation and selection, and concept definition. More specifically, coordination of the interfaces between these activities, and between the internal and external innovation environments of a firm, is emphasised. The role of information systems, in particular idea management systems, is to support and delineate the innovation-oriented behaviour and interaction of individuals and organizations during front end activities. The findings and frameworks developed in the thesis can be used by companies for the purposeful promotion of their front end processes. The thesis provides a systemic strategy framework for managing the front end of innovation: not as a separate process, but as an elemental bundle of activities that is closely linked to the overall innovation process and strategy of a firm in a distributed environment. The theoretical contribution of the thesis relies on the advancement of the open innovation paradigm in the strategic context of a firm within its internal and external innovation environments. The thesis applies the constructive research approach and case study methodology to provide theoretically significant results that are also practically beneficial.
Abstract:
The thesis deals with the phenomenon of learning between organizations in innovation networks that develop new products, services or processes. Inter-organizational learning is studied especially at the level of the network. The role of the network can be seen as twofold: either the network is a context for inter-organizational learning, if the learner is something other than the network (an organization, group or individual), or the network itself is the learner. Innovations are regarded as a primary source of competitiveness and renewal in organizations. Networking has become increasingly common, particularly because of the possibility of extending the resource base of the organization through partnerships and of concentrating on core competencies. Especially in innovation activities, networks provide the possibility to respond faster to the complex needs of customers and to share the costs and risks of the development work. Networked innovation activities are often organized in practice as distributed virtual teams, either within one organization or as cross-organizational cooperation. The role of technology is considered in the research mainly as an enabling tool for collaboration and learning. Learning has been recognized as one important collaborative process in networks and as a motivation for networking. It is even more important in the innovation context as an enabler of renewal, since the essence of the innovation process is creating new knowledge, processes, products and services. The thesis aims to provide enhanced understanding of the inter-organizational learning phenomenon in and by innovation networks, concentrating especially on the network level. The perspectives used in the research are the theoretical viewpoints and concepts, challenges, and solutions for learning. The methods used in the study are literature reviews and empirical research carried out through semi-structured interviews analyzed with qualitative content analysis. The empirical research concentrates on two different areas: firstly, the theoretical approaches to learning that are relevant to innovation networks; secondly, learning in virtual innovation teams. As a result, the research identifies insights and implications for learning in innovation networks from several viewpoints on organizational learning. Using multiple perspectives allows drawing a many-sided picture of the learning phenomenon, which is valuable because of the versatility and complexity of the situations and challenges of learning in the context of innovation and networks. The research results also show some of the challenges of learning and possible solutions for supporting network-level learning in particular.
Abstract:
Unlike their counterparts in Europe and America, the citizen organizations acting for the well-being of animals in Japan have not received scholarly attention. In this research, I explore the activities of twelve Japanese pro-animal organizations in the Tokyo and Kansai areas from the perspective of social movement and civil society studies. The concept of a ‘pro-animal organization’ is used to refer generally to collectives promoting animal well-being. Using collective action frame analysis and the three core framing tasks – diagnostic, prognostic, and motivational – as the primary analytical tools, I explore the grievances, tactics, motivational means, constructions of agency and identity, as well as the framing of civil society, articulated in the newsletters and the interviews of the twelve organizations I interviewed in Japan in 2010. As frame construction is always done in relation to the social and political context, I study how the organizations construct their roles as civil society actors in relation to other actors, such as the state, and to the idea of citizen activism. The deficiencies in the animal welfare law and the lack of knowledge among the public are identified as the main grievances. The primary tactic for overcoming these problems was to educate and inform citizens and authorities, because most organizations lack the channels to exert political influence. The audiences were mostly portrayed as either ignorant bystanders or potential adherents. In order to motivate people to join their cause and to reinforce motivation within the organization, the organizations emphasized their uniqueness, proved their efficiency, claimed credit and celebrated even small improvements. The organizations tended to create three different roles for citizen pro-animal organizations in civil society: reactive, apolitical and empathetic animal lovers concentrating on saving individual animals; proactive, educative bridge-builders seeking to establish equal collaborative relations with authorities; and corrective, supervising watchdogs demanding change in delinquencies offending animal rights. Based on the results of this research, I suggest that by studying how and why the different relations between civil society and the governing actors of the state are constructed, a more versatile approach to citizens’ activism in its context can be achieved.
Abstract:
Université de Montréal implemented an interprofessional education (IPE) curriculum on collaborative practice in a large cohort of students (>1,100) from 10 health sciences and psychosocial sciences training programs. It is made up of three one-credit undergraduate courses (CSS1900, CSS2900, CSS3900) spanning the first 3 years of training. The course content and activities aim at developing the six competency domains identified by the Canadian Interprofessional Health Collaborative. This paper describes the IPE curriculum and highlights the features contributing to its success and originality. Among the main success factors were: administrative cooperation among participating faculties, educators eager to develop innovative approaches, extensive use of clinical situations conducive to knowledge and skill application, strong logistic support, close cooperation with health care delivery organizations, and partnership between clinicians and patients. A distinguishing feature of this IPE curriculum is the concept of partnership in care between the patient and caregivers. Patients’ representatives were involved in course planning, and patients were trained to become patients-as-trainers (PT) and to cofacilitate interprofessional discussion workshops. They give feedback to students regarding integration and application of the patient partnership concept from a patient’s point of view. Lire l'article/Read the article : http://openurl.ingenta.com/content?genre=article&issn=0090-7421&volume=42&issue=4&spage=97E&epage=106E
Abstract:
Nowadays, companies are experiencing great difficulties in managing their business due to constant and unpredictable economic market fluctuations. Recent changes in market trends (such as the constant demand for new products and services, mass customization and the drastic reduction of delivery time) lead companies to adopt strategies of creating partnerships with other companies as a way to respond effectively to such difficult economic times. The Collaborative Networks concept was born as a consequence of companies no longer being able to consider their internal business process management as sufficient, and instead seeking a collaborative approach with other partners for their critical processes. Information and communication technologies (ICT) assumed a major role, acting as “enablers” of these kinds of networks by enhancing information sharing and business process integration. Several new trends concerning ICT architectures have been created to support collaborative network requirements, but a common platform that reduces the integration effort needed in virtual organizations still does not exist. This study investigates the current technological solutions available in the market that enhance the management of companies’ business processes (especially Collaborative Planning). Finally, the research work ends with the presentation of a conceptual model that addresses the constraints evaluated.
Abstract:
Research literature is replete with the importance of collaboration in schools, the lack of its implementation, the centrality of the role of the principal, and the existence of a gap between knowledge and practice--or a "Knowing-Doing Gap." In other words, there is a set of knowledge that principals must have in order to create a collaborative workplace environment for teachers. This study sought to describe what high school principals know about creating such a culture of collaboration. The researcher combed journal articles, studies and professional literature in order to identify what principals must know in order to create a culture of collaboration. The result was ten elements of principal knowledge: Staff involvement in important decisions, Charismatic leadership not being necessary for success, Effective elements of teacher teams, Administrators' modeling of professional learning, The allocation of resources, Staff meetings focused on student learning, Elements of continuous improvement, and Principles of Adult Learning, Student Learning and Change. From these ten elements, the researcher developed a web-based survey intended to measure nine of them (Charismatic leadership was excluded). Principals of accredited high schools in the state of Nebraska were invited to participate in this survey, as high schools are well known for the isolation that teachers experience--particularly as a result of departmentalization. The results indicate that principals have knowledge of eight of the nine measured elements; the one they lacked an understanding of was Principles of Student Learning. Given these two findings of what principals do and do not know, the researcher recommends that professional organizations, intermediate service agencies and district-level support staff engage in systematic and systemic initiatives to increase principals' knowledge of the lacking element. Further, given that eight of the nine elements are understood by principals, it would be wise to examine the reasons for the implementation gap (Knowing-Doing Gap) and how to overcome it.
Abstract:
The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One of the applications that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within their coverage area. Particularly challenging is the problem of estimating the target’s position using the received signal strength indicator (RSSI), due to the nonlinear relationship between the measured signal and the true position of the target. Many of the existing approaches suffer either from high computational complexity (e.g., particle filters) or from lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to find a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows for a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore very suitable for implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the found solution. Under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a distributed version of the Gauss-Newton method based on consensus. Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting. While some approaches use a special kind of node (called a sink node) for data harvesting and forwarding to the outside world, there are some scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate in order to form a virtual antenna array that can exploit the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases in order to add constructively at the receiver. One of the inconveniences associated with collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity. This may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we find distributed algorithms that converge to the optimal centralized beamformer.
While in the first part we consider only battery depletion due to communications beamforming, we extend the model to account for more realistic scenarios by introducing an additional random energy consumption. It is shown how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem from the energy-efficiency perspective, the network’s lifetime is significantly improved.
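To make the estimation model concrete, here is a centralized sketch of Gauss-Newton refinement for RSSI localization under the standard log-distance path-loss model; the thesis's actual algorithm runs this search in a fully distributed way via consensus, and the anchor layout, path-loss parameters and measurements below are invented for illustration:

```python
# Centralized sketch of Gauss-Newton refinement for RSSI localization.
# The thesis solves this in a distributed way (consensus-based
# Gauss-Newton); this simplified version only illustrates the model:
#   rssi_i = P0 - 10*n*log10(||x - a_i||) + noise.
import numpy as np

P0, PLE = -40.0, 3.0  # reference power [dBm] at 1 m, path-loss exponent


def predicted_rssi(x: np.ndarray, anchors: np.ndarray) -> np.ndarray:
    d = np.linalg.norm(anchors - x, axis=1)
    return P0 - 10.0 * PLE * np.log10(d)


def gauss_newton(x0: np.ndarray, anchors: np.ndarray,
                 rssi: np.ndarray, iters: int = 20) -> np.ndarray:
    x = x0.astype(float)
    for _ in range(iters):
        diff = x - anchors                      # (m, 2)
        d2 = np.sum(diff**2, axis=1)            # squared distances
        r = rssi - predicted_rssi(x, anchors)   # residuals
        # Jacobian of the residuals wrt x: (10*PLE/ln10) * (x-a_i)/d_i^2
        J = (10.0 * PLE / np.log(10.0)) * diff / d2[:, None]
        x = x - np.linalg.solve(J.T @ J, J.T @ r)  # Gauss-Newton step
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    target = np.array([3.0, 6.0])
    rssi = predicted_rssi(target, anchors) + rng.normal(0, 0.5, 4)
    print(gauss_newton(np.array([5.0, 5.0]), anchors, rssi))
```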
Abstract:
Critical infrastructures support everyday activities in modern societies, facilitating the exchange of services and quantities of various kinds. Their functioning is the result of the integration of diverse technologies, systems and organizations into a complex network of interconnections. The benefits of networking are accompanied by new threats and risks. In particular, because of the increased interdependency, disturbances and failures may propagate and render the whole infrastructure network unstable. This paper presents a methodology for the resilience analysis of networked systems of systems. Resilience generalizes the concept of stability of a system around a state of equilibrium with respect to a disturbance, and its ability to prevent, resist and recover from it. The methodology provides a tool for the analysis of off-equilibrium conditions that may occur in a single system and propagate through the network of dependencies. The analysis is conducted in two stages. The first stage is qualitative: it identifies the resilience scenarios, i.e. the sequences of events, triggered by an initial disturbance, which include failures and the system response. The second stage is quantitative: the most critical scenarios can be simulated, for the desired parameter settings, in order to check whether they are successfully handled, i.e. recovered to nominal conditions, or whether they end in network failure. The proposed methodology aims at providing effective support to resilience-informed design.
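As a toy version of the two-stage idea (qualitative scenario identification followed by quantitative simulation), the following sketch propagates an initial disturbance through a hypothetical dependency graph and records which systems absorb it and which fail; the graph, tolerances and attenuation factor are invented, and the paper's methodology is not reduced to this:

```python
# Toy cascade simulation over a dependency graph of systems.
# Illustrative only (not the paper's methodology): a system fails if
# disturbed beyond its tolerance, and a failure disturbs every system
# that depends on it, attenuated by a made-up factor.
from typing import Dict, List

# dependents[s] = systems that depend on s (and feel its failure)
dependents: Dict[str, List[str]] = {
    "power": ["telecom", "water"],
    "telecom": ["finance"],
    "water": [],
    "finance": [],
}
tolerance = {"power": 0.3, "telecom": 0.5, "water": 0.7, "finance": 0.6}
attenuation = 0.8  # fraction of the disturbance passed downstream


def simulate(initial: str, magnitude: float) -> List[str]:
    """Return the resilience scenario: the ordered list of failed systems."""
    failed: List[str] = []
    frontier = [(initial, magnitude)]
    while frontier:
        system, level = frontier.pop(0)
        if system in failed or level <= tolerance[system]:
            continue  # absorbed: the system rides out the disturbance
        failed.append(system)
        frontier += [(d, level * attenuation) for d in dependents[system]]
    return failed


if __name__ == "__main__":
    # A 0.9 disturbance at "power" cascades to telecom and water, while
    # finance absorbs the attenuated shock: ['power', 'telecom', 'water'].
    print(simulate("power", 0.9))
```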