979 results for Collaborative network
Abstract:
Collaborative management is currently a key element in supply chain management. In this article, the topic is addressed through the analysis of a real case in which a large worldwide fast-food chain and its logistics service provider (LSP) worked together in Brazil on a pilot project to implement collaborative planning, forecasting, and replenishment (CPFR). The study uses an action-research methodology and presents the main variables that influenced the project, covering the processes required for implementation and the factors that favor CPFR. Based on the case studied, the paper offers a set of propositions on the role of supply chain agents in projects of this nature. Managing the supply chain through the direct coordination of an LSP also makes it possible to demonstrate the possibilities and difficulties of this arrangement, contributing to a collaborative view of the supply chain grounded in the relationships among its agents.
Abstract:
OBJECTIVE: To describe patient recruitment, assessment instruments, methods for developing multicenter collaborative studies, and preliminary results of the Brazilian Research Consortium on Obsessive-Compulsive Spectrum Disorders, which includes seven university centers. METHOD: This cross-sectional study included semi-structured interviews (sociodemographic data, medical and psychiatric history, disease course, and comorbid psychiatric diagnoses) and instruments assessing obsessive-compulsive symptoms (Yale-Brown Obsessive-Compulsive Scale and Dimensional Yale-Brown Obsessive-Compulsive Scale), depressive symptoms (Beck Depression Inventory), anxiety symptoms (Beck Anxiety Inventory), sensory phenomena (Universidade de São Paulo Sensory Phenomena Scale), insight (Brown Assessment of Beliefs Scale), tics (Yale Global Tic Severity Scale), and quality of life (the generic quality-of-life questionnaire Medical Outcome Quality of Life Scale Short-form-36 and the Social Adjustment Scale). Rater training consisted of watching five recorded interviews and interviewing five patients together with a more experienced researcher before interviewing patients alone. Inter-rater reliability among all group leaders for the most important instruments (Structured Clinical Interview for DSM-IV, Dimensional Yale-Brown Obsessive-Compulsive Scale, Universidade de São Paulo Sensory Phenomena Scale) was measured after six complete interviews. RESULTS: Inter-rater reliability was 96%. By March 2008, 630 patients with obsessive-compulsive disorder had been systematically assessed. Mean age (±SE) was 34.7 (±0.51) years, 56.3% were female, and 84.6% were Caucasian. The most prevalent obsessive-compulsive symptoms were symmetry and contamination. The most common psychiatric comorbidities were major depression, generalized anxiety disorder, and social anxiety disorder. The most common impulse-control disorder was skin picking (neurotic excoriation). CONCLUSION: This research consortium, the first of its kind in Brazil, made it possible to delineate the sociodemographic, clinical, and therapeutic profile of patients with obsessive-compulsive disorder in a large clinical sample. The Brazilian Research Consortium on Obsessive-Compulsive Spectrum Disorders has established an important network for standardized clinical research on obsessive-compulsive disorder and may pave the way for similar projects aimed at integrating other research groups in Brazil and around the world.
Abstract:
BACKGROUND: Few data are available on the long-term immunologic response to antiretroviral therapy (ART) in resource-limited settings, where ART is being rapidly scaled up using a public health approach with a limited repertoire of drugs. OBJECTIVES: To describe the immunologic response to ART among patients in a network of cohorts from sub-Saharan Africa, Latin America, and Asia. STUDY POPULATION/METHODS: Treatment-naive patients aged 15 and older from 27 treatment programs were eligible. Multilevel linear mixed models were used to assess associations between predictor variables and CD4 cell count trajectories following ART initiation. RESULTS: Of 29 175 patients initiating ART, 8933 (31%) were excluded due to insufficient follow-up time, early loss to follow-up, or death. The remaining 19 967 patients contributed 39 200 person-years on ART and 71 067 CD4 cell count measurements. The median baseline CD4 cell count was 114 cells/μl, with 35% having less than 100 cells/μl. Substantial intersite variation in baseline CD4 cell count was observed (range 61-181 cells/μl). Women had higher median baseline CD4 cell counts than men (121 vs. 104 cells/μl). The median CD4 cell count increased from 114 cells/μl at ART initiation to 230 [interquartile range (IQR) 144-338] at 6 months, 263 (IQR 175-376) at 1 year, 336 (IQR 224-472) at 2 years, 372 (IQR 242-537) at 3 years, 377 (IQR 221-561) at 4 years, and 395 (IQR 240-592) at 5 years. In multivariable models, baseline CD4 cell count was the most important determinant of subsequent CD4 cell count trajectories. CONCLUSION: These data demonstrate a robust and sustained CD4 response to ART among patients remaining on therapy. Public health and programmatic interventions leading to earlier HIV diagnosis and initiation of ART could substantially improve patient outcomes in resource-limited settings.
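As a rough illustration of the modelling approach named in this abstract (multilevel linear mixed models for CD4 trajectories), the following minimal sketch fits a random-intercept, random-slope model with statsmodels. The data are simulated and all variable names (`cd4`, `months`, `baseline_cd4`, `patient`) are hypothetical, not the study's specification.

```python
# Minimal sketch: multilevel linear mixed model for CD4 trajectories.
# Simulated data; variable names are hypothetical, not the study's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_patients, n_visits = 200, 6
patient = np.repeat(np.arange(n_patients), n_visits)
months = np.tile(np.arange(0, 36, 6), n_patients)
baseline = rng.normal(114, 40, n_patients)          # baseline CD4 around 114
cd4 = (baseline[patient]
       + 5.0 * months                               # average recovery slope
       + rng.normal(0, 15, n_patients)[patient]     # patient random intercept
       + rng.normal(0, 30, len(patient)))           # residual noise

df = pd.DataFrame({"cd4": cd4, "months": months,
                   "baseline_cd4": baseline[patient], "patient": patient})

# Random intercept and slope per patient; baseline CD4 as a fixed effect
model = smf.mixedlm("cd4 ~ months + baseline_cd4", df,
                    groups=df["patient"], re_formula="~months")
print(model.fit().summary())
```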
Abstract:
To master changing performance demands, autonomous transport vehicles are deployed to make in-house material flow applications more flexible. The so-called cellular transport system consists of a multitude of small-scale transport vehicles that shall be able to form a swarm. To this end, the vehicles need to detect each other, exchange information among each other, and sense their environment. By sharing peripherally acquired information among transport entities, better decisions can be made regarding navigation and collision avoidance. This paper is a contribution to the collective utilization of sensor data in a swarm of cellular transport vehicles.
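To make the idea of exchanging state information for collision avoidance concrete, here is a purely illustrative sketch (none of these structures or thresholds come from the paper) in which each vehicle broadcasts its position and velocity, and neighbours run a simple time-of-closest-approach check:

```python
# Illustrative sketch: vehicles share position/velocity so each one can run a
# time-of-closest-approach check against neighbours. Names and thresholds are
# hypothetical, not taken from the paper.
from dataclasses import dataclass
import math

@dataclass
class VehicleState:
    vid: int
    x: float; y: float      # position in metres
    vx: float; vy: float    # velocity in m/s

def time_to_closest_approach(a: VehicleState, b: VehicleState) -> float:
    """Time at which two constant-velocity vehicles are closest."""
    rx, ry = b.x - a.x, b.y - a.y
    dvx, dvy = b.vx - a.vx, b.vy - a.vy
    speed2 = dvx * dvx + dvy * dvy
    if speed2 == 0.0:
        return 0.0          # identical velocities: distance never changes
    return max(0.0, -(rx * dvx + ry * dvy) / speed2)

def collision_risk(a: VehicleState, b: VehicleState, radius: float = 0.5) -> bool:
    t = time_to_closest_approach(a, b)
    dx = (b.x + b.vx * t) - (a.x + a.vx * t)
    dy = (b.y + b.vy * t) - (a.y + a.vy * t)
    return math.hypot(dx, dy) < 2 * radius

swarm = [VehicleState(0, 0, 0, 1, 0), VehicleState(1, 5, 0.2, -1, 0)]
for other in swarm[1:]:
    if collision_risk(swarm[0], other):
        print(f"vehicle 0 should re-plan to avoid vehicle {other.vid}")
```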
Abstract:
Environmental policy and decision-making are characterized by complex interactions between different actors and sectors. As a rule, a stakeholder analysis is performed to understand those involved, but such analyses have been criticized for lacking quality and consistency. This lack is remedied here by a formal social network analysis that investigates collaborative and multi-level governance settings in a rigorous way. We examine the added value of combining both elements. Our case study examines infrastructure planning in the Swiss water sector. Water supply and wastewater infrastructures are planned far into the future, usually on the basis of projections of past boundary conditions. They affect many actors, including the population, and are expensive. In view of increasing future dynamics and climate change, a more participatory and long-term planning approach is required. Our specific aims are to investigate fragmentation in water infrastructure planning, to understand how actors from different decision levels and sectors are represented, and to identify which interests they follow. We conducted 27 semi-structured interviews with local stakeholders as well as cantonal and national actors. The network analysis confirmed our hypothesis of strong fragmentation: we found little collaboration between the water supply and wastewater sectors (confirming horizontal fragmentation) and few ties between local, cantonal, and national actors (confirming vertical fragmentation). Infrastructure planning is clearly dominated by engineers and local authorities. Little importance is placed on longer-term strategic objectives and integrated catchment planning, although these were perceived as more important in a second analysis going beyond the typical questions of a stakeholder analysis. We conclude that linking a stakeholder analysis, comprising rarely asked questions, with a rigorous social network analysis is very fruitful and generates complementary results. This combination gave us deeper insight into the socio-political-engineering world of water infrastructure planning, which is of vital importance to our well-being.
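A toy sketch of how horizontal and vertical fragmentation can be quantified in such a collaboration network: count the share of ties that cross sectors or decision levels. The graph, actors, and attribute values below are invented for illustration, not the study's data.

```python
# Toy illustration of measuring horizontal/vertical fragmentation in a
# collaboration network. Actors and ties are invented for the example.
import networkx as nx

G = nx.Graph()
actors = {
    "utility_A": ("water_supply", "local"),
    "utility_B": ("wastewater", "local"),
    "engineer_C": ("water_supply", "local"),
    "canton_office": ("wastewater", "cantonal"),
    "federal_office": ("water_supply", "national"),
}
for name, (sector, level) in actors.items():
    G.add_node(name, sector=sector, level=level)
G.add_edges_from([("utility_A", "engineer_C"),
                  ("utility_B", "canton_office"),
                  ("utility_A", "federal_office")])

def cross_share(attr: str) -> float:
    """Share of ties that cross groups of the given node attribute."""
    cross = sum(1 for u, v in G.edges
                if G.nodes[u][attr] != G.nodes[v][attr])
    return cross / G.number_of_edges()

# Low cross-sector share -> horizontal fragmentation;
# low cross-level share -> vertical fragmentation.
print(f"cross-sector ties: {cross_share('sector'):.0%}")
print(f"cross-level ties:  {cross_share('level'):.0%}")
```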
Abstract:
Different socio-economic and environmental drivers lead local communities in mountain regions to adapt land use practices and engage in protection policies. The political system also has to develop new approaches to adapt to those drivers. Local actors are the target group of those policy approaches, and the question arises of whether and to what extent those actors are consulted or even integrated into the design of local land use and protection policies. This article addresses this question by comparing seven case studies in Swiss mountain regions. Through a formal social network analysis, the inclusion of local actors in collaborative policy networks is investigated and compared with the involvement of other stakeholders representing the next higher sub-national or national decisional levels. Results show significant differences (1) in how local actors are embedded compared to other stakeholders and (2) between top-down and bottom-up policy design processes.
Abstract:
The central assumption in the literature on collaborative networks and policy networks is that political outcomes are affected by a variety of state and nonstate actors. Some of these actors are more powerful than others and can therefore have a considerable effect on decision making. In this article, we seek to provide a structural and institutional explanation for these power differentials in policy networks and to support the explanation with empirical evidence. We use a dyadic measure of influence reputation as a proxy for power and posit that influence reputation over the political outcome is related to vertical integration into the political system, by means of formal decision-making authority, and to horizontal integration, by means of being well embedded in the policy network. Hence, we argue that actors are perceived as influential because of two complementary factors: (a) their institutional roles and (b) their structural positions in the policy network. Based on temporal and cross-sectional exponential random graph models, we compare five cases concerning climate, telecommunications, flood prevention, and toxic chemicals politics in Switzerland and Germany. The five networks cover national and local networks at different stages of the policy cycle. The results confirm that institutional and structural drivers have a crucial impact on how an actor is perceived in decision making and implementation and, therefore, on its ability to significantly shape outputs and service delivery.
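A minimal illustration of the dyadic influence-reputation measure described above: build a directed "actor i names actor j as influential" network and set reputation in-degree alongside an institutional-authority flag and a structural embeddedness proxy. The actors, ties, and flags are invented; the paper's exponential random graph models are not attempted here.

```python
# Minimal sketch of a dyadic influence-reputation measure. Actors, ties, and
# authority flags are invented; this does not reproduce the paper's ERGMs.
import networkx as nx

R = nx.DiGraph()  # edge (i, j) means: actor i names actor j as influential
R.add_edges_from([("ngo", "ministry"), ("firm", "ministry"),
                  ("ngo", "agency"), ("ministry", "agency"),
                  ("agency", "ministry"), ("firm", "agency")])
authority = {"ministry": 1, "agency": 1, "ngo": 0, "firm": 0}  # invented flags

betweenness = nx.betweenness_centrality(R)   # structural embeddedness proxy
for actor in R.nodes:
    print(f"{actor:9s} reputation={R.in_degree(actor)} "
          f"authority={authority[actor]} betweenness={betweenness[actor]:.2f}")
```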
Abstract:
Atrial fibrillation (AF) is the most common sustained arrhythmia in the general population. As an age-related arrhythmia, AF is becoming a huge socio-economic burden for European healthcare systems. Despite significant progress in our understanding of the pathophysiology of AF, therapeutic strategies for AF have not changed substantially, and the major challenges in the management of AF are still unmet. This lack of progress may be related to the multifactorial pathogenesis of atrial remodelling and AF, which hampers the identification of causative pathophysiological alterations in individual patients. In addition, new mechanisms continue to be identified, and the relative contribution of these mechanisms still has to be established. In November 2010, the European Union launched the large collaborative project EUTRAF (European Network of Translational Research in Atrial Fibrillation) to address these challenges. The main aims of EUTRAF are to study the main mechanisms of initiation and perpetuation of AF, to identify the molecular alterations underlying atrial remodelling, to develop markers allowing these processes to be monitored, and to suggest strategies to treat AF based on insights into newly defined disease mechanisms. This article reports on the objectives, structure, and initial results of this network.
Abstract:
The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One application that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within their coverage area. Particularly challenging is the problem of estimating the target's position from the received signal strength indicator (RSSI), due to the nonlinear relationship between the measured signal and the true position of the target. Many of the existing approaches suffer either from high computational complexity (e.g., particle filters) or from lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to find a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows for a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore very suitable for implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the found solution. Under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a distributed version of the Gauss-Newton method based on consensus.
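For concreteness, here is a small, centralized sketch of RSSI-based maximum-likelihood localization with a Gauss-Newton refinement, assuming Gaussian noise and a log-distance path-loss model with known parameters; the consensus-based distributed machinery developed in the thesis is omitted.

```python
# Sketch: ML target localization from RSSI under a log-distance path-loss
# model, refined by Gauss-Newton. Centralized for clarity; the thesis's
# consensus-based distributed version is not reproduced here.
import numpy as np

P0, ETA = -40.0, 3.0   # dBm at 1 m and path-loss exponent (assumed known)

def rssi_model(x, sensors):
    """Log-distance path-loss model: P0 - 10*ETA*log10(distance)."""
    d = np.linalg.norm(sensors - x, axis=1)
    return P0 - 10.0 * ETA * np.log10(d)

def gauss_newton(x0, sensors, rssi, iters=20):
    """Refine a position estimate by Gauss-Newton least squares."""
    x = x0.astype(float)
    for _ in range(iters):
        d2 = np.sum((sensors - x) ** 2, axis=1)
        r = rssi - rssi_model(x, sensors)                  # residuals
        # Jacobian of the model w.r.t. x: -(10*ETA/ln10) * (x - s_i) / d_i^2
        J = -(10.0 * ETA / np.log(10.0)) * (x - sensors) / d2[:, None]
        step, *_ = np.linalg.lstsq(J, r, rcond=None)       # solve J*step ~ r
        x = x + step
    return x

rng = np.random.default_rng(1)
sensors = rng.uniform(0.0, 20.0, size=(8, 2))   # known sensor positions
target = np.array([12.0, 7.0])                  # unknown position to recover
rssi = rssi_model(target, sensors) + rng.normal(0.0, 1.0, 8)

est = gauss_newton(sensors.mean(axis=0), sensors, rssi)
print("estimate:", est.round(2), "true:", target)
```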
Regardless of the underlying application or function of the sensor network, a mechanism for data reporting is always necessary. While some approaches use a special kind of node (a sink node) for data harvesting and forwarding to the outside world, there are scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting, one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate to form a virtual antenna array that exploits the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases so that their signals add constructively at the receiver. One inconvenience of collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity; this may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we find distributed algorithms that converge to the optimal centralized beamformer.
While in the first part we consider only battery depletion due to communications beamforming, we then extend the model to account for more realistic scenarios by introducing an additional random energy consumption. It is shown how the new problem generalizes the original one and under which conditions it remains easily solvable. By formulating the problem from an energy-efficiency perspective, the network's lifetime is significantly improved.
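The lifetime-maximization idea can be illustrated with a simplified sketch: once phases are matched to the channel (conjugate beamforming), a candidate lifetime T caps each node's sustainable power at E_i/T, and feasibility reduces to the summed received amplitude meeting the QoS threshold, so the largest feasible T can be found by bisection. All quantities below are invented; the thesis's distributed algorithms are not reproduced.

```python
# Illustrative lifetime-maximizing beamforming sketch. With phases matched to
# the channel, a lifetime T is feasible iff the summed received amplitude,
# with each node's power capped at min(E_i/T, P_MAX), meets the QoS threshold.
# Bisection finds the largest feasible T. All numbers are invented.
import numpy as np

rng = np.random.default_rng(2)
n = 10
E = rng.uniform(1.0, 5.0, n)        # battery energy per node (J)
h = np.abs(rng.normal(0, 1, n))     # channel gain magnitudes to base station
P_MAX = 0.5                         # per-node hardware power cap (W)
A_MIN = 2.0                         # QoS: minimum received amplitude

def feasible(T: float) -> bool:
    p = np.minimum(E / T, P_MAX)    # power each node can sustain for time T
    return np.sum(np.sqrt(p) * h) >= A_MIN

lo, hi = 1e-6, 1e6                  # bracket for the bisection on lifetime
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
print(f"max network lifetime ~ {lo:.2f} s")
```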
Abstract:
One of the main outputs of the project is a collaborative platform that integrates a myriad of research and learning resources. This article presents the first prototype of this platform: the AFRICA BUILD Portal (ABP 1.0). The ABP is a Web 2.0 platform that facilitates access to these resources in a collaborative manner. Through a usable web interface, the ABP has been designed to avoid, as much as possible, the connectivity problems of African institutions. In this paper, we suggest that access to complex systems does not have to imply slow response rates, and that the development model followed guides the project toward natural technological transfer, adaptation, and user acceptance. Finally, this platform aims to motivate research attitudes during the learning process and to stimulate users' collaborations.
Abstract:
Dynamic and Partial Reconfiguration allows systems to change some parts of their hardware at run time. This feature favours the inclusion of evolutionary strategies that provide optimised solutions to the same problem, so that candidate solutions can be mixed and compared in such a way that only the best ones prevail. At the same time, distributed intelligence permits systems to work collaboratively to jointly improve their global capabilities. This work presents a combination of both approaches, where hardware evolution is performed at both the local and network levels in order to improve an image filter application in terms of performance and robustness, while providing the capacity to avoid local minima, the main drawback of some evolutionary approaches.
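As a purely software-only illustration of the evolutionary loop (the paper evolves hardware filters on dynamically reconfigurable FPGAs, which this does not attempt), a simple (mu + lambda) strategy can evolve a 3x3 convolution kernel toward a noise-reduction objective:

```python
# Software-only sketch of evolving a 3x3 image-filter kernel with a simple
# (mu + lambda) evolutionary strategy. The paper targets FPGA hardware with
# dynamic partial reconfiguration; this toy only illustrates the loop.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(3)
clean = rng.uniform(0.0, 1.0, (32, 32))            # toy reference image
noisy = clean + rng.normal(0.0, 0.1, clean.shape)  # corrupted input

def fitness(kernel):
    """Negative MSE between the filtered noisy image and the reference."""
    out = convolve2d(noisy, kernel, mode="same", boundary="symm")
    return -np.mean((out - clean) ** 2)

pop = [rng.normal(0.0, 0.3, (3, 3)) for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)            # best candidates first
    parents = pop[:5]
    # Offspring are mutated copies of the parents; in the paper, candidate
    # filters could also migrate between networked nodes at this point.
    pop = parents + [p + rng.normal(0.0, 0.05, (3, 3))
                     for p in parents for _ in range(3)]

best = max(pop, key=fitness)
print("best fitness (negative MSE):", round(fitness(best), 5))
```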
Abstract:
In 1995, the National Library of Medicine (NLM) and the Public Health Service (PHS) recommended that special attention be given to the information needs of unaffiliated public health professionals. In response, the National Network of Libraries of Medicine (NN/LM) Greater Midwest Region initiated a collaborative outreach program for public health professionals working in rural east and central Iowa. Five public health agencies were provided with equipment, training, and support for accessing the Internet. Key factors in the success of this project were: (1) the role of collaborating agencies in the implementation and ongoing success of information access outreach projects; (2) knowledge of the socio-cultural factors that influence the information-seeking habits of project participants (public health professionals); and (3) management of changing or varying technological infrastructures. Working with this funding, personnel from federal, state, and local governments enhanced the information-seeking skills of public health professionals in rural eastern and central Iowa communities.
Abstract:
There is a growing need for innovative methods of dealing with complex social problems. New types of collaborative efforts have emerged as a result of the inability of more traditional bureaucratic hierarchical arrangements, such as departmental programs, to resolve these problems. Network structures are one such arrangement at the forefront of this movement. Although collaboration through network structures constitutes an innovative response to social issues, there remains an expectation that outcomes and processes will follow traditional ways of working. Practitioners and policy makers alike need to understand the realities of what can be expected from network structures in order to maximize the benefits of these unique mechanisms.
Abstract:
This is a theoretical paper that examines the interplay between individual and collective capabilities and competencies and value transactions in collaborative environments. The theory behind value creation is examined, and two types of value are identified: internal value (shareholder value) and external value (value proposition). The literature on collaborative enterprises/networks is also examined, with particular emphasis on supply chains, extended/virtual enterprises, and clusters as representatives of different forms and maturities of collaboration. The interplay of value transactions with competencies and capabilities is examined and discussed in detail. Finally, a model of value transactions is presented, together with a table comparing the characteristics of different types of collaborative enterprises/networks. It is proposed that this model provides a platform for further research to develop an in-depth understanding of how value may be created and managed in collaborative enterprises/networks.
Abstract:
Purpose – The purpose of this paper is to investigate how research and development (R&D) collaboration takes place for complex new products in the automotive sector. The research aims to give guidelines for increasing the effectiveness of such collaborations. Design/methodology/approach – The methodology used to investigate this issue was grounded theory. The empirical data were collected through a mixture of interviews and questionnaires. The resulting inductively derived conceptual models were subsequently validated in industrial workshops. Findings – The findings show that frontloading of the collaborative members was a major issue in managing successful R&D collaborations. Research limitations/implications – The limitation of this research is that it is based only on the German automotive industry. Practical implications – Practical implications have come out of this research: models and guidelines are given to help make a success of collaborative projects, along with their potential impacts on time, cost, and quality metrics. Originality/value – Frontloading is not often studied in a collaborative setting; it is normally studied within just one organisation. This study has novel value because it involved a number of different members throughout the supplier network.