969 results for Collaborative environment
Abstract:
Developmental Coordination Disorder (DCD), a chronic and usually permanent condition found in children, is characterized by motor impairment that interferes with a child's activities of daily living and academic achievement. One of the most popular tests for the quantitative diagnosis of DCD is the Movement Assessment Battery for Children (MABC). Based on the Battery's standardized scores, it is possible to identify children with typical development, children at risk of developing DCD, and children with DCD. This article describes a computational system we developed to assist with the analysis of results obtained in the MABC test. The tool was developed for the web environment, and its database integrates MABC data so that researchers around the world can share data and develop collaborative work in the DCD field. To support the analysis process, the system provides services for filtering data into more specific subsets and for presenting the results in textual, tabular, and graphical formats, allowing easier and more comprehensive evaluation of the results.
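A minimal sketch of the kind of score-band filtering such a tool might expose is shown below. The field names, cut-offs, and function signatures are illustrative assumptions, not the actual system's API (MABC conventions commonly flag scores at or below the 5th percentile and treat the 6th to 15th percentile as "at risk").

# Hypothetical sketch of score-band filtering over MABC records;
# field names and percentile cut-offs are illustrative assumptions.
from typing import Iterable

def classify(percentile: float) -> str:
    """Map an MABC percentile to a commonly cited classification band."""
    if percentile <= 5:
        return "DCD"
    if percentile <= 15:
        return "at risk"
    return "typical"

def filter_records(records: Iterable[dict], group: str) -> list[dict]:
    """Return only the records whose MABC percentile falls in `group`."""
    return [r for r in records if classify(r["percentile"]) == group]

sample = [{"child_id": 1, "percentile": 4.0}, {"child_id": 2, "percentile": 40.0}]
print(filter_records(sample, "DCD"))  # -> [{'child_id': 1, 'percentile': 4.0}]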
Abstract:
Research literature is replete with the importance of collaboration in schools, the lack of its implementation, the centrality of the role of the principal, and the existence of a gap between knowledge and practice--or a "Knowing-Doing Gap." In other words, there is a set of knowledge that principals must have in order to create a collaborative workplace environment for teachers. This study sought to describe what high school principals know about creating such a culture of collaboration. The researcher combed journal articles, studies, and professional literature to identify what principals must know in order to create a culture of collaboration. The result was ten elements of principal knowledge: Staff involvement in important decisions, Charismatic leadership not being necessary for success, Effective elements of teacher teams, Administrators' modeling of professional learning, The allocation of resources, Staff meetings focused on student learning, Elements of continuous improvement, and Principles of Adult Learning, Student Learning, and Change. From these ten elements, the researcher developed a web-based survey intended to measure nine of them (Charismatic leadership was excluded). Principals of accredited high schools in the state of Nebraska were invited to participate in this survey, as high schools are well known for the isolation that teachers experience, particularly as a result of departmentalization. The results indicate that principals have knowledge of eight of the nine measured elements; the one they lacked an understanding of was Principles of Student Learning. Given these findings of what principals do and do not know, the researcher recommends that professional organizations, intermediate service agencies, and district-level support staff engage in systematic and systemic initiatives to increase principals' knowledge of the element they lack. Further, given that eight of the nine elements are understood by principals, it would be wise to examine reasons for the implementation gap (Knowing-Doing Gap) and how to overcome it.
Abstract:
There are several variants of the widely used Fuzzy C-Means (FCM) algorithm that support clustering data distributed across different sites. Those methods have been studied under different names, such as collaborative and parallel fuzzy clustering. In this study, we augment two FCM-based algorithms used to cluster distributed data by developing constructive ways of determining their essential parameters (including the number of clusters) and by forming a set of systematically structured guidelines, such as how to select a specific algorithm depending on the nature of the data environment and the assumptions made about the number of clusters. A thorough complexity analysis, covering space, time, and communication aspects, is reported. A series of detailed numeric experiments illustrates the main ideas discussed in the study.
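For orientation, here is a minimal single-site FCM iteration; it is the baseline that collaborative and parallel variants build on by additionally exchanging prototypes or partition information between sites. The initialization and stopping criterion are generic assumptions, not the paper's augmented algorithms.

# Minimal single-site Fuzzy C-Means sketch; distributed variants extend
# this loop by exchanging prototypes or partition information (not shown).
import numpy as np

def fcm(X, c, m=2.0, iters=100, tol=1e-5, seed=0):
    """Cluster the rows of X into c fuzzy clusters with fuzzifier m."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                                # columns sum to 1
    for _ in range(iters):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)  # prototype update
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        W = d ** (-2.0 / (m - 1))
        U_new = W / W.sum(axis=0)                     # membership update
        if np.abs(U_new - U).max() < tol:
            return V, U_new
        U = U_new
    return V, U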
Abstract:
Motivated by the need to understand the underlying forces that trigger network evolution, we develop a multilevel, theoretically grounded, and empirically testable model to examine the relationship between changes in the external environment and network change. We refer to network change as the dissolution or replacement of an interorganizational tie, adding the case of the formation of new ties with new or preexisting partners. Previous research has paid scant attention to the organizational consequences of quantum change enveloping entire industries, favoring instead an emphasis on continuous change. To highlight radical change we introduce the concept of the environmental jolt. The September 11 terrorist attacks provide us with a natural experiment to test our hypotheses on the antecedents and consequences of network change. Since network change can be explained at multiple levels, we incorporate firm-level variables as moderators. The empirical setting is the global airline industry, which can be regarded as a constantly changing network of alliances. The study reveals that firms react to environmental jolts by forming homophilous ties and transitive triads, in contrast to non-jolt periods. Moreover, we find that, all else being equal, firms that adopt a brokerage posture have positive returns. However, in the face of an environmental jolt, brokerage relates negatively to firm performance, and this negative relationship is more pronounced for larger firms. Our findings suggest that jolts are an important predictor of network change, that they significantly affect operational returns, and that they should thus be incorporated into studies of network dynamics.
Abstract:
This dissertation explores the viability of invitational rhetoric as a mode of advocacy for sustainable energy use in the residential built environment. The theoretical foundations for this study join ecofeminist concepts and commitments with the conditions and resources of invitational rhetoric, developing in particular the rhetorical potency of the concepts of re-sourcement and enfoldment. The methodological approach is autoethnography using narrative reflection and journaling, both adapted to and developed within the autoethnographic project. Through narrative reflection, the author explores her lived experiences in advocating for energy-responsible residential construction in the Keweenaw Peninsula of Michigan. The analysis reveals the opportunities for cooperative, collaborative advocacy and the struggle against traditional conventions of persuasive advocacy, particularly the centrality of the rhetor. The author also conducted two field trips to India, primarily to the state of Kerala. Drawing on autoethnographic journaling, the analysis highlights the importance of sensory relations in lived advocacy and the resonance of everyday Indian culture with invitational principles. Based on this field research, the dissertation proposes autoethnography as a critical development in encouraging invitational rhetoric as an alternative mode of effecting change. The invitational force of autoethnography is evidenced in portraying the material advocacy of the built environment itself, specifically the sensual experience of material arrangements and ambience, as well as in revealing the corporeality of advocacy, that is, the body as the site of invitational engagement, emotional encounter, and sensory experience. This study concludes that the vulnerability of self in autoethnographic work and the vulnerability of rhetoric as invitation constitute the basis for transformation. The dissertation confirms the potential of an ecofeminist invitational advocacy, conveyed autoethnographically, for transforming perceptions and use of energy in smaller-scale residential environments appropriate to culture and climate, as part of the broader challenge of sustaining life on this planet.
Abstract:
To master changing performance demands, autonomous transport vehicles are deployed to make in-house material flow applications more flexible. The so-called cellular transport system consists of a multitude of small-scale transport vehicles which shall be able to form a swarm. To do so, the vehicles need to detect each other, exchange information amongst each other, and sense their environment. By providing peripherally acquired information to other transport entities, better decisions can be made in terms of navigation and collision avoidance. This paper is a contribution to the collective utilization of sensor data in a swarm of cellular transport vehicles.
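As a rough illustration of the kind of peer-to-peer state exchange such a swarm relies on, the sketch below shows vehicles broadcasting pose messages and checking a safety radius; the message fields and threshold are assumptions for illustration, not the paper's actual protocol.

# Illustrative state message and proximity check for swarm vehicles;
# fields and the safety radius are hypothetical, not the paper's protocol.
from dataclasses import dataclass
import math

@dataclass
class StateMsg:
    vehicle_id: int
    x: float          # position, metres
    y: float
    heading: float    # radians
    speed: float      # m/s

def imminent_collision(own: StateMsg, peers: list[StateMsg],
                       safety_radius: float = 1.5) -> bool:
    """Flag any peer currently inside the safety radius."""
    return any(math.hypot(p.x - own.x, p.y - own.y) < safety_radius
               for p in peers if p.vehicle_id != own.vehicle_id)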
Abstract:
Recent mathematics education reform efforts call for the instantiation of mathematics classroom environments where students have opportunities to reason and construct their understandings as part of a community of learners. Despite some successes, traditional models of instruction still dominate the educational landscape. This limited success can be attributed, in part, to an underdeveloped understanding of the roles teachers must enact to successfully organize and participate in collaborative classroom practices. To address this gap, an in-depth longitudinal case study of a collaborative high school mathematics classroom was undertaken, guided by two questions: What roles do these collaborative practices require of teacher and students? How does the community’s capacity to engage in collaborative practices develop over time? The analyses produced two conceptual models: one of the teacher’s role, along with specific instructional strategies the teacher used to organize a collaborative learning environment, and the second of the process by which the class’s capacity to participate in collaborative inquiry practices developed over time.
Abstract:
The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One of the applications that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within the network's coverage area. Particularly challenging is the problem of estimating the target's position from a received signal strength indicator (RSSI), due to the nonlinear relationship between the measured signal and the true position of the target. Many of the existing approaches suffer either from high computational complexity (e.g., particle filters) or from a lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to find a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows for a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore very suitable for implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the found solution. Under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a distributed version of the Gauss-Newton method based on consensus. Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting. While some approaches use a special kind of node (called a sink node) for data harvesting and forwarding to the outside world, there are some scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting, one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate to form a virtual antenna array that can exploit the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases so that their transmissions add constructively at the receiver. One of the inconveniences associated with collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity. This may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we find distributed algorithms that converge to the optimal centralized beamformer.
While in the first part we consider only battery depletion due to communications beamforming, we extend the model to account for more realistic scenarios by introducing an additional random energy consumption. It is shown how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem from an energy-efficiency perspective, the network's lifetime is significantly improved.
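To make the estimation step concrete, here is a minimal centralized sketch using the standard log-distance path-loss model and a Gauss-Newton refinement. The model constants (reference power, path-loss exponent) and the centralized formulation are assumptions for illustration; the thesis develops a suboptimal, consensus-based distributed version of this kind of step.

# Centralized RSSI localization sketch with Gauss-Newton refinement,
# using the standard log-distance model P = P0 - 10*PLE*log10(d).
# P0 and PLE are assumed constants, not values from the thesis.
import numpy as np

P0, PLE = -40.0, 3.0  # assumed reference power (dBm at 1 m) and path-loss exponent

def predicted_rssi(pos, anchors):
    """Predicted RSSI at each anchor for a target at `pos`."""
    d = np.linalg.norm(anchors - pos, axis=1)
    return P0 - 10.0 * PLE * np.log10(d)

def gauss_newton(rssi, anchors, x0, iters=20):
    """Refine a position estimate by Gauss-Newton on the RSSI residuals."""
    x = x0.astype(float)
    for _ in range(iters):
        diff = x - anchors                       # (N, 2)
        d2 = (diff ** 2).sum(axis=1) + 1e-12     # squared node-target distances
        J = -(10.0 * PLE / np.log(10.0)) * diff / d2[:, None]  # Jacobian
        r = rssi - predicted_rssi(x, anchors)    # measurement residuals
        x = x + np.linalg.lstsq(J, r, rcond=None)[0]           # GN step
    return x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
rssi = predicted_rssi(np.array([3.0, 4.0]), anchors)   # noiseless demo data
print(gauss_newton(rssi, anchors, x0=np.array([5.0, 5.0])))  # ~[3. 4.]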
Abstract:
Background: Cognitive skills training for minimally invasive surgery has traditionally relied upon diverse tools, such as seminars or lectures. Web technologies for e-learning have been adopted to provide ubiquitous training and to serve as structured repositories for the vast number of laparoscopic video sources available. However, these technologies fail to offer features such as formative and summative evaluation, guided learning, or collaborative interaction between users. Methodology: The "TELMA" environment is presented as a new technology-enhanced learning platform that enhances the user's experience through a four-pillared architecture: (1) an authoring tool for the creation of didactic content; (2) a learning content and knowledge management system that incorporates a modular and scalable system to capture, catalogue, search, and retrieve multimedia content; (3) an evaluation module that provides learning feedback to users; and (4) a professional network for collaborative learning between users. Face validation of the environment and the authoring tool is presented. Results: Face validation of TELMA reveals the positive perception of surgeons regarding its implementation and their willingness to use it as a cognitive skills training tool. Preliminary validation data also reflect the importance of providing an easy-to-use, functional authoring tool to create didactic content. Conclusion: The TELMA environment is currently installed and used at the Jesús Usón Minimally Invasive Surgery Centre and several other Spanish hospitals. Face validation results confirm the acceptance and usefulness of this new minimally invasive surgery training environment.
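A rough structural sketch of the four-pillared architecture described above follows; the class and method names are illustrative assumptions, not TELMA's actual interfaces.

# Hypothetical composition of the four pillars; names are illustrative.
class AuthoringTool:
    def create_content(self, media, metadata): ...

class ContentManager:
    def capture(self, item): ...
    def search(self, query): ...

class EvaluationModule:
    def feedback(self, user, attempt): ...

class ProfessionalNetwork:
    def share(self, user, content): ...

class LearningPlatform:
    """One platform object composing the four modules."""
    def __init__(self):
        self.authoring = AuthoringTool()
        self.contents = ContentManager()
        self.evaluation = EvaluationModule()
        self.network = ProfessionalNetwork()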
Abstract:
Enhanced learning environments are achieving great success in the field of cognitive skills training for minimally invasive surgery (MIS) because they provide multiple benefits, avoiding time, spatial, and cost constraints. TELMA [1,2] is a new technology-enhanced learning platform that promotes collaborative and ubiquitous training of surgeons. This platform is based on four main modules: an authoring tool, a learning content and knowledge management system, an evaluation module, and a professional network. TELMA has been designed and developed with a focus on the user; therefore, a user validation must be carried out as the final stage of development. For this purpose, e-MIS validity [3] has been defined. This validation covers usability, contents, and functionality validities for both the development and production stages of any e-learning web platform. Using e-MIS validity, the e-learning platform is fully validated, since the method includes both subjective and objective metrics. The purpose of this study is to specify and apply a set of objective and subjective metrics using e-MIS validity to test the usability, contents, and functionality of the TELMA environment within the development stage.
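As a hypothetical sketch of how subjective questionnaire ratings and objective task measurements might be combined into a per-dimension score (usability, contents, functionality), consider the following; the blending weights are an assumption for illustration, not part of e-MIS validity as defined in [3].

# Hypothetical aggregation of subjective and objective metrics into one
# dimension score on a 0-1 scale; the 50/50 weighting is an assumption.
def dimension_score(subjective: list[float], objective: list[float],
                    w_subj: float = 0.5) -> float:
    """Average each metric family, then blend the two averages."""
    s = sum(subjective) / len(subjective)
    o = sum(objective) / len(objective)
    return w_subj * s + (1 - w_subj) * o

usability = dimension_score(subjective=[0.8, 0.9], objective=[0.75])
print(round(usability, 3))  # -> 0.8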
Abstract:
Aims: To determine if general practitioners' (GPs) experience of education on alcohol, support in their working environment for intervening with alcohol problems, and their attitudes have an impact on the number of patients they manage with alcohol problems. Methods: 1300 GPs from nine countries were surveyed with a postal questionnaire as part of a World Health Organization (WHO) collaborative study. Results: GPs who received more education on alcohol (OR = 1.5; 95% CI, 1.3-1.7), who perceived that they were working in a supportive environment (OR = 1.6; 95% CI, 1.4-1.9), who expressed higher role security in working with alcohol problems (OR = 2.0; 95% CI, 1.5-2.5) and who reported greater therapeutic commitment to working with alcohol problems (OR = 1.4; 95% CI, 1.1-1.7) were more likely to manage patients with alcohol-related harm. Conclusion: Both education and support in the working environment need to be provided to enhance the involvement of GPs in the management of alcohol problems.
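For readers less familiar with these statistics, the reported odds ratios can be read as follows; the logistic form is the standard assumption behind ORs with confidence intervals, though the paper's exact model specification is not given in the abstract.

% Conventional reading of the reported odds ratios, assuming the
% standard logistic model logit(p) = \beta_0 + \beta_1 x:
\[
  \mathrm{OR} = e^{\beta_1} = \frac{p_1/(1-p_1)}{p_0/(1-p_0)},
\]
% where p_1 and p_0 are the probabilities of managing patients with
% alcohol-related harm with and without the factor (e.g., more
% education). OR = 1.5 thus means 1.5 times higher odds; a 95% CI
% excluding 1 (here 1.3-1.7) indicates significance at the 5% level.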
Abstract:
Achieving more sustainable land and water use depends on high-quality information and its improved use; in other words, better linkages are needed between science and management. Since many stakeholders with different relationships to the natural resources are inevitably involved, we suggest that collaborative learning environments and improved information management are prerequisites for integrating science and management. Case studies dealing with resource management issues are presented that illustrate the creation of collaborative learning environments through systems analyses with communities and the integration of scientific and experiential knowledge of components of the system. This new knowledge needs to be captured and made accessible through innovative information management systems designed collaboratively with users, in forms which fit the users' 'mental models' of how their systems work. A model for linking science and resource management more effectively is suggested. This model entails systems thinking in a collaborative learning environment, together with processes that help views and value systems converge and that make scientists and different kinds of managers aware of their interdependence. Adaptive management provides a mechanism for applying and refining scientists' and managers' knowledge. Copyright (C) 2003 John Wiley & Sons, Ltd.