Abstract:
Cost-sharing, which involves a government-farmer partnership in the funding of agricultural extension services, is one of the reforms aimed at achieving sustainable funding for extension systems. This study examined the perceptions of farmers and extension professionals on this reform agenda in Nigeria. The study was carried out in the six geopolitical zones of Nigeria. A multi-stage random sampling technique was applied in the selection of respondents. A sample of 268 farmers and 272 Agricultural Development Programme (ADP) extension professionals participated in the study. Both descriptive and inferential statistics were used in analysing the data generated from this research. The results show that the majority of farmers (80.6%) and extension professionals (85.7%) had favourable perceptions of cost-sharing. Furthermore, the overall difference in their perceptions was not significant (t = 0.03). The study concludes that the strong favourable perception held by the respondents points towards acceptance of the reform. It therefore recommends that government, extension administrators and policymakers design and formulate effective strategies and regulations for the introduction and use of cost-sharing as an alternative approach to financing agricultural technology transfer in Nigeria.
Abstract:
Purpose: To investigate the relationship between research data management (RDM) and data sharing in the formulation of RDM policies and development of practices in higher education institutions (HEIs). Design/methodology/approach: Two strands of work were undertaken sequentially: firstly, content analysis of 37 RDM policies from UK HEIs; secondly, two detailed case studies of institutions with different approaches to RDM based on semi-structured interviews with staff involved in the development of RDM policy and services. The data are interpreted using insights from Actor Network Theory. Findings: RDM policy formation and service development has created a complex set of networks within and beyond institutions involving different professional groups with widely varying priorities shaping activities. Data sharing is considered an important activity in the policies and services of HEIs studied, but its prominence can in most cases be attributed to the positions adopted by large research funders. Research limitations/implications: The case studies, as research based on qualitative data, cannot be assumed to be universally applicable but do illustrate a variety of issues and challenges experienced more generally, particularly in the UK. Practical implications: The research may help to inform development of policy and practice in RDM in HEIs and funder organisations. Originality/value: This paper makes an early contribution to the RDM literature on the specific topic of the relationship between RDM policy and services, and openness – a topic which to date has received limited attention.
Abstract:
HydroShare is an online, collaborative system being developed for open sharing of hydrologic data and models. The goal of HydroShare is to enable scientists to easily discover and access hydrologic data and models, retrieve them to their desktop or perform analyses in a distributed computing environment that may include grid, cloud or high performance computing model instances as necessary. Scientists may also publish outcomes (data, results or models) into HydroShare, using the system as a collaboration platform for sharing data, models and analyses. HydroShare is expanding the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated, creating new capability to share models and model components, and taking advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. One of the fundamental concepts in HydroShare is that of a Resource. All content is represented using a Resource Data Model that separates system and science metadata and has elements common to all resources as well as elements specific to the types of resources HydroShare will support. These will include different data types used in the hydrology community and models and workflows that require metadata on execution functionality. The HydroShare web interface and social media functions are being developed using the Drupal content management system. A geospatial visualization and analysis component enables searching, visualizing, and analyzing geographic datasets. The integrated Rule-Oriented Data System (iRODS) is being used to manage federated data content and perform rule-based background actions on data and model resources, including parsing to generate metadata catalog information and the execution of models and workflows. 
This presentation will introduce the HydroShare functionality developed to date, describe key elements of the Resource Data Model and outline the roadmap for future development.
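The separation of system and science metadata in the Resource Data Model described above can be sketched roughly as follows. This is an illustrative sketch only: the class and field names are assumptions for exposition, not HydroShare's actual data model or API.

```python
from dataclasses import dataclass, field

@dataclass
class SystemMetadata:
    """Elements managed by the system, common to all resources."""
    resource_id: str
    owner: str
    created: str  # ISO 8601 timestamp

@dataclass
class ScienceMetadata:
    """Descriptive elements supplied by the scientist."""
    title: str
    keywords: list = field(default_factory=list)

@dataclass
class Resource:
    """A resource pairs both metadata kinds with a type-specific label."""
    system: SystemMetadata
    science: ScienceMetadata
    resource_type: str = "GenericResource"  # e.g. time series, model, workflow

r = Resource(
    system=SystemMetadata("abc123", "alice", "2014-01-01T00:00:00Z"),
    science=ScienceMetadata("Stream gauge data", ["hydrology", "discharge"]),
)
```

Type-specific resources (time series, models, workflows) would extend the common elements with their own fields, which is the pattern the abstract attributes to the Resource Data Model.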
Abstract:
"Going viral" is seen by marketers as the new Grail for reaching large online communities. In this viral context, videos play a special role, given their strong capacity to spread exponentially across the internet through social sharing. Every year, new records are broken through this type of virality. In March 2012, the video "Kony 2012", concerning united action against the eponymous African militia leader, reached more than 34 million views on its first day of release. In December 2012, the music video for "Gangnam Style" became the first YouTube video to reach more than one billion views, totalling more than 1.4 billion views as of March 2013. Such illustrations clearly show the new scale that the internet has given to the word-of-mouth phenomenon. Marketers have understood the fantastic potential of viral videos and have tried to harness the phenomenon in order to reproduce it for commercial purposes. This research offers academics and marketing professionals an analysis of the determinants of online commercial video sharing. More specifically, the dissertation focuses on the role of emotions in sharing, identifying which emotions lead to the sharing of commercial videos online, and how. The research was carried out using two scientific methods: a survey and a text analysis attributing emotions to comments on the most-shared YouTube videos. The research confirms, with new methods, hypotheses previously tested and validated by academics. It shows that positivity and strength of emotions are greater determinants of sharing than negativity and weakness (Lindgreen and Vanhamme, 2005; Dobele et al., 2007). The dissertation also argues that video content, as well as context, are significant determinants of video sharing (Laskey et al., 1989; Taylor, 1999).
Beyond validating existing theories, the research brings new concepts into the discussion, in particular the role of the strength/weakness dimension of emotions in analysing the viral phenomenon, and the importance of a clear "call to action" included in the video to increase its sharing. These new concepts enrich the literature on this rapidly evolving topic and pave the way for future research.
Abstract:
Content marketing refers to a marketing format that involves the creation and sharing of media and published content in order to acquire customers. It is focused not on selling, but on communicating with customers and prospects. In today's world, there is a trend of brands becoming publishers in order to keep up with their competition and, more importantly, to keep their base of fans and followers. Content marketing is leading companies to engage consumers by publishing engaging, value-filled content. This study aims to investigate whether there is a link between brand engagement and Facebook content marketing practices in the e-commerce industry in Brazil. Based on the literature review, this study defines brand engagement on Facebook as the number of "likes", "comments" and "shares" that a company receives from its fans. These actions reflect the popularity of a brand post and lead to engagement. The author defines a scale in which levels of content marketing practices are developed in order to analyze the Facebook brand posts of an e-commerce company in Brazil. The findings reveal that the most important criterion for the company is the one regarding the picture of the post, which examines whether the photo content is appealing to the audience. Moreover, it was observed that the higher the level of these criteria in a post, the greater the number of likes, comments and shares the post receives. The time when a post is published does not play a significant role in determining customer engagement, and the most important factor within a publication is to reach the maximum level on the Content Marketing Scale.
Abstract:
With the current proliferation of sensor-equipped mobile devices such as smartphones and tablets, location-aware services are expanding beyond the mere efficiency and work-related needs of users, evolving to incorporate fun, culture and the social life of users. Today, people on the move have ever more connectivity and are expected to be able to communicate with their usual and familiar social networks. That means communication not only with their peers and colleagues, friends and family, but also with unknown people who might share their interests or curiosities, or happen to use the same social network. Through social networks, location-aware blogging and cultural mobile applications, relevant information is now available at specific geographical locations and open to feedback and conversations among friends as well as strangers. In fact, smartphone technologies now allow users to post and retrieve content while on the move, often relating to specific physical landmarks or locations, engaging and being engaged in conversations with strangers as much as with their own social network. The use of such technologies and applications while on the move can often lead people to serendipitous discoveries and interactions. Throughout our thesis we engage in a twofold investigation: how can we foster and support serendipitous discoveries, and what are the best interfaces for doing so? Reading and writing content while on the move is a cognitively intensive task. While the map serves the function of orienting the user, it also absorbs most of the user's concentration. In order to address this kind of cognitive overload issue, with Breadcrumbs we propose a 360-degree interface that enables users to find content around them by scanning the surrounding space with the mobile device.
Using a loose metaphor of a periscope and harnessing the power of the smartphone's sensors, we designed an interactive interface capable of detecting content around the users and displaying it in the form of two-dimensional bubbles whose diameter depends on their distance from the user. Users navigate the space in relation to the content they are curious about, rather than in relation to the traditional geographical map. Through this model we envisage alleviating some of the cognitive overload generated by having to continuously reconcile a two-dimensional map with the real three-dimensional space surrounding the user, while also using the content as a navigational filter. Furthermore, this alternative means of navigating space might bring serendipitous discoveries about places that users were not aware of or intending to reach. We conclude our thesis with the evaluation of the Breadcrumbs application and the comparison of the 360-degree interface with a traditional two-dimensional map displayed on the device screen. Results from the evaluation are compiled into findings and insights for future use in designing and developing context-aware mobile applications.
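The periscope-style scan can be sketched in a few lines: compute the bearing from the user to each piece of content, show only content inside the device's current field of view, and size each bubble by proximity. The bearing formula is standard great-circle math; the function names, the field-of-view window, and the pixel ranges are illustrative assumptions, not the thesis's actual implementation.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def in_view(content_bearing, device_heading, fov_deg=60):
    """True if the content falls inside the device's current field of view."""
    # Smallest signed angular difference, handling the 0/360 wrap-around.
    diff = abs((content_bearing - device_heading + 180) % 360 - 180)
    return diff <= fov_deg / 2

def bubble_diameter(distance_m, max_px=120, min_px=20, range_m=1000):
    """Nearer content gets a larger bubble; clamped to a sensible pixel range."""
    scale = max(0.0, 1.0 - distance_m / range_m)
    return min_px + (max_px - min_px) * scale
```

With a device heading of 350° and a 60° field of view, content bearing 10° is visible while content bearing 90° is not, which is the filtering behaviour the interface relies on as the user sweeps the phone around.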
Abstract:
Despite the abundant availability of protocols and applications for peer-to-peer file sharing, several drawbacks are still present in the field. Among the most notable is the lack of a simple and interoperable way to share information among independent peer-to-peer networks. Another drawback is that shared content can be accessed only by a limited number of compatible applications, making it inaccessible to other applications and systems. In this work we present a new approach for peer-to-peer data indexing, focused on the organization and retrieval of the metadata that describes the shared content. This approach results in a common and interoperable infrastructure, which provides transparent access to data shared on multiple data sharing networks via a simple API. The proposed approach is evaluated using a case study, implemented as a cross-platform extension to the Mozilla Firefox browser, and demonstrates the advantages of such interoperability over conventional distributed data access strategies.
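The kind of simple, network-agnostic metadata API the abstract describes might look roughly like this. The class and method names are assumptions for illustration, not the paper's actual interface.

```python
class MetadataIndex:
    """Toy index over metadata describing content shared on multiple networks."""

    def __init__(self):
        self._entries = []  # (network, content_id, metadata dict)

    def publish(self, network, content_id, metadata):
        """Register metadata describing content shared on some network."""
        self._entries.append((network, content_id, dict(metadata)))

    def search(self, **criteria):
        """Return (network, content_id) pairs whose metadata matches all criteria."""
        return [
            (net, cid)
            for net, cid, meta in self._entries
            if all(meta.get(k) == v for k, v in criteria.items())
        ]

idx = MetadataIndex()
idx.publish("gnutella", "abc", {"type": "audio", "artist": "X"})
idx.publish("bittorrent", "def", {"type": "audio", "artist": "Y"})
print(idx.search(type="audio", artist="X"))  # [('gnutella', 'abc')]
```

The point of the design is that the caller queries one index and receives hits from any underlying network, which is the interoperability the abstract contrasts with per-application, per-network access.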
Abstract:
Purpose: The purpose of this paper is to identify factors that facilitate tacit knowledge sharing in unstructured work environments, such as those found in automated production lines. Design/methodology/approach: The study is based on a qualitative approach, and it draws data from a four-month field study at a blown-molded glass factory. Data collection techniques included interviews, informal conversations and on-site observations, and data were interpreted using content analysis. Findings: The results indicated that sharing of tacit knowledge is facilitated by an engaging environment. An engaging environment is supported by shared language and knowledge, which are developed through intense communication and a strong sense of collegiality and a social climate that is dominated by openness and trust. Other factors that contribute to the creation of an engaging environment include managerial efforts to provide appropriate work conditions and to communicate company goals, and HRM practices such as the provision of formal training, on-the-job training and incentives. Practical implications: This paper clarifies the scope of managerial actions that impact knowledge creation and sharing among blue-collar workers. Originality/value: Despite the acknowledgement of the importance of blue-collar workers' knowledge, both the knowledge management and operations management literatures have devoted limited attention to it. Studies related to knowledge management in unstructured working environments are also not abundant. © Emerald Group Publishing Limited.
Abstract:
In November 2010, nearly 110,000 people in the United States were waiting for organs for transplantation. Despite the fact that the organ donor registration rate has doubled in the last year, Texas has the lowest registration rate in the nation. Due to the need for improved registration rates in Texas, this practice-based culminating experience consisted of writing an application for federal funding for the central Texas organ procurement organization, Texas Organ Sharing Alliance. The culminating experience had two levels of significance for public health: (1) to engage in an activity to promote organ donation registration, and (2) to provide professional experience in grant writing. The process began with a literature review, which aimed to identify successful intervention activities for motivating organ donation registration that could be used in the intervention design for the grant application. Conclusions derived from the literature review included: (1) the need to specifically encourage family discussions; (2) religious and community leaders can be leveraged to facilitate organ donation conversations in families; (3) communication content must be culturally sensitive; and (4) ethnic disparities in transplantation must be acknowledged and discussed. After the literature review, the experience followed a five-step process of developing the grant application. The steps included securing permission to proceed, assembling a project team, creating a project plan and timeline, writing each element of the grant application including the design of proposed intervention activities, and completing the federal grant application. After the grant application was written, an evaluation of the grant writing process was conducted and opportunities for improvement were identified.
The first opportunity was the need for better timeline management to allow for review of the application by an independent party, iterative development of the budget proposal, and development of collaborative partnerships. Another improvement opportunity was the management of conflict regarding the design of the intervention, which stemmed from marketing versus evidence-based approaches. The most important improvement opportunity was the need to develop a more exhaustive evaluation plan. Eight supplementary files are attached as appendices: Feasibility Discussion in Appendix 1, Grant Guidance and Workshop Notes in Appendix 2, Presentation to Texas Organ Sharing Alliance in Appendix 3, Team Recruitment Presentation in Appendix 5, Grant Project Narrative in Appendix 7, Federal Application Form in Appendix 8, and Budget Workbook with Budget Narrative in Appendix 9.
Abstract:
Data-related properties of the activities involved in a service composition can be used to facilitate several design-time and run-time adaptation tasks, such as service evolution, distributed enactment, and instance-level adaptation. A number of these properties can be expressed using a notion of sharing. We present an approach for automated inference of data properties based on sharing analysis, which is able to handle service compositions with complex control structures, involving loops and sub-workflows. The properties inferred can include data dependencies, information content, domain-defined attributes, privacy or confidentiality levels, among others. The analysis produces characterizations of the data and the activities in the composition in terms of minimal and maximal sharing, which can then be used to verify compliance of potential adaptation actions, or as supporting information in their generation. This sharing analysis approach can be used both at design time and at run time. In the latter case, the results of analysis can be refined using the composition traces (execution logs) at the point of execution, in order to support run-time adaptation.
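To give a flavour of sharing-based inference, the sketch below propagates "may-share" sets through a straight-line sequence of activities. This is a toy illustration under assumed names (`propagate`, the assignment-list encoding): the paper's analysis additionally handles loops, sub-workflows, and minimal (must-share) characterizations, none of which this sketch covers.

```python
def propagate(assignments, initial):
    """Propagate may-share sets through straight-line activities.

    assignments: list of (target, [source_vars]) pairs, in execution order.
    initial: dict mapping variables to their known data-source sets.
    Returns a dict mapping every variable to the set of sources it may share with.
    """
    sharing = {v: set(s) for v, s in initial.items()}
    for target, sources in assignments:
        # The target may share with anything any of its inputs may share with.
        sharing[target] = set().union(*(sharing.get(s, set()) for s in sources))
    return sharing

initial = {"order": {"customer_db"}, "rate": {"pricing_svc"}}
acts = [("invoice", ["order", "rate"]), ("report", ["invoice"])]
result = propagate(acts, initial)
print(sorted(result["report"]))  # ['customer_db', 'pricing_svc']
```

Even this simplified form shows the use named in the abstract: if a potential adaptation would send `report` to a service that must not see `customer_db` data, the inferred sharing set flags the violation before the action is taken.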
Abstract:
The application of pervasive computing is extending from field-specific to everyday use. The Internet of Things (IoT) is the shiniest example of its application and of its intrinsic complexity compared with classical application development. The main characteristic that differentiates pervasive from other forms of computing lies in the use of contextual information. Some classical applications do not use any contextual information whatsoever. Others, on the other hand, use only part of the contextual information, which is integrated in an ad hoc fashion using an application-specific implementation. This information is handled in a one-off manner because of the difficulty of sharing context across applications. As a matter of fact, the application type determines what the contextual information is. For instance, for an imaging editor, the image is the information and its meta-data, like the time of the shot or camera settings, are the context, whereas, for a file-system application, the image, including its camera settings, is the information and the meta-data external to the file, like the modification date or the last accessed timestamps, constitute the context. This means that contextual information is hard to share. A communication middleware that supports context decidedly eases application development in pervasive computing. However, the use of context should not be mandatory; otherwise, the communication middleware would be reduced to a context middleware and no longer be compatible with non-context-aware applications.
SilboPS, our implementation of content-based publish/subscribe inspired by SIENA [11, 9], solves this problem by adding two new elements to the paradigm: the context and the context function. Context represents the actual contextual information specific to the message to be sent or that needs to be notified to the subscriber, whereas the context function is evaluated using the publisher’s context and the subscriber’s context to decide whether the current message and context are useful for the subscriber. In this manner, context logic management is decoupled from context management, increasing the flexibility of communication and usage across different applications. Since the default context is empty, context-aware and classical applications can use the same SilboPS, resolving the syntactic mismatch that there is between the two categories. In any case, the possible semantic mismatch is still present because it depends on how each application interprets the data, and it cannot be resolved by an agnostic third party. The IoT environment introduces not only context but scaling challenges too. The number of sensors, the volume of the data that they produce and the number of applications that could be interested in harvesting such data are growing all the time. Today’s response to the above need is cloud computing. However, cloud computing applications need to be able to scale elastically [22]. Unfortunately there is no slicing, as distributed system primitives that support internal state partitioning [33] and hot swapping and current cloud systems like OpenStack or OpenNebula do not provide elastic monitoring out of the box. This means there is a two-sided problem: 1) how to scale an application elastically and 2) how to monitor the application and know when it should scale in or out. E-SilboPS is the elastic version of SilboPS. 
It is the solution for the monitoring problem thanks to its content-based publish/subscribe nature and, unlike other solutions [5], it scales efficiently so as to meet workload demand without overprovisioning or underprovisioning. Additionally, it is based on a newly designed algorithm that shows how to add elasticity in an application with different state constraints: stateless, isolated stateful with external coordination and shared stateful with general coordination. Its evaluation shows that it is able to achieve remarkable speedups where the network layer is the main limiting factor: the calculated efficiency (see Figure 5.8) shows how each configuration performs with respect to adjacent configurations. This provides insight into the actual trending of the whole system in order to predict if the next configuration would offset its cost against the resulting gain in notification throughput. Particular attention has been paid to the evaluation of same-cost deployments in order to find out which one is the best for the given workload demand. Finally, the overhead introduced by the different configurations has been estimated to identify the primary limiting factor for throughput. This helps to determine the intrinsic sequential part and base overhead [26] of an optimal versus a suboptimal deployment. Depending on the type of workload, this can be as low as 10% in a local optimum or as high as 60% when an overprovisioned configuration is deployed for a given workload demand. This Karp-Flatt metric estimation is important for system management because it indicates the direction (scale in or out) in which the deployment has to be changed in order to improve its performance instead of simply using a scale-out policy.
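The context-and-context-function extension of content-based publish/subscribe can be sketched as follows. This is a toy illustration under assumed names (`Broker`, `subscribe`, `publish`), not SilboPS's actual API; the key behaviours it reproduces are that content matching happens first and that an absent context function preserves classical, context-free semantics.

```python
class Broker:
    """Minimal content-based broker with optional context matching."""

    def __init__(self):
        self._subs = []  # (content filter, subscriber context, context fn, callback)

    def subscribe(self, content_filter, context=None, context_fn=None, callback=None):
        self._subs.append((content_filter, context or {}, context_fn, callback))

    def publish(self, message, context=None):
        context = context or {}
        for flt, sub_ctx, ctx_fn, cb in self._subs:
            if not flt(message):
                continue  # content-based matching comes first
            # No context function: classical behaviour, context is ignored.
            if ctx_fn is not None and not ctx_fn(context, sub_ctx):
                continue
            cb(message)

received = []
b = Broker()
b.subscribe(
    content_filter=lambda m: m.get("type") == "temperature",
    context={"room": "lab"},
    context_fn=lambda pub_ctx, sub_ctx: pub_ctx.get("room") == sub_ctx.get("room"),
    callback=received.append,
)
b.publish({"type": "temperature", "value": 21}, context={"room": "lab"})
b.publish({"type": "temperature", "value": 25}, context={"room": "office"})
print(received)  # [{'type': 'temperature', 'value': 21}]
```

Because the context function is evaluated against both the publisher's and the subscriber's context, the matching logic stays decoupled from how each application manages its context, which is the flexibility the abstract highlights.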
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
We extend our earlier work on ways in which defining sets of combinatorial designs can be used to create secret sharing schemes. We give an algorithm for classifying defining sets of designs according to their security properties and summarise the results of this algorithm for many small designs. Finally, we discuss briefly how defining sets can be applied to variations of the basic secret sharing scheme.
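For orientation, the "basic secret sharing scheme" can be illustrated with a minimal Shamir-style (t, n) threshold scheme over a prime field: any t shares reconstruct the secret, fewer reveal nothing. This is standard textbook material, not the paper's combinatorial-design construction; the field size and fixed coefficient below are chosen only to keep the example deterministic.

```python
P = 2087  # small prime field modulus, for illustration only

def share(secret, t, n, coeffs):
    """Split `secret` into n shares; any t of them reconstruct it.

    coeffs: the t-1 non-constant polynomial coefficients (random in practice;
    fixed here so the example is reproducible).
    """
    poly = [secret] + list(coeffs)  # degree t-1 polynomial, f(0) = secret
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(poly)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of f at x = 0 from any t shares."""
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat's little theorem).
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

shares = share(1234, t=2, n=3, coeffs=[166])
print(reconstruct(shares[:2]))  # 1234
```

Any two of the three shares recover 1234; a single share is consistent with every possible secret, which is the security property the paper's classification of defining sets is concerned with in its own setting.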