901 results for Data dissemination and sharing


Relevance:

100.00%

Publisher:

Abstract:

This paper aims to categorize Brazilian Internet users according to the diversity of their online activities and to assess the propensity of these groups of Internet users to use electronic government (e-gov) services. Amartya Sen's Capability Approach was adopted as the theoretical framework for its consideration of people's freedom to decide how to use available resources, and of their competencies for these decisions, leading to the use of e-gov services. Multivariate statistical techniques were used to analyse data from the 2007, 2009 and 2011 editions of the ICT Household Survey. The results showed that Internet users belonging to the advanced and intermediate use groups were more likely to use e-gov services than those in the sporadic use group. Moreover, the results also demonstrated that the intermediate use group showed a higher tendency to use e-gov services than the advanced use group, a tendency possibly related to the extensive use of interactive and collaborative leisure and entertainment activities by this type of user. The findings of this research may be useful in guiding public policies for the dissemination and provision of electronic government services in Brazil.

Relevance:

100.00%

Publisher:

Abstract:

How have cooperative airspace arrangements contributed to cooperation and discord in the Euro-Atlantic region? This study analyzes the role of three sets of airspace arrangements developed by Euro-Atlantic states since the end of the Cold War—(1) cooperative aerial surveillance of military activity, (2) exchange of air situational data, and (3) joint engagement of theater air and missile threats—in political-military relations among neighbors and within the region. These arrangements provide insights into the integration of Central and Eastern European states into Western security institutions, and the current discord that centers on the conflict in Ukraine and Russia’s place in regional security. The study highlights the role of airspace incidents as contributors to conflict escalation and identifies opportunities for transparency- and confidence-building measures to improve U.S./NATO-Russian relations. The study recommends strengthening the Open Skies Treaty in order to facilitate the resolution of conflicts and improve region-wide military transparency. It notes that political-military arrangements for engaging theater air and missile threats created by NATO and Russia over the last twenty years are currently postured in a way that divides the region and inhibits mutual security. In turn, the U.S.-led Regional Airspace Initiatives that facilitated the exchange of air situational data between NATO and then-NATO-aspirants such as Poland and the Baltic states, offer a useful precedent for improving air sovereignty and promoting information sharing to reduce the fear of war among participating states. Thus, projects like NATO’s Air Situational Data Exchange and the NATO-Russia Council Cooperative Airspace Initiative—if extended to the exchange of data about military aircraft—have the potential to buttress deterrence and contribute to conflict prevention. 
The study concludes that documenting the evolution of airspace arrangements since the end of the Cold War contributes to an understanding of the conflicting narratives put forward by Russia, the West, and the states “in-between” with respect to the reasons for the current state of regional security. The long-term project of developing a zone of stable peace in the Euro-Atlantic must begin with the difficult task of building inclusive security institutions that accommodate the concerns of all regional actors.

Relevance:

100.00%

Publisher:

Abstract:

The goal was to understand, document and model how information currently flows internally in the largest dairy organization in Finland. The organization has undergone radical changes in recent years due to economic sanctions between the European Union and Russia; its ultimate goal is therefore to continue its growth by managing its sales process more efficiently. The thesis consists of a literature review and an empirical part. The literature review covers knowledge management and process modeling theories. First, the knowledge management section discusses how data, information and knowledge are exchanged in the process; knowledge management models and processes describe how knowledge is created, exchanged and managed in an organization. Second, the process modeling section addresses the visualization of information flow by discussing modeling approaches and presenting different methods and techniques. Finally, the process documentation procedure is presented. A constructive research approach was used to identify problems and bottlenecks in the process, and possible solutions were proposed on that basis. The empirical part of the study is based on 37 interviews, the organization's internal data sources and the theoretical framework. The acquired data and information were used to document and model the sales process in question with a flowchart diagram. Results were derived from the construction of the flowchart diagram and the analysis of the documentation; answers to the research questions draw on both the empirical and theoretical parts. In total, 14 problems and two bottlenecks were identified in the process. The most important problems concern the lack of a standardized approach to information sharing, insufficient use of information technology tools and the lack of systematic documentation.
The bottlenecks are caused by the alarming number of changes made to files after their deadlines.

Relevance:

100.00%

Publisher:

Abstract:

The next generation of vehicles will be equipped with automated Accident Warning Systems (AWSs) capable of warning neighbouring vehicles about hazards that might lead to accidents. The key enabling technology for these systems is the Vehicular Ad-hoc Network (VANET), but the dynamics of such networks make the crucial timely delivery of warning messages challenging. While most previous implementations have used broadcast-based data dissemination schemes, these do not cope well as data traffic load or network density increases. This thesis addresses the problem of sending warning messages in a timely manner by employing a network coding technique. The proposed NETwork COded DissEmination (NETCODE) is a VANET-based AWS responsible for generating and sending warnings to the vehicles on the road. NETCODE offers an XOR-based data dissemination scheme that sends multiple warnings in a single transmission and therefore reduces the total number of transmissions required to deliver the same number of warnings as broadcast schemes. Hence, it reduces contention and collisions in the network, improving the delivery time of the warnings. The first part of this research (Chapters 3 and 4) asserts that in order to build a warning system, it is necessary to ascertain the system requirements, the information to be exchanged, and the protocols best suited for communication between vehicles. A study of these factors is therefore carried out, along with a review of existing proposals identifying their strengths and weaknesses. An analysis of existing broadcast-based warning schemes is then conducted, which concludes that although broadcast is the most straightforward scheme, loading can result in an effective collapse, producing unacceptably long transmission delays.
The second part of this research (Chapter 5) proposes the NETCODE design, including the main contribution of this thesis: a pair of encoding and decoding algorithms that uses an XOR-based technique to reduce transmission overheads and thus allows warnings to be delivered in time. The final part of this research (Chapters 6-8) evaluates how well the proposed scheme reduces the number of transmissions in the network as data traffic load and network density grow, and investigates its capacity to detect potential accidents. The evaluations use a custom-built simulator to model real-world scenarios such as city areas, junctions, roundabouts and motorways. The study shows that the reduction in the number of transmissions significantly reduces contention in the network, allowing vehicles to deliver warning messages more rapidly to their neighbours. It also examines the relative performance of NETCODE when handling both sudden event-driven and longer-term periodic messages in diverse scenarios under stress caused by increasing numbers of vehicles and transmissions per vehicle. This work confirms the thesis's primary contention that XOR-based network coding provides a potential foundation on which a more efficient AWS data dissemination scheme can be built.
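The XOR idea at the heart of a scheme like NETCODE can be illustrated with a minimal sketch (the message contents and function name below are hypothetical, not taken from the thesis): two warnings are combined into one coded transmission, and any vehicle that already holds one of them recovers the other with a second XOR.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two byte strings, zero-padding the shorter one."""
    n = max(len(a), len(b))
    a, b = a.ljust(n, b"\x00"), b.ljust(n, b"\x00")
    return bytes(x ^ y for x, y in zip(a, b))

# Two warnings destined for different neighbours (contents are made up):
w1 = b"HAZARD: ice on bridge"
w2 = b"HAZARD: stalled car"

coded = xor_bytes(w1, w2)  # a single transmission carrying both warnings

# A vehicle that already overheard w1 decodes the other warning:
recovered = xor_bytes(coded, w1)
assert recovered.rstrip(b"\x00") == w2
```

Decoding works because (W1 ⊕ W2) ⊕ W1 = W2; the saving comes from replacing two broadcasts with one whenever neighbours already hold complementary warnings.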

Relevance:

100.00%

Publisher:

Abstract:

Background: Understanding transcriptional regulation by genome-wide microarray studies can contribute to unravel complex relationships between genes. Attempts to standardize the annotation of microarray data include the Minimum Information About a Microarray Experiment (MIAME) recommendations, the MAGE-ML format for data interchange, and the use of controlled vocabularies or ontologies. The existing software systems for microarray data analysis implement the mentioned standards only partially and are often hard to use and extend. Integration of genomic annotation data and other sources of external knowledge using open standards is therefore a key requirement for future integrated analysis systems. Results: The EMMA 2 software has been designed to resolve shortcomings with respect to full MAGE-ML and ontology support and makes use of modern data integration techniques. We present a software system that features comprehensive data analysis functions for spotted arrays, and for the most common synthesized oligo arrays such as Agilent, Affymetrix and NimbleGen. The system is based on the full MAGE object model. Analysis functionality is based on R and Bioconductor packages and can make use of a compute cluster for distributed services. Conclusion: Our model-driven approach for automatically implementing a full MAGE object model provides high flexibility and compatibility. Data integration via SOAP-based web-services is advantageous in a distributed client-server environment as the collaborative analysis of microarray data is gaining more and more relevance in international research consortia. The adequacy of the EMMA 2 software design and implementation has been proven by its application in many distributed functional genomics projects. Its scalability makes the current architecture suited for extensions towards future transcriptomics methods based on high-throughput sequencing approaches which have much higher computational requirements than microarrays.

Relevance:

100.00%

Publisher:

Abstract:

The changes in time and location of the surface temperature of a water body have an important effect on climate activity, marine biology, sea currents, salinity and other characteristics of sea and lake water. Traditional measurement of temperature is costly and time-consuming because of its dispersion and instability. In recent years, the use of satellite technology and remote sensing for data acquisition and parameter analysis in climatology and oceanography has developed considerably. In this research we used NOAA satellite images from the AVHRR sensor to compare field surface temperature data with satellite image information. Ten satellite images were used in this project; they were calibrated with field data acquired at the exact time of the satellite pass over the area. The result was a significant relation between surface temperatures derived from satellite data and those from the field work. A relative error of less than 40% between the two data sets is considered acceptable; in our observations the maximum error was 21.2%, which is therefore acceptable. At all stations the satellite measurements are usually lower than the field data, which corresponds with global results as well. As this sea spans a wide range of latitudes, differences in temperature are natural, but latitude is not the only cause of surface currents. The satellite image information was extracted with the ERDAS software, and the Surfer software was used to plot the isotherm lines.
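The error figure quoted above is a simple percent relative error of the satellite estimate against the in-situ reference. A minimal sketch (the function name and the sample temperatures below are hypothetical, not values from the study):

```python
def relative_error_percent(estimate: float, reference: float) -> float:
    """Percent relative error of an estimate against a reference value."""
    return abs(estimate - reference) / abs(reference) * 100.0

# Hypothetical pair: in-situ reading 18.0 degC, satellite-derived value 14.2 degC
err = relative_error_percent(14.2, 18.0)  # about 21.1%, below the 40% acceptance bound
```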

Relevance:

100.00%

Publisher:

Abstract:

Master's degree in Information Systems Management (Mestrado em Gestão de Sistemas de Informação)

Relevance:

100.00%

Publisher:

Abstract:

Understanding spatial patterns of land use and land cover is essential for studies addressing biodiversity, climate change and environmental modeling, as well as for the design and monitoring of land use policies. The aim of this study was to create a detailed map of the land use and land cover of the deforested areas of the Brazilian Legal Amazon up to 2008. Land uses within the deforested areas were mapped with Landsat-5/TM images analysed with techniques such as the linear spectral mixture model, threshold slicing and visual interpretation, aided by temporal information extracted from NDVI MODIS time series. The result is a high-spatial-resolution land use and land cover map of the entire Brazilian Legal Amazon for the year 2008, with the corresponding calculation of the area occupied by the different land use classes. The results showed that the four classes of Pasture covered 62% of the deforested areas of the Brazilian Legal Amazon, followed by Secondary Vegetation with 21%. The area occupied by Annual Agriculture covered less than 5% of the deforested areas; the remaining areas were distributed among six other land use classes. The maps generated from this project, called TerraClass, are available at INPE's web site (http://www.inpe.br/cra/projetos_pesquisas/terraclass2008.php).
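A linear spectral mixture model of the kind mentioned above treats each pixel's spectrum as a linear combination of endmember spectra and recovers the per-pixel fractions by least squares. The endmember reflectances below are invented for illustration and are not the values used by TerraClass:

```python
import numpy as np

# Hypothetical endmember spectra (rows) over four spectral bands (columns)
E = np.array([
    [0.05, 0.08, 0.45, 0.30],   # vegetation
    [0.20, 0.25, 0.30, 0.35],   # soil
    [0.02, 0.02, 0.03, 0.03],   # shade
])

def unmix(pixel: np.ndarray, endmembers: np.ndarray) -> np.ndarray:
    """Least-squares fractions f minimising ||endmembers.T @ f - pixel||."""
    f, *_ = np.linalg.lstsq(endmembers.T, pixel, rcond=None)
    return f

# A synthetic pixel that is 60% vegetation, 30% soil, 10% shade
pixel = 0.6 * E[0] + 0.3 * E[1] + 0.1 * E[2]
fractions = unmix(pixel, E)   # recovers approximately [0.6, 0.3, 0.1]
```

Operational unmixing usually also constrains the fractions to be non-negative and sum to one; the unconstrained least-squares version above is the simplest form of the model.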

Relevance:

100.00%

Publisher:

Abstract:

The increasing need for computational power in areas such as weather simulation, genomics or Internet applications has led to the sharing of geographically distributed and heterogeneous resources from commercial data centers and scientific institutions. Research in the areas of utility, grid and cloud computing, together with improvements in network and hardware virtualization, has resulted in methods to locate and use resources to rapidly provision virtual environments in a flexible manner, while lowering costs for consumers and providers. However, there is still a lack of methodologies to enable efficient and seamless sharing of resources among institutions. In this work, we concentrate on the problem of executing parallel scientific applications across distributed resources belonging to separate organizations. Our approach can be divided into three main points. First, we define and implement an interoperable grid protocol to distribute job workloads among partners with different middleware and execution resources. Second, we research and implement different policies for virtual resource provisioning and job-to-resource allocation, taking advantage of their cooperation to improve execution cost and performance. Third, we explore the consequences of on-demand provisioning and allocation for the problem of site selection for the execution of parallel workloads, and propose new strategies to reduce job slowdown and overall cost.

Relevance:

100.00%

Publisher:

Abstract:

An overview is given of a user interaction monitoring and analysis framework called BaranC. Monitoring and analysing human-digital interaction is an essential part of developing a user model as the basis for investigating user experience. The primary human-digital interaction, such as on a laptop or smartphone, is best understood and modelled in the wider context of the user and their environment. The BaranC framework provides monitoring and analysis capabilities that not only record all user interaction with a digital device (e.g. a smartphone), but also collect all available context data (such as from sensors in the digital device itself, a fitness band or smart appliances). The data collected by BaranC is recorded as a User Digital Imprint (UDI) which is, in effect, the user model and provides the basis for data analysis. BaranC provides functionality that is useful for user experience studies, user interface design evaluation, and user assistance services. An important concern for personal data is privacy, and the framework gives the user full control over the monitoring, storing and sharing of their data.

Relevance:

100.00%

Publisher:

Abstract:

Social plugins for sharing news through Facebook and Twitter have become increasingly salient features on news sites. Together with the user comment feature, social plugins are the most common way for users to contribute. The wide use of multiple features has opened new areas to comprehensively study users’ participatory practices. However, how do these opportunities to participate vary between the participatory spaces that news sites affiliated with local, national broadsheet and tabloid news constitute? How are these opportunities appropriated by users in terms of participatory practices such as commenting and sharing news through Facebook and Twitter? In addition, what differences are there between news sites in these respects? To answer these questions, a quantitative content analysis has been conducted on 3,444 articles from nine Swedish online newspapers. Local newspapers are more likely to allow users to comment on articles than are national newspapers. Tweeting news is appropriated only on news sites affiliated with evening tabloids and national morning newspapers. Sharing news through Facebook is 20 times more common than tweeting news or commenting. The majority of news items do not attract any user interaction.

Relevance:

100.00%

Publisher:

Abstract:

Objectives: To discuss how current research in the area of smart homes and ambient assisted living will be influenced by the use of big data. Methods: A scoping review of literature published in scientific journals and conference proceedings was performed, focusing on smart homes, ambient assisted living and big data over the years 2011-2014. Results: The health and social care market has lagged behind other markets when it comes to the introduction of innovative IT solutions, and the market faces a number of challenges as the use of big data increases. First, there is a need for a sustainable and trustworthy information chain through which the needed information can be transferred from all producers to all consumers in a structured way. Second, there is a need for big data strategies and policies to manage the new situation where information is handled and transferred independently of the location of the expertise. Finally, there is an opportunity to develop new and innovative business models for a market that supports cloud computing, social media, crowdsourcing etc. Conclusions: The interdisciplinary area of big data, smart homes and ambient assisted living is no longer of interest only to IT developers; it is also of interest to decision makers, as customers make more informed choices among today's services. In the future it will be important to make information usable for managers and improve decision making, tailor smart home services based on big data, develop new business models, increase competition and identify policies to ensure privacy, security and liability.

Relevance:

100.00%

Publisher:

Abstract:

Presents constructs from classification theory and relates them to the study of hashtags and other forms of tags in social media data. Argues these constructs are useful to the study of the intersectionality of race, gender, and sexuality. Closes with an introduction to an historical case study from Amazon.com.

Relevance:

100.00%

Publisher:

Abstract:

Collecting ground truth data is an important step to be accomplished before performing a supervised classification. However, its quality depends on human, financial and time resources. It is therefore important to apply a validation process to assess the reliability of the acquired data. In this study, agricultural information was collected in the Brazilian Amazonian state of Mato Grosso in order to map crop expansion based on MODIS EVI temporal profiles. The field work was carried out through interviews for the years 2005-2006 and 2006-2007. This work presents a methodology to validate the quality of the training data and to determine the optimal sample to be used according to the classifier employed. The technique is based on the detection of outlier pixels for each class and is carried out by computing the Mahalanobis distance for each pixel: the higher the distance, the further the pixel is from the class centre. Preliminary observations based on the coefficient of variation confirm the efficiency of the technique in detecting outliers. Various subsamples are then defined by applying different thresholds to exclude outlier pixels from the classification process. The classification results prove the robustness of the Maximum Likelihood and Spectral Angle Mapper classifiers; indeed, these classifiers were insensitive to outlier exclusion. By contrast, the decision tree classifier showed better results when 7.5% of the pixels were deleted from the training data. The technique managed to detect outliers for all classes. In this study, few outliers were present in the training data, so the classification quality was not deeply affected by them.
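The outlier-detection step described above can be sketched as follows (the synthetic two-band data, the planted outlier and the threshold are illustrations, not values from the study): each training pixel's Mahalanobis distance to its class centre is computed, and pixels beyond a chosen threshold are flagged for exclusion.

```python
import numpy as np

def mahalanobis_outliers(X: np.ndarray, threshold: float):
    """Distances of class samples X (n_pixels, n_bands) to the class centre,
    plus a boolean mask of samples exceeding the given distance threshold."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    # squared Mahalanobis distance of every pixel to the class centre
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    d = np.sqrt(d2)
    return d, d > threshold

rng = np.random.default_rng(42)
X = rng.normal(loc=[0.3, 0.6], scale=0.02, size=(200, 2))  # well-behaved class samples
X[0] = [0.9, 0.1]                                          # planted outlier pixel
dist, is_outlier = mahalanobis_outliers(X, threshold=4.0)
```

Varying the threshold produces the different subsamples the study compares; a classifier is then retrained on each subsample with the flagged pixels removed.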

Relevance:

100.00%

Publisher:

Abstract:

In the digital age, e-health technologies play a pivotal role in the processing of medical information. As personal health data represents sensitive information concerning a data subject, enhancing data protection and the security of systems and practices has become a primary concern. In recent years, there has been increasing interest in the concept of Privacy by Design (PbD), which aims at developing a product or a service in a way that supports privacy principles and rules. In the EU, Article 25 of the General Data Protection Regulation provides a binding obligation to implement Data Protection by Design (DPbD) technical and organisational measures. This thesis explores how an e-health system could be developed, and how data processing activities could be carried out, so as to apply data protection principles and requirements from the design stage. The research attempts to bridge the gap between the legal and technical disciplines on DPbD by providing a set of guidelines for the implementation of the principle. The work is based on a literature review, legal and comparative analysis, and an investigation of existing technical solutions and engineering methodologies. It combines theoretical and applied perspectives. First, it conducts a critical legal analysis of the principle of PbD and studies the DPbD legal obligation and the related provisions. The research then contextualises the rule in the health care field by investigating the legal framework applicable to personal health data processing. Moreover, it conducts a comparative analysis focused on the US legal system. Adopting an applied perspective, the research investigates existing technical methodologies and tools for designing data protection and proposes a set of comprehensive DPbD organisational and technical guidelines for a crucial case study: an Electronic Health Record system.