844 results for Data dissemination and sharing
Abstract:
The business environment points to the need for new forms of management to sustain the competitiveness of organizations over time. Coopetition is an alternative mode of interaction among different actors, which compete and cooperate simultaneously in the pursuit of common goals. This dual relation, within a gain-increasing perspective, converts competitors into partners and fosters competitiveness, especially among organizations within a specific sector. The field of competitive intelligence has, in its turn, assisted organizations individually in systematizing information valuable to decision-making processes, which benefits competitiveness. It follows that coopetition and competitive intelligence can be combined in a systematized process of sectorial intelligence for coopetitive relations. The general aim of this study is therefore to put forth a model of sectorial coopetitive intelligence. The study follows a mixed (quantitative and qualitative) approach, applied in nature, with exploratory and descriptive aims. The Coordination of the Strategic Roadmapping Project for the Future of Paraná's Industry is the selected object of investigation. Protocols were designed to collect primary and secondary data. For the primary data, online questionnaires were sent to the sectors selected for examination. A total of 149 answers to the online questionnaires were obtained, and interviews were performed with all five members of the technical team of the Coordination. After collection, the data were tabulated, analyzed and validated by means of focus groups with the same five members of the Coordination technical team, and interviews were performed with a representative of each of the four sectors selected, for a total of nine participants in the validation.
The results allowed the systematization of a sectorial coopetitive intelligence model called ICoops. The model comprises the stages of planning, collection, analysis, project development, dissemination and evaluation, each detailed in terms of inputs, activities and outputs. The results suggest that sectorial coopetition is motivated mainly by knowledge sharing, technological development, investment in R&D, innovation, chain integration and resource complementation. A neutral institution was recognized as an important facilitator and incentive for bringing organizations together. Among the main difficulties are the financing of projects, attracting new members, the lack of tools for the analysis of information, and the dissemination of actions.
Abstract:
With the prevalence of smartphones, new ways of engaging citizens and stakeholders in urban planning and governance are emerging. The technologies in smartphones allow citizens to act as sensors of their environment, producing and sharing rich spatial data useful for new types of collaborative governance set-ups. Data derived from Volunteered Geographic Information (VGI) can support accessible, transparent, democratic, inclusive, and locally-based governance situations of interest to planners, citizens, politicians, and scientists. However, there are still uncertainties about how to actually conduct this in practice. This study explores how social media VGI can be used to document spatial tendencies regarding citizens’ uses and perceptions of urban nature with relevance for urban green space governance. Via the hashtag #sharingcph, created by the City of Copenhagen in 2014, VGI data consisting of geo-referenced images were collected from Instagram, categorised according to their content and analysed according to their spatial distribution patterns. The results show specific spatial distributions of the images and main hotspots. VGI holds much potential for generating, sharing, visualising and communicating knowledge about citizens’ spatial uses and preferences, but as a tool to support scientific and democratic interaction, VGI data is challenged by practical, technical and ethical concerns. More research is needed in order to better understand the usefulness and application of this rich data source to governance.
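The spatial-distribution step described above can be sketched as a simple grid-binning hotspot count. This is a minimal illustration, not the study's actual pipeline; the function name and bin count are assumptions.

```python
import numpy as np

def hotspot_cell(lats, lons, bins=10):
    """Bin geo-referenced image locations into a lat/lon grid and return
    the count and bounds of the densest cell (the main 'hotspot')."""
    counts, lat_edges, lon_edges = np.histogram2d(lats, lons, bins=bins)
    i, j = np.unravel_index(np.argmax(counts), counts.shape)
    return (int(counts[i, j]),
            (lat_edges[i], lat_edges[i + 1]),
            (lon_edges[j], lon_edges[j + 1]))
```

In practice the bin size would be chosen to match the scale of the green spaces under study, and counts would be normalised for overall posting activity.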
Abstract:
We present new methodologies to generate rational function approximations of broadband electromagnetic responses of linear and passive networks of high-speed interconnects, and to construct SPICE-compatible, equivalent circuit representations of the generated rational functions. These new methodologies are driven by the desire to improve the computational efficiency of the rational function fitting process, and to ensure enhanced accuracy of the generated rational function interpolation and its equivalent circuit representation. Toward this goal, we propose two new methodologies for rational function approximation of high-speed interconnect network responses. The first one relies on the use of both time-domain and frequency-domain data, obtained either through measurement or numerical simulation, to generate a rational function representation that extrapolates the input, early-time transient response data to late-time response while at the same time providing a means to both interpolate and extrapolate the used frequency-domain data. The aforementioned hybrid methodology can be considered as a generalization of the frequency-domain rational function fitting utilizing frequency-domain response data only, and the time-domain rational function fitting utilizing transient response data only. In this context, a guideline is proposed for estimating the order of the rational function approximation from transient data. The availability of such an estimate expedites the time-domain rational function fitting process. The second approach relies on the extraction of the delay associated with causal electromagnetic responses of interconnect systems to provide for a more stable rational function process utilizing a lower-order rational function interpolation. A distinctive feature of the proposed methodology is its utilization of scattering parameters. For both methodologies, the approach of fitting the electromagnetic network matrix one element at a time is applied. 
It is shown that, with regard to the computational cost of the rational function fitting process, such an element-by-element rational function fitting is more advantageous than full matrix fitting for systems with a large number of ports. Despite the disadvantage that different sets of poles are used in the rational function of different elements in the network matrix, such an approach provides for improved accuracy in the fitting of network matrices of systems characterized by both strongly coupled and weakly coupled ports. Finally, in order to provide a means for enforcing passivity in the adopted element-by-element rational function fitting approach, the methodology for passivity enforcement via quadratic programming is modified appropriately for this purpose and demonstrated in the context of element-by-element rational function fitting of the admittance matrix of an electromagnetic multiport.
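As a minimal illustration of the pole-residue fitting that underlies these methodologies, a single frequency-response element can be fitted by linear least squares when the poles are held fixed. This is a sketch only: the actual methods also estimate the poles, handle time-domain data, extract delay, and enforce passivity.

```python
import numpy as np

def fit_residues(freqs, H, poles):
    """Least-squares fit of the pole-residue model
    H(s) ~ d + sum_k r_k / (s - p_k), with the poles p_k fixed.
    freqs: real frequencies in Hz; H: complex response samples."""
    s = 2j * np.pi * freqs
    A = np.hstack([1.0 / (s[:, None] - poles[None, :]),
                   np.ones((s.size, 1))])
    # Stack real and imaginary parts so the unknowns stay real-valued.
    A_ri = np.vstack([A.real, A.imag])
    b_ri = np.concatenate([H.real, H.imag])
    x, *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)
    return x[:-1], x[-1]  # residues r_k, direct term d
```

Element-by-element fitting, as the abstract notes, simply repeats this per matrix entry, so each entry may end up with its own pole set.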
Abstract:
In order to optimize frontal detection in sea surface temperature fields at 4 km resolution, a combined statistical and expert-based approach is applied to test different spatial smoothing of the data prior to the detection process. Fronts are usually detected at 1 km resolution using the histogram-based, single image edge detection (SIED) algorithm developed by Cayula and Cornillon in 1992, with a standard preliminary smoothing using a median filter and a 3 × 3 pixel kernel. Here, detections are performed in three study regions (off Morocco, the Mozambique Channel, and north-western Australia) and across the Indian Ocean basin using the combination of multiple windows (CMW) method developed by Nieto, Demarcq and McClatchie in 2012, which improves on the original Cayula and Cornillon algorithm. Detections at 4 km and 1 km resolution are compared. Fronts are divided into two intensity classes (“weak” and “strong”) according to their thermal gradient. A preliminary smoothing is applied prior to the detection using different convolutions: three types of filters (median, average and Gaussian) combined with four kernel sizes (3 × 3, 5 × 5, 7 × 7, and 9 × 9 pixels) and three detection window sizes (16 × 16, 24 × 24 and 32 × 32 pixels), to test the effect of these smoothing combinations on reducing the background noise of the data and therefore on improving the frontal detection. The performance of the combinations on 4 km data is evaluated using two criteria: detection efficiency and front length. We find that the optimal combination of preliminary smoothing parameters in enhancing detection efficiency and preserving front length includes a median filter, a 16 × 16 pixel window size, and a 5 × 5 pixel kernel for strong fronts and a 7 × 7 pixel kernel for weak fronts. Results show an improvement in detection performance (from largest to smallest window size) of 71% for strong fronts and 120% for weak fronts.
Despite the small window used (16 × 16 pixels), the length of the fronts has been preserved relative to that found with 1 km data. This optimal preliminary smoothing and the CMW detection algorithm on 4 km sea surface temperature data are then used to describe the spatial distribution of the monthly frequencies of occurrence of both strong and weak fronts across the Indian Ocean basin. In general, strong fronts are observed in coastal areas, whereas weak fronts, with some seasonal exceptions, are mainly located in the open ocean. This study shows that adequate noise reduction achieved by a preliminary smoothing of the data considerably improves the frontal detection efficiency as well as the overall quality of the results. Consequently, the use of 4 km data enables frontal detections similar to 1 km data (using a standard median 3 × 3 convolution) in terms of detectability, length and location. This method, using 4 km data, is easily applicable to large regions or at the global scale with far fewer constraints on data manipulation and processing time relative to 1 km data.
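The pre-smoothing step can be sketched as follows. This is a minimal illustration using SciPy's median filter and a plain gradient magnitude; the SIED/CMW detection itself is considerably more involved.

```python
import numpy as np
from scipy.ndimage import median_filter

def presmooth_gradient(sst, kernel=5):
    """Apply median pre-smoothing with the given kernel size (e.g. 5x5
    for strong fronts, 7x7 for weak fronts per the findings above), then
    return the thermal gradient magnitude on which the weak/strong
    intensity classes are based."""
    smoothed = median_filter(sst, size=kernel)
    gy, gx = np.gradient(smoothed)
    return smoothed, np.hypot(gx, gy)
```

The median filter suppresses speckle noise while preserving sharp temperature steps, which is why it outperforms averaging filters for edge detection.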
Abstract:
This paper aims to categorize Brazilian Internet users according to the diversity of their online activities and to assess the propensity of these Internet user groups to use electronic government (e-gov) services. Amartya Sen’s Capability Approach was adopted as the theoretical framework for its consideration of people’s freedom to decide on their use of available resources and their competencies for these decisions, leading to the use of e-gov services. Multivariate statistical techniques were used to analyse data from the 2007, 2009 and 2011 editions of the ICT Household Survey. The results showed that Internet users belonging to the advanced and intermediate use groups were more likely to use e-gov services than those belonging to the sporadic use group. Moreover, the results also demonstrated that the intermediate use group presented a higher tendency to use e-gov services than the advanced use group. This tendency is possibly related to the extensive use of interactive and collaborative leisure and entertainment activities by the latter type of user. The findings of this research may be useful in guiding public policies for the dissemination and provision of electronic government services in Brazil.
Abstract:
How have cooperative airspace arrangements contributed to cooperation and discord in the Euro-Atlantic region? This study analyzes the role of three sets of airspace arrangements developed by Euro-Atlantic states since the end of the Cold War—(1) cooperative aerial surveillance of military activity, (2) exchange of air situational data, and (3) joint engagement of theater air and missile threats—in political-military relations among neighbors and within the region. These arrangements provide insights into the integration of Central and Eastern European states into Western security institutions, and the current discord that centers on the conflict in Ukraine and Russia’s place in regional security. The study highlights the role of airspace incidents as contributors to conflict escalation and identifies opportunities for transparency- and confidence-building measures to improve U.S./NATO-Russian relations. The study recommends strengthening the Open Skies Treaty in order to facilitate the resolution of conflicts and improve region-wide military transparency. It notes that political-military arrangements for engaging theater air and missile threats created by NATO and Russia over the last twenty years are currently postured in a way that divides the region and inhibits mutual security. In turn, the U.S.-led Regional Airspace Initiatives that facilitated the exchange of air situational data between NATO and then-NATO-aspirants such as Poland and the Baltic states, offer a useful precedent for improving air sovereignty and promoting information sharing to reduce the fear of war among participating states. Thus, projects like NATO’s Air Situational Data Exchange and the NATO-Russia Council Cooperative Airspace Initiative—if extended to the exchange of data about military aircraft—have the potential to buttress deterrence and contribute to conflict prevention. 
The study concludes that documenting the evolution of airspace arrangements since the end of the Cold War contributes to an understanding of the conflicting narratives put forward by Russia, the West, and the states “in-between” with respect to the reasons for the current state of regional security. The long-term project of developing a zone of stable peace in the Euro-Atlantic must begin with the difficult task of building inclusive security institutions that accommodate the concerns of all regional actors.
Abstract:
The goal was to understand, document and model how information currently flows internally in the largest dairy organization in Finland. The organization has undergone radical changes in recent years due to economic sanctions between the European Union and Russia. Its ultimate goal is therefore to continue growing by managing its sales process more efficiently. The thesis consists of a literature review and an empirical part. The literature review covers knowledge management and process modeling theories. First, the knowledge management part discusses how data, information and knowledge are exchanged in the process; knowledge management models and processes describe how knowledge is created, exchanged and managed in an organization. Second, the process modeling part visualizes information flow through a discussion of modeling approaches and a presentation of different methods and techniques. Finally, the process documentation procedure is presented. A constructive research approach was used to identify process-related problems and bottlenecks, and possible solutions were presented based on this approach. The empirical part of the study is based on 37 interviews, the organization’s internal data sources and the theoretical framework. The acquired data and information were used to document and model the sales process in question with a flowchart diagram. Results are derived from the construction of the flowchart diagram and the analysis of the documentation; answers to the research questions are drawn from the empirical and theoretical parts. In the end, 14 problems and two bottlenecks were identified in the process. The most important problems relate to the approach to and/or standardization of information sharing, insufficient use of information technology tools, and a lack of systematization of documentation. The bottlenecks are caused by the alarming number of changes made to files after their deadlines.
Abstract:
The next generation of vehicles will be equipped with automated Accident Warning Systems (AWSs) capable of warning neighbouring vehicles about hazards that might lead to accidents. The key enabling technology for these systems is the Vehicular Ad-hoc Network (VANET), but the dynamics of such networks make the crucial timely delivery of warning messages challenging. While most previously attempted implementations have used broadcast-based data dissemination schemes, these do not cope well as data traffic load or network density increases. This thesis addresses the problem of sending warning messages in a timely manner by employing a network coding technique. The proposed NETwork COded DissEmination (NETCODE) is a VANET-based AWS responsible for generating and sending warnings to the vehicles on the road. NETCODE offers an XOR-based data dissemination scheme that sends multiple warnings in a single transmission and therefore reduces the total number of transmissions required to send the same number of warnings that broadcast schemes send. Hence, it reduces contention and collisions in the network, improving the delivery time of the warnings. The first part of this research (Chapters 3 and 4) asserts that in order to build a warning system, it is necessary to ascertain the system requirements, the information to be exchanged, and the protocols best suited for communication between vehicles. Therefore, a study of these factors, along with a review of existing proposals identifying their strengths and weaknesses, is carried out. Then an analysis of existing broadcast-based warning schemes is conducted, which concludes that although broadcasting is the most straightforward scheme, increasing load can cause an effective collapse, resulting in unacceptably long transmission delays.
The second part of this research (Chapter 5) proposes the NETCODE design, including the main contribution of this thesis: a pair of encoding and decoding algorithms that make use of an XOR-based technique to reduce transmission overheads and thus allow warnings to be delivered in time. The final part of this research (Chapters 6--8) evaluates the performance of the proposed scheme, in terms of how it reduces the number of transmissions in the network in response to growing data traffic load and network density, and investigates its capacity to detect potential accidents. The evaluations use a custom-built simulator to model real-world scenarios such as city areas, junctions, roundabouts and motorways. The study shows that the reduction in the number of transmissions significantly reduces contention in the network, allowing vehicles to deliver warning messages more rapidly to their neighbours. It also examines the relative performance of NETCODE when handling both sudden event-driven and longer-term periodic messages in diverse scenarios under stress caused by increasing numbers of vehicles and transmissions per vehicle. This work confirms the thesis' primary contention that XOR-based network coding provides a potential solution on which a more efficient AWS data dissemination scheme can be built.
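The core XOR coding idea can be illustrated in a few lines. This is a sketch of the general technique, not the actual NETCODE protocol: a relay XORs two equal-length warnings into one coded packet, and any receiver that already holds one of the warnings recovers the other, halving the number of transmissions for that pair.

```python
def xor_encode(w1: bytes, w2: bytes) -> bytes:
    """Combine two equal-length warning payloads into one coded packet."""
    assert len(w1) == len(w2), "payloads must be padded to equal length"
    return bytes(a ^ b for a, b in zip(w1, w2))

def xor_decode(coded: bytes, known: bytes) -> bytes:
    """A receiver that already holds one warning recovers the other,
    since (w1 XOR w2) XOR w1 == w2."""
    return xor_encode(coded, known)
```

The gain comes precisely from this self-inverse property of XOR: one broadcast serves two distinct receivers that each miss a different message.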
Abstract:
Background: Understanding transcriptional regulation by genome-wide microarray studies can contribute to unravel complex relationships between genes. Attempts to standardize the annotation of microarray data include the Minimum Information About a Microarray Experiment (MIAME) recommendations, the MAGE-ML format for data interchange, and the use of controlled vocabularies or ontologies. The existing software systems for microarray data analysis implement the mentioned standards only partially and are often hard to use and extend. Integration of genomic annotation data and other sources of external knowledge using open standards is therefore a key requirement for future integrated analysis systems. Results: The EMMA 2 software has been designed to resolve shortcomings with respect to full MAGE-ML and ontology support and makes use of modern data integration techniques. We present a software system that features comprehensive data analysis functions for spotted arrays, and for the most common synthesized oligo arrays such as Agilent, Affymetrix and NimbleGen. The system is based on the full MAGE object model. Analysis functionality is based on R and Bioconductor packages and can make use of a compute cluster for distributed services. Conclusion: Our model-driven approach for automatically implementing a full MAGE object model provides high flexibility and compatibility. Data integration via SOAP-based web-services is advantageous in a distributed client-server environment as the collaborative analysis of microarray data is gaining more and more relevance in international research consortia. The adequacy of the EMMA 2 software design and implementation has been proven by its application in many distributed functional genomics projects. Its scalability makes the current architecture suited for extensions towards future transcriptomics methods based on high-throughput sequencing approaches which have much higher computational requirements than microarrays.
Abstract:
The changes in time and location of the surface temperature of a water body have an important effect on climate activity, marine biology, sea currents, salinity and other characteristics of sea and lake waters. Traditional measurement of temperature is costly and time-consuming due to its dispersion and instability. In recent years, the use of satellite technology and remote sensing for data acquisition and parameter analysis in climatology and oceanography has developed considerably. In this research we used NOAA satellite images from the AVHRR system to compare field surface temperature data with the satellite image information. Ten satellite images were used in this project. These images were calibrated against field data collected at the exact time of the satellite pass over the area. The result was a significant relation between surface temperatures from satellite data and from the field work. Since a relative error of less than 40% between these two data sources is considered acceptable, our maximum observed error of 21.2% can be regarded as acceptable. At all stations the satellite measurements are usually lower than the field data, which corresponds with global results as well. As this sea spans a wide range of latitudes, differences in temperature are natural, but this factor is not the only cause of surface currents. The information in all satellite images was extracted with the ERDAS software, and the Surfer software was used to plot the isotherm lines.
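The satellite-versus-field comparison can be sketched as a simple linear calibration followed by the relative-error check described above. The function and variable names are illustrative assumptions, not the study's actual procedure.

```python
import numpy as np

def calibrate(sat_temp, field_temp):
    """Fit field = a*sat + b by least squares, then report the maximum
    relative error (%) of the fitted values against the field data,
    to be compared with the 40% acceptability threshold."""
    a, b = np.polyfit(sat_temp, field_temp, 1)
    fitted = a * sat_temp + b
    max_rel_err = 100.0 * np.max(np.abs(fitted - field_temp)
                                 / np.abs(field_temp))
    return (a, b), max_rel_err
```

With real data the fit would be done per station or per image, and the bias (satellite reading below in-situ reading) would show up as a slope or intercept away from the identity line.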
Abstract:
Master's in Information Systems Management
Abstract:
Understanding spatial patterns of land use and land cover is essential for studies addressing biodiversity, climate change and environmental modeling, as well as for the design and monitoring of land use policies. The aim of this study was to create a detailed map of the land use and land cover of the deforested areas of the Brazilian Legal Amazon up to 2008. Deforested areas and land uses were mapped with Landsat-5/TM images analysed with techniques such as the linear spectral mixture model, threshold slicing and visual interpretation, aided by temporal information extracted from NDVI MODIS time series. The result is a high spatial resolution land use and land cover map of the entire Brazilian Legal Amazon for the year 2008, with the corresponding calculation of the area occupied by the different land use classes. The results showed that the four classes of Pasture covered 62% of the deforested areas of the Brazilian Legal Amazon, followed by Secondary Vegetation with 21%. The area occupied by Annual Agriculture covered less than 5% of the deforested areas; the remaining areas were distributed among six other land use classes. The maps generated by this project, called TerraClass, are available at INPE's web site (http://www.inpe.br/cra/projetos_pesquisas/terraclass2008.php)
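The linear spectral mixture model mentioned above can be sketched as a least-squares unmixing of each pixel against known endmember spectra. This is a simplified illustration under stated assumptions: the endmember matrix and the clip-and-renormalise handling of the sum-to-one constraint are not the study's actual implementation.

```python
import numpy as np

def unmix(pixel, endmembers):
    """Linear spectral mixture model: solve pixel ~ endmembers @ fractions
    by least squares, where endmembers has shape (bands, classes).
    Fractions are clipped to be non-negative and renormalised to sum to 1."""
    fractions, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    fractions = np.clip(fractions, 0.0, None)
    return fractions / fractions.sum()
```

Threshold slicing on the resulting fraction images (e.g. high vegetation fraction vs. high soil fraction) then supports the class assignment refined by visual interpretation.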
Abstract:
The increasing need for computational power in areas such as weather simulation, genomics or Internet applications has led to the sharing of geographically distributed and heterogeneous resources from commercial data centers and scientific institutions. Research in the areas of utility, grid and cloud computing, together with improvements in network and hardware virtualization, has resulted in methods to locate and use resources to rapidly provision virtual environments in a flexible manner, while lowering costs for consumers and providers. However, there is still a lack of methodologies to enable efficient and seamless sharing of resources among institutions. In this work, we concentrate on the problem of executing parallel scientific applications across distributed resources belonging to separate organizations. Our approach can be divided into three main parts. First, we define and implement an interoperable grid protocol to distribute job workloads among partners with different middleware and execution resources. Second, we research and implement different policies for virtual resource provisioning and job-to-resource allocation, taking advantage of their cooperation to improve execution cost and performance. Third, we explore the consequences of on-demand provisioning and allocation for the problem of site selection for the execution of parallel workloads, and propose new strategies to reduce job slowdown and overall cost.
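A toy version of a cost-minimising site-selection policy for such on-demand provisioning might look as follows. The field names and the cheapest-feasible-site rule are illustrative assumptions, not the policies actually proposed in the work.

```python
def select_site(job_cores, sites):
    """Among sites with enough free cores for the job, pick the cheapest
    per core-hour; return None if no site can host the job."""
    feasible = [s for s in sites if s["free_cores"] >= job_cores]
    if not feasible:
        return None
    return min(feasible, key=lambda s: s["cost_per_core_hour"])
```

Real policies would also weigh queue wait times, data locality and provisioning latency, which is where the slowdown/cost trade-off studied in the third part arises.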
Abstract:
An overview is given of a user interaction monitoring and analysis framework called BaranC. Monitoring and analysing human-digital interaction is an essential part of developing a user model as the basis for investigating user experience. The primary human-digital interaction, such as on a laptop or smartphone, is best understood and modelled in the wider context of the user and their environment. The BaranC framework provides monitoring and analysis capabilities that not only record all user interaction with a digital device (e.g. a smartphone), but also collect all available context data (such as from sensors in the digital device itself, a fitness band or a smart appliance). The data collected by BaranC are recorded as a User Digital Imprint (UDI) which is, in effect, the user model and provides the basis for data analysis. BaranC provides functionality that is useful for user experience studies, user interface design evaluation, and the provision of user assistance services. An important concern for personal data is privacy, and the framework gives the user full control over the monitoring, storing and sharing of their data.