901 results for "Data dissemination and sharing"


Relevance: 100.00%

Abstract:

This research aims to provide a better understanding of how firms stimulate knowledge sharing through the use of collaboration tools, in particular Emergent Social Software Platforms (ESSPs). It focuses on the distinctive applications of ESSPs and on the initiatives that help maximize their advantages. In the first part of the research, I itemized all existing types of collaboration tools and classified them into categories according to their capabilities, objectives and capacity to promote knowledge sharing. In the second part, based on an exploratory case study at Cisco Systems, I identified the main applications of an existing enterprise social software platform named Webex Social. By combining qualitative and quantitative approaches, and by combining data collected from survey results with the analysis of the company's documents, I expect to maximize the outcome of this investigation and reduce the risk of bias. Although findings cannot be generalized from a single case study, some utilization patterns emerged from the data collected and potential trends in knowledge management were observed. The results of the research also made it possible to identify most of the constraints experienced by the users of the firm's social software platform. Ultimately, this research should provide a preliminary framework for firms planning to create or implement a social software platform, and for firms seeking to increase adoption levels and promote the overall participation of users. It highlights the common traps that developers should avoid when designing a social software platform and the capabilities such a platform should inherently carry to support an effective knowledge management strategy.

Relevance: 100.00%

Abstract:

In the last few years we have observed exponential growth in information systems, and parking information is one more example. Obtaining reliable and up-to-date information on parking slot availability is very important for the goal of traffic reduction, and parking slot prediction is a new topic that has already started to be applied: San Francisco in the United States and Santander in Spain are examples of projects carried out to obtain this kind of information. The aim of this thesis is the study and evaluation of methodologies for parking slot prediction and their integration into a web application, where all kinds of users can consult the current parking status as well as future status according to the model predictions. The source of the data is ancillary to this work, but it still needs to be understood in order to understand parking behaviour. There are many modelling techniques used for this purpose, such as time series analysis, decision trees, neural networks and clustering. In this work, the author describes the techniques best suited to the problem, analyses the results and points out the advantages and disadvantages of each one. The model learns the periodic and seasonal patterns of the parking status behaviour, and with this knowledge it can predict future status values for a given date. The data comes from the Smart Park Ontinyent project and consists of parking occupancy status together with timestamps, stored in a database. After data acquisition, data analysis and pre-processing were needed before the models could be implemented. The first test used a boosting ensemble classifier, applied over a set of decision trees created with the C5.0 algorithm from a set of training samples, to assign a prediction value to each object. In addition to the predictions, this work provides error measurements that indicate how reliable the predictions are.
The second test used the TBATS model (seasonal exponential smoothing by function fitting). Finally, the last test tried a model that combines the previous two, to assess the result of this combination. The results were quite good for all of them, with average errors of 6.2, 6.6 and 5.4 vacancies in the predictions for the three models, respectively; for a car park of 47 places, this means roughly a 10% average error in parking slot predictions. The results could be even better with more data available. To make this kind of information visible and reachable by everyone with an internet-connected device, a web application was built. Besides displaying the data, the application offers several functions that make the search for parking easier. Apart from parking prediction, the new functions are:
- Distances to car parks: the distances from the user's current location to the different car parks in the city.
- Geocoding: matching a textual description or an address to a concrete location.
- Geolocation: positioning the user.
- Parking list panel: not a service or a function as such, but a clearer visualization and handling of the information.
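The periodic-pattern idea underlying the models above can be sketched in a few lines. This is a hedged illustration, not the thesis code: a purely seasonal predictor over synthetic hourly free-slot counts (all names and numbers are invented), standing in for the far richer C5.0 and TBATS models described.

```python
# Minimal sketch (not the thesis code) of seasonal-pattern prediction:
# average the occupancy per hour of day, then forecast by cycle position.
# All data below is synthetic, for illustration only.

def seasonal_profile(occupancy, period=24):
    """Average occupancy for each position in the cycle (e.g. hour of day)."""
    buckets = [[] for _ in range(period)]
    for i, value in enumerate(occupancy):
        buckets[i % period].append(value)
    return [sum(b) / len(b) for b in buckets]

def predict(profile, t):
    """Forecast occupancy at future time step t from the learned profile."""
    return profile[t % len(profile)]

# Two synthetic days of hourly free-slot counts with a daily cycle.
history = [40, 42, 44, 45, 45, 43, 35, 20, 10, 8, 9, 12,
           15, 14, 13, 15, 18, 25, 30, 34, 37, 39, 40, 41] * 2
profile = seasonal_profile(history, period=24)
print(predict(profile, 48 + 8))   # forecast for 08:00, two days ahead
```

A real deployment would, as the thesis does, also model weekly seasonality and report prediction error alongside each forecast.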

Relevance: 100.00%

Abstract:

The “CMS Safety Closing Sensors System” (SCSS, or CSS for brevity) is a remote monitoring system designed to control safety clearances and tight mechanical movements of parts of the CMS detector, especially during the CMS assembly phases. We present the different systems that make up the SCSS: its sensor technologies, the readout system, and the data acquisition and control software. We also report on calibration and installation details, which determine the resolution and limits of the system, and present our experience from operating the system and from the analysis of the data collected since 2008. Special emphasis is given to studying positioning reproducibility during detector assembly and to understanding how the magnetic fields influence the detector structure.

Relevance: 100.00%

Abstract:

This research was conducted to understand how Facebook users interact, and the underlying reasons for doing so, with a focus on one-to-mass communication. Different methods and sources were used to generate accurate and valid insights. Liking, groups, commenting, events and sharing were found to be the essential interactions, of which liking, commenting and sharing were investigated in more detail. The investigation shows that emotions do trigger these three interactions; the most influential emotions are Surprise/Wonder, Deep Respect/Impressiveness and Fun/Joy. A variety of specific factors that trigger each of the interactions are also revealed.

Relevance: 100.00%

Abstract:

The MEMORIAMEDIA project aims to study, inventory and disseminate manifestations of intangible cultural heritage: oral expressions; performative practices; celebrations; the know-how of arts and crafts; and practices and knowledge related to nature and the universe. MEMORIAMEDIA began in 2006, in the midst of the national and international debate on intangible cultural heritage. This book crosses those theoretical, methodological and technical discussions with a characterization of MEMORIAMEDIA. The results of the project, organized into a national inventory, are published on the website www.memoriamedia.net, where they are available for consultation and sharing. Filomena Sousa is a post-doctoral researcher in anthropology (FCSH/UNL) and holds a PhD in sociology (ISCTE-IUL). She is an integrated member of the Instituto de Estudos de Literatura e Tradição - patrimónios, artes e culturas (IELT) at FCSH/UNL and a consultant for Memória Imaterial CRL, the non-governmental organization that created and manages the MEMORIAMEDIA project. She conducts research on policies and instruments for the identification, documentation and safeguarding of intangible cultural heritage and has made several documentaries about cultural expressions.

Relevance: 100.00%

Abstract:

Understanding how the brain works has been one of mankind's greatest goals. This desire drives the scientific community to pursue novel techniques able to acquire the complex information produced by the brain at any given moment. Electrocorticography (ECoG) is one of those techniques: by placing conductive electrodes over the dura, or directly over the cortex, and measuring the variation of the electric potential, one can acquire information on the activation of those areas. In this work, transparent ECoGs (TrECoGs) are fabricated through thin-film deposition of the Transparent Conductive Oxides (TCOs) Indium-Zinc-Oxide (IZO) and Gallium-Zinc-Oxide (GZO). Five distinct devices were fabricated via shadow masking and photolithography. The data acquired and presented in this work validate the fabricated TrECoGs as efficient devices for recording brain activity. The best results were obtained for the GZO-based TrECoG, which presented an average impedance of 36 kΩ at 1 kHz for 500 μm diameter electrodes, a transmittance close to 90% across the visible spectrum, and a clear capability to detect brain signal variations. The IZO-based devices also presented high transmittance levels (90%), but with higher impedances, ranging from 40 kΩ to 100 kΩ.

Relevance: 100.00%

Abstract:

In recent years, a set of production paradigms has been proposed to enable manufacturers to meet new market requirements, such as the shift in demand from traditional standardized mass-produced goods towards highly customized products with shorter life cycles. These new paradigms advocate solutions capable of meeting those requirements, empowering manufacturing systems with a high capacity to adapt, along with elevated flexibility and robustness for dealing with disturbances such as unexpected orders or malfunctions. Evolvable Production Systems propose a solution based on modularity and self-organization at a fine level of granularity, supporting pluggability and thereby allowing companies to add and/or remove components during execution without any extra re-programming effort. However, current monitoring software was not designed to fully support these characteristics, being commonly based on centralized SCADA systems that cannot re-adapt during execution to the unexpected plugging/unplugging of devices or to changes in the overall system topology. Considering these aspects, the work developed for this thesis encompasses a fully distributed agent-based architecture capable of performing knowledge extraction at different levels of abstraction without sacrificing the ability to add and/or remove, at runtime, the monitoring entities responsible for data extraction and analysis.
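The runtime pluggability the thesis argues for can be illustrated with a toy registry of monitoring agents. This is a hedged sketch of the general idea only, not the thesis architecture; all class and device names are hypothetical.

```python
# Illustrative sketch (not the thesis architecture): monitoring agents can
# be plugged and unplugged while the system runs, and the aggregator adapts
# without any reconfiguration. Names and readings are hypothetical.

class MonitoringAgent:
    def __init__(self, name, read_fn):
        self.name = name
        self.read = read_fn            # data-extraction function for one device

class Aggregator:
    """Collects readings from whatever agents are currently plugged in."""
    def __init__(self):
        self.agents = {}
    def plug(self, agent):             # hot-plug a new monitoring entity
        self.agents[agent.name] = agent
    def unplug(self, name):            # remove an entity during execution
        self.agents.pop(name, None)
    def snapshot(self):
        return {n: a.read() for n, a in self.agents.items()}

agg = Aggregator()
agg.plug(MonitoringAgent("drill", lambda: 42.0))
agg.plug(MonitoringAgent("conveyor", lambda: 7.5))
print(agg.snapshot())                  # both devices reporting
agg.unplug("drill")                    # device removed at runtime
print(agg.snapshot())                  # aggregator adapts automatically
```

A centralized SCADA configuration would need to be edited and redeployed for the same change; here the topology is discovered from whatever is registered.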

Relevance: 100.00%

Abstract:

Field lab: Business project

Relevance: 100.00%

Abstract:

As huge amounts of data become available in organizations and society, specific data analytics skills and techniques are needed to explore these data and extract useful patterns, tendencies, models or other knowledge that can be used to support decision-making, to define new strategies or to understand what is happening in a specific field. Only with a deep understanding of a phenomenon is it possible to fight it. In this paper, a data-driven analytics approach is used to analyse the increasing incidence of fatalities from pneumonia in the Portuguese population, characterizing the disease and its incidence in terms of fatalities. This knowledge can be used to define appropriate strategies aimed at reducing the phenomenon, which has increased by more than 65% in a decade.
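The kind of descriptive statistic behind the paper's headline figure can be computed in a few lines. The numbers below are synthetic, not the Portuguese mortality data; the function name is invented for illustration.

```python
# Toy example of the descriptive analytics discussed above: percentage
# growth of yearly fatality counts over a decade. Synthetic data only.

def decade_growth(counts):
    """Percentage change from the first to the last year of the series."""
    return 100.0 * (counts[-1] - counts[0]) / counts[0]

yearly_fatalities = [3000, 3100, 3300, 3500, 3700,
                     3900, 4200, 4500, 4700, 5000]   # 10 synthetic years
print(round(decade_growth(yearly_fatalities), 1))    # prints 66.7
```

Real analyses would, as the paper does, also break incidence down by age group, region and season before proposing strategies.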

Relevance: 100.00%

Abstract:

Within the civil engineering field, the Finite Element Method has acquired significant importance, since numerical simulations are employed in a broad range of tasks encompassing the design, analysis and prediction of the structural behaviour of constructions and infrastructures. Nevertheless, these mathematical simulations can only be useful if the mechanical properties of the materials, the boundary conditions and any damage are properly modelled. This requires not only experimental data (static and/or dynamic tests) to provide reference parameters, but also robust calibration methods able to model damage and other special structural conditions. The present paper addresses the model calibration of a footbridge tested with static loads and ambient vibrations. Damage assessment was also carried out, based on a hybrid numerical procedure that combines discrete damage functions with sets of piecewise linear damage functions. The calibration results show that the model reproduces the experimental behaviour of the bridge with good accuracy.
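The core of model calibration is adjusting model parameters until the simulated response matches the measured one, typically in a least-squares sense. The sketch below illustrates that idea with a one-parameter toy model; it is a hedged stand-in, not the paper's finite element model, and all numbers are invented.

```python
# Hedged sketch of the calibration idea: pick the model parameter that
# minimizes the squared mismatch between simulated and measured response.
# The linear "model" and all values are illustrative stand-ins.

def model_deflection(load, stiffness):
    return load / stiffness           # toy linear structural model

def calibrate(loads, measured, candidates):
    """Return the stiffness minimizing the sum of squared residuals."""
    def cost(k):
        return sum((model_deflection(p, k) - d) ** 2
                   for p, d in zip(loads, measured))
    return min(candidates, key=cost)

loads    = [10.0, 20.0, 30.0]
measured = [0.050, 0.101, 0.149]      # synthetic static-test deflections
best_k   = calibrate(loads, measured, candidates=[150, 180, 200, 220])
print(best_k)                         # stiffness giving the best fit
```

A real calibration runs the same loop over a full FE model with many parameters, usually with a gradient-based or evolutionary optimizer instead of a candidate grid.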

Relevance: 100.00%

Abstract:

Data traces, consisting of logs about the use of mobile and wireless networks, have been used to study the statistics of encounters between mobile nodes, in an attempt to predict the performance of opportunistic networks. Understanding the role and potential of mobile devices as relay nodes in message dissemination and delivery depends on knowledge about the patterns and number of encounters among nodes. Data traces about the use of WiFi networks are widely available and can be used to extract large datasets of encounters between nodes. However, these logs only capture indirect encounters, and the resulting encounter datasets might not realistically represent the spatial and temporal behaviour of the nodes. This paper addresses the impact of the overlap between the coverage areas of different WiFi Access Points on the extraction of encounter datasets from usage logs. Simulation and real-world experimental results show that indirect encounter traces extracted directly from these logs strongly underestimate the opportunities for direct node-to-node message exchange in opportunistic networks.
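The extraction step being discussed can be sketched simply: two nodes are counted as an (indirect) encounter when their association intervals overlap in time at APs considered to cover the same area. This is a hedged illustration of the general technique, not the paper's method; the session records and AP names are synthetic.

```python
# Simplified sketch of encounter extraction from WiFi association logs.
# Sessions are synthetic (node, ap, start, end) tuples; same_area decides
# which APs count as covering the same physical area.

def encounters(sessions, same_area):
    """Pairs of nodes with time-overlapping sessions at co-located APs."""
    found = set()
    for i, (n1, ap1, s1, e1) in enumerate(sessions):
        for n2, ap2, s2, e2 in sessions[i + 1:]:
            if n1 != n2 and same_area(ap1, ap2) and s1 < e2 and s2 < e1:
                found.add(frozenset((n1, n2)))
    return found

sessions = [("a", "AP1", 0, 10), ("b", "AP1", 5, 15), ("c", "AP2", 0, 10)]

# Same-AP only: misses a-c and b-c even if AP1 and AP2 overlap spatially.
print(encounters(sessions, lambda x, y: x == y))

# Treating the overlapping pair AP1/AP2 as one area recovers those encounters.
overlap = {frozenset(("AP1", "AP2"))}
print(encounters(sessions, lambda x, y: x == y or frozenset((x, y)) in overlap))
```

The gap between the two outputs is exactly the underestimation effect the paper quantifies: ignoring coverage overlap hides contact opportunities between nodes attached to neighbouring APs.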

Relevance: 100.00%

Abstract:

Doctoral Thesis - Programa Doutoral em Engenharia Industrial e Sistemas (PDEIS)

Relevance: 100.00%

Abstract:

DNA microarrays are one of the most widely used technologies for gene expression measurement. However, there are several distinct microarray platforms, from different manufacturers, each with its own measurement protocol, resulting in data that can hardly be compared or directly integrated. Data integration from multiple sources aims to improve the reliability of statistical tests, mitigating the data dimensionality problem. Integrating heterogeneous DNA microarray platforms involves a set of tasks that range from re-annotating the features used for gene expression to data normalization and batch effect elimination. In this work, a complete methodology for gene expression data integration and application is proposed, comprising a transcript-based re-annotation process and several methods for batch effect attenuation. The integrated data are used to select the best feature set and learning algorithm for a brain tumor classification case study. The integration considers data from the heterogeneous Agilent and Affymetrix platforms, collected from public gene expression databases such as The Cancer Genome Atlas and Gene Expression Omnibus.
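One of the simplest forms of batch effect attenuation is per-batch mean-centering of each gene's expression values. The sketch below illustrates only that baseline idea, not the methodology proposed in the work, which evaluates several methods; the expression values are synthetic.

```python
# Minimal illustration of one baseline batch-effect attenuation step:
# per-batch mean-centering for a single gene. Synthetic values only.

def mean_center_by_batch(values, batches):
    """Subtract each batch's mean so the batches share a common level."""
    groups = {}
    for v, b in zip(values, batches):
        groups.setdefault(b, []).append(v)
    means = {b: sum(vs) / len(vs) for b, vs in groups.items()}
    return [v - means[b] for v, b in zip(values, batches)]

# One gene measured on two platforms with a systematic offset between them.
expr  = [5.0, 5.2, 4.8, 8.0, 8.2, 7.8]      # two batches of three samples
batch = ["agilent"] * 3 + ["affy"] * 3
print(mean_center_by_batch(expr, batch))     # offset removed, shape kept
```

More sophisticated methods (e.g. empirical Bayes adjustment) also pool variance information across genes, which matters when batches are small.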

Relevance: 100.00%

Abstract:

For any vacuum initial data set, we define a local, non-negative scalar quantity which vanishes at every point of the data hypersurface if and only if the data are Kerr initial data. Our scalar depends only on the quantities used to construct the vacuum initial data set, namely the Riemannian metric defined on the initial data hypersurface and a symmetric tensor which plays the role of the second fundamental form of the embedded hypersurface. The dependency is algorithmic in the sense that, given the initial data, one can compute the scalar by algebraic and differential manipulations, making it suitable for implementation in a numerical code. The scalar could also be useful in studies of the non-linear stability of the Kerr solution, because it measures the deviation of a vacuum initial data set from Kerr initial data in a local and algorithmic way.

Relevance: 100.00%

Abstract:

Although some studies point to cognitive stimulation as a beneficial therapy for older adults with cognitive impairments, this area of research and practice still lacks dissemination and is underrepresented in many countries. Moreover, the comparative effects of different intervention durations remain to be established, and besides cognitive effects, pragmatic parameters such as cost-effectiveness and experiential relevance to participants are seldom explored. In this work, we present a randomized controlled wait-list trial evaluating two intervention durations (standard = 17 vs brief = 11 sessions) of a cognitive stimulation program developed for older adults with cognitive impairments, with or without dementia. Twenty participants were randomly assigned to the standard-duration intervention program (17 sessions, 1.5 months) or to a wait-list group. After the standard intervention group completed the program, the wait-list group crossed over to receive the brief intervention program (11 sessions, 1 month). Changes in neuropsychological, functionality, quality-of-life and caregiver outcomes were evaluated, as were the experience during the intervention, costs and feasibility. Both cognitive stimulation programs (i.e., standard and brief) showed high experiential relevance for the participants. High adherence, high completion rates and reasonable costs were found for both formats. Further studies are needed to definitively establish the potential efficacy, optimal duration, cost-effectiveness and experiential relevance to participants of cognitive intervention approaches.