884 results for Connectivity, Connected Car, Big Data, KPI


Relevance: 100.00%

Abstract:

Technological evolution in contemporary communication structures digital systems through networks of connected computers and the massive use of technological devices. Digital data captured and distributed via applications installed on smartphones create a dynamic communication environment. Journalism and Communication are trying to adapt to the new informational ecosystem driven by constant technological innovations, which enable the creation of new environments and systems for accessing information of social relevance. New tools are emerging for the production and distribution of journalistic content, data-based products and intelligent interactions, algorithms used in various processes, hyperlocal platforms, and digital narrative and production systems. In this context, the objective of the research was to analyze and compare specific media and technology products: whether new technologies add attributes to journalistic productions and narratives, what their impacts are on the practice of the activity, and whether the processes for producing socially relevant information change in relation to traditional, consolidated journalistic processes. It investigates whether the use of information entered by users in real time improves the quality of narratives emerging through mobile devices, and whether gamification (or ludification) alters the perceived credibility of journalism, so that the way information and knowledge are produced and generated for content-demanding audiences can be rethought.

Relevance: 100.00%

Abstract:

During the development of the project I learned about Big Data, Android and MongoDB while helping to develop a system for predicting bipolar disorder crises through the massive analysis of information from various sources. Specifically, I wrote a theoretical part on NoSQL databases, Spark Streaming and Neural Networks, and then designed and configured a MongoDB database for the bipolar disorder project. I also learned about Android and designed and developed an Android mobile application to collect data to be used as input to the crisis prediction system. Once the development of the application was finished, I also carried out an evaluation with users.
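As a minimal illustration of the kind of storage layer described above, the following sketch stores one mobile-collected reading per document in MongoDB via pymongo. The database name, collection name, and field names are invented for illustration and are not the project's actual schema.

```python
# A minimal sketch, assuming a local MongoDB instance; the "bipolar_project"
# database, "sensor_readings" collection, and field names are illustrative only.
from datetime import datetime, timezone

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
readings = client["bipolar_project"]["sensor_readings"]

# One document per sampling event collected by the mobile application.
doc = {
    "patient_id": "anonymized-001",          # pseudonymous identifier
    "timestamp": datetime.now(timezone.utc),
    "steps_last_hour": 312,                  # example activity feature
    "screen_unlocks_last_hour": 14,          # example phone-usage feature
    "sleep_hours_last_night": 6.5,
}
readings.insert_one(doc)

# Index supporting time-ordered queries per patient, e.g. for building
# the input windows of a prediction model.
readings.create_index([("patient_id", 1), ("timestamp", 1)])
```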

Relevance: 100.00%

Abstract:

Cumulon is a system aimed at simplifying the development and deployment of statistical analysis of big data in public clouds. Cumulon allows users to program in their familiar language of matrices and linear algebra, without worrying about how to map data and computation to specific hardware and cloud software platforms. Given user-specified requirements in terms of time, monetary cost, and risk tolerance, Cumulon automatically makes intelligent decisions on implementation alternatives, execution parameters, and hardware provisioning and configuration settings -- such as what type of machines to acquire and how many of them. Cumulon also supports clouds with auction-based markets: it effectively utilizes computing resources whose availability varies according to market conditions, and suggests the best bidding strategies for them. Cumulon explores two alternative approaches toward supporting such markets, with different trade-offs between system and optimization complexity. An experimental study is conducted to show the efficiency of Cumulon's execution engine, as well as the optimizer's effectiveness in finding the optimal plan in the vast plan space.
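To make the provisioning decision described above concrete, here is a toy sketch that enumerates (machine type, count) configurations and picks the cheapest one whose estimated running time meets a deadline. The machine catalog, cost rates, and the linear-speedup runtime model are assumptions for illustration, not Cumulon's actual cost model or optimizer.

```python
# Toy provisioning search: cheapest (machine type, count) that meets a deadline.
from dataclasses import dataclass

@dataclass
class MachineType:
    name: str
    hourly_cost: float     # $ per machine-hour (assumed)
    relative_speed: float  # throughput relative to a baseline machine (assumed)

CATALOG = [
    MachineType("small", 0.10, 1.0),
    MachineType("large", 0.45, 4.0),
]

def best_plan(baseline_hours: float, deadline_hours: float, max_machines: int = 64):
    """Return (cost, machine, count, runtime) of the cheapest feasible plan,
    assuming the job parallelizes linearly (a simplification)."""
    best = None
    for m in CATALOG:
        for n in range(1, max_machines + 1):
            runtime = baseline_hours / (m.relative_speed * n)
            if runtime > deadline_hours:
                continue
            cost = runtime * m.hourly_cost * n
            if best is None or cost < best[0]:
                best = (cost, m.name, n, runtime)
    return best

print(best_plan(baseline_hours=200.0, deadline_hours=3.0))
```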

Relevance: 100.00%

Abstract:

The amount and quality of available biomass is a key factor for sustainable livestock industry and agricultural management decision making. Globally, 31.5% of land cover is grassland, while 80% of Ireland's agricultural land is grassland. In Ireland, grasslands are intensively managed and provide the cheapest feed source for animals. This dissertation presents a detailed state-of-the-art review of satellite remote sensing of grasslands, and the potential application of optical (Moderate-resolution Imaging Spectroradiometer (MODIS)) and radar (TerraSAR-X) time series imagery to estimate grassland biomass at two study sites (Moorepark and Grange) in the Republic of Ireland using both statistical and state-of-the-art machine learning algorithms. High quality weather data available from the on-site weather station was also used to calculate the Growing Degree Days (GDD) for Grange, to determine the impact of ancillary data on biomass estimation. In situ and satellite data covering 12 years for the Moorepark and 6 years for the Grange study sites were used to predict grassland biomass using multiple linear regression and Adaptive Neuro-Fuzzy Inference System (ANFIS) models. The results demonstrate that a dense (8-day composite) MODIS image time series, along with high quality in situ data, can be used to retrieve grassland biomass with high performance (R² = 0.86, p < 0.05, RMSE = 11.07 for Moorepark). The model for Grange was modified to evaluate the synergistic use of vegetation indices derived from remote sensing time series and accumulated GDD information. As GDD is strongly linked to plant development, or phenological stage, an improvement in biomass estimation would be expected. It was observed that, using the ANFIS model, the biomass estimation accuracy increased from R² = 0.76 (p < 0.05) to R² = 0.81 (p < 0.05) and the root mean square error was reduced by 2.72%. The work on the application of optical remote sensing was further developed using a TerraSAR-X Staring Spotlight mode time series over the Moorepark study site, to explore the extent to which very high resolution Synthetic Aperture Radar (SAR) data of interferometrically coherent paddocks can be exploited to retrieve grassland biophysical parameters. After filtering out the non-coherent plots, it is demonstrated that interferometric coherence can be used to retrieve grassland biophysical parameters (i.e., height, biomass), and that it is possible to detect changes due to grass growth, and grazing and mowing events, when the temporal baseline is short (11 days). However, it is not possible to automatically and uniquely identify the cause of these changes based only on the SAR backscatter and coherence, due to the ambiguity caused by tall grass laid down by the wind. Overall, the work presented in this dissertation has demonstrated the potential of dense remote sensing and weather data time series to predict grassland biomass using machine-learning algorithms, where high quality ground data were used for training. At present, a major limitation for national scale biomass retrieval is the lack of spatial and temporal ground samples, which can be partially resolved by minor modifications to the existing PastureBaseIreland database, adding the location and extent of each grassland paddock to the database.
As far as remote sensing data requirements are concerned, MODIS is useful for large scale evaluation, but due to its coarse resolution it is not possible to detect variations within and between fields at the farm scale. However, this issue will be resolved in terms of spatial resolution by the Sentinel-2 mission, and when both satellites (Sentinel-2A and Sentinel-2B) are operational the revisit time will reduce to 5 days, which, together with Landsat-8, should provide sufficient cloud-free data for operational biomass estimation at a national scale. The Synthetic Aperture Radar Interferometry (InSAR) approach is feasible if there are enough coherent interferometric pairs available; however, this is difficult to achieve due to the temporal decorrelation of the signal. For repeat-pass InSAR over a vegetated area, even an 11-day temporal baseline is too large. In order to achieve better coherence, a very high resolution is required at the cost of spatial coverage, which limits its scope for use in an operational context at a national scale. Future InSAR missions with pair acquisition in tandem mode will minimize the temporal decorrelation over vegetated areas for more focused studies. The proposed approach complements the current paradigm of Big Data in Earth Observation and illustrates the feasibility of integrating data from multiple sources. In future, this framework can be used to build an operational decision support system for the retrieval of grassland biophysical parameters based on data from planned long-term optical missions (e.g., Landsat, Sentinel) that will ensure the continuity of data acquisition. Similarly, the Spanish X-band PAZ and TerraSAR-X2 missions will ensure the continuity of TerraSAR-X and COSMO-SkyMed.
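As a minimal sketch of the regression-style biomass estimation described above, the following example fits a linear model of biomass against a vegetation index and accumulated GDD and reports R² and RMSE. The data are synthetic and the variable names and the plain linear model are illustrative assumptions, not the dissertation's MODIS time series or ANFIS configuration.

```python
# Synthetic illustration of biomass ~ vegetation index + GDD regression.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n = 200
ndvi = rng.uniform(0.3, 0.9, n)            # 8-day composite vegetation index (assumed)
gdd = rng.uniform(50, 600, n)              # accumulated growing degree days (assumed)
biomass = 40 * ndvi + 0.05 * gdd + rng.normal(0, 3, n)  # synthetic target

X = np.column_stack([ndvi, gdd])
model = LinearRegression().fit(X, biomass)
pred = model.predict(X)

print("R2:", round(r2_score(biomass, pred), 2))
print("RMSE:", round(mean_squared_error(biomass, pred) ** 0.5, 2))
```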

Relevance: 100.00%

Abstract:

In order to become better prepared to support Research Data Management (RDM) practices in sciences and engineering, Queen’s University Library, together with the University Research Services, conducted a research study of all ranks of faculty members, as well as postdoctoral fellows and graduate students at the Faculty of Engineering & Applied Science, Departments of Chemistry, Computer Science, Geological Sciences and Geological Engineering, Mathematics and Statistics, Physics, Engineering Physics & Astronomy, School of Environmental Studies, and Geography & Planning in the Faculty of Arts and Science.

Relevance: 100.00%

Abstract:

This paper discusses a series of artworks named CODEX, produced by the authors as part of a collaborative research project between the Centre for Research in Education, Art and Media (CREAM), University of Westminster, and the Oxford Internet Institute. Through these works, which take the form of experimental maps, large-scale installations and prints, we show how big data can be employed to reflect upon social phenomena through the formulation of critical, aesthetic and speculative geographies.

Relevance: 100.00%

Abstract:

Twitter is the biggest social network in the world, and every day millions of tweets are posted and discussed, expressing various views and opinions. A large variety of research activities have been conducted to study how these opinions can be clustered and analyzed, so that underlying tendencies can be uncovered. Due to the inherent weaknesses of tweets - very short texts and very informal styles of writing - it is rather hard to analyze tweet data with good performance and accuracy. In this paper, we attack the problem from another angle, using a two-layer structure to analyze the Twitter data: LDA combined with topic map modelling. The experimental results demonstrate that this approach makes progress in Twitter data analysis. However, more experiments with this method are needed to ensure that accurate analytic results can be maintained.
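As a minimal sketch of the first (LDA) layer of the two-layer approach described above, the following example fits a topic model over a few toy "tweets". The corpus, number of topics, and preprocessing are illustrative assumptions, and the topic-map layer is not shown.

```python
# Toy LDA over short texts, standing in for the first layer of the approach.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

tweets = [
    "traffic jam on the highway again this morning",
    "new phone battery lasts two days, impressed",
    "highway closed, terrible traffic downtown",
    "camera on this phone is amazing in low light",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)           # per-tweet topic proportions

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")
```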

Relevance: 100.00%

Abstract:

Abstract: Decision support systems have been widely used for years in companies to gain insights from internal data and thus make successful decisions. Lately, thanks to the increasing availability of open data, these systems are also integrating open data to enrich the decision-making process with external data. On the other hand, within an open-data scenario, decision support systems can also be useful for deciding which data should be opened, considering not only technical or legal constraints but also other requirements, such as the "reusing potential" of the data. In this talk, we focus on both issues: (i) open data for decision making, and (ii) decision making for opening data. We will first briefly comment on some research problems regarding the use of open data for decision making. Then, we will give an outline of a novel decision-making approach (based on how open data is actually being used in open-source projects hosted on GitHub) for supporting open data publication.

Bio of the speaker: Jose-Norberto Mazón holds a PhD from the University of Alicante (Spain). He is head of the "Cátedra Telefónica" on Big Data and coordinator of the Computing degree at the University of Alicante. He is also a member of the WaKe research group at the University of Alicante. His research work focuses on open data management, data integration and business intelligence within "big data" scenarios, and their application to the tourism domain (smart tourism destinations). He has published his research in international journals such as Decision Support Systems, Information Sciences, Data & Knowledge Engineering, and ACM Transactions on the Web. Finally, he is involved in the open data project at the University of Alicante, including its open data portal at http://datos.ua.es

Relevance: 100.00%

Abstract:

The use of data plays an increasingly significant role in carrying out the basic tasks of marketing. With data, customer needs can be identified better and responded to more effectively. In addition, the technological development of recent years in particular has significantly improved the possibilities for exploiting data in marketing. This thesis deals with a new kind of data utilization in marketing communications. The study examines the new era in the evolution of data, big data, which particularly affects the development of the targeting of marketing communications. The mobile online bank is examined as the targeting channel, which is why the study of marketing communications is limited mainly to customer service and advertising. The purpose of the thesis is to answer the question: What opportunities and challenges does big data bring to the targeting of marketing communications in a mobile online bank? The answer to this research question is formed through two sub-questions: What opportunities and challenges does big data bring to the targeting of marketing communications? What kind of channel for targeted marketing communications is a mobile online bank? The research question was answered both through a theoretical-conceptual review of earlier research carried out in the theoretical part of the thesis and through a separate empirical case study. The qualitative case study was carried out as thematic interviews concerning S-Pankki's mobile online bank, S-mobiili. Seven experts and two S-mobiili users were interviewed. The theoretical part of the thesis showed that big data makes it possible to know the consumer better as a whole, which improves traditional means of targeting marketing communications and also offers entirely new ones; through these, it is possible to influence a company's competitive advantage. The theoretical part found that the challenges brought by big data relate to the novelty and unfamiliarity of the phenomenon, to data management, and to challenges coming from outside companies concerning privacy protection, consumer opinions, and various imposed restrictions. The results of the case study differed from these findings only in how the importance of the challenges was weighted: the greatest data management challenge that emerged in the empirical study was the need for technologies, rather than the need for expertise identified in the theoretical part. In addition, external challenges were not perceived as significant in the empirical study. The results of the case study supported the picture formed in the theoretical part of the mobile online bank as a channel for targeted marketing communications: it is a personal banking application through which a well-known customer can be reached efficiently in a secure environment, regardless of time and place. The mobile online bank is presumably used in a goal-oriented manner, but possibly also partly for entertainment and for passing time. In addition, within the application it is possible to gain the customer's undivided attention, although in practice the various contexts in which a smartphone is used may interfere with this. The acceptability of targeted marketing communications delivered through a mobile online bank can presumably be influenced by asking for permission, by pull-type advertising, by the usefulness of the communication, and by the trustworthiness of the communicating company's brand. The subject area of the thesis is still very new and changing, which is why the related phenomena and terms are partly unestablished and difficult to grasp. This highlights useful opportunities for further research, for example a concept-analytical study of the term big data. Further research would also be useful concerning targeting carried out with big data and the mobile online bank

Relevance: 100.00%

Abstract:

This bachelor's thesis was carried out as a literature review whose goal is to identify use cases for data analytics and the impact of data utilization on business. The work deals with the use of data analytics and the challenges of exploiting data effectively. The work is limited to examining the financial management of a company, where analytics is used in management accounting and financial accounting. The exponential growth rate of the amount of data creates new challenges and opportunities for the use of data analytics. Data in itself, however, is not of great value to a company; the value arises through processing. Although data analytics is already widely studied and used, it offers far greater possibilities than current applications. One of the key results of the work is that data analytics can make management accounting more efficient and ease the tasks of financial accounting. However, the amount of available data is growing so quickly that the available technology and the level of expertise cannot keep pace with the development. In particular, the broader adoption of big data and its effective utilization will increasingly affect the practices and applications of financial management in the future.

Relevance: 100.00%

Abstract:

With the development of electronic devices, more and more mobile clients are connected to the Internet, and they generate massive amounts of data every day. We live in an age of "Big Data", generating data on the order of hundreds of millions of records every day. By analyzing these data and making predictions, we can produce better development plans. Unfortunately, traditional computation frameworks cannot meet this demand, which is why Hadoop was put forward. The paper first introduces the background and development status of Hadoop, compares MapReduce in Hadoop 1.0 with YARN in Hadoop 2.0, and analyzes their advantages and disadvantages. Because resource management is the core role of YARN, the paper then studies the resource allocation module, including resource management, the resource allocation algorithm, the resource preemption model, and the whole resource scheduling process from requesting resources to completing allocation. It also introduces and compares the FIFO Scheduler, Capacity Scheduler, and Fair Scheduler. The main work of this paper is researching and analyzing the Dominant Resource Fairness (DRF) algorithm of YARN and putting forward a maximum-resource-utilization algorithm based on it; the paper also suggests improvements to unreasonable aspects of the resource preemption model. Emphasizing "fairness" during resource allocation is the core concept of the DRF algorithm in YARN. Because the cluster serves multiple users and multiple resource types, each user's resource request also spans multiple resources. The DRF algorithm divides a user's resources into the dominant resource and normal resources: for a user, the dominant resource is the one whose share is highest among all requested resources, and the others are normal resources. The DRF algorithm requires the dominant resource share of each user to be equal. But in cases where different users' dominant resource amounts differ greatly, emphasizing "fairness" is not suitable and cannot improve the resource utilization of the cluster. By analyzing these cases, this thesis puts forward a new allocation algorithm based on DRF. The new algorithm still takes "fairness" into consideration, but not as its main principle; maximizing resource utilization is its main principle and goal. Comparing the results of DRF and the new DRF-based algorithm shows that the new algorithm achieves higher resource utilization than DRF. The last part of the thesis installs the YARN environment and uses the Scheduler Load Simulator (SLS) to simulate the cluster environment.
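To make the DRF idea described above concrete, here is a toy allocation loop: at each step the user with the smallest dominant share receives one more task, as long as its demand still fits in the cluster. The cluster capacities and per-task demands are invented for illustration; this is not YARN's scheduler code or the thesis's proposed algorithm.

```python
# Toy Dominant Resource Fairness allocation over two resource types.
CLUSTER = {"cpu": 90, "mem": 180}  # total resources (vcores, GB), assumed

users = {
    "A": {"demand": {"cpu": 1, "mem": 4}, "tasks": 0},  # memory-heavy tasks
    "B": {"demand": {"cpu": 3, "mem": 1}, "tasks": 0},  # CPU-heavy tasks
}

def dominant_share(user):
    """Largest fraction of any cluster resource consumed by this user."""
    return max(user["tasks"] * user["demand"][r] / CLUSTER[r] for r in CLUSTER)

def fits(user, used):
    return all(used[r] + user["demand"][r] <= CLUSTER[r] for r in CLUSTER)

used = {r: 0 for r in CLUSTER}
while True:
    # Pick the user with the lowest dominant share whose next task still fits.
    candidates = [u for u in users.values() if fits(u, used)]
    if not candidates:
        break
    chosen = min(candidates, key=dominant_share)
    chosen["tasks"] += 1
    for r in CLUSTER:
        used[r] += chosen["demand"][r]

for name, u in users.items():
    print(name, "tasks:", u["tasks"], "dominant share:", round(dominant_share(u), 2))
```

Running this equalizes the two users' dominant shares, which is exactly the behavior the thesis argues can leave resources idle when users' demands differ greatly.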

Relevance: 100.00%

Abstract:

Americans are accustomed to a wide range of data collection in their lives: census, polls, surveys, user registrations, and disclosure forms. When logging onto the Internet, users' actions are tracked everywhere: clicking, typing, tapping, swiping, searching, and placing orders. All of this data is stored to create data-driven profiles of each user. Social network sites, furthermore, set the voluntary sharing of personal data as the default mode of engagement. But the time and energy people devote to creating this massive amount of data, on paper and online, are taken for granted. Few people would consider their time and energy spent on data production as labor. Even if some people do acknowledge their labor for data, they believe it is accessory to the activities at hand. In the face of pervasive data collection and the rising time spent on screens, why do people keep ignoring their labor for data? How has labor for data become invisible, something disregarded by many users? What does invisible labor for data imply for everyday cultural practices in the United States? Invisible Labor for Data addresses these questions. I argue that three intertwined forces contribute to framing data production as being void of labor: data production institutions throughout history, the Internet's technological infrastructure (especially with the implementation of algorithms), and the multiplication of virtual spaces. There is a common tendency in the framework of human interactions with computers to deprive data and bodies of their materiality. My Introduction and Chapter 1 offer theoretical interventions by reinstating embodied materiality and redefining labor for data as an ongoing process. The middle chapters present case studies explaining how labor for data is pushed to the margins of narratives about data production. I focus on a nationwide debate in the 1960s on whether the U.S. should build a databank, contemporary Big Data practices in the data broker and Internet industries, and the group of people who are hired to produce data for other people's avatars in virtual games. I conclude with a discussion of how the new development of crowdsourcing projects may usher in a new chapter in exploiting invisible and discounted labor for data.

Relevance: 100.00%

Abstract:

Context: Understanding connectivity patterns in relation to habitat fragmentation is essential to landscape management. However, connectivity is often judged from expert opinion or species occurrence patterns, with very few studies considering the actual movements of individuals. Path selection functions provide a promising tool to infer functional connectivity from animal movement data, but their practical application remains limited. Objectives: We aimed to describe functional connectivity patterns in a forest carnivore using path-level analysis, and to explore how connectivity is affected by land cover patterns and road networks. Methods: We radio-tracked 22 common genets in a mixed forest-agricultural landscape of southern Portugal. We developed path selection functions discriminating between observed and random paths in relation to landscape variables. These functions were used together with land cover information to map conductance surfaces. Results: Genets moved preferentially within forest patches and close to riparian habitats. Functional connectivity declined with increasing road density, but increased with the proximity of culverts, viaducts and bridges. Functional connectivity was favoured by large forest patches, and by the presence of riparian areas providing corridors within open agricultural land. Roads reduced connectivity by dissecting forest patches, but had less effect on riparian corridors due to the presence of crossing structures. Conclusions: Genet movements were jointly affected by the spatial distribution of suitable habitats and by the presence of a road network dissecting such habitats and creating obstacles in areas otherwise permeable to animal movement. Overall, the study showed the value of path-level analysis to assess functional connectivity patterns in human-modified landscapes.
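As a minimal sketch of a path selection function in the spirit described above, the following example fits a logistic model that discriminates observed from random paths using landscape covariates. The data are synthetic, the covariate names are invented, and plain (rather than matched, conditional) logistic regression is a simplifying assumption relative to the study's actual method.

```python
# Toy path selection function: observed vs random paths from landscape covariates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 500

# Covariates summarized along each candidate path (all assumed).
forest_prop = rng.uniform(0, 1, n)     # proportion of path inside forest
road_density = rng.uniform(0, 5, n)    # km of road per km^2 along the path
dist_riparian = rng.uniform(0, 2, n)   # mean distance to riparian habitat (km)

# Synthetic labels: "observed" paths favour forest and riparian proximity,
# and avoid roads.
logit = 2.5 * forest_prop - 0.8 * road_density - 1.2 * dist_riparian
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([forest_prop, road_density, dist_riparian])
psf = LogisticRegression().fit(X, y)
print(dict(zip(["forest_prop", "road_density", "dist_riparian"],
               psf.coef_[0].round(2))))

# The fitted probabilities could then be mapped over a landscape grid to build
# a conductance surface, analogous to the mapping step described above.
```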

Relevance: 100.00%

Abstract:

This paper is an overview of some of the implications of IoT for the healthcare field. Due to the increasing number of IoT solutions, healthcare cannot remain outside this paradigm. The contribution of this paper is to introduce directions for achieving global connectivity between the Internet of Things (IoT) and medical environments. The need to integrate everything into a global environment is a huge challenge for everyone involved (from electrical engineers to data engineers). This revolution is redesigning the way we see healthcare, from the smallest sensor to the big data collected.

Relevance: 100.00%

Abstract:

Intelligent systems are now inherent to society, supporting a synergistic human-machine collaboration. Beyond economic and climate factors, energy consumption is strongly affected by the performance of computing systems, and poorly functioning software may invalidate any improvement attempt. In addition, data-driven machine learning algorithms are the basis for human-centered applications, and their interpretability is one of the most important features of computational systems. Software maintenance is a critical discipline to support automatic and life-long system operation. As most software registers its inner events by means of logs, log analysis is an approach to keeping systems operational. Logs are characterized as Big Data assembled in large-flow streams, being unstructured, heterogeneous, imprecise, and uncertain. This thesis addresses fuzzy and neuro-granular methods to provide maintenance solutions applied to anomaly detection (AD) and log parsing (LP), dealing with data uncertainty and identifying ideal time periods for detailed software analyses. LP provides a deeper semantic interpretation of the anomalous occurrences. The solutions evolve over time and are general-purpose, being highly applicable, scalable, and maintainable. Granular classification models, namely the Fuzzy set-Based evolving Model (FBeM), the evolving Granular Neural Network (eGNN), and the evolving Gaussian Fuzzy Classifier (eGFC), are compared on the AD problem. The evolving Log Parsing (eLP) method is proposed to approach automatic parsing of system logs. All the methods perform recursive mechanisms to create, update, merge, and delete information granules according to the data behavior. For the first time in the evolving intelligent systems literature, the proposed method, eLP, is able to process streams of words and sentences. Essentially, regarding AD accuracy, FBeM achieved (85.64 ± 3.69)%; eGNN reached (96.17 ± 0.78)%; eGFC obtained (92.48 ± 1.21)%; and eLP reached (96.05 ± 1.04)%. Besides being competitive, eLP in particular generates a log grammar and presents a higher level of model interpretability.
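As a toy sketch of the evolving-granular idea described above, the following example processes a stream in which each sample either falls inside (or near) an existing interval granule, which is then expanded, or is flagged as anomalous and seeds a new granule. The 1-D feature, slack threshold, and expansion rule are illustrative assumptions, not the FBeM, eGNN, eGFC, or eLP formulations.

```python
# Toy evolving-granule stream processing for anomaly flagging.
from dataclasses import dataclass

@dataclass
class Granule:
    lo: float
    hi: float

    def contains(self, x: float, slack: float) -> bool:
        return self.lo - slack <= x <= self.hi + slack

    def expand(self, x: float) -> None:
        self.lo, self.hi = min(self.lo, x), max(self.hi, x)

def process_stream(stream, slack=0.5):
    granules, anomalies = [], []
    for x in stream:
        hit = next((g for g in granules if g.contains(x, slack)), None)
        if hit is None:
            anomalies.append(x)             # no granule explains x -> flag it
            granules.append(Granule(x, x))  # ...and seed a new granule from it
        else:
            hit.expand(x)                   # granule adapts to the data behavior
    return granules, anomalies

gs, an = process_stream([1.0, 1.2, 0.9, 5.7, 1.1, 5.9, 12.3])
print("granules:", [(g.lo, g.hi) for g in gs])
print("flagged:", an)
```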