818 results for big data
Abstract:
This book was conceived as an online manual aimed at professionals who are starting out in planning this kind of advertising and, above all, at students. It is designed to adapt to different types of readers and to different knowledge needs, and its online nature is used to make reading easier. It consists of 12 chapters: 1. Online advertising. 2. The advertising media plan. 3. The campaign's target group: definition, measurement, and typology of online media selection by audience. 4. Online advertising formats and advertising in social media. 5. Mobile devices as an advertising medium. 6. Pricing models and the contracting of advertising space. 7. Advertising effectiveness. 8. Tools for planning online advertising. 9. Media agencies and professional profiles. 10. Challenges and opportunities. 11. Sources of information. 12. Micro-topics.
Abstract:
Nowadays, online media represent a great choice for advertising. From the perspective of advertising media planning, new media offer new ways to reach consumers, but they also add complexity. The communication capacity of online media and their growing use open up a debate about the need to rethink the approach of 'traditional' advertising media planning, whose structure and work processes were developed when media were offline. This article therefore gives a panoramic view of the influence of new media on advertising media planning. It first describes the current scenario, analyzing Internet penetration and advertising expenditure, and presents the main online media according to their proximity to the offline conception of advertising media planning. Second, it addresses the current challenges in measuring new media as a symptom of the push toward a change of model. Finally, it closes by outlining some trends that are presented as drivers of change. This analysis suggests, however, that these aspects would not change the essence of advertising media planning, so it is questionable whether we can speak of a crisis or whether, instead, new media are showing that media planning needs to engage with this new scenario.
Abstract:
Among the main challenges posed by university teaching today, one stands out: moving toward student-centered teaching models capable of developing and guiding autonomous (tutored) learning in both face-to-face and remote activities. In this regard, the possibility of working with large, freely accessible georeferenced databases offers great potential for research and teaching in Urbanism. Acting as guides in the process of understanding and using large-scale data is therefore one of the main current challenges for teachers of Urbanism courses. This article aims to explain the experience developed at the Universidad de Alicante (UA) to introduce students to the intelligent consumption of information, so that they can carry out their own analyses and reach their own interpretations. The work presents the methods and tools used for this purpose, which open the way to new, dynamic forms of engaging with knowledge, to new active educational practices and, above all, to a new social awareness more conscious of and in tune with the world we inhabit.
Abstract:
The SmartUA Dashboard is a software application that makes it easy to locate and visualize, at any time and from anywhere, all the information gathered from the various data sources and sensor networks generated by the Smart University project at the Universidad de Alicante; to represent it as maps and charts; to search and filter that information; and to show the university community in particular, and the public in general, in an objective and intelligible way, the phenomena occurring on campus, interconnecting systems and people for better use of resources, efficient management, and continuous innovation.
Abstract:
In order to become better prepared to support Research Data Management (RDM) practices in sciences and engineering, Queen’s University Library, together with the University Research Services, conducted a research study of all ranks of faculty members, as well as postdoctoral fellows and graduate students at the Faculty of Engineering & Applied Science, Departments of Chemistry, Computer Science, Geological Sciences and Geological Engineering, Mathematics and Statistics, Physics, Engineering Physics & Astronomy, School of Environmental Studies, and Geography & Planning in the Faculty of Arts and Science.
Abstract:
Internet traffic classification is a mature research field, yet one of growing importance and with open technical challenges, partly due to the pervasive presence of Internet-connected devices in everyday life. We argue for innovative traffic classification solutions that are lightweight, adopt a domain-based approach, and go beyond application-level protocol categorization to classify Internet traffic by subject. To this purpose, this paper proposes a classification solution that leverages domain name information extracted from IPFIX summaries, DNS logs, and DHCP leases, and can be applied to any kind of traffic. Our solution is based on an extension of Word2vec unsupervised learning techniques running on a specialized Apache Spark cluster. In particular, learning techniques are used to generate word embeddings from a mixed dataset composed of domain names and natural-language corpora, in a lightweight way and with general applicability. The paper also reports lessons learned from our implementation and deployment experience, which demonstrates that our solution can process 5500 IPFIX summaries per second on an Apache Spark cluster with one slave instance in Amazon EC2 at a cost of $3860 per year. Experimental results on Precision, Recall, F-measure, Accuracy, and Cohen's Kappa show the feasibility and effectiveness of the proposal. The experiments prove that the words contained in domain names are related to the kind of traffic directed toward them; therefore, using specifically trained word embeddings, we are able to classify them into customizable categories. We also show that training word embeddings on larger natural-language corpora leads to improvements in precision of up to 180%.
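The core idea of the abstract above — tokenize domain names and assign each domain to the customizable category whose seed words its tokens are most similar to in embedding space — can be sketched in a few lines. This is a minimal, self-contained illustration: the hand-crafted 3-dimensional vectors below stand in for embeddings a trained Word2vec model would produce, and the category names and seed words are invented for the example, not taken from the paper.

```python
import math

# Toy vectors standing in for Word2vec embeddings learned from a mix of
# domain-name tokens and natural-language text (values invented).
embeddings = {
    "shop":  [0.9, 0.1, 0.0],
    "store": [0.8, 0.2, 0.1],
    "news":  [0.1, 0.9, 0.0],
    "daily": [0.2, 0.8, 0.1],
    "game":  [0.0, 0.1, 0.9],
    "play":  [0.1, 0.0, 0.8],
}

# Customizable categories, each defined by a few seed words.
categories = {
    "shopping": ["shop", "store"],
    "media":    ["news", "daily"],
    "gaming":   ["game", "play"],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def classify_domain(domain):
    # Split the domain into word tokens, then score each category by the
    # best similarity between any domain token and any category seed.
    tokens = [t for t in domain.lower().replace(".", "-").split("-")
              if t in embeddings]
    if not tokens:
        return None  # no known tokens: cannot classify
    scores = {
        cat: max(cosine(embeddings[t], embeddings[s])
                 for t in tokens for s in seeds)
        for cat, seeds in categories.items()
    }
    return max(scores, key=scores.get)

print(classify_domain("daily-news.example"))  # → media
```

In the paper's actual pipeline the embeddings come from Word2vec trained at scale on Spark; the scoring step here only illustrates why tokens shared between domain names and category vocabulary make subject-level classification possible.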
Abstract:
This thesis concerns the widespread trend toward the digital transformation of business processes. This evolution, which involves the use of modern information technologies including Cloud Computing, Big Data Analytics, and Mobile tools, is not without pitfalls, which must be identified and addressed appropriately case by case. In particular, the thesis focuses on a business case, that of the well-known Bologna-based company FAAC spa, and on its purchasing function. In the area of procurement, the company needs to restructure and digitalize its request-for-quotation (RfQ) process with suppliers, so that the purchasing function can concentrate on implementing the company strategy rather than on day-to-day operations. This work therefore carries out a project to implement a dedicated e-procurement platform for managing RfQs. First, some examples of project management from the literature are analyzed, and a model for managing this specific project is defined. The work then comprises: a phase defining the company's continuity objectives, an As-Is analysis of the processes, the definition of the specific project objectives and of the KPIs for performance evaluation, the design of the software platform and, finally, some assessments of the risks and of the implementation alternatives.
Listening to the devices: how can controversies in a debate on smart cities be resolved?
Abstract:
In this short article, I analyze a research workshop on the theme of "Smart Cities" in which I took part. In this analysis, I show how the concepts of "smart city" and "big data" are constructed differently by the two groups of people present at the event, whom I classify as "optimizers" and "regulators". These different ways of seeing the devices in question lead to a series of controversies. First, I frame how some of these controversies appear within the theoretical framework of the Social Construction of Technology (SCOT). I then argue that the controversies that arose during the event were not resolved, and are unlikely to be in the near future, unless an analytical model such as Actor-Network Theory is adopted, one that listens to a group ignored in those discussions: the devices employed in constructing the concept of the "Smart City".
Abstract:
The sociologist who heads one of the major big data analysis centers in Brazil says that politicians will only regain legitimacy when they learn that a "like" is serious business. Detecting trends and measuring the pulse of social aspirations has gained volume and real-time immediacy in the ocean of information of big data, the name given to the gigantic amount of data produced daily on the Internet. It is from this inexhaustible mine that the Rio de Janeiro sociologist Marco Aurelio Ruediger, of the Fundação Getulio Vargas, supplies the Department of Public Policy Analysis, a center that studies how Brazilians see the state machinery and the branches of the Republic.
Abstract:
GraphChi is the first reported disk-based graph engine that can handle billion-scale graphs efficiently on a single PC. It can execute several advanced data mining, graph mining, and machine learning algorithms on very large graphs. With its novel parallel sliding windows (PSW) technique for loading subgraphs from disk into memory for vertex and edge updates, it achieves data processing performance close to, and sometimes better than, that of mainstream distributed graph engines. The GraphChi authors noted, however, that memory is not effectively utilized with large datasets, which leads to suboptimal computation performance. In this paper, motivated by the concepts of 'pin' from TurboGraph and 'ghost' from GraphLab, we propose a new memory utilization mode for GraphChi, called Part-in-memory mode, to improve its performance. The main idea is to pin a fixed part of the data in memory during the whole computing process. Part-in-memory mode was implemented with only about 40 additional lines of code on top of the original GraphChi engine. Extensive experiments were performed with large real datasets (including a Twitter graph with 1.4 billion edges). The preliminary results show that the Part-in-memory memory management approach reduces GraphChi's running time by up to 60% for the PageRank algorithm. Interestingly, a larger portion of data pinned in memory does not always lead to better performance when the whole dataset cannot fit in memory: there exists an optimal portion of data to keep in memory to achieve the best computational performance.
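The pinning idea described above — keep a fixed fraction of the data resident in RAM for the whole computation and fetch the rest from disk on every access — can be sketched independently of GraphChi. The class below is a minimal illustration with invented names; the "backing" dictionary stands in for on-disk shards, and the counter only exists to make the saved I/O visible. It is not GraphChi's actual API.

```python
class PartInMemoryStore:
    """Pin a fixed prefix of the data in memory; everything else is
    (re)read from backing storage on each access."""

    def __init__(self, backing, pin_fraction=0.5):
        self.backing = backing                    # stands in for on-disk shards
        n_pinned = int(len(backing) * pin_fraction)
        self.pinned_keys = set(list(backing)[:n_pinned])
        self.pinned = {k: backing[k] for k in self.pinned_keys}
        self.disk_reads = 0                       # instrumentation only

    def get(self, key):
        if key in self.pinned:                    # pinned data: no disk I/O
            return self.pinned[key]
        self.disk_reads += 1                      # simulated disk read
        return self.backing[key]

    def put(self, key, value):
        if key in self.pinned_keys:
            self.pinned[key] = value              # stays in RAM until the end
        else:
            self.backing[key] = value             # written back to "disk"


# One sweep over ten vertices with 60% of them pinned: only the four
# unpinned vertices touch the simulated disk.
vertices = {v: 1.0 for v in range(10)}
store = PartInMemoryStore(vertices, pin_fraction=0.6)
for v in range(10):
    store.put(v, store.get(v) * 0.85)             # a PageRank-style damping step
print(store.disk_reads)  # → 4
```

The trade-off the abstract reports (pinning more is not always better once the dataset exceeds RAM) does not appear in this toy model; it arises in the real engine because memory pinned for one part of the graph is unavailable to PSW's sliding-window buffers.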
Abstract:
Multiple transformative forces target marketing, many of which derive from new technologies that allow us to sample thinking in real time (i.e., brain imaging), or to look at large aggregations of decisions (i.e., big data). There has been an inclination to refer to the intersection of these technologies with the general topic of marketing as "neuromarketing", but no serious effort to frame neuromarketing, which is the goal of this paper. Neuromarketing can be compared to neuroeconomics: neuroeconomics is generally focused on how individuals make "choices" and on representing distributions of choices. Neuromarketing, in contrast, focuses on how a distribution of choices can be shifted or "influenced", which can occur at multiple "scales" of behavior (e.g., individual, group, or market/society). Given that influence can affect choice through many cognitive modalities, and not just that of valuation of choice options, a science of influence also implies a need to develop a model of cognitive function integrating attention, memory, and reward/aversion function. The paper concludes with a brief description of three domains of neuromarketing application for studying influence, and their caveats.
Abstract:
This paper investigates the Matthew Effect on the Sina Weibo microblogging platform. We chose the microblogs on the ranking list of the Hot Microblog app in Sina Weibo as the target of our study. We analyze the differences in the repost numbers of microblogs before and after they enter the Hot Microblog ranking list, and we compare the spreading features of the microblogs on the ranking list with those of hot microblogs not on the list and with ordinary microblogs from users who had previously had microblogs on the list. Our study confirms the existence of the Matthew Effect in social networks. © 2013 IEEE.
Abstract:
The miniaturization, sophistication, proliferation, and accessibility of technologies are enabling the capture of more and previously inaccessible phenomena in Parkinson's disease (PD). However, more information has not translated into a greater understanding of disease complexity to satisfy diagnostic and therapeutic needs. Challenges include noncompatible technology platforms, the need for wide-scale and long-term deployment of sensor technology (among vulnerable elderly patients in particular), and the gap between the "big data" acquired with sensitive measurement technologies and their limited clinical application. Major opportunities could be realized if new technologies are developed as part of open-source and/or open-hardware platforms that enable multichannel data capture sensitive to the broad range of motor and nonmotor problems that characterize PD and are adaptable into self-adjusting, individualized treatment delivery systems. The International Parkinson and Movement Disorders Society Task Force on Technology is entrusted to convene engineers, clinicians, researchers, and patients to promote the development of integrated measurement and closed-loop therapeutic systems with high patient adherence that also serve to (1) encourage the adoption of clinico-pathophysiologic phenotyping and early detection of critical disease milestones, (2) enhance the tailoring of symptomatic therapy, (3) improve subgroup targeting of patients for future testing of disease-modifying treatments, and (4) identify objective biomarkers to improve the longitudinal tracking of impairments in clinical care and research. This article summarizes the work carried out by the task force toward identifying challenges and opportunities in the development of technologies with potential for improving the clinical management and the quality of life of individuals with PD. © 2016 International Parkinson and Movement Disorder Society.
Abstract:
Sensing technology is a key enabler of the Internet of Things (IoT) and can produce huge volumes of data, contributing to the Big Data paradigm. Modelling sensing information is an important and challenging topic that essentially influences the quality of smart city systems. In this paper, the author discusses the relevant technologies and information modelling in the context of the smart city and, in particular, reports an investigation of how to model sensing and location information to support smart city development.
Abstract:
Purpose – The purpose of this paper is to examine challenges and potential of big data in heterogeneous business networks and relate these to an implemented logistics solution. Design/methodology/approach – The paper establishes an overview of challenges and opportunities of current significance in the area of big data, specifically in the context of transparency and processes in heterogeneous enterprise networks. Within this context, the paper presents how existing components and purpose-driven research were combined for a solution implemented in a nationwide network for less-than-truckload consignments. Findings – Aside from providing an extended overview of today’s big data situation, the findings have shown that technical means and methods available today can comprise a feasible process transparency solution in a large heterogeneous network where legacy practices, reporting lags and incomplete data exist, yet processes are sensitive to inadequate policy changes. Practical implications – The means introduced in the paper were found to be of utility value in improving process efficiency, transparency and planning in logistics networks. The particular system design choices in the presented solution allow an incremental introduction or evolution of resource handling practices, incorporating existing fragmentary, unstructured or tacit knowledge of experienced personnel into the theoretically founded overall concept. Originality/value – The paper extends previous high-level view on the potential of big data, and presents new applied research and development results in a logistics application.