32 results for Impala, Hadoop, Big Data, HDFS, Social Business Intelligence, SBI, cloudera


Relevance: 100.00%

Abstract:

Due to the large increase in digital data that has taken place in recent years, a new parallel computing paradigm has arisen to process big data in an efficient way. Many of the systems based on this paradigm, also called data-intensive computing systems, follow the Google MapReduce programming model. The main advantage of MapReduce systems is that they are based on the idea of sending the computation to where the data resides, trying to provide scalability and efficiency. In failure-free scenarios, these frameworks usually achieve good results. However, most of the scenarios where they are used are characterized by the existence of failures. Consequently, these frameworks incorporate fault tolerance and dependability techniques as built-in features. On the other hand, dependability improvements are known to imply additional resource costs. This is reasonable, and providers offering these infrastructures are aware of it. Nevertheless, not all approaches provide the same trade-off between fault tolerance capabilities (or, more generally, reliability capabilities) and cost. This thesis addresses the coexistence of reliability and resource efficiency in MapReduce-based systems, looking for methodologies that introduce minimal cost and guarantee an appropriate level of reliability. To achieve this, we have proposed: (i) a formalization of a failure detector abstraction; (ii) an alternative solution to the single points of failure of these frameworks; and, finally, (iii) a novel feedback-based resource allocation system at the container level. These generic contributions have been instantiated for the Hadoop YARN architecture, which is nowadays the state-of-the-art framework in the data-intensive computing systems community. The thesis demonstrates how all our approaches outperform Hadoop YARN in terms of both reliability and resource efficiency.
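
The failure detector abstraction is only named in the abstract, so the following is a generic illustration rather than the thesis's formalization: a minimal heartbeat-based detector sketch in Python that suspects a node once its heartbeats go stale. The class name and timeout value are hypothetical.

```python
import time

class HeartbeatFailureDetector:
    """Minimal heartbeat-based failure detector sketch: a node is
    suspected once its last heartbeat is older than the timeout."""

    def __init__(self, timeout_s=5.0):
        self.timeout_s = timeout_s
        self.last_heartbeat = {}  # node_id -> time of last heartbeat

    def heartbeat(self, node_id):
        # Called whenever a heartbeat message arrives from a node.
        self.last_heartbeat[node_id] = time.monotonic()

    def suspected(self):
        # Nodes whose heartbeats have gone stale are suspected to have failed.
        now = time.monotonic()
        return {n for n, t in self.last_heartbeat.items()
                if now - t > self.timeout_s}

# Usage sketch:
fd = HeartbeatFailureDetector(timeout_s=0.1)
fd.heartbeat("worker-1")
time.sleep(0.2)
print(fd.suspected())  # {'worker-1'}
```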

Relevance: 100.00%

Abstract:

Over the last few years, the data center market has grown exponentially, and this tendency continues today. As a direct consequence of this trend, the industry is pushing the development and implementation of new technologies to improve the energy efficiency of data centers. An adaptive dashboard would allow the user to monitor the most important parameters of a data center in real time. For that reason, monitoring companies work with IoT big data filtering tools and cloud computing systems to handle the volumes of data obtained from the sensors placed in a data center. Analyzing market trends in this field, we can affirm that the study of predictive algorithms has become an essential area for competitive IT companies. Complex algorithms are used to forecast risk situations based on historical data and to warn the user in case of danger. Considering that several different users will interact with this dashboard, from IT experts or maintenance staff to accounting managers, it is vital to personalize it automatically. Following that line of thought, the dashboard should only show metrics relevant to the user, in different formats such as overlaid maps or representative graphs, among others. These maps will show all the information needed in a visual, easy-to-evaluate way. To sum up, this dashboard will allow the user to visualize and control a wide range of variables. Monitoring essential factors such as average temperature, gradients or hotspots, as well as energy and power consumption and savings by rack or building, would allow clients to understand how their equipment is behaving, helping them to optimize the energy consumption and efficiency of the racks. It would also help them to prevent possible damage to the equipment with predictive high-tech algorithms.
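
The forecasting algorithms are not specified in the abstract; as a toy illustration of the kind of prediction such a dashboard could run, the following sketch fits a linear trend to recent temperature readings and raises a warning when the extrapolation crosses a limit. The function name, horizon and threshold are invented for this example.

```python
import numpy as np

def temperature_risk(readings, horizon_steps=10, threshold_c=35.0):
    """Fit a linear trend to recent readings and flag a risk if the
    extrapolated temperature exceeds the threshold within the horizon."""
    t = np.arange(len(readings))
    slope, intercept = np.polyfit(t, readings, deg=1)
    projected = slope * (len(readings) + horizon_steps) + intercept
    return projected > threshold_c, projected

# Usage sketch: a steadily rising rack temperature.
risky, proj = temperature_risk([28.0, 28.6, 29.1, 29.9, 30.5], horizon_steps=20)
print(risky, round(proj, 1))  # True if the trend crosses 35 degrees C soon
```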

Relevance: 100.00%

Abstract:

One of the main challenges facing next generation Cloud platform services is the need to simultaneously achieve ease of programming, consistency, and high scalability. Big Data applications have so far focused on batch processing. The next step for Big Data is to move to the online world. This shift will raise the requirements for transactional guarantees. CumuloNimbo is a new EC-funded project led by Universidad Politécnica de Madrid (UPM) that addresses these issues via a highly scalable multi-tier transactional platform as a service (PaaS) that bridges the gap between OLTP and Big Data applications.

Relevance: 100.00%

Abstract:

Sensor network deployments have become a primary source of big data about the real world that surrounds us, measuring a wide range of physical properties in real time. With such large amounts of heterogeneous data, a key challenge is to describe and annotate sensor data with high-level metadata, using and extending models, for instance with ontologies. However, to automate this task there is a need for enriching the sensor metadata using the actual observed measurements and extracting useful meta-information from them. This paper proposes a novel approach to the characterization and extraction of semantic metadata through the analysis of raw sensor observations. The approach consists in using approximations of the raw sensor measurements, based on distributions of the observation slopes, in building a classification scheme that automatically infers sensor metadata such as the type of observed property, and in integrating the semantic analysis results with existing sensor network metadata.
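
The abstract names the ingredients (slope distributions of raw observations feeding a classification scheme) without giving details; the following is a minimal sketch of that general idea. The histogram binning, the synthetic data and the classifier choice are all assumptions of this example.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def slope_histogram(series, bins=8, rng=(-1.0, 1.0)):
    """Describe a raw sensor series by the distribution of its
    first-difference slopes, as a normalized histogram feature vector."""
    slopes = np.diff(np.asarray(series, dtype=float))
    hist, _ = np.histogram(slopes, bins=bins, range=rng, density=True)
    return hist

# Toy training data: a slowly varying 'temperature' vs. a spiky 'wind speed'.
temp = np.cumsum(np.random.normal(0, 0.05, 200)) + 20
wind = np.abs(np.random.normal(0, 0.8, 200))
X = [slope_histogram(temp), slope_histogram(wind)]
y = ["temperature", "wind_speed"]

# Infer the observed property of an unlabeled series from its slope profile.
clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
unseen = np.cumsum(np.random.normal(0, 0.05, 200))
print(clf.predict([slope_histogram(unseen)]))  # ['temperature']
```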

Relevance: 100.00%

Abstract:

During our daily activity, today's society constantly interacts through electronic devices and telecommunication services, such as the telephone, e-mail, banking transactions or online social networks. Unconsciously, we are massively leaving traces of our activity in the service providers' databases. These new data sources have the dimensions required to observe patterns of human behavior at large scales. As a result, there has been an unprecedented explosion of studies of social systems driven by data analysis and computational processes. In this thesis, we develop computational and mathematical methods to analyze social systems by means of the combined study of human activity data and the theory of complex networks. Our goal is to characterize and understand the systems of social interactions that emerge in the new technological spaces, such as the online social network Twitter and mobile telephony. We analyze these systems by building complex networks and time series, studying their structure, functioning and evolution over time. We also investigate the nature of the observed patterns through the mechanisms that rule the interactions among individuals, and we measure the impact of critical events on the system's behavior. For this purpose, we have proposed models that explain the global structures and the emergent dynamics of information flow in the system.

For the studies of the online social network Twitter, we have based our analysis on specific conversations, such as political protests, major events and electoral processes. From the messages in the conversations, we identify the participating users and build networks of interactions among them. Specifically, we build one network to represent who receives whose messages and another to represent who propagates whose messages. In general, we have found that these structures have complex properties, such as explosive growth and scale-free degree distributions. Based on the topology of these networks, we have identified three types of users that determine the information flow according to their activity and influence. To measure the users' influence on the conversations, we have introduced a new measure called user efficiency, defined as the number of retransmissions obtained per message posted; it measures the effect of individual efforts on the collective reaction. We have observed that the distribution of this property is ubiquitous across several Twitter conversations, regardless of their size or social context. We therefore suggest that there is universality in the relationship between individual efforts and collective reactions on Twitter. To explain the factors that determine the emergence of the efficiency distribution, we have developed a computational model that simulates the propagation of messages on the Twitter social network, based on the independent cascade mechanism. This model allows us to measure the effect that both the topology of the underlying social network and the way users post messages have on the efficiency distribution. The results indicate that the emergence of a select group of highly efficient users depends on the heterogeneity of the underlying network and not on individual behavior.

Moreover, we have developed techniques to infer the degree of political polarization in social networks. We propose a methodology to estimate opinions in social networks and to measure the degree of polarization in the obtained opinions. We have designed a model to study the effect that the opinion of a small group of influential users, called the elite, has on the opinions of the majority of users. The model yields an opinion distribution on which we measure the degree of polarization. We apply our methodology to measure the polarization of message diffusion networks during a Twitter conversation in a politically polarized society. The results show a very good agreement with offline data. With this study, we have shown that the proposed methodology is capable of detecting different degrees of polarization depending on the structure of the network.

Finally, we have studied human behavior from mobile phone data. On the one hand, we have characterized the impact that natural disasters, such as floods, have on collective behavior. We found that communication patterns are abruptly altered in the areas affected by the catastrophe, which shows that the impact on the region could be measured almost in real time and without deploying efforts on the ground. On the other hand, we have studied human activity and mobility patterns to characterize the interactions between regions of a developing country. We found that the call and trajectory networks present community structures associated with regions and urban centers. In summary, we have shown that it is possible to understand complex social processes by analyzing human activity data with the theory of complex networks. Throughout the thesis, we have verified that social phenomena such as influence, political polarization and reaction to critical events are reflected in the structural and dynamical patterns of the networks built from data on conversations in online social networks or mobile telephony.
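
The user efficiency measure is defined in the abstract as retransmissions obtained per message posted; a minimal sketch of that computation, assuming message records are available as (author, retransmitted_author) pairs, could look as follows.

```python
from collections import Counter

def user_efficiency(messages):
    """Efficiency per user: retransmissions obtained / messages posted.
    `messages` is an iterable of (author, retransmitted_author_or_None)
    pairs; a non-None second element means the message retransmits
    content originally posted by that user."""
    posted = Counter(author for author, _ in messages)
    retransmitted = Counter(src for _, src in messages if src is not None)
    return {u: retransmitted[u] / posted[u] for u in posted}

# Usage sketch: alice posts twice and is retransmitted twice.
msgs = [("alice", None), ("alice", None),
        ("bob", "alice"), ("carol", "alice"), ("bob", None)]
print(user_efficiency(msgs))  # {'alice': 1.0, 'bob': 0.0, 'carol': 0.0}
```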

Relevance: 100.00%

Abstract:

Since the beginning of the Internet, Internet Service Providers (ISPs) have seen the need to give users' traffic different treatments defined by agreements between the ISP and its customers. This procedure, known as Quality of Service Management, has not changed much in recent years (DiffServ and Deep Packet Inspection have been the most frequently chosen mechanisms). However, the incremental growth of Internet users and services, jointly with the application of recent Machine Learning techniques, opens up the possibility of going one step forward in the smart management of network traffic. In this paper, we first survey current tools and techniques for QoS Management. Then we introduce clustering and classifying Machine Learning techniques for traffic characterization, and the concept of Quality of Experience. Finally, with all these components, we present a brand-new framework that will manage Quality of Service in a smart way in a telecom Big Data scenario, for both mobile and fixed communications.
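
The framework itself is only named in the abstract; as a rough illustration of clustering-based traffic characterization, the following sketch groups flows by simple statistical features with k-means. The feature set and cluster count are assumptions made for this example.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Toy flow records: (mean packet size in bytes, flow duration in s, packets/s).
flows = np.array([
    [1400.0, 120.0, 800.0],   # bulk-transfer-like
    [1350.0, 300.0, 750.0],
    [160.0, 45.0, 50.0],      # interactive/VoIP-like
    [140.0, 60.0, 45.0],
])

# Standardize features so no single scale dominates, then cluster.
X = StandardScaler().fit_transform(flows)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # e.g. [0 0 1 1]: two traffic classes emerge from statistics alone
```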

Relevance: 100.00%

Abstract:

In the last decade, the research community has focused on new classification methods that rely on statistical characteristics of Internet traffic, instead of the previously popular port-number-based or payload-based methods, which face ever stronger constraints. Some research works based on statistical characteristics generated large feature sets of Internet traffic; however, it is nowadays impossible to handle hundreds of features in big data scenarios, as this only leads to unacceptable processing times and misleading classification results due to redundant and correlated data. Consequently, a feature selection procedure is essential in the process of Internet traffic characterization. This paper presents a survey of feature selection methods: feature selection frameworks are introduced, and different categories of methods are briefly explained and compared; several proposals on feature selection in Internet traffic characterization are shown; finally, a future application of feature selection to a concrete project is proposed.
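
As a minimal illustration of the filter category of feature selection methods the survey covers, here is a sketch that ranks features by mutual information with the class label; the dataset is synthetic and the choice of k is arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Toy stand-in for a traffic dataset: 200 flows, 20 features, few informative.
X, y = make_classification(n_samples=200, n_features=20, n_informative=4,
                           n_redundant=6, random_state=0)

# Filter-style selection: keep the k features with the highest mutual
# information with the class label, discarding redundant/irrelevant ones
# before any classifier is trained.
selector = SelectKBest(mutual_info_classif, k=5).fit(X, y)
print(np.flatnonzero(selector.get_support()))  # indices of the retained features
```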

Relevance: 100.00%

Abstract:

Urban economic activities are an essential facet in defining a city's identity. Traditional approaches very often rely on the most theoretical and quantitative aspects of the studies, de facto excluding a direct association between those findings and the tangible subject of the analysis. To fill this gap, the Big Data era and information visualization methodologies could help analysts, stakeholders and the general audience gain new insight into the field. In this paper, we want to provide some food for thought about the new opportunities arising in visual urban economies, as well as present some visual results on possible scenarios.

Relevance: 100.00%

Abstract:

One of the most pressing needs in cloud computing and big data is that of having scalable and highly available databases. One way to address these needs is to leverage the scalable replication techniques developed in the last decade, which allow increasing both the availability and the scalability of databases. Many replication protocols have been proposed during this period; the main research challenge was how to scale under the eager replication model, the one that provides consistency across replicas. This thesis provides an in-depth study of three eager database replication systems based on relational systems (Middle-R, C-JDBC and MySQL Cluster) and three systems based on In-Memory Data Grids (JBoss Data Grid, Oracle Coherence and Terracotta Ehcache). The thesis explores these systems in terms of their architecture, replication protocols, fault tolerance and various other functionalities. It also provides an experimental analysis of these systems using state-of-the-art benchmarks: TPC-C and TPC-W for the relational systems, and the Yahoo! Cloud Serving Benchmark for the In-Memory Data Grids. The thesis also discusses three graph databases, Neo4j, Titan and Sparksee, in terms of their architecture and transactional capabilities, and highlights the weaker transactional consistency provided by these systems. It discusses an implementation of snapshot isolation in the Neo4j graph database to provide stronger isolation guarantees for transactions.
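
The Neo4j snapshot isolation implementation is not described in the abstract; as a generic illustration of the underlying idea (a transaction reads the latest version committed before its snapshot timestamp), here is a toy multi-version store. All names here are invented, not taken from the thesis or from Neo4j.

```python
class MVStore:
    """Toy multi-version store: each key maps to a list of
    (commit_ts, value) pairs, so a transaction can read as of its
    start timestamp (the essence of snapshot isolation)."""

    def __init__(self):
        self.versions = {}   # key -> list of (commit_ts, value), ts ascending
        self.clock = 0

    def commit(self, key, value):
        self.clock += 1
        self.versions.setdefault(key, []).append((self.clock, value))

    def read(self, key, snapshot_ts):
        # Latest version committed at or before the transaction's snapshot.
        candidates = [(ts, v) for ts, v in self.versions.get(key, [])
                      if ts <= snapshot_ts]
        return candidates[-1][1] if candidates else None

# Usage sketch: a transaction that started at ts=1 never sees later writes.
db = MVStore()
db.commit("balance", 100)      # commit_ts = 1
snapshot_ts = db.clock         # transaction starts here
db.commit("balance", 40)       # commit_ts = 2, a concurrent writer
print(db.read("balance", snapshot_ts))  # 100, the snapshot value
```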

Relevance: 100.00%

Abstract:

Context-aware applications, which can adapt their behavior to changing environments, are attracting more and more attention. To reduce the complexity of developing such applications, context-aware middleware, which introduces context awareness into traditional middleware, is highlighted as providing a homogeneous interface with generic context management solutions. This paper provides a survey of state-of-the-art context-aware middleware architectures proposed from 2009 through 2015. First, a preliminary background, covering the principles of context, context awareness, context modelling, and context reasoning, is provided for a comprehensive understanding of context-aware middleware. On this basis, an overview of eleven carefully selected middleware architectures is presented and their main features explained. Then, thorough comparisons and analysis of the presented middleware architectures are performed based on technical parameters including architectural style, context abstraction, context reasoning, scalability, fault tolerance, interoperability, service discovery, storage, security and privacy, context-awareness level, and cloud-based big data analytics. The analysis shows that there is actually no context-aware middleware architecture that complies with all the requirements. Finally, challenges are pointed out as open issues for future work.

Relevance: 100.00%

Abstract:

Sequencing of Information Technology Infrastructure Library (ITIL) processes with respect to their order of implementation is one of the pending issues in the ITIL handbooks. Other frameworks and process models related to service management (e.g., COBIT, COSO and CMMI-SVC) are quite well described in the literature and in their handbooks. However, the identification of the first process to be implemented has not been deeply analysed in the previous frameworks and models, and it is also a complex question for organizations to answer, especially Small and Medium Enterprises (SMEs). Moreover, SMEs are the organizations with the largest presence in the world economy; official data from the General Business Directory show that their share in different countries around the world is between 93% and 99%, and that their average contribution to employment is around 60%. Consequently, improving information technology service management is of vital importance in this type of enterprise. This research is based on two surveys that aim at helping SMEs to select the ITIL process with which to start the implementation of ITIL. In the first survey, data were gathered through a questionnaire sent to SMEs registered in the region of Madrid. The second survey obtained data from experts and enterprises in countries such as Spain, Ecuador, Chile, Luxembourg, Colombia, Norway, El Salvador, and Venezuela. Finally, the results of both surveys show that the tendency for starting an ITIL implementation in SMEs points to one of the processes included in Service Operation.

Relevance: 100.00%

Abstract:

One of the main objectives of computer systems is to be able to detect and control unauthorized accesses, or even to prevent them before any loss of value occurs in the system. The goal is to find a general model that covers all possible cases of unwanted entries into the system and that is capable of learning to detect future intrusions. First, the relevance of the techniques used for storing the information is studied. Big Data illustrates the essential elements needed to store the data in a uniquely identifiable format, with characteristic attributes that define them, for later analysis. The chosen storage method will influence the analysis and value-capture techniques used, given that there is a direct dependence between the format in which the information is stored and the specific value one intends to obtain from it. Second, the different current techniques for data analysis and capture are examined, together with the different results that can be obtained. At this point the concept of machine learning appears, along with its possible application to anomaly detection. The aim is to generalize different behaviors from unstructured information and to generate a model applicable to new, previously unknown entries into the system. Finally, different cybersecurity environments are analyzed and a set of design recommendations or adjustments regarding the aforementioned techniques is proposed, with a brief classification according to the available input variables and the desired result. The purpose of this Bachelor's Thesis is, therefore, a general comparison of the different current techniques for detecting anomalous behavior in a computer system, such as machine learning or data mining, together with a discussion of which options are best according to the type of value to be extracted from the stored information.
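
The thesis compares techniques rather than prescribing one; as a minimal sketch of machine-learning-based anomaly detection in the spirit described (learning normal behavior from unlabeled data and flagging unseen entries), here is an Isolation Forest example on synthetic access-log features. The features and contamination rate are assumptions of this example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy access-log features: (requests per minute, distinct endpoints touched).
normal = rng.normal(loc=[20, 5], scale=[4, 1.5], size=(500, 2))

# Train on (mostly) normal traffic; no labels are needed.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new, previously unseen entries: -1 = anomalous, 1 = normal.
new_entries = np.array([[22, 6],      # ordinary session
                        [400, 60]])   # burst touching many endpoints
print(model.predict(new_entries))     # e.g. [ 1 -1]
```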

Relevance: 50.00%

Abstract:

Nowadays, the Internet is a place where social networks have had an important impact on collaboration among people around the world in different ways. This article proposes a new paradigm for building CSCW business tools following the novel ideas provided by the social web to collaborate and generate awareness. An implementation of these concepts is described, including the components we provide for collaborating in workspaces (such as videoconference, chat, desktop sharing, forums or temporal events) and the way we generate awareness from these complex social data structures. Figures and validation results are also presented to stress that this architecture has been defined to support awareness generation by joining current and future social data from the business and social network worlds, based on the idea of using social data stored in the cloud.

Relevance: 50.00%

Abstract:

This work studies the implementation and development of the various social media platforms (social and knowledge networks, blogs and collaborative tools) in the corporate environment of the enterprise. The study gathers information from technology consultancies, articles and several social media platforms, and carries out an investigation of the topic. It includes the analysis of 42 surveys of professionals from two large telecommunications companies in Spain. One of these companies has about 28,000 employees and the other more than 300 employees in their Spanish subsidiaries; both have an important international presence. These two companies differ from other companies in the telecommunications sector in that they are betting on the implementation of social media in their internal processes. The work also includes the study and analysis of usage statistics and of a series of surveys carried out on the wall of the corporate social network of a telecommunications multinational over three months. A new, innovative corporate social culture is presented in areas such as knowledge management, internal communication, training and innovation, and a quantitative view of the implantation of social media in a company's processes is offered.

The exposition details the process of studying the different social media platforms and their areas of application in the enterprise, the study of the legal aspects of their application and use, and their implementation and development. It also presents a theoretical and practical analysis of the calculation of the return on investment (ROI) and, finally, an analysis of the information gathered in the surveys and in the statistical study of the corporate social network. The survey data were analyzed using descriptive statistics based on graphs and contingency tables, in which residuals and total percentages are calculated to analyze the dependence between social media and efficiency, productivity and the income statement, in addition to analyzing the contribution of social media to the company's mission, internal communication and knowledge management. Chi-square distribution calculations are also performed to demonstrate the dependence between social media and productivity, and the GAP that relates the importance of social media to the level of satisfaction with it. The theoretical and practical analysis takes as parameters the benefits, costs, flexibility and risk. The benefits are linked to productivity, knowledge management, human capital and internal processes; the costs to software licenses, administration, implementation and training. Based on these parameters, a company model representing a large ICT company in Spain was studied. The data for the study are realistic estimates, since the intention is not to know the real values but to carry out the theoretical and practical study of the method and its application to the calculation of the ROI.

The statistical study of the social network was carried out over three months, tracking the progress of social network use through events such as the number of active participants, messages published, files uploaded, active groups, and access types and platforms. From the statistical data on these events, indicators of participation, activity and knowledge of the social network were obtained, which are useful for calculating the ROI. In conclusion, the work demonstrates the improvements that social media offer in fields such as internal communication, knowledge management, training and innovation and, thanks to these improvements, an increase in the professional's productivity and efficiency, as well as a potential return on investment (ROI).
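
The thesis's contingency tables are not reproduced in the abstract; as a minimal sketch of the kind of chi-square independence test described (social media use versus perceived productivity), here is an example using an invented 2x2 table.

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = uses corporate social media (yes/no),
# columns = reports a productivity improvement (yes/no).
table = [[30, 5],
         [4, 3]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}, dof = {dof}")
# A small p-value would reject independence between the two variables.
```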

Relevance: 50.00%

Abstract:

We describe a domain ontology development approach that extracts domain terms from folksonomies and enriches them with data and vocabularies from the Linked Open Data cloud. As a result, we obtain lightweight domain ontologies that combine the emergent knowledge of social tagging systems with formal knowledge from ontologies. In order to illustrate the feasibility of our approach, we have produced an ontology in the financial domain from tags available in Delicious, using DBpedia, OpenCyc and UMBEL as additional knowledge sources.
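
The enrichment pipeline is not detailed in the abstract; as a rough sketch of one step of the general idea (looking up a folksonomy tag in DBpedia to attach Linked Open Data context), the following queries the public DBpedia SPARQL endpoint. The query shape and the example tag are illustrative assumptions, not the paper's method.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Look up a candidate domain term (a Delicious-style tag such as 'mortgage')
# in DBpedia and fetch its English abstract, one possible enrichment step.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbr: <http://dbpedia.org/resource/>
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?abstract WHERE {
      dbr:Mortgage dbo:abstract ?abstract .
      FILTER (lang(?abstract) = "en")
    }
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["abstract"]["value"][:120], "...")
```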