884 results for cloud-based applications
Abstract:
Uncertainty text detection is important to many social-media-based applications, since more and more users treat social media platforms (e.g., Twitter, Facebook) as an information source and produce or derive interpretations based on them. However, existing uncertainty cues are ineffective in the social media context because of its specific characteristics. In this paper, we propose a variant annotation scheme for uncertainty identification and construct the first uncertainty corpus based on tweets. We then conduct experiments on the generated tweet corpus to study the effectiveness of different types of features for uncertainty text identification. © 2013 Association for Computational Linguistics.
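A minimal sketch of the kind of cue-based feature extraction such a study might use. The cue lexicon, tokenizer, and feature names below are illustrative assumptions, not the paper's actual feature set:

```python
# Hedged sketch: lexicon-based uncertainty-cue features for tweets.
# The cue list and tokenizer are illustrative, not the paper's own.
import re

UNCERTAINTY_CUES = {"maybe", "perhaps", "might", "allegedly", "reportedly",
                    "possibly", "rumor", "unconfirmed"}

def tokenize(tweet: str):
    """Lowercase word tokenizer; drops punctuation and symbols."""
    return re.findall(r"[a-z']+", tweet.lower())

def uncertainty_features(tweet: str) -> dict:
    """Extract simple cue-based features for an uncertainty classifier."""
    tokens = tokenize(tweet)
    cues = [t for t in tokens if t in UNCERTAINTY_CUES]
    return {
        "cue_count": len(cues),
        "has_cue": bool(cues),
        "has_question_mark": "?" in tweet,
    }

feats = uncertainty_features("Reportedly the outage might last all day?")
```

A real system would combine such lexical cues with the social-media-specific features (hashtags, retweet markers) that the abstract says plain cue lists miss.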
Abstract:
There has been a recent surge of research looking at the reporting of food consumption on social media. The topic of alcohol consumption, however, remains poorly investigated. Social media has the potential to shed light on a topic that, traditionally, is difficult to collect fine-grained information on. One social app stands out in this regard: Untappd is an app that allows users to ‘check-in’ their consumption of beers. It operates in a similar fashion to other location-based applications, but is specifically tailored to the collection of information on beer consumption. In this paper, we explore beer consumption through the lens of social media. We crawled Untappd in real time over a period of 112 days, across 40 cities in the United States and Europe. Using this data, we shed light on the drinking habits of over 369k users. We focus on per-user and per-city characterisation, highlighting key behavioural trends.
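The per-user and per-city characterisation described above amounts to aggregating check-in records along two keys. A minimal sketch, with an invented record shape (Untappd's real data schema may differ):

```python
# Hedged sketch: per-user and per-city aggregation of beer check-ins.
# The record fields are assumptions for illustration only.
from collections import Counter

checkins = [
    {"user": "alice", "city": "London", "beer": "IPA"},
    {"user": "alice", "city": "London", "beer": "Stout"},
    {"user": "bob",   "city": "Berlin", "beer": "IPA"},
]

per_user = Counter(c["user"] for c in checkins)  # check-ins per user
per_city = Counter(c["city"] for c in checkins)  # check-ins per city
```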
Abstract:
According to a report by the European Commission, the situation of Hungarian small and medium-sized enterprises has stagnated since 2005. Although these enterprises make up 99% of Hungarian businesses, they face numerous obstacles in public procurement and in accessing growing markets. The platform developed within the eBEST project (Empowering Business Ecosystems of Small Service Enterprises to Face the Economic Crisis) provides functionality that not only enables enterprises to organise into structured groups, i.e. ecosystems, but can also help remove the information-gathering, communication, and collaboration barriers that arise along the supply chains and individual processes created to satisfy consumer demand. ____ It is widely recognised that the most important factor for increasing the productivity of small companies is a deep adoption of computer-based applications and services. The FP7 SME eBEST project proposed a new operational environment specifically conceived for networked small companies, supported by an advanced suite of ICT services, the eBEST platform. The paper aims at presenting the project's achievements, which are validated by a number of company clusters from different EU countries and industry sectors. The general objectives of the eBEST project are attracting customers to work with the clustered companies, facilitating companies to collaborate with each other, and enabling associations to foster the devised innovation.
Abstract:
The promise of Wireless Sensor Networks (WSNs) is the autonomous collaboration of a collection of sensors to accomplish goals that no single sensor could achieve alone. Basically, sensor networking serves a range of applications by providing raw data as the basis for further analyses and actions. Imprecision in the collected data can badly mislead the decision-making process of sensor-based applications, resulting in ineffectiveness or failure of the application objectives. Because inherent WSN characteristics tend to corrupt raw sensor readings, many research efforts attempt to improve the accuracy of corrupted or "dirty" sensor data. The dirty data need to be cleaned or corrected. However, existing data-cleaning solutions restrict themselves to static WSNs, where deployed sensors rarely move during operation. Nowadays, many emerging applications relying on WSNs need sensor mobility to enhance efficiency and usage flexibility: the locations of deployed sensors need to be dynamic, and each sensor functions independently and contributes its own resources. Sensors mounted on vehicles to monitor traffic conditions are one prospective example. Sensor mobility causes transients in the network topology and in the correlations among sensor streams. Because they rely on static relationships among sensors, the existing methods for cleaning sensor data in static WSNs are invalid in such mobile scenarios. Therefore, a data-cleaning solution that accounts for sensor movement is needed. This dissertation aims to improve the quality of sensor data by considering the consequences of the various trajectory relationships of autonomous mobile sensors in the system. First of all, we address the dynamic network topology due to sensor mobility.
The concept of a virtual sensor is presented and used for spatio-temporal selection of neighboring sensors to help clean sensor data streams; this is one of the first methods to clean data in mobile sensor environments. We also study the mobility pattern of moving sensors relative to the boundaries of sub-areas of interest, and develop a belief-based analysis to determine reliable sets of neighboring sensors that improve cleaning performance, especially when node density is relatively low. Finally, we design a novel sketch-based technique to clean data from internal sensors where spatio-temporal relationships among sensors cannot yield data correlations among sensor streams.
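To make the spatio-temporal neighbor-selection idea concrete, here is a minimal sketch of cleaning one reading from neighbours that are close in both space and time. The thresholds, the inverse-distance weighting, and the record shape are illustrative assumptions, not the dissertation's actual algorithm:

```python
# Hedged sketch: clean a suspect reading with a distance-weighted
# average of spatio-temporally nearby neighbours. All parameters
# are illustrative, not taken from the dissertation.

def clean_reading(target, neighbours, radius=10.0, max_dt=5.0):
    """Estimate a corrected value from neighbours within `radius`
    in space and `max_dt` in time; readings are dicts with x, y, t, value."""
    selected = []
    for n in neighbours:
        dist = ((n["x"] - target["x"]) ** 2 + (n["y"] - target["y"]) ** 2) ** 0.5
        if dist <= radius and abs(n["t"] - target["t"]) <= max_dt:
            selected.append((dist, n["value"]))
    if not selected:
        return target["value"]  # no evidence to correct with
    # Inverse-distance weighting (add 1 to avoid division by zero).
    total_w = sum(1.0 / (1.0 + d) for d, _ in selected)
    return sum(v / (1.0 + d) for d, v in selected) / total_w

target = {"x": 0.0, "y": 0.0, "t": 0.0, "value": 99.0}  # suspect spike
neighbours = [
    {"x": 1.0, "y": 0.0, "t": 1.0, "value": 20.0},
    {"x": 0.0, "y": 2.0, "t": 0.0, "value": 22.0},
    {"x": 50.0, "y": 0.0, "t": 0.0, "value": 5.0},  # too far: ignored
]
cleaned = clean_reading(target, neighbours)
```

In a mobile WSN the neighbour set itself changes per time step, which is exactly why the virtual-sensor selection above must be recomputed rather than fixed at deployment.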
Abstract:
With the emerging prevalence of smart phones and 4G LTE networks, the demand for faster-better-cheaper mobile services, anytime and anywhere, is ever growing. The Dynamic Network Optimization (DNO) concept emerged as a solution that optimally and continuously tunes network settings in response to varying network conditions and subscriber needs. Yet the realization of DNO is still in its infancy, largely hindered by the bottleneck of lengthy optimization runtimes. This paper presents the design and prototype of a novel cloud-based parallel solution that further enhances the scalability of our prior work on parallel solutions for accelerating network optimization algorithms. The solution aims to satisfy the high performance required by DNO, initially on a sub-hourly basis. The paper then outlines the design and a full cycle of a DNO system, and proposes a set of potential solutions for large-network and real-time DNO. Overall, this work is a breakthrough towards the realization of DNO.
Abstract:
Mobile Network Optimization (MNO) technologies have advanced at a tremendous pace in recent years, and the Dynamic Network Optimization (DNO) concept emerged years ago with the aim of continuously optimizing the network in response to variations in network traffic and conditions. Yet DNO development is still in its infancy, mainly hindered by the significant bottleneck of lengthy optimization runtimes. This paper identifies parallelism in greedy MNO algorithms and presents an advanced distributed parallel solution. The solution is designed, implemented, and applied to real-life projects, yielding significant, highly scalable, and nearly linear speedups of up to 6.9 and 14.5 on distributed 8-core and 16-core systems, respectively. Meanwhile, the optimization outputs exhibit self-consistency and high precision compared with their sequential counterparts. This is a milestone towards realizing DNO. Further, the techniques may be applied to similar applications based on greedy optimization algorithms.
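The arithmetic behind "nearly linear" claims like 6.9x on 8 cores and 14.5x on 16 cores is simply speedup and parallel efficiency. A small sketch, with runtimes invented to reproduce those ratios (they are not the paper's measured times):

```python
# Hedged sketch: speedup and parallel-efficiency arithmetic.
# Runtimes below are invented to match the quoted ratios.

def speedup(t_serial, t_parallel):
    """Ratio of sequential to parallel runtime."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, cores):
    """Speedup per core; 1.0 would be perfectly linear scaling."""
    return speedup(t_serial, t_parallel) / cores

t1 = 1000.0            # assumed sequential runtime (s)
t8, t16 = 145.0, 69.0  # assumed parallel runtimes

s8 = speedup(t1, t8)            # about 6.9
s16 = speedup(t1, t16)          # about 14.5
e16 = efficiency(t1, t16, 16)   # about 0.91: nearly linear
```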
Abstract:
ICT is inseparable from on-site museography and indispensable in fixed and mobile networked museography. In too many cases, technological prostheses have been installed to give the cultural space a varnish of modernity, forgetting that technology should serve the content in such a way that it becomes invisible and is seamlessly interwoven with traditional museography. Mobile interfaces can merge the on-site and online museum and accompany people beyond the physical space. That fusion must start from a narrative database that is open to tangible and intangible works from other museums, so that the limitations of the physical museum are not carried over to the virtual one. In the on-site museum, immersive hypermedia installations that facilitate innovative cultural experiences make sense. Interactivity (virtual relations) must coexist with interaction (physical and personal relations) and serve everyone, starting from the premise that all of us have limitations. Working in an interdisciplinary way helps us understand the museum better in order to put it at the service of people.
Abstract:
This deliverable is software; as such, this document is abridged to be as succinct as possible, and the extended descriptions and detailed documentation for the software are online. The document consists of two parts: part one describes the first bundle of social gamification assets developed in WP3, and part two presents mock-ups of the RAGE ecosystem gamification. In addition to the software outline, part one includes a short market analysis of existing gamification solutions, the rationale for combining the three social gamification assets into one unified asset, and the branding exercise undertaken to make the assets more developer friendly. Online links to the source code, binaries, demo, and documentation for the assets are provided. The combined assets offer game developers, as well as a wide range of software developers, the opportunity to readily enhance existing games or digital platforms with multiplayer gamification functionality, catering for both competitive and cooperative game dynamics. The solution consists of a flexible client-server system that can run either as a cloud-based service serving many games, or as specific instances for individual games as necessary.
Abstract:
With the development of the Internet of Things, more and more IoT platforms have emerged, with different structures and characteristics. Weighing their advantages and disadvantages, we should choose the platform best suited to each scenario. For this project, I compare a cloud-based centralized platform, the Microsoft Azure IoT Hub, with a fully distributed platform, SensibleThings. A quantitative performance comparison is made across two scenarios: increasing message sending rates, and devices located in different places. A general comparison is made of security, utilization, and storage. I conclude that SensibleThings performs more stably when many messages are pushed to the platform, while Microsoft Azure has better geographic expansion. In the general comparison, the Microsoft Azure IoT Hub has better security, and its requirements on local devices are lower than those of SensibleThings. SensibleThings is open source and free, while Microsoft Azure follows the "pay as you go" concept, with many throttling limitations across its different editions. Microsoft's platform is also more user-friendly.
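The "throttling limitations" of hosted IoT hubs are typically rate limits of the token-bucket kind. A minimal sketch of that mechanism; the quota numbers are illustrative, not Azure IoT Hub's actual limits:

```python
# Hedged sketch: a token-bucket message throttle, the general mechanism
# behind per-edition quota limits. Rates below are invented.

class TokenBucket:
    """Admits at most `rate` messages per second, bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=2.0, capacity=2.0)        # 2 msg/s, burst of 2
accepted = sum(bucket.allow(now=0.0) for _ in range(5))  # burst: only 2 pass
later = bucket.allow(now=1.0)                       # refilled after 1 s
```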
Abstract:
The development of broadband Internet connections has fostered new audiovisual media services and opened new possibilities for accessing broadcasts. The Internet retransmission case of TVCatchup before the CJEU was the first case concerning new technologies in the light of Art. 3.1 of the Information Society Directive. On the other side of the Atlantic, the Aereo case reached the U.S. Supreme Court and challenged the interpretation of public performance rights. In both cases, the recipients of the services could receive broadcast programs in a way alternative to traditional broadcasting channels, including terrestrial broadcasting or cable transmission. The Aereo case raised a debate on the possible impact of the interpretation of copyright law on the development of new technologies, particularly cloud-based services, and it is interesting to see whether similar problems occur in the EU. The "umbrella" in the title refers to Art. 8 WCT, which covers digital and Internet transmission and constitutes the background for the EU and U.S. legal solutions. The article argues that no international standard for the qualification of the discussed services exists.
Abstract:
[EN] Enabling natural human-robot interaction using computer-vision-based applications requires fast and accurate hand detection. However, previous works in this field assume various constraints, such as a limit on the number of detected gestures, because hands are highly complex objects that are difficult to locate. This paper presents an approach that integrates temporal coherence cues with wrist-based hand detection using a cascade classifier. With this approach, we introduce three main contributions: (1) a transparent initialization mechanism, requiring no user participation, for segmenting hands independently of their gesture; (2) a larger number of detected gestures and a faster training phase than previous cascade-classifier-based methods; and (3) near-real-time performance for hand pose detection in video streams.
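One common way to apply temporal coherence to per-frame detections is to keep only boxes that overlap a detection from the previous frame. A minimal sketch of that filter; the IoU threshold and box format are assumptions, and this is not the paper's specific cue combination:

```python
# Hedged sketch: temporal-coherence filtering of per-frame detections
# via intersection-over-union (IoU) with the previous frame.
# Threshold and box format (x, y, w, h) are illustrative.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def temporally_coherent(prev_dets, cur_dets, thresh=0.3):
    """Keep current detections supported by some previous-frame detection."""
    return [d for d in cur_dets if any(iou(d, p) >= thresh for p in prev_dets)]

prev = [(10, 10, 40, 40)]
cur = [(12, 11, 40, 40),    # same hand, slightly moved: kept
       (200, 200, 40, 40)]  # isolated false positive: dropped
kept = temporally_coherent(prev, cur)
```

Such a filter suppresses one-frame false positives from the cascade classifier at the cost of a one-frame delay in picking up genuinely new hands.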
Abstract:
Major developments in the technological environment can become commonplace very quickly. They are now impacting upon a broad range of information-based service sectors, as high growth Internet-based firms, such as Google, Amazon, Facebook and Airbnb, and financial technology (Fintech) start-ups expand their product portfolios into new markets. Real estate is one of the information-based service sectors that is currently being impacted by this new type of competitor and the broad range of disruptive digital technologies that have emerged. Due to the vast troves of data that these Internet firms have at their disposal and their asset-light (cloud-based) structures, they are able to offer highly-targeted products at much lower costs than conventional brick-and-mortar companies.
Abstract:
This interactive symposium will focus on the use of different technologies in developing innovative practice in teacher education at one university in England. Technology Enhanced Learning (TEL) is a field of educational policy and practice that has the power to ignite diametrically opposing views and reactions amongst teachers and teacher educators, ranging across a spectrum from immense enthusiasm to untold terror. In a field where the skills and experience of individuals vary from those of digital natives (Prensky 2001) to lags and lurkers in digital spaces, the challenges of harnessing the potential of TEL are complex. The challenges include developing the IT skills of trainees and educators and the creative application of these skills to pedagogy in all areas of the curriculum. The symposium draws on examples from primary, secondary and post-compulsory teacher education to discuss issues and approaches to developing research capacity and innovative practice using different e-tools, many of which are freely available. The first paper offers theoretical and policy perspectives on finding spaces in busy professional lives to engage in research and develop research-informed practice. It draws on notions of teachers as researchers, practitioner research and evidence-based practice to argue that engagement in research is integral to teacher education and an empowering source of creative professional learning for teachers and teacher educators. Whilst acknowledging the challenges of this stance, examples from our own research practice illustrate how e-tools can assist us in building the capacity and confidence of staff and students in researching and enhancing teaching, learning and assessment practice. The second paper discusses IT skills development through the TEL pathway for trainee teachers in secondary education across different curriculum subjects.
The lead tutor for the TEL pathway will use examples of activities developed with trainee teachers and university subject tutors to enhance their skills in using e-tools, such as QR codes, Kahoot, Padlet, Pinterest and cloud-based learning. The paper will also focus on how these skills and tools can be used for action research, evaluation and feedback, and for marking and administrative tasks. The discussion will finish with thoughts on widening trainee teachers’ horizons into the future direction of educational technology. The third paper considers institutional policies and strategies for promoting and embedding TEL, including an initiative called ‘The Learning Conversation’, which aims ‘to share, highlight, celebrate, discuss, problematise, find things out...’ about TEL through an online space. The lead for ‘The Learning Conversation’ will offer reflections on this and other initiatives across the institution involving trainee teachers, university subject tutors, librarians and staff in student support services who are using TEL to engage, enthuse and support students on campus and during placements in schools. The fourth paper reflects on the use of TEL to engage with trainee teachers in post-compulsory education. This sector of education and training is more fragmented than the primary and secondary school sectors, and so the challenges of building a community of practice that can support the development of innovative practice are greater. Discussant: the wider use of technologies in a university centre for teacher education; course management, recruitment and mentor training.
Abstract:
The last decades have been characterized by the continuous adoption of IT solutions in the healthcare sector, which has resulted in the proliferation of tremendous amounts of data over heterogeneous systems. Distinct data types are currently generated, manipulated, and stored in the various institutions where patients are treated. Sharing these data and providing integrated access to this information will allow the extraction of relevant knowledge that can lead to better diagnostics and treatments. This thesis proposes new integration models for gathering information and extracting knowledge from multiple, heterogeneous biomedical sources. The complexity of the scenario led us to split the integration problem according to data type and usage specificity. The first contribution is a cloud-based architecture for exchanging medical imaging services. It offers a simplified registration mechanism for providers and services, promotes remote data access, and facilitates the integration of distributed data sources. Moreover, it is compliant with international standards, ensuring the platform's interoperability with current medical imaging devices. The second proposal is a sensor-based architecture for the integration of electronic health records. It follows a federated integration model and aims to provide a scalable solution for searching and retrieving data from multiple information systems. The last contribution is an open architecture for gathering patient-level data from disperse and heterogeneous databases. All the proposed solutions were deployed and validated in real-world use cases.
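The federated model described above boils down to fanning a query out to per-source adapters and merging the results without centralizing the data. A minimal sketch; the source names, adapter signature, and record shapes are invented for illustration:

```python
# Hedged sketch: a federated query that fans out a patient identifier
# to heterogeneous source adapters and merges non-empty results.
# Source names and record shapes are illustrative inventions.

def federated_query(patient_id, sources):
    """Query every source adapter; keep only sources that return records."""
    merged = {}
    for name, adapter in sources.items():
        records = adapter(patient_id)
        if records:
            merged[name] = records
    return merged

# Stub adapters standing in for an imaging archive and an EHR system.
sources = {
    "imaging": lambda pid: [{"study": "CT-chest"}] if pid == "p1" else [],
    "ehr":     lambda pid: [{"note": "follow-up"}] if pid in {"p1", "p2"} else [],
}

result = federated_query("p1", sources)
```

The design point is that each institution keeps its own store and schema; only the adapter layer is shared, which is what makes the approach scalable across heterogeneous systems.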
Abstract:
As the efficiency of parallel software increases, it is becoming common to measure near-linear speedup for many applications. For a problem of size N on P processors, with software running at O(N/P), the performance restrictions due to file I/O systems and mesh decomposition running at O(N) become increasingly apparent, especially for large P. For distributed-memory parallel systems, an additional limit to scalability results from the finite memory available for I/O scatter/gather operations. Simple strategies developed to address the scalability of scatter/gather operations for unstructured-mesh-based applications have been extended to provide scalable mesh decomposition through the development of a parallel graph partitioning code, JOSTLE [8]. The focus of this work is directed towards the development of generic strategies that can be incorporated into the Computer Aided Parallelisation Tools (CAPTools) project.
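The finite-memory limit on gather operations is usually addressed by staging the data in bounded chunks instead of one O(N) buffer on the root. A minimal sketch of that idea in plain Python (the chunk size and data are illustrative; a real code would do this with message passing):

```python
# Hedged sketch: gather P per-processor arrays in fixed-size chunks so
# the staging buffer never exceeds chunk_elems, instead of a single
# O(N) gather. Data and chunk size are illustrative.

def chunked_gather(per_proc_data, chunk_elems):
    """Concatenate per-processor arrays, staging at most chunk_elems at once."""
    result = []
    for proc_data in per_proc_data:
        for start in range(0, len(proc_data), chunk_elems):
            staged = proc_data[start:start + chunk_elems]  # bounded buffer
            assert len(staged) <= chunk_elems
            result.extend(staged)
    return result

per_proc = [[0, 1, 2, 3], [4, 5], [6, 7, 8]]  # data held by 3 "processors"
gathered = chunked_gather(per_proc, chunk_elems=2)
```

The trade-off is more, smaller messages in exchange for an O(chunk) rather than O(N) memory footprint on the gathering node.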