771 results for mindfulness-based mobile apps
Abstract:
This work addresses the quality of video streaming services delivered over mobile wireless networks. A framework is proposed that enhances the Quality of Experience (QoE) of end users through a quality-driven resource allocation scheme. An objective no-reference quality metric, Pause Intensity (PI), plays a key role: it is adopted to derive a resource allocation algorithm for video streaming. The framework is examined in the context of 3GPP Long Term Evolution (LTE) systems. The requirements and structure of the proposed PI-based framework are discussed, and results are compared with those of existing scheduling methods on fairness, efficiency, and correlation (between the required and allocated data rates). Furthermore, it is shown that the proposed framework can produce a trade-off among the three parameters through the QoE-aware resource allocation process.
Abstract:
In a pilot project, an optimized mobile latent heat storage system, based on one available on the market, was tested at the Fraunhofer Institute for Environmental, Safety and Energy Technology. Initially, trials were conducted with the aim of optimizing the charging and discharging process. A specifically constructed test rig at the institute's incineration trials centre allowed charging and discharging of the mobile latent heat storage with adjustable parameters. In addition, an evaluation model was constructed to further optimize the heat exchanger systems. Finally, the prototype of the mobile latent heat storage was tested in practical operation. The economic and technical feasibility of heat transportation was demonstrated for cases in which otherwise unutilized waste heat is available. © 2014 The Authors.
Abstract:
A real-time adaptive resource allocation algorithm that considers the end user's Quality of Experience (QoE) in the context of video streaming services is presented in this work. An objective no-reference quality metric, Pause Intensity (PI), is used to control the priority of resource allocation to users during the scheduling process. An online adjustment is introduced to adaptively set the scheduler's parameter and maintain a desired trade-off between fairness and efficiency. The correlation between the data rates (i.e. video code rates) demanded by users and the data rates allocated by the scheduler is also taken into account. The final allocated rates are determined based on the channel status, the distribution of PI values among users, and the scheduling policy adopted. Furthermore, since the user's capability varies as environmental conditions change, the rate adaptation mechanism for video streaming is considered and its interaction with the scheduling process under the same PI metric is studied. The feasibility of implementing this algorithm is examined and the results are compared with those of the most commonly used scheduling methods.
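As a rough illustration of the idea behind PI-driven scheduling, the sketch below biases a cell's rate allocation toward users whose streams are pausing more. The weighting rule, function names, and values are assumptions for illustration only, not the papers' actual algorithm:

```python
# Hypothetical QoE-aware allocation sketch: users with higher Pause
# Intensity (PI, 0 = no pauses, 1 = constant pausing) receive a
# proportionally larger share of the cell's available rate.

def allocate_rates(total_rate, users):
    """users: list of (user_id, pause_intensity) tuples.

    Weight each user by (1 + PI), so stalled streams are prioritised,
    then split total_rate proportionally to the weights.
    """
    weights = {uid: 1.0 + pi for uid, pi in users}
    weight_sum = sum(weights.values())
    return {uid: total_rate * w / weight_sum for uid, w in weights.items()}

# user "c" is pausing constantly, so it gets the largest share
allocation = allocate_rates(10e6, [("a", 0.0), ("b", 0.5), ("c", 1.0)])
```

A real scheduler would combine this PI weighting with channel state and the fairness/efficiency trade-off parameter described in the abstract.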
Abstract:
Aim: To validate the accuracy and repeatability of a mobile app reading speed test compared with the traditional paper version. Method: Twenty-one subjects wearing glasses with their full refractive correction read 14 sentences of decreasing print size between 1.0 and -0.1 logMAR, each consisting of 14 words (Radner reading speed test), at 40 cm, once with a paper-based chart and twice on iPad charts. Duration was recorded with a stopwatch for the paper chart and in the app itself for the mobile chart, allowing critical print size (CPS) and optimal reading speed (ORS) to be derived objectively. Results: The ORS was higher for the mobile app charts (194±29 wpm; 195±25 wpm) compared with the paper chart (166±20 wpm; F=57.000, p<0.001). The CPS was lower for the mobile app charts (0.17±0.20 logMAR; 0.18±0.17 logMAR) compared with the paper chart (0.25±0.17 logMAR; F=5.406, p=0.009). The mobile app test had a mean difference repeatability of 0.30±22.5 wpm, r=0.917, for ORS and of 0.0±0.2 logMAR, r=0.769, for CPS. Conclusions: The repeatability of the app reading speed test is as good (ORS) or better (CPS) than that reported in previous studies of the paper test. While the results are not interchangeable with paper-based charts, tablet-based mobile app tests of reading speed are reliable and rapid to perform, with the potential to capture functional visual ability in research studies and clinical practice.
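To make the two derived quantities concrete, here is one way ORS and CPS could be computed from per-sentence reading times. The 80%-of-maximum CPS criterion is a common convention assumed here; the app's exact algorithm is not specified in the abstract:

```python
# Illustrative derivation of optimal reading speed (ORS) and critical
# print size (CPS) from timed sentences of decreasing print size.

def reading_metrics(results, criterion=0.8):
    """results: list of (print_size_logMAR, words_read, seconds_taken)."""
    speeds = [(size, words / secs * 60.0) for size, words, secs in results]
    max_wpm = max(wpm for _, wpm in speeds)
    # CPS: the smallest print size still read at near-maximum speed
    cps = min(size for size, wpm in speeds if wpm >= criterion * max_wpm)
    # ORS: mean speed over sentences at or above the CPS
    ors_samples = [wpm for size, wpm in speeds if size >= cps]
    return sum(ors_samples) / len(ors_samples), cps
```

Recording times in the app rather than with a stopwatch removes one source of examiner variability, which is consistent with the repeatability figures reported above.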
Abstract:
This paper discusses the integration of a quiz mechanism into a digital game-based learning platform addressing environmental and social issues caused by population growth. The learning outcomes of 50 participants were compared before and after the session. Semi-structured interviews were used to gather participants' viewpoints on the issues presented in the game. Phenomenography was used as the methodology for data collection and analysis. Preliminary outcomes show that the current game implementation and quiz mechanism can be used to (1) promote learning and awareness of environmental and social issues and (2) sustain players' attention and engagement.
Abstract:
This paper deals with the classification of news items in ePaper, a prototype system for a future personalized newspaper service on a mobile reading device. The ePaper system aggregates news items from various news providers and delivers to each subscribed user (reader) a personalized electronic newspaper, utilizing content-based and collaborative filtering methods. ePaper can also provide users with "standard" (i.e., not personalized) editions of selected newspapers, as well as browsing capabilities in the repository of news items. This paper concentrates on the automatic classification of incoming news using a hierarchical news ontology. Based on this classification on the one hand, and on the users' profiles on the other, the personalization engine of the system is able to deliver a personalized paper to each user's mobile reading device.
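The flavor of routing an incoming item into a hierarchical ontology can be sketched with simple keyword overlap. The ontology, keywords, and scoring below are invented for illustration; the ePaper classifier described in the paper is more sophisticated than this:

```python
# Toy hierarchical ontology: each node carries trigger keywords and
# optional child categories.
ONTOLOGY = {
    "sports": {"keywords": {"match", "league", "goal"},
               "children": {"football": {"keywords": {"goal", "striker"}}}},
    "business": {"keywords": {"market", "shares", "profit"}, "children": {}},
}

def classify(text, tree, path=()):
    """Return the deepest ontology path whose keywords overlap the text."""
    tokens = set(text.lower().split())
    for name, node in tree.items():
        if tokens & node["keywords"]:
            # descend into the first matching branch
            return classify(text, node.get("children", {}), path + (name,))
    return path  # empty path means "unclassified"
```

The resulting path (e.g. a sports/football leaf) is what the personalization engine would match against a user's profile.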
Abstract:
Within the Distributed eLearning Center (DeLC) project we are developing a system for distance and eLearning that offers fixed and mobile access to electronic content and services. Mobile access is based on an InfoStation architecture, which provides Bluetooth and WiFi connectivity. On top of the InfoStation network we are developing multi-agent middleware that provides users with context-aware, adaptive, and personalized access to mobile services. For more convenient testing and optimization of the middleware, a simulation environment called CA3 SiEnv is being created.
Abstract:
In the digital age, the internet and ICT devices have changed our daily life and routines; we can hardly live without these services and devices anywhere (at work, at home, on holiday, etc.). This is evident in the tourism sector: digital content has become a key tool of 21st-century tourism, able to adapt the traditional tourist guide methodology to applications running on novel digital devices. Tourists belong to a new "ICT generation" that uses innovative tools and new info-media to communicate. A possible direction for tourism development is therefore to use modern ICT systems and devices. Besides participating in classical tours guided by travel guides, individual tourists now have the opportunity to enjoy high-quality ICT-based guided walks prepared with the knowledge of travel guides. The main idea of the GUIDE@HAND service is to reuse existing tourism content and create new content for advanced mobile devices, giving a contemporary answer to traditional tourism information systems by developing new tourism services based on digital content for innovative mobile applications. The service is based on a new concept of enhancing territorial heritage and values through knowledge, innovation, languages, and multilingual solutions that go along with new tourists' sensibilities.
Abstract:
The possibility to analyze, quantify, and forecast epidemic outbreaks is fundamental when devising effective disease containment strategies. Policy makers face the intricate task of drafting realistically implementable policies that strike a balance between risk management and cost. Two major techniques at their disposal are epidemic modeling and contact tracing. Models are used to forecast the evolution of the epidemic both globally and regionally, while contact tracing is used to reconstruct the chain of people who have potentially been infected, so that they can be tested, isolated, and treated immediately. However, both techniques may provide limited information, especially during an already advanced crisis when the need for action is urgent. In this paper we propose an alternative approach that goes beyond epidemic modeling and contact tracing, and leverages behavioral data generated by mobile carrier networks to evaluate contagion risk on a per-user basis. The individual risk represents the loss incurred by not isolating or treating a specific person, in terms of both how likely that person is to spread the disease and how many secondary infections he or she would cause. To this aim, we develop a model, named Progmosis, which quantifies this risk based on movement and regionally aggregated statistics about infection rates. We develop and release an open-source tool that calculates this risk based on cellular network events. We simulate a realistic epidemic scenario based on an Ebola virus outbreak and find that gradually restricting the mobility of a subset of individuals reduces the number of infected people after 30 days by 24%.
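A toy version of a per-user risk in this spirit combines where a person has been (chance of being infected) with how many people they meet (expected secondary infections). The formula and inputs below are assumptions for illustration; the actual Progmosis model is more detailed:

```python
# Sketch: individual contagion risk from mobility shares and regional
# infection rates, scaled by the user's contact volume.

def contagion_risk(visits, region_infection_rate, daily_contacts):
    """visits: dict region -> fraction of time spent there (sums to 1)."""
    p_not_infected = 1.0
    for region, share in visits.items():
        # exposure in each region weighted by time spent there
        p_not_infected *= (1.0 - region_infection_rate[region]) ** share
    p_infected = 1.0 - p_not_infected
    # expected secondary infections if this user is not isolated
    return p_infected * daily_contacts
```

Ranking users by such a score is what would let a policy maker restrict the mobility of the riskiest subset first, as in the simulation reported above.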
Abstract:
Advances in the area of industrial metrology have generated new technologies that are capable of measuring components with complex geometry and large dimensions. However, no standard or best-practice guides are available for the majority of such systems. Therefore, these new systems require appropriate testing and verification in order for the users to understand their full potential prior to their deployment in a real manufacturing environment. This is a crucial stage, especially when more than one system can be used for a specific measurement task. In this paper, two relatively new large-volume measurement systems, the mobile spatial co-ordinate measuring system (MScMS) and the indoor global positioning system (iGPS), are reviewed. These two systems utilize different technologies: the MScMS is based on ultrasound and radiofrequency signal transmission and the iGPS uses laser technology. Both systems have components with small dimensions that are distributed around the measuring area to form a network of sensors allowing rapid dimensional measurements to be performed in relation to large-size objects, with typical dimensions of several decametres. The portability, reconfigurability, and ease of installation make these systems attractive for many industries that manufacture large-scale products. In this paper, the major technical aspects of the two systems are briefly described and compared. Initial results of the tests performed to establish the repeatability and reproducibility of these systems are also presented. © IMechE 2009.
Abstract:
Human-computer interaction is a growing field of study in which researchers and professionals aim to understand and evaluate the impact of new technologies on human behavior. With the integration of smart phones, tablets, and other portable devices into everyday life, there is a greater need to understand the influence of such technology on the human experience. Emerging Perspectives on the Design, Use, and Evaluation of Mobile and Handheld Devices is an authoritative reference source consisting of the latest scholarly research and theories from international experts and professionals on the topic of human-computer interaction with mobile devices. Featuring a comprehensive collection of chapters on critical topics in this dynamic field, this publication is an essential reference source for researchers, educators, students, and practitioners interested in the use of mobile and handheld devices and their impact on individuals and society as a whole. This publication features timely, research-based chapters pertaining to topics in the design and evaluation of smart devices including, but not limited to, app stores, category-based interfaces, gamified mobility applications, mobile interaction, mobile learning, pervasive multimodal applications, smartphone interaction, and social media use.
Abstract:
A methodology for formally modeling and analyzing the software architecture of mobile agent systems provides a solid basis for developing high-quality mobile agent systems, and the methodology is helpful for studying other distributed and concurrent systems as well. Providing such a methodology is a challenge, however, because of agent mobility in mobile agent systems. The methodology was defined from two essential parts of software architecture: a formalism to define the architectural models and an analysis method to formally verify system properties. The formalism is two-layer Predicate/Transition (PrT) nets extended with dynamic channels, and the analysis method is a hierarchical approach to verifying models on different levels. The two-layer modeling formalism smoothly transforms physical models of mobile agent systems into their architectural models. Dynamic channels facilitate synchronous communication between nets, and they naturally capture the dynamic architecture configuration and agent mobility of mobile agent systems. Component properties are verified on transformed individual components, system properties are checked on a simplified system model, and interaction properties are analyzed on models composed from the nets involved. Based on the formalism and the analysis method, this researcher formally modeled and analyzed a software architecture for mobile agent systems and designed an architectural model of a medical information processing system based on mobile agents. The model checking tool SPIN was used to verify system properties such as reachability, concurrency, and safety of the medical information processing system.
From the successful modeling and analysis of the software architecture of mobile agent systems, the conclusion is that PrT nets extended with channels are a powerful tool for modeling mobile agent systems, and the hierarchical analysis method provides a rigorous foundation for the modeling tool. The hierarchical analysis method not only reduces the complexity of the analysis but also expands the application scope of model checking techniques. The results of formally modeling and analyzing the software architecture of the medical information processing system show that model checking is an effective and efficient way to verify software architecture. Moreover, the system demonstrates the high flexibility, efficiency, and low cost of mobile agent technologies.
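The kind of state exploration that underlies this analysis can be illustrated with a minimal place/transition net interpreter. Predicate/Transition nets add typed tokens, predicates, and the dissertation's dynamic channels on top of this basic firing rule, all of which are omitted here:

```python
# Minimal place/transition net semantics: markings are dicts mapping
# place names to token counts.

def enabled(marking, transition):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

def fire(marking, transition):
    """Fire an enabled transition: consume input tokens, produce outputs."""
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

# agent migration modelled as a token moving between "host" places
migrate = {"in": {"host_a": 1}, "out": {"host_b": 1}}
m0 = {"host_a": 1, "host_b": 0}
m1 = fire(m0, migrate)  # the agent token now resides on host_b
```

Model checkers such as SPIN explore all reachable markings of models like this to verify reachability and safety properties.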
Abstract:
The deployment of wireless communications, coupled with the popularity of portable devices, has led to significant research in the area of mobile data caching. Prior research has focused on the development of solutions that allow applications to run in wireless environments using proxy-based techniques. Most of these approaches are semantics-based and do not provide adequate support for representing the context of a user (i.e., the interpreted human intention). Although the context may be treated implicitly, it is still crucial to data management. To address this challenge, this dissertation focuses on two characteristics: how to predict (i) the future location of the user and (ii) the locations of the fetched data for which the queried data item has valid answers. Using this approach, more complete information about the dynamics of an application environment is maintained. The contribution of this dissertation is a novel data caching mechanism for pervasive computing environments that can adapt dynamically to a mobile user's context. In this dissertation, we design and develop a conceptual model and context-aware protocols for wireless data caching management. Our replacement policy uses the validity of the data fetched from the server and the neighboring locations to decide which of the cache entries is least likely to be needed in the future, and is therefore a good candidate for eviction when cache space is needed. The context-aware prefetching algorithm exploits the query context to effectively guide the prefetching process. The query context is defined using a mobile user's movement pattern and requested information context. Numerical results and simulations show that the proposed prefetching and replacement policies significantly outperform conventional ones. Anticipated applications of these solutions include biomedical engineering, tele-health, medical information systems, and business.
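A minimal sketch of a validity-driven replacement policy in this spirit: evict the entry whose validity region lies farthest from the user's predicted next location, since it is least likely to answer future queries. The distance-based validity test and data layout are invented stand-ins, not the dissertation's actual policy:

```python
# Each cache entry records the region (center, radius) in which its
# answer is valid; eviction targets the entry with the least spatial
# relevance to where the user is heading.

def evict_candidate(cache, predicted_location):
    """cache: dict key -> ((center_x, center_y), valid_radius)."""
    def slack(entry):
        (cx, cy), radius = entry
        px, py = predicted_location
        # distance from the predicted location to the validity region;
        # negative means the prediction falls inside the region
        return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5 - radius
    return max(cache, key=lambda k: slack(cache[k]))
```

A conventional LRU policy would ignore the spatial validity entirely, which is why the context-aware variant can outperform it for location-dependent queries.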
Abstract:
This dissertation studies context-aware applications and the algorithms proposed for them at the client side. The required context-aware infrastructure is discussed in depth to illustrate that such an infrastructure collects the mobile user's context information, registers service providers, derives the mobile user's current context, distributes user context among context-aware applications, and provides tailored services. The proposed approach tries to strike a balance between the context server and mobile devices. Context acquisition is centralized at the server to ensure the reusability of context information among mobile devices, while context reasoning remains at the application level; hence, centralized context acquisition with distributed context reasoning is viewed as the better overall solution. The context-aware search application is designed and implemented at the server side. A new algorithm is proposed that takes user context profiles into consideration. By promoting feedback on the dynamics of the system, any prior user selection is saved for further analysis so that it may contribute to the results of a subsequent search. On the basis of these developments at the server side, various solutions are provided at the client side. A proxy software component is set up for the purpose of data collection. This research endorses the belief that the proxy at the client side should contain the context reasoning component; the implementation of such a component lends credence to this belief in that context-aware applications are able to derive the user context profiles. Furthermore, a context cache scheme is implemented to manage the cache on the client device in order to minimize processing requirements and other resources (bandwidth, CPU cycles, power). Java and MySQL platforms are used to implement the proposed architecture and to test scenarios derived from a user's daily activities.
To meet the practical demands of a testing environment without imposing the heavy cost of establishing such a comprehensive infrastructure, a software simulation using the free Yahoo search API is provided as a means to evaluate the effectiveness of the design approach in a realistic way. The integration of the Yahoo search engine into the context-aware architecture design demonstrates how a context-aware application can meet user demands for tailored services and products in and around the user's environment. The test results show that the overall design is highly effective, providing new features and enriching the mobile user's experience through a broad scope of potential applications.
Abstract:
With hundreds of millions of users reporting locations and embracing mobile technologies, Location Based Services (LBSs) are raising new challenges. In this dissertation, we address three emerging problems in location services, where geolocation data plays a central role. First, to handle the unprecedented growth of generated geolocation data, existing location services rely on geospatial database systems. However, their inability to leverage combined geographical and textual information in analytical queries (e.g. spatial similarity joins) remains an open problem. To address this, we introduce SpsJoin, a framework for computing spatial set-similarity joins. SpsJoin handles combined similarity queries that involve textual and spatial constraints simultaneously. LBSs use this system to tackle different types of problems, such as deduplication, geolocation enhancement and record linkage. We define the spatial set-similarity join problem in a general case and propose an algorithm for its efficient computation. Our solution utilizes parallel computing with MapReduce to handle scalability issues in large geospatial databases. Second, applications that use geolocation data are seldom concerned with ensuring the privacy of participating users. To motivate participation and address privacy concerns, we propose iSafe, a privacy preserving algorithm for computing safety snapshots of co-located mobile devices as well as geosocial network users. iSafe combines geolocation data extracted from crime datasets and geosocial networks such as Yelp. In order to enhance iSafe's ability to compute safety recommendations, even when crime information is incomplete or sparse, we need to identify relationships between Yelp venues and crime indices at their locations. To achieve this, we use SpsJoin on two datasets (Yelp venues and geolocated businesses) to find venues that have not been reviewed and to further compute the crime indices of their locations. 
Our results show a statistically significant dependence between location crime indices and Yelp features. Third, review-centered LBSs (e.g., Yelp) are increasingly becoming targets of malicious campaigns that aim to bias the public image of the businesses represented. Although Yelp actively attempts to detect and filter fraudulent reviews, our experiments showed that it is still vulnerable. Fraudulent LBS information also impacts the ability of iSafe to provide correct safety values. We take steps toward addressing this problem by proposing SpiDeR, an algorithm that takes advantage of the richness of information available in Yelp to detect abnormal review patterns. We propose a fake venue detection solution that applies SpsJoin to Yelp and U.S. housing datasets. We validate the proposed solutions using ground truth data extracted through our experiments and reviews filtered by Yelp.
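The core join operation can be illustrated with a naive in-memory version: two records match when their token sets are sufficiently Jaccard-similar AND their coordinates are close enough. SpsJoin computes this at scale with MapReduce; the nested-loop form and thresholds below are illustrative assumptions:

```python
# Naive spatial set-similarity join over two record collections.

def jaccard(a, b):
    """Set similarity: |intersection| / |union| (0.0 for two empty sets)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def sps_join(left, right, sim_threshold=0.5, max_dist=0.01):
    """left/right: lists of (record_id, token_set, (lat, lon))."""
    matches = []
    for lid, ltoks, (llat, llon) in left:
        for rid, rtoks, (rlat, rlon) in right:
            # spatial constraint first: records must be nearly co-located
            close = abs(llat - rlat) <= max_dist and abs(llon - rlon) <= max_dist
            if close and jaccard(ltoks, rtoks) >= sim_threshold:
                matches.append((lid, rid))
    return matches
```

This is exactly the shape of query used for deduplication and record linkage: a venue listing and a business record referring to the same place share most name tokens and nearly identical coordinates.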