941 results for end user computing application streaming horizon workspace portal vmware view


Relevance:

40.00%

Publisher:

Abstract:

This dissertation studies context-aware applications, together with the algorithms proposed for the client side. The required context-aware infrastructure is discussed in depth to illustrate that such an infrastructure collects the mobile user’s context information, registers service providers, derives the mobile user’s current context, distributes user context among context-aware applications, and provides tailored services. The proposed approach tries to strike a balance between the context server and the mobile devices. Context acquisition is centralized at the server to ensure the usability of context information across mobile devices, while context reasoning remains at the application level. Hence, centralized context acquisition combined with distributed context reasoning is viewed as the better overall solution. The context-aware search application is designed and implemented at the server side. A new algorithm is proposed that takes user context profiles into consideration. By feeding the dynamics of the system back into it, any prior user selection is saved for further analysis so that it can help improve the results of subsequent searches. On the basis of these developments at the server side, various solutions are then provided at the client side. A software proxy component is set up for the purpose of data collection. This research endorses the belief that the proxy at the client side should contain the context reasoning component; implementing such a component lends credence to this belief in that the context-aware applications are able to derive the user context profiles. Furthermore, a context cache scheme is implemented to manage the cache on the client device in order to minimize processing requirements and other resources (bandwidth, CPU cycles, power). Java and MySQL platforms are used to implement the proposed architecture and to test scenarios derived from users’ daily activities. To meet the practical demands of a testing environment without the heavy cost of establishing a comprehensive infrastructure, a software simulation using the free Yahoo search API is provided as a means to evaluate the effectiveness of the design approach realistically. The integration of the Yahoo search engine into the context-aware architecture demonstrates how a context-aware application can meet user demands for tailored services and products in and around the user’s environment. The test results show that the overall design is highly effective, providing new features and enriching the mobile user’s experience through a broad scope of potential applications.
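
The abstract mentions a client-side context cache for conserving bandwidth, CPU cycles and power, but gives no code. Below is a minimal Python sketch of such a cache, assuming a simple LRU-with-TTL policy; the class, method and parameter names are illustrative, not taken from the dissertation.

```python
import time
from collections import OrderedDict

class ContextCache:
    """LRU cache with per-entry TTL for derived user-context values.

    A minimal sketch of the kind of client-side context cache the
    abstract describes: it keeps recently derived context profiles on
    the device so repeated lookups avoid network round-trips and the
    associated bandwidth, CPU and power costs.
    """

    def __init__(self, capacity=64, ttl_seconds=300):
        self.capacity = capacity
        self.ttl = ttl_seconds
        self._entries = OrderedDict()  # key -> (value, expiry_time)

    def get(self, key):
        entry = self._entries.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() > expires_at:            # stale context: drop it
            del self._entries[key]
            return None
        self._entries.move_to_end(key)          # mark as recently used
        return value

    def put(self, key, value):
        self._entries[key] = (value, time.time() + self.ttl)
        self._entries.move_to_end(key)
        if len(self._entries) > self.capacity:  # evict least recently used
            self._entries.popitem(last=False)

# Example: cache a derived context profile on the client.
cache = ContextCache(capacity=2, ttl_seconds=60)
cache.put("user42:location", {"cell": "campus", "confidence": 0.9})
print(cache.get("user42:location"))
```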

Relevance:

40.00%

Publisher:

Abstract:

Cloud computing offers the massive scalability and elasticity required by many scientific and commercial applications. Combining the computational and data handling capabilities of clouds with parallel processing also has the potential to tackle Big Data problems efficiently. Science gateway frameworks and workflow systems enable application developers to implement complex applications and make these available to end-users via simple graphical user interfaces. The integration of such frameworks with Big Data processing tools on the cloud opens new opportunities for application developers. This paper investigates how workflow systems and science gateways can be extended with Big Data processing capabilities. A generic approach based on infrastructure-aware workflows is suggested, and a proof of concept is implemented based on the WS-PGRADE/gUSE science gateway framework and its integration with the Hadoop parallel data processing solution, based on the MapReduce paradigm, in the cloud. The provided analysis demonstrates that the described methods for integrating Big Data processing with workflows and science gateways work well in different cloud infrastructures and application scenarios, and can be used to create massively parallel applications for scientific analysis of Big Data.
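
As a rough illustration of the MapReduce paradigm that the gateway integrates via Hadoop, the following self-contained Python sketch simulates the map, shuffle and reduce phases locally on the canonical word-count example. It is a pedagogical stand-in, not the WS-PGRADE/gUSE integration itself.

```python
from collections import defaultdict

# Map phase: emit (word, 1) pairs for every word in an input record.
def map_fn(record):
    for word in record.split():
        yield word.lower(), 1

# Reduce phase: sum the counts collected for each word.
def reduce_fn(key, values):
    return key, sum(values)

def run_mapreduce(records):
    # Shuffle: group intermediate pairs by key, as Hadoop does
    # between the map and reduce phases.
    groups = defaultdict(list)
    for record in records:
        for key, value in map_fn(record):
            groups[key].append(value)
    return dict(reduce_fn(k, vs) for k, vs in sorted(groups.items()))

records = ["big data on the cloud", "workflows on the cloud"]
print(run_mapreduce(records))
# {'big': 1, 'cloud': 2, 'data': 1, 'on': 2, 'the': 2, 'workflows': 1}
```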

Relevance:

40.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-07

Relevance:

40.00%

Publisher:

Abstract:

For this project, we developed a platform for genome-wide analysis of DNA methylation in cattle that is compatible with small samples. This tool is used to study the genetic and epigenetic (DNA methylation) characteristics of gametes subjected to assisted reproduction procedures and of early embryos. First, a microarray analysis platform specific to the study of DNA methylation in the bovine species was developed. This platform was then optimized to produce reliable and reproducible genome-wide DNA methylation analyses from very small samples such as early embryos (≥ 10 ng of DNA was used, corresponding to 10 expanded blastocysts). In addition, this tool made it possible to assess DNA methylation and the transcriptome simultaneously in the same sample, thus providing a complete picture of the genetic and epigenetic (DNA methylation) profiles. As proof of concept, comparative genome-wide DNA methylation profiles of bovine sperm and blastocysts were analyzed. Second, using this platform, the global DNA methylation profiles of monozygotic (MZ) twin bulls were analyzed. Although they are genetically identical, MZ twin bulls produce offspring with different performance. We therefore hypothesized that the sperm DNA methylation profiles of MZ twin bulls differ. In our study, significant differences between the MZ twins were found both in semen characteristics and in DNA methylation, each of which may contribute to the incongruously divergent performance of the daughters sired by these MZ twins. In the third part of this project, the same platform was used to investigate the impact of supplementation with a high concentration of a universal methyl donor on early bovine embryos. Supplementation with large amounts of folic acid (FA) has been widely used and recommended for pregnant women because of its well-established ability to prevent neural tube defects in children. More recently, however, several studies have reported adverse effects of FA used at high concentrations, not only on embryo development but also in adults. At the cellular level, FA enters one-carbon metabolism, the only pathway producing S-adenosylmethionine (SAM), a universal donor of methyl groups for a wide variety of biomolecules, including DNA. Therefore, to address this controversy, a high dose of SAM was used to treat bovine embryos produced in vitro. This not only influenced the phenotype of the early embryos, but also had an impact on the transcriptome and the DNA methylome. In sum, this project led to the development of a reasonably priced, easy-to-use platform for genome-wide DNA methylation analysis in cattle that is compatible with early embryos.
Moreover, since this is one of the first studies of its kind in bovine reproductive biology, the project had three objectives that yielded several new findings, including comparative DNA methylation profiles of: i) blastocysts versus spermatozoa; ii) semen from MZ twin bulls; and iii) early embryos treated with high doses of SAM versus untreated early embryos.

Relevance:

40.00%

Publisher:

Abstract:

The evolution and maturation of Cloud Computing created an opportunity for the emergence of new Cloud applications. High-Performance Computing (HPC), a class of complex problem solving, arises as a new business consumer, taking advantage of Cloud premises and leaving behind expensive datacenter management and difficult grid development. Now in an advanced maturation phase, today’s Cloud has shed many of its drawbacks, becoming more and more efficient and widespread. Performance enhancements, price drops due to mass adoption, and customizable on-demand services have attracted increased attention from other markets. HPC, although a very well established field, traditionally has a narrow deployment frontier and runs on dedicated datacenters or large computing grids. The main problems with the conventional placement are the initial cost and the inability to fully use the resources, which not all research labs can afford. The main objective of this work was to investigate new technical solutions that allow the deployment of HPC applications on the Cloud, with particular emphasis on private on-premise resources, the lower end of the chain, which reduces costs. The work includes many experiments and analyses to identify obstacles and technology limitations. The feasibility of the objective was tested with new modeling, a new architecture, and the migration of several applications. The final application integrates both public and private Cloud resources in a simplified way, as well as HPC application scheduling, deployment and management. It uses a well-defined user role strategy, based on federated authentication, and a seamless procedure for daily usage that balances low cost and performance.
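
A minimal Python sketch of the kind of placement decision such a system must make: prefer free private on-premise capacity and burst to the public Cloud only when the job does not fit and its cost stays within budget. All names, prices and the policy itself are illustrative assumptions, not the thesis design.

```python
# Hypothetical resource descriptors; capacities and prices are
# illustrative assumptions, not values from the thesis.
PRIVATE = {"name": "on-premise", "free_cores": 64, "cost_per_core_hour": 0.0}
PUBLIC = {"name": "public-cloud", "free_cores": 10_000, "cost_per_core_hour": 0.05}

def place_job(cores_needed, hours, budget):
    """Prefer free private on-premise capacity; burst to the public
    Cloud only when the job does not fit and its cost stays in budget."""
    if cores_needed <= PRIVATE["free_cores"]:
        return PRIVATE["name"], 0.0
    cost = cores_needed * hours * PUBLIC["cost_per_core_hour"]
    if cost <= budget:
        return PUBLIC["name"], cost
    raise RuntimeError("job fits neither private capacity nor budget")

print(place_job(32, 4, budget=50))   # ('on-premise', 0.0)
print(place_job(256, 4, budget=60))  # ('public-cloud', 51.2)
```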

Relevance:

40.00%

Publisher:

Abstract:

A primary goal of context-aware systems is delivering the right information at the right place and time to users, in order to enable them to make effective decisions and improve their quality of life. There are three key requirements for achieving this goal: determining what information is relevant, personalizing it based on the users’ context (location, preferences, behavioral history, etc.), and delivering it to them in a timely manner without an explicit request. These requirements create a paradigm that we term “Proactive Context-aware Computing”. Most existing context-aware systems fulfill only a subset of these requirements. Many of them focus only on personalizing the requested information based on the users’ current context. Moreover, they are often designed for specific domains. In addition, most existing systems are reactive: the users request some information and the system delivers it to them. These systems are not proactive, i.e., they cannot anticipate users’ intent and behavior and act proactively without an explicit request. To overcome these limitations, we need to conduct a deeper analysis and enhance our understanding of context-aware systems that are generic, universal, proactive and applicable to a wide variety of domains. To support this dissertation, we explore several directions. Clearly, the most significant sources of information about users today are smartphones. A large amount of users’ context can be acquired through them, and they can serve as an effective means of delivering information to users. In addition, social media such as Facebook, Flickr and Foursquare provide a rich and powerful platform for mining users’ interests, preferences and behavioral history. We employ the ubiquity of smartphones and the wealth of information available from social media to address the challenge of building proactive context-aware systems. We have implemented and evaluated several approaches, including some as part of the Rover framework, to achieve the paradigm of Proactive Context-aware Computing. Rover is a context-aware research platform that has been evolving for the last six years. Since location is one of the most important contexts for users, we have developed ‘Locus’, an indoor localization, tracking and navigation system for multi-story buildings. Other important dimensions of users’ context include the activities they are engaged in. To this end, we have developed ‘SenseMe’, a system that leverages the smartphone and its multiple sensors to perform multidimensional context and activity recognition for users. As part of the ‘SenseMe’ project, we also conducted an exploratory study of privacy, trust, risks and other user concerns with smartphone-based personal sensing systems and applications. To determine what information is relevant to users’ situations, we have developed ‘TellMe’, a system that employs a new, flexible and scalable approach based on Natural Language Processing techniques to perform bootstrapped discovery and ranking of relevant information in context-aware systems. To personalize the relevant information, we have also developed an algorithm and system for mining a broad range of users’ preferences from their social network profiles and activities.
For recommending new information to users based on their past behavior and context history (such as visited locations, activities and time), we have developed a recommender system and an approach for performing multi-dimensional collaborative recommendations using tensor factorization. For timely delivery of personalized and relevant information, it is essential to anticipate and predict users’ behavior. To this end, we have developed a unified infrastructure within the Rover framework and implemented several novel approaches and algorithms that employ various contextual features and state-of-the-art machine learning techniques to build diverse behavioral models of users. Examples of the generated models include classifying users’ semantic places and mobility states, predicting their availability for accepting calls on smartphones, and inferring their device charging behavior. Finally, to enable proactivity in context-aware systems, we have also developed a planning framework based on Hierarchical Task Network (HTN) planning. Together, these works provide a major push in the direction of proactive context-aware computing.
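
The abstract names tensor factorization as the basis of the multi-dimensional recommender, but gives no algorithm. Below is a minimal Python sketch of a rank-R CP factorization of a (user, item, context) rating tensor trained by stochastic gradient descent on observed entries; the dimensions, hyperparameters and toy data are illustrative assumptions, not the dissertation's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_cp(observations, n_users, n_items, n_ctx, rank=4,
             lr=0.05, reg=0.01, epochs=500):
    """Fit a rank-`rank` CP factorization of the (user, item, context)
    rating tensor by stochastic gradient descent on observed entries.

    `observations` is a list of (u, i, c, rating) tuples. The predicted
    rating is the sum over latent factors: sum_r U[u,r] * I[i,r] * C[c,r].
    """
    U = 0.1 * rng.standard_normal((n_users, rank))
    I = 0.1 * rng.standard_normal((n_items, rank))
    C = 0.1 * rng.standard_normal((n_ctx, rank))
    for _ in range(epochs):
        for u, i, c, r in observations:
            pred = np.sum(U[u] * I[i] * C[c])
            err = pred - r
            # Gradient of squared error plus L2 regularization.
            gu = err * I[i] * C[c] + reg * U[u]
            gi = err * U[u] * C[c] + reg * I[i]
            gc = err * U[u] * I[i] + reg * C[c]
            U[u] -= lr * gu
            I[i] -= lr * gi
            C[c] -= lr * gc
    return U, I, C

# Toy data: user 0 likes item 0 in context "morning" (0), not at "night" (1).
obs = [(0, 0, 0, 5.0), (0, 0, 1, 1.0), (1, 1, 0, 4.0), (1, 0, 1, 2.0)]
U, I, C = train_cp(obs, n_users=2, n_items=2, n_ctx=2)
# Prediction for (user 0, item 0, context 0); the observed rating was 5.0.
print(round(float(np.sum(U[0] * I[0] * C[0])), 2))
```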

Relevance:

40.00%

Publisher:

Abstract:

In today’s big data world, data is being produced in massive volumes, at great velocity, and from a variety of sources such as mobile devices, sensors, the plethora of small devices hooked to the internet (the Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are increasingly used to derive value from this big data. A large portion of this data is stored and processed in the Cloud due to the several advantages the Cloud provides, such as scalability, elasticity, availability, low cost of ownership and overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing size (progressive samples) for exploratory querying. This gives data scientists user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task, and loading them onto distributed memory, leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
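
A minimal Python sketch of the progressive-sampling idea behind NOW!: refine an aggregate over growing samples while reusing the work already done on earlier batches instead of recomputing from scratch. It assumes the data is pre-shuffled so that prefixes are valid samples; it is an illustration of the idea, not the NOW! implementation.

```python
import random

random.seed(7)
# Pre-shuffled synthetic "table": prefixes act as random samples.
population = [random.gauss(100, 15) for _ in range(100_000)]

def progressive_mean(data, batch_size=1_000, batches=5):
    """Yield progressively refined estimates of the mean, reusing the
    running sum and count from earlier samples: only the new batch is
    scanned at each step."""
    total, count = 0.0, 0
    for b in range(batches):
        batch = data[b * batch_size:(b + 1) * batch_size]
        total += sum(batch)
        count += len(batch)
        yield count, total / count

for sample_size, estimate in progressive_mean(population):
    print(f"after {sample_size:5d} rows: mean ~ {estimate:.2f}")
```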

Relevance:

40.00%

Publisher:

Abstract:

Automation technologies are widely acclaimed for their potential to significantly reduce energy consumption and energy-related costs in buildings. However, despite the abundance of commercially available technologies, automation in domestic environments keeps meeting commercial failure. The main reason for this is the development process used to build the automation applications, which tends to focus more on technical aspects than on the needs and limitations of the users. An instance of this problem is the complex and poorly designed home automation front-ends that deter customers from investing in a home automation product. On the other hand, developing a usable and interactive interface is a complicated task for developers due to the multidisciplinary challenges that need to be identified and solved. In this context, the current research work investigates the different design problems associated with developing a home automation interface, as well as the existing design solutions that are applied to these problems. A Qualitative Data Analysis approach was used for collecting data from research papers, and an open coding process was used to cluster the findings. From the analysis of the collected data, requirements for designing the interface were derived. A home energy management functionality for a Web-based home automation front-end was developed as a proof of concept, and a user evaluation was used to assess the usability of the interface. The results of the evaluation showed that this holistic approach to designing interfaces improved usability, which increases the chances of commercial success.

Relevance:

40.00%

Publisher:

Abstract:

Measuring and fulfilling user requirements during medical device development will result in successful products that improve patient safety, improve device effectiveness, and reduce product recalls and modifications. Medical device users are an extremely heterogeneous group, and for any one device the users may include patients and their carers as well as various healthcare professionals. A number of factors make capturing user requirements for medical device development challenging, including the ethics and research governance involved in studying users, as well as the inevitable time and financial constraints. Most ergonomics research methods have been developed in response to such practical constraints, and a number of these have potential for medical device development. Some are suitable for specific points in the device cycle, such as contextual inquiry and ethnography; others, such as usability tests and focus groups, may be used throughout development. When designing user research, a number of factors may affect the quality of the data collected, including the sample of users studied, the use of proxies instead of real end-users, and the context in which the research is performed. As different methods are effective in identifying different types of data, ideally more than one method should be used at each point in development; however, financial and time factors may often constrain this.

Relevance:

40.00%

Publisher:

Abstract:

Physiological signals, which are controlled by the autonomic nervous system (ANS), could be used to detect the affective state of computer users and therefore find applications in medicine and engineering. The Pupil Diameter (PD) seems to provide a strong indication of the affective state, as found by previous research, but it has not yet been investigated fully. In this study, new approaches based on monitoring and processing the PD signal for off-line and on-line affective assessment (“relaxation” vs. “stress”) are proposed. Wavelet denoising and Kalman filtering methods are first used to remove abrupt changes in the raw PD signal. Then three features (PDmean, PDmax and PDWalsh) are extracted from the preprocessed PD signal for affective state classification. In order to select more relevant and reliable physiological data for further analysis, two types of data selection methods are applied, based on the paired t-test and on subject self-evaluation, respectively. In addition, five different kinds of classifiers are implemented on the selected data, achieving average accuracies of up to 86.43% and 87.20%, respectively. Finally, the receiver operating characteristic (ROC) curve is utilized to investigate the discriminating potential of each individual feature by evaluating the area under the ROC curve, which reaches values above 0.90. For the on-line affective assessment, a hard threshold is first applied to remove eye blinks from the PD signal, and a moving-average window is then utilized to obtain a representative value PDr for every one-second interval of PD. The on-line affective assessment algorithm has three main steps: preparation, feature-based decision voting, and affective determination. The final results show accuracies of 72.30% and 73.55% for the data subsets chosen using the two data selection methods (paired t-test and subject self-evaluation, respectively). To further analyze the efficiency of affective recognition through the PD signal, the Galvanic Skin Response (GSR) was also monitored and processed. The highest affective assessment classification rate obtained from GSR processing was only 63.57% (based on the off-line processing algorithm). The overall results confirm that the PD signal should be considered one of the most powerful physiological signals for future automated real-time affective recognition systems, especially for detecting “relaxation” vs. “stress” states.
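
A minimal Python sketch of the on-line preprocessing step described above: a hard threshold drops blink samples (the pupil diameter collapses toward zero during a blink) and a one-second moving-average window produces one representative value PDr per second. The sampling rate, threshold and synthetic signal are illustrative assumptions, not values from the study, and the windowing here is deliberately simplified.

```python
import numpy as np

def preprocess_pd(pd_signal, fs=60, blink_threshold=2.0):
    """Blink removal by hard threshold, then one PDr value per second
    via a non-overlapping one-second moving average. Note: dropped
    blink samples shift later windows slightly; a real pipeline would
    interpolate instead."""
    pd_signal = np.asarray(pd_signal, dtype=float)
    cleaned = pd_signal[pd_signal > blink_threshold]  # drop blink samples
    n_seconds = len(cleaned) // fs
    windows = cleaned[:n_seconds * fs].reshape(n_seconds, fs)
    return windows.mean(axis=1)                       # PDr per second

# Synthetic 3-second recording at 60 Hz with a blink in second 2.
rng = np.random.default_rng(1)
signal = 4.0 + 0.1 * rng.standard_normal(180)
signal[70:90] = 0.3                                   # blink artifact
print(preprocess_pd(signal))                          # values near 4.0
```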

Relevance:

40.00%

Publisher:

Abstract:

Since children already use and explore applications on smartphones, we use this as the starting point for design. Our monitoring and analysis framework, BaranC, enables us to discover and analyse which applications children use and precisely how they interact with them. The monitoring happens unobtrusively in the background, so children interact normally in their own natural environment, without artificial constraints. Thus, we can discover to what extent a child of a particular age engages with, and how they physically interact with, existing applications. This information in turn provides the basis for the design of new child-centred applications, which can then be subject to the same comprehensive child-use analysis using our framework. The work focuses on the first aspect, namely the monitoring and analysis of current child use of smartphones. Experiments show the value of this approach, and interesting results have been obtained from this precise monitoring of child smartphone usage.

Relevance:

40.00%

Publisher:

Abstract:

User Quality of Experience (QoE) is a subjective entity and difficult to measure. One important aspect of it, User Experience (UX), corresponds to the sensory and emotional state of a user. For a user interacting through a User Interface (UI), precise information on how they are using the UI can contribute to understanding their UX, and thereby their QoE. As well as a user’s use of the UI, such as clicking, scrolling, touching or selecting, other real-time digital information about the user, such as from smartphone sensors (e.g. accelerometer, light level) and physiological sensors (e.g. heart rate, ECG, EEG), could contribute to understanding UX. Baran is a framework designed to capture, record, manage and analyse the User Digital Imprint (UDI), which is the data structure containing all user context information. Baran simplifies the process of collecting experimental information in Human-Computer Interaction (HCI) studies by recording comprehensive real-time data for any UI experiment and making the data available as a standard UDI data structure. This paper presents an overview of the Baran framework and provides an example of its use to record user interaction and perform some basic analysis of the interaction.
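
The UDI is described as a data structure holding all user context information; below is a minimal Python sketch of what one record stream might look like, with UI events and sensor readings sharing a timestamped envelope. The field names and types are illustrative assumptions, not the Baran schema.

```python
from dataclasses import dataclass, field
from time import time
from typing import Any

@dataclass
class UDIEvent:
    """One record in a User Digital Imprint stream: a UI event or a
    sensor reading, with a shared timestamped envelope."""
    source: str                      # e.g. "ui", "accelerometer", "hr"
    kind: str                        # e.g. "click", "scroll", "sample"
    payload: dict[str, Any]
    timestamp: float = field(default_factory=time)

@dataclass
class UDI:
    """A user's digital imprint: the ordered event log for one session."""
    user_id: str
    events: list[UDIEvent] = field(default_factory=list)

    def record(self, source, kind, **payload):
        self.events.append(UDIEvent(source, kind, payload))

udi = UDI(user_id="participant-01")
udi.record("ui", "click", widget="submit", x=120, y=430)
udi.record("accelerometer", "sample", x=0.02, y=0.98, z=0.05)
print(len(udi.events), udi.events[0].kind)  # 2 click
```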

Relevance:

40.00%

Publisher:

Abstract:

An overview is given of a user interaction monitoring and analysis framework called BaranC. Monitoring and analysing human-digital interaction is an essential part of developing a user model as the basis for investigating user experience. The primary human-digital interaction, such as on a laptop or smartphone, is best understood and modelled in the wider context of the user and their environment. The BaranC framework provides monitoring and analysis capabilities that not only record all user interaction with a digital device (e.g. a smartphone), but also collect all available context data (such as from sensors in the digital device itself, a fitness band or smart appliances). The data collected by BaranC is recorded as a User Digital Imprint (UDI) which is, in effect, the user model and provides the basis for data analysis. BaranC provides functionality that is useful for user experience studies, user interface design evaluation, and the provision of user assistance services. An important concern for personal data is privacy, and the framework gives the user full control over the monitoring, storing and sharing of their data.

Relevance:

40.00%

Publisher:

Abstract:

Mobile and wireless networks have long exploited mobility predictions, focused on predicting the future location of given users, to perform more efficient network resource management. In this paper, we present a new approach in which we provide predictions as a probability distribution over the likelihood of moving to a set of future locations. This approach gives wireless services a greater amount of knowledge and enables them to perform more effectively. We present a framework for the evaluation of this new type of predictor, and develop two new predictors, HEM and G-Stat. We evaluate our predictors’ accuracy in predicting future cells for mobile users using two large geolocation data sets, from MDC [11], [12] and Crawdad [13]. We show that our predictors can successfully predict with an average inaccuracy as low as 2.2% in certain scenarios.
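
A minimal Python sketch of a predictor that outputs a probability distribution over next cells, here a first-order Markov model estimated from observed cell transitions. It is a simplified stand-in for predictors like HEM and G-Stat, whose definitions are not given in the abstract.

```python
from collections import Counter, defaultdict

class NextCellPredictor:
    """First-order Markov predictor returning a probability
    distribution over likely next cells rather than a single guess."""

    def __init__(self):
        self.transitions = defaultdict(Counter)  # cell -> Counter of next cells

    def observe(self, trajectory):
        """Count cell-to-cell transitions in one user trajectory."""
        for current, nxt in zip(trajectory, trajectory[1:]):
            self.transitions[current][nxt] += 1

    def predict(self, current_cell):
        """Return {next_cell: probability} given the current cell."""
        counts = self.transitions[current_cell]
        total = sum(counts.values())
        if total == 0:
            return {}
        return {cell: n / total for cell, n in counts.items()}

p = NextCellPredictor()
p.observe(["home", "cafe", "office", "cafe", "home", "cafe", "office"])
print(p.predict("cafe"))  # roughly {'office': 0.67, 'home': 0.33}
```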