803 results for mindfulness-based mobile apps
Abstract:
With the development of electronic devices, more and more mobile clients are connected to the Internet, generating massive amounts of data every day. We live in an age of "Big Data", producing data on the order of hundreds of millions of records daily. By analyzing these data and making predictions, we can draw up better development plans. Unfortunately, traditional computation frameworks cannot meet this demand, which is why Hadoop was put forward. This paper first introduces the background and development status of Hadoop, compares MapReduce in Hadoop 1.0 with YARN in Hadoop 2.0, and analyzes the advantages and disadvantages of each. Because resource management is the core role of YARN, the paper then studies the resource allocation module, including resource management, the resource allocation algorithm, the resource preemption model, and the whole resource scheduling process from requesting resources to completing allocation. It also introduces and compares the FIFO Scheduler, Capacity Scheduler, and Fair Scheduler. The main work of this paper is the study and analysis of YARN's Dominant Resource Fairness (DRF) algorithm and the proposal of a maximum-resource-utilization algorithm based on DRF. The paper also suggests improvements to unreasonable aspects of the resource preemption model. Emphasizing "fairness" during resource allocation is the core concept of YARN's DRF algorithm. Because a cluster serves multiple users and multiple resource types, each user's resource request spans multiple resources. The DRF algorithm divides a user's resources into the dominant resource and normal resources: for a given user, the dominant resource is the one whose share is highest among all requested resources, and the others are normal resources. The DRF algorithm requires the dominant resource share of each user to be equal. But in cases where different users' dominant resource amounts differ greatly, emphasizing "fairness" is not suitable and cannot improve the cluster's resource utilization. By analyzing such cases, this thesis puts forward a new allocation algorithm based on DRF. The new algorithm still takes "fairness" into consideration, but its main principle and goal is maximizing resource utilization. Comparing the results of DRF and the new DRF-based algorithm shows that the new algorithm achieves higher resource utilization than DRF. The last part of the thesis sets up a YARN environment and uses the Scheduler Load Simulator (SLS) to simulate a cluster environment.
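Since the dominant-share rule described above is compact, a minimal Python sketch of classic DRF may help. It uses the well-known example capacities and per-task demand vectors from the original DRF paper and is an illustration of the baseline algorithm only, not the thesis's maximum-utilization variant:

```python
# Minimal sketch of classic Dominant Resource Fairness (DRF) allocation,
# assuming fixed per-task demand vectors; an illustration of the baseline
# algorithm the thesis modifies, not the proposed maximum-utilization variant.
capacity = {"cpu": 9, "mem": 18}              # total cluster resources
demands = {"A": {"cpu": 1, "mem": 4},         # per-task demand of user A
           "B": {"cpu": 3, "mem": 1}}         # per-task demand of user B

used = {r: 0 for r in capacity}
tasks = {u: 0 for u in demands}               # tasks granted to each user

def dominant_share(user):
    # the user's highest share across all resources it requests
    return max(tasks[user] * need / capacity[r]
               for r, need in demands[user].items())

while True:
    user = min(demands, key=dominant_share)   # lowest dominant share goes first
    need = demands[user]
    if any(used[r] + need[r] > capacity[r] for r in need):
        break                                 # simplification: stop at first misfit
    for r in need:
        used[r] += need[r]
    tasks[user] += 1

print(tasks, used)   # with these numbers: A gets 3 tasks, B gets 2
```

Equalizing dominant shares is exactly the "fairness" the thesis argues against when users' dominant demands differ greatly: the loop above keeps both users' dominant shares near 2/3 even if capacity is left idle.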
Abstract:
With the rapid development of Internet technologies, video and audio processing have become among the most important areas due to the constant demand for high-quality media content. Along with improvements in network environments and hardware, this demand is becoming ever more pressing: people prefer high-quality video and audio as well as streaming media resources on the net. FFmpeg is a set of open-source programs for A/V encoding and decoding, and many commercial players use FFmpeg as their playback core. This paper designs a simple, easy-to-use video player based on FFmpeg. The first part covers the basic theory and background of video playback, including concepts such as data formats, streaming media data, and video encoding and decoding. In short, the realization of the video player depends on a chain of video decoding steps: receive video packets from the Internet, read and strip the related protocols, de-encapsulate the container data to obtain encoded streams, and decode them into pixel data that can be displayed directly through the graphics card. During encoding and decoding there can be varying degrees of data loss, known as lossy compression, but it usually does not noticeably affect the user experience. The second part covers the principles of the FFmpeg decoding process, one of the key points of the paper. In this project FFmpeg performs the main decoding task: by calling the main functions and structures from the FFmpeg class libraries, packaged video formats are converted into pixel data, and once the pixel data are obtained, SDL is used for display. The third part covers the SDL display flow; similarly, it invokes the important display functions from the SDL class libraries, though SDL can handle much more than display, including many other game-programming tasks. After that, a standalone video player is complete, provided with all the key functions of a player. The fourth part builds a simple user interface for the player based on MFC, enabling the player to be used by most people. Finally, considering the blossoming of the mobile Internet, where people nowadays can hardly put down their mobile phones, there is a brief introduction on porting the video player to Android, one of the most widely used mobile systems.
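To make the demux, decode, and display chain concrete, here is a hedged sketch using FFmpeg's Python bindings (PyAV) and pygame's SDL-backed display. The paper's player calls the FFmpeg and SDL C libraries directly; the file name, the single video stream, and the absence of audio and clock synchronization here are simplifications:

```python
# Illustrative sketch of the decode pipeline: container -> packets -> frames
# -> pixel data -> SDL surface. PyAV and pygame stand in for the C APIs.
import av       # PyAV: Python bindings for the FFmpeg libraries
import pygame   # pygame: SDL-backed display

container = av.open("input.mp4")            # open container, read headers
stream = container.streams.video[0]         # pick the first video stream
pygame.init()
screen = None
for frame in container.decode(stream):      # demux packets and decode frames
    rgb = frame.to_ndarray(format="rgb24")  # decoded pixels, shape (H, W, 3)
    h, w = rgb.shape[:2]
    if screen is None:
        screen = pygame.display.set_mode((w, h))
    surface = pygame.image.frombuffer(rgb.tobytes(), (w, h), "RGB")
    screen.blit(surface, (0, 0))            # hand the pixel data to SDL
    pygame.display.flip()                   # no frame-rate pacing in this sketch
pygame.quit()
```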
Abstract:
NOGUEIRA, Marcelo B.; MEDEIROS, Adelardo A. D.; ALSINA, Pablo J. Pose Estimation of a Humanoid Robot Using Images from a Mobile Extern Camera. In: IFAC WORKSHOP ON MULTIVEHICLE SYSTEMS, 2006, Salvador, BA. Anais... Salvador: MVS 2006, 2006.
Abstract:
In today's big data world, data is being produced in massive volumes, at great velocity, and from a variety of sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (the Internet of Things), social networks, communication networks, and many others. Interactive querying and large-scale analytics are increasingly used to derive value out of this big data. A large portion of this data is stored and processed in the Cloud due to the several advantages the Cloud provides, such as scalability, elasticity, availability, low cost of ownership, and overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage, and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required, so reducing the cost of data analytics in the Cloud remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built, and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing size (progressive samples) for exploratory querying, which provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! delivers early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, and link prediction. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
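To make the neighborhood-centric access pattern concrete, here is a minimal single-machine sketch in Python using networkx; it illustrates the programming model only, not NSCALE's distributed implementation, and the karate-club graph stands in for a large social graph:

```python
# Illustrative only: each vertex's multi-hop ego network is materialized as a
# first-class subgraph the user program can inspect whole, which a
# vertex-centric framework (one vertex's state at a time) does not allow.
import networkx as nx

G = nx.karate_club_graph()               # stand-in for a large social graph
for v in G.nodes():
    ego = nx.ego_graph(G, v, radius=2)   # the 2-hop neighborhood of v, as a subgraph
    # the whole subgraph is available here, enabling tasks like local
    # clustering analysis or social-circle detection around v
    print(v, ego.number_of_nodes(), nx.density(ego))
```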
Abstract:
Authentication plays an important role in how we interact with computers, mobile devices, the web, and so on. The idea of authentication is to uniquely identify a user before granting access to system privileges. For example, in recent years more corporate information and applications have become accessible via the Internet and intranets. Many employees work from remote locations and need access to secure corporate files. During this time, it is possible for malicious or unauthorized users to gain access to the system. For this reason, it is logical to have some mechanism in place to detect whether the logged-in user is the same user in control of the user's session; therefore, highly secure authentication methods must be used. We posit that each of us is unique in our use of computer systems, and it is this uniqueness that is leveraged to "continuously authenticate users" while they use web software. To monitor user behavior, n-gram models are used to capture user interactions with web-based software. This statistical language model captures sequences and sub-sequences of user actions, their orderings, and the temporal relationships that make them unique, providing a model of how each user typically behaves. Users are then continuously monitored during software operation, and large deviations from "normal behavior" can indicate malicious or unintended behavior. This approach is implemented in a system called Intruder Detector (ID) that models user actions as embodied in web logs generated in response to a user's actions. User identification through web logs is cost-effective and non-intrusive. We perform experiments on a large fielded system with web logs of approximately 4000 users. For these experiments, we use two classification techniques: binary and multi-class classification. We evaluate model-specific differences in user behavior based on coarse-grain (i.e., role) and fine-grain (i.e., individual) analysis. A specific set of metrics is used to provide valuable insight into how each model performs. Intruder Detector achieves accurate results when identifying legitimate users and user types, and it is also able to detect outliers in role-based user behavior with optimal performance. In addition to web applications, this continuous monitoring technique can be used with other user-based systems such as mobile devices and the analysis of network traffic.
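As an illustration of the underlying idea (not the Intruder Detector implementation itself), the sketch below trains a per-user bigram model of web-log actions with add-one smoothing and scores how typical a new session is; the action names are invented for the example:

```python
# Train a bigram model over a user's logged actions, then score a new session
# by the geometric mean of its transition probabilities; low scores suggest
# behavior that deviates from the user's learned profile.
import math
from collections import defaultdict

def train_bigram(actions):
    counts = defaultdict(lambda: defaultdict(int))
    for prev, cur in zip(actions, actions[1:]):
        counts[prev][cur] += 1
    vocab = sorted(set(actions))
    model = {}
    for prev in vocab:
        total = sum(counts[prev].values()) + len(vocab)   # add-one smoothing
        model[prev] = {cur: (counts[prev][cur] + 1) / total for cur in vocab}
    return model, vocab

def typicality(model, vocab, actions):
    floor = 1.0 / (len(vocab) + 1)   # crude probability for unseen events
    logp = sum(math.log(model.get(p, {}).get(c, floor))
               for p, c in zip(actions, actions[1:]))
    return math.exp(logp / max(len(actions) - 1, 1))

history = ["login", "search", "view", "search", "view", "logout"]
model, vocab = train_bigram(history)
print(typicality(model, vocab, ["login", "search", "view", "logout"]))   # high
print(typicality(model, vocab, ["login", "export", "export", "export"])) # low
```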
Abstract:
Quantum sensors based on coherent matter-waves are precise measurement devices whose ultimate accuracy is achieved with Bose-Einstein condensates (BECs) in extended free fall. This is ideally realized in microgravity environments such as drop towers, ballistic rockets and space platforms. However, the transition from lab-based BEC machines to robust and mobile sources with comparable performance is a challenging endeavor. Here we report on the realization of a miniaturized setup, generating a flux of 4×10^5 quantum degenerate Rb-87 atoms every 1.6 s. Ensembles of 1×10^5 atoms can be produced at a 1 Hz rate. This is achieved by loading a cold atomic beam directly into a multi-layer atom chip that is designed for efficient transfer from laser-cooled to magnetically trapped clouds. The attained flux of degenerate atoms is on par with current lab-based BEC experiments while offering significantly higher repetition rates. Additionally, the flux is approaching those of current interferometers employing Raman-type velocity selection of laser-cooled atoms. The compact and robust design allows for mobile operation in a variety of demanding environments and paves the way for transportable high-precision quantum sensors.
Abstract:
The evolution of cellular systems towards the third generation (3G), or IMT-2000, tends to adopt W-CDMA as the standard access method, as ETSI decisions have shown. However, questions remain about this access method's capacity improvements and overall suitability. One of the aspects that worries developers and researchers planning the third generation is the extended use of the Internet and ever more bandwidth-hungry applications. This work shows the performance of a W-CDMA system simulated on a PC using coverage maps generated with DC-Cell, a GIS-based planning tool developed by the Technical University of Valencia, Spain. The maps are exported to MATLAB and used in the model. The system studied consists of several microcells in a downtown area. We analyse the interference from users in the same cell and in adjacent cells, and its effect on the system, assuming perfect power control in each cell. The traffic generated by the simulator is voice and data. This model allows us to work with more accurate coverage and is a good approach for analysing the multiple access interference (MAI) problem in microcellular systems with irregular coverage. Finally, we compare the results obtained with the performance of a similar system using TDMA.
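For orientation, the interference-limited capacity that such simulations probe can be approximated with a standard textbook estimate. The sketch below uses assumed parameter values and is a back-of-the-envelope calculation, not the DC-Cell/MATLAB model from the paper:

```python
# Back-of-the-envelope, interference-limited uplink capacity of a single
# CDMA cell under perfect power control; all parameter values are assumed.
W = 3.84e6                  # W-CDMA chip rate (chips/s)
R = 12.2e3                  # voice bit rate (bit/s)
eb_n0 = 10 ** (5.0 / 10)    # required Eb/N0 of 5 dB, as a linear ratio
v = 0.5                     # voice activity factor (assumed)
f = 0.55                    # other-cell to own-cell interference factor (assumed)

G = W / R                            # processing gain, about 315
users = G / (eb_n0 * v * (1 + f))    # supportable simultaneous voice users
print(round(users))                  # about 128 with these assumptions
```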
Abstract:
We show a simulation model for capacity analysis in mobile systems that combines MATLAB with a geographic information system (GIS) based tool used for coverage calculations and frequency assignment. The model was developed initially for "narrowband" CDMA and TDMA, but was modified for W-CDMA. We also show some results for a specific case in "narrowband" CDMA.
Abstract:
In previous papers we described a model for capacity analysis in CDMA systems using DC-Cell, a GIS-based planning tool developed at Universidad Politecnica de Valencia, and MATLAB. We showed some initial results of that model, and we are now exploring different parameters, such as cell size, proximity between cells, number of cells in the system, and CDMA "clustering", in order to improve the planning process for third-generation systems. In this paper we show the results of varying some of these parameters, specifically the cell size and the number of cells. In CDMA systems it is quite common to assume only one carrier frequency for capacity estimation, and it is intuitive to think that more base stations mean more users. However, the multiple access interference problem in CDMA systems can put a limit on that assumption, in a similar way to what occurs in FDMA and TDMA systems.
Abstract:
Nowadays there is almost no crime committed without a trace of digital evidence, and since the advanced functionality of today's mobile devices can be exploited to assist in crime, the need for mobile forensics is imperative. Many of the mobile applications available today, including internet browsers, request the user's permission to access their current location when in use. This geolocation data is subsequently stored and managed in that application's underlying database files. If recovered from a device during a forensic investigation, such GPS evidence and track points can hold major evidentiary value for a case. The aim of this paper is to examine and compare to what extent geolocation data is available from the iOS and Android operating systems. We focus particularly on geolocation data recovered from internet browsing applications, comparing the native Safari and Browser apps with Google Chrome, downloaded onto both platforms. All browsers were used over a period of several days at various locations to generate comparable test data for analysis. Results show considerable differences not only in the storage locations and formats, but also in the amount of geolocation data stored by different browsers and on different operating systems.
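Browsers generally keep such records in SQLite database files, so a recovered file can be inspected with a few lines of Python. The sketch below is illustrative only: the file, table, and column names (recovered_browser.db, geo_points, latitude/longitude/timestamp) are hypothetical placeholders, since actual schemas vary by browser, platform, and version:

```python
# Hedged, illustrative sketch: all names below are hypothetical placeholders;
# real browser database schemas differ per browser, OS, and version.
import sqlite3

conn = sqlite3.connect("recovered_browser.db")   # file carved from a device image
rows = conn.execute(
    "SELECT latitude, longitude, timestamp FROM geo_points ORDER BY timestamp"
)
for lat, lon, ts in rows:
    print(ts, lat, lon)   # candidate track points for the investigative timeline
conn.close()
```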
Abstract:
Tourism is growing and becoming more competitive. Destinations need to find elements that demonstrate their uniqueness, the singularity that allows them to differentiate themselves from others. This struggle for uniqueness makes economies more competitive, and competition is a central element in the dynamics of tourism. Technology is also an added value for tourism competitiveness, as it allows destinations to become internationalised and known worldwide. In this scenario, research has increased as a means to study tourism trends in fields such as sociology and marketing. Nevertheless, there are fundamental areas in which little research has been done: those concerned with identities, communication and interpersonal relations. In this regard, linguistics has a major role for different reasons: firstly, it studies language itself and, through it, communication; secondly, language conveys culture; and thirdly, it is by enriching language users that innovation in tourism, and in knowledge as a whole, is made possible. This innovation, in turn, has repercussions in areas such as management, internationalisation and marketing. It is, therefore, the objective of this thesis to report on how learning experiences take place in tourism undergraduate English language classes, as well as to give an account of enhanced results in classes where mobile learning was adopted. In this way, an alliance between practice and research was established. This benefits the teaching and learning process because, by establishing links between research-based insight and practice, the outcome is grounded knowledge that helps make solid educational decisions. This research therefore allows us to better understand whether learners accept working with mobile technologies in their learning process. Before introducing any teaching and learning approach, it was also necessary to learn how English-for-tourism programmes are organised. The thesis also illustrates, through the premises of Systemic Functional Linguistics, that language use can be enhanced by using mobile technology in tourism undergraduate language classes.
Abstract:
Context: Mobile applications support a set of user-interaction features that are independent of the application logic; rotating the device, scrolling, and zooming are examples of such features. Some bugs in mobile applications can be attributed to user-interaction features. Objective: This paper proposes and evaluates a bug analyzer based on user-interaction features that uses digital image processing to find bugs. Method: Our bug analyzer detects bugs by comparing the similarity between images taken before and after a user interaction. SURF, an interest point detector and descriptor, is used to compare the images. To evaluate the bug analyzer, we conducted a case study with 15 randomly selected mobile applications. First, we identified user-interaction bugs by manually testing the applications. Images were captured before and after applying each user-interaction feature. Then, image pairs were processed with SURF to obtain interest points, from which a similarity percentage was computed, to finally decide whether there was a bug. Results: We performed a total of 49 user-interaction feature tests. Manual testing of the applications found 17 bugs, whereas image processing detected 15. Conclusions: 8 out of the 15 mobile applications tested had bugs associated with user-interaction features. Our image-processing-based bug analyzer was able to detect 88% (15 out of 17) of the user-interaction bugs found with manual testing.
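The comparison step can be sketched with OpenCV. Note that SURF requires the opencv-contrib build (cv2.xfeatures2d) and is patent-encumbered, with cv2.ORB_create() as a free drop-in alternative; the 0.70 similarity threshold below is an assumed value, not the paper's calibrated one:

```python
# Illustrative before/after screenshot comparison with SURF interest points;
# the threshold and file names are assumptions for the example.
import cv2

before = cv2.imread("before.png", cv2.IMREAD_GRAYSCALE)   # screenshot pre-interaction
after = cv2.imread("after.png", cv2.IMREAD_GRAYSCALE)     # screenshot post-interaction

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp1, des1 = surf.detectAndCompute(before, None)
kp2, des2 = surf.detectAndCompute(after, None)

# match descriptors and keep the good matches via Lowe's ratio test
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

similarity = len(good) / max(len(kp1), 1)   # fraction of interest points preserved
print("possible user-interaction bug" if similarity < 0.70 else "no bug detected")
```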
Abstract:
Jerne's idiotypic network theory postulates that the immune response involves inter-antibody stimulation and suppression as well as matching to antigens. The theory has proved the most popular Artificial Immune System (AIS) model for incorporation into behavior-based robotics, but guidelines for implementing idiotypic selection are scarce. Furthermore, the direct effects of employing the technique have not been demonstrated in the form of a comparison with non-idiotypic systems. This paper aims to address these issues. A method for integrating an idiotypic AIS network with a Reinforcement Learning (RL) based control system is described, and the mechanisms underlying antibody stimulation and suppression are explained in detail. Some hypotheses that account for the network advantage are put forward and tested using three systems with increasing idiotypic complexity: the basic RL system, a simplified hybrid AIS-RL that implements idiotypic selection independently of derived concentration levels, and a full hybrid AIS-RL scheme. The test bed takes the form of a simulated Pioneer robot that is required to navigate through maze worlds, detecting and tracking door markers.
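As a rough illustration of the stimulation/suppression bookkeeping involved, the sketch below iterates a simplified concentration update in the spirit of Farmer's idiotypic network equation; the matrices, gains, and normalization are assumptions for the example, not the paper's controller:

```python
# Rough illustration: antibody concentrations rise with antigen match and
# inter-antibody stimulation and fall with suppression and decay. All values
# here are invented for the example.
import numpy as np

m = np.array([[0.0, 0.8, 0.1],        # m[i, j]: stimulation of antibody i by j
              [0.2, 0.0, 0.7],
              [0.6, 0.3, 0.0]])
antigen = np.array([0.9, 0.4, 0.1])   # match strength of each antibody to the antigen
conc = np.ones(3) / 3                 # initial antibody concentrations
k1, k2, dt = 0.1, 0.05, 0.1           # growth gain, decay gain, time step

for _ in range(100):
    stim = m @ conc                   # stimulation received from the network
    supp = m.T @ conc                 # suppression exerted by the network
    dconc = k1 * (antigen + stim - supp) * conc - k2 * conc
    conc = np.clip(conc + dt * dconc, 0.0, None)
    conc /= conc.sum()                # keep concentrations on a comparable scale

print(conc)   # the antibody (behavior) with the highest value would be selected
```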
Abstract:
This research takes as its object of study Art Nouveau in the cities of Aveiro and Ílhavo, subsequently translating into the creation of a mobile application for a distinctive route through the Art Nouveau heritage located in the two cities. In Aveiro there is a notable awareness of the heritage that fits the spirit and characteristics of the Art Nouveau style, owing largely to the originality and diversity of interpretations that characterize it locally. However, despite improvements in the communication of this heritage, especially after the rehabilitation of the Mário Belmonte Pessoa building and its transformation into the Museu Arte Nova, the current route does not cover all artefacts of the style in the city and excludes those outside Aveiro's urban centre. In Ílhavo the reality is completely different from that of its neighbour Aveiro: little care has been given to the treatment of information on the Art Nouveau style, with only a few brief references in cultural guides and a limited route on the website of the Câmara Municipal de Ílhavo. At the same time, the Aveiro region has become one of the regions with the most significant Art Nouveau collection in the country. Given its cultural and local importance, the distinctive route we propose in this research will intervene to improve what already exists, deepening knowledge of the theme and cataloguing, cross-referencing and grouping all the information on the artefacts dispersed across the two cities, so as to make searching for and accessing the information easier. In a first phase, the research focuses on the content associated with each artefact, methodologically handled through Francisco Providência's triangular method: authorial interpretation (authorship), which translates into the evolution of the buildings (technology), and the relevance of their history (programme) to the national heritage. Subsequently, this content is adapted into a mobile application that eases access to the previously selected information on each artefact, presenting a brief history of the manifestations of Art Nouveau in the cities of Aveiro and Ílhavo. This mobile application will make it possible to follow the evolution of the buildings from their construction to the present day, in terms of structural restoration or the lack of rehabilitation and recovery. It will help determine whether the buildings have kept (or not) their original design and technological characteristics, making use of technologies such as Augmented Reality, as well as the principles of generating and reading QR codes, to ease access to, location of, and understanding of that information, while also letting the user embark on a journey through time and experience the route in a different way. In parallel, this route is intended to work as a single route through the Art Nouveau heritage of the two cities, with the aim of expanding to other cities and becoming a single Art Nouveau heritage route for the Aveiro region. The diversity of the Art Nouveau heritage in this region rests on the personal and social stamp that the owners gave their artefacts, as well as on the training and artistic ability strongly influenced by the personal technique, temperament and sensibility of their authors, making these artefacts authentic works of art that deserve study.
In terms of results, it was found that the mobile application prototype matched what had been announced at the research level, and it was therefore of interest to this study to confirm the demonstration of what had been stated. However, it was concluded that the prototype needs to be 'fine-tuned' in future studies. Regardless of the weaknesses found, this mobile application prototype can serve as an excellent means of integrating content that goes beyond the visualization of the artefacts, thus contributing to deepening, and broadening access to, knowledge of the history of Art Nouveau in Portugal.