88 results for computation- and data-intensive applications


Relevance: 100.00%

Abstract:

QoS plays a key role in evaluating a service or a service composition plan across clouds and data centers. Currently, the energy cost of a service's execution is not covered by the QoS framework, and a service's price is often fixed during its execution. However, energy consumption contributes significantly to the price of a cloud service, so it is unreasonable to calculate that price from a fixed energy-consumption value when part of the service's energy consumption could be saved during execution. Taking advantage of dynamic energy-aware optimization, this paper proposes a QoS-enhanced method for service computing based on virtual machine (VM) scheduling. Technically, two typical QoS metrics, price and execution time, are taken into consideration. The method consists of two dynamic optimization phases. The first phase aims to give the user a discounted price by transparently migrating his or her task from a VM located on a server with high energy consumption to one with low consumption. The second phase aims to shorten the task's execution time by transparently migrating the task from one VM to another located on a higher-performance server. Experimental evaluation on large-scale service computing across clouds demonstrates the validity of the method.
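The two-phase idea can be pictured with a small scheduling sketch. The Python code below is a minimal illustration, assuming hypothetical Host and Task types and a simple per-host energy and speed model (none of which are specified in the abstract): phase one moves a task to the host with the lowest energy draw, phase two to the fastest host.

```python
from dataclasses import dataclass

# Illustrative sketch only: the host model and migration policy are assumptions,
# not the scheme described in the paper.

@dataclass
class Host:
    name: str
    watts_per_task: float   # energy draw attributable to one task
    speed: float            # relative compute speed (higher is faster)

@dataclass
class Task:
    name: str
    host: Host

def phase_one_energy(task: Task, hosts: list[Host]) -> None:
    """Phase 1: migrate the task to the host with the lowest energy draw,
    which translates into a discounted price for the user."""
    cheapest = min(hosts, key=lambda h: h.watts_per_task)
    if cheapest.watts_per_task < task.host.watts_per_task:
        task.host = cheapest

def phase_two_performance(task: Task, hosts: list[Host]) -> None:
    """Phase 2: migrate the task to a faster host to shorten execution time."""
    fastest = max(hosts, key=lambda h: h.speed)
    if fastest.speed > task.host.speed:
        task.host = fastest

if __name__ == "__main__":
    hosts = [Host("h1", 120.0, 1.0), Host("h2", 80.0, 1.4), Host("h3", 95.0, 2.0)]
    task = Task("render", host=hosts[0])
    phase_one_energy(task, hosts)     # h2: cheapest energy, discounted price
    phase_two_performance(task, hosts)  # h3: shortest execution time
    print(f"{task.name} ends up on {task.host.name}")
```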

Relevance: 100.00%

Abstract:

Directional fluid motion driven by the surface properties of a solid substrate is highly desirable for manipulating microfluidic liquids and collecting water from humid air. Studies of such liquid motion have been confined to dense material surfaces such as flat panels and single filaments. Recently, directional fluid transport through the thickness of thin porous materials has been reported by several research groups. These studies not only attract fundamental, experimental and theoretical interest but also open novel application opportunities. This review article summarizes research progress in directional fluid transport across thin porous materials. It focuses on materials preparation, the basic properties associated with directional fluid transport in thin porous media, and application development. The porous substrates, types of transporting fluids, structure-property attributes, and possible directional fluid transport mechanisms are discussed. A perspective on future development in this field is proposed.

Relevance: 100.00%

Abstract:

Conventional methods of qualitative data analysis require transcription of audio-recorded data before coding and analysis can begin. In this paper Alison Hutchinson describes and illustrates an innovative method of data analysis that uses audio-editing software to save selected audio bytes from digital audio recordings of meetings. The use of a database to code and manage the linked audio files and to generate detailed and summary reports, including code frequencies by participant code and/or meeting, is also highlighted. The advantage of this approach to analysing audio-recorded data is that the process can be undertaken in the medium in which the data were collected. Though time-consuming, it removes the need for expensive and time-intensive transcription of recorded data.
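The coding-and-reporting step lends itself to a small relational sketch. The snippet below is a minimal illustration using an assumed SQLite schema (table and column names are invented here, not taken from the paper): coded audio segments are linked to a file path, a participant code and a meeting, and code frequencies are then reported per participant and meeting.

```python
import sqlite3

# Minimal sketch of a coding database; schema and names are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE coded_segment (
        audio_file   TEXT,   -- path to the saved audio excerpt
        meeting      TEXT,   -- meeting identifier
        participant  TEXT,   -- participant code
        code         TEXT    -- analytic code applied to the segment
    )
""")
conn.executemany(
    "INSERT INTO coded_segment VALUES (?, ?, ?, ?)",
    [
        ("m1_0012.wav", "meeting-1", "P01", "barriers"),
        ("m1_0043.wav", "meeting-1", "P02", "barriers"),
        ("m2_0007.wav", "meeting-2", "P01", "facilitators"),
    ],
)

# Summary report: code frequencies by participant and meeting.
rows = conn.execute("""
    SELECT code, participant, meeting, COUNT(*) AS frequency
    FROM coded_segment
    GROUP BY code, participant, meeting
    ORDER BY code
""").fetchall()
for code, participant, meeting, freq in rows:
    print(f"{code:>14}  {participant}  {meeting}  n={freq}")
```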

Relevance: 100.00%

Abstract:

Complex data is challenging to understand when it is represented as written communication, even when it is structured in a table. However, choosing to represent data in creative ways can aid our understanding of complex ideas and patterns. In this regard, the creative industries have a great deal to offer data-intensive scholarly disciplines. Music, for example, is not often used to interpret data, yet the rhythmic nature of music lends itself to the representation and analysis of temporal data.

Taking the music industry as a case study, this paper explores how data about historical live music gigs can be analysed, extended and re-presented to create new insights. Using a unique process called ‘songification’ we demonstrate how enhanced auditory data design can provide a medium for aural intuition. The case study also illustrates the benefits of an expanded and inclusive view of research, in which computation and communication, method and media, in combination enable us to explore the larger question of how we can employ technologies to produce, represent, analyse, deliver and exchange knowledge.
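As a rough illustration of what an auditory mapping of temporal data can look like, the toy sketch below (an assumed mapping invented for this example, not the authors' ‘songification’ pipeline) turns yearly gig counts into pitches, so that busier years sound higher.

```python
# Toy sonification sketch: map a time series of gig counts to pitches.
# The data and the mapping are illustrative assumptions, not the paper's process.

gigs_per_year = {1975: 12, 1976: 30, 1977: 45, 1978: 28, 1979: 60}

A4 = 440.0               # reference pitch in Hz
SEMITONE = 2 ** (1 / 12)

def count_to_pitch(count: int, low: int, high: int, span_semitones: int = 24) -> float:
    """Linearly map a gig count onto a two-octave pitch range above A4."""
    if high == low:
        return A4
    steps = round((count - low) / (high - low) * span_semitones)
    return A4 * SEMITONE ** steps

low, high = min(gigs_per_year.values()), max(gigs_per_year.values())
for year, count in sorted(gigs_per_year.items()):
    print(f"{year}: {count_to_pitch(count, low, high):.1f} Hz")
```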

Relevance: 100.00%

Abstract:

Wireless body area networks (WBANs), as a promising health-care system, can provide tremendous benefits for timely and continuous patient care and remote health monitoring. Owing to the restrictions on communication, computation and power in WBANs, cloud-assisted WBANs, which offer more reliable, intelligent, and timely health-care services for mobile users and patients, are receiving increasing attention. However, how the cloud server (CS) can aggregate health data multifunctionally and efficiently remains an open issue. In this paper, we propose a privacy-preserving and multifunctional health data aggregation (PPM-HDA) mechanism with fault tolerance for cloud-assisted WBANs. With PPM-HDA, the CS can compute multiple statistical functions of users' health data in a privacy-preserving way to offer various services. In particular, we first propose a multifunctional health data additive aggregation scheme (MHDA+) to support additive aggregate functions such as average and variance. We then put forward MHDA as an extension of MHDA+ to support non-additive aggregations such as min/max, median, percentile, and histogram. PPM-HDA can resist differential attacks, from which most existing data aggregation schemes suffer. The security analysis shows that PPM-HDA can protect users' privacy against many threats. Performance evaluations illustrate that the computational overhead of MHDA+ is significantly reduced with the assistance of the CS. Our MHDA scheme is more efficient than previously reported min/max aggregation schemes in terms of communication overhead when applications require a large plaintext space and highly accurate data.
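To make the additive case concrete, the sketch below shows one standard way a server can compute a sum (and from it average and variance) without seeing individual readings: each user masks their value with random shares that cancel when everything is added. This is a generic masking illustration under assumed parameters and an assumed trusted setup phase, not the MHDA+ construction itself.

```python
import secrets

MOD = 2 ** 64   # working modulus; an assumed parameter, large enough for the sums

def zero_sum_masks(n: int) -> list[int]:
    """Random masks that sum to 0 mod MOD (assumed to be dealt out in a setup phase)."""
    masks = [secrets.randbelow(MOD) for _ in range(n - 1)]
    masks.append((-sum(masks)) % MOD)
    return masks

# Each user's private health reading (e.g., heart rate); the server never sees these.
readings = [72, 88, 65, 94, 70]
n = len(readings)

r = zero_sum_masks(n)   # masks for the values
s = zero_sum_masks(n)   # masks for the squared values

# Users upload only masked values.
masked    = [(x + r[i]) % MOD for i, x in enumerate(readings)]
masked_sq = [(x * x + s[i]) % MOD for i, x in enumerate(readings)]

# The cloud server adds the uploads; the masks cancel, revealing only the aggregates.
total    = sum(masked) % MOD
total_sq = sum(masked_sq) % MOD

mean = total / n
variance = total_sq / n - mean ** 2
print(f"mean={mean:.1f}, variance={variance:.1f}")
```

The same pattern extends to any additive statistic; the non-additive functions mentioned in the abstract (min/max, median, percentile, histogram) require different machinery.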

Relevance: 100.00%

Abstract:

Automating software engineering has been the dream of software engineers for decades. To make this dream come true, data mining can play an important role. Our recent research has shown that, to increase productivity and reduce the cost of software development, it is essential to have an effective and efficient mechanism to store, manage and utilize existing software resources, and thus to automate software analysis, testing and evaluation and to make use of existing software for new problems. This paper first provides a brief overview of traditional data mining, followed by a presentation of data mining in a broader sense. Second, it presents the idea and technology of the software warehouse as an innovative approach to managing software resources, applying the idea of the data warehouse so that software assets are systematically accumulated, deposited, retrieved, packaged, managed and utilized, driven by data mining and OLAP technologies. Third, it presents the concepts and technologies of data mining and the data matrix, including the software warehouse, and their applications to software engineering. The perspectives of the role of the software warehouse and software mining in modern software development are addressed. We expect the results to lead to a streamlined, highly efficient software development process and to enhance productivity in response to the modern challenges of designing and developing software applications.
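A tiny illustration of the warehouse idea, using pandas rather than any specific OLAP engine and an invented asset schema (both are assumptions, not the paper's design): software assets are recorded as facts and rolled up along dimensions such as language and domain, the kind of query a software warehouse would serve.

```python
import pandas as pd

# Illustrative software-asset fact table; the schema is an assumption.
assets = pd.DataFrame([
    {"asset": "csv-parser",  "language": "Java",   "domain": "data",    "reuse_count": 14, "defects": 2},
    {"asset": "http-client", "language": "Java",   "domain": "network", "reuse_count": 30, "defects": 5},
    {"asset": "kmeans",      "language": "Python", "domain": "mining",  "reuse_count": 9,  "defects": 1},
    {"asset": "json-schema", "language": "Python", "domain": "data",    "reuse_count": 21, "defects": 3},
])

# OLAP-style roll-up: how reusable and how defect-prone are assets per language and domain?
summary = (assets
           .groupby(["language", "domain"])
           .agg(total_reuse=("reuse_count", "sum"),
                mean_defects=("defects", "mean"))
           .reset_index())
print(summary)
```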

Relevance: 100.00%

Abstract:

The widespread adoption of cluster computing as a high-performance computing platform has seen the growth of data-intensive scientific, engineering and commercial applications such as digital libraries, climate modeling, computational chemistry, computational fluid dynamics and image repositories. However, I/O subsystem performance has not been keeping pace with processor and memory performance, and is fast becoming the dominant factor in overall system performance. Thus, parallel I/O has become a necessity in the face of performance improvements in other areas of computing systems. This paper addresses the problem of parallel I/O scheduling on cluster computing systems in the presence of data replication. We propose two new I/O scheduling algorithms and evaluate the relative performance of the proposed policies against two existing approaches. Simulation results show that the proposed policies perform substantially better than the baseline policies.
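A minimal sketch of what replication-aware I/O scheduling means in practice (an assumed shortest-queue policy chosen for illustration, not one of the two algorithms proposed in the paper): each request may be served by any node holding a replica of the block, and the scheduler picks the replica whose I/O queue is currently shortest.

```python
from collections import defaultdict

# Illustrative shortest-queue, replication-aware I/O scheduler; placement is assumed.
replicas = {
    "block-A": ["node1", "node2"],
    "block-B": ["node2", "node3"],
    "block-C": ["node1", "node3"],
}

queue_load = defaultdict(int)   # outstanding requests per I/O node

def schedule(block: str) -> str:
    """Send the request to the replica holder with the shortest queue."""
    node = min(replicas[block], key=lambda n: queue_load[n])
    queue_load[node] += 1
    return node

for blk in ["block-A", "block-B", "block-A", "block-C", "block-B", "block-C"]:
    print(f"{blk} -> {schedule(blk)}")
print(dict(queue_load))
```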

Relevance: 100.00%

Abstract:

A common characteristic among parallel/distributed programming languages is that one language is used to specify not only the overall organisation of the distributed application, but also the functionality of the application. That is, the connectivity and functionality of processes are specified within a single program. Connectivity and functionality are, however, independent aspects of a distributed application. This thesis shows that these two aspects can be specified separately, allowing application designers to concentrate freely on either aspect in a modular fashion. Two new programming languages have been developed for specifying each aspect. These languages are for loosely coupled distributed applications based on message passing, and have been designed to simplify distributed programming by completely removing all low-level interprocess communication.

A suite of languages and tools has been designed and developed. It includes the two new languages, parsers, a compilation system that generates intermediate C code compiled to binary object modules, a run-time system to create, manage and terminate several distributed applications, and a shell to communicate with the run-time system.

DAL (Distributed Application Language) and DAPL (Distributed Application Process Language) are the new programming languages for the specification and development of process-oriented, asynchronous message-passing, distributed applications. These two languages have been designed and developed as part of this doctorate in order to specify such distributed applications that execute on a cluster of computers. Both languages are used to specify orthogonal components of an application: on the one hand the organisation of the processes that constitute an application, and on the other the interface and functionality of each process. Consequently, these components can be created in a modular fashion, individually and concurrently. The DAL language is used to specify not only the connectivity of all processes within an application, but also the cluster of computers on which the application executes. Furthermore, sub-clusters can be specified for individual processes of an application to constrain a process to a particular group of computers. The second language, DAPL, is used to specify the interface, functionality and data structures of application processes.

In addition to these languages, a DAL parser, a DAPL parser, and a compilation system have been designed and developed in this project. The compilation system takes DAL and DAPL programs and generates object modules of machine code, one module for each application process. These object modules are used by the Distributed Application System (DAS) to instantiate and manage distributed applications. The DAS system is another new component of this project. Its purpose is to create, manage, and terminate many distributed applications of similar and different configurations. The creation procedure incorporates the automatic allocation of processes to remote machines. Application management includes operations such as deletion, addition, replacement, and movement of processes, as well as detection of and reaction to faults such as a processor crash. A DAS operator communicates with the DAS system via a textual shell called DASH (Distributed Application SHell).

This suite of languages and tools allows distributed applications of varying connectivity and functionality to be specified quickly and simply at a high level of abstraction. DAL and DAPL programs of several processes may require only a few dozen lines, compared with the several hundred lines of equivalent C code generated by the compilation system. Furthermore, the DAL and DAPL compilation system successfully generates binary object modules, and the DAS system succeeds in instantiating and managing several distributed applications on a cluster.
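The separation the thesis argues for can be sketched in a few lines of Python. This is a conceptual illustration only, using invented structures rather than DAL or DAPL syntax: the connectivity of the application is declared in one place as a process graph, the functionality of each process in another, and a small runtime wires them together with queues.

```python
import queue
import threading

# Conceptual illustration of separating connectivity from functionality;
# the structures below are invented, not DAL/DAPL programs.

# -- Connectivity: which process may send to which (the DAL-like part) --
connectivity = {"producer": ["consumer"], "consumer": []}

# -- Functionality: what each process does (the DAPL-like part) --
def producer(inbox, send):
    for i in range(3):
        send("consumer", f"item-{i}")

def consumer(inbox, send):
    for _ in range(3):
        print("consumer received", inbox.get())

behaviours = {"producer": producer, "consumer": consumer}

# -- Minimal runtime: one inbox per process, message passing restricted to declared links --
inboxes = {name: queue.Queue() for name in connectivity}

def make_send(source):
    def send(target, msg):
        assert target in connectivity[source], "link not declared in connectivity"
        inboxes[target].put(msg)
    return send

threads = [threading.Thread(target=behaviours[name], args=(inboxes[name], make_send(name)))
           for name in connectivity]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Either half can be changed independently: rewiring the process graph touches only the connectivity declaration, while changing a process's behaviour touches only its function.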

Relevance: 100.00%

Abstract:

Anycast in the next-generation Internet Protocol is a hot topic in computer networking research. It has promising potential but also many challenges, such as architecture, routing, quality of service, anycast in ad hoc networks, and application-layer anycast. This thesis tackles some of the important topics among them. The thesis first presents an introduction to anycast, followed by related work. Then, as the major contributions, a number of challenging issues are addressed in the following chapters. We tackle the anycast routing problem by proposing a requirement-based probing algorithm at the application layer; compared with the existing periodical-probing routing algorithm, the proposed algorithm improves performance in terms of delay. We address the reliable-service problem by designing a twin-server model for anycast servers, providing a transparent and reliable service for all anycast queries. We address the load-balance problem of anycast servers by proposing new job-deviation strategies that provide a similar quality of service to all clients of anycast servers. We apply the mesh routing methodology to anycast routing in ad hoc networking environments, which provides a reliable routing service and uses far fewer network resources. We combine the anycast protocol and the multicast protocol to provide a bidirectional service, and apply it to Web-based database applications, achieving better query efficiency and data synchronization. Finally, we propose a new Internet-based service, minicast, as the combination of the anycast and multicast protocols. Such a service has potential applications in information retrieval, parallel computing, cache queries, and so on. We show that the minicast service consumes fewer network resources while providing the same services. The last chapter presents the conclusions and discusses future work.
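The flavour of application-layer anycast can be conveyed with a small sketch. It uses an assumed lowest-delay selection rule to illustrate probing-based server choice in general, not the requirement-based algorithm in the thesis: the client probes the candidate replicas of an anycast group and directs the request to the one that currently responds fastest.

```python
import random

# Illustrative application-layer anycast resolver; probing and selection policy
# are assumptions, not the thesis's requirement-based algorithm.

anycast_group = ["server-eu", "server-us", "server-asia"]

def probe(server: str) -> float:
    """Measure round-trip delay to a server (simulated here with random latency, in ms)."""
    return random.uniform(10, 200)

def resolve_anycast(group: list[str]) -> str:
    """Probe every member of the anycast group and pick the lowest-delay server."""
    delays = {server: probe(server) for server in group}
    best = min(delays, key=delays.get)
    print({s: round(d, 1) for s, d in delays.items()}, "-> choose", best)
    return best

resolve_anycast(anycast_group)
```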

Relevance: 100.00%

Abstract:

Artificial neural networks and statistical techniques such as decision trees, discriminant analysis, logistic regression and survival analysis play a crucial role in business intelligence. These predictive analytical tools exploit patterns found in historical data to make predictions about future events. In this paper we present some recent developments of a few of these techniques in financial and business intelligence applications such as fraud detection, bankruptcy prediction and credit scoring.
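As a small, self-contained illustration of the kind of predictive tool surveyed here, the sketch below fits a logistic regression credit-scoring model on synthetic data. The features, the generative rule, and the use of scikit-learn are all assumptions made for illustration, not results from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic credit data: income (k$), debt ratio, years of history -> default (1) or not (0).
rng = np.random.default_rng(0)
n = 500
income = rng.normal(60, 20, n)
debt_ratio = rng.uniform(0, 1, n)
history = rng.integers(0, 30, n)
# Assumed generative rule purely for illustration: high debt and low income raise default risk.
risk = 2.5 * debt_ratio - 0.03 * income - 0.05 * history
default = (risk + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([income, debt_ratio, history])
X_train, X_test, y_train, y_test = train_test_split(X, default, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
print("probability of default for a new applicant:",
      round(model.predict_proba([[45.0, 0.8, 2]])[0, 1], 3))
```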

Relevance: 100.00%

Abstract:

Stroke is a neurological condition that is becoming increasingly common as the population ages. This creates a need for healthcare monitoring systems suitable for home use, with remote access for medical professionals and emergency responders. The mobile phone is becoming an easy-access tool for self-evaluation of health, but it is hindered by inherent limitations in computational power and storage capacity. This research proposes a novel cloud-based architecture for a wearable motion kinematic analysis system that mitigates these deficiencies of mobile devices. The system contains three subsystems: (1) Bio Kin WMS, for measuring the acceleration and rotation of movement; (2) Bio Kin Mobi, for mobile-phone-based data gathering and visualization; and (3) Bio Kin Cloud, for data-intensive computations and storage. The system is implemented as a web system and an Android-based mobile application. The web system communicates with the mobile application using an encrypted data structure containing sensor data and identifiable headings. The raw data, organised according to the identifiable headings, are stored in the Amazon Relational Database Service, which is automatically backed up daily. The system was deployed and tested on Amazon Web Services.
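The data path between the mobile application and the web system can be pictured with a short sketch. The field names, the JSON layout, and the use of Fernet symmetric encryption are assumptions chosen for illustration; the abstract only states that an encrypted structure of sensor data plus identifiable headings is exchanged and stored in a relational database.

```python
import json
from cryptography.fernet import Fernet

# Illustrative payload for wearable motion data; names and encryption choice are assumed.
key = Fernet.generate_key()      # in practice shared between the phone and the web system
cipher = Fernet(key)

payload = {
    "patient_id": "P-0042",      # identifiable heading
    "sensor": "left_wrist",
    "samples": [                  # raw accelerometer / gyroscope readings
        {"t": 0.00, "acc": [0.01, -9.78, 0.12], "gyro": [0.2, 0.0, -0.1]},
        {"t": 0.02, "acc": [0.03, -9.75, 0.10], "gyro": [0.3, 0.1, -0.1]},
    ],
}

# Phone side: serialise and encrypt before upload.
ciphertext = cipher.encrypt(json.dumps(payload).encode("utf-8"))

# Web-system side: decrypt, then store rows keyed by the identifiable headings
# (in the deployed system this would go to a relational database rather than stdout).
record = json.loads(cipher.decrypt(ciphertext))
for sample in record["samples"]:
    print(record["patient_id"], record["sensor"], sample["t"], sample["acc"])
```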

Relevance: 100.00%

Abstract:

The constrained battery power of mobile devices has a serious impact on user experience. As an increasingly prevalent type of application in mobile cloud environments, location-based applications (LBAs) present some inherent limitations concerning energy. For example, the Global Positioning System based positioning mechanism is well known for being extremely power-hungry. Because of the severity of the issue, considerable research has focused on energy-efficient location-sensing mechanisms in the last few years. In this paper, we provide a comprehensive survey of recent work on the low-power design of LBAs. An overview of LBAs and the different location-sensing technologies used today is given. Methods for saving energy with existing location technologies are investigated, and reducing location-update queries and simplifying trajectory data are also discussed. Moreover, we discuss in detail cloud-based schemes that aim to develop new energy-efficient location technologies by leveraging cloud capabilities for storage, computation and sharing. Finally, we conclude the survey and discuss future research directions.
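One of the simplest energy-saving ideas in this space, reducing location-update queries, can be sketched as follows. The threshold value and the haversine-based distance check are illustrative assumptions: the device reports a new position only when it has moved farther than a threshold since the last reported fix, so the GPS/radio path is exercised less often.

```python
import math

# Illustrative update-suppression policy; the threshold is an assumed parameter.
MIN_MOVE_METRES = 50.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

last_reported = None

def maybe_report(lat, lon):
    """Send a location update only if the device has moved more than the threshold."""
    global last_reported
    if last_reported is None or haversine_m(*last_reported, lat, lon) >= MIN_MOVE_METRES:
        last_reported = (lat, lon)
        print(f"update sent: {lat:.5f}, {lon:.5f}")
    else:
        print("update suppressed (saves GPS/radio energy)")

for fix in [(-37.8136, 144.9631), (-37.8137, 144.9632), (-37.8150, 144.9660)]:
    maybe_report(*fix)
```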

Relevance: 100.00%

Abstract:

This thesis analyses and examines the challenges of aggregating sensitive data and of querying the aggregated data at a cloud server. It also delineates applications of sensitive medical data aggregation in several scenarios, and tests privatization techniques to help strengthen both privacy and utility.

Relevance: 100.00%

Abstract:

While High Performance Computing clouds allow researchers to process large amounts of genomic data, complex resource and software configuration tasks must be carried out beforehand. The current trend is to expose applications and data as services, simplifying access to clouds. This paper examines commonly used cloud-based genomic analysis services, introduces the approach of exposing data as services, and proposes two new solutions (HPCaaS and Uncinus) that aim to automate service development, the deployment process, and data provision. By comparing and contrasting these solutions, we identify the key mechanisms of service creation, execution and data access required to support non-computing specialists in using clouds.
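To illustrate what "exposing applications and data as services" means at the simplest level, the sketch below wraps a toy analysis step behind an HTTP endpoint with Flask. The route name, the GC-content computation, and the use of Flask are assumptions made for illustration; this is not the HPCaaS or Uncinus implementation.

```python
from flask import Flask, jsonify, request

# Toy "genomic analysis as a service"; names and analysis are illustrative assumptions.
app = Flask(__name__)

def gc_content(sequence: str) -> float:
    """Fraction of G/C bases in a DNA sequence."""
    sequence = sequence.upper()
    if not sequence:
        return 0.0
    return (sequence.count("G") + sequence.count("C")) / len(sequence)

@app.route("/analyse", methods=["POST"])
def analyse():
    # The researcher posts a sequence; the service hides all cluster/cloud configuration.
    seq = request.get_json(force=True).get("sequence", "")
    return jsonify({"length": len(seq), "gc_content": round(gc_content(seq), 4)})

if __name__ == "__main__":
    app.run(port=8080)
```

A client would then POST a JSON body such as {"sequence": "ACGTGGC"} and receive the computed statistics back, without touching the underlying compute resources.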

Relevance: 100.00%

Abstract:

With the advance of the Internet of Things (IoT), more machine-to-machine (M2M) sensors and devices are connected to the Internet. These sensors and devices generate sensor-based big data and bring new business opportunities and demands for creating and developing sensor-oriented big data infrastructures, platforms and analytics service applications. Big data sensing is becoming a new concept and the next technology trend, built on the connected sensor world enabled by the IoT. It has a strong impact on many sensor-oriented applications, including smart cities, disaster control and monitoring, healthcare services, and environmental protection and climate change studies. This paper is written as a tutorial, providing the key concepts and a taxonomy of big data sensing and services. It not only discusses the motivation, research scope and features of big data sensing and services, but also examines the services required in big data sensing based on state-of-the-art research. Moreover, the paper discusses the challenges, issues and needs of big data sensing.