483 results for Streaming


Relevance: 10.00%

Publisher:

Abstract:

With the advent of peer-to-peer networks, and more importantly sensor networks, the desire to extract useful information from continuous and unbounded streams of data has become more prominent. For example, in tele-health applications, sensor-based data streaming systems are used to continuously and accurately monitor Alzheimer's patients and their surrounding environment. Typically, the requirements of such applications necessitate the cleaning and filtering of continuous, corrupted, and incomplete data streams gathered wirelessly under dynamically varying conditions. Yet existing data stream cleaning and filtering schemes are incapable of capturing the dynamics of the environment while simultaneously suppressing the losses and corruption introduced by uncertain environmental, hardware, and network conditions. Consequently, existing data cleaning and filtering paradigms are being challenged. This dissertation develops novel schemes for cleaning data streams received from a wireless sensor network operating under non-linear and dynamically varying conditions. The study establishes a paradigm for validating spatio-temporal associations among data sources to enhance data cleaning. To simplify the validation process, the developed solution maps the requirements of the application onto a geometric space and identifies the potential sensor nodes of interest. Additionally, this dissertation models a wireless sensor network data reduction system, showing that segregating the data adaptation and prediction processes increases data reduction rates. The schemes presented in this study are evaluated using simulation and information-theoretic concepts. The results demonstrate that dynamic conditions of the environment are better managed when validation is used for data cleaning. They also show that when a fast-converging adaptation process is deployed, data reduction rates are significantly improved. Targeted applications of the developed methodology include machine health monitoring, tele-health, environment and habitat monitoring, intermodal transportation, and homeland security.
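
As a rough illustration of the validation idea, the sketch below cross-checks a reading against spatially correlated neighbor nodes observed in the same time window, flagging readings that deviate strongly from the local consensus. The function, threshold, and statistics are illustrative assumptions, not the dissertation's actual scheme.

```python
import statistics

def validate_reading(reading, neighbor_readings, max_deviation=3.0):
    """Cross-validate one sensor reading against readings reported by
    spatially correlated neighbor nodes within the same time window.
    Returns True if the reading is consistent with its neighbors."""
    if len(neighbor_readings) < 2:
        return True  # too little spatial context to invalidate the reading
    mu = statistics.mean(neighbor_readings)
    sigma = statistics.stdev(neighbor_readings)
    if sigma == 0:
        return reading == mu
    return abs(reading - mu) / sigma <= max_deviation
```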

Relevance: 10.00%

Publisher:

Abstract:

This article will acquaint you with ten of the more important left-wing films I have reviewed over the past sixteen years as a member of New York Film Critics Online. You will not see familiar works such as “The Battle of Algiers” listed, but rather films that deserve wider attention: the proverbial neglected masterpieces. They originate from different countries and are available through Internet streaming, either freely on YouTube or through Netflix or Amazon rental. In several instances you will be referred to film club websites that, like the films under discussion, deserve wider attention, since they are the counterparts to the small, independent theaters where such films premiere. The country of origin, date, and director are identified next to each title, followed by a summary of the film and, finally, its availability.

Relevance: 10.00%

Publisher:

Abstract:

Many systems and applications continuously produce events. These events record the status of the system and trace its behaviors. By examining these events, system administrators can check for potential problems. If the temporal dynamics of the systems are further investigated, the underlying patterns can be discovered. The uncovered knowledge can be leveraged to predict future system behaviors or to mitigate potential risks. Moreover, system administrators can utilize the temporal patterns to set up event management rules that make the system more intelligent. With the popularity of data mining techniques in recent years, these events have gradually become more and more useful. Despite recent advances in data mining techniques, their application to system event mining is still at a rudimentary stage. Most works still focus on episode mining or frequent pattern discovery. These methods are unable to provide a brief yet comprehensible summary that reveals the valuable information from a high-level perspective. Moreover, they provide little actionable knowledge to help system administrators better manage the systems. To make better use of the recorded events, more practical techniques are required. From the perspective of data mining, three correlated directions are considered helpful for system management: (1) provide concise yet comprehensive summaries of the running status of the systems; (2) make the systems more intelligent and autonomous; (3) effectively detect abnormal system behaviors. Due to the richness of the event logs, all these directions can be pursued in a data-driven manner, enhancing the robustness of the systems and approaching the goal of autonomous management. This dissertation focuses on leveraging temporal mining techniques to facilitate system management along these directions. More specifically, three concrete topics are discussed: event mining, resource demand prediction, and streaming anomaly detection. Besides the theoretical contributions, experimental evaluations are presented to demonstrate the effectiveness and efficiency of the corresponding solutions.
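
A toy illustration of the streaming anomaly detection direction: the detector below keeps exponentially weighted running statistics of a monitored metric (e.g., an event rate) and flags points that fall far outside the running band. The class name, smoothing factor, and threshold are assumptions for illustration, not the dissertation's method.

```python
class StreamingAnomalyDetector:
    """Flags a data point as anomalous when it deviates from an
    exponentially weighted running mean by more than `threshold`
    running standard deviations."""

    def __init__(self, alpha=0.05, threshold=4.0):
        self.alpha = alpha          # smoothing factor for the running stats
        self.threshold = threshold  # band width, in running std deviations
        self.mean = None
        self.var = 0.0

    def update(self, x):
        if self.mean is None:       # first observation initializes the model
            self.mean = x
            return False
        diff = x - self.mean
        is_anomaly = self.var > 0 and diff * diff > self.threshold ** 2 * self.var
        incr = self.alpha * diff    # exponentially weighted mean/variance update
        self.mean += incr
        self.var = (1 - self.alpha) * (self.var + diff * incr)
        return is_anomaly
```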

Relevance: 10.00%

Publisher:

Abstract:

This work presents effluent degradation studies using electrochemical treatment. The electrochemical oxidation (EO) of hydroquinone (H2Q) was carried out in acid medium on a PbO2 electrode by galvanostatic electrolysis, applying current densities of 10 and 30 mA/cm². The concentration of H2Q was monitored by differential pulse voltammetry (DPV). The experimental results showed that the performance of the galvanostatic electrolysis process depends significantly on the applied current density, achieving removal efficiencies of 100% and 80% when applying 30 and 10 mA/cm², respectively. Furthermore, the electroanalytical technique proved effective as a method for detecting H2Q. To test the efficiency of the PbO2 electrode, the electrochemical treatment was also applied to a real effluent, leachate from a landfill. The leachate (600 mL of effluent) was treated in a batch electrochemical cell, with or without the addition of NaCl, applying 7 mA/cm². The efficiency of EO was assessed in terms of the removal of thermotolerant coliforms, total organic carbon (TOC), total phosphorus, and metals (copper, cobalt, chromium, iron, and nickel). The results showed efficient removal of coliforms (100%), and the concentration of heavy metals was further decreased by cathodic processes. However, the TOC results were not satisfactory, with low overall removal of the dissolved organic load. Because this effluent is considered complex, further tests were carried out to monitor a larger number of decontamination parameters (turbidity, total solids, color, conductivity, total organic carbon (TOC), and the metals barium, chromium, lithium, manganese, and zinc), comparing the efficiency of the two types of electrochemical treatment (EO and electrocoagulation) using a flow cell. In the EO case, Ti/IrO2-TaO5 was used as the anode, whereas in the electrocoagulation process aluminum electrodes were used, applying current densities of 10, 20, and 30 mA/cm² in the presence and absence of NaCl as electrolyte. The results showed that EO using a Ti/IrO2-TaO5 anode was efficient when Cl- was present in the effluent. In contrast, electrocoagulation under flow reduced the dissolved organic matter in the effluent under certain experimental conditions.

Relevance: 10.00%

Publisher:

Abstract:

The great amount of data generated as a result of automation and process supervision in industry leads to two problems: a large demand for disk storage and the difficulty of streaming this data through a telecommunications link. Lossy data compression algorithms emerged in the 1990s with the goal of solving these problems, and consequently industries started to use them in industrial supervision systems to compress data in real time. These algorithms were designed to eliminate redundant and undesired information in an efficient and simple way. However, their parameters must be set for each process variable, making it impractical to configure them in systems that monitor thousands of variables. In this context, this paper proposes the Adaptive Swinging Door Trending algorithm, an adaptation of Swinging Door Trending in which the main parameters are adjusted dynamically by analyzing signal trends in real time. A comparative performance analysis of lossy data compression algorithms applied to time-series process variables and dynamometer cards is also presented. The algorithms used for comparison were piecewise-linear methods and transform-based methods.
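
For reference, a minimal sketch of the classic (non-adaptive) Swinging Door Trending compressor that the proposed algorithm builds on; the fixed deviation `dev` is precisely the parameter the paper argues should be adjusted dynamically.

```python
def swinging_door(points, dev):
    """Classic Swinging Door Trending (SDT) over (t, v) samples with
    strictly increasing t. Keeps only the points needed so that the
    discarded ones lie within +/- dev of a linear interpolation."""
    if len(points) <= 2:
        return list(points)
    kept = [points[0]]
    pivot = points[0]
    slope_up, slope_low = float("inf"), float("-inf")
    prev = points[0]
    for t, v in points[1:]:
        dt = t - pivot[0]
        slope_up = min(slope_up, (v + dev - pivot[1]) / dt)    # upper door
        slope_low = max(slope_low, (v - dev - pivot[1]) / dt)  # lower door
        if slope_low > slope_up:
            kept.append(prev)          # doors closed: archive previous point
            pivot = prev               # and restart the doors from it
            dt = t - pivot[0]
            slope_up = (v + dev - pivot[1]) / dt
            slope_low = (v - dev - pivot[1]) / dt
        prev = (t, v)
    kept.append(prev)
    return kept
```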

Relevance: 10.00%

Publisher:

Abstract:

During the development of the project I learned about Big Data, Android, and MongoDB while helping to develop a system for predicting bipolar disorder crises through the massive analysis of information from diverse sources. Specifically, I produced a theoretical study of NoSQL databases, Spark Streaming, and neural networks, and then designed and configured a MongoDB database for the bipolar disorder project. I also learned about Android and designed and developed an Android mobile application to collect data for use as input to the crisis prediction system. Once development of the application was finished, I also carried out an evaluation with users.
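
As a rough illustration of the kind of document schema such a data-collection pipeline might use with the pymongo driver (the database, collection, and field names here are hypothetical, not the project's actual schema):

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
events = client["bipolar_project"]["sensor_events"]  # hypothetical names

# One data point collected by the Android app.
events.insert_one({
    "patient_id": "p-001",        # hypothetical identifier
    "source": "android_app",
    "metric": "sleep_hours",
    "value": 6.5,
    "recorded_at": datetime.now(timezone.utc),
})

# A compound index supports the per-patient time-range queries
# a crisis prediction system would issue.
events.create_index([("patient_id", 1), ("recorded_at", -1)])
```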

Relevance: 10.00%

Publisher:

Abstract:

This paper looks at the benefits and limitations of content distribution using Forward Error Correction (FEC) in conjunction with the Transmission Control Protocol (TCP). FEC can be used to reduce the number of retransmissions that would usually result from a lost packet, greatly reducing the requirement for TCP to deal with losses. There is, however, a side effect to using FEC as a countermeasure to packet loss: an additional bandwidth requirement. For applications such as real-time video conferencing, delay must be kept to a minimum and retransmissions are certainly not desirable, so a balance must be struck between additional bandwidth and delay due to retransmissions. Our results show that, when packet loss occurs, the throughput of data can be significantly improved using a combination of FEC and TCP compared to relying solely on TCP retransmissions. Furthermore, a case study applies the result to demonstrate the achievable improvements in the quality of streaming video perceived by end users.
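
To make the trade-off concrete, the toy scheme below adds one XOR parity packet per block of k data packets: any single loss within the block can be rebuilt without retransmission, at the cost of 1/k extra bandwidth. Real deployments use stronger codes (e.g., Reed-Solomon); this sketch is only illustrative.

```python
def xor_parity(packets):
    """Build one XOR parity packet over k equal-length data packets."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """Rebuild the single missing packet from the k-1 received ones."""
    missing = bytearray(parity)
    for p in received:
        for i, b in enumerate(p):
            missing[i] ^= b
    return bytes(missing)

block = [b"pkt-one!", b"pkt-two!", b"pkt-3333"]        # k = 3, equal lengths
par = xor_parity(block)
assert recover([block[0], block[2]], par) == block[1]  # packet 2 lost, rebuilt
```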

Relevance: 10.00%

Publisher:

Abstract:

Subspaces and manifolds are two powerful models for high dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging at an antenna array. Manifolds model sources that are not linearly correlated, but where signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of model mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.

A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to the recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
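
The principal angles that govern these error terms can be computed from orthonormal bases of the two subspaces via a standard SVD; a minimal sketch (standard linear algebra, not code from the dissertation):

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles between the column spans of A and B.
    The singular values of Qa.T @ Qb are the cosines of the angles."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(cosines, 0.0, 1.0))

A, B = np.random.randn(10, 3), np.random.randn(10, 3)
theta = principal_angles(A, B)
geom = np.prod(np.sin(theta))  # the product-of-sines quantity from the text
```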

The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data, and to suffer from degraded performance when generalizing to unseen (test) data.

From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides us with a way of regularizing the learning machine so as to reduce the generalization error and therefore mitigate overfitting. Two different overfitting-prevention approaches are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to be aligned with the principal components of this adjacency matrix. Experimental results on benchmark datasets are demonstrated, showing an obvious advantage of the proposed approaches when the training set is small.
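
The smoothness requirement in the first approach is commonly expressed as a graph Laplacian penalty on the learned features; a minimal sketch of that standard regularizer (illustrative, not the dissertation's exact formulation):

```python
import numpy as np

def laplacian_penalty(F, W):
    """Graph-smoothness penalty: sum_ij W_ij * ||f_i - f_j||^2 = 2 tr(F^T L F).
    F: (n, k) learned features; W: (n, n) symmetric manifold adjacency weights."""
    L = np.diag(W.sum(axis=1)) - W  # unnormalized graph Laplacian
    return 2.0 * np.trace(F.T @ L @ F)
```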

Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods using affine subspaces, a slowly varying manifold can be efficiently tracked as well, even with corrupted and noisy data. The more local neighborhoods are used, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed, where the local approximating subspaces are organized in a tree structure. Splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. The deviation of each datum from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
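
A bare-bones stand-in for this kind of streaming subspace tracking is an Oja-style stochastic gradient update with re-orthonormalization, where the per-sample residual norm plays the role of the deviation statistic; the step size and random initialization below are illustrative assumptions, not the proposed multiscale scheme.

```python
import numpy as np

def track_subspace(stream, d, eta=0.05):
    """Track a slowly varying d-dimensional subspace from streaming samples.
    Yields each sample's residual norm, usable as an anomaly statistic."""
    U = None
    for x in stream:
        if U is None:
            U = np.linalg.qr(np.random.randn(len(x), d))[0]
        r = x - U @ (U.T @ x)               # residual outside current estimate
        U = U + eta * np.outer(r, U.T @ x)  # Oja-style gradient step
        U, _ = np.linalg.qr(U)              # keep the basis orthonormal
        yield np.linalg.norm(r)
```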

Relevance: 10.00%

Publisher:

Abstract:

UNESCO's approval of the Convention on the Protection and Promotion of the Diversity of Cultural Expressions (UNESCO, 2005) has been an important element in catalyzing attempts to measure the diversity of cultural industries (UIS, 2011). Within this framework, this article analyzes the relations between the music and radio industries in Spain from a critical perspective, through the analysis of available data on the supply and consumption of recorded music (sales lists, radio-formula playlists, the characteristics of the phonographic and radio markets) at key moments marked by the emergence of new formats and devices (CDs, MP3, the Internet). The main goal of this work is to study the evolution of the Spanish record market in terms of diversity from the end of the 1970s to the present, through the study of radio music hit lists, the business structure of the phonographic and radio sectors, and top phonogram sales.

Relevance: 10.00%

Publisher:

Abstract:

This paper reports the findings of a study of the learning of English intonation by Spanish speakers within the discourse mode of L2 oral presentation. The purpose of the experiment is, firstly, to compare four prosodic parameters before and after an L2 discourse intonation training programme and, secondly, to confirm whether subjects, after the aforementioned training, are able to match the form of these four prosodic parameters to the discourse-pragmatic function of dominance and control. The study designed the instructions and tasks used to create the oral and written corpora, and Brazil's Pronunciation for Advanced Learners of English was adapted to the pedagogical aims of the present study. The learners' pre- and post-tasks were acoustically analysed, and a pre-/post-questionnaire design was applied to interpret the acoustic analysis. Results indicate that most subjects acquired a wider choice of the four prosodic parameters, partly due to the prosodically annotated transcripts developed throughout the L2 discourse intonation course. Conversely, qualitative and quantitative data reveal that most subjects failed to match the forms to their appropriate pragmatic functions for expressing dominance and control in an L2 oral presentation.

Relevance: 10.00%

Publisher:

Abstract:

We advocate the Loop-of-stencil-reduce pattern as a means of simplifying the implementation of data-parallel programs on heterogeneous multi-core platforms. Loop-of-stencil-reduce is general enough to subsume map, reduce, map-reduce, stencil, stencil-reduce and, crucially, their usage in a loop in both data-parallel and streaming applications, or a combination of both. The pattern makes it possible to deploy a single stencil computation kernel on different GPUs. We discuss the implementation of Loop-of-stencil-reduce in FastFlow, a framework for the implementation of applications based on parallel patterns. Experiments are presented to illustrate the use of Loop-of-stencil-reduce in developing data-parallel kernels running on heterogeneous systems.
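
FastFlow itself is a C++ framework, but the control flow of the pattern can be sketched language-neutrally. Below is a sequential Python rendering under stated assumptions (the function names and the Jacobi example are illustrative, not the paper's API): iterate a stencil over a grid, reduce the new and old states to a scalar, and exit the loop on convergence.

```python
import numpy as np

def loop_of_stencil_reduce(grid, stencil, reduce_fn, converged, max_iter=1000):
    """Loop-of-stencil-reduce skeleton: apply `stencil`, reduce new vs. old
    state to a scalar with `reduce_fn`, stop when `converged` says so."""
    for _ in range(max_iter):
        new = stencil(grid)
        delta = reduce_fn(new, grid)
        grid = new
        if converged(delta):
            break
    return grid

def jacobi(g):
    """One Jacobi relaxation sweep (4-point stencil), boundaries held fixed."""
    out = g.copy()
    out[1:-1, 1:-1] = 0.25 * (g[:-2, 1:-1] + g[2:, 1:-1] +
                              g[1:-1, :-2] + g[1:-1, 2:])
    return out

g = np.zeros((64, 64))
g[0, :] = 1.0  # hot top edge
result = loop_of_stencil_reduce(g, jacobi,
                                lambda a, b: np.abs(a - b).max(),
                                lambda d: d < 1e-4)
```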

Relevance: 10.00%

Publisher:

Abstract:

Participation in a group exhibition themed around the 25th anniversary of the Elba Benitez Gallery in Madrid. My work comprised a series of performances in which I translated reviews from the magazine Artforum from 1990. The performances took place in various locations in London throughout the run of the exhibition and were streamed live to an iPad in the gallery in Madrid. I made audio-visual recordings of the performances via the streaming media, which located me as the performer alongside the viewers in a single split image. These recordings were then archived in a shared folder held between the gallery and me, which visitors to the exhibition could access when a performance was not taking place. The work extends my concerns with translation and performance, and with a consideration of how the mechanisms of the gallery and the exhibition might be used to generate innovative viewing engagements facilitated by technology. The work also attempts to develop thinking and practice around the relationship between art works and their documentation - in this case the documentation, and even its potential for distribution, is generated as the work comes into being. The exhibition included works by Ignasi Aballí, Armando Andrade Tudela, Lothar Baumgarten, Carlos Bunga, Cabello/Carceller, Juan Cruz, Gintaras Didžiapetris, Fernanda Fragateiro, Hreinn Fridfinnsson, Carlos Garaicoa, Mario García Torres, David Goldblatt, Cristina Iglesias, Ana Mendieta, Vik Muniz, Ernesto Neto, Francisco Ruiz de Infante, Alexander Sokurov, Francesc Torres and Valentín Vallhonrat.

Relevance: 10.00%

Publisher:

Abstract:

With the rapid development of Internet technologies, video and audio processing have become among the most important areas, owing to the constant demand for high-quality media content. Along with improvements in network environments and hardware, this demand is becoming ever more pressing: people prefer high-quality video and audio, as well as networked streaming media resources. FFmpeg is a set of open-source libraries and tools for audio/video decoding, and many commercial players use FFmpeg as their decoding core. This paper presents the design of a simple, easy-to-use video player based on FFmpeg.

The first part covers the basic theory and background of video playback, including concepts such as data formats, streaming media data, and video encoding and decoding. In short, the video player is built around a video decoding pipeline: receive video packets from the network, parse and de-encapsulate the transport protocols, demultiplex the container data to obtain encoded streams, and decode those streams into pixel data that can be displayed directly by the graphics card. During encoding and decoding there can be varying degrees of data loss (lossy compression), but this usually does not affect the quality of the user experience.

The second part covers the FFmpeg decoding process, one of the key points of this paper. In this project, FFmpeg performs the main decoding task: by calling the main functions and structures of the FFmpeg libraries, packaged video formats can be converted to pixel data. Once the pixel data is obtained, SDL is used for display.

The third part covers the SDL display flow. Similarly, it invokes the relevant display functions from the SDL libraries, although SDL can handle not only display tasks but also many other game-related processes. After that, an independent video player is complete, provided with all the key functions of a player.

The fourth part builds a simple user interface for the player based on MFC, enabling the player to be used by most people. Finally, in view of the growth of the mobile Internet - people nowadays can hardly put down their mobile phones - there is a brief introduction to porting the video player to the Android platform, one of the most widely used mobile systems.
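
The decode pipeline described above can be sketched in a few lines, assuming the PyAV binding (`pip install av`) over the FFmpeg libraries and a local file named sample.mp4; the display step (e.g., SDL via pygame) is omitted.

```python
import av  # PyAV: Python bindings over the FFmpeg libraries

def decode_frames(url):
    """Open the input (protocol handling + demuxing), decode the first
    video stream, and yield raw RGB pixel arrays ready for display."""
    container = av.open(url)                    # de-encapsulate protocol/container
    for frame in container.decode(video=0):     # codec -> decoded video frames
        yield frame.to_ndarray(format="rgb24")  # pixel data ready for blitting

for i, pixels in enumerate(decode_frames("sample.mp4")):
    print(i, pixels.shape)  # e.g. (height, width, 3)
    if i == 2:
        break
```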