991 results for Stream processing
Abstract:
Ensemble stream modeling and data cleaning are sensor-information processing systems with different training and testing methods by which their goals are cross-validated. This research examines a mechanism that seeks to extract novel patterns by generating ensembles from data. The main goal of label-less stream processing is to process the sensed events so as to eliminate uncorrelated noise and to choose the most likely model without overfitting, thus obtaining higher model confidence. Higher-quality streams can be realized by combining many short streams into an ensemble that has the desired quality. The framework for the investigation is an existing data-mining tool. First, to accommodate feature extraction for events such as bush or natural forest fires, we take the burnt area (BA*), sensed ground truth obtained from logs, as our target variable. Although this is an obvious model choice, the results are disappointing, for two reasons: first, the histogram of fire activity is highly skewed; second, the measured sensor parameters are highly correlated. Since non-descriptive features do not yield good results, we resort to temporal features; by doing so we carefully eliminate averaging effects, the resulting histogram is more satisfactory, and conceptual knowledge is learned from the sensor streams. Second is feature induction: cross-validating attributes against single- or multi-target variables to minimize training error. We use the F-measure score, which combines precision and recall, to determine the false-alarm rate of fire events. The multi-target data-cleaning trees use the information purity of the target leaf nodes to learn higher-order features. A sensitive variance measure, such as the F-test, is applied at each node's split to select the best attribute. The ensemble stream-model approach proved to improve when complex features were used with a simpler tree classifier. The ensemble framework for data cleaning, and the enhancements to quantify quality of fitness (30% spatial, 10% temporal, and 90% mobility reduction) of the sensors, led to the formation of streams for sensor-enabled applications. This further motivates the novelty of stream-quality labeling and its importance in handling the vast number of real-time mobile streams generated today.
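For reference, a minimal sketch of the F-measure (F1) over fire-event detections as standardly defined, the harmonic mean of precision and recall; the class, method name, and counts below are illustrative, not taken from the study:

```java
// Minimal sketch of the F-measure (F1 score): the harmonic mean of precision
// and recall over detected fire events. Counts below are illustrative.
public class FMeasure {
    static double fMeasure(long tp, long fp, long fn) {
        double precision = tp / (double) (tp + fp); // alarms that were real fires
        double recall    = tp / (double) (tp + fn); // real fires that raised alarms
        return 2 * precision * recall / (precision + recall);
    }
    public static void main(String[] args) {
        System.out.println(fMeasure(80, 10, 20));   // ~0.842: low false-alarm rate
    }
}
```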
Abstract:
The human nervous system constructs a Euclidean representation of near (personal) space by combining multiple sources of information (cues). We investigated the cues used for the representation of personal space in a patient with visual form agnosia (DF). Our results indicated that DF relies predominantly on binocular vergence information when determining the distance of a target, despite the presence of other (retinal) cues. Notably, DF was able to construct a Euclidean representation of personal space from vergence alone. This finding supports previous assertions that vergence provides the nervous system with veridical information for the construction of personal space. The results from the current study, together with those of others, suggest that: (i) the ventral stream is responsible for extracting depth and distance information from monocular retinal cues (i.e. from shading, texture, perspective) and (ii) the dorsal stream has access to binocular information (from horizontal image disparities and vergence). These results also indicate that DF was not able to use size information to gauge target distance, suggesting that an intact temporal cortex is necessary for learned size to influence distance processing. Our findings further suggest that in neurologically intact humans, object information extracted in the ventral pathway is combined with the products of dorsal stream processing for guiding prehension. Finally, we studied the size-distance paradox in visual form agnosia in order to explore the cognitive use of size information. The results of this experiment were consistent with a previous suggestion that the paradox is a cognitive phenomenon.
Genetic engineering of baker's and wine yeasts using formaldehyde hyperresistance-mediating plasmids
Abstract:
Yeast multi-copy vectors carrying the formaldehyde-resistance marker gene SFA have proved to be a valuable tool for research on industrially used strains of Saccharomyces cerevisiae. The genetics of these strains is often poorly understood, and for various reasons it is not possible to simply subject these strains to the protocols of genetic engineering that have been established for laboratory strains of S. cerevisiae. We tested our vectors and protocols using 10 randomly picked baker's and wine yeasts, all of which could be transformed by a simple protocol with vectors conferring hyperresistance to formaldehyde. The application of formaldehyde as a selecting agent also offers the advantage of its biodegradation to CO2 during fermentation, i.e., the selecting agent will be consumed and its removal during downstream processing is therefore not necessary. Thus, this vector provides an expression system that is simple to apply and inexpensive to use.
Abstract:
The thesis comprises a set of experiments mainly focused on the improvement of L-glutamic acid fermentation. Much attention has been given to the use of locally available raw materials, culturing the organism on inert solid substrates, and immobilization of the bacterial cells with a view to long-term utilization of the biocatalyst and continuous operation of the stabilized system. Studies were also carried out on the downstream processing for the extraction and purification of L-glutamic acid. An attempt was made to study the morphological features of the microorganism, including cell permeability. In relation to the accumulation of glutamic acid within the cells, an approach was made to study the behaviour of the Brevibacterium cells when they are exposed to a hyperosmotic environment. Attempts were also made to study the requirement of iron and the production of siderophores by this microbial strain. The search for a suitable nitrogen source for glutamate fermentation ended with a promising result: the strain possesses potent urease activity, which can be utilized in many biotransformation studies. The entire thesis is presented in three sections, viz. an introductory section, an experimental section, and a concluding section.
Abstract:
In an immersive virtual environment, observers fail to notice the expansion of a room around them and consequently make gross errors when comparing the size of objects. This result is difficult to explain if the visual system continuously generates a 3-D model of the scene based on known baseline information from interocular separation or proprioception as the observer walks. An alternative is that observers use view-based methods to guide their actions and to represent the spatial layout of the scene. In this case, they may have an expectation of the images they will receive but be insensitive to the rate at which images arrive as they walk. We describe the way in which the eye movement strategy of animals simplifies motion processing if their goal is to move towards a desired image and discuss dorsal and ventral stream processing of moving images in that context. Although many questions about view-based approaches to scene representation remain unanswered, the solutions are likely to be highly relevant to understanding biological 3-D vision.
Abstract:
This work represents an investigation into the presence, abundance and diversity of virus-like particles (VLPs) associated with human faecal and caecal samples. Various methodologies for the recovery of VLPs from faeces were tested and optimized, including successful downstream processing of such samples for in-depth electron microscopic analysis, pulsed-field gel electrophoresis and efficient DNA recovery. The applicability of the developed VLP characterization method beyond faecal samples was then verified using samples obtained from human caecal fluid.
Abstract:
Big Data has forged new technologies that improve quality of life by combining heterogeneous representations of data from various disciplines. What is needed, therefore, is a real-time system capable of computing data as it arrives. Such a system is called a speed layer; as the name suggests, it is designed to guarantee that new data are returned by the query functions as quickly as they arrive. This thesis concerns the realization of an architecture modeled on the Speed Layer of the Lambda Architecture, capable of receiving weather data published on an MQTT queue, processing it in real time, and storing it in a database to make it available to Data Scientists. The programming environment used is JAVA; the project was deployed on the Hortonworks platform, which is based on the Hadoop framework and on the Storm computation system, which makes it possible to work with unbounded data streams, performing the processing in real time. Unlike traditional stream-processing approaches built on networks of queues and workers, Storm is fault-tolerant and scalable. The effort devoted to its development by the Apache Software Foundation, its growing use in production by major companies, and the support from cloud-hosting providers are all signs that this technology will become ever more established as a solution for managing distributed, event-oriented computations. To store and analyze these volumes of data, which have always posed a problem that traditional databases could not overcome, a non-relational database was used: HBase.
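As a rough illustration of such a speed layer, the sketch below wires up a minimal Apache Storm topology in Java. The synthetic spout stands in for the MQTT source and the printing bolt for the HBase writer; both are assumptions, since the thesis's actual class names are not given here.

```java
// Minimal Storm topology sketch: a spout emitting synthetic weather readings
// and a bolt that would, in the real system, write each tuple to HBase.
import java.util.Map;
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.*;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.*;
import org.apache.storm.utils.Utils;

public class SpeedLayerTopology {
    // Stand-in for the MQTT source: emits one synthetic reading per second.
    public static class WeatherSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;
        public void open(Map conf, TopologyContext ctx, SpoutOutputCollector c) {
            this.collector = c;
        }
        public void nextTuple() {
            Utils.sleep(1000);
            collector.emit(new Values("station42", 15 + Math.random() * 10));
        }
        public void declareOutputFields(OutputFieldsDeclarer d) {
            d.declare(new Fields("station", "temperature"));
        }
    }

    // Stand-in for the HBase writer: real code would issue a Put per tuple.
    public static class StoreBolt extends BaseBasicBolt {
        public void execute(Tuple t, BasicOutputCollector c) {
            System.out.printf("store %s -> %.1f%n",
                    t.getStringByField("station"), t.getDoubleByField("temperature"));
        }
        public void declareOutputFields(OutputFieldsDeclarer d) { }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder b = new TopologyBuilder();
        b.setSpout("source", new WeatherSpout());
        b.setBolt("store", new StoreBolt(), 2).shuffleGrouping("source");
        LocalCluster cluster = new LocalCluster();   // in-process test run
        cluster.submitTopology("speed-layer", new Config(), b.createTopology());
        Thread.sleep(10_000);
        cluster.shutdown();
    }
}
```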
Abstract:
In this thesis the OpenGL ES 2 graphics libraries were employed to perform parallel computations on the GPU of the Raspberry Pi. Concepts concerning parallel computing, stream processing, GPGPU, and the evaluation metrics for parallel algorithms are addressed and discussed. The potential and the limitations of using OpenGL to implement parallel algorithms are also described. In particular, the Seam Carving algorithm for image shrinking is considered, and a parallel implementation of it on the Raspberry Pi is realized and evaluated.
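For context, here is a CPU reference sketch (in Java) of the seam-carving cost table, the dynamic-programming step a GPU implementation parallelizes row by row, since each pixel depends only on the previous row. The energy array is assumed precomputed (e.g. a per-pixel gradient magnitude); the thesis's exact energy function is not specified here.

```java
// Cumulative seam cost via dynamic programming: cost[y][x] is the energy of
// the cheapest vertical seam ending at pixel (x, y).
public class SeamCarvingCost {
    static double[][] cumulativeCost(double[][] energy) {
        int h = energy.length, w = energy[0].length;
        double[][] cost = new double[h][w];
        System.arraycopy(energy[0], 0, cost[0], 0, w);   // first row: raw energy
        for (int y = 1; y < h; y++) {
            for (int x = 0; x < w; x++) {
                double best = cost[y - 1][x];             // pixel directly above
                if (x > 0)     best = Math.min(best, cost[y - 1][x - 1]);
                if (x < w - 1) best = Math.min(best, cost[y - 1][x + 1]);
                cost[y][x] = energy[y][x] + best;         // cheapest seam ending here
            }
        }
        return cost;  // backtrack from the minimum of the last row to get the seam
    }
}
```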
Abstract:
Many applications in several domains, such as telecommunications, network security, and large-scale sensor networks, require online processing of continuous data flows. They produce very high loads that require aggregating the processing capacity of many nodes. Current Stream Processing Engines do not scale with the input load due to single-node bottlenecks. Additionally, they are based on static configurations that lead to either under- or over-provisioning. In this paper, we present StreamCloud, a scalable and elastic stream processing engine for processing large data stream volumes. StreamCloud uses a novel parallelization technique that splits queries into subqueries that are allocated to independent sets of nodes in a way that minimizes the distribution overhead. Its elastic protocols exhibit low intrusiveness, enabling effective adjustment of resources to the incoming load. Elasticity is combined with dynamic load balancing to minimize the computational resources used. The paper presents the system design, implementation, and a thorough evaluation of the scalability and elasticity of the fully implemented system.
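As an illustration of the content-aware routing such query splitting implies, here is a minimal, hypothetical sketch (not StreamCloud's actual interfaces): tuples are hashed on their partitioning key so that each node in a subquery's node set receives a disjoint key range and stateful operators downstream stay correct.

```java
// Illustrative hash-based tuple router for one subquery's node set.
import java.util.List;

public class HashRouter {
    private final List<String> nodes;   // endpoints of one subcluster

    public HashRouter(List<String> nodes) { this.nodes = nodes; }

    // Deterministic key-to-node mapping; on elastic reconfiguration the node
    // list changes, which is when load balancing and state movement kick in.
    public String route(String key) {
        return nodes.get(Math.floorMod(key.hashCode(), nodes.size()));
    }

    public static void main(String[] args) {
        HashRouter r = new HashRouter(List.of("node-a", "node-b", "node-c"));
        System.out.println(r.route("flow-1234"));  // same node for this key, always
    }
}
```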
Abstract:
This paper describes a face detection system that goes beyond traditional approaches, which are normally designed for still images. First, the video-stream context in which the detector is applied is considered; the resulting system is therefore designed around a key feature available in a video stream, i.e. temporal coherence. The system builds a feature-based model for each detected face and searches for it in the next frame using the model information. The results achieved for video-stream processing outperform the Rowley-Kanade and Viola-Jones solutions, providing eye and face data in reduced time with a notable correct-detection rate.
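A minimal sketch of the temporal-coherence strategy follows; the Detector interface is a placeholder for any still-image detector (e.g. a Viola-Jones style cascade) and is not the authors' API.

```java
// Reuse the previous frame's face location to restrict the search window,
// falling back to a whole-frame scan when the local search misses.
public class CoherentFaceTracker {
    public interface Detector {
        // Returns {x, y, w, h} of the best face inside the region, or null.
        int[] detect(int[][] gray, int rx, int ry, int rw, int rh);
    }

    private final Detector detector;
    private int[] last;                         // last detected box, or null

    public CoherentFaceTracker(Detector detector) { this.detector = detector; }

    public int[] track(int[][] frame) {
        int h = frame.length, w = frame[0].length;
        if (last != null) {
            // Search a window twice the size of the last box, centred on it.
            int rx = Math.max(0, last[0] - last[2] / 2);
            int ry = Math.max(0, last[1] - last[3] / 2);
            int rw = Math.min(2 * last[2], w - rx);
            int rh = Math.min(2 * last[3], h - ry);
            int[] hit = detector.detect(frame, rx, ry, rw, rh);
            if (hit != null) return last = hit; // cheap local hit: done
        }
        return last = detector.detect(frame, 0, 0, w, h); // full-frame fallback
    }
}
```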
Abstract:
In today's fast-paced and interconnected digital world, the data generated by an increasing number of applications is being modeled as dynamic graphs. The graph structure encodes relationships among data items, while the structural changes to the graphs as well as the continuous stream of information produced by the entities in these graphs make them dynamic in nature. Examples include social networks where users post status updates, images, videos, etc.; phone call networks where nodes may send text messages or place phone calls; road traffic networks where the traffic behavior of the road segments changes constantly, and so on. There is a tremendous value in storing, managing, and analyzing such dynamic graphs and deriving meaningful insights in real-time. However, a majority of the work in graph analytics assumes a static setting, and there is a lack of systematic study of the various dynamic scenarios, the complexity they impose on the analysis tasks, and the challenges in building efficient systems that can support such tasks at a large scale. In this dissertation, I design a unified streaming graph data management framework, and develop prototype systems to support increasingly complex tasks on dynamic graphs. In the first part, I focus on the management and querying of distributed graph data. I develop a hybrid replication policy that monitors the read-write frequencies of the nodes to decide dynamically what data to replicate, and whether to do eager or lazy replication in order to minimize network communication and support low-latency querying. In the second part, I study parallel execution of continuous neighborhood-driven aggregates, where each node aggregates the information generated in its neighborhoods. I build my system around the notion of an aggregation overlay graph, a pre-compiled data structure that enables sharing of partial aggregates across different queries, and also allows partial pre-computation of the aggregates to minimize the query latencies and increase throughput. Finally, I extend the framework to support continuous detection and analysis of activity-based subgraphs, where subgraphs could be specified using both graph structure as well as activity conditions on the nodes. The query specification tasks in my system are expressed using a set of active structural primitives, which allows the query evaluator to use a set of novel optimization techniques, thereby achieving high throughput. Overall, in this dissertation, I define and investigate a set of novel tasks on dynamic graphs, design scalable optimization techniques, build prototype systems, and show the effectiveness of the proposed techniques through extensive evaluation using large-scale real and synthetic datasets.
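As a toy illustration of continuous neighborhood-driven aggregation, the sketch below maintains 1-hop event counts incrementally so a query reads a precomputed value in O(1). The dissertation's aggregation overlay generalizes this by sharing partial aggregates across queries, which this sketch does not attempt.

```java
// Incrementally maintained 1-hop neighborhood aggregate on a dynamic graph.
import java.util.*;

public class NeighborhoodCount {
    private final Map<Integer, List<Integer>> adj = new HashMap<>();
    private final Map<Integer, Long> agg = new HashMap<>(); // node -> 1-hop event count

    public void addEdge(int u, int v) {                     // structural update
        adj.computeIfAbsent(u, k -> new ArrayList<>()).add(v);
        adj.computeIfAbsent(v, k -> new ArrayList<>()).add(u);
    }

    public void onEvent(int node) {                         // content update (post, call, ...)
        for (int nb : adj.getOrDefault(node, List.of()))
            agg.merge(nb, 1L, Long::sum);
    }

    public long query(int node) {                           // continuous aggregate readout
        return agg.getOrDefault(node, 0L);
    }
}
```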
Abstract:
The convergence between recent developments in sensing technologies, data science, signal processing and advanced modelling has fostered a new paradigm for the Structural Health Monitoring (SHM) of engineered structures, one based on intelligent sensors, i.e., embedded devices capable of stream-processing data and/or performing structural inference in a self-contained, near-sensor manner. To efficiently exploit these intelligent sensor units for full-scale structural assessment, a joint effort is required to deal with instrumental aspects related to signal acquisition, conditioning and digitalization, and those pertaining to data management, data analytics and information sharing. In this framework, the main goal of this Thesis is to tackle the multi-faceted nature of the monitoring process via a full-scale optimization of the hardware and software resources involved in the SHM system. The pursuit of this objective has required the investigation of both: i) transversal aspects common to multiple application domains at different abstraction levels (such as knowledge distillation, networking solutions, microsystem HW architectures), and ii) the specificities of the monitoring methodologies (vibrations, guided waves, acoustic emission monitoring). The key tools adopted in the proposed monitoring frameworks belong to the embedded signal processing field: namely, graph signal processing, compressed sensing, ARMA system identification, digital data communication and TinyML.
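As a toy example of near-sensor stream processing in this spirit, the sketch below transmits one RMS feature per vibration window instead of raw samples; the window size and the RMS feature are illustrative choices, not the Thesis's actual pipeline.

```java
// Near-sensor feature extraction: buffer accelerometer samples and emit one
// RMS value per completed window, reducing the data shipped off the node.
public class NearSensorRms {
    private final double[] window;
    private int filled;

    public NearSensorRms(int windowSize) { this.window = new double[windowSize]; }

    // Feed one sample; returns the window RMS when a window completes,
    // or Double.NaN while the window is still filling.
    public double push(double sample) {
        window[filled++] = sample;
        if (filled < window.length) return Double.NaN;
        double sumSq = 0;
        for (double s : window) sumSq += s * s;
        filled = 0;                                  // start the next window
        return Math.sqrt(sumSq / window.length);
    }
}
```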