848 results for Online data processing.
Abstract:
By providing vehicle-to-vehicle and vehicle-to-infrastructure wireless communications, vehicular ad hoc networks (VANETs), also known as "networks on wheels", can greatly enhance traffic safety, traffic efficiency and the driving experience in intelligent transportation systems (ITS). However, the unique features of VANETs, such as high mobility and uneven distribution of vehicular nodes, pose critical efficiency and reliability challenges for their implementation. This dissertation is motivated by the great application potential of VANETs in the design of efficient in-network data processing and dissemination. Considering the significance of message aggregation, data dissemination and data collection, this dissertation research targets enhancing traffic safety and traffic efficiency, as well as developing novel commercial applications based on VANETs, along four aspects: 1) accurate and efficient message aggregation to detect on-road safety-relevant events, 2) reliable data dissemination to reliably notify remote vehicles, 3) efficient and reliable spatial data collection from vehicular sensors, and 4) novel promising applications to exploit the commercial potential of VANETs. Specifically, to enable cooperative detection of safety-relevant events on the roads, the structure-less message aggregation (SLMA) scheme is proposed to improve communication efficiency and message accuracy. The relative position based message dissemination (RPB-MD) scheme is proposed to reliably and efficiently disseminate messages to all intended vehicles in the zone of relevance under varying traffic densities. Given the large volume of vehicular sensor data available in VANETs, the compressive sampling based data collection (CS-DC) scheme is proposed to efficiently collect spatially relevant data at large scale, especially in dense traffic. In addition, with novel and efficient solutions proposed for the application-specific issues of data dissemination and data collection, several appealing value-added applications for VANETs are developed to exploit their commercial potential, namely general purpose automatic survey (GPAS), VANET-based ambient ad dissemination (VAAD) and VANET-based vehicle performance monitoring and analysis (VehicleView). Thus, by improving the efficiency and reliability of in-network data processing and dissemination, including message aggregation, data dissemination and data collection, together with the development of novel promising applications, this dissertation will help push VANETs further towards the stage of massive deployment.
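As an aside on the compressive-sampling idea underlying CS-DC, the fragment below is a minimal sketch of the general technique only (random linear measurements of a spatially sparse signal, recovered with an l1-regularised solver), assuming NumPy and scikit-learn; it is not the CS-DC scheme itself, and all sizes and names are illustrative.

```python
# Hypothetical sketch of compressive-sampling-based data collection (not the CS-DC scheme
# itself): a sink receives only m << n random linear combinations of n vehicular sensor
# readings and reconstructs the full spatial field with an l1-regularised solver.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

n = 200          # number of road segments (sensor readings to recover)
k = 10           # assumed sparsity of the spatial field
m = 60           # number of aggregated measurements actually transmitted

# Synthetic sparse "spatial" signal standing in for, e.g., localised congestion readings.
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 5, k)

# Random measurement matrix: each row models one aggregated, in-network measurement.
phi = rng.normal(0, 1.0 / np.sqrt(m), (m, n))
y = phi @ x_true                      # the only data that must be collected

# Sparse recovery at the data sink.
solver = Lasso(alpha=0.01, max_iter=10000)
solver.fit(phi, y)
x_hat = solver.coef_

print("relative reconstruction error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The point of the demo is only that far fewer measurements than unknowns can suffice when the field is sparse, which is the general property a compressive-sampling collection scheme relies on to reduce collection overhead.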
Abstract:
The main objective of this work was to monitor a set of physical-chemical properties of heavy oil process streams by nuclear magnetic resonance spectroscopy, in order to propose an analysis procedure and online data processing for process control. Different statistical methods that relate the results obtained by nuclear magnetic resonance spectroscopy to the results obtained by the conventional standard methods during the characterization of the different streams were implemented in order to develop models for predicting these same properties. Real-time knowledge of these physical-chemical properties of petroleum fractions is very important for enhancing refinery operations, ensuring technically, economically and environmentally proper operation. The first part of this work involved the determination of many physical-chemical properties at the Matosinhos refinery, following standard methods important for evaluating and characterizing light vacuum gas oil, heavy vacuum gas oil and fuel oil fractions. Kinematic viscosity, density, sulfur content, flash point, carbon residue, P-value and atmospheric and vacuum distillations were the properties analysed. Besides the analysis using the standard methods, the same samples were analysed by nuclear magnetic resonance spectroscopy. The second part of this work concerned the application of multivariate statistical methods, which correlate the physical-chemical properties with the quantitative information acquired by nuclear magnetic resonance spectroscopy. Several methods were applied, including principal component analysis, principal component regression, partial least squares and artificial neural networks. Principal component analysis was used to reduce the number of predictive variables and to transform them into new variables, the principal components. These principal components were used as inputs to the principal component regression and artificial neural network models. For the partial least squares model, the original data were used as input. Taking into account the performance of the developed models, assessed through selected statistical performance indexes, it was possible to conclude that principal component regression led to the worst performance. Better results were achieved with the partial least squares and artificial neural network models. However, it was the artificial neural network model that gave the best predictions for almost all of the properties analysed. With reference to the results obtained, it was possible to conclude that nuclear magnetic resonance spectroscopy combined with multivariate statistical methods can be used to predict physical-chemical properties of petroleum fractions. It has been shown that this technique can be considered a potential alternative to the conventional standard methods, having yielded very promising results.
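The thesis data and model settings are not given in the abstract, so the sketch below only illustrates the described workflow (PCA scores feeding the PCR and ANN models, PLS on the original data) on synthetic stand-ins for the NMR spectra and a property such as density, using scikit-learn; every size and hyperparameter is an assumption.

```python
# Illustrative sketch only: the spectra, property values and hyperparameters below are
# placeholders for the general PCA -> PCR / PLS / ANN workflow described in the abstract.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 500))      # stand-in for NMR spectra: 120 samples x 500 bins
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.1, size=120)  # stand-in property

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# PCA scores feed both the PCR and ANN models; PLS works on the original spectra.
pca = PCA(n_components=10).fit(X_tr)
T_tr, T_te = pca.transform(X_tr), pca.transform(X_te)

models = {
    "PCR": LinearRegression().fit(T_tr, y_tr),
    "ANN": MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                        random_state=0).fit(T_tr, y_tr),
}
pls = PLSRegression(n_components=10).fit(X_tr, y_tr)

for name, model in models.items():
    print(name, "R2:", r2_score(y_te, model.predict(T_te)))
print("PLS R2:", r2_score(y_te, pls.predict(X_te).ravel()))
```

In the thesis the three model families were compared on statistical performance indexes of this kind, with the neural network giving the best predictions for most properties.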
Abstract:
Organizations increasingly make use of social media in order to compete for customer awareness and improve the quality of their goods and services. Multiple techniques of social media analysis are already in use. Nevertheless, theoretical underpinnings and a sound research agenda are still unavailable in this field. In order to contribute to setting up such an agenda, we introduce digital social signal processing (DSSP) as a new research stream in IS that requires multi-faceted investigations. Our DSSP concept is founded upon a set of four sequential activities: sensing digital social signals that are emitted by individuals on social media; decoding online data of social media in order to reconstruct digital social signals; matching the signals with consumers' life events; and configuring individualized goods and service offerings tailored to the individual needs of customers. We further contribute to tying together loose ends of different research areas in order to frame DSSP as a field for further investigation. We conclude by developing a research agenda.
Abstract:
The design and development of a Bottom Pressure Recorder for a Tsunami Early Warning System is described here. The special requirements it must satisfy for the specific application of deployment on the ocean bed and pressure monitoring of the water column above are dealt with; high-resolution data digitization and low circuit power consumption are typical examples. The implementation details of the data sensing and acquisition part that meet these requirements are also brought out. The data processing part encompasses a tsunami detection algorithm that should detect an event of significance against a background of a variety of periodic and aperiodic noise signals. Such an algorithm and its simulation are presented. Further, the results of sea trials carried out on the system off the Chennai coast are presented. The high quality and fidelity of the data prove that the system design is robust despite its low cost and, with suitable augmentations, is ready for a full-fledged deployment on the ocean bed.
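The paper's detection algorithm is not reproduced in the abstract; as a hedged illustration of how such an algorithm can separate an event from the slowly varying tidal background, the sketch below implements a simple DART-style prediction/deviation test. The threshold, sampling interval and window lengths are illustrative assumptions, not the values used on this recorder.

```python
# Illustrative detection scheme only (DART-style prediction/deviation test), not the
# algorithm described in the paper. A slowly varying tidal level is predicted from recent
# block averages and an event is flagged when the newest measured pressure deviates by
# more than a fixed threshold.
import numpy as np

THRESHOLD_M = 0.03          # deviation threshold in metres of water column (illustrative)
SAMPLE_PERIOD_S = 15        # assumed sampling interval

def predict_background(history: np.ndarray) -> float:
    """Predict the next background (tidal) level from the last hour of samples.

    Assumes at least one hour of history. Uses linear extrapolation of 10-minute
    block averages; operational algorithms typically use a higher-order fit.
    """
    n_per_block = int(600 / SAMPLE_PERIOD_S)
    blocks = history[-4 * n_per_block:].reshape(4, n_per_block).mean(axis=1)
    slope = blocks[-1] - blocks[-2]
    return blocks[-1] + slope

def detect(samples: np.ndarray) -> bool:
    """Return True if the newest sample deviates from the predicted background."""
    background = predict_background(samples[:-1])
    return abs(samples[-1] - background) > THRESHOLD_M
```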
Abstract:
This report is a detailed description of the data processing of NOAA/MLML spectroradiometry data. It introduces the MLML_DBASE programs, describes the assembly of diverse data files, and describes general algorithms and how individual routines are used. Definitions of data structures are presented in the Appendices. [PDF contains 48 pages]
Abstract:
This report outlines the NOAA spectroradiometer data processing system implemented by the MLML_DBASE programs. This is done by presenting the algorithms and graphs showing the effects of each step in the algorithms. [PDF contains 32 pages]
Abstract:
Commercially available software packages for IBM PC-compatibles are evaluated for use in data acquisition and processing work. Moss Landing Marine Laboratories (MLML) has acquired computers since 1978 for shipboard data acquisition (i.e. CTD, radiometric, etc.) and data processing. Hewlett-Packard desktops were used first, followed by a transition to DEC VAXstations, with software developed mostly by the author and others at MLML (Broenkow and Reaves, 1993; Feinholz and Broenkow, 1993; Broenkow et al., 1993). IBM PCs were at first very slow and limited in available software, so they were not used in the early days. Improved technology, such as higher-speed microprocessors and a wide range of commercially available software, makes use of PCs more reasonable today. MLML is making a transition towards using the PC for data acquisition and processing. Advantages are portability and the availability of outside support.
Abstract:
Data processing is an essential part of Acoustic Doppler Profiler (ADP) surveys, which have become the standard tool for assessing flow characteristics at tidal power development sites. In most cases, further processing beyond the capabilities of the manufacturer-provided software tools is required. These additional tasks are often implemented by every user in mathematical toolboxes like MATLAB, Octave or Python. This requires the transfer of the data from one system to another and thus increases the possibility of errors. The application of dedicated tools for visualisation of flow or geographic data is also often beneficial, and a wide range of such tools is freely available, though again problems arise from the necessity of transferring the data. Furthermore, ADP manufacturers almost exclusively support PCs directly, whereas small computing solutions like tablet computers, often running Android or Linux operating systems, seem better suited for online monitoring or data acquisition in field conditions. While many manufacturers offer support for developers, any solution is limited to a single device from a single manufacturer. A common data format for all ADP data would allow the development of such applications and quicker distribution of new post-processing methodologies across the industry.
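No concrete format is prescribed in the abstract, so purely as one plausible realisation of a manufacturer-neutral container, the sketch below writes ADP velocity profiles to a self-describing netCDF file via xarray; the variable names, dimensions and attributes are assumptions, not a published standard.

```python
# One plausible realisation of a manufacturer-neutral ADP data container, using the
# self-describing netCDF format via xarray. Variable names, dimensions and attributes
# are illustrative assumptions, not a published standard.
import numpy as np
import xarray as xr

n_ens, n_bins = 100, 25     # ensembles (time steps) and depth cells

ds = xr.Dataset(
    data_vars={
        "velocity_east":  (("time", "bin"), np.random.randn(n_ens, n_bins)),
        "velocity_north": (("time", "bin"), np.random.randn(n_ens, n_bins)),
        "velocity_up":    (("time", "bin"), np.random.randn(n_ens, n_bins)),
    },
    coords={
        "time": np.arange(n_ens, dtype="float64"),              # seconds since deployment
        "bin_depth": ("bin", 1.0 + 0.5 * np.arange(n_bins)),     # metres below transducer
    },
    attrs={"instrument": "generic ADP", "site": "example tidal site"},
)

ds.to_netcdf("adp_profile.nc")   # readable from MATLAB, Octave, Python, GIS tools, etc.
```

A self-describing container of this kind is readable from the toolboxes and visualisation tools the abstract mentions without manufacturer-specific import steps, which is the interoperability argument being made.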
Abstract:
The increasing interconnection of information and communication systems leads to a further increase in complexity and thus also to a further growth in security vulnerabilities. Classical protection mechanisms such as firewall systems and anti-malware solutions have long ceased to provide adequate protection against intrusions into IT infrastructures. Intrusion detection systems (IDS) have established themselves as a very effective instrument for protection against cyber attacks. Such systems collect and analyse information from network components and hosts in order to detect unusual behaviour and security violations automatically. While signature-based approaches can only detect already known attack patterns, anomaly-based IDS are also able to recognise new, previously unknown attacks (zero-day attacks) at an early stage. The core problem of intrusion detection systems, however, lies in the optimal processing of the enormous volume of network data and in the development of an adaptive detection model that works in real time. To address these challenges, this dissertation provides a framework consisting of two main parts. The first part, called OptiFilter, uses a dynamic queuing concept to process the continuously arriving network data, continuously builds up network connections, and exports structured input data for the IDS. The second part is an adaptive classifier comprising a classifier model based on an Enhanced Growing Hierarchical Self-Organizing Map (EGHSOM), a model of normal network behaviour (NNB) and an update model. Within OptiFilter, tcpdump and SNMP traps are used to continuously aggregate network packets and host events. These aggregated network packets and host events are further analysed and converted into connection vectors. To improve the detection rate of the adaptive classifier, the artificial neural network GHSOM is studied intensively and substantially extended. Different approaches are proposed and discussed in this dissertation: a classification-confidence margin threshold is defined to uncover unknown malicious connections, the stability of the growing topology is increased through novel approaches to the initialisation of the weight vectors and through the strengthening of the winner neurons, and a self-adaptive procedure is introduced so that the model can be updated continuously. In addition, the main task of the NNB model is to further examine the unknown connections detected by the EGHSOM and to verify whether they are normal. However, the network traffic data changes constantly due to the concept-drift phenomenon, which leads to non-stationary network data in real time. This phenomenon is better controlled by the update model. The EGHSOM model can effectively detect new anomalies, and the NNB model adapts optimally to the changes in the network data. In the experimental evaluations, the framework has shown promising results. In the first experiment, the framework was evaluated in offline mode. OptiFilter was evaluated with offline, synthetic and realistic data. The adaptive classifier was evaluated with 10-fold cross-validation in order to estimate its accuracy.
In the second experiment, the framework was installed on a 1 to 10 GB network link and evaluated online in real time. OptiFilter successfully converted the enormous amount of network data into structured connection vectors, and the adaptive classifier classified them precisely. A comparative study between the developed framework and other well-known IDS approaches shows that the proposed IDS framework outperforms all other approaches. This can be attributed to the following key points: processing of the collected network data, achievement of the best performance (e.g. overall accuracy), detection of unknown connections, and development of a real-time intrusion detection model.
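None of the EGHSOM extensions are spelled out in the abstract; the sketch below illustrates just one of the named ideas, a classification-confidence margin threshold, on a plain SOM-style codebook: a connection vector whose distance to its best-matching unit exceeds the margin is returned as unknown instead of being given a class label. Names, distances and the threshold are illustrative.

```python
# Illustrative sketch of a classification-confidence margin threshold on a SOM-style
# codebook (not the EGHSOM implementation from the dissertation). Each codebook unit
# carries a class label; a connection vector is only classified if its distance to the
# best-matching unit stays within the margin, otherwise it is flagged as "unknown".
import numpy as np

def classify_with_margin(x, codebook, labels, margin):
    """Return the label of the best-matching unit, or 'unknown' if it is too far away.

    x        : (d,) connection vector
    codebook : (k, d) weight vectors of the map units
    labels   : (k,) class label of each unit, e.g. 'normal' / 'attack'
    margin   : confidence threshold on the Euclidean distance
    """
    distances = np.linalg.norm(codebook - x, axis=1)
    best = int(np.argmin(distances))
    if distances[best] > margin:
        return "unknown"      # handed over to the normal-network-behaviour (NNB) check
    return labels[best]

# Toy usage with a two-unit codebook.
codebook = np.array([[0.0, 0.0], [5.0, 5.0]])
labels = np.array(["normal", "attack"])
print(classify_with_margin(np.array([0.2, -0.1]), codebook, labels, margin=1.0))  # normal
print(classify_with_margin(np.array([2.5, 2.5]), codebook, labels, margin=1.0))   # unknown
```

In the dissertation's framework, connections rejected in this way are precisely the ones passed on to the NNB model for further examination.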
Abstract:
Since the advent of the internet in everyday life in the 1990s, the barriers to producing, distributing and consuming multimedia data such as videos, music, ebooks, etc. have steadily been lowered for most computer users, so that almost everyone with internet access can join the online communities that produce, consume and, of course, share media artefacts. Along with this trend, the violation of personal data privacy and copyright has increased, with illegal file sharing being rampant across many online communities, particularly for certain music genres and amongst the younger age groups. This has had a devastating effect on the traditional media distribution market, in most cases leaving the distribution companies and the content owners with huge financial losses. To prove that a copyright violation has occurred, one can deploy fingerprinting mechanisms to uniquely identify the property; however, these are currently based only on uni-modal approaches. In this paper we describe some of the design challenges and architectural approaches to multi-modal fingerprinting currently being examined for evaluation studies within a PhD research programme on the optimisation of multi-modal fingerprinting architectures. Accordingly, we outline the available modalities that are being integrated through this research programme, which aims to establish the optimal architecture for multi-modal media security protection over the internet as the online distribution environment for both legal and illegal distribution of media products.
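The paper itself only outlines design challenges, so purely as a hypothetical illustration of the multi-modal idea, the sketch below derives one fingerprint per modality (crude hash-based stand-ins rather than real perceptual fingerprints), concatenates them, and compares fingerprints by Hamming distance; every function and name here is an assumption.

```python
# Purely hypothetical illustration of combining per-modality fingerprints into one
# multi-modal fingerprint. Real systems use robust perceptual fingerprints per modality;
# the hash-based stand-ins below only demonstrate the combination and matching step.
import hashlib
import numpy as np

def modality_fingerprint(data: bytes, n_bits: int = 64) -> np.ndarray:
    """Crude stand-in for a per-modality fingerprint: leading bits of a SHA-256 digest."""
    digest = hashlib.sha256(data).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    return bits[:n_bits]

def multimodal_fingerprint(audio: bytes, video: bytes, subtitles: bytes) -> np.ndarray:
    """Concatenate the per-modality fingerprints into a single bit vector."""
    return np.concatenate([modality_fingerprint(audio),
                           modality_fingerprint(video),
                           modality_fingerprint(subtitles)])

def hamming_distance(fp_a: np.ndarray, fp_b: np.ndarray) -> int:
    return int(np.sum(fp_a != fp_b))

# Toy usage: identical content yields distance 0, altered content a large distance.
fp1 = multimodal_fingerprint(b"audio-track", b"video-track", b"subtitle-track")
fp2 = multimodal_fingerprint(b"audio-track", b"video-track", b"subtitle-track (edited)")
print(hamming_distance(fp1, fp2))
```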
Abstract:
This paper reviews the literature concerning the practice of using Online Analytical Processing (OLAP) systems to recall information stored by Online Transactional Processing (OLTP) systems. Such a review provides a basis for discussion of the need for information that is recalled through OLAP systems to maintain the contexts of transactions within the data captured by the respective OLTP system. The paper observes an industry trend involving the use of OLTP systems to process information into data, which are then stored in databases without the business rules that were used to process them. This necessitates a practice whereby sets of business rules are used to extract, cleanse, transform and load data from disparate OLTP systems into OLAP databases to support the requirements for complex reporting and analytics. These sets of business rules are usually not the same as the business rules used to capture data in particular OLTP systems. The paper argues that differences between the business rules used to interpret these same data sets risk gaps in semantics between information captured by OLTP systems and information recalled through OLAP systems. Literature concerning the modelling of business transaction information as facts with context, as part of the modelling of information systems, was reviewed to identify design trends that contribute to the design quality of OLTP and OLAP systems. The paper then argues that the quality of OLTP and OLAP systems design has a critical dependency on the capture of facts with associated context, the encoding of facts with contexts into data with business rules, the storage and sourcing of data with business rules, the decoding of data with business rules into facts with context, and the recall of facts with associated contexts. The paper proposes UBIRQ, a design model to aid the co-design of data with business-rules storage for OLTP and OLAP purposes. The proposed design model provides the opportunity for the implementation and use of multi-purpose databases and business-rules stores for OLTP and OLAP systems. Such implementations would enable the use of OLTP systems to record and store data together with the executions of business rules, which would allow OLTP and OLAP systems to query data along with the business rules used to capture them, thereby ensuring that information recalled via OLAP systems preserves the contexts of transactions as per the data captured by the respective OLTP system.
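The UBIRQ model itself is not specified in the abstract; as a minimal, assumed illustration of the general idea of co-storing each fact with its context and a reference to the business rule under which it was captured, the sketch below uses an in-memory SQLite database so that an analytical query can recall the rule together with the data. The schema and names are invented for illustration.

```python
# Minimal, assumed illustration of co-storing facts with their context and the business
# rule used to capture them, so that both transactional and analytical queries can recall
# the rule together with the data. The schema and names are invented, not the UBIRQ model.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE business_rule (
    rule_id   INTEGER PRIMARY KEY,
    rule_text TEXT NOT NULL              -- the rule as captured at transaction time
);
CREATE TABLE fact (
    fact_id   INTEGER PRIMARY KEY,
    measure   REAL NOT NULL,             -- the transactional value
    context   TEXT NOT NULL,             -- e.g. customer, product, time of the transaction
    rule_id   INTEGER NOT NULL REFERENCES business_rule(rule_id)
);
""")

conn.execute("INSERT INTO business_rule VALUES (1, 'order total = sum(line items) - discount')")
conn.execute("INSERT INTO fact VALUES (1, 99.5, 'customer=42;product=widget;ts=2015-01-01', 1)")

# An analytical (OLAP-style) query can recall the fact together with the rule that produced it.
for row in conn.execute("""
    SELECT f.measure, f.context, r.rule_text
    FROM fact f JOIN business_rule r ON f.rule_id = r.rule_id
"""):
    print(row)
```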