989 results for data flows


Relevance:

100.00%

Publisher:

Abstract:

Cluster scheduling and collision avoidance are crucial issues in large-scale cluster-tree Wireless Sensor Networks (WSNs). The paper presents a methodology that provides a Time Division Cluster Scheduling (TDCS) mechanism based on the cyclic extension of the RCPS/TC (Resource Constrained Project Scheduling with Temporal Constraints) problem for a cluster-tree WSN, assuming bounded communication errors. The objective is to meet all end-to-end deadlines of a predefined set of time-bounded data flows while minimizing the energy consumption of the nodes by setting the TDCS period as long as possible. Since each cluster is active only once during the period, the end-to-end delay of a given flow may span several periods when there are flows in the opposite direction. The scheduling tool enables system designers to efficiently configure all required parameters of IEEE 802.15.4/ZigBee beacon-enabled cluster-tree WSNs at network design time. The performance evaluation of the scheduling tool shows that problems with dozens of nodes can be solved using optimal solvers.
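To make the period-spanning behaviour concrete, here is a minimal Python sketch (not the paper's TDCS tool) that, given an assumed activation order of clusters within one period, counts how many periods a message needs to traverse its route; flows routed against the activation order spill into later periods, which is why opposite-direction flows lengthen the end-to-end delay.

    def periods_needed(schedule, route):
        """schedule: cluster ids in activation order within one TDCS period.
        route: cluster ids the flow traverses, in order."""
        slot = {cid: i for i, cid in enumerate(schedule)}
        periods = 1
        current = slot[route[0]]
        for cluster in route[1:]:
            nxt = slot[cluster]
            if nxt < current:      # this cluster's slot has already passed
                periods += 1       # the hop waits for the next period
            current = nxt
        return periods

    schedule = ["C1", "C2", "C3", "C4"]          # assumed activation order
    print(periods_needed(schedule, ["C1", "C2", "C3", "C4"]))  # 1
    print(periods_needed(schedule, ["C4", "C3", "C2", "C1"]))  # 4

The end-to-end delay of such a flow is then roughly this period count multiplied by the TDCS period, which is why a long, energy-efficient period must be traded against the deadlines of opposite-direction flows.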

Relevance:

100.00%

Publisher:

Abstract:

Simulation analysis is an important approach to developing and evaluating systems in terms of development time and cost. This paper demonstrates the application of the Time Division Cluster Scheduling (TDCS) tool for the configuration of IEEE 802.15.4/ZigBee beacon-enabled cluster-tree WSNs using simulation analysis, as an illustrative example that confirms the practical applicability of the tool. The simulation study analyses how the number of retransmissions impacts the reliability of data transmission, the energy consumption of the nodes and the end-to-end communication delay, based on a simulation model implemented in the Opnet Modeler. The configuration parameters of the network are obtained directly from the TDCS tool. The simulation results show that the number of retransmissions impacts the reliability, the energy consumption and the end-to-end delay in such a way that improving one may degrade the others.
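As a rough illustration of the trade-off reported in the study (not the Opnet model itself), the sketch below assumes each hop loses a packet independently with probability p and allows at most r retransmissions; reliability rises with r, but so does the expected number of transmissions, and hence the energy consumption and delay.

    def hop_reliability(p, r):
        """Probability a packet is delivered within r + 1 attempts."""
        return 1.0 - p ** (r + 1)

    def expected_transmissions(p, r):
        """Expected number of attempts with at most r retransmissions."""
        return (1.0 - p ** (r + 1)) / (1.0 - p)

    p = 0.2                                  # assumed per-hop loss probability
    for r in range(4):
        print(f"r={r}: reliability={hop_reliability(p, r):.4f}, "
              f"expected tx={expected_transmissions(p, r):.3f}")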

Relevance:

100.00%

Publisher:

Abstract:

Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

100.00%

Publisher:

Abstract:

The increased data complexity and task interdependency associated with servitization represent significant barriers to its adoption. The outline of a business game is presented that demonstrates the increasing complexity of the management problem when moving through the Base, Intermediate and Advanced levels of servitization. Linked Data is proposed as an agile set of technologies, based on well-established standards, for data exchange both in the game and more generally in supply chains.
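As a hedged illustration of the kind of exchange the abstract has in mind (the URIs and vocabulary below are invented for the example), Linked Data expresses facts as subject-predicate-object triples identified by URIs, so game participants or supply-chain partners can merge data without first agreeing on a single schema.

    triples = [
        ("http://example.org/asset/pump-42",
         "http://example.org/vocab#maintainedBy",
         "http://example.org/org/provider-A"),
        ("http://example.org/asset/pump-42",
         "http://example.org/vocab#contractLevel",
         '"advanced"'),                      # a literal value
    ]

    def to_ntriples(triples):
        """Serialise the triples in the standard N-Triples line format."""
        lines = []
        for s, p, o in triples:
            obj = o if o.startswith('"') else "<" + o + ">"
            lines.append("<{}> <{}> {} .".format(s, p, obj))
        return "\n".join(lines)

    print(to_ntriples(triples))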

Relevance:

100.00%

Publisher:

Abstract:

* The research was supported by INTAS 00-397 and 00-626 Projects.

Relevance:

70.00%

Publisher:

Abstract:

The increasing interconnection of information and communication systems leads to a further increase in complexity and thus also to a further increase in security vulnerabilities. Classical protection mechanisms such as firewall systems and anti-malware solutions have long ceased to provide adequate protection against intrusion attempts on IT infrastructures. Intrusion Detection Systems (IDS) have established themselves as a very effective instrument for protection against cyber attacks. Such systems collect and analyse information from network components and hosts in order to detect unusual behaviour and security violations automatically. While signature-based approaches can only detect already known attack patterns, anomaly-based IDS are also able to recognise new, previously unknown attacks (zero-day attacks) at an early stage. The core problem of intrusion detection systems, however, lies in processing the enormous volumes of network data optimally and in developing an adaptive detection model that works in real time. To address these challenges, this dissertation provides a framework consisting of two main parts. The first part, called OptiFilter, uses a dynamic queuing concept to process the large volumes of incoming network data, continuously constructs network connections, and exports structured input data for the IDS. The second part is an adaptive classifier comprising a classifier model based on the Enhanced Growing Hierarchical Self Organizing Map (EGHSOM), a model of normal network behaviour (NNB) and an update model. Within OptiFilter, tcpdump and SNMP traps are used to continuously aggregate network packets and host events. These aggregated network packets and host events are further analysed and converted into connection vectors. To improve the detection rate of the adaptive classifier, the artificial neural network GHSOM is studied intensively and substantially extended. Several approaches are proposed and discussed in this dissertation: a classification-confidence margin threshold is defined to uncover unknown malicious connections, the stability of the growing topology is increased through novel approaches for initialising the weight vectors and strengthening the winner neurons, and a self-adaptive procedure is introduced so that the model can be updated continuously. In addition, the main task of the NNB model is to further examine the unknown connections detected by the EGHSOM and to verify whether they are normal. However, due to the concept drift phenomenon, network traffic data change constantly, producing non-stationary network data in real time; this phenomenon is handled by the update model. The EGHSOM model can detect new anomalies effectively and the NNB model adapts well to the changes in the network data. In the experimental evaluation, the framework showed promising results. In the first experiment the framework was evaluated in offline mode: OptiFilter was assessed with offline, synthetic and realistic data, and the adaptive classifier was evaluated with 10-fold cross validation to estimate its accuracy. In the second experiment the framework was deployed on a 1 to 10 GB network link and evaluated online in real time. OptiFilter successfully converted the enormous volume of network data into structured connection vectors, and the adaptive classifier classified them precisely. A comparative study between the developed framework and other well-known IDS approaches shows that the proposed IDS framework outperforms all of them. This can be attributed to the following key points: processing of the collected network data, achieving the best performance (such as overall accuracy), detecting unknown connections, and developing an intrusion detection model that works in real time.
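As a minimal sketch of the classification-confidence margin idea (not the dissertation's EGHSOM implementation), assume a trained map is a list of (weight vector, label) units and a connection is a numeric feature vector; when the best and second-best matching units disagree on the label and lie too close together, the connection is marked unknown and deferred to the NNB check.

    import math

    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def classify(units, vector, margin_threshold=0.1):
        """units: list of (weight_vector, label); vector: connection features."""
        ranked = sorted(units, key=lambda u: distance(u[0], vector))
        (w1, label1), (w2, label2) = ranked[0], ranked[1]
        margin = distance(w2, vector) - distance(w1, vector)
        if label1 != label2 and margin < margin_threshold:
            return "unknown"          # low confidence: defer to the NNB model
        return label1

    units = [([0.1, 0.1], "normal"), ([0.9, 0.8], "attack"), ([0.2, 0.15], "normal")]
    print(classify(units, [0.12, 0.11]))   # -> "normal"
    print(classify(units, [0.5, 0.5]))     # -> "unknown" (labels disagree, small margin)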

Relevance:

70.00%

Publisher:

Abstract:

The Data Protection Regulation proposed by the European Commission contains important elements to facilitate and secure personal data flows within the Single Market. A harmonised level of protection of individual data is an important objective, and all stakeholders have generally welcomed this basic principle. However, when the proposed regulation is placed in the complex context in which it is to be implemented, some important issues are revealed. The proposal dictates how data is to be used, regardless of the operational context, and is generally thought to have been influenced by concerns over social networking. This approach implies protection of data rather than protection of privacy and can hardly lead to more flexible instruments for global data flows.

Relevance:

70.00%

Publisher:

Abstract:

The speed with which data has moved from being scarce, expensive and valuable, thus justifying detailed and careful verification and analysis, to a situation where the streams of detailed data are almost too large to handle has caused a series of shifts to occur. Legal systems already have severe problems keeping up with, or even in touch with, the rate at which unexpected outcomes flow from information technology. The capacity to harness massive quantities of existing data has driven Big Data applications until recently. Now real-time data flows are rising swiftly, becoming more invasive and offering monitoring potential that is eagerly sought by commerce and government alike. The ambiguities as to who owns this often remarkably intrusive personal data need to be resolved, and rapidly, but are likely to encounter rising resistance from industrial and commercial bodies who see this data flow as 'theirs'. There have been many changes in ICT that have led to stresses in resolving the conflicts between IP exploiters and their customers, but this one is of a different scale due to the wide potential for individual customisation of pricing and identification, and the rising commercial value of integrated streams of diverse personal data. A new reconciliation between the parties involved is needed: new business models, and a shift from the current confusion over who owns what data towards alignments that are in better accord with community expectations. After all, they are the customers, and the emergence of information monopolies needs to be balanced by appropriate consumer/subject rights. This will be a difficult discussion, but one that is needed to realise the great benefits to all that are clearly available if these issues can be positively resolved. The customers need to make these data flows contestable in some form. These big data flows are only going to grow and become ever more instructive. A better balance is necessary. For the first time these changes are directly affecting the governance of democracies, as the very effective micro-targeting tools deployed in recent elections have shown. Yet the data gathered is not available to the subjects. This is not a survivable social model. The Private Data Commons needs our help. Businesses and governments exploit big data without regard for issues of legality, data quality, disparate data meanings, and process quality. This often results in poor decisions, with individuals bearing the greatest risk. The threats harbored by big data extend far beyond the individual, however, and call for new legal structures, business processes, and concepts such as a Private Data Commons. This Web extra is the audio part of a video in which author Marcus Wigan expands on his article "Big Data's Big Unintended Consequences" and discusses these issues.

Relevance:

70.00%

Publisher:

Abstract:

The purpose of this thesis is to explain why and how the European Union's data protection instruments, the current Data Protection Directive and the forthcoming Data Protection Regulation, impose restrictions on transfers of EU citizens' personal data to third countries for commercial purposes. Particular attention is paid to the Safe Harbor framework, which enabled transfers of personal data from the EU to the United States and which the Court of Justice of the European Union declared invalid in Case C-362/14 Maximillian Schrems v Data Protection Commissioner. As the research topic, cross-border transfers of personal data, lies at the intersection of international law and data protection law, research by experts in both fields has been used as source material. Brownlie's Principles of Public International Law (6th edition) serves as the basic source on international law, against which literature dealing more specifically with the research topic is reflected. Particularly worth highlighting are Bygrave's Data Privacy Law: An International Perspective, which examines data privacy law in an international context, and Kuner's Transborder Data Flows and Data Privacy Law, which deals specifically with international transfers of personal data. Because the research phenomenon and the field of law are evolving rapidly with new technologies, the thesis also draws extensively on articles from respected journals, as well as reports by EU data protection authorities and the UN as official sources. The key findings identify the interests of the EU and its member states in personal data transfers, as well as the effects of the EU's transfer rules on third countries. Reaching a global consensus on regulating international transfers of personal data was assessed to be unlikely, at least in the near future. Of the current regional regulatory solutions, Council of Europe Convention No. 108 was found to show the most potential for worldwide implementation. Finally, within a framework of legal pluralism, appropriate means were assessed for improving the protection of privacy and personal data, which are safeguarded as fundamental rights of EU citizens. The analysis shows that there has been an imbalance of information and power between EU citizens and the companies processing and transferring their personal data, manifested in the weakening of individuals' informational self-determination and of the significance of consent, although the EU's Data Protection Regulation, entering into force in 2018, seeks to remove this problem by emphasising the accountability of organisations.

Relevance:

60.00%

Publisher:

Abstract:

The Institute of Public Health in Ireland is an all-island body which aims to improve health in Ireland by working to combat health inequalities and influence public policies in favour of health. The Institute promotes North-South co-operation in research, training, information and policy. The Institute commends the Department of Health and Children for producing the Discussion Paper on the Proposed Health Information Bill (June 2008) and welcomes the opportunity to comment on it. The first objective of Health Information: A National Strategy (2004) is to support the implementation of Quality and Fairness: A Health System for You (2001). The National Health Goals, such as 'Better health for everyone', 'Fair access' and 'Responsive and appropriate care delivery', are expressed in terms of the health of the public as well as patients. The Discussion Paper focuses on personal information, and the data flows within the health system, that are needed to enhance medical care and maximise patient safety. The Institute believes that the Health Information Bill should also aim to more fully support the achievement of the National Health Goals and the public health function. This requires the development of more integrated information systems that link the healthcare sector and other sectors. Assessment of health services performance, in terms of the public's health, health inequalities and achievement of the National Health Goals, requires such information systems. They will enable the construction of public health key performance indicators for the healthcare services.

Relevance:

60.00%

Publisher:

Abstract:

This final degree project aims to set out the basic requirements for the design of warehouse management software (hereafter PGM). A PGM is distinctive in that it must establish connections between the physical flows of work in a warehouse and the data flows of an information system. Part of the PGM functionality will have to be developed on radio-frequency terminals, since working online is considered essential for managing a warehouse.

Relevance:

60.00%

Publisher:

Abstract:

In a previous paper a novel Generalized Multiobjective Multitree model (GMM-model) was proposed. This model considers, for the first time, multitree-multicast load balancing with splitting in a multiobjective context, whose mathematical solution is a whole Pareto optimal set that can include more solutions than it has been possible to find in the publications surveyed. To solve the GMM-model, this paper proposes a multi-objective evolutionary algorithm (MOEA) inspired by the Strength Pareto Evolutionary Algorithm (SPEA). Experimental results considering up to 11 different objectives are presented for the well-known NSF network, with two simultaneous data flows.
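For readers unfamiliar with the terminology, the sketch below (not the paper's MOEA) shows what membership in a Pareto optimal set means when all objectives are minimised; the objective vectors are invented for illustration.

    def dominates(a, b):
        """a dominates b if it is no worse in every objective and better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(population):
        return [p for p in population
                if not any(dominates(q, p) for q in population if q is not p)]

    # Three invented objectives, e.g. cost, maximum link utilisation, delay.
    population = [(3, 0.6, 40), (2, 0.7, 35), (4, 0.5, 50), (3, 0.8, 45)]
    print(pareto_front(population))   # (3, 0.8, 45) is dominated and dropped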

Relevance:

60.00%

Publisher:

Abstract:

The target of the thesis was to find out whether the decision to outsource part of the Filtronic LK warehouse function has been profitable. A further target was to describe the current logistics processes between the TPLP and the company and to identify targets for developing these processes. The decision to outsource part of the logistical functions has been profitable during the first business year. A partnership always involves business risks, and high asset-specific investments increase that risk. On the other hand, investment in the partnership increases mutual trust and commitment between the parties. By developing the partnership, risks and opportunistic behaviour can be reduced. Potential for better managing the material and data flows between the logistics service provider and the company was observed. Analysis of inventory efficiency highlighted the need to decrease the capital invested in inventories. Recommendations for managing the outsourced logistical functions were established, such as improving the partnership, process development, performance measurement and invoice checking.

Relevance:

60.00%

Publisher:

Abstract:

Panel at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014