876 results for bigdata, data stream processing, dsp, apache storm, cyber security


Relevance:

100.00%

Publisher:

Abstract:

The European market for organic food has grown strongly since the 1990s. This growth was fostered by the introduction of EU Regulation 2092/91 on the certification of organic products and by subsidy payments to farmers willing to convert. By the end of the 1990s, these measures had led to an oversupply of some organic products at the European level. Consumer demand did not rise at the same rate as supply, and the need to improve the market balance became evident. The European Commission articulated this need in 2004 in the first "European Action Plan for Organic Food and Farming". As a prerequisite for more even market growth, the action plan calls for the creation of a more transparent market through the collection of statistical data on the production and consumption of organic products. The implementation of this action plan has so far been unsatisfactory, however, as there is still no uniform data collection for the organic sector at the EU level. The aim of this study is to identify appropriate methods for the collection, processing and analysis of organic market data. Suitable data sources are identified, and it is examined how the collected data can be checked for plausibility. To this end, an extensive data set on the organic market is analysed, which was collected within the EU research project "Organic Marketing Initiatives and Rural Development" (OMIaRD) and covers all EU-15 countries as well as the Czech Republic, Slovenia, Norway and Switzerland. Data for the following organic product groups are examined: cereals, potatoes, vegetables, fruit, milk, beef, sheep and goat meat, pork, poultry meat and eggs.
A central approach of this study is the compilation of organic supply balance sheets, which provide a summary overview of supply and demand for each product group. The following key variables are examined: organic production, organic sales, organic consumption, organic foreign trade, organic producer prices and organic consumer prices. In addition, the organic market data are set in relation to the corresponding figures for the total market (organic plus conventional) in order to assess the importance of the organic sector at the product and country level. Both primary and secondary research are used for data collection. As secondary sources, publications by market research institutes, organic producer associations and scientific institutes are evaluated. Empirical data on the organic market are collected through extensive interviews with market experts in all participating countries. The data are examined with correlation and regression analyses, and hypotheses about presumed relationships between key variables of the organic market are tested. The data underlying this study refer to a single year and thus represent a snapshot of the EU organic market situation. To enable market actors to forecast future market trends, the establishment of an EU-wide organic market data collection system is called for. This requires harmonised data collection in all EU countries according to uniform standards. The compilation of market data for the organic sector should be compatible with the methods and variables of the existing Eurostat database for the total agricultural market (organic plus conventional). An annually updated organic market database would increase the transparency of the organic market and facilitate the future development of the organic sector.
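The supply balance sheets described above rest on a simple accounting identity: apparent consumption equals production plus imports minus exports. A minimal sketch follows; the function names and the figures are illustrative assumptions, not the study's actual schema or data:

```python
def supply_balance(production, imports, exports):
    """Apparent consumption from the basic supply balance identity:
    consumption = production + imports - exports (all in tonnes)."""
    return production + imports - exports

def organic_share(organic_value, total_value):
    """Share of the organic sector in the total market, in percent,
    relating organic figures to the total (organic plus conventional)."""
    return 100.0 * organic_value / total_value

# Hypothetical figures for one product group in one country (tonnes)
consumption = supply_balance(production=120_000, imports=15_000, exports=5_000)
share = organic_share(consumption, total_value=2_600_000)
print(consumption, round(share, 2))  # 130000 5.0
```

Comparing such balances across countries is one way to check reported data for plausibility: a balance that closes only with implausibly large trade flows flags a suspect figure.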

Relevance:

100.00%

Publisher:

Abstract:

In an immersive virtual environment, observers fail to notice the expansion of a room around them and consequently make gross errors when comparing the size of objects. This result is difficult to explain if the visual system continuously generates a 3-D model of the scene based on known baseline information from interocular separation or proprioception as the observer walks. An alternative is that observers use view-based methods to guide their actions and to represent the spatial layout of the scene. In this case, they may have an expectation of the images they will receive but be insensitive to the rate at which images arrive as they walk. We describe the way in which the eye movement strategy of animals simplifies motion processing if their goal is to move towards a desired image and discuss dorsal and ventral stream processing of moving images in that context. Although many questions about view-based approaches to scene representation remain unanswered, the solutions are likely to be highly relevant to understanding biological 3-D vision.

Relevance:

100.00%

Publisher:

Abstract:

Cybersecurity is a complex challenge that has emerged alongside the evolving global socio-technical environment of social networks, which feature connectivity across time and space in ways unimaginable even a decade ago. This paper reports on the preliminary findings of a NATO-funded project that investigates the nature of innovation in open collaborative communities and its implications for cyber security. In this paper, the authors describe the framing of relevant issues, the articulation of the research questions, and the derivation of a conceptual framework based on open collaborative innovation that has emerged from preliminary field research in Russia and the UK.

Relevance:

100.00%

Publisher:

Abstract:

Objective. This study investigated whether trait positive schizotypy or trait dissociation was associated with increased levels of data-driven processing and symptoms of post-traumatic distress following a road traffic accident. Methods. Forty-five survivors of road traffic accidents were recruited from a London Accident and Emergency service. Each completed measures of trait positive schizotypy, trait dissociation, data-driven processing, and post-traumatic stress. Results. Trait positive schizotypy was associated with increased levels of data-driven processing and post-traumatic symptoms during a road traffic accident, whereas trait dissociation was not. Conclusions. Previous results which report a significant relationship between trait dissociation and post-traumatic symptoms may be an artefact of the relationship between trait positive schizotypy and trait dissociation.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a clocking pipeline technique referred to as a single-pulse pipeline (PP-Pipeline) and applies it to the problem of mapping pipelined circuits to a Field Programmable Gate Array (FPGA). A PP-pipeline replicates the operation of asynchronous micropipelined control mechanisms using synchronous-orientated logic resources commonly found in FPGA devices. Consequently, circuits with an asynchronous-like pipeline operation can be efficiently synthesized using a synchronous design methodology. The technique can be extended to include data-completion circuitry to take advantage of variable data-completion processing time in synchronous pipelined designs. It is also shown that the PP-pipeline reduces the clock tree power consumption of pipelined circuits. These potential applications are demonstrated by post-synthesis simulation of FPGA circuits.

Relevance:

100.00%

Publisher:

Abstract:

Next-generation consumer-level interactive services require reliable and constant communication for both mobile and static users. The Digital Video Broadcasting (DVB) group has exploited rapidly advancing satellite technology for the provision of interactive services and launched a standard called Digital Video Broadcast with Return Channel via Satellite (DVB-RCS). DVB-RCS relies on DVB-Satellite (DVB-S) for the provision of the forward channel. The Digital Signal Processing (DSP) implemented in the satellite channel adapter block of these standards uses powerful channel coding and modulation techniques. The investigation concentrates on the Forward Error Correction (FEC) of the satellite channel adapter block, which will help in determining how the technology copes with varying channel conditions and user requirements.
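The FEC principle the abstract refers to can be illustrated with the simplest possible code, a repetition code with majority-vote decoding. This is only a didactic stand-in: DVB-RCS actually specifies far more powerful concatenated and turbo codes, and the bit-error rate of 5% below is an arbitrary assumption:

```python
import random

def encode_repetition(bits, n=3):
    """Repetition code: transmit each bit n times (simplest FEC)."""
    return [b for b in bits for _ in range(n)]

def decode_repetition(coded, n=3):
    """Majority vote over each group of n received bits."""
    return [1 if sum(coded[i:i + n]) > n // 2 else 0
            for i in range(0, len(coded), n)]

def noisy_channel(bits, ber, rng):
    """Flip each bit independently with probability ber."""
    return [b ^ (rng.random() < ber) for b in bits]

rng = random.Random(42)
msg = [rng.randint(0, 1) for _ in range(1000)]
received = noisy_channel(encode_repetition(msg), ber=0.05, rng=rng)
decoded = decode_repetition(received)
errors = sum(m != d for m, d in zip(msg, decoded))
print(f"residual bit errors: {errors} / {len(msg)}")
```

With a 5% channel bit-error rate, triple repetition cuts the residual error rate to roughly 3p²(1-p) + p³ ≈ 0.7%, at the cost of a 1/3 code rate; the codes in DVB-RCS achieve far better trade-offs.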

Relevance:

100.00%

Publisher:

Abstract:

Lightning data, collected using a Boltek Storm Tracker system installed at Chilton, UK, were used to investigate the mean response of the ionospheric sporadic-E layer to lightning strokes in a superposed epoch study. The lightning detector can discriminate between positive and negative lightning strokes and between cloud-to-ground (CG) and inter-cloud (IC) lightning. Superposed epoch studies carried out separately using these subsets of lightning strokes as trigger events have revealed that the dominant cause of the observed ionospheric enhancement in the Es layer is negative cloud-to-ground lightning.
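The superposed epoch method used above averages a measured time series around each trigger event, so that a response locked to the trigger survives the averaging while unrelated variability cancels out. A minimal sketch follows, assuming integer time steps and a dict-based series; the real study worked with ionosonde Es-layer data and lightning stroke times:

```python
def superposed_epoch(series, trigger_times, window):
    """Mean of a time series at each offset around the trigger times.

    series: dict mapping integer time -> measured value (e.g. an Es-layer
    parameter); trigger_times: epoch zero times (e.g. lightning strokes);
    window: half-width of the epoch window in time steps.
    Returns {offset: mean value at that offset across all epochs}.
    """
    offsets = range(-window, window + 1)
    sums = {o: 0.0 for o in offsets}
    counts = {o: 0 for o in offsets}
    for t0 in trigger_times:
        for o in offsets:
            v = series.get(t0 + o)
            if v is not None:      # tolerate gaps in the record
                sums[o] += v
                counts[o] += 1
    return {o: sums[o] / counts[o] for o in offsets if counts[o]}
```

Running the analysis separately on subsets of triggers (positive vs. negative strokes, CG vs. IC), as the study does, is then just a matter of passing different `trigger_times` lists.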

Relevance:

100.00%

Publisher:

Abstract:

Emergency vehicles use high-amplitude sirens to warn pedestrians and other road users of their presence. Unfortunately, the siren noise enters the vehicle and corrupts the intelligibility of two-way radio voice communications from the emergency vehicle to a control room. Often the siren has to be turned off to enable the control room to hear what is being said, which subsequently endangers people's lives. A digital signal processing (DSP) based system for the cancellation of siren noise embedded within speech is presented. The system has been tested with the least mean square (LMS), normalised least mean square (NLMS) and affine projection algorithm (APA) using recordings from three common types of sirens (two-tone, wail and yelp) from actual test vehicles. It was found that the APA with a projection order of 2 gives comparably improved cancellation over the LMS and NLMS with only a moderate increase in algorithm complexity and code size. Therefore, this siren noise cancellation system using the APA offers an improvement in cancellation achieved by previous systems. The removal of the siren noise improves the response time for the emergency vehicle and thus the system can contribute to saving lives. The system also allows voice communication to take place even when the siren is on and as such the vehicle offers less risk of danger when moving at high speeds in heavy traffic.

Relevance:

100.00%

Publisher:

Abstract:

There is growing interest in the ways in which the location of a person can be utilized by new applications and services. Recent advances in mobile technologies have meant that the technical capability to record and transmit location data for processing is appearing in off-the-shelf handsets. This opens possibilities to profile people based on the places they visit, people they associate with, or other aspects of their complex routines determined through persistent tracking. It is possible that services offering customized information based on the results of such behavioral profiling could become commonplace. However, it may not be immediately apparent to the user that a wealth of information about them, potentially unrelated to the service, can be revealed. Further issues occur if the user agreed, while subscribing to the service, for data to be passed to third parties where it may be used to their detriment. Here, we report in detail on a short case study tracking four people, in three European member states, persistently for six weeks using mobile handsets. The GPS locations of these people have been mined to reveal places of interest and to create simple profiles. The information drawn from the profiling activity ranges from intuitive through special cases to insightful. In this paper, these results and further extensions to the technology are considered in light of European legislation to assess the privacy implications of this emerging technology.
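The place-of-interest mining described above can be approximated very simply: snap each GPS fix to a coarse lat/lon grid and keep the cells where the user dwells repeatedly. The study's actual mining method is not specified; the cell size and threshold here are assumptions:

```python
from collections import Counter

def places_of_interest(fixes, cell=0.001, min_fixes=10):
    """Crude place-of-interest mining from GPS fixes.

    Snaps each (lat, lon) fix to a grid of `cell` degrees (~100 m at
    mid-latitudes) and keeps cells visited at least `min_fixes` times.
    Returns (cell_centre, fix_count) pairs, most visited first.
    """
    grid = Counter((round(lat / cell) * cell, round(lon / cell) * cell)
                   for lat, lon in fixes)
    return [(c, n) for c, n in grid.most_common() if n >= min_fixes]
```

Even this naive profile already reveals home and workplace from a few weeks of persistent tracking, which is exactly the privacy concern the paper examines against European legislation.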

Relevance:

100.00%

Publisher:

Abstract:

The long observational record is critical to our understanding of the Earth’s climate, but most observing systems were not developed with a climate objective in mind. As a result, tremendous efforts have gone into assessing and reprocessing the data records to improve their usefulness in climate studies. The purpose of this paper is to both review recent progress in reprocessing and reanalyzing observations, and summarize the challenges that must be overcome in order to improve our understanding of climate and variability. Reprocessing improves data quality through more scrutiny and improved retrieval techniques for individual observing systems, while reanalysis merges many disparate observations with models through data assimilation, yet both aim to provide a climatology of Earth processes. Many challenges remain, such as tracking the improvement of processing algorithms and limited spatial coverage. Reanalyses have fostered significant research, yet reliable global trends in many physical fields are not yet attainable, despite significant advances in data assimilation and numerical modeling. Oceanic reanalyses have made significant advances in recent years, but will only be discussed here in terms of progress toward integrated Earth system analyses. Climate data sets are generally adequate for process studies and large-scale climate variability. Communication of the strengths, limitations and uncertainties of reprocessed observations and reanalysis data, not only among the community of developers, but also with the extended research community, including the new generations of researchers and the decision makers is crucial for further advancement of the observational data records. It must be emphasized that careful investigation of the data and processing methods are required to use the observations appropriately.

Relevance:

100.00%

Publisher:

Abstract:

This work represents an investigation into the presence, abundance and diversity of virus-like particles (VLPs) associated with human faecal and caecal samples. Various methodologies for the recovery of VLPs from faeces were tested and optimized, including successful down-stream processing of such samples for the purpose of an in-depth electron microscopic analysis, pulsed-field gel electrophoresis and efficient DNA recovery. The applicability of the developed VLP characterization method beyond the use of faecal samples was then verified using samples obtained from human caecal fluid.

Relevance:

100.00%

Publisher:

Abstract:

Massive Open Online Courses (MOOCs) have become very popular among learners: millions of users from around the world have registered with the leading platforms, and hundreds of universities (and other organizations) now offer MOOCs. However, the sustainability of MOOCs is a pressing concern, as MOOCs incur up-front creation costs, maintenance costs to keep content relevant, and ongoing support costs to provide facilitation while a course is being run. At present, charging a fee for certification (for example, Coursera Signature Track and FutureLearn Statement of Completion) seems a popular business model. In this paper, the authors discuss other possible business models and their pros and cons. The business models discussed here are:

Freemium model – providing content freely but charging for premium services such as course support, tutoring and proctored exams.

Sponsorships – courses can be created in collaboration with industry, where industry sponsorships cover the costs of course production and offering. For example, the Teaching Computing course was offered by the University of East Anglia on the FutureLearn platform with sponsorship from British Telecom, while the UK Government sponsored the course Introduction to Cyber Security offered by the Open University on FutureLearn.

Initiatives and grants – governments, the EU Commission or corporations could commission the creation of courses through grants and initiatives according to the skills gaps identified for the economy. For example, the UK Government's National Cyber Security Programme has supported a course on cyber security. Similar initiatives could also fund relevant course development and delivery.

Donations – free software, Wikipedia and early OER initiatives such as MIT OpenCourseWare accept donations from the public, and this could well be used as a business model where learners contribute (if they wish) to the maintenance and facilitation of a course.

Merchandise – selling merchandise could also bring revenue to MOOCs. As many participants do not seek formal recognition (European Commission, 2014) for their completion of a MOOC, merchandise that presents their achievement in a playful way could well be attractive to them.

Sale of supplementary material – supplementary course material, in the form of an online or physical book or similar, could be sold, with the revenue reinvested in course delivery.

Selective advertising – courses could carry advertisements relevant to learners.

Data sharing – though a controversial topic, sharing learner data with relevant employers or similar could be another revenue model for MOOCs.

Follow-on events – courses could lead to follow-on summer schools, courses or other real-life or online events that are paid for, in which case a percentage of the revenue could be passed on to the MOOC for its upkeep.

Though these models are all possible ways of generating revenue for MOOCs, some are more controversial and sensitive than others. Nevertheless, unless appropriate business models are identified, the sustainability of MOOCs will be problematic.

Relevance:

100.00%

Publisher:

Abstract:

An Intelligent Transportation System (ITS) builds a safe, effective and integrated transportation environment based on advanced technologies. Road sign detection and recognition is an important part of ITS, offering a way to collect real-time traffic data for processing at a central facility. This project implements a road sign recognition model based on AI and image analysis technologies, applying a machine learning method, Support Vector Machines (SVM), to recognize road signs. We focus on recognizing seven categories of road sign shapes and five categories of speed limit signs. Two kinds of features, binary images and Zernike moments, are used to represent the data to the SVM for training and testing. We compared and analyzed the performance of the SVM recognition model using different features and different kernels. Moreover, the performance of different recognition models, SVM and Fuzzy ARTMAP, is compared.
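The core of the SVM approach can be sketched with a linear SVM trained by stochastic sub-gradient descent on the hinge loss (a Pegasos-style solver). This is a simplified stand-in, not the project's implementation: it omits the bias term and kernels (the project compared several kernels), and the feature vectors here are assumed to be, say, flattened binary sign images or Zernike moment vectors:

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=100, seed=0):
    """Linear SVM via stochastic sub-gradient descent on the hinge loss.

    X: list of feature vectors; y: labels in {-1, +1}.  No bias term, so
    the data are assumed roughly centred.  lam is the regularization
    strength; the step size decays as 1/(lam*t) (Pegasos schedule).
    """
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):   # shuffled pass
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            w = [(1 - eta * lam) * wj for wj in w]    # shrink (regularize)
            if margin < 1:                            # hinge sub-gradient
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def predict(w, x):
    """Sign of the decision function."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1
```

Multi-class recognition (seven shape categories, five speed-limit categories) is then typically handled by one-vs-rest: one such binary classifier per category, choosing the class with the largest decision value.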

Relevance:

100.00%

Publisher:

Abstract:

Currently, great emphasis is placed on seed metering devices that must meet rigorous demands regarding the longitudinal distribution of seeds, as well as the rates of spacing failures, broken seeds and double seeds. Evaluating these variables demands much time and labour for data collection and processing. The objective of this work was to propose the use of normal probability plots, simplifying the treatment of the data and reducing processing time. The evaluation methodology consists of counting broken seeds, spacing failures and double seeds through measurement of the spacing between seeds. Preliminary experiments were carried out with combinations of treatments whose factors of variation were the level of the seed reservoir, the levelling of the seed metering device, the displacement speed and the seed dosage. The evaluation was carried out in two parts: first, preliminary experiments for the elaboration of the normal probability plots, and later, experiments with larger samples to evaluate the influence of the most important factors. A seed metering device with a rotating internal ring was evaluated, and the amount of data necessary for the evaluation was greatly reduced by the normal probability plots, which made it possible to prioritize only the significant factors. Seed dosage (factor D) was the most important factor, showing the greatest significance.
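The screening technique above relies on plotting estimated factor effects against normal quantiles: inert effects fall on a straight line through the origin, while significant effects (like factor D here) stand off it. A minimal sketch of computing the plot coordinates, using the common Blom plotting positions (an assumption, since the paper does not state which positions it used):

```python
from statistics import NormalDist

def normal_plot_points(effects):
    """Coordinates for a normal probability plot of factor effects.

    Sorts the effects and pairs each with its theoretical standard
    normal quantile using Blom plotting positions (i - 3/8)/(n + 1/4).
    Returns (theoretical quantile, observed effect) pairs; effects far
    off the straight line through the bulk are judged significant.
    """
    ordered = sorted(effects)
    n = len(ordered)
    nd = NormalDist()
    return [(nd.inv_cdf((i - 0.375) / (n + 0.25)), e)
            for i, e in enumerate(ordered, start=1)]
```

For example, effects of [-0.4, -0.2, 0.1, 0.3, 6.0] would put the four small effects near a line and the 6.0 effect well above it, flagging that factor for the larger follow-up experiment.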

Relevance:

100.00%

Publisher:

Abstract:

The use of middleware technology in various types of systems, in order to abstract low-level details related to the distribution of application logic, is increasingly common. Among the many systems that can benefit from these components, we highlight distributed systems, where communication must be enabled between software components located on different physical machines. An important issue related to communication between distributed components is the provision of mechanisms for managing quality of service. This work presents a metamodel for modeling component-based middleware that provides an application with an abstraction of the communication between the components involved in a data stream, regardless of their location. Another feature of the metamodel is the possibility of self-adaptation of the communication mechanism, either by updating the values of its configuration parameters, or by replacing it with another mechanism when the specified quality-of-service restrictions are not being met. To this end, monitoring of the communication state is planned (applying techniques such as a feedback control loop), analyzing the related performance metrics. The Model-Driven Development (MDD) paradigm was used to generate the implementation of a middleware that serves as a proof of concept of the metamodel, together with the configuration and reconfiguration policies related to the dynamic adaptation processes. In this sense, the metamodel associated with the process of configuring a communication was defined. The MDD application also includes the definition of the following transformations: from the architectural model of the middleware to Java code, and from the configuration model to XML.
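The two-level adaptation strategy described above (tune configuration parameters first, replace the communication mechanism only when tuning is exhausted) can be sketched as one step of a feedback control loop. This is an illustration of the idea, not the paper's metamodel; the parameter names, the buffer-size knob and the latency metric are all assumptions:

```python
def qos_control_step(measured_latency_ms, target_ms, params, step=0.2):
    """One iteration of a QoS feedback control loop.

    If measured latency exceeds the target, first shrink the
    mechanism's buffer size (parameter tuning); once the buffer has
    reached its minimum, switch to the fallback mechanism (replacement).
    Returns the (possibly updated) configuration dict.
    """
    error = measured_latency_ms - target_ms
    if error <= 0:
        return params                                   # QoS satisfied
    smaller = max(params["min_buffer"],
                  int(params["buffer"] * (1 - step)))
    if smaller < params["buffer"]:
        return {**params, "buffer": smaller}            # tune parameter
    return {**params, "mechanism": params["fallback"]}  # replace mechanism
```

In a running middleware, a monitor would feed measured metrics into this step on each control period, closing the loop between observation and reconfiguration.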