884 results for Hadoop distributed file system (HDFS)


Relevance:

30.00%

Publisher:

Abstract:

Remanufacturing is the process of rebuilding used products so that the quality of the remanufactured product is equivalent to that of a new one. Although the theme is gaining ground, it is still little explored, owing to lack of knowledge and to the difficulty of visualizing it systemically and implementing it effectively. Few models treat remanufacturing as a system; most studies still treat it as an isolated process, preventing it from being seen in an integrated manner. The aim of this work is therefore to organize the knowledge about remanufacturing, offering a vision of the remanufacturing system and contributing to an integrated view of the theme. The methodology employed was a literature review, adopting the General Theory of Systems to characterize the remanufacturing system. This work consolidates and organizes the elements of this system, enabling a better understanding of remanufacturing and assisting companies in adopting the concept.

Relevance:

30.00%

Publisher:

Abstract:

Providing support for multimedia applications on low-power mobile devices remains a significant research challenge. This is primarily due to two reasons:
• Portable mobile devices have modest sizes and weights, and therefore inadequate resources compared to desktop and laptop systems: low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes.
• Multimedia applications, on the other hand, tend to have distinctive QoS and processing requirements which make them extremely resource-demanding.
This innate conflict introduces key research challenges in the design of multimedia applications and device-level power optimization. Energy efficiency on this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while Systems-on-Chip are increasingly programmable, thus providing functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood in both research and industry that system configuration and management cannot be controlled efficiently relying only on low-level firmware and hardware drivers: at this level there is a lack of information about user application activity, and consequently about the impact of power-management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of a middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system. A second main objective has been the definition and implementation of software facilities (toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms.

Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time, and, even more important, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks on the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. The problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform.

This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications on multiprocessor architectures. This is a well-known problem in the literature: optimization problems of this kind are very complex even in much simplified variants, so most authors propose simplified models and heuristic approaches to solve them in reasonable time. Model simplification is often achieved by abstracting away platform implementation "details". As a result, the optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristic or, more generally, incomplete search is that it introduces an optimality gap of unknown size: it provides very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and the optimality gaps by formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructure required by developers to deploy applications on the target MPSoC platforms.

Energy-efficient LCD backlight autoregulation on a real-life multimedia application processor. Despite the ever-increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smartphones, portable media players, and gaming and navigation devices. There is a clear trend towards larger LCDs to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and pixel-matrix driving circuits and is typically proportional to the panel area; as a result, its contribution is also likely to be considerable in future mobile appliances. To address this issue, companies are proposing low-power technologies suitable for mobile applications, supporting low-power states and image control techniques. On the research side, several power-saving schemes and algorithms can be found in the literature. Some exploit software-only techniques to change the image content and reduce the power associated with crystal polarization; others aim at decreasing the backlight level while compensating the resulting luminance reduction, and the perceived quality degradation, using pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation shows an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement hardware-assisted image compensation that allows dynamic scaling of the backlight with a negligible impact on QoS. The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modification.

Thesis overview. The remainder of the thesis is organized as follows. The first part is focused on enhancing the energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs, based on functional simulation and full-system power estimation. Chapter 4 targets allocation and scheduling of pipelined stream-oriented applications on top of distributed-memory architectures with messaging support; we tackle the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework to solve the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while Chapter 6 takes into account applications with conditional task graphs. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers implement software efficiently on a real architecture, the Cell Broadband Engine processor. The second part is focused on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable-device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 reviews several energy-efficient software techniques from the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions discussed throughout this dissertation.
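The backlight/compensation trade-off described above can be illustrated with a minimal sketch. The snippet below is not the dissertation's hardware-assisted implementation; it is a hypothetical software model showing why dimming the backlight while boosting pixel values can preserve perceived luminance until pixels saturate (all names and the 8-bit clipping threshold are assumptions).

```python
import numpy as np

def compensate_backlight(frame: np.ndarray, dim_factor: float):
    """Dim the backlight by `dim_factor` (0 < dim_factor <= 1) and boost
    pixel values to keep perceived luminance roughly constant.

    Perceived luminance ~ backlight_level * pixel_value, so scaling the
    backlight by dim_factor and pixels by 1/dim_factor is neutral until
    pixels clip at the maximum code value (255 for an 8-bit panel).
    """
    boosted = frame.astype(np.float32) / dim_factor
    clipped = np.clip(boosted, 0, 255).astype(np.uint8)  # saturation = QoS loss
    return clipped, dim_factor  # compensated frame, new relative backlight level

# Example: a mid-gray frame tolerates a 40% backlight reduction with no clipping.
frame = np.full((480, 640), 128, dtype=np.uint8)
compensated, backlight = compensate_backlight(frame, dim_factor=0.6)
assert compensated.max() <= 255 and backlight == 0.6
```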

Relevance:

30.00%

Publisher:

Abstract:

The work was divided into three macro-areas. The first concerns a theoretical analysis of how intrusions work, of which software tools are used to carry them out, and of how to protect against them (using the devices generically known as firewalls). The second macro-area analyzes an intrusion coming from the outside and targeting sensitive servers of a LAN. This analysis is carried out on the files captured by the two network interfaces configured in promiscuous mode on a probe located in the LAN; there are two interfaces so that the probe can attach to two LAN segments with two different subnet masks. The attack is analyzed with various software tools, and this analysis can in fact be considered a third part of the work: the files captured by the two interfaces are examined first with software that handles full-content data, such as Wireshark, then with software that handles session data, processed with Argus, and finally with statistical analysis, processed with Ntop. The penultimate chapter, the one before the conclusions, covers the installation of Nagios and its configuration for monitoring, through plugins, the remaining disk space on a remote agent machine and the MySQL and DNS services. Naturally, Nagios can be configured to monitor any kind of service offered on the network.
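As an illustration of the plugin-based monitoring mentioned above, here is a minimal sketch of a Nagios-style disk-space check written in Python. It follows the standard Nagios plugin convention (exit code 0 = OK, 1 = WARNING, 2 = CRITICAL); the thresholds and mount point are assumptions, not the exact plugin configuration used in the thesis.

```python
#!/usr/bin/env python3
"""Minimal Nagios-style plugin: warn/alert on low free disk space."""
import shutil
import sys

# Hypothetical thresholds; real deployments pass these as CLI options.
MOUNT_POINT = "/"
WARN_PCT, CRIT_PCT = 20.0, 10.0  # free-space percentages

usage = shutil.disk_usage(MOUNT_POINT)
free_pct = usage.free / usage.total * 100

if free_pct < CRIT_PCT:
    print(f"DISK CRITICAL - {free_pct:.1f}% free on {MOUNT_POINT}")
    sys.exit(2)
elif free_pct < WARN_PCT:
    print(f"DISK WARNING - {free_pct:.1f}% free on {MOUNT_POINT}")
    sys.exit(1)
print(f"DISK OK - {free_pct:.1f}% free on {MOUNT_POINT}")
sys.exit(0)
```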

Relevance:

30.00%

Publisher:

Abstract:

The wide diffusion of cheap, small, portable sensors integrated in an unprecedentedly large variety of devices, together with the availability of almost ubiquitous Internet connectivity, makes it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if properly and promptly analyzed, can be exploited to build new intelligent and pervasive services with the potential to improve people's quality of life in a variety of domains, such as entertainment, health care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to the provisioning of differentiated quality of service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers, and we present Quasit, its prototype implementation, offering a scalable and extensible platform that researchers can use to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing, through a large experimental study on the prototype of our novel LAAR dynamic replication technique. Our modeling, prototyping, and experimental work demonstrates that, by providing the data distribution and processing middleware with application-level knowledge of the quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
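To make the idea of trading replication cost against fault-tolerance guarantees concrete, here is a minimal Python sketch of a QoS-differentiated replication policy. It illustrates the general idea only; it is not the Quasit or LAAR implementation, and the QoS classes and replica counts are assumptions.

```python
from dataclasses import dataclass

# Hypothetical QoS classes: how much delivered data an operator may lose.
REPLICAS_BY_QOS = {
    "strict": 3,       # full active replication: survives two failures
    "partial": 2,      # weaker guarantee (LAAR-like idea): cheaper, bounded loss
    "best_effort": 1,  # no replication: lowest cost
}

@dataclass
class Operator:
    name: str
    qos_class: str

def placement_cost(operators, per_replica_cost=1.0):
    """Total resource cost of a deployment under the replication policy."""
    return sum(REPLICAS_BY_QOS[op.qos_class] * per_replica_cost for op in operators)

pipeline = [Operator("ingest", "strict"),
            Operator("enrich", "partial"),
            Operator("stats", "best_effort")]
print(placement_cost(pipeline))  # 3 + 2 + 1 = 6.0 cost units, vs 9.0 if all strict
```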

Relevance:

30.00%

Publisher:

Abstract:

Modern software systems, in particular distributed ones, are everywhere around us and are at the basis of our everyday activities. Hence, guaranteeing their correctness, consistency and safety is of paramount importance, and their complexity makes the verification of such properties a very challenging task. It is natural to expect that these systems are reliable and, above all, usable. i) In order to be reliable, compositional models of software systems need to account for consistent dynamic reconfiguration, i.e., changing at runtime the communication patterns of a program. ii) In order to be useful, compositional models of software systems need to account for interaction, which can be seen as communication patterns among components that collaborate to achieve a common task. The aim of the Ph.D. was to develop powerful techniques based on formal methods for the verification of correctness, consistency and safety properties related to dynamic reconfiguration and communication in complex distributed systems. In particular, static analysis techniques based on types and type systems appeared to be an adequate methodology, considering their success in guaranteeing not only basic safety properties but also more sophisticated ones, like deadlock or livelock freedom in a concurrent setting. The main contributions of this dissertation are twofold. i) On the components side, we design types and a type system for a concurrent object-oriented calculus to statically ensure consistency of dynamic reconfigurations related to modifications of communication patterns in a program during execution time. ii) On the communication side, we study advanced safety properties related to communication in complex distributed systems, like deadlock-freedom, livelock-freedom and progress. Most importantly, we exploit an encoding of types and terms of a typical distributed language, the session π-calculus, into the standard typed π-calculus, in order to understand their expressive power.
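As a small illustration of the kind of static discipline involved, consider a binary session type for a service that receives an integer, sends back a boolean, and terminates. The notation below is the standard one for session types (! for output, ? for input); the concrete protocol is an invented example, not one from the dissertation.

```latex
% Session type of the server endpoint and its dual (the client endpoint):
%   S       = ?int . !bool . end
%   dual(S) = !int . ?bool . end
% A well-typed pair of endpoints always carries dual types, which rules out
% communication mismatches; deadlock-freedom and progress need further analysis.
S \;=\; ?\,\mathtt{int}.\,!\,\mathtt{bool}.\,\mathbf{end}
\qquad
\overline{S} \;=\; !\,\mathtt{int}.\,?\,\mathtt{bool}.\,\mathbf{end}
```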

Relevance:

30.00%

Publisher:

Abstract:

Besides the traditional paradigm of "centralized" power generation, a new concept of "distributed" generation is emerging, in which the user becomes a prosumer. During this transition, Energy Storage Systems (ESS) can provide multiple services and features which are necessary for a higher-quality electrical system and for the optimization of non-programmable Renewable Energy Source (RES) power plants. An ESS prototype was designed, developed and integrated into a renewable energy production system in order to create a smart microgrid and consequently manage the energy flow efficiently and intelligently as a function of the power demand. The produced energy can be fed into the grid, supplied directly to the load, or stored in batteries. The microgrid comprises a 7 kW wind turbine (WT) and a 17 kW photovoltaic (PV) plant; the load is given by the electrical utilities of a cheese factory. The ESS is composed of two subsystems: a Battery Energy Storage System (BESS) and a Power Control System (PCS). With the aim of sizing the ESS, a Remote Grid Analyzer (RGA) was designed, realized and connected to the wind turbine, the photovoltaic plant and the switchboard. Different electrochemical storage technologies were then studied and, taking into account the load requirements of the cheese factory, the most suitable solution was identified in the high-temperature Na-NiCl2 (sodium-nickel chloride) battery technology. Data acquisition from all electrical utilities provided a detailed load analysis, indicating an optimal storage size of 30 kW. Moreover, a container was designed and realized to house the BESS and PCS, meeting all requirements and safety conditions. Furthermore, a smart control system was implemented to handle the different applications of the ESS, such as peak shaving and load levelling.
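A minimal sketch of the peak-shaving logic mentioned above: discharge the battery when the net load exceeds a threshold, and recharge from surplus renewable generation otherwise. The thresholds, capacity, and time step are illustrative assumptions, not the parameters of the installed system.

```python
def peak_shaving_step(load_kw, gen_kw, soc_kwh,
                      peak_limit_kw=30.0, capacity_kwh=60.0, dt_h=0.25):
    """One control step: returns (grid power drawn, new battery state of charge).

    Positive battery power = discharge. All parameters are illustrative.
    """
    net_kw = load_kw - gen_kw  # power the site still needs after renewables
    if net_kw > peak_limit_kw and soc_kwh > 0:
        # Shave the peak: discharge so grid draw is capped at peak_limit_kw.
        discharge_kw = min(net_kw - peak_limit_kw, soc_kwh / dt_h)
        return net_kw - discharge_kw, soc_kwh - discharge_kw * dt_h
    if net_kw < 0 and soc_kwh < capacity_kwh:
        # Surplus renewable power: store it instead of exporting it.
        charge_kw = min(-net_kw, (capacity_kwh - soc_kwh) / dt_h)
        return net_kw + charge_kw, soc_kwh + charge_kw * dt_h
    return net_kw, soc_kwh

grid_kw, soc = peak_shaving_step(load_kw=42.0, gen_kw=5.0, soc_kwh=20.0)
print(grid_kw, soc)  # grid draw capped at 30 kW; the battery supplies the rest
```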

Relevance:

30.00%

Publisher:

Abstract:

This work concerns the performance analysis and porting of an SBI system to Cloudera's Hadoop distribution. Specifically, the data of the WebPolEU project were ported. The performance of the Impala query engine was then compared with that of ElasticSearch which, unlike Oracle, exploits the same hardware (the cluster).
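For context, client programs typically query Impala through its HiveServer2-compatible interface; the sketch below uses the impyla Python package against a hypothetical table (host, port, table and column names are assumptions, not details from the thesis).

```python
# pip install impyla  -- thin client for Impala's HiveServer2 interface
from impala.dbapi import connect

# Hypothetical cluster endpoint and table; adjust to the actual deployment.
conn = connect(host="impala-host.example.org", port=21050)
cur = conn.cursor()

# A simple aggregation of the kind used when benchmarking a query engine.
cur.execute("""
    SELECT source, COUNT(*) AS n
    FROM webpoleu_documents
    GROUP BY source
    ORDER BY n DESC
    LIMIT 10
""")
for source, n in cur.fetchall():
    print(source, n)
conn.close()
```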

Relevance:

30.00%

Publisher:

Abstract:

In recent years biology has made ever greater use of computer science to tackle complex analyses involving large amounts of data. Among the biological sciences that require processing a considerable volume of data is genomics, a branch of molecular biology that studies the structure, content, function and evolution of the genome of living organisms. Data warehouse systems are an information technology well suited to supporting certain kinds of analysis in genomics, because they allow exploratory and dynamic analyses, which are useful when one wants to derive summary information from a large amount of data and to explore different perspectives and levels of detail. This thesis is part of a wider project concerning the design of a data warehouse for genomics. The analyses carried out led to the discovery of functional dependencies and, consequently, to the definition of a hierarchy in the data. By inserting this hierarchy into a multidimensional model of the genomic data, it will be possible to broaden the range of analyses that can be performed on the data warehouse, introducing additional information about patient characteristics. The steps performed in this thesis were, first of all, the loading and filtering of the data. The core of the thesis was the implementation of an algorithm for the discovery of functional dependencies, with the aim of deriving a hierarchy from the data. In the final phase, the derived hierarchy was inserted into a pre-existing multidimensional model. The entire work was carried out using Apache Spark and Apache Hadoop.
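The core step, functional dependency discovery, can be sketched in a few lines of PySpark. A dependency A → B holds in a dataset when every value of A is associated with exactly one value of B; the snippet below tests one candidate dependency (the column names and sample data are assumptions, and the thesis's actual algorithm for enumerating candidates is more involved).

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("fd-check").getOrCreate()

def holds_fd(df, lhs: str, rhs: str) -> bool:
    """True if the functional dependency lhs -> rhs holds in df:
    no value of lhs may map to more than one distinct value of rhs."""
    violations = (df.groupBy(lhs)
                    .agg(F.countDistinct(rhs).alias("n_rhs"))
                    .filter(F.col("n_rhs") > 1))
    return violations.limit(1).count() == 0

# Hypothetical genomic sample data: gene -> chromosome should hold.
df = spark.createDataFrame(
    [("BRCA1", "chr17"), ("BRCA1", "chr17"), ("TP53", "chr17"), ("CFTR", "chr7")],
    ["gene", "chromosome"])
print(holds_fd(df, "gene", "chromosome"))  # True: each gene has one chromosome
```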

Relevance:

30.00%

Publisher:

Abstract:

The focus of this work is on recommendation systems and their characteristics. The use of these mechanisms is increasingly widespread on the web, with a parallel development of ever more accurate and efficient solutions. Among the existing approaches, the one adopted by Apache Mahout was chosen for examination. This open-source library implements collaborative filtering, basing the recommendation process on the preferences users express about different items. Thanks to Apache Mahout and the basic principles of the various types of recommendation, it was possible to build a web application that produces recommendations for scientific publications, selecting the articles most similar to those published by the current user. The realization of this project led to the definition of a hybrid system: Mahout's approach to recommendation is not entirely suited to this situation, so its components were extended and adapted to the case study, combining collaborative filtering and content-based filtering in a single approach. From Apache Mahout, the algorithm used to examine the data set was kept, while the aspect tied to user preferences was set aside entirely, since users do not rate the articles. From the content-based side, the idea of comparing publication titles was used. The evaluation of this application revealed several limitations, but also possible future developments that could improve the quality of the recommendations and, above all, their performance. Thanks to Apache Hadoop, for example, distributed computation would make it possible to process thousands of records with more than acceptable results.
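The content-based half of the hybrid, comparing publication titles, can be illustrated with a minimal Python sketch using TF-IDF and cosine similarity. This is a stand-in for the idea, not the thesis's Mahout-based code; the titles are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: titles of candidate articles plus the user's own papers.
titles = [
    "Distributed file systems for large-scale data analytics",
    "Fault tolerance in stream processing engines",
    "Scheduling MapReduce jobs on heterogeneous clusters",
]
user_titles = ["Scalable data analytics on distributed file systems"]

vectorizer = TfidfVectorizer(stop_words="english")
corpus_vecs = vectorizer.fit_transform(titles)
user_vecs = vectorizer.transform(user_titles)

# Rank candidates by maximum similarity to any of the user's publications.
scores = cosine_similarity(corpus_vecs, user_vecs).max(axis=1)
for title, score in sorted(zip(titles, scores), key=lambda t: -t[1]):
    print(f"{score:.2f}  {title}")
```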

Relevance:

30.00%

Publisher:

Abstract:

Energy transfer between the interacting waves in a distributed Brillouin sensor can result in a distorted measurement of the local Brillouin gain spectrum, leading to systematic errors. It is demonstrated that this depletion effect can be precisely modelled; this has been validated by experimental tests with excellent quantitative agreement. Strict guidelines can be enunciated from the model to make the impact of depletion negligible, for any type and any length of fiber. (C) 2013 Optical Society of America
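For orientation, the depletion effect referred to above is captured by the standard steady-state coupled-intensity equations of stimulated Brillouin scattering between the counter-propagating pump and probe; this is the textbook form, not necessarily the exact model of the paper.

```latex
% Steady-state SBS coupling between a pump I_p (propagating in +z) and a
% counter-propagating probe I_s, with gain g_B and fiber attenuation \alpha:
\frac{dI_p}{dz} = -\, g_B\, I_p I_s \;-\; \alpha\, I_p,
\qquad
\frac{dI_s}{dz} = -\, g_B\, I_p I_s \;+\; \alpha\, I_s .
% The -g_B I_p I_s term in the first equation is the pump depletion that,
% if neglected, distorts the measured local Brillouin gain spectrum.
```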

Relevance:

30.00%

Publisher:

Abstract:

This thesis explores system performance for reconfigurable distributed systems and provides an analytical model for determining the throughput of theoretical systems based on the OpenSPARC FPGA Board and the SIRC Communication Framework. The model was developed by studying a small set of variables that together determine a system's throughput. Its importance lies in assisting system designers in deciding whether or not to commit to designing a reconfigurable distributed system, based on estimated performance and hardware costs. Because custom hardware design and distributed system design are both time-consuming and costly, it is important for designers to make decisions regarding system feasibility early in the development cycle. Based on experimental data, the model presented in this thesis shows a close fit, with less than 10% experimental error on average. The model is limited to a certain range of problems, but it can still be used within those limitations, and it also provides a foundation for further work on modeling reconfigurable distributed systems.
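The thesis's actual model is not reproduced in this abstract. As an orientation only, a first-order throughput model for such a system often takes a form like the following, where every variable (per-job data size, link bandwidth, hardware processing time, board count) is an illustrative assumption rather than the thesis's notation.

```latex
% Hypothetical first-order throughput model for n boards, assuming
% communication (data size D over bandwidth B) overlaps with computation
% (hardware processing time t_{proc}), so the slower stage dominates:
\text{Throughput} \;\approx\; \frac{n}{\max\!\left( D/B,\; t_{proc} \right)}
\quad \text{[jobs per second]}
```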

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents two frameworks, a software framework and a hardware core manager framework, which together can be used to develop a processing platform using a distributed system of field-programmable gate array (FPGA) boards. The software framework provides users with the ability to easily develop applications that exploit the processing power of FPGAs, while the hardware core manager framework gives users the ability to configure and interact with multiple FPGA boards and/or hardware cores. This thesis describes the design and development of these frameworks and analyzes the performance of a system constructed using them. The performance analysis included measuring the effect of incorporating additional hardware components into the system and comparing the system to a software-only implementation. The work draws conclusions from the results of the performance analysis and offers suggestions for future work.
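As a sketch of what the hardware core manager's role implies for application code, here is a hypothetical Python interface for configuring boards and dispatching work to hardware cores. Every name here is invented for illustration; the thesis's actual frameworks define their own APIs.

```python
from abc import ABC, abstractmethod

class CoreManager(ABC):
    """Hypothetical facade over a pool of FPGA boards and hardware cores."""

    @abstractmethod
    def configure(self, board_id: int, bitstream_path: str) -> None:
        """Load a bitstream onto a board, making its cores available."""

    @abstractmethod
    def dispatch(self, core_name: str, payload: bytes) -> bytes:
        """Send input data to a named hardware core and return its output."""

def process_chunks(mgr: CoreManager, chunks: list) -> list:
    # A software framework would hide this loop behind higher-level calls.
    return [mgr.dispatch("fir_filter", chunk) for chunk in chunks]
```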

Relevance:

30.00%

Publisher:

Abstract:

Comments on an article by Kashima et al. (see record 2007-10111-001). In their target article, Kashima and colleagues try to show how a connectionist-model conceptualization of the self is best suited to capture the self's temporal and socio-culturally contextualized nature. They propose a new model and, to support it, conduct computer simulations of psychological phenomena whose importance for the self has long been clear, even if not formally modeled, such as imitation and the learning of sequence and narrative. As explicated when we advocated connectionist models as a metaphor for the self in Mischel and Morf (2003), we fully endorse the utility of such a metaphor, as these models have some of the processing characteristics necessary for capturing key aspects and functions of a dynamic cognitive-affective self-system. As elaborated in that chapter, we see as their principal strength that connectionist models can take account of multiple simultaneous processes without invoking a single central control. All outputs reflect a distributed pattern of activation across a large number of simple processing units, the nature of which depends on (and changes with) the connection weights between the links and the satisfaction of mutual constraints across these links (Rumelhart & McClelland, 1986). This allows a simple account of why certain input features will predominate at times, while others take over on other occasions. (PsycINFO Database Record (c) 2008 APA, all rights reserved)
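To make the mechanism concrete, here is a minimal sketch of the kind of distributed constraint-satisfaction update alluded to above: activations settle under fixed connection weights, so which features predominate depends on the weights and the input, with no central controller. The network size and weights are invented toy values, not a model from either article.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# Symmetric connection weights encode mutual constraints between units.
W = rng.normal(0, 0.5, (n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)

external_input = rng.normal(0, 1, n)  # stimulus / context
a = np.zeros(n)                       # unit activations

for _ in range(50):  # settle toward a mutually consistent activation pattern
    a = np.tanh(W @ a + external_input)

print(np.round(a, 2))  # the "output" is this distributed pattern of activation
```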

Relevance:

30.00%

Publisher:

Abstract:

This dissertation presents competitive control methodologies for small-scale power systems (SSPS). An SSPS is a collection of sources and loads sharing a common network, which can be isolated during terrestrial disturbances. Micro-grids, naval ship electric power systems (NSEPS), aircraft power systems and telecommunication power systems are typical examples of SSPS. The analysis and development of control systems for SSPS is complicated by the lack of a defined slack bus; in addition, a change in a load or source will influence the real-time parameters of the system. The control system should therefore provide the required flexibility to ensure operation as a single aggregated system. In most SSPS the sources and loads must be equipped with power-electronic interfaces, which can be modeled as dynamically controllable quantities. The mathematical formulation of the micro-grid is carried out with the help of game theory, optimal control and the fundamental theory of electrical power systems; the micro-grid can then be viewed as a dynamical multi-objective optimization problem with nonlinear objectives and variables. Detailed analysis was carried out on optimal solutions for startup transient modeling, bus selection modeling and the level of communication within the micro-grid, and for each approach a detailed mathematical model was formed in order to observe the system response. A differential game-theoretic approach was used for the modeling and optimization of startup transients. The startup transient controller was implemented with open-loop, PI and feedback control methodologies, and a hardware implementation was carried out to validate the theoretical results. The proposed game-theoretic controller shows higher performance than the traditional PI controller during startup; in addition, the optimal transient surface is necessary when implementing the feedback controller for the startup transient. The experimental results are in agreement with the theoretical simulation. Bus selection and team communication were modeled with discrete and continuous game-theory models: although players have multiple choices, the controller is capable of choosing the optimum bus, and the team communication structures are able to optimize the players' Nash equilibrium point. All mathematical models are based on local information about the load or source; as a result, these models are the keys to developing accurate distributed controllers.
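As a point of reference for the PI baseline the dissertation compares against, here is a minimal discrete-time PI controller sketch regulating a bus variable (for example, a per-unit voltage) during startup. The gains, setpoint, and toy plant are invented illustrative values, not the dissertation's experimental parameters.

```python
def pi_controller(setpoint, kp, ki, dt):
    """Discrete-time PI controller: returns a stateful step function."""
    integral = 0.0
    def step(measurement):
        nonlocal integral
        error = setpoint - measurement
        integral += error * dt
        return kp * error + ki * integral
    return step

# Toy first-order plant standing in for a bus voltage during startup.
v, dt = 0.0, 1e-3
control = pi_controller(setpoint=1.0, kp=2.0, ki=40.0, dt=dt)
for _ in range(5000):
    u = control(v)
    v += dt * (u - v)  # dv/dt = u - v : hypothetical plant dynamics
print(round(v, 3))  # settles near the 1.0 p.u. setpoint
```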