929 results for Hardware and software
Abstract:
Providing support for multimedia applications on low-power mobile devices remains a significant research challenge. This is primarily due to two reasons:
• Portable mobile devices have modest sizes and weights, and therefore inadequate resources: low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes as compared to desktop and laptop systems.
• On the other hand, multimedia applications tend to have distinctive QoS and processing requirements which make them extremely resource-demanding.
This innate conflict introduces key research challenges in the design of multimedia applications and device-level power optimization. Energy efficiency on this kind of platform can be achieved only through a synergistic hardware and software approach. In fact, while Systems-on-Chip are increasingly programmable, thus providing functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood in both research and industry that system configuration and management cannot be controlled efficiently by relying only on low-level firmware and hardware drivers. At this level there is a lack of information about user application activity and, consequently, about the impact of power management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services.
The main objective of this PhD research has been the exploration and integration of middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system. A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs and run-time engines) to improve the programmability and performance efficiency of such platforms.
Enhancing energy efficiency and programmability of modern Multi-Processor System-on-Chips (MPSoCs)
Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time and, even more important, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor System-on-Chips (MPSoCs) are increasingly popular platforms for high-performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform.
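To make the allocation and scheduling problem concrete, the following is a minimal, illustrative sketch rather than the complete-search approach developed in this dissertation: a greedy list scheduler that assigns precedence-constrained tasks to the processor on which they can finish earliest. The task names, execution times and the two-processor platform are hypothetical.

```python
# Illustrative greedy list scheduler for precedence-constrained tasks on a
# small multiprocessor platform (hypothetical data). Being a heuristic, it
# gives no guarantee of optimality, which is exactly the optimality gap
# discussed above.

tasks = {            # task -> execution time
    "src": 2, "fir": 4, "fft": 6, "mix": 3, "sink": 2,
}
deps = {             # task -> set of predecessors
    "src": set(), "fir": {"src"}, "fft": {"src"},
    "mix": {"fir", "fft"}, "sink": {"mix"},
}
num_procs = 2

finish = {}                      # task -> finish time
proc_free = [0.0] * num_procs    # next free instant of each processor
mapping = {}                     # task -> processor
scheduled = set()

while len(scheduled) < len(tasks):
    # pick a ready task (all predecessors already scheduled)
    ready = [t for t in tasks if t not in scheduled and deps[t] <= scheduled]
    task = min(ready, key=lambda t: tasks[t])            # shortest-task-first
    ready_time = max((finish[p] for p in deps[task]), default=0.0)
    # place it on the processor that lets it finish earliest
    proc = min(range(num_procs), key=lambda p: max(proc_free[p], ready_time))
    start = max(proc_free[proc], ready_time)
    finish[task] = start + tasks[task]
    proc_free[proc] = finish[task]
    mapping[task] = proc
    scheduled.add(task)

print("makespan:", max(finish.values()), "mapping:", mapping)
```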
This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications to multiprocessor architectures. It is a well-known problem in the literature: this kind of optimization problem is very complex even in much simplified variants, therefore most authors propose simplified models and heuristic approaches to solve it in reasonable time. Model simplification is often achieved by abstracting away platform implementation "details". As a result, optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristics or, more generally, with incomplete search is that they introduce an optimality gap of unknown size: they provide very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and the optimality gap, formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms.
Energy Efficient LCD Backlight Autoregulation on a Real-Life Multimedia Application Processor
Despite the ever increasing advances in Liquid Crystal Display (LCD) technology, their power consumption is still one of the major limitations to the battery life of mobile appliances such as smart phones, portable media players, gaming and navigation devices. There is a clear trend towards the increase of LCD size to exploit the multimedia capabilities of portable devices that can receive and render high definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and pixel matrix driving circuits and is typically proportional to the panel area. As a result, its contribution is also likely to be considerable in future mobile appliances. To address this issue, companies are proposing low-power technologies suitable for mobile applications, supporting low power states and image control techniques. On the research side, several power saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques to change the image content to reduce the power associated with the crystal polarization; others are aimed at decreasing the backlight level while compensating the resulting luminance reduction, and thus the user-perceived quality degradation, using pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation shows an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement hardware-assisted image compensation that allows dynamic scaling of the backlight with a negligible impact on QoS.
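As a rough illustration of the backlight-scaling idea (a software sketch only; the dissertation's approach offloads this compensation to the application processor's hardware image processing unit), the backlight can be dimmed by a factor b while pixel luminance is boosted by 1/b, clipping pixels that would saturate:

```python
import numpy as np

def compensate(frame, b):
    """Dim the backlight to a fraction b (0 < b <= 1) and boost pixel
    values by 1/b so perceived luminance is roughly preserved; pixels
    that would exceed full scale are clipped (this is the QoS loss)."""
    boosted = frame.astype(np.float32) / b
    return np.clip(boosted, 0, 255).astype(np.uint8)

# Hypothetical example: an 8-bit grayscale frame dimmed to 70% backlight.
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
out = compensate(frame, b=0.7)
clipped = np.mean(frame > 0.7 * 255)     # fraction of pixels that saturate
print(f"clipped pixels: {clipped:.1%}")  # backlight power scales roughly with b
```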
The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modifications.
Thesis Overview
The remainder of the thesis is organized as follows. The first part is focused on enhancing the energy efficiency and programmability of modern Multi-Processor System-on-Chips (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs. The methodology is based on functional simulation and full-system power estimation. Chapter 4 targets allocation and scheduling of pipelined stream-oriented applications on top of distributed-memory architectures with messaging support. We tackled the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework to solve the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while in Chapter 6 applications with conditional task graphs are taken into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers achieve efficient software implementations on a real architecture, the Cell Broadband Engine processor. The second part is focused on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 reviews several energy-efficient software techniques present in the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions discussed throughout this dissertation.
Abstract:
Several activities were conducted during my PhD. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned the feasibility study of an acquisition technique quite different from the one adopted in the NEMO Phase 1 telescope. The DAQ board that we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic outputs of a PMT (Photo-Multiplier Tube). The low-power analog acquisition allows multiple channels of the PMT to be sampled simultaneously at different gain factors in order to increase the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve the acquisition performance and the knowledge of the neutrino event. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has been integrated on the board as well, and specific firmware has been realized to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After the validation of the whole front-end architecture, this feature would probably be integrated in a common mixed-signal ASIC (Application Specific Integrated Circuit). The volatile nature of the configuration memory of the FPGA implied the integration of a flash ISP (In-System Programming) memory and a smart architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory the behavior of the LIRA chip was investigated in the digital environment of the DAQ board, and we succeeded in driving the acquisition with the FPGA. PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and subsequently digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to a loan from Roma University and INFN, a full readout chain equivalent to that present in NEMO Phase 1 was installed. These tests showed a good behavior of the digital electronics, which were able to receive and execute commands issued from the PC console and to answer back with a reply. The remotely configurable logic also behaved well and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board is going to be deployed within the NEMO Phase-2 tower, in one of its floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), thus introducing a new analog front-end but inheriting most of the digital logic present in the current DAQ board discussed in this thesis. Regarding the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. This chip is a matrix of 4096 active pixel sensors with deep N-well implantations meant for charge collection and for shielding the analog electronics from digital noise.
The chip integrates the full-custom sensor matrix and the sparsification/readout logic, realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) line facility at CERN in Geneva (CH). The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which allowed about 90 million events to be stored in 7 equivalent days of live time of the beam. My activities basically concerned the realization of a firmware interface towards and from the MAPS chip, in order to integrate it into the general DAQ system. Thereafter I worked on the DAQ software to implement a proper Slow Control interface for the APSEL4D. Several APSEL4D chips with different thinnings were tested during the test beam. Those thinned to 100 and 300 um presented an overall efficiency of about 90% when applying a threshold of 450 electrons. The test beam also allowed the resolution of the pixel sensor to be estimated, providing good results consistent with the pitch/sqrt(12) formula. The MAPS intrinsic resolution was extracted from the width of the residual plot, taking into account the multiple scattering effect.
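For reference, the resolution figures quoted above follow the usual binary-readout relation: a pixel of pitch p read out without charge interpolation has an expected resolution of about p/sqrt(12), and the measured residual width is corrected for the telescope and multiple-scattering contributions in quadrature. The numbers below are placeholders, not the test-beam values:

```python
import math

pitch_um = 50.0                          # hypothetical pixel pitch
sigma_binary = pitch_um / math.sqrt(12)  # expected binary resolution, ~14.4 um here

# Intrinsic resolution extracted from the residual width by subtracting, in
# quadrature, the telescope pointing error and the multiple-scattering term.
sigma_residual = 18.0    # placeholder: measured residual width (um)
sigma_telescope = 6.0    # placeholder: telescope extrapolation error (um)
sigma_ms = 5.0           # placeholder: multiple-scattering contribution (um)
sigma_intrinsic = math.sqrt(sigma_residual**2 - sigma_telescope**2 - sigma_ms**2)

print(f"pitch/sqrt(12) = {sigma_binary:.1f} um, intrinsic = {sigma_intrinsic:.1f} um")
```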
Abstract:
Modern internal combustion engines are becoming increasingly complex. The introduction of the EURO VI emission regulations will require a significant reduction of the pollutants at the exhaust. The most critical point is the reduction of NOx for Diesel engines, to be added to the limits already in force under the previous regulations. Typically, the calibration of a new engine involves a series of specific tests on the test bench. The ever increasing number of combustion control parameters, which arose as a consequence of the greater mechanical complexity of the engine itself, causes an exponential growth of the tests needed to characterize the whole system. The objective of this PhD project is to realize a real-time combustion analysis system implementing several algorithms not yet present in modern engine control units, paying particular attention to the choice of the hardware on which the analysis algorithms are implemented. The result is a Rapid Control Prototyping (RCP) platform that exploits most of the sensors already present in a production vehicle and that is able to shorten the time and cost of powertrain experimentation, reducing the need for a-posteriori analysis of previously acquired data in favour of a larger amount of computation performed in real time. The proposed solution guarantees upgradability, that is, the possibility of keeping the computing platform at the highest technological level, delaying its obsolescence and the related replacement costs. This property translates into the need to maintain compatibility between hardware and software of different generations, making it possible to replace the components that limit performance without redesigning the software.
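One example of the kind of per-cycle index that a real-time combustion analysis system computes from the in-cylinder pressure signal is the indicated mean effective pressure (IMEP). This is a generic worked example, not necessarily among the algorithms implemented in this project, and the cycle data below are synthetic:

```python
import numpy as np

def imep(pressure_pa, volume_m3, displacement_m3):
    """Indicated mean effective pressure of one engine cycle:
    IMEP = (cyclic integral of p dV) / displaced volume."""
    work_j = np.trapz(pressure_pa, volume_m3)   # integral of p dV over the cycle
    return work_j / displacement_m3

# Synthetic cycle for illustration only (not real engine data):
# a toy volume law over 720 degrees and a pressure rise just after TDC.
theta = np.linspace(0, 4 * np.pi, 720)
volume = 3e-4 + 2.5e-4 * (1 - np.cos(theta)) / 2
pressure = 1e5 + 4e6 * np.exp(-((theta - (2 * np.pi + 0.3)) / 0.5) ** 2)

print(f"IMEP = {imep(pressure, volume, 2.5e-4) / 1e5:.2f} bar")
```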
Abstract:
In the last few years, the vision of our connected and intelligent information society has evolved to embrace novel technological and research trends. The diffusion of ubiquitous mobile connectivity and advanced handheld portable devices has amplified the importance of the Internet as the communication backbone for the fruition of services and data. The diffusion of mobile and pervasive computing devices, featuring advanced sensing technologies and processing capabilities, has triggered the adoption of innovative interaction paradigms: touch-responsive surfaces, tangible interfaces and gesture or voice recognition are finally entering our homes and workplaces. We are experiencing the proliferation of smart objects and sensor networks, embedded in our daily living and interconnected through the Internet. This ubiquitous network of always available interconnected devices is enabling new applications and services, ranging from enhancements to home and office environments to remote healthcare assistance and the birth of the smart environment. This work presents some evolutions in the hardware and software development of embedded systems and sensor networks. Different hardware solutions are introduced, ranging from smart objects for interaction to advanced inertial sensor nodes for motion tracking, focusing on system-level design. They are accompanied by the study of innovative data processing algorithms developed and optimized to run on board the embedded devices. Gesture recognition, orientation estimation and data reconstruction techniques for sensor networks are introduced and implemented, with the goal of maximizing the tradeoff between performance and energy efficiency. Experimental results provide an evaluation of the accuracy of the presented methods and validate the efficiency of the proposed embedded systems.
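As an example of the class of lightweight on-board algorithms mentioned above, orientation can be tracked with a complementary filter that blends the integrated gyroscope rate with the accelerometer's gravity reference. This is a generic sketch under assumed axis conventions and sample format, not necessarily the estimator developed in this work:

```python
import math

def complementary_pitch(samples, dt, alpha=0.98, pitch0=0.0):
    """Fuse gyro rate (rad/s) and accelerometer (m/s^2) readings into a pitch
    estimate. alpha close to 1 trusts the gyro short-term and the accelerometer
    long-term, keeping the computation cheap enough for a small sensor node."""
    pitch = pitch0
    for gyro_y, (ax, ay, az) in samples:
        pitch_acc = math.atan2(-ax, math.sqrt(ay * ay + az * az))  # gravity direction
        pitch = alpha * (pitch + gyro_y * dt) + (1 - alpha) * pitch_acc
    return pitch

# Hypothetical usage: one second of 100 Hz samples of (gyro_y, (ax, ay, az)).
samples = [(0.01, (0.0, 0.0, 9.81))] * 100
print(math.degrees(complementary_pitch(samples, dt=0.01)))
```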
Abstract:
The last half-century has seen continuing population and consumption growth, increasing the competition for land, water and energy. A solution can be found in the new sustainability theories, such as industrial symbiosis and the zero-waste objective. Reducing, reusing and recycling are challenges that the whole world has to consider. This is especially important for organic waste, whose reuse gives interesting results in terms of energy release. Before reuse, organic waste needs a deeper characterization. The non-destructive and non-invasive features of both Nuclear Magnetic Resonance (NMR) relaxometry and imaging (MRI) make them optimal candidates for such characterization. In this research, NMR techniques proved to be innovative technologies, but an important amount of work on the hardware and software of the NMR LAGIRN laboratory was initially done, creating new experimental procedures to analyse organic waste samples. The first results came from soil-organic matter interactions. Remediated soil properties were described as a function of the organic carbon content, proving the importance of limiting the addition of further organic matter so as not to inhibit soil processes such as nutrient transport. Moreover, the NMR relaxation times and the signal amplitude of a compost sample over time showed that the organic matter degradation of compost is a complex process that involves a number of degradation kinetics, depending on the mix of waste. Local degradation processes were studied with an enhanced quantitative relaxation technique that combines NMR and MRI. The development of this research finally led to the study of waste before it becomes waste. Since a lot of food is lost when it is still edible, new NMR experiments studied the efficiency of conservation and valorisation processes: apple dehydration, meat preservation and bio-oil production. All these results proved the readiness of NMR for quality controls on a wide range of organic residues and waste.
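As an illustration of how such multi-component relaxation behaviour can be quantified, the following is a generic bi-exponential fit of a transverse relaxation decay, assuming synthetic data rather than the laboratory's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, t2_fast, a2, t2_slow):
    """Two-component transverse relaxation decay: each proton pool
    contributes its own amplitude and T2 time."""
    return a1 * np.exp(-t / t2_fast) + a2 * np.exp(-t / t2_slow)

# Synthetic CPMG-like decay with two T2 components (in ms) plus noise.
t = np.linspace(0.5, 400, 200)
signal = biexp(t, 0.7, 15.0, 0.3, 180.0) + np.random.normal(0, 0.005, t.size)

# Fit the two amplitudes and relaxation times from the noisy decay.
popt, _ = curve_fit(biexp, t, signal, p0=(0.5, 10.0, 0.5, 100.0))
print("fitted amplitudes and T2 values:", popt)
```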
Abstract:
In the last decade, the mechanical characterization of bone segments has been seen as a fundamental key to understanding how physiological loads are distributed over the bone in everyday life, and the resulting structural deformations. Such characterization allows the main load directions to be obtained and, consequently, the arrangement of the bone's structural lamellae to be observed, in order to recreate a prosthesis using artificial materials that behave like the natural ones. This thesis presents a modular system for the mechanical characterization of bone segments in vitro, with particular attention to vertebrae, the current object of study and research in the laboratory where I did my thesis work. The system is able to acquire and process all the appropriately conditioned signals of interest for the test, through a dedicated hardware and software architecture, with high speed and high reliability. The aim of my thesis is to create a system that can be used as a versatile tool for experimentation and innovation in future mechanical characterization tests of biological components, allowing a quantitative and qualitative assessment of the deformation under analysis, regardless of the anatomical region of interest.
Abstract:
The 5th generation of mobile networking introduces the concept of “network slicing”: the network will be “sliced” horizontally, and each slice will be compliant with different requirements in terms of network parameters such as bandwidth and latency. This technology is built on logical instead of physical resources and relies on the virtual network as the main concept for retrieving a logical resource. Network Function Virtualisation (NFV) provides the concept of a logical resource for a virtual network function, enabling the concept of a virtual network; it relies on Software Defined Networking (SDN) as the main technology to realize the virtual network as a resource, and it also defines the concept of a virtual network infrastructure with all the components needed to meet the network slicing requirements. SDN itself uses cloud computing technology to realize the virtual network infrastructure, and NFV also uses virtual computing resources to enable the deployment of virtual network functions instead of having custom hardware and software for each network function. The key to network slicing is the differentiation of slices in terms of Quality of Service (QoS) parameters, which relies on the possibility of enabling QoS management in a cloud computing environment. QoS in cloud computing denotes the level of performance, reliability and availability offered. QoS is fundamental for cloud users, who expect providers to deliver the advertised quality characteristics, and for cloud providers, who need to find the right tradeoff between the QoS levels they can offer and operational costs. While QoS properties received constant attention before the advent of cloud computing, the performance heterogeneity and resource isolation mechanisms of cloud platforms have significantly complicated QoS analysis, deployment, prediction and assurance. This is prompting several researchers to investigate automated QoS management methods that can leverage the high programmability of hardware and software resources in the cloud.
Abstract:
Major progress has recently been made in the neuro-imaging of stroke as a result of improvements in imaging hardware and software. Imaging may be based on either magnetic resonance imaging (MRI) or computed tomography (CT) techniques. Imaging should provide information on the entire vascular cervical and intracranial network, from the aortic arch to the circle of Willis. Equally, it should also give information on the viability of brain tissue and brain hemodynamics. CT has the advantage in the detection of acute hemorrhage whereas MRI offers more accurate pathophysiological information in the follow-up of patients.
Abstract:
The use of information technology (IT) in dentistry is far ranging. In order to produce a working document for the dental educator, this paper focuses on those methods where IT can assist in the education and competence development of dental students and dentists (e.g. e-learning, distance learning, simulations and computer-based assessment). Web pages and other information-gathering devices have become an essential part of our daily life, as they provide extensive information on all aspects of our society. This is mirrored in dental education, where there are many different tools available, as listed in this report. IT offers added value to traditional teaching methods and examples are provided. In spite of the continuing debate on the learning effectiveness of e-learning applications, students request such approaches as an adjunct to the traditional delivery of learning materials. Faculty require support to enable them to use the technology effectively to the benefit of their students. This support should be provided by the institution, and it is suggested that, where possible, institutions should appoint an e-learning champion with good interpersonal skills to support and encourage faculty change. From a global perspective, all students and faculty should have access to e-learning tools. This report encourages open access to e-learning material, platforms and programs. Such learning materials must have well-defined learning objectives and undergo peer review to ensure content validity, accuracy, currency, the use of evidence-based data and the use of best practices. To ensure that the developers' intellectual rights are protected, the original content needs to be secured against unauthorized changes. Strategies and recommendations on how to improve the quality of e-learning are outlined. In the area of assessment, traditional examination schemes can be enriched by IT, whilst the Internet can provide many innovative approaches. Future trends in IT will evolve around improved uptake and access facilitated by the technology (hardware and software). The use of Web 2.0 shows considerable promise and this may have implications on a global level. For example, the one-laptop-per-child project is the best example of what Web 2.0 can do: minimal use of hardware to maximize use of the Internet structure. In essence, simple technology can overcome many of the barriers to learning. IT will always remain exciting, as it is always changing, and its users, whether dental students, educators or patients, are like chameleons adapting to the ever-changing landscape.
Abstract:
As logistics processes become more complex and varied, the importance of the information technologies employed increases. The information accompanying, or preceding, the flow of goods is required to identify goods and to use corporate resources optimally. The classical goods receiving process is a case in point: by announcing the quantity and type of incoming goods in advance, the personnel for unloading and receiving as well as the required resources (load carriers, industrial trucks, etc.) can be planned and provided beforehand. The flow of information is therefore to be understood as a quality feature and as an economic efficiency factor. The interface between the physical flow of goods and the IT-based flow of information is formed by identification technologies. Identification technologies widely used in industry generally consist of a data carrier and a reading device. The data carrier is fixed to the physical object. The reading device reads the object information stored on the data carrier and converts it into a binary code, which is then processed by downstream IT systems. The identification technology currently most frequently used in industry and retail is the barcode. In recent years, RFID technology has come into the focus of industry and retail in the area of material flow and logistics. "Radio Frequency IDentification" refers to communication via radio waves between a data carrier (transponder) and a reader. Using RFID technology, the user is able, in contrast to the barcode, to capture the information on the transponder without line of sight. Alignment of the individual items is not required. In addition, certain transponder types can store far larger amounts of data than a barcode. Transponders with high storage capacity are generally suited to having the data stored on them updated when required, so a decentralized data organization becomes feasible. A further advantage of RFID technology is the possibility of reading several data carriers within a fraction of a second, so-called bulk reading. This property is of particular interest in the goods receiving and dispatch areas. With RFID it is possible to convey load units, e.g. pallets of goods, through an antenna area, to identify the transponder-tagged items and to transfer them to the IT system. Besides the functionality of such a technology, industry is primarily concerned with its economic efficiency. Today, transponders are more expensive than barcodes. In addition, investments in the hardware and software required to operate RFID have to be taken into account. The use of RFID technology must therefore yield savings through the reorganization of business processes. A current weak point of RFID technology, depending on the application, is the insufficient reliability and repeatability of bulk readings. Industry and retail need identification technologies whose detection rate is close to 100%. The risk is that an unreliable RFID system can generate incomplete or erroneous data records, and correcting these data can be more expensive than the savings achieved by reorganizing the processes with RFID.
The detection rate of transponders in bulk reading is influenced by several factors, which are presented in detail in the following. The Institut für Fördertechnik und Logistik (IFT) in Stuttgart investigates possible factors influencing the detection rates of bulk readings. The findings are intended to eliminate possible weak points in the detection of multiple transponders before an implementation in a company's logistics processes.
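The requirement of detection rates close to 100% can be illustrated with a simple calculation (illustrative numbers only, not measured rates): if each tag is read independently with probability p, the probability of capturing a complete pallet of N tags in one bulk read is p^N, which collapses quickly even for per-tag rates that sound high.

```python
# Probability of reading every tag on a pallet in one bulk read, assuming
# an independent per-tag read probability p (illustrative model only).
for p in (0.99, 0.995, 0.999):
    for n_tags in (20, 50, 100):
        print(f"p={p}, tags={n_tags}: complete read in {p ** n_tags:.1%} of attempts")
```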
Abstract:
We present a high-performance yet low-cost system for multi-view rendering in virtual reality (VR) applications. In contrast to complex CAVE installations, which are typically driven by one render client per view, we arrange eight displays in an octagon around the viewer to provide a full 360° projection, and we drive these eight displays with a single PC equipped with multiple graphics processing units (GPUs). In this paper we describe the hardware and software setup, as well as the necessary low-level and high-level optimizations to optimally exploit the parallelism of this multi-GPU multi-view VR system.
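A minimal sketch of the view setup implied by such an octagonal arrangement (illustrative geometry only, not the paper's rendering code): each of the eight displays covers a 45° slice, so each view direction is the viewer's forward vector rotated by a multiple of 45°, and each view is assigned to one of the GPUs. The two-GPU assignment and the y-up coordinate frame are assumptions.

```python
import math

NUM_VIEWS = 8                        # eight displays arranged in an octagon
FOV_DEG = 360.0 / NUM_VIEWS          # each display covers a 45-degree slice
NUM_GPUS = 2                         # assumed GPU count for the round-robin assignment

views = []
for i in range(NUM_VIEWS):
    yaw = math.radians(i * FOV_DEG)                   # rotation of this view around the viewer
    forward = (math.sin(yaw), 0.0, -math.cos(yaw))    # view direction in a y-up frame
    views.append({"display": i, "gpu": i % NUM_GPUS,
                  "yaw_deg": i * FOV_DEG, "forward": forward})

for v in views:
    print(v)
```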
Abstract:
Cloud computing enables the provisioning and distribution of highly scalable services in a reliable, on-demand and sustainable manner. However, the objectives of managing enterprise distributed applications in cloud environments under Service Level Agreement (SLA) constraints lead to challenges in maintaining optimal resource control. Furthermore, conflicting objectives in the management of cloud infrastructure and distributed applications may lead to violations of SLAs and to inefficient use of hardware and software resources. This dissertation focuses on how SLAs can be used as an input to the cloud management system, increasing the efficiency of resource allocation as well as that of infrastructure scaling. First, we present an extended SLA semantic model for modelling complex service dependencies in distributed applications and for enabling automated cloud infrastructure management operations. Second, we describe a multi-objective VM allocation algorithm for optimised resource allocation in infrastructure clouds. Third, we describe a method for discovering relations between the performance indicators of services belonging to distributed applications, and for using these relations to build scaling rules that a CMS can use for automated management of VMs. Fourth, we introduce two novel VM-scaling algorithms, which optimally scale systems composed of VMs based on given SLA performance constraints. All the presented research was implemented and tested using enterprise distributed applications.
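A minimal sketch of the kind of SLA-driven scaling rule discussed here (a generic threshold rule under an assumed latency objective, not one of the dissertation's two algorithms):

```python
def scaling_decision(p95_latency_ms, sla_latency_ms, current_vms,
                     min_vms=1, max_vms=20, headroom=0.7):
    """Return the new VM count: scale out when the 95th-percentile latency
    exceeds the SLA limit, scale in when there is ample headroom."""
    if p95_latency_ms > sla_latency_ms:              # SLA violated: add capacity
        return min(current_vms + 1, max_vms)
    if p95_latency_ms < headroom * sla_latency_ms:   # comfortably below the SLA
        return max(current_vms - 1, min_vms)
    return current_vms                               # within the target band

# Hypothetical usage: latency breaches a 200 ms SLA, so one VM is added.
print(scaling_decision(p95_latency_ms=240, sla_latency_ms=200, current_vms=4))
```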
Abstract:
Commoditization and virtualization of wireless networks are changing the economics of mobile networks, helping network providers (e.g., MNO, MVNO) move from proprietary and bespoke hardware and software platforms toward an open, cost-effective, and flexible cellular ecosystem. In addition, rich and innovative local services can be efficiently created through cloudification by leveraging the existing infrastructure. In this work, we present RANaaS, a cloudified radio access network delivered as a service. RANaaS provides the service life-cycle of an on-demand, elastic, and pay-as-you-go 3GPP RAN instantiated on top of a cloud infrastructure. We demonstrate an example of real-time cloudified LTE network deployment using the OpenAirInterface LTE implementation and OpenStack running on commodity hardware, as well as the flexibility and performance of the platform developed.
Abstract:
Membrane systems are computationally equivalent to Turing machines. However, their distributed and massively parallel nature yields polynomial solutions as opposed to the traditional non-polynomial ones. At this point, it is very important to develop dedicated hardware and software implementations exploiting these two features of membrane systems. When dealing with distributed implementations of P systems, the communication bottleneck problem arises: when the number of membranes grows, the network gets congested. The purpose of distributed architectures is to reach a compromise between the massively parallel character of the system and the evolution-step time needed to transit from one configuration of the system to the next, solving the communication bottleneck problem. The goal of this paper is twofold. Firstly, to survey in a systematic and uniform way the main results regarding how membranes can be placed on processors in order to obtain a software/hardware simulation of P systems in a distributed environment. Secondly, we improve some results about the membrane dissolution problem, prove that it is connected, and discuss the possibility of simulating this property in the distributed model. All this yields an improvement in the implementation of the system's parallelism, since it increases the parallelism of the external communication among processors. The proposed ideas improve previous architectures that tackle the communication bottleneck problem, reducing the total time of an evolution step, increasing the number of membranes that can run on a processor and reducing the number of processors.
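The compromise mentioned above between parallelism and communication can be made concrete with a simple cost model (an illustrative model with hypothetical constants, not the exact analysis of the surveyed architectures): spreading M membranes over P processors shortens the local evolution work per step but adds external communication that grows with P, so an intermediate P minimizes the step time.

```python
import math

def step_time(num_procs, num_membranes, t_evolve=1.0, t_comm=4.0):
    """Illustrative evolution-step time: local rule application shrinks as
    membranes are spread over more processors, while external communication
    among processors grows with their number."""
    local = math.ceil(num_membranes / num_procs) * t_evolve
    external = (num_procs - 1) * t_comm
    return local + external

M = 256
best = min(range(1, M + 1), key=lambda p: step_time(p, M))
print(f"best processor count: {best}, step time: {step_time(best, M):.1f}")
```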
Abstract:
Embedded systems are commonly designed by specifying and developing the hardware and software systems separately. On the contrary, hardware/software (HW/SW) co-development exploits the trade-offs between hardware and software in a system through their concurrent design. HW/SW co-development techniques take advantage of the flexibility of system design to create architectures that can meet stringent performance requirements with a shorter design cycle. This paper presents the work done within the scope of the ESA HWSWCO (Hardware-Software Co-design) study. The main objective of this study has been to address the HW/SW co-design phase in order to integrate this engineering task as part of the ASSERT process (refer to [1]), compatible with the existing ASSERT approach, process and tools. Advances in the automation of the design of HW and SW and the adoption of the Model Driven Architecture (MDA) [9] paradigm make possible the definition of a proper integration substrate and enable the continuous interaction of the HW and SW design paths.