964 results for Distributed algorithm
Abstract:
The determination of the intersection curve between Bézier surfaces can be seen as the composition of two separate problems: determining initial points and tracing the intersection curve from those points. A Bézier surface is represented by a parametric function (a polynomial in two variables) that maps a point from the two-dimensional parametric space into three-dimensional space. This article proposes an algorithm to determine the initial points of the intersection curve of Bézier surfaces, based on solving polynomial systems with the Projected Polyhedral Method, followed by a method for tracing the intersection curves (a Marching Method with differential equations). To allow the use of the Projected Polyhedral Method, the equations of the system must be represented in the Bernstein basis; toward this goal, a robust and reliable algorithm is proposed to exactly transform a multivariable polynomial written in the power basis into one written in the Bernstein basis.
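The abstract does not reproduce the transform itself, but for the univariate case the exact power-to-Bernstein conversion is classical; a minimal sketch (function and example names are illustrative; exact rational arithmetic matches the claimed exactness, and the multivariate transform applies the same map per variable) is:

```python
from fractions import Fraction
from math import comb

def power_to_bernstein(a):
    """Exactly convert p(t) = sum a[i] * t**i (degree n = len(a)-1) to
    Bernstein coefficients b[j] with p(t) = sum b[j] * B_{j,n}(t) on [0,1],
    using the classical formula b_j = sum_{i<=j} (C(j,i)/C(n,i)) * a_i."""
    n = len(a) - 1
    return [sum(Fraction(comb(j, i), comb(n, i)) * Fraction(a[i])
                for i in range(j + 1))
            for j in range(n + 1)]

# Example: t^2 has Bernstein coefficients [0, 0, 1], since t^2 = B_{2,2}(t)
assert power_to_bernstein([0, 0, 1]) == [0, 0, 1]
```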
Abstract:
In this paper we present an algorithm for the numerical simulation of cavitation in the hydrodynamic lubrication of journal bearings. Although this physical process is usually modelled as a free boundary problem, we adopt the equivalent variational inequality formulation. We propose a two-level iterative algorithm: the outer iteration is associated with the penalty method, used to transform the variational inequality into a variational equation, and the inner iteration is associated with the conjugate gradient method, used to solve the linear system generated by applying the finite element method to the variational equation. The inner part was implemented using an element-by-element strategy, which is easily parallelized. We analyse the behaviour of two physical parameters and discuss some numerical results, and we also analyse results related to the performance of a parallel implementation of the algorithm.
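As a structural sketch only (the finite element assembly, penalty term and element-by-element details are those of the paper and are not reproduced here; `assemble_system` is a hypothetical placeholder), the two-level iteration could be organized as:

```python
import numpy as np
from scipy.sparse.linalg import cg

def two_level_solve(assemble_system, p0, eps0=1e-2, shrink=0.1,
                    tol=1e-8, max_outer=20):
    """Outer loop: drive the penalty parameter eps -> 0, turning the
    variational inequality into a sequence of variational equations.
    Inner loop: conjugate gradients on each resulting linear system."""
    p, eps = p0, eps0
    for _ in range(max_outer):
        A, b = assemble_system(p, eps)   # FEM matrix and rhs for this eps
        p_new, _ = cg(A, b, x0=p)        # inner CG iteration
        if np.linalg.norm(p_new - p) < tol:
            return p_new
        p, eps = p_new, eps * shrink     # tighten the penalty
    return p
```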
Abstract:
This work presents the application of a computer package for generating projection data for neutron computerized tomography and, in its second part, discusses an application of neutron tomography, using projection data obtained by the Monte Carlo technique, for the detection and localization of light materials, such as those containing hydrogen, concealed by heavy materials such as iron and lead. For the tomographic reconstructions of the simulated samples, only six equally spaced projection angles distributed between 0º and 180º were used, with reconstruction performed by an algorithm (ARIEM) based on the principle of maximum entropy. With neutron tomography it was possible to detect and locate polyethylene and water hidden behind 1 cm-thick lead and iron. This demonstrates that thermal neutron tomography is a viable test method that can provide important information about the interior of test components, making it extremely useful in routine industrial applications.
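ARIEM itself is not detailed in the abstract; as a hedged illustration only, a generic maximum-entropy-style multiplicative reconstruction (MART), of the family such algorithms belong to, applied to projections p = A x with a hypothetical system matrix A, looks like:

```python
import numpy as np

def mart(A, p, n_iter=50):
    """Multiplicative ART: an entropy-friendly reconstruction that keeps
    the image x positive and rescales it ray by ray to match projections p."""
    m, n = A.shape
    x = np.ones(n)                          # flat (maximum-entropy) start
    for _ in range(n_iter):
        for i in range(m):
            est = A[i] @ x                  # current estimate of ray i
            if est > 0:
                x *= (p[i] / est) ** A[i]   # elementwise multiplicative update
    return x
```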
Abstract:
Video transcoding refers to the process of converting a digital video from one format into another. It is a compute-intensive operation; therefore, transcoding a large number of simultaneous video streams requires a large amount of computing resources. Moreover, to handle different load conditions in a cost-efficient manner, the video transcoding service should be dynamically scalable. Infrastructure as a Service (IaaS) clouds currently offer computing resources, such as virtual machines, under the pay-per-use business model, so IaaS clouds can be leveraged to provide a cost-efficient, dynamically scalable video transcoding service. To use computing resources efficiently in a cloud computing environment, cost-efficient virtual machine provisioning is required to avoid over-utilization and under-utilization of virtual machines. This thesis presents proactive virtual machine resource allocation and de-allocation algorithms for video transcoding in cloud computing. Since users' requests for videos may change over time, a check is required to see whether the current computing resources are adequate for the video requests; therefore, work on admission control is also provided. In addition to admission control, temporal resolution reduction is used to avoid jitter in a video. Furthermore, in a cloud computing environment such as Amazon EC2, computing resources are more expensive than storage resources. Therefore, to avoid repeating transcoding operations, a transcoded video needs to be stored for a certain time. Storing all videos for the same amount of time is not cost-efficient either, because popular transcoded videos have a high access rate while unpopular ones are rarely accessed. This thesis provides a cost-efficient computation and storage trade-off strategy, which keeps videos in the video repository as long as it is cost-efficient to store them. The thesis also proposes video segmentation strategies for bit rate reduction and spatial resolution reduction video transcoding. The proposed strategies are evaluated using a message passing interface (MPI) based video transcoder, which uses a coarse-grained parallel processing approach in which video is segmented at the group-of-pictures level.
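The precise cost model is in the thesis; as a hypothetical sketch of the computation-storage trade-off idea (all parameter names and the cost structure are illustrative), a video would be kept in the repository while the expected cost of re-transcoding it on demand exceeds the cost of storing it:

```python
def keep_transcoded(access_rate, transcode_cost, storage_cost_per_hour,
                    horizon_hours=24):
    """Hypothetical trade-off rule: if the video were deleted, each access
    over the planning horizon would pay the transcoding cost again; keep it
    while that expected cost exceeds the cost of storage over the horizon."""
    expected_recompute = access_rate * horizon_hours * transcode_cost
    storage = storage_cost_per_hour * horizon_hours
    return expected_recompute > storage

# A popular video is kept; a rarely accessed one is evicted.
assert keep_transcoded(access_rate=2.0, transcode_cost=0.5, storage_cost_per_hour=0.01)
assert not keep_transcoded(access_rate=0.001, transcode_cost=0.5, storage_cost_per_hour=0.01)
```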
Abstract:
This bachelor's thesis, written for Lappeenranta University of Technology and implemented in a medium-sized enterprise (SME), examines a distributed document migration system. The system was created to migrate a large number of electronic documents, along with their metadata, from one document management system to another, so as to enable a rapid switchover of the enterprise resource planning system inside the company. The paper examines, through theoretical analysis, messaging as a possible enabler of distributed applications and how it naturally fits an event-based model, whereby system transitions and states are expressed through recorded behaviours. This is put into practice by analysing the implemented migration system and how its core components, MassTransit, RabbitMQ and MongoDB, were orchestrated to realize such a system. As a result, the paper presents an architecture for a scalable and distributed system that could migrate hundreds of thousands of documents over a weekend, serving its goal of enabling a rapid system switchover.
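MassTransit is a .NET messaging library; as a language-neutral illustration of the event-based approach (queue and message names are hypothetical, using Python's pika client for RabbitMQ):

```python
import json
import pika

# Hypothetical event recording that one document (with its metadata)
# has been exported from the source system and is ready for import.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="document.migrated", durable=True)

event = {"document_id": "123", "metadata": {"title": "Invoice 2014-001"}}
channel.basic_publish(exchange="",
                      routing_key="document.migrated",
                      body=json.dumps(event))
connection.close()
```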
Abstract:
The capabilities, and thus the design complexity, of VLSI-based embedded systems have increased tremendously in recent years, riding the wave of Moore's law. Time-to-market requirements are also shrinking, imposing challenges on designers, who in turn seek to adopt new design methods to increase their productivity. In answer to these pressures, modern-day systems have moved towards on-chip multiprocessing technologies, and new on-chip multiprocessing architectures have emerged to exploit the tremendous advances in fabrication technology. Platform-based design is a possible solution to these challenges. The principle behind the approach is to separate the functionality of an application from the organization and communication architecture of the hardware platform at several levels of abstraction. Existing design methodologies for the platform-based design approach do not provide full automation at every level of the design process, and the co-design of platform-based systems sometimes leads to sub-optimal systems. In addition, the design productivity gap in multiprocessor systems remains a key challenge under existing design methodologies. This thesis addresses the aforementioned challenges and discusses the creation of a development framework for platform-based system design in the context of the SegBus platform, a distributed communication architecture. The research aims to provide automated procedures for platform design and application mapping. Structural verification support is also featured, ensuring correct-by-design platforms. The solution is based on a model-based process: both the platform and the application are modeled using the Unified Modeling Language (UML). The thesis develops a Domain Specific Language to support platform modeling based on a corresponding UML profile, with Object Constraint Language constraints used to support structurally correct platform construction. An emulator is introduced to allow performance estimation of the solution, at high abstraction levels, that is as accurate as possible. VHDL code is automatically generated in the form of "snippets" to be employed in the arbiter modules of the platform, as required by the application. The resulting framework is applied in building an actual design solution for an MP3 stereo audio decoder application.
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
This thesis concerns methods for finding bounds on the asymptotic behaviour of the expected exit time from a region around a fixed point, for processes with normally distributed noise. Mainly, different types of autoregressive processes are treated. Four different methods are used: a method based on the large deviations principle and a method comparing the exit time with a return time give upper bounds on the expected exit time, while a martingale method and a method for normally distributed random variables give lower bounds. All the methods have both merits and drawbacks, and the best results are obtained by combining them. We derive the limit of the asymptotic behaviour of the exit time for the multivariate autoregressive process, as well as the corresponding limit for the univariate autoregressive process of order n.
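As a concrete companion to the bounds discussed above, the expected exit time can also be estimated empirically; a minimal Monte Carlo sketch for a univariate AR(1) process with Gaussian noise (parameter values are illustrative) is:

```python
import numpy as np

def expected_exit_time(a=0.6, sigma=0.1, r=1.0, n_runs=10_000, rng=None):
    """Monte Carlo estimate of E[tau], where tau is the first time |X_t|
    leaves (-r, r) for the AR(1) process X_{t+1} = a*X_t + sigma*eps_t,
    eps_t ~ N(0, 1), started at the fixed point X_0 = 0."""
    rng = np.random.default_rng() if rng is None else rng
    times = np.empty(n_runs)
    for i in range(n_runs):
        x, t = 0.0, 0
        while abs(x) < r:
            x = a * x + sigma * rng.standard_normal()
            t += 1
        times[i] = t
    return times.mean()
```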
Abstract:
Distributed storage systems are studied. Interest in such systems has become relatively wide due to the increasing amount of information that needs to be stored in data centers or various kinds of cloud systems, and there are many kinds of solutions for storing information on distributed devices, depending on the needs of the system designer. This thesis studies the design of such storage systems as well as their fundamental limits. Specifically, the subjects of interest include heterogeneous distributed storage systems, distributed storage systems with the exact repair property, and locally repairable codes. For distributed storage systems with either functional or exact repair, capacity results are proved; in the case of locally repairable codes, the minimum distance is studied. Constructions are given for exact-repairing codes between the minimum bandwidth regenerating (MBR) and minimum storage regenerating (MSR) points, and these codes exceed the time-sharing line between the extremal points in many cases. Other properties of exact-regenerating codes are also studied. For the heterogeneous setup, the main result is that the capacity of such systems is always at most the capacity of a homogeneous system with symmetric repair, with average node size and average repair bandwidth. A randomized construction of a locally repairable code with good minimum distance is given, and it is shown that a random linear code of a certain natural type has a good minimum distance with high probability. Other properties of locally repairable codes are also studied.
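For reference, the two extremal points named above are standard in the regenerating-codes literature (background, not a result of this thesis): for an (n, k, d) system storing a file of size M, with per-node storage α and total repair bandwidth γ,

\[
(\alpha_{\mathrm{MSR}}, \gamma_{\mathrm{MSR}}) = \Bigl(\frac{M}{k},\; \frac{Md}{k(d-k+1)}\Bigr),
\qquad
(\alpha_{\mathrm{MBR}}, \gamma_{\mathrm{MBR}}) = \Bigl(\frac{2Md}{k(2d-k+1)},\; \frac{2Md}{k(2d-k+1)}\Bigr),
\]

and the time-sharing line mentioned above is the straight segment between these two points in the (α, γ) plane.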
Abstract:
With the new age of the Internet of Things (IoT), everyday objects such as mobile smart devices are starting to be equipped with cheap sensors and low-energy wireless communication capability. Mobile smart devices (phones, tablets) have become ubiquitous, with nearly everyone having access to at least one. There is an opportunity to build innovative applications and services by exploiting these devices' untapped rechargeable energy, sensing and processing capabilities. In this thesis, we propose, develop, implement and evaluate LoadIoT, a peer-to-peer load-balancing scheme that can distribute tasks among a plethora of mobile smart devices in the IoT world. We develop and demonstrate an Android-based proof-of-concept load-balancing application, and we present a model of the system which is used to validate the efficiency of the load-balancing approach under varying application scenarios. Load-balancing concepts can be applied to smart-device-based IoT scenarios, reducing the traffic sent to the cloud and the energy consumption of the devices. The data acquired from the experimental outcomes enable us to determine the feasibility and cost-effectiveness of load-balanced P2P smartphone-based applications.
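The thesis's exact policy is not given in the abstract; a minimal sketch of the kind of peer-to-peer offloading decision involved (all names and the threshold are hypothetical) is:

```python
def choose_executor(local_load, peer_loads, threshold=0.8):
    """Hypothetical offload rule: keep the task locally unless the local
    load exceeds `threshold`; otherwise hand it to the least-loaded peer,
    provided that peer is actually less loaded than we are."""
    if local_load <= threshold or not peer_loads:
        return "local"
    peer, load = min(peer_loads.items(), key=lambda kv: kv[1])
    return peer if load < local_load else "local"

# Example: an overloaded phone delegates to the idlest neighbour.
assert choose_executor(0.95, {"tablet": 0.2, "phone2": 0.6}) == "tablet"
```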
Abstract:
Resilience is the property of a system to remain trustworthy despite changes. Changes of different natures, whether due to failures of system components or varying operational conditions, significantly increase the complexity of system development. Therefore, advanced development technologies are required to build robust and flexible system architectures capable of adapting to such changes, and powerful quantitative techniques are needed to assess the impact of these changes on various system characteristics. Architectural flexibility is achieved by embedding into the system design the mechanisms for identifying changes and reacting to them. Hence a resilient system should have both advanced monitoring and error detection capabilities to recognise changes, and sophisticated reconfiguration mechanisms to adapt to them. The aim of such reconfiguration is to ensure that the system stays operational, i.e., remains capable of achieving its goals. Design, verification and assessment of system reconfiguration mechanisms is a challenging and error-prone engineering task. In this thesis, we propose and validate a formal framework for the development and assessment of resilient systems. Such a framework provides the means to specify and verify complex component interactions, model their cooperative behaviour in achieving system goals, and analyse the chosen reconfiguration strategies. Due to the variety of properties to be analysed, the framework should have an integrated nature: to ensure functional correctness, it should rely on formal modelling and verification, while, to assess the impact of changes on properties such as performance and reliability, it should be combined with quantitative analysis. To ensure scalability of the proposed framework, we choose Event-B as the basis for reasoning about functional correctness. Event-B is a state-based formal approach that promotes the correct-by-construction development paradigm and formal verification by theorem proving, and it has mature, industrial-strength tool support in the Rodin platform. Proof-based verification, together with the reliance on abstraction and decomposition adopted in Event-B, provides designers with powerful support for the development of complex systems. Moreover, top-down system development by refinement allows developers to explicitly express and verify critical system-level properties. Besides ensuring functional correctness, achieving resilience also requires analysing a number of non-functional characteristics, such as reliability and performance. Therefore, this thesis also demonstrates how formal development in Event-B can be combined with quantitative analysis: namely, we experiment with integrating techniques such as probabilistic model checking in PRISM and discrete-event simulation in SimPy with formal development in Event-B. Such an integration allows us to assess how changes and different reconfiguration strategies affect overall system resilience. The approach proposed in this thesis is validated by a number of case studies from areas such as robotics, space, healthcare and the cloud domain.
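SimPy, named above for the quantitative side, is a Python discrete-event simulation library; a minimal, hypothetical sketch of the kind of failure-and-reconfiguration process such an assessment might simulate (the actual models are in the thesis) is:

```python
import random
import simpy

def component(env, name, mttf, repair_time, log):
    """A component alternates between operating and reconfiguring:
    it runs until a random failure, then recovers after `repair_time`."""
    while True:
        yield env.timeout(random.expovariate(1.0 / mttf))  # time to failure
        log.append((env.now, name, "failed"))
        yield env.timeout(repair_time)                      # reconfiguration
        log.append((env.now, name, "restored"))

log = []
env = simpy.Environment()
env.process(component(env, "sensor", mttf=100.0, repair_time=5.0, log=log))
env.run(until=1000)
print(sum(1 for _, _, e in log if e == "failed"), "failures in 1000 time units")
```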
Abstract:
Human beings have always strived to preserve their memories and spread their ideas. In the beginning this was done through human interpretation, such as telling stories and creating sculptures. Later, technological progress made it possible to create a recording of a phenomenon: first as an analogue recording onto a physical object, and later digitally, as a sequence of bits to be interpreted by a computer. By the end of the 20th century, technological advances had made it feasible to distribute media content over a computer network instead of on physical objects, enabling the concept of digital media distribution. Many digital media distribution systems already exist, and their continued, and in many cases increasing, usage indicates high interest in their future enhancement and enrichment. Looking at these systems, we have identified three main areas of possible improvement: network structure and coordination, transport of content over the network, and the encoding used for the content. In this thesis, our aim is to show that improvements in performance, efficiency and availability can be made in conjunction with improvements in software quality and reliability through the use of formal methods: mathematical approaches to reasoning about software so that we can prove its correctness, together with other desirable properties. We envision a complete media distribution system based on a distributed architecture, such as peer-to-peer networking, in which the different parts of the system have been formally modelled and verified. Starting with the network itself, we show how it can be formally constructed and modularised in the Event-B formalism, such that the modelling of one node is separated from the modelling of the network itself. We also show how the piece selection algorithm in the BitTorrent peer-to-peer transfer protocol can be adapted for on-demand media streaming, and how this can be modelled in Event-B. Furthermore, we show how modelling one peer in Event-B can give results similar to simulating an entire network of peers. Going further, we introduce a formal specification language for content transfer algorithms and show that having such a language can make these algorithms easier to understand; we also show how generating Event-B code from this language can result in less complexity compared to creating the models from written specifications. We also consider the decoding part of a media distribution system by showing how video decoding can be done in parallel, based on formally defined dependencies between frames and blocks in a video sequence; we have shown that this step, too, can be performed in a way that is mathematically proven correct. The modelling and proving in this thesis is, for the most part, tool-based. This demonstrates the maturity of formal methods as well as their increased reliability, and thus advocates their more widespread usage in the future.
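The BitTorrent adaptation mentioned above is specified formally in the thesis; as an informal, hypothetical sketch of the underlying idea, piece selection for streaming typically mixes in-order fetching near the playback position with rarest-first selection elsewhere:

```python
def select_piece(missing, availability, playback_pos, window=16):
    """Hypothetical streaming adaptation of BitTorrent piece selection:
    pieces inside the playback window are fetched in order (to meet their
    playback deadlines); outside the window, fall back to rarest-first."""
    urgent = [p for p in missing if playback_pos <= p < playback_pos + window]
    if urgent:
        return min(urgent)                      # in-order within the window
    return min(missing, key=lambda p: availability.get(p, 0))  # rarest-first

# Example: piece 5 is urgent, so it beats the globally rarest piece 90.
assert select_piece({5, 90}, {5: 10, 90: 1}, playback_pos=4) == 5
```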
Abstract:
Recent developments in power electronics technology have made it possible to develop competitive and reliable low-voltage DC (LVDC) distribution networks. Further, islanded microgrids, i.e., isolated small-scale localized distribution networks, have been proposed to reliably supply power using distributed generation. However, islanded operation faces many issues, such as power quality, voltage regulation, network stability, and protection. In this thesis, an energy management system (EMS) that ensures efficient energy and power balancing and voltage regulation is proposed for an LVDC island network utilizing solar panels for electricity production and lead-acid batteries for energy storage. The EMS uses the master/slave method with a robust communication infrastructure to control production, storage, and loads. The logical basis for the EMS operations is established by proposing functionalities for the network components and by defining appropriate operation modes that cover all situations. During loss-of-power-supply periods, load prioritization and disconnection are employed to maintain the power supply to at least some loads. The proposed EMS ensures optimal energy balance in the network. A sizing method based on discrete-event simulations is also proposed to obtain reliable capacities for the photovoltaic array and the battery. In addition, an algorithm has been developed to determine the number of hours of electric power supply that can be guaranteed to customers at any given location. The successful performance of all the proposed algorithms has been demonstrated by simulations.
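The operation modes themselves are defined in the thesis; a minimal, hypothetical sketch of the kind of mode logic involved (thresholds and mode names are illustrative) is:

```python
def ems_mode(soc, pv_power, load_power):
    """Hypothetical mode selection for an islanded LVDC network with PV
    production and battery storage:
    - surplus PV charges the battery (or is curtailed when it is full),
    - deficits are covered by the battery while its state of charge allows,
    - low state of charge triggers prioritized load disconnection."""
    net = pv_power - load_power
    if net >= 0:
        return "charge_battery" if soc < 1.0 else "curtail_pv"
    if soc > 0.2:            # assumed minimum reserve threshold
        return "discharge_battery"
    return "shed_low_priority_loads"

# Example: a sunny hour with light load charges the battery.
assert ems_mode(soc=0.5, pv_power=3.0, load_power=1.0) == "charge_battery"
```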