7 results for service level management

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

80.00%

Publisher:

Abstract:

Self-organising pervasive ecosystems of devices are set to become a major vehicle for delivering infrastructure and end-user services. The inherent complexity of such systems poses new challenges to those who want to master it by applying the principles of engineering. The recent growth in the number and distribution of devices with decent computational and communication abilities, which accelerated sharply with the massive diffusion of smartphones and tablets, is delivering a world with a much higher density of devices in space. Communication technologies, too, seem to be focussing on short-range, device-to-device (P2P) interactions, with technologies such as Bluetooth and Near-Field Communication gaining greater adoption. Locality and situatedness become key to providing the best possible experience to users, and the classic model of a centralised, enormously powerful server gathering and processing data becomes less and less efficient as device density grows. Accomplishing complex global tasks without a centralised controller responsible for aggregating data, however, is a challenging task. In particular, there is a local-to-global issue that makes the application of engineering principles difficult: designing device-local programs that, through interaction, guarantee a certain global service level. In this thesis, we first analyse the state of the art in coordination systems, then motivate the work by describing the main issues of pre-existing tools and practices and identifying the improvements that would benefit the design of such complex software ecosystems. The contribution can be divided into three main branches. First, we introduce a novel simulation toolchain for pervasive ecosystems, designed to allow good expressiveness while retaining high performance. Second, we leverage existing coordination models and patterns to create new spatial structures. Third, we introduce a novel language, based on the existing "Field Calculus" and integrated with the aforementioned toolchain, designed to be usable for practical aggregate programming.
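
The thesis's own language builds on Field Calculus; the sketch below, in plain Python, is only an illustration of the local-to-global idea it describes: every device runs the same device-local rule, and a global "gradient" field (distance to the nearest source) emerges from repeated neighbour interactions. The Device class and its interface are hypothetical, not the API of the thesis toolchain.

```python
# Minimal sketch of the local-to-global idea behind aggregate programming:
# each device runs the same local rule; the global distance-to-source
# field emerges from neighbour-to-neighbour exchanges alone.

INF = float("inf")

class Device:
    def __init__(self, is_source, neighbours=None):
        self.is_source = is_source
        self.neighbours = neighbours or []   # (device, distance) pairs
        self.gradient = INF                  # local estimate of distance to source

    def local_round(self):
        """One execution round of the device-local program."""
        if self.is_source:
            self.gradient = 0.0
        elif self.neighbours:
            self.gradient = min(n.gradient + d for n, d in self.neighbours)

def run(devices, rounds):
    for _ in range(rounds):
        for dev in devices:
            dev.local_round()

# Example: a line of four devices, one metre apart, source at one end.
a, b, c, d = Device(True), Device(False), Device(False), Device(False)
b.neighbours = [(a, 1.0), (c, 1.0)]
c.neighbours = [(b, 1.0), (d, 1.0)]
d.neighbours = [(c, 1.0)]
run([a, b, c, d], rounds=4)
print([x.gradient for x in (a, b, c, d)])  # [0.0, 1.0, 2.0, 3.0]
```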

Relevance:

80.00%

Publisher:

Abstract:

Smart Farming Technologies (SFT) is a term used to define the set of digital technologies able not only to control and manage the farm system, but also to connect it to the many disruptive digital applications deployed at multiple links along the value chain. The adoption of SFT has so far been limited, with significant differences across countries and among different types of farms and farmers. The objective of this thesis is to analyze which factors contribute to shaping the agricultural digital transition and to assess its potential impacts on the Italian agri-food system. Specifically, this overall research objective is approached from three different perspectives. Firstly, we carry out a review of the literature on the determinants of adoption of farm-level Management Information Systems (MIS), namely the most widely adopted smart farming solutions in Italy. Secondly, we run an empirical analysis of the factors currently shaping the adoption of SFT in Italy. In doing so, we focus on the multi-process and multi-faceted aspects of adoption, overcoming the one-off binary approach often used to study adoption decisions. Finally, we adopt a forward-looking perspective to investigate what the socio-ethical implications of a diffused use of SFT might be. On the one hand, our results indicate that bigger, more structured farms with higher levels of commercial integration along the agri-food supply chain are the most likely early adopters. On the other hand, they highlight the need for the institutional and organizational environment around farms to support farmers more effectively in the digital transition. Moreover, the roles of several other actors and actions are discussed and analyzed, highlighting the key role of specific agri-food stakeholders and ad-hoc policies, with the aim of proposing a clearer path towards an efficient, fair and inclusive digitalization of the agri-food sector.

Relevance:

80.00%

Publisher:

Abstract:

The continuous and swift progression of both wireless and wired communication technologies owes its success to the foundational systems established earlier: these systems serve as the building blocks that enable services to be enhanced to meet evolving requirements. Studying the vulnerabilities of previously designed systems and their current usage leads to new communication technologies that replace the old ones, such as GSM-R in the railway domain. Current industrial research has a specific focus on finding an appropriate telecommunication solution for railway communications to replace the GSM-R standard, which will be switched off in the coming years. Various standardization organizations are currently exploring and designing a radio-frequency-based standard solution to serve railway communications in the form of FRMCS (Future Railway Mobile Communication System), to substitute the current GSM-R. Against this background, the primary strategic objective of the research is to assess the feasibility of leveraging current public network technologies, such as LTE, to cater to mission- and safety-critical communication for low-density lines. The research aims to identify the constraints, define a service level agreement with telecom operators, and establish the implementations necessary to make the system as reliable as possible over an open, public network, while considering safety and cybersecurity aspects. The LTE infrastructure would be utilized to transmit the vital data of a railway communication system and to gather and transmit all field measurements to the control room for maintenance purposes. Given the significance of maintenance activities in the railway sector, the ongoing research includes the implementation of a machine learning algorithm to detect railway equipment faults, reducing analysis time and human error given the large volume of measurements from the field.
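
The abstract mentions a machine learning algorithm for fault detection but does not specify which one. As a hedged illustration only, the sketch below flags anomalous measurement vectors with an unsupervised IsolationForest from scikit-learn; the telemetry features and thresholds are invented for the example.

```python
# Hypothetical illustration: flag anomalous railway-equipment telemetry
# with an unsupervised anomaly detector (not necessarily the thesis method).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical telemetry: [supply_voltage_V, coil_current_A, response_time_ms]
normal = rng.normal([24.0, 1.5, 120.0], [0.5, 0.1, 10.0], size=(500, 3))
faulty = np.array([[19.0, 2.8, 310.0]])       # degraded equipment reading
readings = np.vstack([normal, faulty])

# Fit on measurements assumed healthy, then screen incoming readings.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
labels = detector.predict(readings)           # -1 = anomaly, 1 = normal
print("flagged indices:", np.where(labels == -1)[0])
```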

Relevance:

40.00%

Publisher:

Abstract:

Asset Management (AM) is a set of procedures, operable at the strategic, tactical and operational levels, for managing a physical asset's performance, associated risks and costs over its whole life cycle. AM combines the engineering, managerial and informatics points of view. In addition to internal drivers, AM is driven by the demands of customers (social pull) and regulators (environmental mandates and economic considerations). AM can follow either a top-down or a bottom-up approach. Considering rehabilitation planning at the bottom-up level, the main issue is to rehabilitate the right pipe at the right time with the right technique. Finding the right pipe may be possible and practicable, but determining the timeliness of the rehabilitation and choosing the technique to rehabilitate with is rather abstruse. It is a truism that rehabilitating an asset too early is unwise, just as doing it too late may entail extra expenses en route, in addition to the cost of the rehabilitation exercise per se. One is confronted with a typical Hamlet-esque dilemma: 'to repair or not to repair', or, put another way, 'to replace or not to replace'. The decision in this case is governed by three factors, not necessarily interrelated: quality of customer service, costs, and budget over the life cycle of the asset in question. The goal of replacement planning is to find the juncture in the asset's life cycle where the cost of replacement is balanced by the rising maintenance costs and the declining level of service. System maintenance aims at improving performance and keeping the asset in good working condition for as long as possible; effective planning targets maintenance activities to meet these goals and minimize costly exigencies. The main objective of this dissertation is to develop a process model for asset replacement planning. The aim of the model is to determine the optimal pipe replacement year by comparing, over time, the annual operating and maintenance costs of the existing asset with the annuity of the investment in a new equivalent pipe at the best market price. It is proposed that risk cost provides an appropriate framework for deciding the balance between investing to replace an asset and spending on operations to maintain it. The model describes a practical approach to estimating when an asset should be replaced. A comprehensive list of criteria to be considered is outlined, the main criterion being a vis-à-vis comparison of maintenance and replacement expenditures. The costs of maintaining the assets are described by a cost function related to the asset type, the risks to the safety of people and property owing to the declining condition of the asset, and the predicted frequency of failures. The cost functions reflect the condition of the existing asset at the time the decision to maintain or replace is taken: age, level of deterioration, risk of failure. The process model is applied to the wastewater network of Oslo, the capital city of Norway, and uses available real-world information to forecast the life-cycle costs of maintenance and rehabilitation strategies and to support infrastructure management decisions. The case study provides insight into the various definitions of 'asset lifetime': service life, economic life and physical life.
The results recommend that one common lifetime value should not be applied to all the pipelines in the stock for long-term investment planning; rather, it is wiser to define different values for different cohorts of pipelines, to reduce the uncertainties associated with generalisations made for simplification. It is envisaged that the more criteria the municipality is able to include to estimate maintenance costs for the existing assets, the more precise the estimate of the expected service life will be. The ability to include social costs makes it possible to compute the asset life not only from its physical characterisation but also from the sensitivity of network areas to the social impact of failures. This type of economic analysis is very sensitive to model parameters that are difficult to determine accurately. The main value of the approach is its effort to demonstrate that decision-making can include factors such as the cost of the risk associated with a decline in the level of performance, the extent of this deterioration, and the asset's depreciation rate, without treating age as the sole criterion for replacement decisions.
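
The core comparison the model describes can be made concrete in a few lines: replace the pipe in the first year where its annual operating, maintenance and risk cost exceeds the annuity of investing in a new equivalent pipe. The sketch below is a simplified numeric illustration; the cost function and all parameter values are invented, not the calibrated values of the Oslo case study.

```python
# Simplified sketch: optimal replacement year = first year where rising
# O&M + risk costs exceed the annuity of the replacement investment.

def annuity(investment, rate, lifetime_years):
    """Equal annual payment equivalent to the up-front investment."""
    f = (1 + rate) ** lifetime_years
    return investment * rate * f / (f - 1)

def annual_cost(age, base=150.0, growth=0.06, risk_cost=100.0):
    """Hypothetical O&M plus risk cost, rising with asset age/deterioration."""
    return base * (1 + growth) ** age + risk_cost

def optimal_replacement_year(investment=20_000.0, rate=0.04, lifetime=60):
    threshold = annuity(investment, rate, lifetime)
    for age in range(1, 120):
        if annual_cost(age) > threshold:
            return age, threshold
    return None, threshold

age, threshold = optimal_replacement_year()
print(f"replace at age {age}, once annual cost exceeds {threshold:.0f} per year")
```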

Relevance:

40.00%

Publisher:

Abstract:

The wide diffusion of cheap, small, portable sensors integrated into an unprecedentedly large variety of devices, together with the availability of almost ubiquitous Internet connectivity, makes it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if analyzed properly and in a timely manner, can be exploited to build new intelligent and pervasive services with the potential to improve people's quality of life in a variety of cross-cutting domains such as entertainment, health care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to provisioning differentiated quality of service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and by identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers and present Quasit, its prototype implementation, offering a scalable and extensible platform that researchers can use to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing, through a large experimental study on the prototype of our novel LAAR dynamic replication technique. Our modeling, prototyping, and experimental work demonstrates that, by providing data distribution and processing middleware with application-level knowledge of the different quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
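
The idea that weaker guarantees reduce cost can be illustrated concretely: if each flow declares how much loss it tolerates, the middleware can size fault-tolerance (here, replica counts) per flow instead of uniformly. The sketch below mirrors the spirit of partial fault tolerance explored with LAAR, but the classes, policy, and failure model are illustrative assumptions, not the Quasit/LAAR API.

```python
# Hedged sketch of QoS-centric cost/quality trade-offs in stream processing:
# flows with weaker guarantees get fewer replicas, hence lower cost.
from dataclasses import dataclass

@dataclass
class FlowQoS:
    name: str
    max_loss_rate: float      # fraction of tuples the application tolerates losing
    max_latency_ms: float

def replicas_needed(qos: FlowQoS, operator_failure_prob: float = 0.05) -> int:
    """Smallest replica count whose joint failure probability stays
    within the loss the application can tolerate (independent failures assumed)."""
    replicas, failure = 1, operator_failure_prob
    while failure > qos.max_loss_rate:
        replicas += 1
        failure *= operator_failure_prob
    return replicas

flows = [
    FlowQoS("health-alerts", max_loss_rate=0.0001, max_latency_ms=50),
    FlowQoS("energy-meter",  max_loss_rate=0.02,   max_latency_ms=500),
    FlowQoS("entertainment", max_loss_rate=0.10,   max_latency_ms=200),
]
for f in flows:
    print(f.name, "->", replicas_needed(f), "replica(s)")
# Weaker guarantees (higher tolerable loss) need fewer replicas: lower cost.
```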

Relevance:

40.00%

Publisher:

Abstract:

There are various methods to analyse waste, differing in the level of detail at which the composition is described. Waste composed of plastic and used for packaging, for example, can be classified by the chemical composition of the polymer used for the specific product. At a more basic level, before dividing waste according to the specific chemical material of which it is composed, it is possible, and also important, to classify it by material category. While a secondary aim may be to identify the particular polymer constituting a plastic waste, or the kind of natural polymer composing a specific wooden waste, the first aim is to classify the material category of the product that makes up the waste: wood, plastic, glass, metal, or organic. There are no specific instruments or chemical tests for this subdivision, only manual recognition of the material that makes up the product or waste. The first step of this study is the recognition of the materials of which the waste is composed; the second is the quantification of the separately collected and unsorted waste produced in the area under study; the third is a mass balance of the portions of waste sent for recovery, to obtain information on the quantities that can be effectively recovered and made ready for a new life cycle as raw material; the fourth and last step is an environmental assessment providing information on the environmental cost of the recovery process. This process scheme is applied to various specific kinds of separately collected waste generated in a specific area, with the aim of finding an analysis model applicable to other portions of territory in order to improve knowledge of recovery technologies.
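
The mass-balance step (step three) is essentially an accounting exercise: for each separately collected fraction, compare the mass sent to recovery with the mass actually recovered, yielding an effective recovery rate. The sketch below shows this computation; all fraction names match the abstract, but the tonnages are invented for illustration.

```python
# Hedged sketch of the mass balance over separately collected fractions.
fractions = {
    # fraction: (tonnes sent to recovery, tonnes effectively recovered)
    "plastic": (1200.0, 780.0),
    "wood":    (450.0,  400.0),
    "glass":   (900.0,  855.0),
    "metal":   (300.0,  282.0),
    "organic": (2100.0, 1680.0),
}

total_in = total_out = 0.0
for name, (collected, recovered) in fractions.items():
    total_in += collected
    total_out += recovered
    losses = collected - recovered          # rejects and process losses
    print(f"{name:8s} recovery rate {recovered / collected:6.1%}  losses {losses:7.1f} t")

print(f"overall  recovery rate {total_out / total_in:6.1%}")
```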

Relevance:

40.00%

Publisher:

Abstract:

The pervasive availability of connected devices in any industrial and societal sector is pushing for an evolution of the well-established cloud computing model. The emerging paradigm of the cloud continuum embraces this decentralization trend and envisions virtualized computing resources physically located between traditional datacenters and data sources. By totally or partially executing closer to the network edge, applications can react more quickly to events, thus enabling advanced forms of automation and intelligence. However, these applications also induce new data-intensive workloads with low-latency constraints that require the adoption of specialized resources, such as high-performance communication options (e.g., RDMA, DPDK, XDP). Unfortunately, cloud providers still struggle to integrate these options into their infrastructures. That risks undermining the principle of generality that underlies the cloud computing scale economy, by forcing developers to tailor their code to low-level APIs, non-standard programming models, and static execution environments. This thesis proposes a novel system architecture to empower cloud platforms across the whole cloud continuum with Network Acceleration as a Service (NAaaS). To provide commodity yet efficient access to acceleration, this architecture defines a layer of agnostic high-performance I/O APIs, exposed to applications and clearly separated from the heterogeneous protocols, interfaces, and hardware devices that implement it. A novel system component embodies this decoupling by offering a set of agnostic OS features to applications: memory management for zero-copy transfers, asynchronous I/O processing, and efficient packet scheduling. This thesis also explores the design space of the possible implementations of this architecture by proposing two reference middleware systems and by adopting them to support interactive use cases in the cloud continuum: a serverless platform and an Industry 4.0 scenario. A detailed discussion and a thorough performance evaluation demonstrate that the proposed architecture is suitable to enable the easy-to-use, flexible integration of modern network acceleration into next-generation cloud platforms.
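
The decoupling the architecture argues for can be sketched as an interface boundary: applications program against one agnostic asynchronous I/O API, while pluggable backends hide whether plain kernel sockets or an accelerated path (RDMA, DPDK, XDP) implements it. The class and method names below are illustrative assumptions, not the thesis middleware's actual API.

```python
# Hedged sketch of an agnostic I/O layer with swappable backends.
import asyncio
from abc import ABC, abstractmethod

class Transport(ABC):
    """Agnostic high-performance I/O API exposed to applications."""
    @abstractmethod
    async def send(self, buf: bytes) -> None: ...
    @abstractmethod
    async def recv(self) -> bytes: ...

class SocketTransport(Transport):
    """Commodity backend over ordinary kernel sockets (asyncio streams)."""
    def __init__(self, reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
        self._reader, self._writer = reader, writer
    async def send(self, buf: bytes) -> None:
        self._writer.write(buf)
        await self._writer.drain()
    async def recv(self) -> bytes:
        return await self._reader.read(4096)

# An accelerated backend (e.g., wrapping an RDMA verbs library) would
# subclass Transport the same way; the application code never changes.
async def application(t: Transport) -> bytes:
    await t.send(b"request")
    return await t.recv()
```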