6 results for Operational Process Management
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The advent of distributed and heterogeneous systems has laid the foundation for new architectural paradigms, in which many separate and autonomous entities collaborate and interact with the aim of achieving complex strategic goals that none of them could accomplish on its own. A non-exhaustive list of systems targeted by such paradigms includes Business Process Management, Clinical Guidelines and Careflow Protocols, Service-Oriented and Multi-Agent Systems. It is widely recognized that engineering these systems requires novel modeling techniques. In particular, many authors claim that an open, declarative perspective is needed to complement the closed, procedural nature of state-of-the-art specification languages. For example, the ConDec language has recently been proposed to target the declarative and open specification of Business Processes, overcoming the over-specification and over-constraining issues of classical procedural approaches. On the one hand, the success of such novel modeling languages strongly depends on their usability by non-IT-savvy users: they must provide an appealing, intuitive graphical front-end. On the other hand, they must be amenable to verification, in order to guarantee the trustworthiness and reliability of the developed model, as well as to ensure that the actual executions of the system effectively comply with it. In this dissertation, we claim that Computational Logic is a suitable framework for dealing with the specification, verification, execution, monitoring and analysis of these systems. We propose to adopt an extended version of the ConDec language for specifying interaction models with a declarative, open flavor. We show how all the (extended) ConDec constructs can be automatically translated to the CLIMB Computational Logic-based language, and illustrate how its corresponding reasoning techniques can be successfully exploited to provide support and verification capabilities along the whole life cycle of the targeted systems.
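As a rough illustration of the declarative flavor of ConDec-style constraints (a minimal sketch, not the thesis's CLIMB encoding, which is logic-based), the following Python code checks a "response(a, b)" constraint, requiring that every execution of activity a be eventually followed by an execution of activity b, against a finite execution trace; the activity names and traces are invented for illustration.

```python
# Minimal sketch (not the CLIMB encoding): checking a ConDec-style
# "response(a, b)" constraint over a finite trace of activity names.

def response_holds(trace, a, b):
    """Return True if every occurrence of activity a is followed by a later b."""
    for i, activity in enumerate(trace):
        if activity == a and b not in trace[i + 1:]:
            return False
    return True

if __name__ == "__main__":
    # Hypothetical execution traces.
    print(response_holds(["order", "pay", "ship"], "pay", "ship"))  # True
    print(response_holds(["order", "pay"], "pay", "ship"))          # False
```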
Abstract:
Over the past 15 years the Italian brewing scene has shown interesting changes, especially with regard to the creation of many breweries with an annual production of less than 10,000 hectoliters. The beers produced by microbreweries are very susceptible to attack by spoilage micro-organisms that cause the deterioration of the beer's quality characteristics. In addition, most microbreweries do not apply stabilizing heat treatments and do not carry out quality checks on the product. The high presence of beer spoilage bacteria is an economic problem for the brewing industry because it can damage the brand and causes high product recall costs. This thesis project aimed to study the management of the production process in Italian microbreweries with an annual production of less than 10,000 hl. In particular, the annual production, type of plant, yeast management, process management, and cleaning and sanitizing practices of a representative sample of microbreweries were investigated. Furthermore, a collection of samples was carried out in order to identify, with simple methods, which spoilage bacteria are most frequently present in Italian craft beers. 21% of the beers analysed were positive for the presence of lactic acid bacteria. These analytical data show the importance of understanding which weak points of the production process allow the development of spoilage bacteria. Finally, the thesis examined the actual production of two microbreweries in order to understand the process management practices that can promote the growth of spoilage bacteria in the beer and in the production plant. The analysis of the data for the two case studies was helpful in understanding the critical points where microorganisms most frequently come into contact with the product. Hygiene practices are crucial to ensure the quality of the finished product, especially in the case of non-pasteurized beer.
Abstract:
This research studies the early Byzantine building site, with particular reference to the marble-working cycle, which is analyzed from the administrative, technical, social and craft perspectives. The guiding element of the research is the masons' marks: signs applied by officials and craftsmen during the production process. First, literary and epigraphic sources, including quarry and workshop marks on marble, are examined to reconstruct the early-imperial system of quarry administration and of the management of marble flows, as well as the technical and craft procedures adopted for the production of artifacts. A comparison with the data available for late antiquity, with particular reference to the Proconnesus quarries, highlights a substantial continuity in bureaucratic-administrative practice, whereas some changes can be observed in the productive and craft sphere. The functioning of the marble workshops is examined in depth through the study of the masons' marks, which consist of single or multiple Greek characters, or monograms. A systematic survey of the marks from the pars Orientalis of the empire, gathered from the literature or from first-hand inspections, led to the collection of about 2,360 attestations. For these, a typological classification is proposed among quarry, storage and workshop marks. Quarry marks include control marks, destination/commissioning marks, and assembly/positioning marks. Particular attention is devoted to workshop marks, referable to a personal name, namely that of the πρωτομαΐστωρ, the workshop master who supervised the work of his craftsmen and acted as guarantor of the product delivered to the patron. Through the comparative study of the marks found in Constantinople and in other contexts, the operational practice adopted by the workshops in their manufacturing processes is brought to light, also addressing the problem of itinerant craftsmen. Finally, written sources of various kinds are analyzed in order to place the marble phenomenon in a broader socio-economic context, with particular reference to the professional and craft figures involved in the building sites and to the question of patronage.
Abstract:
Asset Management (AM) is a set of procedures operable at the strategic, tactical and operational levels for the management of a physical asset's performance, associated risks and costs within its whole life cycle. AM combines the engineering, managerial and informatics points of view. In addition to internal drivers, AM is driven by the demands of customers (social pull) and regulators (environmental mandates and economic considerations). AM can follow either a top-down or a bottom-up approach. Considering rehabilitation planning at the bottom-up level, the main issue is to rehabilitate the right pipe at the right time with the right technique. Finding the right pipe may be possible and practicable, but determining the timeliness of the rehabilitation and the choice of the techniques adopted to rehabilitate is rather abstruse. It is a truism that rehabilitating an asset too early is unwise, just as doing it too late may entail extra expenses en route, in addition to the cost of the rehabilitation exercise per se. One is confronted with a typical 'Hamlet-esque dilemma': 'to repair or not to repair'; or, put another way, 'to replace or not to replace'. The decision in this case is governed by three factors, not necessarily interrelated: quality of customer service, costs, and budget over the life cycle of the asset in question. The goal of replacement planning is to find the juncture in the asset's life cycle where the cost of replacement is balanced by the rising maintenance costs and the declining level of service. System maintenance aims at improving performance and maintaining the asset in good working condition for as long as possible. Effective planning is used to target maintenance activities to meet these goals and minimize costly exigencies. The main objective of this dissertation is to develop a process model for asset replacement planning. The aim of the model is to determine the optimal pipe replacement year by comparing, over time, the annual operating and maintenance costs of the existing asset and the annuity of the investment in a new equivalent pipe at the best market price. It is proposed that risk cost provides an appropriate framework to decide the balance between the investment for replacing an asset and the operational expenditures for maintaining it. The model describes a practical approach to estimate when an asset should be replaced. A comprehensive list of criteria to be considered is outlined, the main criterion being a vis-à-vis comparison between maintenance and replacement expenditures. The costs to maintain the assets should be described by a cost function related to the asset type, the risks to the safety of people and property owing to the declining condition of the asset, and the predicted frequency of failures. The cost functions reflect the condition of the existing asset at the time the decision to maintain or replace is taken: age, level of deterioration, risk of failure. The process model is applied to the wastewater network of Oslo, the capital city of Norway, and uses available real-world information to forecast life-cycle costs of maintenance and rehabilitation strategies and to support infrastructure management decisions. The case study provides an insight into the various definitions of 'asset lifetime': service life, economic life and physical life.
The results recommend that one common lifetime value should not be applied to all the pipelines in the stock for long-term investment planning; rather, it would be wiser to define different values for different cohorts of pipelines, to reduce the uncertainties associated with generalisations made for simplification. It is envisaged that the more criteria the municipality is able to include to estimate maintenance costs for the existing assets, the more precise the estimation of the expected service life will be. The ability to include social costs makes it possible to compute the asset life not only on the basis of its physical characterisation, but also on the sensitivity of network areas to the social impact of failures. This type of economic analysis is very sensitive to model parameters that are difficult to determine accurately. The main value of this approach is the effort to demonstrate that it is possible to include, in decision-making, factors such as the cost of the risk associated with a decline in the level of performance, the level of this deterioration and the asset's depreciation rate, without looking at age as the sole criterion for making replacement decisions.
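As a rough illustration of the replacement-timing idea described above (a minimal sketch with hypothetical cost figures, not the thesis's calibrated model), the following Python code compares the rising annual operating, maintenance and risk costs of an existing pipe with the annuity of investing in a new equivalent pipe, and reports the first year in which replacement becomes the cheaper option.

```python
# Minimal sketch (hypothetical cost functions and parameters): replace the pipe
# in the first year where the annual cost of keeping the existing asset exceeds
# the annuity of investing in a new equivalent pipe.

def annuity(investment, rate, lifetime_years):
    """Equivalent annual cost of an investment (capital recovery factor)."""
    return investment * rate / (1 - (1 + rate) ** -lifetime_years)

def optimal_replacement_year(annual_cost, investment, rate, lifetime_years, horizon=100):
    """First year in which keeping the old pipe costs more than the new pipe's annuity."""
    new_pipe_annuity = annuity(investment, rate, lifetime_years)
    for year in range(1, horizon + 1):
        if annual_cost(year) > new_pipe_annuity:
            return year
    return None  # replacement not justified within the planning horizon

if __name__ == "__main__":
    # Hypothetical example: maintenance plus risk cost rising with pipe age (EUR/year).
    cost = lambda age: 500 + 40 * age
    print(optimal_replacement_year(cost, investment=20000, rate=0.04, lifetime_years=80))
```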
Abstract:
In this research project, I have integrated two research streams on strategic decision making in international firms: upper echelons or top management team (TMT) internationalization research and international strategic decision making process research. Both research streams in the international business literature have evolved independently, but there is potential in combining them. The first empirical paper, "TMT internationalization and international strategic decision making process: a decision level analysis of rationality, speed, and performance", explores the influence of TMT internationalization on strategic decision rationality and speed and, subsequently, their effect on international strategic decision effectiveness (performance). The results show that the internationalization of the TMT is positively related to decision effectiveness and that this relationship is mediated by decision rationality, while the hypotheses regarding the association between TMT internationalization and decision speed, and the mediating effect of speed, were not supported. The second paper, "TMT internationalization and international strategic decision rationality: the mediating role of international information", is a simple but logical extension of the first. The first paper showed that TMT internationalization has a significant positive effect on international strategic decision rationality; the second paper explicitly showed that this effect comes from two sources: international experience (personal international knowledge and information) and international information collected from managerial international contacts. For this research project, I collected data from international software firms in Pakistan. My research contributes to the literature on upper echelons theory and strategic decision making in the context of international business and international firms by explicitly examining the link between TMT internationalization and the characteristics of the strategic decision making process (i.e. rationality and speed) in international firms, and their possible mediating effect on performance.
Abstract:
Although the debate on what data science is has a long history and has not yet reached complete consensus, data science can be summarized as the process of learning from data. Guided by this vision, this thesis presents two independent data science projects developed in the scope of multidisciplinary applied research. The first part analyzes fluorescence microscopy images typically produced in life science experiments, where the objective is to count how many marked neuronal cells are present in each image. Aiming to automate the task to support research in the area, we propose a neural network architecture tuned specifically for this use case, cell ResUnet (c-ResUnet), and discuss the impact of alternative training strategies in overcoming particular challenges of our data. The approach provides good results in terms of both detection and counting, showing performance comparable to the interpretation of human operators. As a meaningful addition, we release the pre-trained model and the Fluorescent Neuronal Cells dataset, which collects pixel-level annotations of where neuronal cells are located. In this way, we hope to help future research in the area and foster innovative methodologies for tackling similar problems. The second part deals with the problem of distributed data management in the context of LHC experiments, with a focus on supporting ATLAS operations concerning data transfer failures. In particular, we analyze error messages produced by failed transfers and propose a Machine Learning pipeline that leverages the word2vec language model and K-means clustering. This provides groups of similar errors that are presented to human operators as suggestions of potential issues to investigate. The approach is demonstrated on one full day of data, showing promising ability in understanding the message content and providing meaningful groupings, in line with incidents previously reported by human operators.
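To make the second pipeline concrete, the following Python sketch groups a few invented transfer error messages using gensim's word2vec and scikit-learn's K-means; averaging word vectors into a message embedding and the chosen parameters are simplifying assumptions for illustration, not necessarily the exact pipeline used in the thesis.

```python
# Minimal sketch: word2vec embeddings + K-means grouping of error messages
# (hypothetical messages; averaging word vectors is a simplifying assumption).
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

messages = [
    "connection timed out while contacting storage endpoint",
    "checksum mismatch detected after transfer",
    "connection refused by storage endpoint",
    "checksum verification failed for destination file",
]
tokenized = [m.split() for m in messages]

# Learn word embeddings from the error-message corpus.
w2v = Word2Vec(sentences=tokenized, vector_size=32, window=3, min_count=1, seed=0)

# Represent each message as the mean of its word vectors.
embeddings = np.array([np.mean([w2v.wv[t] for t in tokens], axis=0) for tokens in tokenized])

# Group similar messages so operators can inspect one representative per cluster.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
for msg, label in zip(messages, labels):
    print(label, msg)
```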