13 results for regularly entered default judgment set aside without costs

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 100.00%

Abstract:

This survey of the works composed by Filippo Tommaso Marinetti between 1909 and 1912 is underpinned by a paradoxical thesis: Marinetti's Futurism is not an expression of modernity but rather an anti-modern reaction which, behind a superficial and enthusiastic adherence to some of the watchwords of the second industrial revolution, conceals a fundamental pessimism towards man and history. In this sense Futurism becomes an emblem of Italian cultural backwardness and gattopardismo, and anticipates the analogous operation carried out in politics by Mussolini: behind a formal adherence to certain demands of modernity, the preservation of the status quo. Marinetti is described as a foreign body with respect to the scientific culture of the twentieth century: a Futurist without a future (science-fiction projections are extremely rare in Marinetti). This aspect is particularly evident in the works produced in the three-year period 1908-1911, which are not only very different from the later Futurist works but in some respects represent a true antithesis of what literary Futurism would become from 1912 onwards, with the publication of the Manifesto tecnico della letteratura futurista and the invention of the parole in libertà (words-in-freedom). In the earlier works, a substantial indifference to technological progressivism was matched by an obsessive attention to corporeality and a continual recourse to allegory, with particularly grotesque effects (above all in the novel Mafarka le futuriste), in which traces of a worldview with a still medieval-Renaissance flavour can be detected. This regressive component of Marinetti's Futurism was blatantly abandoned from 1912 onwards, with Zang Tumb Tumb, only to resurface cyclically, like an underground current, in other phases of his career: in 1922, for example, with the publication of Gli indomabili (another allegorical work, rich in literary reminiscences). That of 1912 is a genuine rupture, which the first chapter investigates both from a historical point of view (through epistolary and journalistic documentation, the tensions that led most of the Futurist poets to abandon the movement in that very year are brought to light) and from a linguistic point of view: the substantial differences between the words-in-freedom production and the earlier work are underlined, and a psychological explanation of the abrupt turn Marinetti imposed on his movement is also ventured. The second chapter proposes a formal and thematic analysis of the 'grotesque function' in Marinetti's works. In the third chapter, a comparative analysis of the incarnations of the machines portrayed in Marinetti's works reveals that in this author the machine is almost always associated with the thought of death and with a masochistic drive (the latter dominant in Gli indomabili); this leads to the hypothesis that the Futurist experience, and in particular the post-1912 words-in-freedom Futurism, is the reworking of a trauma. This trauma can be read metaphorically as the shock of the young Marinetti, catapulted within a few years from the sands of Alexandria in Egypt to the industrial mists of Milan, but also as a real traumatic experience (the car accident of 1908, 'mythologized' in the first manifesto, but actually experienced by the author as a genuinely disturbing event).

Relevance: 100.00%

Abstract:

The present research, L'architettura religiosa di Luis Moya Blanco. La costruzione come principio compositivo, deals with the themes concerning the construction of spaces for Christian worship that the Spanish architect designed and built in Madrid from 1945 to 1970. The thesis investigates which principles underlying the architectural composition can be considered unchanged over the long time span in which the author worked. Starting from a first analysis of his formative years and of the writings he produced, this investigation focuses in particular on the study of his most recent projects, which are still little discussed by critics. The objective of the thesis is therefore to make an original contribution on the compositional aspect of his architecture. But in Moya, analysing composition means analysing construction, which, despite the succession of languages, remains the main aspect of his works. The study of the buildings through categories drawn from his own writings - mathematics, number, geometry and regulating lines - makes it possible to highlight points of contact and continuity between the first churches, strongly characterized by a Baroque layout, and the last projects, which instead seem to seek a confrontation with decidedly modern forms. These reflections, contextualized in parallel within his substantial essayistic production, converge in the final idea that construction becomes for Luis Moya Blanco the compositional principle that cannot be disregarded, the rule that embodies number and geometry in matter. If construction is therefore the petrification of the geometric-mathematical laws underlying the planimetric schemes, the recourse to centrally planned space does not respond to the intention of improving the liturgy, but to questions of a philosophical-idealist nature, which make the supreme perfection of the circular form, or of one of its derivatives such as the ellipse, correspond to the supreme naturalness of divine perfection.

Relevance: 30.00%

Abstract:

Since the first underground nuclear explosion, carried out in 1958, the analysis of the seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying its compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters represents the first step in discriminating whether a given seismic event is natural or not. In case a specific event is deemed suspicious by the majority of the State Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicentre of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test. In fact, high-quality seismological systems are thought to be capable of detecting and locating the very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test.

This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double-difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by a high relative accuracy, although the absolute location of the whole cluster remains uncertain; we eliminate this problem by introducing a priori information, namely the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques we have used cross-correlation among digital waveforms in order to minimize the errors linked to incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale.

After preliminary tests on the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. At first, the algorithm was applied to the differences among the original arrival times of the P phases, so the cross-correlation was not used. We found that the significant geometrical spreading noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, assumed as our reference) was considerably reduced by the application of our technique. This is what we expected, since the methodology was applied to a sequence of events for which we can assume a real closeness among the hypocenters, which belong to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed or at least reduced. The introduction of the cross-correlation did not bring evident improvements to our results: the two sets of locations (without and with the application of the cross-correlation technique) are very similar to each other. This suggests that the cross-correlation did not substantially improve the precision of the manual picks; probably the picks reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. As a further explanation for the limited contribution of the cross-correlation, it should be noted that the events included in our data set do not generally have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area.

At the local scale, in addition to the cross-correlation, we performed a signal interpolation in order to improve the time resolution. The algorithm so developed was applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results point out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in bad SNR conditions). Another remarkable feature of our procedure is that it does not require a long processing time, so the user can immediately check the results. During a field survey, this feature makes a quasi-real-time check possible, allowing the immediate optimization of the array geometry, if so suggested by the results at an early stage.
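Both location techniques rest on measuring differential arrival times from the similarity of digital waveforms. As a minimal illustration of that building block (not the thesis's actual processing chain, whose windowing, filtering and sub-sample interpolation choices are not reproduced here), the sketch below estimates the relative delay between two waveform windows from the peak of their approximately normalized cross-correlation; the synthetic signals and all parameter values are hypothetical.

```python
import numpy as np

def cc_delay(w1, w2, dt):
    """Estimate how much later a phase arrives in window w2 than in window w1
    by locating the peak of their (approximately normalized) cross-correlation.
    w1, w2: 1-D waveform windows sampled at interval dt [s].
    Returns (delay of w2 relative to w1 in seconds, peak correlation value).
    A real workflow would add sub-sample interpolation of the peak."""
    a = (w2 - w2.mean()) / w2.std()
    b = (w1 - w1.mean()) / (w1.std() * len(w1))
    cc = np.correlate(a, b, mode="full")      # cc[k] ~ sum_n a[n+k] * b[n]
    lags = np.arange(-(len(b) - 1), len(a))   # lag in samples
    k = int(np.argmax(cc))
    return lags[k] * dt, float(cc[k])

if __name__ == "__main__":
    # Synthetic test: w2 is w1 delayed by 0.12 s, both with added noise.
    dt = 0.01                                  # 100 Hz sampling
    t = np.arange(0, 4, dt)
    sig = np.exp(-((t - 1.0) / 0.1) ** 2) * np.sin(2 * np.pi * 5 * t)
    rng = np.random.default_rng(0)
    w1 = sig + 0.05 * rng.standard_normal(t.size)
    w2 = np.roll(sig, 12) + 0.05 * rng.standard_normal(t.size)
    delay, peak = cc_delay(w1, w2, dt)
    print(f"estimated delay: {delay:.3f} s, peak correlation: {peak:.2f}")
```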

Relevance: 30.00%

Abstract:

Asset Management (AM) is a set of procedures, operable at the strategic, tactical and operational levels, for managing a physical asset's performance, associated risks and costs over its whole life cycle. AM combines the engineering, managerial and informatics points of view. In addition to internal drivers, AM is driven by the demands of customers (social pull) and regulators (environmental mandates and economic considerations). AM can follow either a top-down or a bottom-up approach. Considering rehabilitation planning at the bottom-up level, the main issue is to rehabilitate the right pipe, at the right time, with the right technique. Finding the right pipe may be possible and practicable, but determining the timeliness of the rehabilitation and choosing the technique to adopt is less straightforward. It is a truism that rehabilitating an asset too early is unwise, just as doing it too late may entail extra expenses en route, in addition to the cost of the rehabilitation itself. One is confronted with a typical Hamlet-esque dilemma, 'to repair or not to repair', or, put another way, 'to replace or not to replace'. The decision is governed by three factors, not necessarily interrelated: quality of customer service, costs, and budget over the life cycle of the asset in question. The goal of replacement planning is to find the juncture in the asset's life cycle where the cost of replacement is balanced by the rising maintenance costs and the declining level of service. System maintenance aims at improving performance and keeping the asset in good working condition for as long as possible; effective planning is used to target maintenance activities to meet these goals and minimize costly exigencies.

The main objective of this dissertation is to develop a process model for asset replacement planning. The aim of the model is to determine the optimal pipe replacement year by comparing, over time, the annual operating and maintenance costs of the existing asset with the annuity of the investment in a new equivalent pipe at the best market price. It is proposed that risk cost provides an appropriate framework for deciding the balance between the investment needed to replace an asset and the operational expenditure needed to maintain it. The model describes a practical approach to estimating when an asset should be replaced. A comprehensive list of criteria to be considered is outlined, the main one being a vis-à-vis comparison between maintenance and replacement expenditures. The costs of maintaining the assets are described by a cost function related to the asset type, the risks to the safety of people and property owing to the declining condition of the asset, and the predicted frequency of failures. The cost functions reflect the condition of the existing asset at the time the decision to maintain or replace is taken: age, level of deterioration, risk of failure. The process model is applied to the wastewater network of Oslo, the capital city of Norway, and uses available real-world information to forecast the life-cycle costs of maintenance and rehabilitation strategies and to support infrastructure management decisions. The case study provides an insight into the various definitions of 'asset lifetime': service life, economic life and physical life.

The results recommend that one common lifetime value should not be applied to all the pipelines in the stock for long-term investment planning; rather, it would be wiser to define different values for different cohorts of pipelines, to reduce the uncertainties associated with generalisations made for the sake of simplification. It is envisaged that the more criteria the municipality is able to include when estimating maintenance costs for the existing assets, the more precise the estimate of the expected service life will be. The ability to include social costs makes it possible to compute the asset life not only on the basis of its physical characterisation, but also on the sensitivity of network areas to the social impact of failures. This type of economic analysis is very sensitive to model parameters that are difficult to determine accurately. The main value of this approach is the effort to demonstrate that it is possible to include, in decision-making, factors such as the cost of the risk associated with a decline in the level of performance, the extent of this deterioration and the asset's depreciation rate, without looking at age as the sole criterion for making replacement decisions.
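A compact numerical sketch of the comparison described above follows: the forecast annual maintenance-plus-risk cost of the existing pipe is compared, year by year, with the equivalent annual cost (annuity) of investing in a new pipe, and the first year in which keeping the old pipe becomes more expensive is taken as the replacement year. The cost function, discount rate, investment and lifetime figures are invented for illustration; the thesis derives such inputs from real Oslo network data.

```python
import numpy as np

def annuity(investment, rate, lifetime_years):
    """Equivalent annual cost of an investment at discount `rate`
    over `lifetime_years` (standard capital-recovery factor)."""
    crf = rate * (1 + rate) ** lifetime_years / ((1 + rate) ** lifetime_years - 1)
    return investment * crf

def optimal_replacement_year(age_now, horizon, maint_cost, new_pipe_annuity):
    """Return the first year (offset from now) in which the forecast annual
    operating/maintenance + risk cost of the existing pipe exceeds the annuity
    of a new equivalent pipe; None if this never happens within the horizon.
    `maint_cost(age)` is a user-supplied cost function of pipe age."""
    for t in range(horizon + 1):
        if maint_cost(age_now + t) > new_pipe_annuity:
            return t
    return None

if __name__ == "__main__":
    # Hypothetical numbers, for illustration only.
    invest = 200_000.0   # cost of the replacement pipe
    rate = 0.04          # discount rate
    life = 80            # assumed service life of the new pipe [years]
    ann = annuity(invest, rate, life)
    # Assumed cost function: base maintenance plus a failure-risk cost
    # growing exponentially with pipe age.
    maint = lambda age: 1_500.0 + 120.0 * np.exp(0.045 * age)
    t_star = optimal_replacement_year(age_now=45, horizon=60,
                                      maint_cost=maint, new_pipe_annuity=ann)
    print(f"annuity of new pipe: {ann:,.0f} per year; replace in year {t_star}")
```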

Relevance: 30.00%

Abstract:

In recent years, consumers have become more aware of and sensitive to environmental and food safety matters. They are increasingly interested in organic agriculture and markets, and tend to prefer 'organic' products over their conventional counterparts. To increase quality and reduce production costs in organic and low-input agriculture, the European FP6 'QLIF' project investigated the use of natural products such as bio-inoculants. These are mostly composed of arbuscular mycorrhizal fungi and other microorganisms, so-called 'plant probiotic' microorganisms (PPM), because they help maintain a high yield even under abiotic and biotic stress conditions. Italian law (DLgs 217, 2006) has recently included them among 'special fertilizers'. This thesis focuses on the use of special fertilizers for growing tomatoes with organic methods in open-field conditions, and on the effects they induce on yield, quality and microbial rhizospheric communities. The primary objective was to achieve a better understanding of how plant-probiotic microflora management could buffer a future reduction of external inputs while maintaining tomato fruit yield, quality and system sustainability. We studied microbial rhizospheric communities with statistical, molecular and histological methods. This work demonstrated that the long-lasting introduction of inoculum positively affected mycorrhizal colonization and resistance against pathogens. Repeated introduction of compost, instead, negatively affected tomato quality, likely because it destabilized the ripening process, leading to over-ripening and increasing the amount of non-marketable product. After two years without any significant difference, in the third year extreme combinations of inoculum and compost inputs (low inoculum with high amounts of compost, or vice versa) increased mycorrhizal colonization. As a result, in order to reduce production costs, we recommend using only inoculum rather than compost. Secondly, this thesis analyses how mycorrhizal colonization varies across different tomato cultivars and experimental field locations. We found statistically significant differences between locations and between arbuscular colonization patterns per variety. To confirm these histological findings, we started a set of molecular experiments; the thesis discusses their preliminary results and recommends their continuation and refinement to obtain complete results.

Relevance: 30.00%

Abstract:

Hydrogen production in the green microalga Chlamydomonas reinhardtii was evaluated by means of a detailed physiological and biotechnological study. First, a wide screening of hydrogen productivity was carried out on 22 strains of C. reinhardtii, most of which are mutated at the level of the D1 protein. The screening revealed for the first time that mutations of the D1 protein may result in increased hydrogen production. Indeed, production ranged between 0 and more than 500 mL of hydrogen per liter of culture (Torzillo, Scoma et al., 2007a), with the highest producer (L159I-N230Y) being up to 5 times more productive than strain cc124, widely adopted in the literature (Torzillo, Scoma et al., 2007b). The improved productivities of the D1 protein mutants were generally the result of high photosynthetic capabilities counteracted by high respiration rates. Optimization of the culture conditions was addressed according to the results of the physiological study of selected strains. In a first step, the photobioreactor (PBR) was provided with a multiple-impeller stirring system designed, developed and tested by us, using strain cc124. It was found that the impeller system was able to induce regular and turbulent mixing, which led to improved photosynthetic yields by means of light/dark cycles. Moreover, the improved mixing regime sustained higher respiration rates compared to those obtained with the commonly used stir-bar mixing system. Considering the results of the initial screening phase, both these factors are relevant to hydrogen production. Indeed, very high energy conversion efficiencies (light to hydrogen) were obtained with the impeller device, proving that our PBR was a good tool both to improve and to study photosynthetic processes (Giannelli, Scoma et al., 2009). In the second part of the optimization, an accurate analysis of the positive features of the high-performance strain L159I-N230Y pointed out that, with respect to the WT, it has: (1) a larger chlorophyll optical cross-section; (2) a higher electron transfer rate by PSII; (3) a higher respiration rate; (4) a higher efficiency of utilization of the hydrogenase; (5) a higher starch synthesis capability; (6) a higher per-cell D1 protein amount; (7) a higher zeaxanthin synthesis capability (Torzillo, Scoma et al., 2009). This information was combined with that obtained with the impeller mixing device to find the best culture conditions for optimizing productivity with strain L159I-N230Y. The main aim was to sustain as long as possible the direct PSII contribution, which leads to hydrogen production without net CO2 release. Finally, an outstanding maximum rate of 11.1 ± 1.0 mL/L/h was reached and maintained for 21.8 ± 7.7 hours, until the effective photochemical efficiency of PSII (ΔF/F'm) underwent a final drop to zero. If expressed in terms of chlorophyll (24.0 ± 2.2 µmol/mg chl/h), these production rates are 4 times higher than what has been reported in the literature to date (Scoma et al., 2010a, submitted). DCMU addition experiments confirmed the key role played by PSII in sustaining such rates. On the other hand, experiments carried out under similar conditions with the control strain cc124 showed an improved final productivity, but no constant direct PSII contribution. These results showed that, aside from fermentation processes, if proper conditions are supplied to selected strains, hydrogen production can be substantially enhanced by means of biophotolysis.

A last study on the physiology of the process was carried out with the mutant IL. Although able to express and very efficiently utilize the hydrogenase enzyme, this strain was unable to produce hydrogen when sulfur-deprived. However, in a specific set of experiments this goal was finally reached, pointing out that, besides (1) a state 1-2 transition of the photosynthetic apparatus, (2) starch storage and (3) the establishment of anaerobiosis, a timely transition to hydrogen production is also needed under sulfur deprivation, in order to induce the process before the energy reserves are driven towards other processes necessary for the survival of the cell. This information turned out to be crucial when moving outdoors for hydrogen production in a tubular horizontal 50-liter PBR under sunlight. First attempts with laboratory-grown cultures showed that no hydrogen production under sulfur starvation can be induced unless the culture is first adapted outdoors. Once this was done, hydrogen production under direct sunlight with C. reinhardtii was achieved for the first time in the literature (Scoma et al., 2010b, submitted). Experiments were also carried out to optimize productivity in outdoor conditions with respect to light dilution within the culture layers. Finally, a brief study of the anaerobic metabolism of C. reinhardtii during hydrogen oxidation was carried out. This study represents a good complement to the understanding of the complex interplay of pathways that operate concomitantly in this microalga.

Relevance: 30.00%

Abstract:

The main topic of the thesis is the conflict between disclosure in financial markets and the firm's need for confidentiality. After a survey of the major dynamics of information production and dissemination in the stock market, the analysis moves to the interactions between the information that a firm is typically interested in keeping confidential, such as trade secrets or the data usually covered by patent protection, and the countervailing demand for disclosure arising from financial markets. The analysis demonstrates that, despite the seeming divergence between the informational contents typically disclosed to investors and the information usually covered by intellectual property protection, the overlapping areas are nonetheless wide, and the conflict between transparency in financial markets and the firm's need for confidentiality arises frequently and systematically. Indeed, the company's disclosure policy is based on a continuous trade-off between the costs and the benefits related to the public dissemination of information. Such costs are mainly represented by the competitive harm caused by competitors' access to sensitive data, while the benefits mainly refer to the lower cost of capital that the firm obtains as a consequence of more disclosure. Secrecy shields the value of costly produced information against third parties' free riding and therefore constitutes a means to protect the firm's incentives toward the production of new information, and especially toward technological and business innovation. Excessively demanding standards of transparency in financial markets might hinder this set of incentives and thus jeopardize the dynamics of innovation production. Within Italian securities regulation, two sets of rules are most relevant to this issue: the first is the rule that mandates issuers to promptly disclose all price-sensitive information to the market on an ongoing basis; the second is the duty to disclose in the prospectus all the information "necessary to enable investors to make an informed assessment" of the issuer's financial and economic prospects. Both rules impose high disclosure standards and have potentially unlimited scope, yet they provide safe harbours aimed at protecting the issuer's need for confidentiality. Despite the structural incompatibility between the public dissemination of information and the firm's need to keep certain data confidential, there are ways to convey information to the market while preserving the firm's need for confidentiality. Such means are insider trading and selective disclosure: both are based on mechanics whereby the process of price reaction to the new information takes place without any corresponding public release of data. Therefore, they offer a solution to the conflict between disclosure and the need for confidentiality that enhances market efficiency while preserving the private set of incentives toward innovation.

Relevance: 30.00%

Abstract:

In recent years an ever-increasing degree of automation has been observed in industrial processes. This increase is motivated by the demand for systems with high performance in terms of the quality of the products and services generated, productivity, efficiency, and low design, realization and maintenance costs. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of mechatronics, is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda, or buy boxed products such as food or cigarettes. Another indication of this complexity is that the consortium of machine producers has estimated that there are around 350 types of manufacturing machines. A large number of manufacturing machine companies, and notably packaging machine companies, are based in Italy; a particularly high concentration of this industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. This is often the case in large-scale systems organized in a modular and distributed manner. Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in the definition of the mechanical structure and of the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial, because of the large number of duties associated with it. Apart from the activities inherent in the automation of the machine cycles, the supervisory system is called upon to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different production needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the machine operator to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing diagnostic information in real time, as a support for the maintenance operations of the machine. The facilities that designers can find directly on the market, in terms of software component libraries, in fact provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.

What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, by focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs. Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method for dealing organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different and usually very "unstructured" way. No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. This difference is probably due to the different "dynamical framework" of logic control with respect to analog/digital control: in logic control, discrete-event dynamics replace time-driven dynamics, and hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to highlight the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), again leading to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has lately been adopting this approach, as testified by the IEC 61131-3 and IEC 61499 standards, which have been considered in commercial products only recently. On the other hand, many contributions have already been proposed in the scientific and technical literature to establish a suitable modelling framework for industrial automation. In recent years a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems has been observed. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also handle other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with higher performance, fault occurrences increase in complex systems.

This is a consequence of the fact that, as typically occurs in mechatronic systems, in complex systems such as AMS an increasing number of electronic devices is present alongside reliable mechanical elements, and electronic devices are more vulnerable by their own nature. The problem of diagnosis and fault isolation in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and, when necessary, reconfiguring the control system so that faults are tolerated. On this topic, important improvements to the formal verification of logic control, fault diagnosis and fault-tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2 a survey of the state of the art of software engineering paradigms applied to industrial automation is given. Chapter 3 presents an architecture for industrial automated systems based on the new concept of Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to achieve better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of formal software verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
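To make the idea of a reusable, implementation-independent logic control component more concrete, here is a minimal discrete-event sketch: a small state machine that drives an abstract actuator and uses a watchdog timeout as elementary fault detection. It only illustrates the style of component the thesis argues for; the actual Generalized Actuator and Generalized Device entities are defined in Chapters 3 and 4 and are not reproduced here, and the event names and timeout are invented.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    MOVING = auto()
    DONE = auto()
    FAULT = auto()

class ActuatorLogic:
    """A minimal discrete-event sketch of a reusable logic-control component:
    the control logic is written against abstract events (commands, feedback,
    clock ticks) rather than a specific implementation technology.
    Illustration only, not the architecture defined in the thesis."""

    def __init__(self, timeout_ticks):
        self.state = State.IDLE
        self.timeout_ticks = timeout_ticks
        self.elapsed = 0

    def on_event(self, event):
        # Transition function (state, event) -> state, with a watchdog that
        # raises a fault if the end-of-travel feedback never arrives.
        if self.state == State.IDLE and event == "cmd_move":
            self.state, self.elapsed = State.MOVING, 0
        elif self.state == State.MOVING and event == "fb_in_position":
            self.state = State.DONE
        elif self.state == State.MOVING and event == "tick":
            self.elapsed += 1
            if self.elapsed >= self.timeout_ticks:
                self.state = State.FAULT      # elementary fault detection
        elif event == "cmd_reset":
            self.state, self.elapsed = State.IDLE, 0
        return self.state

if __name__ == "__main__":
    act = ActuatorLogic(timeout_ticks=3)
    events = ["cmd_move", "tick", "tick", "tick",      # feedback missing -> FAULT
              "cmd_reset", "cmd_move", "fb_in_position"]
    for ev in events:
        print(ev, "->", act.on_event(ev).name)
```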

Relevance: 30.00%

Abstract:

Different tools have been used to set up and apply the model for the fulfilment of the objective of this research.

1. The model. The base model used is the Analytic Hierarchy Process (AHP), adapted with the aim of performing a benefit-cost analysis. The AHP, developed by Thomas Saaty, is a multicriteria decision-making technique which decomposes a complex problem into a hierarchy. It is used to derive ratio scales from both discrete and continuous paired comparisons in multilevel hierarchic structures. These comparisons may be taken from actual measurements or from a fundamental scale that reflects the relative strength of preferences and feelings.

2. Tools and methods. 2.1. The Expert Choice software: Expert Choice is a tool that allows each operator to easily implement the AHP model at every stage of the problem. 2.2. Personal interviews with the farms: for this research, the EMAS-certified farms of the Emilia-Romagna region were identified; information was provided by the EMAS centre in Wien. Personal interviews were carried out at each farm in order to obtain a complete and realistic judgement on each criterion of the hierarchy. 2.3. Questionnaire: a supporting questionnaire was also delivered and used for the interviews.

3. Elaboration of the data. After data collection, the data were processed using the Expert Choice software.

4. Results of the analysis. The results of the elaboration (see other document for the figures) give a series of numbers which are fractions of the unit; these are to be interpreted as the relative contribution of each element to the fulfilment of the relative objective. Calculating the benefit/cost ratio for each alternative, the following is obtained.
Alternative one, implement EMAS: benefits ratio 0.877, costs ratio 0.815, benefit/cost ratio 0.877/0.815 = 1.08.
Alternative two, do not implement EMAS: benefits ratio 0.123, costs ratio 0.185, benefit/cost ratio 0.123/0.185 = 0.66.
As stated above, the alternative with the highest ratio is the best solution for the organization. This means that the research carried out and the model implemented suggest that EMAS adoption is the best alternative for the agricultural sector. It has to be noted, however, that the ratio is 1.08, a positive but relatively low value; this shows the fragility of the conclusion and suggests a careful examination of the benefits and costs of each farm before adopting the scheme. On the other hand, the result should be taken into consideration by policy makers in order to strengthen their interventions regarding adoption of the scheme in the agricultural sector.
According to the AHP elaboration of the judgements, the main considerations on benefits are:
- Legal compliance seems to be the most important benefit for the agricultural sector, since its rank is 0.471.
- The next two most important benefits are improved internal organization (rank 0.230), followed by competitive advantage (rank 0.221), the latter mostly due to the sub-element improved image (rank 0.743).
- Even though incentives are not ranked among the most important elements, the financial ones seem to have been decisive in the decision-making process.
According to the AHP elaboration of the judgements, the main considerations on costs are:
- External costs seem to be largely more important than internal ones (rank 0.857 versus 0.143), suggesting that EMAS costs for consultancy and verification remain the biggest obstacle.
- The implementation of the EMS is the most challenging element of the internal costs (rank 0.750).
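As an illustration of the mechanics behind these numbers, the sketch below derives AHP priority weights as the principal eigenvector of a pairwise comparison matrix and then computes the benefit/cost ratios of the two alternatives from the aggregated priorities reported above. The 3x3 judgement matrix is hypothetical; in the study the judgements came from the farm interviews and were processed with the Expert Choice software.

```python
import numpy as np

def ahp_priorities(pairwise):
    """Derive AHP priority weights from a reciprocal pairwise comparison
    matrix as its principal eigenvector, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

if __name__ == "__main__":
    # Hypothetical Saaty-scale judgements for three benefit criteria
    # (legal compliance, internal organization, competitive advantage).
    benefits_matrix = [[1,   2, 2],
                       [1/2, 1, 1],
                       [1/2, 1, 1]]
    print("criterion weights:", np.round(ahp_priorities(benefits_matrix), 3))

    # Benefit/cost ratios of the two alternatives, using the aggregated
    # priorities reported in the abstract.
    b = {"Implement EMAS": 0.877, "Do not implement EMAS": 0.123}
    c = {"Implement EMAS": 0.815, "Do not implement EMAS": 0.185}
    for alt in b:
        print(f"{alt}: B/C = {b[alt] / c[alt]:.2f}")   # 1.08 and 0.66
```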

Relevance: 30.00%

Abstract:

Background: delirium is defined as an acute disorder of attention and cognition. Delirium is common in hospitalized elderly patients and is associated with increased morbidity, length of stay and patient-care costs. Although delirium can develop at any time during hospitalization, in the surgical context it typically presents early in the post-operative period (post-operative delirium, POD). The molecular mechanisms and possible genetic basis of POD onset are not known, and the risk factors are not completely defined. Our hypothesis is that genetic risk factors involving the inflammatory response could affect the immunoneuroendocrine system. Moreover, our previous data ('inflamm-aging') suggest that aging is associated with an increased inflammatory status, favouring age-related diseases such as neurodegenerative diseases, frailty and depression, among others. Some pro-inflammatory and anti-inflammatory cytokines seem to play a crucial role in increasing the inflammatory status and in the communication and regulation of the immunoneuroendocrine system. Objective: this study evaluated the incidence of POD in elderly patients undergoing general surgery and the clinical/physical and psychological risk factors for POD onset, and investigated inflammatory and genetic risk factors. Moreover, it evaluated the consequences of POD in terms of institutionalization, development of permanent cognitive dysfunction or dementia, and mortality. Methods: patients aged over 65 admitted for surgery to the Urgency Unit of the S. Orsola-Malpighi Hospital were eligible for this case-control study. Risk factors significantly associated with POD in univariate analysis were entered into a multivariate analysis to establish those independently associated with POD. Preoperative plasma levels of 9 inflammatory markers were measured in 42 control subjects and 43 subjects who developed POD. Functional polymorphisms of the IL-1α, IL-2, IL-6, IL-8, IL-10 and TNF-alpha cytokine genes were determined in 176 control subjects and 27 POD subjects. Results: a total of 351 patients were enrolled in the study. The incidence of POD was 13.2%. Independent variables associated with POD were age, co-morbidity, preoperative cognitive impairment and glucose abnormalities. The median length of hospital stay was 21 days for patients with POD versus 8 days for control patients (P < 0.001). The hospital mortality rate was 19% and 8.4%, respectively (P = 0.021), and the mortality rate after 1 year was also higher in POD patients (P = 0.0001). The baseline IL-6 concentration was higher in POD patients than in patients without POD, whereas IL-2 was lower in POD patients than in patients without POD. In a multivariate analysis, only IL-6 remained associated with POD. Moreover, IL-6, IL-8 and IL-2 were associated with co-morbidity, intra-hospital mortality, compromised functional status and emergency admission. No significant differences in genotype distribution were found between POD subjects and controls for any SNP analyzed in this study. Conclusion: in this study we found older age, co-morbidity, cognitive impairment, glucose abnormalities and baseline IL-6 to be independent risk factors for the development of POD. IL-6 could be proposed as a marker of a trait associated with an increased risk of delirium, i.e. a raised premorbid IL-6 level predicts the development of delirium.
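The "univariate screening, then multivariate model" strategy described above can be sketched with a generic logistic-regression workflow. The code below only illustrates that strategy on simulated placeholder data: the variable names mirror the covariates mentioned in the abstract, but the numbers, effect sizes and selection threshold are invented and do not reproduce the study's analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def univariate_then_multivariate(df, outcome, candidates, alpha=0.05):
    """Screen candidate risk factors one at a time with univariate logistic
    regression, then fit a multivariate model with the factors whose
    univariate p-value is below `alpha`. Returns (fitted model, kept factors)."""
    y = df[outcome]
    selected = []
    for var in candidates:
        X = sm.add_constant(df[[var]])
        p = sm.Logit(y, X).fit(disp=0).pvalues[var]
        if p < alpha:
            selected.append(var)
    X = sm.add_constant(df[selected])
    return sm.Logit(y, X).fit(disp=0), selected

if __name__ == "__main__":
    # Simulated toy data with the kinds of covariates named in the abstract.
    rng = np.random.default_rng(1)
    n = 351
    df = pd.DataFrame({
        "age": rng.normal(78, 6, n),
        "comorbidity": rng.poisson(2, n),
        "cognitive_impairment": rng.integers(0, 2, n),
        "glucose_abnormality": rng.integers(0, 2, n),
        "il6_baseline": rng.lognormal(1.5, 0.6, n),
    })
    # Invented outcome model, only to generate a plausible 0/1 response.
    logit = (-12 + 0.12 * df["age"] + 0.4 * df["cognitive_impairment"]
             + 0.05 * df["il6_baseline"])
    df["pod"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    model, kept = univariate_then_multivariate(
        df, "pod", ["age", "comorbidity", "cognitive_impairment",
                    "glucose_abnormality", "il6_baseline"])
    print("retained in multivariate model:", kept)
    print(model.summary2().tables[1].round(3))
```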

Relevance: 30.00%

Abstract:

We deal with five problems arising in the field of logistics: the Asymmetric TSP (ATSP), the TSP with Time Windows (TSPTW), the VRP with Time Windows (VRPTW), the Multi-Trip VRP (MTVRP), and the Two-Echelon Capacitated VRP (2E-CVRP). The ATSP requires finding a least-cost Hamiltonian tour in a digraph. We survey models and classical relaxations and describe the most effective exact algorithms from the literature; a survey and analysis of the polynomial formulations is also provided. The considered algorithms and formulations are experimentally compared on benchmark instances. The TSPTW requires finding, in a weighted digraph, a least-cost Hamiltonian tour visiting each vertex within a given time window. We propose a new exact method based on new tour relaxations and dynamic programming; computational results on benchmark instances show that the proposed algorithm outperforms the state-of-the-art exact methods. In the VRPTW, a fleet of identical capacitated vehicles located at a depot must be optimally routed to supply customers with known demands and time-window constraints. Different column generation bounding procedures and an exact algorithm are developed; the new exact method closed four of the five open Solomon instances. The MTVRP is the problem of optimally routing capacitated vehicles located at a depot to supply customers without exceeding maximum driving time constraints. Two set-partitioning-like formulations of the problem are introduced; lower bounds are derived and embedded into an exact solution method that can solve benchmark instances with up to 120 customers. The 2E-CVRP requires designing the optimal routing plan to deliver goods from a depot to customers by using intermediate depots, with the objective of minimizing the sum of routing and handling costs. A new mathematical formulation is introduced, and valid lower bounds and an exact method are derived. Computational results on benchmark instances show that the new exact algorithm outperforms the state-of-the-art exact methods.
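For readers unfamiliar with the ATSP, the sketch below solves tiny instances exactly with Held-Karp dynamic programming over vertex subsets. This is not one of the relaxation- or column-generation-based algorithms studied in the thesis (which scale far beyond this); it is just a compact, self-contained reference implementation of the problem definition, and the 4-city cost matrix is invented.

```python
import itertools
from math import inf

def atsp_held_karp(cost):
    """Exact ATSP by Held-Karp dynamic programming.
    cost[i][j] = arc cost from i to j (asymmetric costs allowed).
    Returns (optimal tour cost, tour starting and ending at vertex 0).
    Exponential in n: only practical for small instances (n <= ~20)."""
    n = len(cost)
    # dp[(S, j)] = (cheapest cost of a path starting at 0, visiting exactly
    # the vertices in frozenset S, and ending at j, parent of j on that path).
    dp = {(frozenset([j]), j): (cost[0][j], 0) for j in range(1, n)}
    for size in range(2, n):
        for subset in itertools.combinations(range(1, n), size):
            S = frozenset(subset)
            for j in S:
                best = (inf, None)
                for k in S - {j}:
                    prev_cost, _ = dp[(S - {j}, k)]
                    cand = prev_cost + cost[k][j]
                    if cand < best[0]:
                        best = (cand, k)
                dp[(S, j)] = best
    full = frozenset(range(1, n))
    best_cost, last = min(
        (dp[(full, j)][0] + cost[j][0], j) for j in range(1, n)
    )
    # Reconstruct the tour by walking the parent pointers backwards.
    tour, S, j = [0], full, last
    while j != 0:
        tour.append(j)
        _, parent = dp[(S, j)]
        S, j = S - {j}, parent
    tour.append(0)
    tour.reverse()
    return best_cost, tour

if __name__ == "__main__":
    # Hypothetical 4-city asymmetric cost matrix.
    c = [[0, 2, 9, 10],
         [1, 0, 6, 4],
         [15, 7, 0, 8],
         [6, 3, 12, 0]]
    print(atsp_held_karp(c))
```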

Relevance: 30.00%

Abstract:

CMV is the most frequent pathogen after heart transplantation (Tx), causing both organ-specific clinical syndromes and immune-mediated damage that can lead to acute rejection or chronic coronary artery disease (CAV). Antiviral prophylaxis appears superior to the pre-emptive approach in reducing CMV events, but the anti-CMV effect of everolimus (EVE) in addition to antiviral prophylaxis has not yet been analysed. AIM OF THE STUDY: to analyse the interaction between antiviral prophylaxis strategies and the use of EVE or MMF on the incidence of CMV-related events (infection, need for treatment, disease/syndrome) in heart Tx. MATERIALS AND METHODS: patients who underwent heart Tx and were treated with EVE or MMF and with prophylactic or pre-emptive antiviral treatment were included. CMV infection was monitored by pp65 antigenemia and DNA PCR. CMV disease/syndrome was considered the primary endpoint. RESULTS: 193 patients (10% D+/R-) were included in the study (42 on EVE and 149 on MMF). Overall, CMV infection (45% vs. 79%), the need for antiviral treatment (20% vs. 53%) and CMV disease/syndrome (2% vs. 15%) were significantly lower in the EVE group than in the MMF group (all P < 0.01). Prophylaxis was more effective than the pre-emptive strategy in preventing all outcomes in patients on MMF (P = 0.03), but not in patients on EVE. In particular, patients on EVE with a pre-emptive strategy had fewer CMV infections (48% vs. 70%; P = 0.05) and less CMV disease/syndrome (0% vs. 8%; P = 0.05) than patients on MMF with prophylaxis. CONCLUSIONS: EVE significantly reduces CMV-related events compared with MMF. The benefit of prophylaxis is preserved only in patients treated with MMF, while EVE seems to provide additional protection in reducing CMV events without the need for extensive antiviral treatment.

Relevance: 30.00%

Abstract:

The considerations developed in this work aim to clarify the delicate issue of urbanization works carried out by private developers to offset urbanization charges (opere di urbanizzazione a scomputo). The legislation concerning the realization of public works offsetting, in whole or in part, urbanization charges has been subject to numerous amendments and judicial interpretations, which followed one another after an important ruling of the European Court of Justice. It is with this judgment that the Kirchberg judges introduced a particular procedural obligation on private parties: where individual works exceed the thresholds of European relevance, they must be awarded by applying the tendering procedures provided for by Directive 93/37/EEC. It should be noted that until then the direct award of the works to the private developer constituted, in the Legislator's view, the instrument for realizing the infrastructure needed to allow building developments that the public administration was often unable to carry out on its own. Against this legislative background, the judgment of the Court of Justice appears altogether disruptive. Indeed, by introducing the principle that even the direct realization of urbanization works by a private party must comply with the European rules on public procurement procedures, it inevitably brings into confrontation two bodies of law, public procurement law and planning law, which until then had managed to run in parallel without overlapping. The national Legislator has, with considerable effort, transposed the Community principle and over the years has almost been forced, through a series of legislative amendments, to widen its scope. After analysing the various corrective measures made to the Code of public contracts, this research therefore seeks to verify whether the current regulatory framework represents a genuine point of equilibrium between the opposing needs of territorial planning and of compliance with the Community principles of competition in the choice of the contractor.