10 results for Safety Performance Function (SPF)
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Introduction The “eversion” technique for carotid endarterectomy (e-CEA), which involves the transection of the internal carotid artery at the carotid bulb and its eversion over the atherosclerotic plaque, has been associated with an increased risk of postoperative hypertension, possibly due to direct iatrogenic damage to the carotid sinus fibers. The aim of this study was to assess the long-term effect of e-CEA on arterial baroreflex and peripheral chemoreflex function in humans. Methods A retrospective review was conducted on a prospectively compiled computerized database of 3128 CEAs performed on 2617 patients at our Center between January 2001 and March 2006. During this period, a total of 292 patients who had bilateral carotid stenosis ≥70% at the time of the first admission underwent staged bilateral CEAs. Of these, 93 patients had staged bilateral e-CEAs, 126 staged bilateral s-CEAs and 73 had different procedures on each carotid. CEAs were performed with either the eversion or the standard technique, with routine Dacron patching in all cases. The study inclusion criteria were bilateral CEA with the same technique on both sides and an uneventful postoperative course after both procedures. We decided to enroll patients submitted to bilateral e-CEA to eliminate the background noise from contralateral carotid sinus fibers. Exclusion criteria were: age >70 years, diabetes mellitus, chronic pulmonary disease, symptomatic ischemic cardiac disease or medical therapy with β-blockers, cardiac arrhythmia, permanent neurologic deficits or an abnormal preoperative cerebral CT scan, carotid restenosis, and previous neck or chest surgery or irradiation. Young and age-matched healthy subjects were also recruited as controls. Patients were assessed by the 4 standard cardiovascular reflex tests, namely lying-to-standing, orthostatic hypotension, deep breathing, and the Valsalva maneuver.
Indirect autonomic parameters were assessed with a non-invasive approach based on spectral analysis of EKG RR interval, systolic arterial pressure, and respiration variability, performed with ad hoc software. From the analysis of these parameters the software provides estimates of spontaneous baroreflex sensitivity (BRS). The ventilatory response to hypoxia was assessed in patients and controls by means of classic rebreathing tests. Results A total of 29 patients (16 males, age 62.4±8.0 years) were enrolled. Overall, 13 patients had undergone bilateral e-CEA (44.8%) and 16 bilateral s-CEA (55.2%), with a mean interval between the procedures of 62±56 days. No patient showed signs or symptoms of autonomic dysfunction, including labile hypertension, tachycardia, palpitations, headache, inappropriate diaphoresis, pallor or flushing. The results of standard cardiovascular autonomic tests showed no evidence of autonomic dysfunction in any of the enrolled patients. At spectral analysis, residual baroreflex performance was shown in both patient groups, though reduced, as expected, compared to young controls. Notably, baroreflex function was better maintained in e-CEA compared to standard CEA (BRS at rest: young controls 19.93 ± 2.45 msec/mmHg; age-matched controls 7.75 ± 1.24; e-CEA 13.85 ± 5.14; s-CEA 4.93 ± 1.15; ANOVA P=0.001; BRS at stand: young controls 7.83 ± 0.66; age-matched controls 3.71 ± 0.35; e-CEA 7.04 ± 1.99; s-CEA 3.57 ± 1.20; ANOVA P=0.001). In all subjects ventilation (VE) and oximetry data fitted a linear regression model with r values > 0.8. One-way analysis of variance showed a significantly steeper slope for ΔVE/ΔSaO2 in controls compared with both patient groups, which did not differ from each other (-1.37 ± 0.33 compared with -0.33 ± 0.08 and -0.29 ± 0.13 l/min/%SaO2, p<0.05, Fig.). Similar results were observed for ΔVE/ΔPetO2 (-0.20 ± 0.1 versus -0.01 ± 0.0 and -0.07 ± 0.02 l/min/mmHg, p<0.05).
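As a purely illustrative aside (not part of the study), the per-subject slope estimation underlying the ΔVE/ΔSaO2 analysis can be sketched in a few lines of Python; the rebreathing data below are invented and the helper name `hvr_slope` is hypothetical:

```python
import numpy as np

def hvr_slope(sao2, ve):
    """Hypoxic ventilatory response: least-squares slope of minute
    ventilation (l/min) against SaO2 (%) during a rebreathing run.
    Returns (slope, r), where r is Pearson's correlation coefficient."""
    sao2 = np.asarray(sao2, dtype=float)
    ve = np.asarray(ve, dtype=float)
    slope, intercept = np.polyfit(sao2, ve, 1)  # degree-1 fit
    r = np.corrcoef(sao2, ve)[0, 1]
    return slope, r

# Invented rebreathing run: ventilation rises as SaO2 falls,
# so the fitted slope is negative (as in the study's units, l/min/%SaO2)
sao2 = [98, 95, 92, 89, 86, 83, 80]
ve = [8.0, 9.1, 10.4, 11.2, 12.5, 13.3, 14.6]
slope, r = hvr_slope(sao2, ve)
```

A subject's data would be accepted for the slope comparison only if the fit is strongly linear, mirroring the study's criterion of r values > 0.8.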
A regression model using treatment, age, baseline FiCO2 and minimum SaO2 achieved showed only treatment as a significant factor in explaining the variance in minute ventilation (R² = 25%). Conclusions Overall, we demonstrated that bilateral e-CEA does not imply carotid sinus denervation. As a result of some expected degree of iatrogenic damage, baroreflex performance was lower than that of controls; interestingly, though, it appeared better maintained in e-CEA than in s-CEA. This may be related to changes in the elastic properties of the carotid sinus vascular wall, as the patch is more rigid than the endarterectomized carotid wall that remains in e-CEA. These data confirm the safety of CEA irrespective of the surgical technique and have relevant clinical implications for the assessment of the frequent hemodynamic disturbances associated with carotid angioplasty and stenting.
Abstract:
This work is structured as follows: In Section 1 we discuss the clinical problem of heart failure. In particular, we present the phenomenon known as ventricular mechanical dyssynchrony: its impact on cardiac function, the therapy for its treatment and the methods for its quantification. Specifically, we describe the conductance catheter and its use for the measurement of dyssynchrony. At the end of Section 1, we propose a new set of indexes to quantify dyssynchrony, which are studied and validated thereafter. In Section 2 we describe the studies carried out in this work: we report the experimental protocols, and we present and discuss the results obtained. Finally, we report the overall conclusions drawn from this work and try to envisage future work and possible clinical applications of our results. Ancillary studies that were carried out during this work, mainly to investigate several aspects of cardiac resynchronization therapy (CRT), are mentioned in the Appendix. -------- Ventricular mechanical dyssynchrony plays a regulating role already in normal physiology but is especially important in pathological conditions, such as hypertrophy, ischemia, infarction, or heart failure (Chapter 1,2.). Several prospective randomized controlled trials have supported the clinical efficacy and safety of cardiac resynchronization therapy (CRT) in patients with moderate or severe heart failure and ventricular dyssynchrony. CRT resynchronizes ventricular contraction by simultaneous pacing of both the left and right ventricle (biventricular pacing) (Chapter 1.). The conductance catheter method has been used extensively to assess global systolic and diastolic ventricular function and, more recently, the ability of this instrument to pick up multiple segmental volume signals has been used to quantify mechanical ventricular dyssynchrony.
Specifically, novel indexes based on volume signals acquired with the conductance catheter were introduced to quantify dyssynchrony (Chapter 3,4.). The present work aimed to describe the characteristics of the conductance-volume signals, to investigate the performance of the indexes of ventricular dyssynchrony described in the literature, and to introduce and validate improved dyssynchrony indexes. Moreover, using the conductance catheter method and the new indexes, the clinical problem of ventricular pacing site optimization was addressed and the measurement protocol to adopt for hemodynamic tests on cardiac pacing was investigated. In accordance with the aims of the work, in addition to the classical time-domain parameters, a new set of indexes has been extracted, based on a coherent averaging procedure and on spectral and cross-spectral analysis (Chapter 4.). Our analyses were carried out on patients with indications for electrophysiologic study or device implantation (Chapter 5.). For the first time, besides patients with heart failure, indexes of mechanical dyssynchrony based on the conductance catheter were extracted and studied in a population of patients with preserved ventricular function, providing information on the normal range of such values. By performing a frequency-domain analysis and by applying an optimized coherent averaging procedure (Chapter 6.a.), we were able to describe some characteristics of the conductance-volume signals (Chapter 6.b.). We unmasked the presence of considerable beat-to-beat variations in dyssynchrony, which seemed more frequent in patients with ventricular dysfunction and appeared to play a role in discriminating patients. These non-recurrent mechanical ventricular non-uniformities are probably the expression of the substantial beat-to-beat hemodynamic variations, often associated with heart failure and due to cardiopulmonary interaction and conduction disturbances.
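The basic idea of coherent averaging can be sketched as follows; this is a generic, simplified illustration (hypothetical helper `coherent_average`, synthetic data), not the optimized procedure of Chapter 6.a.:

```python
import numpy as np

def coherent_average(signal, triggers, n_samples=100):
    """Coherent averaging of a cyclic signal (e.g. a segmental volume
    trace): each beat, delimited by consecutive trigger indices
    (e.g. R-peaks), is resampled to a common length and the beats are
    averaged point by point. Non-recurrent (non-periodic) components
    cancel out in the average; the residuals quantify them."""
    signal = np.asarray(signal, dtype=float)
    beats = []
    for start, stop in zip(triggers[:-1], triggers[1:]):
        beat = signal[start:stop]
        # resample the beat onto a fixed number of samples
        x_old = np.linspace(0.0, 1.0, len(beat))
        x_new = np.linspace(0.0, 1.0, n_samples)
        beats.append(np.interp(x_new, x_old, beat))
    beats = np.vstack(beats)
    avg = beats.mean(axis=0)
    residual = beats - avg  # beat-to-beat (non-periodic) variation
    return avg, residual

# Synthetic example: a periodic waveform plus beat-to-beat noise
rng = np.random.default_rng(0)
t = np.arange(1000)
period = 100
noisy = np.sin(2 * np.pi * t / period) + 0.3 * rng.standard_normal(t.size)
triggers = list(range(0, 1001, period))
avg, residual = coherent_average(noisy, triggers, n_samples=period)
```

Averaging ten beats attenuates the uncorrelated beat-to-beat component by roughly the square root of the number of beats, which is why the periodic waveform re-emerges in `avg`.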
We investigated how the coherent averaging procedure may affect or refine the conductance-based indexes; in addition, we proposed and tested a new set of indexes which quantify the non-periodic components of the volume signals. Using the new set of indexes we studied the acute effects of CRT and right ventricular pacing in patients with heart failure and patients with preserved ventricular function. In the overall population we observed a correlation between the hemodynamic changes induced by pacing and the indexes of dyssynchrony, and this may have practical implications for hemodynamic-guided device implantation. The optimal ventricular pacing site for patients with conventional indications for pacing remains controversial. The majority of them do not meet current clinical indications for CRT pacing. Thus, we carried out an analysis to compare the impact of several ventricular pacing sites on global and regional ventricular function and dyssynchrony (Chapter 6.c.). We observed that right ventricular pacing worsens cardiac function in patients with and without ventricular dysfunction unless the pacing site is optimized. CRT preserves left ventricular function in patients with normal ejection fraction and improves function in patients with poor ejection fraction despite no clinical indication for CRT. Moreover, the analysis of the results obtained using the new indexes of regional dyssynchrony suggests that the pacing site may influence overall global ventricular function depending on its relative effects on regional function and synchrony. Another clinical problem that has been investigated in this work is the optimal right ventricular lead location for CRT (Chapter 6.d.).
As in the previous analysis, using novel parameters describing local synchrony and efficiency, we tested, and confirmed, the hypothesis that biventricular pacing with alternative right ventricular pacing sites produces acute improvement of ventricular systolic function and improves mechanical synchrony when compared to standard right ventricular pacing. Although no specific right ventricular location was shown to be superior during CRT, the right ventricular pacing site that produced the optimal acute hemodynamic response varied between patients. Acute hemodynamic effects of cardiac pacing are conventionally evaluated after stabilization periods, whose applied duration varies considerably across cardiac pacing studies. With an ad hoc protocol (Chapter 6.e.) and indexes of mechanical dyssynchrony derived from the conductance catheter, we demonstrated that the usage of stabilization periods during evaluation of cardiac pacing may mask early changes in systolic and diastolic intra-ventricular dyssynchrony. In fact, at the onset of ventricular pacing, the main dyssynchrony and ventricular performance changes occur within a 10-s time span, initiated by the changes in ventricular mechanical dyssynchrony induced by aberrant conduction and followed by a partial or even complete recovery. It was already demonstrated in normal animals that ventricular mechanical dyssynchrony may act as a physiologic modulator of cardiac performance together with heart rate, contractile state, preload and afterload. The present observation, which shows the compensatory mechanism of mechanical dyssynchrony, suggests that ventricular dyssynchrony may be regarded as an intrinsic cardiac property, with baseline dyssynchrony at an increased level in heart failure patients.
To make available an independent system for cardiac output estimation, in order to confirm the results obtained with the conductance volume method, we developed and validated a novel technique to apply the Modelflow method (a method that derives an aortic flow waveform from arterial pressure by simulation of a non-linear three-element aortic input impedance model, Wesseling et al. 1993) to the left ventricular pressure signal, instead of the arterial pressure used in the classical approach (Chapter 7.). The results confirmed that in patients without valve abnormalities undergoing conductance catheter evaluations, continuous monitoring of cardiac output using the intra-ventricular pressure signal is reliable. Thus, cardiac output can be monitored quantitatively and continuously with a simple and low-cost method. During this work, additional studies were carried out to investigate several areas of uncertainty of CRT. The results of these studies are briefly presented in the Appendix: the long-term survival of patients treated with CRT in clinical practice, the effects of CRT in patients with mild symptoms of heart failure and in very old patients, limited thoracotomy as a second-choice alternative to transvenous implant for CRT delivery, the evolution and prognostic significance of the diastolic filling pattern in CRT, the selection of candidates for CRT with echocardiographic criteria, and the prediction of response to the therapy.
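For illustration only, a drastically simplified, constant-parameter version of the three-element windkessel idea behind Modelflow can be sketched as follows. The real method uses a nonlinear, pressure-dependent characteristic impedance and compliance with per-subject calibration; here every parameter and the pressure waveform are invented, in arbitrary units:

```python
import numpy as np

def windkessel_flow(pressure, dt, z0=0.05, c=1.2, rp=1.0):
    """Derive a flow waveform from a pressure signal with a
    three-element windkessel: characteristic impedance z0 in series
    with the parallel combination of compliance c and peripheral
    resistance rp.  P = z0*Q + Pw  and  c*dPw/dt = Q - Pw/rp.
    NOTE: linearized, constant-parameter sketch of the model class,
    not the actual Modelflow implementation (Wesseling et al. 1993)."""
    pw = pressure[0]                      # windkessel pressure state
    flow = np.empty_like(pressure, dtype=float)
    for i, p in enumerate(pressure):
        q = (p - pw) / z0                 # flow through z0
        q = max(q, 0.0)                   # valve: no backward flow
        flow[i] = q
        pw += dt * (q - pw / rp) / c      # forward-Euler state update
    return flow

# Invented pressure waveform, 1 kHz sampling, one 0.8-s beat
dt = 0.001
t = np.arange(0, 0.8, dt)
pressure = 80 + 40 * np.clip(np.sin(2 * np.pi * t / 0.8), 0, None)
flow = windkessel_flow(pressure, dt)
stroke_volume = flow.sum() * dt           # integral of flow over the beat
```

Stroke volume is the beat integral of the derived flow; multiplying by heart rate gives a continuous cardiac output estimate, which is the quantity the study validated against the conductance method.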
Abstract:
Asset Management (AM) is a set of procedures operable at the strategic, tactical and operational levels for the management of a physical asset’s performance, associated risks and costs within its whole life-cycle. AM combines the engineering, managerial and informatics points of view. In addition to internal drivers, AM is driven by the demands of customers (social pull) and regulators (environmental mandates and economic considerations). AM can follow either a top-down or a bottom-up approach. Considering rehabilitation planning at the bottom-up level, the main issue is to rehabilitate the right pipe at the right time with the right technique. Finding the right pipe may be possible and practicable, but determining the timeliness of the rehabilitation and choosing the technique to rehabilitate with is far less straightforward. It is a truism that rehabilitating an asset too early is unwise, just as doing it too late entails extra expenses en route, in addition to the cost of the exercise of rehabilitation per se. One is confronted with a typical ‘Hamlet-esque’ dilemma – ‘to repair or not to repair’; or, put another way, ‘to replace or not to replace’. The decision in this case is governed by three factors, not necessarily interrelated – quality of customer service, costs and budget in the life cycle of the asset in question. The goal of replacement planning is to find the juncture in the asset’s life cycle where the cost of replacement is balanced by the rising maintenance costs and the declining level of service. System maintenance aims at improving performance and maintaining the asset in good working condition for as long as possible. Effective planning is used to target maintenance activities to meet these goals and minimize costly exigencies. The main objective of this dissertation is to develop a process-model for asset replacement planning.
The aim of the model is to determine the optimal pipe replacement year by comparing, over time, the annual operating and maintenance costs of the existing asset and the annuity of the investment in a new equivalent pipe at the best market price. It is proposed that risk cost provides an appropriate framework for deciding the balance between the investment needed to replace an asset and the operational expenditure needed to maintain it. The model describes a practical approach to estimate when an asset should be replaced. A comprehensive list of criteria to be considered is outlined, the main criterion being a vis-à-vis comparison between maintenance and replacement expenditures. The costs to maintain the assets should be described by a cost function related to the asset type, the risks to the safety of people and property owing to the declining condition of the asset, and the predicted frequency of failures. The cost functions reflect the condition of the existing asset at the time the decision to maintain or replace is taken: age, level of deterioration, risk of failure. The process model is applied to the wastewater network of Oslo, the capital city of Norway, and uses available real-world information to forecast the life-cycle costs of maintenance and rehabilitation strategies and to support infrastructure management decisions. The case study provides an insight into the various definitions of ‘asset lifetime’ – service life, economic life and physical life. The results recommend that one common lifetime value should not be applied to all the pipelines in the stock for long-term investment planning; rather, it would be wiser to define different values for different cohorts of pipelines, to reduce the uncertainties associated with generalisations made for simplification. It is envisaged that the more criteria the municipality is able to include when estimating maintenance costs for the existing assets, the more precise the estimate of the expected service life will be.
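The maintenance-versus-annuity comparison at the heart of such a model can be sketched as follows; this is a minimal illustration with an invented cost function and parameter values, not the thesis’s process-model:

```python
def annuity(investment, rate, lifetime):
    """Equivalent annual cost of a new asset: the investment spread
    over its expected lifetime at the given discount rate."""
    f = (1 + rate) ** lifetime
    return investment * rate * f / (f - 1)

def replacement_year(maintenance_cost, investment, rate, lifetime, horizon=100):
    """First year in which the annual operating/maintenance (and risk)
    cost of the existing pipe exceeds the annuity of an equivalent new
    pipe; before that year, maintaining is cheaper than replacing."""
    a = annuity(investment, rate, lifetime)
    for year in range(1, horizon + 1):
        if maintenance_cost(year) > a:
            return year
    return None  # replacement not justified within the horizon

# Invented cohort: maintenance grows 7% per year from 500/yr;
# a new pipe costs 20,000, lasts 80 years, discount rate 4%
cost = lambda year: 500 * 1.07 ** year
year = replacement_year(cost, 20_000, 0.04, 80)
```

In a real application the cost function would encode asset type, deterioration level and failure risk, as described above, rather than a simple exponential growth assumption.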
The ability to include social costs makes it possible to compute the asset life based not only on its physical characterisation, but also on the sensitivity of network areas to the social impact of failures. This type of economic analysis is very sensitive to model parameters that are difficult to determine accurately. The main value of this approach is the effort to demonstrate that it is possible to include in decision-making factors such as the cost of the risk associated with a decline in the level of performance, the level of this deterioration and the asset’s depreciation rate, without looking at age as the sole criterion for making decisions about replacement.
Abstract:
In recent years an ever-increasing degree of automation has been observed in industrial processes. This increase is motivated by the demand for systems with high performance in terms of quality of the products/services generated, productivity, efficiency and low costs in design, realization and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of mechatronics, is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as constituted by a set of flexible working stations and one or more transportation systems. To appreciate how important these machines are in our society, consider that every day most of us use bottles of water or soda and buy products in boxes, such as food or cigarettes. Another indication of their complexity is that the consortium of machine producers has estimated around 350 types of manufacturing machine. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the “packaging valley”. Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. Often this is the case in large-scale systems, organized in a modular and distributed manner.
Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial, because of the large number of duties associated with it. Apart from the activity inherent in the automation of the machine cycles, the supervisory system is called on to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and to the different operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the machine operator to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing diagnostic information in real time, as a support for the maintenance operations of the machine. The facilities that designers can find directly on the market, in terms of software component libraries, do in fact provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices. What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focussing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to specific needs.
Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. In the field of analog and digital control, by contrast, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different way, usually very “unstructured”. No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. Probably this difference is due to the different “dynamical framework” of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to enlighten the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies.
Industrial automation has lately been adopting this approach, as testified by the IEC standards IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years it has been possible to note a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also deal with other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, in complex systems fault occurrences increase together with performance. This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMSs, together with reliable mechanical elements, an increasing number of electronic devices are also present, which are by their very nature more vulnerable. The problem of diagnosis and fault isolation in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance.
The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, important improvements to formal verification of logic control, fault diagnosis and fault tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2 a survey of the state of the software engineering paradigm applied to industrial automation is discussed. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to achieve better reusability and modularity of the control logic. In Chapter 5 a new approach is presented, based on Discrete Event Systems, for the problem of software formal verification, together with an active fault tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader understand some crucial points of Chapter 5; Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
Abstract:
Pancreatic islet transplantation represents a fascinating procedure that, at the moment, can be considered an alternative to standard insulin treatment or pancreas transplantation only for selected categories of patients with type 1 diabetes mellitus. Among the factors responsible for poor islet engraftment, hypoxia plays an important role. Mesenchymal stem cells (MSCs) were recently used in animal models of islet transplantation not only to reduce allograft rejection, but also to promote revascularization. Adipose tissue currently represents a novel and good source of MSCs. Moreover, the capability of adipose-derived stem cells (ASCs) to improve islet graft revascularization was recently reported after hybrid transplantation in mice. Within this context, we have previously shown that hyaluronan esters of butyric and retinoic acids can significantly enhance the rescuing potential of human MSCs. Here we evaluated whether ex vivo preconditioning of human ASCs (hASCs) with a mixture of hyaluronic (HA), butyric (BU), and retinoic (RA) acids may result in optimization of graft revascularization after islet/stem cell intrahepatic cotransplantation in syngeneic diabetic rats. We demonstrated that hASCs exposed to the mixture of molecules are able to increase the secretion of vascular endothelial growth factor (VEGF), as well as the transcription of angiogenic genes, including VEGF, KDR (kinase insert domain receptor), and hepatocyte growth factor (HGF). Rats transplanted with islets cocultured with preconditioned hASCs exhibited better glycemic control than rats transplanted with an equal volume of islets and control hASCs. Cotransplantation with preconditioned hASCs was also associated with enhanced islet revascularization in vivo, as highlighted by graft morphological analysis.
The observed increase in islet graft revascularization and function suggests that our method of stem cell preconditioning may represent a novel strategy to remarkably improve the efficacy of islet–hMSC cotransplantation.
Abstract:
Agri-food supply chains extend beyond national boundaries, partially facilitated by a policy environment that encourages more liberal international trade. Rising concentration within the downstream sector has driven a shift towards “buyer-driven” global value chains (GVCs) extending internationally, with global sourcing and the emergence of multinational key economic players that compete with increased emphasis on product quality attributes. Agri-food systems are thus increasingly governed by a range of inter-related public and private standards, both of which are becoming a priori mandatory, especially in supply chains for high-value and quality-differentiated agri-food products; these standards tend to strongly affect upstream agricultural practices and firms’ internal organization and strategic behaviour, and to shape the organization of the food chain. Notably, increasing attention has been given to the impact of sanitary and phytosanitary (SPS) measures on agri-food trade, and notably on developing countries’ export performance. Food and agricultural trade is the vital link in the mutual dependency of the global trade system and developing countries, which derive a substantial portion of their income from it. In Morocco, fruits and vegetables (especially fresh) are the primary agricultural exports. Because of its labor intensity, this sector (especially citrus and tomato) is particularly important in terms of income and employment generation, especially for the female laborers hired in the farms and packing houses. However, the emergence of agricultural and agri-food product safety issues and the subsequent tightening of market requirements have challenged these mutual gains, owing to the lack of technical and financial capacities of most developing countries.
Abstract:
Waste management represents an important issue in our society, and Waste-to-Energy incineration plants have been playing a significant role in recent decades, with increasing importance in Europe. One of the main issues posed by waste combustion is the generation of air contaminants. Particular concern surrounds acid gases, mainly hydrogen chloride and sulfur oxides, due to their potential impact on the environment and on human health. Therefore, in the present study the main available technological options for flue gas treatment were analyzed, focusing on dry treatment systems, which are increasingly applied in Municipal Solid Waste (MSW) incinerators. An operational model was proposed to describe and optimize the acid gas removal process. It was applied to an existing MSW incineration plant, where acid gases are neutralized in a two-stage dry treatment system. This process is based on the injection of powdered calcium hydroxide and sodium bicarbonate into reactors followed by fabric filters. HCl and SO2 conversions were expressed as a function of reactant flow rates, with model parameters calculated from literature and plant data. Implementation in process simulation software allowed the identification of optimal operating conditions, taking into account the reactant feed rates, the amount of solid products and the recycling of the sorbent. Alternative configurations of the reference plant were also assessed. The applicability of the operational model was extended by also developing a fundamental approach to the issue: a predictive model was developed, describing the mass transfer and kinetic phenomena governing acid gas neutralization with solid sorbents. The rate-controlling steps were identified through the reproduction of literature data, allowing the description of acid gas removal in the case study analyzed. A laboratory device was also designed and started up to assess the required model parameters.
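As a generic illustration of expressing acid gas conversion as a function of reactant flow rates (not the operational model of the study), one may write conversion as a saturating function of the stoichiometric ratio between sorbent fed and sorbent theoretically required; the functional form and every parameter value below are invented:

```python
import math

def acid_gas_conversion(sorbent_rate, acid_rate, stoich, k):
    """Illustrative removal law: conversion X of an acid gas (HCl or
    SO2) as a saturating function of the stoichiometric ratio SR,
    X = 1 - exp(-k * SR), where k lumps mass-transfer and kinetic
    limitations and stoich is kg of sorbent per kg of acid gas."""
    sr = sorbent_rate / (stoich * acid_rate)
    return 1.0 - math.exp(-k * sr)

def required_sorbent(acid_rate, stoich, k, target):
    """Sorbent feed rate needed to reach a target conversion,
    obtained by inverting the expression above."""
    sr = -math.log(1.0 - target) / k
    return sr * stoich * acid_rate

# Invented duty: neutralize 50 kg/h of HCl with Ca(OH)2
# (stoich ~1.02 kg/kg), lumped parameter k = 1.5, target 99% removal
feed = required_sorbent(acid_rate=50.0, stoich=1.02, k=1.5, target=0.99)
x = acid_gas_conversion(feed, 50.0, 1.02, 1.5)
```

A law of this shape makes the optimization trade-off visible: high removal targets require super-stoichiometric sorbent feeds, which is why sorbent recycling and solid product amounts enter the operating-condition optimization.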
Abstract:
In many application domains data can be naturally represented as graphs. When analytical solutions to a given problem are unfeasible, machine learning techniques can be a viable way to solve it. Classical machine learning techniques are defined for data represented in vectorial form. Recently some of them have been extended to deal directly with structured data. Among these techniques, kernel methods have shown promising results from both the computational-complexity and the predictive-performance points of view. Kernel methods avoid an explicit mapping into vectorial form by relying on kernel functions, which, informally, are functions that compute a similarity measure between two entities. However, defining good kernels for graphs is a challenging problem because of the difficulty of finding a good tradeoff between computational complexity and expressiveness. Another problem we face is learning on data streams, where a potentially unbounded sequence of data is generated by some source. There are three main contributions in this thesis. The first contribution is the definition of a new family of kernels for graphs based on Directed Acyclic Graphs (DAGs). We analyzed two kernels from this family, achieving state-of-the-art results, from both the computational and the classification point of view, on real-world datasets. The second contribution consists in making the application of learning algorithms to streams of graphs feasible; moreover, we defined a principled way to manage memory. The third contribution is the application of machine learning techniques for structured data to non-coding RNA function prediction. In this setting the secondary structure is thought to carry relevant information; however, existing methods that consider the secondary structure have prohibitively high computational complexity. We propose to apply kernel methods to this domain, obtaining state-of-the-art results.
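The notion of a kernel function as an implicit similarity measure can be illustrated with a deliberately simple graph kernel. The sketch below compares two graphs by the dot product of their vertex-label histograms; it is a toy stand-in for intuition only, far simpler than the DAG-based kernels developed in the thesis, and graphs here are reduced to bare lists of vertex labels.

```python
from collections import Counter

def vertex_label_kernel(g1_labels, g2_labels):
    """Toy graph kernel: the dot product of the two graphs'
    vertex-label histograms. Equivalent to an explicit mapping into a
    label-count feature vector, but computed without building it in full."""
    c1, c2 = Counter(g1_labels), Counter(g2_labels)
    return sum(c1[label] * c2[label] for label in c1.keys() & c2.keys())

# Two small molecule-like graphs sharing labels "C" and "O"
k = vertex_label_kernel(["C", "C", "O"], ["C", "O", "N"])  # -> 2*1 + 1*1 = 3
```

Because the kernel equals an inner product in a feature space, it can be plugged directly into any kernelized learner (e.g. an SVM); richer kernels such as those over DAG decompositions trade more computation for more expressive structural features.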
Abstract:
This PhD thesis focused on nanomaterial (NM) engineering for occupational health and safety, within the EU project “Safe Nano Worker Exposure Scenarios (SANOWORK)”. Following a safety-by-design approach, surface engineering strategies (surface coating, purification, colloidal force control, wet milling, film coating deposition and granulation) were proposed as risk remediation strategies (RRS) to decrease the toxicity and emission potential of NMs within real processing lines. In the first case investigated, the PlasmaChem ZrO2 manufacturing line, colloidal force control applied to the washing of the synthesis reactor reduced ZrO2 contamination in wastewater, enabling an efficient recycling procedure for the recovered ZrO2. ZrO2 NM was further investigated in the ceramic process owned by CNR-ISTEC and GEA-Niro, where spray drying and freeze drying were employed to decrease NM emissivity while maintaining a reactive surface in the dried NM. For the handling of nanofibers (NFs) obtained through the Elmarco electrospinning procedure, film coating deposition was applied to polyamide non-woven fabric to avoid free fiber release. For TiO2 NFs, wet milling was applied to reduce and homogenize the aspect ratio, leading to a significant mitigation of fiber toxicity. In the Colorobbia spray coating line, Ag and TiO2 nanosols, employed to impart antibacterial and depolluting properties, respectively, to different substrates, were investigated: Ag was subjected to surface coating and purification, decreasing NM toxicity; TiO2 was modified by surface coating, spray drying and blending with colloidal SiO2, improving its technological performance. In the extrusion of polymeric matrices filled with carbon nanotubes (CNTs), owned by Leitat, the CNTs used as filler were granulated by spray drying and freeze spray drying, reducing their exposure potential.
The engineered NMs tested by biologists were further investigated under relevant biological conditions, to improve the knowledge of structure/toxicity mechanisms and to obtain new insights for the design of safer NMs.
Abstract:
The aim of this research study is to explore the opportunity to set up Performance Objective (PO) parameters for specific risks in Ready-to-Eat (RTE) products, to be proposed to food industries and food authorities. In fact, although microbiological criteria for Salmonella and Listeria monocytogenes in RTE products are included in the European Regulation, these parameters are not risk-based, and no microbiological criterion for Bacillus cereus in RTE products exists. For these reasons, the behaviour of Salmonella enterica in RTE mixed salad, the microbiological characteristics of RTE spelt salad, and the definition of POs for Bacillus cereus and Listeria monocytogenes in RTE spelt salad were assessed. Based on the data produced, the following conclusions can be drawn: 1. A rapid growth of Salmonella enterica may occur in mixed-ingredient salads, so strict temperature control during the production chain of the product is critical. 2. Spelt salad is characterized by a high number of Lactic Acid Bacteria (LAB); Listeria spp. and Enterobacteriaceae, on the contrary, did not grow during the shelf life, probably due to the relevant metabolic activity of the LAB. 3. The use of spelt and cheese compliant with the suggested POs might significantly reduce the incidence of foodborne intoxications due to Bacillus cereus and Listeria monocytogenes, as well as the proportion of recalls, which cause huge economic losses for food companies commercializing RTE products. 4. The approach used to calculate the PO values reported in this work can easily be adapted to different food/risk combinations, as well as to any change in the formulation of the same food products. 5. Optimized sampling plans, in terms of the number of samples to collect, can be derived in order to verify compliance with the selected PO values.
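The relationship between a sampling plan's size and the confidence of detecting non-compliance with a PO (conclusion 5 above) can be sketched with a standard zero-acceptance attribute plan. This is an illustrative textbook formula, not necessarily the derivation used in the thesis: if a proportion p of units exceeds the PO, the smallest n with (1 − p)^n ≤ 1 − confidence guarantees at least one non-compliant sample is found with the stated confidence.

```python
import math

def samples_for_po(defective_proportion, confidence=0.95):
    """Minimum number of samples n such that a lot in which a fraction
    `defective_proportion` of units exceeds the PO yields at least one
    non-compliant sample with probability >= `confidence`:
        n = ceil( ln(1 - confidence) / ln(1 - p) )
    (zero-acceptance attribute sampling, used here as a generic sketch)."""
    p = defective_proportion
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

# Detect a 5% non-compliance rate with 95% confidence
n = samples_for_po(0.05)  # -> 59 samples
```

The formula makes the practical trade-off explicit: halving the tolerable non-compliance rate roughly doubles the number of samples, which is why risk-based PO values and sampling effort have to be set jointly.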