17 results for Level of processing
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Gnocchi is a typical Italian potato-based fresh pasta that can be either homemade or industrially manufactured. The homemade traditional product is consumed fresh on the day it is produced, whereas the industrially manufactured one is vacuum-packed in polyethylene and usually stored under refrigerated conditions. At the industrial level, most kinds of gnocchi are produced from potato derivatives (i.e. flakes, dehydrated products or flour) to which soft wheat flour, salt, emulsifiers and aromas are added. Recently, a novel type of gnocchi has emerged on the Italian pasta market, designed to be as similar as possible to the traditional homemade product. It is industrially produced from fresh potatoes as the main ingredient, together with soft wheat flour, pasteurized liquid eggs and salt; the potatoes undergo industrial steam-cooking and mashing treatments. Neither preservatives nor emulsifiers are included in the recipe. The main aim of this work was to investigate the industrial manufacture of gnocchi in order to improve the quality characteristics of the final product, by studying the main steps of production, from the raw and steam-cooked tubers through the semi-finished materials, such as the potato puree and the formulated dough. For this purpose, the enzymatic activity of the raw and steam-cooked potatoes, the main characteristics of the puree (colour, texture and starch), the interactions among the ingredients of differently formulated doughs, and the basic quality aspects of the final product were investigated. The results indicated that steam cooking influenced the analysed enzymes (pectin methylesterase, PME, and α- and β-amylases) differently in the different tissues of the tuber. PME remained active in the cortex and may therefore affect the texture of cooked potatoes used as the main ingredient in the production of gnocchi.
The starch-degrading enzymes (α- and β-amylases) were inactivated both in the cortex and in the pith of the tuber. The study performed on the potato puree showed that, of the two analysed samples, the product obtained with dual lower-pressure treatments seemed the most suitable for the production of gnocchi, in terms of its better physicochemical and textural properties. It did not show the aggregation phenomena responsible for hard lumps, which may occur in this kind of semi-finished product. Contrary to expectations, the textural properties of the gnocchi doughs were not influenced by the different formulations. Among the ingredients involved in the preparation of the different samples, soft wheat flour seemed the most crucial in affecting the quality features of the gnocchi doughs. As a consequence of the interactive effect of the ingredients on the physicochemical and textural characteristics of the different doughs, a uniform and well-defined separation among samples was not obtained. In the comparison of different kinds of gnocchi, the best physicochemical and textural properties were detected in the sample made with fresh tubers. This was probably due not only to the use of fresh steam-cooked potatoes, but also to the pasteurized liquid eggs and to the absence of any emulsifier, additive or preservative.
Abstract:
According to much evidence, observing objects activates two types of information: structural properties, i.e. the visual information about the structural features of objects, and function knowledge, i.e. the conceptual information about their skilful use. Many studies so far have focused on the role played by these two kinds of information during object recognition and on their neural underpinnings. However, to the best of our knowledge, no study so far has focused on the differential activation of this information (structural vs. function) during object manipulation and conceptualization, depending on the age of participants and on the level of object familiarity (familiar vs. non-familiar). Therefore, the main aim of this dissertation was to investigate how actions and concepts related to familiar and non-familiar objects may vary across development. To pursue this aim, four studies were carried out. The first study led to the creation of the Familiar and Non-Familiar Stimuli Database, a set of everyday objects classified by Italian pre-schoolers, schoolers, and adults, useful for verifying how object knowledge is modulated by age and frequency of use. A parallel study demonstrated that factors such as sociocultural dynamics may affect the perception of objects. Specifically, data on familiarity, naming, function, use and frequency of use of the objects in the Familiar and Non-Familiar Stimuli Database were collected with Dutch and Croatian children and adults. The last two studies, on object interaction and language, provide further evidence in support of the literature on affordances and on the link between affordances and language processing from a developmental point of view, supporting the perspective of situated cognition and emphasizing the crucial role of human experience.
Abstract:
This doctoral thesis presents a project carried out in secondary schools in the city of Ferrara with the primary objective of demonstrating the effectiveness of an intervention based on Well-Being Therapy (Fava, 2016) in reducing alcohol use and improving lifestyles. In the first part (chapters 1-3), an introduction to risky behaviors and unhealthy lifestyles in adolescence is presented, followed by an examination of the phenomenon of binge drinking and of the concept of psychological well-being. In the second part (chapters 4-6), the experimental study is presented. A three-arm cluster randomized controlled trial including three test periods was implemented. The study involved eleven classes randomly assigned to receive a well-being intervention (WBI), a lifestyle intervention (LI) or no intervention (NI). Results were analyzed by linear mixed models and mixed-effects logistic regression with the aim of testing the efficacy of WBI in comparison with LI and NI. The AUDIT-C total score increased more in NI than in WBI (p=0.008) and LI (p=0.003) at 6 months. The odds of being classified as an at-risk drinker were lower in WBI (OR 0.01; 95%CI 0.01–0.14) and LI (OR 0.01; 95%CI 0.01–0.03) than in NI at 6 months. The odds of using e-cigarettes at 6 months (OR 0.01; 95%CI 0.01–0.35) and cannabis at post-test (OR 0.01; 95%CI 0.01–0.18) were lower in WBI than in NI. Sleep hours at night decreased more in NI than in WBI (p = 0.029) and LI (p = 0.006) at 6 months. Internet addiction scores decreased more in WBI (p = 0.003) and LI (p = 0.004) than in NI at post-test. Conclusions about the results obtained, limitations of the study, and future implications are discussed. In the seventh chapter, the data of the project collected during the pandemic are presented and compared with those from the recent literature.
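The effect estimates above are odds ratios from mixed-effects logistic regression. As a simplified illustration of how an odds ratio and its 95% confidence interval are obtained, a Woolf-type calculation on a 2x2 table can be sketched as follows (hypothetical counts, and ignoring the clustering by class that the mixed-effects model accounts for):

```python
import math

def odds_ratio_ci(exp_event, exp_no, ctl_event, ctl_no, z=1.96):
    """Odds ratio of the exposed vs control group with a Woolf
    (log-normal) 95% confidence interval from a 2x2 table."""
    or_ = (exp_event * ctl_no) / (exp_no * ctl_event)
    se = math.sqrt(1/exp_event + 1/exp_no + 1/ctl_event + 1/ctl_no)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: at-risk drinkers / not at-risk at 6 months,
# intervention arm vs no-intervention arm (NOT the study's data).
or_, lo, hi = odds_ratio_ci(3, 97, 20, 80)
```

An odds ratio below 1 with a confidence interval entirely below 1, as reported in the study, indicates lower odds of the outcome in the intervention arm.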
1st level of automation: the effectiveness of adaptive cruise control on driving and visual behaviour
Abstract:
The research analysed driver assistance systems, known as Advanced Driver Assistance Systems (ADAS), in relation to road safety. The study is structured in several evaluation steps, based on dedicated on-site tests carried out with different samples of users, grouped according to their driving experience with Adaptive Cruise Control (ACC). The evaluation steps concern: • the testing mode and the choice of suitable instrumentation to detect the driver's behaviour in relation to the ACC; • the analysis modes and the outputs to be obtained, i.e. the distribution of attention and inattention, the mental workload, and the Perception-Reaction Time (PRT), the Time To Collision (TTC) and the Time Headway (TH). The main purpose is to assess the interaction between vehicle drivers and ADAS, highlighting the inattention and the variations in workload they induce in the driving task. The research project employed a system for monitoring visual behavior (ASL Mobile Eye-XG - ME), a high-precision GPS that recorded the kinematic data of the vehicle (Racelogic Video V-BOX) and a tool for reading brain activity (electroencephalography, EEG). During the analytical phase, a second important research objective emerged: the creation of a graphical interface to overcome the frame-count limit, making the labeling of the driver's points of view faster and more effective. The results give a complete and exhaustive picture of the vehicle-driver interaction. It was possible to highlight the main sources of criticality related to the user and the vehicle, with the aim of concretely reducing the accident rate. In addition, the use of mathematical-computational methodologies for the analysis of the experimental data allowed the analytical processes to be optimized and verified with neural networks, enabling an effective comparison between the manual and automatic methodologies.
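Of the indicators listed above, TTC and TH have direct kinematic definitions: TTC is the gap to the lead vehicle divided by the closing speed, and TH is the gap divided by the follower's own speed. A minimal sketch (the study computes these on V-BOX kinematic data; the function names here are illustrative):

```python
def time_to_collision(gap_m, v_follower_ms, v_leader_ms):
    """TTC (s): gap divided by closing speed; infinite when the
    follower is not closing in on the leader."""
    closing = v_follower_ms - v_leader_ms
    return float("inf") if closing <= 0 else gap_m / closing

def time_headway(gap_m, v_follower_ms):
    """TH (s): gap divided by the follower's own speed."""
    return gap_m / v_follower_ms

# Follower at 25 m/s, leader at 20 m/s, 30 m apart:
ttc = time_to_collision(30.0, 25.0, 20.0)   # 6.0 s
th = time_headway(30.0, 25.0)               # 1.2 s
```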
Abstract:
Traceability is often perceived by food industry executives as an additional cost of doing business, one to be avoided if possible. However, a traceability system can in fact ensure compliance with regulatory requirements, increase food safety and recall performance, and improve marketing performance as well as supply chain management. Thus, traceability affects the business performance of firms in terms of the costs and benefits determined by traceability practices. Costs and benefits are in turn affected by factors such as firms' characteristics, the level of traceability and, lastly, the costs and benefits perceived prior to traceability implementation. This thesis was undertaken to understand how these factors are linked in affecting the outcome of costs and benefits. Analysis of the results of a plant-level survey of the Italian ichthyic processing industry revealed that processors generally adopt various levels of traceability, while government support appears to increase the level of traceability as well as the expected and actual costs and benefits. None of the firms' characteristics, with the exception of government support, influences costs or the level of traceability. Only firm size and the level of QMS certification are linked with benefits, while the precision of traceability increases benefits without affecting costs. Finally, traceability practices appear to be driven by requests from "external" stakeholders such as government, authorities and customers rather than by "internal" factors (e.g. improving firm management), while the traceability system does not provide any added value from the market in terms of price premium or market share increase.
Abstract:
The continuous increase in genome sequencing projects has produced a huge amount of data in the last 10 years: currently more than 600 prokaryotic and 80 eukaryotic genomes are fully sequenced and publicly available. However, the sequencing process alone determines only raw nucleotide sequences. This is just the first step of the genome annotation process, which deals with assigning biological information to each sequence. Annotation is performed at each level of the biological information processing mechanism, from DNA to protein, and cannot be accomplished by in vitro analysis procedures alone, which are extremely expensive and time-consuming when applied at such a large scale. Thus, in silico methods are needed to accomplish the task. The aim of this work was the implementation of predictive computational methods to allow a fast, reliable, and automated annotation of genomes and proteins starting from amino acid sequences. The first part of the work focused on the implementation of a new machine-learning-based method for the prediction of the subcellular localization of soluble eukaryotic proteins. The method, called BaCelLo, was developed in 2006. Its main peculiarity is its independence from the biases present in the training dataset, which cause the over-prediction of the most represented examples in all the other predictors developed so far. This important result was achieved by a modification I made to the standard Support Vector Machine (SVM) algorithm, creating the so-called Balanced SVM. BaCelLo predicts the most important subcellular localizations in eukaryotic cells, and three kingdom-specific predictors were implemented. In two extensive comparisons, carried out in 2006 and 2008, BaCelLo was shown to outperform all the state-of-the-art methods available for this prediction task.
BaCelLo was subsequently used to completely annotate 5 eukaryotic genomes, by integrating it in a pipeline of predictors developed at the Bologna Biocomputing group by Dr. Pier Luigi Martelli and Dr. Piero Fariselli. An online database, called eSLDB, was developed by integrating, for each amino acid sequence extracted from the genomes, the predicted subcellular localization merged with experimental and similarity-based annotations. In the second part of the work a new machine-learning-based method was implemented for the prediction of GPI-anchored proteins. The method efficiently predicts from the raw amino acid sequence both the presence of the GPI anchor (by means of an SVM) and the position in the sequence of the post-translational modification event, the so-called ω-site (by means of a Hidden Markov Model, HMM). The method, called GPIPE, was shown to greatly improve the prediction performance for GPI-anchored proteins over all previously developed methods. GPIPE predicted up to 88% of the experimentally annotated GPI-anchored proteins while maintaining a false positive rate as low as 0.1%. GPIPE was used to completely annotate 81 eukaryotic genomes, and more than 15000 putative GPI-anchored proteins were predicted, 561 of which in H. sapiens. On average, 1% of a proteome is predicted as GPI-anchored. A statistical analysis was performed on the composition of the regions surrounding the ω-site, which allowed the definition of specific amino acid abundances in the different regions considered. Furthermore, the hypothesis, proposed in the literature, that compositional biases are present among the four major eukaryotic kingdoms was tested and rejected. All the developed predictors and databases are freely available at: BaCelLo http://gpcr.biocomp.unibo.it/bacello eSLDB http://gpcr.biocomp.unibo.it/esldb GPIPE http://gpcr.biocomp.unibo.it/gpipe
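The abstract does not give implementation details of the Balanced SVM, but the core idea of counteracting training-set bias can be illustrated with class weights inversely proportional to class frequency, so that each localization class contributes equally to the training loss. A minimal sketch under that assumption (the labels are hypothetical):

```python
from collections import Counter

def balanced_class_weights(labels):
    """Per-class weights w_c = N / (K * n_c): each of the K classes
    then contributes the same total weight N/K to the training loss,
    regardless of how over- or under-represented it is."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * n_c) for c, n_c in counts.items()}

# Hypothetical imbalanced training set of localization labels:
labels = ["cytoplasm"] * 6 + ["nucleus"] * 3 + ["mitochondrion"] * 1
weights = balanced_class_weights(labels)
# The rare class gets the largest weight, the common one the smallest.
```

With such weights, a predictor no longer gains by systematically guessing the majority class, which is the over-prediction behaviour the thesis attributes to unbalanced training sets.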
Abstract:
This study analysed rural landscape changes. In particular, it focuses on understanding the driving forces acting on the rural built environment using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision making in land planning. A literature review revealed a general lack of studies dealing with the modelling of the rural built environment; hence a theoretical modelling approach for this purpose is needed. Advances in technology and modern practices in building construction and agriculture have gradually changed the rural built environment. In addition, urbanization has driven the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two main types of transformation dynamics affecting the rural built environment can be observed: the conversion of rural buildings and the increase in the number of buildings. The specific aim of this study is to propose a methodology for developing a spatial model that allows the identification of the driving forces acting on building allocation. Indeed, one of the most concerning dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology comprises several conceptual steps covering the different aspects of developing a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for data collection, the most suitable algorithm to adopt in relation to the statistical theory and methods used, and the calibration and evaluation of the model.
Different combinations of factors in various parts of the territory generated more or less favourable conditions for building allocation, and the existence of buildings is evidence of such suitability. Conversely, the absence of buildings expresses a combination of agents not suitable for building allocation. Presence or absence of buildings can therefore be adopted as indicators of these driving conditions, since they represent the expression of the action of the driving forces in the land suitability sorting process. The existence of a correlation between site selection and hypothetical driving forces, evaluated by means of modelling techniques, provides evidence of which driving forces are involved in the allocation dynamic and insight into their level of influence on the process. GIS software, by means of spatial analysis tools, allows the concepts of presence and absence to be associated with point features, generating a point process. Presence or absence of buildings at given site locations represents the expression of the interaction of these driving factors. In the case of presences, points represent the locations of real existing buildings; conversely, absences represent locations where buildings do not exist and are therefore generated by a stochastic mechanism. Possible driving forces are selected, and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for explanatory variable analysis and for the identification of the key driving variables behind the site selection process for new building allocation. The model developed by following the methodology is applied to a case study to test the validity of the methodology. In particular, the study area chosen for testing the methodology is the New District of Imola, characterized by a prevailing agricultural production vocation and where transformation dynamics have occurred intensively.
The development of the model involved the identification of predictive variables (related to the geomorphologic, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The calibration of the model was carried out on spatial data regarding the periurban and rural parts of the study area within the 1975-2005 time period by means of a generalized linear model. The output of the fitted model is a continuous grid surface whose cells take values ranging from 0 to 1, representing the probability of building occurrence across the rural and periurban parts of the study area. Hence the response variable assesses the changes in the rural built environment that occurred in this time interval and is correlated to the selected explanatory variables by means of a generalized linear model using logistic regression. Comparing the probability map obtained from the model with the actual rural building distribution in 2005, the interpretive capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends in other study areas, and over different time intervals, depending on the availability of data. The use of suitable data in terms of time, information, and spatial resolution, and the costs related to data acquisition, pre-processing, and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short/medium-range future scenarios for the rural built environment distribution in the study area. In order to predict future scenarios it is necessary to assume that the driving forces do not change and that their levels of influence within the model are not far from those assessed for the calibration time interval.
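The model described above is a logistic-regression GLM relating building presence/absence to candidate driving forces. A minimal self-contained sketch, using a single hypothetical covariate (distance to the nearest road) and synthetic presence/absence points rather than the study's actual 1975-2005 data:

```python
import math
import random

def fit_logistic(xs, ys, lr=0.1, epochs=3000):
    """Fit P(presence = 1 | x) = sigmoid(b0 + b1*x) by gradient ascent
    on the log-likelihood: a minimal stand-in for a GLM with logit link."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p
            g1 += (y - p) * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Hypothetical covariate: distance to the nearest road (km).
# Synthetic data: buildings (1) are more likely near roads.
random.seed(42)
xs = [random.uniform(0.0, 5.0) for _ in range(300)]
ys = [1 if random.random() < 1.0 / (1.0 + math.exp(-(2.0 - 1.5 * x))) else 0
      for x in xs]
b0, b1 = fit_logistic(xs, ys)

def predict(x):
    """Probability of building occurrence at covariate value x."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
```

Evaluated over a grid of cells, `predict` yields the 0-to-1 probability surface described above; the real model of course uses many covariates and observed building locations.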
Abstract:
The cathepsin enzymes represent an important family of lysosomal proteinases with a broad spectrum of functions in many, if not all, tissues and cell types. In addition to their primary role in normal protein turnover, they possess highly specific proteolytic activities, including antigen processing in the immune response and a direct role in the development of obesity and tumours. In pigs, the involvement of cathepsin enzymes in proteolytic processes has important effects during the conversion of muscle to meat, due to their influence on meat texture and sensory characteristics, mainly in seasoned products. Their contribution is fundamental to flavour development in dry-cured hams. However, several authors have demonstrated that high cathepsin activity, in particular of cathepsin B, is correlated with defects of these products, such as excessive meat softness together with abnormal free tyrosine content, astringent or metallic aftertastes and the formation of a white film on the cut surface. Thus, investigation of their genetic variability could be useful to identify DNA markers associated with these dry-cured ham parameters, but also with meat quality, production and carcass traits in Italian heavy pigs. Unfortunately, no association has been found so far between cathepsin markers and meat quality traits, in particular cathepsin B activity, suggesting that other genes besides these affect meat quality parameters. Nevertheless, significant associations were observed with several carcass and production traits in pigs. A recent study demonstrated that different single nucleotide polymorphisms (SNPs) localized in the cathepsin D (CTSD), F (CTSF), H and Z genes were highly associated with growth, fat deposition and production traits in an Italian Large White pig population.
The aim of this thesis was to confirm some of these results in other pig populations and to identify new cathepsin markers in order to evaluate their effects on cathepsin activity and other production traits. Furthermore, starting from the data obtained in previous studies on the CTSD gene, we also analyzed the known polymorphism located in the insulin-like growth factor 2 gene (IGF2 intron3-g.3072G>A). This marker is considered the causative mutation for the quantitative trait loci (QTL) affecting muscle mass and fat deposition in pigs. Since IGF2 maps very close to CTSD on porcine chromosome (SSC) 2, we wanted to clarify whether the effects of the CTSD marker were due to linkage disequilibrium with the IGF2 intron3-g.3072G>A mutation. In the first chapter, we report the results for these two SSC2 gene markers. First of all, we evaluated the effects of the IGF2 intron3-g.3072G>A polymorphism in the Italian Large White breed, in which this marker had not previously been analysed. Highly significant associations were identified with all estimated breeding values for production and carcass traits (P<0.00001), while no effects were observed for meat quality traits. In contrast, the IGF2 intron3-g.3072G>A mutation did not show any association with the analyzed traits in Italian Duroc pigs, probably due to the low level of variability at this polymorphic site in this breed. In the same Duroc pig population, significant associations were obtained for the CTSD marker with all production and carcass traits (P < 0.001), after excluding possible confounding effects of the IGF2 mutation. The effects of the CTSD g.70G>A polymorphism were also confirmed in a group of Italian Large White pigs homozygous for the IGF2 intron3-g.3072G allele (IGF2 intron3-g.3072GG) and by haplotype analysis between the markers of the two genes considered.
Taken together, these data indicated that the IGF2 intron3-g.3072G>A mutation is not the only polymorphism affecting fatness and muscle deposition in pigs. In the second chapter, we report the analysis of two new SNPs identified in the cathepsin L (CTSL) and cathepsin S (CTSS) genes and the association results with meat quality parameters (including cathepsin B activity) and several production traits in an Italian Large White pig population. Allele frequencies of these two markers were evaluated in 7 different pig breeds. Furthermore, we mapped the CTSS gene to SSC4 using a radiation hybrid panel. Association studies with several production traits, carried out in 268 Italian Large White pigs, indicated positive effects of the CTSL polymorphism on average daily gain, weight of lean cuts and backfat thickness (P<0.05). The results for the latter traits were also confirmed using a selective genotyping approach in other Italian Large White pigs (P<0.01). In the group of 268 pigs, the CTSS polymorphism was associated with feed:gain ratio and average daily gain (P<0.05). No association was observed between the analysed markers and meat quality parameters. Finally, we wanted to verify whether the positive results obtained for the cathepsin L and S markers and for other previously identified SNPs (cathepsin F, cathepsin Z and their inhibitor cystatin B) were confirmed in the Italian Duroc pig breed (third chapter). We analysed them in two groups of Duroc pigs: the first consisted of 218 performance-tested pigs not selected by any phenotypic criterion; the second consisted of 100 Italian Duroc pigs extreme and divergent for the visible intermuscular fat trait. In the first group, the CTSL polymorphism was associated with weight of lean cuts (P<0.05), while suggestive associations were obtained for average daily gain and backfat thickness (P<0.10). Allele frequencies of the CTSL gene marker also differed between the extreme tails for visible intermuscular fat.
In contrast, no positive effects were observed for the other DNA markers on the analysed traits. In conclusion, in agreement with the present data and with the biological role of these enzymes, the porcine CTSD and CTSL markers either a) have a direct effect on the biological mechanisms involved in determining fat and lean meat content in pigs, or b) are very close to putative functional mutation(s) present in other genes. These findings have important practical applications; in particular, the CTSD and CTSL mutations could be applied in marker-assisted selection (MAS) in both the Italian Large White and Italian Duroc breeds. The efficiency of marker-assisted selection could also be increased by adding information from the cathepsin S genotype, but only in the Italian Large White breed.
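The single-marker association analyses described in these chapters were run with dedicated statistical models on the real phenotype data; the basic additive model behind them can be illustrated as an ordinary least-squares regression of a trait on allele dosage (0, 1 or 2 copies of one allele). A sketch with hypothetical backfat values:

```python
def dosage_regression(dosages, phenotypes):
    """Ordinary least squares of a phenotype on allele dosage (0/1/2):
    the slope estimates the additive effect of each extra allele copy."""
    n = len(dosages)
    mx = sum(dosages) / n
    my = sum(phenotypes) / n
    sxx = sum((x - mx) ** 2 for x in dosages)
    sxy = sum((x - mx) * (y - my) for x, y in zip(dosages, phenotypes))
    slope = sxy / sxx
    return my - slope * mx, slope  # intercept, additive effect

# Hypothetical backfat thickness (mm) for 9 pigs carrying
# 0, 1 or 2 copies of one marker allele (NOT real study data).
dosages = [0, 0, 0, 1, 1, 1, 2, 2, 2]
backfat = [20.0, 21.0, 19.0, 22.0, 23.0, 22.5, 24.0, 25.0, 24.5]
intercept, effect = dosage_regression(dosages, backfat)
```

A non-zero slope, tested against its standard error, is what an association such as "the CTSL polymorphism affects backfat thickness" amounts to in this additive setting.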
Abstract:
Recently, an ever-increasing degree of automation has been observed in most industrial processes. This increase is motivated by the demand for systems with high performance in terms of the quality of the products/services generated, productivity, efficiency and low costs in design, realization and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of mechatronics, is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as consisting of a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda, or buy boxed products such as food or cigarettes. The complexity of the sector is also evident from the fact that the consortium of machine producers has estimated around 350 types of manufacturing machine. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; in particular, a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. This is often the case in large-scale systems organized in a modular and distributed manner.
Even if the success of a modern AMS from a functional and behavioural point of view is still attributable to the design choices made in defining the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties assigned to it. Apart from the activities inherent in the automation of the machine cycles, the supervisory system is called on to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and crucial functional flexibility; dynamically adapting the control strategies to the different productive needs and operational scenarios; obtaining high quality of the final product through verification of the correctness of the processing; guiding the machine operator to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing diagnostic information in real time, as a support for machine maintenance operations. The facilities that designers can find directly on the market, in terms of software component libraries, in fact provide adequate support for the implementation of both top-level and bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices. What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, by focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in modelling and structuring their applications according to their specific needs.
Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method for dealing organically with the complete system. Traditionally, in the field of analog and digital control, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different and usually very "unstructured" way. No clear distinction is made between functions and implementations, or between functional architectures and technological architectures and platforms. This difference is probably due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to clarify the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to deep confusion between the functional and technological views. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies.
Industrial automation has lately been receiving this approach, as testified by IEC standards such as IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, many contributions have already been proposed in the scientific and technical literature to establish a suitable modelling framework for industrial automation. In recent years a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems could be noted. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also deal with other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems. This is a consequence of the fact that in complex systems such as AMS, as typically occurs in reliable mechatronic systems, an increasing number of electronic devices are present alongside reliable mechanical elements, and these devices are inherently more vulnerable. The diagnosis and fault isolation problem in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance.
The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, important improvements to formal verification of logic control, fault diagnosis and fault tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture that help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2 a survey on the state of the software engineering paradigms applied to industrial automation is discussed. Chapter 3 presents an architecture for industrial automated systems based on the new concept of Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to achieve better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault tolerant control architecture using online diagnostics. Finally, conclusive remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approach presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
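The Discrete Event Systems view mentioned above models a plant as states connected by event-labelled transitions, so that verification questions (e.g. "can a fault state ever be reached under the supervisor?") reduce to reachability over a finite automaton. The following sketch illustrates that idea only; the automaton, its state names and the `reachable` helper are our own illustrative inventions, not models or code from the thesis.

```python
# Minimal Discrete Event System reachability check: can the plant
# reach a faulty state when only the supervisor-enabled events fire?
# The toy automaton below is purely illustrative.

def reachable(transitions, start, enabled_events):
    """Depth-first search over states reachable from `start` via enabled events."""
    seen, frontier = {start}, [start]
    while frontier:
        state = frontier.pop()
        for (src, event), dst in transitions.items():
            if src == state and event in enabled_events and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

# Toy plant: an actuator that can jam; the supervisor disables 'force'.
transitions = {
    ("idle", "start"): "moving",
    ("moving", "stop"): "idle",
    ("moving", "force"): "jammed",   # fault transition
}

safe = reachable(transitions, "idle", {"start", "stop"})
print("jammed" in safe)  # → False: the fault state is unreachable
```

Re-running the search with "force" enabled would reach "jammed", which is the kind of property a formal verification tool checks automatically on much larger models.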
Abstract:
Among the experimental methods commonly used to characterize the behaviour of a full-scale system, dynamic tests are the most complete and efficient procedures. A dynamic test is an experimental process that determines a set of characteristic parameters of the dynamic behaviour of the system, such as the natural frequencies of the structure, the mode shapes and the corresponding modal damping values. An assessment of these modal characteristics can be used both to verify the theoretical assumptions of the project and to monitor the performance of the structural system during its operational use. The thesis is structured in the following chapters. The first, introductory chapter recalls some basic notions of dynamics of structures, focusing the discussion on systems with multiple degrees of freedom (MDOF), which can represent a generic real system under study when it is excited with a harmonic force or in free vibration. The second chapter is entirely centred on the dynamic identification of a structure subjected to an experimental test in forced vibration. It first describes the construction of the FRF through the classical FFT of the recorded signal. A different method, also in the frequency domain, is subsequently introduced; it allows the FRF to be computed accurately from the geometric characteristics of the ellipse that represents the direct input-output comparison. The two methods are compared and then the attention is focused on some advantages of the proposed methodology. The third chapter focuses on the study of real structures subjected to experimental tests where the force is not known, as in an ambient or impact test. In this analysis we decided to use the CWT, which allows a simultaneous investigation in the time and frequency domains of a generic signal x(t). The CWT is first introduced to process free oscillations, with excellent results in terms of frequencies, dampings and vibration modes.
The application to the case of ambient vibrations yields accurate modal parameters of the system, although some important observations should be made regarding the damping. The fourth chapter still deals with the post-processing of data acquired after a vibration test, but this time through the application of the discrete wavelet transform (DWT). In the first part the results obtained by the DWT are compared with those obtained by the application of the CWT. Particular attention is given to the use of the DWT as a tool for filtering the recorded signal; in fact, in the case of ambient vibrations the signals are often affected by a significant level of noise. The fifth chapter focuses on another important aspect of the identification process: model updating. In this chapter, starting from the modal parameters obtained from environmental vibration tests performed by the University of Porto in 2008 and by the University of Sheffield on the Humber Bridge in England, a FE model of the bridge is defined, in order to determine which type of model captures the real dynamic behaviour of the bridge most accurately. The sixth chapter outlines the conclusions of the presented research. They concern the application of a frequency-domain method to evaluate the modal parameters of a structure and its advantages, the advantages of applying a procedure based on wavelet transforms in the identification process for tests with unknown input and, finally, the problem of 3D modelling of systems with many degrees of freedom and different types of uncertainty.
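The basic identification task the chapters above revolve around can be sketched on the simplest possible case: a single-degree-of-freedom free decay, whose natural frequency is read from the FFT peak and whose damping ratio follows from the logarithmic decrement of successive peaks. This is a minimal illustrative sketch, not the thesis methodology (which uses the FRF ellipse and wavelet transforms); all numerical values are made up for the example.

```python
import numpy as np

# Synthetic single-DOF free-decay response with assumed parameters.
fs = 1000.0                      # sampling rate [Hz] (assumption)
t = np.arange(0, 10, 1 / fs)
fn, zeta = 5.0, 0.02             # "true" natural frequency [Hz] and damping ratio
wn = 2 * np.pi * fn
wd = wn * np.sqrt(1 - zeta**2)   # damped frequency
x = np.exp(-zeta * wn * t) * np.sin(wd * t)

# Natural frequency from the FFT peak of the recorded signal.
X = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
f_est = freqs[np.argmax(X)]

# Damping ratio from the logarithmic decrement over 10 successive peaks.
peaks = [i for i in range(1, len(x) - 1) if x[i] > x[i - 1] and x[i] > x[i + 1]]
delta = np.log(x[peaks[0]] / x[peaks[10]]) / 10
zeta_est = delta / np.sqrt(4 * np.pi**2 + delta**2)

print(f_est, zeta_est)
```

With clean data both estimates land very close to the true values; ambient-vibration records add noise and unknown input, which is precisely why the thesis moves to CWT/DWT-based processing.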
Abstract:
Over the past years the fruit and vegetable industry has become interested in the application of both osmotic dehydration and vacuum impregnation as mild technologies because of their low temperature and energy requirements. Osmotic dehydration is a partial dewatering process based on the immersion of cellular tissue in a hypertonic solution. The diffusion of water from the vegetable tissue to the solution is usually accompanied by a simultaneous counter-diffusion of solutes into the tissue. Vacuum impregnation is a unit operation in which porous products are immersed in a solution and subjected to a two-step pressure change. In the first step (vacuum increase) the pressure in the solid-liquid system is reduced and the gas in the product pores expands, partially flowing out. When the atmospheric pressure is restored (second step), the residual gas in the pores is compressed and the external liquid flows into the pores. This unit operation allows specific solutes to be introduced into the tissue, e.g. antioxidants, pH regulators, preservatives and cryoprotectants. Fruits and vegetables interact dynamically with the environment, and the present study attempts to enhance our understanding of the structural, physico-chemical and metabolic changes of plant tissues upon the application of technological processes (osmotic dehydration and vacuum impregnation) by following a multianalytical approach. Macro (low-frequency nuclear magnetic resonance), micro (light microscopy) and ultrastructural (transmission electron microscopy) measurements, combined with textural and differential scanning calorimetry analyses, made it possible to evaluate the effects of individual osmotic dehydration or vacuum impregnation processes on (i) the interaction between air and liquid in real plant tissues, (ii) the water state of the plant tissue and (iii) the cell compartments.
Isothermal calorimetry, respiration and photosynthesis determinations made it possible to investigate the metabolic changes upon the application of osmotic dehydration or vacuum impregnation. The proposed multianalytical approach should enable both better designs of processing technologies and better estimations of their effects on the tissue.
Abstract:
Italy is registering a fast increase in its low-income population. Academics and policy makers consider income inequality a key determinant of low or inadequate healthy food consumption. The objective is thus to understand how to overcome the barriers in the agrofood chain to healthy food production, commercialisation and consumption for the population at risk of poverty (ROP) in Italy. The study adopts a market-oriented food chain approach, focusing the research on ROP consumers, processing industries and retailers. The empirical investigation adopts a qualitative methodology with an explorative approach. The actors are investigated through 4 focus groups for consumers and 27 face-to-face semi-structured interviews for industry and retailer representatives. The results provide the perceptions of each actor, integrated into an overall chain approach. The analysis shows that all agrofood actors lack an adequate level of knowledge of what constitutes healthy food. Food industries and retailers also show poor awareness of the ROP consumer segment. In addition, they perceive that the high costs of producing healthy food conflict with the low economic performance expected from the ROP consumer segment. These aspects induce scarce interest in investing in commercialisation strategies for healthy food for ROP consumers. Furthermore, ROP consumers show other notable barriers to adopting healthy diets caused, among other things, by a strong negative personal attitude and a lack of motivation. These personal barriers are also negatively influenced by several external socio-economic factors. The solutions to overcome the barriers should rely on improving the internal relations of the agrofood chain in order to identify successful strategies for increasing interest in low-cost healthy food. In particular, the focus should be on improved collaboration on innovation adoption and marketing strategies, considering ROP consumers' preferences and needs.
External political intervention is instead necessary to fill the gaps in knowledge and regulation on healthy food issues.
Abstract:
Modern embedded systems embrace many-core shared-memory designs. Due to constrained power and area budgets, most of them feature software-managed scratchpad memories instead of data caches to increase data locality. It is therefore the programmer's responsibility to explicitly manage the memory transfers, and this makes programming these platforms cumbersome. Moreover, complex modern applications must be adequately parallelized before they can turn the parallel potential of the platform into actual performance. To support this, programming languages were proposed that work at a high level of abstraction and rely on a runtime whose cost hinders performance, especially in embedded systems, where resources and power budgets are constrained. This dissertation explores the applicability of the shared-memory paradigm on modern many-core systems, focusing on ease of programming. It focuses on OpenMP, the de-facto standard for shared-memory programming. In the first part, the cost of algorithms for synchronization and data partitioning is analyzed, and these algorithms are adapted to modern embedded many-cores. Then, the original design of an OpenMP runtime library is presented, which supports complex forms of parallelism such as multi-level and irregular parallelism. In the second part of the thesis, the focus is on heterogeneous systems, where hardware accelerators are coupled to (many-)cores to implement key functional kernels with orders-of-magnitude speedups and energy-efficiency gains compared to the “pure software” version. However, three main issues arise, namely i) platform design complexity, ii) architectural scalability and iii) programmability. To tackle them, a template for a generic hardware processing unit (HWPU) is proposed, which shares the memory banks with the cores, and the template for a scalable architecture is shown, which integrates the HWPUs through the shared-memory system.
Then, a full software stack and toolchain are developed to support platform design and to let programmers exploit the accelerators of the platform. The OpenMP frontend is extended to interact with them.
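One of the data-partitioning algorithms mentioned above, static loop scheduling, can be illustrated compactly: a parallel loop of N iterations is split into one contiguous chunk per thread, as an OpenMP `schedule(static)` clause does. The sketch below shows only the partitioning arithmetic; it is our own illustration in Python, not code from the dissertation's runtime library.

```python
# Illustrative static loop partitioning: N iterations are divided into
# contiguous, nearly equal chunks, one per thread, mirroring what an
# OpenMP "schedule(static)" runtime computes before the threads start.

def static_partition(n_iters, n_threads):
    """Return half-open (start, end) iteration ranges, one per thread."""
    base, extra = divmod(n_iters, n_threads)
    ranges, start = [], 0
    for tid in range(n_threads):
        size = base + (1 if tid < extra else 0)  # spread the remainder
        ranges.append((start, start + size))
        start += size
    return ranges

print(static_partition(10, 4))  # → [(0, 3), (3, 6), (6, 8), (8, 10)]
```

The appeal of this scheme on embedded many-cores is that each thread computes its own range from (n_iters, n_threads, tid) alone, with no shared work queue and hence no synchronization cost per iteration.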
Abstract:
The study defines a new farm classification and identifies arable land managements. These aspects and several indicators are taken into account to estimate the sustainability level of farms under both organic and conventional regimes. The data source is the Italian Farm Accountancy Data Network (RICA) for the years 2007-2011, which samples structural and economic information. Environmental data have been added to the previous ones to better describe the farm context. The new farm classification describes holdings by general information and farm structure. The general information comprises the adopted regime and the farm location in terms of administrative region, slope and phyto-climatic zone. The farm structures describe the presence of the main productive processes and land covers recorded in the FADN database. The farms, grouped by homogeneous farm structure or farm typology, are evaluated in terms of sustainability. The MAD farm model has been used to estimate a list of indicators, which mainly describe the environmental and economic areas of sustainability. Finally, arable lands are taken into account to identify arable land managements and crop rotations. Each arable land has been classified by crop pattern; then crop rotation management has been analysed through spatial and temporal approaches. The analysis reports a high variability within regimes. The farm structure influences indicator levels more than the regime does, and it is not always possible to compare the two regimes. However, some differences between organic and conventional agriculture have been found. Organic farm structures report different frequencies and geographical locations than conventional ones. Different connections among arable lands and farm structures have also been identified.
Abstract:
In Cystic Fibrosis (CF), the deletion of phenylalanine 508 (F508del) in the CFTR anion channel is associated with misfolding and defective gating of the mutant protein. Among the known proteins involved in CFTR processing, one of the most promising drug targets is the ubiquitin ligase RNF5, which normally promotes F508del-CFTR degradation. In this context, a small-molecule RNF5 inhibitor is expected to chemically mimic a condition of RNF5 silencing, thus preventing mutant CFTR degradation and causing its stabilization and plasma membrane trafficking. Hence, through a virtual screening (VS) campaign, the hit compound inh-2 was discovered as the first-in-class inhibitor of RNF5. Evaluation of inh-2 efficacy on CFTR rescue showed that it efficiently decreases ubiquitination of mutant CFTR and increases chloride current in human primary bronchial epithelia. Based on the promising biological results obtained with inh-2, this thesis reports the structure-based design of potential RNF5 inhibitors with improved potency and efficacy. The optimization of general synthetic strategies gave access to a library of analogues of the 1,2,4-thiadiazol-5-ylidene inh-2 for SAR investigation. The new analogues were tested for their corrector activity in CFBE41o- cells using the microfluorimetric HS-YFP assay as a primary screen. Then, the effect of the putative RNF5 inhibitors on proliferation, apoptosis and the formation of autophagic vacuoles was evaluated. Some of the new analogues significantly increased the basal level of autophagy, reproducing the RNF5-silencing effect in cells. Among them, one compound also displayed a greater rescue of the F508del-CFTR trafficking defect than inh-2. Our preliminary results suggest that the 1,2,4-thiadiazolylidene could be a suitable scaffold for the discovery of RNF5 inhibitors able to rescue mutant CFTRs.
Biological tests are still ongoing to acquire in-depth knowledge about the mechanism of action and therapeutic relevance of this unprecedented pharmacological strategy.