32 results for Definition of cuisine
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Aberrant expression of ETS transcription factors, including FLI1 and ERG, due to chromosomal translocations has been described as a driver event in the initiation and progression of different tumors. In this study, the impact of the prostate cancer (PCa) fusion gene TMPRSS2-ERG was evaluated on components of the insulin-like growth factor (IGF) system and on the CD99 molecule, two well documented targets of EWS-FLI1, the hallmark of Ewing sarcoma (ES). The aim of this study was to identify common or distinctive ETS-related mechanisms which could be exploited at the biological and clinical level. The results demonstrate that IGF-1R represents a common target of ETS rearrangements, as ERG and FLI1 bind the IGF-1R gene promoter and their modulation causes alterations in IGF-1R protein levels. At the clinical level, this mechanism provides a basis for a more rational use of anti-IGF-1R inhibitors, as PCa cells expressing the fusion gene respond better to anti-IGF-1R agents. The EWS-FLI1/IGF-1R axis provides a rationale for combining anti-IGF-1R agents with trabectedin, an alkylating agent causing enhanced EWS-FLI1 occupancy on the IGF-1R promoter. TMPRSS2-ERG also influences the prognostic relevance of the IGF system, as high IGF-1R correlates with a better biochemical progression-free survival (BPFS) in PCa patients negative for the fusion gene, while marginal or no association was found in the total cases or in TMPRSS2-ERG-positive cases, respectively. This study indicates that CD99 is differentially regulated between ETS-related tumors, as CD99 is not a target of ERG. In PCa, CD99 did not show differential expression between TMPRSS2-ERG-positive and -negative cells. A direct correlation was nevertheless found between ERG and CD99 proteins, both in vitro and in patients, putatively suggesting that ERG target genes include regulators of CD99. Despite a slight trend suggesting a correlation between CD99 expression and a better BPFS, no clinical relevance for CD99 was found in the field of prognostic biomarkers.
Abstract:
The aim of this research study is to explore the opportunity to set up Performance Objective (PO) parameters for specific risks in RTE products, to be proposed to food industries and food authorities. In fact, even if microbiological criteria for Salmonella and Listeria monocytogenes in Ready-to-Eat (RTE) products are included in the European Regulation, these parameters are not risk based, and no microbiological criterion for Bacillus cereus in RTE products is present. For these reasons, the behaviour of Salmonella enterica in RTE mixed salad, the microbiological characteristics of RTE spelt salad, and the definition of POs for Bacillus cereus and Listeria monocytogenes in RTE spelt salad have been assessed. Based on the data produced, the following conclusions can be drawn: 1. A rapid growth of Salmonella enterica may occur in mixed-ingredient salads, and strict temperature control during the production chain of the product is critical. 2. Spelt salad is characterized by the presence of high numbers of Lactic Acid Bacteria (LAB). Listeria spp. and Enterobacteriaceae, on the contrary, did not grow during the shelf life, probably due to the relevant metabolic activity of LAB. 3. The use of spelt and cheese compliant with the suggested POs might significantly reduce the incidence of foodborne intoxications due to Bacillus cereus and Listeria monocytogenes, as well as the proportion of recalls, which cause huge economic losses for food companies commercializing RTE products. 4. The approach used to calculate the PO values reported in my work can be easily adapted to different food/risk combinations, as well as to any changes in the formulation of the same food products. 5. Optimized sampling plans, in terms of the number of samples to collect, can be derived in order to verify compliance with the selected PO values.
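As an illustration of the last point, the number of samples needed to verify compliance with a PO can be sketched with a simple zero-acceptance binomial sampling calculation; the prevalence and confidence values below are placeholders, not figures from the thesis.

```python
import math

def samples_for_detection(defect_prevalence: float, confidence: float) -> int:
    """Number of samples needed to detect at least one non-compliant unit
    with the given confidence, under a zero-acceptance (c = 0) plan and a
    lot large enough for the binomial approximation to hold."""
    # P(detect at least one) = 1 - (1 - p)^n >= confidence
    n = math.log(1.0 - confidence) / math.log(1.0 - defect_prevalence)
    return math.ceil(n)

# Illustrative values only: 5% of units exceeding the PO, 95% confidence.
print(samples_for_detection(0.05, 0.95))   # -> 59
```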
Abstract:
The subject of this doctoral dissertation is the definition of a new methodology for the morphological and morphometric study of fossilized human teeth, and it therefore strives to contribute to the reconstruction of human evolutionary history with an approach intended to extend to the different species of fossil hominids. Standardized investigative methodologies are lacking both as regards the orientation of the teeth under study and as regards the analyses that can be carried out on these teeth once they are oriented. The opportunity to standardize a primary analysis methodology is furnished by the study of certain early Neanderthal and pre-Neanderthal molars recovered in two caves in southern Italy [Grotta Taddeo (Taddeo Cave) and Grotta del Poggio (Poggio Cave), near Marina di Camerata, Campania]. To these we can add other molars of Neanderthals and of modern man of the upper Paleolithic era, specifically scanned in the paleoanthropology laboratory of the University of Arkansas (Fayetteville, Arkansas, USA), in order to increase the paleoanthropological sample and thereby make the final results of the analyses more significant. The new analysis methodology is structured as follows: 1. Standardization of an orientation system for first molars (superior and inferior), starting from a scan of a sample of 30 molars belonging to modern man (15 inferior M1 and 15 superior M1), the definition of landmarks, the comparison of various systems and the choice of an orientation system for each of the two dental typologies. 2. The definition of an analysis procedure that considers only the first 4 millimeters of the dental crown starting from the collar: five sections parallel to the orientation plane are taken, spaced 1 millimeter apart. The intention is to determine a method that allows for the differentiation of fossil species even in the presence of worn teeth. 3. Results and Conclusions. The new approach to the study of teeth provides a considerable quantity of information that can be better evaluated by increasing the fossil sample. It has been demonstrated to be a valid tool in evolutionary classification and has allowed us to differentiate the Neanderthal sample from that of modern man. In particular, the molars of Grotta Taddeo, whose species of origin it had not previously been possible to determine with exactness, are classified through the present research as Neanderthal.
Abstract:
This thesis deals with two important research aspects concerning radio frequency (RF) microresonators and switches. First, a new approach for compact modeling and simulation of these devices is presented. Then, a combined process flow for their simultaneous fabrication on an SOI substrate is proposed. Compact models for microresonators and switches are extracted by applying mathematical model order reduction (MOR) to the finite element (FE) description of the devices in ANSYS. The behaviour of these devices includes forms of nonlinearity. However, an approximation in the creation of the FE model is introduced, which enables the use of linear model order reduction (a generic sketch of such a projection is given after this abstract). Microresonators are modeled with the introduction of transducer elements, which allow for direct coupling of the electrical and mechanical domains. The element matrices of the coupled system are linearized around an operating point and reduced. The resulting macromodel is valid for small-signal analysis around the bias point, such as harmonic pre-stressed analysis. This is extremely useful for characterizing the frequency response of resonators. Compact modelling of switches preserves the nonlinearity of the device behaviour. Nonlinear reduced order models are obtained by reducing the number of nonlinearities in the system and handling them as inputs to the system. In this way, the system can be reduced using linear MOR techniques and the nonlinearities are introduced directly in the reduced order model. The reduction of the number of system nonlinearities implies the approximation of all distributed forces in the model with lumped forces. Both for microresonators and for switches, a procedure for matrix extraction has been developed so that the reduced order models include the effects of electrical and mechanical pre-stress. The extraction process is fast and can be done automatically from ANSYS binary files. The method has been applied to the simulation of several devices, both at device and at circuit level. Simulation results have been compared with full model simulations and, when available, with experimental data. Reduced order models have proven to preserve the accuracy of the finite element method and to give a good description of the overall device behaviour, despite the introduced approximations. In addition, simulation is very fast, both at device and circuit level. A combined process flow for the integrated fabrication of microresonators and switches has been defined. For this purpose, two processes that are optimized for the independent fabrication of these devices are merged. The major advantage of this process is the possibility to create on-chip circuit blocks that include both microresonators and switches. An application is, for example, a switched filter bank for a wireless transceiver. The process for microresonator fabrication is characterized by the use of silicon-on-insulator (SOI) wafers, by a deep reactive ion etching (DRIE) step for the creation of the vibrating structures in single-crystal silicon, and by the use of a sacrificial oxide layer for the definition of the resonator-to-electrode distance. The fabrication of switches is characterized by the use of two different conductive layers for the definition of the actuation electrodes and by the use of a photoresist as a sacrificial layer for the creation of the suspended structure. Both processes have a gold electroplating step for the creation of the resonator electrodes, transmission lines and suspended structures.
The combined process flow is designed such that it conserves the basic properties of the original processes. Neither the performance of the resonators nor that of the switches is affected by the simultaneous fabrication. Moreover, common fabrication steps are shared, which allows for cheaper and faster fabrication.
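The linear model order reduction step mentioned above can be illustrated, in a generic way, by a one-sided Krylov (moment-matching) projection of a linearized state-space model; this is only a sketch of the general technique under simplifying assumptions, not the specific ANSYS-based extraction procedure developed in the thesis.

```python
import numpy as np
from scipy.linalg import orth, solve

def krylov_reduce(A, B, C, r):
    """Project a linear state-space model (A, B, C) onto an r-dimensional
    Krylov subspace span{A^-1 B, A^-2 B, ...}, matching the first moments
    of the transfer function at s = 0 (one-sided moment matching)."""
    v = solve(A, B)                      # A^-1 B
    basis = [v]
    for _ in range(r - 1):
        v = solve(A, v)                  # next Krylov vector
        basis.append(v)
    V = orth(np.hstack(basis))           # orthonormal projection basis
    Ar = V.T @ A @ V                     # reduced system matrices
    Br = V.T @ B
    Cr = C @ V
    return Ar, Br, Cr, V

# Illustrative use on a small random stable system (placeholder matrices).
n = 50
A = -np.eye(n) + 0.01 * np.random.default_rng(0).normal(size=(n, n))
B = np.ones((n, 1)); C = np.ones((1, n))
Ar, Br, Cr, V = krylov_reduce(A, B, C, r=5)
print(Ar.shape)   # reduced 5 x 5 system
```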
Abstract:
The continuous increase of genome sequencing projects has produced a huge amount of data in the last 10 years: currently more than 600 prokaryotic and 80 eukaryotic genomes are fully sequenced and publicly available. However, the sequencing process alone is able to determine just the raw nucleotide sequences. This is only the first step of the genome annotation process, which deals with the issue of assigning biological information to each sequence. The annotation process is carried out at each level of the biological information processing mechanism, from DNA to protein, and cannot be accomplished by in vitro analysis procedures alone, which are extremely expensive and time consuming when applied at such a large scale. Thus, in silico methods need to be used to accomplish the task. The aim of this work was the implementation of predictive computational methods to allow a fast, reliable, and automated annotation of genomes and proteins starting from amino acid sequences. The first part of the work was focused on the implementation of a new machine learning based method for the prediction of the subcellular localization of soluble eukaryotic proteins. The method is called BaCelLo, and was developed in 2006. The main peculiarity of the method is to be independent of biases present in the training dataset, which cause the over-prediction of the most represented examples in all the other predictors developed so far. This important result was achieved by a modification, made by myself, to the standard Support Vector Machine (SVM) algorithm, with the creation of the so-called Balanced SVM (a generic sketch of the class-balancing principle is given after this abstract). BaCelLo is able to predict the most important subcellular localizations in eukaryotic cells, and three kingdom-specific predictors were implemented. In two extensive comparisons, carried out in 2006 and 2008, BaCelLo was reported to outperform all the currently available state-of-the-art methods for this prediction task. BaCelLo was subsequently used to completely annotate 5 eukaryotic genomes, by integrating it in a pipeline of predictors developed at the Bologna Biocomputing group by Dr. Pier Luigi Martelli and Dr. Piero Fariselli. An online database, called eSLDB, was developed by integrating, for each amino acid sequence extracted from the genomes, the predicted subcellular localization merged with experimental and similarity-based annotations. In the second part of the work a new machine learning based method was implemented for the prediction of GPI-anchored proteins. Basically, the method is able to efficiently predict from the raw amino acid sequence both the presence of the GPI anchor (by means of an SVM) and the position in the sequence of the post-translational modification event, the so-called ω-site (by means of a Hidden Markov Model, HMM). The method is called GPIPE and was reported to greatly enhance the prediction performance for GPI-anchored proteins over all the previously developed methods. GPIPE was able to predict up to 88% of the experimentally annotated GPI-anchored proteins while maintaining a rate of false positive predictions as low as 0.1%. GPIPE was used to completely annotate 81 eukaryotic genomes, and more than 15000 putative GPI-anchored proteins were predicted, 561 of which are found in H. sapiens. On average, 1% of a proteome is predicted as GPI-anchored. A statistical analysis was performed on the composition of the regions surrounding the ω-site, which allowed the definition of specific amino acid abundances in the different considered regions.
Furthermore, the hypothesis, proposed in the literature, that compositional biases are present among the four major eukaryotic kingdoms was tested and rejected. All the developed predictors and databases are freely available at: BaCelLo http://gpcr.biocomp.unibo.it/bacello eSLDB http://gpcr.biocomp.unibo.it/esldb GPIPE http://gpcr.biocomp.unibo.it/gpipe
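The class-balancing principle mentioned in the abstract can be illustrated, in a generic way, with the per-class penalty weighting offered by off-the-shelf SVM implementations; this is not the Balanced SVM formulation introduced in the thesis, only a minimal sketch of the underlying idea on toy data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Toy, imbalanced dataset: 900 examples of class 0, 100 of class 1
# (placeholder random features; real predictors use sequence-derived features).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (900, 20)), rng.normal(0.5, 1.0, (100, 20))])
y = np.array([0] * 900 + [1] * 100)

# class_weight='balanced' rescales the per-class penalty inversely to class
# frequency, counteracting the over-prediction of the majority class.
clf = SVC(kernel="rbf", class_weight="balanced")
print(cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy").mean())
```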
Abstract:
Mathematical models of the knee joint are important tools which have both theoretical and practical applications. They are used by researchers to fully understand the stabilizing role of the components of the joint, by engineers as an aid for prosthetic design, by surgeons during the planning of an operation or during the operation itself, and by orthopedists for diagnosis and rehabilitation purposes. The principal aims of knee models are to reproduce the restraining function of each structure of the joint and to replicate the relative motion of the bones which constitute the joint itself. Clearly, the first aim is functional to the second one. However, the standard procedures for the dynamic modelling of the knee tend to be more focused on the second aspect: the motion of the joint is correctly replicated, but the stabilizing role of the articular components is somehow lost. A first contribution of this dissertation is the definition of a novel approach, called the sequential approach, for the dynamic modelling of the knee. The procedure makes it possible to develop increasingly sophisticated models of the joint through a succession of steps, starting from a first simple model of its passive motion. The fundamental characteristic of the proposed procedure is that the results obtained at each step do not worsen those already obtained at previous steps, thus preserving the restraining function of the knee structures. The models which stem from the first two steps of the sequential approach are then presented. The result of the first step is a model of the passive motion of the knee, including the patello-femoral joint. Kinematical and anatomical considerations lead to the definition of a one-degree-of-freedom rigid-link mechanism, whose members represent specific components of the joint. The result of the second step is a stiffness model of the knee. This model is obtained from the first one by following the rules of the proposed procedure. Both models have been identified from experimental data by means of an optimization procedure. The simulated motions of the models have then been compared to the experimental ones. Both models accurately reproduce the motion of the joint under the corresponding loading conditions. Moreover, the sequential approach makes sure that the results obtained at the first step are not worsened at the second step: the stiffness model can also reproduce the passive motion of the knee with the same accuracy as the previous, simpler model. The procedure proved to be successful and thus promising for the definition of more complex models which could also involve the effect of muscular forces.
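The identification of such models from experimental data is typically formulated as a nonlinear least-squares problem; the following is a minimal sketch of that formulation with a trivial placeholder forward model, not the rigid-link mechanism or the cost function actually used in the thesis.

```python
import numpy as np
from scipy.optimize import least_squares

def simulated_motion(params, flexion_angles):
    """Placeholder forward model: returns model-predicted pose variables for
    given parameters (in the thesis this would be the rigid-link knee
    mechanism, solved at each flexion angle)."""
    a, b, c = params
    return a * np.sin(flexion_angles) + b * flexion_angles + c

def residuals(params, flexion_angles, measured):
    return simulated_motion(params, flexion_angles) - measured

# Illustrative "experimental" data (synthetic, with small measurement noise).
angles = np.linspace(0.0, 2.0, 50)
measured = 1.2 * np.sin(angles) + 0.3 * angles + 0.1 + np.random.normal(0, 0.01, 50)

fit = least_squares(residuals, x0=[1.0, 0.0, 0.0], args=(angles, measured))
print(fit.x)   # identified model parameters
```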
Abstract:
Nowadays, computing is migrating from traditional high performance and distributed computing to pervasive and utility computing based on heterogeneous networks and clients. The current trend suggests that future IT services will rely on distributed resources and on fast communication of heterogeneous contents. The success of this new range of services is directly linked to the effectiveness of the infrastructure in delivering them. The communication infrastructure will be the aggregation of different technologies, even though the current trend suggests the emergence of a single IP-based transport service. Optical networking is a key technology for answering the increasing requests for dynamic bandwidth allocation and for configuring multiple topologies over the same physical-layer infrastructure; however, optical networks today are still far from being directly accessible for configuring and offering network services, and they need to be enriched with more user-oriented functionalities. Moreover, current Control Plane architectures only facilitate efficient end-to-end connectivity provisioning and certainly cannot meet future network service requirements, e.g. the coordinated control of resources. The overall objective of this work is to provide the network with improved usability and accessibility of the services offered by the Optical Network. More precisely, the definition of a service-oriented architecture is the enabling technology that allows user applications to benefit from advanced services over an underlying dynamic optical layer. The definition of a service-oriented networking architecture based on advanced optical network technologies facilitates user and application access to abstracted levels of information regarding the offered advanced network services. This thesis addresses the problem of defining a Service Oriented Architecture and its relevant building blocks, protocols and languages. In particular, this work has focused on the use of the SIP protocol as an inter-layer signalling protocol, which defines the Session Plane in conjunction with the Network Resource Description language. On the other hand, an advanced optical network must accommodate high data bandwidth with different granularities. Currently, two main technologies, Optical Burst Switching and Optical Packet Switching, are emerging to promote the development of the future optical transport network. These technologies respectively promise to provide all-optical burst or packet switching instead of the current circuit switching. However, the electronic domain is still present in the scheduling, forwarding and routing decisions. Because of the high optical transmission rates, the burst or packet scheduler faces a difficult challenge; consequently, a high-performance, timing-focused design of both memory and forwarding logic is needed. This open issue is addressed in this thesis by proposing a highly efficient implementation of the burst and packet scheduler. The main novelty of the proposed implementation is that the scheduling problem is turned into the simple calculation of a min/max function, whose complexity is almost independent of the traffic conditions.
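A horizon-based channel selection rule gives a flavour of how burst scheduling can reduce to a min/max calculation; the sketch below is a generic LAUC-like rule on illustrative numbers, not the hardware implementation proposed in the thesis.

```python
def schedule_burst(channel_horizons, arrival_time):
    """Pick a data channel for an incoming burst using a horizon-based rule:
    among channels that are already free at the burst arrival time, choose
    the one whose horizon (the time at which it becomes free) is the latest,
    i.e. the one leaving the smallest idle gap (a LAUC-like min/max choice)."""
    best = None
    best_horizon = -1.0
    for ch, horizon in enumerate(channel_horizons):
        # channels still busy at arrival_time are rejected
        if horizon <= arrival_time and horizon > best_horizon:
            best, best_horizon = ch, horizon
    return best   # None if every channel is busy

# Illustrative horizons (times at which each channel becomes free).
print(schedule_burst([3.0, 7.5, 5.2, 9.1], arrival_time=6.0))   # -> 2
```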
Abstract:
This thesis is part of a larger study on the characterization of the mechanical and histomorphometrical properties of bone. The main objects of this study were the properties of bone tissue and its resistance to mechanical loads. Moreover, the knowledge about the equipment selected to carry out the analyses, micro-computed tomography (micro-CT), was improved. Particular attention was given to the reliability over time of the measuring instrument. In order to understand the main characteristics of bone mechanical properties, a study of the skeleton, of the bones of which it is composed, and of the biological principles that drive their formation and remodelling was necessary. This study has led to the definition of two macro-classes describing the main components responsible for the resistance of bone to fracture: bone quantity and bone quality. Bone quantity is the current clinical standard measure, the so-called bone densitometry, and research studies have amply demonstrated that the amount of tissue is correlated with its mechanical properties of elasticity and fracture. However, the models presented in the literature, which include information on the mere quantity of tissue, have often been limited in describing the mechanical behaviour. Recent investigations have underlined that the bone structure and the tissue mineralization also play an important role in the mechanical characterization of bone tissue. For this reason, the class defined as bone quality was mainly studied in this thesis, splitting it into the two sub-classes of bone structure and tissue quality. A study on bone structure was designed to identify which structural parameters, among the several presented in the literature, could be integrated with the information about quantity in order to better describe the mechanical properties of bone. In this way, it was also possible to analyse the interaction between structure and function. It has long been known that bone tissue is capable of remodelling and changing its internal structure according to loads, but the dynamics of these changes are still being analysed. This part of the study was aimed at identifying the parameters that could quantify the structural changes of bone tissue during the development of a given disease: osteoarthritis. A study on tissue quality would have to be divided into different classes, which would require a scale of analysis not suitable for micro-CT. For this reason the study was focused only on the mineralization of the tissue, highlighting the difference between bone density and tissue density, working in a context where there is still an ongoing scientific debate.
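The distinction between bone quantity and structure can be illustrated with the simplest quantity index derivable from a micro-CT reconstruction, the bone volume fraction (BV/TV); the threshold and the synthetic volume below are placeholders, not values used in the thesis.

```python
import numpy as np

def bone_volume_fraction(ct_volume, threshold):
    """Bone volume fraction (BV/TV): the proportion of voxels classified as
    mineralized tissue after a global threshold. This is the most basic
    'bone quantity' descriptor obtainable from a micro-CT dataset."""
    bone_mask = ct_volume >= threshold
    return bone_mask.sum() / bone_mask.size

# Illustrative 8-bit volume with a placeholder threshold.
volume = np.random.default_rng(1).integers(0, 255, size=(128, 128, 128))
print(bone_volume_fraction(volume, threshold=120))
```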
Abstract:
The present study is part of the EU Integrated Project “GEHA – Genetics of Healthy Aging” (Franceschi C et al., Ann N Y Acad Sci. 1100: 21-45, 2007), whose aim is to identify genes involved in healthy aging and longevity, which allow individuals to survive to advanced age in good cognitive and physical function and in the absence of major age-related diseases. Aims The major aims of this thesis were the following: 1. to outline the recruitment procedure for 90+ Italian siblings performed by the recruiting units of the University of Bologna (UNIBO) and Rome (ISS). The procedures related to the following items necessary to perform the study were described and commented upon: identification of the area eligible for recruitment, demographic aspects related to the need to obtain census lists of 90+ siblings, mail and phone contact with 90+ subjects and their families, bioethical aspects of the whole procedure, standardization of the recruitment methodology and set-up of a detailed flow chart to be followed by the European recruitment centres (obtainment of the informed consent form, anonymization of data by using a special code, how to perform the interview, how to collect the blood, how to enter data in the GEHA Phenotypic Data Base hosted at Odense). 2. to provide an overview of the phenotypic characteristics of the 90+ Italian siblings recruited by the recruiting units of the University of Bologna (UNIBO) and Rome (ISS). The following items were addressed: socio-demographic characteristics, health status, cognitive assessment, physical conditions (handgrip strength test, chair-stand test, physical ability including ADL, vision and hearing ability, movement ability and doing light housework), life-style information (smoking and drinking habits) and subjective well-being (attitude towards life). Moreover, haematological parameters collected in the 90+ sibpairs as optional parameters by the Bologna and Rome recruiting units were used for a more comprehensive evaluation of the results obtained using the above-mentioned phenotypic characteristics reported in the GEHA questionnaire. 3. to assess the 90+ Italian siblings as far as their health/functional status is concerned, on the basis of three classification methods proposed in previous studies on centenarians, which are based on: • actual functional capabilities (ADL, SMMSE, visual and hearing abilities) (Gondo et al., J Gerontol. 61A (3): 305-310, 2006); • actual functional capabilities and morbidity (ADL, ability to walk, SMMSE, presence of cancer, stroke, renal failure, anaemia, and liver diseases) (Franceschi et al., Aging Clin Exp Res, 12:77-84, 2000); • retrospectively collected data about past history of morbidity and age of disease onset (hypertension, heart disease, diabetes, stroke, cancer, osteoporosis, neurological diseases, chronic obstructive pulmonary disease and ocular diseases) (Evert et al., J Gerontol A Biol Sci Med Sci. 58A (3): 232-237, 2003). Firstly, these available models to define the health status of long-living subjects were applied to the sample and, since the classifications by Gondo and Franceschi are both based on the present functional status, they were compared in order to better recognize the healthy aging phenotype and to identify the best group of 90+ subjects out of the entire studied population. 4.
to investigate the concordance of health and functional status among 90+ siblings in order to divide sibpairs into three categories: the best (both sibs are in good shape), the worst (both sibs are in bad shape) and an intermediate group (one sib is in good shape and the other is in bad shape). Moreover, this evaluation aimed to discover which variables are concordant among siblings; concordant variables could thus be considered familial variables (determined by the environment or by genetics). 5. to perform a survival analysis by using mortality data at 1st January 2009 from the follow-up as the main outcome and selected functional and clinical parameters as explanatory variables (a generic sketch of such a model is given after this abstract). Methods A total of 765 90+ Italian subjects recruited by the UNIBO (549 90+ siblings, belonging to 258 families) and ISS (216 90+ siblings, belonging to 106 families) recruiting units are included in the analysis. Each subject was interviewed according to a standardized questionnaire, comprising extensively utilized questions that have been validated in previous European studies on elderly subjects and covering demographic information, life style, living conditions, cognitive status (SMMSE), mood, health status and anthropometric measurements. Moreover, subjects were asked to perform some physical tests (Hand Grip Strength test and Chair Standing test), and a sample of about 24 mL of blood was collected and then processed according to a common protocol for the preparation and storage of DNA aliquots. Results From the analysis the main findings are the following: - a standardized protocol to assess the cognitive status, physical performance and health status of European nonagenarian subjects was set up, in compliance with ethical requirements, and it is available as a reference for other studies in this field; - GEHA families are enriched in long-living members and extreme survival, and represent an appropriate model for the identification of genes involved in healthy aging and longevity; - two simplified sets of criteria to classify 90+ siblings according to their health status were proposed, as operational tools for distinguishing healthy from non-healthy subjects; - cognitive and functional parameters have a major role in categorizing 90+ siblings according to their health status; - parameters such as education and good physical abilities (500 metres walking ability, ability to go up and down the stairs, high scores at hand grip and chair stand tests) are associated with a good health status (defined as “cognitive unimpairment and absence of disability”); - male nonagenarians show a more homogeneous phenotype than females, and, though far fewer in number, tend to be healthier than females; - in males a good health status is not protective for survival, confirming the male-female health-survival paradox; - survival after age 90 depended mainly on intact cognitive status and absence of functional disabilities; - haemoglobin and creatinine levels are both associated with longevity; - the most concordant items among 90+ siblings are related to functional status, indicating that they contain a familial component. It is still to be investigated at what level this familial component is determined by genetics, by environment, or by the interaction between genetics, environment and chance.
Conclusions In conclusion, we can state that this study, in accordance with the main objectives of the whole GEHA project, represents one of the first attempts to identify the biological and non-biological determinants of successful/unsuccessful aging and longevity. Here, the analysis was performed on 90+ siblings recruited in Northern and Central Italy, and it could be used as a reference for other studies in this field on the Italian population. Moreover, it contributed to the definition of “successful” and “unsuccessful” aging, and categorising a very large cohort of our most elderly subjects into “successful” and “unsuccessful” groups provided an unrivalled opportunity to detect some of the basic genetic/molecular mechanisms which underpin good health as opposed to chronic disability. Discoveries in the field of the biological determinants of healthy aging represent a real possibility to identify new markers to be utilized for the identification of subgroups of old European citizens at higher risk of developing age-related diseases and disabilities, and to direct major preventive medicine strategies for the new epidemic of chronic disease in the 21st century.
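The survival analysis mentioned above (mortality at follow-up as outcome, functional and clinical parameters as covariates) is the typical setting for a Cox proportional hazards model; the sketch below uses the lifelines library on an invented synthetic cohort with placeholder variable names, purely as an illustration of the model form, not the thesis' actual data or covariate set.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic cohort (placeholder variables, not GEHA data): follow-up time in
# months, a death indicator, and two functional covariates.
rng = np.random.default_rng(0)
n = 200
smmse = rng.integers(10, 30, n)                 # cognitive score
handgrip = rng.normal(20, 5, n)                 # kg
risk = np.exp(-0.08 * (smmse - 20) - 0.05 * (handgrip - 20))
time_to_death = rng.exponential(36.0 / risk)    # higher risk -> shorter survival
followup = np.minimum(time_to_death, 48.0)      # administrative censoring at 48 months
died = (time_to_death <= 48.0).astype(int)

df = pd.DataFrame({"followup_months": followup, "died": died,
                   "smmse_score": smmse, "handgrip_kg": handgrip})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_months", event_col="died")
cph.print_summary()    # hazard ratios for the functional covariates
```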
Abstract:
Recently, an ever-increasing degree of automation has been observed in most industrial automation processes. This increase is motivated by the demand for systems with high performance in terms of quality of the products/services generated, productivity, efficiency and low costs in design, realization and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of Mechatronics, is merging with other technologies such as Informatics and communication networks. An AMS is a very complex system that can be thought of as constituted by a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda, buy boxed products such as food or cigarettes, and so on. Another indication of their complexity derives from the fact that the consortium of machine producers has estimated that there are around 350 types of manufacturing machines. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; in particular, a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the “packaging valley”. Usually, the various parts of an AMS interact among themselves in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. Often, this is the case in large-scale systems, organized in a modular and distributed manner. Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial, because of the large number of duties associated with it. Apart from the activity inherent to the automation of the machine cycles, the supervisory system is called upon to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and to the different operational scenarios; obtaining a high quality of the final product through the verification of the correctness of the processing; guiding the operator in charge of the machine to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing in real time information on diagnostics, as a support for the maintenance operations of the machine. The facilities that designers can directly find on the market, in terms of software component libraries, in fact provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, by focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs. Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. Traditionally, in the field of analog and digital control, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different way, usually very “unstructured”. No clear distinction between functions and implementations, between functional architectures and technological architectures and platforms, is considered. Probably this difference is due to the different “dynamical framework” of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to highlight the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has lately been adopting this approach, as testified by IEC standards such as IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years it has been possible to note a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also deal with other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems.
This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS an increasing number of electronic devices are present alongside reliable mechanical elements, and these devices are more vulnerable by their own nature. The problem of diagnosis and fault isolation in a generic dynamical system consists in the design of a processing unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, important improvements to the formal verification of logic control, fault diagnosis and fault-tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2 a survey of the state of the software engineering paradigm applied to industrial automation is presented. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to obtain better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader understand some crucial points of Chapter 5; Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approach presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
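The discrete-event view of a logic control component can be illustrated with a minimal finite-state transition table; the states, events and fault handling below are invented for illustration and are not the Generalized Actuator or Generalized Device interfaces defined in the thesis.

```python
from enum import Enum, auto

class ActuatorState(Enum):
    IDLE = auto()
    MOVING = auto()
    DONE = auto()
    FAULT = auto()

# Transition relation of a toy logic control block: (state, event) -> next
# state. Any pair not listed is treated as illegal and routed to FAULT,
# which is the natural hook for diagnosis and fault-tolerant handling.
TRANSITIONS = {
    (ActuatorState.IDLE, "start"):      ActuatorState.MOVING,
    (ActuatorState.MOVING, "end_stop"): ActuatorState.DONE,
    (ActuatorState.MOVING, "timeout"):  ActuatorState.FAULT,
    (ActuatorState.DONE, "reset"):      ActuatorState.IDLE,
    (ActuatorState.FAULT, "reset"):     ActuatorState.IDLE,
}

def step(state, event):
    return TRANSITIONS.get((state, event), ActuatorState.FAULT)

s = ActuatorState.IDLE
for ev in ["start", "end_stop", "reset", "start", "timeout", "reset"]:
    s = step(s, ev)
    print(ev, "->", s.name)
```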
Abstract:
The treatment of Cerebral Palsy (CP) is considered the “core problem” for the whole field of pediatric rehabilitation. The reason why this pathology has such a primary role can be ascribed to two main aspects. First of all, CP is the most frequent form of disability in childhood (one new case per 500 live births (1)); secondly, the functional recovery of the “spastic” child is, historically, the clinical field in which the majority of the therapeutic methods and techniques (physiotherapy, orthotic, pharmacologic, orthopedic-surgical, neurosurgical) were first applied and tested. The currently accepted definition of CP – a group of disorders of the development of movement and posture causing activity limitation (2) – is the result of a recent update by the World Health Organization to the language of the International Classification of Functioning, Disability and Health, from the original proposal of Ingram – a persistent but not unchangeable disorder of posture and movement – dated 1955 (3). This definition considers CP as a permanent ailment, i.e. a “fixed” condition, which however can be modified both functionally and structurally by means of the child's spontaneous evolution and the treatments carried out during childhood. The lesion that causes the palsy occurs in a structurally immature brain in the pre-, peri- or post-birth period (but only during the first months of life). The most frequent causes of CP are: prematurity, insufficient cerebral perfusion, arterial haemorrhage, venous infarction, hypoxia of various origins (for example, from the ingestion of amniotic fluid), malnutrition, infection and maternal or fetal poisoning. In addition to these causes, traumas and malformations have to be included. The lesion, whether focal or spread over the nervous system, impairs the whole functioning of the Central Nervous System (CNS). As a consequence, it affects the construction of the adaptive functions (4), first of all posture control, locomotion and manipulation. The palsy itself does not vary over time; however, it assumes an unavoidable “evolutionary” character when, during growth, the child is required to meet new and different needs through the construction of new and different functions. It is essential to consider that clinically CP is not only a direct expression of structural impairment, that is of etiology, pathogenesis and lesion timing, but is mainly the manifestation of the path followed by the CNS to “re”-construct the adaptive functions “despite” the presence of the damage. “Palsy” is “the form of the function that is implemented by an individual whose CNS has been damaged in order to satisfy the demands coming from the environment” (4). Therefore it is only possible to establish general relations between lesion site, nature and size, and the palsy and recovery processes. It is quite common to observe that children with very similar neuroimaging findings can have very different clinical manifestations of CP and, on the other hand, children with very similar motor behaviours can have completely different lesion histories. A very clear example of this is represented by the hemiplegic forms, which show bilateral hemispheric lesions in a high percentage of cases. The first section of this thesis is aimed at guiding the interpretation of CP. First of all, the issue of the detection of the palsy is treated from a historical viewpoint. Then, an extended analysis of the current definition of CP, as internationally accepted, is provided.
The definition is then examined in terms of a space dimension and then of a time dimension, and it is highlighted where this definition is unacceptably lacking. The last part of the first section further stresses the importance of shifting from the traditional concept of CP as a palsy of development (defect analysis) towards the notion of development of the palsy, i.e., as the product of the relationship that the individual nonetheless tries to dynamically build with the surrounding environment (resource semeiotics), starting and growing from a different availability of resources, needs, dreams, rights and duties (4). In the scientific and clinical community no common classification system of CP has so far been universally accepted. Besides, no standard operative method or technique has been acknowledged to effectively assess the different disabilities and impairments exhibited by children with CP. CP is still “an artificial concept, comprising several causes and clinical syndromes that have been grouped together for a convenience of management” (5). The lack of standard and common protocols able to effectively diagnose the palsy, and as a consequence to establish specific treatments and prognoses, is mainly due to the difficulty of elevating this field to a level based on scientific evidence. A solution aimed at overcoming the currently incomplete treatment of CP children is represented by the systematic clinical adoption of objective tools able to measure motor defects and movement impairments. A widespread application of reliable instruments and techniques able to objectively evaluate both the form of the palsy (diagnosis) and the efficacy of the treatments provided (prognosis) constitutes a valuable method to validate care protocols, establish the efficacy of classification systems and assess the validity of definitions. Since the ‘80s, instruments specifically oriented to the analysis of human movement have been advantageously designed and applied in the context of CP with the aim of measuring motor deficits and, especially, gait deviations. The gait analysis (GA) technique has been increasingly used over the years to assess, analyze, classify, and support the process of clinical decision making, allowing for a complete investigation of gait with an increased temporal and spatial resolution. GA has provided a basis for improving the outcome of surgical and non-surgical treatments and for introducing a new modus operandi in the identification of defects and functional adaptations to musculoskeletal disorders. Historically, the first laboratories set up for gait analysis developed their own protocols (sets of procedures for data collection and data reduction) independently, according to the performance of the technologies available at the time. In particular, stereophotogrammetric systems, mainly based on optoelectronic technology, soon became a gold standard for motion analysis. They have been successfully applied especially for scientific purposes. Nowadays optoelectronic systems have significantly improved their performance in terms of spatial and temporal resolution; however, many laboratories continue to use protocols designed around the technology available in the ‘70s, which is now out of date. Furthermore, these protocols are not consistent either in the biomechanical models or in the adopted collection procedures.
In spite of these differences, GA data are shared, exchanged and interpreted irrespective of the adopted protocol, without full awareness of the extent to which these protocols are compatible and comparable with each other. Following the extraordinary advances in computer science and electronics, new systems for GA, no longer based on optoelectronic technology, are now becoming available. They are the Inertial and Magnetic Measurement Systems (IMMSs), based on miniature MEMS (microelectromechanical systems) inertial sensor technology. These systems are cost-effective, wearable and fully portable motion analysis systems; these features give IMMSs the potential to be used both outside specialized laboratories and to consecutively collect series of tens of gait cycles. The recognition and selection of the most representative gait cycle then becomes easier and more reliable, especially in CP children, considering their considerable gait cycle variability. The second section of this thesis is focused on GA. In particular, it is firstly aimed at examining the differences among the five most representative GA protocols, in order to assess the state of the art with respect to inter-protocol variability. The design of a new protocol is then proposed and presented, with the aim of achieving gait analysis in CP children by means of an IMMS. The protocol, named ‘Outwalk’, contains original and innovative solutions oriented at obtaining joint kinematics with calibration procedures that are extremely comfortable for the patients. The results of a first in-vivo validation of Outwalk on healthy subjects are then provided. In particular, this study was carried out by comparing Outwalk, used in combination with an IMMS, with respect to a reference protocol and an optoelectronic system. In order to set up a more accurate and precise comparison of the systems and the protocols, ad hoc methods were designed and an original formulation of the statistical parameter known as the coefficient of multiple correlation was developed and effectively applied (a sketch of the classical formulation is given after this abstract). On the basis of the experimental design proposed for the validation on healthy subjects, a first assessment of Outwalk, together with an IMMS, was also carried out on CP children. The third section of this thesis is dedicated to the treatment of walking in CP children. Commonly prescribed treatments addressing gait abnormalities in CP children include physical therapy, surgery (orthopedic and rhizotomy), and orthoses. The orthotic approach is conservative, being reversible, and widespread in many therapeutic regimes. Orthoses are used to improve the gait of children with CP by preventing deformities, controlling joint position, and offering an effective lever for the ankle joint. Orthoses are prescribed with the additional aims of increasing walking speed, improving stability, preventing stumbling, and decreasing muscular fatigue. Ankle-foot orthoses (AFOs) with a rigid ankle are primarily designed to prevent equinus and other foot deformities, with a positive effect also on more proximal joints. However, AFOs prevent the natural excursion of the tibio-tarsic joint during the second rocker, hence hampering the natural leaning progression of the whole body under the effect of inertia (6). A new modular (submalleolar) astragalus-calcanear orthosis, named OMAC, has recently been proposed with the intention of substituting the prescription of AFOs in those CP children exhibiting a flat and valgus-pronated foot.
The aim of this section is thus to present the mechanical and technical features of the OMAC by means of an accurate description of the device. In particular, the full document of the deposited Italian patent is provided. A preliminary validation of OMAC with respect to the AFO is also reported, as resulting from a three-month experimental campaign on diplegic CP children aimed at quantitatively assessing the benefit provided by the two orthoses on walking and at qualitatively evaluating the changes in quality of life and motor abilities. As already stated, CP is universally considered a persistent but not unchangeable disorder of posture and movement. In contrast to this definition, some clinicians (4) have recently pointed out that movement disorders may be primarily caused by the presence of perceptive disorders, where perception is not merely the acquisition of sensory information, but an active process aimed at guiding the execution of movements through the integration of sensory information properly representing the state of one's body and of the environment. Children with perceptive impairments show an overall fear of moving and the onset of strongly unnatural walking schemes directly caused by the presence of perceptive system disorders. The fourth section of the thesis thus deals with accurately defining the perceptive impairment exhibited by diplegic CP children. A detailed description of the clinical signs revealing the presence of the perceptive impairment, and a classification scheme of the clinical aspects of perceptual disorders, is provided. In the end, a functional reaching test is proposed as an instrumental test able to disclose the perceptive impairment. References 1. Prevalence and characteristics of children with cerebral palsy in Europe. Dev Med Child Neurol. 2002 Sep;44(9):633-640. 2. Bax M, Goldstein M, Rosenbaum P, Leviton A, Paneth N, Dan B, et al. Proposed definition and classification of cerebral palsy, April 2005. Dev Med Child Neurol. 2005 Aug;47(8):571-576. 3. Ingram TT. A study of cerebral palsy in the childhood population of Edinburgh. Arch Dis Child. 1955 Apr;30(150):85-98. 4. Ferrari A, Cioni G. The spastic forms of cerebral palsy: a guide to the assessment of adaptive functions. Milan: Springer; 2009. 5. Olney SJ, Wright MJ. Cerebral Palsy. In: Campbell S et al. Physical Therapy for Children. 2nd Ed. Philadelphia: Saunders; 2000: 533-570. 6. Desloovere K, Molenaers G, Van Gestel L, Huenaerts C, Van Campenhout A, Callewaert B, et al. How can push-off be preserved during use of an ankle foot orthosis in children with hemiplegia? A prospective controlled study. Gait Posture. 2006 Oct;24(2):142-151.
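For reference, the classical coefficient of multiple correlation (CMC) for repeated gait waveforms (in the spirit of Kadaba et al.) can be computed as below; this is not the original formulation developed in the thesis, and the example waveforms are synthetic.

```python
import numpy as np

def cmc(waveforms):
    """Coefficient of multiple correlation across repeated gait cycles.
    `waveforms` has shape (n_cycles, n_frames); values close to 1 indicate
    that the cycles share the same waveform shape."""
    Y = np.asarray(waveforms, dtype=float)
    m, t = Y.shape
    frame_mean = Y.mean(axis=0)          # mean curve over cycles
    grand_mean = Y.mean()
    within = ((Y - frame_mean) ** 2).sum() / (t * (m - 1))
    total = ((Y - grand_mean) ** 2).sum() / (m * t - 1)
    return np.sqrt(1.0 - within / total)

# Three noisy repetitions of the same knee-flexion-like curve (illustrative).
base = 60 * np.sin(np.linspace(0, np.pi, 101)) ** 2
cycles = np.vstack([base + np.random.normal(0, 1.0, 101) for _ in range(3)])
print(cmc(cycles))   # close to 1 for repeatable waveforms
```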
Abstract:
The purpose of this Thesis is to develop a robust and powerful method to classify galaxies from large surveys, in order to establish and confirm the connections between the principal observational parameters of the galaxies (spectral features, colours, morphological indices), and to help unveil the evolution of these parameters from $z \sim 1$ to the local Universe. Within the framework of the zCOSMOS-bright survey, and making use of its large database of objects ($\sim 10\,000$ galaxies in the redshift range $0 < z \lesssim 1.2$) and its great reliability in redshift and spectral property determinations, we first adopt and extend the “classification cube” method, as developed by Mignoli et al. (2009), to exploit the bimodal properties of galaxies (spectral, photometric and morphological) separately, and then combine these three subclassifications. We use this classification method as a test for a newly devised statistical classification, based on Principal Component Analysis and the Unsupervised Fuzzy Partition clustering method (PCA+UFP), which is able to define the galaxy population by exploiting their natural global bimodality, considering simultaneously up to 8 different properties. The PCA+UFP analysis is a very powerful and robust tool to probe the nature and the evolution of galaxies in a survey. It allows the classification of galaxies to be defined with fewer uncertainties, adding the flexibility to be adapted to different parameters: being a fuzzy classification, it avoids the problems due to a hard classification, such as the classification cube presented in the first part of the work. The PCA+UFP method can be easily applied to different datasets: it does not rely on the nature of the data and for this reason it can be successfully employed with other observables (magnitudes, colours) or derived properties (masses, luminosities, SFRs, etc.). The agreement between the two classification cluster definitions is very high. “Early” and “late” type galaxies are well defined by the spectral, photometric and morphological properties, both when considering them separately and then combining the classifications (classification cube) and when treating them as a whole (PCA+UFP cluster analysis). Differences arise in the definition of outliers: the classification cube is much more sensitive to single measurement errors or misclassifications in one property than the PCA+UFP cluster analysis, in which errors are “averaged out” during the process. This method allowed us to observe the downsizing effect taking place in the PC spaces: the migration from the blue cloud towards the red clump happens at higher redshifts for galaxies of larger mass. The determination of $M_{\mathrm{cross}}$, the transition mass, is in good agreement with other values in the literature.
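The flavour of a PCA-plus-soft-clustering pipeline can be conveyed with a plain fuzzy c-means run on the first principal components; this is a generic stand-in for the UFP algorithm used in the thesis, and the galaxy properties below are random placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

def fuzzy_cmeans(X, n_clusters, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means on the rows of X; returns cluster centres and the
    membership matrix U (n_samples x n_clusters). A soft clustering of this
    kind stands in here for the UFP algorithm of the thesis."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

# Illustrative: project placeholder galaxy properties onto 2 principal
# components, then soft-partition into a two-cluster ("early"/"late") bimodality.
X = np.random.default_rng(1).normal(size=(500, 8))      # 8 properties per galaxy
Z = PCA(n_components=2).fit_transform(X)
centres, U = fuzzy_cmeans(Z, n_clusters=2)
print(U[:5])            # per-galaxy membership degrees in the two clusters
```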
Abstract:
In this thesis two major topics inherent to medical ultrasound images are addressed: deconvolution and segmentation. In the first case, a deconvolution algorithm is described that allows statistically consistent maximum a posteriori estimates of the tissue reflectivity to be restored. These estimates are shown to provide a reliable source of information for achieving an accurate characterization of biological tissues through the ultrasound echo. The second topic involves the definition of a semi-automatic algorithm for myocardium segmentation in 2D echocardiographic images. The results show that the proposed method can reduce inter- and intra-observer variability in the delineation of myocardial contours, and is feasible and accurate even on clinical data.
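As a simple point of reference for the deconvolution topic, the sketch below performs Wiener deconvolution of a single RF line, which coincides with the MAP estimate under Gaussian noise and a Gaussian prior; the thesis uses a more general statistical model, so this is only an illustrative special case on synthetic data.

```python
import numpy as np

def wiener_deconvolve(rf_line, psf, noise_to_signal=0.01):
    """Frequency-domain Wiener deconvolution of a single RF line. Under
    Gaussian noise and a Gaussian prior on the reflectivity this is the MAP
    estimate; more general priors require iterative estimators."""
    n = len(rf_line)
    H = np.fft.fft(psf, n)                       # pulse (system) spectrum
    Y = np.fft.fft(rf_line)
    G = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft(G * Y))           # estimated tissue reflectivity

# Illustrative: blur a sparse reflectivity with a short pulse, then restore it.
reflectivity = np.zeros(256)
reflectivity[[40, 90, 91, 180]] = [1.0, 0.6, -0.6, 0.8]
pulse = np.hanning(16) * np.sin(2 * np.pi * 0.25 * np.arange(16))
rf = np.convolve(reflectivity, pulse, mode="same") + np.random.normal(0, 0.01, 256)
estimate = wiener_deconvolve(rf, pulse)
```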
Abstract:
Can space and place foster child development, and in particular social competence and ecological literacy? If so, how can space and place do that? This study shows that the answer to the first question is positive and then tries to explain the way space and place can make a difference. The thesis begins with a review of literature from different disciplines – child development and child psychology, education, environmental psychology, architecture and landscape architecture. Some bridges among these disciplines are created, and in some cases the ideas from the different areas of research merge: thus, this is an interdisciplinary study. The interdisciplinary knowledge from these disciplines is translated into a range of design suggestions that can foster the development of social competence and ecological literacy. Using scientific knowledge from different disciplines is a way of introducing forms of evidence into the development of design criteria. However, the definition of design criteria also has to pass through the study of a series of school buildings and unbuilt projects: case studies can give a positive contribution to the criteria because examples and good practices can help in “translating” the theoretical knowledge into design ideas and illustrations. To do so, the different case studies have to be assessed in relation to the various themes that emerged in the literature review. Finally, research by design can be used to help define the illustrated design criteria: based on all the background knowledge that has been built, the role of the architect is to provide a series of different design solutions that can give answers to the different “questions” that emerged in the literature review.
Abstract:
This thesis is a collection of essays on the topic of innovation in the service sector. This structure is functional to the purpose of singling out some of the relevant issues and trying to tackle them, first reviewing the state of the literature and then proposing a way forward. Three relevant issues have therefore been selected: (i) the definition of innovation in the service sector and the connected question of the measurement of innovation; (ii) the issue of productivity in services; (iii) the classification of innovative firms in the service sector. Facing the first issue, Chapter II shows how the initial breadth of the original Schumpeterian definition of innovation was narrowed and then passed from the manufacturing sector to the service sector in a reduced, technological form. Chapter III tackles the issue of productivity in services, discussing the difficulties of measuring productivity in a context where the output is often immaterial. We reconstruct the dispute on Baumol's cost disease argument and propose two different ways forward in the research on productivity in services: redefining the output along the lines of a characteristics approach, and redefining the inputs, particularly analysing which kinds of input are worth saving. Chapter IV derives an integrated taxonomy of innovative service and manufacturing firms, using data from the 2008 CIS survey for Italy. This taxonomy is based on the enlarged definition of “innovative firm” deriving from the Schumpeterian definition of innovation and classifies firms using cluster analysis techniques. The result is a four-cluster solution, where firms are differentiated by the breadth of the innovation activities in which they are involved. Chapter V reports some of the main conclusions of each of the previous chapters and the points worthy of further research in the future.
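The cluster-analysis step can be illustrated with a generic k-means partition of standardized firm-level innovation indicators; the variables and data below are invented placeholders (the actual analysis uses CIS 2008 microdata and need not rely on k-means), so this is only a sketch of the general workflow.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Placeholder firm-level innovation indicators (the real analysis uses CIS
# 2008 microdata, which cannot be reproduced here).
rng = np.random.default_rng(0)
firms = pd.DataFrame({
    "rd_intensity":        rng.gamma(2.0, 0.02, 300),
    "new_to_market_share": rng.beta(2, 8, 300),
    "process_innovation":  rng.integers(0, 2, 300),
    "org_innovation":      rng.integers(0, 2, 300),
    "training":            rng.integers(0, 2, 300),
})

# Standardize the indicators and partition firms into four groups.
X = StandardScaler().fit_transform(firms)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(firms.assign(cluster=labels).groupby("cluster").mean())  # cluster profiles
```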