21 results for system parameter identification

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

100.00%

Publisher:

Abstract:

This research activity aims at providing a reliable estimation of particular state variables or parameters concerning the dynamics and performance optimization of a MotoGP-class motorcycle, integrating the classical model-based approach with new methodologies involving artificial intelligence. The first topic of the research focuses on the estimation of the thermal behavior of the MotoGP carbon braking system. Numerical tools are developed to assess the instantaneous surface temperature distribution in the motorcycle's front brake discs. Within this application, other important brake parameters are identified using Kalman filters, such as the disc convection coefficient and the power distribution in the disc-pads contact region. Subsequently, a physical model of the brake is built to estimate the instantaneous braking torque. However, the results obtained with this approach are highly limited by the knowledge of the friction coefficient (μ) between the disc rotor and the pads. Since the value of μ is a highly nonlinear function of many variables (namely temperature, pressure and angular velocity of the disc), an analytical model for the friction coefficient estimation appears impractical to establish. To overcome this challenge, an innovative hybrid solution is implemented, combining the benefits of artificial intelligence (AI) with the classical model-based approach. Indeed, the disc temperature estimated through the previously implemented thermal model is processed by a machine learning algorithm that outputs the actual value of the friction coefficient, thus improving the braking torque computation performed by the physical model of the brake. Finally, the last topic of this research activity concerns the development of an AI algorithm to estimate the current sideslip angle of the motorcycle's front tire. While a single-track motorcycle kinematic model and IMU accelerometer signals theoretically enable sideslip calculation, the presence of accelerometer noise leads to a significant drift over time. To address this issue, a long short-term memory (LSTM) network is implemented.
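
As an illustration only (not the thesis code), the sketch below shows the general shape of the hybrid idea described above: a physical brake-torque model whose friction coefficient is supplied by a data-driven regressor trained on disc temperature, pad pressure and disc speed. The friction law used to generate the training data, the pad area and the effective radius are hypothetical placeholders.

```python
# Illustrative sketch of a hybrid brake-torque estimator: a physical formula for the torque
# whose friction coefficient mu(T, p, omega) comes from a learned regression model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical training set: features = [disc temperature K, pad pressure Pa, disc speed rad/s]
rng = np.random.default_rng(0)
X = rng.uniform([300.0, 1e5, 10.0], [900.0, 5e6, 300.0], size=(2000, 3))
mu_true = 0.45 - 1e-4 * (X[:, 0] - 600.0) - 5e-9 * X[:, 1]      # placeholder nonlinear law
y = mu_true + rng.normal(0.0, 0.01, size=2000)                  # noisy "measured" mu

mu_model = GradientBoostingRegressor().fit(X, y)

def braking_torque(T_disc, p_pad, omega, pad_area=4e-3, r_eff=0.13, n_pads=2):
    """Torque = n_pads * mu * p * A * r_eff, with mu predicted from the learned map."""
    mu = mu_model.predict([[T_disc, p_pad, omega]])[0]
    return n_pads * mu * p_pad * pad_area * r_eff

print(braking_torque(T_disc=650.0, p_pad=2e6, omega=180.0))     # torque in N*m (illustrative)
```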

Relevance:

100.00%

Publisher:

Abstract:

The main contribution of this thesis is the proposal of novel strategies for the selection of parameters arising in variational models employed for the solution of inverse problems with data corrupted by Poisson noise. In light of the importance of using a significantly small dose of X-rays in Computed Tomography (CT), and of the consequent need for advanced reconstruction techniques to cope with the high level of noise in the data, we will focus on parameter selection principles especially for low photon counts, i.e. low-dose Computed Tomography. For completeness, since such strategies can be adopted in various scenarios where the noise in the data typically follows a Poisson distribution, we will show their performance for other applications such as photography, astronomical and microscopy imaging. More specifically, in the first part of the thesis we will focus on low-dose CT data corrupted only by Poisson noise, extending automatic selection strategies designed for Gaussian noise and improving the few existing ones for Poisson noise. The new approaches will be shown to outperform the state-of-the-art competitors, especially in the low-count regime. Moreover, we will extend the best-performing strategy to the hard task of multi-parameter selection, showing promising results. Finally, in the last part of the thesis, we will introduce the problem of material decomposition for hyperspectral CT, whose data encode how the different materials in the target attenuate X-rays in different ways depending on the energy. We will conduct a preliminary comparative study to obtain accurate material decomposition starting from a few noisy projections.
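
A minimal sketch of the kind of automatic selection rule discussed above, assuming a discrepancy-principle-style criterion for Poisson data: the regularisation weight is chosen so that the generalised Kullback-Leibler divergence of the reconstruction is close to its expected value. The TV denoiser from scikit-image is only a stand-in for the variational solver, and all numerical values are illustrative.

```python
# Illustrative parameter-selection sketch for Poisson-corrupted data (not the thesis algorithm).
import numpy as np
from skimage import data
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
clean = data.camera().astype(float) / 255.0 * 30.0            # low photon counts
noisy = rng.poisson(clean).astype(float)

def kl_divergence(y, x, eps=1e-12):
    """Generalised Kullback-Leibler divergence between counts y and estimate x."""
    return np.sum(x - y + y * np.log((y + eps) / (x + eps)))

target = noisy.size / 2.0                                      # rough expected KL for Poisson data
best = None
for w in np.geomspace(0.01, 1.0, 15):                          # candidate regularisation weights
    rec = np.clip(denoise_tv_chambolle(noisy, weight=w), 1e-6, None)
    gap = abs(kl_divergence(noisy, rec) - target)
    if best is None or gap < best[0]:
        best = (gap, w, rec)

print("selected weight:", best[1])
```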

Relevance:

90.00%

Publisher:

Abstract:

This thesis focuses on the dynamics of underactuated cable-driven parallel robots (UACDPRs), including various aspects of robotic theory and practice, such as workspace computation, parameter identification, and trajectory planning. After a brief introduction to CDPRs, UACDPR kinematic and dynamic models are analyzed, under the relevant assumption of inextensible cables. The free oscillatory motion of the end-effector (EE), which is a unique feature of underactuated mechanisms, is studied in detail, from both a kinematic and a dynamic perspective. The free (small) oscillations of the EE around equilibria are proved to be harmonic and the corresponding natural oscillation frequencies are analytically computed. UACDPR workspace computation and analysis are then performed. A new performance index is proposed for the analysis of the influence of actuator errors on cable tensions around equilibrium configurations, and a new type of workspace, called tension-error-insensitive, is defined as the set of poses that a UACDPR EE can statically attain even in the presence of actuation errors, while preserving tensions between assigned (positive) bounds. EE free oscillations are then employed to conceive a novel procedure aimed at identifying the EE inertial parameters. This approach does not require the use of force or torque measurements. Moreover, a self-calibration procedure for the experimental determination of UACDPR initial cable lengths is developed, which consequently enables the robot to automatically infer the EE initial pose at machine start-up. Lastly, trajectory planning of UACDPRs is investigated. Two alternative methods are proposed, which aim at (i) reducing EE oscillations even when model parameters are uncertain or (ii) eliminating EE oscillations in case model parameters are perfectly known. EE oscillations are reduced in real-time by dynamically scaling a nominal trajectory and filtering it with an input shaper, whereas they can be eliminated if an off-line trajectory is computed that accounts for the system internal dynamics.
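
Since the abstract mentions filtering a nominal trajectory with an input shaper, here is a minimal sketch of a standard two-impulse zero-vibration (ZV) shaper built from an estimated natural frequency and damping ratio of the EE free oscillation; the trajectory, frequency and damping values are hypothetical and this is not the thesis implementation.

```python
# Illustrative ZV input shaper applied to a nominal set-point by discrete convolution.
import numpy as np

def zv_shaper(f_n, zeta, dt):
    """Two-impulse ZV shaper for natural frequency f_n [Hz] and damping ratio zeta."""
    wd = 2.0 * np.pi * f_n * np.sqrt(1.0 - zeta**2)        # damped natural frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
    amps = np.array([1.0, K]) / (1.0 + K)                  # impulse amplitudes
    times = np.array([0.0, np.pi / wd])                    # impulse times
    shaper = np.zeros(int(round(times[1] / dt)) + 1)
    for a, ti in zip(amps, times):
        shaper[int(round(ti / dt))] += a
    return shaper

dt = 0.001
t = np.arange(0.0, 5.0, dt)
nominal = np.minimum(t / 2.0, 1.0)                         # hypothetical 2 s ramp set-point
shaped = np.convolve(nominal, zv_shaper(f_n=0.8, zeta=0.02, dt=dt))[: t.size]
print(shaped[:5], shaped[-1])
```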

Relevance:

80.00%

Publisher:

Abstract:

The weight-transfer effect, consisting of the change in dynamic load distribution between the front and the rear tractor axles, is one of the most impairing phenomena for the performance, comfort, and safety of agricultural operations. Excessive weight transfer from the front to the rear tractor axle can occur during operation or maneuvering of implements connected to the tractor through the three-point hitch (TPH). In this respect, an optimal design of the TPH can ensure better dynamic load distribution and ultimately improve operational performance, comfort, and safety. In this study, a computational design tool (the Optimizer) for the determination of a TPH geometry that minimizes the weight-transfer effect is developed. The Optimizer is based on a constrained minimization algorithm. The objective function to be minimized is related to the tractor front-to-rear axle load transfer during a simulated reference maneuver performed with a reference implement on a reference soil. Simulations are based on a three-degree-of-freedom (DOF) dynamic model of the tractor-TPH-implement aggregate. The inertial, elastic, and viscous parameters of the dynamic model were successfully determined through a parameter identification algorithm. The geometry determined by the Optimizer complies with the ISO 730 Standard functional requirements and other design requirements. The interaction between the soil and the implement during the simulated reference maneuver was successfully validated against experimental data. Simulation results show that the adopted reference maneuver is effective in triggering the weight-transfer effect, with the front axle load exhibiting a peak-to-peak value of 27.1 kN during the maneuver. A benchmark test was conducted starting from four geometries of a commercially available TPH. As a result, all the configurations were improved by more than 10%. The Optimizer, after 36 iterations, was able to find an optimized TPH geometry which reduces the weight-transfer effect by 14.9%.
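
A minimal sketch of the constrained-minimization idea behind the Optimizer, with a hypothetical surrogate in place of the 3-DOF tractor-TPH-implement simulation and simple bounds standing in for the ISO 730 and design constraints; scipy's SLSQP solver is used purely for illustration.

```python
# Illustrative constrained minimisation of a weight-transfer metric over hitch geometry parameters.
import numpy as np
from scipy.optimize import minimize

def simulate_front_axle_load(geom):
    """Placeholder surrogate: returns a fake front-axle load history [kN] for a geometry [m]."""
    t = np.linspace(0.0, 5.0, 500)
    gain = 1.0 + 0.5 * np.sum((geom - 0.4) ** 2)           # penalise distance from a made-up optimum
    return 40.0 + gain * 10.0 * np.sin(2.0 * np.pi * 0.5 * t)

def weight_transfer(geom):
    load = simulate_front_axle_load(np.asarray(geom))
    return np.ptp(load)                                     # peak-to-peak front-axle load transfer

x0 = np.array([0.35, 0.45, 0.50, 0.30])                     # initial link lengths [m], hypothetical
bounds = [(0.25, 0.60)] * 4                                 # stand-in for geometric/standard limits
res = minimize(weight_transfer, x0, method="SLSQP", bounds=bounds)
print("optimised geometry:", res.x, "peak-to-peak load [kN]:", res.fun)
```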

Relevance:

30.00%

Publisher:

Abstract:

The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of the present thesis is a distributed measurement system to be installed in Medium Voltage power networks, together with the method developed to analyze the data acquired by the measurement system itself and to monitor power quality. Chapter 2 illustrates the increasing interest in power quality in electrical systems, reporting the international research activity on the problem and the relevant standards and guidelines issued. The issue of the quality of the voltage provided by utilities and influenced by customers at the various points of a network has emerged only in recent years, in particular as a consequence of the liberalization of the energy market. Traditionally, the concept of quality of the delivered energy has been associated mostly with its continuity, so that reliability was the main characteristic to be ensured for power systems. Nowadays, the number and duration of interruptions are the "quality indicators" commonly perceived by most customers; for this reason, a short section is dedicated also to network reliability and its regulation. In this context it should be noted that, although the measurement system developed during the research activity belongs to the field of power quality evaluation systems, the information registered in real time by its remote stations can be used to improve system reliability as well. Given the vast range of power-quality-degrading phenomena that can occur in distribution networks, the study focuses on electromagnetic transients affecting line voltages. The outcome of the study is the design and realization of a distributed measurement system which continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system and must be detected before the protection equipment intervenes. An important conclusion is that the method can improve the reliability of the monitored network, since knowing the location of a fault allows the energy manager to reduce as much as possible both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study is structured as follows: chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. The state of the art concerning methods to detect and locate faults in distribution networks is then presented. Finally, attention is paid to the particular technique adopted for this purpose in the thesis and to the methods developed on the basis of that approach. Chapter 4 reports the configuration of the distribution networks on which the fault location method has been applied by means of simulations, as well as the results obtained case by case. In this way the performance of the location procedure is tested first in ideal and then in realistic operating conditions. Chapter 5 presents the measurement system designed to implement the transient detection and fault location method. The hardware belonging to the measurement chain of every acquisition channel in the remote stations is described. The global measurement system is then characterized by considering the non-ideal aspects of each device that contribute to the final combined uncertainty on the estimated position of the fault in the network under test. Finally, this parameter is computed according to the Guide to the Expression of Uncertainty in Measurement, by means of a numerical procedure. The last chapter describes a device designed and realized during the PhD activity to replace the commercial capacitive voltage divider in the conditioning block of the measurement chain. This study was carried out with the aim of providing an alternative to the transducer in use, with equivalent performance and lower cost, so that the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the application of the method much more feasible.
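
For illustration, a sketch of the basic two-ended travelling-wave relation that time-of-occurrence data from two synchronised remote stations make possible; the line length and propagation speed below are hypothetical, and this is not the specific location method developed in the thesis.

```python
# Illustrative two-ended travelling-wave fault location from transient arrival times.
line_length_m = 12_000.0          # monitored section between stations A and B (hypothetical)
wave_speed_mps = 1.8e8            # assumed propagation speed of the transient wavefront

def locate_fault(t_arrival_a, t_arrival_b):
    """Distance of the disturbance source from station A, given the two arrival times [s]."""
    delta_t = t_arrival_a - t_arrival_b
    return 0.5 * (line_length_m - wave_speed_mps * delta_t)

# Example: the wavefront reaches station A 20 microseconds after it reaches station B.
print(locate_fault(t_arrival_a=20e-6, t_arrival_b=0.0))    # ~4200 m from station A
```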

Relevance:

30.00%

Publisher:

Abstract:

In recent decades, the growth of industrial activity, the rising world demand for food and the intensification of natural-resource exploitation, all directly connected to pollution, have aroused increasing public interest in initiatives for the regulation of food production and in the establishment of modern legislation for consumer protection. This work addresses several important themes related to the marine environment, collecting and presenting the data obtained from studies on different marine species of commercial interest (Chamelea gallina, Mytilus edulis, Ostrea edulis, Crassostrea gigas, Salmo salar, Gadus morhua). These studies evaluated, from a biochemical and biomolecular point of view, the effects of variations in important physical and chemical parameters (temperature, xenobiotics such as drugs, hydrocarbons and pesticides) on the cells involved in immune defence (haemocytes), on important enzymatic systems involved in xenobiotic biotransformation (the cytochrome P450 complex) and on the related antioxidant defence processes (superoxide dismutase, catalase, heat shock proteins). Oxygen is essential in the biological response of a living organism. Its consumption in normal cellular respiration and in the biotransformation of foreign substances leads to the formation of reactive oxygen species (ROS), which are potentially toxic and responsible for damage to biological macromolecules, with a consequent worsening of pathologies. Such processes can lead to a qualitative alteration of the derived products, but also to a general state of distress that in the most serious cases can cause the death of the organism, with important economic repercussions on the output of farming, fishing and aquaculture. In this study it also seemed interesting to apply alternative methodologies currently in use in the medical field (cytofluorimetry) and in proteomic studies (two-dimensional electrophoresis, mass spectrometry), with the aim of identifying new biomarkers to place alongside the traditional methods for controlling the quality of food of animal origin. From the results, some relevant aspects of each experiment can be pointed out. 1. The cytofluorimetric techniques applied to O. edulis and C. gigas could lead to important developments in the search for alternative methods that quickly and precisely identify the origin of a specific sample, helping to counter possible food frauds, in this case related, for example, to the presence of a different species that is morphologically similar but distinct also under a qualitative profile. A concrete prospect for the application of this method in the inspection field has to be confirmed by further laboratory tests, including in vivo experiments to evaluate, in the whole organism, the effect of the factors assessed only on haemocytes in vitro. These elements therefore suggest the possibility of adapting cytofluorimetric methods to the study of animal organisms of food interest before they enter industrial processing, giving useful information about the possible presence of sources of contamination that can induce an increase in immune defence and an alteration of normal cellular parameter values. 2. The immune system of C. gallina showed an interesting, dose- and time-dependent response to benzo[a]pyrene (B[a]P) exposure, with a significant decrease of the expression and activity of one of the most important enzymes involved in antioxidant defence in haemocytes and haemolymph. The data obtained are confirmed by several measurements of physiological parameters which, together with the decrease of 7-ethoxyresorufin-O-deethylase (EROD, linked to xenobiotic biotransformation) activity during exposure, underline the main effects of B[a]P action. The identification of basal levels of EROD supports the possible presence of the CYP1A subfamily in invertebrates, which is still controversial today and had never been identified previously in C. gallina nor isolated in immune cells, as confirmed instead in this study with the identification of a CYP1A-immunopositive protein (CYP1A-IPP). This protein could prove to be a good biomarker at the basis of a simple and quick method able to give clear information about the presence of specific pollutants, even at low concentrations, in the environment where these organisms are usually fished before being commercialized. 3. In this experiment the effect of the antibiotic chloramphenicol (CA) was evaluated in an important species of commercial interest, Chamelea gallina. Chloramphenicol is a drug still used in some developing countries, also in the veterinary field. Controls to evaluate its presence in food products of animal origin can prove ineffective when its concentration is below the limit of sensitivity of the instruments usually employed in this type of analysis. The negative effects of CA on the CYP1A-IPP proteins, highlighted in this work, seem to be due to the attack of free radicals resulting from the action of the antibiotic, which leads to a significant alteration of the biotransformation mechanisms. It seems particularly interesting to pay attention to the close relationship in C. gallina between the SOD/CAT and CYP450 systems, actively involved in the detoxification mechanism, especially if compared with the few similar works available today on molluscs, a group comprising numerous species that enter the food chain and on which constant controls are necessary to evaluate quickly and effectively the presence of possible contamination. 4. The investigations on fishes (Gadus morhua and Salmo salar) and on a bivalve mollusc (Mytilus edulis) allowed the evaluation of different aspects related to the possibility of identifying a biomarker for the health of organisms of food interest, and consequently for the quality of the final product, through 2DE methodologies. In the seafood field these techniques are currently used with some success only for vertebrates (fishes), while their application to invertebrates (molluscs) still presents many difficulties. The results obtained in this work highlight several problems in the correct identification of proteins isolated from animal organisms for which a complete genomic sequence is not yet available. This forces some identities to be attributed on the basis of comparison with similar proteins in other animal groups, with the risk of obtaining inaccurate data and, above all, data discordant with those obtained on the same animals by other authors. Nevertheless, the data obtained in this work after MALDI-ToF analysis are objective, and the collected spectra could be analyzed again in the future once the genomic databases for the studied species are updated. 4-A. The investigation of the presence of HSP70 isoforms directly induced by different stress phenomena, such as the presence of B[a]P, used two-dimensional electrophoresis methods in C. gallina, which allowed numerous proteins to be isolated on 2DE gels and several spots to be collected, currently under analysis by MALDI-ToF-MS. The present preliminary work has therefore allowed important methodologies for the study of cellular parameters and for proteomics to be acquired and improved; these have proved of great potential not only for applications in the medical and veterinary fields, but also in food inspection, with connections to toxicology and environmental pollution. This study thus contributes to the search for new and rapid methodologies that can strengthen inspection strategies, integrating with the existing ones while improving the general background of information on the state of health of the animal organism considered, with the still hypothetical possibility of replacing, in particular cases, the traditional techniques employed in the food field.

Relevance:

30.00%

Publisher:

Abstract:

The treatment of Cerebral Palsy (CP) is considered the "core problem" of the whole field of pediatric rehabilitation. The reason why this pathology has such a primary role can be ascribed to two main aspects. First of all, CP is the most frequent form of disability in childhood (one new case per 500 live births (1)); secondly, the functional recovery of the "spastic" child is, historically, the clinical field in which the majority of therapeutic methods and techniques (physiotherapy, orthotics, pharmacology, orthopedic surgery, neurosurgery) were first applied and tested. The currently accepted definition of CP – a group of disorders of the development of movement and posture causing activity limitation (2) – is the result of a recent update by the World Health Organization to the language of the International Classification of Functioning, Disability and Health, from the original proposal of Ingram – a persistent but not unchangeable disorder of posture and movement – dated 1955 (3). This definition considers CP a permanent ailment, i.e. a "fixed" condition, which however can be modified both functionally and structurally by means of the child's spontaneous evolution and of the treatments carried out during childhood. The lesion that causes the palsy occurs in a structurally immature brain in the pre-, peri- or post-natal period (but only during the first months of life). The most frequent causes of CP are: prematurity, insufficient cerebral perfusion, arterial haemorrhage, venous infarction, hypoxia of various origins (for example from the ingestion of amniotic fluid), malnutrition, infection and maternal or fetal poisoning. In addition to these causes, traumas and malformations have to be included. The lesion, whether focal or spread over the nervous system, impairs the functioning of the whole Central Nervous System (CNS). As a consequence, it affects the construction of the adaptive functions (4), first of all posture control, locomotion and manipulation. The palsy itself does not vary over time; however, it assumes an unavoidable "evolutionary" character when, during growth, the child is required to meet new and different needs through the construction of new and different functions. It is essential to consider that clinically CP is not only a direct expression of structural impairment, that is, of etiology, pathogenesis and lesion timing, but it is mainly the manifestation of the path followed by the CNS to "re"-construct the adaptive functions "despite" the presence of the damage. "Palsy" is "the form of the function that is implemented by an individual whose CNS has been damaged in order to satisfy the demands coming from the environment" (4). Therefore it is only possible to establish general relations between lesion site, nature and size on the one hand, and palsy and recovery processes on the other. It is quite common to observe that children with very similar neuroimaging can have very different clinical manifestations of CP and, conversely, children with very similar motor behaviors can have completely different lesion histories. A very clear example of this is represented by the hemiplegic forms, which show bilateral hemispheric lesions in a high percentage of cases. The first section of this thesis is aimed at guiding the interpretation of CP. First of all, the issue of the detection of the palsy is treated from a historical viewpoint. Then, an extended analysis of the current, internationally accepted definition of CP is provided. The definition is examined in terms of a space dimension and of a time dimension, and it is highlighted where this definition is unacceptably lacking. The last part of the first section further stresses the importance of shifting from the traditional concept of CP as a palsy of development (defect analysis) towards the notion of development of palsy, i.e., the product of the relationship that the individual nevertheless tries to dynamically build with the surrounding environment (resource semeiotics), starting and growing from a different availability of resources, needs, dreams, rights and duties (4). In the scientific and clinical community no common classification system of CP has so far been universally accepted. Besides, no standard operative method or technique has been acknowledged to effectively assess the different disabilities and impairments exhibited by children with CP. CP is still "an artificial concept, comprising several causes and clinical syndromes that have been grouped together for a convenience of management" (5). The lack of standard and common protocols able to effectively diagnose the palsy, and as a consequence to establish specific treatments and prognoses, is mainly due to the difficulty of raising this field to a level based on scientific evidence. A solution aimed at overcoming the currently incomplete treatment of CP children is represented by the systematic clinical adoption of objective tools able to measure motor defects and movement impairments. A widespread application of reliable instruments and techniques able to objectively evaluate both the form of the palsy (diagnosis) and the efficacy of the treatments provided (prognosis) constitutes a valuable method for validating care protocols, establishing the efficacy of classification systems and assessing the validity of definitions. Since the '80s, instruments specifically oriented to the analysis of human movement have been advantageously designed and applied in the context of CP with the aim of measuring motor deficits and, especially, gait deviations. Gait analysis (GA) has been increasingly used over the years to assess, analyze, classify, and support the process of clinical decision making, allowing for a complete investigation of gait with increased temporal and spatial resolution. GA has provided a basis for improving the outcome of surgical and non-surgical treatments and for introducing a new modus operandi in the identification of defects and functional adaptations to musculoskeletal disorders. Historically, the first laboratories set up for gait analysis developed their own protocols (sets of procedures for data collection and data reduction) independently, according to the performance of the technologies available at that time. In particular, stereophotogrammetric systems, mainly based on optoelectronic technology, soon became a gold standard for motion analysis and have been successfully applied especially for scientific purposes. Nowadays optoelectronic systems have significantly improved their performance in terms of spatial and temporal resolution; however, many laboratories continue to use protocols designed on the technology available in the '70s and now out of date. Furthermore, these protocols are not consistent either in the biomechanical models or in the adopted collection procedures. In spite of these differences, GA data are shared, exchanged and interpreted irrespective of the adopted protocol, without full awareness of the extent to which these protocols are compatible and comparable with each other. Following the extraordinary advances in computer science and electronics, new systems for GA, no longer based on optoelectronic technology, are now becoming available. They are the Inertial and Magnetic Measurement Systems (IMMSs), based on miniature MEMS (microelectromechanical systems) inertial sensor technology. These are cost-effective, wearable and fully portable motion analysis systems; these features give IMMSs the potential to be used outside specialized laboratories and to collect consecutive series of tens of gait cycles. The recognition and selection of the most representative gait cycle is then easier and more reliable, especially in CP children, considering their considerable gait cycle variability. The second section of this thesis is focused on GA. In particular, it is firstly aimed at examining the differences among the five most representative GA protocols in order to assess the state of the art with respect to inter-protocol variability. The design of a new protocol is then proposed and presented, with the aim of performing gait analysis on CP children by means of an IMMS. The protocol, named 'Outwalk', contains original and innovative solutions oriented towards obtaining joint kinematics with calibration procedures that are extremely comfortable for the patients. The results of a first in-vivo validation of Outwalk on healthy subjects are then provided. In particular, this study was carried out by comparing Outwalk, used in combination with an IMMS, with a reference protocol and an optoelectronic system. In order to set up a more accurate and precise comparison of the systems and the protocols, ad hoc methods were designed and an original formulation of the statistical parameter known as the coefficient of multiple correlation was developed and effectively applied. On the basis of the experimental design proposed for the validation on healthy subjects, a first assessment of Outwalk, together with an IMMS, was also carried out on CP children. The third section of this thesis is dedicated to the treatment of walking in CP children. Commonly prescribed treatments addressing gait abnormalities in CP children include physical therapy, surgery (orthopedic and rhizotomy), and orthoses. The orthotic approach is conservative, being reversible, and widespread in many therapeutic regimes. Orthoses are used to improve the gait of children with CP by preventing deformities, controlling joint position, and offering an effective lever for the ankle joint. Orthoses are prescribed with the additional aims of increasing walking speed, improving stability, preventing stumbling, and decreasing muscular fatigue. Ankle-foot orthoses (AFOs) with a rigid ankle are primarily designed to prevent equinus and other foot deformities, with a positive effect also on more proximal joints. However, AFOs prevent the natural excursion of the tibio-tarsal joint during the second rocker, hence hampering the natural forward lean and progression of the whole body under the effect of inertia (6). A new modular (submalleolar) astragalus-calcanear orthosis, named OMAC, has recently been proposed with the intention of replacing the prescription of AFOs in those CP children exhibiting a flat and valgus-pronated foot. The aim of this section is thus to present the mechanical and technical features of the OMAC by means of an accurate description of the device. In particular, the full text of the deposited Italian patent is provided. A preliminary validation of OMAC with respect to the AFO is also reported, as resulting from an experimental campaign on diplegic CP children over a three-month period, aimed at quantitatively assessing the benefit provided by the two orthoses on walking and at qualitatively evaluating the changes in quality of life and motor abilities. As already stated, CP is universally considered a persistent but not unchangeable disorder of posture and movement. In contrast to this definition, some clinicians (4) have recently pointed out that movement disorders may be primarily caused by the presence of perceptive disorders, where perception is not merely the acquisition of sensory information, but an active process aimed at guiding the execution of movements through the integration of sensory information properly representing the state of one's body and of the environment. Children with perceptive impairments show an overall fear of moving and the onset of strongly unnatural walking schemes directly caused by the presence of perceptive system disorders. The fourth section of the thesis thus deals with accurately defining the perceptive impairment exhibited by diplegic CP children. A detailed description of the clinical signs revealing the presence of the perceptive impairment and a classification scheme of the clinical aspects of perceptual disorders are provided. In the end, a functional reaching test is proposed as an instrumental test able to disclose the perceptive impairment. References: 1. Prevalence and characteristics of children with cerebral palsy in Europe. Dev Med Child Neurol. 2002 Sep;44(9):633-640. 2. Bax M, Goldstein M, Rosenbaum P, Leviton A, Paneth N, Dan B, et al. Proposed definition and classification of cerebral palsy, April 2005. Dev Med Child Neurol. 2005 Aug;47(8):571-576. 3. Ingram TT. A study of cerebral palsy in the childhood population of Edinburgh. Arch Dis Child. 1955 Apr;30(150):85-98. 4. Ferrari A, Cioni G. The spastic forms of cerebral palsy: a guide to the assessment of adaptive functions. Milan: Springer; 2009. 5. Olney SJ, Wright MJ. Cerebral Palsy. In: Campbell S, et al. Physical Therapy for Children. 2nd ed. Philadelphia: Saunders; 2000: 533-570. 6. Desloovere K, Molenaers G, Van Gestel L, Huenaerts C, Van Campenhout A, Callewaert B, et al. How can push-off be preserved during use of an ankle foot orthosis in children with hemiplegia? A prospective controlled study. Gait Posture. 2006 Oct;24(2):142-151.
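
For background on the statistical parameter mentioned above, a sketch of the standard coefficient of multiple correlation (CMC) for a set of gait waveforms is given below; the thesis develops its own original formulation, which is not reproduced here, and the cycles used in the example are synthetic.

```python
# Standard CMC of M waveforms (rows = gait cycles, columns = time-normalised samples).
import numpy as np

def cmc(waveforms):
    """Coefficient of multiple correlation of M waveforms sampled at N points each."""
    Y = np.asarray(waveforms, dtype=float)
    m, n = Y.shape
    grand_mean = Y.mean()
    frame_mean = Y.mean(axis=0)                              # mean curve across cycles
    num = np.sum((Y - frame_mean) ** 2) / (n * (m - 1))      # within-cycle dispersion
    den = np.sum((Y - grand_mean) ** 2) / (m * n - 1)        # total dispersion
    return np.sqrt(1.0 - num / den)

rng = np.random.default_rng(0)
cycles = np.vstack([np.sin(np.linspace(0, 2 * np.pi, 101)) + 0.05 * rng.standard_normal(101)
                    for _ in range(5)])
print(cmc(cycles))      # close to 1 for highly repeatable curves
```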

Relevance:

30.00%

Publisher:

Abstract:

The ALICE experiment at the LHC has been designed to cope with the experimental conditions and observables of a Quark-Gluon Plasma reaction. One of the main assets of the ALICE experiment with respect to the other LHC experiments is its particle identification capability. The large Time-Of-Flight (TOF) detector is the main particle identification detector of the ALICE experiment. The overall time resolution, better than 80 ps, allows particle identification over a large momentum range (up to 2.5 GeV/c for pi/K and 4 GeV/c for K/p). The TOF makes use of the Multi-gap Resistive Plate Chamber (MRPC), a detector with high efficiency, fast response and an intrinsic time resolution better than 40 ps. The TOF detector embeds a highly segmented trigger system that exploits the fast rise time and the relatively low noise of the MRPC strips in order to identify several event topologies. This work aims to provide a detailed description of the TOF trigger system. The results achieved in the 2009 cosmic-ray run at CERN are presented to show the performance and readiness of the TOF trigger system. The proposed trigger configurations for the proton-proton and Pb-Pb beams are detailed as well, together with estimates of the efficiencies and sample purities.
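
A back-of-the-envelope sketch of why a resolution of about 80 ps allows pi/K separation up to roughly 2.5 GeV/c: over a track length L, the time of flight is t = (L/c)·sqrt(1 + (mc/p)²). The ~3.7 m track length assumed below is an illustrative figure, not a number quoted in the abstract.

```python
# Illustrative pi/K time-of-flight separation versus momentum.
import math

C = 299_792_458.0              # speed of light [m/s]
M_PI, M_K = 0.13957, 0.49368   # particle masses [GeV/c^2]

def tof_ps(p_gev, mass_gev, length_m=3.7):
    """Time of flight [ps] for a particle of given momentum and mass over length_m."""
    return length_m / C * math.sqrt(1.0 + (mass_gev / p_gev) ** 2) * 1e12

for p in (1.0, 2.5, 4.0):
    dt = tof_ps(p, M_K) - tof_ps(p, M_PI)
    print(f"p = {p} GeV/c: Delta t(pi/K) = {dt:.0f} ps  (~{dt / 80:.1f} sigma at 80 ps)")
```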

Relevance:

30.00%

Publisher:

Abstract:

Among the experimental methods commonly used to characterize the behaviour of a full-scale system, dynamic tests are the most complete and efficient procedures. A dynamic test is an experimental process which defines a set of characteristic parameters of the dynamic behaviour of the system, such as the natural frequencies of the structure, the mode shapes and the associated modal damping values. An assessment of these modal characteristics can be used both to verify the theoretical assumptions of the design and to monitor the performance of the structural system during its operational use. The thesis is structured in the following chapters. The first, introductory chapter recalls some basic notions of structural dynamics, focusing the discussion on systems with multiple degrees of freedom (MDOF), which can represent a generic real system under study when it is excited by a harmonic force or in free vibration. The second chapter is entirely centred on the dynamic identification of a structure subjected to an experimental test in forced vibration. It first describes the construction of the FRF through the classical FFT of the recorded signal. A different method, also in the frequency domain, is subsequently introduced; it allows the FRF to be computed accurately using the geometric characteristics of the ellipse that represents the direct input-output comparison. The two methods are compared and the attention is then focused on some advantages of the proposed methodology. The third chapter focuses on the study of real structures subjected to experimental tests in which the force is not known, as in ambient or impact tests. In this analysis the continuous wavelet transform (CWT) is used, which allows a simultaneous investigation of a generic signal x(t) in the time and frequency domains. The CWT is first introduced to process free oscillations, with excellent results in terms of frequencies, damping and vibration modes. Its application to ambient vibrations yields accurate modal parameters of the system, although some important observations have to be made concerning the damping. The fourth chapter still deals with the post-processing of data acquired after a vibration test, but this time through the application of the discrete wavelet transform (DWT). In the first part the results obtained with the DWT are compared with those obtained with the CWT. Particular attention is given to the use of the DWT as a tool for filtering the recorded signal, since in the case of ambient vibrations the signals are often affected by a significant level of noise. The fifth chapter focuses on another important aspect of the identification process: model updating. In this chapter, starting from the modal parameters obtained from ambient vibration tests on the Humber Bridge in England, performed by the University of Porto in 2008 and by the University of Sheffield, a FE model of the bridge is developed, in order to establish what type of model is able to capture the real dynamic behaviour of the bridge most accurately. The sixth chapter draws the conclusions of the presented research. They concern the application of a frequency-domain method for evaluating the modal parameters of a structure and its advantages, the advantages of applying a procedure based on wavelet transforms in the identification process for tests with unknown input, and finally the problem of 3D modeling of systems with many degrees of freedom and with different types of uncertainty.
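
A minimal sketch (not the thesis procedure) of CWT-based identification of frequency and damping from a free-decay signal, using the complex Morlet wavelet from the PyWavelets package: the ridge of |CWT| gives the natural frequency and the decay of the ridge amplitude gives the damping. The signal and all settings below are synthetic.

```python
# Illustrative CWT identification of frequency and damping from a synthetic free decay.
import numpy as np
import pywt

fs = 200.0
t = np.arange(0.0, 10.0, 1.0 / fs)
f_true, zeta_true = 3.0, 0.02
x = np.exp(-zeta_true * 2*np.pi*f_true * t) * np.cos(2*np.pi*f_true*np.sqrt(1 - zeta_true**2)*t)

scales = np.arange(10, 200)
coeffs, freqs = pywt.cwt(x, scales, "cmor1.5-1.0", sampling_period=1.0 / fs)

ridge = np.abs(coeffs).max(axis=1).argmax()                 # scale with the strongest ridge
f_id = freqs[ridge]
amp = np.abs(coeffs[ridge])
i0, i1 = 200, 1600                                          # discard edge effects
sigma = -np.polyfit(t[i0:i1], np.log(amp[i0:i1]), 1)[0]     # decay rate sigma = zeta * omega_n
zeta_id = sigma / (2 * np.pi * f_id)
print(f"identified f = {f_id:.2f} Hz, zeta = {zeta_id:.3f}")
```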

Relevance:

30.00%

Publisher:

Abstract:

Flicker is a power quality phenomenon referring to the cyclic instability of light intensity resulting from supply voltage fluctuations, which in turn can be caused by disturbances introduced during power generation, transmission or distribution. The standard EN 61000-4-15, which has recently been adopted also by the IEEE as IEEE Standard 1453, relies on the analysis of the supply voltage, which is processed according to a suitable model of the lamp – human eye – brain chain. As for the lamp, an incandescent 60 W, 230 V, 50 Hz source is assumed. As far as the human eye – brain model is concerned, it is represented by the so-called flicker curve. Such a curve was determined several years ago by statistically analyzing the results of tests in which people were subjected to flicker with different combinations of magnitude and frequency. The limitations of this standard approach to flicker evaluation are essentially two. First, the provided annoyance index Pst can be related to actual fatigue of the human visual system only if such an incandescent lamp is used. Moreover, the implemented response to flicker is "subjective", given that it relies on people's answers about their perceptions. In the last 15 years, many scientific contributions have tackled these issues by investigating the possibility of developing a novel model of the eye-brain response to flicker and of overcoming the strict dependence of the standard on the kind of light source. In this context, this thesis presents an important contribution towards a new flickermeter. An improved visual system model using a physiological parameter, namely the mean value of the pupil diameter, is presented, thus allowing a more "objective" representation of the response to flicker to be obtained. The system used both to generate flicker and to measure the pupil diameter is illustrated, along with the results of several experiments performed on volunteers. The intent has been to demonstrate that the measurement of this geometrical parameter can give reliable information about the response of the human visual system to light flicker.
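
For illustration, the kind of stimulus a flicker test works with: a 230 V / 50 Hz supply waveform with a small sinusoidal amplitude modulation, whose relative depth and frequency are the two quantities the flicker curve trades off. The values below are examples, not those used in the thesis experiments.

```python
# Illustrative amplitude-modulated supply waveform used as a flicker test stimulus.
import numpy as np

def flicker_voltage(t, dv_over_v=0.01, f_mod=8.8, v_rms=230.0, f_line=50.0):
    """Supply voltage with relative fluctuation dv_over_v at modulation frequency f_mod [Hz]."""
    envelope = 1.0 + 0.5 * dv_over_v * np.sin(2 * np.pi * f_mod * t)
    return np.sqrt(2.0) * v_rms * envelope * np.sin(2 * np.pi * f_line * t)

t = np.arange(0.0, 1.0, 1.0 / 10_000)
v = flicker_voltage(t)      # 8.8 Hz lies near the maximum sensitivity of the flicker curve
print(v[:3])
```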

Relevance:

30.00%

Publisher:

Abstract:

Hydrothermal fluids are a fundamental resource for understanding and monitoring volcanic and non-volcanic systems. This thesis is focused on the study of hydrothermal systems through numerical modeling with the geothermal simulator TOUGH2. Several simulations are presented, and the geophysical and geochemical observables arising from fluid circulation are analyzed in detail throughout the thesis. In a volcanic setting, the fluids feeding fumaroles and hot springs may play a key role in hazard evaluation. The evolution of fluid circulation is caused by a strong interaction between the magmatic and hydrothermal systems. A simultaneous analysis of different geophysical and geochemical observables is a sound approach for interpreting monitored data and inferring a consistent conceptual model. The analyzed observables are ground displacement, gravity changes, electrical conductivity, the amount, composition and temperature of the gases emitted at the surface, and the extent of the degassing area. Results highlight the different temporal responses of the considered observables, as well as their different radial patterns of variation. However, the magnitude, temporal response and radial pattern of these signals depend not only on the evolution of fluid circulation; a major role is played by the assumed rock properties. Numerical simulations highlight the differences that arise from the assumption of different permeabilities, for both homogeneous and heterogeneous systems. Rock properties affect hydrothermal fluid circulation, controlling both the range of variation and the temporal evolution of the observable signals. Low-temperature fumaroles and low discharge rates may be affected by atmospheric conditions. Detailed parametric simulations were performed, aimed at understanding the effects of system properties, such as permeability and gas reservoir overpressure, on diffuse degassing when air temperature and barometric pressure changes are applied at the ground surface. Hydrothermal circulation, however, is not only a characteristic of volcanic systems. Hot fluids may be involved in several problems of practical interest, such as geothermal engineering, nuclear waste propagation in porous media, and Geological Carbon Sequestration (GCS). The current concept for large-scale GCS is the direct injection of supercritical carbon dioxide into deep geological formations which typically contain brine. Upward displacement of such brine from deep reservoirs, driven by the pressure increase resulting from carbon dioxide injection, may occur through abandoned wells, permeable faults or permeable channels. Brine intrusion into aquifers may degrade groundwater resources. Numerical results show that the pressure rise drives the dense water up the conduits, but does not necessarily result in continuous flow. Rather, the overpressure leads to a new hydrostatic equilibrium if the fluids are initially density-stratified. If the warm and salty fluid does not cool while passing through the conduit, an oscillatory solution is possible. Parameter studies delineate steady-state (static) and oscillatory solutions.

Relevance:

30.00%

Publisher:

Abstract:

The term Congenital Nystagmus (Early Onset Nystagmus or Infantile Nystagmus Syndrome) refers to a pathology characterised by an involuntary movement of the eyes, which often seriously reduces a subject's vision. Congenital Nystagmus (CN) is a specific kind of nystagmus within the wider classification of infantile nystagmus, which can best be recognized and classified by means of a combination of clinical investigations and motility analysis; in some cases, eye movement recording and analysis are indispensable for diagnosis. However, the interpretation of eye movement recordings still lacks complete reliability; hence new analysis techniques and the precise identification of concise parameters directly related to visual acuity are necessary to further support physicians' decisions. To this aim, an index computed from eye movement recordings and related to the visual acuity of a subject is proposed in this thesis. This estimator is based on two parameters: the time spent by a subject effectively viewing a target (foveation time, Tf) and the standard deviation of eye position (SDp). Moreover, since previous studies have shown that visual acuity largely depends on SDp, a data collection pilot study was also conducted with the purpose of specifically identifying a possible slow rhythmic component in the eye position and of characterising the SDp in more detail. The results are presented in this thesis. In addition, some oculomotor system models are reviewed and a new approach to those models, i.e. the recovery of periodic orbits of the oculomotor system in patients with CN, is tested on real patient data. In conclusion, the results obtained within this research make it possible to completely and reliably characterise the slow rhythmic component sometimes present in eye position recordings of CN subjects and to better classify the different kinds of CN waveforms. These findings can successfully support clinicians in therapy planning and treatment outcome evaluation.
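
A minimal sketch of how the two quantities underlying the proposed estimator, the foveation time Tf and the standard deviation of eye position SDp, could be computed from a sampled eye-position trace; the position and velocity thresholds and the synthetic trace are illustrative, not the thesis values.

```python
# Illustrative computation of foveation time and eye-position standard deviation.
import numpy as np

def foveation_parameters(position_deg, fs, pos_window=0.5, vel_limit=10.0):
    """Return (Tf in seconds, SDp in degrees); the fixation target is assumed at 0 deg."""
    velocity = np.gradient(position_deg) * fs                               # deg/s
    foveating = (np.abs(position_deg) < pos_window) & (np.abs(velocity) < vel_limit)
    tf = foveating.sum() / fs
    sdp = position_deg.std()
    return tf, sdp

fs = 500.0
t = np.arange(0.0, 2.0, 1.0 / fs)
trace = 2.0 * ((3.0 * t) % 1.0) - 1.0          # toy 3 Hz sawtooth standing in for a CN waveform
print(foveation_parameters(trace, fs))
```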

Relevance:

30.00%

Publisher:

Abstract:

The hierarchical organisation of biological systems plays a crucial role in the pattern formation of gene expression resulting from morphogenetic processes, where the autonomous internal dynamics of cells, as well as cell-to-cell interactions through membranes, are responsible for the emergent structures peculiar to the individual phenotype. Being able to reproduce the system dynamics at the different levels of such a hierarchy might be very useful for studying such a complex phenomenon of self-organisation. The idea is to model the phenomenon in terms of a large and dynamic network of compartments, where the interplay between inter-compartment and intra-compartment events determines the emergent behaviour resulting in the formation of spatial patterns. According to these premises, the thesis proposes a review of the different approaches already developed for modelling developmental biology problems, as well as of the main models and infrastructures available in the literature for modelling biological systems, analysing their capabilities in tackling multi-compartment / multi-level models. The thesis then introduces a practical framework, MS-BioNET, for modelling and simulating these scenarios exploiting the potential of multi-level dynamics. This is based on (i) a computational model featuring networks of compartments and an enhanced model of chemical reactions addressing molecule transfer, (ii) a logic-oriented language to flexibly specify complex simulation scenarios, and (iii) a simulation engine based on the many-species/many-channels optimised version of Gillespie's direct method. The thesis finally proposes the adoption of the agent-based model as an approach capable of capturing multi-level dynamics. To overcome the problem of parameter tuning in the model, the simulators are supplied with a module for parameter optimisation. The task is defined as an optimisation problem over the parameter space in which the objective function to be minimised is the distance between the output of the simulator and a target one. The problem is tackled with a metaheuristic algorithm. As an example of application of the MS-BioNET framework and of the agent-based model, a model of the first stages of Drosophila melanogaster development is realised. The goal of the model is to generate the early spatial pattern of gap gene expression. The correctness of the models is shown by comparing the simulation results with real gene expression data with spatial and temporal resolution, acquired from freely available on-line sources.
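
For reference, a minimal sketch of the standard single-compartment Gillespie direct method, of which the simulation engine described above uses a many-species/many-channels optimised variant; the toy reaction network and its rate constants are purely illustrative.

```python
# Standard Gillespie direct method (stochastic simulation algorithm) on a toy network.
import numpy as np

def gillespie_direct(x0, stoich, rate_fns, t_end, seed=0):
    rng = np.random.default_rng(seed)
    t, x = 0.0, np.array(x0, dtype=int)
    history = [(t, x.copy())]
    while t < t_end:
        a = np.array([f(x) for f in rate_fns])       # propensity of each reaction channel
        a0 = a.sum()
        if a0 == 0.0:
            break                                     # no reaction can fire any more
        t += rng.exponential(1.0 / a0)                # time to the next event
        j = rng.choice(len(a), p=a / a0)              # which channel fires
        x = x + stoich[j]
        history.append((t, x.copy()))
    return history

stoich = np.array([[-1, +1],      # A -> B
                   [0, -1]])      # B -> (degradation)
rates = [lambda x: 0.5 * x[0], lambda x: 0.1 * x[1]]
print(gillespie_direct([100, 0], stoich, rates, t_end=20.0)[-1])
```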

Relevance:

30.00%

Publisher:

Abstract:

With life expectancies increasing around the world, populations are ageing and neurodegenerative diseases have become a global issue. For this reason we have focused our attention on the two most important neurodegenerative diseases: Parkinson's and Alzheimer's. Parkinson's disease (PD) is a chronic, progressive neurodegenerative movement disorder of multi-factorial origin. Environmental toxins as well as agricultural chemicals have been associated with PD. It has been observed that N/OFQ contributes to both the neurotoxicity and the symptoms associated with PD, and that pronociceptin gene expression is up-regulated in the rat SN in 6-OHDA- and MPP-induced experimental parkinsonism. First, we investigated the role of the N/OFQ-NOP system in the pathogenesis of PD in an animal model developed using PQ and/or MB. We then studied Alzheimer's disease (AD). This disorder is defined as a progressive neurologic disease of the brain leading to the irreversible loss of neurons and the loss of intellectual abilities, including memory and reasoning, which become severe enough to impede social or occupational functioning. Effective biomarker tests could prevent such devastating damage from occurring. We used the peripheral blood cells of AD-discordant monozygotic twins in the search for peripheral markers which could reflect the pathology within the brain, and also support the hypothesis that PBMCs might be a useful model of epigenetic gene regulation in the brain. We investigated the mRNA levels of several genes involved in AD pathogenesis, as well as DNA methylation by MSP Real-Time PCR. Finally, by Western blotting we assessed the immunoreactivity levels for histone modifications. Our results support the idea that epigenetic changes assessed in PBMCs can also be useful in neurodegenerative disorders like AD and PD, enabling the identification of new biomarkers in order to develop early diagnostic programs.

Relevance:

30.00%

Publisher:

Abstract:

This thesis proposes design methods and test tools for optical systems which may be used in an industrial environment, where not only precision and reliability but also ease of use is important. The approach to the problem has been conceived to be as general as possible, although in the present work the design of a portable device for automatic identification applications has been studied, because this doctorate has been funded by Datalogic Scanning Group s.r.l., a world-class producer of barcode readers. The main functional components of the complete device are the electro-optical imaging, illumination and pattern generator systems. As far as the electro-optical imaging system is concerned, a characterization tool and an analysis tool have been developed to check whether the desired performance of the system has been achieved. Moreover, two design tools for optimizing the imaging system have been implemented. The first optimizes just the core of the system, the optical part, improving its performance while ignoring all other contributions, and generates a good starting point for the optimization of the whole complex system. The second tool optimizes the system taking into account its behavior with a model as close as possible to reality, including optics, electronics and detection. As far as the illumination and pattern generator systems are concerned, two tools have been implemented. The first allows the design of free-form lenses, described by an arbitrary analytical function and excited by an incoherent source, and is able to provide custom illumination conditions for all kinds of applications. The second tool consists of a new method to design Diffractive Optical Elements excited by a coherent source for large pattern angles, using the Iterative Fourier Transform Algorithm. Validation of the design tools has been obtained, whenever possible, by comparing the performance of the designed systems with that of fabricated prototypes. In other cases simulations have been used.
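
A minimal sketch of a plain Iterative Fourier Transform Algorithm (Gerchberg-Saxton style) for a phase-only diffractive element, iterating between the element plane and the far field; the target pattern and iteration count are illustrative, and the thesis method additionally handles large pattern angles, which this sketch does not.

```python
# Illustrative IFTA for a phase-only diffractive optical element.
import numpy as np

n = 256
target = np.zeros((n, n))
target[96:160, 96:160] = 1.0                         # hypothetical square far-field pattern
target /= np.sqrt((target ** 2).sum())               # normalise target amplitude

rng = np.random.default_rng(0)
field = np.exp(1j * 2 * np.pi * rng.random((n, n)))  # random starting phase, unit amplitude
for _ in range(100):
    far = np.fft.fft2(field)
    far = target * np.exp(1j * np.angle(far))         # impose the target amplitude, keep phase
    field = np.fft.ifft2(far)
    field = np.exp(1j * np.angle(field))               # phase-only element: force unit amplitude

doe_phase = np.angle(field)                             # the designed phase profile in radians
print(doe_phase.shape, float(doe_phase.min()), float(doe_phase.max()))
```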