20 results for Software-based techniques
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
This work deals with the development of calibration procedures and control systems to improve the performance and efficiency of modern spark ignition turbocharged engines. The algorithms developed are used to optimize and manage the spark advance and the air-to-fuel ratio, in order to control knock and the exhaust gas temperature at the turbine inlet. The work described falls within the activity that the research group started in previous years with the industrial partner Ferrari S.p.A. The first chapter deals with the development of a control-oriented engine simulator based on a neural network approach, with which the main combustion indexes can be simulated. The second chapter deals with the development of a procedure to calibrate the spark advance and the air-to-fuel ratio offline, so that the engine runs under knock-limited conditions and with the maximum admissible exhaust gas temperature at the turbine inlet. This procedure is then converted into a model-based control system and validated with a Software-in-the-Loop approach using the engine simulator developed in the first chapter. Finally, it is implemented on rapid control prototyping hardware to manage the combustion in steady-state and transient operating conditions at the test bench. The third chapter deals with the study of an innovative and cheap sensor for in-cylinder pressure measurement: a piezoelectric washer that can be installed between the spark plug and the engine head. The signal generated by this kind of sensor is studied, and a specific algorithm is developed to adjust the value of the knock index in real time. Finally, using the engine simulator developed in the first chapter, it is demonstrated that the innovative sensor can be coupled with the control system described in the second chapter and that the resulting performance can match that achievable with standard in-cylinder pressure sensors.
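As a rough illustration of the control-oriented, neural-network-based simulator idea, a small feed-forward regressor mapping engine operating conditions to combustion indexes could be sketched as below; the input features, network size and training data are hypothetical and are not the thesis's actual model.

```python
# Minimal sketch of a control-oriented combustion-index simulator based on a
# feed-forward neural network. Feature names, network size and data are
# hypothetical; the thesis's actual model and training data are not reproduced.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Inputs: engine speed [rpm], load, spark advance [deg], lambda (air-to-fuel ratio index)
X_train = np.array([
    [2000, 0.5, 10.0, 1.00],
    [3000, 0.7, 14.0, 0.95],
    [4000, 0.9, 18.0, 0.90],
    [5000, 1.0, 20.0, 0.85],
])
# Outputs: combustion indexes, e.g. MFB50 [deg ATDC] and a knock index
y_train = np.array([
    [12.0, 0.2],
    [10.5, 0.5],
    [9.0, 1.1],
    [8.0, 1.8],
])

simulator = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0),
)
simulator.fit(X_train, y_train)

# Query the simulator at a new operating point (Software-in-the-Loop style use)
print(simulator.predict([[3500, 0.8, 16.0, 0.92]]))
```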
Abstract:
A High-Performance Computing (HPC) job dispatcher is a critical piece of software that assigns the finite computing resources to submitted jobs. This resource assignment over time is known as the on-line job dispatching problem in HPC systems. Because the problem is on-line, solutions must be computed in real time, and the time required to compute them must not exceed a threshold beyond which normal system operation would be affected. In addition, a job dispatcher must deal with considerable uncertainty about submission times, the number of requested resources, and job durations. Heuristic-based techniques have been widely used in HPC systems because they produce solutions in a short time, at the cost of sub-optimality. Moreover, their scheduling and resource allocation components are kept separate, which yields decoupled decisions that may cause a performance loss. Optimization-based techniques are less used for this problem, although they can significantly improve the performance of HPC systems at the expense of higher computation time. Nowadays, HPC systems are being used for modern applications, such as big data analytics and predictive model building, that in general employ many short jobs. However, this information is unknown at dispatching time, and job dispatchers need to process large numbers of such jobs quickly while ensuring high Quality-of-Service (QoS) levels. Constraint Programming (CP) has been shown to be an effective approach to tackle job dispatching problems. However, state-of-the-art CP-based job dispatchers are unable to satisfy the challenges of on-line dispatching, such as generating dispatching decisions within a short time and integrating current and past information about the hosting system. For these reasons, we propose CP-based dispatchers that are more suitable for HPC systems running modern applications, generating on-line dispatching decisions within an acceptable time and making effective use of job duration predictions to improve QoS levels, especially for workloads dominated by short jobs.
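To make the CP formulation concrete, here is a minimal, hypothetical sketch of a dispatching model (interval variables plus a cumulative resource constraint) written with Google OR-Tools CP-SAT; the workload, node capacity and makespan objective are illustrative and not taken from the thesis.

```python
# Toy CP model for job dispatching: each job is an interval variable, jobs
# share a finite pool of nodes (cumulative constraint), and the objective is
# to minimize the makespan. Data and objective are illustrative only.
from ortools.sat.python import cp_model

jobs = [  # (duration, requested_nodes) -- hypothetical workload
    (3, 2), (5, 1), (2, 4), (4, 2),
]
capacity = 4      # nodes available in the (toy) HPC system
horizon = sum(d for d, _ in jobs)

model = cp_model.CpModel()
starts, intervals, demands = [], [], []
for i, (dur, req) in enumerate(jobs):
    start = model.NewIntVar(0, horizon, f"start_{i}")
    end = model.NewIntVar(0, horizon, f"end_{i}")
    intervals.append(model.NewIntervalVar(start, dur, end, f"job_{i}"))
    starts.append(start)
    demands.append(req)

model.AddCumulative(intervals, demands, capacity)

makespan = model.NewIntVar(0, horizon, "makespan")
for i, (dur, _) in enumerate(jobs):
    model.Add(starts[i] + dur <= makespan)
model.Minimize(makespan)

solver = cp_model.CpSolver()
solver.parameters.max_time_in_seconds = 1.0  # on-line setting: bounded solve time
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print([solver.Value(s) for s in starts], solver.Value(makespan))
```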
Abstract:
Providing support for multimedia applications on low-power mobile devices remains a significant research challenge. This is primarily due to two reasons: • Portable mobile devices have modest sizes and weights, and therefore inadequate resources, low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes compared to desktop and laptop systems. • On the other hand, multimedia applications tend to have distinctive QoS and processing requirements which make them extremely resource-demanding. This innate conflict introduces key research challenges in the design of multimedia applications and device-level power optimization. Energy efficiency in this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while Systems-on-Chip are more and more programmable, thus providing functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood both in research and industry that system configuration and management cannot be controlled efficiently by relying only on low-level firmware and hardware drivers: at this level there is a lack of information about user application activity and, consequently, about the impact of power management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and the integration of a middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system. A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms. Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time, and, even more important, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform.
This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications to multiprocessor architectures. It is a well-known problem in the literature: optimization problems of this kind are very complex even in much simplified variants, therefore most authors propose simplified models and heuristic approaches to solve them in reasonable time. Model simplification is often achieved by abstracting away platform implementation "details". As a result, optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristic or, more generally, incomplete search is that it introduces an optimality gap of unknown size: it provides very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and the optimality gaps, formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms. Energy-efficient LCD backlight autoregulation on a real-life multimedia application processor. Despite the ever-increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smart phones, portable media players, gaming and navigation devices. There is a clear trend towards increasing LCD size to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and on the pixel matrix driving circuits, and is typically proportional to the panel area. As a result, its contribution is also likely to be considerable in future mobile appliances. To address this issue, companies are proposing low-power technologies suitable for mobile applications, supporting low-power states and image control techniques. On the research side, several power saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques to change the image content so as to reduce the power associated with the crystal polarization; others aim at decreasing the backlight level while compensating for the resulting luminance reduction, and hence for the perceived quality degradation, using pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation shows an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement a hardware-assisted image compensation that allows dynamic scaling of the backlight with a negligible impact on QoS.
The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modifications. Thesis overview. The remainder of the thesis is organized as follows. The first part is focused on enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs; the methodology is based on functional simulation and full-system power estimation. Chapter 4 targets allocation and scheduling of pipelined stream-oriented applications on top of distributed memory architectures with messaging support. We tackled the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework to solve the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while in Chapter 6 applications with conditional task graphs are taken into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers with efficient software implementation on a real architecture, the Cell Broadband Engine processor. The second part is focused on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 surveys several energy-efficient software techniques present in the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions that have been discussed throughout this dissertation.
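For readers unfamiliar with the mapping problem, the sketch below shows a deliberately simple greedy list-scheduling baseline for a precedence-constrained task graph on two processors; it is only meant to illustrate what "allocating and scheduling concurrent tasks" involves, and is not the complete, optimal decomposition-based approach pursued in the dissertation (the task graph and durations are invented).

```python
# Minimal greedy list-scheduling baseline for mapping a precedence-constrained
# task graph onto processors. This is only an illustrative heuristic; the
# thesis instead pursues complete, optimal methods (e.g. decomposition-based
# approaches). Task graph and durations are hypothetical.
tasks = {"A": 4, "B": 3, "C": 2, "D": 5}                   # task -> duration
deps = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}  # task -> predecessors
num_procs = 2

finish = {}                       # task -> finish time
proc_free = [0] * num_procs       # next free time of each processor
schedule = {}                     # task -> (processor, start time)

# Tasks listed in a topological order (predecessors come first)
for t in ["A", "B", "C", "D"]:
    ready = max((finish[p] for p in deps[t]), default=0)
    proc = min(range(num_procs), key=lambda p: proc_free[p])
    start = max(ready, proc_free[proc])
    finish[t] = start + tasks[t]
    proc_free[proc] = finish[t]
    schedule[t] = (proc, start)

print(schedule, "makespan =", max(finish.values()))
```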
Abstract:
Next-generation electronic devices have to guarantee high performance while being less power-consuming and highly reliable, for several application domains ranging from entertainment to business. In this context, multicore platforms have proven to be the most efficient design choice, but new challenges have to be faced. The ever-increasing miniaturization of components produces unexpected variations in technological parameters and wear-out, characterized by soft and hard errors. Even though hardware techniques, which lend themselves to being applied at design time, have been studied with the objective of mitigating these effects, they are not sufficient; thus, software adaptive techniques are necessary. In this thesis we focus on multicore task allocation strategies that minimize the energy consumption while meeting performance constraints. We first devise a technique based on an Integer Linear Programming formulation which provides the optimal solution but cannot be applied on-line, since the algorithm it requires is too time-consuming; we then propose a sub-optimal two-step technique which can be applied on-line. We demonstrate the effectiveness of the latter solution through an exhaustive comparison against the optimal solution, state-of-the-art policies, and variability-agnostic task allocations, by running multimedia applications on the virtual prototype of a next-generation industrial multicore platform. We also address the problem of performance and lifetime degradation. We first focus on embedded multicore platforms and propose an idleness distribution policy that increases the expected lifetime of the cores by duty-cycling their activity; we then investigate the use of micro thermoelectric coolers in general-purpose multicore processors to control the temperature of the cores at runtime, with the objective of meeting lifetime constraints without performance loss.
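A stripped-down flavour of the off-line optimal formulation could look like the following PuLP sketch, which assigns tasks to cores so as to minimize total energy under a per-core utilization budget; the energy and utilization numbers are invented for illustration and the model is far simpler than the thesis's actual ILP.

```python
# Toy ILP for energy-aware task allocation on a multicore platform: binary
# variables x[t][c] assign task t to core c, minimizing total energy subject
# to a utilization budget per core. All numbers are hypothetical.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

tasks = ["t0", "t1", "t2", "t3"]
cores = ["c0", "c1"]
energy = {("t0", "c0"): 3, ("t0", "c1"): 4, ("t1", "c0"): 2, ("t1", "c1"): 2,
          ("t2", "c0"): 5, ("t2", "c1"): 3, ("t3", "c0"): 4, ("t3", "c1"): 6}
util = {"t0": 0.4, "t1": 0.3, "t2": 0.5, "t3": 0.6}   # fraction of a core
budget = 1.0                                          # per-core utilization cap

prob = LpProblem("task_allocation", LpMinimize)
x = LpVariable.dicts("x", (tasks, cores), cat="Binary")

prob += lpSum(energy[(t, c)] * x[t][c] for t in tasks for c in cores)
for t in tasks:                       # each task runs on exactly one core
    prob += lpSum(x[t][c] for c in cores) == 1
for c in cores:                       # performance constraint per core
    prob += lpSum(util[t] * x[t][c] for t in tasks) <= budget

prob.solve()
print({t: c for t in tasks for c in cores if value(x[t][c]) == 1})
```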
Abstract:
In the last couple of decades we have witnessed a reappraisal of design-based techniques for spatial data. Usually, the information regarding the spatial location of the individuals of a population has been used to develop efficient sampling designs. This thesis offers a new technique for inference on both individual values and global population values that is able to exploit, at the estimation stage, the spatial information available before sampling, by rewriting a deterministic interpolator within a design-based framework. The point estimator of the individual values is treated both for finite spatial populations and for continuous spatial domains, while the theory on the estimator of the global population value covers the finite population case only. A fairly broad simulation study compares the results of the point estimator with the simple random sampling without replacement estimator in predictive form and with kriging, which is the benchmark technique for inference on spatial data. The Monte Carlo experiment is carried out on populations generated according to different superpopulation methods, in order to control different aspects of the spatial structure. The simulation outcomes show that the proposed point estimator behaves almost like the kriging predictor regardless of the parameters adopted for generating the populations, especially for low sampling fractions. Moreover, the use of the spatial information substantially improves design-based spatial inference on individual values.
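For context, the kind of deterministic interpolator that can be rewritten in a design-based framework may be exemplified by a plain inverse-distance-weighted (IDW) predictor; the sketch below is a generic IDW implementation on toy data, not the estimator actually derived in the thesis.

```python
# Generic inverse-distance-weighted (IDW) interpolation of a spatial variable
# from sampled locations; a simple stand-in for the deterministic interpolator
# discussed in the abstract. Coordinates and values are toy data.
import numpy as np

def idw_predict(x0, coords, values, power=2.0):
    """Predict the value at location x0 from sampled (coords, values)."""
    d = np.linalg.norm(coords - x0, axis=1)
    if np.any(d == 0):                      # exact hit: return the observed value
        return values[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * values) / np.sum(w)

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
values = np.array([1.2, 2.5, 0.8, 3.1])
print(idw_predict(np.array([0.5, 0.5]), coords, values))
```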
Abstract:
The thesis is dedicated to the implementation of advanced x-ray-based techniques for the investigation of battery systems, predominantly of the cathode materials. The implemented characterisation methods include synchrotron-based x-ray absorption spectroscopy, powder x-ray diffraction, 2-dimensional x-ray fluorescence, full-field transmission soft x-ray microscopy, and laboratory x-ray photoelectron spectroscopy. The research highlights the different areas of expertise of each described method in terms of material characterisation, exploring their complementarities and intersections. The results focus on manganese hexacyanoferrate and partially Ni-substituted manganese hexacyanoferrate, in both organic and aqueous battery systems. In the aqueous system, the modification of the cathode composition was observed with various techniques probing the processes occurring in the bulk or at the surface, locally or over long range, including speciation by 2-dimensional scanning and time resolution through operando measurements. In the organic medium, the inhomogenisation of the cathode material during aging was investigated by developing a dedicated image-treatment procedure for the maps obtained from transmission soft x-ray microscopy. It is worth mentioning that, apart from combining the outcomes of the various x-ray measurements, new capabilities were also explored, namely probing the oxidation state of an element with the synchrotron-based 2-dimensional x-ray fluorescence technique, which is generally not possible with a conventional setup. The results and methodology of this thesis can be generalised to the characterisation of other battery systems and beyond, as x-ray techniques are among the most informative and sophisticated methods for the advanced structural investigation of materials.
Abstract:
The present PhD project was focused on the development of new tools and methods for luminescence-based techniques. In particular, the ultimate goal was to provide substantial improvements to the currently available technologies for both research and diagnostics in the fields of biology, proteomics and genomics. Different aspects and problems were investigated, requiring different strategies and approaches. The whole work was thus divided into separate chapters, each based on the study of one specific aspect of luminescence: Chemiluminescence, Fluorescence and Electrochemiluminescence. CHAPTER 1, Chemiluminescence. The work on the luminol-enhancer solution led to a new luminol solution formulation with a detection limit for HRP one order of magnitude lower. This technology was patented under the Cyanagen brand and is now sold worldwide for Western Blot and ELISA applications. CHAPTER 2, Fluorescence. The work on dye-doped silica nanoparticles is marking a new milestone in the development of nanotechnologies for biological applications. While the project is still in progress, preliminary studies on model structures are leading to very promising results. The improved brightness of these nano-sized objects, their simple synthesis and handling, and their low toxicity will soon turn them, we strongly believe, into a new generation of fluorescent labels for many applications. CHAPTER 3, Electrochemiluminescence. The work on electrochemiluminescence produced interesting results that can potentially turn into great improvements from an analytical point of view. Ru(bpy)3 derivatives were employed both for on-chip microarrays (Chapter 3.1) and for microscopic imaging applications (Chapter 3.2). The development of these new techniques is still under investigation, but the results obtained confirm the possibility of achieving the final goal. Furthermore, the development of new ECL-active species (Chapters 3.3, 3.4, 3.5) and their use in these applications can significantly improve overall performance, thus helping to spread ECL as a powerful analytical tool for routine techniques. To conclude, the results obtained are of great value in substantially increasing the sensitivity of luminescence techniques, thus fulfilling the expectations we had at the beginning of this research work.
Abstract:
3D video-fluoroscopy is an accurate but cumbersome technique for estimating natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Being based on direct radiographic imaging of the joint, and avoiding the soft tissue artefact that limits the accuracy of skin marker-based techniques, fluoroscopic analysis has a potential accuracy of the order of mm/deg or better. It can provide fundamental information for clinical and methodological applications but, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is required to obtain consistent results. This user-dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, has slowed down the translation to clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis. In the thesis, the fluoroscopic analysis was characterized in depth, in order to evaluate its pros and cons and to provide reliable solutions to overcome its limitations. To this aim, an analytical approach was followed. The major sources of error were isolated with preliminary in-silico studies as: (a) geometric distortion and calibration errors, (b) 2D image and 3D model resolutions, (c) incorrect contour extraction, (d) bone model symmetries, (e) optimization algorithm limitations, (f) user errors. The effect of each criticality was quantified and verified with a preliminary in-vivo study on the elbow joint. The dominant source of error was identified in the limited extent of the convergence domain of the local optimization algorithms, which forced the user to manually specify the starting pose for the estimation process. To solve this problem, two different approaches were followed: to enlarge the convergence basin around the optimal pose, the local approach used sequential alignments of the 6 degrees of freedom in order of sensitivity, or a geometrical feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain. The performance of the technique was evaluated with a series of in-silico studies and validated in-vitro with a phantom-based comparison against a radiostereometric gold standard. The accuracy of the method is joint-dependent: for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for the out-of-plane translation, and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, such as in methodological research studies; the mono-planar analysis may be enough for clinical applications where analysis time and cost are an issue. A further reduction of the user interaction was obtained for prosthetic joint kinematics: a mixed region-growing and level-set segmentation method was proposed which halved the analysis time, delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semi-automatic method was comparable to a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied to a first in-vivo methodological study on foot kinematics.
Preliminary evaluations showed that the presented methodology represents a feasible gold standard for the validation of skin marker-based foot kinematics protocols.
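The local strategy mentioned above (sequential alignment of the six degrees of freedom in order of sensitivity) can be pictured as a coordinate-wise optimization loop; in the sketch below the similarity() function is a hypothetical placeholder for the image-to-model matching metric, and the DOF ordering and data are purely illustrative.

```python
# Sketch of a sequential, one-DOF-at-a-time pose refinement, mimicking the
# "alignment of the 6 degrees of freedom in order of sensitivity" idea. The
# similarity() metric is a hypothetical placeholder: in a real pipeline it
# would compare the projected 3D model with the fluoroscopic image.
import numpy as np
from scipy.optimize import minimize_scalar

def similarity(pose):
    """Placeholder dissimilarity (lower is better); here just a quadratic bowl."""
    target = np.array([2.0, -1.0, 50.0, 5.0, -3.0, 10.0])  # fake 'true' pose
    return float(np.sum((np.asarray(pose) - target) ** 2))

# DOFs ordered by (assumed) sensitivity: in-plane translations first,
# out-of-plane translation last.
dof_order = [0, 1, 5, 3, 4, 2]
pose = np.zeros(6)          # user-provided or feature-based initial guess

for _ in range(3):          # a few sweeps over all DOFs
    for d in dof_order:
        def cost(v, d=d):
            trial = pose.copy()
            trial[d] = v
            return similarity(trial)
        res = minimize_scalar(cost, bracket=(pose[d] - 20.0, pose[d] + 20.0))
        pose[d] = res.x

print(np.round(pose, 3))
```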
Abstract:
Introduction. The main open issues in the treatment of advanced HCC are currently: 1) the lack of predictors of response to sorafenib therapy, 2) the development of resistance to sorafenib, 3) the lack of established second-line therapies. Aims of the thesis: 1) to search for clinical and laboratory predictors of response to sorafenib in outpatients with HCC; 2) to evaluate, by ultrasound techniques, the impact of temporary or definitive sorafenib discontinuation in a murine model of HCC; 3) to evaluate the efficacy of metronomic capecitabine as second-line therapy for HCC unresponsive to sorafenib. Results. Study 1: 94 patients with HCC treated with sorafenib: the presence of metastases and neoplastic portal vein thrombosis does not seem to impair the efficacy of sorafenib. Baseline AFP <19 ng/ml was predictive of longer survival, while the development of nausea predicted poorer survival. Study 2: 14 mice with HCC xenografts: group 1 treated with placebo, group 2 treated with sorafenib with temporary interruption of the drug, and group 3 treated with sorafenib with definitive discontinuation. VEGFR2-targeted CEUS showed higher dTE values in group 3 at day 13, confirmed by an increase in VEGFR2 on Western blot. Group 2 tumours, after 2 days of retreatment, showed an increase in tissue elasticity on elastography. Study 3: 19 patients treated with metronomic capecitabine after sorafenib. TTP was 5 months (95% CI 0-10), PFS was 3.6 months (95% CI 2.8-4.3) and OS was 6.3 months (95% CI 4-8.6). Conclusions. The development of nausea and asthenia and a baseline AFP >19 ng/ml were predictive of a poorer response to sorafenib. Temporary discontinuation of sorafenib in a murine model of HCC does not prevent restoration of the tumour response, whereas definitive discontinuation tends to stimulate a "rebound effect" of angiogenesis. Metronomic capecitabine after sorafenib showed fair anti-neoplastic activity and an acceptable safety profile.
Abstract:
Radars are expected to become the main sensors in various civilian applications, especially autonomous driving. Their success is mainly due to the availability of low-cost integrated devices, equipped with compact antenna arrays, and of computationally efficient signal processing techniques. This thesis focuses on the study and development of different deterministic and learning-based techniques for colocated multiple-input multiple-output (MIMO) radars. In particular, after providing an overview of the architecture of these devices, the problem of detecting and estimating multiple targets in stepped frequency continuous wave (SFCW) MIMO radar systems is investigated, and different deterministic techniques for solving it are illustrated. Moreover, novel solutions, based on an approximate maximum likelihood approach, are developed. The accuracy achieved by all the considered algorithms is assessed on the basis of raw data acquired from low-power wideband radar devices. The results demonstrate that the developed algorithms achieve reasonable accuracies, but at the price of different computational efforts. Another important technical problem investigated in this thesis concerns the exploitation of machine learning and deep learning techniques in the field of colocated MIMO radars. After providing a comprehensive overview of the machine learning and deep learning techniques currently being considered for use in MIMO radar systems, their performance in two different applications is assessed on the basis of synthetically generated data and of experimental datasets acquired through a commercial frequency modulated continuous wave (FMCW) MIMO radar. Finally, the application of colocated MIMO radars to autonomous driving in smart agriculture is illustrated.
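As a reference point for the deterministic techniques mentioned above, a basic SFCW processing chain estimates target ranges by taking an inverse FFT of the frequency-domain samples; the sketch below simulates two point targets and recovers a range profile (all parameters are arbitrary, and this is not the approximate maximum likelihood algorithm developed in the thesis).

```python
# Basic SFCW range profiling for a single channel: simulate the frequency
# response of two point targets and recover their ranges with an inverse FFT.
# Parameters are arbitrary; this baseline is not the approximate maximum
# likelihood estimator developed in the thesis.
import numpy as np
from scipy.signal import find_peaks

c = 3e8
f0, df, n_steps = 77e9, 2e6, 128           # start frequency, step, number of steps
freqs = f0 + df * np.arange(n_steps)
targets = [(4.0, 1.0), (9.5, 0.6)]         # (range [m], amplitude)

# Received frequency-domain samples: sum of two-way phase terms per target
s = sum(a * np.exp(-1j * 4 * np.pi * freqs * r / c) for r, a in targets)

# Range profile via zero-padded IFFT; unambiguous range is c / (2 * df)
profile = np.abs(np.fft.ifft(s, n=8 * n_steps))
ranges = np.arange(profile.size) * c / (2 * df * profile.size)

peaks, _ = find_peaks(profile, height=0.3 * profile.max())
print(np.round(ranges[peaks], 2))          # estimated target ranges [m]
```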
Abstract:
This PhD project focuses on the study of the early stages of bone biomineralization in 2D and 3D cultures of osteoblast-like SaOS-2 osteosarcoma cells exposed to an osteogenic cocktail. The efficacy of the osteogenic treatment was assessed on 2D cell cultures after 7 days. A large production of calcium minerals, an overexpression of osteogenic markers and an increase in alkaline phosphatase activity occurred in treated samples. TEM microscopy and cryo-XANES micro-spectroscopy were performed to localize and characterize the Ca depositions. These techniques revealed a different localization and chemical composition of the Ca minerals over time and after treatment. Moreover, the Mito stress test showed in treated samples a significant increase in maximal respiration levels associated with an upregulation of mitochondrial biogenesis, indicative of an ongoing differentiation process. The 3D cell cultures were realized using two different hydrogels: a commercial collagen type I and a mixture of agarose and lactose-modified chitosan (CTL). Both biomaterials showed good biocompatibility with SaOS-2 cells. The gene expression analysis of SaOS-2 cells on collagen scaffolds indicated an osteogenic commitment after treatment, and Alizarin red staining highlighted the presence of Ca spots in the differentiated samples. In addition, the intracellular magnesium quantification and the X-ray microscopy of mineral depositions suggested the incorporation of Mg during the early stages of the bone formation process. SaOS-2 cells treated with the osteogenic cocktail produced Ca mineral deposits also on CTL/agarose scaffolds, as confirmed by Alizarin red staining. Further studies are underway to evaluate the differentiation also at the genetic level. Thanks to the combination of conventional laboratory methods and synchrotron-based techniques, it has been demonstrated that SaOS-2 is a suitable model for the study of biomineralization in vitro. These results have contributed to a deeper knowledge of the biomineralization process in osteosarcoma cells and could provide new evidence about a therapeutic strategy acting on the reversibility of tumorigenicity by osteogenic induction.
Abstract:
Introduction. The term New Psychoactive Substances (NPS) encompasses a broad category of drugs which have become available on the market in recent years and whose illicit use for recreational purposes has recently exploded. The analysis of NPS usually requires mass spectrometry-based techniques. The aim of our study was to define the prevalence of NPS consumption in patients with a history of drug addiction followed by the Public Services for Pathological Addictions, with the purpose of highlighting the effective presence of NPS within the area of Bologna and evaluating their association with classical drugs of abuse (DOA). Materials and methods. Supported by the literature, a multi-analyte UHPLC-MS/MS method for the identification in hair samples of 127 NPS (phenethylamines, arylcyclohexylamines, synthetic opioids, tryptamines, synthetic cannabinoids, synthetic cathinones, designer benzodiazepines) and 15 classical drugs of abuse (DOA) was developed and validated according to international guidelines [112]. Sample pretreatment consisted of washing steps and overnight incubation at 45°C in an acid mixture of methanol and water. After cooling, the supernatant was injected into the chromatographic system coupled with a tandem mass spectrometry detector. Results. Successful validation was achieved for almost all of the compounds, and the method met all the required technical parameters. The LOQ ranged from 4 to 80 pg/mg. The developed method was applied to 107 cases (85 males and 22 females) of clinical interest. Of the 85 hair samples that tested positive for classical drugs of abuse, NPS were found in twelve (8 males and 4 females). Conclusion. The present methodology represents an easy, low-cost, wide-panel method for the detection of 127 NPS and 15 DOA in hair samples. Such a multi-analyte method facilitates the study of the prevalence of abused drugs and will enable the competent control authorities to obtain evidence-based reports on the critical spread of the threat represented by NPS.
Abstract:
Ground-based Earth troposphere calibration systems play an important role in planetary exploration, especially in radio science experiments aimed at the estimation of planetary gravity fields. In these experiments, the main observable is the spacecraft (S/C) range rate, measured from the Doppler shift of an electromagnetic wave transmitted from the ground, received by the spacecraft and coherently retransmitted back to the ground. Once the solar corona and interplanetary plasma noise has been removed from the Doppler data, the Earth troposphere remains one of the main error sources in the tracking observables. Current Earth media calibration systems at NASA's Deep Space Network (DSN) stations are based upon a combination of weather data and multidirectional, dual-frequency GPS measurements acquired at each station complex. In order to support Cassini's cruise radio science experiments, a new generation of media calibration systems was developed, driven by the need to achieve an end-to-end Allan deviation of the radio link of the order of 3×10^-15 at 1000 s integration time. The future ESA BepiColombo mission to Mercury carries scientific instrumentation for radio science experiments (a Ka-band transponder and a three-axis accelerometer) which, in combination with the S/C telecommunication system (an X/X/Ka transponder), will provide the most advanced tracking system ever flown on an interplanetary probe. The current error budget for MORE (Mercury Orbiter Radioscience Experiment) allows the residual uncalibrated troposphere to contribute a value of 8×10^-15 to the two-way Allan deviation at 1000 s integration time. The current standard ESA/ESTRACK calibration system is based on a combination of surface meteorological measurements and mathematical algorithms capable of reconstructing the Earth troposphere path delay, leaving an uncalibrated component of about 1-2% of the total delay. In order to satisfy the stringent MORE requirements, the short time-scale variations of the Earth troposphere water vapor content must be calibrated at the ESA deep space antennas (DSA) with more precise and stable instruments (microwave radiometers). In parallel with these high-performance instruments, ESA ground stations should be upgraded with media calibration systems at least capable of calibrating both troposphere path delay components (dry and wet) at sub-centimetre level, in order to reduce S/C navigation uncertainties. The natural choice is to provide a continuous troposphere calibration by processing GNSS data acquired at each complex by the dual-frequency receivers already installed for station location purposes. The work presented here outlines the troposphere calibration technique to support both deep space probe navigation and radio science experiments. After an introduction to deep space tracking techniques, observables and error sources, in Chapter 2 the troposphere path delay is investigated in depth, reporting the estimation techniques and the state of the art of the ESA and NASA troposphere calibrations. Chapter 3 deals with an analysis of the status and performance of the NASA Advanced Media Calibration (AMC) system with reference to the Cassini data analysis. Chapter 4 describes the current release of the GNSS software (S/W) developed to estimate the troposphere calibration for ESA S/C navigation purposes. During the development phase of the S/W, a test campaign was undertaken in order to evaluate the S/W performance.
A description of the campaign and the main results are reported in Chapter 5. Chapter 6 presents a preliminary analysis of microwave radiometers to be used to support radio science experiments. The analysis has been carried out considering radiometric measurements of the ESA/ESTEC instruments installed in Cabauw (NL) and compared with the requirements of MORE. Finally, Chapter 7 summarizes the results obtained and defines some key technical aspects to be evaluated and taken into account for the development phase of future instrumentation.
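To give a flavour of the "dry" part of the tropospheric calibration discussed above, the zenith hydrostatic delay can be computed from surface pressure with the standard Saastamoinen model; the sketch below applies that well-known formula to made-up station values and is not the GNSS S/W developed in the thesis.

```python
# Zenith hydrostatic (dry) tropospheric delay from surface pressure using the
# standard Saastamoinen model. Station pressure, latitude and height are
# made-up values; this is not the GNSS calibration S/W described in the thesis.
import numpy as np

def zenith_hydrostatic_delay(p_hpa, lat_deg, height_m):
    """Saastamoinen zenith hydrostatic delay in meters."""
    denom = 1.0 - 0.00266 * np.cos(2.0 * np.radians(lat_deg)) - 2.8e-7 * height_m
    return 0.0022768 * p_hpa / denom

# Example: a mid-latitude station at 100 m altitude with 1013 hPa surface pressure
print(zenith_hydrostatic_delay(1013.0, 40.5, 100.0))   # roughly 2.3 m of path delay
```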
Abstract:
The development of a multibody model of a motorbike engine cranktrain is presented in this work, with an emphasis on flexible component model reduction. A modelling methodology based upon the adoption of non-ideal joints at interface locations, and upon the inclusion of component flexibility, is developed: both are necessary if one wants to capture the dynamic effects which arise in lightweight, high-speed applications. With regard to the first topic, both a ball bearing model and a journal bearing model are implemented in order to properly capture the dynamic effects of the main connections in the system: angular contact ball bearings are modelled according to a five-DOF nonlinear scheme to capture the behaviour of the crankshaft main bearings, while an impedance-based hydrodynamic bearing model is implemented to provide an enhanced prediction of the operation at the conrod big end locations. Concerning the second topic, flexible models of the crankshaft and the connecting rod are produced. The well-established Craig-Bampton reduction technique is adopted as a general framework to obtain reduced model representations which are suitable for the subsequent multibody analyses. A particular component mode selection procedure is implemented, based on the concept of Effective Interface Mass, allowing an assessment of the accuracy of the reduced models prior to the nonlinear simulation phase. In addition, a procedure to alleviate the effects of modal truncation, based on the Modal Truncation Augmentation approach, is developed. In order to assess the performance of the proposed modal reduction schemes, numerical tests are performed on the crankshaft and conrod models in both the frequency and modal domains. A multibody model of the cranktrain is eventually assembled and simulated using commercial software. Numerical results are presented, demonstrating the effectiveness of the implemented flexible model reduction techniques. The advantages over the conventional frequency-based truncation approach are discussed.
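To give a concrete picture of the Craig-Bampton reduction used for the flexible components, the sketch below builds the reduction basis (static constraint modes plus a truncated set of fixed-interface normal modes) for a small, made-up mass/stiffness pair; matrix sizes, the DOF partition and the number of kept modes are arbitrary.

```python
# Craig-Bampton reduction sketch: partition the DOFs into boundary (b) and
# interior (i) sets, compute static constraint modes and a truncated set of
# fixed-interface normal modes, and project M and K onto the reduced basis.
# The matrices below are small, made-up examples.
import numpy as np
from scipy.linalg import eigh

np.random.seed(0)
n, nb, n_modes = 8, 2, 3                 # total DOFs, boundary DOFs, kept modes
A = np.random.rand(n, n)
K = A @ A.T + n * np.eye(n)              # symmetric positive definite "stiffness"
M = np.eye(n)                            # unit "mass" matrix for simplicity

b = list(range(nb))                      # boundary DOF indices (listed first)
i = list(range(nb, n))                   # interior DOF indices
Kib, Kii = K[np.ix_(i, b)], K[np.ix_(i, i)]
Mii = M[np.ix_(i, i)]

# Static constraint modes: interior response to unit boundary displacements
Psi = -np.linalg.solve(Kii, Kib)
# Fixed-interface normal modes (boundary clamped), keep the first n_modes
w2, Phi = eigh(Kii, Mii)
Phi = Phi[:, :n_modes]

# Craig-Bampton transformation T: [u_b; u_i] ~ T @ [u_b; q]
T = np.block([
    [np.eye(nb), np.zeros((nb, n_modes))],
    [Psi,        Phi],
])
M_red = T.T @ M @ T
K_red = T.T @ K @ T
print(M_red.shape, K_red.shape)          # (nb + n_modes) x (nb + n_modes)
```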
Abstract:
In the present thesis, a new diagnosis methodology based on an advanced use of time-frequency analysis techniques is presented. More precisely, a new fault index is defined that allows individual fault components to be tracked in a single frequency band. In detail, a frequency sliding is applied to the signals being analyzed (currents, voltages, vibration signals), so that each single fault frequency component is shifted into a prefixed single frequency band. Then, the discrete wavelet transform is applied to the resulting signal to extract the fault signature in the chosen frequency band. Once the state of the machine has been qualitatively diagnosed, a quantitative evaluation of the fault degree is necessary. For this purpose, a fault index based on the energy of the approximation and/or detail signals resulting from the wavelet decomposition has been introduced to quantify the fault extent. The main advantages of the new method over existing diagnosis techniques are the following: - capability of monitoring the fault evolution continuously over time under any transient operating condition; - speed/slip measurement or estimation is not required; - higher accuracy in filtering frequency components around the fundamental in the case of rotor faults; - reduction in the likelihood of false indications by avoiding confusion with other fault harmonics (the contributions of the most relevant fault frequency components under speed-varying conditions are confined to a single frequency band); - low memory requirement due to the low sampling frequency; - reduction in processing latency (no repeated sampling operations are required).
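A toy rendering of the described processing chain (slide a chosen fault frequency component into a fixed low-frequency band, decompose with the discrete wavelet transform, and take the energy of the relevant coefficients as a fault index) might look as follows; the signal, the fault frequency and the wavelet settings are invented, so this is only a sketch of the idea, not the thesis's implementation.

```python
# Sketch of the "frequency sliding + DWT" fault-index idea: demodulate the
# signal so that a chosen fault component lands in a fixed low-frequency band,
# apply a discrete wavelet decomposition, and use the energy of the
# approximation coefficients as a fault severity index. All values are invented.
import numpy as np
import pywt
from scipy.signal import hilbert

fs = 2000.0                               # sampling frequency [Hz]
t = np.arange(0, 2.0, 1.0 / fs)
f_fund, f_fault = 50.0, 40.0              # fundamental and (hypothetical) fault frequency
current = np.sin(2 * np.pi * f_fund * t) + 0.05 * np.sin(2 * np.pi * f_fault * t)

# Frequency sliding: shift the fault component to 2 Hz; the fundamental then
# sits at 12 Hz, outside the approximation band used below.
f_shift = f_fault - 2.0
slid = np.real(hilbert(current) * np.exp(-2j * np.pi * f_shift * t))

# DWT: with 7 levels the approximation covers roughly [0, fs/2**8] ~ [0, 7.8] Hz,
# which contains the shifted fault component but not the shifted fundamental.
coeffs = pywt.wavedec(slid, "db8", level=7)
approx = coeffs[0]

fault_index = np.sum(approx ** 2)         # energy-based fault index
print(fault_index)
```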