13 results for VANet, GPS, distributed algorithms, V2V, 802.11p, I2V

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

20.00%

Abstract:

Obstructive sleep apnoea/hypopnoea syndrome (OSAHS) is the periodic reduction or cessation of airflow during sleep. The syndrome is associated with loud snoring, disrupted sleep and observed apnoeas. Surgery aims to alleviate symptoms of daytime sleepiness, improve quality of life and reduce the signs of sleep apnoea recorded by polysomnography. Surgical intervention for snoring and OSAHS includes several procedures, each designed to increase the patency of the upper airway. Procedures addressing nasal obstruction include septoplasty, turbinectomy, and radiofrequency ablation (RF) of the turbinates. Surgical procedures to reduce soft palate redundancy include uvulopalatopharyngoplasty with or without tonsillectomy, uvulopalatal flap, laser-assisted uvulopalatoplasty, and RF of the soft palate. More significant, however, particularly in cases of severe OSA, is hypopharyngeal or retrolingual obstruction related to an enlarged tongue, or more commonly due to maxillomandibular deficiency. Surgeries in these cases aim at reducing the bulk of the tongue base or providing more space for the tongue in the oropharynx, so as to limit posterior collapse during sleep. These procedures include tongue-base suspension, genioglossal advancement, hyoid suspension, lingualplasty, and maxillomandibular advancement. We reviewed 269 patients undergoing OSAS surgery at the ENT Department of Forlì Hospital in the last decade. Surgery was considered a success if the postoperative apnoea/hypopnoea index (AHI) was less than 20/h. Based on the results, we have developed surgical decision algorithms with the aim of optimizing the success of these procedures by identifying proper candidates for surgery and the most appropriate surgical techniques.
Although not without risks and not as predictable as positive airway pressure therapy, surgery remains an important treatment option for patients with obstructive sleep apnoea (OSA), particularly for those who have failed or cannot tolerate positive airway pressure therapy. Successful surgery depends on proper patient selection, proper procedure selection, and the experience of the surgeon. The intended purpose of medical algorithms is to improve and standardize decisions made in the delivery of medical care, to assist in standardizing the selection and application of treatment regimens, and to reduce the potential introduction of errors. Nasal Continuous Positive Airway Pressure (nCPAP) is the recommended therapy for patients with moderate to severe OSAS. Unfortunately, this treatment is not accepted by some patients, appears to be poorly tolerated by a non-negligible number of subjects, and compliance may be critical, especially in the long term, when correctly evaluated through interviews as well as through CPAP smart-card analysis. Among the alternative options in the literature, surgery is a long-honoured solution. However, until now no clear scientific evidence exists that surgery can be considered a truly effective option in OSAHS management. We have designed a randomized prospective study comparing MMA and a ventilatory device (Auto-titrating Positive Airway Pressure – APAP) in order to assess the real effectiveness of surgery in the management of moderate to severe OSAS. Fifty consecutive, previously fully informed patients suffering from severe OSAHS were enrolled and randomised into a conservative (APAP) or surgical (MMA) arm. The demographic, biometric, PSG and ESS profiles of the two groups were not statistically significantly different. One year after surgery or continuous APAP treatment, both groups showed a remarkable improvement in mean AHI and ESS; the degree of improvement was not statistically different.
Given the relatively small sample of studied subjects and the relatively short follow-up time, MMA proved to be, in our group of adult patients with severe OSAHS, a valuable alternative therapeutic tool with a success rate not inferior to APAP.

Relevance:

20.00%

Abstract:

Water distribution network optimization is a challenging problem due to the dimension and complexity of these systems. Since the second half of the twentieth century this field has been investigated by many authors. Recently, to overcome the discrete nature of the variables and the non-linearity of the equations, research has focused on the development of heuristic algorithms. These algorithms do not require continuity and linearity of the problem functions, because they are linked to an external hydraulic simulator that solves the mass-continuity and energy-conservation equations of the network. In this work, NSGA-II (Non-dominated Sorting Genetic Algorithm II) has been used. This is a heuristic multi-objective genetic algorithm based on the analogy of evolution in nature. Starting from an initial random set of solutions, called a population, it evolves them towards a front of solutions that minimize, separately and simultaneously, all the objectives. This can be very useful in practical problems where multiple and conflicting goals are common. One of the main drawbacks of these algorithms is their computational cost: being a stochastic search, many solutions must be analyzed before good ones are found. The results of this thesis on the classical optimal design problem show that it is possible to improve results by modifying the mathematical definition of the objective functions and the survival criterion, inserting good solutions created by a cellular automaton, and using rules created by a classifier algorithm (C4.5). This part has been tested using the version of NSGA-II supplied by the Centre for Water Systems (University of Exeter, UK) in the MATLAB® environment. Even if guiding the search can constrain the algorithm, with the risk of not finding the optimal set of solutions, it can greatly improve the results.
Subsequently, thanks to CINECA's support, a version of NSGA-II has been implemented in the C language and parallelized: the results for the global parallelization show the speed-up, while the results for the island parallelization show that communication among islands can improve the optimization. Finally, some tests on the optimization of pump scheduling have been carried out. In this case, good results are found for a small network, while the solutions for a large problem are affected by the lack of constraints on the number of pump switches. Possible future research concerns the insertion of further constraints and the guidance of the evolution. In the end, the optimization of water distribution systems is still far from a definitive solution, but improvements in this field can be very useful in reducing the cost of solutions to practical problems, where the high number of variables makes their management very difficult from a human point of view.
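The survival step at the core of NSGA-II is fast non-dominated sorting, which partitions the population into successive Pareto fronts. A minimal illustrative sketch in Python (not the MATLAB/C implementations used in the thesis), assuming the population is given as a list of objective vectors to be minimised:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    """Return the successive Pareto fronts (lists of population indices),
    as used in NSGA-II's survival step."""
    n = len(objs)
    dominated_by = [[] for _ in range(n)]  # solutions that i dominates
    dom_count = [0] * n                    # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(objs[i], objs[j]):
                dominated_by[i].append(j)
            elif dominates(objs[j], objs[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)            # non-dominated: first front
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:      # only members of front k+1 remain
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]
```

In the full algorithm, ranking by front (plus crowding distance within a front) decides which individuals survive to the next generation.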

Relevance:

20.00%

Abstract:

Increasingly stringent exhaust emission limits and higher fuel economy requirements are the main drivers of the engine development process. As a consequence, the complexity of propulsion units and their subsystems increases, due to the extensive use of sensors and actuators needed to obtain precise control over the combustion phase. Since the engine calibration process consumes most of the development time, new tools and methodologies are needed to shorten development time and increase the attainable performance. Real-time combustion analysis, based on the in-cylinder pressure signal, can significantly improve the calibration of engine control strategies and the development of new algorithms, giving instantaneous feedback on engine behavior. A complete combustion analysis and diagnosis system has been developed, capable of evaluating the most important indicators of the combustion process, such as indicated mean effective pressure, heat release, mass fraction burned and knock indexes. Such a tool is built on top of a flexible, modular and affordable hardware platform, capable of satisfying the accuracy and precision requirements, while also enabling use directly on board the vehicle, thanks to its small form factor.
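As a minimal illustration of the first indicator mentioned above (a sketch, not the thesis' on-board code), the indicated mean effective pressure is the cycle work, i.e. the integral of p dV around the closed cycle, divided by the displaced volume:

```python
def imep(pressures, volumes):
    """Indicated mean effective pressure from sampled (p, V) pairs over one
    closed engine cycle: cycle work by the trapezoidal rule, divided by the
    displaced volume."""
    work = 0.0
    n = len(pressures)
    for i in range(n):
        j = (i + 1) % n  # wrap around to close the cycle
        work += 0.5 * (pressures[i] + pressures[j]) * (volumes[j] - volumes[i])
    v_d = max(volumes) - min(volumes)  # displaced volume
    return work / v_d
```

For example, a rectangular p-V loop of unit area and unit displaced volume gives an IMEP of 1 (in the same units as the pressure samples).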

Relevance:

20.00%

Abstract:

Combustion control is one of the key factors in obtaining better performance and lower pollutant emissions in diesel, spark-ignition and HCCI engines. An algorithm that allows estimating, for example, the mean indicated torque for each cylinder could easily be used in control strategies to carry out cylinder trade-off, control cycle-to-cycle variation, or detect misfires. A tool that allows evaluating the crank angle of 50% Mass Fraction Burned (MFB50), the net Cumulative Heat Release (CHRNET), or the peak value of the Rate of Heat Release (ROHR) could be used to optimize spark advance or detect knock in gasoline engines, and to optimize the injection pattern in diesel engines. Modern engine management systems are based on the control of the mean indicated torque produced by the engine: they need a real or virtual sensor in order to compare the measured value with the target one. Many studies have been performed in order to obtain a torque estimate that is accurate and reliable over time. The aim of this PhD activity was to develop two different algorithms. The first is based on the measurement of instantaneous engine speed fluctuations; the speed signal is picked up directly from the sensor facing the toothed wheel mounted on the engine for other control purposes, and the amplitudes of the speed fluctuations depend on the combustion and on the amount of torque delivered by each cylinder. The second algorithm processes in-cylinder pressure signals in the angular domain; in this case a crankshaft encoder is not necessary, because the angular reference can be obtained using a standard sensor wheel. The results obtained with these two methodologies are compared in order to evaluate which one is suitable for on-board applications, depending on the accuracy required.
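As an illustrative sketch of one indicator mentioned above (hypothetical helper names, not the thesis code), MFB50 can be read off the cumulative heat-release curve by interpolating for the crank angle at which half of the total heat has been released:

```python
def mfb50(angles, cum_heat_release):
    """Crank angle at which 50% of the total heat has been released,
    found by linear interpolation on a monotonically non-decreasing
    cumulative heat-release curve sampled at the given crank angles."""
    target = 0.5 * cum_heat_release[-1]
    for i in range(1, len(angles)):
        if cum_heat_release[i] >= target:
            a0, a1 = angles[i - 1], angles[i]
            q0, q1 = cum_heat_release[i - 1], cum_heat_release[i]
            return a0 + (target - q0) * (a1 - a0) / (q1 - q0)
    return angles[-1]
```

The same interpolation gives MFB10 or MFB90 by changing the target fraction.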

Relevance:

20.00%

Abstract:

Traditional procedures for rainfall-runoff model calibration are generally based on fitting the individual values of simulated and observed hydrographs. An alternative approach is used here, carried out by matching, in the optimisation process, a set of statistics of the river flow. Such an approach has the additional, significant advantage of also allowing a straightforward regional calibration of the model parameters, based on the regionalisation of the selected statistics. The minimisation of the set of objective functions is carried out using the AMALGAM algorithm, leading to the identification of behavioural parameter sets. The procedure is applied to a set of river basins located in central Italy: the basins are treated alternately as gauged and ungauged and, as a term of comparison, the results obtained with a traditional time-domain calibration are also presented. The results show that a suitable choice of the statistics to be optimised leads to interesting results in real-world case studies as far as the reproduction of the different flow regimes is concerned.
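The abstract does not list the selected statistics. Purely as an illustration, assuming mean flow, coefficient of variation and two flow quantiles as the matched signatures, a statistics-based objective could look like the following sketch:

```python
import statistics

def flow_signatures(q):
    """A hypothetical set of streamflow statistics ('signatures') that a
    calibration could match instead of individual hydrograph values:
    mean flow, coefficient of variation, and rough 10th/90th percentiles."""
    qs = sorted(q)
    n = len(qs)
    mean = statistics.fmean(q)
    return {
        "mean": mean,
        "cv": statistics.pstdev(q) / mean,
        "q10": qs[int(0.10 * (n - 1))],
        "q90": qs[int(0.90 * (n - 1))],
    }

def signature_objective(sim, obs):
    """One possible objective to minimise: sum of squared relative errors
    between simulated and observed signatures."""
    s, o = flow_signatures(sim), flow_signatures(obs)
    return sum(((s[k] - o[k]) / o[k]) ** 2 for k in o)
```

In the regional (ungauged) case, the observed signatures would come from a regionalisation model rather than from a local flow record.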

Relevance:

20.00%

Abstract:

An extensive sample (2%) of private vehicles in Italy is equipped with a GPS device that periodically measures their position and dynamical state for insurance purposes. Having access to this type of data makes it possible to develop theoretical and practical applications of great interest: the real-time reconstruction of the traffic state in a certain region, the development of accurate models of vehicle dynamics, and the study of the cognitive dynamics of drivers. For these applications to be possible, we first need to develop the ability to reconstruct the paths taken by vehicles on the road network from the raw GPS data. In fact, these data are affected by positioning errors and are often very distant from each other (~2 km). For these reasons, the task of path identification is not straightforward. This thesis describes the approach we followed to reliably identify vehicle paths from this kind of low-sampling-rate data. The problem of matching data with roads is solved with a Bayesian maximum-likelihood approach, while the identification of the path taken between two consecutive GPS measurements is performed with a specifically developed optimal routing algorithm based on the A* algorithm. The procedure was applied to an off-line urban data sample and proved to be robust and accurate. Future developments will extend the procedure to real-time execution and nation-wide coverage.
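The routing step between two consecutive GPS fixes is based on A*. A minimal sketch of A* on a road graph (illustrative names, not the thesis implementation; the straight-line distance to the goal serves as the admissible heuristic):

```python
import heapq
import math

def a_star(graph, coords, start, goal):
    """A* shortest path on a road graph.
    graph:  {node: [(neighbour, edge_length), ...]}
    coords: {node: (x, y)}, used for the straight-line heuristic."""
    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return math.hypot(x2 - x1, y2 - y1)

    open_set = [(h(start), 0.0, start, [start])]  # (f, g, node, path)
    best = {start: 0.0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path, g
        for nb, w in graph.get(node, []):
            ng = g + w
            if ng < best.get(nb, math.inf):
                best[nb] = ng
                heapq.heappush(open_set, (ng + h(nb), ng, nb, path + [nb]))
    return None, math.inf
```

For map matching between sparse fixes, start and goal would be the candidate road nodes nearest to two consecutive GPS measurements.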

Relevance:

20.00%

Abstract:

We use data from about 700 GPS stations in the Euro-Mediterranean region to investigate the present-day behavior of the Calabrian subduction zone within the Mediterranean-scale plate kinematics, and to perform local-scale studies of the strain accumulation on active structures. We focus attention on the Messina Straits and Crati Valley faults, where GPS data show extensional velocity gradients of ∼3 mm/yr and ∼2 mm/yr, respectively. We use a dislocation model and a non-linear constrained optimization algorithm to invert for the fault geometric parameters and slip-rates, and we evaluate the associated uncertainties adopting a bootstrap approach. Our analysis suggests the presence of two partially locked normal faults. To investigate the impact of elastic strain contributions from other nearby active faults on the observed velocity gradient, we use a block modeling approach. Our models show that the inferred slip-rates on the two analyzed structures are strongly affected by the assumed locking width of the Calabrian subduction thrust. In order to frame the observed local deformation features within the present-day central Mediterranean kinematics, we perform a statistical analysis testing the independent motion (with respect to the African and Eurasian plates) of the Adriatic, Calabrian and Sicilian blocks. Our preferred model confirms a microplate-like behaviour for all the investigated blocks. Within these kinematic boundary conditions, we further investigate the geometry of the Calabrian slab interface using a combined approach of block modeling and reduced chi-square (χ²ν) statistics. Almost no information is obtained using only the horizontal GPS velocities, which prove to be an insufficient dataset for a multi-parametric inversion approach. To constrain the slab geometry more strongly, we estimate the predicted vertical velocities by performing suites of forward models of elastic dislocations, varying the fault locking depth.
Comparison with the observed field suggests a maximum resolved locking depth of 25 km.
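The percentile bootstrap used to attach uncertainties to the inverted fault parameters can be sketched as follows (an illustrative sketch; here `estimator` stands in for the full inversion, taken as any function of the resampled data):

```python
import random

def bootstrap_ci(data, estimator, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval: resample the data with
    replacement, re-run the estimator on each resample, and take the
    empirical alpha/2 and 1 - alpha/2 quantiles of the results."""
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(
        estimator([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

In the study, each resample would be a set of GPS velocities, and the estimator a full inversion for the fault geometry and slip-rate.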

Relevance:

20.00%

Abstract:

The PhD activity described in this document was carried out at the Microsatellite and Microsystem Laboratory of the II Faculty of Engineering, University of Bologna. The main objective is the design and development of a GNSS receiver for the orbit determination of microsatellites in low Earth orbit. The development starts from the electronic design and extends to the implementation of the navigation algorithms, covering all the aspects involved in this type of application. The use of GPS receivers for orbit determination is a consolidated application used in many space missions, but the deployment of new GNSS systems within a few years, such as the European Galileo, the Chinese COMPASS and the Russian modernized GLONASS, poses new challenges and offers new opportunities to improve orbit determination performance. The evaluation of the improvements coming from the new systems, together with the implementation of a receiver compatible with at least one of them, are the main activities of the PhD. The activities can be divided into three sections: receiver requirements definition and prototype implementation, design and analysis of the GNSS signal tracking algorithms, and design and analysis of the navigation algorithms. The receiver prototype is based on a Virtex FPGA by Xilinx and includes a PowerPC processor. The architecture follows the software-defined radio paradigm, so most of the signal processing is performed in software, while only what is strictly necessary is done in hardware. The tracking algorithms are implemented as a combination of a Phase Locked Loop and a Frequency Locked Loop for the carrier, and a Delay Locked Loop with variable bandwidth for the code. The navigation algorithm is based on the extended Kalman filter and includes an accurate LEO orbit model.
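The predict/update structure of the navigation filter can be illustrated with a scalar sketch, i.e. the linear, one-dimensional special case of the extended Kalman filter (the thesis' filter is multivariate, with a nonlinear LEO orbit model in the prediction step):

```python
def kf_step(x, p, z, f, q, h, r):
    """One predict/update step of a scalar Kalman filter.
    x, p: state estimate and its variance; z: new measurement;
    f: state transition; q: process noise variance;
    h: measurement model; r: measurement noise variance."""
    # predict: propagate state and variance through the dynamics
    x_pred = f * x
    p_pred = f * p * f + q
    # update: blend prediction and measurement via the Kalman gain
    k = p_pred * h / (h * p_pred * h + r)
    x_new = x_pred + k * (z - h * x_pred)
    p_new = (1 - k * h) * p_pred
    return x_new, p_new
```

In the extended variant, f and h become the Jacobians of the nonlinear dynamics and measurement models evaluated at the current estimate.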

Relevance:

20.00%

Abstract:

We have used kinematic models in two Italian regions to reproduce surface interseismic velocities obtained from InSAR and GPS measurements. We have considered a block modeling (BM) approach to evaluate which fault system is actively accommodating the ongoing deformation in both areas. We have performed a study for the Umbria-Marche Apennines, finding that the tectonic extension observed by GPS measurements is explained by the active contribution of at least two fault systems, one of which is the Alto Tiberina fault (ATF). We have also estimated the interseismic coupling distribution for the ATF using a 3D surface, and the result shows an interesting correlation between the microseismicity and the uncoupled fault portions. The second area analyzed is the Gargano promontory, for which we have jointly used the available InSAR and GPS velocities. First we have aligned the two datasets to the same terrestrial reference frame; then, using a simple dislocation approach, we have estimated the fault parameters that best reproduce the available data, providing a solution corresponding to the Mattinata fault. Subsequently, we have considered both GPS and InSAR datasets within a BM analysis in order to evaluate whether the Mattinata fault may accommodate the deformation occurring in the central Adriatic due to the relative motion between the North Adriatic and South Adriatic plates. We find that the deformation occurring in that region should be accommodated by more than one fault system; this is, however, difficult to detect, given the poor coverage of geodetic measurements offshore of the Gargano promontory. Finally, we have also estimated the interseismic coupling distribution for the Mattinata fault, obtaining a shallow coupling pattern. Both coupling distributions found using the BM approach have been tested by means of checkerboard resolution tests, which demonstrate that the coupling patterns depend on the geodetic data positions.
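The "simple dislocation approach" for interseismic velocities can be illustrated with the classic Savage and Burford (1973) screw-dislocation profile across a locked strike-slip fault (a sketch for intuition only; the models in the thesis are more general):

```python
import math

def interseismic_velocity(x, slip_rate, locking_depth):
    """Fault-parallel surface velocity at distance x from a strike-slip
    fault locked down to depth D above a dislocation slipping at rate s
    in an elastic half-space: v(x) = (s / pi) * arctan(x / D).
    The velocity is antisymmetric about the fault and tends to +-s/2
    in the far field."""
    return (slip_rate / math.pi) * math.atan2(x, locking_depth)
```

Fitting such profiles to geodetic velocities is one simple way to trade off a fault's slip-rate against its locking depth, the two quantities inverted for in the study.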