19 results for testing, test, verifica, validazione
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Vibration qualification tests are used during the design phase of a component to verify its mechanical resistance to the dynamic (vibratory) loads applied during its service life. The duration of the vibrations applied to the component over its service life (thousands of hours) must be reduced in order to make laboratory tests feasible; such tests are generally carried out on an electrodynamic shaker. The idea is to increase the intensity of the vibrations while reducing their duration. Several Test Tailoring procedures exist that, through a synthesis method, define a vibration profile to be applied in the laboratory starting from the real vibrations experienced by the component: one of the most common methodologies is based on the equivalence of the fatigue damage produced by the real vibrations and by the synthesized vibrations. Although this approach is fairly widespread, to the author's knowledge no reference exists in the literature that certifies its validity through experimental evidence. The objective of the research activity was to verify the validity of the method through an experimental campaign conducted on suitable specimens. The method is first used to synthesize a (stationary random) vibration profile having the same duration as a non-stationary vibration profile acquired in real conditions. The fatigue damage produced by the synthesized vibration was then compared with that of the real vibration in terms of the time to failure of the specimens. The results show that the damage produced by the synthesized vibration is overestimated, so the equivalence is not fulfilled. Some critical points were identified and some modifications to the method were proposed to make the theory more robust. The method was verified with further tests, and the results confirm its validity provided that the identified critical points are correctly addressed.
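As a brief illustration of the damage-equivalence idea that underlies this kind of test tailoring, the sketch below applies a classical inverse-power-law (Miner-type) time-compression rule, in which test duration is traded against vibration amplitude through a material exponent b; the exponent and the signal levels are illustrative assumptions, not data from the thesis.

```python
def compressed_test_level(life_grms: float, life_hours: float,
                          test_hours: float, b: float = 8.0) -> float:
    """Accelerated-test vibration level (e.g. in g RMS) that, under a Miner-type
    inverse power law, produces the same fatigue damage as life_grms applied
    for life_hours, but in only test_hours."""
    return life_grms * (life_hours / test_hours) ** (1.0 / b)

# Illustrative example: 2000 h of service vibration at 1.5 g RMS compressed into an 8 h test
print(compressed_test_level(1.5, 2000.0, 8.0, b=8.0))   # ~3.0 g RMS
```

The experimental finding quoted above (synthesized profiles overestimating the damage) is exactly the kind of outcome such a rule cannot reveal on its own, which is why the thesis validates the equivalence against specimen failure times.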
Abstract:
We observed 82 healthy subjects of both sexes, aged between 19 and 77 years. All subjects performed two different tests: the first one, being scientifically acknowledged, was used as a reference and was a stress test (CPX); during the entire test, heart rate and gas exchange were recorded continuously. The second, the actual object of this study, was a submaximal test (TOP), during which only heart rate was recorded continuously. The main purpose was to determine an index of physical fitness as a result of TOP. The CPX test allowed us to identify the anaerobic threshold. We used an incremental protocol of 10/20 Watt/min, differentiated by age. For our TOP test we used an RHC400 UPRIGHT BIKE, by Air Machine. Each subject was monitored for heart rate. After a 2-minute resting period there was a first step: 3 minutes of pedalling at a constant rate of 60 RPM (40 watts for older subjects and 60 watts for the younger ones). Then, the subject was allowed to rest for a recovery phase of 5 minutes. The third and last step consisted of 3 minutes of pedalling, again at 60 RPM but now set to 60 watts for older subjects and 80 watts for the younger ones, followed by another five minutes of recovery. A good correlation was found between TOP and CPX results, especially between the punctual heart rate reserve (HRR') and anaerobic threshold parameters such as Watt, VO2 and VCO2. HRR' was obtained by subtracting the maximal heart rate during TOP from the maximal theoretical heart rate (206.9 - (0.67*age)). Data were analyzed through cluster analysis in order to obtain 3 homogeneous groups. The first group contains the least fit subjects (inactive, women, elderly). The other groups contain the "average fit" and the fittest subjects (active, men, younger). Concordance between tests resulted in 83.23%. Afterwards, a linear combination of the most relevant variables gave us a formula to classify people into the correct group. The most relevant result is that this submaximal test is able to discriminate subjects with different physical condition and to provide information (an index) about physical fitness through HRR'. Compared to a traditional incremental stress test, the very low load of TOP, its short duration and its extended resting periods make this new method suitable for very different people. To better define the TOP index, it is necessary to enlarge our subject sample, especially by diversifying the age range.
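A minimal sketch of how the fitness index described above could be computed, assuming the theoretical maximum heart rate formula 206.9 - 0.67*age quoted in the abstract; the function names and the example values are illustrative only.

```python
def theoretical_max_hr(age_years: float) -> float:
    """Maximal theoretical heart rate, as quoted in the abstract: 206.9 - 0.67*age."""
    return 206.9 - 0.67 * age_years

def punctual_heart_rate_reserve(age_years: float, max_hr_during_top: float) -> float:
    """HRR': theoretical maximum HR minus the maximal HR reached during the TOP test."""
    return theoretical_max_hr(age_years) - max_hr_during_top

# Hypothetical subject: 45 years old, peaking at 128 bpm during TOP
print(punctual_heart_rate_reserve(45, 128))   # ~48.8 bpm
```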
Abstract:
Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of the location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996, though, even if it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying its compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters represents the first step in discriminating whether a given seismic event is natural or not. In case a specific event is deemed suspicious by the majority of the States Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test. In fact, high quality seismological systems are thought to be capable of detecting and locating the very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques. The first one, known as the double difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by a high relative accuracy, although the absolute location of the whole cluster remains uncertain. We eliminate this problem by introducing a priori information: the known location of a selected event. The second technique concerns the reliable estimation of back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both of the above-mentioned techniques, we have used cross-correlation among digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between waveforms of a pair of events at the same station, at the global scale, and on the similarity between waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests on the reliability of our location techniques based on simulations, we have applied both methodologies to real seismic events. The DDJHD technique has been applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. At the beginning, the algorithm was applied to the differences among the original arrival times of the P phases, so the cross-correlation was not used. We found that the relevant geometrical spreading, noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, assumed as our reference), was considerably reduced by the application of our technique.
This is what we expected, since the methodology was applied to a sequence of events for which we can suppose a real closeness among the hypocenters, belonging to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed or at least reduced. The introduction of the cross-correlation has not brought evident improvements to our results: the two sets of locations (without and with the application of the cross-correlation technique) are very similar to each other. This suggests that the cross-correlation has not substantially improved the precision of the manual pickings. Probably the pickings reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. As a further explanation for the limited benefit of the cross-correlation, it should be remarked that the events included in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to the cross-correlation, we have performed a signal interpolation in order to improve the time resolution. The algorithm so developed has been applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum amplitude wavelet of the waveforms (particularly in bad SNR conditions). Another remarkable point of our procedure is that its application does not demand a long time to process the data, so the user can immediately check the results. During a field survey, such a feature will make a quasi real-time check possible, allowing the immediate optimization of the array geometry, if so suggested by the results at an early stage.
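A minimal sketch of the waveform cross-correlation step used to refine relative arrival times, here applied to two digital waveforms sampled at a common rate; the synthetic wavelet and all variable names are illustrative assumptions, not the thesis code.

```python
import numpy as np

def cc_delay(w1: np.ndarray, w2: np.ndarray, dt: float) -> float:
    """Delay of w2 relative to w1 (seconds, positive if w2 lags w1),
    taken at the maximum of the normalized cross-correlation."""
    w1 = (w1 - w1.mean()) / w1.std()
    w2 = (w2 - w2.mean()) / w2.std()
    cc = np.correlate(w2, w1, mode="full")
    lag = np.argmax(cc) - (len(w1) - 1)      # lag in samples
    return lag * dt

# Synthetic example: the same wavelet delayed by 0.05 s, sampled at 100 Hz
dt = 0.01
t = np.arange(0.0, 2.0, dt)
wavelet = lambda t0: np.exp(-((t - t0) / 0.05) ** 2) * np.sin(2 * np.pi * 5.0 * (t - t0))
print(cc_delay(wavelet(0.50), wavelet(0.55), dt))   # ~0.05 s
```

At the global scale the two traces would be the same phase of two different events recorded at one station, while at the local scale they would be the same event recorded at two sensors of the tripartite array, as described above.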
Abstract:
The event-scale, physically based, distributed rainfall-runoff and erosion model Kineros2 was applied to two mountain catchments in the province of Bologna (Italy) in order to test and evaluate its performance in the Apennine environment. After the parameterization of the two catchments, Kineros2 was calibrated and validated using experimental discharge and suspended-sediment concentration data, collected at the catchment outlets thanks to two hydro-turbidimetric monitoring stations. The modelling made it possible to assess the capability of the model to correctly reproduce the observed hydrological dynamics, as well as to draw conclusions on its potential and limitations.
Abstract:
A wall film model has been implemented in a customized version of the KIVA code developed at the University of Bologna. Under the hypothesis of 'thin laminar flow', the model simulates the dynamics of a liquid wall film generated by impinging sprays. Particular care has been taken in the numerical implementation of the model. The major phenomena taken into account in the present model are: wall film formation by the impinging spray; body forces, such as gravity or acceleration of the wall; shear stress at the interface with the gas and the no-slip condition on the wall; momentum contribution and dynamic pressure generated by the tangential and normal components of the impinging drops; film evaporation by heat exchange with the wall and the surrounding gas. The model does not consider the effect of wavy film motion and supposes that all the impinging droplets adhere to the film. The governing equations have been integrated in space using a finite volume approach with a first-order upwind differencing scheme, and they have been integrated in time with a fully explicit method. The model is validated using two different test cases reproducing PFI gasoline and DI Diesel engine wall film conditions.
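To illustrate the type of discretization mentioned above (finite volumes, first-order upwind in space, fully explicit in time), here is a minimal one-dimensional advection sketch; it is a generic illustration of the scheme, not the KIVA wall film implementation, and all names and values are illustrative.

```python
import numpy as np

def upwind_explicit_step(h: np.ndarray, u: float, dx: float, dt: float) -> np.ndarray:
    """One explicit first-order upwind step for dh/dt + u*dh/dx = 0 (u > 0 assumed)."""
    flux = u * h                               # upwind flux: taken from the cell behind each face
    h_new = h.copy()
    h_new[1:] -= dt / dx * (flux[1:] - flux[:-1])
    return h_new                               # inflow cell h_new[0] is held fixed

# Illustrative setup: a film-thickness bump advected over a 1 m domain
dx, u = 0.01, 1.0
dt = 0.5 * dx / u                              # explicit scheme: keep the CFL number below 1
x = np.arange(0.0, 1.0, dx)
h = np.where((x > 0.2) & (x < 0.3), 1e-4, 1e-5)    # film thickness in metres
for _ in range(40):
    h = upwind_explicit_step(h, u, dx, dt)
```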
Abstract:
In this work we aim to propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies, which avoid the bias carried by aggregated analyses. Starting from collecting disease counts and calculating expected disease counts by means of reference population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the low population underlying the area or the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classic and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method focused on multiple testing control, without however abandoning the preliminary-study perspective that an analysis of SMR indicators is expected to keep. We implement the control of the FDR, a quantity largely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on the knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed to be independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value based methods. Another peculiarity of the present work is to propose a hierarchical full Bayesian model for FDR estimation in testing many null hypotheses of absence of risk. We use concepts of Bayesian models for disease mapping, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, has the advantage of evaluating a single test (i.e. a test in a single area) by means of all the observations in the map under study, rather than just by means of the single observation. This makes it possible to improve the power of the test in small areas and to address more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model aims to estimate the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) for each area. An estimate of the expected FDR conditional on the data (the estimated FDR) can be calculated on any set of b_i's relative to areas declared at high risk (where the null hypothesis is rejected) by averaging the b_i's themselves. The estimated FDR can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the estimated FDR does not exceed a prefixed value; we call these estimated-FDR based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation of the FDR produces a loss of specificity. Moreover, our model has the interesting feature of still being able to provide an estimate of the relative risk values, as in the Besag, York and Mollié model (1991). A simulation study was set up to evaluate the model performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of estimation of the relative risks. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the size of the areas, the number of areas where the null hypothesis is true, and the risk level in the latter areas. In summarizing the simulation results we always consider the FDR estimation in sets constituted by all the b_i's lower than a threshold t. We show graphs of the estimated FDR and the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (from the closeness between the estimated and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against the estimated FDR we can check the sensitivity and specificity of the corresponding estimated-FDR based decision rules. To investigate the over-smoothing level of the relative risk estimates we compare box-plots of such estimates in high-risk areas (known by simulation), obtained by both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 scenarios in total). Results show that the FDR is well estimated (in the worst case we get an overestimation, hence a conservative FDR control) in scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims. In such scenarios we have good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of estimated-FDR based decision rules is generally low but the specificity is high; in such scenarios, selection rules based on an estimated FDR of 0.05 or 0.10 can be suggested. In cases where the number of true alternative hypotheses (number of true high-risk areas) is small, FDR values of 0.15 are also well estimated, and a decision rule based on an estimated FDR of 0.15 gains power while maintaining a high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05); this results in a loss of specificity of a decision rule based on an estimated FDR of 0.05. In such scenarios, decision rules based on an estimated FDR of 0.05 or, even worse, 0.1 cannot be suggested because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and the FDR control, except for scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
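A minimal sketch of the estimated-FDR based selection rule described above, assuming a vector of MCMC-estimated posterior null probabilities (the b_i's) is already available from the hierarchical model; the array of probabilities and the target value are illustrative, and this is not the thesis code.

```python
import numpy as np

def fdr_selection(b: np.ndarray, target_fdr: float = 0.05):
    """Select the largest set of areas (smallest b_i first) whose estimated FDR,
    i.e. the mean of the selected b_i's, does not exceed target_fdr."""
    order = np.argsort(b)                                   # most likely high-risk areas first
    running_fdr = np.cumsum(b[order]) / np.arange(1, len(b) + 1)
    k = int(np.sum(running_fdr <= target_fdr))              # running_fdr is non-decreasing
    return order[:k], (running_fdr[k - 1] if k > 0 else 0.0)

# Illustrative posterior null probabilities for 8 areas
b = np.array([0.01, 0.02, 0.03, 0.20, 0.40, 0.70, 0.85, 0.95])
selected, est_fdr = fdr_selection(b, target_fdr=0.05)
print(selected, est_fdr)    # areas 0, 1, 2 selected; estimated FDR = 0.02
```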
Abstract:
The haemodynamic response to dynamic exercise has been the object of numerous scientific studies. Much less attention has been paid to the cardiovascular adjustments that occur when a dynamic effort is interrupted. At the cessation of exercise, heart rate and myocardial contractility decrease abruptly and large amounts of the end products of muscle metabolism, such as lactate, hydrogen ions and adenosine, are released; these substances induce vasodilation in the previously active muscle groups, producing a reduction in cardiac preload, afterload and myocardial contractility and a dilation of the peripheral arterioles, so that peripheral vascular resistance is kept at a low level. In addition, changes in blood electrolyte concentrations, a decrease in circulating catecholamines and increased vagal tone occur: all these phenomena can have a significant effect on the haemodynamic state. This study aimed to evaluate to what extent the possible hypotensive effect of exercise is related to the intensity of the applied workload and to its duration. The sample examined comprised 20 active male subjects. The subjects underwent four tests on different days. The preliminary exercise test was a maximal incremental test performed on a cycle ergometer with an incremental protocol of 30 Watts per minute. The test consisted of a first phase lasting 3 minutes, during which baseline data were recorded, and a second phase lasting three minutes in which the subject warmed up on the cycle ergometer at a load of 20 W before the start of the effort. At the end of the test, the maximum workload reached (Wmax) and the anaerobic threshold (AT) were calculated. After the preliminary exercise test, each subject performed, in randomized order, 3 constant-load (rectangular) exercises of different intensity, structured as follows: a 70% AT test, a 130% AT test and a 130% Wmax test, i.e. rectangular exercise tests at a workload equal to the indicated percentage of the AT and Wmax values obtained in the preliminary test. These tests lasted ten minutes or until exhaustion of the subject. The tests were preceded by three minutes of rest and three minutes of warm-up. The recovery lasted 30 minutes. Blood pressure was measured every 5 minutes during the effort, every minute during the first 5 minutes of recovery and then every 5 minutes until the end of the recovery. The results show that the hypotensive effect was more marked in the recovery from the lowest workload intensity, i.e. after the 70% AT test. It should be considered that the lowest exercise intensity allowed a significantly longer exercise than the 130% AT and 130% Wmax tests. It is therefore likely that the duration of the exercise, and not only its intensity, played a fundamental role in determining the post-exercise hypotension highlighted in this study. The most evident hypotensive effect occurred in the tests with the lowest intensity but the highest total workload. The data support the tendency to consider not so much the intensity and duration of the exercise in isolation as the total workload (intensity x duration). The hypotensive effect recorded in the study is mainly to be ascribed to a persistent post-exercise vasodilation.
Indeed, in the recovery from the 70% AT test, peripheral vascular resistance remained low compared with resting values. This finding could have great clinical value in prescribing the most suitable physical activity for hypertensive subjects, who could benefit from a possible hypotensive effect following the activity performed. Therefore, in the future the study should be extended to hypertensive subjects. Confirming this result in such subjects would make it possible to choose the intensity and duration of the workload correctly, so as to calibrate the effort to the degree of the subject's pathology.
Abstract:
The Székesfehérvár Ruin Garden is a unique assemblage of monuments belonging to the cultural heritage of Hungary due to its important role in the Middle Ages as the coronation and burial church of the Kings of the Hungarian Christian Kingdom. It has been nominated as a "National Monument" and, as a consequence, its protection in the present and future is required. Moreover, it was reconstructed and expanded several times throughout Hungarian history. A quick overview of the current state of the monument reveals the presence of several lithotypes among the remaining building and decorative stones. Therefore, research related to the materials is crucial not only for the conservation of that specific monument but also for other historic structures in Central Europe. The current research is divided into three main parts: i) description of the lithologies and their provenance, ii) physical properties testing of the historic material and iii) durability tests of analogous stones obtained from active quarries. The survey of the National Monument of Székesfehérvár focuses on the historical importance and the architecture of the monument, the different construction periods, the identification of the different building stones and their distribution in the remaining parts of the monument, and it also includes provenance analyses. The second part is the in situ and laboratory testing of the physical properties of the historic material. As a final phase, samples were taken from local quarries with physical and mineralogical characteristics similar to those of the stones used in the monument. The three studied lithologies are: a fine oolitic limestone, a coarse oolitic limestone and a red compact limestone. These stones were used for rock mechanical and durability tests under laboratory conditions. The following techniques were used: a) in situ: Schmidt Hammer Values, moisture content measurements, DRMS, mapping (construction ages, lithotypes, weathering forms); b) laboratory: petrographic analysis, XRD, determination of real density by means of a helium pycnometer and bulk density by means of a mercury pycnometer, pore size distribution by mercury intrusion porosimetry and by nitrogen adsorption, water absorption, determination of open porosity, DRMS, frost resistance, ultrasonic pulse velocity test, uniaxial compressive strength test and dynamic modulus of elasticity. The results show that initial uniaxial compressive strength is not necessarily a clear indicator of stone durability. Bedding and other lithological heterogeneities can influence the strength and durability of individual specimens. In addition, long-term behaviour is influenced by exposure conditions, fabric and, especially, the pore size distribution of each sample. Therefore, a statistical evaluation of the results is highly recommended, and they should be evaluated in combination with other investigations of the internal structure and micro-scale heterogeneities of the material, such as petrographic observation, ultrasonic pulse velocity and porosimetry. Laboratory tests used to estimate the durability of natural stone may give good guidance on its short-term performance, but they should not be taken as an ultimate indication of the long-term behaviour of the stone. The interdisciplinary study of the results confirms that the stones in the monument show deterioration in terms of mineralogy, fabric and physical properties in comparison with the quarried stones. Moreover, stone testing proves compatibility between quarried and historical stones. Good correlation is observed between the non-destructive techniques and the laboratory test results, which allows us to minimize sampling while assessing the condition of the materials. In conclusion, this research can contribute to the diagnostic knowledge needed for further studies aimed at evaluating the effect of recent and future protective measures.
Abstract:
The work undertaken in this PhD thesis is aimed at the development and testing of an innovative methodology for the assessment of the vulnerability of coastal areas to catastrophic marine inundation (tsunami). Different approaches are used at different spatial scales and are applied to three different study areas: 1. the entire western coast of Thailand; 2. two selected coastal suburbs of Sydney, Australia; 3. the Aeolian Islands, in the South Tyrrhenian Sea, Italy. I have discussed each of these case studies in at least one scientific paper: one paper about the Thailand case study (Dall’Osso et al., in review-b), three papers about the Sydney applications (Dall’Osso et al., 2009a; Dall’Osso et al., 2009b; Dall’Osso and Dominey-Howes, in review) and one last paper about the work at the Aeolian Islands (Dall’Osso et al., in review-a). These publications represent the core of the present PhD thesis. The main topics dealt with are outlined and discussed in a general introduction, while the overall conclusions are summarized in the last section.
Abstract:
The field of "computer security" is often considered something in between Art and Science. This is partly due to the lack of widely agreed and standardized methodologies to evaluate the degree of security of a system. This dissertation intends to contribute to this area by investigating the most common security testing strategies applied nowadays and by proposing an enhanced methodology that may be effectively applied to different threat scenarios with the same degree of effectiveness. Security testing methodologies are the first step towards standardized security evaluation processes and towards understanding how security threats evolve over time. This dissertation analyzes some of the most widely used methodologies, identifying differences and commonalities that are useful to compare them and assess their quality. The dissertation then proposes a new enhanced methodology built by keeping the best of every analyzed methodology. The designed methodology is tested on different systems with very effective results, which is the main evidence that it could really be applied in practical cases. Most of the dissertation discusses and proves how the presented testing methodology can be applied to such different systems, and even used to evade security measures by inverting its goals and scopes. Real cases are often hard to find in methodology documents; on the contrary, this dissertation aims to show real and practical cases, offering technical details about how to apply the methodology. Electronic voting systems are the first field test considered, with Pvote and Scantegrity as the two tested electronic voting systems. The usability and effectiveness of the designed methodology for electronic voting systems are proved through the analysis of these field cases. Furthermore, reputation and antivirus engines have also been analyzed, with similar results. The dissertation concludes by presenting some general guidelines for building a coordination-based approach to electronic voting systems that improves security without decreasing system modularity.
Abstract:
The needs of customers to improve machinery in recent years have driven tractor manufacturers to reduce product life and development costs. The most significant efforts have concentrated on the attempt to decrease the costs of the experimental testing sector. The validation of tractor prototypes is presently performed by replicating a particularly unfavourable condition a defined number of times. These laboratory tests do not always faithfully reproduce the real use of the tractor. Therefore, field tests are also carried out to evaluate the prototype during real use, but it is difficult to perform such tests for a period of time long enough to reproduce tractor life usage. In this context, accelerated tests have been introduced in the automotive sector, producing a given amount of damage to the structure in a reduced amount of time. The goal of this work is to define a methodology for the realization of accelerated structural tests on a tractor through the reproduction of real customer tractor usage. A market analysis was performed on an 80 kW tractor and a series of measurements was then carried out to reproduce the real use of the tractor. Subsequently, the rainflow matrices of the signals were extrapolated and used to estimate the tractor loadings for 10 years of tractor life. Finally, these loadings were reproduced on proving grounds with special road pavements. The results obtained highlight the possibility of reproducing field loadings during road driving on proving grounds (PGs), but the use of two field operations is also necessary. The global acceleration factor obtained in this first step of the methodology is equal to three.
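A minimal sketch of how rainflow-counted load cycles might be extrapolated to a target service life and compared with a proving-ground schedule through Miner's rule; the S-N parameters, cycle counts and annual usage are illustrative assumptions, not data from the thesis.

```python
def miner_damage(cycles_per_hour: dict[float, float], hours: float,
                 C: float = 1e16, m: float = 5.0) -> float:
    """Miner's-rule damage for a rainflow histogram {load amplitude: cycles/hour},
    using a Basquin S-N curve N(S) = C * S**(-m)."""
    return hours * sum(n / (C * s ** (-m)) for s, n in cycles_per_hour.items())

# Illustrative rainflow histograms (load amplitude in MPa -> cycles per hour)
field = {40.0: 500.0, 80.0: 60.0, 120.0: 5.0}            # mix of field operations
proving_ground = {60.0: 800.0, 100.0: 120.0, 150.0: 10.0}

target_field_hours = 10 * 800                            # e.g. 10 years at 800 h/year (assumption)
damage_target = miner_damage(field, target_field_hours)

# Proving-ground hours needed to accumulate the same damage, and the resulting acceleration factor
pg_hours = damage_target / miner_damage(proving_ground, 1.0)
print(pg_hours, target_field_hours / pg_hours)
```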
Abstract:
Weak lensing experiments such as the future ESA-accepted mission Euclid aim to measure cosmological parameters with unprecedented accuracy. It is important to assess the precision that can be obtained in these measurements by applying analysis software to mock images that contain the many sources of noise present in the real data. In this Thesis, we present a method to perform simulations of observations that produce realistic images of the sky according to the characteristics of the instrument and of the survey. We then use these images to test the performance of the Euclid mission. In particular, we concentrate on the precision of the photometric redshift measurements, which are key data for performing cosmic shear tomography. We calculate the fraction of the total observed sample that must be discarded to reach the required level of precision, equal to 0.05(1+z) for a galaxy with measured redshift z, for different sets of ancillary ground-based observations. The results highlight the importance of u-band observations, especially to discriminate between low (z < 0.5) and high (z ~ 3) redshifts, and the need for good observing sites, with seeing FWHM < 1 arcsec. We then construct an optimal filter to detect galaxy clusters through photometric catalogues of galaxies, and we test it on the COSMOS field, obtaining 27 lensing-confirmed detections. Applying this algorithm to mock Euclid data, we verify the possibility of detecting clusters with mass above 10^14.2 solar masses with a low rate of false detections.
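A minimal sketch of the 0.05(1+z) precision criterion quoted above, used here as a simplified proxy for the fraction of the sample that would have to be discarded; the mock catalogue and its scatter are illustrative assumptions, not Euclid simulation data.

```python
import numpy as np

def fraction_outside_requirement(z_phot: np.ndarray, z_ref: np.ndarray) -> float:
    """Fraction of galaxies whose photo-z error exceeds the 0.05*(1+z) requirement."""
    bad = np.abs(z_phot - z_ref) > 0.05 * (1.0 + z_ref)
    return float(bad.mean())

# Illustrative catalogue: reference redshifts with Gaussian photo-z scatter of 0.04*(1+z)
rng = np.random.default_rng(0)
z_ref = rng.uniform(0.1, 3.0, size=100_000)
z_phot = z_ref + rng.normal(0.0, 0.04 * (1.0 + z_ref))
print(fraction_outside_requirement(z_phot, z_ref))   # ~0.21 for this illustrative scatter
```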
Abstract:
Background: Decreased exercise capacity and reduced peak oxygen uptake are present in most patients affected by hypertrophic cardiomyopathy (HCM). In addition, an abnormal blood pressure response during a maximal exercise test has been associated with a high risk of sudden cardiac death in adult patients with HCM. Therefore cardiopulmonary exercise testing (CPET) has become an important part of the evaluation of HCM patients, but data on its role in pediatric patients with HCM are quite limited. Methods and results: Between 2004 and 2010, using CPET and echocardiography, we studied 68 children (mean age 13.9 ± 2 years) with HCM. The exercise test was completed by all the patients without adverse complications. The mean value of achieved VO2 max was 31.4 ± 8.3 mL/kg/min, corresponding to 77.5 ± 16.9% of the predicted value. 51 patients (75%) reached a subnormal value of VO2 max. On univariate analysis, the achieved VO2 as a percentage of predicted and the peak exercise systolic blood pressure (BP) Z score were inversely associated with maximal left ventricular (LV) wall thickness and with the E/Ea ratio, and directly related to the Ea and Sa wave velocities. No association was found with the LV outflow tract gradient. During a mean follow-up of 2.16 ± 1.7 years, 9 patients reached the defined clinical end point of death, transplantation, implantable cardioverter defibrillator (ICD) shock, ICD implantation for secondary prevention, or myectomy. Patients with peak VO2 < 52% of predicted or with peak systolic BP Z score < -5.8 had lower event-free survival at follow-up. Conclusions: Exercise capacity is decreased in pediatric patients with HCM, and global ventricular function seems to be the most important determinant of exercise capacity in these patients. CPET seems to play an important role in the prognostic stratification of children affected by HCM.
Abstract:
Objectives: To evaluate the prevalence of the different HPV genotypes in patients diagnosed with CIN2/3 in the Emilia-Romagna region, the genotype-specific persistence of HPV and the expression of the viral oncogenes E6/E7 during post-treatment follow-up as risk factors for recurrent/persistent or progressive disease, and to verify the applicability of new biomolecular diagnostic tests in cervical cancer screening. Methods: Patients with abnormal screening cytology who underwent excisional treatment (T0) for a diagnosis of CIN2/3 on targeted biopsy were included. At T0 and during follow-up at 6, 12, 18 and 24 months, in addition to the Pap test and colposcopy, HPV DNA detection and genotyping for 28 genotypes were performed. In case of DNA positivity for the 5 genotypes 16, 18, 31, 33 and/or 45, E6/E7 HPV mRNA testing was carried out. Preliminary results: 95.8% of the 168 selected patients were HPV DNA positive at T0. In 60.9% of cases the infections were single (mainly HPV 16 and 31), in 39.1% they were multiple. HPV 16 was the most frequently detected genotype (57%). 94.3% (117/124) of the patients positive for the 5 HPV DNA genotypes were mRNA positive. There was a drop-out of 38/168 patients. At 18 months (95% of patients), the persistence of HPV DNA of any genotype was 46%, that of HPV DNA of the 5 genotypes was 39%, with mRNA expression in 21%. Disease recurrence (CIN2+) occurred in 10.8% (14/130) of cases at 18 months. The Pap test was negative in 4/14 cases, the HPV DNA test was positive in all cases, and the mRNA test in 11/12 cases. Conclusions: The HR-HPV DNA test is more sensitive than cytology, while the mRNA test is more specific in detecting a recurrence. The definitive data will be available at the end of the planned follow-up.
Abstract:
A control-oriented model of a Dual Clutch Transmission was developed for real-time Hardware In the Loop (HIL) applications, to support model-based development of the DCT controller. The model is an innovative attempt to reproduce the fast dynamics of the actuation system while maintaining a step size large enough for real-time applications. The model comprises a detailed physical description of the hydraulic circuit, clutches, synchronizers and gears, together with simplified vehicle and internal combustion engine sub-models. As the oil circulating in the system has a large bulk modulus, the pressure dynamics are very fast, possibly causing instability in a real-time simulation; the same challenge involves the servo valve dynamics, due to the very small masses of the moving elements. Therefore, the hydraulic circuit model has been modified and simplified without losing physical validity, in order to adapt it to the real-time simulation requirements. The results of offline simulations have been compared to on-board measurements to verify the validity of the developed model, which was then implemented in a HIL system and connected to the TCU (Transmission Control Unit). Several tests have been performed: electrical failure tests on sensors and actuators, hydraulic and mechanical failure tests on hydraulic valves, clutches and synchronizers, and application tests covering all the main features of the control performed by the TCU. Being based on physical laws, the model simulates a plausible reaction of the system in every condition. The first intensive use of the HIL application led to the validation of the new safety strategies implemented in the TCU software. A test automation procedure has been developed to permit the execution of a pattern of tests without user interaction; fully repeatable tests can be performed for non-regression verification, allowing new software releases to be tested in fully automatic mode.
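A minimal sketch of why a large oil bulk modulus makes fixed-step real-time simulation difficult, using a single lumped hydraulic volume with a laminar leakage path; the bulk modulus, volume and leakage coefficient are generic illustrative values, not parameters of the thesis model.

```python
# Lumped hydraulic chamber: dP/dt = (beta / V) * (Q_in - k_leak * P)
beta = 1.5e9        # oil bulk modulus [Pa] (illustrative)
V = 2.0e-5          # chamber volume [m^3]
k_leak = 1.0e-10    # laminar leakage coefficient [m^3/(s*Pa)]

# The linearized pressure dynamics have a pole at -lambda_p:
lambda_p = beta * k_leak / V        # [1/s]
dt_stable = 2.0 / lambda_p          # forward (explicit) Euler stability limit

print(f"time constant = {1e3 / lambda_p:.2f} ms, max explicit step = {1e3 * dt_stable:.2f} ms")
# Here the stability limit (~0.27 ms) is already below a typical 1 ms real-time step,
# which is why stiff pressure (and servo valve) dynamics must be simplified or
# reformulated before running the model on a fixed-step HIL target.
```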