916 results for automatic test case generation


Relevance: 30.00%

Abstract:

The quality of temperature and humidity retrievals from the infrared SEVIRI sensors on the geostationary Meteosat Second Generation (MSG) satellites is assessed by means of a one-dimensional variational (1D-Var) algorithm. The study is performed with the aim of improving the spatial and temporal resolution of available observations to feed analysis systems designed for high-resolution regional-scale numerical weather prediction (NWP) models. The non-hydrostatic forecast model COSMO (COnsortium for Small-scale MOdelling) in the ARPA-SIM operational configuration is used to provide background fields. Only clear-sky observations over sea are processed. An optimised 1D-Var set-up comprising the two water-vapour and the three window channels is selected. It maximises the reduction of errors in the model backgrounds while ensuring ease of operational implementation through accurate bias-correction procedures and correct radiative-transfer simulations. The 1D-Var retrieval quality is first quantified in relative terms, employing statistics to estimate the reduction in the background model errors. Additionally, the absolute retrieval accuracy is assessed by comparing the analysis with independent radiosonde and satellite observations. The inclusion of satellite data brings a substantial reduction in the warm and dry biases present in the forecast model. Moreover, it is shown that the retrieval profiles generated by the 1D-Var are well correlated with the radiosonde measurements. Subsequently, the 1D-Var technique is applied to two three-dimensional case studies: a false-alarm case that occurred in Friuli-Venezia-Giulia on 8 July 2004 and a heavy-precipitation case that occurred in the Emilia-Romagna region between 9 and 12 April 2005. The impact of satellite data for these two events is evaluated in terms of increments in the integrated water vapour and saturation water vapour over the column, in the 2-metre temperature and specific humidity, and in the surface temperature.
To improve the 1D-Var technique, a method to calculate flow-dependent model-error covariance matrices is also assessed. The approach employs members of an ensemble forecast system generated by perturbing physical parameterisation schemes inside the model. The improved set-up applied to the case of 8 July 2004 shows a substantially neutral impact.
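The core of such a 1D-Var scheme is the minimisation of a cost function that balances departures from the background profile against departures from the observed radiances. A minimal sketch with a linear observation operator follows; all sizes, matrices and noise levels are toy assumptions for illustration, not the thesis set-up.

```python
import numpy as np

# Minimal linear 1D-Var sketch: an n-level profile observed through m
# satellite channels.  H, B, R and the profiles are toy assumptions.
n, m = 10, 5
rng = np.random.default_rng(0)

H = rng.normal(size=(m, n))                   # linearised observation operator
B = np.eye(n)                                 # background-error covariance
R = 0.5 * np.eye(m)                           # observation-error covariance

x_b = rng.normal(size=n)                      # background profile (e.g. from COSMO)
x_t = x_b + rng.normal(scale=0.3, size=n)     # synthetic "true" profile
y = H @ x_t + rng.normal(scale=0.1, size=m)   # synthetic observation

def cost(x):
    """1D-Var cost: background term plus observation term."""
    dx, dy = x - x_b, y - H @ x
    return 0.5 * dx @ np.linalg.solve(B, dx) + 0.5 * dy @ np.linalg.solve(R, dy)

# For a linear H the minimum has the closed form
#   x_a = x_b + B H^T (H B H^T + R)^-1 (y - H x_b)
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain matrix
x_a = x_b + K @ (y - H @ x_b)

print(cost(x_b), cost(x_a))                   # the analysis always lowers the cost
```

A flow-dependent B, as assessed in the thesis, would simply replace the static `B` above with a covariance estimated from ensemble perturbations.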

Relevance: 30.00%

Abstract:

Máster Universitario en Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería (SIANI)

Relevance: 30.00%

Abstract:

Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying its compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters represents the first step in discriminating whether a given seismic event is natural or not. In case a specific event is deemed suspicious by the majority of the State Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicentre of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test. In fact, high-quality seismological systems are thought to be capable of detecting and locating very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques: the first, known as the double-difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by high relative accuracy, although the absolute location of the whole cluster remains uncertain.
We eliminate this problem by introducing a priori information: the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques, we have used cross-correlation among digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests on the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. Initially, the algorithm was applied to the differences among the original arrival times of the P phases, so the cross-correlation was not used. We found that the substantial geometrical spreading, noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, taken as our reference), was considerably reduced by the application of our technique. This is what we expected, since the methodology was applied to a sequence of events for which we can suppose a real closeness among the hypocenters, belonging to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed or at least reduced.
The introduction of the cross-correlation did not bring evident improvements to our results: the two sets of locations (without and with the application of the cross-correlation technique) are very similar to each other. This suggests that the use of the cross-correlation did not substantially improve the precision of the manual pickings. Probably the pickings reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. As a further explanation for the modest results given by the cross-correlation, it should be remarked that the events in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to the cross-correlation, we performed a signal interpolation in order to improve the time resolution. The algorithm so developed was applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results point out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in bad SNR conditions). Another remarkable point of our procedure is that its application does not demand a long time to process the data, so the user can immediately check the results. During a field survey, this feature will make possible a quasi-real-time check, allowing the immediate optimization of the array geometry if so suggested by the results at an early stage.
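The waveform cross-correlation step can be illustrated with a toy delay-estimation example. The wavelet shape, sampling rate and noise level below are assumptions made purely for illustration:

```python
import numpy as np

# Toy delay estimation between two similar waveforms via
# cross-correlation, as used for refining relative arrival times.
fs = 100.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 2, 1 / fs)
wavelet = np.exp(-(t - 0.5) ** 2 / 0.001) * np.sin(2 * np.pi * 10 * t)

true_shift = 12                              # delay of the second record, samples
s1 = wavelet
s2 = np.roll(wavelet, true_shift)
s2 = s2 + 0.01 * np.random.default_rng(1).normal(size=t.size)   # add noise

cc = np.correlate(s2, s1, mode="full")
lag = int(np.argmax(cc)) - (len(s1) - 1)     # positive lag: s2 arrives later
print(f"estimated delay: {lag / fs:.2f} s")  # 0.12 s
```

The same peak-picking idea, combined with signal interpolation, gives the sub-sample time resolution mentioned for the local-scale array processing.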

Relevance: 30.00%

Abstract:

This thesis concerns the analysis of gear transmissions and of toothed gears in general, with a view to minimizing energy losses. A model has been developed for calculating the energy and heat dissipated in a gearbox, both parallel-axis and planetary. This model makes it possible to estimate the equilibrium temperature of the oil as the operating conditions vary. Thermal calculation is still not widespread in gearbox design, but it has proved to be important above all for compact gearboxes, such as planetary gearboxes, for which the maximum transmissible power is usually determined precisely by thermal considerations. The model has been implemented in an automated calculation system, which can be adapted to various types of gearbox. This calculation system also makes it possible to estimate the energy dissipated under various lubrication conditions, and it has been used to evaluate the differences between traditional oil-bath lubrication and "dry-sump" or "wet-sump" lubrication. The model has been applied to the particular case of a two-stage gearbox: the first stage with parallel axes and the second planetary. Within a research contract between DIEM and Brevini S.p.A. of Reggio Emilia, experimental tests were carried out on a prototype of this gearbox, tests that made it possible to calibrate the proposed model [1]. A further field of investigation was the study of the energy dissipated by the meshing of two toothed gears, using models that provide for the calculation of a friction coefficient that varies along the contact segment. The most common models, by contrast, are based on an average friction coefficient, whereas it can be observed that the coefficient varies considerably during meshing.
In particular, since the literature does not report how the efficiency varies in the case of profile-shifted gears, we focused on the energy dissipated in the gears as the profile shift varies. This study is reported in [2]. Research was also conducted on the operation of screw-nut linear actuators. The mechanisms that determine the wear conditions of the screw-nut coupling in linear actuators were studied, with particular reference to the thermal aspects of the phenomenon. It was found, in fact, that the contact temperature between screw and nut is the most critical parameter in the operation of these actuators. By means of an experimental test, a law was found that, given pressure, speed and duty factor, estimates the operating temperature. An interpretation of this experimental law was given on the basis of known theoretical models. This study was conducted within a research contract between DIEM and Ognibene Meccanica S.r.l. of Bologna and is published in [3].
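The effect of a friction coefficient that varies along the path of contact can be sketched with a toy calculation. The load, sliding-speed and friction profiles below are illustrative assumptions, not the thesis model:

```python
import numpy as np

# Toy comparison of mesh power loss computed with a constant (average)
# friction coefficient versus one that varies along the contact path.
s = np.linspace(-1.0, 1.0, 500)        # position along the path of contact
F_n = 1000.0                           # normal tooth load, N (constant, assumed)
v_slide = 2.0 * np.abs(s)              # sliding speed: zero at the pitch point

mu_const = 0.05                               # average friction coefficient
mu_var = 0.05 * (0.6 + 0.8 * np.abs(s))       # same mean, higher away from the pitch point

def mesh_loss(mu):
    """Path-averaged friction power mu * F_n * v_slide, in W."""
    return float(np.mean(mu * F_n * v_slide))

print(mesh_loss(mu_const), mesh_loss(mu_var))
```

Even with the same mean friction coefficient, the variable-mu model predicts a higher loss here, because friction and sliding speed peak together away from the pitch point; this is the kind of difference an average-mu model cannot capture.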

Relevance: 30.00%

Abstract:

Aim of the research: to develop a prototype of a homogeneous high-throughput screening (HTS) assay for the identification of novel integrin antagonists for the treatment of ocular allergy, and to better understand the mechanisms of the integrin-mediated anti-allergic action of levocabastine. Results: This thesis provides evidence that, in a scintillation proximity assay (SPA), levocabastine (IC50=406 mM), but not the first-generation antihistamine chlorpheniramine, displaces [125I]fibronectin (FN) binding to human α4β1 integrin. This result is supported by flow cytometry analysis, where levocabastine antagonizes the binding of a primary antibody to integrin α4 expressed in Jurkat E6.1 cells. Levocabastine, but not chlorpheniramine, binds to α4β1 integrin and prevents eosinophil adhesion to VCAM-1, FN or human umbilical vein endothelial cells (HUVEC) cultured in vitro. Similarly, levocabastine affects αLβ2/ICAM-1-mediated adhesion of Jurkat E6.1 cells. Analyzing the supernatant of TNF-α-treated (24 h) eosinophilic cells (EoL-1), we report that levocabastine reduces the TNF-α-induced release of the cytokines IL-12p40, IL-8 and VEGF. Finally, in a model of allergic conjunctivitis, levocabastine eye drops (0.05%) reduced the clinical signs of the early- and late-phase reactions and the conjunctival expression of α4β1 integrin by reducing infiltrated eosinophils. Conclusions: SPA is a highly efficient and robust binding assay, amenable to automation, for screening novel integrin antagonists in an HTS setting. We propose that blockade of integrin-mediated cell adhesion might be a target of the anti-allergic action of levocabastine and may play a role in preventing eosinophil adhesion and infiltration in allergic conjunctivitis.

Relevance: 30.00%

Abstract:

The work undertaken in this PhD thesis is aimed at the development and testing of an innovative methodology for the assessment of the vulnerability of coastal areas to catastrophic marine inundation (tsunami). Different approaches are used at different spatial scales and are applied to three different study areas:
1. the entire western coast of Thailand;
2. two selected coastal suburbs of Sydney, Australia;
3. the Aeolian Islands, in the South Tyrrhenian Sea, Italy.
I have discussed each of these case studies in at least one scientific paper: one paper about the Thailand case study (Dall’Osso et al., in review-b), three papers about the Sydney applications (Dall’Osso et al., 2009a; Dall’Osso et al., 2009b; Dall’Osso and Dominey-Howes, in review) and one last paper about the work on the Aeolian Islands (Dall’Osso et al., in review-a). These publications represent the core of the present PhD thesis. The main topics dealt with are outlined and discussed in a general introduction, while the overall conclusions are presented in the last section.

Relevance: 30.00%

Abstract:

The hard X-ray band (10 - 100 keV) has so far been observed only by collimated and coded-aperture-mask instruments, with a sensitivity and an angular resolution about two orders of magnitude worse than those of the current X-ray focusing telescopes operating below 10 - 15 keV. Technological advances in X-ray mirrors and detection systems now make it possible to extend the X-ray focusing technique to the hard X-ray domain, filling the gap in observational performance and providing a totally new deep view of some of the most energetic phenomena of the Universe. In order to reach a sensitivity of 1 μCrab in the 10 - 40 keV energy range, great care in background minimization is required, a common issue for all hard X-ray focusing telescopes. In the present PhD thesis, a comprehensive analysis of the space radiation environment, the payload design and the resulting prompt X-ray background level is presented, with the aim of driving the feasibility study of the shielding system and assessing the scientific requirements of future hard X-ray missions. A Geant4-based multi-mission background simulator, BoGEMMS, is developed to be applied to any high-energy mission for which the shielding and instrument performances are required. It allows the user to interactively create a virtual model of the telescope and expose it to the space radiation environment, tracking the particles along their paths and filtering the simulated background counts as in a real observation in space. Its flexibility is exploited to evaluate the background spectra of the Simbol-X and NHXM missions, as well as the soft-proton scattering by the X-ray optics and the selection of the best shielding configuration. Although the Simbol-X and NHXM missions are the case studies of the background analysis, the obtained results can be generalized to any future hard X-ray telescope. For this reason, a simplified, ideal payload model is also used to identify the major sources of background in LEO.
All the results are original contributions to the assessment studies of the cited missions, carried out as part of the background groups' activities.
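BoGEMMS itself is a Geant4-based C++ tool; the bookkeeping behind a shielding trade-off study can nonetheless be sketched with a drastically simplified Monte Carlo. All numbers (flux, attenuation length, geometry) are illustrative assumptions:

```python
import numpy as np

# Toy shielding study: particles crossing a shield survive with
# probability exp(-thickness / lambda); surviving counts are converted
# to a detector background rate for an equivalent exposure time.
rng = np.random.default_rng(0)

n_primaries = 100_000          # simulated particles
flux = 5.0                     # particles / cm^2 / s at the spacecraft (assumed)
det_area = 10.0                # cm^2 (assumed)
exposure = n_primaries / (flux * det_area)    # equivalent observing time, s

mean_free_path = 2.0           # cm, effective attenuation length (assumed)
for thickness in (0.0, 1.0, 3.0):
    survived = rng.random(n_primaries) < np.exp(-thickness / mean_free_path)
    rate = survived.sum() / exposure / det_area   # counts / s / cm^2
    print(f"shield {thickness:3.1f} cm -> background {rate:.3f} cts/s/cm^2")
```

A full simulator replaces the single exponential with particle-by-particle transport through the payload geometry, but the normalisation of simulated counts to an in-orbit rate works the same way.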

Relevance: 30.00%

Abstract:

Introduction: Lower pole kidney stones represent at times a challenge for the urologist. The gold-standard treatment for intrarenal stones <2 cm is extracorporeal shock wave lithotripsy (ESWL), while for those >2 cm it is percutaneous nephrolithotomy (PCNL). The success rate of ESWL, however, decreases when it is employed for lower pole stones, and this is particularly true in the presence of narrow calices or acute infundibular angles. Studies have proved that ureteroscopy (URS) is an efficacious alternative to ESWL for lower pole stones <2 cm, but this is not reflected by either the European or the American guidelines. The aim of this study is to present the results of a large series of flexible ureteroscopies and PCNLs for lower pole kidney stones from high-volume centres, in order to provide more evidence on the potential indications of flexible ureteroscopy for the treatment of kidney stones.
Materials and Methods: A database was created and the participating centres retrospectively entered their data relating to the percutaneous and flexible ureteroscopic management of lower pole kidney stones. Patients included were treated between January 2005 and January 2010. Variables analyzed included case-load number, preoperative and postoperative imaging, stone burden, anaesthesia (general vs. spinal), type of lithotripter, access location and size, access dilation type, ureteral access sheath use, visual clarity, operative time, stone-free rate, complication rate, hospital stay, analgesic requirement and follow-up time. Stone-free rate was defined as the absence of residual fragments or the presence of a single fragment <2 mm in size at follow-up imaging. The primary end-point was to test the efficacy and safety of flexible URS for the treatment of lower pole stones; the same descriptive analysis was conducted for the PCNL approach, which is considered the gold standard for the treatment of lower pole kidney stones. In this setting, no statistical analysis was conducted owing to the different selection criteria of the patients. The secondary end-point consisted of comparing the stone-free rates, operative times and complication rates of flexible URS and PCNL in the subgroup of patients harbouring lower pole kidney stones between 1 and 2 cm in the larger diameter.
Results: A total of 246 patients met the criteria for inclusion. There were 117 PCNLs (group 1) and 129 flexible URS (group 2). Ninety-six percent of cases were diagnosed by CT KUB scan. Mean stone burden was 175±160 and 50±62 mm2 for groups 1 and 2, respectively. General anaesthesia was induced in 100% and 80% of groups 1 and 2, respectively. Pneumo-ultrasonic energy was used in 84% of cases in the PCNL group, and holmium laser in 95% of the cases in the flexible URS group. The mean operative time was 76.9±44 and 63±37 minutes for groups 1 and 2, respectively. There were 12 major complications (11%) in group 1 (mainly Grade II complications according to the Clavien classification) and no major complications in group 2. Mean hospital stay was 5.7 and 2.6 days for groups 1 and 2, respectively. Ninety-five percent of group 1 and 52% of group 2 required analgesia for longer than 24 hours. The intraoperative stone-free rate after a single treatment was 88.9% for group 1 and 79.1% for group 2. Overall, 6% of group 1 and 14.7% of group 2 required a second-look procedure. At 3 months, stone-free rates were 90.6% and 92.2% for groups 1 and 2, respectively, as documented by follow-up CT KUB (22%) or a combination of intravenous pyelogram, plain KUB and/or kidney ultrasound (78%).
In the subanalysis comparing the 82 vs 65 patients who underwent PCNL and flexible URS for lower pole stones between 1 and 2 cm, intraoperative stone-free rates were 88% vs 68% (p=0.03), respectively; however, after an auxiliary procedure, which was necessary in 6% of the cases in group 1 and 23% in group 2 (p=0.03), stone-free rates at 3 months were not statistically different (91.5% vs 89.2%; p=0.6). Conversely, the patients undergoing PCNL maintained a higher risk of complications during the procedure, with 9 cases observed in this group versus 0 in the group of patients treated with URS (p=0.01).
Conclusions: These data highlight the value of flexible URS as a very effective and safe option for the treatment of kidney stones; thanks to the latest generation of flexible devices, this technical approach seems to be a valid alternative in particular for the treatment of lower pole kidney stones of less than 2 cm. In high-volume centres and in the hands of skilled surgeons, this technique can approach the stone-free rates achievable through PCNL in lower pole stones between 1 and 2 cm, with a very low risk of complications. Furthermore, the results confirm the high success rate and relatively low morbidity of modern PCNL for lower pole stones, with no difference detectable between the prone and supine positions.
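A comparison of two proportions like the intraoperative stone-free rates above is typically done with a chi-square or Fisher exact test. The sketch below reconstructs plausible 2x2 counts from the reported percentages (88% of 82 PCNL patients vs 68% of 65 URS patients); the exact counts, and the choice of Fisher's test, are illustrative assumptions rather than the study's actual analysis:

```python
from math import comb

# Two-sided Fisher exact test, stdlib only (scipy.stats.fisher_exact is
# the usual tool): sum the probabilities of all 2x2 tables, with the
# same margins, that are at most as likely as the observed one.
def fisher_exact_two_sided(a, b, c, d):
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d
    def p_table(x):                       # hypergeometric probability
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)
    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Assumed counts: PCNL 72/82 (~88%) vs URS 44/65 (~68%) stone-free
p = fisher_exact_two_sided(72, 10, 44, 21)
print(f"two-sided p = {p:.4f}")
```

With these reconstructed counts the p-value lands well below 0.05, consistent with the reported significance of the intraoperative difference.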

Relevance: 30.00%

Abstract:

The central objective of research in Information Retrieval (IR) is to discover new techniques to retrieve relevant information in order to satisfy an information need. The information need is satisfied when relevant information can be provided to the user. In IR, relevance is a fundamental concept which has changed over time, from popular to personal: what was considered relevant before was information for the whole population, whereas what is considered relevant now is specific information for each user. Hence, there is a need to connect the behavior of the system to the condition of a particular person and his social context; from this need the interdisciplinary field called Human-Centered Computing was born. For the modern search engine, the information extracted for the individual user is crucial. According to Personalized Search (PS), two different techniques are necessary to personalize a search: contextualization (interconnected conditions that occur in an activity) and individualization (characteristics that distinguish an individual). This shift of focus to the individual's need undermines the rigid linearity of the classical model, which has been overtaken by the "berry-picking" model; the latter explains that search terms change thanks to the informational feedback received from the search activity, introducing the concept of the evolution of search terms. The development of Information Foraging theory, which observed the correlations between animal foraging and human information foraging, also contributed to this transformation through attempts to optimize the cost-benefit ratio. This thesis arose from the need to satisfy human individuality when searching for information, and it develops a synergistic collaboration between the frontiers of technological innovation and the recent advances in IR.
The search method developed exploits what is relevant for the user by radically changing the way in which an information need is expressed, because it is now expressed through the generation of the query and its own context. In fact, the method was conceived to improve the quality of search by rewriting the query based on contexts automatically generated from a local knowledge base. Furthermore, the idea of optimizing each IR system led to developing it as a middleware of interaction between the user and the IR system. The system therefore has just two possible actions: rewriting the query and reordering the results. Equivalent actions are described in the PS literature, which generally exploits information derived from the analysis of user behavior, while the proposed approach exploits knowledge provided by the user. The thesis goes further to propose a novel assessment procedure, according to the "Cranfield paradigm", in order to evaluate this type of IR system. The results achieved are interesting considering both the effectiveness achieved and the innovative approach undertaken, together with the several applications inspired by the use of a local knowledge base.
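The two middleware actions (query rewriting and result reordering) can be sketched with a toy example. The knowledge base, the example query and all function names below are illustrative assumptions, not the thesis implementation:

```python
# Toy middleware sketch: rewrite a query using a local knowledge base of
# user-provided context terms, then rerank results by context overlap.
knowledge_base = {
    "jaguar": {"animal": ["wildlife", "big cat"], "car": ["vehicle", "brand"]},
}

def rewrite_query(query: str, context: str) -> str:
    """Expand each ambiguous term with terms from the active context."""
    words = []
    for term in query.split():
        words.append(term)
        words.extend(knowledge_base.get(term, {}).get(context, []))
    return " ".join(words)

def rerank(results, context_terms):
    """Reorder results by overlap with the user's context terms."""
    score = lambda doc: sum(t in doc.lower() for t in context_terms)
    return sorted(results, key=score, reverse=True)

q = rewrite_query("jaguar", context="animal")
docs = ["Jaguar unveils a new vehicle",
        "The jaguar is a big cat of the Americas"]
print(q)                                   # "jaguar wildlife big cat"
print(rerank(docs, ["wildlife", "big cat"]))
```

Note that both actions sit between the user and an unmodified IR system, which is the design point of the middleware approach: no access to the engine's internals is required.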

Relevance: 30.00%

Abstract:

This thesis proposes design methods and test tools for optical systems, which may be used in an industrial environment, where not only precision and reliability but also ease of use is important. The approach to the problem has been conceived to be as general as possible, although in the present work the design of a portable device for automatic identification applications has been studied, because this doctorate has been funded by Datalogic Scanning Group s.r.l., a world-class producer of barcode readers. The main functional components of the complete device are the electro-optical imaging, illumination and pattern-generator systems. Regarding the electro-optical imaging system, a characterization tool and an analysis tool have been developed to check whether the desired performance of the system has been achieved. Moreover, two design tools for optimizing the imaging system have been implemented. The first optimizes just the core of the system, the optical part, improving its performance while ignoring all other contributions and generating a good starting point for the optimization of the whole complex system. The second tool optimizes the system taking into account its behavior with a model as close as possible to reality, including optics, electronics and detection. Regarding the illumination and pattern-generator systems, two tools have been implemented. The first allows the design of free-form lenses, described by an arbitrary analytical function and excited by an incoherent source, and is able to provide custom illumination conditions for all kinds of applications. The second tool consists of a new method to design diffractive optical elements excited by a coherent source for large pattern angles, using the Iterative Fourier Transform Algorithm. Validation of the design tools has been obtained, whenever possible, by comparing the performance of the designed systems with those of fabricated prototypes. In other cases simulations have been used.
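The Iterative Fourier Transform Algorithm mentioned above alternates between the element plane and the far-field plane, imposing the known constraints in each. A minimal Gerchberg-Saxton-style sketch follows; the grid size, target pattern and iteration count are toy assumptions, and this scalar FFT model ignores the large-angle corrections that the thesis method addresses:

```python
import numpy as np

# IFTA sketch: find a phase-only element that turns a uniform coherent
# beam into a target far-field intensity pattern.
N = 64
rng = np.random.default_rng(0)

source_amp = np.ones((N, N))                  # uniform illumination
target_amp = np.zeros((N, N))
target_amp[24:40, 24:40] = 1.0                # desired far field: a square spot

phase = rng.uniform(0, 2 * np.pi, (N, N))     # random initial phase
for _ in range(50):
    field = source_amp * np.exp(1j * phase)
    far = np.fft.fft2(field)
    far = target_amp * np.exp(1j * np.angle(far))   # impose target amplitude
    near = np.fft.ifft2(far)
    phase = np.angle(near)                    # keep phase, restore source amplitude

far_int = np.abs(np.fft.fft2(source_amp * np.exp(1j * phase))) ** 2
efficiency = far_int[24:40, 24:40].sum() / far_int.sum()
print(f"diffraction efficiency into target: {efficiency:.1%}")
```

The freedom to choose the far-field phase is what lets a phase-only element approximate an arbitrary intensity pattern; the achieved diffraction efficiency is the usual figure of merit.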

Relevance: 30.00%

Abstract:

In order to handle natural disasters, emergency areas are often identified over the territory, close to populated centres. In these areas, rescue services are located, which respond with resources and materials for population relief. A method for the automatic positioning of these centres in case of a flood or an earthquake is presented. The positioning procedure consists of two distinct parts, developed by the research group of Prof Michael G. H. Bell of Imperial College London, and refined and applied to real cases at the University of Bologna under the coordination of Prof Ezio Todini. Certain requirements need to be observed, such as the maximum number of rescue points as well as the number of people involved. Initially, the candidate points are chosen from those proposed by the local civil protection services. We then calculate all possible routes from each candidate rescue point to all other points, generally using the concept of the "hyperpath", namely a set of paths each of which may be optimal. The attributes of the road network are of fundamental importance, both for the calculation of the ideal distance and for eventual delays due to the event, measured in travel-time units. In a second phase, the distances are used to decide the optimum rescue point positions using heuristics. This second part works by "elimination". At the beginning, all points are considered rescue centres. During each iteration we delete one point and calculate the impact this creates. In each case, we delete the point that creates the least impact, until we reach the number of rescue centres we wish to keep.
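The elimination heuristic can be sketched in a few lines. The symmetric travel-time matrix below is an illustrative assumption; in the real procedure the distances come from hyperpath computations on the road network:

```python
# Toy elimination heuristic: start with every candidate point as a
# rescue centre and repeatedly drop the centre whose removal increases
# total travel time the least, until k centres remain.
points = ["A", "B", "C", "D", "E"]
dist = {
    ("A", "B"): 2, ("A", "C"): 9, ("A", "D"): 10, ("A", "E"): 7,
    ("B", "C"): 6, ("B", "D"): 4, ("B", "E"): 3,
    ("C", "D"): 8, ("C", "E"): 5, ("D", "E"): 1,
}

def d(p, q):
    return 0 if p == q else dist.get((p, q), dist.get((q, p)))

def total_cost(centres):
    """Every point is served by its nearest remaining centre."""
    return sum(min(d(p, c) for c in centres) for p in points)

centres = set(points)
k = 2                                      # number of centres to keep
while len(centres) > k:
    # drop the centre whose removal creates the least impact
    worst = min(sorted(centres), key=lambda c: total_cost(centres - {c}))
    centres.remove(worst)

print(sorted(centres), total_cost(centres))   # ['B', 'E'] 8
```

This greedy backward elimination is not guaranteed to be optimal, but it needs only one cost evaluation per candidate per iteration, which is what makes it practical on real networks.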

Relevance: 30.00%

Abstract:

The present study concerns the acoustical characterisation of Italian historical theatres. It starts from ISO 3382, which provides guidelines for the measurement of a well-established set of room-acoustic parameters inside performance spaces. Nevertheless, the peculiarity of Italian historical theatres calls for a more specific approach. The Charter of Ferrara goes in this direction, aiming at qualifying the sound field in this kind of hall, and the present work pursues this way forward. To understand how the acoustical qualification should be done, the Bonci Theatre in Cesena was taken as a case study. In September 2012 acoustical measurements were carried out in the theatre, recording monaural and binaural impulse responses at each seat in the hall. The values of the time criteria, energy criteria, and psycho-acoustical and spatial criteria were extracted according to ISO 3382. Statistics were computed and a 3D model of the theatre was built and tuned. Statistical investigations were carried out on the whole set of measurement positions and on carefully chosen reduced subsets; it turned out that these subsets are representative only of the "average" acoustics of the hall. Normality tests were carried out to verify whether EDT, T30 and C80 could be described with some degree of reliability by a theoretical distribution. Different results were found, according to the varying assumptions underlying each test. Subsequently, an attempt was made to correlate the numerical results emerging from the statistical analysis with the perceptual sphere. In the search for "acoustically equivalent areas", relative difference limens were considered as threshold values; no rule of thumb emerged. Finally, the significance of the usual representation through mean values and standard deviations, which may be meaningful for normally distributed data, was investigated.
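Parameters such as T30 and EDT are derived from the measured impulse responses via Schroeder backward integration, the procedure behind ISO 3382. A sketch on a synthetic exponentially decaying impulse response (the decay rate, length and sampling rate are toy assumptions):

```python
import numpy as np

# Reverberation-time estimation from an impulse response via the
# Schroeder energy decay curve (EDC).
fs = 8000
t = np.arange(0, 2.0, 1 / fs)
rt_true = 1.2                                     # nominal reverberation time, s
ir = np.random.default_rng(0).normal(size=t.size) * 10 ** (-3 * t / rt_true)

edc = np.cumsum(ir[::-1] ** 2)[::-1]              # backward-integrated energy
edc_db = 10 * np.log10(edc / edc[0])

def decay_time(db_hi, db_lo):
    """Reverberation time from a line fit on the EDC between two levels."""
    mask = (edc_db <= db_hi) & (edc_db >= db_lo)
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)   # decay slope, dB/s
    return -60.0 / slope                               # extrapolate to -60 dB

t30 = decay_time(-5, -35)     # T30: fit between -5 and -35 dB
edt = decay_time(0, -10)      # EDT: fit between 0 and -10 dB
print(f"T30 = {t30:.2f} s, EDT = {edt:.2f} s")
```

For this idealised single-slope decay T30 and EDT agree; in a real theatre they differ, which is precisely why ISO 3382 keeps both.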

Relevance: 30.00%

Abstract:

Objectives: To evaluate the prevalence of the different HPV genotypes in patients diagnosed with CIN2/3 in the Emilia-Romagna region, the genotype-specific persistence of HPV, and the expression of the viral oncogenes E6/E7 during post-treatment follow-up, as risk factors for recurrence/persistence or progression of disease; and to verify the applicability of new biomolecular diagnostic tests in cervical cancer screening. Methods: Patients with abnormal screening cytology who underwent excisional treatment (T0) for a diagnosis of CIN2/3 on targeted biopsy were included. At T0 and during follow-up at 6, 12, 18 and 24 months, in addition to the Pap test and colposcopy, HPV DNA detection and genotyping for 28 genotypes were performed. In case of DNA positivity for the 5 genotypes 16, 18, 31, 33 and/or 45, E6/E7 HPV mRNA testing was carried out. Preliminary results: 95.8% of the 168 selected patients were HPV DNA positive at T0. In 60.9% of cases the infections were single (mainly HPV 16 and 31), and in 39.1% they were multiple. HPV 16 was the most frequently detected genotype (57%). 94.3% (117/124) of the patients positive for the 5 HPV DNA genotypes were mRNA positive. There was a drop-out of 38/168 patients. At 18 months (95% of the patients), the persistence of HPV DNA of any genotype was 46%, that of HPV DNA of the 5 genotypes was 39%, with mRNA expression in 21%. Disease recurrence (CIN2+) occurred in 10.8% (14/130) at 18 months. The Pap test was negative in 4/14 of these cases, the HPV DNA test was positive in all cases, and the mRNA test in 11/12 cases. Conclusions: The HR-HPV DNA test is more sensitive than cytology; the mRNA test is more specific in identifying a recurrence. The definitive data will be available at the end of the planned follow-up.

Relevance: 30.00%

Abstract:

Agri-food supply chains extend beyond national boundaries, partially facilitated by a policy environment that encourages more liberal international trade. Rising concentration within the downstream sector has driven a shift towards "buyer-driven" global value chains (GVCs) extending internationally, with global sourcing and the emergence of multinational key economic players that compete with increased emphasis on product quality attributes. Agri-food systems are thus increasingly governed by a range of inter-related public and private standards, both of which are becoming a priori mandatory, especially in supply chains for high-value and quality-differentiated agri-food products; these standards tend to strongly affect upstream agricultural practices and firms' internal organization and strategic behaviour, and to shape the organization of the food chain. Notably, increasing attention has been given to the impact of SPS measures on agri-food trade, and particularly on developing countries' export performance. Food and agricultural trade is the vital link in the mutual dependency of the global trade system and developing countries, which derive a substantial portion of their income from food and agricultural trade. In Morocco, fruit and vegetables (especially fresh) are the primary agricultural export. Because of its labor intensity, this sector (especially citrus and tomato) is particularly important in terms of income and employment generation, especially for the female laborers hired in the farms and packing houses. Hence, the emergence of agricultural and agri-food product-safety issues and the subsequent tightening of market requirements have challenged these mutual gains, owing to the lack of technical and financial capacities of most developing countries.

Relevance: 30.00%

Abstract:

In case of violation of CPT and Lorentz symmetry, the minimal Standard-Model Extension (SME) of Kostelecky and coworkers predicts sidereal modulations of atomic transition frequencies as the Earth rotates relative to a Lorentz-violating background field. One method to search for these modulations is the so-called clock-comparison experiment, in which the frequencies of co-located clocks are compared as they rotate with respect to the fixed stars. In this work an experiment is presented in which polarized 3He and 129Xe gas samples in a glass cell serve as clocks, whose nuclear spin precession frequencies are detected with the help of highly sensitive SQUID sensors inside a magnetically shielded room. The unique feature of this experiment is that the spins precess freely, with transverse relaxation times of up to 4.4 h for 129Xe and 14.1 h for 3He. To be sensitive to Lorentz-violating effects, the influence of external magnetic fields is cancelled via the weighted difference of the 3He and 129Xe frequencies or phases. The Lorentz-violating SME parameters for the neutron are determined from a fit to the phase-difference data of 7 spin precession measurements of 12 to 16 hours in length. The fit gives an upper limit for the equatorial component of the neutron parameter b_n of 3.7×10^(−32) GeV at the 95% confidence level. This value is not limited by the signal-to-noise ratio, but by the strong correlations between the fit parameters. To reduce the correlations and thereby improve the sensitivity of future experiments, it will be necessary to change the time structure of the weighted phase difference, which can be realized by increasing the 129Xe relaxation time.
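The weighted-difference cancellation works because both species accumulate a phase proportional to gamma*B*t, so scaling the 129Xe phase by the ratio of gyromagnetic ratios removes the field contribution exactly, drift included. A sketch with toy numbers (the field drift and the extra non-magnetic shift are invented for illustration; the gyromagnetic ratios are approximate literature values):

```python
import numpy as np

# Co-magnetometer principle:
#   delta = phi_He - (gamma_He / gamma_Xe) * phi_Xe
# cancels the magnetic phase of both species, leaving only
# non-magnetic (e.g. Lorentz-violating) shifts.
gamma_He = 32.434                    # kHz/mT for 3He (approximate)
gamma_Xe = 11.777                    # kHz/mT for 129Xe (approximate)

t = np.linspace(0.0, 10.0, 1000)                 # time (arbitrary units)
B = 1e-3 * (1.0 + 1e-4 * np.sin(0.5 * t))        # slowly drifting field, mT
extra_He = 1e-6 * t                              # hypothetical non-magnetic shift

phi_He = 2 * np.pi * gamma_He * B * t + extra_He
phi_Xe = 2 * np.pi * gamma_Xe * B * t

delta = phi_He - (gamma_He / gamma_Xe) * phi_Xe  # field contribution cancels
print(np.max(np.abs(delta - extra_He)))          # residual at machine precision
```

In the real analysis the weighted phase difference still contains deterministic terms (e.g. chemical-shift and Earth-rotation contributions), and it is the strong correlations among those fit terms, not the noise, that limit the quoted bound on b_n.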