12 results for TEST CASE GENERATION

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

80.00%

Publisher:

Abstract:

Interactive theorem provers (ITPs for short) are tools whose final aim is to certify proofs written by human beings. To reach that objective they have to fill the gap between the high-level language used by humans for communicating and reasoning about mathematics and the lower-level language that a machine is able to “understand” and process. The user perceives this gap in terms of missing features or inefficiencies. The developer tries to accommodate the user's requests without increasing the already high complexity of these applications. We believe that satisfactory solutions can only come from a strong synergy between users and developers. We devoted most of our PhD to designing and developing the Matita interactive theorem prover. The software was born in the Computer Science Department of the University of Bologna as the result of composing together all the technologies developed by the HELM team (to which we belong) for the MoWGLI project. The MoWGLI project aimed at giving web accessibility to the libraries of formalised mathematics of various interactive theorem provers, taking Coq as the main test case. The motivations for giving life to a new ITP are:
• to study the architecture of these tools, with the aim of understanding the source of their complexity;
• to exploit such knowledge to experiment with new solutions that, for backward compatibility reasons, would be hard (if not impossible) to test on a widely used system like Coq.
Matita is based on the Curry-Howard isomorphism, adopting the Calculus of Inductive Constructions (CIC) as its logical foundation. Proof objects are thus, to some extent, compatible with the ones produced with the Coq ITP, which is itself able to import and process the ones generated using Matita. Although the systems have a lot in common, they share no code at all, and even most of the algorithmic solutions are different. The thesis is composed of two parts in which we respectively describe our experience as a user and as a developer of interactive provers. In particular, the first part is based on two different formalisation experiences:
• our internship in the Mathematical Components team (INRIA), which is formalising the finite group theory required to attack the Feit-Thompson Theorem. To tackle this result, which gives an effective classification of finite groups of odd order, the team adopts the SSReflect Coq extension, developed by Georges Gonthier for the proof of the four colour theorem;
• our collaboration with the D.A.M.A. project, whose goal is the formalisation of abstract measure theory in Matita leading to a constructive proof of Lebesgue's Dominated Convergence Theorem.
The most notable issues we faced, analysed in this part of the thesis, are the following: the difficulties arising when using “black box” automation in large formalisations; the impossibility for a user (especially a newcomer) to master the context of a library of already formalised results; the uncomfortable big-step execution of proof commands historically adopted in ITPs; and the difficult encoding of mathematical structures with a notion of inheritance in a type theory without subtyping like CIC.
In the second part of the manuscript many of these issues are analysed through the lens of an ITP developer, describing the solutions we adopted in the implementation of Matita to solve these problems: integrated searching facilities to assist the user in handling large libraries of formalised results; a small-step execution semantics for proof commands; a flexible implementation of coercive subtyping allowing multiple inheritance with shared substructures; and automatic tactics, integrated with the searching facilities, that generate proof commands (and not only proof objects, usually kept hidden from the user), one of which is specifically designed to be user driven.
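To give a concrete flavour of the coercive subtyping mechanism mentioned above, the following is a minimal sketch in Lean 4 (a different CIC-based prover, not Matita's own syntax) of how an inheritance relation between algebraic structures can be encoded as a record field plus an explicit coercion; the structure names and fields are illustrative only.

```lean
-- Minimal sketch (Lean 4, illustrative only): CIC has no subtyping, so
-- "Group extends Monoid" is encoded as a field plus a coercion.
structure Monoid where
  carrier : Type
  op      : carrier → carrier → carrier
  one     : carrier

structure Group where
  toMonoid : Monoid
  inv      : toMonoid.carrier → toMonoid.carrier

-- the coercion lets any Group be used where a Monoid is expected
instance : Coe Group Monoid := ⟨Group.toMonoid⟩

def asMonoid (G : Group) : Monoid := G   -- coercion inserted automatically
```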

Relevance:

80.00%

Publisher:

Abstract:

In recent years, due to the rapid convergence of multimedia services, the Internet and wireless communications, there has been a growing trend of heterogeneity (in terms of channel bandwidths, mobility levels of terminals, and end-user quality-of-service (QoS) requirements) in emerging integrated wired/wireless networks. Moreover, in today's systems a multitude of users coexists within the same network, each with its own QoS requirement and bandwidth availability. In this framework, embedded source coding allowing partial decoding at various resolutions is an appealing technique for multimedia transmissions. This dissertation covers my PhD research, mainly devoted to the study of embedded multimedia bitstreams in heterogeneous networks, developed at the University of Bologna, advised by Prof. O. Andrisano and Prof. A. Conti, and at the University of California, San Diego (UCSD), where I spent eighteen months as a visiting scholar, advised by Prof. L. B. Milstein and Prof. P. C. Cosman. In order to improve multimedia transmission quality over wireless channels, joint source and channel coding optimization is investigated in a 2D time-frequency resource block for an OFDM system. We show that knowing the order of diversity in the time and/or frequency domain can assist image (video) coding in selecting optimal channel code rates (source and channel code rates). Then, adaptive modulation techniques, aimed at maximizing spectral efficiency, are investigated as another possible solution for improving multimedia transmissions. For both slow and fast adaptive modulation, the effects of imperfect channel estimation are evaluated, showing that the fast technique, optimal in ideal systems, might be outperformed by slow adaptive modulation when a real test case is considered. Finally, the effects of co-channel interference and approximated bit error probability (BEP) are evaluated for adaptive modulation techniques, providing new decision region concepts and showing how the widely used BEP approximations lead to a substantial loss in overall performance.
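As a simple illustration of the rate-adaptation principle discussed above (a generic sketch, not the optimization developed in the thesis), the code below picks, for a given instantaneous SNR, the largest square M-QAM constellation whose approximate bit error probability stays below a target, using the common textbook BEP approximation for M-QAM over AWGN; the target BEP and constellation set are placeholders.

```python
# Minimal sketch of threshold-based adaptive M-QAM selection.
# Assumes the common approximation BER ~= 0.2 * exp(-1.5 * snr / (M - 1)).
import math

def approx_ber_mqam(snr_linear: float, m: int) -> float:
    """Approximate bit error probability of square M-QAM over AWGN."""
    return 0.2 * math.exp(-1.5 * snr_linear / (m - 1))

def select_constellation(snr_db: float, target_ber: float = 1e-3,
                         orders=(4, 16, 64, 256)) -> int:
    """Return the largest M meeting the BER target (0 means no transmission)."""
    snr_linear = 10 ** (snr_db / 10)
    best = 0
    for m in orders:
        if approx_ber_mqam(snr_linear, m) <= target_ber:
            best = m
    return best

if __name__ == "__main__":
    for snr_db in (5, 10, 15, 20, 25):
        m = select_constellation(snr_db)
        print(f"{snr_db:>2} dB -> {m}-QAM" if m else f"{snr_db:>2} dB -> no transmission")
```

Slow versus fast adaptation differ only in how often (and on how reliable an estimate) such a decision is taken, which is why channel estimation errors affect the two schemes differently.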

Relevance:

30.00%

Publisher:

Abstract:

The quality of temperature and humidity retrievals from the infrared SEVIRI sensors on the geostationary Meteosat Second Generation (MSG) satellites is assessed by means of a one-dimensional variational (1D-Var) algorithm. The study is performed with the aim of improving the spatial and temporal resolution of available observations to feed analysis systems designed for high-resolution regional-scale numerical weather prediction (NWP) models. The non-hydrostatic forecast model COSMO (COnsortium for Small scale MOdelling), in the ARPA-SIM operational configuration, is used to provide background fields. Only clear-sky observations over sea are processed. An optimised 1D-Var set-up comprising the two water vapour channels and the three window channels is selected. It maximises the reduction of errors in the model backgrounds while ensuring ease of operational implementation through accurate bias correction procedures and correct radiative transfer simulations. The 1D-Var retrieval quality is first quantified in relative terms, employing statistics to estimate the reduction in the background model errors. Additionally, the absolute retrieval accuracy is assessed by comparing the analysis with independent radiosonde and satellite observations. The inclusion of satellite data brings a substantial reduction in the warm and dry biases present in the forecast model. Moreover, it is shown that the retrieval profiles generated by the 1D-Var are well correlated with the radiosonde measurements. Subsequently, the 1D-Var technique is applied to two three-dimensional case studies: a false-alarm case that occurred in Friuli-Venezia Giulia on 8 July 2004 and a heavy precipitation case that occurred in the Emilia-Romagna region between 9 and 12 April 2005. The impact of satellite data for these two events is evaluated in terms of increments in the column-integrated water vapour and saturation water vapour, in the 2-metre temperature and specific humidity, and in the surface temperature. To improve the 1D-Var technique, a method to calculate flow-dependent model error covariance matrices is also assessed. The approach employs members of an ensemble forecast system generated by perturbing the physical parameterisation schemes inside the model. The improved set-up applied to the case of 8 July 2004 shows a substantially neutral impact.
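For reference, a 1D-Var retrieval of this kind minimises the standard variational cost function below, written here in its generic textbook form rather than with the thesis-specific channel selection, where x_b is the background profile from the NWP model, B the background error covariance, y the observed SEVIRI radiances, H the radiative transfer (observation) operator and R the observation error covariance:

```latex
J(\mathbf{x}) \;=\; \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
\;+\; \tfrac{1}{2}\,\bigl(\mathbf{y}-H(\mathbf{x})\bigr)^{\mathsf T}\mathbf{R}^{-1}\bigl(\mathbf{y}-H(\mathbf{x})\bigr)
```

The flow-dependent B matrices mentioned at the end of the abstract replace a static background error covariance in this expression with one estimated from the spread of the ensemble members.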

Relevance:

30.00%

Publisher:

Abstract:

Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying its compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters represents the first step in discriminating whether a given seismic event is natural or not. In case a specific event is deemed suspicious by the majority of the State Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test. In fact, high-quality seismological systems are thought to be capable of detecting and locating very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques. The first one, known as the double difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by a high relative accuracy, although the absolute location of the whole cluster remains uncertain. We eliminate this problem by introducing a priori information: the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both of the above-mentioned techniques, we have used cross-correlation among digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests on the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. At the beginning, the algorithm was applied to the differences among the original arrival times of the P phases, so cross-correlation was not used. We found that the considerable geometrical spreading, noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, assumed as our reference), was considerably reduced by the application of our technique.
This is what we expected, since the methodology was applied to a sequence of events for which we can assume a real closeness among the hypocenters, as they belong to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed or at least reduced. The introduction of cross-correlation did not bring evident improvements to our results: the two sets of locations (without and with the application of the cross-correlation technique) are very similar to each other. This suggests that the use of cross-correlation did not substantially improve the precision of the manual pickings. Probably the pickings reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. As a further explanation for the limited benefit of cross-correlation, it should be noted that the events included in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to cross-correlation, we performed a signal interpolation in order to improve the time resolution. The algorithm so developed was applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in bad SNR conditions). Another remarkable point of our procedure is that its application does not demand a long processing time, so the user can immediately check the results. During a field survey, this feature makes possible a quasi-real-time check, allowing the immediate optimization of the array geometry if so suggested by the results at an early stage.
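The following is a minimal sketch (not the thesis code) of the waveform cross-correlation step described above: estimating the relative arrival time between two digitised records of the same phase, either two events at one station (global case) or one event at two array sensors (local case). The synthetic signals and sampling rate are placeholders.

```python
# Cross-correlation delay estimate between two seismic traces.
import numpy as np

def relative_delay(trace_a: np.ndarray, trace_b: np.ndarray, dt: float) -> float:
    """Delay of trace_b relative to trace_a, in seconds (positive: b arrives later)."""
    a = (trace_a - trace_a.mean()) / trace_a.std()
    b = (trace_b - trace_b.mean()) / trace_b.std()
    xcorr = np.correlate(a, b, mode="full")
    lag = xcorr.argmax() - (len(b) - 1)   # numpy convention: c[k] = sum_n a[n+k]*b[n]
    return -lag * dt

if __name__ == "__main__":
    dt = 0.01                                    # 100 Hz sampling, hypothetical
    t = np.arange(0, 5, dt)
    pulse = np.exp(-((t - 2.0) / 0.1) ** 2)      # synthetic wavelet
    shifted = np.exp(-((t - 2.3) / 0.1) ** 2)    # same wavelet, 0.3 s later
    print(f"estimated delay: {relative_delay(pulse, shifted, dt):+.2f} s")   # ~ +0.30 s
```

In practice the peak can be refined by interpolating the correlation function (or the resampled traces), which is the time-resolution improvement used at the local scale.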

Relevance:

30.00%

Publisher:

Abstract:

Process algebraic architectural description languages provide a formal means for modeling software systems and assessing their properties. In order to bridge the gap between system modeling and system implementation, in this thesis an approach is proposed for automatically generating multithreaded object-oriented code from process algebraic architectural descriptions, in a way that preserves – under certain assumptions – the properties proved at the architectural level. The approach is divided into three phases, which are illustrated by means of a running example based on an audio processing system. First, we develop an architecture-driven technique for thread coordination management, which is completely automated through a suitable package. Second, we address the translation of the algebraically-specified behavior of the individual software units into thread templates, which will have to be filled in by the software developer according to certain guidelines. Third, we discuss performance issues related to the suitability of synthesizing monitors rather than threads from software unit descriptions that satisfy specific constraints. In addition to the running example, we present two case studies about a video animation repainting system and the implementation of a leader election algorithm, in order to summarize the whole approach. The outcome of this thesis is the implementation of the proposed approach in a translator called PADL2Java and its integration in the architecture-centric verification tool TwoTowers.
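As a generic illustration of the monitor-versus-thread trade-off mentioned in the third phase (a sketch in Python rather than Java, and not output of PADL2Java), a passive software unit whose behaviour only reacts to calls can be synthesised as a monitor, i.e. a shared object providing mutual exclusion and condition synchronisation executed by its caller threads, instead of a dedicated thread with its own message queue:

```python
# One-place buffer realised as a monitor (mutex + condition variables).
import threading

class OnePlaceBufferMonitor:
    """Passive unit synthesised as a monitor instead of a thread."""

    def __init__(self):
        self._lock = threading.Lock()
        self._not_empty = threading.Condition(self._lock)
        self._not_full = threading.Condition(self._lock)
        self._item = None

    def put(self, item):
        with self._lock:
            while self._item is not None:      # wait until the slot is free
                self._not_full.wait()
            self._item = item
            self._not_empty.notify()

    def get(self):
        with self._lock:
            while self._item is None:          # wait until an item is present
                self._not_empty.wait()
            item, self._item = self._item, None
            self._not_full.notify()
            return item

if __name__ == "__main__":
    buf = OnePlaceBufferMonitor()
    producer = threading.Thread(target=lambda: [buf.put(i) for i in range(3)])
    consumer = threading.Thread(target=lambda: print([buf.get() for _ in range(3)]))
    producer.start(); consumer.start()
    producer.join(); consumer.join()
```

Avoiding a thread (and the associated context switches and message passing) for such passive units is the performance motivation behind synthesising monitors when the unit description satisfies the constraints.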

Relevance:

30.00%

Publisher:

Abstract:

Aim of the research: to develop a prototype of a homogeneous high-throughput screening (HTS) assay for the identification of novel integrin antagonists for the treatment of ocular allergy, and to better understand the mechanisms of the integrin-mediated antiallergic action of levocabastine. Results: This thesis provides evidence that, in a scintillation proximity assay (SPA), levocabastine (IC50 = 406 mM), but not the first-generation antihistamine chlorpheniramine, displaces [125I]fibronectin (FN) binding to human α4β1 integrin. This result is supported by flow cytometry analysis, where levocabastine antagonizes the binding of a primary antibody to integrin α4 expressed in Jurkat E6.1 cells. Levocabastine, but not chlorpheniramine, binds to α4β1 integrin and prevents eosinophil adhesion to VCAM-1, FN or human umbilical vein endothelial cells (HUVEC) cultured in vitro. Similarly, levocabastine affects αLβ2/ICAM-1-mediated adhesion of Jurkat E6.1 cells. Analyzing the supernatant of TNF-α-treated (24 h) eosinophilic cells (EoL-1), we report that levocabastine reduces the TNF-α-induced release of the cytokines IL-12p40, IL-8 and VEGF. Finally, in a model of allergic conjunctivitis, levocabastine eye drops (0.05%) reduced the clinical signs of the early- and late-phase reactions and the conjunctival expression of α4β1 integrin by reducing infiltrated eosinophils. Conclusions: SPA is a highly efficient and robust binding assay, amenable to automation, for screening novel integrin antagonists in an HTS setting. We propose that blockade of integrin-mediated cell adhesion might be a target of the anti-allergic action of levocabastine and may play a role in preventing eosinophil adhesion and infiltration in allergic conjunctivitis.

Relevance:

30.00%

Publisher:

Abstract:

The work undertaken in this PhD thesis is aimed at the development and testing of an innovative methodology for the assessment of the vulnerability of coastal areas to catastrophic marine inundation (tsunami). Different approaches are used at different spatial scales and are applied to three different study areas: 1. the entire western coast of Thailand; 2. two selected coastal suburbs of Sydney, Australia; 3. the Aeolian Islands, in the South Tyrrhenian Sea, Italy. I have discussed each of these case studies in at least one scientific paper: one paper about the Thailand case study (Dall'Osso et al., in review-b), three papers about the Sydney applications (Dall'Osso et al., 2009a; Dall'Osso et al., 2009b; Dall'Osso and Dominey-Howes, in review) and one last paper about the work at the Aeolian Islands (Dall'Osso et al., in review-a). These publications represent the core of the present PhD thesis. The main topics dealt with are outlined and discussed in a general introduction, while the overall conclusions are outlined in the last section.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this doctoral thesis is to develop a genetic-algorithm-based optimization method to find the best conceptual design architecture of an aero piston engine for given design specifications. Nowadays, the conceptual design of turbine airplanes starts with the aircraft specifications, and then the turbofan or turboprop best suited to the specific application is chosen. In the field of aeronautical piston engines, which has been dormant for several decades as interest shifted towards turbine aircraft, new materials with increased performance and properties have opened new possibilities for development. Moreover, the engine's modularity, given by the cylinder unit, makes it possible to design a specific engine for a given application. In many real engineering problems the number of design variables may be very high, with several non-linearities needed to describe the behaviour of the phenomena. In this case the objective function has many local extremes, but the designer is usually interested in the global one. Stochastic and evolutionary optimization techniques, such as genetic algorithms, may offer reliable solutions to such design problems within acceptable computational time. The optimization algorithm developed here can be employed in the first phase of the preliminary design of an aeronautical piston engine. It is a mono-objective genetic algorithm which, starting from the given design specifications, finds the engine propulsive system configuration that has minimum mass while satisfying the geometrical, structural and performance constraints. The algorithm reads the project specifications as input data, namely the maximum values of crankshaft and propeller shaft speed and the maximum pressure in the combustion chamber. The bounds of the design variables, which describe the solution domain from the geometrical point of view, are introduced too. In the Matlab® Optimization environment the objective function to be minimized is defined as the sum of the masses of the engine propulsive components. Each individual generated by the genetic algorithm is the assembly of the flywheel, the vibration damper, and as many pistons, connecting rods and cranks as there are cylinders. The fitness is evaluated for each individual of the population, and then the genetic operators are applied: reproduction, mutation, selection and crossover. In the reproduction step the elitist method is applied, in order to save the fittest individuals from disruption by mutation and recombination, allowing them to survive undamaged into the next generation. Finally, once the best individual is found, the optimal dimensions of the components are saved to an Excel® file, in order to build an automatic 3D CAD model of each component of the propulsive system, giving a direct pre-visualization of the final product while still in the engine's preliminary design phase. With the purpose of showing the performance of the algorithm and validating this optimization method, an actual engine is taken as a case study: the 1900 JTD Fiat Avio, 4 cylinders, 4-stroke, Diesel. Many verifications are made on the mechanical components of the engine, in order to test their feasibility and to decide their survival through the generations. A system of inequalities is used to describe the non-linear relations between the design variables, and is used to check the components under static and dynamic load configurations.
The geometrical boundaries of the design variables are taken from actual engine data and similar design cases. Among the many simulations run for algorithm testing, twelve have been chosen as representative of the distribution of the individuals. Then, as an example, for each simulation the corresponding 3D models of the crankshaft and the connecting rod have been automatically built. In spite of morphological differences among the components, the mass is almost the same. The results show a significant mass reduction (almost 20% for the crankshaft) in comparison to the original configuration, and an acceptable robustness of the method has been shown. The algorithm developed here is shown to be a valid method for the preliminary design optimization of an aeronautical piston engine. In particular, the procedure is able to analyze quite a wide range of design solutions, rejecting the ones that cannot fulfil the feasibility design specifications. This optimization algorithm could foster aeronautical piston engine development, speeding up the production rate and joining modern computational performance and technological awareness to long-standing traditional design experience.
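A minimal sketch of the kind of optimisation loop described above is given below, in Python rather than the thesis' Matlab® environment: a single-objective genetic algorithm with elitism that minimises a penalised mass function under inequality constraints. The design variables, bounds, mass model and constraints are placeholders, not the actual engine models.

```python
# Single-objective GA with elitism and a penalty for constraint violation.
import random

BOUNDS = [(60.0, 120.0), (20.0, 60.0), (5.0, 20.0)]   # hypothetical variables (mm)

def mass(x):                      # placeholder objective: total propulsive mass
    return 0.002 * x[0] ** 2 + 0.004 * x[1] ** 2 + 0.5 * x[2]

def constraint_violation(x):      # placeholder structural checks, g(x) <= 0
    g = [70.0 - x[0],             # minimum bore-like dimension
         x[1] - 1.2 * x[0]]       # stroke/bore-like ratio limit
    return sum(max(0.0, gi) for gi in g)

def fitness(x):
    return mass(x) + 1e3 * constraint_violation(x)    # penalised objective

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(x, rate=0.1):
    return [min(hi, max(lo, xi + random.gauss(0, 0.05 * (hi - lo))))
            if random.random() < rate else xi
            for xi, (lo, hi) in zip(x, BOUNDS)]

def evolve(pop_size=40, generations=100, elite=2):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        nxt = pop[:elite]                                   # elitism: keep the fittest
        while len(nxt) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)  # truncation selection
            nxt.append(mutate(crossover(p1, p2)))
        pop = nxt
    return min(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best design:", [round(v, 1) for v in best], "mass:", round(mass(best), 2))
```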

Relevance:

30.00%

Publisher:

Abstract:

The hard X-ray band (10 - 100 keV) has so far been observed only by collimated and coded aperture mask instruments, with a sensitivity and an angular resolution about two orders of magnitude worse than those of the current X-ray focusing telescopes operating below 10 - 15 keV. The technological advances in X-ray mirrors and detection systems now make it possible to extend the X-ray focusing technique to the hard X-ray domain, filling the gap in observational performance and providing a totally new, deep view of some of the most energetic phenomena of the Universe. In order to reach a sensitivity of 1 μCrab in the 10 - 40 keV energy range, great care in background minimization is required, a common issue for all hard X-ray focusing telescopes. In the present PhD thesis, a comprehensive analysis of the space radiation environment, the payload design and the resulting prompt X-ray background level is presented, with the aim of driving the feasibility study of the shielding system and assessing the scientific requirements of future hard X-ray missions. A Geant4-based multi-mission background simulator, BoGEMMS, is developed to be applicable to any high-energy mission for which the shielding and instrument performances are required. It allows the user to interactively create a virtual model of the telescope and expose it to the space radiation environment, tracking the particles along their paths and filtering the simulated background counts as in a real observation in space. Its flexibility is exploited to evaluate the background spectra of the Simbol-X and NHXM missions, as well as the soft proton scattering by the X-ray optics and the selection of the best shielding configuration. Although the Simbol-X and NHXM missions are the case studies of the background analysis, the results obtained can be generalized to any future hard X-ray telescope. For this reason, a simplified, ideal payload model is also used to select the major sources of background in LEO. All the results are original contributions to the assessment studies of the cited missions, as part of the background groups' activities.
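As a simple, hypothetical illustration of the kind of post-processing such a simulator supports (not BoGEMMS itself), the sketch below bins a list of simulated energy deposits into a background spectrum normalised to counts per second, per cm² and per keV; the exposure, detector area and deposits are made up for the example.

```python
# Turn simulated energy deposits into a normalised background spectrum.
import numpy as np

def background_spectrum(deposits_kev, exposure_s, area_cm2,
                        e_min=10.0, e_max=100.0, n_bins=90):
    """Histogram energy deposits and normalise to a differential count rate."""
    edges = np.linspace(e_min, e_max, n_bins + 1)
    counts, _ = np.histogram(deposits_kev, bins=edges)
    bin_width = np.diff(edges)
    rate = counts / (exposure_s * area_cm2 * bin_width)   # cts / s / cm^2 / keV
    return 0.5 * (edges[:-1] + edges[1:]), rate

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_deposits = rng.uniform(10, 100, size=5000)       # placeholder deposits (keV)
    centers, rate = background_spectrum(fake_deposits, exposure_s=1e5, area_cm2=8.0)
    print(f"mean level: {rate.mean():.2e} cts/s/cm^2/keV in 10-100 keV")
```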

Relevance:

30.00%

Publisher:

Abstract:

Introduction: Lower pole kidney stones represent at times a challenge for the urologist. The gold-standard treatment for intrarenal stones <2 cm is extracorporeal shock wave lithotripsy (ESWL), while for those >2 cm it is percutaneous nephrolithotomy (PCNL). The success rate of ESWL, however, decreases when it is employed for lower pole stones, and this is particularly true in the presence of narrow calices or acute infundibular angles. Studies have shown that ureteroscopy (URS) is an efficacious alternative to ESWL for lower pole stones <2 cm, but this is not reflected in either the European or the American guidelines. The aim of this study is to present the results of a large series of flexible ureteroscopies and PCNLs for lower pole kidney stones from high-volume centres, in order to provide more evidence on the potential indications for flexible ureteroscopy in the treatment of kidney stones. Materials and methods: A database was created and the participating centres retrospectively entered their data relating to the percutaneous and flexible ureteroscopic management of lower pole kidney stones. Patients included were treated between January 2005 and January 2010. Variables analyzed included case load number, preoperative and postoperative imaging, stone burden, anaesthesia (general vs. spinal), type of lithotripter, access location and size, access dilation type, ureteral access sheath use, visual clarity, operative time, stone-free rate, complication rate, hospital stay, analgesic requirement and follow-up time. Stone-free rate was defined as the absence of residual fragments or the presence of a single fragment <2 mm in size at follow-up imaging. The primary end-point was to test the efficacy and safety of flexible URS for the treatment of lower pole stones; the same descriptive analysis was conducted for the PCNL approach, as it is considered the gold standard for the treatment of lower pole kidney stones. In this setting, no statistical analysis was conducted owing to the different selection criteria of the patients. The secondary end-point consisted of comparing the stone-free rates, operative times and complication rates of flexible URS and PCNL in the subgroup of patients harbouring lower pole kidney stones between 1 and 2 cm in the largest diameter. Results: A total of 246 patients met the criteria for inclusion. There were 117 PCNLs (group 1) and 129 flexible URS (group 2). Ninety-six percent of cases were diagnosed by CT KUB scan. Mean stone burden was 175±160 and 50±62 mm2 for groups 1 and 2, respectively. General anaesthesia was induced in 100% and 80% of groups 1 and 2, respectively. Pneumo-ultrasonic energy was used in 84% of cases in the PCNL group, and holmium laser in 95% of the cases in the flexible URS group. The mean operative time was 76.9±44 and 63±37 minutes for groups 1 and 2, respectively. There were 12 major complications (11%) in group 1 (mainly Grade II complications according to the Clavien classification) and no major complications in group 2. Mean hospital stay was 5.7 and 2.6 days for groups 1 and 2, respectively. Ninety-five percent of group 1 and 52% of group 2 required analgesia for a period longer than 24 hours. The intraoperative stone-free rate after a single treatment was 88.9% for group 1 and 79.1% for group 2. Overall, 6% of group 1 and 14.7% of group 2 required a second-look procedure.
At 3 months, stone-free rates were 90.6% and 92.2% for groups 1 and 2, respectively, as documented by follow-up CT KUB (22%) or a combination of intravenous pyelogram, plain KUB and/or kidney ultrasound (78%). In the subanalysis comparing the 82 vs 65 patients who underwent PCNL and flexible URS, respectively, for lower pole stones between 1 and 2 cm, intraoperative stone-free rates were 88% vs 68% (p=0.03); however, after an auxiliary procedure, which was necessary in 6% of the cases in group 1 and 23% in group 2 (p=0.03), stone-free rates at 3 months were not statistically different (91.5% vs 89.2%; p=0.6). Conversely, the patients undergoing PCNL had a higher risk of complications during the procedure, with 9 cases observed in this group versus 0 in the group of patients treated with URS (p=0.01). Conclusions: These data highlight the value of flexible URS as a very effective and safe option for the treatment of kidney stones; thanks to the latest generation of flexible devices, this technical approach appears to be a valid alternative, in particular for the treatment of lower pole kidney stones smaller than 2 cm. In high-volume centres and in the hands of skilled surgeons, this technique can approach the stone-free rates achievable with PCNL for lower pole stones between 1 and 2 cm, with a very low risk of complications. Furthermore, the results confirm the high success rate and relatively low morbidity of modern PCNL for lower pole stones, with no difference detectable between the prone and supine positions.

Relevance:

30.00%

Publisher:

Abstract:

Objectives: To assess the prevalence of the different HPV genotypes in patients diagnosed with CIN2/3 in the Emilia-Romagna Region, the genotype-specific persistence of HPV and the expression of the viral oncogenes E6/E7 during post-treatment follow-up as risk factors for recurrence/persistence or progression of disease; to verify the applicability of new biomolecular diagnostic tests in cervical cancer screening. Methods: Patients with abnormal screening cytology who underwent excisional treatment (T0) for a diagnosis of CIN2/3 on targeted biopsy were included. At T0 and during follow-up at 6, 12, 18 and 24 months, in addition to the Pap test and colposcopy, HPV DNA detection and genotyping for 28 genotypes were performed. In cases positive for the DNA of the 5 genotypes 16, 18, 31, 33 and/or 45, E6/E7 HPV mRNA testing was performed. Preliminary results: 95.8% of the 168 selected patients were HPV DNA positive at T0. In 60.9% of cases the infections were single (mainly HPV 16 and 31), in 39.1% they were multiple. HPV 16 was the most frequently detected genotype (57%). 94.3% (117/124) of the patients positive for the 5 HPV DNA genotypes were mRNA positive. There was a drop-out of 38/168 patients. At 18 months (95% of patients), the persistence of HPV DNA of any genotype was 46%, that of the 5 HPV DNA genotypes was 39%, with mRNA expression in 21%. Disease recurrence (CIN2+) occurred in 10.8% (14/130) at 18 months. The Pap test was negative in 4/14 cases, the HPV DNA test was positive in all cases, and the mRNA test in 11/12 cases. Conclusions: The HR-HPV DNA test is more sensitive than cytology, while the mRNA test is more specific in identifying recurrence. Definitive data will be available at the end of the planned follow-up.

Relevance:

30.00%

Publisher:

Abstract:

Agri-food supply chains extend beyond national boundaries, partially facilitated by a policy environment that encourages more liberal international trade. Rising concentration within the downstream sector has driven a shift towards “buyer-driven” global value chains (GVCs) that extend internationally, with global sourcing and the emergence of multinational key economic players that compete with increased emphasis on product quality attributes. Agri-food systems are thus increasingly governed by a range of inter-related public and private standards, both of which are becoming de facto mandatory, especially in supply chains for high-value and quality-differentiated agri-food products; these standards tend to strongly affect upstream agricultural practices and firms' internal organization and strategic behaviour, and to shape the organization of the food chain. Notably, increasing attention has been given to the impact of SPS measures on agri-food trade, particularly on developing countries' export performance. Food and agricultural trade is the vital link in the mutual dependency of the global trade system and developing countries; hence, developing countries derive a substantial portion of their income from food and agricultural trade. In Morocco, fruit and vegetables (especially fresh) are the primary agricultural export. Because of its labor intensity, this sector (especially citrus and tomato) is particularly important in terms of income and employment generation, especially for the female laborers hired on the farms and in the packing houses. Hence, the emergence of agricultural and agri-food product safety issues and the subsequent tightening of market requirements have challenged these mutual gains, owing to the lack of technical and financial capacity in most developing countries.