959 results for Laboratory techniques and procedures
Abstract:
The Internet has become an integral part of our nation's critical socio-economic infrastructure. With its heightened use and growing complexity, however, organizations are at greater risk of cybercrime. To aid in the investigation of crimes committed on or via the Internet, a network forensics analysis tool pulls together the needed digital evidence. It provides a platform for performing deep network analysis by capturing, recording and analyzing network events to identify the source of a security attack or other information security incidents. Existing network forensics work has focused mostly on the Internet and fixed networks, but the exponential growth and use of wireless technologies, coupled with their unprecedented characteristics, necessitates the development of new network forensic analysis tools. This dissertation fostered the emergence of a new research field in cellular and ad-hoc network forensics. It was one of the first works to identify this problem and offer fundamental techniques and tools that laid the groundwork for future research. In particular, it introduced novel methods to record network incidents and report logged incidents. For recording incidents, location is considered essential to documenting network incidents. However, in network topology spaces, location cannot be measured due to the absence of a 'distance metric'. Therefore, a novel solution was proposed to label the locations of nodes within network topology spaces and then to authenticate the identity of nodes in ad hoc environments. For reporting logged incidents, a novel technique based on Distributed Hash Tables (DHTs) was adopted. Although the direct use of DHTs for reporting logged incidents would result in uncontrollably recursive traffic, a new mechanism was introduced that overcomes this recursive process. These logging and reporting techniques aided forensics over cellular and ad-hoc networks, which in turn increased the ability to track and trace attacks to their source. These techniques are a starting point for further research and development that would equip future ad hoc networks with forensic components to complement existing security mechanisms.
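To make the DHT-based reporting idea concrete, the sketch below is a minimal, hypothetical illustration rather than the dissertation's actual mechanism: logged incidents are keyed onto a consistent-hashing ring so that any node can determine which peer should hold a given report without flooding the network. The class name, node identifiers and the report_incident helper are invented for this example.

```python
import hashlib
import bisect

def ring_hash(key: str) -> int:
    """Map a key onto a fixed-size identifier ring (160-bit, SHA-1 style)."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class IncidentDHT:
    """Toy DHT: each node owns the arc of the ring ending at its own hash."""
    def __init__(self, node_ids):
        self.ring = sorted((ring_hash(n), n) for n in node_ids)

    def responsible_node(self, incident_key: str) -> str:
        """Successor lookup: first node whose hash >= hash(key), wrapping around."""
        h = ring_hash(incident_key)
        idx = bisect.bisect_left(self.ring, (h, ""))
        return self.ring[idx % len(self.ring)][1]

    def report_incident(self, incident_key: str, record: dict, store: dict) -> str:
        """Store the log record under the node that owns the key's ring position."""
        node = self.responsible_node(incident_key)
        store.setdefault(node, []).append((incident_key, record))
        return node

# Usage: logged incidents are keyed by (attacker id, timestamp) and land on one node.
dht = IncidentDHT(["node-A", "node-B", "node-C", "node-D"])
store = {}
owner = dht.report_incident("attacker42|2009-05-04T12:00", {"type": "probe"}, store)
print(owner, store[owner])
```

A scheme along these lines keeps lookups to a single responsible node per key, which is the property that avoids the recursive report traffic mentioned above.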
Abstract:
Smokeless powder additives are usually detected by their extraction from post-blast residues or unburned powder particles, followed by analysis using chromatographic techniques. This work presents the first comprehensive study of the detection of the volatile and semi-volatile additives of smokeless powders using solid phase microextraction (SPME) as a sampling and pre-concentration technique. Seventy smokeless powders were studied using laboratory-based chromatography techniques and a field-deployable ion mobility spectrometer (IMS). The detection of diphenylamine, ethyl and methyl centralite, 2,4-dinitrotoluene, and diethyl and dibutyl phthalate by IMS to associate the presence of these compounds with smokeless powders is also reported for the first time. A previously reported SPME-IMS analytical approach facilitates rapid sub-nanogram detection of the vapor-phase components of smokeless powders. A mass calibration procedure for the analytical techniques used in this study was developed. Precise and accurate mass delivery of analytes in picoliter volumes was achieved using a drop-on-demand inkjet printing method. Absolute mass detection limits determined using this method for the various analytes of interest ranged between 0.03 and 0.8 ng for the GC-MS and between 0.03 and 2 ng for the IMS. Mass response graphs generated for the different detection techniques help in determining the mass extracted from the headspace of each smokeless powder. The analyte mass present in the vapor phase was sufficient for a SPME fiber to extract most analytes at amounts above the detection limits of both the chromatographic techniques and the ion mobility spectrometer. Analysis of the large number of smokeless powders revealed that diphenylamine was present in the headspace of 96% of the powders. Ethyl centralite was detected in 47% of the powders, and 8% of the powders had methyl centralite available for detection from headspace sampling by SPME. Nitroglycerin was the dominant peak present in the headspace of the double-based powders. 2,4-Dinitrotoluene, another important headspace component, was detected in 44% of the powders. The powders therefore have more than one headspace component, and detection of a combination of these compounds is achievable by SPME-IMS, supporting an association with the presence of smokeless powders.
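As a hedged illustration of how a mass-response graph can be used to back-calculate the mass extracted from a powder's headspace, the sketch below fits a linear calibration to hypothetical drop-on-demand data and inverts it; the masses, response units and the 800-unit example reading are invented and are not the study's calibration.

```python
import numpy as np

# Hypothetical calibration data: analyte mass delivered by drop-on-demand
# printing (ng) vs. instrument response (arbitrary peak-area units).
mass_ng  = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])
response = np.array([110., 230., 470., 1180., 2350., 4700.])

# Fit a linear mass-response curve: response = slope * mass + intercept.
slope, intercept = np.polyfit(mass_ng, response, deg=1)

def mass_from_response(r: float) -> float:
    """Invert the calibration to estimate the mass extracted from a headspace sample."""
    return (r - intercept) / slope

# Example: a fiber extraction giving a peak area of 800 units.
print(f"estimated extracted mass: {mass_from_response(800.0):.2f} ng")
```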
Abstract:
Bioscience subjects require a significant amount of training in laboratory techniques to produce highly skilled science graduates. Many techniques currently used in diagnostic, research and industrial laboratories require expensive single-user equipment; examples include next generation sequencing, quantitative PCR, mass spectrometry and other analytical techniques. The cost of the machines and reagents, together with limited access, frequently precludes undergraduate students from using such cutting-edge techniques. In addition to cost and availability, the time taken for analytical runs on equipment such as High Performance Liquid Chromatography (HPLC) does not necessarily fit with the limitations of timetabling. Understanding the theory underlying these techniques without the accompanying practical classes can be unexciting for students. One alternative to wet laboratory provision is to use virtual simulations of such practicals, which enable students to see the machines and interact with them to generate data. The Faculty of Science and Technology at the University of Westminster has provided all second- and third-year undergraduate students with iPads so that these students all have access to a mobile device to assist with learning. We have purchased licences from Labster to access a range of virtual laboratory simulations. These virtual laboratories are fully equipped and require student responses to multiple-answer questions in order to progress through the experiment. In a pilot study on the feasibility of the Labster virtual laboratory simulations with the iPad devices, second-year Biological Science students (n=36) worked through the Labster HPLC simulation on iPads. The virtual HPLC simulation enabled students to optimise the conditions for the separation of drugs. Answers to multiple-choice questions were necessary to progress through the simulation; these focussed on the underlying principles of the HPLC technique. Following the virtual laboratory simulation, students used a real HPLC in the analytical suite to separate aspirin, caffeine and paracetamol. In a survey, 100% of students (n=36) in this cohort agreed that the Labster virtual simulation had helped them to understand HPLC. In free-text responses, one student commented that "The terminology is very clear and I enjoyed using Labster very much". One member of staff commented that "there was a very good knowledge interaction with the virtual practical".
Abstract:
This article contributes to understanding the conditions of social-ecological change by focusing on the agency of individuals in the pathways to institutionalization. Drawing on the case of the Intergovernmental Platform on Biodiversity and Ecosystem Services (IPBES), it addresses institutional entrepreneurship in an emerging environmental science-policy institution (ESPI) at a global scale. Drawing on ethnographic observations, semistructured interviews, and document analysis, we propose a detailed chronology of the genesis of the IPBES before focusing on the final phase of the negotiations toward the creation of the institution. We analyze the techniques and skills deployed by the chairman during the conference to handle the tensions at play, both to prevent participants from deserting the negotiation arena and to prevent a lack of inclusiveness from discrediting the future institution. We stress that creating a new global environmental institution requires the situated exercise of an art of “having everybody on board” through techniques of inclusiveness that we characterize. Our results emphasize the major challenge of handling the fragmentation and plasticity of the groups of interest involved in the institutionalization process, thus adding to the theory of transformative agency of institutional entrepreneurs. Although inclusiveness might remain partly unattainable, such techniques of inclusiveness appear to be a major condition of the legitimacy and success of the institutionalization of a new global ESPI. Our results also add to the literature on boundary making within ESPIs by emphasizing the multiplicity and plasticity of the groups actually at stake.
Abstract:
Institutions are widely regarded as important, even ultimate, drivers of economic growth and performance. A recent mainstream of institutional economics has concentrated on the effect of persistent, often imprecisely measured institutions and on cataclysmic events as agents of noteworthy institutional change. As a consequence, institutional change without large-scale shocks has received little attention. In this dissertation I apply a complementary, quantitative-descriptive approach that relies on measures of actually enforced institutions to study institutional persistence and change over a long time period that is undisturbed by the typically studied cataclysmic events. By placing institutional change at the center of attention, one can recognize different speeds of institutional innovation and the continuous coexistence of institutional persistence and change. Specifically, I combine text mining procedures, network analysis techniques and statistical approaches to study persistence and change in England’s common law over the Industrial Revolution (1700-1865). Based on the doctrine of precedent - a peculiarity of common law systems - I construct and analyze what appears to be the first citation network reflecting lawmaking in England. Most strikingly, I find large-scale change in the making of English common law around the turn of the 19th century - a period free from the typically studied cataclysmic events. Within a few decades, a legal innovation process with low depreciation rates (1 to 2 percent) and strong past-persistence transitioned to a present-focused innovation process with significantly higher depreciation rates (4 to 6 percent) and weak past-persistence. Comparison with U.S. Supreme Court data reveals a similar U.S. transition towards the end of the 19th century. The English and U.S. transitions appear to have unfolded in a very specific manner: a new body of law arose during the transitions and developed in a self-referential manner while the existing body of law lost influence but remained prominent. Additional findings suggest that Parliament doubled its influence on the making of case law within the first decades after the Glorious Revolution and that England’s legal rules manifested a high degree of long-term persistence. The latter allows for the possibility that the often-noted persistence of institutional outcomes derives from the actual persistence of institutions.
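As a purely illustrative sketch of the kind of citation-network construction and depreciation estimate described above, the following builds a small precedent network and derives a crude annual depreciation rate from citation ages. The case names, years and the exponential-decay assumption are invented for this example and are not taken from the dissertation's corpus or method.

```python
import networkx as nx
import numpy as np

# Hypothetical precedent citations: (citing_case, year_citing, cited_case, year_cited).
citations = [
    ("C10", 1800, "C07", 1795), ("C10", 1800, "C05", 1780),
    ("C11", 1810, "C10", 1800), ("C11", 1810, "C04", 1770),
    ("C12", 1820, "C11", 1810), ("C12", 1820, "C10", 1800),
    ("C13", 1830, "C12", 1820), ("C13", 1830, "C06", 1790),
]

# Build the citation network: edges point from the citing case to the cited precedent.
G = nx.DiGraph()
for citing, y_citing, cited, y_cited in citations:
    G.add_edge(citing, cited, age=y_citing - y_cited)

# Age of each precedent at the moment it was cited.
ages = np.array([d["age"] for _, _, d in G.edges(data=True)], dtype=float)

# Under an exponential "forgetting" model, citation ages ~ Exp(delta),
# so the maximum-likelihood depreciation rate is simply 1 / mean(age).
delta = 1.0 / ages.mean()
print(f"network has {G.number_of_nodes()} cases, {G.number_of_edges()} citations")
print(f"implied annual depreciation rate: {delta:.3f} ({delta:.1%} per year)")
```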
Abstract:
The objective of the thesis is to develop a project management procedure for chilled beam projects. It is recognized within the organization that project management techniques could help in large and complex projects. Information sharing has been challenging in projects, so improving information sharing is one key topic of the thesis. Academic research and literature are used to find suitable project management theories and methods. The main theories relate to the phases of a project and to project management tools. Practical knowledge of project management is collected from two project-business-oriented companies. Project management tools are chosen and modified to fulfill the needs of the beam projects. The result of the thesis is a proposed project management procedure, which includes the phases of chilled beam projects and project milestones. The project management procedure helps to recognize the most critical phases of a project, and the tools help to manage project information. The procedure increases knowledge of project management techniques and tools, and it also forms a coherent project management working method within the chilled beam project group.
Abstract:
Congenital heart disease (CHD) is the most common birth defect, causing substantial morbidity and mortality. Treatment of CHD requires surgical correction in a significant percentage of cases, which exposes patients to cardiac and end-organ injury. Cardiac surgical procedures often require the utilisation of cardiopulmonary bypass (CPB), a system that replaces heart and lung function by diverting circulation into an external circuit. The use of CPB can initiate potent inflammatory responses; in addition, a proportion of procedures require a period of aortic cross-clamping during which the heart is rendered ischaemic and exposed to injury. High O2 concentrations are used during cardiac procedures, and when circulation is re-established to a heart that has adjusted metabolically to ischaemia, further injury is caused in a process known as ischaemia-reperfusion injury (IRI). Several strategies are in place to protect the heart during surgery; however, injury still occurs, with detrimental short- and long-term effects on patients. Remote ischaemic preconditioning (RIPC) is a technique proposed as a potential cardioprotective measure. It consists of exposing a remote tissue bed to brief episodes of ischaemia prior to surgery in order to activate protective pathways that would act during CPB, ischaemia and reperfusion. This study aimed to assess RIPC in paediatric patients requiring CHD surgical correction with a translational approach, integrating clinical outcomes, marker analysis, cardiac function parameters and molecular mechanisms within the cardiac tissue. A prospective, single-blinded, randomized, controlled trial was conducted, applying a RIPC protocol to randomised patients through episodes of limb ischaemia on the day before surgery, repeated immediately before the surgery started, after anaesthesia induction. Blood samples were obtained from venous lines before surgery and at three post-operative time points; additional pre- and post-bypass blood samples were obtained from the right atrium. Myocardial tissue was resected during the ischaemic period of surgery. Echocardiographic images were obtained after anaesthetic induction, before the surgery started, and on the day after surgery; images were stored for later offline analysis. PICU surveillance data were collected, including ventilation parameters, inotrope use, standard laboratory analysis and six-hourly blood gas analysis. Pre- and post-operative quantitation of markers in blood specimens included cardiac troponin I (cTnI) and B-type natriuretic peptide (BNP); inflammatory mediators including the interleukins IL-6, IL-8 and IL-10, tumour necrosis factor (TNF-α), and the adhesion molecules ICAM-1 and VCAM-1; the renal marker cystatin C; and the cardiovascular markers asymmetric dimethylarginine (ADMA) and symmetric dimethylarginine (SDMA). Nitric oxide (NO) metabolites and cyclic guanosine monophosphate (cGMP) were measured before and after bypass. Myocardial tissue was processed at baseline and after four hours of incubation at hyperoxic concentrations in order to mimic surgical conditions. Expression of genes involved in IRI and RIPC pathways was analysed, including heat shock proteins (HSPs), toll-like receptors (TLRs), and the transcription factors nuclear factor κB (NF-κB) and hypoxia-inducible factor 1 (HIF-1). The participation of hydrogen sulfide enzymatic genes and of apelin and its receptor was explored. There was no significant difference according to group allocation in any of the echocardiographic parameters. There was a tendency towards higher cTnI values and inotropic scores in control patients post-operatively; however, this was not statistically significant. BNP presented no significant difference according to group allocation. Inflammatory parameters tended to be higher in the control group; however, only TNF-α was significantly higher. There was no difference in levels of cystatin C, NO metabolites, cGMP, ADMA or SDMA. RIPC patients required a shorter PICU stay; all other clinical and laboratory analyses showed no difference related to the intervention. Gene expression analysis revealed interesting patterns before and after incubation. HSP-60 presented lower expression at baseline in tissue from RIPC patients; no other differences were found. This study provided valuable descriptive information on previously known and newly explored parameters in the study population. Demographic characteristics and the presence of cyanosis before surgery influenced patterns of activity in several parameters, and numerous indicators were linked to the degree of injury suffered by the myocardium. RIPC did not reduce markers of cardiac injury or improve echocardiographic parameters, and it did not have an effect on end-organ function; some effects were seen in inflammatory responses and gene expression analysis. Nevertheless, an important clinical outcome indicator, PICU length of stay, was reduced, suggesting benefit from the intervention. Larger studies with more statistical power could determine whether the tendency towards lower injury and inflammatory markers linked to RIPC is real. The present results mostly support the findings of larger multicentre trials, which have reported no cardiac benefit from RIPC in paediatric cardiac surgery.
Abstract:
This document describes the experience of academic cooperation between professionals in the field of library science from West Chester University (WCU) and the National University (UNA) of Costa Rica. The event took place at West Chester University during the week of May 4th to May 8th, 2009. The objectives of this event revolved around the exchange of ideas and interests in the academic and cultural relations between the two universities. In addition, it presented several services and procedures in the handling of information and highlighted the importance of promoting the exchange of students between the two institutions. Finally, this article highlights the schedule of activities to integrate an international and intercultural perspective into various areas related to the teaching-learning process, the contribution of university libraries to student success, and techniques of information dissemination.
Recommendation for Oxygen Measurements from Argo Floats: Implementation of In-Air-Measurement Routine to Assure Highest Long-term Accuracy
Abstract:
As Argo has entered its second decade and chemical/biological sensor technology is improving constantly, the marine biogeochemistry community is starting to embrace the successful Argo float program. An augmentation of the global float observatory, however, has to follow rather stringent constraints regarding sensor characteristics as well as data processing and quality control routines. Owing to the fairly advanced state of oxygen sensor technology and the high scientific value of oceanic oxygen measurements (Gruber et al., 2010), an expansion of the Argo core mission to routine oxygen measurements is perhaps the most mature and promising candidate (Freeland et al., 2010). In this context, SCOR Working Group 142 “Quality Control Procedures for Oxygen and Other Biogeochemical Sensors on Floats and Gliders” (www.scor-int.org/SCOR_WGs_WG142.htm) set out in 2014 to assess the current status of biogeochemical sensor technology with particular emphasis on float-readiness, to develop pre- and post-deployment quality control metrics and procedures for oxygen sensors, and to disseminate these procedures widely to ensure rapid adoption in the community.
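The in-air measurement routine referenced in the title is commonly used to correct oxygen-optode drift by comparing the sensor's in-air reading with the oxygen partial pressure expected from atmospheric pressure and humidity. The sketch below is a simplified, hedged illustration of that gain-style correction; the function names, the Magnus-type vapour-pressure approximation and the example surfacing values are assumptions for this example, not the working group's prescribed procedure.

```python
import math

O2_MOLE_FRACTION = 0.20946  # dry-air mole fraction of oxygen

def water_vapour_pressure_hpa(temp_c: float, rel_humidity: float) -> float:
    """Magnus-type approximation of water vapour pressure (hPa)."""
    e_sat = 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))
    return rel_humidity * e_sat

def in_air_gain(p_atm_hpa: float, temp_c: float, rel_humidity: float,
                po2_optode_hpa: float) -> float:
    """Gain factor = expected atmospheric pO2 / optode-reported in-air pO2."""
    po2_expected = (p_atm_hpa - water_vapour_pressure_hpa(temp_c, rel_humidity)) * O2_MOLE_FRACTION
    return po2_expected / po2_optode_hpa

# Hypothetical surfacing: 1013 hPa, 20 degC, 80% RH, optode reads 201.0 hPa in air.
gain = in_air_gain(1013.0, 20.0, 0.80, 201.0)
print(f"gain factor: {gain:.4f}")

# The gain is then applied multiplicatively to the float's dissolved O2 profile.
o2_profile_umol_kg = [245.0, 230.5, 180.2]
print([round(gain * o2, 1) for o2 in o2_profile_umol_kg])
```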
Abstract:
Vesiculoviruses (VSV) are zoonotic viruses that cause vesicular stomatitis disease in cattle, horses and pigs, as well as sporadic human cases of acute febrile illness. Therefore, diagnosis of VSV infections by reliable laboratory techniques is important to allow proper case management and the implementation of strategies for the containment of virus spread. We describe here a sensitive and reproducible real-time reverse transcriptase polymerase chain reaction (RT-PCR) assay for the detection and quantification of VSV. The assay was evaluated with arthropods and serum samples obtained from horses, cattle and patients with acute febrile disease. The real-time RT-PCR amplified the Piry, Carajas, Alagoas and Indiana vesiculoviruses at a melting temperature of 81.02 ± 0.8°C, and the sensitivity of the assay was estimated at 10 RNA copies/mL for the Piry Vesiculovirus. The viral genome was detected in samples from horses and cattle, but not in human sera or arthropods. Thus, this assay allows a preliminary differential diagnosis of VSV infections.
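Quantification in real-time RT-PCR is typically done against a standard curve relating Ct values to known template concentrations. The sketch below shows that generic back-calculation with a hypothetical slope and intercept; these numbers are illustrative assumptions, not the published assay's calibration.

```python
# Hypothetical standard curve (not the published assay's calibration):
# Ct = slope * log10(copies/mL) + intercept, slope about -3.32 at ~100% PCR efficiency.
SLOPE = -3.32
INTERCEPT = 38.5

def copies_per_ml(ct: float) -> float:
    """Back-calculate viral RNA copies/mL from a measured Ct value."""
    return 10 ** ((ct - INTERCEPT) / SLOPE)

def efficiency(slope: float) -> float:
    """Amplification efficiency implied by the standard-curve slope."""
    return 10 ** (-1.0 / slope) - 1.0

for ct in (22.0, 28.0, 35.2):
    print(f"Ct {ct:4.1f} -> {copies_per_ml(ct):12.0f} copies/mL")
print(f"implied efficiency: {efficiency(SLOPE):.1%}")
```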
Abstract:
The photochemistry of the pesticides triadimenol and triadimefon was studied on cellulose and beta-cyclodextrin (beta-CD) under controlled and natural conditions, using diffuse reflectance techniques and chromatographic analysis. The photochemistry of triadimenol occurs from the chlorophenoxyl moiety, while the photodegradation of triadimefon also involves the carbonyl group. The formation of the 4-chlorophenoxyl radical is one of the major reaction pathways for both pesticides and leads to 4-chlorophenol. Triadimenol also undergoes photooxidation and dechlorination, leading to triadimefon and dechlorinated triadimenol, respectively. The other main reaction process of triadimefon involves alpha-cleavage from the carbonyl group, leading to decarbonylated compounds. Triadimenol undergoes photodegradation at 254 nm but was found to be stable at 313 nm, while triadimefon degrades under both conditions. Both pesticides undergo photochemical decomposition under solar radiation, with the initial degradation rate per unit area of triadimefon being one order of magnitude higher than that observed for triadimenol on both supports. The degradation rates of the pesticides were somewhat lower in beta-CD than on cellulose. The photoproduct distribution of triadimenol and triadimefon is similar for the different irradiation conditions, indicating an intramolecular energy transfer from the chlorophenoxyl moiety to the carbonyl group in the latter pesticide.
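For readers unfamiliar with how photodegradation rates of this kind are usually extracted, the sketch below fits a first-order decay to hypothetical remaining-fraction data; the time points, fractions and the first-order assumption are illustrative only and are not taken from this study.

```python
import numpy as np

# Hypothetical surface-photolysis data: remaining fraction of pesticide on the
# support vs. irradiation time, assuming first-order decay C(t) = C0 * exp(-k t),
# so ln(C/C0) is linear in t.
time_h   = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
fraction = np.array([1.00, 0.78, 0.61, 0.37, 0.14])

k = -np.polyfit(time_h, np.log(fraction), 1)[0]   # first-order rate constant (1/h)
half_life = np.log(2) / k

print(f"rate constant k = {k:.3f} 1/h, half-life = {half_life:.2f} h")
```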
Development of new scenario decomposition techniques for linear and nonlinear stochastic programming
Abstract:
A classical approach for dealing with two-stage and multistage optimization problems under uncertainty is to use scenario analysis. To do so, the uncertainty in some of the problem data is modeled by random vectors with finite, stage-specific supports. Each of these realizations represents a scenario. By using scenarios, it is possible to study simpler versions (subproblems) of the original problem. As a scenario decomposition technique, the progressive hedging algorithm is one of the most popular methods for solving multistage stochastic programming problems. Despite the complete decomposition by scenario, the efficiency of the progressive hedging method is very sensitive to certain practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective function. For the choice of the penalty parameter, we examine some of the popular methods and propose a new adaptive strategy that aims to better follow the progress of the algorithm. Numerical experiments on instances of multistage stochastic linear problems suggest that most existing techniques can exhibit premature convergence to a suboptimal solution or converge to the optimal solution but at a very slow rate. In contrast, the new strategy appears robust and efficient: it converged to optimality in all our experiments and was the fastest in most cases. Regarding the handling of the quadratic term, we review existing techniques and propose the idea of replacing the quadratic term with a linear one. Although our method has yet to be tested, we have the intuition that it will reduce some of the numerical and theoretical difficulties of the progressive hedging method.
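As a minimal sketch of the progressive hedging iteration on a toy two-stage problem (not the thesis's implementation: the newsvendor-style cost, the fixed penalty rho and the solver choice are assumptions, whereas the thesis studies adaptive penalty strategies and the quadratic term itself), the code below decomposes by scenario, penalizes disagreement with the scenario average, and updates multipliers until the first-stage copies reach consensus.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy two-stage problem: choose capacity x >= 0 at unit cost c; each scenario s
# has demand d_s with probability p_s and a shortage (recourse) cost q per unit.
c, q = 1.0, 3.0
scenarios = [(0.3, 40.0), (0.5, 60.0), (0.2, 90.0)]   # (p_s, d_s)

def scenario_cost(x, d):
    return c * x + q * max(d - x, 0.0)

def progressive_hedging(rho=1.0, iters=300, tol=1e-3):
    # Initialize per-scenario first-stage copies at each scenario's own optimum
    # (x_s = d_s here, since the shortage cost q exceeds the capacity cost c).
    x = np.array([d for _, d in scenarios])
    x_bar = sum(p * xs for (p, _), xs in zip(scenarios, x))
    w = rho * (x - x_bar)
    for _ in range(iters):
        # Solve each scenario subproblem with the augmented Lagrangian terms.
        for s, (p, d) in enumerate(scenarios):
            obj = lambda z, d=d, ws=w[s], xb=x_bar: (scenario_cost(z, d) + ws * z
                                                     + 0.5 * rho * (z - xb) ** 2)
            x[s] = minimize_scalar(obj, bounds=(0.0, 200.0), method="bounded").x
        x_bar = sum(p * xs for (p, _), xs in zip(scenarios, x))
        w = w + rho * (x - x_bar)               # multiplier update
        if np.max(np.abs(x - x_bar)) < tol:     # non-anticipativity reached
            break
    return x_bar

print(f"implementable first-stage decision: x = {progressive_hedging():.2f}")
```

The quadratic 0.5 * rho * (z - xb)**2 term in each subproblem is exactly the augmented Lagrangian term whose handling, and whose possible replacement by a linear term, is discussed in the abstract.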
Abstract:
A NOx reduction efficiency higher than 95% with NH3 slip less than 30 ppm is desirable for heavy-duty diesel (HDD) engines using selective catalytic reduction (SCR) systems to meet the US EPA 2010 NOx standard and the 2014-2018 fuel consumption regulation. The SCR performance needs to be improved through experimental and modeling studies. In this research, a high-fidelity global kinetic 1-dimensional 2-site SCR model with mass transfer, heat transfer and global reaction mechanisms was developed for a Cu-zeolite catalyst. The model simulates the SCR performance for engine exhaust conditions with NH3 maldistribution and aging effects, and the details are presented. SCR experimental data were collected for model development, calibration and validation from a reactor at Oak Ridge National Laboratory (ORNL) and an engine experimental setup at Michigan Technological University (MTU) with a Cummins 2010 ISB engine. The model was calibrated separately to the reactor and engine data. The experimental setup, the test procedures, including a surrogate HD-FTP cycle developed for transient studies, and the model calibration process are described. Differences in the model parameters were determined between the calibrations developed from the reactor and the engine data. It was determined that the SCR inlet NH3 maldistribution is one of the reasons causing the differences. The model calibrated to the engine data served as a basis for developing a reduced-order SCR estimator model. The effect of the SCR inlet NO2/NOx ratio on the SCR performance was studied through simulations using the surrogate HD-FTP cycle. The cumulative outlet NOx and the overall NOx conversion efficiency of the cycle are highest with a NO2/NOx ratio of 0.5. The outlet NH3 is lowest for NO2/NOx ratios greater than 0.6. A combined engine experimental and simulation study was performed to quantify the NH3 maldistribution at the SCR inlet and its effects on the SCR performance and kinetics. The uniformity index (UI) of the SCR inlet NH3 and NH3/NOx ratio (ANR) was determined to be below 0.8 for the production system. The UI was improved to 0.9 after installation of a swirl mixer into the SCR inlet cone. A multi-channel model was developed to simulate the maldistribution effects. The results showed that reducing the UI of the inlet ANR from 1.0 to 0.7 caused a 5-10% decrease in NOx reduction efficiency and a 10-20 ppm increase in the NH3 slip. The simulations of the steady-state engine data with the multi-channel model showed that the NH3 maldistribution is a factor causing the differences in the calibrations developed from the engine and the reactor data. Reactor experiments were performed at ORNL using a Spaci-IR technique to study the thermal aging effects. The test results showed that the thermal aging (at 800°C for 16 hours) caused a 30% reduction in the NH3 stored on the catalyst under NH3 saturation conditions and different axial concentration profiles under SCR reaction conditions. The kinetics analysis showed that the thermal aging caused a reduction in total NH3 storage capacity (94.6 compared to 138 gmol/m3), different NH3 adsorption/desorption properties, and a decrease in the activation energy and pre-exponential factor for the NH3 oxidation, standard SCR and fast SCR reactions. Both the reduction in storage capability and the change in kinetics of the major reactions contributed to the changes in the axial storage and concentration profiles observed in the experiments.
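To illustrate in a simplified way how Arrhenius parameters translate into NOx conversion, the sketch below evaluates a first-order standard SCR reaction in an isothermal channel for hypothetical fresh and aged parameter sets. The rate constants, residence time and the first-order lumping are illustrative assumptions chosen so the aged case shows lower activity; they are not the calibrated two-site Cu-zeolite model from this study.

```python
import math

R = 8.314  # J/(mol*K)

def arrhenius(A, Ea, T):
    """First-order rate constant k = A * exp(-Ea / (R T))."""
    return A * math.exp(-Ea / (R * T))

def nox_conversion(A, Ea, T, tau):
    """Conversion for a first-order standard SCR reaction in an isothermal
    channel, X = 1 - exp(-k * tau), with tau the residence time (s)."""
    return 1.0 - math.exp(-arrhenius(A, Ea, T) * tau)

# Hypothetical fresh vs. hydrothermally aged parameters (illustrative only).
fresh = dict(A=5.0e6, Ea=60_000.0)   # 1/s, J/mol
aged  = dict(A=5.0e5, Ea=55_000.0)

tau = 0.05  # s, a typical monolith residence time at moderate space velocity
for T in (473.15, 523.15, 573.15):   # 200, 250, 300 degC
    xf = nox_conversion(fresh["A"], fresh["Ea"], T, tau)
    xa = nox_conversion(aged["A"], aged["Ea"], T, tau)
    print(f"T = {T - 273.15:5.1f} C  fresh X = {xf:6.1%}  aged X = {xa:6.1%}")
```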
Abstract:
Buildings and other infrastructure located in the coastal regions of the US have a higher level of wind vulnerability. Reducing the increasing property losses and casualties associated with severe windstorms has been the central research focus of the wind engineering community. The present wind engineering toolbox consists of building codes and standards, laboratory experiments, and field measurements. The American Society of Civil Engineers (ASCE) 7 standard provides wind loads only for buildings with common shapes. For complex cases it refers to physical modeling. Although this option can be economically viable for large projects, it is not cost-effective for low-rise residential houses. To circumvent these limitations, a numerical approach based on the techniques of Computational Fluid Dynamics (CFD) has been developed. Recent advances in computing technology and significant developments in turbulence modeling are making numerical evaluation of wind effects a more affordable approach. The present study targeted those cases that are not addressed by the standards. These include wind loads on complex roofs for low-rise buildings, the aerodynamics of tall buildings, and the effects of complex surrounding buildings. Among all the turbulence models investigated, the large eddy simulation (LES) model performed best in predicting wind loads. The application of a spatially evolving, time-dependent wind velocity field with the relevant turbulence structures at the inlet boundaries was found to be essential. All the results were compared and validated against experimental data. The study also revealed CFD's unique flow visualization and aerodynamic data generation capabilities, along with a better understanding of the complex three-dimensional aerodynamics of wind-structure interactions. With proper modeling that realistically represents the actual turbulent atmospheric boundary layer flow, CFD can offer an economical alternative to the existing wind engineering tools. CFD's easy accessibility is expected to transform the practice of structural design for wind, resulting in more wind-resilient and sustainable systems by encouraging optimal aerodynamic and sustainable structural/building design. Thus, this method will help ensure public safety and reduce economic losses due to wind perils.
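As a loose illustration of the atmospheric-boundary-layer inflow that LES inlet boundaries build upon, the sketch below samples a log-law mean profile with a crude turbulence-intensity perturbation. This is not the study's spatially evolving turbulent inflow generator: the roughness length, friction velocity and the uncorrelated-fluctuation shortcut are invented assumptions, and real LES inlets require spatially and temporally correlated fluctuations.

```python
import numpy as np

KAPPA = 0.41  # von Karman constant

def log_law_profile(z, u_star, z0):
    """Mean ABL streamwise velocity from the logarithmic law of the wall."""
    return (u_star / KAPPA) * np.log(z / z0)

def crude_inflow_sample(z, u_star, z0, ti=0.15, rng=None):
    """Mean profile plus uncorrelated Gaussian fluctuations scaled by a target
    turbulence intensity ti. Only a toy stand-in for synthetic-eddy or
    recycling methods used in practice."""
    rng = np.random.default_rng(0) if rng is None else rng
    u_mean = log_law_profile(z, u_star, z0)
    return u_mean + ti * u_mean * rng.standard_normal(z.shape)

z = np.linspace(2.0, 100.0, 6)        # heights above ground (m)
print(np.round(log_law_profile(z, u_star=0.5, z0=0.03), 2))
print(np.round(crude_inflow_sample(z, u_star=0.5, z0=0.03), 2))
```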