953 results for safety analysis


Relevance: 70.00%

Abstract:

Assessing the safety of existing timber structures is of paramount importance for making reliable decisions on repair actions and their extent. The results obtained through semi-probabilistic methods are unrealistic, as the partial safety factors given in codes are calibrated for the uncertainty present in new structures. To overcome these limitations, and also to include the effects of decay in the safety analysis, probabilistic methods based on Monte Carlo simulation are applied here to assess the safety of existing timber structures. In particular, the impact of decay on structural safety is analyzed and discussed using a simple structural model, similar to that used for current semi-probabilistic analysis.
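As a minimal sketch of the kind of Monte Carlo safety analysis described, the snippet below estimates a failure probability for a decayed member. The lognormal resistance, normal load effect, and residual-section factor are illustrative assumptions, not the authors' calibrated model.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(42)
    N = 1_000_000  # Monte Carlo samples

    # Illustrative assumptions (not the paper's calibrated values):
    R = rng.lognormal(mean=np.log(30.0), sigma=0.25, size=N)  # resistance, MPa
    S = rng.normal(loc=15.0, scale=3.0, size=N)               # load effect, MPa
    decay = 0.8  # assumed fraction of the effective cross-section remaining

    # Failure whenever the decayed resistance drops below the load effect.
    pf = np.mean(decay * R < S)
    beta = -norm.ppf(pf)  # corresponding reliability index
    print(f"pf = {pf:.2e}, beta = {beta:.2f}")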

Relevance: 70.00%

Abstract:

BACKGROUND: Stents are an alternative treatment to carotid endarterectomy for symptomatic carotid stenosis, but previous trials have not established equivalent safety and efficacy. We compared the safety of carotid artery stenting with that of carotid endarterectomy. METHODS: The International Carotid Stenting Study (ICSS) is a multicentre, international, randomised controlled trial with blinded adjudication of outcomes. Patients with recently symptomatic carotid artery stenosis were randomly assigned in a 1:1 ratio to receive carotid artery stenting or carotid endarterectomy. Randomisation was by telephone call or fax to a central computerised service and was stratified by centre with minimisation for sex, age, contralateral occlusion, and side of the randomised artery. Patients and investigators were not masked to treatment assignment. Patients were followed up by independent clinicians not directly involved in delivering the randomised treatment. The primary outcome measure of the trial is the 3-year rate of fatal or disabling stroke in any territory, which has not been analysed yet. The main outcome measure for the interim safety analysis was the 120-day rate of stroke, death, or procedural myocardial infarction. Analysis was by intention to treat (ITT). This study is registered, number ISRCTN25337470. FINDINGS: The trial enrolled 1713 patients (stenting group, n=855; endarterectomy group, n=858). Two patients in the stenting group and one in the endarterectomy group withdrew immediately after randomisation, and were not included in the ITT analysis. Between randomisation and 120 days, there were 34 (Kaplan-Meier estimate 4.0%) events of disabling stroke or death in the stenting group compared with 27 (3.2%) events in the endarterectomy group (hazard ratio [HR] 1.28, 95% CI 0.77-2.11). The incidence of stroke, death, or procedural myocardial infarction was 8.5% in the stenting group compared with 5.2% in the endarterectomy group (72 vs 44 events; HR 1.69, 1.16-2.45, p=0.006). Risks of any stroke (65 vs 35 events; HR 1.92, 1.27-2.89) and all-cause death (19 vs seven events; HR 2.76, 1.16-6.56) were higher in the stenting group than in the endarterectomy group. Three procedural myocardial infarctions were recorded in the stenting group, all of which were fatal, compared with four, all non-fatal, in the endarterectomy group. There was one event of cranial nerve palsy in the stenting group compared with 45 in the endarterectomy group. There were also fewer haematomas of any severity in the stenting group than in the endarterectomy group (31 vs 50 events; p=0.0197). INTERPRETATION: Completion of long-term follow-up is needed to establish the efficacy of carotid artery stenting compared with endarterectomy. In the meantime, carotid endarterectomy should remain the treatment of choice for patients suitable for surgery. FUNDING: Medical Research Council, the Stroke Association, Sanofi-Synthélabo, European Union.

Relevance: 70.00%

Abstract:

BACKGROUND: The ongoing Ebola outbreak led to accelerated efforts to test vaccine candidates. On the basis of a request by WHO, we aimed to assess the safety and immunogenicity of the monovalent, recombinant, chimpanzee adenovirus type-3 vector-based Ebola Zaire vaccine (ChAd3-EBO-Z). METHODS: We did this randomised, double-blind, placebo-controlled, dose-finding, phase 1/2a trial at the Centre Hospitalier Universitaire Vaudois, Lausanne, Switzerland. Participants (aged 18-65 years) were randomly assigned (2:2:1), via two computer-generated randomisation lists for individuals potentially deployed in endemic areas and those not deployed, to receive a single intramuscular dose of high-dose vaccine (5 × 10¹⁰ viral particles), low-dose vaccine (2·5 × 10¹⁰ viral particles), or placebo. Deployed participants were allocated to only the vaccine groups. Group allocation was concealed from non-deployed participants, investigators, and outcome assessors. The safety evaluation was not masked for potentially deployed participants, who were therefore not included in the safety analysis for comparison between the vaccine doses and placebo, but were pooled with the non-deployed group to compare immunogenicity. The main objectives were safety and immunogenicity of ChAd3-EBO-Z. We did analysis by intention to treat. This trial is registered with ClinicalTrials.gov, number NCT02289027. FINDINGS: Between Oct 24, 2014, and June 22, 2015, we randomly assigned 120 participants, of whom 18 (15%) were potentially deployed and 102 (85%) were non-deployed, to receive high-dose vaccine (n=49), low-dose vaccine (n=51), or placebo (n=20). Participants were followed up for 6 months. No vaccine-related serious adverse events were reported. We recorded local adverse events in 30 (75%) of 40 participants in the high-dose group, 33 (79%) of 42 participants in the low-dose group, and five (25%) of 20 participants in the placebo group. Fatigue or malaise was the most common systemic adverse event, reported in 25 (62%) participants in the high-dose group, 25 (60%) participants in the low-dose group, and five (25%) participants in the placebo group, followed by headache, reported in 23 (57%), 25 (60%), and three (15%) participants, respectively. Fever occurred 24 h after injection in 12 (30%) participants in the high-dose group and 11 (26%) participants in the low-dose group versus one (5%) participant in the placebo group. Geometric mean concentrations of IgG antibodies against Ebola glycoprotein peaked on day 28 at 51 μg/mL (95% CI 41·1-63·3) in the high-dose group, 44·9 μg/mL (25·8-56·3) in the low-dose group, and 5·2 μg/mL (3·5-7·6) in the placebo group, with respective response rates of 96% (95% CI 85·7-99·5), 96% (86·5-99·5), and 5% (0·1-24·9). Geometric mean concentrations decreased by day 180 to 25·5 μg/mL (95% CI 20·6-31·5) in the high-dose group, 22·1 μg/mL (19·3-28·6) in the low-dose group, and 3·2 μg/mL (2·4-4·9) in the placebo group. 28 (57%) participants given high-dose vaccine and 31 (61%) participants given low-dose vaccine developed glycoprotein-specific CD4 cell responses, and 33 (67%) and 35 (69%), respectively, developed CD8 responses. INTERPRETATION: ChAd3-EBO-Z was safe and well tolerated, although mild to moderate systemic adverse events were common. A single dose was immunogenic in almost all vaccine recipients. Antibody responses persisted at 6 months, with no significant difference between doses for safety or immunogenicity outcomes. This acceptable safety profile provides a reliable basis to proceed with phase 2 and phase 3 efficacy trials in Africa. FUNDING: Swiss State Secretariat for Education, Research and Innovation (SERI), through the EU Horizon 2020 Research and Innovation Programme.

Relevance: 70.00%

Abstract:

Nowadays, computer-based systems tend to become more complex and control increasingly critical functions affecting different areas of human activity. Failures of such systems might result in loss of human lives as well as significant damage to the environment. Therefore, their safety needs to be ensured. However, the development of safety-critical systems is not a trivial exercise. Hence, to preclude design faults and guarantee the desired behaviour, different industrial standards prescribe the use of rigorous techniques for development and verification of such systems: the more critical the system, the more rigorous the approach that should be undertaken. To ensure safety of a critical computer-based system, satisfaction of the safety requirements imposed on this system should be demonstrated. This task involves a number of activities. In particular, a set of safety requirements is usually derived by conducting various safety analysis techniques. Strong assurance that the system satisfies the safety requirements can be provided by formal methods, i.e., mathematically based techniques. At the same time, the evidence that the system under consideration meets the imposed safety requirements might be demonstrated by constructing safety cases. However, the overall safety assurance process of critical computer-based systems remains insufficiently defined for the following reasons. Firstly, there are semantic differences between safety requirements and formal models: informally represented safety requirements should be translated into the underlying formal language to enable further verification. Secondly, the development of formal models of complex systems can be labour-intensive and time-consuming. Thirdly, there are only a few well-defined methods for integration of formal verification results into safety cases. This thesis proposes an integrated approach to the rigorous development and verification of safety-critical systems that (1) facilitates elicitation of safety requirements and their incorporation into formal models, (2) simplifies formal modelling and verification by proposing specification and refinement patterns, and (3) assists in the construction of safety cases from the artefacts generated by formal reasoning. Our chosen formal framework is Event-B. It allows us to tackle the complexity of safety-critical systems as well as to structure safety requirements by applying abstraction and stepwise refinement. The Rodin platform, a tool supporting Event-B, assists in automatic model transformations and proof-based verification of the desired system properties. The proposed approach has been validated by several case studies from different application domains.
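For orientation, Event-B verification proceeds by discharging proof obligations at each refinement step. A schematic (simplified) form of the invariant-preservation obligation for an event with parameters t, guard G, and before-after predicate BA is:

    A(c) ∧ I(c, v) ∧ G(c, v, t) ∧ BA(c, v, t, v′) ⇒ I(c, v′)

where A are the context axioms, I the machine invariants, c the constants, and v, v′ the variable values before and after the event; the Rodin platform generates such obligations and discharges many of them automatically.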

Relevance: 70.00%

Abstract:

Techniques for the coherent generation and detection of electromagnetic radiation in the far infrared, or terahertz, region of the electromagnetic spectrum have recently developed rapidly and may soon be applied for in vivo medical imaging. Both continuous wave and pulsed imaging systems are under development, with terahertz pulsed imaging being the more common method. Typically a pump and probe technique is used, with picosecond pulses of terahertz radiation generated from femtosecond infrared laser pulses, using an antenna or nonlinear crystal. After interaction with the subject either by transmission or reflection, coherent detection is achieved when the terahertz beam is combined with the probe laser beam. Raster scanning of the subject leads to an image data set comprising a time series representing the pulse at each pixel. A set of parametric images may be calculated, mapping the values of various parameters calculated from the shape of the pulses. A safety analysis has been performed, based on current guidelines for skin exposure to radiation of wavelengths 2.6 µm–20 mm (15 GHz–115 THz), to determine the maximum permissible exposure (MPE) for such a terahertz imaging system. The international guidelines for this range of wavelengths are drawn from two U.S. standards documents. The method for this analysis was taken from the American National Standard for the Safe Use of Lasers (ANSI Z136.1), and to ensure a conservative analysis, parameters were drawn from both this standard and from the IEEE Standard for Safety Levels with Respect to Human Exposure to Radio Frequency Electromagnetic Fields (C95.1). The calculated maximum permissible average beam power was 3 mW, indicating that typical terahertz imaging systems are safe according to the current guidelines. Further developments may however result in systems that will exceed the calculated limit. Furthermore, the published MPEs for pulsed exposures are based on measurements at shorter wavelengths and with pulses of longer duration than those used in terahertz pulsed imaging systems, so the results should be treated with caution.
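As a rough illustration of how an irradiance-type MPE translates into a maximum average beam power, the sketch below multiplies the MPE by the beam area at the skin. Both the MPE value and the beam diameter are placeholder assumptions, not the figures derived in the paper.

    import math

    # Placeholder assumptions (not the paper's derived values):
    mpe_irradiance = 0.1       # W/cm^2, assumed long-exposure skin MPE
    beam_diameter_cm = 0.2     # illustrative beam diameter at the skin

    area_cm2 = math.pi * (beam_diameter_cm / 2) ** 2
    max_avg_power_mw = mpe_irradiance * area_cm2 * 1e3
    print(f"maximum permissible average power: {max_avg_power_mw:.1f} mW")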

Relevance: 70.00%

Abstract:

The neutral wire in most existing power flow and fault analysis software is usually merged into the phase wires using Kron's reduction method. In some applications, such as fault analysis, fault location, power quality studies, safety analysis, and loss analysis, knowledge of the neutral wire and ground currents and voltages can be of particular interest. A general short-circuit analysis algorithm for three-phase four-wire distribution networks, based on the hybrid compensation method, is presented. In this novel use of the technique, the neutral wire and an assumed ground conductor are explicitly represented. A generalised fault analysis method is applied to the distribution network for conditions with and without embedded generation. Results obtained from several case studies on medium- and low-voltage test networks with unbalanced loads, for isolated and multi-grounded neutral scenarios, are presented and discussed. Simulation results show the effects of neutrals and system grounding on the operation of the distribution feeders.
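For contrast with the explicit-neutral approach, a minimal numpy sketch of Kron's reduction is shown below, folding the neutral row and column of a 4×4 primitive impedance matrix into an equivalent 3×3 phase matrix; the impedance values are arbitrary placeholders.

    import numpy as np

    # 4x4 primitive impedance matrix (ohm/km): rows/cols a, b, c, n.
    # Self and mutual impedances below are illustrative placeholders only.
    zs, zm = 0.3 + 1.0j, 0.1 + 0.5j
    Z = np.full((4, 4), zm, dtype=complex)
    np.fill_diagonal(Z, zs)

    Zpp = Z[:3, :3]   # phase-phase block
    Zpn = Z[:3, 3:]   # phase-neutral block
    Znp = Z[3:, :3]   # neutral-phase block
    Znn = Z[3:, 3:]   # neutral-neutral block

    # Kron reduction: eliminate the neutral assuming it is perfectly grounded,
    # discarding exactly the neutral/ground information the four-wire model keeps.
    Zabc = Zpp - Zpn @ np.linalg.inv(Znn) @ Znp
    print(Zabc)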

Relevance: 70.00%

Abstract:

Burn-up credit analyses are based on depletion calculations that provide an accurate prediction of spent fuel isotopic contents, followed by criticality calculations to assess k-eff, the effective neutron multiplication factor.

Relevance: 70.00%

Abstract:

A Probabilistic Safety Assessment (PSA) is being developed for a steam-methane reforming hydrogen production plant linked to a High-Temperature Gas Cooled Nuclear Reactor (HTGR). This work is based on the Japan Atomic Energy Research Institute's (JAERI) High Temperature Test Reactor (HTTR) prototype in Japan. The study has two major objectives: calculating the risk to onsite and offsite individuals, and calculating the frequency of different types of damage to the complex. A simplified HAZOP study was performed to identify initiating events, based on existing studies. The initiating events presented here are methane pipe break, helium pipe break, and PPWC heat exchanger pipe break. Generic data were used for the fault tree analysis and the initiating event frequencies. SAPHIRE was used for the PSA quantification. The results show that the average frequency of an accident at this complex is 2.5E-06 per year, divided among the various end states. The dominant sequences result in graphite oxidation, which does not pose a health risk to the population. The dominant sequences that could affect the population are those that result in a methane explosion and occur at 6.6E-8/year, while the other sequences are much less frequent. The health risk arises only if there are people in the vicinity who could be affected by the explosion. The analysis also demonstrates that an accident in one of the plants has little effect on the other, given the design-basis distance between the plants, the fact that the reactor is underground, and other safety characteristics of the HTGR. Sensitivity studies are being performed in order to determine where additional and improved data are needed.
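As a toy illustration of the fault-tree arithmetic behind such end-state frequencies, the sketch below combines independent basic events through AND/OR gates and multiplies by an initiating-event frequency; all numbers and event names are invented, not the study's data.

    # All probabilities and frequencies below are invented placeholders.
    p_isolation_valve_fails = 1e-3   # per demand
    p_detector_fails = 5e-4          # per demand
    p_operator_error = 1e-2          # per demand

    initiating_event_freq = 1e-2     # e.g. a pipe break, per year

    # AND gate: both redundant protections must fail.
    p_and = p_isolation_valve_fails * p_detector_fails
    # OR gate: the sequence proceeds if either branch fails.
    p_or = 1 - (1 - p_and) * (1 - p_operator_error)

    end_state_freq = initiating_event_freq * p_or   # per year
    print(f"end-state frequency: {end_state_freq:.2e} / year")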

Relevance: 70.00%

Abstract:

The design of a nuclear power plant has to follow a number of regulations aimed at limiting the risks inherent in this type of installation. The goal is to prevent, and to limit the consequences of, any possible incident that might threaten the public or the environment. To verify that the safety requirements are met, a safety assessment process is followed. Safety analysis is a key component of a safety assessment, and it incorporates both probabilistic and deterministic approaches. The deterministic approach attempts to ensure that the various situations, and in particular the accidents, considered to be plausible have been taken into account, and that the monitoring systems and engineered safety and safeguard systems will be capable of ensuring the safety goals. Probabilistic safety analysis, on the other hand, tries to demonstrate that the safety requirements are met for potential accidents both within and beyond the design basis, thus identifying vulnerabilities not necessarily accessible through deterministic safety analysis alone. Probabilistic safety assessment (PSA) methodology is widely used in the nuclear industry and is especially effective in comprehensive assessment of the measures needed to prevent accidents with small probability but severe consequences. Still, the trend towards risk-informed regulation (RIR) demands a more extended use of risk assessment techniques, with a significant need to further extend the scope and quality of PSA. This is where the theory of stimulated dynamics (TSD) intervenes: it is the mathematical foundation of the integrated safety assessment (ISA) methodology developed by the Modelling and Simulation (MOSI) branch of the CSN (Consejo de Seguridad Nuclear). The methodology attempts to extend classical PSA with accident dynamic analysis, an assessment of the damage associated with the transients, and a computation of the damage frequency. The application of this ISA methodology requires a computational framework called SCAIS (Simulation Code System for Integrated Safety Assessment). SCAIS supports accident dynamic analysis through simulation of nuclear accident sequences and operating procedures; it also includes probabilistic quantification of fault trees and sequences, and integration and statistical treatment of risk metrics. SCAIS makes intensive use of code-coupling techniques to join typical thermal-hydraulic analysis, severe accident, and probability calculation codes. The integration of accident simulation into the risk assessment process, which requires the use of complex nuclear plant models, is what makes the methodology so powerful, yet at the cost of an enormous increase in complexity. As the complexity of the process is concentrated in the accident simulation codes, the question arises of whether it is possible to reduce the number of required simulations, which is the focus of the present work. This document presents work on more efficient techniques applied to the risk assessment process within the ISA methodology, with the primary goal of decreasing the number of simulations needed for an adequate estimation of the damage probability. As the methodology and tools are relatively recent, little work has been done along this line of investigation, making it a difficult but necessary task, and time limitations forced a reduction in scope. Some assumptions were therefore made in order to work in simplified scenarios best suited for an initial approximation to the problem. The following section explains in detail the process followed to design and test the developed techniques. The next section introduces the general concepts and formulae of the TSD theory, which are at the core of the risk assessment process. Afterwards, a description of the simulation framework requirements and design is given, followed by an introduction to the developed techniques, giving full detail of their mathematical background and procedures. Later, the test case is described and results from the application of the techniques are shown. Finally, conclusions are presented and future lines of work are outlined.
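To see why reducing the number of simulations matters, consider the crude Monte Carlo estimator of a damage probability p (the standard baseline, not TSD itself):

    p̂ = (1/N) Σᵢ 1[damage in run i],   relative error ≈ √((1 − p)/(p·N))

For a damage probability around 1e-4, resolving p to 10% relative error therefore requires on the order of 10⁶ full transient simulations, which is what motivates more efficient sampling and quantification schemes.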

Relevance: 70.00%

Abstract:

Steam Generator Tube Rupture (SGTR) sequences in Pressurized Water Reactors are known to be among the most demanding transients for the operating crew. SGTR is a special kind of transient because it can lead to radiological releases without core damage or containment failure, since the ruptured tubes constitute a direct path from the reactor coolant system to the environment. The first methodology used to perform the Deterministic Safety Analysis (DSA) of an SGTR did not credit operator action for the first 30 min of the transient, assuming that the operating crew would be able to stop the primary-to-secondary leakage within that period of time. However, the real SGTR accidents that have occurred in the USA and around the world demonstrated that operators usually take more than 30 min to stop the leakage in actual sequences. Several methodologies were proposed to overcome that fact, considering operator actions from the beginning of the transient, as is done in Probabilistic Safety Analysis. This paper presents the results of comparing different assumptions regarding the single failure criterion and the operator actions taken from the most common methodologies included in the different Deterministic Safety Analyses. A single failure criterion that has not previously been analysed in the literature is also proposed and analysed here. The comparison is done with a PWR Westinghouse three-loop model in the TRACE code (Almaraz NPP) with best-estimate assumptions but including deterministic hypotheses such as the single failure criterion or loss of offsite power. The behaviour of the reactor is quite diverse depending on the assumptions made regarding the operator actions. On the other hand, although the hypotheses include strong conservatisms, such as the single failure criterion, all the results are quite far from the regulatory limits. In addition, some improvements to the Emergency Operating Procedures to minimize the offsite release from the damaged SG in case of an SGTR are outlined, taking into account the offsite dose sensitivity results.

Relevance: 70.00%

Abstract:

Over the past years, the paradigm of component-based software engineering has become established in the construction of complex mission-critical systems. Due to this trend, there is a practical need for techniques that evaluate critical properties (such as safety, reliability, availability, or performance) of these systems. In this paper, we review several high-level techniques for the evaluation of safety properties of component-based systems, and we propose a new evaluation model (State Event Fault Trees) that extends safety analysis towards a lower abstraction level. This model possesses state-event semantics and strong encapsulation, which is especially useful for the evaluation of component-based software systems. Finally, we compare the techniques and give suggestions for their combined usage.

Relevance: 70.00%

Abstract:

Formal methods have significant benefits for developing safety-critical systems, in that they allow for correctness proofs, model checking of safety and liveness properties, deadlock checking, etc. However, formal methods do not scale very well and demand specialist skills when developing real-world systems. For these reasons, development and analysis of large-scale safety-critical systems will require effective integration of formal and informal methods. In this paper, we use such an integrative approach to automate Failure Modes and Effects Analysis (FMEA), a widely used system safety analysis technique, using a high-level graphical modelling notation (Behavior Trees) and model checking. We inject component failure modes into the Behavior Trees and translate the resulting Behavior Trees to SAL code. This enables us to model check whether the system, in the presence of these faults, satisfies its safety properties, specified by temporal logic formulas. The benefit of this process is tool support that automates the tedious and error-prone aspects of FMEA.
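The actual approach relies on Behavior Trees and the SAL model checker; the toy Python sketch below only illustrates the core idea of injecting a failure mode into a component model and exhaustively checking a safety invariant over the reachable states. The tank example, failure mode, and property are invented for illustration.

    # Toy tank model: the controller refills the tank when the level is low.
    # Injected FMEA failure mode: the pump is stuck off and ignores commands.
    def step(level, stuck_off):
        pump_on = (level <= 3) and not stuck_off  # controller + injected fault
        return min(10, level + 2) if pump_on else max(0, level - 1)

    def reachable(stuck_off, start=5):
        seen, frontier = set(), [start]
        while frontier:                  # exhaustive state-space exploration
            lvl = frontier.pop()
            if lvl not in seen:
                seen.add(lvl)
                frontier.append(step(lvl, stuck_off))
        return seen

    # Safety property ("the tank never runs dry"), checked over all states.
    for fault in (False, True):
        unsafe = any(lvl == 0 for lvl in reachable(fault))
        print("stuck-off" if fault else "nominal ", "->",
              "UNSAFE" if unsafe else "safe")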

Relevance: 70.00%

Abstract:

In 2010, the American Association of State Highway and Transportation Officials (AASHTO) released a safety analysis software system known as SafetyAnalyst. SafetyAnalyst implements the empirical Bayes (EB) method, which requires the use of Safety Performance Functions (SPFs). The system is equipped with a set of national default SPFs, and the software calibrates the default SPFs to represent the agency's safety performance. However, it is recommended that agencies generate agency-specific SPFs whenever possible. Many investigators support the view that the agency-specific SPFs represent the agency data better than the national default SPFs calibrated to agency data. Furthermore, it is believed that the crash trends in Florida are different from the states whose data were used to develop the national default SPFs. In this dissertation, Florida-specific SPFs were developed using the 2008 Roadway Characteristics Inventory (RCI) data and crash and traffic data from 2007-2010 for both total and fatal and injury (FI) crashes. The data were randomly divided into two sets, one for calibration (70% of the data) and another for validation (30% of the data). The negative binomial (NB) model was used to develop the Florida-specific SPFs for each of the subtypes of roadway segments, intersections and ramps, using the calibration data. Statistical goodness-of-fit tests were performed on the calibrated models, which were then validated using the validation data set. The results were compared in order to assess the transferability of the Florida-specific SPF models. The default SafetyAnalyst SPFs were calibrated to Florida data by adjusting the national default SPFs with local calibration factors. The performance of the Florida-specific SPFs and SafetyAnalyst default SPFs calibrated to Florida data were then compared using a number of methods, including visual plots and statistical goodness-of-fit tests. The plots of SPFs against the observed crash data were used to compare the prediction performance of the two models. Three goodness-of-fit tests, represented by the mean absolute deviance (MAD), the mean square prediction error (MSPE), and Freeman-Tukey R² (R²FT), were also used for comparison in order to identify the better-fitting model. The results showed that Florida-specific SPFs yielded better prediction performance than the national default SPFs calibrated to Florida data. The performance of Florida-specific SPFs was further compared with that of the full SPFs, which include both traffic and geometric variables, in two major applications of SPFs, i.e., crash prediction and identification of high crash locations. The results showed that both SPF models yielded very similar performance in both applications. These empirical results support the use of the flow-only SPF models adopted in SafetyAnalyst, which require much less effort to develop compared to full SPFs.
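As a minimal sketch of fitting a flow-only SPF of the common form N = exp(b0) · AADT^b1 · L, the snippet below uses a negative binomial GLM with segment length as an offset; the synthetic data, coefficient values, and dispersion parameter are illustrative assumptions only.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    aadt = rng.uniform(1_000, 60_000, n)    # annual average daily traffic
    length = rng.uniform(0.1, 2.0, n)       # segment length, miles

    # Synthetic crash counts from an assumed "true" SPF, for illustration.
    mu = np.exp(-7.0) * aadt**0.85 * length
    crashes = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))

    # Flow-only SPF: log(mu) = b0 + b1*log(AADT) + log(L) as an offset.
    X = sm.add_constant(np.log(aadt))
    model = sm.GLM(crashes, X,
                   family=sm.families.NegativeBinomial(alpha=0.5),
                   offset=np.log(length))
    print(model.fit().summary())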

Relevance: 60.00%

Abstract:

BACKGROUND & AIMS: A sustained virologic response (SVR) to therapy for hepatitis C virus (HCV) infection is defined as the inability to detect HCV RNA 24 weeks after completion of treatment. Although small studies have reported that the SVR is durable and lasts for long periods, this durability has not been conclusively demonstrated. METHODS: The durability of treatment responses was examined in patients originally enrolled in one of 9 randomized multicenter trials (n = 1343). The study included patients who received pegylated interferon (peginterferon) alfa-2a alone (n = 166) or in combination with ribavirin (n = 1077, including 79 patients with normal alanine aminotransferase levels and 100 patients who were coinfected with human immunodeficiency virus and HCV) and whose serum samples were negative for HCV RNA (<50 IU/mL) at their final assessment. Patients were assessed annually, from the date of last treatment, for a mean of 3.9 years (range, 0.8-7.1 years). RESULTS: Most patients (99.1%) who achieved an SVR had undetectable levels of HCV RNA in serum samples throughout the follow-up period. Serum samples from 0.9% of the patients contained HCV RNA a mean of 1.8 years (range, 1.1-2.9 years) after treatment ended. It is not clear whether these patients were reinfected or experienced a relapse. CONCLUSIONS: In a large cohort of patients monitored for the durability of an SVR, the SVR was maintained for almost 4 years after treatment with peginterferon alfa-2a alone or in combination with ribavirin. In patients with chronic hepatitis C infection, the SVR is durable and these patients should be considered cured.