974 results for "Rarefaction curve"


Relevance: 10.00%

Abstract:

Background: Since generic drugs have the same therapeutic effect as the original formulation but at generally lower cost, their use should be promoted more heavily. However, considerable barriers to their wider use have been observed in many countries. The present study examines the influence of patients, physicians and certain characteristics of the generics market on generic substitution in Switzerland. Methods: We used reimbursement claims data submitted to a large health insurer by insured individuals living in one of Switzerland's three linguistic regions during 2003. All dispensed drugs studied here were substitutable. The outcome (use of a generic or not) was modelled by logistic regression, adjusted for patient characteristics (gender, age, treatment complexity, substitution groups) and for several variables describing reimbursement incentives (deductible, co-payments) and the generics market (prices, packaging, co-branded original, number of available generics, etc.). Results: The overall generic substitution rate for 173,212 dispensed prescriptions was 31%, though this varied considerably across cantons. Poor health status (older patients, complex treatments) was associated with lower generic use. Higher rates were associated with higher out-of-pocket costs, greater price differences between the original and the generic, and with the number of generics on the market, while reformulation and repackaging were associated with lower rates. The substitution rate was 13% lower among hospital physicians. Adopting the prescribing practices of the canton with the highest substitution rate would increase substitution in the other cantons by as much as 26%. Conclusions: Patient health status explained part of the reluctance to substitute a generic for an original formulation. Economic incentives were effective, but with a moderate overall effect. The large interregional differences indicate that prescribing behaviours and beliefs are probably the main determinants of generic substitution.
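For illustration, a logistic model of this kind can be sketched in a few lines. The file and column names below (claims_2003.csv, generic, treatment_complexity, price_difference, n_generics, etc.) are hypothetical placeholders, not the study's actual variables:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical claims table: one row per dispensed prescription,
# generic = 1 if a generic was dispensed, 0 if the original.
claims = pd.read_csv("claims_2003.csv")

model = smf.logit(
    "generic ~ gender + age + treatment_complexity + deductible"
    " + copayment + price_difference + n_generics",
    data=claims,
).fit()
print(model.summary())
print(np.exp(model.params))  # coefficients expressed as odds ratios
```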

Relevance: 10.00%

Abstract:

We report a drug interaction between methotrexate (MTX) and chloral hydrate (CH) observed in a child treated for acute leukemia. Significantly slower MTX clearance and increased MTX exposure occurred during the first three courses of high-dose chemotherapy when MTX was co-administered with CH, despite normal renal function, adequate hydration and alkalinization. The mean MTX area under the curve was 1,134 µmol·h/L when CH was co-administered, compared with 608 µmol·h/L after discontinuation of CH. This interaction possibly resulted from competition between anionic CH metabolites and MTX for renal tubular excretion.
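The exposures quoted above are areas under the concentration-time curve; given timed plasma samples, such an AUC can be approximated with the trapezoidal rule. The sampling times and concentrations below are illustrative, not the patient's data:

```python
import numpy as np

# Hypothetical MTX plasma samples: hours after start of infusion
# and concentrations in µmol/L (illustrative values only).
t = np.array([0, 6, 12, 24, 36, 48, 72], dtype=float)
c = np.array([95, 60, 30, 8.0, 2.5, 0.9, 0.2])

auc = np.trapz(c, t)  # trapezoidal rule -> µmol·h/L
print(f"AUC(0-72 h) = {auc:.0f} µmol·h/L")
```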

Relevance: 10.00%

Abstract:

To date, there is no widely accepted clinical scale to monitor the evolution of depressive symptoms in demented patients. We assessed the sensitivity to treatment of a validated French version of the Health of the Nation Outcome Scale 65+ (HoNOS65+F) compared with five routinely used scales. Thirty elderly inpatients with an ICD-10 diagnosis of dementia and depression were evaluated at admission and discharge, and score changes were tested with paired t-tests. Using the Brief Psychiatric Rating Scale (BPRS) "depressive mood" item as the gold standard, a receiver operating characteristic (ROC) analysis assessed the validity of changes in the HoNOS65+F "depressive symptoms" item score. Unlike the Geriatric Depression Scale, Mini Mental State Examination and Activities of Daily Living scores, BPRS scores decreased and the Global Assessment of Functioning scale score increased significantly from admission to discharge. Amongst the HoNOS65+F items, the "behavioural disturbance", "depressive symptoms", "activities of daily life" and "drug management" items showed highly significant changes between the first and last day of hospitalization. The ROC analysis revealed that changes in the HoNOS65+F "depressive symptoms" item correctly classified 93% of the cases, with good sensitivity (0.95) and specificity (0.88). These data suggest that the HoNOS65+F "depressive symptoms" item may provide a valid assessment of the evolution of depressive symptoms in demented patients.
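A ROC analysis of this kind can be sketched as follows. The score changes and gold-standard labels are invented for illustration, and larger score decreases are assumed to indicate improvement:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Invented per-patient changes in the HoNOS65+F "depressive symptoms"
# item and a binary gold standard from the BPRS "depressive mood" item
# (1 = improved). A score *decrease* should indicate improvement.
delta_item = np.array([-2, -1, 0, -3, -1, 0, -2, 1, -1, -2])
improved = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 1])

auc = roc_auc_score(improved, -delta_item)  # negate so larger = better
fpr, tpr, thr = roc_curve(improved, -delta_item)
for f, t, c in zip(fpr, tpr, thr):
    print(f"cut-off {c:+.0f}: sensitivity {t:.2f}, specificity {1 - f:.2f}")
print(f"AUC = {auc:.2f}")
```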

Relevance: 10.00%

Abstract:

BACKGROUND AND AIM: There is an ongoing debate about which obesity marker better predicts cardiovascular disease (CVD). In this study, the relationships between obesity markers and a high (>5%) 10-year risk of fatal CVD were assessed. METHODS AND RESULTS: A cross-sectional study was conducted including 3047 women and 2689 men aged 35-75 years. Body fat percentage was assessed by tetrapolar bioimpedance. CVD risk was assessed using the SCORE risk function, and gender- and age-specific cut points for body fat were derived. The diagnostic accuracy of each obesity marker was evaluated through receiver operating characteristic (ROC) analysis. In men, body fat showed a higher correlation (r=0.31) with 10-year CVD risk than waist/hip ratio (WHR, r=0.22), waist (r=0.22) or BMI (r=0.19); the corresponding values in women were 0.18, 0.15, 0.11 and 0.05, respectively (all p<0.05). In both genders, body fat showed the highest area under the ROC curve (AUC): in men, the AUCs (95% confidence intervals) were 76.0 (73.8-78.2), 67.3 (64.6-69.9), 65.8 (63.1-68.5) and 60.6 (57.9-63.5) for body fat, WHR, waist and BMI, respectively; in women, the corresponding values were 72.3 (69.2-75.3), 66.6 (63.1-70.2), 64.1 (60.6-67.6) and 58.8 (55.2-62.4). The body fat percentage criterion captured three times more subjects with high CVD risk than the BMI criterion, and almost twice as many as the WHR criterion. CONCLUSION: Obesity defined by body fat percentage is more strongly related to 10-year risk of fatal CVD than obesity markers based on WHR, waist or BMI.
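The abstract does not state how the cut points were derived; one standard approach, assumed here purely for illustration, is to take the threshold maximizing Youden's J on the ROC curve. The data below are simulated placeholders:

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_cutpoint(marker, high_risk):
    """Return the marker value maximizing sensitivity + specificity - 1."""
    fpr, tpr, thr = roc_curve(high_risk, marker)
    return thr[np.argmax(tpr - fpr)]

# Simulated placeholders: body fat (%) and a SCORE-based high-risk flag.
rng = np.random.default_rng(0)
body_fat = rng.normal(28, 7, 500)
high_risk = (body_fat + rng.normal(0, 6, 500) > 35).astype(int)

print(f"body-fat cut point: {youden_cutpoint(body_fat, high_risk):.1f} %")
```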

Relevance: 10.00%

Abstract:

The ecology of mosquitoes (Diptera: Culicidae) was studied in areas of the Serra do Mar State Park, State of São Paulo, Brazil. Systematized monthly human-bait collections were made three times a day, for periods of 2 or 3 h each, in sylvatic and rural areas over 24 consecutive months (January 1991 to December 1992). A total of 24,943 adult mosquitoes belonging to 57 species were collected during 622 collection periods. Coquillettidia chrysonotum was the most frequently collected mosquito (45.8%), followed by Aedes serratus (6.8%), Cq. venezuelensis (6.5%), Psorophora ferox (5.2%) and Ps. albipes (3.1%). Monthly mean temperature and relative humidity stayed within the maximum and minimum limits of the averages for the previous ten years, and rainfall followed the curve of the ten-year averages. These climatic factors influenced the incidence of some species; temperature: Anopheles cruzii, An. mediopunctatus, Ae. scapularis, Ae. fulvus, Cq. chrysonotum, Cq. venezuelensis, Runchomyia reversa, Wyeomyia dyari, Wy. confusa, Wy. shannoni, Wy. theobaldi and Limatus flavisetosus; relative humidity: Ae. serratus, Ae. scapularis, Cq. venezuelensis and Ru. reversa; rainfall: An. cruzii, Ae. scapularis, Ae. fulvus, Cq. venezuelensis, Ru. reversa, Wy. theobaldi and Li. flavisetosus.

Relevance: 10.00%

Abstract:

PURPOSE: It is generally assumed that the biodistribution and pharmacokinetics of radiolabelled antibodies remain similar between the dosimetric and therapeutic injections in radioimmunotherapy (RIT). However, circulation half-lives of unlabelled rituximab have been reported to increase progressively over the weekly injections of standard therapy. The aim of this study was to evaluate the evolution of the pharmacokinetics of repeated 131I-rituximab injections during treatment with unlabelled rituximab in patients with non-Hodgkin's lymphoma (NHL). METHODS: Patients received standard weekly therapy with rituximab (375 mg/m2) for 4 weeks and a fifth injection at 7 or 8 weeks. Each patient also received three injections of 185 MBq 131I-rituximab, in treatment weeks 1, 3 and 7 (two patients) or weeks 2, 4 and 8 (two patients). Each of the 12 radiolabelled antibody injections was followed by three whole-body (WB) scintigraphic studies during 1 week, with blood sampling on the same occasions. Additional WB scans were performed 2 and 4 weeks after each 131I-rituximab injection, prior to the second and third injections, respectively. RESULTS: A single-exponential radioactivity decrease was observed for WB, liver, spleen, kidneys and heart. Biodistribution and half-lives were patient specific and did not change significantly after the second or third injection compared with the first. Blood T1/2β, calculated from the sequential blood samples fitted to a bi-exponential curve, was similar to the T1/2 of heart and liver but shorter than that of WB and kidneys. The effective radiation dose, calculated from attenuation-corrected WB scans and blood using MIRDOSE 3.1, was 0.53 ± 0.05 mSv/MBq (range 0.48-0.59 mSv/MBq). The radiation dose was highest for spleen and kidneys, followed by heart and liver. CONCLUSION: These results show that the biodistribution and tissue kinetics of 131I-rituximab, while specific to each patient, remained constant during unlabelled antibody therapy. RIT radiation doses can therefore be reliably extrapolated from a preceding dosimetry study.
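Fitting a bi-exponential disposition curve to sequential blood samples, as described above, can be done with non-linear least squares. The sample times, activities and starting values below are illustrative only:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a, alpha, b, beta):
    """Bi-exponential blood curve: fast (alpha) and slow (beta) phases."""
    return a * np.exp(-alpha * t) + b * np.exp(-beta * t)

# Illustrative decay-corrected blood samples: time (h) and activity (%IA/l).
t = np.array([1, 4, 24, 48, 96, 168], dtype=float)
y = np.array([9.5, 8.1, 5.2, 3.9, 2.3, 0.9])

(a, alpha, b, beta), _ = curve_fit(biexp, t, y, p0=(5, 0.3, 5, 0.01))
print(f"blood T1/2(beta) = {np.log(2) / beta:.0f} h")
```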

Relevance: 10.00%

Abstract:

QUESTIONS UNDER STUDY AND PRINCIPLES: Estimating the glomerular filtration rate (GFR) in hospitalised patients with chronic kidney disease (CKD) is important for drug prescription, but it remains a difficult task. The purpose of this study was to investigate the reliability of selected algorithms based on serum creatinine, cystatin C and beta-trace protein for estimating GFR, and the potential added value of measuring muscle mass by bioimpedance. In a prospective unselected group of patients with CKD hospitalised in a general internal medicine ward, GFR was evaluated using inulin clearance as the gold standard and compared with the algorithms of Cockcroft, MDRD, Larsson (cystatin C), White (beta-trace protein) and MacDonald (creatinine and muscle mass by bioimpedance). Sixty-nine patients were included in the study. Median age (interquartile range) was 80 years (73-83), weight 74.7 kg (67.0-85.6), appendicular lean mass 19.1 kg (14.9-22.3), serum creatinine 126 μmol/l (100-149), cystatin C 1.45 mg/l (1.19-1.90), beta-trace protein 1.17 mg/l (0.99-1.53) and GFR measured by inulin 30.9 ml/min (22.0-43.3). The errors in the estimation of GFR and the areas under the ROC curves (95% confidence interval) relative to inulin were, respectively: Cockcroft 14.3 ml/min (5.55-23.2) and 0.68 (0.55-0.81); MDRD 16.3 ml/min (6.4-27.5) and 0.76 (0.64-0.87); Larsson 12.8 ml/min (4.50-25.3) and 0.82 (0.72-0.92); White 17.6 ml/min (11.5-31.5) and 0.75 (0.63-0.87); MacDonald 32.2 ml/min (13.9-45.4) and 0.65 (0.52-0.78). Currently used algorithms overestimate GFR in hospitalised patients with CKD. As a consequence, eGFR-targeted prescriptions of renally cleared drugs might expose patients to overdosing. The best results were obtained with the Larsson algorithm. The determination of muscle mass by bioimpedance did not contribute significantly.
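Of the algorithms compared, the Cockcroft(-Gault) estimate is simple enough to sketch; the constants 1.23 (men) and 1.04 (women) are the usual conversion when serum creatinine is given in µmol/l. Applied to the cohort's median values it yields roughly 44 ml/min, well above the median inulin clearance of 30.9 ml/min, in line with the overestimation reported above:

```python
def cockcroft_gault(age_y, weight_kg, scr_umol_l, female):
    """Creatinine clearance (ml/min) by Cockcroft-Gault with serum
    creatinine in µmol/l (1.23 men / 1.04 women replace the mg/dl constant)."""
    k = 1.04 if female else 1.23
    return (140 - age_y) * weight_kg * k / scr_umol_l

# Median values from the cohort above (80 y, 74.7 kg, SCr 126 µmol/l):
print(round(cockcroft_gault(80, 74.7, 126, female=False), 1))  # ~43.8 ml/min
```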

Relevance: 10.00%

Abstract:

An enzyme-linked immunosorbent assay (ELISA) was standardized for the detection of cryptococcal antigen in serum and cerebrospinal fluid. The system was evaluated on clinical samples from patients infected with human immunodeficiency virus, with and without a previous diagnosis of cryptococcosis. The evaluated system is highly sensitive and specific, and no significant differences were found when it was compared with latex agglutination. A standard curve with purified Cryptococcus neoformans antigen was established for antigen quantification in positive samples.
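The abstract does not specify the standard-curve model; a four-parameter logistic (4PL) fit, assumed here, is a common choice for ELISA quantification. The standards and optical densities below are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    """Four-parameter logistic, a common ELISA standard-curve model."""
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** hill)

def conc_from_od(y, bottom, top, ec50, hill):
    """Invert the 4PL to read a sample concentration off the curve."""
    return ec50 * ((top - bottom) / (y - bottom) - 1.0) ** (1.0 / hill)

# Invented standards: antigen (ng/ml) vs. optical density.
conc = np.array([1000, 250, 62.5, 15.6, 3.9, 0.98])
od = np.array([2.10, 1.85, 1.30, 0.65, 0.25, 0.10])

params, _ = curve_fit(four_pl, conc, od, p0=(0.05, 2.2, 50, -1.0))
print(f"sample with OD 1.00 -> {conc_from_od(1.00, *params):.1f} ng/ml")
```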

Relevance: 10.00%

Abstract:

Whole-body (WB) planar imaging has long been one of the staple methods of dosimetry, and its quantification has been formalized by the MIRD Committee in pamphlet no 16. One of the issues not specifically addressed in the formalism occurs when the count rates reaching the detector are sufficiently high to result in camera count saturation. Camera dead-time effects have been extensively studied, but all of the developed correction methods assume static acquisitions. However, during WB planar (sweep) imaging, a variable amount of imaged activity exists in the detector's field of view as a function of time and therefore the camera saturation is time dependent. A new time-dependent algorithm was developed to correct for dead-time effects during WB planar acquisitions that accounts for relative motion between detector heads and imaged object. Static camera dead-time parameters were acquired by imaging decaying activity in a phantom and obtaining a saturation curve. Using these parameters, an iterative algorithm akin to Newton's method was developed, which takes into account the variable count rate seen by the detector as a function of time. The algorithm was tested on simulated data as well as on a whole-body scan of high activity Samarium-153 in an ellipsoid phantom. A complete set of parameters from unsaturated phantom data necessary for count rate to activity conversion was also obtained, including build-up and attenuation coefficients, in order to convert corrected count rate values to activity. The algorithm proved successful in accounting for motion- and time-dependent saturation effects in both the simulated and measured data and converged to any desired degree of precision. The clearance half-life calculated from the ellipsoid phantom data was calculated to be 45.1 h after dead-time correction and 51.4 h with no correction; the physical decay half-life of Samarium-153 is 46.3 h. Accurate WB planar dosimetry of high activities relies on successfully compensating for camera saturation which takes into account the variable activity in the field of view, i.e. time-dependent dead-time effects. The algorithm presented here accomplishes this task.
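The paper's time-dependent algorithm is not reproduced here, but its static building block can be sketched: inverting a saturation model for the true rate n given a measured rate m by Newton's method. The paralyzable model m = n·exp(-n·τ), the dead time and the measured rate below are all assumptions for illustration:

```python
import math

def true_rate(measured, tau):
    """Invert the paralyzable dead-time model m = n * exp(-n * tau)
    for the true rate n (low-rate root) via Newton's method."""
    if measured >= 1.0 / (tau * math.e):
        raise ValueError("measured rate exceeds the model's maximum")
    n = measured  # the physical root lies above the measured rate
    for _ in range(100):
        f = n * math.exp(-n * tau) - measured
        fprime = (1.0 - n * tau) * math.exp(-n * tau)
        step = f / fprime
        n -= step
        if abs(step) < 1e-9 * n:
            return n
    raise RuntimeError("Newton iteration did not converge")

# Illustrative numbers: 2 µs dead time, 150 kcps measured.
print(f"true rate ~ {true_rate(1.5e5, 2.0e-6):,.0f} cps")
```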

Relevance: 10.00%

Abstract:

This study aimed to quantitatively describe and compare whole-body fat oxidation kinetics in cycling and running using a sinusoidal mathematical model (SIN). Thirteen moderately trained individuals (7 men and 6 women) performed two graded exercise tests, with 3-min stages and 1 km/h (or 20 W) increments, on a treadmill and on a cycle ergometer. Fat oxidation rates were determined using indirect calorimetry and plotted as a function of exercise intensity. The SIN model, which includes three independent variables (dilatation, symmetry and translation) that account for the main quantitative characteristics of the kinetics, provided a mathematical description of fat oxidation kinetics and allowed determination of the intensity (Fatmax) that elicits maximal fat oxidation (MFO). While the mean fat oxidation kinetics in cycling formed a symmetric parabolic curve, the mean kinetics during running was characterized by a greater dilatation (i.e., widening of the curve, P < 0.001) and a rightward asymmetry (i.e., a shift of the peak of the curve to higher intensities, P = 0.01). Fatmax was significantly higher in running than in cycling (P < 0.001), whereas MFO was not significantly different between exercise modes (P = 0.36). This study showed that whole-body fat oxidation kinetics during running is characterized by a greater dilatation and a rightward asymmetry compared with cycling. The greater dilatation may be mainly related to the larger muscle mass involved in running, while the rightward asymmetry may be induced by the specific type of muscle contraction.
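The published SIN equations are not reproduced here; the sketch below fits an illustrative sine-based curve in which parameters playing the roles of dilatation (d), symmetry (s) and translation (t) deform a basic sine wave, and then locates Fatmax. Both the functional form and the data are assumptions, not the authors' model:

```python
import numpy as np
from scipy.optimize import curve_fit

def sin_like(x, mfo, d, s, t):
    """Illustrative sine-based curve over relative intensity x (%VO2max):
    d widens the curve, s skews its peak, t shifts it. An assumed form,
    not the published SIN equations."""
    u = np.clip((x - t) / 100.0, 1e-6, 1.0 - 1e-6)
    return mfo * np.sin(np.pi * u ** s) ** d

# Fake graded-test data around a curve peaking near 56% VO2max.
rng = np.random.default_rng(1)
intensity = np.linspace(20, 90, 15)
fat_ox = sin_like(intensity, 0.45, 1.3, 1.2, 0.0) + rng.normal(0, 0.02, 15)

p, _ = curve_fit(sin_like, intensity, fat_ox, p0=(0.5, 1, 1, 0),
                 bounds=([0.0, 0.1, 0.1, -30], [2.0, 5.0, 5.0, 30]))
grid = np.linspace(1, 99, 981)
fatmax = grid[np.argmax(sin_like(grid, *p))]
print(f"MFO ~ {p[0]:.2f} g/min, Fatmax ~ {fatmax:.0f}% VO2max")
```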

Relevance: 10.00%

Abstract:

This paper characterizes and evaluates the potential of three commercial CT iterative reconstruction methods (ASIR, VEO and iDose4) for dose reduction and image quality improvement. We measured CT number accuracy, standard deviation (SD), noise power spectrum (NPS) and modulation transfer function (MTF) metrics on Catphan phantom images, while five human observers performed four-alternative forced-choice (4AFC) experiments to assess the detectability of low- and high-contrast objects embedded in two pediatric phantoms. Results show that 40% and 100% ASIR, as well as iDose4 levels 3 and 6, do not affect the CT number and strongly decrease image noise, with relative SD constant over a large dose range. However, while ASIR produces a shift of the NPS curve apex, less change is observed with iDose4 with respect to FBP methods. With the second-generation iterative reconstruction VEO, physical metrics are improved even further: SD decreased by 70.4% at 0.5 mGy and spatial resolution improved by 37% (MTF50). The 4AFC experiments show that few improvements in detection task performance are obtained with ASIR and iDose4, whereas VEO makes excellent detection possible even at an ultra-low dose (0.3 mGy), leading to a potential dose reduction by a factor of 3 to 7 (67%-86%). In spite of its longer reconstruction time, and although clinical studies are still required to complete these results, VEO clearly confirms the tremendous potential of iterative reconstruction for dose reduction in CT and appears to be an important tool for patient follow-up, especially for pediatric patients, where the cumulative lifetime dose remains high.
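As an aside, a metric like MTF50 is simply the spatial frequency at which the measured MTF curve falls to 50%, which can be read off by linear interpolation; the sampled curve below is invented:

```python
import numpy as np

def mtf_at(mtf, freq, level=0.5):
    """Spatial frequency at which a descending MTF curve crosses a given
    level (e.g. 0.5 for MTF50), by linear interpolation."""
    i = np.argmax(mtf < level)  # first sample below the level
    f0, f1, m0, m1 = freq[i - 1], freq[i], mtf[i - 1], mtf[i]
    return f0 + (m0 - level) * (f1 - f0) / (m0 - m1)

# Invented MTF samples: spatial frequency (lp/cm) vs. modulation.
freq = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)
mtf = np.array([1.00, 0.95, 0.80, 0.60, 0.42, 0.25, 0.12])
print(f"MTF50 = {mtf_at(mtf, freq):.2f} lp/cm")
```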

Relevance: 10.00%

Abstract:

BACKGROUND: Roux-en-Y gastric bypass (RYGBP), essentially a restrictive bariatric procedure, is currently considered the gold standard for the surgical treatment of morbid obesity. Open surgery in obese patients is associated with a high risk of cardiopulmonary complications, wound infection, and late incisional hernia. Laparoscopic surgery has been shown to reduce perioperative morbidity and to improve postoperative recovery for various procedures. Herein we present our results with laparoscopic RYGBP after an initial 2-year experience. METHODS: A prospective database was created in our department beginning with the first laparoscopic bariatric procedure. To provide a complete follow-up of 6 months, the results of all patients operated on between June 1999 and August 2001 were reviewed. Early surgical results, weight loss, correction of comorbidities, and improvement of quality of life were evaluated. RESULTS: A total of 107 patients were included. There were 82 women and 25 men, with a mean age of 39.7 years (range, 19-58). RYGBP was a primary procedure in 80 cases (49 morbidly obese and 31 superobese patients) and a reoperation after failure or complication of another bariatric operation in 27 cases. Mean duration of surgery was 168 min for morbidly obese patients, 196 min for superobese patients, and 205 min for reoperated patients (p < 0.01). Conversion to open surgery was necessary in two cases. A total of 22 patients (20.5%) developed complications. Nine of them (8.4%) required reoperation for leak (five cases, or 4.6%), bowel occlusion (three cases, or 2.8%), or subphrenic abscess (one case, or 0.9%). Mortality was 0.9%. Major morbidity decreased over time (first two-thirds, 12.5%; last third, 2.7%). Excess weight loss of >50% was achieved in >80% of the patients, corresponding to a loss of 15 body mass index (BMI) units in morbidly obese patients and 20 BMI units in superobese patients. In the vast majority of patients, comorbidities improved or disappeared over time and quality of life improved. CONCLUSIONS: Laparoscopic Roux-en-Y gastric bypass is feasible, but it is a very complex operation. Indeed, it is associated with a long and steep learning curve, as reflected in the high number of major complications among our first 70 patients. The learning curve probably includes between 100 and 150 patients. With increasing experience, the morbidity rate becomes more acceptable and comparable to that of open RYGBP. The results in terms of weight loss and correction of comorbidities are similar to those obtained after open surgery, at least in the short term. However, only surgeons with extensive experience in advanced laparoscopic as well as bariatric surgery should attempt this procedure.

Relevance: 10.00%

Abstract:

Introduction: In adults, strict control of hyperglycemia reduces mortality and morbidity. There is controversy in medical patients and in neurological patients, who can suffer from neuroglucopenia. Objectives: To determine the prevalence and prognostic significance of hyperglycemia among critically ill non-diabetic children, and to evaluate which patients would benefit most from insulin treatment. Methods: Retrospective study using blood glucose levels (GLUC: 9015 values, 923 patients) in our PICU from 01.2003 to 12.2005. Eleven patients with DKA were excluded. Overall PICU mortality was 3.7%. Hyperglycemia was defined as a glycemia ≥6.1 mmol/L, and different cutoff values (6.1, 8.3 and 11.1 mmol/l) were analyzed for glycemia at admission (GLUC). Sustained hyperglycemia was evaluated with the area under the curve normalized per hour (48h-AUC/h) over the first 48 h. The prevalence of hypoglycemia (<3 mmol/L), hyperglycemia and PICU death were analyzed. Results: Using the different cutoff values (≥6.1, ≥8.3 and ≥11.1 mmol/l), the prevalence of hyperglycemia at admission was 31.8%, 16.8% and 10.3%, and the associated mortality was 2.8%, 4.0% and 15.2%, respectively, significantly correlated with the cutoff values (r = 0.95, p < 0.05). The prevalence of hypoglycemia at admission was low (0.9%, with no deaths). The 48h-AUC (mmol/L/h) was computed in 747 children (30 deaths). The prevalence of hyperglycemic 48h-AUC values was 47.5%, 17.3% and 4.0%, with a respective mortality of 3.4%, 6.3% and 20.7% (r = 0.97, p = 0.03). For those with both high GLUC and high 48h-AUC (≥11.1 mmol/L), mortality was high (31.5%), but it decreased dramatically to 5.5% when the 48h-AUC decreased spontaneously to values <8.3 mmol/L/h. Finally, when patients with severe neurological lesions (GCS = 3, n = 22) were excluded, increased mortality was observed only for GLUC (n = 86) and 48h-AUC (n = 26) values higher than 11.1 mmol/L. Conclusions: Hyperglycemia at admission, and even more so sustained hyperglycemia (AUC), are highly correlated with mortality in the PICU. However, the children who would benefit from insulin therapy represent only 3% of our population, a much lower proportion than in adults.

Relevance: 10.00%

Abstract:

ABSTRACT: BACKGROUND: Chest pain raises concern about the possibility of coronary heart disease. Scoring methods have been developed to identify coronary heart disease in emergency settings, but not in primary care. METHODS: Data were collected from a multicenter Swiss clinical cohort study including 672 consecutive patients with chest pain who had visited one of 59 family practitioners' offices. Using the delayed diagnosis as the reference standard, we derived a prediction rule to rule out coronary heart disease by means of a logistic regression model. Known cardiovascular risk factors, pain characteristics, and physical signs associated with coronary heart disease were explored to develop a clinical score. Patients diagnosed with angina or acute myocardial infarction within the year following their initial visit comprised the coronary heart disease group. RESULTS: The coronary heart disease score was derived from eight variables: age, gender, duration of chest pain from 1 to 60 minutes, substernal chest pain location, pain increasing with exertion, absence of a tenderness point at palpation, cardiovascular risk factors, and personal history of cardiovascular disease. The area under the receiver operating characteristic curve was 0.95 (95% confidence interval, 0.92-0.97). Based on this score, 413 patients with values below the 5th percentile of the coronary heart disease patients were considered low risk. Internal validity was confirmed by bootstrapping. External validation using data from a German cohort (Marburg, n = 774) yielded an area under the ROC curve of 0.75 (95% confidence interval, 0.72-0.81), with a sensitivity of 85.6% and a specificity of 47.2%. CONCLUSIONS: This score, based only on history and physical examination, is a complementary tool for ruling out coronary heart disease in primary care patients complaining of chest pain.
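Bootstrap internal validation of such a score can be sketched as follows: refit the rule on resamples and evaluate each refit on the original sample to gauge its stability. The data, predictors and sample size (672) are placeholders mimicking the cohort, not the study's dataset:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def bootstrap_auc(X, y, n_boot=500, seed=0):
    """Refit the score on bootstrap resamples and evaluate each refit
    on the original sample; the spread gauges internal validity."""
    rng = np.random.default_rng(seed)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if len(np.unique(y[idx])) < 2:
            continue  # resample drew a single outcome class
        m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        aucs.append(roc_auc_score(y, m.predict_proba(X)[:, 1]))
    return np.percentile(aucs, [2.5, 50, 97.5])

# Placeholder cohort: 672 patients, 8 candidate predictors.
rng = np.random.default_rng(1)
X = rng.normal(size=(672, 8))
y = (X[:, 0] + X[:, 1] + rng.normal(size=672) > 1.0).astype(int)
print(bootstrap_auc(X, y))  # 2.5th / 50th / 97.5th percentile AUC
```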

Relevance: 10.00%

Abstract:

Methods like event history analysis can show the existence of diffusion and part of its nature, but they do not study the process itself. Nowadays, thanks to the increasing performance of computers, such processes can be studied using computational modeling. This thesis presents an agent-based model of policy diffusion mainly inspired by the model developed by Braun and Gilardi (2006). I first develop a theoretical framework of policy diffusion that presents the main internal drivers of policy diffusion, such as the preference for the policy, the effectiveness of the policy, the institutional constraints, and the ideology, and its main mechanisms, namely learning, competition, emulation, and coercion. Diffusion, expressed by these interdependencies, is thus a complex process that needs to be studied with computational agent-based modeling. In a second step, computational agent-based modeling is defined along with its most significant concepts: complexity and emergence. Using computational agent-based modeling implies developing an algorithm and programming it. Once this is done, the agents are left to interact. Consequently, a phenomenon of diffusion, derived from learning, emerges, meaning that the choice made by an agent is conditional on those made by its neighbors. As a result, learning follows an inverted S-curve, which leads to partial convergence (global divergence and local convergence) that triggers the emergence of political clusters, i.e. the creation of regions with the same policy. Furthermore, the average effectiveness in this computational world tends to follow a J-shaped curve, meaning that not only is time needed for a policy to deploy its effects, but it also takes time for a country to find the best-suited policy. To conclude, diffusion is an emergent phenomenon arising from complex interactions, and the outcomes of my model are in line with the theoretical expectations and the empirical evidence.
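A minimal sketch of the neighbor-driven learning mechanism described above, with an illustrative grid topology and parameters that are not the thesis's actual model: each period, a non-adopting agent adopts with a probability proportional to the share of adopting neighbors, and the cumulative count of adopters traces a roughly S-shaped curve:

```python
import random

random.seed(1)
SIZE, STEPS, BETA = 20, 80, 0.3
grid = [[False] * SIZE for _ in range(SIZE)]
grid[SIZE // 2][SIZE // 2] = True  # a single early-adopting country

def adopting_neighbours(g, i, j):
    """Count adopters among the four lattice neighbours (wrapping edges)."""
    return sum(g[(i + di) % SIZE][(j + dj) % SIZE]
               for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

curve = []
for _ in range(STEPS):
    snapshot = [row[:] for row in grid]  # update all agents synchronously
    for i in range(SIZE):
        for j in range(SIZE):
            if not snapshot[i][j]:
                share = adopting_neighbours(snapshot, i, j) / 4.0
                if random.random() < BETA * share:  # learning from neighbours
                    grid[i][j] = True
    curve.append(sum(map(sum, grid)))
print(curve)  # cumulative adopters: slow start, acceleration, plateau (S-curve)
```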