969 results for Validation Measures
Abstract:
Background Chronic obstructive pulmonary disease (COPD) is increasingly considered a heterogeneous condition. It was hypothesised that COPD, as currently defined, includes different clinically relevant subtypes. Methods To identify and validate COPD subtypes, 342 subjects hospitalised for the first time because of a COPD exacerbation were recruited. Three months after discharge, when clinically stable, symptoms and quality of life, lung function, exercise capacity, nutritional status, biomarkers of systemic and bronchial inflammation, sputum microbiology, CT of the thorax and echocardiography were assessed. COPD groups were identified by partitioning cluster analysis and validated prospectively against cause-specific hospitalisations and all-cause mortality during a 4 year follow-up. Results Three COPD groups were identified: group 1 (n = 126, 67 years) was characterised by severe airflow limitation (postbronchodilator forced expiratory volume in 1 s (FEV1) 38% predicted) and worse performance in most of the respiratory domains of the disease; group 2 (n = 125, 69 years) showed milder airflow limitation (FEV1 63% predicted); and group 3 (n = 91, 67 years) combined a similarly milder airflow limitation (FEV1 58% predicted) with a high proportion of obesity, cardiovascular disorders, diabetes and systemic inflammation. During follow-up, group 1 had more frequent hospitalisations due to COPD (HR 3.28, p < 0.001) and higher all-cause mortality (HR 2.36, p = 0.018) than the other two groups, whereas group 3 had more admissions due to cardiovascular disease (HR 2.87, p = 0.014). Conclusions In patients with COPD recruited at their first hospitalisation, three different COPD subtypes were identified and prospectively validated: "severe respiratory COPD", "moderate respiratory COPD", and "systemic COPD".
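The partitioning cluster analysis used to identify the three groups can be illustrated with a minimal k-means sketch. The abstract does not name the specific partitioning algorithm, so k-means here is a generic stand-in, and the two well-separated synthetic "patient" groups below are purely illustrative:

```python
import numpy as np

def kmeans(X, k, iters=100):
    """Minimal k-means partitioning of the rows of X into k clusters."""
    # deterministic farthest-point initialisation of the centres
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(iters):
        # assign each point to its nearest centre, then recompute centres
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# two well-separated synthetic groups in a standardised feature space
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (40, 2)), rng.normal(3.0, 0.3, (40, 2))])
labels, centers = kmeans(X, 2)
```

In practice the clinical variables would first be standardised so that no single measurement scale dominates the distance computation.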
Abstract:
BACKGROUND: The present study was designed to evaluate surgeons' strategies and adherence to preventive measures against surgical site infections (SSIs). MATERIALS AND METHODS: All surgeons participating in a prospective Swiss multicentric surveillance program for SSIs received a questionnaire developed from the 2008 National (United Kingdom) Institute for Health and Clinical Excellence (NICE) clinical guidelines on prevention and treatment of SSIs. We focused on perioperative management and surgical technique in hernia surgery, cholecystectomy, appendectomy, and colon surgery (COL). RESULTS: Forty-five of 50 surgeons contacted (90%) responded. Smoking cessation and nutritional screening are regularly propagated by 1/3 and 1/2 of surgeons, respectively. Thirty-eight percent practice bowel preparation before COL. Preoperative hair removal is routinely (90%) performed in the operating room with electric clippers. About 50% administer antibiotic prophylaxis within 30 min before incision. Intra-abdominal drains are common after COL (43%). Two thirds of respondents apply nonocclusive wound dressings that are manipulated after hand disinfection (87%). Dressings are usually changed on postoperative day (POD) 2 (75%), and wounds remain undressed on POD 2-3 or 4-5 (36% each). CONCLUSIONS: Surgeons' strategies to prevent SSIs still differ widely. The adherence to the current NICE guidelines is low for many procedures regardless of the available level of evidence. Further research should provide convincing data in order to justify standardization of perioperative management.
Abstract:
Purpose: Concerns about self-reports have led to calls for objective measures of blood alcohol concentration (BAC). The present study compared objective measures with self-reports. Methods: BAC from breath or blood samples were obtained from 272 randomly sampled injured patients who were admitted to a Swiss emergency department (ED). Self-reports were compared a) between those providing and refusing a BAC test, and b) to estimated peak BAC (EPBAC) values based on BACs using the Widmark formula. Results: Those providing BACs were significantly (P < 0.05) younger, more often male, and less often reported alcohol consumption before injury, but consumed higher quantities when drinking. Eighty-eight percent of those with BAC measures gave consistent reports (positive or negative). Significantly more patients reported consumption with negative BAC measures (N = 29) than vice versa (N = 3). Duration of consumption and time between injury and BAC measurement predicted EPBAC better than did the objective BAC measure. Conclusions: There is little evidence that patients who provide objective BAC measures deliberately conceal consumption. ED studies must rely on self-reports and should, e.g., take the time period between injury and ED admission into account. Clearly, objective measures are of clinical relevance, e.g., to provide optimal treatment in the ED. However, they may be less relevant to establishing effects in an epidemiologic sense, such as estimating risk relationships. In this respect, efforts to increase the validity and reliability of self-reports should be preferred over the collection of additional objective measures.
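The Widmark formula mentioned above relates ingested alcohol to a theoretical peak BAC, and a linear elimination term allows back-extrapolation from a later measurement. The sketch below uses commonly cited textbook defaults (distribution factors of roughly 0.68 for men and 0.55 for women, elimination of about 0.15 g/kg per hour); these constants are general knowledge, not values taken from this study:

```python
def widmark_peak_bac(grams_alcohol, weight_kg, sex, hours_since_drinking, beta=0.15):
    """Estimated BAC (g/kg) from intake via the Widmark formula.

    r    : Widmark distribution factor (textbook defaults below)
    beta : hourly elimination rate in g/kg per hour (typical range 0.1-0.2)
    """
    r = 0.68 if sex == "male" else 0.55
    c0 = grams_alcohol / (r * weight_kg)               # theoretical peak, g/kg
    return max(0.0, c0 - beta * hours_since_drinking)  # never below zero

def back_extrapolated_peak(measured_bac, hours_since_peak, beta=0.15):
    """Peak BAC estimated from a later measurement, assuming linear elimination."""
    return measured_bac + beta * hours_since_peak
```

The second function illustrates why the time between injury and ED admission matters: every hour of delay shifts the estimate by roughly beta g/kg.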
Abstract:
Introduction: THC-COOH has been proposed as a criterion to help distinguish occasional from regular cannabis users. However, to date this indicator has not been adequately assessed under experimental and real-life conditions. Methods: We carried out a controlled administration study of smoked cannabis with a placebo. Twenty-three heavy smokers and 25 occasional smokers, between 18 and 30 years of age, participated in this study [Battistella G et al., PLoS One. 2013;8(1):e52545]. We also collected data from a second, real-case study of 146 traffic offenders for whom the whole-blood cannabinoid concentrations and the frequency of cannabis use were known. Cannabinoid levels were determined in whole blood using tandem mass spectrometry methods. Results: Significantly different THC-COOH concentrations were found between the two groups when measured during the screening visit, prior to the smoking session, and throughout the day of the experiment. Receiver operating characteristic (ROC) curves were determined and two threshold criteria were proposed to distinguish between these groups: a free THC-COOH concentration below 3 μg/L suggested occasional consumption (≤ 1 joint/week), while a concentration higher than 40 μg/L corresponded to heavy use (≥ 10 joints/month). These thresholds were successfully tested against the second, real-case study. The two thresholds were not challenged by the presence of ethanol (40% of cases) or of other therapeutic and illegal drugs (24%). The thresholds were also consistent with previously published experimental data. Conclusion: We propose the following procedure, which can be very useful in the Swiss context but also in other countries with similar traffic policies: if the whole-blood THC-COOH concentration is higher than 40 μg/L, traffic offenders must be directed first and foremost toward a medical assessment of their fitness to drive.
This evaluation is not recommended if the THC-COOH concentration is lower than 3 μg/L. A THC-COOH level between these two thresholds cannot be reliably interpreted. In such a case, further medical assessment and follow-up of fitness to drive are also suggested, but with lower priority.
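The proposed decision procedure is a simple three-way rule on the measured concentration; a direct transcription (function and message wording are illustrative, the thresholds are the ones proposed in the abstract):

```python
def classify_cannabis_use(thc_cooh_ug_per_l):
    """Interpret a whole-blood free THC-COOH concentration (ug/L) using the
    two thresholds proposed in the study (3 and 40 ug/L)."""
    if thc_cooh_ug_per_l < 3:
        return "occasional use suggested (<= 1 joint/week); assessment not recommended"
    if thc_cooh_ug_per_l > 40:
        return "heavy use suggested (>= 10 joints/month); refer for medical assessment"
    return "indeterminate; lower-priority medical follow-up suggested"
```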
Treatment intensification and risk factor control: toward more clinically relevant quality measures.
Abstract:
BACKGROUND: Intensification of pharmacotherapy in persons with poorly controlled chronic conditions has been proposed as a clinically meaningful process measure of quality. OBJECTIVE: To validate measures of treatment intensification by evaluating their associations with subsequent control in hypertension, hyperlipidemia, and diabetes mellitus across 35 medical facility populations in Kaiser Permanente, Northern California. DESIGN: Hierarchical analyses of associations of improvements in facility-level treatment intensification rates from 2001 to 2003 with patient-level risk factor levels at the end of 2003. PATIENTS: Members (515,072 and 626,130; age >20 years) with hypertension, hyperlipidemia, and/or diabetes mellitus in 2001 and 2003, respectively. MEASUREMENTS: Treatment intensification for each risk factor defined as an increase in number of drug classes prescribed, of dosage for at least 1 drug, or switching to a drug from another class within 3 months of observed poor risk factor control. RESULTS: Facility-level improvements in treatment intensification rates between 2001 and 2003 were strongly associated with greater likelihood of being in control at the end of 2003 (P < or = 0.05 for each risk factor) after adjustment for patient- and facility-level covariates. Compared with facility rankings based solely on control, addition of percentages of poorly controlled patients who received treatment intensification changed 2003 rankings substantially: 14%, 51%, and 29% of the facilities changed ranks by 5 or more positions for hypertension, hyperlipidemia, and diabetes, respectively. CONCLUSIONS: Treatment intensification is tightly linked to improved control. Thus, it deserves consideration as a process measure for motivating quality improvement and possibly for measuring clinical performance.
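The study's definition of treatment intensification (an added drug class, a dose increase for at least one drug, or a switch to another class, within 3 months of observed poor control) can be sketched as a predicate over before/after medication records. The data layout below is hypothetical, chosen only to make the three criteria explicit:

```python
def treatment_intensified(before, after):
    """before/after: dicts mapping drug class -> daily dose, observed within
    3 months of poor risk factor control (layout is illustrative).

    Intensification = more drug classes, a dose increase for at least one
    continuing drug, or a switch to a drug from another class.
    """
    if len(after) > len(before):
        return True   # a drug class was added
    if any(after[d] > dose for d, dose in before.items() if d in after):
        return True   # dose increased for a continuing drug
    if set(after) - set(before):
        return True   # switched to a drug from another class
    return False
```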
Abstract:
AIMS: To validate a model for quantifying the prognosis of patients with pulmonary embolism (PE). The model was previously derived from 10 534 US patients. METHODS AND RESULTS: We validated the model in 367 patients prospectively diagnosed with PE at 117 European emergency departments. We used baseline data for the model's 11 prognostic variables to stratify patients into five risk classes (I-V). We compared 90-day mortality within each risk class and the area under the receiver operating characteristic curve between the validation and the original derivation samples. We also assessed the rate of recurrent venous thrombo-embolism and major bleeding within each risk class. Mortality was 0% in Risk Class I, 1.0% in Class II, 3.1% in Class III, 10.4% in Class IV, and 24.4% in Class V and did not differ between the validation and the original derivation samples. The area under the curve was larger in the validation sample (0.87 vs. 0.78, P=0.01). No patients in Classes I and II developed recurrent thrombo-embolism or major bleeding. CONCLUSION: The model accurately stratifies patients with PE into categories of increasing risk of mortality and other relevant complications. Patients in Risk Classes I and II are at low risk of adverse outcomes and are potential candidates for outpatient treatment.
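The 11-variable model described here is the Pulmonary Embolism Severity Index (PESI), whose point total maps to the five risk classes. The cut-offs below are the standard published PESI boundaries, stated from general knowledge of the score rather than from this abstract:

```python
def pesi_risk_class(points):
    """Map a PESI point total to risk class I-V (standard published cut-offs)."""
    if points <= 65:
        return "I"
    if points <= 85:
        return "II"
    if points <= 105:
        return "III"
    if points <= 125:
        return "IV"
    return "V"
```

Classes I and II correspond to the low-risk patients identified in the abstract as potential candidates for outpatient treatment.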
Abstract:
Aims: The psychometric properties of the EORTC QLQ-BN20, a brain cancer-specific HRQOL questionnaire, have been previously determined in an English-speaking sample of patients. This study examined the validity and reliability of the questionnaire in a multi-national, multi-lingual study. Methods: QLQ-BN20 data were selected from two completed phase III EORTC/NCIC clinical trials in brain cancer (N=891), including 12 languages. Experimental treatments were surgery followed by radiotherapy (RT) and adjuvant PCV chemotherapy, or surgery followed by concomitant RT plus temozolomide (TMZ) chemotherapy and adjuvant TMZ chemotherapy. Standard treatment consisted of surgery and postoperative RT alone. The psychometrics of the QLQ-BN20 were examined by means of multi-trait scaling analyses, reliability estimation, known-groups validity testing, and responsiveness analysis. Results: All QLQ-BN20 items correlated more strongly with their own scale (r > 0.70) than with other QLQ-BN20 scales. Internal consistency reliability coefficients were high (all alpha > 0.70). Known-groups comparisons yielded positive results, with the QLQ-BN20 distinguishing between patients with differing levels of performance status and mental functioning. Responsiveness of the questionnaire to changes over time was acceptable. Conclusion: The QLQ-BN20 demonstrates adequate psychometric properties and can be recommended for use in conjunction with the QLQ-C30 in assessing the HRQOL of brain cancer patients in international studies.
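The internal consistency coefficient referred to here is Cronbach's alpha, computed from the per-item variances and the variance of the scale totals (alpha = k/(k-1) * (1 - sum of item variances / total variance)). A minimal implementation:

```python
import numpy as np

def cronbach_alpha(items):
    """Internal consistency of a scale.

    items: (n_respondents, k_items) array of item scores.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of scale totals
    return k / (k - 1) * (1 - item_vars / total_var)
```

Perfectly correlated items yield alpha = 1; values above 0.70, as reported for the QLQ-BN20 scales, are the conventional threshold for acceptable reliability.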
Abstract:
BACKGROUND AND PURPOSE: Knowledge of cerebral blood flow (CBF) alterations in cases of acute stroke could be valuable in the early management of these cases. Among imaging techniques affording evaluation of cerebral perfusion, perfusion CT studies involve sequential acquisition of cerebral CT sections obtained in an axial mode during the IV administration of iodinated contrast material. They are thus very easy to perform in emergency settings. Perfusion CT values of CBF have proved to be accurate in animals, and perfusion CT affords plausible values in humans. The purpose of this study was to validate perfusion CT studies of CBF by comparison with the results provided by stable xenon CT, which have been reported to be accurate, and to evaluate acquisition and processing modalities of CT data, notably the possible deconvolution methods and the selection of the reference artery. METHODS: Twelve stable xenon CT and perfusion CT cerebral examinations were performed within an interval of a few minutes in patients with various cerebrovascular diseases. CBF maps were obtained from perfusion CT data by deconvolution using singular value decomposition and least mean square methods. The CBF values were compared with the stable xenon CT results in multiple regions of interest through linear regression analysis and bilateral t tests for matched variables. RESULTS: Linear regression analysis showed good correlation between perfusion CT and stable xenon CT CBF values (singular value decomposition method: R(2) = 0.79, slope = 0.87; least mean square method: R(2) = 0.67, slope = 0.83). Bilateral t tests for matched variables did not identify a significant difference between the two imaging methods (P > .1). Both deconvolution methods were equivalent (P > .1). The choice of the reference artery is a major concern and has a strong influence on the final perfusion CT CBF map.
CONCLUSION: Perfusion CT studies of CBF achieved with adequate acquisition parameters and processing lead to accurate and reliable results.
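The singular value decomposition deconvolution mentioned above recovers CBF * R(t) (flow times residue function) from the tissue curve and the arterial input function, modelled as a lower-triangular Toeplitz convolution system. The sketch below uses synthetic, noiseless data and a deliberately simplified arterial input function; the 10% truncation threshold is a common choice for suppressing noise-amplifying small singular values, not a parameter taken from this study:

```python
import numpy as np

dt = 1.0                                # sampling interval, s
t = np.arange(0, 30, dt)
aif = np.exp(-t / 2.0)                  # simplified arterial input function
cbf_true = 0.6                          # "flow", arbitrary units
residue = np.exp(-t / 5.0)              # tissue residue function R(t)

# tissue curve c(t) = CBF * (AIF convolved with R)(t), written as a
# lower-triangular Toeplitz matrix equation c = A @ (CBF * R)
A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(len(t))]
                   for i in range(len(t))])
c = A @ (cbf_true * residue)

# truncated-SVD inversion: drop singular values below 10% of the largest
U, s, Vt = np.linalg.svd(A)
keep = s > 0.1 * s[0]
x = Vt.T[:, keep] @ ((U[:, keep].T @ c) / s[keep])
cbf_est = float(x.max())                # CBF recovered as the peak of CBF*R(t)
```

With noiseless data the inversion recovers the flow essentially exactly; on real perfusion data the truncation threshold trades noise suppression against underestimation of CBF.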
Abstract:
Abstract: "I have often seen experts disagree. I have never seen one of them be wrong." Auguste Detoeuf, Propos d'O.L. Brenton, confiseur, Editions du Tambourinaire, 1948. By deliberately choosing a typically empirical accounting problem, this work set out to demonstrate that it is possible to produce purely accounting insights (i.e., within Accounting's own scheme of representation) while refraining from the systematic borrowing of ready-made theories from Economics, except where this proves genuinely necessary and legitimate, as with the use of the CAPM in the preceding chapter. Once again, let us recall that this thesis is not an indictment of the economic approach as such, but a plea to temper such an approach in Accounting. In line with the epistemological position taken in the first chapter, we sought to highlight the contribution and place of Accounting within Economics by positioning Accounting as a discipline that supplies measures representing economic activity. It seems clear to us that if economic activity, as the direct accounting semiosphere, dictates accounting observations, the measurement of those observations must, as far as possible, strive to free itself from any dependence on the economic discipline and its associated theories and methods, by adopting an orthogonal, rational, and systematic mode of operation within a framework of axioms of its own. This position entails defining a new epistemological framework relative to the positive approach to Accounting, which can be described as the philosophical expression of the takeover of accounting research by a methodical mode of reflection proper to economic research.
To be at least partially validated, this new framework, which we see as derived from constructivism, should demonstrate its ability to handle satisfactorily a classic problem of empirical-positive accounting. The specific problem chosen was the treatment and validation of the going-concern principle. The going-concern principle postulates (statement of a hypothesis) and establishes (verification of the hypothesis) that the enterprise prepares its financial statements on the assumption of a normal continuation of its activities. The going-concern assumption is breached (and must then be set aside in favour of a liquidation or disposal basis) in the event of a cessation of activity, whether total or partial, voluntary or involuntary, or upon observation of facts liable to compromise the continuity of operations. Such facts concern the financial, economic, and social situation of the enterprise and comprise all objective events, whether they have occurred or may occur, that could affect the continuation of activity in the foreseeable future. Like all accounting principles, the going-concern principle proceeds from a purely theoretical consideration. Its verification nevertheless requires a concrete analysis, with real and measurable consequences, which is why it is a much-appreciated research topic in positive accounting, so easily can it be (mistakenly) conflated with studies of corporate bankruptcy and failure. In practice, some of these studies, based on multivariate discriminant analyses (VIDA), have become genuine working tools for the auditor owing to their simplicity of use and interpretation.
Through the problem addressed in this thesis, we attempted to fulfil numerous objectives, which can be grouped into two sets: those related to the methodological approach and those pertaining to measurement and calibration. In a final step, these two groups of objectives enabled the construction of a model intended as the logical consequence of the choices and hypotheses adopted.
Abstract:
Surface roughness is one of the quality criteria of paper. It is measured with devices that physically probe the paper surface and with optical devices. These measurements require laboratory conditions, but faster, directly on-line measurements would be needed in the paper industry. Paper surface roughness can be expressed as a single roughness value for a sample. In this work, the sample is divided into significant regions, and a separate roughness value is computed for each region. Several methods have been used to measure roughness. In this work, a generally accepted statistical method has been used in addition to the distance transform. In paper surface roughness measurement there has been a need to divide the analysed sample into regions on the basis of roughness. Region division makes it possible to delimit the clearly rougher areas of the sample. The distance transform produces regions that are then analysed. These regions have been merged into contiguous regions with various segmentation methods. Algorithms based on the PNN (Pairwise Nearest Neighbor) method and on merging neighbouring regions have been used. An approach based on splitting and merging regions has also been examined. The validation of segmented images has usually been carried out by human inspection. The approach of this work is to compare a generally accepted statistical method with the segmentation results. A high correlation between these results indicates successful segmentation. The results of the different experiments have been compared with each other using hypothesis testing. Two sample sets, measured with OptiTopo and with a profilometer, were analysed in this work. The starting parameters of the distance transform that were varied during the experiments were the number and location of the seed points. The same parameter changes were made for all the algorithms used for region merging. After the distance transform, the correlation was stronger for the samples measured with the profilometer than for those measured with OptiTopo.
For the segmented OptiTopo samples, the correlation improved more strongly than for the profilometer samples. The correlation was best for the results produced by the PNN method.
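The Pairwise Nearest Neighbor merging step can be sketched as repeatedly fusing the two regions that are most similar until the desired number of regions remains. The version below merges on closest mean roughness, a simplification of true PNN (which weights the merge cost by region size); the data layout is illustrative:

```python
def pnn_merge(regions, k):
    """Merge regions (lists of roughness values) until only k remain,
    always fusing the pair with the closest mean roughness (simplified PNN)."""
    regions = [list(r) for r in regions]
    while len(regions) > k:
        means = [sum(r) / len(r) for r in regions]
        # find the pair of regions with the closest mean roughness
        i, j = min(((a, b) for a in range(len(regions))
                    for b in range(a + 1, len(regions))),
                   key=lambda p: abs(means[p[0]] - means[p[1]]))
        regions[i] += regions[j]
        del regions[j]
    return regions
```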
Abstract:
BACKGROUND AND PURPOSE: The ASTRAL score was recently introduced as a prognostic tool for acute ischemic stroke. It predicts 3-month outcome reliably in both the derivation and the validation European cohorts. We aimed to validate the ASTRAL score in a Chinese stroke population and moreover to explore its prognostic value to predict 12-month outcome. METHODS: We applied the ASTRAL score to acute ischemic stroke patients admitted to 132 study sites of the China National Stroke Registry. Unfavorable outcome was assessed as a modified Rankin Scale score >2 at 3 and 12 months. Areas under the curve were calculated to quantify the prognostic value. Calibration was assessed by comparing predicted and observed probability of unfavorable outcome using Pearson correlation coefficient. RESULTS: Among 3755 patients, 1473 (39.7%) had 3-month unfavorable outcome. Areas under the curve for 3 and 12 months were 0.82 and 0.81, respectively. There was high correlation between observed and expected probability of unfavorable 3- and 12-month outcome (Pearson correlation coefficient: 0.964 and 0.963, respectively). CONCLUSIONS: ASTRAL score is a reliable tool to predict unfavorable outcome at 3 and 12 months after acute ischemic stroke in the Chinese population. It is a useful tool that can be readily applied in clinical practice to risk-stratify acute stroke patients.
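The area under the ROC curve reported here (0.82 at 3 months) can be read as the probability that a randomly chosen patient with an unfavorable outcome receives a higher predicted risk than a randomly chosen patient without one. This is the Mann-Whitney formulation of the AUC, a generic statistic rather than anything specific to the ASTRAL score:

```python
def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney statistic: the probability that a case with
    the outcome scores higher than one without (ties count as 1/2)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```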
Abstract:
BACKGROUND: Assessment of the proportion of patients with well controlled cardiovascular risk factors underestimates the proportion of patients receiving high quality of care. Evaluating whether physicians respond appropriately to poor risk factor control gives a different picture of quality of care. We assessed physician response to control cardiovascular risk factors, as well as markers of potential overtreatment in Switzerland, a country with universal healthcare coverage but without systematic quality monitoring, annual report cards on quality of care or financial incentives to improve quality. METHODS: We performed a retrospective cohort study of 1002 randomly selected patients aged 50-80 years from four university primary care settings in Switzerland. For hypertension, dyslipidemia and diabetes mellitus, we first measured proportions in control, then assessed therapy modifications among those in poor control. "Appropriate clinical action" was defined as a therapy modification or return to control without therapy modification within 12 months among patients with baseline poor control. Potential overtreatment of these conditions was defined as intensive treatment among low-risk patients with optimal target values. RESULTS: 20% of patients with hypertension, 41% with dyslipidemia and 36% with diabetes mellitus were in control at baseline. When appropriate clinical action in response to poor control was integrated into measuring quality of care, 52 to 55% had appropriate quality of care. Over 12 months, therapy of 61% of patients with baseline poor control was modified for hypertension, 33% for dyslipidemia, and 85% for diabetes mellitus. Increases in number of drug classes (28-51%) and in drug doses (10-61%) were the most common therapy modifications. Patients with target organ damage and higher baseline values were more likely to have appropriate clinical action. 
We found low rates of potential overtreatment: 2% for hypertension, 3% for diabetes mellitus and 3-6% for dyslipidemia. CONCLUSIONS: In primary care, evaluating whether physicians respond appropriately to poor risk factor control, in addition to assessing the proportions in control, provides a broader view of the quality of care than relying solely on proportions in control. Such measures could be more clinically relevant and acceptable to physicians than simply reporting levels of control.
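The combined quality measure described in this abstract counts a patient as receiving appropriate care if the risk factor is in control at baseline, or, when in poor control, if therapy was modified or control was regained within 12 months. A sketch of that aggregation (field names are illustrative, not from the study's dataset):

```python
def appropriate_care_rate(patients):
    """Fraction of patients with appropriate quality of care: in control at
    baseline, or with appropriate clinical action (therapy modification or
    return to control) within 12 months of observed poor control.

    patients: list of dicts with boolean keys 'in_control',
    'therapy_modified', 'returned_to_control' (illustrative layout).
    """
    ok = sum(p["in_control"] or p["therapy_modified"] or p["returned_to_control"]
             for p in patients)
    return ok / len(patients)
```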
Abstract:
Intensity Modulated Radiotherapy (IMRT) is a treatment technique that uses beams with modulated fluence. IMRT, now widespread in industrialised countries, improves dose conformity around the target volume and lowers the dose to organs at risk in complex clinical cases. One common way to carry out beam modulation is to sum smaller beams (segments) with the same incidence, a technique called step-and-shoot IMRT. In a clinical context, treatment plans must be verified before the first irradiation, and plan verification remains an open issue for this technique. An independent monitor unit calculation (the monitor units being representative of the weight of each segment) cannot be performed for step-and-shoot IMRT, because segment weights are not known a priori but are computed during inverse planning. Moreover, verifying treatment plans by comparison with measured data is time consuming and is performed in a simplified geometry, usually a cubic water phantom with all machine angles set to zero. In this work, an independent monitor unit calculation method for step-and-shoot IMRT is described. The method is based on the Monte Carlo code EGSnrc/BEAMnrc; the Monte Carlo model of the linear accelerator head was validated by comparing simulated and measured dose distributions over a large range of situations. The segments of an IMRT treatment plan are simulated individually by Monte Carlo in the exact geometry of the treatment, and the resulting dose distributions are converted into absorbed dose to water per monitor unit. The total treatment dose in each volume element (voxel) of the patient can then be expressed as a linear matrix equation in the monitor units and the dose per monitor unit of each segment. This equation is solved with a Non-Negative Least Squares (NNLS) algorithm. Because computational limitations prevent using every voxel in the patient volume, several voxel selection strategies were tested; the best choice is to use the voxels contained in the Planning Target Volume (PTV). The method was tested on eight clinical cases representative of usual radiotherapy treatments. The monitor units obtained lead to global dose distributions clinically equivalent to those produced by the treatment planning system. This independent monitor unit calculation method for step-and-shoot IMRT is therefore validated for clinical use. By analogy, a similar method could be applied to other treatment modalities, such as tomotherapy or volumetric modulated arc therapy.
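The linear matrix equation described above has the form b = A u, where A holds the Monte Carlo dose per monitor unit of each segment in each selected voxel and u the unknown monitor units, solved subject to u >= 0. The thesis uses the NNLS algorithm; the sketch below substitutes a short projected-gradient solver as a stand-in for the full Lawson-Hanson NNLS procedure, with a toy dose matrix that is purely illustrative:

```python
import numpy as np

def nnls_pg(A, b, iters=20000):
    """min ||A u - b|| subject to u >= 0, by projected gradient descent
    (a simple stand-in for the Lawson-Hanson NNLS algorithm)."""
    step = 1.0 / np.linalg.norm(A.T @ A, 2)  # safe step: 1 / largest eigenvalue
    u = np.zeros(A.shape[1])
    for _ in range(iters):
        # gradient step on the least-squares objective, then clip at zero
        u = np.maximum(0.0, u - step * (A.T @ (A @ u - b)))
    return u

# toy example: dose per MU of 3 "segments" in 4 "voxels", with known MU weights
A = np.array([[1.0, 0.2, 0.0],
              [0.5, 1.0, 0.1],
              [0.0, 0.4, 1.0],
              [0.3, 0.3, 0.5]])
u_true = np.array([120.0, 80.0, 200.0])  # monitor units per segment
b = A @ u_true                           # total dose in each voxel
u = nnls_pg(A, b)
```

In the thesis the rows of A correspond to voxels inside the PTV, which is what makes the system small enough to solve despite the computational limits mentioned above.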